Scanning QR Codes in iOS Development

Since iOS 7, scanning QR codes on iOS no longer requires a third-party framework: Apple natively supports QR code scanning through the AVFoundation framework. Five classes are mainly involved, the same five used for custom camera and video capture, and there is plenty of material about them online. The five classes are:

- AVCaptureSession: the media capture session, responsible for routing captured audio and video data to the output objects.
- AVCaptureDevice: an input device, such as a microphone or camera.
- AVCaptureDeviceInput: manages input data from a device. You create an AVCaptureDeviceInput from an AVCaptureDevice and add it to the AVCaptureSession.
- AVCaptureOutput: manages output data. It has many subclasses, each with a different purpose; the output object is also added to the AVCaptureSession.
- AVCaptureVideoPreviewLayer: the camera preview layer, a subclass of CALayer, which shows the camera feed in real time. After setting its size, add it to the parent view's layer.
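A compressed sketch of how these five classes connect (assuming ARC and a view controller that conforms to AVCaptureMetadataOutputObjectsDelegate; the full implementation follows below):

```objc
AVCaptureSession *session = [[AVCaptureSession alloc] init];

// Device -> input -> session
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];
if ([session canAddInput:input]) [session addInput:input];

// Metadata output -> session
AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
if ([session canAddOutput:output]) [session addOutput:output];
// Restrict to QR codes; must be set AFTER the output is added to the session
output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

// Preview layer shows the camera feed
AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
preview.frame = self.view.bounds;
[self.view.layer addSublayer:preview];

[session startRunning];
```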

I referred to many blog posts online and later wrote a concrete implementation. I ran into quite a few pitfalls along the way, which I record and share here.

Environment: Xcode 8.3.2 + iOS 8.4

Core steps:

1. Create an AVCaptureSession session
2. Create an AVCaptureDevice device
3. Create the input and output objects (AVCaptureDeviceInput and AVCaptureMetadataOutput) and add them to the session
4. Create the preview layer
5. Set the scanning area

Implementation

From the steps above, nothing apart from the preview layer seems to involve the UI. In real apps, though, scanning interfaces are usually designed to be more user-friendly: like Alipay and WeChat, with a small box in the middle and a line sweeping across it. That is just UI layered on top of the QR scanning, to give the user a better experience.

Interface layout

(Image: interface layout.png)

Main code

```objc
#import "ViewController.h"
#import <AVFoundation/AVFoundation.h>

@interface ViewController () <AVCaptureMetadataOutputObjectsDelegate, CALayerDelegate>

/** UI */
@property (weak, nonatomic) IBOutlet UIView *scanView;
@property (weak, nonatomic) IBOutlet UIImageView *scanline;
@property (weak, nonatomic) IBOutlet UILabel *result;
/** Height constraint of the scan area (the width uses the same value) */
@property (weak, nonatomic) IBOutlet NSLayoutConstraint *scanViewH;
/** Top constraint of the scan line */
@property (weak, nonatomic) IBOutlet NSLayoutConstraint *scanlineTop;
/** Height of the scan line */
@property (weak, nonatomic) IBOutlet NSLayoutConstraint *scanlineH;

@property (nonatomic, strong) CALayer *maskLayer;

/** The five AVFoundation classes */
@property (nonatomic, strong) AVCaptureDevice *device;
@property (nonatomic, strong) AVCaptureDeviceInput *input;
@property (nonatomic, strong) AVCaptureMetadataOutput *output;
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *layer;

@end

@implementation ViewController

#pragma mark - Lazy loading

- (AVCaptureDevice *)device {
    if (_device == nil) {
        _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    }
    return _device;
}

- (AVCaptureDeviceInput *)input {
    if (_input == nil) {
        _input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];
    }
    return _input;
}

- (AVCaptureMetadataOutput *)output {
    if (_output == nil) {
        _output = [[AVCaptureMetadataOutput alloc] init];
        [_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    }
    return _output;
}

#pragma mark - View controller life cycle

/** Animate the scan line */
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    [UIView animateWithDuration:3.0 delay:0 options:UIViewAnimationOptionRepeat animations:^{
        self.scanlineTop.constant = self.scanViewH.constant - 4;
        [self.scanline layoutIfNeeded];
    } completion:nil];
}

- (void)viewDidLoad {
    [super viewDidLoad];

    // 1. Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    if ([session canSetSessionPreset:AVCaptureSessionPresetHigh]) {
        [session setSessionPreset:AVCaptureSessionPresetHigh];
    }

    // 2. Add the input and output objects
    if ([session canAddInput:self.input]) {
        [session addInput:self.input];
    }
    if ([session canAddOutput:self.output]) {
        [session addOutput:self.output];
    }

    // 3. Set the metadata types to scan (must come after the output is added)
    self.output.metadataObjectTypes = self.output.availableMetadataObjectTypes;

    // 4. Create the preview layer
    AVCaptureVideoPreviewLayer *layer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    layer.frame = self.view.bounds;
    [self.view.layer insertSublayer:layer atIndex:0];

    // 5. Create the surrounding mask layer.
    //    The color set here is the final color of the middle scan area.
    CALayer *maskLayer = [[CALayer alloc] init];
    maskLayer.frame = self.view.bounds;
    maskLayer.backgroundColor = [UIColor colorWithRed:0.1 green:0.1 blue:0.1 alpha:0.2].CGColor;
    maskLayer.delegate = self;
    [self.view.layer insertSublayer:maskLayer above:layer];
    // Trigger the delegate, which draws the darker color around the scan area
    [maskLayer setNeedsDisplay];

    // 6. Set the scan area. Method one: compute the rect yourself.
    // CGFloat x = (self.view.bounds.size.width - self.scanViewH.constant) * 0.5;
    // CGFloat y = (self.view.bounds.size.height - self.scanViewH.constant) * 0.5;
    // CGFloat w = self.scanViewH.constant;
    // CGFloat h = w;
    // self.output.rectOfInterest = CGRectMake(y / self.view.bounds.size.height,
    //                                         x / self.view.bounds.size.width,
    //                                         h / self.view.bounds.size.height,
    //                                         w / self.view.bounds.size.width);

    // 6. Set the scan area. Method two: convert directly with
    //    metadataOutputRectOfInterestForRect:. However, this must be done inside
    //    the AVCaptureInputPortFormatDescriptionDidChangeNotification handler;
    //    otherwise the conversion returns (0, 0, 0, 0).
    __weak __typeof(&*self) weakSelf = self;
    [[NSNotificationCenter defaultCenter] addObserverForName:AVCaptureInputPortFormatDescriptionDidChangeNotification
                                                      object:nil
                                                       queue:[NSOperationQueue currentQueue]
                                                  usingBlock:^(NSNotification * _Nonnull note) {
        weakSelf.output.rectOfInterest =
            [weakSelf.layer metadataOutputRectOfInterestForRect:weakSelf.scanView.frame];
    }];

    // 7. Start scanning
    [session startRunning];

    self.session = session;
    self.layer = layer;
    self.maskLayer = maskLayer;
}

- (void)dealloc {
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}

#pragma mark - AVCaptureMetadataOutputObjectsDelegate

/** Delegate callback, invoked when a QR code is scanned */
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection {
    if (metadataObjects != nil && metadataObjects.count > 0) {
        AVMetadataMachineReadableCodeObject *metadataObject = [metadataObjects lastObject];
        NSString *result = metadataObject.stringValue;
        self.result.text = result;
        [self.session stopRunning];
        [self.scanline removeFromSuperview];
    }
}

#pragma mark - CALayerDelegate

/** Draw the mask and cut the middle scan area out of it */
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    if (layer == self.maskLayer) {
        // Fill with the darker mask color
        CGContextSetFillColorWithColor(ctx, [UIColor colorWithRed:0.1 green:0.1 blue:0.1 alpha:0.8].CGColor);
        CGContextFillRect(ctx, self.maskLayer.frame);
        // Convert the scan view's frame into this layer's coordinates
        CGRect scanFrame = [self.view convertRect:self.scanView.frame fromView:self.scanView.superview];
        // Clear out the middle area
        CGContextClearRect(ctx, scanFrame);
    }
}

@end
```

Depending on the iOS version, the appropriate permissions need to be added to Info.plist.
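For example, on iOS 10 and later the camera usage description key is mandatory; accessing the camera without it crashes the app. An illustrative Info.plist entry (the string is just a sample message):

```xml
<key>NSCameraUsageDescription</key>
<string>The camera is used to scan QR codes.</string>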

Final effect

(Image: scan two-dimensional code.gif)

Summary

1. Pitfalls encountered

(1) With Auto Layout in place, it is best to animate by modifying constraints rather than bounds or frame. But modifying a constraint inside a UIView animation block has no effect on its own; after setting the constraint you also need to call [self.scanline layoutIfNeeded]; inside the block.

The actual test results differ by iOS version, and this remains unresolved: on an iOS 8.4 device the scan-line animation works whether it starts in viewWillAppear or viewDidAppear; on iOS 10.3 it has no effect in either case, and the scan line simply jumps straight to the bottom.
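One commonly suggested workaround, sketched here but untested by me, is to skip constraint animation entirely and animate the scan line's layer with Core Animation, which behaves consistently across iOS versions:

```objc
// Hypothetical alternative: sweep the scan line by animating its layer position
// instead of a NSLayoutConstraint. The frame values assume scanline and
// scanView share the same superview.
CABasicAnimation *sweep = [CABasicAnimation animationWithKeyPath:@"position.y"];
sweep.fromValue = @(CGRectGetMinY(self.scanView.frame));
sweep.toValue = @(CGRectGetMaxY(self.scanView.frame) - 4);
sweep.duration = 3.0;
sweep.repeatCount = HUGE_VALF;
[self.scanline.layer addAnimation:sweep forKey:@"scanline"];
```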

(2) The scanning area is set through the rectOfInterest property of AVCaptureMetadataOutput. It is a CGRect, but its four values are not the conventional ones: they are (y, x, height, width), each expressed as a ratio in the range 0~1. There are two approaches. The first is to compute the rect yourself, as in the commented-out code above. The second is to use AVCaptureVideoPreviewLayer's metadataOutputRectOfInterestForRect: method; setting it directly has no effect, though, and the call must be placed inside the notification handler, as shown in the code.

(3) The middle box is implemented with CALayer in two steps. First, set the layer's background color, chosen to match the style the middle area should finally display. Second, in the delegate's drawing method, fill the layer with a different color for the region surrounding the box, then clear out the middle. Note that you must call setNeedsDisplay, otherwise the delegate method is never invoked.

2. References

(1) iOS development series – audio playback, recording, video recording, video playback, photo capture, and scanning
(2) iOS QR code scanning
(3) iOS QR code generation and scanning (with scanning-lag optimization)

3. Source code