iOS development – QR code scanning

Preface

Before iOS 7, QR code scanning was usually done with ZBar or ZXing, and neither felt particularly convenient to integrate or use. Since iOS 7, Apple has provided native scanning APIs. Having tried them, I found the recognition efficiency to be quite high and the API convenient to use. This article covers: 1. implementing scanning with the native API; 2. how to set the scan area; 3. drawing the transparent scan region.

Related interfaces:

A minimal QR code scanner needs the following classes:
  1. AVCaptureDevice: represents a physical device that provides real-time media input such as video or audio, e.g. _device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
  2. AVCaptureDeviceInput: provides the interface for capturing data from an AVCaptureDevice
  3. AVCaptureMetadataOutput: the output class that delivers the decoded data
  4. AVCaptureSession: the hub that connects inputs and outputs and coordinates the real-time data flow between them
  5. AVCaptureVideoPreviewLayer: displays the video captured by the camera
Scanner initialization:

@property (strong, nonatomic) AVCaptureDevice *device;
@property (strong, nonatomic) AVCaptureDeviceInput *input;
@property (strong, nonatomic) AVCaptureMetadataOutput *output;
@property (strong, nonatomic) AVCaptureSession *session;
@property (strong, nonatomic) AVCaptureVideoPreviewLayer *previewLayer;

_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
_input = [AVCaptureDeviceInput deviceInputWithDevice:self.device error:nil];
_output = [[AVCaptureMetadataOutput alloc] init];
[_output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
_session = [[AVCaptureSession alloc] init];
[_session setSessionPreset:AVCaptureSessionPresetHigh];
if ([_session canAddInput:self.input]) {
    [_session addInput:self.input];
}
if ([_session canAddOutput:self.output]) {
    [_session addOutput:self.output];
}
// metadataObjectTypes must be set after the output has been added to the session
_output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode];
_previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:_session];
_previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
_previewLayer.frame = self.view.layer.bounds;
[self.view.layer insertSublayer:_previewLayer atIndex:0];
[_session startRunning];
Once a QR code is recognized, the following delegate method is called with the decoded data:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection {
    if ([metadataObjects count] > 0) {
        [_session stopRunning];
        AVMetadataMachineReadableCodeObject *metadataObject = [metadataObjects objectAtIndex:0];
        NSString *stringValue = metadataObject.stringValue;
        NSLog(@"decoded data: %@", stringValue);
        NSURL *url = [NSURL URLWithString:stringValue];
        if ([[UIApplication sharedApplication] canOpenURL:url]) {
            [[UIApplication sharedApplication] openURL:url];
        }
    }
}

The two pieces of code above are enough to implement QR code scanning.
However, you will notice that the entire screen acts as the scanning area. If several QR codes appear in the frame at once, this may hurt scanning accuracy. In that case we need to restrict the recognition area.

Scan range settings:

iOS also provides an interface that lets you set the recognition area:

@property (nonatomic) CGRect rectOfInterest; // the default value is CGRectMake(0, 0, 1, 1)

When you first set this property, you will probably find that it does not behave as expected. Experimenting reveals a strange pattern: the origin appears to sit in the upper right corner, and the x and y values appear to be swapped. See the figure below:

[Figure 1: the rectOfInterest coordinate system of the scan area]

The values of the four points A, B, C, and D are marked in the figure. Focus on point A; once you understand it, the other points follow. The rect for the scan area shown above is set as:

_output.rectOfInterest = CGRectMake(0.35, 0.3, 0.6, 0.7);

If the coordinates still seem confusing, try this: imagine your phone is transparent, and look at it from where the QR code is, i.e. from behind the phone. From that side, doesn't the picture match the usual coordinate system, with the origin at the upper left?
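Rather than working out the swapped coordinates by hand, you can let AVCaptureVideoPreviewLayer do the conversion: metadataOutputRectOfInterestForRect: maps a rect in the layer's coordinate system into the normalized space that rectOfInterest expects. A minimal sketch, where scanFrame is a hypothetical on-screen scan window:

```objc
// Convert a rect in the preview layer's coordinates (the visible scan
// window) into the normalized, rotated space used by rectOfInterest.
// Note: the conversion only produces a valid result once the session
// is running and the video format is known; calling it too early
// yields an empty rect.
CGRect scanFrame = CGRectMake(60, 180, 200, 200); // hypothetical scan window
_output.rectOfInterest = [_previewLayer metadataOutputRectOfInterestForRect:scanFrame];
```

This sidesteps the rotated coordinate system entirely, at the cost of having to set rectOfInterest after the session has started.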

Finally, here is the code that draws the transparent scan area:

- (void)drawRect:(CGRect)rect {
    // Fill the whole view with a translucent black mask
    [[UIColor colorWithWhite:0 alpha:0.5] setFill];
    UIRectFill(rect);
    // Punch out the transparent scan window
    CGRect holeRect = CGRectMake(85, 180, 200, 200);
    CGRect holeIntersection = CGRectIntersection(holeRect, rect);
    [[UIColor clearColor] setFill];
    UIRectFill(holeIntersection);
}
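For the hole to actually be transparent, the overlay view must be non-opaque with a clear background; otherwise filling with clearColor renders black instead of punching through to the preview layer. A usage sketch, where ScanMaskView is a hypothetical UIView subclass containing the drawRect: above:

```objc
// ScanMaskView is a hypothetical UIView subclass whose drawRect: is shown above.
ScanMaskView *maskView = [[ScanMaskView alloc] initWithFrame:self.view.bounds];
maskView.backgroundColor = [UIColor clearColor]; // required so the clear fill stays transparent
maskView.opaque = NO;
[self.view addSubview:maskView]; // sits above the preview layer inserted at index 0
```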