iOS Native QR Codes: [Scanning] and [Generation]


QR codes are everywhere now, and many apps ship with a scanner. There are also plenty of iOS tutorials online, but the question of how to configure the effective scanning area never seems to be explained clearly. So today the author walks through QR codes end to end.

One: Introduction to QR codes

[Image: a QR code]

The three large squares in the corners are finder patterns that let the camera locate and orient the code. The black and white modules carry the data: black represents 1 and white represents 0, and groups of eight modules form bytes of binary information. For a popular-science take on how QR codes actually work, there is a short video that introduces the idea.


Two: Scanning QR codes in iOS

  • Step 1: In the project target's General -> Linked Frameworks and Libraries, add AVFoundation.framework
  • Step 2: Walk through the code piece by piece

1) header file

#import <AVFoundation/AVFoundation.h>
// The author's custom overlay view, a UIView subclass (see below)
#import "ShadowView.h"

#define kWidth  [UIScreen mainScreen].bounds.size.width
#define kHeight [UIScreen mainScreen].bounds.size.height
#define customShowSize CGSizeMake(200, 200)

2) Defining the properties

// ScanCodeViewController is the view controller the author created, presented inside a navigation controller.
// UIImagePickerControllerDelegate and UINavigationControllerDelegate are adopted so the
// album button in the top-right corner of the navigation bar can read QR codes from photos.
@interface ScanCodeViewController () <AVCaptureMetadataOutputObjectsDelegate, UIImagePickerControllerDelegate, UINavigationControllerDelegate>

/** Input data source */
@property (nonatomic, strong) AVCaptureDeviceInput *input;
/** Output data source */
@property (nonatomic, strong) AVCaptureMetadataOutput *output;
/** Bridge between input and output: forwards the captured audio/video data to the output */
@property (nonatomic, strong) AVCaptureSession *session;
/** Camera preview layer */
@property (nonatomic, strong) AVCaptureVideoPreviewLayer *layerView;
/** Size of the preview layer */
@property (nonatomic, assign) CGSize layerViewSize;
/** Effective scanning area */
@property (nonatomic, assign) CGSize showSize;
/** The author's custom overlay view */
@property (nonatomic, strong) ShadowView *shadowView;

@end

3) Creating the QR scanner

- (void)creatScanQR{
    /** Create the input data source */
    // Get the camera device
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    self.input = [AVCaptureDeviceInput deviceInputWithDevice:device error:nil];

    /** Create the output data source */
    self.output = [[AVCaptureMetadataOutput alloc] init];
    // Deliver delegate callbacks on the main queue
    [self.output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

    /** Session setup */
    self.session = [[AVCaptureSession alloc] init];
    [self.session setSessionPreset:AVCaptureSessionPresetHigh]; // high-quality capture
    [self.session addInput:self.input];
    [self.session addOutput:self.output];
    // Set the barcode symbologies the scanner supports
    // (metadataObjectTypes must be set after the output is added to the session)
    self.output.metadataObjectTypes = @[AVMetadataObjectTypeQRCode, AVMetadataObjectTypeEAN13Code, AVMetadataObjectTypeEAN8Code, AVMetadataObjectTypeCode128Code];

    /** Preview layer: size and position of the scanning view */
    self.layerView = [AVCaptureVideoPreviewLayer layerWithSession:self.session];
    self.layerView.videoGravity = AVLayerVideoGravityResizeAspectFill;
    self.layerView.frame = CGRectMake(0, 64, self.view.frame.size.width, self.view.frame.size.height - 64);
    // Remember the layer size; it is used below to compute the scanning area
    self.layerViewSize = CGSizeMake(_layerView.frame.size.width, _layerView.frame.size.height);
}

#pragma mark - Delegate method: called when a code is recognized
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputMetadataObjects:(NSArray *)metadataObjects fromConnection:(AVCaptureConnection *)connection{
    if (metadataObjects.count > 0) {
        // Stop the scan-line animation. Remember to uncomment this once ShadowView
        // is in place, otherwise the scan bar keeps animating.
        //[self.shadowView stopTimer];
        // Stop scanning
        [self.session stopRunning];
        AVMetadataMachineReadableCodeObject *metadataObject = [metadataObjects objectAtIndex:0];
        // Print the scanned string
        NSLog(@"%@", metadataObject.stringValue);
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Result"
                                                        message:[NSString stringWithFormat:@"%@", metadataObject.stringValue]
                                                       delegate:nil
                                              cancelButtonTitle:@"OK"
                                              otherButtonTitles:nil, nil];
        [alert show];
    }
}

4) Calling the method

- (void)viewDidLoad {
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor whiteColor];
    // Create the scanner
    [self creatScanQR];
    // Add the camera preview layer
    [self.view.layer addSublayer:self.layerView];
    // Start scanning
    [self.session startRunning];
    // Do any additional setup after loading the view.
}

This completes a working QR scanner, but you will find the entire screen responds to codes, which is quite different from WeChat's scanner.

5) The custom overlay view

#import <UIKit/UIKit.h>

@interface ShadowView : UIView

@property (nonatomic, assign) CGSize showSize;

- (void)stopTimer;

@end
#import "ShadowView.h"

@interface ShadowView ()

@property (nonatomic, strong) UIImageView *lineView;
@property (nonatomic, strong) NSTimer *timer;

@end

@implementation ShadowView

- (instancetype)initWithFrame:(CGRect)frame {
    self = [super initWithFrame:frame];
    if (self) {
        self.backgroundColor = [UIColor clearColor];
        // Scan line; attach the image provided below
        self.lineView = [[UIImageView alloc] init];
        self.lineView.image = [UIImage imageNamed:@"line"];
        [self addSubview:self.lineView];
    }
    return self;
}

- (void)playAnimation{
    [UIView animateWithDuration:2.4 delay:0 options:UIViewAnimationOptionCurveLinear animations:^{
        // Slide the line to the bottom of the scan box
        self.lineView.frame = CGRectMake((self.frame.size.width - self.showSize.width) / 2,
                                         (self.frame.size.height + self.showSize.height) / 2,
                                         self.showSize.width, 2);
    } completion:^(BOOL finished) {
        // Jump back to the top of the scan box
        self.lineView.frame = CGRectMake((self.frame.size.width - self.showSize.width) / 2,
                                         (self.frame.size.height - self.showSize.height) / 2,
                                         self.showSize.width, 2);
    }];
}

- (void)stopTimer {
    [_timer invalidate];
    _timer = nil;
}

- (void)layoutSubviews{
    [super layoutSubviews];
    self.lineView.frame = CGRectMake((self.frame.size.width - self.showSize.width) / 2,
                                     (self.frame.size.height - self.showSize.height) / 2,
                                     self.showSize.width, 2);
    if (!_timer) {
        [self playAnimation];
        /** Replay automatically */
        self.timer = [NSTimer scheduledTimerWithTimeInterval:2.5 target:self selector:@selector(playAnimation) userInfo:nil repeats:YES];
    }
}

- (void)drawRect:(CGRect)rect{
    [super drawRect:rect];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Overall mask color
    CGContextSetRGBFillColor(ctx, 0.15, 0.15, 0.15, 0.6);
    CGContextFillRect(ctx, rect);
    // Clear a transparent rectangle in the middle
    CGRect clearDrawRect = CGRectMake((rect.size.width - self.showSize.width) / 2,
                                      (rect.size.height - self.showSize.height) / 2,
                                      self.showSize.width, self.showSize.height);
    CGContextClearRect(ctx, clearDrawRect);
    // Border of the rectangle
    CGContextSetRGBStrokeColor(ctx, 1, 1, 1, 1); // border color
    CGContextSetLineWidth(ctx, 0.5);             // border width
    CGContextAddRect(ctx, clearDrawRect);
    CGContextStrokePath(ctx);
    [self addCornerLineWithContext:ctx rect:clearDrawRect];
}

- (void)addCornerLineWithContext:(CGContextRef)ctx rect:(CGRect)rect{
    float cornerWidth = 4;
    float cornerLong = 16;
    // Line width of the four corners
    CGContextSetLineWidth(ctx, cornerWidth);
    // Corner color: green
    CGContextSetRGBStrokeColor(ctx, 83/255.0, 239/255.0, 111/255.0, 1);
    // Top left
    CGPoint poinsTopLeftA[] = {CGPointMake(rect.origin.x + cornerWidth/2, rect.origin.y),
                               CGPointMake(rect.origin.x + cornerWidth/2, rect.origin.y + cornerLong)};
    CGPoint poinsTopLeftB[] = {CGPointMake(rect.origin.x, rect.origin.y + cornerWidth/2),
                               CGPointMake(rect.origin.x + cornerLong, rect.origin.y + cornerWidth/2)};
    [self addLine:poinsTopLeftA pointB:poinsTopLeftB ctx:ctx];
    // Bottom left
    CGPoint poinsBottomLeftA[] = {CGPointMake(rect.origin.x + cornerWidth/2, rect.origin.y + rect.size.height - cornerLong),
                                  CGPointMake(rect.origin.x + cornerWidth/2, rect.origin.y + rect.size.height)};
    CGPoint poinsBottomLeftB[] = {CGPointMake(rect.origin.x, rect.origin.y + rect.size.height - cornerWidth/2),
                                  CGPointMake(rect.origin.x + cornerLong, rect.origin.y + rect.size.height - cornerWidth/2)};
    [self addLine:poinsBottomLeftA pointB:poinsBottomLeftB ctx:ctx];
    // Top right
    CGPoint poinsTopRightA[] = {CGPointMake(rect.origin.x + rect.size.width - cornerLong, rect.origin.y + cornerWidth/2),
                                CGPointMake(rect.origin.x + rect.size.width, rect.origin.y + cornerWidth/2)};
    CGPoint poinsTopRightB[] = {CGPointMake(rect.origin.x + rect.size.width - cornerWidth/2, rect.origin.y),
                                CGPointMake(rect.origin.x + rect.size.width - cornerWidth/2, rect.origin.y + cornerLong)};
    [self addLine:poinsTopRightA pointB:poinsTopRightB ctx:ctx];
    // Bottom right
    CGPoint poinsBottomRightA[] = {CGPointMake(rect.origin.x + rect.size.width - cornerWidth/2, rect.origin.y + rect.size.height - cornerLong),
                                   CGPointMake(rect.origin.x + rect.size.width - cornerWidth/2, rect.origin.y + rect.size.height)};
    CGPoint poinsBottomRightB[] = {CGPointMake(rect.origin.x + rect.size.width - cornerLong, rect.origin.y + rect.size.height - cornerWidth/2),
                                   CGPointMake(rect.origin.x + rect.size.width, rect.origin.y + rect.size.height - cornerWidth/2)};
    [self addLine:poinsBottomRightA pointB:poinsBottomRightB ctx:ctx];
    CGContextStrokePath(ctx);
}

- (void)addLine:(CGPoint[])pointA pointB:(CGPoint[])pointB ctx:(CGContextRef)ctx{
    CGContextAddLines(ctx, pointA, 2);
    CGContextAddLines(ctx, pointB, 2);
}

@end

Note: if you build the demo and test it on iOS 7 or later, you may find the scan line's initial position is wrong. A small hint: layoutSubviews is executed twice. The author leaves this small tail for you to think about.

Portal: the corner-drawing method above is adapted from Raul7777 (link attached); the author made small improvements and added comments. If it helps you, please give him a Star.

Addendum: the scan-line animation uses an NSTimer, which occasionally stutters when the phone is busy, and that is a pit. The author has seen it done with CABasicAnimation instead, which feels much better:

- (void)addAnimationAboutScan{
    self.lineView.hidden = NO;
    CABasicAnimation *animation = [ShadowView moveYTime:2.5
                                                  fromY:[NSNumber numberWithFloat:0]
                                                    toY:[NSNumber numberWithFloat:(self.showSize.height - 1)]
                                                    rep:OPEN_MAX];
    [self.lineView.layer addAnimation:animation forKey:@"LineAnimation"];
}

+ (CABasicAnimation *)moveYTime:(float)time fromY:(NSNumber *)fromY toY:(NSNumber *)toY rep:(int)rep{
    CABasicAnimation *animationMove = [CABasicAnimation animationWithKeyPath:@"transform.translation.y"];
    [animationMove setFromValue:fromY];
    [animationMove setToValue:toY];
    animationMove.duration = time;
    animationMove.delegate = self;
    animationMove.repeatCount = rep;
    animationMove.fillMode = kCAFillModeForwards;
    animationMove.removedOnCompletion = NO;
    animationMove.timingFunction = [CAMediaTimingFunction functionWithName:kCAMediaTimingFunctionEaseInEaseOut];
    return animationMove;
}

- (void)removeAnimationAboutScan{
    [self.lineView.layer removeAnimationForKey:@"LineAnimation"];
    self.lineView.hidden = YES;
}

6) Back in ScanCodeViewController: configuring the scanning area

Key point: this is what the author most wants to explain. Start with the official documentation:
rectOfInterest Property
A rectangle of interest for limiting the search area for visual metadata.
The value of this property is a CGRect value that determines the receiver's rectangle of interest for each frame of video.
The rectangle’s origin is top left and is relative to the coordinate space of the device providing the metadata.
Specifying a rectangle of interest may improve detection performance for certain types of metadata. Metadata objects whose bounds do not intersect with the rectOfInterest will not be returned.
The default value of this property is a rectangle of (0, 0, 1, 1).

Note: the documentation says the origin of the rectangle is the top left corner, but in real testing you will find it is effectively the top right, because scanning defaults to landscape orientation: the original top right becomes the top left, and width and height swap. Furthermore, the values are normalized against the camera resolution's aspect ratio, not the screen's. Screen sizes in points:

iPhone 4: 320 x 480; iPhone 5: 320 x 568; iPhone 6: 375 x 667; iPhone 6 Plus: 414 x 736.

With AVCaptureSessionPresetHigh the capture resolution is 1920 x 1080, whose aspect ratio basically matches every model except the iPhone 4; there will be some error on that device, but the effect is not obvious. If you also need to support iPhone 4-class models, the screen and camera aspect ratios must be unified. The method below does this, so that the empty rectangle in the middle of the ShadowView corresponds to the effective scanning area.

/** Configure the scanning area */
- (void)allowScanRect{
    /**
     * Scanning defaults to landscape with the origin at the [top right]:
     * rectOfInterest = CGRectMake(0, 0, 1, 1).
     * AVCaptureSessionPresetHigh = 1920 x 1080, so the screen coordinates
     * must be normalized against the camera resolution.
     */
    // The rect to crop, in preview-layer coordinates
    CGRect shearRect = CGRectMake((self.layerViewSize.width - self.showSize.width) / 2,
                                  (self.layerViewSize.height - self.showSize.height) / 2,
                                  self.showSize.width, self.showSize.height);
    CGFloat deviceProportion = 1920.0 / 1080.0; // camera ratio (use floats, not integer division)
    CGFloat screenProportion = self.layerViewSize.height / self.layerViewSize.width; // screen ratio
    // Camera ratio > screen ratio (the converted screen is taller)
    if (deviceProportion > screenProportion) {
        // Screen height converted to the camera's aspect ratio
        CGFloat finalHeight = self.layerViewSize.width * deviceProportion;
        // Offset introduced by the conversion
        CGFloat addNum = (finalHeight - self.layerViewSize.height) / 2;
        // (actual position + offset) / converted screen height
        self.output.rectOfInterest = CGRectMake((shearRect.origin.y + addNum) / finalHeight,
                                                shearRect.origin.x / self.layerViewSize.width,
                                                shearRect.size.height / finalHeight,
                                                shearRect.size.width / self.layerViewSize.width);
    }else{
        CGFloat finalWidth = self.layerViewSize.height / deviceProportion;
        CGFloat addNum = (finalWidth - self.layerViewSize.width) / 2;
        self.output.rectOfInterest = CGRectMake(shearRect.origin.y / self.layerViewSize.height,
                                                (shearRect.origin.x + addNum) / finalWidth,
                                                shearRect.size.height / self.layerViewSize.height,
                                                shearRect.size.width / finalWidth);
    }
}

7) Reading a QR code from the photo album

Note: only iOS 8 and later expose the ability to read QR codes from images (CIDetectorTypeQRCode), so devices stuck on older systems, such as the iPhone 4, cannot use this feature; you need a version check.

#pragma mark - "Album" button on the navigation bar: read a QR code from a picture
- (void)takeQRCodeFromPic:(UIBarButtonItem *)leftBar{
    if ([[[UIDevice currentDevice] systemVersion] doubleValue] < 8) {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Tip"
                                                        message:@"Please update the system to iOS 8 or later!"
                                                       delegate:nil
                                              cancelButtonTitle:@"OK"
                                              otherButtonTitles:nil, nil];
        [alert show];
    }else{
        if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypePhotoLibrary]) {
            UIImagePickerController *pickerC = [[UIImagePickerController alloc] init];
            pickerC.delegate = self;
            // Pick from the album
            pickerC.sourceType = UIImagePickerControllerSourceTypeSavedPhotosAlbum;
            [self presentViewController:pickerC animated:YES completion:NULL];
        }else{
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Tip"
                                                            message:@"This device cannot access the album. Please allow Photos access in Settings -> Privacy!"
                                                           delegate:nil
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil, nil];
            [alert show];
        }
    }
}

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info{
    // 1. Get the selected picture
    UIImage *image = info[UIImagePickerControllerEditedImage];
    if (!image) {
        image = info[UIImagePickerControllerOriginalImage];
    }
    // 2. Initialize a detector
    CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeQRCode
                                              context:nil
                                              options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
    [picker dismissViewControllerAnimated:YES completion:^{
        // Run detection and collect the result array
        NSArray *features = [detector featuresInImage:[CIImage imageWithCGImage:image.CGImage]];
        // Check whether any QR code was found
        if (features.count >= 1) {
            /** The result object */
            CIQRCodeFeature *feature = [features objectAtIndex:0];
            NSString *scannedResult = feature.messageString;
            UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Tip"
                                                                message:scannedResult
                                                               delegate:nil
                                                      cancelButtonTitle:@"OK"
                                                      otherButtonTitles:nil, nil];
            [alertView show];
        }else{
            UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Tip"
                                                                message:@"The image does not contain a QR code!"
                                                               delegate:nil
                                                      cancelButtonTitle:@"OK"
                                                      otherButtonTitles:nil, nil];
            [alertView show];
        }
    }];
}

8) Updating the call in viewDidLoad

- (void)viewDidLoad {
    [super viewDidLoad];
    self.view.backgroundColor = [UIColor whiteColor];
    // Effective scanning area
    self.showSize = customShowSize;
    // Create the scanner
    [self creatScanQR];
    // Add the camera preview layer
    [self.view.layer addSublayer:self.layerView];
    // Start scanning
    [self.session startRunning];
    // Limit the scanning area
    [self allowScanRect];
    // Add the overlay view on top
    self.shadowView = [[ShadowView alloc] initWithFrame:CGRectMake(0, 64, kWidth, kHeight - 64)];
    [self.view addSubview:self.shadowView];
    self.shadowView.showSize = self.showSize;
    // Add the album button
    self.navigationItem.rightBarButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"Album"
                                                                              style:UIBarButtonItemStylePlain
                                                                             target:self
                                                                             action:@selector(takeQRCodeFromPic:)];
    // Do any additional setup after loading the view.
}

9) Permissions

Using the camera requires the corresponding permission. If the user has not granted it, you can show a reminder and direct them to Settings. The author leaves this small tail for readers to implement themselves.
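One possible shape for that check, left here as a sketch rather than the author's code (the method name checkCameraPermission is mine), using AVCaptureDevice's authorization API available since iOS 7:

```objectivec
// Sketch only: call this before starting the session instead of calling
// -startRunning directly. Assumes self.session is already configured.
- (void)checkCameraPermission {
    AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
    switch (status) {
        case AVAuthorizationStatusAuthorized:
            [self.session startRunning];
            break;
        case AVAuthorizationStatusNotDetermined:
            // First launch: the system asks the user
            [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
                if (granted) {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        [self.session startRunning];
                    });
                }
            }];
            break;
        default:
            // Denied or restricted: remind the user to enable it in
            // Settings -> Privacy -> Camera
            NSLog(@"Camera access denied");
            break;
    }
}
```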

Three: Generating QR codes in iOS

The article "[iOS] Generating QR codes with the system API" (by the author translated here as "cloud channel") explains generation very simply, and the generated QR code is high-definition; but on iOS 7 the result is blurry. In other words, if your app supports back to iOS 7, that method has some problems.

The article "iOS QR code generation (color + shadow)" by Captain supports iOS 7 and still produces a high-definition QR code.

Therefore the author will not repeat the generation topic here; the two articles above explain it completely.
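For orientation only, a minimal sketch of the system approach those articles build on, CIFilter's built-in CIQRCodeGenerator (the helper name and scaling strategy here are mine, not taken from the linked articles):

```objectivec
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

// Sketch: generate a QR image of roughly `size` points from a string.
UIImage *QRImageFromString(NSString *string, CGFloat size) {
    CIFilter *filter = [CIFilter filterWithName:@"CIQRCodeGenerator"];
    [filter setDefaults];
    [filter setValue:[string dataUsingEncoding:NSUTF8StringEncoding]
              forKey:@"inputMessage"];
    CIImage *output = filter.outputImage; // tiny image, one point per module
    // Scale up to display size. For a fully crisp result (especially on iOS 7),
    // the linked articles redraw through a CGContext with interpolation
    // disabled; a plain transform is the simplest starting point.
    CGFloat scale = size / output.extent.size.width;
    CIImage *scaled = [output imageByApplyingTransform:CGAffineTransformMakeScale(scale, scale)];
    return [UIImage imageWithCIImage:scaled];
}
```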

Four: QR code demo link

Appendix: third-party portals: