iOS: Using Core Image for Face Recognition

Update: In response to readers' requests, I have added an Objective-C version of the demo (OC version download address).
Also attached: Swift version download address.

Core Image is a powerful API in Cocoa Touch and a key part of the iOS SDK, yet it is often overlooked. In this tutorial, I will walk you through Core Image's face recognition features. Before we start, let's first look at how the Core Image framework is put together.

Composition of the Core Image framework

Apple has already organized the image-processing classes for us; the structure looks like this:

(Image: core Image.png — structure of the Core Image framework)

It is mainly divided into three parts (a minimal pipeline sketch follows the list):

  • Definition: CoreImage and CoreImageDefines. As the names suggest, these represent the Core Image framework itself and its definitions.
  • Operation:
    Filter (CIFilter): a CIFilter produces a CIImage. Typically it takes one or more images as input and, after a number of filtering operations, produces the specified output.
    Detection (CIDetector): a CIDetector detects features in an image, for example finding the eyes and mouth of a face.
    Features (CIFeature): a CIFeature represents a feature produced by a detector.
  • Image part:
    Canvas (CIContext): the canvas class can work with Quartz 2D or OpenGL and ties the Core Image classes together, for example when rendering filters and colors.
    Color (CIColor): handles the pixel colors of the image associated with the canvas.
    Vector (CIVector): the coordinate vectors used in image processing.
    Image (CIImage): represents an image, i.e. the output of the pipeline.
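To make the division above concrete, here is a minimal sketch of the CIImage → CIFilter → CIContext pipeline. It is illustrative only: the applySepia helper and the sepia filter are my own choices, not part of the tutorial project.

import UIKit
import CoreImage

// Minimal pipeline sketch: CIImage (input) -> CIFilter (operation) -> CIContext (rendering).
// The applySepia helper and the CISepiaTone filter are illustrative, not part of the demo project.
func applySepia(to inputImage: UIImage) -> UIImage? {
    // CIImage: the image representation Core Image works with
    guard let ciImage = CIImage(image: inputImage) else { return nil }

    // CIFilter: takes an image as input and produces a filtered CIImage
    guard let filter = CIFilter(name: "CISepiaTone") else { return nil }
    filter.setValue(ciImage, forKey: kCIInputImageKey)
    filter.setValue(0.8, forKey: kCIInputIntensityKey)
    guard let output = filter.outputImage else { return nil }

    // CIContext: the "canvas" that actually renders the CIImage into pixels
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}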

With this basic knowledge in place, let's create a project and verify Core Image's face recognition features step by step.

Applications to be built

Face recognition has been part of iOS since iOS 5 (2011), but it has never received much attention. The face recognition API lets developers not only detect faces, but also detect specific facial features, such as a smile or a wink.

First of all, to understand Core Image's face recognition technology, we will create an app that identifies the face in a photo and marks it with a box. In the second demo, we will let the user take a photo, detect whether it contains a face, and retrieve the face's position. By the end, we will have a solid grasp of face recognition on iOS and know how to use this powerful but often ignored API.

Without further ado, let's get started!

Build the project (I am using Xcode 8.0)


A starter project is provided (mainly for convenience; of course, you can also create your own). After downloading it and opening it with Xcode, you will see that it contains only a storyboard with an imageView already connected to an IBOutlet.

(Image: 1.png — the starter project's storyboard)

Face recognition using CoreImage


In the starter project, the imageView component in the storyboard is already connected to the IBOutlet in code; next we will write the code that performs the face recognition. Add the following code to the ViewController.swift file:

import UIKit
import CoreImage

class ViewController: UIViewController {

    @IBOutlet weak var personPic: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        personPic.image = UIImage(named: "Face-1")
        // Call the face recognition method
        detect()
    }

    // MARK: - Face recognition
    func detect() {
        // Create a personciImage variable: extract the image from the storyboard's UIImageView
        // and convert it to a CIImage, since Core Image works with CIImage
        guard let personciImage = CIImage(image: personPic.image!) else {
            return
        }

        // Create an accuracy variable. You can choose between CIDetectorAccuracyHigh (more processing power)
        // and CIDetectorAccuracyLow (less processing power); I want high accuracy, so I use CIDetectorAccuracyHigh
        let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]

        // Define a faceDetector variable of type CIDetector, passing in the accuracy variable created above
        let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)

        // Call the faceDetector's features(in:) method. The detector finds the faces in the image
        // and returns them as an array
        let faces = faceDetector?.features(in: personciImage)

        // Loop over all the faces in the array, casting each one to CIFaceFeature
        for face in faces as! [CIFaceFeature] {
            print("Found bounds are \(face.bounds)")

            // Create a UIView named faceBox whose frame is the returned face's bounds,
            // i.e. draw a rectangle to mark the detected face
            let faceBox = UIView(frame: face.bounds)
            // Set the faceBox border width to 3
            faceBox.layer.borderWidth = 3
            // Set the border color to red
            faceBox.layer.borderColor = UIColor.red.cgColor
            // Set the background color to clear, meaning the view has no visible background
            faceBox.backgroundColor = UIColor.clear
            // Finally, add this view to the personPic imageView
            personPic.addSubview(faceBox)

            // The API can not only detect the face but also features such as the eyes.
            // We won't mark the eyes in the image; this is just to show you the CIFaceFeature properties
            if face.hasLeftEyePosition {
                print("Left eye bounds are \(face.leftEyePosition)")
            }
            if face.hasRightEyePosition {
                print("Right eye bounds are \(face.rightEyePosition)")
            }
        }
    }
}

Compile and run the app; the result should look like this:

(Image: 2.png)

According to the console output, the detector seems to have recognized the face:

Found bounds are (314, 243, 196, 196)

The current implementation leaves two problems unsolved:

  • Face detection is performed on the original image. Because the original image has a higher resolution than the image view, and the image view's content mode is set to aspect fit (which scales the picture while keeping its aspect ratio), we need to compute the actual position and size of the face inside the image view before drawing the rectangle.
  • It should also be noted that Core Image and UIView use two different coordinate systems (see the figures below), so we need a conversion from Core Image coordinates to UIView coordinates; a small sketch of that conversion follows the two figures.

UIView coordinate system:

(Image: UIView coordinate system)

CoreImage coordinate system:

(Image: CoreImage coordinate system)
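As mentioned in the list above, here is the conversion in isolation. Treat it as a sketch: the convertToUIKitCoordinates helper is my own name for illustration, not part of the project, and it assumes the rect comes straight from CIFaceFeature.bounds.

import CoreGraphics

// Sketch of the coordinate conversion: Core Image puts the origin at the bottom-left,
// UIKit puts it at the top-left, so we flip the y axis around the image height.
// convertToUIKitCoordinates is an illustrative helper, not part of the demo project.
func convertToUIKitCoordinates(_ rect: CGRect, imageHeight: CGFloat) -> CGRect {
    var transform = CGAffineTransform(scaleX: 1, y: -1)       // mirror the y axis
    transform = transform.translatedBy(x: 0, y: -imageHeight) // shift the origin back into the image
    return rect.applying(transform)
}

The detect() method below applies exactly this transform before scaling the rectangle into the image view.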

Now replace the detect() method with the following code:

func detect() {

    guard let personciImage = CIImage(image: personPic.image!) else {
        return
    }

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: personciImage)

    // Prepare the coordinate-system conversion
    let ciImageSize = personciImage.extent.size
    var transform = CGAffineTransform(scaleX: 1, y: -1)
    transform = transform.translatedBy(x: 0, y: -ciImageSize.height)

    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")

        // Apply the transform to convert the coordinates
        var faceViewBounds = face.bounds.applying(transform)

        // Calculate the actual position and size of the rectangle in the image view
        let viewSize = personPic.bounds.size
        let scale = min(viewSize.width / ciImageSize.width,
                        viewSize.height / ciImageSize.height)
        let offsetX = (viewSize.width - ciImageSize.width * scale) / 2
        let offsetY = (viewSize.height - ciImageSize.height * scale) / 2

        faceViewBounds = faceViewBounds.applying(CGAffineTransform(scaleX: scale, y: scale))
        faceViewBounds.origin.x += offsetX
        faceViewBounds.origin.y += offsetY

        let faceBox = UIView(frame: faceViewBounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        personPic.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }

        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}

In the code above, we first use an affine transform (CGAffineTransform) to convert the Core Image coordinates to UIKit coordinates, and then compute the actual position and size of the rectangle inside the image view.
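If you want the scaling step spelled out, the scale and offsets above are just the standard aspect-fit math. Here is the same calculation factored into a small helper; the helper itself is hypothetical and not part of the project.

import CoreGraphics

// Aspect-fit math used in detect(): how much the CIImage is scaled down to fit the image view,
// and how far the scaled image is shifted to stay centred. Hypothetical helper for illustration.
func aspectFitScaleAndOffset(imageSize: CGSize, viewSize: CGSize) -> (scale: CGFloat, offset: CGPoint) {
    let scale = min(viewSize.width / imageSize.width,
                    viewSize.height / imageSize.height)
    let offsetX = (viewSize.width - imageSize.width * scale) / 2
    let offsetY = (viewSize.height - imageSize.height * scale) / 2
    return (scale, CGPoint(x: offsetX, y: offsetY))
}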

Run the app again, and you should see a box drawn around the person's face. OK, you have successfully recognized a face using Core Image.

(Image: 3.png)

However, some readers may find that after using the code above no box appears (that is, the face is not recognized). This happens when you forget to turn off Auto Layout and Size Classes. Select the ViewController in the storyboard and select the view under the imageView; then, in the first tab of the right-hand panel, find "Use Auto Layout" and uncheck it.

(Image: 4.png — unchecking Use Auto Layout)

After making the setting above, run the app again and you will see the result shown in figure 3.

Building a face recognition camera application


Imagine you have a camera app. After taking a picture, you want to run face recognition to detect whether a face is present; if there are several faces, you might want to tag those photos accordingly. We will not build an app that processes photos after they are saved, but a real-time camera app, so we need to integrate the UIImagePicker class and run face recognition immediately after a picture is taken.

The CameraViewController class has already been created in the starter project; use the following code to implement the camera functionality:

class CameraViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {

    @IBOutlet var imageView: UIImageView!
    let imagePicker = UIImagePickerController()

    override func viewDidLoad() {
        super.viewDidLoad()
        imagePicker.delegate = self
    }

    @IBAction func takePhoto(_ sender: AnyObject) {
        if !UIImagePickerController.isSourceTypeAvailable(.camera) {
            return
        }

        imagePicker.allowsEditing = false
        imagePicker.sourceType = .camera

        present(imagePicker, animated: true, completion: nil)
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
        if let pickedImage = info[UIImagePickerControllerOriginalImage] as? UIImage {
            imageView.contentMode = .scaleAspectFit
            imageView.image = pickedImage
        }

        dismiss(animated: true, completion: nil)
        self.detect()
    }

    func imagePickerControllerDidCancel(_ picker: UIImagePickerController) {
        dismiss(animated: true, completion: nil)
    }
}

The first few lines set the UIImagePicker delegate to the current view controller. In didFinishPickingMediaWithInfo (one of the UIImagePicker delegate methods), the imageView is set to the selected image; the picker is then dismissed and the detect function is called.

The detect function is not yet implemented. Insert the following code, and let's analyze it:

func detect() {

    let imageOptions = NSDictionary(object: NSNumber(value: 5) as NSNumber,
                                    forKey: CIDetectorImageOrientation as NSString)
    let personciImage = CIImage(cgImage: imageView.image!.cgImage!)
    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: personciImage, options: imageOptions as? [String : AnyObject])

    if let face = faces?.first as? CIFaceFeature {
        print("found bounds are \(face.bounds)")

        let alert = UIAlertController(title: "Prompt", message: "Face detected", preferredStyle: UIAlertControllerStyle.alert)
        alert.addAction(UIAlertAction(title: "OK", style: UIAlertActionStyle.default, handler: nil))
        self.present(alert, animated: true, completion: nil)

        if face.hasSmile {
            print("face is smiling")
        }

        if face.hasLeftEyePosition {
            print("Left eye position: \(face.leftEyePosition)")
        }

        if face.hasRightEyePosition {
            print("Right eye position: \(face.rightEyePosition)")
        }
    } else {
        let alert = UIAlertController(title: "Prompt", message: "No face detected", preferredStyle: UIAlertControllerStyle.alert)
        alert.addAction(UIAlertAction(title: "OK", style: UIAlertActionStyle.default, handler: nil))
        self.present(alert, animated: true, completion: nil)
    }
}
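One detail in the code above worth calling out: the CIDetectorImageOrientation value is hard-coded to 5. If you would rather derive it from the picked image, a sketch like the following should work; the exifOrientation helper is my own, not part of the project, and the numbers are the standard EXIF orientation codes.

import UIKit

// Maps a UIImageOrientation to the EXIF orientation code expected by CIDetectorImageOrientation.
// exifOrientation is an illustrative helper, not part of the demo project.
func exifOrientation(for orientation: UIImageOrientation) -> Int {
    switch orientation {
    case .up:            return 1
    case .upMirrored:    return 2
    case .down:          return 3
    case .downMirrored:  return 4
    case .leftMirrored:  return 5
    case .right:         return 6
    case .rightMirrored: return 7
    default:             return 8   // .left
    }
}

// Usage inside detect(), replacing the hard-coded 5:
// let imageOptions = [CIDetectorImageOrientation: exifOrientation(for: imageView.image!.imageOrientation)]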

This detect() function is very similar to the previous one, but this time no coordinate transform is needed, since we only show an alert instead of drawing a box. When a face is recognized, an alert with the message "Face detected" is shown; otherwise the message is "No face detected". Run the app to test it:

(Image: detected face.png)
(Image: was not detected face.png)

We have used some of CIFaceFeature's properties and methods. For example, to check whether the person is smiling, you can read hasSmile, which returns a Boolean value. hasLeftEyePosition and hasRightEyePosition can be used to check whether the left and right eyes were detected.

Similarly, you can call hasMouthPosition to detect the presence of a mouth, and if so, you can use the mouthPosition property, as shown below:

if face.hasMouthPosition {
    print("mouth detected")
}

As you can see, detecting facial features with Core Image is very easy. Besides the mouth, smile, and eyes, you can also check leftEyeClosed and rightEyeClosed to detect whether the eyes are closed; that code is not included in the project.
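For completeness, here is roughly what an eye-blink check could look like. Note that, as far as I know, the detector only fills in the smile and blink information when you ask for it through the CIDetectorSmile and CIDetectorEyeBlink options, so treat this as a sketch rather than the tutorial's code; the reportBlinks helper is my own name.

import CoreImage

// Sketch of checking leftEyeClosed / rightEyeClosed. The reportBlinks helper is illustrative.
func reportBlinks(in image: CIImage) {
    let detector = CIDetector(ofType: CIDetectorTypeFace,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    // Ask the detector to also evaluate smiles and eye blinks
    let options: [String: Any] = [CIDetectorSmile: true, CIDetectorEyeBlink: true]
    guard let faces = detector?.features(in: image, options: options) as? [CIFaceFeature] else {
        return
    }
    for face in faces {
        if face.leftEyeClosed  { print("left eye is closed") }
        if face.rightEyeClosed { print("right eye is closed") }
    }
}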

Summary


In this tutorial, I explored Core Image's face recognition API and how to use it in a camera app, building a simple UIImagePicker-based flow to select a photo and detect whether it contains a person.

As you can see, Core Image face recognition is a powerful API! I hope this tutorial gives you some useful information about this little-known corner of iOS.

Click the Swift version address or the OC version address to download the final project. If you find it helpful, please give me a star; your star is my greatest support. (^__^)