AVAudioFoundation (4): Audio and Video Recording

Original article: AVAudioFoundation (4): Audio and Video Recording | www.samirchen.com

The main content of this article comes from the AVFoundation Programming Guide.

To capture audio and video from a device, we need to assemble several objects whose cooperation is coordinated by an AVCaptureSession object:

  • An AVCaptureDevice object represents an input device, such as a camera or a microphone.
  • An instance of a concrete AVCaptureInput subclass is used to configure the ports of an input device.
  • An instance of a concrete AVCaptureOutput subclass is used to output the captured data to a movie file or a still image.
  • An AVCaptureSession instance coordinates the data flow from the inputs to the outputs.

When recording video, we can use AVCaptureVideoPreviewLayer to show the user a preview of what is being captured.

The following figure shows how a capture session instance coordinates multiple inputs and outputs:

(Image: a capture session coordinating multiple inputs and outputs)

For most application scenarios, this is all we need. But for some operations, such as monitoring the power level of an audio channel, we need to understand how the ports of the various input devices are represented and how those ports are connected to the outputs.

In capture, the connection between an input and an output is represented by AVCaptureConnection. An input (AVCaptureInput) has one or more input ports (AVCaptureInputPort), and an output (AVCaptureOutput) can accept data from one or more sources; for example, an AVCaptureMovieFileOutput can accept both video and audio data.

When you add an input or an output to a session, the session forms connections between all compatible input ports and outputs, each represented by an AVCaptureConnection object.

(Image: connections between input ports and outputs in a capture session)

We can use a capture connection to enable or disable the flow of data from a given input or to a given output. We can also use a connection to monitor the average and peak power levels of an audio channel.

Use Capture Session to coordinate data flow

AVCaptureSession is the core coordinator we use to manage data capture; it manages the flow of data from the audio and video inputs to the outputs. We add the capture devices we need to the session, then start the data flow with startRunning and stop it with stopRunning.

AVCaptureSession *session = [[AVCaptureSession alloc] init];
// Add inputs and outputs.
[session startRunning];

Configuring Session

We can set the image quality and resolution we need for the session. Here are a few of the configuration options:

  • AVCaptureSessionPresetHigh, high resolution, the specific value depends on the maximum resolution that the device can provide.
  • AVCaptureSessionPresetMedium, medium resolution, the exact value depends on the device.
  • AVCaptureSessionPresetLow, low resolution, specifically depending on the device.
  • AVCaptureSessionPreset640x480, with a resolution of 640×480, is often referred to as 480P.
  • AVCaptureSessionPreset1280x720, with a resolution of 1280×720, is often referred to as 720P.
  • AVCaptureSessionPresetPhoto, full-resolution photo quality; this preset does not support video output.

Before using a particular preset, we should check whether the session supports it:

if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
} else {
    // Handle the failure.
}

If we need finer-grained control over a session's parameters, or want to change the configuration of a session that is already running, we should make the changes between calls to beginConfiguration and commitConfiguration. This pair of methods lets our device changes be committed as a group, avoiding visual or state inconsistencies. After calling beginConfiguration, we can add or remove outputs, change the sessionPreset, and configure individual capture input or output properties. None of the changes takes effect until we call commitConfiguration, at which point they are committed and applied together.

[session beginConfiguration];
// Remove an existing capture device.
// Add a new capture device.
// Reset the preset.
[session commitConfiguration];

Monitor Session status

While running, the session posts notifications to report its state, such as when it starts, stops, or is interrupted. We can receive runtime errors through AVCaptureSessionRuntimeErrorNotification. We can also query the session's running and interrupted properties to get its state; both properties support KVO, and the notifications are posted on the main thread.
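
A minimal sketch of observing the session notifications; the observeSession: and handleSessionRuntimeError: method names here are illustrative, not part of the AVFoundation API:

// Register for runtime-error notifications from the session (illustrative helper methods).
- (void)observeSession:(AVCaptureSession *)session {
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(handleSessionRuntimeError:)
                                                 name:AVCaptureSessionRuntimeErrorNotification
                                               object:session];
}

- (void)handleSessionRuntimeError:(NSNotification *)notification {
    // The error that triggered the notification is stored under AVCaptureSessionErrorKey.
    NSError *error = notification.userInfo[AVCaptureSessionErrorKey];
    NSLog(@"Capture session runtime error: %@", error);
}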

Use AVCaptureDevice to represent input devices

AVCaptureDevice is an abstraction of the physical devices that provide input data (audio or video). Each AVCaptureDevice object corresponds to one input device, such as the front camera, the rear camera, or the microphone. The data they capture is fed into an AVCaptureSession instance.

We can use AVCaptureDevice's devices and devicesWithMediaType: class methods to find out which devices are currently available and, if needed, query what capabilities a device supports. The list of available devices is dynamic: devices may become unavailable because another application is using them, or may suddenly become available. We should therefore register for AVCaptureDeviceWasConnectedNotification and AVCaptureDeviceWasDisconnectedNotification to be told when the set of available devices changes.

We use a capture input (AVCaptureDeviceInput) to add a device to an AVCaptureSession.

Device characteristics

We can query a capture device for its various characteristics. For example, hasMediaType: tells us whether the device can capture a given media type, and supportsAVCaptureSessionPreset: tells us whether it supports a given session preset. We can also get the device's position and localized name so it can be presented to the user.

(Image: capture device characteristics such as position and localized name)

The following example code shows how to iterate over the devices and print their names:

NSArray *devices = [AVCaptureDevice devices];
for (AVCaptureDevice *device in devices) {
    NSLog(@"Device name: %@", [device localizedName]);
    if ([device hasMediaType:AVMediaTypeVideo]) {
        if ([device position] == AVCaptureDevicePositionBack) {
            NSLog(@"Device position: back");
        } else {
            NSLog(@"Device position: front");
        }
    }
}

In addition, we can also query a device's model ID and unique ID.

Device recording settings

Different devices have different capabilities; for example, some support particular focus or flash modes. The following code shows how to find devices that have a torch and support a given preset:

NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
NSMutableArray *torchDevices = [[NSMutableArray alloc] init];
for (AVCaptureDevice *device in devices) {
    if ([device hasTorch] && [device supportsAVCaptureSessionPreset:AVCaptureSessionPreset640x480]) {
        [torchDevices addObject:device];
    }
}

If we find multiple suitable devices, we can show the user each device's localizedName and let them choose. Before changing any of a device's settings, note that we must first acquire a lock on the device.

Focus mode

The currently supported focus modes are as follows:

  • AVCaptureFocusModeLocked: the focal position is fixed. This is useful when you want to lock focus on a particular point in the scene.
  • AVCaptureFocusModeAutoFocus: the camera performs a single autofocus scan and then locks focus. This lets the user focus on a particular subject even if it isn't at the center of the scene.
  • AVCaptureFocusModeContinuousAutoFocus: the camera continuously autofocuses as the scene changes.

We can use the isFocusModeSupported: method to check whether the device supports a given focus mode, and then set the focusMode property accordingly.

We can also use focusPointOfInterestSupported to check whether the device supports a focus point of interest; if so, we can set the focusPointOfInterest property, where {0, 0} is the top left and {1, 1} is the bottom right of the picture.

We can read the adjustingFocus property to find out whether the camera is currently focusing. The property supports KVO, so we can observe it to be notified when the focus state changes.
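
A minimal sketch of observing adjustingFocus via KVO; the helper method name and context pointer are illustrative:

static void *AdjustingFocusContext = &AdjustingFocusContext;

// Illustrative helper: call once after selecting the capture device.
- (void)startObservingFocusOnDevice:(AVCaptureDevice *)device {
    [device addObserver:self
             forKeyPath:@"adjustingFocus"
                options:NSKeyValueObservingOptionNew
                context:AdjustingFocusContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if (context == AdjustingFocusContext) {
        BOOL isAdjusting = [change[NSKeyValueChangeNewKey] boolValue];
        NSLog(@"Camera %@ adjusting focus", isAdjusting ? @"is" : @"finished");
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}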

If you have modified the settings associated with the camera’s focus mode, you can use the following code to revert to the default state:

if ([currentDevice isFocusModeSupported:AVCaptureFocusModeContinuousAutoFocus]) {
    CGPoint autofocusPoint = CGPointMake(0.5f, 0.5f);
    [currentDevice setFocusPointOfInterest:autofocusPoint];
    [currentDevice setFocusMode:AVCaptureFocusModeContinuousAutoFocus];
}

Exposure mode

The currently supported exposure modes are as follows:

  • AVCaptureExposureModeLocked, the exposure level is locked.
  • AVCaptureExposureModeAutoExpose, the camera adjusts the exposure once, then switches to AVCaptureExposureModeLocked.
  • AVCaptureExposureModeContinuousAutoExposure, the camera continuously adjusts the exposure level as needed.
  • AVCaptureExposureModeCustom, the exposure is set to user-defined values.

We can check whether the device supports a given exposure mode with isExposureModeSupported:, and then set the exposureMode property accordingly.

We can also use exposurePointOfInterestSupported to check whether the device supports an exposure point of interest; if so, we can set the exposurePointOfInterest property, where {0, 0} is the top left and {1, 1} is the bottom right of the picture.

We can access the adjustingExposure property to see if the camera is changing the exposure settings. This property supports KVO, so we can monitor it to see the changes in the exposure state.

If you have modified the camera’s exposure mode settings, you can use the following code to restore it to the default state:

if ([currentDevice isExposureModeSupported:AVCaptureExposureModeContinuousAutoExposure]) {
    CGPoint exposurePoint = CGPointMake(0.5f, 0.5f);
    [currentDevice setExposurePointOfInterest:exposurePoint];
    [currentDevice setExposureMode:AVCaptureExposureModeContinuousAutoExposure];
}

Flash mode

The currently supported flash modes are as follows:

  • AVCaptureFlashModeOff won’t flash.
  • AVCaptureFlashModeOn will flash.
  • AVCaptureFlashModeAuto decides whether to flash or not depending on the situation.

We can check whether the device has a flash via hasFlash, check whether it supports a given mode via isFlashModeSupported:, and then set the mode via the flashMode property.
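
A minimal sketch of setting the flash mode, assuming device is the AVCaptureDevice chosen earlier; remember that the device must be locked before changing its settings (see the device configuration section below):

NSError *error = nil;
if ([device hasFlash] && [device isFlashModeSupported:AVCaptureFlashModeAuto]) {
    if ([device lockForConfiguration:&error]) {
        device.flashMode = AVCaptureFlashModeAuto;
        [device unlockForConfiguration];
    } else {
        // Handle the failure to lock the device, e.g. log the error.
    }
}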

Torch mode

In torch mode, the flash stays on continuously at a lower power level to illuminate video recording. The currently supported modes are as follows:

  • AVCaptureTorchModeOff, the torch is always off.
  • AVCaptureTorchModeOn, the torch is always on.
  • AVCaptureTorchModeAuto, the torch is turned on and off automatically as needed.

We can check whether the device has a torch via hasTorch, check whether it supports a given torch mode via isTorchModeSupported:, and set the mode via the torchMode property.

Video stabilization

Video stabilization depends heavily on hardware, and not all video formats and resolutions support it. Enabling stabilization also adds some latency to the video capture pipeline. We can check whether stabilization is currently active via the connection's videoStabilizationEnabled property, and set enablesVideoStabilizationWhenAvailable to YES to let the application turn stabilization on automatically whenever the hardware supports it.
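
A minimal sketch of opting in to stabilization on a connection, assuming movieFileOutput is an AVCaptureMovieFileOutput already added to the session. Note that newer iOS versions replace this property with preferredVideoStabilizationMode, so treat this as illustrative of the older interface described above:

AVCaptureConnection *connection = [movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
if ([connection isVideoStabilizationSupported]) {
    // Let the system enable stabilization whenever the active format allows it.
    connection.enablesVideoStabilizationWhenAvailable = YES;
}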

White balance

The currently supported white balance modes are as follows:

  • AVCaptureWhiteBalanceModeLocked, the white balance is locked.
  • AVCaptureWhiteBalanceModeAutoWhiteBalance, the camera adjusts the white balance once, then switches to AVCaptureWhiteBalanceModeLocked.
  • AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance, the camera continuously adjusts the white balance as needed.

We can check whether a given mode is supported with isWhiteBalanceModeSupported:, and then set the whiteBalanceMode property accordingly.

We can access the adjustingWhiteBalance property to see if the camera is changing the white balance setting. This property supports KVO, so we can monitor it to see the changes in the white balance.

Setting the video orientation

We can set the orientation of the images delivered to an AVCaptureOutput (AVCaptureMovieFileOutput, AVCaptureStillImageOutput, AVCaptureVideoDataOutput) through its AVCaptureConnection instance.

We check isVideoOrientationSupported on the connection to see whether the video orientation can be changed, and set it through the videoOrientation property.

Here is the sample code:

AVCaptureConnection *captureConnection = <#A capture connection#>;
if ([captureConnection isVideoOrientationSupported]) {
    AVCaptureVideoOrientation orientation = AVCaptureVideoOrientationLandscapeLeft;
    [captureConnection setVideoOrientation:orientation];
}

Configuring a device

Before setting any of the properties described above, we need to acquire a lock on the device with lockForConfiguration:. This avoids incompatible changes when another application modifies the device settings while we are using it.

if ([device isFocusModeSupported:AVCaptureFocusModeLocked]) {
    NSError *error = nil;
    if ([device lockForConfiguration:&error]) {
        device.focusMode = AVCaptureFocusModeLocked;
        [device unlockForConfiguration];
    } else {
        // Respond to the failure as appropriate.
    }
}

You should hold the device lock only for as long as you need the settings to remain unchanged; holding the lock unnecessarily can degrade capture quality in other applications.

Device switching

Sometimes we need to switch devices in use, for example switching between the front and rear cameras. To avoid stuttering or flicker, we can reconfigure a running session, but we should wrap the changes in beginConfiguration and commitConfiguration:

AVCaptureSession *session = <#A capture session#>;
[session beginConfiguration];
[session removeInput:frontFacingCameraDeviceInput];
[session addInput:backFacingCameraDeviceInput];
[session commitConfiguration];

When the outermost commitConfiguration is called, all configuration changes are committed together, so that smooth switching is guaranteed.

Use Capture Inputs to add a device to a Session

We use AVCaptureDeviceInput to add a capture device to a session; AVCaptureDeviceInput manages the device's ports.

NSError *error;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}

We use addInput: to add the input to a session, and can check with canAddInput: beforehand whether it can be added.

AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;
if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
} else {
    // Handle the failure.
}

An AVCaptureInput vends one or more streams of media data. For example, an input device can provide both audio and video data. Each media stream provided by an input is represented by an AVCaptureInputPort object, and a session uses an AVCaptureConnection object to define the mapping between a set of AVCaptureInputPort objects and a single AVCaptureOutput.

Use Capture Outputs to get output from a Session

To get output from a capture session, we need to add one or more output instances to it; the outputs are instances of AVCaptureOutput subclasses, for example:

  • AVCaptureMovieFileOutput, to output to a movie file.
  • AVCaptureVideoDataOutput, to process the captured video frames ourselves (for example, to create a custom rendering layer).
  • AVCaptureAudioDataOutput, to process the captured audio data ourselves.
  • AVCaptureStillImageOutput, to capture still images together with their metadata.

We use addOutput: to add an output to a session, and can check with canAddOutput: whether a candidate output is compatible before adding it. Outputs can be added to a running session.

AVCaptureSession *captureSession = <#Get a capture session#>;
AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;
if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
} else {
    // Handle the failure.
}

Save movie files

When using AVCaptureMovieFileOutput to save audio and video data to a file, we can configure aspects of the output such as the maximum recording duration and the maximum file size. We can also require recording to stop when disk space runs low.

AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;
aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;

The resolution and bit rate of the output depend on the capture session's sessionPreset. Video is typically encoded with H.264 and audio with AAC; the exact values vary with the device.

Start recording

We start recording with startRecordingToOutputFileURL:recordingDelegate:, supplying a file URL and a delegate. The URL must not point to an existing file, because the movie file output does not overwrite existing resources, and we must have permission to write to the specified location. The delegate must conform to the AVCaptureFileOutputRecordingDelegate protocol and must implement captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:.

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSURL *fileURL = <#A file URL that identifies the output location#>;
[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];

In captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error:, the delegate can, for example, write the finished file to the photo album, and it also needs to check for and handle any errors that may have occurred.

Make sure the file is written successfully

To confirm that the file was saved successfully, we check AVErrorRecordingSuccessfullyFinishedKey in the captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: callback:

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {
    BOOL recordedSuccessfully = YES;
    if ([error code] != noErr) {
        // A problem occurred: find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];
        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }
    // Continue as appropriate...
}

We need to check the AVErrorRecordingSuccessfullyFinishedKey value in the error's userInfo because, even when an error is returned, the file may still have been saved successfully; the error may simply indicate that one of the limits we set was reached, such as AVErrorMaximumDurationReached or AVErrorMaximumFileSizeReached. Other possible reasons include:

  • AVErrorDiskFull, there’s not enough disk space.
  • AVErrorDeviceWasDisconnected, the recording device is disconnected.
  • AVErrorSessionWasInterrupted, the session was interrupted while recording, for example by an incoming phone call.

Add Metadata to the file

We can set the metadata for the recorded file at any time, even during the recording process. The metadata of the file is represented by a series of AVMetadataItem objects, and we can use AVMutableMetadataItem to create our own metadata.

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;
NSArray *existingMetadataArray = aMovieFileOutput.metadata;
NSMutableArray *newMetadataArray = nil;
if (existingMetadataArray) {
    newMetadataArray = [existingMetadataArray mutableCopy];
} else {
    newMetadataArray = [[NSMutableArray alloc] init];
}

AVMutableMetadataItem *item = [[AVMutableMetadataItem alloc] init];
item.keySpace = AVMetadataKeySpaceCommon;
item.key = AVMetadataCommonKeyLocation;

CLLocation *location = <#The location to set#>;
item.value = [NSString stringWithFormat:@"%+08.4lf%+09.4lf/",
              location.coordinate.latitude, location.coordinate.longitude];

[newMetadataArray addObject:item];
aMovieFileOutput.metadata = newMetadataArray;

Processing video frames

An AVCaptureVideoDataOutput object uses a delegate to vend video frames, which we set with setSampleBufferDelegate:queue:. Besides the delegate, we must also supply a serial dispatch queue on which the delegate is called; the queue must be serial to guarantee that frames are delivered to the delegate in order. We can use this queue to distribute and process the video frames. Apple's SquareCam sample shows one way to do this.

The video frame data delivered in captureOutput:didOutputSampleBuffer:fromConnection: is represented as a CMSampleBufferRef. By default, the buffer is in the format the camera can produce most efficiently. We can request a custom output format through the videoSettings dictionary; currently the supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel formats can be read from the availableVideoCVPixelFormatTypes property, and the supported codec types from availableVideoCodecTypes. Both Core Graphics and OpenGL handle the BGRA format well.

AVCaptureVideoDataOutput *videoDataOutput = [AVCaptureVideoDataOutput new];
NSDictionary *newSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
videoDataOutput.videoSettings = newSettings;

// Discard late frames if the data output queue is blocked (as we process the still image).
[videoDataOutput setAlwaysDiscardsLateVideoFrames:YES];

// Create a serial dispatch queue used for the sample buffer delegate.
// A serial dispatch queue must be used to guarantee that video frames will be delivered in order.
// See the header doc for setSampleBufferDelegate:queue: for more information.
videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];

AVCaptureSession *captureSession = <#The capture session#>;
if ([captureSession canAddOutput:videoDataOutput]) {
    [captureSession addOutput:videoDataOutput];
}

Performance issues for video processing

We should set the lowest resolution that satisfies our application's needs; capturing at a higher resolution than necessary wastes processing power and memory and degrades performance.

We must make sure that the time spent processing a frame in captureOutput:didOutputSampleBuffer:fromConnection: does not exceed the time allotted to one frame. If processing takes too long, AVFoundation stops delivering frames not only to our delegate but also to other outputs, such as the preview layer.

We can use AVCaptureVideoDataOutput's minFrameDuration property to make sure we have enough time to process each frame, at the cost of a lower frame rate. We can also leave alwaysDiscardsLateVideoFrames set to YES (the default) so that late frames are dropped instead of queued, which avoids added latency. If you don't mind latency and want to receive more frames, set it to NO; that doesn't mean frames will never be dropped, only that they won't be dropped as early or as aggressively.

Capturing still images

When we want to capture still images with metadata, we can use AVCaptureStillImageOutput. The resolution of the captured image depends on the session preset and the device.

Pixel and encoding types

Different devices support different image formats. We can use availableImageDataCVPixelFormatTypes and availableImageDataCodecTypes to find out which pixel formats and codec types the current device supports, and then specify the image format by setting the outputSettings property.

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG };
[stillImageOutput setOutputSettings:outputSettings];

If we want to capture a JPEG image, it is best not to specify our own compression settings; instead, let AVCaptureStillImageOutput do the compression for us, since it can use hardware acceleration. To get the image data, we can call jpegStillImageNSDataRepresentation:, which returns an NSData object without re-compressing the data, even if we have modified the image's metadata.

Capture image

To take a picture, we call captureStillImageAsynchronouslyFromConnection:completionHandler:. The first parameter is the connection to capture from, and we need to find the connection whose input port is carrying video:

AVCaptureConnection *videoConnection = nil;
for (AVCaptureConnection *connection in stillImageOutput.connections) {
    for (AVCaptureInputPort *port in [connection inputPorts]) {
        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }
    if (videoConnection) {
        break;
    }
}

The second parameter is a block that takes two arguments: a CMSampleBuffer containing the image data and an NSError. The sample buffer may contain metadata attachments, such as an EXIF dictionary. We can modify the attachments if needed, but note the pixel format and encoding considerations for JPEG images discussed above.

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection
                                               completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
    CFDictionaryRef exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
    if (exifAttachments) {
        // Do something with the attachments.
    }
    // Continue as appropriate.
}];

Preview

Video Preview

We can use AVCaptureVideoPreviewLayer to show the user a preview of what is being recorded; no output object is needed for a preview. If we need access to the pixel data before it is shown to the user, we can add an AVCaptureVideoDataOutput.

Unlike a capture output, a video preview layer maintains a strong reference to the session it is associated with. This ensures that the session is not deallocated while the layer is displaying video, and it is reflected in the way a preview layer is initialized:

AVCaptureSession *captureSession = <#Get a capture session#>;
CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];
[viewLayer addSublayer:captureVideoPreviewLayer];

In general, a preview layer behaves like any other CALayer object: we can scale its image and apply transforms and rotations. One difference is the layer's orientation property, which specifies how the layer rotates the images coming from the camera. In addition, we can use supportsVideoMirroring to check whether video mirroring is supported and set videoMirrored to control whether the preview is mirrored; note that automaticallyAdjustsVideoMirroring defaults to YES, in which case the mirroring value is set automatically based on the session configuration.

Video gravity mode

We can set the video gravity mode through the videoGravity property. The options are listed below, followed by a short example:

  • AVLayerVideoGravityResizeAspect, preserves the video's aspect ratio; does not completely fill the layer, so black bars may appear; does not crop the video.
  • AVLayerVideoGravityResizeAspectFill, preserves the aspect ratio; completely fills the layer with no black bars; may crop the video.
  • AVLayerVideoGravityResize, does not preserve the aspect ratio, so the picture may be distorted; completely fills the layer; does not crop the video.
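
For example, an aspect-preserving, filling preview could be configured like this, assuming the captureVideoPreviewLayer and viewLayer from the earlier snippet:

captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = viewLayer.bounds;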

Click focus when using preview

Note that when implementing tap-to-focus on a preview, you must take the layer's orientation and video gravity into account, as well as the possibility that the preview is mirrored. See Apple's sample project AVCam-iOS: Using AVFoundation to Capture Images and Movies.

Showing audio levels

To monitor the average and peak power levels of an audio channel in a capture connection, we use AVCaptureAudioChannel objects. Audio levels are not key-value observable, so we must poll for updated values as often as we want to refresh the user interface (for example, 10 times per second).

AVCaptureAudioDataOutput *audioDataOutput = <#Get the audio data output#>;
NSArray *connections = audioDataOutput.connections;
if ([connections count] > 0) {
    // There should be only one connection to an AVCaptureAudioDataOutput.
    AVCaptureConnection *connection = [connections objectAtIndex:0];
    NSArray *audioChannels = connection.audioChannels;
    for (AVCaptureAudioChannel *channel in audioChannels) {
        float avg = channel.averagePowerLevel;
        float peak = channel.peakHoldLevel;
        // Update the level meter user interface.
    }
}

A complete example

The sample code below shows how to capture video and convert the frames into UIImage objects. Roughly, the steps are:

  • Create an AVCaptureSession object to coordinate the flow of data from the input device to the outputs.
  • Find the AVCaptureDevice we want.
  • Create an AVCaptureDeviceInput object for the device.
  • Create an AVCaptureVideoDataOutput to output the video frames.
  • Implement the AVCaptureVideoDataOutput delegate method to process the video frames.
  • In the delegate method, convert each CMSampleBuffer into a UIImage (an example imageFromSampleBuffer implementation is sketched after the delegate method below).

The code reads as follows:

// Create and configure the Capture Session:
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;

// Create and configure the Device and Device Input:
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];

// Create and configure the Video Data Output:
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[session addOutput:output];
output.videoSettings = @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
output.minFrameDuration = CMTimeMake(1, 15);

// Set the delegate and the Video Data Output queue:
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];

Delegate method implementation:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image.
}
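
The article does not show imageFromSampleBuffer itself; a minimal sketch, assuming the 32BGRA pixel format configured above, could look like this:

// A possible implementation of imageFromSampleBuffer for 32BGRA sample buffers.
static UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the BGRA pixel data in a bitmap context and create a CGImage from it.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    return image;
}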

Start recording:

Before recording, we also need to request camera permission:

// Request permission:
NSString *mediaType = AVMediaTypeVideo;
[AVCaptureDevice requestAccessForMediaType:mediaType completionHandler:^(BOOL granted) {
    if (granted) {
        // Granted access to mediaType.
        [self setDeviceAuthorized:YES];
    } else {
        // Not granted access to mediaType.
        dispatch_async(dispatch_get_main_queue(), ^{
            [[[UIAlertView alloc] initWithTitle:@"AVCam!"
                                        message:@"AVCam doesn't have permission to use Camera, please change privacy settings"
                                       delegate:self
                              cancelButtonTitle:@"OK"
                              otherButtonTitles:nil] show];
            [self setDeviceAuthorized:NO];
        });
    }
}];

Start recording method:

[session startRunning];

Stop recording method:

[session stopRunning];

High frame rate video capture

iOS 7 introduced high frame rate video capture on some devices. Through AVCaptureDeviceFormat we can query a format's media type, frame rates, zoom factors, video stabilization support, and so on. In AVFoundation:

  • Capture supports 720p resolution at 60 fps, including video stabilization and droppable P-frames.
  • Playback adds support for audio during fast and slow playback.
  • Editing provides full support for scaled edits in mutable compositions.
  • Export supports two ways of writing 60 fps movies: preserving the variable frame rate so that slow and fast motion are kept, or converting to an arbitrary slower frame rate such as 30 fps.

Note that newer iOS versions have extended these capabilities even further.

Playback

During playback, we can set the playback speed through AVPlayer's rate property.

AVPlayerItem's audioTimePitchAlgorithm property specifies how the audio should be processed when the video is played at a non-standard rate. The available options are listed below, followed by a short example:

  • AVAudioTimePitchAlgorithmLowQualityZeroLatency, lower quality, suitable for a wide range of playback speeds. Suitable rates: 0.5, 0.666667, 0.8, 1.0, 1.25, 1.5, 2.0.
  • AVAudioTimePitchAlgorithmTimeDomain, modest quality. Suitable rates: 0.5x–2x.
  • AVAudioTimePitchAlgorithmSpectral, highest quality, most computationally expensive. Suitable rates: 1/32–32.
  • AVAudioTimePitchAlgorithmVarispeed, high quality, with no pitch correction. Suitable rates: 1/32–32.
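
A minimal sketch of playing an item at twice normal speed; the asset URL is a placeholder:

AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL:<#A movie file URL#>];
playerItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmSpectral;
AVPlayer *player = [AVPlayer playerWithPlayerItem:playerItem];
player.rate = 2.0; // Setting a non-zero rate starts playback at that speed.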

Editing

AVMutableComposition is usually used for editing audio and video:

  • Create an AVMutableComposition object with the composition class method.
  • Use insertTimeRange:ofAsset:atTime:error: to insert the audio and video data.
  • Use scaleTimeRange:toDuration: to scale the duration of a time range, e.g. for slow or fast motion (a short sketch follows this list).
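
A minimal sketch along these lines, assuming the source asset has already been loaded:

AVAsset *asset = <#A source AVAsset#>;
AVMutableComposition *composition = [AVMutableComposition composition];
NSError *error = nil;

// Insert the whole asset at the start of the composition.
CMTimeRange fullRange = CMTimeRangeMake(kCMTimeZero, asset.duration);
[composition insertTimeRange:fullRange ofAsset:asset atTime:kCMTimeZero error:&error];

// Stretch the inserted range to twice its duration, producing slow motion on playback.
CMTime doubledDuration = CMTimeMultiply(asset.duration, 2);
[composition scaleTimeRange:fullRange toDuration:doubledDuration];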

Export

We can use AVAssetExportSession to export 60 fps video in one of two ways:

  • Use the AVAssetExportPresetPassthrough preset to avoid re-encoding the video; it only retimes the sections that were marked as 60 fps, slowed down, or sped up (see the sketch after this list).
  • Use a constant frame rate for maximum playback compatibility: for example, set the video composition's frameDuration to 30 fps, and configure the audio processing by setting audioTimePitchAlgorithm on the export session.
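
A minimal sketch of a passthrough export; the asset and output URL are placeholders:

AVAsset *asset = <#The asset to export#>;
NSURL *outputURL = <#A file URL that does not already exist#>;

AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset
                                                                        presetName:AVAssetExportPresetPassthrough];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    if (exportSession.status == AVAssetExportSessionStatusCompleted) {
        // The movie was written successfully.
    } else {
        NSLog(@"Export failed: %@", exportSession.error);
    }
}];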

Recording

We can use AVCaptureMovieFileOutput to record high frame rate video; it supports high frame rate recording by default and automatically selects an appropriate H.264 profile, level, and bit rate.

If we want to do custom processing while recording, we have to use AVAssetWriter, which requires some additional setup.

In particular, we usually need to set:

assetWriterInput.expectsMediaDataInRealTime = YES;

so that the asset writer input can keep up with the incoming real-time data.
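
A minimal sketch of creating an asset writer input for real-time capture; the output settings and URL here are illustrative, not taken from the original article:

// Illustrative settings; real code should derive the dimensions and bit rate from the capture format.
NSDictionary *videoSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                 AVVideoWidthKey  : @1280,
                                 AVVideoHeightKey : @720 };

NSError *error = nil;
AVAssetWriter *assetWriter = [AVAssetWriter assetWriterWithURL:<#An output file URL#>
                                                      fileType:AVFileTypeQuickTimeMovie
                                                         error:&error];
AVAssetWriterInput *assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                          outputSettings:videoSettings];
// Expect real-time data so the input is tuned to keep up with capture.
assetWriterInput.expectsMediaDataInRealTime = YES;

if ([assetWriter canAddInput:assetWriterInput]) {
    [assetWriter addInput:assetWriterInput];
}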