GPUImage source code reading (five)

Summary

GPUImage is a well-known open source library for image processing. It lets you apply GPU-accelerated filters and other effects to images, video, and live camera input. Compared with the CoreImage framework, GPUImage makes it easy to build custom filters on top of the interfaces it provides. Project address: https://github.com/BradLarson/GPUImage
This installment of GPUImage source code reading focuses on the GPUImageVideoCamera, GPUImageStillCamera, GPUImageMovieWriter, and GPUImageMovie classes. In GPUImage source code reading (four), the data sources were images and UI rendering; here the data sources are the camera and audio/video files. GPUImageView, introduced in the previous article, is again used for on-screen display, and GPUImageMovieWriter is used to save the recorded audio and video to a file. The classes covered are:
GPUImageVideoCamera
GPUImageStillCamera
GPUImageMovieWriter
GPUImageMovie

Results

  • Record video
    [GIF: video recording.gif]
  • Take a photo
    [Image: camera.png]
  • Video transcoding with filters
    [GIF: original video.gif]
    [GIF: video after filter processing.gif]

GPUImageVideoCamera

GPUImageVideoCamera inherits from GPUImageOutput and implements the AVCaptureVideoDataOutputSampleBufferDelegate and AVCaptureAudioDataOutputSampleBufferDelegate protocols. It drives the camera to capture video; each captured frame is turned into a framebuffer object that can be displayed with GPUImageView or written to a file with GPUImageMovieWriter. It also exposes the GPUImageVideoCameraDelegate protocol, which makes it convenient to work with the raw CMSampleBuffer. The following pixel formats are involved when processing video:

kCVPixelFormatType_32BGRA
kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
kCVPixelFormatType_420YpCbCr8BiPlanarFullRange

These pixel formats were described in detail in "OpenGL ES 11 - camera video rendering", so they are not covered again here; refer to that article if needed.
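If you want to inspect the raw frames yourself, the GPUImageVideoCameraDelegate callback mentioned above is the place to do it. A minimal sketch (the logging is purely illustrative; it assumes your controller conforms to GPUImageVideoCameraDelegate and has been set as the camera's delegate) that checks which of the formats above the camera is actually delivering:

- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);

    if (format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
        NSLog(@"Full-range YUV frame: %zu x %zu",
              CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));
    } else if (format == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
        NSLog(@"Video-range YUV frame");
    } else if (format == kCVPixelFormatType_32BGRA) {
        NSLog(@"BGRA frame");
    }
}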

  • Property list. Most of the properties relate to the camera.
// Whether the AVCaptureSession is running
@property(readonly, nonatomic) BOOL isRunning;
// The AVCaptureSession object
@property(readonly, retain, nonatomic) AVCaptureSession *captureSession;
// Quality/size preset for the video output, e.g. AVCaptureSessionPreset640x480
@property (readwrite, nonatomic, copy) NSString *captureSessionPreset;
// Video frame rate
@property (readwrite) int32_t frameRate;
// Which cameras are present
@property (readonly, getter = isFrontFacingCameraPresent) BOOL frontFacingCameraPresent;
@property (readonly, getter = isBackFacingCameraPresent) BOOL backFacingCameraPresent;
// Log benchmark output in real time
@property(readwrite, nonatomic) BOOL runBenchmark;
// The AVCaptureDevice in use, handy for tweaking camera parameters
@property(readonly) AVCaptureDevice *inputCamera;
// Orientation of the output image
@property(readwrite, nonatomic) UIInterfaceOrientation outputImageOrientation;
// Horizontal mirroring of the front / rear camera
@property(readwrite, nonatomic) BOOL horizontallyMirrorFrontFacingCamera, horizontallyMirrorRearFacingCamera;
// The GPUImageVideoCameraDelegate delegate
@property(nonatomic, assign) id<GPUImageVideoCameraDelegate> delegate;
  • Initialization method.
- (id)initWithSessionPreset:(NSString *)sessionPreset cameraPosition:(AVCaptureDevicePosition)cameraPosition;

GPUImageVideoCamera has only a few initializers; you pass in the capture quality preset and which camera to use. Calling - (instancetype)init directly initializes with AVCaptureSessionPreset640x480 and AVCaptureDevicePositionBack.
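In other words, the plain init is shorthand for roughly the following (a usage sketch, not library code):

// Same configuration as [[GPUImageVideoCamera alloc] init]
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];

The full implementation of the designated initializer follows.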

- (id)initWithSessionPreset:(NSString *)sessionPreset cameraPosition:(AVCaptureDevicePosition)cameraPosition;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    // Create the audio and video processing queues
    cameraProcessingQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    audioProcessingQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);

    // Create the frame rendering semaphore
    frameRenderingSemaphore = dispatch_semaphore_create(1);

    // Initialize instance variables
    _frameRate = 0; // This will not set frame rate unless this value gets set to 1 or above
    _runBenchmark = NO;
    capturePaused = NO;
    outputRotation = kGPUImageNoRotation;
    internalRotation = kGPUImageNoRotation;
    captureAsYUV = YES;
    _preferredConversion = kColorConversion709;

    // Pick the front or back camera according to the passed-in position
    _inputCamera = nil;
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    for (AVCaptureDevice *device in devices)
    {
        if ([device position] == cameraPosition)
        {
            _inputCamera = device;
        }
    }

    // Return immediately if no matching camera was found
    if (!_inputCamera) {
        return nil;
    }

    // Create the capture session
    _captureSession = [[AVCaptureSession alloc] init];

    [_captureSession beginConfiguration];

    // Create the video input
    NSError *error = nil;
    videoInput = [[AVCaptureDeviceInput alloc] initWithDevice:_inputCamera error:&error];
    if ([_captureSession canAddInput:videoInput])
    {
        [_captureSession addInput:videoInput];
    }

    // Create the video data output
    videoOutput = [[AVCaptureVideoDataOutput alloc] init];
    [videoOutput setAlwaysDiscardsLateVideoFrames:NO];

//    if (captureAsYUV && [GPUImageContext deviceSupportsRedTextures])
    // Configure YUV output
    if (captureAsYUV && [GPUImageContext supportsFastTextureUpload])
    {
        BOOL supportsFullYUVRange = NO;
        NSArray *supportedPixelFormats = videoOutput.availableVideoCVPixelFormatTypes;
        for (NSNumber *currentPixelFormat in supportedPixelFormats)
        {
            if ([currentPixelFormat intValue] == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange)
            {
                supportsFullYUVRange = YES;
            }
        }

        if (supportsFullYUVRange)
        {
            // Use the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format
            [videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarFullRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
            isFullYUVRange = YES;
        }
        else
        {
            // Use the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange format
            [videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
            isFullYUVRange = NO;
        }
    }
    else
    {
        // Use the kCVPixelFormatType_32BGRA format
        [videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
    }

    // Create the GL program and look up the position / texture coordinate attributes
    runSynchronouslyOnVideoProcessingQueue(^{

        if (captureAsYUV)
        {
            [GPUImageContext useImageProcessingContext];
            //            if ([GPUImageContext deviceSupportsRedTextures])
            //            {
            //                yuvConversionProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageYUVVideoRangeConversionForRGFragmentShaderString];
            //            }
            //            else
            //            {
            if (isFullYUVRange)
            {
                yuvConversionProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageYUVFullRangeConversionForLAFragmentShaderString];
            }
            else
            {
                yuvConversionProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageYUVVideoRangeConversionForLAFragmentShaderString];
            }

            if (!yuvConversionProgram.initialized)
            {
                [yuvConversionProgram addAttribute:@"position"];
                [yuvConversionProgram addAttribute:@"inputTextureCoordinate"];

                if (![yuvConversionProgram link])
                {
                    NSString *progLog = [yuvConversionProgram programLog];
                    NSLog(@"Program link log: %@", progLog);
                    NSString *fragLog = [yuvConversionProgram fragmentShaderLog];
                    NSLog(@"Fragment shader compile log: %@", fragLog);
                    NSString *vertLog = [yuvConversionProgram vertexShaderLog];
                    NSLog(@"Vertex shader compile log: %@", vertLog);
                    yuvConversionProgram = nil;
                    NSAssert(NO, @"Filter shader link failed");
                }
            }

            yuvConversionPositionAttribute = [yuvConversionProgram attributeIndex:@"position"];
            yuvConversionTextureCoordinateAttribute = [yuvConversionProgram attributeIndex:@"inputTextureCoordinate"];
            yuvConversionLuminanceTextureUniform = [yuvConversionProgram uniformIndex:@"luminanceTexture"];
            yuvConversionChrominanceTextureUniform = [yuvConversionProgram uniformIndex:@"chrominanceTexture"];
            yuvConversionMatrixUniform = [yuvConversionProgram uniformIndex:@"colorConversionMatrix"];

            [GPUImageContext setActiveShaderProgram:yuvConversionProgram];

            glEnableVertexAttribArray(yuvConversionPositionAttribute);
            glEnableVertexAttribArray(yuvConversionTextureCoordinateAttribute);
        }
    });

    // Set self as the AVCaptureVideoDataOutputSampleBufferDelegate
    [videoOutput setSampleBufferDelegate:self queue:cameraProcessingQueue];

    // Add the video output
    if ([_captureSession canAddOutput:videoOutput])
    {
        [_captureSession addOutput:videoOutput];
    }
    else
    {
        NSLog(@"Couldn't add video output");
        return nil;
    }

    // Set the capture quality preset
    _captureSessionPreset = sessionPreset;
    [_captureSession setSessionPreset:_captureSessionPreset];

// This will let you get 60 FPS video from the 720p preset on an iPhone 4S, but only that device and that preset
//    AVCaptureConnection *conn = [videoOutput connectionWithMediaType:AVMediaTypeVideo];
//
//    if (conn.supportsVideoMinFrameDuration)
//        conn.videoMinFrameDuration = CMTimeMake(1,60);
//
//    if (conn.supportsVideoMaxFrameDuration)
//        conn.videoMaxFrameDuration = CMTimeMake(1,60);

    // Commit the configuration
    [_captureSession commitConfiguration];

    return self;
}
  • Other methods. GPUImageVideoCamera's methods fall into a few categories: 1) adding and removing input/output devices; 2) controlling video capture; 3) processing audio and video sample buffers; 4) querying and changing camera parameters.
// Add / remove the audio input and output
- (BOOL)addAudioInputsAndOutputs;
- (BOOL)removeAudioInputsAndOutputs;
// Remove all inputs and outputs
- (void)removeInputsAndOutputs;
// Start, stop, pause and resume camera capture
- (void)startCameraCapture;
- (void)stopCameraCapture;
- (void)pauseCameraCapture;
- (void)resumeCameraCapture;
// Process audio and video sample buffers
- (void)processVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer;
- (void)processAudioSampleBuffer:(CMSampleBufferRef)sampleBuffer;
// Query camera parameters
- (AVCaptureDevicePosition)cameraPosition;
- (AVCaptureConnection *)videoCaptureConnection;
+ (BOOL)isBackFacingCameraPresent;
+ (BOOL)isFrontFacingCameraPresent;
// Switch between the front and back camera
- (void)rotateCamera;
// Get the average frame duration
- (CGFloat)averageFrameDurationDuringCapture;
// Reset the benchmark
- (void)resetBenchmarkAverage;

Although GPUImageVideoCamera has quite a few methods, their internal logic is not very complicated.

// Add the audio input and output
- (BOOL)addAudioInputsAndOutputs
{
    if (audioOutput)
        return NO;

    [_captureSession beginConfiguration];

    _microphone = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    audioInput = [AVCaptureDeviceInput deviceInputWithDevice:_microphone error:nil];
    if ([_captureSession canAddInput:audioInput])
    {
        [_captureSession addInput:audioInput];
    }
    audioOutput = [[AVCaptureAudioDataOutput alloc] init];

    if ([_captureSession canAddOutput:audioOutput])
    {
        [_captureSession addOutput:audioOutput];
    }
    else
    {
        NSLog(@"Couldn't add audio output");
    }
    [audioOutput setSampleBufferDelegate:self queue:audioProcessingQueue];

    [_captureSession commitConfiguration];
    return YES;
}

// Remove the audio input and output
- (BOOL)removeAudioInputsAndOutputs
{
    if (!audioOutput)
        return NO;

    [_captureSession beginConfiguration];
    [_captureSession removeInput:audioInput];
    [_captureSession removeOutput:audioOutput];
    audioInput = nil;
    audioOutput = nil;
    _microphone = nil;
    [_captureSession commitConfiguration];
    return YES;
}

// Remove all inputs and outputs
- (void)removeInputsAndOutputs;
{
    [_captureSession beginConfiguration];
    if (videoInput) {
        [_captureSession removeInput:videoInput];
        [_captureSession removeOutput:videoOutput];
        videoInput = nil;
        videoOutput = nil;
    }
    if (_microphone != nil)
    {
        [_captureSession removeInput:audioInput];
        [_captureSession removeOutput:audioOutput];
        audioInput = nil;
        audioOutput = nil;
        _microphone = nil;
    }
    [_captureSession commitConfiguration];
}

// Start capturing
- (void)startCameraCapture;
{
    if (![_captureSession isRunning])
    {
        startingCaptureTime = [NSDate date];
        [_captureSession startRunning];
    };
}

// Stop capturing
- (void)stopCameraCapture;
{
    if ([_captureSession isRunning])
    {
        [_captureSession stopRunning];
    }
}

// Pause capturing
- (void)pauseCameraCapture;
{
    capturePaused = YES;
}

// Resume capturing
- (void)resumeCameraCapture;
{
    capturePaused = NO;
}

// Process a video sample buffer
- (void)processVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer;
{
    if (capturePaused)
    {
        return;
    }

    CFAbsoluteTime startTime = CFAbsoluteTimeGetCurrent();
    CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);

    // Get the width and height of the video frame
    int bufferWidth = (int) CVPixelBufferGetWidth(cameraFrame);
    int bufferHeight = (int) CVPixelBufferGetHeight(cameraFrame);

    CFTypeRef colorAttachments = CVBufferGetAttachment(cameraFrame, kCVImageBufferYCbCrMatrixKey, NULL);
    if (colorAttachments != NULL)
    {
        if (CFStringCompare(colorAttachments, kCVImageBufferYCbCrMatrix_ITU_R_601_4, 0) == kCFCompareEqualTo)
        {
            if (isFullYUVRange)
            {
                _preferredConversion = kColorConversion601FullRange;
            }
            else
            {
                _preferredConversion = kColorConversion601;
            }
        }
        else
        {
            _preferredConversion = kColorConversion709;
        }
    }
    else
    {
        if (isFullYUVRange)
        {
            _preferredConversion = kColorConversion601FullRange;
        }
        else
        {
            _preferredConversion = kColorConversion601;
        }
    }

    CMTime currentTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);

    [GPUImageContext useImageProcessingContext];

    // Fast texture upload for YUV frames
    if ([GPUImageContext supportsFastTextureUpload] && captureAsYUV)
    {
        CVOpenGLESTextureRef luminanceTextureRef = NULL;
        CVOpenGLESTextureRef chrominanceTextureRef = NULL;

//        if (captureAsYUV && [GPUImageContext deviceSupportsRedTextures])
        if (CVPixelBufferGetPlaneCount(cameraFrame) > 0) // Check for YUV planar inputs to do RGB conversion
        {
            CVPixelBufferLockBaseAddress(cameraFrame, 0);

            if ( (imageBufferWidth != bufferWidth) && (imageBufferHeight != bufferHeight) )
            {
                imageBufferWidth = bufferWidth;
                imageBufferHeight = bufferHeight;
            }

            CVReturn err;
            // Y plane
            glActiveTexture(GL_TEXTURE4);
            if ([GPUImageContext deviceSupportsRedTextures])
            {
//                err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_RED_EXT, bufferWidth, bufferHeight, GL_RED_EXT, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);
                err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE, bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);
            }
            else
            {
                err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE, bufferWidth, bufferHeight, GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &luminanceTextureRef);
            }
            if (err)
            {
                NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
            }

            luminanceTexture = CVOpenGLESTextureGetName(luminanceTextureRef);
            glBindTexture(GL_TEXTURE_2D, luminanceTexture);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

            // UV plane (width/2 = width/4 + width/4)
            glActiveTexture(GL_TEXTURE5);
            if ([GPUImageContext deviceSupportsRedTextures])
            {
//                err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCache, cameraFrame, NULL, GL_TEXTURE_2D, GL_RG_EXT, bufferWidth/2, bufferHeight/2, GL_RG_EXT, GL_UNSIGNED_BYTE, 1, &chrominanceTextureRef);
                err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chrominanceTextureRef);
            }
            else
            {
                err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, [[GPUImageContext sharedImageProcessingContext] coreVideoTextureCache], cameraFrame, NULL, GL_TEXTURE_2D, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chrominanceTextureRef);
            }
            if (err)
            {
                NSLog(@"Error at CVOpenGLESTextureCacheCreateTextureFromImage %d", err);
            }

            chrominanceTexture = CVOpenGLESTextureGetName(chrominanceTextureRef);
            glBindTexture(GL_TEXTURE_2D, chrominanceTexture);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
            glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

//            if (!allTargetsWantMonochromeData)
//            {
                [self convertYUVToRGBOutput];
//            }

            int rotatedImageBufferWidth = bufferWidth, rotatedImageBufferHeight = bufferHeight;

            if (GPUImageRotationSwapsWidthAndHeight(internalRotation))
            {
                rotatedImageBufferWidth = bufferHeight;
                rotatedImageBufferHeight = bufferWidth;
            }

            [self updateTargetsForVideoCameraUsingCacheTextureAtWidth:rotatedImageBufferWidth height:rotatedImageBufferHeight time:currentTime];

            CVPixelBufferUnlockBaseAddress(cameraFrame, 0);
            CFRelease(luminanceTextureRef);
            CFRelease(chrominanceTextureRef);
        }
        else
        {
            // TODO: Mesh this with the output framebuffer structure
            // (the single-texture BGRA upload path is commented out in the original source)
        }

        // Frame rate benchmarking
        if (_runBenchmark)
        {
            numberOfFramesCaptured++;
            if (numberOfFramesCaptured > INITIALFRAMESTOIGNOREFORBENCHMARK)
            {
                CFAbsoluteTime currentFrameTime = (CFAbsoluteTimeGetCurrent() - startTime);
                totalFrameTimeDuringCapture += currentFrameTime;
                NSLog(@"Average frame time : %f ms", [self averageFrameDurationDuringCapture]);
                NSLog(@"Current frame time : %f ms", 1000.0 * currentFrameTime);
            }
        }
    }
    else
    {
        // Lock the base address and upload the BGRA frame without the texture cache
        // ... (the rest of this branch is omitted in the original excerpt)
    }
}

Note
processAudioSampleBuffer hands the audio buffer directly to the audioEncodingTarget for processing. Therefore, if you want sound in a recorded video, you must set audioEncodingTarget; otherwise the recorded video will have no audio.
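In practice that means wiring the movie writer in as the audio encoding target before capture starts, as the recording demo later in this article does. A minimal sketch (videoCamera and movieWriter are assumed to have been created and configured elsewhere):

// The writer must be the audio encoding target,
// otherwise processAudioSampleBuffer: has nowhere to send the audio.
[videoCamera addAudioInputsAndOutputs];
videoCamera.audioEncodingTarget = movieWriter;

[videoCamera addTarget:movieWriter];
[videoCamera startCameraCapture];
[movieWriter startRecording];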


Texture format and color mapping

Base format           Texel data
GL_RED                (R, 0, 0, 1)
GL_RG                 (R, G, 0, 1)
GL_RGB                (R, G, B, 1)
GL_RGBA               (R, G, B, A)
GL_LUMINANCE          (L, L, L, 1)
GL_LUMINANCE_ALPHA    (L, L, L, A)
GL_ALPHA              (0, 0, 0, A)

From this table it is clear why, in GPUImage, the Y plane is uploaded with the GL_LUMINANCE internal format while the UV plane is uploaded with GL_LUMINANCE_ALPHA: the YUV-to-RGB shader can then read luminance from the red channel of one texture and the two chroma components from the red and alpha channels of the other.

GPUImageStillCamera

GPUImageStillCamera is mainly used for taking photos. It inherits from GPUImageVideoCamera, so in addition to everything GPUImageVideoCamera can do, it provides a rich photo-capture API that makes photo-related operations convenient.

  • Property list. GPUImageStillCamera has only a few properties, all related to the captured picture.
// JPEG compression quality, defaults to 0.8
@property CGFloat jpegCompressionQuality;
// Metadata of the capture
@property (readonly) NSDictionary *currentCaptureMetadata;
  • Method list. The main methods take a photo; the output type is flexible and can be a CMSampleBuffer, UIImage, NSData, and so on. If there is a filter chain, you also pass in the relevant filter (the final filter in the chain).
- (void)capturePhotoAsSampleBufferWithCompletionHandler:(void (^)(CMSampleBufferRef imageSampleBuffer, NSError *error))block;

- (void)capturePhotoAsImageProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage, NSError *error))block;
- (void)capturePhotoAsImageProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withOrientation:(UIImageOrientation)orientation withCompletionHandler:(void (^)(UIImage *processedImage, NSError *error))block;

- (void)capturePhotoAsJPEGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(NSData *processedJPEG, NSError *error))block;
- (void)capturePhotoAsJPEGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withOrientation:(UIImageOrientation)orientation withCompletionHandler:(void (^)(NSData *processedJPEG, NSError *error))block;

- (void)capturePhotoAsPNGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(NSData *processedPNG, NSError *error))block;
- (void)capturePhotoAsPNGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withOrientation:(UIImageOrientation)orientation withCompletionHandler:(void (^)(NSData *processedPNG, NSError *error))block;

Although the API surface is quite rich, every one of these methods ends up calling the private method - (void)capturePhotoProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withImageOnGPUHandler:(void (^)(NSError *error))block, so that is the one to focus on.
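As a usage sketch of the JPEG variant (outputPath is a placeholder, and stillCamera and filter are assumed to be wired up already):

// Capture a filtered photo as JPEG data and write it to disk.
[stillCamera capturePhotoAsJPEGProcessedUpToFilter:filter
                             withCompletionHandler:^(NSData *processedJPEG, NSError *error) {
    if (error != nil) {
        NSLog(@"Photo capture failed: %@", error);
        return;
    }
    [processedJPEG writeToFile:outputPath atomically:YES];
}];

The implementation of the JPEG variant and of the private method follows.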

- (void)capturePhotoAsJPEGProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withOrientation:(UIImageOrientation)orientation withCompletionHandler:(void (^)(NSData *processedImage, NSError *error))block
{
    // Call the private method that produces the framebuffer object
    [self capturePhotoProcessedUpToFilter:finalFilterInChain withImageOnGPUHandler:^(NSError *error) {
        NSData *dataForJPEGFile = nil;

        if (!error)
        {
            @autoreleasepool
            {
                // Read back the framebuffer and build a UIImage
                UIImage *filteredPhoto = [finalFilterInChain imageFromCurrentFramebufferWithOrientation:orientation];
                dispatch_semaphore_signal(frameRenderingSemaphore);
                // Encode the UIImage as JPEG data
                dataForJPEGFile = UIImageJPEGRepresentation(filteredPhoto, self.jpegCompressionQuality);
            }
        }
        else
        {
            dispatch_semaphore_signal(frameRenderingSemaphore);
        }

        block(dataForJPEGFile, error);
    }];
}

- (void)capturePhotoProcessedUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withImageOnGPUHandler:(void (^)(NSError *error))block
{
    // Wait on the frame rendering semaphore
    dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_FOREVER);

    // Check whether a still image capture is already in progress
    if (photoOutput.isCapturingStillImage) {
        block([NSError errorWithDomain:AVFoundationErrorDomain code:AVErrorMaximumStillImageCaptureRequestsExceeded userInfo:nil]);
        return;
    }

    // Capture the still image asynchronously
    [photoOutput captureStillImageAsynchronouslyFromConnection:[[photoOutput connections] objectAtIndex:0] completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
        if (imageSampleBuffer == NULL) {
            block(error);
            return;
        }

        // For now, resize photos to fit within the max texture size of the GPU
        CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(imageSampleBuffer);

        // Get the image size
        CGSize sizeOfPhoto = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));
        CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:sizeOfPhoto];

        // Check whether the image needs to be resized
        if (!CGSizeEqualToSize(sizeOfPhoto, scaledImageSizeToFitOnGPU))
        {
            CMSampleBufferRef sampleBuffer = NULL;

            if (CVPixelBufferGetPlaneCount(cameraFrame) > 0)
            {
                NSAssert(NO, @"Error: no downsampling for YUV input in the framework yet");
            }
            else
            {
                // Resize the image
                GPUImageCreateResizedSampleBuffer(cameraFrame, scaledImageSizeToFitOnGPU, &sampleBuffer);
            }

            dispatch_semaphore_signal(frameRenderingSemaphore);
            [finalFilterInChain useNextFrameForImageCapture];
            // Reuse the superclass's frame processing to generate the framebuffer object
            [self captureOutput:photoOutput didOutputSampleBuffer:sampleBuffer fromConnection:[[photoOutput connections] objectAtIndex:0]];
            dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_FOREVER);
            if (sampleBuffer != NULL)
                CFRelease(sampleBuffer);
        }
        else
        {
            // This is a workaround for the corrupt images that are sometimes returned when taking a photo with the front camera and using the iOS 5 texture caches
            AVCaptureDevicePosition currentCameraPosition = [[videoInput device] position];
            if ( (currentCameraPosition != AVCaptureDevicePositionFront) || (![GPUImageContext supportsFastTextureUpload]) || !requiresFrontCameraTextureCacheCorruptionWorkaround)
            {
                dispatch_semaphore_signal(frameRenderingSemaphore);
                [finalFilterInChain useNextFrameForImageCapture];
                // Reuse the superclass's frame processing to generate the framebuffer object
                [self captureOutput:photoOutput didOutputSampleBuffer:imageSampleBuffer fromConnection:[[photoOutput connections] objectAtIndex:0]];
                dispatch_semaphore_wait(frameRenderingSemaphore, DISPATCH_TIME_FOREVER);
            }
        }

        // Copy the image metadata
        CFDictionaryRef metadata = CMCopyDictionaryOfAttachments(NULL, imageSampleBuffer, kCMAttachmentMode_ShouldPropagate);
        _currentCaptureMetadata = (__bridge_transfer NSDictionary *)metadata;

        block(nil);

        _currentCaptureMetadata = nil;
    }];
}

GPUImageMovieWriter

The main job of GPUImageMovieWriter is to encode audio and video and save them as a media file. It implements the GPUImageInput protocol, so it can accept framebuffer input. For recording it relies mainly on AVAssetWriter, AVAssetWriterInput, and AVAssetWriterInputPixelBufferAdaptor. AVAssetWriter supports a number of container formats, listed in the table below:

Definition                   Extension
AVFileTypeQuickTimeMovie     .mov or .qt
AVFileTypeMPEG4              .mp4
AVFileTypeAppleM4V           .m4v
AVFileTypeAppleM4A           .m4a
AVFileType3GPP               .3gp or .3gpp or .sdv
AVFileType3GPP2              .3g2 or .3gp2
AVFileTypeCoreAudioFormat    .caf
AVFileTypeWAVE               .wav or .wave or .bwf
AVFileTypeAIFF               .aif or .aiff
AVFileTypeAIFC               .aifc or .cdda
AVFileTypeAMR                .amr
AVFileTypeMPEGLayer3         .mp3
AVFileTypeSunAU              .au or .snd
AVFileTypeAC3                .ac3
AVFileTypeEnhancedAC3        .eac3
  • Properties. GPUImageMovieWriter has quite a few properties, and most of them are practical. Many relate to the state of audio/video processing, such as whether audio is written, the completion callback, and the failure callback. The more important ones are listed below.
// Whether there is an audio track
@property(readwrite, nonatomic) BOOL hasAudioTrack;
// Pass audio through without processing it
@property(readwrite, nonatomic) BOOL shouldPassthroughAudio;
// Invalidate audio sample buffers once written so they are not reused
@property(readwrite, nonatomic) BOOL shouldInvalidateAudioSampleWhenDone;
// Completion and failure callbacks
@property(nonatomic, copy) void(^completionBlock)(void);
@property(nonatomic, copy) void(^failureBlock)(NSError *);
// Whether the video is being encoded live (in real time)
@property(readwrite, nonatomic) BOOL encodingLiveVideo;
// Callbacks fired when the video / audio inputs are ready for more data
@property(nonatomic, copy) BOOL(^videoInputReadyCallback)(void);
@property(nonatomic, copy) BOOL(^audioInputReadyCallback)(void);
// Audio processing callback
@property(nonatomic, copy) void(^audioProcessingCallback)(SInt16 **samplesRef, CMItemCount numSamplesInBuffer);
// The underlying AVAssetWriter
@property(nonatomic, readonly) AVAssetWriter *assetWriter;
// Duration up to the previous frame
@property(nonatomic, readonly) CMTime duration;
  • Initialization methods
- (id)initWithMovieURL:(NSURL *)newMovieURL size:(CGSize)newSize;
- (id)initWithMovieURL:(NSURL *)newMovieURL size:(CGSize)newSize fileType:(NSString *)newFileType outputSettings:(NSDictionary *)outputSettings;

Initialization mainly involves: 1) initializing instance variables; 2) creating the OpenGL program; 3) setting up the AVAssetWriter parameters, such as the video codec and video size. Note that the audio parameters are not configured here; if you need to handle audio, you must call - (void)setHasAudioTrack:(BOOL)newValue yourself.

- (id)initWithMovieURL:(NSURL *)newMovieURL size:(CGSize)newSize;
{
    // Forward to the designated initializer
    return [self initWithMovieURL:newMovieURL size:newSize fileType:AVFileTypeQuickTimeMovie outputSettings:nil];
}

- (id)initWithMovieURL:(NSURL *)newMovieURL size:(CGSize)newSize fileType:(NSString *)newFileType outputSettings:(NSMutableDictionary *)outputSettings;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    // Initialize instance variables
    _shouldInvalidateAudioSampleWhenDone = NO;

    self.enabled = YES;
    alreadyFinishedRecording = NO;
    videoEncodingIsFinished = NO;
    audioEncodingIsFinished = NO;
    discont = NO;
    videoSize = newSize;
    movieURL = newMovieURL;
    fileType = newFileType;
    startTime = kCMTimeInvalid;
    _encodingLiveVideo = [[outputSettings objectForKey:@"EncodingLiveVideo"] isKindOfClass:[NSNumber class]] ? [[outputSettings objectForKey:@"EncodingLiveVideo"] boolValue] : YES;
    previousFrameTime = kCMTimeNegativeInfinity;
    previousAudioTime = kCMTimeNegativeInfinity;
    inputRotation = kGPUImageNoRotation;

    // Create the context object
    _movieWriterContext = [[GPUImageContext alloc] init];
    [_movieWriterContext useSharegroup:[[[GPUImageContext sharedImageProcessingContext] context] sharegroup]];

    runSynchronouslyOnContextQueue(_movieWriterContext, ^{
        [_movieWriterContext useAsCurrentContext];

        // Create the OpenGL program
        if ([GPUImageContext supportsFastTextureUpload])
        {
            colorSwizzlingProgram = [_movieWriterContext programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImagePassthroughFragmentShaderString];
        }
        else
        {
            colorSwizzlingProgram = [_movieWriterContext programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImageColorSwizzlingFragmentShaderString];
        }

        // Look up the GLSL attribute and uniform locations
        if (!colorSwizzlingProgram.initialized)
        {
            [colorSwizzlingProgram addAttribute:@"position"];
            [colorSwizzlingProgram addAttribute:@"inputTextureCoordinate"];

            if (![colorSwizzlingProgram link])
            {
                NSString *progLog = [colorSwizzlingProgram programLog];
                NSLog(@"Program link log: %@", progLog);
                NSString *fragLog = [colorSwizzlingProgram fragmentShaderLog];
                NSLog(@"Fragment shader compile log: %@", fragLog);
                NSString *vertLog = [colorSwizzlingProgram vertexShaderLog];
                NSLog(@"Vertex shader compile log: %@", vertLog);
                colorSwizzlingProgram = nil;
                NSAssert(NO, @"Filter shader link failed");
            }
        }

        colorSwizzlingPositionAttribute = [colorSwizzlingProgram attributeIndex:@"position"];
        colorSwizzlingTextureCoordinateAttribute = [colorSwizzlingProgram attributeIndex:@"inputTextureCoordinate"];
        colorSwizzlingInputTextureUniform = [colorSwizzlingProgram uniformIndex:@"inputImageTexture"];

        [_movieWriterContext setContextShaderProgram:colorSwizzlingProgram];

        glEnableVertexAttribArray(colorSwizzlingPositionAttribute);
        glEnableVertexAttribArray(colorSwizzlingTextureCoordinateAttribute);
    });

    [self initializeMovieWithOutputSettings:outputSettings];

    return self;
}

- (void)initializeMovieWithOutputSettings:(NSDictionary *)outputSettings;
{
    isRecording = NO;

    self.enabled = YES;
    NSError *error = nil;

    // Initialize the AVAssetWriter with the file URL and container format
    assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:fileType error:&error];

    // Handle initialization failure
    if (error != nil)
    {
        NSLog(@"Error: %@", error);
        if (failureBlock)
        {
            failureBlock(error);
        }
        else
        {
            if (self.delegate && [self.delegate respondsToSelector:@selector(movieRecordingFailedWithError:)])
            {
                [self.delegate movieRecordingFailedWithError:error];
            }
        }
    }

    // Set this to make sure that a functional movie is produced, even if the recording is cut off mid-stream. Only the last second should be lost in that case.
    assetWriter.movieFragmentInterval = CMTimeMakeWithSeconds(1.0, 1000);

    // Configure the codec, width and height if no settings were passed in
    if (outputSettings == nil)
    {
        NSMutableDictionary *settings = [[NSMutableDictionary alloc] init];
        [settings setObject:AVVideoCodecH264 forKey:AVVideoCodecKey];
        [settings setObject:[NSNumber numberWithInt:videoSize.width] forKey:AVVideoWidthKey];
        [settings setObject:[NSNumber numberWithInt:videoSize.height] forKey:AVVideoHeightKey];
        outputSettings = settings;
    }
    // If settings were passed in, check that the required keys are present
    else
    {
        __unused NSString *videoCodec = [outputSettings objectForKey:AVVideoCodecKey];
        __unused NSNumber *width = [outputSettings objectForKey:AVVideoWidthKey];
        __unused NSNumber *height = [outputSettings objectForKey:AVVideoHeightKey];

        NSAssert(videoCodec && width && height, @"OutputSettings is missing required parameters.");

        if ([outputSettings objectForKey:@"EncodingLiveVideo"]) {
            NSMutableDictionary *tmp = [outputSettings mutableCopy];
            [tmp removeObjectForKey:@"EncodingLiveVideo"];
            outputSettings = tmp;
        }
    }

    /*
    NSDictionary *videoCleanApertureSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                                [NSNumber numberWithInt:videoSize.width], AVVideoCleanApertureWidthKey,
                                                [NSNumber numberWithInt:videoSize.height], AVVideoCleanApertureHeightKey,
                                                [NSNumber numberWithInt:0], AVVideoCleanApertureHorizontalOffsetKey,
                                                [NSNumber numberWithInt:0], AVVideoCleanApertureVerticalOffsetKey,
                                                nil];

    NSDictionary *videoAspectRatioSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                              [NSNumber numberWithInt:3], AVVideoPixelAspectRatioHorizontalSpacingKey,
                                              [NSNumber numberWithInt:3], AVVideoPixelAspectRatioVerticalSpacingKey,
                                              nil];

    NSMutableDictionary *compressionProperties = [[NSMutableDictionary alloc] init];
    [compressionProperties setObject:videoCleanApertureSettings forKey:AVVideoCleanApertureKey];
    [compressionProperties setObject:videoAspectRatioSettings forKey:AVVideoPixelAspectRatioKey];
    [compressionProperties setObject:[NSNumber numberWithInt: 2000000] forKey:AVVideoAverageBitRateKey];
    [compressionProperties setObject:[NSNumber numberWithInt: 16] forKey:AVVideoMaxKeyFrameIntervalKey];
    [compressionProperties setObject:AVVideoProfileLevelH264Main31 forKey:AVVideoProfileLevelKey];

    [outputSettings setObject:compressionProperties forKey:AVVideoCompressionPropertiesKey];
    */

    // Create the video AVAssetWriterInput
    assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
    // For live encoding; frames may be dropped if this is not set
    assetWriterVideoInput.expectsMediaDataInRealTime = _encodingLiveVideo;

    // You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
    // Pixel format fed into the encoder
    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                           [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                           [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                           nil];
//    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
//                                                           [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey,
//                                                           nil];

    // Create the pixel buffer adaptor
    assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

    [assetWriter addInput:assetWriterVideoInput];
}
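For a caller, the whole setup boils down to a couple of lines. A minimal sketch (the output URL and size are placeholders, and AVFileTypeMPEG4 is simply one of the container types from the table above):

// Write an H.264/AAC .mp4 instead of the default QuickTime .mov.
// movieURL is assumed to point at a writable location in the sandbox.
GPUImageMovieWriter *movieWriter =
    [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL
                                             size:CGSizeMake(480.0, 640.0)
                                         fileType:AVFileTypeMPEG4
                                   outputSettings:nil];

// Audio is not configured by the initializer; enable it explicitly if needed.
[movieWriter setHasAudioTrack:YES audioSettings:nil];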
  • Other methods.
// Configure whether an audio track is written, and with what settings
- (void)setHasAudioTrack:(BOOL)hasAudioTrack audioSettings:(NSDictionary *)audioOutputSettings;
// Start, finish and cancel recording
- (void)startRecording;
- (void)startRecordingInOrientation:(CGAffineTransform)orientationTransform;
- (void)finishRecording;
- (void)finishRecordingWithCompletionHandler:(void (^)(void))handler;
- (void)cancelRecording;
// Process audio
- (void)processAudioBuffer:(CMSampleBufferRef)audioBuffer;
// Enable the videoInputReadyCallback / audioInputReadyCallback synchronization callbacks
- (void)enableSynchronizationCallbacks;

GPUImageMovieWriter does not have many methods, but they are fairly long and the internal processing is relatively complex. Only a few common methods are shown here; if you need to record video, it is worth reading the GPUImageMovieWriter source carefully.

// Configure audio parameters such as encoding format, channel count, sample rate and bit rate
- (void)setHasAudioTrack:(BOOL)newValue audioSettings:(NSDictionary *)audioOutputSettings;
{
    _hasAudioTrack = newValue;

    if (_hasAudioTrack)
    {
        if (_shouldPassthroughAudio)
        {
            // Do not set any settings so audio will be the same as passthrough
            audioOutputSettings = nil;
        }
        else if (audioOutputSettings == nil)
        {
            AVAudioSession *sharedAudioSession = [AVAudioSession sharedInstance];
            double preferredHardwareSampleRate;

            if ([sharedAudioSession respondsToSelector:@selector(sampleRate)])
            {
                preferredHardwareSampleRate = [sharedAudioSession sampleRate];
            }
            else
            {
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
                preferredHardwareSampleRate = [[AVAudioSession sharedInstance] currentHardwareSampleRate];
#pragma clang diagnostic pop
            }

            AudioChannelLayout acl;
            bzero(&acl, sizeof(acl));
            acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;

            audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                   [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                   [NSNumber numberWithFloat:preferredHardwareSampleRate], AVSampleRateKey,
                                   [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                                   //[NSNumber numberWithInt:AVAudioQualityLow], AVEncoderAudioQualityKey,
                                   [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
                                   nil];
/*
            AudioChannelLayout acl;
            bzero(&acl, sizeof(acl));
            acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;

            audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                   [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                                   [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                   [NSNumber numberWithInt:64000], AVEncoderBitRateKey,
                                   [NSData dataWithBytes:&acl length:sizeof(acl)], AVChannelLayoutKey,
                                   nil];
*/
        }

        assetWriterAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings];
        [assetWriter addInput:assetWriterAudioInput];
        assetWriterAudioInput.expectsMediaDataInRealTime = _encodingLiveVideo;
    }
    else
    {
        // Remove audio track if it exists
    }
}

- (void)finishRecordingWithCompletionHandler:(void (^)(void))handler;
{
    runSynchronouslyOnContextQueue(_movieWriterContext, ^{
        isRecording = NO;

        if (assetWriter.status == AVAssetWriterStatusCompleted || assetWriter.status == AVAssetWriterStatusCancelled || assetWriter.status == AVAssetWriterStatusUnknown)
        {
            if (handler)
                runAsynchronouslyOnContextQueue(_movieWriterContext, handler);
            return;
        }
        if (assetWriter.status == AVAssetWriterStatusWriting && !videoEncodingIsFinished)
        {
            videoEncodingIsFinished = YES;
            [assetWriterVideoInput markAsFinished];
        }
        if (assetWriter.status == AVAssetWriterStatusWriting && !audioEncodingIsFinished)
        {
            audioEncodingIsFinished = YES;
            [assetWriterAudioInput markAsFinished];
        }
#if (!defined(__IPHONE_6_0) || (__IPHONE_OS_VERSION_MAX_ALLOWED < __IPHONE_6_0))
        // Not iOS 6 SDK
        [assetWriter finishWriting];
        if (handler)
            runAsynchronouslyOnContextQueue(_movieWriterContext, handler);
#else
        // iOS 6 SDK
        if ([assetWriter respondsToSelector:@selector(finishWritingWithCompletionHandler:)]) {
            // Running iOS 6
            [assetWriter finishWritingWithCompletionHandler:(handler ?: ^{ })];
        }
        else {
            // Not running iOS 6
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wdeprecated-declarations"
            [assetWriter finishWriting];
#pragma clang diagnostic pop
            if (handler)
                runAsynchronouslyOnContextQueue(_movieWriterContext, handler);
        }
#endif
    });
}

// Process audio data
- (void)processAudioBuffer:(CMSampleBufferRef)audioBuffer;
{
    if (!isRecording || _paused)
    {
        return;
    }

//    if (_hasAudioTrack && CMTIME_IS_VALID(startTime))
    // Handle the audio data
    if (_hasAudioTrack)
    {
        CFRetain(audioBuffer);

        // Get the audio presentation timestamp
        CMTime currentSampleTime = CMSampleBufferGetOutputPresentationTimeStamp(audioBuffer);

        if (CMTIME_IS_INVALID(startTime))
        {
            runSynchronouslyOnContextQueue(_movieWriterContext, ^{
                // If the asset writer has not started writing yet, start it
                if ((audioInputReadyCallback == NULL) && (assetWriter.status != AVAssetWriterStatusWriting))
                {
                    [assetWriter startWriting];
                }
                // Set the session start time (PTS)
                [assetWriter startSessionAtSourceTime:currentSampleTime];
                startTime = currentSampleTime;
            });
        }

        // If the buffer cannot be used right now, optionally invalidate it and drop the frame
        if (!assetWriterAudioInput.readyForMoreMediaData && _encodingLiveVideo)
        {
            NSLog(@"1: Had to drop an audio frame: %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
            if (_shouldInvalidateAudioSampleWhenDone)
            {
                CMSampleBufferInvalidate(audioBuffer);
            }
            CFRelease(audioBuffer);
            return;
        }

        if (discont) {
            discont = NO;

            CMTime current;
            if (offsetTime.value > 0) {
                current = CMTimeSubtract(currentSampleTime, offsetTime);
            } else {
                current = currentSampleTime;
            }

            CMTime offset = CMTimeSubtract(current, previousAudioTime);

            if (offsetTime.value == 0) {
                offsetTime = offset;
            } else {
                offsetTime = CMTimeAdd(offsetTime, offset);
            }
        }

        if (offsetTime.value > 0) {
            CFRelease(audioBuffer);
            audioBuffer = [self adjustTime:audioBuffer by:offsetTime];
            CFRetain(audioBuffer);
        }

        // record most recent time so we know the length of the pause
        currentSampleTime = CMSampleBufferGetPresentationTimeStamp(audioBuffer);

        previousAudioTime = currentSampleTime;

        //if the consumer wants to do something with the audio samples before writing, let him.
        // Invoke the audio-processing callback if one is set
        if (self.audioProcessingCallback) {
            //need to introspect into the opaque CMBlockBuffer structure to find its raw sample buffers.
            CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer(audioBuffer);
            CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(audioBuffer);
            AudioBufferList audioBufferList;

            CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(audioBuffer,
                                                                    NULL,
                                                                    &audioBufferList,
                                                                    sizeof(audioBufferList),
                                                                    NULL,
                                                                    NULL,
                                                                    kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
                                                                    &buffer
                                                                    );
            //passing a live pointer to the audio buffers, try to process them in-place or we might have syncing issues.
            for (int bufferCount = 0; bufferCount < audioBufferList.mNumberBuffers; bufferCount++) {
                SInt16 *samples = (SInt16 *)audioBufferList.mBuffers[bufferCount].mData;
                self.audioProcessingCallback(&samples, numSamplesInBuffer);
            }
        }

//        NSLog(@"Recorded audio sample time: %lld, %d, %lld", currentSampleTime.value, currentSampleTime.timescale, currentSampleTime.epoch);

        // Block that actually appends the audio buffer
        void(^write)(void) = ^() {
            // Wait until the input is ready before appending
            while (!assetWriterAudioInput.readyForMoreMediaData && !_encodingLiveVideo && !audioEncodingIsFinished) {
                NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.5];
                //NSLog(@"audio waiting...");
                [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
            }
            if (!assetWriterAudioInput.readyForMoreMediaData)
            {
                NSLog(@"2: Had to drop an audio frame %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
            }
            // When readyForMoreMediaData is YES the data can be appended
            else if (assetWriter.status == AVAssetWriterStatusWriting)
            {
                if (![assetWriterAudioInput appendSampleBuffer:audioBuffer])
                    NSLog(@"Problem appending audio buffer at time: %@", CFBridgingRelease(CMTimeCopyDescription(kCFAllocatorDefault, currentSampleTime)));
            }
            else
            {
                //NSLog(@"Wrote an audio frame ...
                // ... (the rest of this method is truncated in the original excerpt)
            }
        };
    }
}
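When it is time to stop, the teardown mirrors the setup. A minimal sketch (it assumes videoCamera and movieWriter are the objects wired up earlier, with the writer added directly as a camera target):

// Detach the writer, then close the file; the handler runs once writing completes.
videoCamera.audioEncodingTarget = nil;
[videoCamera removeTarget:movieWriter];
[movieWriter finishRecordingWithCompletionHandler:^{
    NSLog(@"Movie written to disk");
}];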

GPUImageMovie

The main job of GPUImageMovie is to read and decode audio/video files. It inherits from GPUImageOutput and can therefore output framebuffer objects, but because it does not implement the GPUImageInput protocol, it can only act as the source of a filter chain.

  • Initialization. It can be initialized with an NSURL, an AVPlayerItem, or an AVAsset.
- (id)initWithAsset:(AVAsset *)asset;
- (id)initWithPlayerItem:(AVPlayerItem *)playerItem;
- (id)initWithURL:(NSURL *)url;

Initialization is relatively simple; it just stores the passed-in data.

- (id)initWithURL:(NSURL *)url;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    [self yuvConversionSetup];

    self.url = url;
    self.asset = nil;

    return self;
}

- (id)initWithAsset:(AVAsset *)asset;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    [self yuvConversionSetup];

    self.url = nil;
    self.asset = asset;

    return self;
}

- (id)initWithPlayerItem:(AVPlayerItem *)playerItem;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    [self yuvConversionSetup];

    self.url = nil;
    self.asset = nil;
    self.playerItem = playerItem;

    return self;
}
  • Other methods. GPUImageMovie has relatively few methods, but their code is fairly complex; due to limited space they are not examined in detail here. They fall into these categories: 1) reading audio/video data; 2) controlling reading (start, end, cancel); 3) processing audio/video frames.
// Synchronize encoding with a GPUImageMovieWriter
- (void)enableSynchronizedEncodingUsingMovieWriter:(GPUImageMovieWriter *)movieWriter;
// Read audio and video
- (BOOL)readNextVideoFrameFromOutput:(AVAssetReaderOutput *)readerVideoTrackOutput;
- (BOOL)readNextAudioSampleFromOutput:(AVAssetReaderOutput *)readerAudioTrackOutput;
// Start, end and cancel reading
- (void)startProcessing;
- (void)endProcessing;
- (void)cancelProcessing;
// Process a video frame
- (void)processMovieFrame:(CMSampleBufferRef)movieSampleBuffer;

Implementation process

  • Record video
#import "ViewController.h" #import < GPUImage.h> #define; DOCUMENT (path) [[NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:path] @interface (ViewController) @property (weak, nonatomic) IBOutlet GPUImageView *imageView; @property (strong, nonatomic) GPUImageVideoCamera *video @property (strong, nonatomic); GPUImageMovieWriter *writer; @property (nonatomic, strong) NSURL * videoFile; @property (nonatomic, readonly, getter=isRecording) BOOL recording; @end @implementation ViewController (void) viewDidLoad {[super viewDidLoad]; _recording = NO; / / [_imageView setBackgroundColorRed:1.0 green:1.0 set the background color blue:1.0 alpha:1.0]; / / set The file path _videoFile = [NSURL fileURLWithPath:DOCUMENT (@ /1.mov)]; / / delete file [[NSFileManager defaultManager] removeItemAtURL:_videoFile error:nil] GPUImageMovieWriter _writer = [[GPUImageMovieWriter; / / set the alloc] initWithMovieURL:_videoFile size:CGSizeMake (480, 640)]; [_writer setHasAudioTrack:YES audioSettings:nil]; / / _video = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 GPUImageVideoCamera cameraPosition: AVCaptureDevicePositionBack]; _video.outputImageOrientation = UIInterfaceOrientationPortrait; [_video addAudioInputsAndOutputs]; / / set audio processing Target _video.audioEncodingTarget = _writer; / / set Target [_ Video addTarget:_imageView]; [_video addTarget:_writer]; / / start shooting [_video startCameraCapture];} - (IBAction) startButtonTapped: (UIButton * sender) {if (_recording!) {/ / to record video [_writer startRecording]; _recording = YES;}} - (IBAction) finishButtonTapped: (UIButton * sender) {/ / end of the recording of [_writer finishRecording] @end;}
  • Take a photo
#import "SecondViewController.h" #import "ImageShowViewController.h" #import < GPUImage.h> @interface (SecondViewController) @property (weak, nonatomic) IBOutlet GPUImageView *imageView; @property (nonatomic, strong) GPUImageStillCamera *camera @property (nonatomic, strong); GPUImageFilter *filter; @end @implementation SecondViewController (void) viewDidLoad {[super viewDidLoad]; / / [_imageView setBackgroundColorRed:1.0 green:1.0 set the background color blue: 1 alpha:1.0] _filter [[GPUImageGrayscaleFilter alloc]; / / filter = init]; _camera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPresetPhoto / / initialize cameraPosition:AVCaptureDevicePositionBack]; _camera.outputImageOrien Tation = UIInterfaceOrientationPortrait; [_camera addTarget:_filter]; [_filter addTarget:_imageView]; / / to run [_camera startCameraCapture];} - (IBAction) pictureButtonTapped: (UIButton * sender) {if ([_camera isRunning]) {[_camera capturePhotoAsImageProcessedUpToFilter:_filter withCompletionHandler:^ (UIImage *processedImage, NSError *error) {[_camera stopCameraCapture]; ImageShowViewController *imageShowVC = [[UIStoryboard storyboardWithName:@ bundle:nil] "Main" instantiateViewControllerWithIdentifier:@ "ImageShowViewController"]; imageShowVC.image = processedImage [self presentViewController:imageShowVC; animated:YES completion:NULL];}];}e LSE {[_camera startCameraCapture];}}
  • Video transcoding with filters. Because the AAC audio stream has to be decoded to PCM first, I modified line 230 of GPUImageMovie.m (to fix the error "[AVAssetWriterInput appendSampleBuffer:] Cannot append sample buffer: Input buffer must be in an uncompressed format when outputSettings is not nil"), as shown below:
NSDictionary *audioOutputSetting = @{AVFormatIDKey : @(kAudioFormatLinearPCM)};
// This might need to be extended to handle movies with more than one audio track
AVAssetTrack *audioTrack = [audioTracks objectAtIndex:0];
readerAudioTrackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:audioOutputSetting];
#import "ThirdViewController.h" #import < GPUImage.h> #define; DOCUMENT (path) [[NSSearchPathForDirectoriesInDomains (NSDocumentDirectory, NSUserDomainMask, YES) lastObject] stringByAppendingPathComponent:path] @interface (ThirdViewController) @property (weak, nonatomic) IBOutlet GPUImageView *imageView; @property (nonatomic, strong) GPUImageMovie *movie @property (nonatomic, strong); GPUImageMovieWriter *movieWriter; @property (nonatomic, strong) GPUImageFilter *filter @property (nonatomic, assign); CGSize size; @end @implementation ThirdViewController (void) viewDidLoad {[super viewDidLoad]; / / NSURL *fileURL to get the file path = [[NSBundle mainBundle] URLForResource:@ "1.mp4" withExtension:nil]; AVAsset = *asset [AVAsset assetWithURL:fileURL]; / / get the video width of NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo]; AVAssetTrack *videoTrack [tracks = firstObject]; _size = videoTrack.naturalSize; _movie = [[GPUImageMovie alloc] / / initialize GPUImageMovie initWithAsset:asset]; / / _filter = [[GPUImageGrayscaleFilter alloc] init] filter; [_movie addTarget:_filter]; [_filter addTarget:_imageView];} - (IBAction) playButtonTapped: (UIButton * sender) {[_movie} - startProcessing]; (IBAction) transcodeButtonTapped: (ID sender) {/ / file path NSURL *videoFile = [NSURL fileURLWithPath:DOCUMENT (@ /2.mov)]; [[NSFileManager defaultManager] removeItemAtURL:videoFile error:nil]; / / GP UImageMovieWriter _movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:videoFile size:_size]; [_movieWriter setHasAudioTrack:YES audioSettings:nil]; / / GPUImageMovie related setting _movie.audioEncodingTarget = _movieWriter; [_filter addTarget: _movieWriter]; [_movie enableSynchronizedEncodingUsingMovieWriter:_movieWriter]; / / [_movieWriter startRecording] [_movie startProcessing] to start transcoding; __weak; / / the end of typeof (_movieWriter) wMovieWriter = _movieWriter; __weak typeof (self) wSelf = self; [_movieWriter setCompletionBlock:^{[wMovieWriter finishRecording]; [wSelf.movie removeTarget:wMovieWriter];}]; wSelf.movie.audioEncodingTarget = nil;}

Summary

GPUImageVideoCamera, GPUImageStillCamera, GPUImageMovieWriter, and GPUImageMovie are very useful when processing camera input and audio/video files. Due to space limitations, not all of their source code could be explained here; read it yourself if you need more detail.

Source address: GPUImage source code reading series https://github.com/QinminiOS/GPUImage