GPUImage source code reading (four)

Summary

GPUImage is a well-known open-source image-processing library that lets you apply GPU-accelerated filters and other effects to images, video, and live camera input. Its advantage over the CoreImage framework is that you can build custom filters against the interfaces GPUImage provides. Project address: https://github.com/BradLarson/GPUImage
This article reads through the source of three classes in the GPUImage framework: GPUImagePicture, GPUImageView, and GPUImageUIElement. These classes cover image loading, image display, and UI rendering on iOS; whenever you filter and then display an image with GPUImage, you will almost certainly use them. The classes covered are:
  • GPUImagePicture
  • GPUImageView
  • GPUImageUIElement

Results

[Figure: GPUImagePicture.png]
[Figure: GPUImageUIElement.png]

GPUImagePicture

As its name suggests, GPUImagePicture is the GPUImage class for handling still images; its main job is to convert a UIImage or CGImage into a texture object. GPUImagePicture inherits from GPUImageOutput, so it can act as an output; because it does not implement the GPUImageInput protocol, it cannot accept input. It is therefore typically used as the source of a filter chain.

Straight (direct) alpha vs. premultiplied alpha: with straight alpha, the opacity of an RGBA color is stored only in the alpha channel. For example, red at 60% opacity is (255, 0, 0, 255 * 0.6) = (255, 0, 0, 153), where 153 (= 255 * 0.6) says the color should be 60% opaque. With premultiplied alpha, each color component is additionally multiplied by the alpha value: (255 * 0.6, 0 * 0.6, 0 * 0.6, 255 * 0.6) = (153, 0, 0, 153).
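To make the arithmetic concrete, here is a small self-contained sketch of the two conversions. This is plain C for illustration; the RGBA8 struct and helper names are made up, not part of GPUImage:

#include <stdint.h>

typedef struct { uint8_t r, g, b, a; } RGBA8;

/* Straight alpha -> premultiplied alpha: scale each color channel by alpha. */
static RGBA8 premultiply(RGBA8 c)
{
    RGBA8 p;
    p.r = (uint8_t)(c.r * c.a / 255);
    p.g = (uint8_t)(c.g * c.a / 255);
    p.b = (uint8_t)(c.b * c.a / 255);
    p.a = c.a;
    return p;
}

/* Premultiplied -> straight alpha: divide each color channel by alpha (no-op when alpha is 0). */
static RGBA8 unpremultiply(RGBA8 p)
{
    RGBA8 c = p;
    if (p.a != 0) {
        c.r = (uint8_t)(p.r * 255 / p.a);
        c.g = (uint8_t)(p.g * 255 / p.a);
        c.b = (uint8_t)(p.b * 255 / p.a);
    }
    return c;
}

/* premultiply((RGBA8){255, 0, 0, 153}) yields {153, 0, 0, 153}, matching the example above. */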

  • Initialization methods. There are quite a few initializers because there are several options to configure; the upside is a high degree of freedom and easy customization.
// Initialize from a URL
- (id)initWithURL:(NSURL *)url;

// Initialize from a UIImage or CGImage
- (id)initWithImage:(UIImage *)newImageSource;
- (id)initWithCGImage:(CGImageRef)newImageSource;

// Initialize from a UIImage or CGImage, specifying whether to smoothly scale the output
- (id)initWithImage:(UIImage *)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput;

// Initialize from a UIImage or CGImage, specifying whether to remove premultiplied alpha
- (id)initWithImage:(UIImage *)newImageSource removePremultiplication:(BOOL)removePremultiplication;
- (id)initWithCGImage:(CGImageRef)newImageSource removePremultiplication:(BOOL)removePremultiplication;

// Initialize from a UIImage or CGImage, specifying both options
- (id)initWithImage:(UIImage *)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;

Although there are many initializers, they are all built on the designated initializer below, so it is the only one worth studying in detail. Its implementation is fairly involved, but the basic flow is:

1. Get the image's width and height (they must not exceed the maximum texture size OpenGL ES allows).
2. If smoothlyScaleOutput is enabled, round the width and height up to the nearest powers of two; this adjustment forces a redraw.
3. If no redraw is needed, read the image's byte layout and alpha information directly.
4. If a redraw is needed, redraw the image with Core Graphics.
5. Depending on the removePremultiplication option, decide whether to remove the premultiplied alpha.
6. Generate a cached texture object from the resulting data.
7. Generate mipmaps if shouldSmoothlyScaleOutput is set.
8. Release resources.

- (id)initWithCGImage:(CGImageRef)newImageSource smoothlyScaleOutput:(BOOL)smoothlyScaleOutput removePremultiplication:(BOOL)removePremultiplication;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    hasProcessedImage = NO;
    self.shouldSmoothlyScaleOutput = smoothlyScaleOutput;
    imageUpdateSemaphore = dispatch_semaphore_create(0);
    dispatch_semaphore_signal(imageUpdateSemaphore);

    // TODO: Dispatch this whole thing asynchronously to move image loading off main thread
    // Step 1: get the image's width and height
    CGFloat widthOfImage = CGImageGetWidth(newImageSource);
    CGFloat heightOfImage = CGImageGetHeight(newImageSource);

    // If passed an empty image reference, CGContextDrawImage will fail in future versions of the SDK.
    NSAssert(widthOfImage > 0 && heightOfImage > 0, @"Passed image must not be empty - it should be at least 1px tall and wide");

    pixelSizeOfImage = CGSizeMake(widthOfImage, heightOfImage);
    CGSize pixelSizeToUseForTexture = pixelSizeOfImage;

    BOOL shouldRedrawUsingCoreGraphics = NO;

    // For now, deal with images larger than the maximum texture size by resizing to be within that limit
    CGSize scaledImageSizeToFitOnGPU = [GPUImageContext sizeThatFitsWithinATextureForSize:pixelSizeOfImage];
    if (!CGSizeEqualToSize(scaledImageSizeToFitOnGPU, pixelSizeOfImage))
    {
        pixelSizeOfImage = scaledImageSizeToFitOnGPU;
        pixelSizeToUseForTexture = pixelSizeOfImage;
        shouldRedrawUsingCoreGraphics = YES;
    }

    if (self.shouldSmoothlyScaleOutput)
    {
        // Step 2: in order to use mipmaps, you need to provide power-of-two textures,
        // so convert to the next largest power of two and stretch to fill
        CGFloat powerClosestToWidth = ceil(log2(pixelSizeOfImage.width));
        CGFloat powerClosestToHeight = ceil(log2(pixelSizeOfImage.height));

        pixelSizeToUseForTexture = CGSizeMake(pow(2.0, powerClosestToWidth), pow(2.0, powerClosestToHeight));
        shouldRedrawUsingCoreGraphics = YES;
    }

    GLubyte *imageData = NULL;
    CFDataRef dataFromImageDataProvider = NULL;
    GLenum format = GL_BGRA;
    BOOL isLitteEndian = YES;
    BOOL alphaFirst = NO;
    BOOL premultiplied = NO;

    // Step 3: if no redraw is needed yet, check whether the bitmap can be fed to GL directly
    if (!shouldRedrawUsingCoreGraphics)
    {
        /* Check that the memory layout is compatible with GL, as we cannot use glPixelStore to
         * tell GL about the memory layout with GLES.
         */
        if (CGImageGetBytesPerRow(newImageSource) != CGImageGetWidth(newImageSource) * 4 ||
            CGImageGetBitsPerPixel(newImageSource) != 32 ||
            CGImageGetBitsPerComponent(newImageSource) != 8)
        {
            shouldRedrawUsingCoreGraphics = YES;
        }
        else
        {
            /* Check that the bitmap pixel format is compatible with GL */
            CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(newImageSource);
            if ((bitmapInfo & kCGBitmapFloatComponents) != 0)
            {
                /* We don't support float components for use directly in GL */
                shouldRedrawUsingCoreGraphics = YES;
            }
            else
            {
                CGBitmapInfo byteOrderInfo = bitmapInfo & kCGBitmapByteOrderMask;
                if (byteOrderInfo == kCGBitmapByteOrder32Little)
                {
                    /* Little endian, for alpha-first we can use this bitmap directly in GL */
                    CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
                    if (alphaInfo != kCGImageAlphaPremultipliedFirst && alphaInfo != kCGImageAlphaFirst &&
                        alphaInfo != kCGImageAlphaNoneSkipFirst)
                    {
                        shouldRedrawUsingCoreGraphics = YES;
                    }
                }
                else if (byteOrderInfo == kCGBitmapByteOrderDefault || byteOrderInfo == kCGBitmapByteOrder32Big)
                {
                    isLitteEndian = NO;
                    /* Big endian, for alpha-last we can use this bitmap directly in GL */
                    CGImageAlphaInfo alphaInfo = bitmapInfo & kCGBitmapAlphaInfoMask;
                    if (alphaInfo != kCGImageAlphaPremultipliedLast && alphaInfo != kCGImageAlphaLast &&
                        alphaInfo != kCGImageAlphaNoneSkipLast)
                    {
                        shouldRedrawUsingCoreGraphics = YES;
                    }
                    else
                    {
                        /* Can access directly using GL_RGBA pixel format */
                        premultiplied = alphaInfo == kCGImageAlphaPremultipliedLast;
                        alphaFirst = alphaInfo == kCGImageAlphaFirst || alphaInfo == kCGImageAlphaPremultipliedFirst;
                        format = GL_RGBA;
                    }
                }
            }
        }
    }

    if (shouldRedrawUsingCoreGraphics)
    {
        // Step 4: redraw resized or incompatible images with Core Graphics
        imageData = (GLubyte *)calloc(1, (int)pixelSizeToUseForTexture.width * (int)pixelSizeToUseForTexture.height * 4);

        CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
        CGContextRef imageContext = CGBitmapContextCreate(imageData, (size_t)pixelSizeToUseForTexture.width, (size_t)pixelSizeToUseForTexture.height, 8, (size_t)pixelSizeToUseForTexture.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        //    CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html
        CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, pixelSizeToUseForTexture.width, pixelSizeToUseForTexture.height), newImageSource);
        CGContextRelease(imageContext);
        CGColorSpaceRelease(genericRGBColorspace);
        isLitteEndian = YES;
        alphaFirst = YES;
        premultiplied = YES;
    }
    else
    {
        // Access the raw image bytes directly
        dataFromImageDataProvider = CGDataProviderCopyData(CGImageGetDataProvider(newImageSource));
        imageData = (GLubyte *)CFDataGetBytePtr(dataFromImageDataProvider);
    }

    // Step 5: optionally convert premultiplied alpha back to straight alpha, pixel by pixel
    if (removePremultiplication && premultiplied)
    {
        NSUInteger totalNumberOfPixels = round(pixelSizeToUseForTexture.width * pixelSizeToUseForTexture.height);
        uint32_t *pixelP = (uint32_t *)imageData;
        uint32_t pixel;
        CGFloat srcR, srcG, srcB, srcA;

        for (NSUInteger idx = 0; idx < totalNumberOfPixels; idx++, pixelP++)
        {
            pixel = isLitteEndian ? CFSwapInt32LittleToHost(*pixelP) : CFSwapInt32BigToHost(*pixelP);

            if (alphaFirst)
            {
                srcA = (CGFloat)((pixel & 0xff000000) >> 24) / 255.0f;
            }
            else
            {
                srcA = (CGFloat)(pixel & 0x000000ff) / 255.0f;
                pixel >>= 8;
            }

            srcR = (CGFloat)((pixel & 0x00ff0000) >> 16) / 255.0f;
            srcG = (CGFloat)((pixel & 0x0000ff00) >> 8) / 255.0f;
            srcB = (CGFloat)(pixel & 0x000000ff) / 255.0f;

            srcR /= srcA; srcG /= srcA; srcB /= srcA;

            pixel = (uint32_t)(srcR * 255) << 16;
            pixel |= (uint32_t)(srcG * 255) << 8;
            pixel |= (uint32_t)(srcB * 255);

            if (alphaFirst)
            {
                pixel |= (uint32_t)(srcA * 255) << 24;
            }
            else
            {
                pixel <<= 8;
                pixel |= (uint32_t)(srcA * 255);
            }

            *pixelP = isLitteEndian ? CFSwapInt32HostToLittle(pixel) : CFSwapInt32HostToBig(pixel);
        }
    }

    // (Commented-out timing and average-color debug code from the original source is omitted here.)

    // Steps 6 and 7: upload the bytes into a cached texture, generating mipmaps if requested
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:pixelSizeToUseForTexture onlyTexture:YES];
        [outputFramebuffer disableReferenceCounting];

        glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
        if (self.shouldSmoothlyScaleOutput)
        {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        }
        // no need to use self.outputTextureOptions here since pictures need this texture format and type
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)pixelSizeToUseForTexture.width, (int)pixelSizeToUseForTexture.height, 0, format, GL_UNSIGNED_BYTE, imageData);

        if (self.shouldSmoothlyScaleOutput)
        {
            glGenerateMipmap(GL_TEXTURE_2D);
        }
        glBindTexture(GL_TEXTURE_2D, 0);
    });

    // Step 8: release resources
    if (shouldRedrawUsingCoreGraphics)
    {
        free(imageData);
    }
    else
    {
        if (dataFromImageDataProvider)
        {
            CFRelease(dataFromImageDataProvider);
        }
    }

    return self;
}
  • Other methods. These are mainly concerned with actually processing the image.
// Process the image
- (void)processImage;
// The output image size
- (CGSize)outputImageSize;
// Process the image, with a completion handler
- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
// Process the image up to a given filter and hand back the resulting UIImage
- (void)processImageUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage))block;
// Process the image
- (void)processImage;
{
    [self processImageWithCompletionHandler:nil];
}

// The output image size. The image may be resized during initialization (see the initializers
// above), so this accessor reports the size actually used.
- (CGSize)outputImageSize;
{
    return pixelSizeOfImage;
}

// Process the image, with an optional completion block
- (BOOL)processImageWithCompletionHandler:(void (^)(void))completion;
{
    hasProcessedImage = YES;

    //    dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_FOREVER);

    // Return immediately if the semaphore counter is below 1; if it is at least 1,
    // decrement it and continue
    if (dispatch_semaphore_wait(imageUpdateSemaphore, DISPATCH_TIME_NOW) != 0)
    {
        return NO;
    }

    // Hand the framebuffer to all targets
    runAsynchronouslyOnVideoProcessingQueue(^{
        for (id<GPUImageInput> currentTarget in targets)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setCurrentlyReceivingMonochromeInput:NO];
            [currentTarget setInputSize:pixelSizeOfImage atIndex:textureIndexOfTarget];
            [currentTarget setInputFramebuffer:outputFramebuffer atIndex:textureIndexOfTarget];
            [currentTarget newFrameReadyAtTime:kCMTimeIndefinite atIndex:textureIndexOfTarget];
        }

        // Processing is done: increment the semaphore counter again
        dispatch_semaphore_signal(imageUpdateSemaphore);

        // Invoke the completion block, if any
        if (completion != nil)
        {
            completion();
        }
    });

    return YES;
}

// Generate a UIImage from the final filter in the chain
- (void)processImageUpToFilter:(GPUImageOutput<GPUImageInput> *)finalFilterInChain withCompletionHandler:(void (^)(UIImage *processedImage))block;
{
    [finalFilterInChain useNextFrameForImageCapture];
    [self processImageWithCompletionHandler:^{
        UIImage *imageFromFilter = [finalFilterInChain imageFromCurrentFramebuffer];
        block(imageFromFilter);
    }];
}
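As a usage note, processImageUpToFilter:withCompletionHandler: is the convenient way to pull a filtered UIImage out of a chain without calling useNextFrameForImageCapture yourself. A minimal sketch, assuming an image named "sample.jpg" exists in the app bundle:

GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"sample.jpg"]];
GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
[source addTarget:sepia];

[source processImageUpToFilter:sepia withCompletionHandler:^(UIImage *processedImage) {
    // The completion block runs on the video processing queue; hop back to the
    // main queue before touching UIKit.
    dispatch_async(dispatch_get_main_queue(), ^{
        // e.g. self.resultImageView.image = processedImage;
    });
}];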

GPUImageView

As its name suggests, GPUImageView is the GPUImage class for displaying images. It implements the GPUImageInput protocol, so it can accept a GPUImageFramebuffer as input; as a result it is typically the terminal node of a filter chain, displaying the processed framebuffer. GPUImageView involves a fair amount of OpenGL ES knowledge, which is not covered in depth here; if you are unfamiliar with it, you are welcome to read my introductory OpenGL ES series.

  • Initialization
- (id)initWithFrame:(CGRect)frame;
- (id)initWithCoder:(NSCoder *)coder;

Initialization mainly covers the following:

1. Set the OpenGL ES properties of the layer.
2. Create the shader program.
3. Fetch the locations of the attribute and uniform variables.
4. Set the background (clear) color.
5. Create the default framebuffer and renderbuffer used for display.
6. Adjust the vertex coordinates according to the fill mode (see the sketch after the listing below).

- (id)initWithFrame:(CGRect)frame
{
    if (!(self = [super initWithFrame:frame]))
    {
        return nil;
    }

    [self commonInit];

    return self;
}

- (id)initWithCoder:(NSCoder *)coder
{
    if (!(self = [super initWithCoder:coder]))
    {
        return nil;
    }

    [self commonInit];

    return self;
}

- (void)commonInit;
{
    // Set scaling to account for Retina display
    if ([self respondsToSelector:@selector(setContentScaleFactor:)])
    {
        self.contentScaleFactor = [[UIScreen mainScreen] scale];
    }

    inputRotation = kGPUImageNoRotation;
    self.opaque = YES;
    self.hidden = NO;
    CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
    eaglLayer.opaque = YES;
    eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking, kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat, nil];

    self.enabled = YES;

    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext useImageProcessingContext];

        displayProgram = [[GPUImageContext sharedImageProcessingContext] programForVertexShaderString:kGPUImageVertexShaderString fragmentShaderString:kGPUImagePassthroughFragmentShaderString];
        if (!displayProgram.initialized)
        {
            [displayProgram addAttribute:@"position"];
            [displayProgram addAttribute:@"inputTextureCoordinate"];

            if (![displayProgram link])
            {
                NSString *progLog = [displayProgram programLog];
                NSLog(@"Program link log: %@", progLog);
                NSString *fragLog = [displayProgram fragmentShaderLog];
                NSLog(@"Fragment shader compile log: %@", fragLog);
                NSString *vertLog = [displayProgram vertexShaderLog];
                NSLog(@"Vertex shader compile log: %@", vertLog);
                displayProgram = nil;
                NSAssert(NO, @"Filter shader link failed");
            }
        }

        displayPositionAttribute = [displayProgram attributeIndex:@"position"];
        displayTextureCoordinateAttribute = [displayProgram attributeIndex:@"inputTextureCoordinate"];
        displayInputTextureUniform = [displayProgram uniformIndex:@"inputImageTexture"]; // This does assume a name of "inputImageTexture" for the fragment shader

        [GPUImageContext setActiveShaderProgram:displayProgram];
        glEnableVertexAttribArray(displayPositionAttribute);
        glEnableVertexAttribArray(displayTextureCoordinateAttribute);

        [self setBackgroundColorRed:0.0 green:0.0 blue:0.0 alpha:1.0];
        _fillMode = kGPUImageFillModePreserveAspectRatio;
        [self createDisplayFramebuffer];
    });
}
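Step 6 (adjusting the vertex coordinates to the fill mode) lives in a separate recalculateViewGeometry method that this article does not reproduce. The core of it is the scaling math below; this is a paraphrased sketch rather than a verbatim copy of the library code (AVMakeRectWithAspectRatioInsideRect comes from AVFoundation):

#import <AVFoundation/AVFoundation.h>

// Compute the (width, height) scaling factors applied to the quad's +-1 vertex
// coordinates for a given fill mode, image size, and view bounds.
static CGSize ScalingForFillMode(GPUImageFillModeType fillMode, CGSize imageSize, CGRect viewBounds)
{
    CGRect insetRect = AVMakeRectWithAspectRatioInsideRect(imageSize, viewBounds);
    CGSize viewSize = viewBounds.size;

    switch (fillMode)
    {
        case kGPUImageFillModeStretch:
            // Fill the whole view, distorting the aspect ratio if necessary
            return CGSizeMake(1.0, 1.0);
        case kGPUImageFillModePreserveAspectRatio:
            // Letterbox: shrink the quad so the entire image stays visible
            return CGSizeMake(insetRect.size.width / viewSize.width,
                              insetRect.size.height / viewSize.height);
        case kGPUImageFillModePreserveAspectRatioAndFill:
            // Crop: enlarge the quad so the view is completely covered
            return CGSizeMake(viewSize.height / insetRect.size.height,
                              viewSize.width / insetRect.size.width);
    }
    return CGSizeMake(1.0, 1.0);
}

GPUImageView multiplies these factors into its imageVertices array whenever the input size or fill mode changes.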
  • Method list
// Set the background color
- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
// This method has an empty implementation
- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;

Implementations. Most of them are straightforward; the ones worth reading are below.

- (void)setBackgroundColorRed:(GLfloat)redComponent green:(GLfloat)greenComponent blue:(GLfloat)blueComponent alpha:(GLfloat)alphaComponent;
{
    backgroundColorRed = redComponent;
    backgroundColorGreen = greenComponent;
    backgroundColorBlue = blueComponent;
    backgroundColorAlpha = alphaComponent;
}

- (void)setCurrentlyReceivingMonochromeInput:(BOOL)newValue;
{
}

// Texture coordinates for each rotation mode
+ (const GLfloat *)textureCoordinatesForRotation:(GPUImageRotationMode)rotationMode;
{
    static const GLfloat noRotationTextureCoordinates[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
    };

    static const GLfloat rotateRightTextureCoordinates[] = {
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        0.0f, 0.0f,
    };

    static const GLfloat rotateLeftTextureCoordinates[] = {
        0.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        1.0f, 1.0f,
    };

    static const GLfloat verticalFlipTextureCoordinates[] = {
        0.0f, 0.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        1.0f, 1.0f,
    };

    static const GLfloat horizontalFlipTextureCoordinates[] = {
        1.0f, 1.0f,
        0.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 0.0f,
    };

    static const GLfloat rotateRightVerticalFlipTextureCoordinates[] = {
        1.0f, 0.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        0.0f, 1.0f,
    };

    static const GLfloat rotateRightHorizontalFlipTextureCoordinates[] = {
        0.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 1.0f,
        1.0f, 0.0f,
    };

    static const GLfloat rotate180TextureCoordinates[] = {
        1.0f, 0.0f,
        0.0f, 0.0f,
        1.0f, 1.0f,
        0.0f, 1.0f,
    };

    switch (rotationMode)
    {
        case kGPUImageNoRotation: return noRotationTextureCoordinates;
        case kGPUImageRotateLeft: return rotateLeftTextureCoordinates;
        case kGPUImageRotateRight: return rotateRightTextureCoordinates;
        case kGPUImageFlipVertical: return verticalFlipTextureCoordinates;
        case kGPUImageFlipHorizonal: return horizontalFlipTextureCoordinates;
        case kGPUImageRotateRightFlipVertical: return rotateRightVerticalFlipTextureCoordinates;
        case kGPUImageRotateRightFlipHorizontal: return rotateRightHorizontalFlipTextureCoordinates;
        case kGPUImageRotate180: return rotate180TextureCoordinates;
    }
}

// Overrides the superclass method; renders the input framebuffer with OpenGL ES and
// presents it on screen
- (void)newFrameReadyAtTime:(CMTime)frameTime atIndex:(NSInteger)textureIndex;
{
    runSynchronouslyOnVideoProcessingQueue(^{
        [GPUImageContext setActiveShaderProgram:displayProgram];
        [self setDisplayFramebuffer];

        // Clear the screen with the background color
        glClearColor(backgroundColorRed, backgroundColorGreen, backgroundColorBlue, backgroundColorAlpha);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glActiveTexture(GL_TEXTURE4);
        glBindTexture(GL_TEXTURE_2D, [inputFramebufferForDisplay texture]);
        glUniform1i(displayInputTextureUniform, 4);

        glVertexAttribPointer(displayPositionAttribute, 2, GL_FLOAT, 0, 0, imageVertices);
        glVertexAttribPointer(displayTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, [GPUImageView textureCoordinatesForRotation:inputRotation]);

        // Draw
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

        // Present
        [self presentFramebuffer];
        [inputFramebufferForDisplay unlock];
        inputFramebufferForDisplay = nil;
    });
}

// Present the renderbuffer on screen
- (void)presentFramebuffer;
{
    glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);
    [[GPUImageContext sharedImageProcessingContext] presentBufferForDisplay];
}

GPUImageUIElement

Like GPUImagePicture, GPUImageUIElement can serve as the source of a filter chain. Unlike GPUImagePicture, its data comes not from an image but from rendering a UIView or CALayer, much like taking a screenshot of the view. GPUImageUIElement inherits from GPUImageOutput, so it can act as an output; because it does not implement the GPUImageInput protocol, it cannot accept input.

  • Initialization
- (id)initWithView:(UIView *)inputView;
- (id)initWithLayer:(CALayer *)inputLayer;

Initialization takes a UIView or a CALayer. During initialization, [self update] is called, which renders the layer via [layer renderInContext:imageContext] and uploads the rendered bytes into a texture object.

- (id)initWithView:(UIView *)inputView;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    view = inputView;
    layer = inputView.layer;

    previousLayerSizeInPixels = CGSizeZero;
    [self update]; // renders the layer into a texture; see updateWithTimestamp: below

    return self;
}

- (id)initWithLayer:(CALayer *)inputLayer;
{
    if (!(self = [super init]))
    {
        return nil;
    }

    view = nil;
    layer = inputLayer;

    previousLayerSizeInPixels = CGSizeZero;
    [self update]; // renders the layer into a texture; see updateWithTimestamp: below

    return self;
}
  • Other methods
- (CGSize)layerSizeInPixels;
- (void)update;
- (void)updateUsingCurrentTime;
- (void)updateWithTimestamp:(CMTime)frameTime;

These methods render the layer into a texture object (effectively a snapshot) and pass it on to all targets.

// The layer size in pixels
- (CGSize)layerSizeInPixels;
{
    CGSize pointSize = layer.bounds.size;
    return CGSizeMake(layer.contentsScale * pointSize.width, layer.contentsScale * pointSize.height);
}

// Update with an indefinite timestamp
- (void)update;
{
    [self updateWithTimestamp:kCMTimeIndefinite];
}

// Update using the current time
- (void)updateUsingCurrentTime;
{
    if (CMTIME_IS_INVALID(time))
    {
        time = CMTimeMakeWithSeconds(0, 600);
        actualTimeOfLastUpdate = [NSDate timeIntervalSinceReferenceDate];
    }
    else
    {
        NSTimeInterval now = [NSDate timeIntervalSinceReferenceDate];
        NSTimeInterval diff = now - actualTimeOfLastUpdate;
        time = CMTimeAdd(time, CMTimeMakeWithSeconds(diff, 600));
        actualTimeOfLastUpdate = now;
    }

    [self updateWithTimestamp:time];
}

// Update with a given timestamp
- (void)updateWithTimestamp:(CMTime)frameTime;
{
    [GPUImageContext useImageProcessingContext];

    CGSize layerPixelSize = [self layerSizeInPixels];

    GLubyte *imageData = (GLubyte *)calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);

    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    //    CGContextRotateCTM(imageContext, M_PI_2);
    CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
    CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
    //    CGContextSetBlendMode(imageContext, kCGBlendModeCopy); // From Technical Q&A QA1708: http://developer.apple.com/library/ios/#qa/qa1708/_index.html

    [layer renderInContext:imageContext];

    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);

    // TODO: This may not work
    outputFramebuffer = [[GPUImageContext sharedFramebufferCache] fetchFramebufferForSize:layerPixelSize textureOptions:self.outputTextureOptions onlyTexture:YES];

    glBindTexture(GL_TEXTURE_2D, [outputFramebuffer texture]);
    // no need to use self.outputTextureOptions here, we always need these texture options
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);

    free(imageData);

    for (id<GPUImageInput> currentTarget in targets)
    {
        if (currentTarget != self.targetToIgnoreForUpdates)
        {
            NSInteger indexOfObject = [targets indexOfObject:currentTarget];
            NSInteger textureIndexOfTarget = [[targetTextureIndices objectAtIndex:indexOfObject] integerValue];

            [currentTarget setInputSize:layerPixelSize atIndex:textureIndexOfTarget];
            [currentTarget newFrameReadyAtTime:frameTime atIndex:textureIndexOfTarget];
        }
    }
}
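A pattern worth knowing (not used in the samples below) is to refresh the element once per rendered frame via a filter's frameProcessingCompletionBlock, so that animated UIView/CALayer content stays in sync with a live video source. A minimal sketch, assuming element and filter are nodes already wired into your chain:

__weak GPUImageUIElement *weakElement = element;
[filter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
    // Re-snapshot the view hierarchy with this frame's timestamp
    [weakElement updateWithTimestamp:time];
}];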

Example code

  • GPUImagePicture and GPUImageView. See GPUImagePicture.png for the result.
@interface ViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@end

@implementation ViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    [_imageView setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];

    GPUImagePicture *picture = [[GPUImagePicture alloc] initWithImage:[UIImage imageNamed:@"1.jpg"]];
    GPUImageGrayscaleFilter *filter = [[GPUImageGrayscaleFilter alloc] init];

    [picture addTarget:filter];
    [filter addTarget:_imageView];
    [filter useNextFrameForImageCapture];
    [picture processImage];
}

@end
  • GPUImageUIElement and GPUImageView. See GPUImageUIElement.png for the result.
@interface SecondViewController ()
@property (weak, nonatomic) IBOutlet GPUImageView *imageView;
@property (weak, nonatomic) IBOutlet UIView *bgView;
@end

@implementation SecondViewController

- (void)viewDidLoad
{
    [super viewDidLoad];

    [_imageView setBackgroundColorRed:1.0 green:1.0 blue:1.0 alpha:1.0];

    GPUImageUIElement *element = [[GPUImageUIElement alloc] initWithView:_bgView];
    GPUImageHueFilter *filter = [[GPUImageHueFilter alloc] init];

    [element addTarget:filter];
    [filter addTarget:_imageView];
    [filter useNextFrameForImageCapture];
    [element update];
}

@end

Summary

GPUImagePicture, GPUImageView, and GPUImageUIElement come up constantly when loading images, snapshotting UI, and displaying results. Becoming familiar with them is therefore important for understanding the GPUImage framework.

Source address for the GPUImage source code reading series: https://github.com/QinminiOS/GPUImage