GPUImage detailed analysis (six) – using a video as a video watermark

Review

The previous posts in this series analyzed the GPUImage source code and covered image blurring and video filters to explain how GPUImage works. This post is about overlaying two videos: you can merge two video files, or blend a video file with the live camera feed.

Effect display

A screenshot of the result is shown below. The output is composed of two videos, one from the file abc.mp4 and one from the camera.

(screenshot: the blended output of abc.mp4 and the live camera feed)

Core idea

The data captured by the GPUImageVideoCamera camera enters the response chain, and the video file enters the response chain through GPUImageMovie. The two streams are merged in GPUImageDissolveBlendFilter, and the blended data flows to the two endpoints of the chain: GPUImageView, which displays it in the UI, and GPUImageMovieWriter, which writes it to a temporary file. The temporary file is then saved to the system photo library through ALAssetsLibrary.

(diagram: the response chain. GPUImageVideoCamera and GPUImageMovie feed GPUImageDissolveBlendFilter, which outputs to GPUImageView and GPUImageMovieWriter; the written file is saved through ALAssetsLibrary)

Specific details

1, GPUImageDissolveBlendFilter

The GPUImageDissolveBlendFilter class inherits from GPUImageTwoInputFilter and adds a mix property, which is passed to the fragment shader as the blend ratio. GPUImageDissolveBlendFilter needs two inputs on the response chain; only when both inputs are ready does it blend them according to mix and pass the result down the chain.
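As a minimal sketch (illustrative, not the demo's exact code), setting up the filter looks like this; internally the fragment shader blends the two texture colors with GLSL's mix() function, using mix as the ratio:

    GPUImageDissolveBlendFilter *filter = [[GPUImageDissolveBlendFilter alloc] init];
    filter.mix = 0.5; // 0.0 shows only the first input, 1.0 only the second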

Question 1: what happens if there is only one input?

2, GPUImageMovie

The GPUImageMovie class inherits from GPUImageOutput and generally serves as the source of a response chain. It can be initialized with a URL, an AVPlayerItem, or an AVAsset.
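A minimal sketch, assuming abc.mp4 is bundled with the app:

    NSURL *movieURL = [[NSBundle mainBundle] URLForResource:@"abc" withExtension:@"mp4"];
    GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:movieURL];
    movieFile.playAtActualSpeed = YES; // read frames at the file's natural frame rate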

3, GPUImageMovieWriter

The GPUImageMovieWriter class implements the GPUImageInput protocol and serves as the endpoint of a response chain. shouldPassthroughAudio indicates whether the source's audio should be passed through as-is; movieFile.audioEncodingTarget = movieWriter; indicates that the audio source is the file.

    if (audioFromFile) {
        // Response chain: the movie file is the first input.
        [movieFile addTarget:filter];
        [videoCamera addTarget:filter];
        // Audio comes from the movie file.
        movieWriter.shouldPassthroughAudio = YES;
        movieFile.audioEncodingTarget = movieWriter;
        [movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];
    } else {
        // Response chain: the camera is the first input.
        [videoCamera addTarget:filter];
        [movieFile addTarget:filter];
        // Audio comes from the camera's microphone.
        movieWriter.shouldPassthroughAudio = NO;
        videoCamera.audioEncodingTarget = movieWriter;
        movieWriter.encodingLiveVideo = NO;
    }

Question 2: how does the choice of audio source affect the response chain? Why?

4, The response chain

The response chain takes GPUImageVideoCamera and GPUImageMovie as inputs and ends at GPUImageMovieWriter as the output.

  • Start the inputs of the response chain: [videoCamera startCameraCapture]; [movieWriter startRecording]; [movieFile startProcessing];
  • Set the movieWriter completion callback: [movieWriter setCompletionBlock:^{ }]; (a full sketch of the chain follows this list)
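Putting the pieces together, here is a minimal sketch of the whole chain, assuming audio is taken from the movie file; pathToTempFile and gpuImageView are illustrative names rather than the demo's exact code:

    GPUImageVideoCamera *videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];
    GPUImageMovie *movieFile = [[GPUImageMovie alloc] initWithURL:movieURL];
    GPUImageDissolveBlendFilter *filter = [[GPUImageDissolveBlendFilter alloc] init];
    filter.mix = 0.5;

    // Inputs; the first target added supplies the CMTime for the chain.
    [movieFile addTarget:filter];
    [videoCamera addTarget:filter];

    // Outputs: the on-screen preview and a temporary movie file.
    GPUImageMovieWriter *movieWriter =
        [[GPUImageMovieWriter alloc] initWithMovieURL:[NSURL fileURLWithPath:pathToTempFile]
                                                 size:CGSizeMake(480.0, 640.0)];
    [filter addTarget:gpuImageView]; // a GPUImageView already in the view hierarchy
    [filter addTarget:movieWriter];

    // Audio from the movie file (the first branch of the snippet above).
    movieWriter.shouldPassthroughAudio = YES;
    movieFile.audioEncodingTarget = movieWriter;
    [movieFile enableSynchronizedEncodingUsingMovieWriter:movieWriter];

    // Start the chain.
    [videoCamera startCameraCapture];
    [movieWriter startRecording];
    [movieFile startProcessing];

    [movieWriter setCompletionBlock:^{
        [filter removeTarget:movieWriter];
        [movieWriter finishRecording];
        // The temporary file can then be saved to the photo library, e.g. with
        // ALAssetsLibrary's writeVideoAtPathToSavedPhotosAlbum:completionBlock:.
    }];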

Question 3: when is the movieWriter completion callback called, and what triggers it?
Question 4: what is movieWriter's encodingLiveVideo for? Try setting it to YES and then NO and observe the difference.

Summary

While working through the pitfalls of this demo, I asked a question in the GPUImage Issues, but no one answered. The code address is here; the reference links are material I found while dealing with those pitfalls.

Appendix

The questions above are points that confused me while building the demo. The answers:

Question 1: with only one input, the filter waits forever for the second input and produces no output. This happened when the videoCamera was declared as a temporary variable: [videoCamera addTarget:filter] does not make the filter retain the videoCamera, so the camera was released and the chain was left with only one input (only the file's video signal), leaving the screen white.

Question 2: different audio sources imply different CMTime bases. By default the response chain stamps the video with the CMTime of its first input, so when you change the audio source you must also change the order in which the inputs are added to the chain; otherwise a video file a few seconds long can come out at over two hours (the result of mismatched CMTime synchronization).

Question 3: the movieWriter completion callback is called when the video finishes processing; it is triggered by GPUImageMovie when reader.status == AVAssetReaderStatusCompleted.

Question 4: encodingLiveVideo actually controls the expectsMediaDataInRealTime attribute of the writer's inputs; YES means the input stream is real-time, such as the camera.
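As a sketch of the fix for Question 1 (illustrative, assuming the chain is built inside a view controller): keep the camera in a strong property so it outlives the method that builds the chain.

    @interface ViewController ()
    @property (nonatomic, strong) GPUImageVideoCamera *videoCamera;
    @end

    // In the method that builds the chain:
    self.videoCamera =
        [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                            cameraPosition:AVCaptureDevicePositionBack];
    [self.videoCamera addTarget:filter]; // the camera now survives past this method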