iOS Audio Unit (1)

I've recently been working on audio projects on iOS and have learned a lot from the official documentation along the way. Audio technology on the iOS platform covers many areas. Here I'll first give an overview, then use the I/O Audio Unit as an example to explain the concepts, basic usage, and ideas. This may not be comprehensive, and for some details you will need to consult the relevant documents. Later I'll analyze the source code of an open source audio engine framework on GitHub to show possible designs and implementations in more complex audio applications.

The figures and most of the technical concepts below are drawn from Apple's official documentation.

Core Audio

Core Audio is the digital audio processing infrastructure of iOS and macOS. It is a set of software frameworks for audio applications; every audio interface on iOS is either provided by Core Audio directly or is a wrapper around the interfaces it provides. Apple's official layered view of Core Audio is illustrated below:

[Figure: core_audio_layers.png — the Core Audio layers]

Low-Level

This layer is mostly used in Mac audio apps that require maximum real-time performance; most audio apps do not need its services, since iOS also provides higher-level APIs whose real-time performance is high enough for most needs (OpenAL, for example, offers real-time audio processing with direct I/O calls in games).
I/O Kit: interacts with the hardware drivers
Audio HAL: the audio hardware abstraction layer, which keeps API calls separate and independent from the actual hardware
Core MIDI: provides a software abstraction layer for working with MIDI streams and devices
Host Time Services: provides access to the computer's hardware clock

Mid-Level

This layer provides complete services, including audio data format conversion, audio file reading and writing, audio stream parsing, plugin support, and so on.
Audio Converter Services: audio data format conversion
Audio File Services: reading and writing audio data in files
Audio Unit Services and Audio Processing Graph Services: digital signal processing plugins such as equalizers and mixers
Audio File Stream Services: audio stream parsing
Core Audio Clock Services: audio clock synchronization

High-Level

This is a group of high-level application services built from the lower-layer interfaces; most of our audio development work can be done at this layer.
AVAudioPlayer: an Objective-C based audio player interface designed for iOS, providing play, pause, loop, and so on
Audio Queue Services: recording, playback, pause, looping, and audio synchronization; it supports all iOS audio formats and automatically provides the codec handling that compressed formats need
Extended Audio File Services: a combination of Audio File and Audio Converter, providing the ability to read and write both compressed and uncompressed audio files
OpenAL: Core Audio's implementation of the OpenAL standard, capable of playing 3D mixing effects

API services required by different scenarios

If you only need audio playback with no other requirements, AVAudioPlayer meets the demand. Its interface is simple to use and you do not have to care about the details: usually you just provide a playback source URL, call its play, pause, and stop methods, and observe its playback status to update the app UI (a minimal sketch follows at the end of this list).

If the app needs to stream audio, you need AudioFileStreamer plus Audio Queue: read the local or network stream into memory, submit it to AudioFileStreamer to parse it into separate audio frames, then hand the separated frames to AudioQueue for decoding and playback. For reference: AudioStreamer, FreeStreamer, AFSoundManager.

If the app needs to apply sound effects to the audio (equalizer, reverb), then besides reading and parsing the data you also need AudioConverter or a codec to convert the audio data into PCM, and then process and play it through AudioUnit + AUGraph. For reference: DouAudioStreamer, TheAmazingAudioEngine, AudioKit.
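
For the playback-only case, a minimal sketch looks roughly like this (the file name is illustrative, and self.player is an assumed strong property; without a strong reference the player is deallocated and playback stops immediately):

#import <AVFoundation/AVFoundation.h>

// Minimal playback-only sketch; "song.mp3" is a placeholder resource.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"song" withExtension:@"mp3"];
NSError *error = nil;
self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
if (self.player && !error) {
    [self.player prepareToPlay];
    [self.player play];   // pause / stop are used the same way
}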

Audio Unit

iOS provides audio processing plugins for mixing, equalization, format conversion, real-time I/O recording and playback, offline rendering, voice intercom (VoIP), and so on. Each belongs to a different AudioUnit, and they support dynamic loading and use. An AudioUnit can be created and used on its own, but more often several are combined inside an Audio Processing Graph container to meet a variety of processing requirements, as in the following scenario:

[Figure: AboutAudioUnitHosting_2x.png — an Audio Processing Graph hosting EQ, Mixer, and I/O units]

The app holds an Audio Processing Graph container with two EQ Units, a Mixer Unit, and an I/O Unit. The app pulls two audio streams from disk or the network, equalizes each through an EQ Unit, then mixes them into a single stream in the Mixer Unit; that stream enters the I/O Unit, which sends the data to the hardware for playback. Throughout this process the app can adjust the configuration and working state of the AUGraph and the parameters of each Unit at any time, dynamically inserting or removing a specified Unit, with thread safety guaranteed.
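
As a hedged sketch of such a dynamic change (node and callback names are assumed from a graph built as in the construction code later in this post), one EQ stage could be bypassed while the graph keeps running:

// Detach the EQ output from mixer input bus 1 and feed that bus directly instead.
AUGraphDisconnectNodeInput(processingGraph, mixerNode, 1);
AUGraphSetNodeInputCallback(processingGraph, mixerNode, 1, &playbackCallback);
Boolean isUpdated = false;
AUGraphUpdate(processingGraph, &isUpdated);  // commits pending changes in a render-thread-safe way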

Audio Unit types:
I/O: Remote I/O, Voice-Processing I/O, Generic Output
Mixing: 3D Mixer, Multichannel Mixer
Effect: iPod Equalizer
Format Conversion: Format Converter

AudioUnit construction method

There are two ways to create an Audio Unit; take the I/O Unit as an example. One is to call the unit interfaces directly; the other is to create it through an Audio Unit Graph. Here are the basic processes and related code for both creation methods:

Unit API mode (Remote IO Unit)

// Create IO Unit
BOOL result = NO;
AudioComponentDescription outputDescription = {0};
outputDescription.componentType = kAudioUnitType_Output;
outputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
outputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
outputDescription.componentFlags = 0;
outputDescription.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &outputDescription);
result = CheckOSStatus(AudioComponentInstanceNew(comp, &mVoipUnit),
                       @"couldn't create a new instance of RemoteIO");
if (!result) return result;

// Config IO enable status
UInt32 flag = 1;
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioOutputUnitProperty_EnableIO,
                                            kAudioUnitScope_Output, kOutputBus,
                                            &flag, sizeof(flag)),
                       @"could not enable output on RemoteIO");
if (!result) return result;
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioOutputUnitProperty_EnableIO,
                                            kAudioUnitScope_Input, kInputBus,
                                            &flag, sizeof(flag)),
                       @"could not enable input on RemoteIO");
if (!result) return result;

// Config default format
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioUnitProperty_StreamFormat,
                                            kAudioUnitScope_Output, kInputBus,
                                            &inputAudioDescription, sizeof(inputAudioDescription)),
                       @"couldn't set the input client format on RemoteIO");
if (!result) return result;
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioUnitProperty_StreamFormat,
                                            kAudioUnitScope_Input, kOutputBus,
                                            &outputAudioDescription, sizeof(outputAudioDescription)),
                       @"couldn't set the output client format on RemoteIO");
if (!result) return result;

// Set the MaximumFramesPerSlice property. This property describes to an audio unit the
// maximum number of samples it will be asked to produce on any single call to AudioUnitRender.
UInt32 maxFramesPerSlice = 4096;
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                            kAudioUnitScope_Global, 0,
                                            &maxFramesPerSlice, sizeof(UInt32)),
                       @"couldn't set max frames per slice on RemoteIO");
if (!result) return result;

// Set the record callback
AURenderCallbackStruct recordCallback;
recordCallback.inputProc = recordCallbackFunc;
recordCallback.inputProcRefCon = (__bridge void * _Nullable)(self);
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioOutputUnitProperty_SetInputCallback,
                                            kAudioUnitScope_Global, kInputBus,
                                            &recordCallback, sizeof(recordCallback)),
                       @"couldn't set record callback on RemoteIO");
if (!result) return result;

// Set the playback callback
AURenderCallbackStruct playbackCallback;
playbackCallback.inputProc = playbackCallbackFunc;
playbackCallback.inputProcRefCon = (__bridge void * _Nullable)(self);
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioUnitProperty_SetRenderCallback,
                                            kAudioUnitScope_Global, kOutputBus,
                                            &playbackCallback, sizeof(playbackCallback)),
                       @"couldn't set playback callback on RemoteIO");
if (!result) return result;

// Disable buffer allocation on the input element (we render into our own buffer)
flag = 0;
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioUnitProperty_ShouldAllocateBuffer,
                                            kAudioUnitScope_Output, kInputBus,
                                            &flag, sizeof(flag)),
                       @"couldn't set property for ShouldAllocateBuffer");
if (!result) return result;

// Initialize the output IO instance
result = CheckOSStatus(AudioUnitInitialize(mVoipUnit),
                       @"couldn't initialize VoiceProcessingIO instance");
if (!result) return result;
return YES;
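
Once initialization succeeds, the unit is driven with AudioOutputUnitStart/AudioOutputUnitStop; error handling is elided in this sketch:

AudioOutputUnitStart(mVoipUnit);   // the unit's scheduler begins invoking the record/playback callbacks
// ... session runs ...
AudioOutputUnitStop(mVoipUnit);    // the callbacks stop firing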

AU Graph mode (MultiChannelMixer Unit + Remote IO Unit)

// Create AUGraph
BOOL result = NO;
result = CheckOSStatus(NewAUGraph(&processingGraph),
                       @"couldn't create a new instance of AUGraph");
if (!result) return result;

// I/O unit description
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType = kAudioUnitType_Output;
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags = 0;
iOUnitDescription.componentFlagsMask = 0;

// Multichannel mixer unit description
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;

AUNode iONode;    // node for I/O unit
AUNode mixerNode; // node for Multichannel Mixer unit

result = CheckOSStatus(AUGraphAddNode(processingGraph, &iOUnitDescription, &iONode),
                       @"couldn't add a node instance of kAudioUnitSubType_RemoteIO");
if (!result) return result;
result = CheckOSStatus(AUGraphAddNode(processingGraph, &MixerUnitDescription, &mixerNode),
                       @"couldn't add a node instance of mixer unit");
if (!result) return result;

// Open the AUGraph
result = CheckOSStatus(AUGraphOpen(processingGraph),
                       @"couldn't open the AUGraph");
if (!result) return result;

// Obtain the unit instances
result = CheckOSStatus(AUGraphNodeInfo(processingGraph, mixerNode, NULL, &mMixerUnit),
                       @"couldn't get instance of mixer unit");
if (!result) return result;
result = CheckOSStatus(AUGraphNodeInfo(processingGraph, iONode, NULL, &mVoipUnit),
                       @"couldn't get instance of remoteio unit");
if (!result) return result;

// ---------------- Config mixer unit ----------------
UInt32 busCount  = 2;  // bus count for mixer unit input
UInt32 guitarBus = 0;  // mixer unit bus 0 will be stereo and will take the guitar sound
UInt32 beatsBus  = 1;  // mixer unit bus 1 will be mono and will take the beats sound
result = CheckOSStatus(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_ElementCount,
                                            kAudioUnitScope_Input, 0,
                                            &busCount, sizeof(busCount)),
                       @"could not set mixer unit input bus count");
if (!result) return result;

UInt32 maximumFramesPerSlice = 4096;
result = CheckOSStatus(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                            kAudioUnitScope_Global, 0,
                                            &maximumFramesPerSlice, sizeof(maximumFramesPerSlice)),
                       @"could not set mixer unit maximum frames per slice");
if (!result) return result;

// Attach the input render callback and context to each input bus
for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {
    // Setup the structure that contains the input render callback
    AURenderCallbackStruct playbackCallback;
    playbackCallback.inputProc = playbackCallbackFunc;
    playbackCallback.inputProcRefCon = (__bridge void * _Nullable)(self);
    NSLog(@"Registering the render callback with mixer unit input bus %u", busNumber);
    // Set a callback for the specified node's specified input
    result = CheckOSStatus(AUGraphSetNodeInputCallback(processingGraph, mixerNode, busNumber,
                                                       &playbackCallback),
                           @"couldn't set playback callback on mixer unit");
    if (!result) return result;
}

// Config mixer unit input default format
result = CheckOSStatus(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_StreamFormat,
                                            kAudioUnitScope_Input, guitarBus,
                                            &outputAudioDescription, sizeof(outputAudioDescription)),
                       @"couldn't set the input 0 client format on mixer unit");
if (!result) return result;
result = CheckOSStatus(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_StreamFormat,
                                            kAudioUnitScope_Input, beatsBus,
                                            &outputAudioDescription, sizeof(outputAudioDescription)),
                       @"couldn't set the input 1 client format on mixer unit");
if (!result) return result;

Float64 graphSampleRate = 44100.0; // Hertz
result = CheckOSStatus(AudioUnitSetProperty(mMixerUnit, kAudioUnitProperty_SampleRate,
                                            kAudioUnitScope_Output, 0,
                                            &graphSampleRate, sizeof(graphSampleRate)),
                       @"couldn't set the output sample rate on mixer unit");
if (!result) return result;

// ---------------- Config I/O unit ----------------
// IO enable status
UInt32 flag = 1;
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioOutputUnitProperty_EnableIO,
                                            kAudioUnitScope_Output, kOutputBus,
                                            &flag, sizeof(flag)),
                       @"could not enable output on kAudioUnitSubType_RemoteIO");
if (!result) return result;
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioOutputUnitProperty_EnableIO,
                                            kAudioUnitScope_Input, kInputBus,
                                            &flag, sizeof(flag)),
                       @"could not enable input on kAudioUnitSubType_RemoteIO");
if (!result) return result;

// Default format
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioUnitProperty_StreamFormat,
                                            kAudioUnitScope_Output, kInputBus,
                                            &inputAudioDescription, sizeof(inputAudioDescription)),
                       @"couldn't set the input client format on kAudioUnitSubType_RemoteIO");
if (!result) return result;

UInt32 maxFramesPerSlice = 4096;
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioUnitProperty_MaximumFramesPerSlice,
                                            kAudioUnitScope_Global, 0,
                                            &maxFramesPerSlice, sizeof(UInt32)),
                       @"couldn't set max frames per slice on kAudioUnitSubType_RemoteIO");
if (!result) return result;

// Set the record callback
AURenderCallbackStruct recordCallback;
recordCallback.inputProc = recordCallbackFunc;
recordCallback.inputProcRefCon = (__bridge void * _Nullable)(self);
result = CheckOSStatus(AudioUnitSetProperty(mVoipUnit, kAudioOutputUnitProperty_SetInputCallback,
                                            kAudioUnitScope_Global, kInputBus,
                                            &recordCallback, sizeof(recordCallback)),
                       @"couldn't set record callback on kAudioUnitSubType_RemoteIO");
if (!result) return result;
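
The original snippet breaks off at the record callback. To make the graph audible, the mixer still has to be connected to the I/O node, and the graph initialized and started; the following is a sketch of those remaining steps based on Apple's standard AUGraph pattern, not code from the original post:

// Connect mixer output bus 0 to the I/O unit's output element (bus 0),
// then initialize and start the graph.
result = CheckOSStatus(AUGraphConnectNodeInput(processingGraph, mixerNode, 0, iONode, 0),
                       @"couldn't connect mixer node to I/O node");
if (!result) return result;
result = CheckOSStatus(AUGraphInitialize(processingGraph),
                       @"couldn't initialize AUGraph");
if (!result) return result;
result = CheckOSStatus(AUGraphStart(processingGraph),
                       @"couldn't start AUGraph");
return result;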

Input and output of AudioUnit data

When a Unit processes audio data, the data goes through an input stage and an output stage, and you set the audio format for input and output separately (the two may be the same or different). When two Units are connected, one Unit's output becomes another Unit's input; what needs attention is that the audio formats must be consistent at the point of connection. Take the Remote I/O Unit as an example; its structure is shown below:

[Figure: IO_unit_2x.png — Remote I/O Unit structure]

An I/O Unit contains two entities (Element 0 and Element 1) that are independent of each other and can be enabled or disabled as required through the kAudioOutputUnitProperty_EnableIO property. Element 1 is connected to the hardware input; its input scope is invisible to you, so you can only read data from its output scope and set the output scope's audio format. Element 0 is connected to the hardware output; its output scope is invisible to you, so you can only write data to its input scope and set the input scope's audio format.

How do you grab the data collected by the input device and send the processed data to the output device?
Through the AURenderCallbackStruct structure: define the two static callback methods and set the structure on the appropriate Element 0/1. When the Unit is configured and running, the Unit's scheduling thread calls your recording and playback callback methods cyclically, at a period determined by the current device status and the audio format. Sample code follows:

// Record callback: read audio data from the unit into a buffer list
static OSStatus recordCallbackFunc(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    ASAudioEngineSingleU *engine = (__bridge ASAudioEngineSingleU *)inRefCon;
    OSStatus err = noErr;
    if (engine.audioChainIsBeingReconstructed == NO) {
        @autoreleasepool {
            AudioBufferList bufList = [engine getBufferList:inNumberFrames];
            err = AudioUnitRender([engine recorderUnit], ioActionFlags, inTimeStamp,
                                  inBusNumber, inNumberFrames, &bufList);
            if (err) {
                HMLogDebug(LogModuleAudio, @"AudioUnitRender error, code = %d", err);
            } else {
                AudioBuffer buffer = bufList.mBuffers[0];
                NSData *pcmBlock = [NSData dataWithBytes:buffer.mData
                                                  length:buffer.mDataByteSize];
                [engine didRecordData:pcmBlock];
            }
        }
    }
    return err;
}

// Playback callback: fill audio data into the buffer list
static OSStatus playbackCallbackFunc(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData)
{
    ASAudioEngineSingleU *engine = (__bridge ASAudioEngineSingleU *)inRefCon;
    OSStatus err = noErr;
    if (engine.audioChainIsBeingReconstructed == NO) {
        for (int i = 0; i < ioData->mNumberBuffers; i++) {
            @autoreleasepool {
                AudioBuffer buffer = ioData->mBuffers[i];
                NSData *pcmBlock = [engine getPlayFrame:buffer.mDataByteSize];
                if (pcmBlock && pcmBlock.length) {
                    UInt32 size = (UInt32)MIN(buffer.mDataByteSize, [pcmBlock length]);
                    memcpy(buffer.mData, [pcmBlock bytes], size);
                    buffer.mDataByteSize = size;
                    //HMLogDebug(LogModuleAudio, @"AudioUnitRender PCM data has filled");
                } else {
                    buffer.mDataByteSize = 0;
                    *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
                }
            } // end pool
        } // end for
    } // end if
    return err;
}
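
The getBufferList: helper used by the record callback is part of the engine and is not shown in the original; a plausible minimal implementation, assuming mono 16-bit PCM and a preallocated ivar buffer (both assumptions, not the author's actual code), might be:

// Hypothetical helper: wraps a preallocated, reused buffer in an AudioBufferList
// sized for inNumberFrames of mono 16-bit PCM.
- (AudioBufferList)getBufferList:(UInt32)inNumberFrames {
    AudioBufferList bufList;
    bufList.mNumberBuffers = 1;
    bufList.mBuffers[0].mNumberChannels = 1;
    bufList.mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufList.mBuffers[0].mData = mRecordBuffer; // assumed ivar, allocated once and reused
    return bufList;
}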

Examples of AudioUnit construction in different scenarios

I/O without render callback

Data collected from the input device passes through the MultiChannelMixer Unit and is then sent to the output device for playback. The point of this construction is that the Unit in the middle can adjust the pan and volume of the data collected from the mic.

[Figure: IOWithoutRenderCallback_2x.png]

I/O with render callback

This architecture adds a render callback between input and output, so the data collected by the hardware can be processed (gain, modulation, sound effects, and so on) before being sent to the output for playback.

[Figure: IOWithRenderCallback_2x.png]

Output only, with render callback

Music, game, and synthesizer apps use only the output side of the I/O Unit and do the extraction, collation, and preparation of the playback source inside the render callback. It is a relatively simple construction; an illustrative callback follows the figure.

[Figure: OutputOnlyWithRenderCallback_2x.png]
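
As an illustration of the synthesizer case (not from the original post), the render callback can generate the samples itself; this sketch writes a 440 Hz sine into a mono 16-bit output buffer:

#include <math.h>

static OSStatus sineRenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    static double phase = 0;                                // kept static for brevity only
    const double increment = 2.0 * M_PI * 440.0 / 44100.0;  // 440 Hz at 44.1 kHz
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;
    for (UInt32 i = 0; i < inNumberFrames; i++) {
        samples[i] = (SInt16)(sin(phase) * 32767.0 * 0.25); // -12 dB to leave headroom
        phase += increment;
    }
    return noErr;
}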

More complex construction

Here the input consists of two audio streams, both of which supply their data through render callbacks. One stream is sent directly to the Mixer Unit; the other passes through an EQ Unit before entering the Mixer Unit. A sketch of adding the EQ stage follows the figure.

[Figure: OutputOnlyWithRenderCallbackExtended_2x.png]
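
A sketch of adding the EQ stage (using the iPod EQ subtype; node and callback names are assumptions carried over from the AU Graph code above):

// Add an iPod EQ effect node and route one input stream through it into mixer bus 1.
AudioComponentDescription eqDescription = {0};
eqDescription.componentType = kAudioUnitType_Effect;
eqDescription.componentSubType = kAudioUnitSubType_AUiPodEQ;
eqDescription.componentManufacturer = kAudioUnitManufacturer_Apple;

AUNode eqNode;
AUGraphAddNode(processingGraph, &eqDescription, &eqNode);
AUGraphSetNodeInputCallback(processingGraph, eqNode, 0, &playbackCallback); // stream 2 feeds the EQ
AUGraphConnectNodeInput(processingGraph, eqNode, 0, mixerNode, 1);          // EQ output -> mixer bus 1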

Tips

1. Multithreading and memory management

Avoid locking and time-consuming operations in the render callback methods as far as possible; this maximizes the real-time performance of playback and collection. If different threads read and write the same data, protect it with a lock; the pthread mutex is recommended, as its performance is higher than the other lock types (a non-blocking pattern is sketched below).
Audio is generally a continuous process of collecting and playing back in callbacks, so reuse buffers wherever possible and avoid extra buffer copies; do not re-allocate and release on every callback, and place @autoreleasepool in the proper position to keep memory from climbing over long runs.
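
A common shape for the locking advice is pthread_mutex_trylock in the render callback, so the audio thread never blocks; a sketch (the queue lock and buffer are assumed engine members, not code from the original post):

#include <pthread.h>

// Non-blocking read of pending PCM inside the playback callback.
if (pthread_mutex_trylock(&mQueueLock) == 0) {
    // copy queued PCM into buffer.mData here, then:
    pthread_mutex_unlock(&mQueueLock);
} else {
    // a producer holds the lock: output silence rather than block the render thread
    buffer.mDataByteSize = 0;
    *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
}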

2. Format

Core Audio defines the AudioStreamBasicDescription structure, and Audio Unit along with many other audio APIs relies on it for format configuration; fill in the structure correctly according to your needs. The following example fills it in for 44.1 kHz, stereo, 16-bit PCM (so each frame occupies 2 channels × 2 bytes = 4 bytes):

audioDescription.mSampleRate = 44100;
audioDescription.mChannelsPerFrame = 2;
audioDescription.mBitsPerChannel = 16;
audioDescription.mFramesPerPacket = 1;
audioDescription.mFormatID = kAudioFormatLinearPCM;
audioDescription.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioDescription.mBytesPerFrame = (audioDescription.mBitsPerChannel / 8) * audioDescription.mChannelsPerFrame;
audioDescription.mBytesPerPacket = audioDescription.mBytesPerFrame;

Apple officially recommends running the entire Audio Processing Graph, or a Unit, in the same audio format wherever possible, although an Audio Unit's input and output formats may differ. Also, the formats on the two sides of a connection between Units must be consistent.

3. Sound quality

While using an Audio Unit the format can be changed dynamically, but there is one situation to watch: it is best to restore the default format from creation time before destroying the Unit; otherwise, after the Unit is destroyed and rebuilt, playback quality may degrade (low volume, rough sound). A teardown sketch follows below.
While using the VoiceProcessing I/O Unit, I found that on some iPhones, after the speaker is turned on, the data the Unit collects from the mic is empty or noise; other VoIP apps downloaded from the App Store show the same problem. After changing the AudioUnit subtype to RemoteIO the problem disappears, so I suspect a bug in the echo cancellation processing of Apple's VoiceProcessing Unit.
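
For the first tip, a hedged teardown sketch (reusing the names from the construction code above; restore the creation-time format, then destroy the unit):

// Restore the stream format that was set at creation time, then tear down.
AudioUnitSetProperty(mVoipUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, kInputBus,
                     &inputAudioDescription, sizeof(inputAudioDescription));
AudioOutputUnitStop(mVoipUnit);
AudioUnitUninitialize(mVoipUnit);
AudioComponentInstanceDispose(mVoipUnit);
mVoipUnit = NULL;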

4. AudioSession

Once you use audio features, AudioSession requirements follow along with them, and it hides many related issues, such as route management (receiver, speaker, wired headset, Bluetooth headset) and interruption handling (iPhone calls and other interruptions). I will not describe it in detail here, but note that:

  1. Audio route changes (the user plugging or unplugging a headset, or code forcing a switch) involve the iOS input and output hardware. While the route switches, the collection and playback threads of an I/O type Unit are blocked for a certain period (about 200 ms); real-time scenarios such as voice intercom need to consider a packet loss strategy for this.
  2. While the app is working, an iPhone call or the user actively switching to another audio app triggers the audio interruption mechanism. Handle it in time, stopping and resuming the Unit at the right moments: because iOS grants audio resources exclusively, during an iPhone call and similar operations the app's Unit cannot initialize or continue working. A minimal interruption-handling sketch follows.
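
A minimal interruption-handling sketch with AVAudioSession notifications (the selector name and the unit variable are illustrative, not from the original post):

#import <AVFoundation/AVFoundation.h>

// Assumed registration, e.g. in the engine's init:
// [[NSNotificationCenter defaultCenter] addObserver:self
//                                          selector:@selector(handleInterruption:)
//                                              name:AVAudioSessionInterruptionNotification
//                                            object:nil];
- (void)handleInterruption:(NSNotification *)notification {
    AVAudioSessionInterruptionType type =
        [notification.userInfo[AVAudioSessionInterruptionTypeKey] unsignedIntegerValue];
    if (type == AVAudioSessionInterruptionTypeBegan) {
        AudioOutputUnitStop(mVoipUnit);     // the system has taken the audio hardware
    } else if (type == AVAudioSessionInterruptionTypeEnded) {
        [[AVAudioSession sharedInstance] setActive:YES error:nil];
        AudioOutputUnitStart(mVoipUnit);    // resume once the session is active again
    }
}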