2011-03-30

iOS - playing streaming (mp3) audio with effects

I am new to iOS audio technology.

I am developing an application that plays streaming audio (mp3), and I plan to add some effects such as the iPod equalizer and pan control.

What is the best way to achieve this?

I have tried Matt Gallagher's AudioStreamer (http://cocoawithlove.com/2008/09/streaming-and-playing-live-mp3-stream.html). I can play streaming audio with it, but I don't know how to add effects using AudioQueue.

From the Apple documentation I understand that Audio Units can be used to add effects, but the stream format then has to be linear PCM.

Basically, I want to add effects to streaming audio and play it.

I am now unsure of how to proceed.

Can someone give me a direction forward? Any help is highly appreciated.

Thanks,

Sasikumar

Answers


I think you should definitely use Audio Units for this. Here is how simple it looks:

1) Create the audio unit descriptions

// OUTPUT unit 
AudioComponentDescription iOUnitDescription; 
iOUnitDescription.componentType = kAudioUnitType_Output; 
iOUnitDescription.componentSubType = kAudioUnitSubType_RemoteIO; 
iOUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple; 
iOUnitDescription.componentFlags = 0; 
iOUnitDescription.componentFlagsMask = 0; 

// MIXER unit 
AudioComponentDescription MixerUnitDescription; 
MixerUnitDescription.componentType   = kAudioUnitType_Mixer; 
MixerUnitDescription.componentSubType  = kAudioUnitSubType_MultiChannelMixer; 
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple; 
MixerUnitDescription.componentFlags   = 0; 
MixerUnitDescription.componentFlagsMask  = 0; 

// PLAYER unit 
AudioComponentDescription playerUnitDescription; 
playerUnitDescription.componentType = kAudioUnitType_Generator; 
playerUnitDescription.componentSubType = kAudioUnitSubType_AudioFilePlayer; 
playerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple; 
playerUnitDescription.componentFlags = 0; 
playerUnitDescription.componentFlagsMask = 0; 

// EQ unit 
AudioComponentDescription EQUnitDescription; 
EQUnitDescription.componentType   = kAudioUnitType_Effect; 
EQUnitDescription.componentSubType  = kAudioUnitSubType_AUiPodEQ; 
EQUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple; 
EQUnitDescription.componentFlags   = 0; 
EQUnitDescription.componentFlagsMask  = 0; 

and so on (the FX and VFX units referenced below are described the same way).

2) Create the nodes (the processingGraph must already have been created with NewAUGraph)

//// 
//// EQ NODE 
//// 
err = AUGraphAddNode(processingGraph, &EQUnitDescription, &eqNode); 
if (err) { NSLog(@"eqNode err = %ld", err); } 

//// 
//// FX NODE 
//// 
err = AUGraphAddNode(processingGraph, &FXUnitDescription, &fxNode); 
if (err) { NSLog(@"fxNode err = %ld", err); } 

//// 
//// VFX NODE 
//// 
err = AUGraphAddNode(processingGraph, &VFXUnitDescription, &vfxNode); 
if (err) { NSLog(@"vfxNode err = %ld", err); } 

/// 
/// MIXER NODE 
/// 
err = AUGraphAddNode (processingGraph, &MixerUnitDescription, &mixerNode); 
if (err) { NSLog(@"mixerNode err = %ld", err); } 

/// 
/// OUTPUT NODE 
/// 
err = AUGraphAddNode(processingGraph, &iOUnitDescription, &ioNode); 
if (err) { NSLog(@"outputNode err = %ld", err); } 

//// 
/// PLAYER NODE 
/// 
err = AUGraphAddNode(processingGraph, &playerUnitDescription, &audioPlayerNode); 
if (err) { NSLog(@"audioPlayerNode err = %ld", err); } 

3) Connect them

//// mic /lineIn ----> vfx bus 0 
err = AUGraphConnectNodeInput(processingGraph, ioNode, 1, vfxNode, 0); 
if (err) { NSLog(@"vfxNode err = %ld", err); } 

//// vfx ----> mixer 
err = AUGraphConnectNodeInput(processingGraph, vfxNode, 0, mixerNode, micBus); 
if (err) { NSLog(@"vfxNode err = %ld", err); } 

//// player ----> fx 
err = AUGraphConnectNodeInput(processingGraph, audioPlayerNode, 0, fxNode, 0); 
if (err) { NSLog(@"audioPlayerNode err = %ld", err); } 

//// fx ----> mixer 
err = AUGraphConnectNodeInput(processingGraph, fxNode, 0, mixerNode, filePlayerBus); 
if (err) { NSLog(@"audioPlayerNode err = %ld", err); } 

///// mixer ----> eq 
err = AUGraphConnectNodeInput(processingGraph, mixerNode, 0, eqNode, 0); 
if (err) { NSLog(@"mixerNode err = %ld", err); } 

//// eq ----> output 
err = AUGraphConnectNodeInput(processingGraph, eqNode, 0, ioNode, 0); 
if (err) { NSLog(@"eqNode err = %ld", err); } 

4) Set up a render callback (unit handles such as vfxUnit are obtained with AUGraphNodeInfo once the graph has been opened with AUGraphOpen)

// let's say a mic input callback 
    AURenderCallbackStruct lineInrCallbackStruct = {}; 
    lineInrCallbackStruct.inputProc = &micLineInCallback; 
    lineInrCallbackStruct.inputProcRefCon = (void*)self; 
    err = AudioUnitSetProperty(
          vfxUnit, 
          kAudioUnitProperty_SetRenderCallback, 
          kAudioUnitScope_Global, 
          0, 
          &lineInrCallbackStruct, 
          sizeof(lineInrCallbackStruct)); 

5) Process the audio buffers in the callback (once the graph has been initialized with AUGraphInitialize and started with AUGraphStart, this fires every render cycle)

static OSStatus micLineInCallback (void     *inRefCon, 
            AudioUnitRenderActionFlags *ioActionFlags, 
            const AudioTimeStamp   *inTimeStamp, 
            UInt32      inBusNumber, 
            UInt32      inNumberFrames, 
            AudioBufferList    *ioData) 
{ 
    MixerHostAudio *THIS = (MixerHostAudio *)inRefCon; 
    AudioUnit rioUnit = THIS.ioUnit; // io unit which has the input data from mic/lineIn 
    OSStatus renderErr; 
    UInt32 bus1 = 1;     // input bus on the remote IO unit 

    //// do something with ioData, like getting the left and right channels 

AudioUnitSampleType *inSamplesLeft;   // convenience pointers to sample data 
    AudioUnitSampleType *inSamplesRight; 

    int isStereo;    // c boolean - for deciding how many channels to process. 
    int numberOfChannels;  // 1 = mono, 2= stereo 

    // SInt16 buffers to hold sample data after conversion 

    SInt16 *sampleBufferLeft = THIS.conversionBufferLeft; 
    SInt16 *sampleBufferRight = THIS.conversionBufferRight; 
    SInt16 *sampleBuffer; 

    // start the actual processing 

    numberOfChannels = THIS.displayNumberOfInputChannels; 
    isStereo = numberOfChannels > 1 ? 1 : 0; // decide stereo or mono 


    // copy all the input samples to the callback buffer - after this point we could bail and have a pass through 

    renderErr = AudioUnitRender(rioUnit, ioActionFlags, 
           inTimeStamp, bus1, inNumberFrames, ioData); 
    if (renderErr < 0) { 
     return renderErr; 
    } 

    inSamplesLeft = (AudioUnitSampleType *) ioData->mBuffers[0].mData; // left channel 
    fixedPointToSInt16(inSamplesLeft, sampleBufferLeft, inNumberFrames); 

    if(isStereo) { 
     inSamplesRight = (AudioUnitSampleType *) ioData->mBuffers[1].mData; // right channel 
     fixedPointToSInt16(inSamplesRight, sampleBufferRight, inNumberFrames); 
    } 

I learned this by exploring great Apple documentation and samples such as:

the MixerHost audio unit sample app

the Audio Unit Programming Guide

AudioGraph, which is the most comprehensive sample code / "unofficial" documentation you can find for real-world AudioUnit programming.

Hope this helps, good luck!


For the audio processing, take a look at Pure Data - libpd is its embeddable library version.