Error converting AudioBufferList to CMBlockBufferRef

I am trying to read a video file in with AVAssetReader, hand the audio off to Core Audio for processing (adding effects and such), and then save it back out to disk with AVAssetWriter. I would like to point out that if I set the componentSubType on my output node's AudioComponentDescription to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is set up properly, since I can hear it working. I set the subtype to GenericOutput instead so I can do the rendering myself and get back the adjusted audio.

I read the audio in and pass the CMSampleBufferRef over to copyBuffer. This puts the audio into a circular buffer that will be read from later.

- (void)copyBuffer:(CMSampleBufferRef)buf { 
    if (_readyForMoreBytes == NO) 
    { 
     return; 
    } 

    AudioBufferList abl; 
    CMBlockBufferRef blockBuffer; 
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer); 

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf); 
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size); 

    if (!bytesCopied){ 
     // Circular buffer is full; stop accepting input and stash this frame in the rescue buffer 
     _readyForMoreBytes = NO; 

     if (size > kRescueBufferSize){ 
      NSLog(@"Unable to allocate enought space for rescue buffer, dropping audio frame"); 
     } else { 
      if (rescueBuffer == nil) { 
       rescueBuffer = malloc(kRescueBufferSize); 
      } 

      rescueBufferSize = size; 
      memcpy(rescueBuffer, abl.mBuffers[0].mData, size); 
     } 
    } 

    CFRelease(blockBuffer); 
    if (!self.hasBuffer && bytesCopied > 0) 
    { 
     self.hasBuffer = YES; 
    } 
} 
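
For reference, a minimal sketch of how copyBuffer might be driven, assuming an AVAssetReaderTrackOutput and a circular buffer initialized once elsewhere with TPCircularBufferInit (readerOutput and kCircularBufferSize are illustrative placeholder names, not from the code above):

// e.g. once, in init: TPCircularBufferInit(&circularBuffer, kCircularBufferSize); 
CMSampleBufferRef sample = NULL; 
while ((sample = [readerOutput copyNextSampleBuffer]) != NULL) { 
    [self copyBuffer:sample]; 
    CFRelease(sample); // copyNextSampleBuffer returns a +1 reference 
} 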

Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called, it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList that was passed in. Like I said before, if the output is set to RemoteIO, this causes the audio to play correctly through the speakers. When AudioUnitRender finishes it returns noErr, and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).

-(CMSampleBufferRef)processOutput 
{ 
    if(self.offline == NO) 
    { 
     return NULL; 
    } 

    AudioUnitRenderActionFlags flags = 0; 
    AudioTimeStamp inTimeStamp; 
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp)); 
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid; 
    UInt32 busNumber = 0; 

    UInt32 numberFrames = 512; 
    inTimeStamp.mSampleTime = 0; 
    UInt32 channelCount = 2; 

    // Allocate a variable-length AudioBufferList with one AudioBuffer per channel 
    AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1)); 
    bufferList->mNumberBuffers = channelCount; 
    for (int j=0; j<channelCount; j++) 
    { 
     AudioBuffer buffer = {0}; 
     buffer.mNumberChannels = 1; 
     buffer.mDataByteSize = numberFrames*sizeof(SInt32); 
     buffer.mData = calloc(numberFrames,sizeof(SInt32)); 

     bufferList->mBuffers[j] = buffer; 

    } 
    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit"); 

    CMSampleBufferRef sampleBufferRef = NULL; 
    CMFormatDescriptionRef format = NULL; 
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid }; 
    AudioStreamBasicDescription audioFormat = self.audioFormat; 
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate"); 
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate"); 
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList"); 

    return sampleBufferRef; 
} 
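
For completeness, a minimal sketch of how the returned sample buffer might be handed to the writer (writerInput is an assumed AVAssetWriterInput, not shown above):

CMSampleBufferRef sbuf = [self processOutput]; 
if (sbuf != NULL) { 
    if (writerInput.readyForMoreMediaData) { 
        [writerInput appendSampleBuffer:sbuf]; 
    } 
    CFRelease(sbuf); // processOutput returns a +1 reference 
} 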


static OSStatus playbackCallback(void *inRefCon, 
           AudioUnitRenderActionFlags *ioActionFlags, 
           const AudioTimeStamp *inTimeStamp, 
           UInt32 inBusNumber, 
           UInt32 inNumberFrames, 
           AudioBufferList *ioData) 
{ 
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels; 
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData; 

    // Zero the output buffer so any frames we do not fill below come out as silence 
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize); 

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon; 

    if (p.hasBuffer){ 
     int32_t availableBytes; 
     SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes); 

     int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels; 

     int bytesToRead = MIN(availableBytes, requestedBytesSize); 
     memcpy(outSample, bufferTail, bytesToRead); 
     TPCircularBufferConsume([p getBuffer], bytesToRead); 

     if (availableBytes <= requestedBytesSize*2){ 
      [p setReadyForMoreBytes]; 
     } 

     if (availableBytes <= requestedBytesSize) { 
      p.hasBuffer = NO; 
     }  
    } 
    return noErr; 
} 
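
For reference, hooking playbackCallback up as the input callback on the first node looks roughly like this sketch (firstUnit is a placeholder; with an AUGraph, AUGraphSetNodeInputCallback is the equivalent route):

AURenderCallbackStruct callbackStruct; 
callbackStruct.inputProc = playbackCallback; 
callbackStruct.inputProcRefCon = (__bridge void *)self; // becomes inRefCon above 
CheckError(AudioUnitSetProperty(firstUnit, 
                                kAudioUnitProperty_SetRenderCallback, 
                                kAudioUnitScope_Input, 
                                0, 
                                &callbackStruct, 
                                sizeof(callbackStruct)), @"set render callback"); 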

The CMSampleBufferRef that I pass in looks valid (below is a dump of the object from the debugger):

CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180 
    invalid = NO 
    dataReady = NO 
    makeDataReadyCallback = 0x0 
    makeDataReadyRefcon = 0x0 
    formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> { 
    mediaType:'soun' 
    mediaSubType:'lpcm' 
    mediaSpecific: { 
    ASBD: { 
    mSampleRate: 44100.000000 
    mFormatID: 'lpcm' 
    mFormatFlags: 0xc2c 
    mBytesPerPacket: 2 
    mFramesPerPacket: 1 
    mBytesPerFrame: 2 
    mChannelsPerFrame: 1 
    mBitsPerChannel: 16 } 
    cookie: {(null)} 
    ACL: {(null)} 
    } 
    extensions: {(null)} 
} 
    sbufToTrackReadiness = 0x0 
    numSamples = 512 
    sampleTimingArray[1] = { 
    {PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}}, 
    } 
    dataBuffer = 0x0 

The buffer list looks like this:

Printing description of bufferList: 
(AudioBufferList *) bufferList = 0x00007f87d280b0a0 
Printing description of bufferList->mNumberBuffers: 
(UInt32) mNumberBuffers = 2 
Printing description of bufferList->mBuffers: 
(AudioBuffer [1]) mBuffers = { 
    [0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00) 
} 

Really stuck here, hoping someone can help. Thanks.

In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio is coming from an mp4 that I shot on my iPhone 6 and then saved to my laptop.

I have read the following questions, but still to no avail; things are not working.

How to convert AudioBufferList to CMSampleBuffer?

Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results

CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731

core audio offline rendering GenericOutput

UPDATE

I poked around some more and noticed that my AudioBufferList right before AudioUnitRender runs looks like this:

bufferList->mNumberBuffers = 2, 
bufferList->mBuffers[0].mNumberChannels = 1, 
bufferList->mBuffers[0].mDataByteSize = 2048 

mDataByteSize is numberFrames * sizeof(SInt32), which is 512 * 4. When I look at the AudioBufferList passed into playbackCallback, the list looks like this:

bufferList->mNumberBuffers = 1, 
bufferList->mBuffers[0].mNumberChannels = 1, 
bufferList->mBuffers[0].mDataByteSize = 1024 

Not sure where the other buffer goes, or where the other 1024 bytes of size come from...

If, once I am done calling AudioUnitRender, I do something like this:

AudioBufferList newbuff; 
newbuff.mNumberBuffers = 1; 
newbuff.mBuffers[0] = bufferList->mBuffers[0]; 
newbuff.mBuffers[0].mDataByteSize = 1024; 

and pass newbuff off to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.

If I try setting up the bufferList with 1 mNumberBuffers, or with its mDataByteSize as numberFrames * sizeof(SInt16), I get a -50 when calling AudioUnitRender.
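
One way to find out what AudioUnitRender will actually accept, rather than guessing, is to ask the output unit for its stream format; a sketch using the same CheckError helper:

AudioStreamBasicDescription outFormat = {0}; 
UInt32 propSize = sizeof(outFormat); 
CheckError(AudioUnitGetProperty(outputUnit, 
                                kAudioUnitProperty_StreamFormat, 
                                kAudioUnitScope_Output, 
                                0, 
                                &outFormat, 
                                &propSize), @"get output stream format"); 
NSLog(@"channels=%u bytesPerFrame=%u flags=%#x", 
      (unsigned)outFormat.mChannelsPerFrame, 
      (unsigned)outFormat.mBytesPerFrame, 
      (unsigned)outFormat.mFormatFlags); 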

UPDATE 2

I hooked up a render callback so I could inspect the output while playing the sound over the speaker. I noticed that the output going to the speaker also has an AudioBufferList with 2 buffers, and that the mDataByteSize during the input callback is 1024 while in the render callback it is 2048, the same as what I see when manually calling AudioUnitRender. When I inspect the data in the rendered AudioBufferList, I notice that the bytes in the 2 buffers are identical, which means I can just ignore the second buffer. But I am not sure how to handle the fact that the data is 2048 in size after being rendered instead of 1024. Any ideas on why that could be happening? Is it in more of a raw form after going through the audio graph, and is that why the size is doubling?
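
For anyone wanting to do the same inspection, a render-notify tap along these lines is one way to hook it up (names are illustrative):

static OSStatus renderNotify(void *inRefCon, 
                             AudioUnitRenderActionFlags *ioActionFlags, 
                             const AudioTimeStamp *inTimeStamp, 
                             UInt32 inBusNumber, 
                             UInt32 inNumberFrames, 
                             AudioBufferList *ioData) 
{ 
    // Inspect the buffers only after the unit has rendered into them 
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) { 
        NSLog(@"post-render: %u buffers, mDataByteSize=%u", 
              (unsigned)ioData->mNumberBuffers, 
              (unsigned)ioData->mBuffers[0].mDataByteSize); 
    } 
    return noErr; 
} 

// installed once, after the graph is set up: 
CheckError(AudioUnitAddRenderNotify(outputUnit, renderNotify, NULL), @"AudioUnitAddRenderNotify"); 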

1 Answer

It sounds like the issue you are dealing with is a mismatch in the number of channels. The reason you are seeing the data in blocks of 2048 instead of 1024 is that it is feeding two channels (stereo) back to you. Check and make sure all of your audio units are correctly configured to use mono throughout the entire audio graph, including the pitch unit and any audio format descriptions.

One thing to watch especially carefully is that calls to AudioUnitSetProperty can fail, so be sure to wrap those in CheckError() as well. For example:
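
Here is a sketch of what that might look like for one unit (pitchUnit is a placeholder for whichever unit in the graph needs configuring, and the exact format flags depend on what formats that unit accepts):

AudioStreamBasicDescription monoFormat = {0}; 
monoFormat.mSampleRate       = 44100.0; 
monoFormat.mFormatID         = kAudioFormatLinearPCM; 
monoFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; 
monoFormat.mChannelsPerFrame = 1; 
monoFormat.mBitsPerChannel   = 16; 
monoFormat.mBytesPerFrame    = 2; 
monoFormat.mFramesPerPacket  = 1; 
monoFormat.mBytesPerPacket   = 2; 

CheckError(AudioUnitSetProperty(pitchUnit, 
                                kAudioUnitProperty_StreamFormat, 
                                kAudioUnitScope_Output, 
                                0, 
                                &monoFormat, 
                                sizeof(monoFormat)), @"set mono stream format"); 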


Figured out what was going on here. I was taking in mono and sending it to a pitch unit that converts it to stereo. The format I was passing to CMAudioFormatDescriptionCreate was mono, and it needed to be stereo, since that is the data in the AudioBufferList. – odyth
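
In other words, the fix described above amounts to something like this sketch: make the format description match the two single-channel buffers (non-interleaved stereo) that the graph actually renders (the flag handling here is illustrative):

AudioStreamBasicDescription stereoFormat = self.audioFormat; // was mono 
stereoFormat.mChannelsPerFrame = 2;                          // match the graph's output 
stereoFormat.mFormatFlags |= kAudioFormatFlagIsNonInterleaved; 
// For non-interleaved PCM, mBytesPerFrame and mBytesPerPacket stay per-channel, 
// so only the channel count and flags change. 

CMFormatDescriptionRef stereoDesc = NULL; 
CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &stereoFormat, 
                                          0, NULL, 0, NULL, NULL, &stereoDesc), @"CMAudioFormatDescriptionCreate"); 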
