2011-04-08 115 views

How to play audio sample buffers from AVCaptureAudioDataOutput

The main goal of the application I am trying to build is peer-to-peer video streaming, somewhat like FaceTime over Bluetooth/WiFi.

Using AVFoundation, I am able to capture video/audio sample buffers. I then send the video/audio sample buffer data over the connection. The problem now is handling the sample buffer data on the receiving end.

For the video sample buffers, I am able to get a UIImage from each sample buffer. But for the audio sample buffers, I don't know how to process them so that I can play the audio.

So the question is: how do I process/play the audio sample buffers?

Right now I am just drawing the waveform, like in Apple's wave sample code:

CMSampleBufferRef sampleBuffer; 

CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer); 
NSUInteger channelIndex = 0; 

CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer); 
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16)); 
size_t lengthAtOffset = 0; 
size_t totalLength = 0; 
SInt16 *samples = NULL; 
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples)); 

int numSamplesToRead = 1; 
for (int i = 0; i < numSamplesToRead; i++) { 

    SInt16 subSet[numSamples / numSamplesToRead]; 
    for (int j = 0; j < numSamples / numSamplesToRead; j++) 
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j]; 

    SInt16 audioSample = [Util maxValueInArray:subSet ofSize:(numSamples / numSamplesToRead)]; 
    // Cast before dividing: integer division by INT16_MAX would always yield 0. 
    double scaledSample = (double)audioSample / INT16_MAX; 

    // plot waveform using scaledSample 
    [self updateUI:scaledSample]; 
} 

Answering my own question. I don't think there is a way to play audio samples without first saving them to a file. There may be a solution, but I couldn't find one. – calampunay 2011-06-16 15:53:29


Sure you can play it — see the playback section [here](http://atastypixel.com/blog/using-remoteio-audio-unit/). – 2011-07-07 15:36:30

Answer


To display the video, you can use the following (here the ARGB frame is converted to a Qt (Nokia Qt) QImage; you can replace QImage with another image type).

Place it in the delegate class:

- (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection 
{ 
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; 

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 

    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // SVideoSample is a helper struct holding the frame's raw bytes and geometry. 
    SVideoSample sample; 

    sample.pImage      = (char *)CVPixelBufferGetBaseAddress(imageBuffer); 
    sample.bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    sample.width       = CVPixelBufferGetWidth(imageBuffer); 
    sample.height      = CVPixelBufferGetHeight(imageBuffer); 

    QImage img((unsigned char *)sample.pImage, sample.width, sample.height, sample.bytesPerRow, QImage::Format_ARGB32); 

    self->m_receiver->eventReceived(img); 

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0); 
    [pool drain]; 
} 