
I'm trying to take audio from the microphone, apply a 180° phase shift to that input stream, and play it back out. How can I shift the phase of the audio unit's output by 180 degrees?

Here is the code I use to initialize the session and capture the audio (the sample rate is set to 44.1 kHz):

OSStatus status = noErr; 

status = AudioSessionSetActive(true); 
assert(status == noErr); 

UInt32 category = kAudioSessionCategory_PlayAndRecord; 
status = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(UInt32), &category); 
assert(status == noErr); 

float aBufferLength = 0.002902; // in seconds (~128 frames at 44.1 kHz)


status = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
           sizeof(aBufferLength), &aBufferLength); 

assert(status == noErr); 

AudioComponentDescription desc; 
desc.componentType = kAudioUnitType_Output; 
desc.componentSubType = kAudioUnitSubType_RemoteIO; 
desc.componentFlags = 0; 
desc.componentFlagsMask = 0; 
desc.componentManufacturer = kAudioUnitManufacturer_Apple; 

// get AU component 
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc); 

// create audio unit by component 
status = AudioComponentInstanceNew(inputComponent, &_audioState->audioUnit); 
assert(status == noErr); 

// enable recording on the input bus (bus 1)
UInt32 flag = 1; 
status = AudioUnitSetProperty(_audioState->audioUnit, 
           kAudioOutputUnitProperty_EnableIO, 
           kAudioUnitScope_Input, 
           1, /*input*/ 
           &flag, 
           sizeof(flag)); 
assert(status == noErr); 

// enable playback on the output bus (bus 0)
status = AudioUnitSetProperty(_audioState->audioUnit, 
           kAudioOutputUnitProperty_EnableIO, 
           kAudioUnitScope_Output, 
           0, /*output*/ 
           &flag, 
           sizeof(flag)); 

assert(status == noErr); 


// Fetch sample rate, in case we didn't get quite what we requested
Float64 achievedSampleRate;
UInt32 size = sizeof(achievedSampleRate);
status = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareSampleRate, &size, &achievedSampleRate);
assert(status == noErr);
if (achievedSampleRate != SAMPLE_RATE) {
    NSLog(@"Hardware sample rate is %f (requested %f)", achievedSampleRate, (Float64)SAMPLE_RATE);
} else {
    NSLog(@"Hardware sample rate is %f", achievedSampleRate);
}


// specify stream format for recording 
AudioStreamBasicDescription audioFormat; 
audioFormat.mSampleRate = achievedSampleRate; 
audioFormat.mFormatID = kAudioFormatLinearPCM; 
audioFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger; 
audioFormat.mFramesPerPacket = 1; 
audioFormat.mChannelsPerFrame = 1; 
audioFormat.mBitsPerChannel = 16; 
audioFormat.mBytesPerPacket = 2; 
audioFormat.mBytesPerFrame = 2; 

// set the format on the output scope of the input bus (audio coming from the mic)
status = AudioUnitSetProperty(_audioState->audioUnit, 
           kAudioUnitProperty_StreamFormat, 
           kAudioUnitScope_Output, 
           kInputBus, 
           &audioFormat, 
           sizeof(audioFormat)); 

assert(status == noErr); 

// set the format on the input scope of the output bus (audio going to the speaker)
status = AudioUnitSetProperty(_audioState->audioUnit, 
           kAudioUnitProperty_StreamFormat, 
           kAudioUnitScope_Input, 
           kOutputBus, 
           &audioFormat, 
           sizeof(audioFormat)); 
assert(status == noErr); 

AURenderCallbackStruct callbackStruct; 
memset(&callbackStruct, 0, sizeof(AURenderCallbackStruct)); 
callbackStruct.inputProc = RenderCallback; 
callbackStruct.inputProcRefCon = _audioState; 

// set input callback 
status = AudioUnitSetProperty(_audioState->audioUnit, 
           kAudioOutputUnitProperty_SetInputCallback, 
           kAudioUnitScope_Global, 
           kInputBus, 
           &callbackStruct, 
           sizeof(callbackStruct)); 
assert(status == noErr); 

callbackStruct.inputProc = PlaybackCallback; 
callbackStruct.inputProcRefCon = _audioState; 

// set Render callback for output 
status = AudioUnitSetProperty(_audioState->audioUnit, 
           kAudioUnitProperty_SetRenderCallback, 
           kAudioUnitScope_Global, 
           kOutputBus, 
           &callbackStruct, 
           sizeof(callbackStruct)); 
assert(status == noErr); 

flag = 0; 

// flag is now 0: tell the unit NOT to allocate its own input buffer (we supply our own below)
status = AudioUnitSetProperty(_audioState->audioUnit, 
           kAudioUnitProperty_ShouldAllocateBuffer, 
           kAudioUnitScope_Output, 
           kInputBus, 
           &flag, 
           sizeof(flag)); 
assert(status == noErr); 

// supply our own 256-frame mono buffer instead
_audioState->audioBuffer.mNumberChannels = 1;
_audioState->audioBuffer.mDataByteSize = 256 * 2; // 256 frames x 2 bytes per sample
_audioState->audioBuffer.mData = malloc(256 * 2);

// initialize the audio unit 
status = AudioUnitInitialize(_audioState->audioUnit); 
assert(status == noErr); 
} 
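
One thing worth noting: nothing in the snippet actually starts the unit, so neither callback will fire until the standard RemoteIO start call runs after AudioUnitInitialize (presumably elsewhere in the poster's code):

status = AudioOutputUnitStart(_audioState->audioUnit);
assert(status == noErr);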

Does anyone know of a way to shift the phase in code so as to create a destructive (cancelling) sine wave? I've heard of using vDSP for band-pass filtering, but I'm not sure...

180 degrees is easy. You just flip the sign of the input signal, so a sample of value x becomes -x and vice versa. – jaket
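
For the 16-bit mono format configured above, that sign flip is a short loop over the render buffer. A minimal sketch, assuming the same AudioBufferList plumbing as the setup code (the callback body and the AudioUnitRender step are illustrative, not the poster's actual code):

static OSStatus PlaybackCallback(void *inRefCon,
           AudioUnitRenderActionFlags *ioActionFlags,
           const AudioTimeStamp *inTimeStamp,
           UInt32 inBusNumber,
           UInt32 inNumberFrames,
           AudioBufferList *ioData) {
    // ...first fill ioData with the latest microphone samples,
    // e.g. via AudioUnitRender() on the input bus, then invert:
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        SInt16 *samples = (SInt16 *)ioData->mBuffers[b].mData;
        UInt32 count = ioData->mBuffers[b].mDataByteSize / sizeof(SInt16);
        for (UInt32 i = 0; i < count; i++) {
            // polarity inversion == a 180° phase shift at every frequency;
            // guard -32768, the one SInt16 value whose negation overflows
            samples[i] = (samples[i] == -32768) ? 32767 : -samples[i];
        }
    }
    return noErr;
}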

Your question title asks how to do a 180-degree phase shift (a.k.a. polarity inversion), but your last sentence says you want to change the phase and create a destructive sine wave. You could find the fundamental frequency, locate its peaks, and sum in an inverted sine wave, but it would sound like a pure tone mixed in, since your fundamental won't be a sine wave. You should clarify what you are trying to do. Do you want to eliminate feedback, cancel a sound, cancel background noise, etc.? What is the goal? Because your approach is odd. – jaybers

Active noise cancellation over the sampled band. Clearly I'm not a Core Audio/audio-unit person at all; I come from a science background adjacent to electrical engineering, and I genuinely feel like I'm groping for a light switch in the dark. – patrickjquinn

Answers

Unless you know the latency from the microphone to the input buffer and from the output buffer to the speaker, know the frequencies you want to cancel, and know that those frequencies are stationary over that interval, you cannot reliably create a 180-degree phase shift for cancellation purposes. Instead, you will be trying to cancel sound that happened a dozen or more milliseconds earlier, and if the frequency content has changed in the meantime, you may end up adding to the sound rather than cancelling it. Likewise, if the distances between the sound source, the speaker, and the listener change by a significant fraction of a wavelength, the speaker output may end up doubling the loudness of the source instead of cancelling it. For a 1 kHz sound, that is a 6-inch movement.
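
(That 6-inch figure is half a wavelength, assuming a speed of sound of about 343 m/s: λ = c/f = 343 m/s ÷ 1000 Hz ≈ 0.34 m ≈ 13.5 in, so λ/2 ≈ 6.75 in.)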

Active noise cancellation requires very accurate knowledge of the in-and-out time lags, including the microphone, input filter, and speaker responses, as well as the ADC/DAC latencies. Apple does not specify these, and they may well differ between iOS device models. Given accurate knowledge of those latencies, plus an accurate frequency analysis of the source signal (via FFT), a phase shift somewhat different from 180 degrees may be needed at each frequency in the attempt to cancel a stationary source.
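
As an illustration of that per-frequency idea, here is a sketch of a uniform phase rotation across the bins of a real FFT using the Accelerate framework's vDSP (the function name, the power-of-two buffer length, and the single shared phase are all assumptions for the example; an actual canceller would derive a different, latency-dependent phase for each bin):

#include <Accelerate/Accelerate.h>
#include <math.h>
#include <stdlib.h>

// Rotate every frequency bin of a real signal by `phase` radians.
// n must be a power of two.
void phase_shift_buffer(float *signal, int n, float phase) {
    vDSP_Length log2n = (vDSP_Length)lrintf(log2f((float)n));
    FFTSetup setup = vDSP_create_fftsetup(log2n, kFFTRadix2);

    float *realp = calloc(n / 2, sizeof(float));
    float *imagp = calloc(n / 2, sizeof(float));
    DSPSplitComplex split = { realp, imagp };

    // pack the real signal into split-complex form, then forward FFT
    vDSP_ctoz((const DSPComplex *)signal, 2, &split, 1, n / 2);
    vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Forward);

    // multiply each bin by e^(i*phase); bin 0 packs DC (realp[0]) and
    // Nyquist (imagp[0]) together, so this sketch leaves it untouched
    float c = cosf(phase), s = sinf(phase);
    for (int k = 1; k < n / 2; k++) {
        float re = split.realp[k], im = split.imagp[k];
        split.realp[k] = re * c - im * s;
        split.imagp[k] = re * s + im * c;
    }

    // inverse FFT, unpack, and undo vDSP's 2*n round-trip scaling
    vDSP_fft_zrip(setup, &split, 1, log2n, kFFTDirection_Inverse);
    vDSP_ztoc(&split, 1, (DSPComplex *)signal, 2, n / 2);
    float scale = 1.0f / (2.0f * n);
    vDSP_vsmul(signal, 1, &scale, signal, 1, n);

    free(realp);
    free(imagp);
    vDSP_destroy_fftsetup(setup);
}

With phase = M_PI this degenerates to the same polarity inversion as the sign flip shown earlier, which makes a useful sanity check.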