
iOS: Audio Unit RemoteIO not working on iPhone

I'm trying to create my own custom sound-effect unit driven by microphone input. The app takes input from the microphone and plays it back through the speaker at the same time. In the simulator I can apply the effect and everything works, but when I test on an actual iPhone I hear no sound at all. I'm posting my code in case someone can help:
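For reference, kOutputBus, kInputBus, the audioUnit instance variable, and the checkStatus helper used below are not shown in this snippet; a minimal sketch of what is presumably declared elsewhere in the class (RemoteIO uses bus 0 for output to the speaker and bus 1 for input from the microphone):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>

// RemoteIO bus numbers: bus 0 feeds the speaker, bus 1 receives the mic.
#define kOutputBus 0
#define kInputBus 1

// Instance variable assumed on the class: the RemoteIO unit itself.
// AudioUnit audioUnit;

// Simple error check used after every Core Audio call.
static void checkStatus(OSStatus status) {
    if (status != noErr) {
        NSLog(@"Core Audio error: %ld", (long)status);
    }
}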

- (id) init{ 
    self = [super init]; 

    OSStatus status; 

    // Describe audio component 
    AudioComponentDescription desc; 
    desc.componentType = kAudioUnitType_Output; 
    desc.componentSubType = kAudioUnitSubType_RemoteIO; 
    desc.componentFlags = 0; 
    desc.componentFlagsMask = 0; 
    desc.componentManufacturer = kAudioUnitManufacturer_Apple; 

    // Get component 
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc); 

    // Get audio units 
    status = AudioComponentInstanceNew(inputComponent, &audioUnit); 
    checkStatus(status); 

    // Enable IO for recording 
    UInt32 flag = 1; 
    status = AudioUnitSetProperty(audioUnit, 
            kAudioOutputUnitProperty_EnableIO, 
            kAudioUnitScope_Input, 
            kInputBus, 
            &flag, 
            sizeof(flag)); 
    checkStatus(status); 

    // Enable IO for playback 
    status = AudioUnitSetProperty(audioUnit, 
            kAudioOutputUnitProperty_EnableIO, 
            kAudioUnitScope_Output, 
            kOutputBus, 
            &flag, 
            sizeof(flag)); 
    checkStatus(status); 

    // Describe format 
    AudioStreamBasicDescription audioFormat; 
    audioFormat.mSampleRate   = 44100.00; 
    audioFormat.mFormatID   = kAudioFormatLinearPCM; 
    audioFormat.mFormatFlags  = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked; 
    audioFormat.mFramesPerPacket = 1; 
    audioFormat.mChannelsPerFrame = 1; 
    audioFormat.mBitsPerChannel  = 16; 
    audioFormat.mBytesPerPacket  = 2; 
    audioFormat.mBytesPerFrame  = 2; 


    // Apply format 
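    // Note on scope/bus below: the format on the OUTPUT scope of the INPUT
    // bus (bus 1) is what the microphone delivers to the app, and the format
    // on the INPUT scope of the OUTPUT bus (bus 0) is what the app delivers
    // to the speaker. Both sides use the same 16-bit mono PCM format here.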
    status = AudioUnitSetProperty(audioUnit, 
            kAudioUnitProperty_StreamFormat, 
            kAudioUnitScope_Output, 
            kInputBus, 
            &audioFormat, 
            sizeof(audioFormat)); 
    checkStatus(status); 
    status = AudioUnitSetProperty(audioUnit, 
            kAudioUnitProperty_StreamFormat, 
            kAudioUnitScope_Input, 
            kOutputBus, 
            &audioFormat, 
            sizeof(audioFormat)); 
    checkStatus(status); 


    // Set input callback 
    AURenderCallbackStruct callbackStruct; 
    callbackStruct.inputProc = recordingCallback; 
    callbackStruct.inputProcRefCon = self; 
    status = AudioUnitSetProperty(audioUnit, 
            kAudioOutputUnitProperty_SetInputCallback, 
            kAudioUnitScope_Global, 
            kInputBus, 
            &callbackStruct, 
            sizeof(callbackStruct)); 
    checkStatus(status); 

    // Set output callback 
    callbackStruct.inputProc = playbackCallback; 
    callbackStruct.inputProcRefCon = self; 
    status = AudioUnitSetProperty(audioUnit, 
            kAudioUnitProperty_SetRenderCallback, 
            kAudioUnitScope_Global, 
            kOutputBus, 
            &callbackStruct, 
            sizeof(callbackStruct)); 
    checkStatus(status); 


    // Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per frame, thus 2 bytes per frame). 
    // In practice the buffers delivered contain 512 frames; if this changes it is handled in processAudio. 
    tempBuffer.mNumberChannels = 1; 
    tempBuffer.mDataByteSize = 512 * 2; 
    tempBuffer.mData = malloc(512 * 2); 

    // Initialise 
    status = AudioUnitInitialize(audioUnit); 
    checkStatus(status); 

    return self; 
} 
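
The unit doesn't start rendering until AudioOutputUnitStart is called; that happens elsewhere, after init, roughly along these lines (the method names here are just illustrative):

- (void)start { 
    // Starting the RemoteIO unit begins driving both callbacks: 
    // recordingCallback when mic data arrives, playbackCallback when the 
    // speaker needs samples. 
    OSStatus status = AudioOutputUnitStart(audioUnit); 
    checkStatus(status); 
} 

- (void)stop { 
    OSStatus status = AudioOutputUnitStop(audioUnit); 
    checkStatus(status); 
} 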

This callback is called when new audio data is available from the microphone. But when I test on the iPhone, execution never gets here:

static OSStatus recordingCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) { 
    AudioBuffer buffer; 

    buffer.mNumberChannels = 1; 
    buffer.mDataByteSize = inNumberFrames * 2; 
    buffer.mData = malloc(inNumberFrames * 2); 

    // Put buffer in a AudioBufferList 
    AudioBufferList bufferList; 
    bufferList.mNumberBuffers = 1; 
    bufferList.mBuffers[0] = buffer; 

    // Then: 
    // Obtain recorded samples 

    OSStatus status; 
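    // iosAudio is assumed to be a global/static reference to the class 
    // instance above (set elsewhere, e.g. when the object is created), 
    // exposing audioUnit and tempBuffer through properties so these C 
    // callbacks can reach them. 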

    status = AudioUnitRender([iosAudio audioUnit], 
          ioActionFlags, 
          inTimeStamp, 
          inBusNumber, 
          inNumberFrames, 
          &bufferList); 
    checkStatus(status); 

    // Now, we have the samples we just read sitting in buffers in bufferList 
    // Process the new data 
    [iosAudio processAudio:&bufferList]; 

    // release the malloc'ed data in the buffer we created earlier 
    free(bufferList.mBuffers[0].mData); 

    return noErr; 
} 
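
The playbackCallback registered earlier is not shown in this post; it copies the most recently processed samples from tempBuffer into the output buffers, roughly along these lines (assuming processAudio: writes its result into tempBuffer and that tempBuffer is exposed through a property):

static OSStatus playbackCallback(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData) { 
    // Fill each buffer the output bus asks for with the latest processed 
    // mic samples; write no more bytes than either side can hold. 
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) { 
        AudioBuffer *buffer = &ioData->mBuffers[i]; 
        UInt32 size = MIN(buffer->mDataByteSize, [iosAudio tempBuffer].mDataByteSize); 
        memcpy(buffer->mData, [iosAudio tempBuffer].mData, size); 
        buffer->mDataByteSize = size; 
    } 
    return noErr; 
} 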

Answer


I solved my problem. I just needed to initialize the AudioSession before playing/recording. I did that with the following code:

OSStatus status; 

AudioSessionInitialize(NULL, NULL, NULL, self); 
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord; 
status = AudioSessionSetProperty (kAudioSessionProperty_AudioCategory, 
           sizeof (sessionCategory), 
           &sessionCategory); 

if (status != kAudioSessionNoError) 
{ 
    if (status == kAudioServicesUnsupportedPropertyError) { 
        NSLog(@"AudioSessionSetProperty failed: unsupportedPropertyError"); 
    } else if (status == kAudioServicesBadPropertySizeError) { 
        NSLog(@"AudioSessionSetProperty failed: badPropertySizeError"); 
    } else if (status == kAudioServicesBadSpecifierSizeError) { 
        NSLog(@"AudioSessionSetProperty failed: badSpecifierSizeError"); 
    } else if (status == kAudioServicesSystemSoundUnspecifiedError) { 
        NSLog(@"AudioSessionSetProperty failed: systemSoundUnspecifiedError"); 
    } else if (status == kAudioServicesSystemSoundClientTimedOutError) { 
        NSLog(@"AudioSessionSetProperty failed: systemSoundClientTimedOutError"); 
    } else { 
        NSLog(@"AudioSessionSetProperty failed! %ld", (long)status); 
    } 
} 


AudioSessionSetActive(TRUE); 

...


Could you please share the complete code for this functionality? – iVipS
