
I am making an app that can do both text-to-speech and speech-to-text. iOS: AVSpeechSynthesizer doesn't work after recording with SFSpeechRecognizer

The problem I'm running into is that text-to-speech with AVSpeechSynthesizer works fine at first. But after I use SFSpeechRecognizer to record and transcribe speech, text-to-speech stops working (i.e., there is no spoken reply).

I'm also quite new to Swift. I got this code from a few different tutorials and tried to merge them together.

Here is my code:

    import AVFoundation
    import Speech

    private var speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
    private var recognitionRequest: SFSpeechAudioBufferRecognitionRequest?
    private var recognitionTask: SFSpeechRecognitionTask?
    private var audioEngine = AVAudioEngine()

    @objc(speak:location:date:callback:)
    func speak(name: String, location: String, date: NSNumber, _ callback: @escaping (NSObject) -> ()) {
        let utterance = AVSpeechUtterance(string: name)
        let synthesizer = AVSpeechSynthesizer()
        synthesizer.speak(utterance)
    }

    @available(iOS 10.0, *)
    @objc(startListening:location:date:callback:)
    func startListening(name: String, location: String, date: NSNumber, _ callback: @escaping (NSObject) -> ()) {
        if audioEngine.isRunning {
            audioEngine.stop()
            recognitionRequest?.endAudio()
        } else {
            if recognitionTask != nil { //1
                recognitionTask?.cancel()
                recognitionTask = nil
            }

            let audioSession = AVAudioSession.sharedInstance() //2
            do {
                try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
                try audioSession.setMode(AVAudioSessionModeMeasurement)
                try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
            } catch {
                print("audioSession properties weren't set because of an error.")
            }

            recognitionRequest = SFSpeechAudioBufferRecognitionRequest() //3

            guard let inputNode = audioEngine.inputNode else {
                fatalError("Audio engine has no input node")
            } //4

            guard let recognitionRequest = recognitionRequest else {
                fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
            } //5

            recognitionRequest.shouldReportPartialResults = true //6

            recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in //7
                var isFinal = false //8

                if let result = result {
                    print(result.bestTranscription.formattedString) //9
                    isFinal = result.isFinal
                }

                if error != nil || isFinal { //10
                    self.audioEngine.stop()
                    inputNode.removeTap(onBus: 0)
                    self.recognitionRequest = nil
                    self.recognitionTask = nil
                }
            })

            let recordingFormat = inputNode.outputFormat(forBus: 0) //11
            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
                self.recognitionRequest?.append(buffer)
            }

            audioEngine.prepare() //12

            do {
                try audioEngine.start()
            } catch {
                print("audioEngine couldn't start because of an error.")
            }
        }
    }

Where are you calling the 'speak' function? –


Did that solve your problem @Samuel Mendes –

1 Answer

They both use an AVAudioSession.

For the AVSpeechSynthesizer I guess it has to be set to:

_audioSession.SetCategory(AVAudioSessionCategory.Playback, 
AVAudioSessionCategoryOptions.MixWithOthers); 

and for the SFSpeechRecognizer:

_audioSession.SetCategory(AVAudioSessionCategory.PlayAndRecord, 
AVAudioSessionCategoryOptions.MixWithOthers); 

Hope it helps.
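The snippets above are in Xamarin (C#) syntax. A minimal Swift sketch of the same idea, matching the iOS 10-era AVFoundation API the question uses, might look like this; the two helper function names are just for illustration, the key point is reconfiguring the shared session before each operation (the `.measurement` mode set for recognition changes the output routing, which is why the synthesizer seems to go silent afterwards):

```swift
import AVFoundation

// Before speaking: switch the shared session back to playback.
func prepareSessionForSpeech() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayback,
                                with: .mixWithOthers)
        try session.setActive(true)
    } catch {
        print("Could not configure session for playback: \(error)")
    }
}

// Before recognizing: switch to play-and-record.
func prepareSessionForRecognition() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayAndRecord,
                                with: .mixWithOthers)
        try session.setActive(true)
    } catch {
        print("Could not configure session for recording: \(error)")
    }
}
```

Calling the first helper at the top of `speak(...)` and the second at the top of `startListening(...)` would keep the session in a state each API expects, instead of leaving the synthesizer stuck with the recognizer's measurement-mode configuration.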