Swift SFSpeechRecognizer: append to existing UITextView content

Date: 2018-11-21 15:37:47

Tags: swift sfspeechrecognizer

I'm using SFSpeechRecognizer in my app. It works well and, thanks to a dedicated button (start speech recognition), it makes it much easier for end users to enter notes in a UITextView.

However, if the user first types some text manually and then starts speech recognition, the previously typed text is deleted. The same thing happens if the user runs speech recognition twice on the same UITextView (the user dictates the first part of the text, stops recording, then starts recording again): the previous text is erased.

So I would like to know how to append the text recognized by SFSpeechRecognizer to the existing text.

Here is my code:

func recordAndRecognizeSpeech(){

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    let recognitionRequest = self.recognitionRequest
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        self.decaration.text = (result?.bestTranscription.formattedString)!

        isFinal = (result?.isFinal)!
        let bottom = NSMakeRange(self.decaration.text.characters.count - 1, 1)
        self.decaration.scrollRangeToVisible(bottom)

        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionTask = nil
            self.recognitionRequest.endAudio()
            self.oBtSpeech.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest.append(buffer)
    }
    audioEngine.prepare()

    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }

}

I tried updating

self.decaration.text = (result?.bestTranscription.formattedString)!

with

self.decaration.text += (result?.bestTranscription.formattedString)!

But then each recognized sentence ends up doubled in the text view.
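
To illustrate, here is a standalone sketch (not my app code, just my assumption about how the partial results grow) showing why appending every callback duplicates the text:

// Each partial result already contains the whole transcription so far,
// so appending every callback repeats the earlier words.
let partialResults = ["Hello", "Hello world", "Hello world again"]

var appended = ""   // what += produces in the result handler
var replaced = ""   // what plain assignment produces
for partial in partialResults {
    appended += partial
    replaced = partial
}
print(appended)   // "HelloHello worldHello world again" -> duplicated
print(replaced)   // "Hello world again"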

Any idea how I can do that?

1 answer:

Answer 0 (score: 1):

Try saving the text before starting the recognition system.

func recordAndRecognizeSpeech(){
    // one change here
    let defaultText = self.decaration.text

    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(AVAudioSessionCategoryRecord)
        try audioSession.setMode(AVAudioSessionModeMeasurement)
        try audioSession.setActive(true, with: .notifyOthersOnDeactivation)
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    self.recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    guard let inputNode = audioEngine.inputNode else {
        fatalError("Audio engine has no input node")
    }
    let recognitionRequest = self.recognitionRequest
    recognitionRequest.shouldReportPartialResults = true

    recognitionTask = speechRecognizer?.recognitionTask(with: recognitionRequest, resultHandler: { (result, error) in
        var isFinal = false
        // one change here
        self.decaration.text = defaultText + " " + (result?.bestTranscription.formattedString)!

        isFinal = (result?.isFinal)!
        let bottom = NSMakeRange(self.decaration.text.characters.count - 1, 1)
        self.decaration.scrollRangeToVisible(bottom)

        if error != nil || isFinal {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            self.recognitionTask = nil
            self.recognitionRequest.endAudio()
            self.oBtSpeech.isEnabled = true
        }
    })
    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest.append(buffer)
    }
    audioEngine.prepare()

    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }
}

result?.bestTranscription.formattedString returns the whole recognized phrase so far; that is why you have to reset self.decaration.text every time you receive a response from SFSpeechRecognizer instead of appending to it.
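
If you want to avoid the extra leading space when the text view starts out empty, you could merge the two strings with a small helper instead of plain concatenation. This is only a sketch reusing the defaultText / decaration names from the code above; adapt it to your own declarations.

// Hypothetical helper, not part of the code above: merges the text saved
// before recording with the latest transcription, skipping the separator
// when there was no previous text.
func mergedText(previous: String, transcription: String) -> String {
    return previous.isEmpty ? transcription : previous + " " + transcription
}

// Inside the result handler it would be used roughly like this:
// self.decaration.text = mergedText(previous: defaultText,
//                                   transcription: result?.bestTranscription.formattedString ?? "")
print(mergedText(previous: "", transcription: "Hello world"))    // "Hello world"
print(mergedText(previous: "Typed note.", transcription: "Hi"))  // "Typed note. Hi"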