Using the SpeechRecognizer API directly - onResults() keeps returning null

Date: 2011-04-18 12:05:02

Tags: android speech-recognition voice-recognition

I have been trying to follow the example in this post.

Since I am not trying to implement this in a service, but rather in a standard activity, I do not run into the problem described in the aforementioned post.

However, I keep getting "no speech results": as implemented in that post, getStringArrayList(RecognizerIntent.EXTRA_RESULTS) returns null.

Obviously, something more needs to be done in addition to:

recognizer.setRecognitionListener(listener);
recognizer.startListening(intent);    

What am I missing?

Do I need startActivityForResult() in addition to startListening()? If so, I have already tried that, but it launches the full Google Voice Search activity (which is exactly what I am trying to avoid, as @vladimir.vivien wrote here), and with two recognizers running at the same time this only creates more problems...

At first I thought the missing piece was the actual submission to Google's server, but when I examined the LogCat output from the start of the speech-recognition session to its end (see below), I saw that it does in fact create a TCP session to http://www.google.com/m/voice-search.

So the obvious question is: what am I missing?

04-18 07:02:17.770: INFO/RecognitionController(623): startRecognition(#Intent;action=android.speech.action.RECOGNIZE_SPEECH;S.android.speech.extra.LANGUAGE_MODEL=free_form;S.android.speech.extra.PROMPT=LEARNSR;S.calling_package=com.example.learnsr.SrActivity;end)
04-18 07:02:17.770: INFO/RecognitionController(623): State change: STARTING -> STARTING
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:17.780: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:17.780: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:17.780: INFO/AudioService(164):  AudioFocus  requestAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:17.780: DEBUG/AudioFlinger(121): setParameters(): io 3, keyvalue routing=262144;vr_mode=1, tid 155, calling tid 121
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): do input routing device 40000
04-18 07:02:17.790: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:17.790: INFO/RecognitionController(623): State change: STARTING -> RECOGNIZING
04-18 07:02:17.790: INFO/ServerConnectorImpl(623): Starting TCP session, url=http://www.google.com/m/voice-search
04-18 07:02:17.930: DEBUG/ServerConnectorImpl(623): Created session a7918495c042db1746d3e09514baf621
04-18 07:02:17.930: INFO/ServerConnectorImpl(623): Creating TCP connection to 74.125.115.126:19294
04-18 07:02:17.980: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:17.980: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:18.070: INFO/ServerConnectorImpl(623): startRecognize RecognitionParameters{session=a7918495c042db1746d3e09514baf621,request=1}
04-18 07:02:18.390: INFO/RecognitionController(623): onReadyForSpeech, noise level:10.29969, snr:-0.42756215
04-18 07:02:19.760: DEBUG/dalvikvm(659): GC_EXPLICIT freed 5907 objects / 353648 bytes in 67ms
04-18 07:02:21.030: INFO/AudioHardwareQSD(121): AudioHardware pcm playback is going to standby.
04-18 07:02:24.090: INFO/RecognitionController(623): onBeginningOfSpeech
04-18 07:02:24.760: DEBUG/dalvikvm(669): GC_EXPLICIT freed 1141 objects / 74296 bytes in 48ms
04-18 07:02:25.080: INFO/RecognitionController(623): onEndOfSpeech
04-18 07:02:25.080: INFO/AudioService(164):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.140: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.200: INFO/RecognitionController(623): State change: RECOGNIZING -> RECOGNIZED
04-18 07:02:25.200: INFO/RecognitionController(623): Final state: RECOGNIZED
04-18 07:02:25.260: INFO/ServerConnectorImpl(623): ClientReport{session_id=a7918495c042db1746d3e09514baf621,request_id=1,application_id=intent-speech-api,client_perceived_request_status=0,request_ack_latency_ms=118,total_latency_ms=7122,user_perceived_latency_ms=116,network_type=1,endpoint_trigger_type=3,}
04-18 07:02:25.260: INFO/AudioService(164):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.270: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:25.270: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:25.270: INFO/RecognitionController(623): State change: RECOGNIZED -> PAUSED
04-18 07:02:25.270: INFO/AudioService(164):  AudioFocus  abandonAudioFocus() from android.media.AudioManager@46036948
04-18 07:02:25.270: INFO/ClientReportSender(623): Sending 1 client reports over HTTP
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.
04-18 07:02:25.280: DEBUG/AudioFlinger(121): setParameters(): io 3, keyvalue routing=0, tid 155, calling tid 121
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): Routing audio to Speakerphone
04-18 07:02:25.280: DEBUG/AudioHardwareQSD(121): Switching audio device to 
04-18 07:02:25.280: DEBUG/AudioHardwareQSD(121): Speakerphone
04-18 07:02:25.280: INFO/AudioHardwareQSD(121): AudioHardware PCM record is going to standby.

3 Answers:

Answer 0 (score: 2)

According to the documentation of the listener, you need to request the results using SpeechRecognizer.RESULTS_RECOGNITION from the bundle that is passed to onResults(). Have you tried that?

RecognizerIntent.EXTRA_RESULTS is used when you launch the RECOGNIZE_SPEECH intent instead.
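For illustration, a minimal onResults() along these lines (inside your RecognitionListener; the log tag is just a placeholder, and java.util.ArrayList, android.util.Log and android.speech.SpeechRecognizer are assumed to be imported) reads the matches with the SpeechRecognizer key:

  // Read the results with the SpeechRecognizer key,
  // not RecognizerIntent.EXTRA_RESULTS.
  @Override
  public void onResults(Bundle results) {
      ArrayList<String> matches =
              results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
      if (matches != null && !matches.isEmpty()) {
          Log.d("SpeechDemo", "Best match: " + matches.get(0)); // hypothetical tag
      }
  }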

Answer 1 (score: 1)

This code works perfectly:

package com.example.android.voicerecognitionservice;

import java.util.ArrayList;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.media.AudioManager;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.widget.TextView;
import android.widget.Toast;

public class VoiceRecognitionSettings extends Activity implements RecognitionListener {
  /** Text display */
  private TextView blurb;

  /** Parameters for recognition */
  private Intent recognizerIntent;

  /** The ear */
  private SpeechRecognizer recognizer;

  @Override
  public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.speech);

    blurb = (TextView) findViewById(R.id.text1);

  //  muteSystemAudio();

    recognizer = SpeechRecognizer.createSpeechRecognizer(this);
    recognizer.setRecognitionListener(this);

    recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, "com.example.android.voicerecognitionservice");
    recognizerIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);

    recognizer.startListening(recognizerIntent);
  }

  @Override
  public void onBeginningOfSpeech() {
    blurb.append("[");
  }

  @Override
  public void onBufferReceived(byte[] arg0) {
  }

  @Override
  public void onEndOfSpeech() {
    blurb.append("] ");
  }

  @Override
  public void onError(int arg0) {
  }

  @Override
  public void onEvent(int arg0,
                      Bundle arg1) {
  }

  @Override
  public void onPartialResults(Bundle arg0) {
  }

  @Override
  public void onReadyForSpeech(Bundle arg0) {
    blurb.append("> ");
  }

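  // Note: results are retrieved with SpeechRecognizer.RESULTS_RECOGNITION here;
  // RecognizerIntent.EXTRA_RESULTS only applies to the activity-based flow.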
  @Override
  public void onResults(Bundle bundle) {
    ArrayList<String> results = bundle.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    blurb.append(results.toString() + "\n");

//    if (results!=null){
//        Toast.makeText(VoiceRecognitionSettings.this,results.toString()+"55", Toast.LENGTH_LONG).show();
// 
//    }else{
//        Toast.makeText(VoiceRecognitionSettings.this,"vide", Toast.LENGTH_LONG).show();
//
//    }
    recognizer.startListening(recognizerIntent);


  }

  @Override
  public void onRmsChanged(float arg0) {
  }

  public void muteSystemAudio(){
        AudioManager amanager=(AudioManager)getSystemService(Context.AUDIO_SERVICE);
        amanager.setStreamMute(AudioManager.STREAM_SYSTEM, true);
    }
}

Give it a try.

Answer 2 (score: 0)

This does not directly answer your question, but I suggest implementing what you want in a different way.

See satur9nine's comment. Why do you want to write a SpeechRecognizer class? The other poster tried to do it as a service, but since you are doing this from an activity you can simply launch an intent, which will save you a lot of effort.

Here are two API tutorial links from Google (I am just re-posting them):

http://developer.android.com/resources/articles/speech-input.html

http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html
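For reference, a minimal sketch of that intent-based flow inside an activity (the request code and prompt text are arbitrary placeholders) would look roughly like this:

  private static final int VOICE_REQUEST_CODE = 1234; // arbitrary request code

  private void startVoiceRecognition() {
      Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
      intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
              RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
      intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
      startActivityForResult(intent, VOICE_REQUEST_CODE);
  }

  @Override
  protected void onActivityResult(int requestCode, int resultCode, Intent data) {
      super.onActivityResult(requestCode, resultCode, data);
      if (requestCode == VOICE_REQUEST_CODE && resultCode == RESULT_OK) {
          // With the activity-based API, results arrive under RecognizerIntent.EXTRA_RESULTS
          ArrayList<String> matches =
                  data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
          // matches holds the recognition hypotheses, best match first
      }
  }

The trade-off, as noted in the question, is that this launches the built-in Google Voice Search UI rather than listening silently in the background.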