android - How to build BufferReceived() to capture voice using RecognizerIntent?


I am working on an Android application that uses RecognizerIntent.ACTION_RECOGNIZE_SPEECH. The problem is that I don't know how to create a buffer that captures the voice the user inputs. I have read a lot on Stack Overflow, but I still don't understand how to include the buffer and the recognition service call in my code, nor how to play back the contents saved in the buffer.

This is my code:

    import java.util.ArrayList;

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.speech.RecognitionListener;
    import android.speech.RecognizerIntent;
    import android.speech.SpeechRecognizer;
    import android.util.Log;
    import android.view.View;
    import android.view.View.OnClickListener;
    import android.widget.ListView;
    import android.widget.TextView;

    public class Voice extends Activity implements OnClickListener {

        byte[] sig = new byte[500000];
        int sigPos = 0;
        ListView lv;
        static final int CHECK = 0;
        protected static final String TAG = null;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.voice);

            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE,
                    "com.domain.app");

            SpeechRecognizer recognizer = SpeechRecognizer
                    .createSpeechRecognizer(this.getApplicationContext());

            RecognitionListener listener = new RecognitionListener() {

                @Override
                public void onResults(Bundle results) {
                    ArrayList<String> voiceResults = results
                            .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                    if (voiceResults == null) {
                        Log.e(TAG, "No voice results");
                    } else {
                        Log.d(TAG, "Printing matches: ");
                        for (String match : voiceResults) {
                            Log.d(TAG, match);
                        }
                    }
                }

                @Override
                public void onReadyForSpeech(Bundle params) {
                    Log.d(TAG, "Ready for speech");
                }

                @Override
                public void onError(int error) {
                    Log.d(TAG, "Error listening for speech: " + error);
                }

                @Override
                public void onBeginningOfSpeech() {
                    Log.d(TAG, "Speech starting");
                }

                @Override
                public void onBufferReceived(byte[] buffer) {
                    TextView display = (TextView) findViewById(R.id.text1);
                    display.setText("true");
                    // Append the received chunk to the global byte array
                    System.arraycopy(buffer, 0, sig, sigPos, buffer.length);
                    sigPos += buffer.length;
                }

                @Override
                public void onEndOfSpeech() {
                }

                @Override
                public void onEvent(int eventType, Bundle params) {
                }

                @Override
                public void onPartialResults(Bundle partialResults) {
                }

                @Override
                public void onRmsChanged(float rmsdB) {
                }
            };

            recognizer.setRecognitionListener(listener);
            recognizer.startListening(intent);

            startActivityForResult(intent, CHECK);
        }

        @Override
        public void onClick(View arg0) {
        }
    }

The Android speech recognition API (as of API level 17) does not offer a reliable way to capture audio.

You can use the "buffer received" callback, but note the following.

RecognitionListener says about onBufferReceived():

More sound has been received. The purpose of this function is to allow giving feedback to the user regarding the captured audio. There is no guarantee that this method will be called.

buffer: a buffer containing a sequence of big-endian 16-bit integers representing a single channel audio stream. The sample rate is implementation dependent.

And RecognitionService.Callback says about bufferReceived():

The service should call this method when sound has been received. The purpose of this function is to allow giving feedback to the user regarding the captured audio.

buffer: a buffer containing a sequence of big-endian 16-bit integers representing a single channel audio stream. The sample rate is implementation dependent.

So this callback is for feedback regarding the captured audio, and not necessarily the captured audio itself, i.e. it might be a reduced version meant for visualization purposes. Also, "there is no guarantee that this method will be called", i.e. Google Voice Search might provide it in v1 and then decide to remove it in v2.
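For what it's worth, here is a minimal sketch of that "feedback" use: computing a rough level from one buffer to drive e.g. a simple VU meter. It assumes the big-endian 16-bit mono format quoted above (which you should verify on your device); the class and method names are made up for illustration:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.ShortBuffer;

    public final class BufferLevel {

        // Returns the peak amplitude (0..1) of one onBufferReceived() chunk.
        public static float peak(byte[] buffer) {
            ShortBuffer samples = ByteBuffer.wrap(buffer)
                    .order(ByteOrder.BIG_ENDIAN) // as documented; verify on the device
                    .asShortBuffer();
            int max = 0;
            while (samples.hasRemaining()) {
                max = Math.max(max, Math.abs((int) samples.get()));
            }
            return max / 32768f;
        }
    }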

Note also that this method can be called multiple times during recognition. It is not documented whether the buffer represents the complete recorded audio or only the snippet received since the last call. (I'd assume the latter, but you need to test it with your speech recognizer.)

So, in your implementation you should copy the buffer into a global variable to be saved e.g. as a WAV file once the recognition has finished.
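If the callback does deliver audio on your device, something along these lines could work for collecting the chunks and playing them back. This is only a sketch under the assumptions above: 16-bit mono PCM, a guessed sample rate of 8000 Hz (it is implementation dependent, so verify it), and made-up class and method names. If you prefer a file, you could prepend a WAV header to the same bytes instead of playing them directly:

    import java.io.ByteArrayOutputStream;

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class CapturedAudio {

        // Assumed sample rate; the documentation only says it is implementation dependent.
        private static final int SAMPLE_RATE_HZ = 8000;

        // Grows as needed, unlike a fixed-size byte[].
        private final ByteArrayOutputStream pcmBuffer = new ByteArrayOutputStream();

        // Call this from RecognitionListener.onBufferReceived().
        public void append(byte[] buffer) {
            pcmBuffer.write(buffer, 0, buffer.length);
        }

        // Call this after onResults()/onEndOfSpeech() to play back whatever was captured.
        public void play() {
            byte[] pcm = pcmBuffer.toByteArray();
            if (pcm.length == 0) {
                return; // onBufferReceived() was never called
            }
            // The docs describe the samples as big-endian; AudioTrack expects
            // native-order (little-endian) PCM, so swap the bytes of each sample
            // if playback sounds like noise.
            AudioTrack track = new AudioTrack(
                    AudioManager.STREAM_MUSIC,
                    SAMPLE_RATE_HZ,
                    AudioFormat.CHANNEL_OUT_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    pcm.length,
                    AudioTrack.MODE_STATIC);
            track.write(pcm, 0, pcm.length);
            track.play();
        }
    }

Accumulating into a ByteArrayOutputStream also avoids having to guess the total size up front, unlike the fixed 500000-byte array in your code.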

