Voice Recognition Commands Android

I've searched far and wide for some sort of solution to the issue of removing Google's Voice Recognition UI dialog when a user wants to perform a voice command, but have been unable to find one. I am trying to implement an app that displays a menu to the user; the user can either tap the options or say them out loud, which opens the corresponding pages. So far I've been unable to implement this without using Google's RecognizerIntent, but I don't want the dialog box to pop up. Does anyone have any ideas, or has anyone solved this issue or found a workaround? Thanks.
EDIT: As a compromise, maybe there is a way to move the dialog to the bottom of the screen while still keeping my menu visible?

Does "How can I use speech recognition without the annoying dialog in Android phones" help?
I'm pretty sure Nuance/Dragon charges for production or commercial applications that use their services. If this is just a demo, you may be fine with a developer account. Android's speech services are free for all Android applications.

You know that you can do this with Google's APIs.
You've probably been looking at the documentation for the speech recognition intent. Look instead at the RecognitionListener interface to the speech recognition APIs.
Here's some code to help you:

public class SpeechRecognizerExample extends Activity implements RecognitionListener {

    private SpeechRecognizer recognizer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // This would go down in your onCreate
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(this);
    }

    // Then you'd need to start it when the user clicks or selects a text field or something
    private void startListening() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        // intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "zh");
        intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getPackageName());
        recognizer.startListening(intent);
    }

    // Then you'd need to implement the RecognitionListener functions - they work much like a click listener
}
Here are the docs for RecognitionListener:
http://developer.android.com/reference/android/speech/RecognitionListener.html
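Not part of the original answer, but as a rough sketch, the RecognitionListener callbacks you end up implementing look roughly like this (the result handling is a placeholder; you also need the RECORD_AUDIO permission in your manifest):

@Override
public void onResults(Bundle results) {
    // Candidate transcriptions, best match first - compare these against your menu options
    ArrayList<String> matches =
            results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
}

@Override
public void onError(int error) {
    // e.g. SpeechRecognizer.ERROR_NO_MATCH or ERROR_SPEECH_TIMEOUT - restart listening here if you want it continuous
}

@Override public void onReadyForSpeech(Bundle params) { }
@Override public void onBeginningOfSpeech() { }
@Override public void onRmsChanged(float rmsdB) { }
@Override public void onBufferReceived(byte[] buffer) { }
@Override public void onEndOfSpeech() { }
@Override public void onPartialResults(Bundle partialResults) { }
@Override public void onEvent(int eventType, Bundle params) { }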

Related

What is the purpose of EXTRA_CALLING_PACKAGE in Android Studio?

I'm now writing speech-to-text code in Android Studio and I have a question about some of the lines:

intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getPackageName());
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US");

The first line sets up the intent for getting the user's speech input, and the last one sets the language we are going to use. But what about the second line?
Even though I read the public documentation, I cannot understand it:
'The extra key used in an intent to the speech recognizer for voice search'
I understand it like this: after getting the speech input from the first line, use the input in the intent - but what kind of intent? - to the speech recognizer for voice search.
But I'm still not sure.
Can you give me an explanation?
Thank you in advance.
It's a flag used by the voice search API to identify the caller of the API (your application), so that voice search can handle the callbacks and so on for your app, based on your package name.

Voice recognition - with templates (Android Wear)

I am trying to develop an application for Android Wear that, on a button click, asks the user to speak something and sends it to a web server. I also need a list of pre-defined templates, similar to how Hangouts works.
What I have tried:
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Send to server");
startActivityForResult(intent, SPEECH_REQUEST_CODE);
This works, but I cannot supply the user with a set of pre-defined templates.
Reading this - https://developer.android.com/training/wearables/notifications/voice-input.html - I see that it is possible to do this in a notification... but that will not be in the foreground; I need this UI to be modal/blocking, so a notification is not good for my use case.
What are my options? How can I implement this?
Unfortunately, other than Receiving Voice Input in a Notification, there is no way to use voice recognition with pre-defined text responses.
Based on the documentation: Adding Voice Capabilities
Voice actions are an important part of the wearable experience. They let users carry out actions hands-free and quickly. Wear provides two types of voice actions:
System-provided
These voice actions are task-based and are built into the Wear platform. You filter for them in the activity that you want to start when the voice action is spoken. Examples include "Take a note" or "Set an alarm".
App-provided
These voice actions are app-based, and you declare them just like a launcher icon. Users say "Start [your app name]" to use these voice actions, and an activity that you specify starts.
Also, as discussed in questions 24543484 and 22630600, both askers implemented a notification in their Android Wear app to get the voice input.
Hope this helps.
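For reference, here is a minimal sketch of voice input with pre-defined choices through a notification action, using the support-library RemoteInput (the result key, drawables, PendingIntent, and notification id are placeholders you would supply yourself):

// Pre-defined templates offered alongside free-form voice input
String[] choices = {"Yes", "No", "Send to server"};
RemoteInput remoteInput = new RemoteInput.Builder("extra_voice_reply")
        .setLabel("Send to server")
        .setChoices(choices)
        .build();

NotificationCompat.Action action = new NotificationCompat.Action.Builder(
        R.drawable.ic_mic, "Reply", replyPendingIntent)
        .addRemoteInput(remoteInput)
        .build();

Notification notification = new NotificationCompat.Builder(this)
        .setSmallIcon(R.drawable.ic_notification)
        .setContentTitle("Voice input")
        .extend(new NotificationCompat.WearableExtender().addAction(action))
        .build();

NotificationManagerCompat.from(this).notify(NOTIFICATION_ID, notification);

// In the component started by replyPendingIntent, read the spoken or chosen text:
// Bundle remoteInputResults = RemoteInput.getResultsFromIntent(intent);
// CharSequence reply = remoteInputResults.getCharSequence("extra_voice_reply");

As noted above, though, this lives in a notification, not in a blocking foreground UI.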

android - How to detect speech

In Android, is there an API I can use to detect when the user speaks into the mic?
I'm expecting there is voice recognition built into Android, or some speech-to-text API I can use to detect someone speaking. Any ideas? Can ACTION_RECOGNIZE_SPEECH help me?
The way it works is you create an Intent with ACTION_RECOGNIZE_SPEECH and call startActivityForResult(). Then in your onActivityResult() override, you can pull the speech-to-text data from the result Intent extras.
Here's a neat tutorial to get you started:
Make your next Android app a good listener -- TechRepublic
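As a minimal sketch of what the answer describes (REQUEST_SPEECH is just an arbitrary request code):

private static final int REQUEST_SPEECH = 1;

private void promptSpeech() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    startActivityForResult(intent, REQUEST_SPEECH);
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
        // The recognizer returns a list of candidate transcriptions, best match first
        ArrayList<String> matches =
                data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        String spokenText = matches.get(0);
    }
}

Keep in mind this launches Google's dialog and only hands back text once the dialog finishes.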

Voice recognition in an Android app that is always listening [duplicate]

I am looking at doing speech recognition in Android. The program needs continuous speech recognition. The vocabulary only needs to be about 10 words. I have considered using Google's API, but I don't think it will work (I cannot have anything covering the screen). I have been looking into other ways, but nothing seems like it will work. Is it possible to use Java's speech recognition library, or is there any other way of going about this?
In summary:
1. Need continuous speech input
2. 10 words at most
3. Can train if necessary
4. Overview of program - display screen, wait for voice or touch input, update screen, repeat
5. Cannot cover what is being displayed on the screen
Any help would be appreciated.
Thanks in advance
I think you would have to capture audio directly from the phone's microphone and stream it to your own recognition service. The Google recognition APIs are built around an Intent that launches their own recognition dialog and gives you back results. If you want continuous recognition without a UI, you'll have to build that functionality yourself.
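A rough sketch of the capture side with AudioRecord (you need the RECORD_AUDIO permission; the listening flag and the recognition back end are your own):

int sampleRate = 16000;
int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);

record.startRecording();
short[] buffer = new short[bufferSize / 2];
while (listening) {                           // toggle this flag from your UI
    int read = record.read(buffer, 0, buffer.length);
    // hand the first `read` samples to your own recognition service here
}
record.stop();
record.release();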
CMUSphinx has recently implemented continuous listening on the Android platform. You can find the demo on the wiki page.
You can configure one or multiple keywords to listen for; the default keyword is "oh mighty computer". You can also configure the detection threshold. Currently supported languages are US English and a few others (French, Spanish, Russian, etc.). You can train your own model for your language.
Listening is simple: you create a recognizer and just add a keyword-spotting search:

recognizer = defaultSetup()
        .setAcousticModel(new File(modelsDir, "hmm/en-us-semi"))
        .setDictionary(new File(modelsDir, "lm/cmu07a.dic"))
        .setKeywordThreshold(1e-5f)
        .getRecognizer();
recognizer.addListener(this);
recognizer.addKeywordSearch(KWS_SEARCH_NAME, KEYPHRASE);
switchSearch(KWS_SEARCH_NAME);
and define a listener:

@Override
public void onPartialResult(Hypothesis hypothesis) {
    String text = hypothesis.getHypstr();
    if (text.equals(KEYPHRASE)) {
        // do something
    }
}
Instead of a single key phrase, you can specify the path to a commands file on the filesystem:

recognizer.addKeywordSearch(KWS_SEARCH, new File(assetsDir,
        "commands.lst").toString());

The commands file commands.lst contains one command per line:

oh mighty computer
ok google
hello dude
To put this file on the filesystem, you can place it in your assets and run syncAssets on application start.
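Roughly, that sync step looks like this (assuming the pocketsphinx-android Assets helper; setupRecognizer is a hypothetical method wrapping the defaultSetup() code above):

try {
    Assets assets = new Assets(getApplicationContext());
    File assetsDir = assets.syncAssets();   // commands.lst and the models end up under this directory
    setupRecognizer(assetsDir);
} catch (IOException e) {
    Log.e("Sphinx", "Failed to sync assets", e);
}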
Here is another way (if you are planning to use PhoneGap/Cordova):
https://stackoverflow.com/a/39695412/3603128
1) It listens continuously.
2) It does not display anything on (or occupy) the screen.
Use the CMUSphinx library:
It works in offline mode.
You can give it a name (a wake-up phrase).
It will start listening when you call it by that name.

Recorded sound file (a la Google Now, Google Keep) - RecognizerIntent/Listener

I have been developing an application that uses the RecognizerIntent to get voice input. However, since Jelly Bean launched, I have not been able to get the actual sound file from my voice input.
In the RecognitionListener (http://developer.android.com/reference/android/speech/RecognitionListener.html) there is a method called onBufferReceived. However, there is no guarantee that this method will be called, and when I implemented it, it never got called. Is there any way to force this method to execute, or what is the best-practice approach to obtaining the sound file that the RecognizerIntent analyzes?
It should be possible, since Google Now can do it with the voice command "note to self", and Google Keep's voice notes do the same.
Thanks
I don't think there is a way to force it. It clearly depends on the recognition service implementation. If Google decides not to call onBufferReceived, then there is no way to get the actual audio data that is used. Note that the mentioned Google apps don't use the (public) Intent/Service API to access speech recognition but seem to use a private API within the apps (the speech recognition might be bundled within their apps).
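For completeness, a sketch of what collecting the buffers would look like if the service ever delivers them (not guaranteed, as noted above; the output stream field is your own):

private final ByteArrayOutputStream recordedAudio = new ByteArrayOutputStream();

@Override
public void onBufferReceived(byte[] buffer) {
    // Only invoked if the recognition service chooses to stream partial audio;
    // on the stock Google service this is often never called.
    recordedAudio.write(buffer, 0, buffer.length);
}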
