Is there a way to trigger voice activation with my own phrase? - android

I want to create a simple application that performs some tasks via voice commands. I want to start listening for commands with my own phrase, like "Hello device". Is this possible with the Android speech recognition API? How do I implement activation by my own phrase?
I searched for this before asking, but couldn't find information about activation. I know about PocketSphinx, but I need to implement it with Google APIs.

CMUSphinx is the real solution for this problem: continuous speech recognition takes too many resources and will drain your battery in an hour, while keyword spotting mode is lightweight enough to detect just a keyphrase.
You might also be interested that Google introduced a new API in API level 21 for a similar task:
http://developer.android.com/reference/android/service/voice/AlwaysOnHotwordDetector.html
You could use that, but it will quite seriously restrict your user base.
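For the keyword-spotting route, a rough sketch using the pocketsphinx-android library is shown below. It follows the official CMUSphinx Android demo; the acoustic-model and dictionary file names, and the context and listener variables, are assumptions that depend on your project setup.
import java.io.File;
import java.io.IOException;
import edu.cmu.pocketsphinx.Assets;
import edu.cmu.pocketsphinx.SpeechRecognizer;
import edu.cmu.pocketsphinx.SpeechRecognizerSetup;

// Run off the UI thread; syncAssets() and getRecognizer() do file I/O and throw IOException
Assets assets = new Assets(context);
File assetDir = assets.syncAssets();

SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
        .setAcousticModel(new File(assetDir, "en-us-ptm"))
        .setDictionary(new File(assetDir, "cmudict-en-us.dict"))
        .getRecognizer();

// listener is a RecognitionListener; check the hypothesis in onPartialResult()
recognizer.addListener(listener);

// "wakeup" is just a search name; the second argument is the keyphrase to spot
recognizer.addKeyphraseSearch("wakeup", "hello device");
recognizer.startListening("wakeup");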

Send a RecognizerIntent.
Here is a tutorial on how to implement voice recognition.
I want to start listening for commands with my own phrase like a
"Hello device". Is it possible with Android speech recognition API?
You cannot record a phrase, but you can listen to everything and then ask the recognition engine for words it heard.
Relevant code fragment from the tutorial:
// Populate the wordsList with the String values the recognition engine thought it heard
ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
So, you can check if it heard your plain English command, like "Hello device" and if yes, do something.

import android.speech.RecognizerIntent;
import android.content.Intent;
import java.util.ArrayList;
import java.util.List;
Step 1: Put this into a method that will start the speech recognizer; you can give it any meaningful name, such as void startSpeech().
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getClass().getPackage().getName());
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 3);
startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
Step 2: This requires us to override the onActivityResult() method to handle the result of the call above:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data)
{
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK)
    {
        // The recognition results, best match first
        ArrayList<String> matches = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
        String wordStr = matches.get(0);
        String[] words = wordStr.split(" ");
        String firstWord = words[0];
        String secondWord = words.length > 1 ? words[1] : null;

        if (firstWord.equals("open"))
        {
            // DO SOMETHING HERE
        }
    }
}
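Note that the snippet assumes a request-code constant is declared somewhere in your Activity; the value is arbitrary (1234 below is just an example) as long as it is used consistently:
private static final int VOICE_RECOGNITION_REQUEST_CODE = 1234;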

Related

ARcore and mobile-vision

I need to create an Android app that does mainly two things:
1) Detect prices and barcodes
2) Create AR content around the detected price/barcode
For the detection part I use Google Mobile Vision, and for the AR part I use ARCore. The problem I have is that ARCore doesn't allow auto-focus, so I don't have good enough resolution to read the prices or barcodes.
So I was wondering whether there is a standard way to do text recognition and AR in the same app.
Thank you.
You can implement them in the same app, in different activities. If you are using the Mobile Vision API, you can start the intent for detection with startActivityForResult, and when the result is returned you can implement a transition in the onActivityResult part. Since the AR depends on the detected data, you can pass the information to the AR activity using putExtra. Use this as a template:
fab.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View view) {
        Intent i = new Intent(DetectActivity.this, ScanActivity.class);
        startActivityForResult(i, REQUEST_CODE);
    }
});
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_CODE && resultCode == RESULT_OK) {
        if (data != null) {
            final Barcode barcode = data.getParcelableExtra("barcode");
            String rslt = barcode.displayValue;
            Intent intent = new Intent(DetectActivity.this, ArActivity.class);
            intent.putExtra("link", rslt);
            startActivity(intent);
            finish();
        }
    }
}
Hope this helps. ScanActivity here is the normal camera preview (SurfaceView) activity that Mobile Vision uses.
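On the receiving side, the ArActivity referenced above could read the passed value in onCreate(). This is just a sketch reusing the activity and extra names from the snippet above:
import android.app.Activity;
import android.os.Bundle;

public class ArActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Value passed from DetectActivity via putExtra("link", ...)
        String link = getIntent().getStringExtra("link");
        // ... set up the AR scene using the detected barcode/price value
    }
}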
I haven't used ARCore, but I have done a reasonable amount of detection. This was mostly done using a SurfaceView extension showing and initialising a camera1 API view with a detection interface and callbacks.
It is difficult to tell what might be going wrong without seeing any code or knowing how you have gone about this. Any chance you could provide some?

How to make a button open the speech-to-text mic, then add the spoken text as a list item in Android?

So I am trying to make a simple to-do list app that has only a mic button and the list. I am very new to Android app development. I have managed to figure out text input into the list, and how to bring up the speech-to-text mic and put the spoken text into a text field. All this was achieved through a mix of tutorials. I can't seem to figure out how to bring the two together.
Any tips?
Here I leave you a great tutorial, considering that you didn't post any code.
This tutorial shows you how to do speech recognition with a button and then build a list with the possible spoken text. It works perfectly; I tried it once.
Try this code inside your button's OnClickListener to open the mic:
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, "en-US");
startActivityForResult(intent, nRESULT_SPEECH);
nRESULT_SPEECH is your request code, which can be any value such as 0, 1, 2, etc.
You will get the spoken words back in the onActivityResult callback:
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    switch (requestCode) {
        case nRESULT_SPEECH:
            if (resultCode == RESULT_OK && data != null) {
                ArrayList<String> text = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                String textCapturedFromVoice = text.get(0);
            }
            break;
    }
}
Once you get the text in textCapturedFromVoice, you can add it to your list.
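To connect the two pieces, one option is to back the list with an ArrayAdapter and add the captured string to it right after retrieving it. A rough sketch, assuming your layout has a ListView with the id list_todo (the id and field names here are placeholders):
import java.util.ArrayList;
import android.widget.ArrayAdapter;
import android.widget.ListView;

// Fields in your Activity
private ArrayList<String> items = new ArrayList<>();
private ArrayAdapter<String> adapter;

// In onCreate(), after setContentView():
ListView listTodo = (ListView) findViewById(R.id.list_todo);
adapter = new ArrayAdapter<>(this, android.R.layout.simple_list_item_1, items);
listTodo.setAdapter(adapter);

// In onActivityResult(), after textCapturedFromVoice is set:
items.add(textCapturedFromVoice);
adapter.notifyDataSetChanged();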

Android-TV speech recognition with manual input

I've implemented voice search in my Android TV app using the following small piece of code:
private void displaySpeechRecognizer() {
    Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
            RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    startActivityForResult(intent, SPEECH_REQUEST_CODE);
}

@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == SPEECH_REQUEST_CODE && resultCode == -1 /* RESULT_OK */) {
        List<String> results = data.getStringArrayListExtra(
                RecognizerIntent.EXTRA_RESULTS);
        String spokenText = results.get(0);
        Toast.makeText(getActivity(), spokenText, Toast.LENGTH_LONG).show();
    }
    super.onActivityResult(requestCode, resultCode, data);
}
This is all working well and I get the result back for further processing.
The problem is that I also want to give the user the possibility to enter the search string manually with the virtual keyboard.
In Google's own apps you do this by simply pressing RIGHT on the remote to give focus to the text box after pressing the voice-search icon.
In my example above I can see the "built-in" text box when I press the search icon, but if I try to navigate to it the search is interrupted and closed.
How do I access the search text box? This should cancel voice input and bring up the keyboard instead, just like the Play Store app.
Are you using the Leanback support library for your Android TV app design?
I guess the Google Play Store app and the YouTube app are using BrowseFragment & SearchFragment for search. These fragments provide a built-in search UI.
For the implementation, see Google's sample source code or SearchFragment – Android TV app Tutorial 12.
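For orientation, a minimal Leanback search fragment might look roughly like the sketch below. It assumes the Leanback support library (android.support.v17.leanback) is on the classpath; the class name and the adapter wiring are placeholders, and the SearchResultProvider callbacks receive both voice and keyboard queries:
import android.os.Bundle;
import android.support.v17.leanback.app.SearchFragment;
import android.support.v17.leanback.widget.ArrayObjectAdapter;
import android.support.v17.leanback.widget.ListRowPresenter;
import android.support.v17.leanback.widget.ObjectAdapter;

public class MySearchFragment extends SearchFragment
        implements SearchFragment.SearchResultProvider {

    private final ArrayObjectAdapter rowsAdapter =
            new ArrayObjectAdapter(new ListRowPresenter());

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Both spoken and typed queries are delivered to this provider
        setSearchResultProvider(this);
    }

    @Override
    public ObjectAdapter getResultsAdapter() {
        return rowsAdapter;
    }

    @Override
    public boolean onQueryTextChange(String newQuery) {
        // Update rowsAdapter with results for the partial query
        return true;
    }

    @Override
    public boolean onQueryTextSubmit(String query) {
        // Update rowsAdapter with results for the submitted query
        return true;
    }
}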

android, text to speech

I'm playing with text-to-speech to make my test app a little more fun. It works in the emulator but not on my phone, since my default locale isn't English.
However, the texts are English, so the TTS should of course use English. As far as I know I can implement an auto-install, something like:
public void onInit(int status) {
if (status == TextToSpeech.SUCCESS) {
// Set preferred language to US english.
int result = mtts.setLanguage(Locale.US);
if (result == TextToSpeech.LANG_MISSING_DATA ||
result == TextToSpeech.LANG_NOT_SUPPORTED) {
// Language data is missing or the language is not supported.
Log.e(TAG, "Language is not available.");
} else {
// The TTS engine has been successfully initialized.
speak();
}
} else {
// missing data, install it
Intent installIntent = new Intent();
installIntent.setAction(
TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
startActivity(installIntent);
}
}
But, do I want to? Does installing locales take a lot of space? Does it mess up something else?
regards
You should execute this:
// missing data, install it
Intent installIntent = new Intent();
installIntent.setAction(
TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
startActivity(installIntent);
when you get LANG_MISSING_DATA.
I would only try the install in the case of LANG_MISSING_DATA, not for LANG_NOT_SUPPORTED. Since it starts another activity and the user can choose whether or not to download the data, I wouldn't worry too much about it taking space. No, it shouldn't mess anything up.
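Applied to the onInit() from the question, that advice would look roughly like this (a sketch; mtts, TAG and speak() are the fields and methods already assumed in the question):
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        // Set preferred language to US English.
        int result = mtts.setLanguage(Locale.US);
        if (result == TextToSpeech.LANG_MISSING_DATA) {
            // US English voice data is missing: ask the system to install it
            Intent installIntent = new Intent();
            installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installIntent);
        } else if (result == TextToSpeech.LANG_NOT_SUPPORTED) {
            Log.e(TAG, "US English is not supported by this engine.");
        } else {
            // The TTS engine has been successfully initialized.
            speak();
        }
    }
}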
Android allows you to convert your text into speech. Not only can you convert it, it also allows you to speak text in a variety of different languages.
Android provides the TextToSpeech class for this purpose.
For more detail, please follow this tutorial:
http://a-droidtech.blogspot.in/2015/06/android-text-to-speech-tutorial-android.html

A basic text-to-speech app not working

I tried the following on the emulator, but as the app starts it gives a runtime error. Could someone please help me with this? Here's the code I tried:
package com.example.TextSpeaker;
import java.util.Locale;
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;
import android.speech.tts.TextToSpeech.OnInitListener;
public class TextSpeaker extends Activity {
/** Called when the activity is first created. */
int MY_DATA_CHECK_CODE = 0;
private TextToSpeech mtts;
String test1="hello world";
String test2="hi i am working fine";
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
Intent myintent = new Intent();
myintent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
startActivityForResult(myintent, MY_DATA_CHECK_CODE);
}
protected void onActivityResult(int requestcode,int resultcode,Intent data)
{
if(requestcode == MY_DATA_CHECK_CODE)
{
if(resultcode==TextToSpeech.Engine.CHECK_VOICE_DATA_PASS)
{
// success so create the TTS engine
mtts = new TextToSpeech(this,(OnInitListener) this);
mtts.setLanguage(Locale.ENGLISH);
mtts.speak(test1, TextToSpeech.QUEUE_FLUSH, null);
mtts.speak(test2, TextToSpeech.QUEUE_FLUSH, null);
}
else
{
//install the Engine
Intent install = new Intent();
install.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
startActivity(install);
}
}
}
}
I've stumbled upon this question well over a year after it was posted, but I'm going to go ahead and answer anyway, in the sheer hope it'll help anyone else who ends up here in the future.
I'm writing this against 2.1, so apologies if you were working with <2.1 (no tags on the question).
There are a few things I can spot immediately that may give you a little grief.
Firstly, the following:
mtts.speak(test1, TextToSpeech.QUEUE_FLUSH, null);
mtts.speak(test2, TextToSpeech.QUEUE_FLUSH, null);
If I understand the TextToSpeech API correctly, using QUEUE_FLUSH will flush out anything that's currently being spoken, so it's possible that the second line executes before the first has actually been spoken, and you'd experience what you've stated above: only the last one is spoken.
Ideally, you only want one of those lines there; if the user puts in a different String, just pass that through and let it flush out.
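If you do want both test strings spoken back to back, one option is to queue the second one with QUEUE_ADD instead of flushing, for example:
mtts.speak(test1, TextToSpeech.QUEUE_FLUSH, null); // replaces anything currently being spoken
mtts.speak(test2, TextToSpeech.QUEUE_ADD, null);   // queued to play after test1 finishes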
Next, you should invest in an onDestroy override. In there you can shut down the mtts object; this prevents your app from hogging the TTS engine. It's always nice to free up resources when you're done with them; you wouldn't leave a ResultSet open now, would you?!
@Override
public void onDestroy() {
    // Don't forget to shutdown!
    if (mTts != null) {
        mTts.stop();
        mTts.shutdown();
    }
    super.onDestroy();
}
Also, as you state, it'll only speak English because of the line you're using:
mtts.setLanguage(Locale.ENGLISH);
That's easy to correct, just set a different locale. Perhaps have some buttons and set the locale accordingly. I believe that the Google TTS engine currently only supports English, French, German, Italian and Spanish, but 3rd party TTS engines may offer more.
If all else fails, I wrote a tutorial here that might be of use.
Good luck!
