I used SpeechRecognizer on Android to recognize the user's voice.
It worked well until I uninstalled the Google app
(https://play.google.com/store/apps/details?id=com.google.android.googlequicksearchbox&hl=en).
I have since updated the Google app, but I still get errors such as "bind to recognition service failed".
How can I make the app run successfully? What should I do to use SpeechRecognizer normally?
Thanks.
Update manifest
I'm using Algolia's voice input library, and it was failing to take voice input on Pixel 2 and Android 11 devices; the reason was that it was unable to bind to the voice recognition service.
To solve it, insert this queries element in the manifest file, just under your opening <manifest> tag:
<queries>
    <package android:name="com.google.android.googlequicksearchbox"/>
</queries>
I know I am answering this a bit late, but I struggled with this error for a while. It turns out you need to activate Google's Quick Search Box. The solution I used: check whether the SpeechRecognizer is available (using isRecognitionAvailable(context)). If the SpeechRecognizer is not available, you can send the user to the Play Store to install or update it like this:
if (!SpeechRecognizer.isRecognitionAvailable(mainActivity)) {
    String appPackageName = "com.google.android.googlequicksearchbox";
    try {
        // Open the Google app's page in the Play Store app
        mainActivity.startActivity(new Intent(Intent.ACTION_VIEW,
                Uri.parse("market://details?id=" + appPackageName)));
    } catch (android.content.ActivityNotFoundException anfe) {
        // Fall back to the Play Store website if the store app is missing
        mainActivity.startActivity(new Intent(Intent.ACTION_VIEW,
                Uri.parse("https://play.google.com/store/apps/details?id=" + appPackageName)));
    }
}
Every time the Google app is updated, one way or another there is an issue with the speech recognizer callbacks: either Google changes its timeout behaviour, or a weird issue like yours pops up out of nowhere.
You need to make your code dynamic in such a way that even if there is an error in the speech callback methods, you catch that error and automatically try listening again. This has been discussed widely in this post, and there are plenty of answers for you to check and implement based on your requirements.
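As a rough illustration (a minimal sketch, not from the original answer; the error codes to retry on and the restart strategy are assumptions you should tune), a RecognitionListener that restarts listening after an error might look like this:

// Minimal sketch of "catch the error and listen again".
// context is your Activity or Service; add a delay/backoff in production
// to avoid ERROR_RECOGNIZER_BUSY on rapid restarts.
final SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
final Intent listenIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
listenIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);

recognizer.setRecognitionListener(new RecognitionListener() {
    @Override public void onError(int error) {
        // Restart on transient errors such as "no match" or a speech timeout
        if (error == SpeechRecognizer.ERROR_NO_MATCH
                || error == SpeechRecognizer.ERROR_SPEECH_TIMEOUT) {
            recognizer.cancel();
            recognizer.startListening(listenIntent);
        }
    }
    @Override public void onResults(Bundle results) {
        ArrayList<String> matches =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        // handle the recognized phrases here
    }
    // The remaining callbacks are required by the interface but unused here
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onPartialResults(Bundle partialResults) {}
    @Override public void onEvent(int eventType, Bundle params) {}
});
recognizer.startListening(listenIntent);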
If you don't want to do this yourself, you can always try the DroidSpeech library, which takes care of these speech error issues whenever something pops up and provides you with continuous voice recognition.
Implement the library using Gradle and add the following lines of code:
DroidSpeech droidSpeech = new DroidSpeech(this, null);
droidSpeech.setOnDroidSpeechListener(this);
To start listening to the user, call the code below:
droidSpeech.startDroidSpeechRecognition();
And you will get the voice result in the listener method:
@Override
public void onDroidSpeechFinalResult(String finalSpeechResult, boolean droidSpeechWillListen) {
    // finalSpeechResult holds the recognized text
}
You need to add this in the manifest, like so:
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<queries>
    <intent>
        <action android:name="android.speech.RecognitionService" />
    </intent>
</queries>
According to the system voice command docs, you can open an application with a voice command, e.g. "OK Google, open foobar". Also according to the docs, this works by default, with no specific intent required.
In my sample development app, this isn't working. I've tried adding a few combinations of action and category permutations to the intent-filter, but no luck so far.
I'm targeting a minimum SDK of 23, testing on a device with 6.0.1.
Should this work, and if so, what are the changes to a new empty activity project I need to enable it?
As far as I am aware, Google simply iterates over a list of installed applications and opens the corresponding application if it finds an exact match.
To test this, use the following Intent:
final String PACKAGE_NAME_GOOGLE_NOW = "com.google.android.googlequicksearchbox";
final String GOOGLE_NOW_SEARCH_ACTIVITY = ".SearchActivity";
final String APP_NAME = "Open " + getString(R.string.app_name);

final Intent startMyAppIntent = new Intent(Intent.ACTION_WEB_SEARCH);
startMyAppIntent.setComponent(new ComponentName(PACKAGE_NAME_GOOGLE_NOW,
        PACKAGE_NAME_GOOGLE_NOW + GOOGLE_NOW_SEARCH_ACTIVITY));
startMyAppIntent.putExtra(SearchManager.QUERY, APP_NAME);
startMyAppIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);

try {
    startActivity(startMyAppIntent);
} catch (final ActivityNotFoundException e) {
    e.printStackTrace();
}
If this opens your application, then it is simply a case of the phonetics of your application name, or how Google interprets your pronunciation of it.
I do think that there should be an option to add a 'phonetic app label' to the application's manifest (or some other globally available configuration file), so Google could open your application if the unique name is not common enough to generate a voice search result.
If this doesn't open your application, check that you are correctly defining your application name in the manifest as follows:
<application
    android:label="@string/app_name"
It looks as though Google has made offline speech recognition available from Google Now for third-party apps. It is being used by the app named Utter.
Has anyone seen any implementations of how to do simple voice commands with this offline speech rec? Do you just use the regular SpeechRecognizer API and it works automatically?
Google did quietly enable offline recognition in that Search update, but there is (as yet) no API or additional parameter available within the SpeechRecognizer class. {See the edit at the bottom of this post.} The functionality is available with no additional coding; however, the user's device will need to be configured correctly for it to begin working, and this is where the problem lies. I would imagine it is why a lot of developers assume they are 'missing something'.
Also, Google has restricted certain Jelly Bean devices from using offline recognition due to hardware constraints. Which devices this applies to is not documented; in fact, nothing is documented, so configuring the capabilities has proved to be a matter of trial and error for users. It works for some straight away; for those it doesn't, this is the 'guide' I supply them with:
1. Make sure the default Android voice recogniser is set to Google, not Samsung/Vlingo.
2. Uninstall any offline recognition files you already have installed from the Google Voice Search settings.
3. Go to your Android application settings and see if you can uninstall the updates for the Google Search and Google Voice Search applications.
4. If you can't do the above, go to the Play Store and see if you have the option there.
5. Reboot (if you achieved 2, 3 or 4).
6. Update Google Search and Google Voice Search from the Play Store (if you achieved 3 or 4, or if an update is available anyway).
7. Reboot (if you achieved 6).
8. Install the English UK offline language files.
9. Reboot.
10. Use utter! with a connection.
11. Switch to aeroplane mode and give it a try.
12. Once it is working, the offline recognition of other languages, such as English US, should start working too.
EDIT: Temporarily changing the device locale to English UK also seems to kickstart this for some.
Some users reported they still had to reboot a number of times before it would begin working, but they all got there eventually, often with no clear trigger. The key lies inside the Google Search APK, so it is not in the public domain or part of AOSP.
From what I can establish, Google tests the availability of a connection before deciding whether to use offline or online recognition. If a connection is available initially but is lost before the response, Google supplies a connection error; it won't fall back to offline. As a side note, if a request for the network-synthesised voice has been made, no error is supplied if it fails – you get silence.
The Google Search update enabled no additional features in Google Now, and in fact if you try to use it with no internet connection, it will error. I mention this because I wondered whether the ability would be withdrawn as quietly as it appeared, and therefore shouldn't be relied upon in production.
If you intend to start using the SpeechRecognizer class, be warned: there is a pretty major bug associated with it which requires your own implementation to handle.
Not being able to specifically request offline = true makes controlling this feature impossible without manipulating the data connection. Rubbish. You'll get hundreds of user emails asking why you haven't enabled something so simple!
EDIT: Since API level 23 a new parameter has been added EXTRA_PREFER_OFFLINE which the Google recognition service does appear to adhere to.
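For illustration, a minimal sketch of passing that parameter (the surrounding recognizer setup is assumed here, not taken from the original post):

// speechRecognizer is an existing android.speech.SpeechRecognizer instance
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.M) {
    // Ask the service to prefer offline recognition (available since API 23)
    intent.putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true);
}
speechRecognizer.startListening(intent);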
Hope the above helps.
I would like to improve, with images, the guide that the answer https://stackoverflow.com/a/17674655/2987828 gives its users. It is the sentence "For those that it doesn't, this is the 'guide' I supply them with" that I want to improve.
The user should click on the four buttons highlighted in blue in these images:
Then the user can select any desired languages. When the download is done, he should disconnect from the network and then click on the "microphone" button of the keyboard.
It worked for me (Android 4.1.2); language recognition then worked out of the box, without rebooting. I can now dictate instructions to the shell of Terminal Emulator! And it is twice as fast offline as online, on a Padfone 2 from ASUS.
A simple and flexible offline recognition on Android is implemented by CMUSphinx, an open source speech recognition toolkit. It works purely offline, is fast, and is configurable. It can listen continuously for a keyword, for example.
You can find latest code and tutorial here.
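As a rough sketch of that keyword-listening idea (the asset directory, model names and keyphrase below are assumptions modelled on CMUSphinx's sample assets, not taken from this post):

// Sketch: continuous keyphrase spotting with pocketsphinx-android.
// Note this SpeechRecognizer is edu.cmu.pocketsphinx.SpeechRecognizer,
// not the android.speech one; assetsDir and listener are assumed to exist.
SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
        .setAcousticModel(new File(assetsDir, "en-us-ptm"))
        .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
        .getRecognizer();
recognizer.addListener(listener);
recognizer.addKeyphraseSearch("wakeup", "ok computer");
recognizer.startListening("wakeup"); // keeps listening until the keyphrase is heard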
Update in 2019: time moves fast, and CMUSphinx is no longer that accurate. I recommend trying the Kaldi toolkit instead. The demo is here.
In short, I don't have the implementation, but I do have the explanation.
Google did not make offline speech recognition available to third-party apps. Offline recognition is only accessible via the keyboard. Ben Randall (the developer of utter!) explains his workaround in an article at Android Police:
I had implemented my own keyboard and was switching between Google
Voice Typing and the user's default keyboard with an invisible edit
text field and transparent Activity to get the input. Dirty hack!
This was the only way to do it, as offline Voice Typing could only be
triggered by an IME or a system application (that was my root hack).
The other type of recognition API … didn't trigger it and just failed
with a server error. … A lot of work wasted for me on the workaround!
But at least I was ready for the implementation...
From Utter! Claims To Be The First Non-IME App To Utilize Offline Voice Recognition In Jelly Bean
I successfully implemented my Speech-Service with offline capabilities by using onPartialResults when offline and onResults when online.
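As an illustrative sketch of that split (hypothetical code; the original post gives none, and isOnline() and handleResult() are assumed helpers), inside a RecognitionListener:

@Override
public void onPartialResults(Bundle partialResults) {
    // When offline, some engines only deliver text through partial results
    if (!isOnline()) {
        ArrayList<String> texts = partialResults
                .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (texts != null && !texts.isEmpty()) {
            handleResult(texts.get(0));
        }
    }
}

@Override
public void onResults(Bundle results) {
    // When online, the final results arrive here as usual
    if (isOnline()) {
        ArrayList<String> texts = results
                .getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        if (texts != null && !texts.isEmpty()) {
            handleResult(texts.get(0));
        }
    }
}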
I was dealing with this, and I noticed that you need to install the offline package for your language. My language setting was "Español (Estados Unidos)", but there is no offline package for that language, so when I turned off all network connectivity I got an alert from RecognizerIntent saying that it couldn't reach Google. Then I changed the language to "English (US)" (for which I already had the offline package), launched the RecognizerIntent, and it just worked.
Key: language setting == offline voice recognizer package.
It is apparently possible to manually install offline voice recognition by downloading the files directly and placing them in the right locations manually. I guess this is just a way to bypass Google's hardware requirements.
However, personally I didn't have to reboot or anything; simply changing the locale to UK and back again did it.
A working example is given below.
MyService.class
public class MyService extends Service implements SpeechDelegate, Speech.stopDueToDelay {

    public static SpeechDelegate delegate;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        //TODO do something useful
        try {
            if (VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
                ((AudioManager) Objects.requireNonNull(
                        getSystemService(Context.AUDIO_SERVICE))).setStreamMute(AudioManager.STREAM_SYSTEM, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }

        Speech.init(this);
        delegate = this;
        Speech.getInstance().setListener(this);

        if (Speech.getInstance().isListening()) {
            Speech.getInstance().stopListening();
        } else {
            System.setProperty("rx.unsafe-disable", "True");
            RxPermissions.getInstance(this).request(permission.RECORD_AUDIO).subscribe(granted -> {
                if (granted) { // Always true pre-M
                    try {
                        Speech.getInstance().stopTextToSpeech();
                        Speech.getInstance().startListening(null, this);
                    } catch (SpeechRecognitionNotAvailable exc) {
                        //showSpeechNotSupportedDialog();
                    } catch (GoogleVoiceTypingDisabledException exc) {
                        //showEnableGoogleVoiceTyping();
                    }
                } else {
                    Toast.makeText(this, R.string.permission_required, Toast.LENGTH_LONG).show();
                }
            });
        }
        return Service.START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        //TODO for communication return IBinder implementation
        return null;
    }

    @Override
    public void onStartOfSpeech() {
    }

    @Override
    public void onSpeechRmsChanged(float value) {
    }

    @Override
    public void onSpeechPartialResults(List<String> results) {
        for (String partial : results) {
            Log.d("Result", partial + "");
        }
    }

    @Override
    public void onSpeechResult(String result) {
        Log.d("Result", result + "");
        if (!TextUtils.isEmpty(result)) {
            Toast.makeText(this, result, Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onSpecifiedCommandPronounced(String event) {
        try {
            if (VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
                ((AudioManager) Objects.requireNonNull(
                        getSystemService(Context.AUDIO_SERVICE))).setStreamMute(AudioManager.STREAM_SYSTEM, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (Speech.getInstance().isListening()) {
            Speech.getInstance().stopListening();
        } else {
            RxPermissions.getInstance(this).request(permission.RECORD_AUDIO).subscribe(granted -> {
                if (granted) { // Always true pre-M
                    try {
                        Speech.getInstance().stopTextToSpeech();
                        Speech.getInstance().startListening(null, this);
                    } catch (SpeechRecognitionNotAvailable exc) {
                        //showSpeechNotSupportedDialog();
                    } catch (GoogleVoiceTypingDisabledException exc) {
                        //showEnableGoogleVoiceTyping();
                    }
                } else {
                    Toast.makeText(this, R.string.permission_required, Toast.LENGTH_LONG).show();
                }
            });
        }
    }

    @Override
    public void onTaskRemoved(Intent rootIntent) {
        // Restarting the service if it is removed.
        PendingIntent service =
                PendingIntent.getService(getApplicationContext(), new Random().nextInt(),
                        new Intent(getApplicationContext(), MyService.class), PendingIntent.FLAG_ONE_SHOT);
        AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        assert alarmManager != null;
        alarmManager.set(AlarmManager.ELAPSED_REALTIME_WAKEUP, 1000, service);
        super.onTaskRemoved(rootIntent);
    }
}
For more details,
https://github.com/sachinvarma/Speech-Recognizer
Hope this will help someone in the future.
I've been working on an Android app concept in which the app has to auto-dial special USSD codes in order to initiate certain telco services the user is interested in, whenever the user triggers the service via a shortcut in the app.
The trouble I'm finding is that when the app tries to auto-dial such short codes or USSD numbers, the phone's OS (or is it the call Intent?) doesn't auto-dial; instead it presents the user with the code/number in the dial-pad, and the user has to initiate the call manually, which rather defeats my intention of allowing users to initiate the services with just one click on the shortcut.
Currently, this is how I'm initiating these calls:
intent = new Intent(Intent.ACTION_DIAL);
intent.setData(Uri.parse("tel:" + number.trim()));
try {
    activity.startActivity(intent);
} catch (Exception e) {
    Log.d(Tag, e.getMessage());
}
Interestingly, a number such as +256772777000 will auto-dial, launching the user into the call automatically, but a number/code such as 911, *112#, *1*23#, etc. won't.
So, what do I need to do differently, or is this not possible at all?
UPDATE
Actually, looking at another app in which I was auto-dialling user-specified numbers, the problem with the above code was that instead of using Intent.ACTION_CALL, I was using Intent.ACTION_DIAL, which by design just prompts the user with the number to call without directly calling it. When I fixed that, the app worked as expected. See the answer below...
Code samples are most welcome.
Actually, despite what some people claimed about Android preventing such a feature, when I looked at the code in one of my older apps which auto-dials user-specified numbers, I found the solution to be:
intent = new Intent(Intent.ACTION_CALL);
// Encode the number so that '#' in USSD codes isn't parsed as a URI fragment
intent.setData(Uri.parse("tel:" + Uri.encode(number.trim())));
try {
    activity.startActivity(intent);
} catch (Exception e) {
    Log.d(Tag, e.getMessage());
}
This works as expected: USSD codes get auto-dialled when the above code runs. The only important thing to note with this approach is that you have to add the following permission to your manifest:
<uses-permission android:name="android.permission.CALL_PHONE" />
So, as indicated in the update to my question, the problem with my original approach was using Intent.ACTION_DIAL instead of Intent.ACTION_CALL.
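One caveat worth adding (not part of the original answer): on Android 6.0 and later, CALL_PHONE is a dangerous permission, so the manifest entry alone is not enough; it must also be granted at runtime, roughly like this:

// Sketch: request CALL_PHONE at runtime (REQUEST_CALL is an arbitrary request code)
if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CALL_PHONE)
        != PackageManager.PERMISSION_GRANTED) {
    ActivityCompat.requestPermissions(activity,
            new String[]{Manifest.permission.CALL_PHONE}, REQUEST_CALL);
    // retry the ACTION_CALL intent from onRequestPermissionsResult()
}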
I've written an Android client for a mobile backend starter app according to this tutorial. Everything works up to the section implementing Continuous Queries.
I've written a query and I'm calling it from the correct place in the code (onPostCreate()), however the query never returns any data.
I don't believe this is an authentication problem because I'm able to make other calls successfully.
Here is the code which never returns a result:
CloudCallbackHandler<List<CloudEntity>> handler = new CloudCallbackHandler<List<CloudEntity>>() {
    @Override
    public void onComplete(List<CloudEntity> results) {
        for (CloudEntity entity : results) {
            UserLocation loc = new UserLocation(entity);
            mUserLocations.remove(loc);
            mUserLocations.add(loc);
            drawMarkers();
        }
    }

    @Override
    public void onError(IOException e) {
        Toast.makeText(getApplicationContext(), e.getMessage(),
                Toast.LENGTH_LONG).show();
    }
};

CloudQuery query = new CloudQuery("UserLocation");
query.setLimit(50);
query.setSort(CloudEntity.PROP_UPDATED_AT, Order.DESC);
query.setScope(Scope.FUTURE_AND_PAST);
getCloudBackend().list(query, handler);
With the debugger I've verified that the getCloudBackend().list() line executes, but the onComplete() method is never hit, and neither is onError().
Here is an example of a call that works perfectly:
UserLocation self = new UserLocation(super.getAccountName(),
        gh.encode(mCurrentLocation));
getCloudBackend().update(self.asEntity(), updateHandler);
Essentially, getCloudBackend().update() works, while getCloudBackend().list() does not.
I should also add that I've downloaded the full source from the github repo linked in the tutorial, and the same problem exists with that code.
I've also tried re-deploying the backend server multiple times.
OK, so I have finally fixed the problem! The issue is both in the manifest and in the class GCMIntentService.java.
In the manifest, GCM is registered as a service and belongs to a package. By default this service is part of the default package com.google.cloud.backend.android. When you create a new package and keep all your client code there, you need to move the GCMIntentService.java class into that new package, and in the manifest modify the service and broadcast receiver:
<service android:name="yourpackagename.GCMIntentService" />
<receiver
    android:name="com.google.android.gcm.GCMBroadcastReceiver"
    android:permission="com.google.android.c2dm.permission.SEND" >
    <intent-filter>
        <action android:name="com.google.android.c2dm.intent.RECEIVE" />
        <action android:name="com.google.android.c2dm.intent.REGISTRATION" />
        <category android:name="yourpackagename" />
    </intent-filter>
</receiver>
Any other permission that carries the default package name should also be updated to your main package name. None of this needs to be modified if you're only going to use the one default package that comes with the Mobile Backend Starter.
Regarding the GoogleAuthIOException, I received that as well initially, so I redid all the steps to enable GCM and authentication. Things to keep in mind: I still followed the tutorial and went with Web Application -> Generic when registering the GCM server key and web client ID. Another key thing to keep in mind when registering the Android client ID is that, along with your SHA1 fingerprint, it also needs a package name; again, this has to be your main client package if you're using more than one package in your project. You can get the project number that goes in Consts.java (it's required to register GCM) from the old Google API console, and the project ID from the new Cloud console. The web client ID also goes in Consts.java, and in that same file you have to enable auth by changing
public static final boolean IS_AUTH_ENABLED = false;
to
public static final boolean IS_AUTH_ENABLED = true;
Hope this helps.
So I am also getting the exact same problem you are. getCloudBackend().update() works for me, and not only with the geohasher class; I also tried sending updates to the cloud with myLocation.toString(), where myLocation is a LatLng, and it gets updated fine.
Sorry for not giving you the actual solution to your problem. It's a really odd situation: the exact same code worked in the Google I/O demo but not when we actually try it out (and I followed the tutorial very thoroughly). I feel this is a server problem, if anything.
Thanks for reporting this -- sorry you are having a problem. The most likely problem is in configuring GCM. Can you verify that you have GCM enabled on the project and that all the setup steps were done correctly? Maybe try to send a message and see if that works?
On iPhone, an application can associate a new protocol name with itself, so that if a user types 'myapp://xxx' into a web browser it launches the application.
Is this possible with BlackBerry or Android?
For Android, have a look at this question's answers:
Android Respond To URL in Intent
and also at the following page, especially the section "Data Types" about android:scheme:
http://developer.android.com/guide/topics/intents/intents-filters.html
For your app you would put something like the following in your AndroidManifest.xml:
<intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="myapp" />
</intent-filter>
For BlackBerry - yes, to an extent: look at the net.rim.device.api.browser.plugin package (JDE 4.0.0 and later). It allows you to specify a callback interface for a given MIME type and other parameters.
Basically, you subclass BrowserContentProvider to indicate the MIME type(s) you want to receive, and register it with BrowserContentProviderRegistry.
I don't have a lot of experience with this, but it looks like you may be limited to providing custom rendering functionality; that may be OK for you. I'm not sure how limited your ability to do anything else would be; you'd have to try things out.
For BlackBerry devices running 4.0 or later (all "trackball" devices and up run at least 4.2), the following code is all you need:
// Get the default browser session
BrowserSession browserSession = Browser.getDefaultSession();
// now launch the URL
browserSession.displayPage("http://www.BlackBerry.com");
Since this is a pretty reusable code segment, I recommend placing it in the following function:
public static void loadURL(String url) {
    try {
        net.rim.blackberry.api.browser.BrowserSession bSession =
                net.rim.blackberry.api.browser.Browser.getDefaultSession();
        bSession.displayPage(url);
        bSession.showBrowser();
    } catch (Exception ex) {
        System.out.println("Error loading url [" + url + "]: " + ex.getMessage());
    }
}