What is the name of GoogleTTSService in JellyBean 4.3? - android

In all Android versions prior to 4.3, the name of Google's text-to-speech service, belonging to package android.tts.TtsService, is GoogleTTSService.
Thus, if you inspect the list of running services in devices running Android 4.2 or lower, you will find com.google.android.tts.GoogleTTSService among them.
But in Android 4.3 that seems to have changed and, among the many services listed in my running device, I can no longer find a corresponding service name.
What is the new name?
Is it part of a different service?
Update: It appears that the package name for the service has been renamed from android.tts.TtsService in 2.x to android.speech.tts.TextToSpeech in 4.3. That's a step in the right direction but the actual name of Google's engine is still missing.
Any idea?

You can discover the package for any TTS Engine in the following way:
TextToSpeech tts = new TextToSpeech(context, onInitListener);
Then in the OnInitListener:
@Override
public void onInit(final int status) {
    switch (status) {
        case TextToSpeech.SUCCESS:
            try {
                final String initEngine = tts.getDefaultEngine();
                if (initEngine != null) {
                    Log.i("TTS", "Default engine: " + initEngine);
                }
            } catch (final Exception e) {
                // Some engines can throw if queried too soon after onInit
            }
            break;
    }
}
From my experience, getDefaultEngine() can sometimes return null or crash when it's called too soon after onInit, so surrounding it with a try/catch block is recommended. This only happened with some IVONA and SVOX TTS engines, but of course the user could have one of those as their default.
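As a side note, on API 14+ you can also enumerate every installed engine rather than just the default one; a minimal sketch (log tag arbitrary):
for (TextToSpeech.EngineInfo engine : tts.getEngines()) {
    // On stock devices, Google's engine is the package "com.google.android.tts"
    Log.i("TTS", "Engine: " + engine.name + " (" + engine.label + ")");
}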

According to this, you may be using the ACTION_CHECK_TTS_DATA intent, which is not handled correctly in Android 4.2.
Try to eliminate the use of the ACTION_CHECK_TTS_DATA intent and instead rely on the method TextToSpeech.isLanguageAvailable() as an indicator of whether or not the voice data is installed.
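A minimal sketch of that check, run inside onInit (the Locale is just an example):
int available = tts.isLanguageAvailable(Locale.US);
if (available == TextToSpeech.LANG_MISSING_DATA
        || available == TextToSpeech.LANG_NOT_SUPPORTED) {
    // Voice data missing: prompt the user to install it, e.g. via an
    // Engine.ACTION_INSTALL_TTS_DATA intent
} else {
    // Language data is installed and usable
}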
Additional useful information that may be related to your problem:
Access the Google Now voice using the Android TTS APIs
Offline Speech Recognition In Android (JellyBean)

Related

Can anyone tell me what "on O+ for instant apps" means?

The following information is from the Android Developers page.
I'd like to understand what "O+" means in the context below. Is it a version, like Oreo?
updateAppInfo
void updateAppInfo (Context context)
Updates application info based on currently installed splits.
Note #1: This method must be called after split is installed on O+ for instant apps, so that application components can see new resources and code from new splits.
Note #2: This method will update application info reference in application thread object.
Note #3: This method should only be called on O+.
Example usage:
// SplitInstallAPI callbacks
public void onStateUpdate(SplitInstallSessionState splitInstallSessionState) {
    if (splitInstallSessionState.status() == SplitInstallSessionStatus.INSTALLED) {
        // Use SplitInstallHelper API on O+ to update application info after the splits are
        // installed.
        if (BuildCompat.isAtLeastO()) {
            // Updates app info with new split information making split artifacts available to the
            // app on subsequent requests.
            SplitInstallHelper.updateAppInfo(context);
        }
    }
}
Yes. In this case O is short for Oreo. Each major version of Android from 1.5 onwards is named after a dessert or other sweet food, and versions are codenamed in alphabetical order starting with C for Cupcake (Android 1.5). Versions are often shortened down to their first letter for quick reference, or when the version name has not yet been decided (which is currently the case for Android P).
So in your example, the function BuildCompat.isAtLeastO() checks that the current device is running at least Android Oreo (API level 26).
An instant app is a mini-application that does not need to be installed. Not all standard methods work in one.
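For what it's worth, if you don't want the support-library helper, the plain framework check is equivalent (assuming compileSdkVersion 26+):
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) { // O == API level 26
    SplitInstallHelper.updateAppInfo(context);
}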

Google-cast v3 custom namespace

I am now refactoring my sender application for Android using the new v3 Google Cast API.
The problem I encounter is when I add
List<String> namespaces = new ArrayList<>();
namespaces.add("urn:x-cast:lalalalla");
...
return new CastOptions.Builder()
        .setSupportedNamespaces(namespaces)
        .build();
the Android app does not display the chromecast icon anymore (I guess it does not discover chromecast devices anymore).
What am I doing wrong with the namespace as without .setSupportedNamespaces it works fine as in the sample app?
Thanks!
This method can be a tad misleading. setSupportedNamespaces(ns) actually modifies the filter criteria for Cast devices for your app.
I.e., it will only display Cast devices currently running a receiver that supports the namespaces you specify. Unless this is what you want, I'd advise removing this call; you can still use custom namespaces once you're connected to your Cast device (after starting or joining a Cast session), as in the sketch below.
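A rough sketch of that flow, assuming a Cast session is already active (the namespace string is the one from the question; error handling trimmed):
CastSession session = CastContext.getSharedInstance(context)
        .getSessionManager().getCurrentCastSession();
if (session != null && session.isConnected()) {
    try {
        // Listen for messages from the receiver on the custom namespace
        session.setMessageReceivedCallbacks("urn:x-cast:lalalalla",
                (device, namespace, message) -> Log.d("Cast", "Received: " + message));
    } catch (IOException e) {
        Log.e("Cast", "Failed to register callbacks", e);
    }
    // Send a message to the receiver on the same namespace
    session.sendMessage("urn:x-cast:lalalalla", "{\"hello\":\"receiver\"}");
}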

How to detect if the Wear app of my Android app is already installed in the watch

Is it somehow possible to detect if the Wear mini app inside an Android app is already installed in the watch?
I have an app which cannot be used on the phone until the Wear part is installed in the watch, so I want to block all interaction until then.
What about app updates, is it possible to detect if the Wear part was already updated?
EDIT:
It looks like the Data API and even Message API calls are buffered and delivered after the app is installed. This however does not solve the issue with app updates. That is solvable with the accepted answer.
One solution is to use CapabilityClient (https://developers.google.com/android/reference/com/google/android/gms/wearable/CapabilityClient). First, you can detect whether the wearable and the phone are connected using NodeClient (https://developers.google.com/android/reference/com/google/android/gms/wearable/NodeClient). Below is the code to detect whether the watch is connected to the phone on Android.
Task<List<Node>> nodesTask = Wearable.getNodeClient(MainMobileActivity.this)
        .getConnectedNodes();
nodesTask.addOnSuccessListener(new OnSuccessListener<List<Node>>() {
    @Override
    public void onSuccess(List<Node> nodes) {
        nodeSize = nodes.size();
        for (Node node : nodes) {
            Wearable.getMessageClient(MainMobileActivity.this)
                    .sendMessage(node.getId(), MESSAGE_PATH, "Hello from AndroidWear".getBytes());
        }
        Log.d("Hello", "Message sent to Cordova");
    }
});
So, nodeSize tells how many nodes/watches are connected.
Wearable.getMessageClient(MainMobileActivity.this)
        .sendMessage(node.getId(), MESSAGE_PATH, "Hello from AndroidWear".getBytes());
This piece of code sends the message from the phone to the watch. Now, to detect whether the watch has the application installed, use the code below.
Task<CapabilityInfo> capabilityTask = Wearable.getCapabilityClient(this)
        .getCapability(CAPABILITY_WEAR_APP, CapabilityClient.FILTER_REACHABLE);
capabilityTask.addOnSuccessListener(new OnSuccessListener<CapabilityInfo>() {
    @Override
    public void onSuccess(CapabilityInfo capabilityInfo) {
        mWearNodesWithApp = capabilityInfo.getNodes();
    }
});
So, if mWearNodesWithApp is empty, the app is not installed on the watch; if it contains a node, the application is installed.
CAPABILITY_WEAR_APP should be a String whose value matches what you declared in the wear.xml of the Wear application (not the phone's), as in the sketch below. Remember to use the same applicationId for both the phone and the Wear application.
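For reference, the declaration in the Wear module looks roughly like this (the capability string here is an example; it just has to match the value your CAPABILITY_WEAR_APP constant holds on the phone side):
<!-- Wear module: res/values/wear.xml -->
<resources>
    <string-array name="android_wear_capabilities">
        <item>verify_wear_app</item>
    </string-array>
</resources>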
AFAIK, there is no out-of-the-box solution to do it.
If your Wear app has no activities (and therefore no way to be started by the user), what you can do is periodically send something like an IS_INSTALLED message to the Wear device while the handheld app is in the foreground, until the Wear app puts its version number into the data layer. On application update, you can check the version number in the data layer, and if it's lower than the current version, repeat the procedure.
This approach also solves the problem of the Wear device not being connected (or out of range, which is essentially the same).
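A rough sketch of the handheld-side check, under the assumption that the Wear app writes its versionCode to a DataItem at a path invented here ("/wear_app_version", key "version"):
Wearable.getDataClient(context)
        .getDataItems(Uri.parse("wear://*/wear_app_version"))
        .addOnSuccessListener(buffer -> {
            int wearVersion = -1; // stays -1 if the Wear app never reported in
            for (DataItem item : buffer) {
                wearVersion = DataMapItem.fromDataItem(item).getDataMap().getInt("version");
            }
            buffer.release();
            if (wearVersion < BuildConfig.VERSION_CODE) {
                // Wear app missing or outdated: keep sending the IS_INSTALLED ping
            }
        });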

Updating Google Play Services

ALL,
I'm trying to develop an Android application. The application will use geolocation, which is based on Google Play Services.
I also have a phone for testing (Samsung with Android 4.2.2). When checking things with Eclipse I see that the device has this service, but its version is not the same as the one the application was developed with.
So I push code which should go to the Play Store and update the service to bring the proper version onto the device. Note that this device has no cell service, only Wi-Fi (meaning it's just a piece of hardware, not a phone).
Now when I run this code it goes to the Play Store and continuously tries to find the appropriate .apk.
The code I pushed is as follows:
int res = GooglePlayServicesUtil.isGooglePlayServicesAvailable(getApplicationContext());
if (res != ConnectionResult.SUCCESS) {
    try {
        GooglePlayServicesUtil.getErrorDialog(res, this, RQS_GooglePlayServices).show();
    } catch (Exception e) {
        Utils.displayErrorDialog(this, e.getMessage());
    }
}
What am I missing? It should just be a straight update.
Thank you.
My problem sounds related. I have Foursquare installed on my Android (Jelly Bean) phone, and when I launch Foursquare it pops up with "you must upgrade Google Play" (sounds like what you are trying to do in your app). I then click the button and it launches the Play Store to upgrade. The upgrade consistently stops progressing at 54% completion, after multiple attempts. I suspect memory size, as my phone is slow, underpowered, and low on memory.
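For reference, GooglePlayServicesUtil has since been deprecated in favour of GoogleApiAvailability; a minimal sketch of the equivalent check (reusing the request-code constant from the question):
GoogleApiAvailability availability = GoogleApiAvailability.getInstance();
int res = availability.isGooglePlayServicesAvailable(this);
if (res != ConnectionResult.SUCCESS && availability.isUserResolvableError(res)) {
    // Shows the standard "update Google Play services" dialog
    availability.getErrorDialog(this, res, RQS_GooglePlayServices).show();
}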

Offline Speech Recognition In Android (JellyBean)

It looks as though Google has made offline speech recognition available from Google Now for third-party apps. It is being used by the app named Utter.
Has anyone seen any implementations of how to do simple voice commands with this offline speech rec? Do you just use the regular SpeechRecognizer API and it works automatically?
Google did quietly enable offline recognition in that Search update, but there is (as yet) no API or additional parameters available within the SpeechRecognizer class. {See Edit at the bottom of this post} The functionality is available with no additional coding; however, the user's device will need to be configured correctly for it to begin working. This is where the problem lies, and I would imagine it is why a lot of developers assume they are 'missing something'.
Also, Google have restricted certain Jelly Bean devices from using the offline recognition due to hardware constraints. Which devices this applies to is not documented; in fact, nothing is documented, so configuring the capabilities for the user has proved to be a matter of trial and error (for them). It works for some straight away; for those that it doesn't, this is the 'guide' I supply them with.
1. Make sure the default Android Voice Recogniser is set to Google, not Samsung/Vlingo.
2. Uninstall any offline recognition files you already have installed from the Google Voice Search settings.
3. Go to your Android application settings and see if you can uninstall the updates for the Google Search and Google Voice Search applications.
4. If you can't do the above, go to the Play Store and see if you have the option there.
5. Reboot (if you achieved 2, 3 or 4).
6. Update Google Search and Google Voice Search from the Play Store (if you achieved 3 or 4, or if an update is available anyway).
7. Reboot (if you achieved 6).
8. Install English UK offline language files.
9. Reboot.
10. Use utter! with a connection.
11. Switch to aeroplane mode and give it a try.
12. Once it is working, the offline recognition of other languages, such as English US, should start working too.
EDIT: Temporarily changing the device locale to English UK also seems to kickstart this to work for some.
Some users reported they still had to reboot a number of times before it would begin working, but they all got there eventually, often with no obvious trigger; the key lies inside the Google Search APK, so it's not in the public domain or part of AOSP.
From what I can establish, Google tests the availability of a connection prior to deciding whether to use offline or online recognition. If a connection is available initially but is lost prior to the response, Google will supply a connection error; it won't fall back to offline. As a side note, if a request for the network-synthesised voice has been made, there is no error supplied if it fails – you get silence.
The Google Search update enabled no additional features in Google Now, and in fact, if you try to use it with no internet connection, it will error. I mention this as I wondered if the ability would be withdrawn as quietly as it appeared, and therefore shouldn't be relied upon in production.
If you intend to start using the SpeechRecognizer class, be warned: there is a pretty major bug associated with it, which requires your own implementation to handle.
Not being able to specifically request offline = true makes controlling this feature impossible without manipulating the data connection. Rubbish. You'll get hundreds of user emails asking you why you haven't enabled something so simple!
EDIT: Since API level 23, a new parameter EXTRA_PREFER_OFFLINE has been added, which the Google recognition service does appear to adhere to.
Hope the above helps.
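A minimal sketch of using that parameter (API 23+; the request code is our own constant):
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true);
startActivityForResult(intent, REQUEST_SPEECH); // handle results in onActivityResult()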
I would like to improve, with images, the guide that the answer https://stackoverflow.com/a/17674655/2987828 gives its users. It is the sentence "For those that it doesn't, this is the 'guide' I supply them with." that I want to improve.
The user should tap the four buttons highlighted in blue in the original answer's screenshots (not reproduced here).
Then the user can select any desired languages. When the download is done, they should disconnect from the network and then tap the microphone button on the keyboard.
It worked for me (Android 4.1.2): language recognition then worked out of the box, without rebooting. I can now dictate instructions to the shell of Terminal Emulator! And it is twice as fast offline as online, on an ASUS PadFone 2.
A simple and flexible offline recognition on Android is implemented by CMUSphinx, an open source speech recognition toolkit. It works purely offline, is fast, and is configurable. It can, for example, listen continuously for a keyword.
You can find latest code and tutorial here.
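As an illustration, keyword spotting with the pocketsphinx-android library looks roughly like this (model file names follow the project's tutorial; error handling omitted; treat this as a sketch, not a drop-in):
// Sync the bundled model files to storage, then build a recognizer
Assets assets = new Assets(context);
File assetsDir = assets.syncAssets();
SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
        .setAcousticModel(new File(assetsDir, "en-us-ptm"))
        .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
        .getRecognizer();
recognizer.addListener(listener); // your edu.cmu.pocketsphinx.RecognitionListener
// Listen continuously for a single key phrase
recognizer.addKeyphraseSearch("wakeup", "hello computer");
recognizer.startListening("wakeup");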
Update in 2019: Time goes fast, CMUSphinx is not that accurate anymore. I recommend to try Kaldi toolkit instead. The demo is here.
In short, I don't have the implementation, but the explanation.
Google did not make offline speech recognition available to third-party apps. Offline recognition is only accessible via the keyboard. Ben Randall (the developer of utter!) explains his workaround in an article at Android Police:
I had implemented my own keyboard and was switching between Google Voice Typing and the user's default keyboard with an invisible edit text field and transparent Activity to get the input. Dirty hack!
This was the only way to do it, as offline Voice Typing could only be triggered by an IME or a system application (that was my root hack). The other type of recognition API … didn't trigger it and just failed with a server error. … A lot of work wasted for me on the workaround! But at least I was ready for the implementation...
From Utter! Claims To Be The First Non-IME App To Utilize Offline Voice Recognition In Jelly Bean
I successfully implemented my Speech-Service with offline capabilities by using onPartialResults when offline and onResults when online.
I was dealing with this and I noticed that you need to install the offline package for your language. My language setting was "Español (Estados Unidos)", but there is no offline package for that language, so when I turned off all network connectivity I was getting an alert from RecognizerIntent saying that it can't reach Google. Then I changed the language to "English (US)" (because I already had its offline package), launched the RecognizerIntent, and it just worked.
Key: the language setting must match an installed offline voice recognizer package.
It is apparently possible to manually install offline voice recognition by downloading the files directly and installing them in the right locations manually. I guess this is just a way to bypass Google hardware requirements.
However, personally I didn't have to reboot or anything, simply changing to UK and back again did it.
A working example is given below:
MyService.class
public class MyService extends Service implements SpeechDelegate, Speech.stopDueToDelay {

    public static SpeechDelegate delegate;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        //TODO do something useful
        try {
            if (VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
                ((AudioManager) Objects.requireNonNull(
                        getSystemService(Context.AUDIO_SERVICE))).setStreamMute(AudioManager.STREAM_SYSTEM, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        Speech.init(this);
        delegate = this;
        Speech.getInstance().setListener(this);

        if (Speech.getInstance().isListening()) {
            Speech.getInstance().stopListening();
        } else {
            System.setProperty("rx.unsafe-disable", "True");
            RxPermissions.getInstance(this).request(permission.RECORD_AUDIO).subscribe(granted -> {
                if (granted) { // Always true pre-M
                    try {
                        Speech.getInstance().stopTextToSpeech();
                        Speech.getInstance().startListening(null, this);
                    } catch (SpeechRecognitionNotAvailable exc) {
                        //showSpeechNotSupportedDialog();
                    } catch (GoogleVoiceTypingDisabledException exc) {
                        //showEnableGoogleVoiceTyping();
                    }
                } else {
                    Toast.makeText(this, R.string.permission_required, Toast.LENGTH_LONG).show();
                }
            });
        }
        return Service.START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        //TODO for communication return IBinder implementation
        return null;
    }

    @Override
    public void onStartOfSpeech() {
    }

    @Override
    public void onSpeechRmsChanged(float value) {
    }

    @Override
    public void onSpeechPartialResults(List<String> results) {
        for (String partial : results) {
            Log.d("Result", partial + "");
        }
    }

    @Override
    public void onSpeechResult(String result) {
        Log.d("Result", result + "");
        if (!TextUtils.isEmpty(result)) {
            Toast.makeText(this, result, Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onSpecifiedCommandPronounced(String event) {
        try {
            if (VERSION.SDK_INT >= VERSION_CODES.KITKAT) {
                ((AudioManager) Objects.requireNonNull(
                        getSystemService(Context.AUDIO_SERVICE))).setStreamMute(AudioManager.STREAM_SYSTEM, true);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (Speech.getInstance().isListening()) {
            Speech.getInstance().stopListening();
        } else {
            RxPermissions.getInstance(this).request(permission.RECORD_AUDIO).subscribe(granted -> {
                if (granted) { // Always true pre-M
                    try {
                        Speech.getInstance().stopTextToSpeech();
                        Speech.getInstance().startListening(null, this);
                    } catch (SpeechRecognitionNotAvailable exc) {
                        //showSpeechNotSupportedDialog();
                    } catch (GoogleVoiceTypingDisabledException exc) {
                        //showEnableGoogleVoiceTyping();
                    }
                } else {
                    Toast.makeText(this, R.string.permission_required, Toast.LENGTH_LONG).show();
                }
            });
        }
    }

    @Override
    public void onTaskRemoved(Intent rootIntent) {
        // Restarting the service if it is removed.
        PendingIntent service =
                PendingIntent.getService(getApplicationContext(), new Random().nextInt(),
                        new Intent(getApplicationContext(), MyService.class), PendingIntent.FLAG_ONE_SHOT);
        AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        assert alarmManager != null;
        alarmManager.set(AlarmManager.ELAPSED_REALTIME_WAKEUP, 1000, service);
        super.onTaskRemoved(rootIntent);
    }
}
For more details,
https://github.com/sachinvarma/Speech-Recognizer
Hope this will help someone in the future.
