I want my app to read out the message contained in the push notification.
I've already searched the internet but couldn't find any working code.
I expect the notification text to be converted to speech and played automatically.
If you want an easy approach, you can use the SpeakerBox library. Just create a new instance:
Speakerbox speakerbox = new Speakerbox(activity);
Now you are all set. To speak the text "Hello World", just do this:
Speakerbox speakerbox = new Speakerbox(activity);
speakerbox.play("Hello World");
You will find more details at the linked page.
The Gradle dependency for this library is:
implementation 'com.mapzen.android:speakerbox:1.4.1'
You should use TextToSpeech within your notification class/service. Note that the engine is only ready once its OnInitListener fires, so configure and use it from onInit:
TextToSpeech tts = new TextToSpeech(this, this); // (Context, TextToSpeech.OnInitListener)
// In onInit(int status), once status == TextToSpeech.SUCCESS:
tts.setLanguage(Locale.US);
tts.speak("Text to say aloud", TextToSpeech.QUEUE_ADD, null);
Here is a link with more information about TextToSpeech.
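To connect this back to the original question, a NotificationListenerService can pull the text out of a posted notification and hand it to the engine. A minimal sketch (the class and field names are mine, not from any library; the manifest needs the BIND_NOTIFICATION_LISTENER_SERVICE permission and the user must grant notification access in system settings):

```java
import java.util.Locale;

import android.app.Notification;
import android.service.notification.NotificationListenerService;
import android.service.notification.StatusBarNotification;
import android.speech.tts.TextToSpeech;

public class SpeakingListener extends NotificationListenerService
        implements TextToSpeech.OnInitListener {
    private TextToSpeech tts;
    private boolean ready;

    @Override
    public void onCreate() {
        super.onCreate();
        // Engine is not usable until onInit() reports SUCCESS.
        tts = new TextToSpeech(this, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);
            ready = true;
        }
    }

    @Override
    public void onNotificationPosted(StatusBarNotification sbn) {
        // EXTRA_TEXT holds the notification's body text (API 19+).
        CharSequence text = sbn.getNotification().extras
                .getCharSequence(Notification.EXTRA_TEXT);
        if (ready && text != null) {
            tts.speak(text.toString(), TextToSpeech.QUEUE_ADD, null);
        }
    }

    @Override
    public void onDestroy() {
        if (tts != null) {
            tts.shutdown();
        }
        super.onDestroy();
    }
}
```

This cannot run outside a device, so treat it as an outline of the moving parts rather than a drop-in solution.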
I've been developing a demo for speech recognition and ran into an issue. Could anyone help me? I called the startRecognizing interface and received an onError callback (error 11203, sub-error 3005, errorMessage: "service unavailable").
I followed the documentation. My code:
// Create an Intent to set parameters.
val mSpeechRecognizerIntent = Intent(MLAsrConstants.ACTION_HMS_ASR_SPEECH)
// Use the Intent for recognition parameter settings.
mSpeechRecognizerIntent
    // Set the recognition language to English. If this parameter is not set,
    // English is recognized by default. Examples: "zh-CN": Chinese; "en-US": English;
    // "fr-FR": French; "es-ES": Spanish; "de-DE": German; "it-IT": Italian;
    // "ar": Arabic; "th_TH": Thai; "ms_MY": Malay; "fil_PH": Filipino.
    .putExtra(MLAsrConstants.LANGUAGE, "en-US")
    // Set how the recognition result is returned. If omitted, this mode is used by default:
    // MLAsrConstants.FEATURE_WORDFLUX: recognizes and returns text through onRecognizingResults.
    // MLAsrConstants.FEATURE_ALLINONE: after recognition completes, text is returned through onResults.
    .putExtra(MLAsrConstants.FEATURE, MLAsrConstants.FEATURE_WORDFLUX)
    // Set the application scenario. MLAsrConstants.SCENES_SHOPPING indicates shopping,
    // which is supported only for Chinese. In this scenario, recognition of Huawei
    // product names has been optimized.
    .putExtra(MLAsrConstants.SCENES, MLAsrConstants.SCENES_SHOPPING)
// Start speech recognition.
mSpeechRecognizer.startRecognizing(mSpeechRecognizerIntent)
Do you have any idea why this could be happening? Please help, thanks!!
As your own code comment notes, the SCENES_SHOPPING scenario is supported only for Chinese, so it conflicts with "en-US". Either change "en-US" to "zh-CN",
or comment out the line ".putExtra(MLAsrConstants.SCENES, MLAsrConstants.SCENES_SHOPPING)".
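For quick reference, the language codes scattered through the code comment above can be collected into a small lookup table, so the app can validate a choice before starting recognition. A sketch in plain Java (the helper class and its names are mine, not part of the ML Kit API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AsrLanguages {
    // Language name -> code, as listed in the ML Kit code comment.
    public static final Map<String, String> CODES = new LinkedHashMap<>();
    static {
        CODES.put("Chinese", "zh-CN");
        CODES.put("English", "en-US");
        CODES.put("French", "fr-FR");
        CODES.put("Spanish", "es-ES");
        CODES.put("German", "de-DE");
        CODES.put("Italian", "it-IT");
        CODES.put("Arabic", "ar");
        CODES.put("Thai", "th_TH");
        CODES.put("Malay", "ms_MY");
        CODES.put("Filipino", "fil_PH");
    }

    // Falls back to English, which the docs comment says is the default.
    public static String codeFor(String language) {
        return CODES.getOrDefault(language, "en-US");
    }
}
```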
I'm trying to start Google Assistant and send a text question (not voice) from my app when I press a button. For example: I tap a button, and Google Assistant answers my question "How is the weather today?".
Is this possible?
EDIT:
When I press a button I want the Google Assistant to do some actions and give a spoken feedback.
For example: "Read the weather for tomorrow and set the alarm to 6.30 am".
It looks as though you can launch it by referencing the search app's package and class name directly:
String queryString = "How is the weather today?";
Intent intent = new Intent(Intent.ACTION_WEB_SEARCH);
intent.setClassName("com.google.android.googlequicksearchbox",
"com.google.android.googlequicksearchbox.SearchActivity");
intent.putExtra("query", queryString);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(intent);
If you're already using the Assistant SDK, it's pretty simple. Just replace AudioInConfig with a text query. Here's how I do it:
AssistConfig config = AssistConfig.newBuilder()
.setTextQuery("Your text query goes here!")
//.setAudioInConfig(audioInConfig)
.setAudioOutConfig(audioOutConfig)
.setDeviceConfig(deviceConfig)
.setDialogStateIn(dialogStateIn)
.setScreenOutConfig(screenOutConfig)
.build();
AssistRequest request = AssistRequest.newBuilder().setConfig(config).build();
Then send the request to the server over gRPC and you'll get a spoken response back.
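For that last step, the request goes out over a bidirectional gRPC stream. A rough sketch, assuming the Java stub generated from the google.assistant.embedded.v1alpha2 proto and an already-authenticated channel (the wrapper class and variable names are mine):

```java
import com.google.assistant.embedded.v1alpha2.AssistRequest;
import com.google.assistant.embedded.v1alpha2.AssistResponse;
import com.google.assistant.embedded.v1alpha2.EmbeddedAssistantGrpc;
import io.grpc.ManagedChannel;
import io.grpc.stub.StreamObserver;

public class AssistSender {
    // channel: an authenticated ManagedChannel; request: the AssistRequest built above.
    static void send(ManagedChannel channel, AssistRequest request) {
        EmbeddedAssistantGrpc.EmbeddedAssistantStub stub =
                EmbeddedAssistantGrpc.newStub(channel);

        StreamObserver<AssistRequest> requestObserver =
                stub.assist(new StreamObserver<AssistResponse>() {
                    @Override
                    public void onNext(AssistResponse response) {
                        // response.getAudioOut() carries chunks of the spoken reply;
                        // feed them to an AudioTrack for playback.
                    }

                    @Override
                    public void onError(Throwable t) {
                        // Handle gRPC errors (auth, network, quota).
                    }

                    @Override
                    public void onCompleted() {
                        // The assistant has finished responding.
                    }
                });

        requestObserver.onNext(request);
        requestObserver.onCompleted();
    }
}
```

This needs OAuth credentials on the channel and a registered device, so it is an outline of the call shape, not a complete client.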
I'm trying to build a simple Hello World GDK program for Google Glass. I've looked everywhere, but all the samples I could find used TimelineManager, which was removed by Google after XE16.
What I'm trying to do is to create a live card that shows texts (Hello world!) in the middle.
I've tried to modify the code from HERE (HuskyHuskie's answer) and HERE (IsabelHM's answer).
However, no matter what I did, no option or voice command appeared on the Glass, even though the console showed that the program was installed on the device.
What I mostly modified was taking out the TimelineManager part and replacing
mLiveCard = mTimelineManager.createLiveCard(LIVE_CARD_ID);
with
mLiveCard = new LiveCard(this,LIVE_CARD_ID);
Also, I'm relatively new to Android. I don't quite understand how R.id.XXXX and R.layout.XXXX can be missing from the resources. Do you need to define them in the Manifest, or somewhere else?
The following is the onStartCommand method:
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    RemoteViews aRV = new RemoteViews(this.getPackageName(),
            R.layout.card_text);
    if (mLiveCard == null) {
        // mLiveCard = mTimelineManager.createLiveCard(LIVE_CARD_ID);
        mLiveCard = new LiveCard(this, LIVE_CARD_ID);
        aRV.setTextViewText(R.id.main_text, INTRO);
        mLiveCard.setViews(aRV);
        Intent mIntent = new Intent(this, MainActivity.class);
        mIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK);
        mLiveCard.setAction(PendingIntent.getActivity(this, 0, mIntent, 0));
        mLiveCard.publish(LiveCard.PublishMode.REVEAL);
    }
    return START_STICKY;
}
OK, I got it to work by following THIS.
Note that the Manifest there is not entirely correct. You need to add this line to the Manifest after the XE16 update:
<uses-permission android:name="com.google.android.glass.permission.DEVELOPMENT" />
See the post HERE for reference.
I strongly recommend using our official samples available on GitHub and reading our documentation, as all of those caveats are explained and handled there.
If you are using the latest version of Android Studio, you can also easily create a new project through our available templates: LiveCard and Immersion.
Open Android Studio
Create a new project
Enter your project information: application name, package name, etc.
Select Glass as the form factor: make sure to unselect all the other form factors unless you want to develop for those devices as well.
Select the Immersion Activity or the Simple Live Card template
Build and run your new Hello World project on Glass!
I'm using the default Pico Android TTS engine with IPA characters, like this:
String text3 = "<speak xml:lang=\"fr-FR\"> <phoneme alphabet=\"ipa\" ph=\"+"+words+"\"/>.</speak>";
myTTS.speak(text3, TextToSpeech.QUEUE_ADD, null);
It generally works, but it fails for some letters like "ã" or "ɑ".
So my question is: how can I add these letters/sounds to this TTS engine?
You can use addEarcon() to add sounds to TextToSpeech: link.
This method is used to add earcons. It links a text token to a specific sound file.
You can find an example of this below:
mTts = new TextToSpeech(this, new OnInitListener() {
    @Override
    public void onInit(int status) {
        mTts.addEarcon("[tock]", "com.ideal.itemid", R.raw.tock_snd);
        showRecordingView();
    }
});
There is also a very good explanation of addEarcon in the book Professional Android Sensor Programming by Greg Milette and Adam Stroud, on pages 366 and 367.
You can also find an example at this link.
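Once registered, the earcon is played back by passing the same token to playEarcon on the TextToSpeech object:

```java
// "[tock]" must match the token previously registered with addEarcon().
mTts.playEarcon("[tock]", TextToSpeech.QUEUE_ADD, null);
```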
I want, for example, to have the language on my device set to Italian while having TTS speak English inside my app.
Any ideas?
Use the setLanguage method:
TextToSpeech mTts;
mTts = new TextToSpeech(this, this);
mTts.setLanguage(Locale.US);
//mTts.isLanguageAvailable(Locale.FRANCE)
Refer to this link, section Languages and Locale.
I also advise you to watch the Google I/O video.
Note that the device's text-to-speech default settings override your app's setting.
You can send the user to the text-to-speech settings screen via an Intent and ask them to clear the default setting:
ComponentName componentToLaunch = new ComponentName(
        "com.android.settings",
        "com.android.settings.TextToSpeechSettings");
Intent intent = new Intent();
intent.addCategory(Intent.CATEGORY_LAUNCHER);
intent.setComponent(componentToLaunch);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
startActivity(intent);
Take a look at Using Text-to-Speech. You can set the language of your TextToSpeech object using setLanguage, like:
mTts.setLanguage(Locale.US); // here mTts is a TextToSpeech object
So, what you want shouldn't be a problem.
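One caveat worth adding: setLanguage returns a result code, and the device may not have English voice data installed even if the engine starts fine. A sketch of the check, to be run inside onInit() once the engine reports SUCCESS:

```java
// Inside onInit(), after status == TextToSpeech.SUCCESS:
int result = mTts.setLanguage(Locale.US);
if (result == TextToSpeech.LANG_MISSING_DATA
        || result == TextToSpeech.LANG_NOT_SUPPORTED) {
    // English voice data is absent; ask the system to install it.
    Intent installIntent = new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
    installIntent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
    startActivity(installIntent);
}
```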