I'm using Text to Speech (TTS) in my Android app; I've pasted the code below. TTS works, but the voice is not very clear: it speaks so quickly that it is not fully understandable. I tried setting Locale.US and using setPitch and setSpeechRate, but the result is not convincing. I suspected there was a problem with my phone (a Samsung Galaxy S2), so I installed the Google Translate app from the Play Store, and there the voice was really clear.
My app will be used by kids, so I want to make sure the voice is really clear.
I've been struggling with this for the past few days. It would be great if you could give me some pointers on where I'm going wrong or how to improve.
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        int result = tts.setLanguage(Locale.getDefault());
        //tts.setSpeechRate((float) 0.8);
        //tts.setPitch(1.0f);
        if (result == TextToSpeech.LANG_MISSING_DATA
                || result == TextToSpeech.LANG_NOT_SUPPORTED) {
            Log.e("TTS", "This Language is not supported");
        } else {
            speakOut(0);
        }
    } else {
        Intent installIntent = new Intent();
        installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
        startActivity(installIntent);
    }
}

private void speakOut(int position) {
    tts.speak("Some text goes here", TextToSpeech.QUEUE_FLUSH, null);
}
Note: I tried values such as 0.5f and 0.8f for both setPitch and setSpeechRate, but the voice is still not as clear as in the Google Translate app.
Your code looks fine.
TTS engines are shared across the system. Samsung ships its own TTS engine; most other phones use Pico TTS. The good thing is that your app is independent of the engine: you can install as many TTS engines as you want, and when your app requests TTS the user can be prompted with a pop-up to select which installed synthesizer to use.
For me Pico TTS worked fine. The speech rate was normal; I just raised the pitch a bit to make it sound less robotic:
tts.setPitch(1.1f);
Try it with Pico TTS and report back.
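For what it's worth, here is a minimal sketch of how the rate and pitch tweaks can be combined once the engine has initialized (the 0.8 and 1.1 values are only illustrative starting points; every engine sounds different):

@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        tts.setLanguage(Locale.US);
        tts.setSpeechRate(0.8f); // below 1.0f slows the speech down
        tts.setPitch(1.1f);      // above 1.0f raises the pitch slightly
        speakOut(0);
    }
}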
Related
I have an Android app with a custom intent that is fired by the Google Assistant with some text like "open my app and activate some action" (for example, "open clients database and sort clients").
All this works very well, but I would like to add some speech once the job is done, maybe a "job done" message or something more specific like "clients list is now sorted".
Is this possible with the Assistant? Can we send back a result for it to speak?
I haven't found any way to have the Assistant itself speak a result, but I could play speech using TextToSpeech.speak. Very simple:

// Declared as a field so the listener can reference it once initialization completes.
private TextToSpeech oTextToSpeech;
...
oTextToSpeech = new TextToSpeech(getApplicationContext(), new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        if (status != TextToSpeech.ERROR) {
            oTextToSpeech.speak("Hello this is a test", TextToSpeech.QUEUE_ADD, null);
        }
    }
});

The only problem is that I cannot use the same voice as the Assistant, but it works very well and it is very simple.
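One side note of my own (not from the Assistant docs): since the engine is created on the fly, it is worth releasing it when the utterance is done, for example:

// Release the engine when finished (e.g. in onDestroy()).
if (oTextToSpeech != null) {
    oTextToSpeech.stop();
    oTextToSpeech.shutdown();
}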
I am trying to follow this tutorial for my Android Wear app:
https://www.sitepoint.com/using-android-text-to-speech-to-create-a-smart-assistant/
Here is the code for my Activity file:
import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;
import android.util.Log;

import java.util.Locale;

public class ScoresActivity extends Activity {

    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_scores);

        // Text to speech setup
        tts = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
            @Override
            public void onInit(int status) {
                System.out.println("status: " + status); // Always returns -1
                if (status == TextToSpeech.SUCCESS) {
                    int result = tts.setLanguage(Locale.US);
                    if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                        Log.e("TTS", "This Language is not supported");
                    }
                    speak("Hello");
                } else {
                    Log.e("TTS", "Initialization Failed!");
                }
            }
        });
    }
I always see this error message in the Logs:
Is it even possible to run the Android SDK's text-to-speech library on wearable devices? I tried running this code on a mobile Android app and everything worked fine.
Yes, this is possible; there are even docs for this feature in Adding Voice Capabilities:
Voice actions are an important part of the wearable experience. They let users carry out actions hands-free and quickly. Wear provides two types of voice actions:
System-provided: These voice actions are task-based and are built into the Wear platform. You filter for them in the activity that you want to start when the voice action is spoken. Examples include "Take a note" or "Set an alarm".
App-provided: These voice actions are app-based, and you declare them just like a launcher icon. Users say "Start [your app name]" to use these voice actions, and an activity that you specify starts.
You can also check this SO post for additional reference.
It depends on which device you have. I think it needs Android Wear 2.0, and a speaker probably makes it more likely. I'm only saying that based on knowing that my Nixon Mission does not have TTS installed but the LG Urbane 2 does. Very annoying, as TTS could be used over Bluetooth.
It would be good to get a full list of supported devices.
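If it helps, here is a rough way to check at runtime (from an Activity) whether a watch has any TTS engine at all; a sketch only, relying on getEngines() to list the speech engines installed on the device:

// Probe for installed TTS engines; an empty list suggests the watch cannot do on-device TTS.
TextToSpeech probe = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        if (status != TextToSpeech.SUCCESS) {
            Log.w("TTS", "TTS failed to initialize on this device");
        }
    }
});
for (TextToSpeech.EngineInfo engine : probe.getEngines()) {
    Log.d("TTS", "Installed engine: " + engine.name);
}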
I have an application that, on certain events, turns a normal notification into text-to-speech, since sometimes the phone isn't available to the user and it is safer not to handle the phone.
For example, doing so while driving is dangerous, so I want to turn the notifications into text-to-speech.
I've searched for a long time for an explanation of how to switch to text-to-speech while driving, but I can't find any reference anywhere.
For generating text-to-speech I have this part, which works fine:
private TextToSpeech mTextToSpeech;

public void sayText(Context context, final String message) {
    mTextToSpeech = new TextToSpeech(context, new TextToSpeech.OnInitListener() {
        @Override
        public void onInit(int status) {
            try {
                if (mTextToSpeech != null && status == TextToSpeech.SUCCESS) {
                    mTextToSpeech.setLanguage(Locale.US);
                    mTextToSpeech.speak(message, TextToSpeech.QUEUE_ADD, null);
                }
            } catch (Exception ex) {
                System.out.print("Error handling TextToSpeech GCM notification " + ex.getMessage());
            }
        }
    });
}
But I don't know how to check whether I'm currently driving or not.
As Ashwin suggested, you can use the Activity Recognition API, but there's a downside: the driving samples you receive carry a 'confidence' field which isn't always accurate, so you'll have to do extra work (such as checking locations to see whether you actually moved) to know for sure that the user is driving.
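To illustrate the confidence filtering (a sketch only; DrivingReceiver and the 75 threshold are my own placeholders, and it assumes you have already requested activity updates with a PendingIntent pointing at this receiver):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

public class DrivingReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (!ActivityRecognitionResult.hasResult(intent)) {
            return;
        }
        ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
        DetectedActivity activity = result.getMostProbableActivity();
        // Only treat it as driving when the type is IN_VEHICLE and the confidence is high enough.
        if (activity.getType() == DetectedActivity.IN_VEHICLE && activity.getConfidence() >= 75) {
            // switch the notification to text-to-speech here, e.g. call sayText(context, message)
        }
    }
}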
You can also use Google's Fence API, which allows you to define a fence on activities such as driving, walking, running, etc. This API launched recently. If you want a sample of how to use it, see this answer.
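Roughly, registering a driving fence with that Fence API looks like this (a sketch only; client is assumed to be a connected GoogleApiClient with the Awareness API enabled, pendingIntent is your fence callback, and "driving_fence_key" is just an example key):

// Register an Awareness fence that fires while the user is detected to be in a vehicle.
AwarenessFence drivingFence = DetectedActivityFence.during(DetectedActivityFence.IN_VEHICLE);
FenceUpdateRequest request = new FenceUpdateRequest.Builder()
        .addFence("driving_fence_key", drivingFence, pendingIntent)
        .build();
Awareness.FenceApi.updateFences(client, request);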
Or you can pull this git project (everything is free), which does exactly what you want: it adds text-to-speech to the normal notification when you're driving.
In order to know whether you are driving or not, you can use the Activity Recognition API.
Here is a great tutorial that might help you out: Tutorial and Source Code
I am learning to write an app that performs TTS on given strings, and have tried an example modified from the web.
The code is as follows:
// setup TTS part 1
mTts = new TextToSpeech(Lesson2_dialog_revision_simple.this, this); // TextToSpeech.OnInitListener

speakBtn.setOnClickListener(new OnClickListener() {
    public void onClick(View v) {
        StringTokenizer loveTokens = new StringTokenizer("他們 one two是 three ", ",.");
        int i = 0;
        loveArray = new String[loveTokens.countTokens()];
        while (loveTokens.hasMoreTokens()) {
            loveArray[i++] = loveTokens.nextToken();
        }
        speakText();
    }
});
}
// setup TTS part 2
@Override
public void onUtteranceCompleted(String utteranceId) {
    Log.v(TAG, "Get completed message for the utteranceId " + utteranceId);
    lastUtterance = Integer.parseInt(utteranceId);
}
// setup TTS part 3
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        int result = mTts.setLanguage(Locale.CHINESE); // <====== set speech location
        if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
            Toast.makeText(Lesson2_dialog_revision_simple.this, "Language is not supported", Toast.LENGTH_LONG).show();
            speakBtn.setEnabled(false);
        } else {
            speakBtn.setEnabled(true);
            mTts.setOnUtteranceCompletedListener(this);
        }
    }
}
// setup TTS part 4
private void speakText() {
    lastUtterance++;
    if (lastUtterance >= loveArray.length) {
        lastUtterance = 0;
    }
    Log.v(TAG, "the begin utterance is " + lastUtterance);
    for (int i = lastUtterance; i < loveArray.length; i++) {
        // params is a HashMap<String, String> field used to tag each utterance with an ID
        params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, String.valueOf(i));
        mTts.speak(loveArray[i], TextToSpeech.QUEUE_ADD, params);
    }
}
Questions:
Everything is OK if part 3 above uses int result = mTts.setLanguage(Locale.US); it reads out "one two three" in English perfectly (in the above example it skips all the Chinese words and just reads out "one two three").
However, if I change it to read Chinese by setting the language with setLanguage(Locale.CHINESE), it immediately toasts "Language is not supported".
I would like to ask:
Does the current TTS still not support Chinese? I would actually prefer Cantonese over Mandarin.
The phone IS able to recognize Cantonese when I input messages via speech (Cantonese). Is there some other way to perform TTS with Cantonese output?
Thanks!!
1 - The Google TTS engine in its current version does not support Cantonese as output yet. Putonghua (Mandarin) works fine.
2 - Ekho is a TTS engine that supports Cantonese.
You might want to try the TTS app I developed that works with both the Ekho and Google TTS engines: Voice Out TTS
As far as I know there's no specific Locale in Java to distinguish Cantonese from Putonghua, because Cantonese is a Chinese dialect; the Locale in Java refers only to the writing system (Simplified or Traditional).
For example, you can read a string written in Traditional Chinese in either Cantonese or Putonghua.
@Pearmak: you can check which languages are supported on your device:
int i = mTts.isLanguageAvailable(Locale.ENGLISH);
where mTts is a TextToSpeech object.
If the value of i is >= 0 then that language is supported on your device; otherwise it is not.
You may also pass a language locale:
int i = mTts.isLanguageAvailable(new Locale("zh", "CN")); // Chinese (Simplified, Mainland China)
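For example, a small sketch probing a few candidate Chinese locales against the installed engine (the locale choices are just examples):

// Log which Chinese locales the current engine can actually speak.
Locale[] candidates = {
        Locale.SIMPLIFIED_CHINESE,   // zh_CN
        Locale.TRADITIONAL_CHINESE,  // zh_TW
        new Locale("yue", "HK")      // Cantonese, if the engine exposes it
};
for (Locale locale : candidates) {
    int availability = mTts.isLanguageAvailable(locale);
    Log.d("TTS", locale + " supported: " + (availability >= TextToSpeech.LANG_AVAILABLE));
}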
Yue is a tiny Chinese text-to-speech (TTS) synthesis engine for Cantonese and Mandarin, aimed at offline embedded systems. It is extremely small, works offline and standalone, and produces PCM audio output with no need for a server or network connection. The synthesized voice has high naturalness for mixed text input, and the same text can be synthesized in either Cantonese or Mandarin, with Yale, Jyutping and Pinyin romanization. The engine can continuously produce and play voice for long text, with no limit on text length. It has a built-in detector that handles any mix of traditional Chinese, simplified Chinese, English, numbers, punctuation and symbols. Yue is written in ANSI C with no third-party dependencies, and runs on ARM and AVR embedded systems such as watches, toys and robots, on iPhone, Android and other mobile platforms, and of course on normal desktops, e-book readers, newspaper readers and story tellers. Because of its extremely small size it can be loaded into memory and embedded in other programs, which makes it well suited to embedded systems as well as desktop operating systems. The engine can have bindings for a large number of programming languages.
The link: http://www.sevenuc.com/en/tts.html
Google TTS recently added support for Cantonese (and also Mandarin). http://www.androidpolice.com/2015/07/24/google-tts-now-supports-four-new-languages-including-cantonese-and-mandarin/
Some phones have a Cantonese locale that you can use with TTS.
Try:
new Locale("yue", "HK"); // "yue" for 粤语 (Cantonese)
Alternatively, once you have set the system language to Cantonese, you can use setLanguage(Locale.getDefault()).
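A small sketch of that approach, assuming tts has already been initialized: try the Cantonese locale first and fall back to the device default if the engine rejects it.

Locale cantonese = new Locale("yue", "HK");
int result = tts.setLanguage(cantonese);
if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
    // Engine has no Cantonese voice; fall back to whatever the system is set to.
    tts.setLanguage(Locale.getDefault());
}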
I have an HTC Desire HD phone running Android 2.3.
TTS is working fine and speaks every text I give it. But when I use either of the lines below to map my own recorded audio to certain texts, it simply ignores the mapping and synthesizes the text, as if the line were not there at all!
tts.addSpeech("salam", "/sdcard/salam.wav");
tts.addSpeech("shalam", "com.company.appname", R.raw.shalam);
...
tts.speak("salam", TextToSpeech.QUEUE_FLUSH, null); //<--This isn't playing my voice file.
tts.speak("shalam", TextToSpeech.QUEUE_FLUSH, null); //<--Neither is this
I am sure both files exist.
Why is that? Are there any restrictions on the sound files, for example on their sample rate, or on being mono or stereo?
I already checked the docs and saw nothing related.
OK, I found my problem, a very silly situation which wasted several hours of my time! I hope this helps if someone else makes the same mistake.
The mapping of texts must be postponed until TTS has been successfully initialized, for example in the onInit callback:
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        tts.setLanguage(Locale.US);
        mapVoices();
    } else {
        // ...
    }
}

private void mapVoices() {
    tts.addSpeech("salam", "/sdcard/salam.wav");
    tts.addSpeech("shalam", "com.company.appname", R.raw.shalam);
    //...
}