synthesizeToFile failed in Android

TextToSpeech is initialized like this in onCreate():
tts = new TextToSpeech(this, this);
onInit is coded like this:
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        tts.setOnUtteranceProgressListener(new UtteranceProgressListener() {
            @Override
            public void onStart(String s) {
                Log.v(TAG, "onStart : " + s);
            }

            @Override
            public void onDone(String s) {
                tts.setLanguage(Locale.US);
                fabSpeak.setEnabled(true);
                Log.v(TAG, "Proceed");
            }

            @Override
            public void onError(String s) {
                Log.v(TAG, "onError : " + s);
            }
        });
        Log.v(TAG, "Proceed2");
    } else {
        Log.e(TAG, "Initialization Failed!");
    }
}
Only Proceed2 is printed. Proceed is printed only when I call
tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "1");
If I call synthesizeToFile without calling speak, like this:
int test = tts.synthesizeToFile(text, null, file, "tts");
if (test == TextToSpeech.SUCCESS) {
    Log.v(TAG, "Success");
}
the log prints Success, but the file is empty. If I synthesize the file after calling speak, then the file has data.
But I want to call synthesizeToFile without calling speak first. I don't know what is wrong here.
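One thing worth checking: synthesizeToFile() only queues the request, so a SUCCESS return value means the request was accepted, not that the file has been written yet. Below is a minimal sketch of waiting for the UtteranceProgressListener callback for the same utterance ID before reading the file; it reuses the tts, text and file variables from the question, and the "ttsFile" ID is only an illustration.
// Sketch: listen for onDone of the same utterance ID before touching the file.
final String utteranceId = "ttsFile";
tts.setOnUtteranceProgressListener(new UtteranceProgressListener() {
    @Override
    public void onStart(String id) { }

    @Override
    public void onDone(String id) {
        if (utteranceId.equals(id)) {
            // The engine has finished writing; the file should have data now.
            Log.v(TAG, "Synthesis finished, file length = " + file.length());
        }
    }

    @Override
    public void onError(String id) {
        Log.e(TAG, "Synthesis failed for " + id);
    }
});
int queued = tts.synthesizeToFile(text, null, file, utteranceId);
if (queued != TextToSpeech.SUCCESS) {
    Log.e(TAG, "synthesizeToFile was not queued");
}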

Related

Multiple Instances for TextToSpeech

I need to speak multiple languages, so I created an array of TextToSpeech instances.
private TextToSpeech[] mTextSpeechs_;

mTextSpeechs_ = new TextToSpeech[5];
mTextSpeechs_[0] = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        mTextSpeechs_[0].setLanguage(Locale.CHINESE);
        mTextSpeechs_[0].speak(getString(R.string.string_main_chineseready), TextToSpeech.QUEUE_FLUSH, null, "Display");
    }
});
mTextSpeechs_[1] = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        mTextSpeechs_[1].setLanguage(Locale.forLanguageTag("yue-HK"));
        mTextSpeechs_[1].speak(getString(R.string.string_main_hongkong), TextToSpeech.QUEUE_FLUSH, null, "Display");
    }
});
mTextSpeechs_[2] = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        mTextSpeechs_[2].setLanguage(Locale.JAPAN);
        mTextSpeechs_[2].speak(getString(R.string.string_main_japan), TextToSpeech.QUEUE_FLUSH, null, "Display");
    }
});
mTextSpeechs_[3] = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        mTextSpeechs_[3].setLanguage(Locale.KOREA);
        mTextSpeechs_[3].speak(getString(R.string.string_main_korea), TextToSpeech.QUEUE_FLUSH, null, "Display");
    }
});
mTextSpeechs_[4] = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
    @Override
    public void onInit(int status) {
        mTextSpeechs_[4].setLanguage(Locale.ENGLISH);
        mTextSpeechs_[4].speak(getString(R.string.string_main_english), TextToSpeech.QUEUE_FLUSH, null, "Display");
    }
});
....
// type 0: flush, type 1: add
public void speakMultiLanguage(String text, int type, int langidx) {
    if (type == 0)
        mTextSpeechs_[langidx].speak(text, TextToSpeech.QUEUE_FLUSH, null, "Display");
    else if (type == 1)
        mTextSpeechs_[langidx].speak(text, TextToSpeech.QUEUE_ADD, null, "Display");
}
Now, when I call the speakMultiLanguage function to speak a specified language, there is a delay of about 5 seconds before it starts speaking. If the language is the same as the last one used, there is no delay. Can anyone suggest a way to get rid of the delay?
I haven't tested your particular case of using multiple TTS instances in an array, but to me it looks unproductive, especially in terms of memory and resource usage.
In my case I initialise only one TTS object, in the onCreate() method of my Activity, and when I need to change to another language I call tts.setLanguage(Locale).
In my case, on my virtual device, this solution works immediately.
initialise:
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    tts = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.ENGLISH);
            } else {
                Log.e(TAG, "TTS fault");
            }
        }
    });
    //....
}
Change:
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private void prepareTTS(Locale newLocale) {
    if (!tts.getVoice().getLocale().getISO3Language().equals(newLocale.getISO3Language())) {
        tts.setLanguage(newLocale);
        Log.d(TAG, "ChangeTo: " + newLocale.getISO3Language());
    } else {
        Log.d(TAG, "The same");
    }
}
speech:
@TargetApi(Build.VERSION_CODES.LOLLIPOP)
private void ttsSpeak21(String text) {
    String utteranceId = this.hashCode() + "";
    int result = tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, utteranceId);
    if (result == TextToSpeech.ERROR) {
        Log.d(TAG, "Can't say");
    } else {
        Log.d(TAG, lang + " speaking!");
    }
}
free resources:
@Override
protected void onDestroy() {
    if (tts != null) {
        tts.stop();
        tts.shutdown();
    }
    super.onDestroy();
}
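With that in place, the speakMultiLanguage() method from the question can drop the array entirely. A rough sketch of how it could look, assuming a locales lookup table (introduced here for illustration, not part of the original code) that maps langidx to the Locale of each supported language:
// Sketch: one shared tts instance plus a lookup table instead of five engines.
private final Locale[] locales = {
        Locale.CHINESE,
        Locale.forLanguageTag("yue-HK"),
        Locale.JAPAN,
        Locale.KOREA,
        Locale.ENGLISH
};

public void speakMultiLanguage(String text, int type, int langidx) {
    prepareTTS(locales[langidx]);   // switches language only if it actually changed
    int queueMode = (type == 0) ? TextToSpeech.QUEUE_FLUSH : TextToSpeech.QUEUE_ADD;
    tts.speak(text, queueMode, null, "Display");
}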

NotificationListenerService : Stop Queue all current notifications?

I'm working on a notification-based app, for which I need to listen to incoming notifications. I've been able to listen to incoming calls, SMS, mail, etc. I have no clue how to listen for pings or messages from friends on WhatsApp via code; can this actually be done? The other problem is that the TTS service keeps being started: when the user sends a message twice, the TTS service runs twice. Please help me solve this problem.
@Override
public void onNotificationPosted(StatusBarNotification sbn) {
    super.onNotificationPosted(sbn);
    TelephonyManager telephonyManager = (TelephonyManager) getSystemService(TELEPHONY_SERVICE);
    int state = telephonyManager.getCallState();
    if (state != TelephonyManager.CALL_STATE_OFFHOOK) {
        String packageName = sbn.getPackageName();
        if (sbn.getNotification().tickerText != null) {
            if (packageName.contains("whatsapp")) {
                whatText = "Whatsapp Notification";
                initTTS(sbn.getNotification().tickerText + "");
            }
        }
    }
}

private void initTTS(String s) {
    text = whatText + " ";
    STD = s + " ";
    tts = new TextToSpeech(this, this);
    tts.setOnUtteranceProgressListener(new UtteranceProgressListener() {
        @Override
        public void onStart(String utteranceId) {
        }

        @Override
        public void onDone(String utteranceId) {
            stopSelf();
        }

        @Override
        public void onError(String utteranceId) {
        }
    });
}

@Override
public void onInit(int status) {
    if (status != TextToSpeech.ERROR) {
        tts.setLanguage(Locale.US);
        SharedPreferences preferences = getSharedPreferences("SET", MODE_PRIVATE);
        String Demo = preferences.getString("text", text);
        tts.speak(Demo + STD + " ", TextToSpeech.QUEUE_FLUSH, null);
    }
}
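One way to avoid the TTS running twice is to stop creating a new TextToSpeech for every notification and queue further messages instead of flushing them. A rough sketch, reusing the tts and whatText fields from the question; the ttsReady flag and speakNotification() helper are assumptions for illustration (ttsReady would be set to true in onInit() on SUCCESS):
// Sketch: create the engine once per service instance and queue utterances,
// instead of constructing a new TextToSpeech for every notification.
private boolean ttsReady;   // assumed flag, set to true in onInit() on SUCCESS

@Override
public void onCreate() {
    super.onCreate();
    tts = new TextToSpeech(this, this);   // onInit() sets the language once
}

private void speakNotification(String message) {
    if (ttsReady) {
        // QUEUE_ADD lets a second message wait its turn instead of restarting TTS
        tts.speak(whatText + " " + message, TextToSpeech.QUEUE_ADD, null);
    }
}

@Override
public void onDestroy() {
    if (tts != null) {
        tts.stop();
        tts.shutdown();
    }
    super.onDestroy();
}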

How can I get Text To Speech to read a foreign language?

When I check which languages are available, Thai (th) is listed as available, but it doesn't read the text.
@SuppressLint("NewApi")
private void speak() {
    if (tts != null && tts.isSpeaking()) {
        tts.stop();
    } else {
        tts = new TextToSpeech(this, this);
        tts.setLanguage(Locale.forLanguageTag("th")); //tts.getAvailableLanguages();
        tts.setSpeechRate(0.7f);
    }
}

@Override
public void onInit(int status) {
    tts.speak("ซึ่งมีระยะทางส่วนใหญ่เป็น ทางหลวงแผ่นดินหมายเลข (สายบางนา - หาดเล็ก) เป็นเส้นทางคมนาคมหลักเส้นหนึ่งของประเทศไทย ", TextToSpeech.QUEUE_FLUSH, null);
}
Edit your code like this:
@SuppressLint("NewApi")
private void speak() {
    if (tts != null && tts.isSpeaking()) {
        tts.stop();
    } else {
        tts = new TextToSpeech(this, this);
    }
}

@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        // setLanguage() takes a Locale, not a String
        int res = tts.setLanguage(new Locale("th", "TH"));
        //tts.getAvailableLanguages();
        tts.setSpeechRate(0.7f);
        if (res >= TextToSpeech.LANG_AVAILABLE) {
            tts.speak("ซึ่งมีระยะทางส่วนใหญ่เป็น ทางหลวงแผ่นดินหมายเลข (สายบางนา - หาดเล็ก) เป็นเส้นทางคมนาคมหลักเส้นหนึ่งของประเทศไทย ", TextToSpeech.QUEUE_FLUSH, null);
        }
    }
}
Because the TextToSpeech instance is created asynchronously, you will only hear the synthesis result if you use the tts object after onInit() has completed.
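If Thai still does not play even after this change, the voice data may simply not be installed on the device. A hedged sketch of a check that could be added once onInit() reports SUCCESS; isLanguageAvailable() and ACTION_INSTALL_TTS_DATA are standard platform APIs, but wiring them up this way is only an illustration:
// Sketch: verify that Thai voice data is actually installed before speaking.
int available = tts.isLanguageAvailable(new Locale("th", "TH"));
if (available == TextToSpeech.LANG_MISSING_DATA
        || available == TextToSpeech.LANG_NOT_SUPPORTED) {
    // Ask the engine to download the missing voice data.
    Intent installIntent = new Intent(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
    startActivity(installIntent);
} else {
    tts.setLanguage(new Locale("th", "TH"));
    tts.setSpeechRate(0.7f);
}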

Android TextToSpeech in Chrome ARC

I'm trying to port my Android app to Chrome using ARC Welder. The TextToSpeech component doesn't seem to work. In one activity I have an indeterminate progress circle that waits until the TTS engine is initialized; on Chrome, it either spins forever or throws a NullPointerException. Is TTS not available in Chrome? I'm running ChromeOS on a Chromebox.
UtteranceProgressListener ttsListener = new UtteranceProgressListener() {
    @Override
    public void onStart(String s) {
        Logg.d("speech started: " + s);
        if (loadingDialog.isShowing()) {
            loadingDialog.dismiss();
        }
    }

    @Override
    public void onDone(String s) {
        Logg.d("speech done: " + s);
        if (s.equals("1")) {
            nextWord();
        } else if (s.equals("2")) {
            CheckLeave();
        }
    }

    @Override
    public void onError(String s) {
        Logg.e("Text to Speech error speaking: " + s);
    }
};

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    showProgressDialog();
}

@Override
protected void onResume() {
    if (tts == null) {
        Logg.d("re-initializing TTS");
        tts = new TextToSpeech(getApplicationContext(),
                new TextToSpeech.OnInitListener() {
                    @Override
                    public void onInit(int status) {
                        if (status != TextToSpeech.ERROR) {
                            tts.setSpeechRate(.5f + .25f * (Integer) KVDB.GetValue("speechRate", 2));
                            tts.setLanguage(Locale.US);
                            if (pauseTime != 0) {
                                // Paused. Say nothing.
                            } else if (currentWord == null) {
                                startTime = new ExcelDate();
                                nextWord();
                            } else if (currentWord.length() == 0) {
                                nextWord();
                            } else {
                                reSpeak();
                            }
                        }
                    }
                });
        tts.setOnUtteranceProgressListener(ttsListener);
    }
    super.onResume();
}

private void showProgressDialog() {
    loadingDialog = new ProgressDialog(this);
    loadingDialog.setProgressStyle(ProgressDialog.STYLE_SPINNER);
    loadingDialog.setTitle(getString(R.string.test_loading_msg));
    loadingDialog.show();
}
As you found, it does look like we don't have any kind of default TTS service provider as part of the ARC package. There is not one provided by the base Android OS at all.
Please feel free to file a bug for it.
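Until such an engine exists, one way to avoid the spinner hanging forever is to check whether any TTS engine is installed before showing the dialog. A rough sketch, reusing the question's Logg helper and showProgressDialog(); the fallback behaviour here is only an assumption:
// Sketch: detect whether any TTS engine is installed before waiting on onInit().
Intent ttsQuery = new Intent(TextToSpeech.Engine.INTENT_ACTION_TTS_SERVICE);
List<ResolveInfo> engines = getPackageManager().queryIntentServices(ttsQuery, 0);
if (engines.isEmpty()) {
    Logg.e("No TTS engine available on this platform");
    // skip the progress dialog and fall back to a silent / text-only mode
} else {
    showProgressDialog();
}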

Speak failed: not bound to TTS Engine

I am trying to make an application that uses Speech to Text and Text to Speech.
The algorithm of the program is:
1 - when the user runs the program, it starts voice recognition
2 - after getting the input, the program repeats the same words the user said.
Here's my code:
public class MainActivity extends Activity implements TextToSpeech.OnInitListener, OnClickListener {

    protected static final int REQUEST_OK = 1234;
    String userSay;
    TextView text1;
    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        if (android.os.Build.VERSION.SDK_INT > 9) {
            StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
            StrictMode.setThreadPolicy(policy);
        }
        findViewById(R.id.button1).setOnClickListener(this);
        text1 = (TextView) findViewById(R.id.text1);
        tts = new TextToSpeech(this, this);
    }

    @Override
    public void onDestroy() {
        // Don't forget to shutdown tts!
        if (tts != null) {
            tts.stop();
            tts.shutdown();
        }
        super.onDestroy();
    }

    @Override
    public void onPause() {
        if (tts != null) {
            tts.stop();
            tts.shutdown();
        }
        super.onPause();
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            Log.e("TTS", "Initialization Success!");
            int result = tts.setLanguage(Locale.US);
            if (result == TextToSpeech.LANG_MISSING_DATA
                    || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                Log.e("TTS", "This Language is not supported");
            } else {
                String anyThing = " ";
                speakIt(anyThing);
            }
        } else {
            Log.e("TTS", "Initialization Failed!");
        }
    }

    private void speakIt(String someThing) {
        Log.e("something: ", someThing);
        tts.speak(someThing, TextToSpeech.QUEUE_ADD, null);
        Log.e("TTS", "called");
    }

    @Override
    public void onClick(View v) {
        text1.setText(" ");
        Intent i = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        i.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, this.getPackageName());
        i.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        i.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
        i.putExtra(RecognizerIntent.EXTRA_PROMPT, "I'm listening to you...");
        try {
            startActivityForResult(i, REQUEST_OK);
        } catch (Exception e) {
            Toast.makeText(this, "Error initializing speech to text engine.", Toast.LENGTH_LONG).show();
        }
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_OK && resultCode == RESULT_OK) {
            ArrayList<String> thingsYouSaid = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            userSay = thingsYouSaid.get(0);
        }
        try {
            String finalValue = getDataMethod(userSay);
            text1.setText(finalValue);
            Log.v("Status OK: ", finalValue);
            speakIt(finalValue);
            Log.v("speakIt: ", "called");
        } catch (IllegalStateException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        super.onActivityResult(requestCode, resultCode, data);
    }

    private String getDataMethod(String num) throws IllegalStateException, IOException {
        num = "You say, " + num;
        return num;
    }
}
But the application won't say anything, and in logcat I get an error that says:
TextToSpeech(10335): speak failed: not bound to TTS
Before I made this application, I built the two parts separately: Text to Speech, and Speech to Text.
Both of those applications ran normally.
I don't know why this error occurs.
Can anyone help me?
One problem is that you are calling:
speakIt(finalValue);
inside:
onActivityResult()
while in onPause() you execute:
if (tts != null) {
    tts.stop();
    tts.shutdown();
}
So as soon as you open the recognition activity, onPause() is called and tts is shut down; when you come back, you are using that destroyed tts, which is wrong and can cause exactly this behaviour.
You can move your code from onPause() to onStop(), and your init code to onStart(). In onActivityResult(), set a class instance variable with the data to speak, and speak it inside onInit().
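A rough sketch of that restructuring; the pendingText field is introduced here only for illustration and is not part of the original code:
// Sketch: initialise in onStart(), release in onStop(), and keep the recognised
// text in a field so onInit() can speak it once the engine is bound again.
private String pendingText;

@Override
protected void onStart() {
    super.onStart();
    tts = new TextToSpeech(this, this);
}

@Override
protected void onStop() {
    if (tts != null) {
        tts.stop();
        tts.shutdown();
        tts = null;
    }
    super.onStop();
}

@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        tts.setLanguage(Locale.US);
        if (pendingText != null) {
            tts.speak(pendingText, TextToSpeech.QUEUE_ADD, null);
            pendingText = null;
        }
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQUEST_OK && resultCode == RESULT_OK) {
        userSay = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS).get(0);
        pendingText = "You say, " + userSay;   // spoken later, from onInit()
    }
    super.onActivityResult(requestCode, resultCode, data);
}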
