String input for Android's speech engine

It seems to me that the speak() method of TextToSpeech only works inside onInit or onUtteranceCompleted. However, neither onInit nor onUtteranceCompleted has a parameter for passing strings in.
In the following code, I tried defining a string ArrayList as a field outside the methods and using it as the string input. For some reason it didn't work, although the engine did speak "Did you sleep well?". Any help is appreciated.
public class TTS extends Activity implements OnInitListener, OnUtteranceCompletedListener, Runnable {
    ArrayList<String> content = new ArrayList<String>();
    int MY_DATA_CHECK_CODE = 50;
    private TextToSpeech mTts;

    public void onCreate(Bundle savedInstanceState) {
        content.add("test");
        content.add("another test");
        super.onCreate(savedInstanceState);
        setContentView(R.layout.splash);
        Intent checkIntent = new Intent();
        checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
        startActivityForResult(checkIntent, MY_DATA_CHECK_CODE);
    }

    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == MY_DATA_CHECK_CODE) {
            if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
                // success, create the TTS instance
                mTts = new TextToSpeech(this, this);
            } else {
                // missing data, install it
                Intent installIntent = new Intent();
                installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
                startActivity(installIntent);
            }
        }
    }

    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            mTts.setLanguage(Locale.US);
            mTts.setOnUtteranceCompletedListener(this);
            String myText1 = "Did you sleep well?";
            mTts.speak(myText1, TextToSpeech.QUEUE_FLUSH, null);
            for (int i = 0; i < content.size(); i++) {
                mTts.speak(content.get(i), TextToSpeech.QUEUE_ADD, null);
            }
        } else if (status == TextToSpeech.ERROR) {
            // moved out of the SUCCESS branch, where it could never run
            mTts.shutdown();
        }
    }
}

I believe some of your code is missing, but FYI it is possible to assign an ID to an utterance via the parameters map, e.g.:
HashMap<String, String> myHashAlarm = new HashMap<String, String>();
myHashAlarm.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "ID of First Utterance");
mTts.speak("It was a clear black night", TextToSpeech.QUEUE_ADD, myHashAlarm);
"ID of First Utterance" will be passed to onUtteranceCompleted(String utteranceId)
Please see Using Text-to-Speech.
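For example, the utterance ID can carry the index of the string being spoken - a minimal sketch, reusing the content list and mTts field from your code:
// Queue each string with its list index as its utterance ID.
for (int i = 0; i < content.size(); i++) {
    HashMap<String, String> params = new HashMap<String, String>();
    params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, String.valueOf(i));
    mTts.speak(content.get(i), TextToSpeech.QUEUE_ADD, params);
}

// The same ID comes back here, so the spoken string can be looked up
// without onUtteranceCompleted needing a string parameter of its own.
public void onUtteranceCompleted(String utteranceId) {
    int index = Integer.parseInt(utteranceId);
    Log.d("TTS", "Finished speaking: " + content.get(index));
}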

Related

synthesizeToFile failed: not bound to TTS engine, when using TextToSpeech in Android

I am using the Android TextToSpeech API and I want to save the converted speech as a file on the SD card, but I get this error:
synthesizeToFile failed: not bound to TTS engine
My code to use TTS is:
public void onActivityResult(int requestCode, int resultCode, Intent intent) {
    if (requestCode == MY_DATA_CHECK_CODE) {
        if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
            tts = new TextToSpeech(this, this);
            if (getIntent() != null) {
                if (getIntent().getExtras() != null) {
                    String d = getIntent().getExtras().getString("data");
                    String data[] = d.split("-");
                    bookName = data[0];
                    loadPage(data[0], Integer.parseInt(data[1]));
                }
            }
            Log.d("TTS", "Data is loaded");
        } else {
            Intent installTTSIntent = new Intent();
            installTTSIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installTTSIntent);
        }
    }
}
where inside the loadPage() function I call the synthesizeToFile() function as below:
String tempDestFile = appTmpPath.getAbsolutePath() +"/"+ fileName;
tts.synthesizeToFile(speakTextTxt, myHashRender, tempDestFile);
You have to wait until onInit has been called before you can call speak(), synthesizeToFile(), etc. Put your loadPage() call in onInit, after checking for success there.
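A minimal sketch of that reordering, assuming loadPage() only needs the intent extras shown above:
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        // The activity is now bound to the engine, so synthesizeToFile will work.
        if (getIntent() != null && getIntent().getExtras() != null) {
            String d = getIntent().getExtras().getString("data");
            String[] data = d.split("-");
            bookName = data[0];
            loadPage(data[0], Integer.parseInt(data[1]));
        }
    } else {
        Log.e("TTS", "TextToSpeech initialisation failed");
    }
}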

Trying to read barcodes with ZXing, but it seems onActivityResult is not being called

As the title says, I'm trying to scan 1D barcodes. So far I have the following code:
public class MainActivity extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
    }

    public void test(View view) {
        Intent intent = new Intent("com.google.zxing.client.android.SCAN");
        intent.putExtra("SCAN_MODE", "1D_CODE_MODE");
        startActivityForResult(intent, 0);
    }

    public void onActivityResult(int requestCode, int resultCode, Intent intent) {
        switch (requestCode) {
        case IntentIntegrator.REQUEST_CODE:
            if (resultCode == Activity.RESULT_OK) {
                IntentResult intentResult =
                        IntentIntegrator.parseActivityResult(requestCode, resultCode, intent);
                if (intentResult != null) {
                    String contents = intentResult.getContents();
                    String format = intentResult.getFormatName();
                    TextView uno = (TextView) findViewById(R.id.textView1);
                    uno.setText(contents);
                    Toast.makeText(this, "Number: " + contents, Toast.LENGTH_LONG).show();
                    Log.d("SEARCH_EAN", "OK, EAN: " + contents + ", FORMAT: " + format);
                } else {
                    Log.e("SEARCH_EAN", "IntentResult is NULL!");
                }
            } else if (resultCode == Activity.RESULT_CANCELED) {
                Log.e("SEARCH_EAN", "CANCEL");
            }
        }
    }
}
And of course, I have both IntentResult and IntentIntegrator added to the project.
The scanner is invoked correctly when a button is pressed and it seems to scan the code perfectly (it says "Text found" after scanning), but onActivityResult is apparently never called, since the TextView is not being updated and the Toast does not appear.
Any idea what the mistake could be?
Thanks in advance!
Your first mistake is not using IntentIntegrator.initiateScan(), replacing it with your own hand-rolled call to startActivityForResult().
Your second mistake is in assuming that IntentIntegrator.REQUEST_CODE is 0. It is not.
Hence, with your current code, you are sending out a request with request code of 0, which is coming back to onActivityResult() with request code of 0, which you are ignoring, because you are only looking for IntentIntegrator.REQUEST_CODE.
Simply replace the body of your test() method with a call to initiateScan(), and you should be in better shape. Here is a sample project that demonstrates the use of IntentIntegrator.
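For illustration, the integrator-based flow looks roughly like this (a sketch against the ZXing android-integration classes you already have; the instance-based IntentIntegrator API and your textView1 ID are assumed):
public void test(View view) {
    // Let IntentIntegrator build the SCAN intent and choose the request code.
    IntentIntegrator integrator = new IntentIntegrator(this);
    integrator.initiateScan();
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent intent) {
    // parseActivityResult returns null for request codes it does not own.
    IntentResult result = IntentIntegrator.parseActivityResult(requestCode, resultCode, intent);
    if (result != null && result.getContents() != null) {
        ((TextView) findViewById(R.id.textView1)).setText(result.getContents());
    }
}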
I solved the same problem this way:
public class MainActivity extends Activity {
    private TextView tvStatus, tvResult;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        this.tvStatus = (TextView) findViewById(R.id.tvStatus);
        this.tvResult = (TextView) findViewById(R.id.tvResult);
        Button scanBtn = (Button) findViewById(R.id.btnScan);
        scanBtn.setOnClickListener(new OnClickListener() {
            @Override
            public void onClick(View v) {
                try {
                    Intent intent = new Intent("com.google.zxing.client.android.SCAN");
                    intent.putExtra("SCAN_FORMATS", "QR_CODE_MODE");
                    startActivityForResult(intent, IntentIntegrator.REQUEST_CODE);
                } catch (Exception e) {
                    Log.e("BARCODE_ERROR", e.getMessage());
                }
            }
        });
    }

    public void onActivityResult(int requestCode, int resultCode, Intent intent) {
        IntentResult scanResult = IntentIntegrator.parseActivityResult(
                requestCode, resultCode, intent);
        if (scanResult != null) {
            this.tvStatus.setText(scanResult.getContents());
            this.tvResult.setText(scanResult.getFormatName());
        }
    }
}
The onActivityResult function must be overridden: just add an @Override before the function declaration and it will be solved.
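That is, the declaration should look like this (the super call is optional but good practice; the body is whatever handling code you already have):
@Override
public void onActivityResult(int requestCode, int resultCode, Intent intent) {
    super.onActivityResult(requestCode, resultCode, intent);
    // existing result-handling code goes here
}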

Text to Speech Android

I am trying to create a Text to Speech app that will remember the last sentence, or the part of a sentence after a comma, that it was speaking when the app was paused. Below is my code.
@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
    pitch = (EditText) findViewById(R.id.pitch);
    words = (EditText) findViewById(R.id.wordsToSpeak);
    words.setText("This message is intended only for the use of the individual or entity to which it is addressed and may contain information that is privileged, confidential or exempt from disclosure by law.");
    speakBtn = (Button) findViewById(R.id.speak);
    // Check to be sure that TTS exists and is okay to use
    Intent checkIntent = new Intent();
    checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
    startActivityForResult(checkIntent, REQ_TTS_STATUS_CHECK);
}

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == REQ_TTS_STATUS_CHECK) {
        switch (resultCode) {
        case TextToSpeech.Engine.CHECK_VOICE_DATA_PASS:
            // TTS is up and running
            mTts = new TextToSpeech(this, this);
            Log.v(TAG, "Pico is installed okay");
            break;
        case TextToSpeech.Engine.CHECK_VOICE_DATA_BAD_DATA:
        case TextToSpeech.Engine.CHECK_VOICE_DATA_MISSING_DATA:
        case TextToSpeech.Engine.CHECK_VOICE_DATA_MISSING_VOLUME:
            // missing data, install it
            Log.v(TAG, "Need language stuff: " + resultCode);
            Intent installIntent = new Intent();
            installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
            startActivity(installIntent);
            break;
        case TextToSpeech.Engine.CHECK_VOICE_DATA_FAIL:
        default:
            Log.e(TAG, "Got a failure. TTS apparently not available");
        }
    } else {
        // Got something else
    }
}

@Override
public void onInit(int status) {
    // Now that the TTS engine is ready, we enable the button
    if (status == TextToSpeech.SUCCESS) {
        speakBtn.setEnabled(true);
        mTts.setOnUtteranceCompletedListener(this);
    }
}

public void doSpeak(View view) {
    mTts.setPitch(Float.parseFloat(pitch.getText().toString()));
    StringTokenizer st = new StringTokenizer(words.getText().toString(), ",.");
    while (st.hasMoreTokens()) {
        params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID,
                String.valueOf(uttCount++));
        mTts.speak(st.nextToken(), TextToSpeech.QUEUE_ADD, params);
    }
    // mTts.speak(words.getText().toString(), TextToSpeech.QUEUE_ADD, null);
}

@Override
public void onPause() {
    super.onPause();
    // if we're losing focus, stop talking
    if (mTts != null)
        mTts.stop();
}

@Override
public void onDestroy() {
    super.onDestroy();
    mTts.shutdown();
}

@Override
public void onUtteranceCompleted(String utteranceId) {
    Log.v(TAG, "Got completed message for uttId: " + utteranceId);
    lastUtterance = Integer.parseInt(utteranceId);
}
}
I am able to get Android to speak and to keep track of the token it last spoke. However, I am not sure how to resume where it left off when the speakBtn is pressed again. Is there any way to go back to a certain token within a tokenizer if not all the tokens were successfully read out loud?
How about:
for (int i = 0; i < successfulUtterances; ++i)
    tokenizer.nextToken();
String nextUnspokenUtterance = tokenizer.nextToken();
If you're asking whether there's a direct way, there isn't. But this way will get rid of all the tokens you don't need and let you carry on.
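Put together, a resume path might look like this (a sketch reusing your words, lastUtterance, uttCount and params fields; it assumes the utterance IDs started at 0 for this text):
public void resumeSpeaking() {
    StringTokenizer st = new StringTokenizer(words.getText().toString(), ",.");
    // lastUtterance holds the ID of the last completed utterance,
    // so lastUtterance + 1 tokens have already been spoken.
    for (int i = 0; i <= lastUtterance && st.hasMoreTokens(); i++) {
        st.nextToken();
    }
    // Queue whatever is left, numbering the utterances as before.
    while (st.hasMoreTokens()) {
        params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, String.valueOf(uttCount++));
        mTts.speak(st.nextToken(), TextToSpeech.QUEUE_ADD, params);
    }
}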

Using voice commands in Android to navigate pages

I want to develop an application that navigates from one page to another using voice commands.
Here is my code:
public class mainActivity extends Activity implements OnClickListener {
/** Called when the activity is first created. */
ArrayList<String> StoredCommand = new ArrayList<String>();
private static final String TAG = "VoiceRecognition";
private static final int VOICE_RECOGNITION_REQUEST_CODE = 1234;
private static final Context View = null;
private ListView mList;
private Handler mHandler;
private Spinner mSupportedLanguageView;
/**
* Called when the activity is first created.
*/
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mHandler = new Handler();
StoredCommand.add("Path Recoder");
StoredCommand.add("Path Selector");
StoredCommand.add("Stop");
StoredCommand.add("Pause");
// Inflate our UI from its XML layout description.
setContentView(R.layout.main);
// Get display items for later interaction
Button speakButton = (Button) findViewById(R.id.btn_speak);
mList = (ListView) findViewById(R.id.list);
mSupportedLanguageView = (Spinner) findViewById(R.id.supported_languages);
// Check to see if a recognition activity is present
PackageManager pm = getPackageManager();
List<ResolveInfo> activities = pm.queryIntentActivities(
new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
if (activities.size() != 0) {
speakButton.setOnClickListener(this);
} else {
speakButton.setEnabled(false);
speakButton.setText("Recognizer not present");
}
// Most of the applications do not have to handle the voice settings. If the application
// does not require a recognition in a specific language (i.e., different from the system
// locale), the application does not need to read the voice settings.
refreshVoiceSettings();
}
/**
* Handle the click on the start recognition button.
*/
public void onClick(View v) {
if (v.getId() == R.id.btn_speak) {
startVoiceRecognitionActivity();
}
}
/**
* Fire an intent to start the speech recognition activity.
*/
private void startVoiceRecognitionActivity() {
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
// Specify the calling package to identify your application
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getClass().getPackage().getName());
// Display a hint to the user about what they should say.
intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
// Give the recognizer a hint about what the user is going to say
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
// Specify how many results you want to receive. The results will be sorted
// so that the first result is the one with the highest confidence.
intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
// Specify the recognition language. This parameter has to be specified only if the
// recognition has to be done in a specific language and not the default one (i.e., the
// system locale). Most of the applications do not have to set this parameter.
if (!mSupportedLanguageView.getSelectedItem().toString().equals("Default")) {
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE,
mSupportedLanguageView.getSelectedItem().toString());
}
startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
}
/**
* Handle the results from the recognition activity.
*/
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
// Fill the list view with the strings the recognizer thought it could have heard
ArrayList<String> matches = data.getStringArrayListExtra(
RecognizerIntent.EXTRA_RESULTS);
mList.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1,
matches));
StringBuilder sb=new StringBuilder() ;
for (String match:matches){
switch(resultCode) {
case RESULT_OK:
Log.i(TAG, "RESULT_OK");
if(StoredCommand==matches)
{
//Button next=(Button)findViewById(R.id.btn_speak);
if (matches.contains("Path Recoder"))
{
Intent myIntent = new Intent(View, PahtRecoder.class);
startActivityForResult(myIntent, 0);
}
else if(matches.contains("path selector"))
{
Intent myIntent = new Intent(View, Pahtselector.class);
startActivityForResult(myIntent, 0);
}
else if(matches.contains("stop"))
{
Intent myIntent = new Intent(View, Pahtselector.class);
startActivityForResult(myIntent, 0);
}
else if(matches.contains("start"))
{
Intent myIntent = new Intent(View, Pahtselector.class);
startActivityForResult(myIntent, 0);
}
}
else
{
Log.i(TAG, "COMMAND_NOT_MATCHING");
}
break;
case RESULT_CANCELED:
Log.i(TAG, "RESULT_CANCELED");
break;
case RecognizerIntent.RESULT_AUDIO_ERROR:
Log.i(TAG, "RESULT_AUDIO_ERROR");
break;
case RecognizerIntent.RESULT_CLIENT_ERROR:
Log.i(TAG, "RESULT_CLIENT_ERROR");
break;
case RecognizerIntent.RESULT_NETWORK_ERROR:
Log.i(TAG, "RESULT_NETWORK_ERROR");
break;
case RecognizerIntent.RESULT_NO_MATCH:
Log.i(TAG, "RESULT_NO_MATCH");
break;
case RecognizerIntent.RESULT_SERVER_ERROR:
Log.i(TAG, "RESULT_SERVER_ERROR");
break;
default:
Log.i(TAG, "RESULT_UNKNOWN");
break;
}
}
}
else{
Log.e("TAG", "Recognition is Failed");
}
super.onActivityResult(requestCode, resultCode, data);
}
private void refreshVoiceSettings() {
Log.i(TAG, "Sending broadcast");
sendOrderedBroadcast(RecognizerIntent.getVoiceDetailsIntent(this), null,
new SupportedLanguageBroadcastReceiver(), null, Activity.RESULT_OK, null, null);
}
private void updateSupportedLanguages(List<String> languages) {
// We add "Default" at the beginning of the list to simulate default language.
languages.add(0, "Default");
SpinnerAdapter adapter = new ArrayAdapter<CharSequence>(this,
android.R.layout.simple_spinner_item, languages.toArray(
new String[languages.size()]));
mSupportedLanguageView.setAdapter(adapter);
}
private void updateLanguagePreference(String language) {
TextView textView = (TextView) findViewById(R.id.language_preference);
textView.setText(language);
}
/**
* Handles the response of the broadcast request about the recognizer supported languages.
*
* The receiver is required only if the application wants to do recognition in a specific
* language.
*/
private class SupportedLanguageBroadcastReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, final Intent intent) {
Log.i(TAG, "Receiving broadcast " + intent);
final Bundle extra = getResultExtras(false);
if (getResultCode() != Activity.RESULT_OK) {
mHandler.post(new Runnable() {
@Override
public void run() {
showToast("Error code:" + getResultCode());
}
});
}
if (extra == null) {
mHandler.post(new Runnable() {
@Override
public void run() {
showToast("No extra");
}
});
}
if (extra.containsKey(RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES)) {
mHandler.post(new Runnable() {
@Override
public void run() {
updateSupportedLanguages(extra.getStringArrayList(
RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES));
}
});
}
if (extra.containsKey(RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE)) {
mHandler.post(new Runnable() {
@Override
public void run() {
updateLanguagePreference(
extra.getString(RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE));
}
});
}
}
private void showToast(String text) {
Toast.makeText(mainActivity.this, text, Toast.LENGTH_SHORT).show(); // duration must be LENGTH_SHORT or LENGTH_LONG, not milliseconds
}
}
According to the stored commands I need to navigate to those pages, but the result is that the Google voice prompt works while the commands do not. Is there anything wrong with my pattern matching? Please give me a solution for that.
Thank you.
Try this out as a minimal implementation of onActivityResult():
if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK)
{
    ArrayList<String> matches = data.getStringArrayListExtra(
            RecognizerIntent.EXTRA_RESULTS);
    mList.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1,
            matches));
    for (String bestMatch : matches)
    {
        if (bestMatch.equalsIgnoreCase("Path Recoder"))
        {
            Intent myIntent = new Intent(View, PahtRecoder.class);
            startActivityForResult(myIntent, 0);
        }
        else if (bestMatch.equalsIgnoreCase("Path Selector"))
        {
            Intent myIntent = new Intent(View, Pahtselector.class);
            startActivityForResult(myIntent, 0);
        }
        else if (bestMatch.equalsIgnoreCase("Stop"))
        {
            Intent myIntent = new Intent(View, Pahtselector.class);
            startActivityForResult(myIntent, 0);
        }
        else if (bestMatch.equalsIgnoreCase("Pause"))
        {
            Intent myIntent = new Intent(View, Pahtselector.class);
            startActivityForResult(myIntent, 0);
        }
        else
        {
            Log.i(TAG, "COMMAND_NOT_MATCHING");
        }
    }
}
Further update: my initial post was wrong - StoredCommand should not be used. On older speech recognition platforms the engine would be given a list of possible utterances and would try to match what you said against those possibilities, but the default engine on Android does not need this. By the way, what does your mList display?
Note also that I have not tested any of the code above...
Hi all, this is the code to navigate from one page to another using voice commands. It will be useful to anyone:
public class mainActivity extends Activity implements OnClickListener {
/** Called when the activity is first created. */
private static final String TAG = "VoiceRecognition";
private static final int VOICE_RECOGNITION_REQUEST_CODE = 1234;
private static final Context View = null;
private ListView mList;
private Handler mHandler;
private Spinner mSupportedLanguageView;
/**
* Called when the activity is first created.
*/
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mHandler = new Handler();
// Inflate our UI from its XML layout description.
setContentView(R.layout.main);
// Get display items for later interaction
Button speakButton = (Button) findViewById(R.id.btn_speak);
mList = (ListView) findViewById(R.id.list);
mSupportedLanguageView = (Spinner) findViewById(R.id.supported_languages);
// Check to see if a recognition activity is present
PackageManager pm = getPackageManager();
List<ResolveInfo> activities = pm.queryIntentActivities(
new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
if (activities.size() != 0) {
speakButton.setOnClickListener(this);
} else {
speakButton.setEnabled(false);
speakButton.setText("Recognizer not present");
}
// Most of the applications do not have to handle the voice settings. If the application
// does not require a recognition in a specific language (i.e., different from the system
// locale), the application does not need to read the voice settings.
refreshVoiceSettings();
}
/**
* Handle the click on the start recognition button.
*/
public void onClick(View v) {
if (v.getId() == R.id.btn_speak) {
startVoiceRecognitionActivity();
}
}
/**
* Fire an intent to start the speech recognition activity.
*/
private void startVoiceRecognitionActivity() {
Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
// Specify the calling package to identify your application
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getClass().getPackage().getName());
// Display a hint to the user about what they should say.
intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo");
// Give the recognizer a hint about what the user is going to say
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
// Specify how many results you want to receive. The results will be sorted
// so that the first result is the one with the highest confidence.
intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
// Specify the recognition language. This parameter has to be specified only if the
// recognition has to be done in a specific language and not the default one (i.e., the
// system locale). Most of the applications do not have to set this parameter.
if (!mSupportedLanguageView.getSelectedItem().toString().equals("Default")) {
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE,
mSupportedLanguageView.getSelectedItem().toString());
}
startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE);
}
/**
* Handle the results from the recognition activity.
*/
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) {
// Fill the list view with the strings the recognizer thought it could have heard
ArrayList<String> matches = data.getStringArrayListExtra(
RecognizerIntent.EXTRA_RESULTS);
mList.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1,
matches));
for (String bestMatch : matches) {
if (bestMatch.contains("record") || bestMatch.contains("cod") || bestMatch.contains("ed")) {
// Intent myIntent = new Intent(View, PahtRecoder.class);
// startActivityForResult(myIntent, 0);
Intent my = new Intent(getApplicationContext(),
PathRecorderStart.class);
startActivityForResult(my, 0);
}
else if (bestMatch.contains("select") || bestMatch.contains("elect") || bestMatch.contains("ct")) {
// Intent myIntent = new Intent(View, PahtRecoder.class);
// startActivityForResult(myIntent, 0);
Intent my = new Intent(getApplicationContext(),
PathSelectorOptions.class);
startActivityForResult(my, 0);
}
else {
Log.i(TAG, "COMMAND_NOT_MATCHING");
}
}
}
super.onActivityResult(requestCode, resultCode, data);
}
private void refreshVoiceSettings() {
Log.i(TAG, "Sending broadcast");
sendOrderedBroadcast(RecognizerIntent.getVoiceDetailsIntent(this), null,
new SupportedLanguageBroadcastReceiver(), null, Activity.RESULT_OK, null, null);
}
private void updateSupportedLanguages(List<String> languages) {
// We add "Default" at the beginning of the list to simulate default language.
languages.add(0, "Default");
SpinnerAdapter adapter = new ArrayAdapter<CharSequence>(this,
android.R.layout.simple_spinner_item, languages.toArray(
new String[languages.size()]));
mSupportedLanguageView.setAdapter(adapter);
}
private void updateLanguagePreference(String language) {
TextView textView = (TextView) findViewById(R.id.language_preference);
textView.setText(language);
}
/**
* Handles the response of the broadcast request about the recognizer supported languages.
*
* The receiver is required only if the application wants to do recognition in a specific
* language.
*/
private class SupportedLanguageBroadcastReceiver extends BroadcastReceiver {
@Override
public void onReceive(Context context, final Intent intent) {
Log.i(TAG, "Receiving broadcast " + intent);
final Bundle extra = getResultExtras(false);
if (getResultCode() != Activity.RESULT_OK) {
mHandler.post(new Runnable() {
@Override
public void run() {
showToast("Error code:" + getResultCode());
}
});
}
if (extra == null) {
mHandler.post(new Runnable() {
@Override
public void run() {
showToast("No extra");
}
});
}
if (extra.containsKey(RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES)) {
mHandler.post(new Runnable() {
@Override
public void run() {
updateSupportedLanguages(extra.getStringArrayList(
RecognizerIntent.EXTRA_SUPPORTED_LANGUAGES));
}
});
}
if (extra.containsKey(RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE)) {
mHandler.post(new Runnable() {
#Override
public void run() {
updateLanguagePreference(
extra.getString(RecognizerIntent.EXTRA_LANGUAGE_PREFERENCE));
}
});
}
}
private void showToast(String text) {
Toast.makeText(mainActivity.this, text, Toast.LENGTH_SHORT).show(); // duration must be LENGTH_SHORT or LENGTH_LONG, not milliseconds
}
}
}

TextToSpeech: service isn't started?

Hey, I am creating an app with TextToSpeech functionality. I wrote the code and ran it, but no speech is generated and some errors appear in logcat. Here is the logcat output:
04-11 20:21:30.099: VERBOSE/TtsService(481): TtsService.setLanguage(eng, USA, )
04-11 20:21:30.109: INFO/TextToSpeech.java - speak(849): speak text of length 41
04-11 20:21:30.109: ERROR/TextToSpeech.java - speak(849): service isn't started
I don't understand how to solve this. Here is my full code:
public class ExamAppearingActivity extends Activity implements OnInitListener
{
    private int MY_DATA_CHECK_CODE = 0;
    private TextToSpeech tts;

    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.examquestionscreen);
        if (isVoiceEnabled == 1)
        {
            tts = new TextToSpeech(this, this);
            final List<ObjectiveWiseQuestion> QuestionWiseProfile1 = db.getOneQuestion(examId);
            for (final ObjectiveWiseQuestion cn : QuestionWiseProfile1)
            {
                Intent checkIntent = new Intent();
                checkIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
                startActivityForResult(checkIntent, MY_DATA_CHECK_CODE);
                db = new MySQLiteHelper(getBaseContext());
                db.getWritableDatabase();
                counter = cn.getCounter();
                String question = "Question is " + cn.getQuestion();
                String option1 = "Option A is " + cn.getOptionA();
                String option2 = "Option B is " + cn.getOptionB();
                String option3 = "Option C is " + cn.getOptionC();
                String option4 = "Option D is " + cn.getOptionD();
                tts.speak(question, TextToSpeech.QUEUE_ADD, null);
                tts.speak(option1, TextToSpeech.QUEUE_ADD, null);
                tts.speak(option2, TextToSpeech.QUEUE_ADD, null);
                tts.speak(option3, TextToSpeech.QUEUE_ADD, null);
                tts.speak(option4, TextToSpeech.QUEUE_ADD, null);
            }
        }
    }

    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == MY_DATA_CHECK_CODE) {
            if (resultCode == TextToSpeech.Engine.CHECK_VOICE_DATA_PASS) {
                // success, create the TTS instance
                tts = new TextToSpeech(this, this);
            }
            else {
                // missing data, install it
                Intent installIntent = new Intent();
                installIntent.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
                //tts.isLanguageAvailable(Locale.INDIA_HINDI);
                startActivity(installIntent);
            }
        }
    }

    @Override
    public void onInit(int status)
    {
        if (status == TextToSpeech.SUCCESS)
        {
            // tts.setLanguage(Locale.US);
            Locale loc = new Locale("hi", "IN"); // language and country as separate arguments, not "hi_IN"
            tts.setLanguage(loc);
            Toast.makeText(ExamAppearingActivity.this, "Text-To-Speech engine is initialized", Toast.LENGTH_LONG).show();
        }
        else if (status == TextToSpeech.ERROR)
        {
            Toast.makeText(ExamAppearingActivity.this, "Error occurred while initializing Text-To-Speech engine", Toast.LENGTH_LONG).show();
        }
    }
This code runs only when I trigger it from a button click, but I need to start it from the onCreate() method.
Any help is appreciated.
You can't use tts until onInit has been called.
At the moment, you create it and try to use it within the onCreate method, but it won't have finished being initialised by then.
You're also creating tts twice. The one in onActivityResult makes most sense because you're checking it exists first. I'd get rid of the creation in onCreate, and put all of the actual speaking into onInit.
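A minimal sketch of that reordering, based on the fields in your code (the question loop moves into onInit):
@Override
public void onInit(int status)
{
    if (status == TextToSpeech.SUCCESS)
    {
        tts.setLanguage(new Locale("hi", "IN"));
        // The engine is bound now, so queueing speech is safe.
        for (ObjectiveWiseQuestion cn : db.getOneQuestion(examId))
        {
            tts.speak("Question is " + cn.getQuestion(), TextToSpeech.QUEUE_ADD, null);
            tts.speak("Option A is " + cn.getOptionA(), TextToSpeech.QUEUE_ADD, null);
            // ...and likewise for options B, C and D.
        }
    }
}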
