I am new to Java and Android. I need to use text to speech in a class that is not an activity. Is that possible? If yes, how can I do it? The good tutorials I have found all do it inside an activity.
Thank you!
I know this is rather late, but in case you're still stumped (hopefully not), or for anyone else with the same or a similar question:
A standalone class is pretty simple to do. It needs a Context and, if you want, a message to speak. Pass both in the constructor, and you end up with something that looks like this:
import android.content.Context;
import android.speech.tts.TextToSpeech;
import android.util.Log;

import java.util.Locale;

public class MyTTS {

    private final Context context;
    private TextToSpeech tts;
    private final String txt;
    private final String TAG = MyTTS.class.getSimpleName();

    public MyTTS(Context context, String txt) {
        this.context = context;
        this.txt = txt;
        handleSpeech();
    }

    private void handleSpeech() {
        tts = new TextToSpeech(context, new TextToSpeech.OnInitListener() {
            @Override
            public void onInit(int status) {
                if (status == TextToSpeech.SUCCESS) {
                    int result = tts.setLanguage(Locale.ENGLISH);
                    if (result == TextToSpeech.LANG_MISSING_DATA
                            || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                        Log.e(TAG, "This language is not supported");
                    } else {
                        saySomeThing();
                    }
                } else {
                    Log.e(TAG, "Initialization failed!");
                }
            }
        });
    }

    private void saySomeThing() {
        if ((txt != null) && (txt.length() > 0)) {
            tts.speak(txt, TextToSpeech.QUEUE_FLUSH, null);
        } else {
            tts.shutdown();
        }
    }
}
To execute it:
new MyTTS(context, message);
Here is the code I tried to start the callback for my background service.
public class PushService extends JobIntentService implements MethodCallHandler {

    private static final String TAG = "PushService";
    private static FlutterEngine backgroundFlutterEngine = null;

    private void startPushService(Context context) {
        synchronized (serviceStarted) {
            if (backgroundFlutterEngine == null) {
                final Long callbackHandle = PushStore.getInstance().getPreferenceLongValue(
                        PushPlugin.CALLBACK_DISPATCHER_ID, 0L);
                if (callbackHandle == 0L) {
                    Log.e(TAG, "Fatal: no callback registered.");
                    return;
                }
                final FlutterCallbackInformation callbackInfo =
                        FlutterCallbackInformation.lookupCallbackInformation(callbackHandle);
                if (callbackInfo == null) {
                    Log.e(TAG, "Fatal: failed to find callback info.");
                    return;
                }
                Log.i(TAG, "Starting PushService...");
                backgroundFlutterEngine = new FlutterEngine(context);
                DartCallback args = new DartCallback(
                        context.getAssets(), FlutterMain.findAppBundlePath(context), callbackInfo);
                backgroundFlutterEngine.getDartExecutor().executeDartCallback(args);
                ...
            }
        }
    }
}
But FlutterMain is deprecated. What is the new way to execute a Dart callback?
FlutterMain has been replaced by FlutterLoader, from what I understand.
Edit:
On the FlutterMain documentation page, right underneath the first horizontal divider, it mentions this replacement.
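If so, a minimal sketch of the replacement might look like this (untested; the FlutterInjector/FlutterLoader method names are from the current embedding API as I understand it, so verify them against your Flutter version):

```java
// Sketch: obtaining the bundle path via FlutterLoader instead of the
// deprecated FlutterMain static methods.
FlutterLoader loader = FlutterInjector.instance().flutterLoader();
loader.startInitialization(context);
loader.ensureInitializationComplete(context, null);

backgroundFlutterEngine = new FlutterEngine(context);
DartCallback args = new DartCallback(
        context.getAssets(),
        loader.findAppBundlePath(), // replaces FlutterMain.findAppBundlePath(context)
        callbackInfo);
backgroundFlutterEngine.getDartExecutor().executeDartCallback(args);
```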
I am developing TextToSpeech on Android. I use the following code and it works fine.
public void EnableTextToSpeech() {
    // TextToSpeech
    text_to_speech = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                Log.d(TAG, "TextToSpeech init SUCCESS");
                int result = text_to_speech.setLanguage(Locale.US);
                text_to_speech.setPitch(1);
                text_to_speech.setSpeechRate(1);
                if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                    Log.d(TAG, "TextToSpeech NOT SUPPORTED");
                } else {
                    Log.d(TAG, "TextToSpeech enabled");
                    String welcome = "Welcome";
                    text_to_speech.speak(welcome, TextToSpeech.QUEUE_FLUSH, null, "TEST");
                }
            } else {
                Log.d(TAG, "TextToSpeech init FAIL");
            }
        }
    });
}
But I want it to speak multiple languages, like Chinese and English, in one sentence.
Can I set different languages like in the following code?
text_to_speech.setLanguage(Locale.CHINESE);
text_to_speech.setLanguage(Locale.US);
How do I set and speak multiple languages in one sentence with TextToSpeech on Android?
I am trying to add a text-to-speech feature to my app, and it was working fine until I updated the TTS engine from the Google Play Store.
There wasn't any delay initializing the TTS in the onCreate method.
After the update, it takes 3-5 seconds for the TTS to finish initializing.
Basically, the text-to-speech is not ready until 3-5 seconds later.
Can someone please tell me what I've done wrong?
private HashMap<String, String> TTS_ID = new HashMap<String, String>();

protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    .....
    .....
    TextToSpeech_Initialize();
}

public void TextToSpeech_Initialize() {
    TTS_ID.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, "UniqueID");
    speech = new TextToSpeech(MainActivity.this, new TextToSpeech.OnInitListener() {
        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                speech.setSpeechRate(SpeechRateValue);
                speech.speak(IntroSpeech, TextToSpeech.QUEUE_FLUSH, TTS_ID);
            }
        }
    });
}
Thank you very much
Confirmed! This is an issue with the Google text-to-speech engine; if you try any other TTS engine the delay disappears, e.g. Pico TTS.
I have stumbled across this problem before, but now I have found a proper solution.
You can initialize TextToSpeech in onCreate() like this:
TextToSpeech textToSpeech = new TextToSpeech(this, this);
but first you need to implement TextToSpeech.OnInitListener, and then you need to override the onInit() method:
@Override
public void onInit(int status) {
    if (status == TextToSpeech.SUCCESS) {
        int result = tts.setLanguage(Locale.US);
        if (result == TextToSpeech.LANG_MISSING_DATA
                || result == TextToSpeech.LANG_NOT_SUPPORTED) {
            Toast.makeText(getApplicationContext(), "Language not supported", Toast.LENGTH_SHORT).show();
        } else {
            button.setEnabled(true);
        }
    } else {
        Toast.makeText(getApplicationContext(), "Init failed", Toast.LENGTH_SHORT).show();
    }
}
I also noticed that if you don't set the language in onInit(), there is going to be a delay!
And now you can write the method that speaks the text:
private void speakOut(final String detectedText) {
    if (textToSpeech != null) {
        textToSpeech.stop(); // stop and say the new word
        textToSpeech.speak(detectedText, TextToSpeech.QUEUE_FLUSH, null, null);
    }
}
I've installed the PocketSphinx demo and it works fine under Ubuntu and Eclipse, but despite trying I can't work out how I would add recognition of multiple words.
All I want is for the code to recognize single words, which I can then switch() within the code, e.g. "up", "down", "left", "right". I don't want to recognize sentences, just single words.
Any help on this would be appreciated. I have spotted other users having similar problems, but nobody seems to know the answer so far.
One thing which is baffling me is why do we need to use the "wakeup" constant at all?
private static final String KWS_SEARCH = "wakeup";
private static final String KEYPHRASE = "oh mighty computer";
.
.
.
recognizer.addKeyphraseSearch(KWS_SEARCH, KEYPHRASE);
What has wakeup got to do with anything?
I have made some progress (?) : Using addGrammarSearch I am able to use a .gram file to list my words, e.g. up,down,left,right,forwards,backwards, which seems to work well if all I say are those particular words. However, any other words will cause the system to match what is said to the "nearest" word from those stated. Ideally I don't want recognition to occur if words spoken are not in the .gram file...
Thanks to Nikolay's tip (see his answer above), I have developed the following code which works fine, and does not recognize words unless they're on the list. You can copy and paste this directly over the main class in the PocketSphinxDemo code:
public class PocketSphinxActivity extends Activity implements RecognitionListener
{
    private static final String DIGITS_SEARCH = "digits";
    private SpeechRecognizer recognizer;

    @Override
    public void onCreate(Bundle state)
    {
        super.onCreate(state);
        setContentView(R.layout.main);
        ((TextView) findViewById(R.id.caption_text)).setText("Preparing the recognizer");
        try
        {
            Assets assets = new Assets(PocketSphinxActivity.this);
            File assetDir = assets.syncAssets();
            setupRecognizer(assetDir);
        }
        catch (IOException e)
        {
            // oops
        }
        ((TextView) findViewById(R.id.caption_text)).setText("Say up, down, left, right, forwards, backwards");
        reset();
    }

    @Override
    public void onPartialResult(Hypothesis hypothesis)
    {
    }

    @Override
    public void onResult(Hypothesis hypothesis)
    {
        ((TextView) findViewById(R.id.result_text)).setText("");
        if (hypothesis != null)
        {
            String text = hypothesis.getHypstr();
            makeText(getApplicationContext(), text, Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onBeginningOfSpeech()
    {
    }

    @Override
    public void onEndOfSpeech()
    {
        reset();
    }

    private void setupRecognizer(File assetsDir)
    {
        File modelsDir = new File(assetsDir, "models");
        recognizer = defaultSetup().setAcousticModel(new File(modelsDir, "hmm/en-us-semi"))
                .setDictionary(new File(modelsDir, "dict/cmu07a.dic"))
                .setRawLogDir(assetsDir).setKeywordThreshold(1e-20f)
                .getRecognizer();
        recognizer.addListener(this);
        File digitsGrammar = new File(modelsDir, "grammar/digits.gram");
        recognizer.addKeywordSearch(DIGITS_SEARCH, digitsGrammar);
    }

    private void reset()
    {
        recognizer.stop();
        recognizer.startListening(DIGITS_SEARCH);
    }
}
Your digits.gram file should be something like:
up /1e-1/
down /1e-1/
left /1e-1/
right /1e-1/
forwards /1e-1/
backwards /1e-1/
You should experiment with the thresholds between the slashes (//) for performance, where 1e-1 represents 0.1 (I think). I think the maximum is 1.0.
And it's 5.30pm so I can stop working now. Result.
You can use addKeywordSearch, which takes a file of keyphrases, one phrase per line with a threshold for each phrase between slashes, for example:
up /1.0/
down /1.0/
left /1.0/
right /1.0/
forwards /1e-1/
The threshold must be selected to avoid false alarms.
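As an aside on the notation: the thresholds are ordinary floating-point literals in scientific notation, so 1e-1 is just 0.1. A quick, standalone Java check (the parseThreshold helper is made up purely to illustrate the line format; it is not part of PocketSphinx):

```java
public class ThresholdDemo {
    // Extract the threshold between the slashes of a line like "up /1e-1/".
    static double parseThreshold(String line) {
        int first = line.indexOf('/');
        int last = line.lastIndexOf('/');
        return Double.parseDouble(line.substring(first + 1, last));
    }

    public static void main(String[] args) {
        System.out.println(parseThreshold("up /1e-1/"));      // prints 0.1
        System.out.println(parseThreshold("forwards /1.0/")); // prints 1.0
    }
}
```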
Working on updating Antinous' amendment to the PocketSphinx demo to allow it to run in Android Studio. This is what I have so far:
//Note: change MainActivity to PocketSphinxActivity for demo use...
public class MainActivity extends Activity implements RecognitionListener {

    private static final String DIGITS_SEARCH = "digits";
    private SpeechRecognizer recognizer;

    /* Used to handle permission request */
    private static final int PERMISSIONS_REQUEST_RECORD_AUDIO = 1;

    @Override
    public void onCreate(Bundle state) {
        super.onCreate(state);
        setContentView(R.layout.main);
        ((TextView) findViewById(R.id.caption_text))
                .setText("Preparing the recognizer");

        // Check if user has given permission to record audio
        int permissionCheck = ContextCompat.checkSelfPermission(getApplicationContext(), Manifest.permission.RECORD_AUDIO);
        if (permissionCheck != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.RECORD_AUDIO}, PERMISSIONS_REQUEST_RECORD_AUDIO);
            return;
        }

        new AsyncTask<Void, Void, Exception>() {
            @Override
            protected Exception doInBackground(Void... params) {
                try {
                    Assets assets = new Assets(MainActivity.this);
                    File assetDir = assets.syncAssets();
                    setupRecognizer(assetDir);
                } catch (IOException e) {
                    return e;
                }
                return null;
            }

            @Override
            protected void onPostExecute(Exception result) {
                if (result != null) {
                    ((TextView) findViewById(R.id.caption_text))
                            .setText("Failed to init recognizer " + result);
                } else {
                    reset();
                }
            }
        }.execute();
        ((TextView) findViewById(R.id.caption_text)).setText("Say one, two, three, four, five, six...");
    }

    /**
     * In partial result we get quick updates about the current hypothesis. In
     * keyword spotting mode we can react here; in other modes we need to wait
     * for the final result in onResult.
     */
    @Override
    public void onPartialResult(Hypothesis hypothesis) {
        if (hypothesis == null) {
            return;
        }
        if (recognizer != null) {
            //recognizer.rapidSphinxPartialResult(hypothesis.getHypstr());
            String text = hypothesis.getHypstr();
            if (text.equals(DIGITS_SEARCH)) {
                recognizer.cancel();
                performAction();
                recognizer.startListening(DIGITS_SEARCH);
            } else {
                //Toast.makeText(getApplicationContext(), "Partial result = " + text, Toast.LENGTH_SHORT).show();
            }
        }
    }

    @Override
    public void onResult(Hypothesis hypothesis) {
        ((TextView) findViewById(R.id.result_text)).setText("");
        if (hypothesis != null) {
            String text = hypothesis.getHypstr();
            makeText(getApplicationContext(), "Hypothesis: " + text, Toast.LENGTH_SHORT).show();
        } else {
            makeText(getApplicationContext(), "hypothesis = null", Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
        recognizer.cancel();
        recognizer.shutdown();
    }

    @Override
    public void onBeginningOfSpeech() {
    }

    @Override
    public void onEndOfSpeech() {
        reset();
    }

    @Override
    public void onTimeout() {
    }

    private void setupRecognizer(File assetsDir) throws IOException {
        // The recognizer can be configured to perform multiple searches
        // of different kinds and switch between them
        recognizer = defaultSetup()
                .setAcousticModel(new File(assetsDir, "en-us-ptm"))
                .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
                // .setRawLogDir(assetsDir).setKeywordThreshold(1e-20f)
                .getRecognizer();
        recognizer.addListener(this);
        File digitsGrammar = new File(assetsDir, "digits.gram");
        recognizer.addKeywordSearch(DIGITS_SEARCH, digitsGrammar);
    }

    private void reset() {
        recognizer.stop();
        recognizer.startListening(DIGITS_SEARCH);
    }

    @Override
    public void onError(Exception error) {
        ((TextView) findViewById(R.id.caption_text)).setText(error.getMessage());
    }

    public void performAction() {
        // do here whatever you want
        makeText(getApplicationContext(), "performAction done... ", Toast.LENGTH_SHORT).show();
    }
}
Caveat emptor: this is a work in progress. Check back later. Suggestions would be appreciated.
I'm trying to get TTS to run in the background, but I never get any sound. I have a broadcast receiver which starts a service. I put my TTS code in both of those, but it never speaks. I know the method is being called (I put a breakpoint on it), but it still doesn't work.
Here's my log, but it doesn't seem to contain anything about the TTS service.
10-04 22:45:30.663: WARN/InputManagerService(209): Window already focused, ignoring focus gain of: com.android.internal.view.IInputMethodClient$Stub$Proxy#4423df40
10-04 22:45:37.363: INFO/PollingManager(449): calculateShortestInterval(): shortest interval is 540000
10-04 22:45:37.413: INFO/TLSStateManager(449): org.apache.harmony.nio.internal.SocketChannelImpl#4400ece0: Wrote out 29 bytes of data with 0 bytes remaining.
10-04 22:45:38.043: ERROR/IMAPEmailService(480): Can't create default IMAP system folder Trash. Please reconfigure the folder names.
10-04 22:45:40.123: ERROR/EONS(303): EF_PNN: No short Name
10-04 22:45:41.543: ERROR/WMSTS(171): Month is invalid: 0
10-04 22:45:42.043: WARN/AudioFlinger(172): write blocked for 212 msecs, 24 delayed writes, thread 0xb998
Thanks everyone in advance!
It would help to see your TTS code to make it easier for people to help you. Since I already have TTS working in a BroadcastReceiver, here's an example trimmed down from my code.
public static class TTS extends Service implements TextToSpeech.OnInitListener, OnUtteranceCompletedListener {

    private TextToSpeech mTts;
    private String spokenText;

    @Override
    public void onCreate() {
        mTts = new TextToSpeech(this, this);
        // This is a good place to set spokenText
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            int result = mTts.setLanguage(Locale.US);
            if (result != TextToSpeech.LANG_MISSING_DATA && result != TextToSpeech.LANG_NOT_SUPPORTED) {
                mTts.speak(spokenText, TextToSpeech.QUEUE_FLUSH, null);
            }
        }
    }

    @Override
    public void onUtteranceCompleted(String uttId) {
        stopSelf();
    }

    @Override
    public void onDestroy() {
        if (mTts != null) {
            mTts.stop();
            mTts.shutdown();
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent arg0) {
        return null;
    }
}
Start the TTS service at the point in your BroadcastReceiver where you want it to speak:
context.startService(new Intent(context, TTS.class));
I hope this helps someone, if not the asker (I'm sure he got it working by now).
You can also try this if the text to be spoken comes from a broadcast receiver. First, create a service:
public class MyTell extends Service implements OnInitListener {

    public MyTell() {
    }

    public static TextToSpeech mTts;
    private SharedPreferences mPreferences;
    private float pit;
    private float rate;

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    public void onStart(Intent intent, int startId) {
        mPreferences = getSharedPreferences(Mysettings.PREF_NAME, Service.MODE_PRIVATE);
        pit = Float.parseFloat(mPreferences.getString("pit", "0.8"));
        rate = Float.parseFloat(mPreferences.getString("rate", "1.1"));
        mTts = new TextToSpeech(this, this);
        super.onStart(intent, startId);
    }

    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            if (mTts.isLanguageAvailable(Locale.UK) >= 0) {
                Toast.makeText(MyTell.this,
                        "Successful initialization of Text-To-Speech engine MyTell",
                        Toast.LENGTH_LONG).show();
                mTts.setLanguage(Locale.UK);
                mTts.setPitch(pit);
                mTts.setSpeechRate(rate);
            }
        } else if (status == TextToSpeech.ERROR) {
            Toast.makeText(MyTell.this,
                    "Unable to initialize Text-To-Speech engine",
                    Toast.LENGTH_LONG).show();
        }
    }
}
Then create the receiver where you insert your text:
public class MyBroadCast extends BroadcastReceiver {

    public MyBroadCast() {
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        // This method is called when the BroadcastReceiver receives an Intent broadcast.
        // Here is where you use the service you created to speak the text.
        MyTell.mTts.speak("Text to be spoken", TextToSpeech.QUEUE_FLUSH, null);
    }
}
Make sure you start the service before you use the TTS engine, and also check that a TTS engine is available.
It's working for me (just declare the service in the manifest):
public class TES extends Service implements TextToSpeech.OnInitListener {

    private TextToSpeech tts;

    @Override
    public IBinder onBind(Intent arg0) {
        return null;
    }

    @Override
    public void onCreate() {
        super.onCreate();
    }

    @Override
    public void onDestroy() {
        if (tts != null) {
            tts.stop();
            tts.shutdown();
        }
        super.onDestroy();
    }

    @Override
    public void onStart(Intent intent, int startId) {
        // Speaking happens in onInit, once the engine has finished initializing.
        tts = new TextToSpeech(this, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            int result = tts.setLanguage(Locale.US);
            if (result == TextToSpeech.LANG_MISSING_DATA
                    || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                Log.e("TTS", "This language is not supported");
            }
            speakOut();
        } else {
            Log.e("TTS", "Initialization failed!");
        }
    }

    private void speakOut() {
        tts.speak("it's working", TextToSpeech.QUEUE_FLUSH, null);
    }
}
Android TTS is a bound service. A broadcast receiver has a limited context and can't bind itself to any service. However, it CAN start a service. All the examples shown here are services that start the TTS engine, and receivers that start those services.
You could also do it with an activity, but if you don't need a UI, a service is better.
I just think it's a good idea to understand how it works and why it works.
Good luck.
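For completeness, a minimal sketch of that receiver-starts-service pattern, passing the text to speak as an intent extra (the "EXTRA_TEXT" key and the onStartCommand wiring are illustrative assumptions, not taken from any answer above):

```java
// In the BroadcastReceiver: start the TTS service and hand it the text.
Intent ttsIntent = new Intent(context, TTS.class);
ttsIntent.putExtra("EXTRA_TEXT", "Hello from the receiver"); // hypothetical key
context.startService(ttsIntent);

// In the Service: read the text back before the engine finishes initializing.
@Override
public int onStartCommand(Intent intent, int flags, int startId) {
    spokenText = intent.getStringExtra("EXTRA_TEXT");
    return super.onStartCommand(intent, flags, startId);
}
```

This keeps the receiver fast (it only fires an intent) and lets the service own the TextToSpeech lifecycle.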
Using Kotlin, the above answers can be re-written as:
Receiver:
class MyReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val ttsService = Intent(context, TTS::class.java)
        context.startService(ttsService)
    }
}
Service:
class TTS : Service(), TextToSpeech.OnInitListener {
private var mTts: TextToSpeech? = null
private var spokenText: String? = null
override fun onCreate() {
mTts = TextToSpeech(this, this)
// This is a good place to set spokenText
spokenText = "Hello!.."
}
override fun onInit(status: Int) {
    if (status == TextToSpeech.SUCCESS) {
        val result = mTts!!.setLanguage(Locale.US)
        if (result != TextToSpeech.LANG_MISSING_DATA && result != TextToSpeech.LANG_NOT_SUPPORTED) {
            // Speak on a background thread, then stop the service after a grace period.
            Thread {
                mTts!!.speak(spokenText, TextToSpeech.QUEUE_FLUSH, null, null)
                Thread.sleep(10000)
                stopSelf()
            }.start()
        }
    } else if (status == TextToSpeech.ERROR) {
        stopSelf()
    }
}
override fun onDestroy() {
if (mTts != null) {
mTts!!.stop()
mTts!!.shutdown()
}
super.onDestroy()
}
override fun onBind(arg0: Intent): IBinder? {
return null
}
}
And in the Manifest:
<receiver
android:name=".MyReceiver">
<intent-filter>
<action android:name="android.intent.action.xxxx" />
</intent-filter>
</receiver>
<service android:name=".TTS" />
From Android O onwards, using a service for things like this is subject to background-execution restrictions. One can use a JobIntentService to achieve the same, as shown here.
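As a rough sketch of what that could look like (the class name, job id, and spoken text are placeholders; note that a JobIntentService stops when onHandleWork returns, so real code must wait for the asynchronous TTS initialization and speech to finish before returning):

```java
// Hypothetical JobIntentService variant of the TTS services above
// (androidx.core.app.JobIntentService). Requires a manifest entry:
// <service android:name=".TtsJobService" android:permission="android.permission.BIND_JOB_SERVICE" />
public class TtsJobService extends JobIntentService implements TextToSpeech.OnInitListener {

    private static final int JOB_ID = 1001; // arbitrary unique id
    private TextToSpeech mTts;

    // Call this from the BroadcastReceiver instead of context.startService(...)
    public static void enqueue(Context context, Intent work) {
        enqueueWork(context, TtsJobService.class, JOB_ID, work);
    }

    @Override
    protected void onHandleWork(Intent intent) {
        mTts = new TextToSpeech(this, this);
        // Real code would block here until speech completes, e.g. with a
        // CountDownLatch released from an UtteranceProgressListener.
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            mTts.speak("It works", TextToSpeech.QUEUE_FLUSH, null, "jobTts");
        }
    }
}
```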