I am working on an Android application where I use both text-to-speech conversion and speech recognition. However, when I give a repeat instruction or press a repeat button to make TTS speak, it throws these warnings:
**speak failed: not bound to TTS engine**
**stop failed: not bound to TTS engine**
What do I need to do for this to work?
You have to make sure that you call speak() only after onInit() has been called.
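The usual fix is to buffer any speak() requests until TextToSpeech.OnInitListener#onInit reports success. Below is a minimal sketch of that guard pattern in plain Java; the class and method names are illustrative, not part of the Android API, and deliver() stands in for the real tts.speak(...) call.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Sketch of the "wait for onInit" guard. In a real app, deliver() would call
// tts.speak(text, TextToSpeech.QUEUE_ADD, null, null), and onInit(boolean)
// would be driven by TextToSpeech.OnInitListener#onInit(int status).
class InitGuardedSpeaker {
    final List<String> delivered = new ArrayList<>();   // visible for testing
    private final Queue<String> pending = new ArrayDeque<>();
    private boolean ready = false;

    // Call from OnInitListener#onInit: success = (status == TextToSpeech.SUCCESS)
    void onInit(boolean success) {
        if (!success) return;
        ready = true;
        while (!pending.isEmpty()) {
            deliver(pending.poll());    // flush everything queued before init
        }
    }

    // Use this instead of calling tts.speak() directly.
    void speak(String text) {
        if (ready) {
            deliver(text);
        } else {
            pending.add(text);          // engine not bound yet: buffer the request
        }
    }

    private void deliver(String text) {
        delivered.add(text);            // stand-in for the real tts.speak(...) call
    }
}
```

With this in place, a "repeat" button can call speak() at any time: requests made before the engine is bound are queued instead of failing with "not bound to TTS engine".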
I am trying to develop an Android app with the Vuforia SDK and Unity.
The app should:
detect a string and, as soon as the string is detected, play a video on a video prefab (not full screen).
However, I could not figure out where TextEventHandler.cs handles whether text is detected.
Sorry, I forgot to post the code.
Below is what I found in the TextEventHandler that Vuforia provides.
I am guessing this is what handles whether the text is detected:
// Once the text tracker has initialized and every time the video background changed,
// set the region of interest
if (mVideoBackgroundChanged)
{
    TextTracker textTracker = TrackerManager.Instance.GetTracker<TextTracker>();
    if (textTracker != null)
    {
        CalculateLoupeRegion();
        textTracker.SetRegionOfInterest(mDetectionAndTrackingRect, mDetectionAndTrackingRect);
        //v.SetActive (true);
    }
    mVideoBackgroundChanged = false;
}
Use Text Recognition instead of this method. See here for how to recognize text, and load your video when the text is recognized.
After carefully following this answer: How to integrate Zxing Barcode Scanner without installing the actual zxing app (cannot resolve symbol: .android.CaptureActivity)? by Liran Cohen,
the scanner should not be opened in a separate activity (called via intent);
it should be shown just below a button (like logout).
I am able to scan and get the decoded string of the barcode, but the red scanning line is missing. How can I make the red line appear?
I extended CaptureActivity in an activity of my own (ReaderActivity) and overrode the handleDecode function to scan and decode the QR image.
I just want to know how to show the red line.
I also tried changing the following code in ViewfinderView, but it still does not work:
int middle = frame.width() /2 + frame.left;
under the onDraw() function. I also tried calling:
viewfinderView.setWillNotDraw(false);
in the onCreate() function of CaptureActivity, but the red line is still hidden.
You can take the ZXing code from GitHub and add it to your project.
Look at the CaptureActivity there and add similar code to your activity.
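For reference, the stock ZXing ViewfinderView positions the red laser as a thin horizontal rectangle centred vertically in the framing rect (using the frame's height and top edge, not width and left as in the snippet above), and it keeps the line visible by re-invalidating itself at the end of onDraw(); if that invalidation never runs, the line stays hidden. A small pure-Java sketch of just the geometry, with illustrative names:

```java
// Sketch of the laser-line geometry used by ZXing's ViewfinderView.onDraw().
// In the real view this rectangle is drawn with canvas.drawRect(...) using a
// red paint whose alpha cycles, followed by postInvalidateDelayed(...) so the
// line keeps animating. Class and method names here are illustrative.
class LaserGeometry {
    // Returns {left, top, right, bottom} of the thin laser rectangle for a
    // framing rect given by its edge coordinates.
    static int[] laserRect(int frameLeft, int frameTop, int frameRight, int frameBottom) {
        // Centre the line vertically: height/2 + top (not width/2 + left)
        int middle = (frameBottom - frameTop) / 2 + frameTop;
        return new int[] { frameLeft + 2, middle - 1, frameRight - 1, middle + 2 };
    }
}
```

If you extend CaptureActivity, also make sure the view keeps being redrawn after a decode (in stock ZXing, viewfinderView.drawViewfinder() restarts the draw loop); overriding handleDecode without that is a common reason the line never reappears.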
I'm a beginner in Android development. I'm building a speech recognition API around my own speech recognizer algorithm. I discovered that Android offers a class named RecognitionService that provides callbacks that solve my problem.
The question is: if I extend this class and create my own RecognitionService, how can a third-party programmer use my class and set it as the default RecognitionService in the system, or at least for a button or another Android view component?
Thanks to all.
Have a look at the Kõnele project (http://kaljurand.github.io/K6nele/about/), which implements the RecognitionService interface.
Other apps can directly call this implementation using the two-argument createSpeechRecognizer, e.g.
SpeechRecognizer.createSpeechRecognizer(this,
        new ComponentName(
                "ee.ioc.phon.android.speak",
                "ee.ioc.phon.android.speak.SpeechRecognitionService"));
With the 1-argument call the system default is returned. The user can set the default via:
Settings -> Language & input -> Speech -> Voice input
I am working on an application in which I would like to use TTS to read text. I want to support Indian languages offline, so I have installed the eSpeak text-to-speech engine on my Android device and set it as the default. After reading about the Speech Synthesis Markup Language (SSML), I realized that I can provide phonemes as input to make the speech engine pronounce words correctly. So I created a sample application using Android's TextToSpeech class.
String text = "[[ D,Is Iz sVm f#n'EtIk t'Ekst 'InpUt ]]";// "This is some phonetic text input"
tts.speak(text, TextToSpeech.QUEUE_FLUSH, null);
I read in the eSpeak documentation that to make the engine understand phonemes, you simply put the phonetic expression in double square brackets and it will be treated as phonemes and rendered accordingly. But this doesn't work on Android. Is the syntax correct?
Thanks
I used the following code directly with Punjabi Unicode text in my app, and it works:
m_objTTS = new TextToSpeech(this, this, "com.googlecode.eyesfree.espeak");
m_strTexttoSpeak = "ਸਕਰੀਨ ਤੇ ਟੈਪ ਕਰੋ|"; // Punjabi translation of "Tap on Screen"
m_objTTS.speak(m_strTexttoSpeak, TextToSpeech.QUEUE_FLUSH, null, null);
You need the eSpeak TTS app installed on the device and set as the default TTS engine, with the default system language set to the language of your choice (Punjabi in my case).
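If you still want to experiment with phoneme input rather than plain Unicode text, the convention from the eSpeak documentation (as the question notes) is to wrap the phonetic expression in double square brackets before passing it to speak(). A tiny helper, with a hypothetical name, that does the wrapping consistently:

```java
// Hypothetical helper: wraps an eSpeak (Kirshenbaum) phoneme string in the
// double square brackets that eSpeak's documentation describes, so the result
// can be handed to TextToSpeech#speak(...) as-is.
class Phonemes {
    static String wrap(String phonetic) {
        // Trim stray whitespace so the brackets sit tightly around the input.
        return "[[ " + phonetic.trim() + " ]]";
    }
}
```

Whether the brackets are honoured still depends on the installed eSpeak engine; the plain-Unicode approach above is the one confirmed to work here.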
I am working on an app that displays text and plays audio from an API. It also needs to display copyright information for that text and audio; this information is retrieved from the same API.
Many of the audio copyrights contain the "℗" symbol, also known as the Sound Recording Copyright Symbol:
http://en.wikipedia.org/wiki/Sound_recording_copyright_symbol
However, when I take the text retrieved from the API and put it in a TextView, the "℗" symbol doesn't show up. The normal copyright symbol (©) shows just fine.
Has anyone encountered anything like this before? Is there a workaround?
You can try this approach. First, create a string resource in res/values/strings.xml, for example:
<string name="copyright">\u2117 2011 copyright</string>
Then read it from your activity class like this:
TextView tvHeader = (TextView) findViewById(R.id.tvHeader);
tvHeader.setText(getResources().getString(R.string.copyright));
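The key detail is that "℗" is simply Unicode code point U+2117, so the \u2117 escape and the literal character produce the same string; if it still fails to render, the device font lacks the glyph (on API 23+ you can check with Paint.hasGlyph) and you may want a plain-text fallback. A small sketch, with an illustrative class name and a hypothetical fallback helper:

```java
// U+2117 is the sound-recording copyright sign. The escape and the literal
// are identical strings; a missing glyph is a font issue, not a text issue.
class SoundCopyright {
    static final String SIGN = "\u2117";

    // Hypothetical fallback for devices whose fonts lack the glyph
    // (in a real app, fontHasGlyph could come from Paint.hasGlyph(SIGN)).
    static String orFallback(boolean fontHasGlyph) {
        return fontHasGlyph ? SIGN : "(P)";
    }
}
```

This makes it easy to keep the API text intact while degrading gracefully on devices that cannot display the symbol.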