I am developing an Android translator app that also has a TTS feature. My problem is that the voice for the Greek language is not working properly: it just speaks the characters one by one, and it does not speak Hindi either. I have googled and found that Google TTS does not support Greek. Would it work properly if I downloaded the Greek language pack? Please tell me whether that would work, and if not, what the reason is.
I had the same issue before.
You can use Google's online TTS instead.
Here is my example code:
public void Online_TTS(final String text, final String lan) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            // Note: replacing only spaces is fragile; URLEncoder.encode(text, "UTF-8")
            // would handle all special characters.
            String url = "https://translate.google.com/translate_tts?ie=UTF-8";
            String pronounce = "&q=" + text.replaceAll(" ", "%20");
            String language = "&tl=" + lan;
            String client = "&client=tw-ob";
            String fullUrl = url + pronounce + language + client;
            Uri uri = Uri.parse(fullUrl);
            MediaPlayer mediaPlayer = new MediaPlayer();
            try {
                mediaPlayer.setDataSource(MainActivity.this, uri);
                mediaPlayer.prepare(); // blocks while the audio is fetched, hence the background thread
                mediaPlayer.start();
            } catch (IOException e) {
                e.printStackTrace();
                Log.i(TAG, "error");
            }
        }
    }).start();
}
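For Greek, for example, the call might look like this ("el" is the ISO 639-1 code for Greek; whether this unofficial endpoint accepts it is an assumption you should verify):
// Hypothetical call; "el" (Greek) is assumed to be accepted by the endpoint.
Online_TTS("Καλημέρα, τι κάνεις;", "el");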
Hope this helps.
In my app I want to record the user's speech, run it through a band-pass filter, then pass the resulting audio file (PCM/WAV) to the text-to-speech engine to speak the filtered results. I have everything working except that I cannot find a way to pass an audio file to the TTS engine. I have googled this for a long time now (two weeks) with no luck. Is there any workaround for achieving this?
What I tried was calling the RecognizerIntent and then starting the band-pass filter via recording; I also tried it the other way around, starting the band-pass method first and then calling the RecognizerIntent. Either way kills the TTS instance, even though it is running on a separate thread. I have also tested this using both the normal TTS procedure in the RecognizerIntent and the web search version of the RecognizerIntent, with the same results. If I don't apply the band-pass filter (note that a recording thread is still started at this point) it works fine, but as soon as I apply the band-pass filter it fails, with a helpful message in web search mode that says "Google is unavailable". Here's my current code:
RecognizerIntent, normal version:
public void getMic() { // bring up the "Speak now" dialog
    tts = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                result = tts.setLanguage(Locale.US);
                if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
                    l = new Intent();
                    l.setAction(TextToSpeech.Engine.ACTION_INSTALL_TTS_DATA);
                    startActivity(l);
                }
            }
        }
    });
    k = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
    k.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    k.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
    k.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say something");
    try {
        startActivityForResult(k, 400);
    } catch (ActivityNotFoundException a) {
        Log.i("CrowdSpeech", "Your device doesn't support Speech Recognition");
    }
    if (crowdFilter && running == 4) {
        try {
            startRecording();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }
}
RecognizerIntent, web search version:
public void getWeb() { // search the web from voice input
    k = new Intent(RecognizerIntent.ACTION_WEB_SEARCH);
    k.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
    k.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
    k.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say something");
    try {
        startActivityForResult(k, 400);
    } catch (ActivityNotFoundException a) {
        Log.i("CrowdSpeech", "Your device doesn't support Speech Recognition");
    }
    if (crowdFilter && running == 4) {
        try {
            startRecording();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
    }
}
And the startRecording method that applies the band-pass filter:
private void startRecording() throws FileNotFoundException {
    if (running == 4) { // record from mic, apply band-pass filter and save as WAV using the TARSOS library
        dispatcher = AudioDispatcherFactory.fromDefaultMicrophone(RECORDER_SAMPLERATE, bufferSize, 0);
        AudioProcessor p = new BandPass(freqChange, tollerance, RECORDER_SAMPLERATE);
        dispatcher.addAudioProcessor(p);
        isRecording = true;
        // Output
        File f = new File(myFilename.toString() + "/Filtered result.wav");
        RandomAccessFile outputFile = new RandomAccessFile(f, "rw");
        TarsosDSPAudioFormat outputFormat = new TarsosDSPAudioFormat(44100, 16, 1, true, true);
        WriterProcessor writer = new WriterProcessor(outputFormat, outputFile);
        dispatcher.addAudioProcessor(writer);
        recordingThread = new Thread(new Runnable() {
            @Override
            public void run() {
                dispatcher.run();
            }
        }, "Crowd_Speech Thread");
        recordingThread.start();
    }
}
The only reason I'm doing it this way is the hope that, by applying the filter, the TTS engine would receive the modified audio. The filtered audio is also saved to a file, because originally I wanted to just pass the file to TTS to read after recording. Is there any way to accomplish this?
Another thought: is there any possible way, inside my project, to modify the source code of the library that the RecognizerIntent references, so that I can add a parameter to take audio from a file?
I'm trying to write an application for Android 4.2.2. It just has to play some muted videos in an endless loop inside an Android VideoView.
It works fine but, sometimes, I get a "Can't play this video" popup, even though I can see the video playing under that popup. My app is supposed to run without any input method, so is there any chance to avoid that popup? It appears to happen randomly, sometimes after 30-40 minutes, sometimes after some hours... it's just unpredictable.
I tried to debug it and, after adding an OnInfoListener, it seems I got the error code (952). However, I didn't find any help on that error. Here is the code. Any tips/help would be very welcome, so thank you in advance.
...
MediaPlayer.OnPreparedListener PreparedListener = new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer m) {
        try {
            if (m.isPlaying()) {
                m.stop();
                m.release();
                m = new MediaPlayer();
                m.setVolume(0f, 0f);
            }
            m.setLooping(false);
            m.start();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
};
video.setOnPreparedListener(PreparedListener);
video.setOnInfoListener(new MediaPlayer.OnInfoListener() {
    @Override
    public boolean onInfo(MediaPlayer mp, int what, int extra) {
        File file = new File(myRootDirectory.getAbsolutePath() + "/logVideo.txt");
        SimpleDateFormat s = new SimpleDateFormat("dd-MM-yyyy//HH:mm:ss");
        String format = s.format(new Date());
        String text = format + ": (" + what + "," + extra + ")\n";
        if (what == 952) {
            text = "!!!!!!!!!!\n" + text; // What is this code 952...??
        }
        FileOutputStream f = null;
        try {
            f = new FileOutputStream(file, true); // true = append to file, false = overwrite
            PrintStream p = new PrintStream(f);
            p.print(text);
            p.flush();
            p.close();
            f.close();
        } catch (FileNotFoundException e) {
            e.printStackTrace();
            Log.i("Error", "******* File not found. Did you" +
                    " add a WRITE_EXTERNAL_STORAGE permission to the manifest?");
        } catch (IOException e) {
            e.printStackTrace();
        }
        Log.d("INFO LISTENER", what + " " + extra);
        return false;
    }
});
video.setVideoPath(nextVideo);
As far as I can tell, 952 doesn't match any of the standard "what" constants. You might try matching against the known "what" codes (MEDIA_INFO_BAD_INTERLEAVING, etc.) to at least rule those out.
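A minimal sketch of that check inside your existing onInfo (only documented MediaPlayer constants appear here; 952 itself is not among them):
switch (what) {
    case MediaPlayer.MEDIA_INFO_VIDEO_TRACK_LAGGING:
        Log.d("INFO LISTENER", "video track lagging");
        break;
    case MediaPlayer.MEDIA_INFO_BAD_INTERLEAVING:
        Log.d("INFO LISTENER", "bad interleaving");
        break;
    case MediaPlayer.MEDIA_INFO_NOT_SEEKABLE:
        Log.d("INFO LISTENER", "media not seekable");
        break;
    default:
        Log.d("INFO LISTENER", "unknown/vendor-specific code: " + what);
        break;
}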
I am developing a native Android application, and I am using a third-party API as well. The problem is that when I connect my phone (an S3) to my machine and run the application directly on the device, it works fine. But when I copy the APK to the phone, install the app, and run it, it crashes on one of the API calls with "Unfortunately AppName has stopped working".
I could not find any way to figure out what the issue is or what is causing the crash.
Can anyone suggest how to find the problem, or what the possible cause might be? I am developing in Eclipse.
Why don't you set up the BugSense library (a five-minute job) with a free account and check the exception you get? http://www.bugsense.com/
You can set up your own log-writing system by implementing java.lang.Thread.UncaughtExceptionHandler within your app.
For example:
public class myExceptionHandler implements UncaughtExceptionHandler {
    private UncaughtExceptionHandler defaultUEH;
    private String localPath;

    public myExceptionHandler(String localPath) {
        this.localPath = localPath;
        // Keep the default handler so the normal crash behavior is preserved.
        this.defaultUEH = Thread.getDefaultUncaughtExceptionHandler();
    }

    @Override
    public void uncaughtException(Thread t, Throwable e) {
        final String timestamp = new SimpleDateFormat("HH_mm_ss_SSS").format(new Date());
        final Writer result = new StringWriter();
        final PrintWriter printWriter = new PrintWriter(result);
        e.printStackTrace(printWriter);
        String stacktrace = result.toString();
        printWriter.close();
        String filename = "logcat" + timestamp + ".txt";
        if (localPath != null) {
            writeToFile(stacktrace, filename);
        }
        // Chain to the default handler so the app still terminates as usual.
        defaultUEH.uncaughtException(t, e);
    }

    private void writeToFile(String stacktrace, String filename) {
        try {
            BufferedWriter bos = new BufferedWriter(new FileWriter(localPath + "/" + filename));
            bos.write(stacktrace);
            bos.flush();
            bos.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Call this handler from the MainActivity like this:
Thread.setDefaultUncaughtExceptionHandler(new myExceptionHandler("Put your target directory/folder path where you would like to store the log file"));
Now a log file will be written to the folder you specified whenever the app crashes.
Alternatively, just connect your device to the PC and leave the LogCat window in Eclipse open. LogCat messages are still written there without actually debugging your app.
I'm creating a TTS app using Google's unofficial TTS API, and it works fine, but the API can only process a maximum of 100 characters at a time, while my application may have to send strings containing as many as 300 characters.
Here is my code:
try {
    String text = "bonjour comment allez-vous faire";
    text = text.replace(" ", "%20");
    String oLanguage = "fr";
    MediaPlayer player = new MediaPlayer();
    player.setAudioStreamType(AudioManager.STREAM_MUSIC);
    player.setDataSource("http://translate.google.com/translate_tts?tl=" + oLanguage + "&q=" + text);
    player.prepare();
    player.start();
} catch (Exception e) {
    // TODO: handle exception
}
So my questions are:
1. How do I check the number of characters in the string and send only complete words within the 100-character limit?
2. How do I detect when the first chunk of speech has finished, so I can send the second without the two overlapping?
3. Is there any need for me to use AsyncTask for this process?
1. How do I check the number of characters in the string and send only complete words within the 100-character limit?
ArrayList<String> chunks = new ArrayList<String>();
String textToSpeach = "Your long text";
int counter = 0;
while (counter < textToSpeach.length()) {
    // Take at most 100 characters, then trim back to the last space so no
    // word is cut in half. (The original substring(0 + counter, 99 + counter)
    // would throw StringIndexOutOfBoundsException on the last chunk.)
    int end = Math.min(counter + 100, textToSpeach.length());
    String temp = textToSpeach.substring(counter, end);
    if (end < textToSpeach.length() && temp.contains(" ")) {
        temp = temp.substring(0, temp.lastIndexOf(' '));
    }
    chunks.add(temp);
    counter += temp.length() + 1; // +1 skips the space we trimmed at
}
2. How do I detect when the first chunk of speech has finished, so I can send the second without the two overlapping?
player.setOnCompletionListener(new OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        // pass the next block here
    }
});
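Putting the two parts together, a minimal sketch that plays the chunks back-to-back might look like this (the chunks list comes from part 1; buildUrl is a hypothetical helper that assembles the translate_tts URL from the question):
private void playChunk(final List<String> chunks, final int index) {
    if (index >= chunks.size()) return; // all chunks have been spoken
    final MediaPlayer player = new MediaPlayer();
    player.setAudioStreamType(AudioManager.STREAM_MUSIC);
    try {
        // buildUrl is a hypothetical helper wrapping the translate_tts URL format
        player.setDataSource(buildUrl(chunks.get(index)));
        player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
            @Override
            public void onCompletion(MediaPlayer mp) {
                mp.release();                 // free this chunk's player
                playChunk(chunks, index + 1); // then queue the next chunk
            }
        });
        player.prepare();
        player.start();
    } catch (IOException e) {
        e.printStackTrace();
    }
}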
3. Is there any need for me to use AsyncTask for this process?
Right now I don't see any need for that.
Simple: this is not a public API, so don't use it. Use Android's built-in TTS engine for speech synthesis instead; it does not have string-length limitations.
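For reference, a minimal sketch of the built-in engine (using the three-argument speak(), which was current for this era of Android; the French locale matches the question):
// Field, so the init listener can reference it.
private TextToSpeech tts;

void speakWithBuiltInTts() {
    tts = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
        @Override
        public void onInit(int status) {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.FRENCH);
                // No 100-character limit; the engine handles long strings itself.
                tts.speak("bonjour comment allez-vous faire", TextToSpeech.QUEUE_FLUSH, null);
            }
        }
    });
}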
Can anybody show me how to parse and play this .pls file in Android?
[playlist]
NumberOfEntries=1
File1=http://stream.radiosai.net:8002/
A quick search resulted in this very basic PlsParser from the NPR app that, by the looks of it, will simply parse the .pls file and return all embedded URLs in a list of strings. You'll probably want to take a look at that as a starting point.
After that, you should be able to feed the URL to a MediaPlayer object, although I'm not completely sure about what formats/protocols are supported, or what limitations might apply in the case of streaming. The sequence of media player calls will look somewhat like this.
MediaPlayer mp = new MediaPlayer();
mp.setDataSource("http://stream.radiosai.net:8002/");
mp.setAudioStreamType(AudioManager.STREAM_MUSIC);
mp.prepare(); //also consider mp.prepareAsync().
mp.start();
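Since prepare() blocks while the stream buffers, a variant built on prepareAsync() (a sketch, using the same stream URL) keeps the UI thread responsive:
MediaPlayer mp = new MediaPlayer();
mp.setAudioStreamType(AudioManager.STREAM_MUSIC);
mp.setDataSource("http://stream.radiosai.net:8002/");
mp.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
    @Override
    public void onPrepared(MediaPlayer player) {
        player.start(); // start only once buffering is done
    }
});
mp.prepareAsync(); // returns immediately instead of blocking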
Update: As far as I can tell, you can almost literally take the referenced code and put it to your own use. Note that the code below is by no means complete or tested.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLConnection;
import java.util.LinkedList;
import java.util.List;

public class PlsParser {
    private final BufferedReader reader;

    public PlsParser(String url) throws IOException {
        URLConnection urlConnection = new URL(url).openConnection();
        this.reader = new BufferedReader(new InputStreamReader(urlConnection.getInputStream()));
    }

    public List<String> getUrls() {
        LinkedList<String> urls = new LinkedList<String>();
        while (true) {
            try {
                String line = reader.readLine();
                if (line == null) {
                    break;
                }
                String url = parseLine(line);
                if (url != null && !url.equals("")) {
                    urls.add(url);
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return urls;
    }

    private String parseLine(String line) {
        if (line == null) {
            return null;
        }
        String trimmed = line.trim();
        // Lines like "File1=http://..." contain a URL; keep everything from "http" onward.
        if (trimmed.indexOf("http") >= 0) {
            return trimmed.substring(trimmed.indexOf("http"));
        }
        return "";
    }
}
Once you have that, you can simply create a new PlsParser with the URL of the .pls file and call getUrls afterwards. Each list item will be a URL as found in the .pls file. In your case that'll just be http://stream.radiosai.net:8002/. As said, you can then feed this to the MediaPlayer.
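End to end, usage might look like the sketch below (the playlist URL is hypothetical; both the parsing and prepare() hit the network, so they are kept off the main thread):
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            // Hypothetical playlist URL; substitute your own .pls file.
            PlsParser parser = new PlsParser("http://example.com/station.pls");
            List<String> urls = parser.getUrls();
            if (!urls.isEmpty()) {
                MediaPlayer mp = new MediaPlayer();
                mp.setAudioStreamType(AudioManager.STREAM_MUSIC);
                mp.setDataSource(urls.get(0)); // first stream in the playlist
                mp.prepare();
                mp.start();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}).start();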