Controlling input for Google TTS API - android

I'm creating a TTS app using Google's unofficial TTS API. It works fine, but the API can only process a maximum of 100 characters at a time, while my application may have to send strings containing as many as 300 characters.
Here is my code:
try {
    String text = "bonjour comment allez-vous faire";
    text = text.replace(" ", "%20");
    String oLanguage = "fr";
    MediaPlayer player = new MediaPlayer();
    player.setAudioStreamType(AudioManager.STREAM_MUSIC);
    player.setDataSource("http://translate.google.com/translate_tts?tl=" + oLanguage + "&q=" + text);
    player.prepare();
    player.start();
} catch (Exception e) {
    // TODO: handle exception
}
So my questions are:
How do I get it to check the number of characters in the string and send only complete words within the 100-character limit?
How do I detect when the first group of TTS is finished, so I can send the second and avoid the two speeches overlapping?
Is there any need for me to use AsyncTask for this process?

1. How do I get it to check the number of characters in the string and send only complete words within the 100-character limit?
ArrayList<String> arr = new ArrayList<String>();
String textToSpeach = "Your long text";
int start = 0;
while (start < textToSpeach.length()) {
    // Take at most 100 characters, then back up to the last space
    // so no word is cut in half.
    int end = Math.min(start + 100, textToSpeach.length());
    if (end < textToSpeach.length()) {
        int lastSpace = textToSpeach.lastIndexOf(' ', end);
        if (lastSpace > start) {
            end = lastSpace;
        }
    }
    arr.add(textToSpeach.substring(start, end).trim());
    start = end;
}
2. How do I detect when the first group of TTS is finished, so I can send the second and avoid the speeches overlapping?
player.setOnCompletionListener(new OnCompletionListener() {
    @Override
    public void onCompletion(MediaPlayer mp) {
        // pass the next block to the player here
    }
});
3. Is there any need for me to use AsyncTask for this process?
Right now I don't see any need for that.
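To tie answers 1 and 2 together: keep the chunks from answer 1 in a queue and, inside onCompletion, take the next one and start a new request. The MediaPlayer wiring is Android-specific, so this sketch only shows the queue side (ChunkQueue is a hypothetical name, not from the original answers):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

// Hypothetical helper: holds the text chunks and hands them out one at a
// time. Inside onCompletion(MediaPlayer mp) you would call next() and, if
// it returns non-null, build the next translate_tts URL and start playback.
public class ChunkQueue {
    private final Deque<String> chunks = new ArrayDeque<String>();

    public ChunkQueue(Iterable<String> parts) {
        for (String p : parts) {
            chunks.addLast(p);
        }
    }

    // Next chunk to speak, or null when everything has been played.
    public String next() {
        return chunks.pollFirst();
    }

    public static void main(String[] args) {
        ChunkQueue q = new ChunkQueue(Arrays.asList("first chunk", "second chunk"));
        String c;
        while ((c = q.next()) != null) {
            System.out.println("would speak: " + c);
        }
    }
}
```

Because next() returns null at the end, the completion listener naturally stops requesting audio once the last chunk has played.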

Simple: this is not a public API, so don't use it. Use Android's built-in TTS engine for speech synthesis. It does not have string length limitations.

Related

Android Studio how to split up a string to display serial data in two different forms?

I am being sent capacitance readings and temperature sensor readings over Bluetooth using UART. I am attempting to do this by editing a pre-existing open-source app called nRF-UART.
I am trying to display the temperature data in a live-updating TextView, and display the capacitance data as a vertical progress bar to show the level of water being measured. To do that, I think I need the capacitance data in a separate string from the temperature data.
Here is where the string is printed out:
if (action.equals(UartService.ACTION_DATA_AVAILABLE)) {
    final byte[] txValue = intent.getByteArrayExtra(UartService.EXTRA_DATA);
    runOnUiThread(new Runnable() {
        public void run() {
            try {
                String text = new String(txValue, "UTF-8");
                listAdapter.add(text);
                messageListView.smoothScrollToPosition(listAdapter.getCount() - 1);
            } catch (Exception e) {
                Log.e(TAG, e.toString());
            }
        }
    });
}
How can I split up the values and bring them outside of `listAdapter`?
The string will look like this:
1,75,460,0
where 75 is the temperature in Fahrenheit and 460 is the capacitance reading in mL; 1 is just a start bit and 0 is the end bit.
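Since this question has no answer in the thread, here is a minimal sketch of the parsing step: split the incoming line on commas and pick out the two fields, assuming every frame has the shape start,temperature,capacitance,end as in the example above (the class and field names are mine):

```java
// Hypothetical parser for frames like "1,75,460,0".
public class SensorFrame {
    public final int temperatureF;  // field 2: temperature in Fahrenheit
    public final int capacitanceMl; // field 3: capacitance reading in mL

    public SensorFrame(int temperatureF, int capacitanceMl) {
        this.temperatureF = temperatureF;
        this.capacitanceMl = capacitanceMl;
    }

    // Returns null when the line does not match the expected frame layout.
    public static SensorFrame parse(String line) {
        String[] parts = line.trim().split(",");
        if (parts.length != 4) {
            return null;
        }
        try {
            int temp = Integer.parseInt(parts[1]);
            int cap = Integer.parseInt(parts[2]);
            return new SensorFrame(temp, cap);
        } catch (NumberFormatException e) {
            return null;
        }
    }
}
```

In the nRF-UART callback you would call SensorFrame.parse(text) right after building the string, then update the TextView from temperatureF and the progress bar from capacitanceMl, instead of pushing the raw line into listAdapter.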

How to know whether an incoming call has video in PJSIP on Android

I'm working with PJSIP on Android. How do I check whether an incoming call is audio-only or video? How do I identify it? I have used the code below, but it's not working:
@Override
public void onIncomingCall(OnIncomingCallParam prm) {
    System.out.println("======== Incoming call ======== ");
    MyCall call = new MyCall(this, prm.getCallId());
    try {
        CallSetting setting = call.getInfo().getSetting();
        Log.d(" Log APP ", "onIncomingCall: Audio " + setting.getAudioCount() + " Video " + setting.getVideoCount());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
But the audio and video counts always come back as 1, even though I set the video count to 0 when making the call:
MyCall call = new MyCall(account);
CallOpParam prm = new CallOpParam();
CallSetting setting = new CallSetting();
setting.setAudioCount(1);
setting.setVideoCount(0);
prm.setOpt(setting);
try {
    call.makeCall(buddy_uri, prm);
} catch (Exception e) {
    call.delete();
    e.printStackTrace();
    return;
}
Please tell me how to identify whether an incoming call has video.
After a lot of research, I found that the PJSIP protocol does not provide the caller's video count. CallSetting is each user's own setting; the Asterisk server does not send the caller's call setting to the receiving end. But Asterisk does give information about media support:
callInfo.getRemOfferer()
// Returns a boolean; if true, the server supports video calling.
So you can use logic like this:
long videoCount = (callInfo.getRemOfferer()) ? callInfo.getRemVideoCount() : callInfo.getSetting().getVideoCount();
// If the server supports video calls, check the remote video count (it
// returns 0 or 1); if not, fall back to the local call setting.
If the video count is 1, it is a video call.
For more details, check the PJSIP CallSetting documentation.
You need to check the remote details:
sipCall.getInfo().getRemVideoCount()
where "sipCall" is your MyCall object.

android tts for multiple languages

I am developing an Android translator app which also has a TTS feature. My problem is that the voice for Greek does not work properly: it just speaks the characters, and it does not speak Hindi either. I have googled and found that Google TTS does not support Greek. Will it work properly if I download the Greek language pack? Please suggest whether that will work, and if not, what the reasons are.
I had the same issue before.
You can use Google's online TTS.
Here is my example code:
public void Online_TTS(final String text, final String lan) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            String Url = "https://translate.google.com/translate_tts?ie=UTF-8";
            String pronouce = "&q=" + text.replaceAll(" ", "%20");
            String language = "&tl=" + lan;
            String web = "&client=tw-ob";
            String fullUrl = Url + pronouce + language + web;
            Uri uri = Uri.parse(fullUrl);
            MediaPlayer mediaPlayer = new MediaPlayer();
            try {
                mediaPlayer.setDataSource(MainActivity.this, uri);
                mediaPlayer.prepare();
                mediaPlayer.start();
            } catch (IOException e) {
                e.printStackTrace();
                Log.i(TAG, "error");
            }
        }
    }).start();
}
Hope this helps.
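One caveat with the approach above: replacing spaces with %20 only escapes spaces, not other reserved characters. A more robust way to build the query string is java.net.URLEncoder; this is a general sketch, not part of the original answer (note that URLEncoder encodes a space as '+', which is valid inside a query string):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class TtsUrlBuilder {
    // Builds the translate_tts URL with a fully percent-encoded query value.
    public static String build(String text, String lang) {
        try {
            String q = URLEncoder.encode(text, "UTF-8");
            return "https://translate.google.com/translate_tts?ie=UTF-8"
                    + "&client=tw-ob"
                    + "&tl=" + lang
                    + "&q=" + q;
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is guaranteed to exist on both Android and the JVM.
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(build("bonjour comment allez-vous", "fr"));
    }
}
```

The resulting string can be passed straight to Uri.parse() in the code above.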

android-midi-lib delay before notes

I'm trying to generate a MIDI file and play it on Android. I found android-midi-lib, but there is almost no documentation for this library. I tried to run the example from the lib, and it works, but there is a delay of about 6 seconds before the track built from my notes starts playing. I don't know anything about notes or the MIDI format; everything is new for me.
Here is my code:
public class MyActivity extends Activity {
    private MediaPlayer player = new MediaPlayer();

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        // 1. Create a couple of tracks
        MidiTrack tempoTrack = new MidiTrack();
        MidiTrack noteTrack = new MidiTrack();

        // 2. Add events to the tracks
        // 2a. Track 0 is typically the tempo map
        Tempo t = new Tempo();
        t.setBpm(228);
        tempoTrack.insertEvent(t);

        // 2b. Track 1 will have some notes in it
        for (int i = 0; i < 128; i++) {
            int channel = 0, pitch = i, velocity = 100;
            NoteOn on = new NoteOn(i * 480, channel, pitch, velocity);
            NoteOff off = new NoteOff(i * 480 + 120, channel, pitch, 0);
            noteTrack.insertEvent(on);
            noteTrack.insertEvent(off);
        }

        // It's best not to manually insert EndOfTrack events; MidiTrack will
        // call closeTrack() on itself before writing itself to a file

        // 3. Create a MidiFile with the tracks we created
        ArrayList<MidiTrack> tracks = new ArrayList<MidiTrack>();
        tracks.add(tempoTrack);
        tracks.add(noteTrack);
        MidiFile midi = new MidiFile(MidiFile.DEFAULT_RESOLUTION, tracks);

        // 4. Write the MIDI data to a file
        File output = new File("/sdcard/example.mid");
        try {
            midi.writeToFile(output);
        } catch (IOException e) {
            Log.e(getClass().toString(), e.getMessage(), e);
        }

        try {
            player.setDataSource(output.getAbsolutePath());
            player.prepare();
        } catch (Exception e) {
            Log.e(getClass().toString(), e.getMessage(), e);
        }
        player.start();
    }

    @Override
    protected void onDestroy() {
        player.stop();
        player.release();
        super.onDestroy();
    }
}
I figured out that this delay depends on the first parameter of the NoteOn constructor (and maybe NoteOff too). I don't understand what the number 480 is. I tried changing it: the smaller the number, the shorter the delay before the track, BUT the whole track gets shorter too.
The timing between notes with the value 480 is fine for me, but I don't need the delay before them.
Help me, please!
OK, I figured out what the problem is.
According to http://www.phys.unsw.edu.au/jw/notes.html, MIDI note values for a piano, for example, start from 21. So if I start the cycle from 0, the first 20 values won't play anything.
Now, about the delay.
The cycle should look like this:
delay = 0;
duration = 480; // ticks
for (int i = 21; i < 108; ++i) {
    noteTrack.insertNote(channel, i, velocity, delay, duration);
    delay += duration;
}
Delay means the time at which a note should be played. So if we want to play all the notes one by one, we need to set each note's delay to the sum of all previous note durations.
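The accumulation described above can be checked with a tiny stand-alone loop that only computes the tick positions (insertNote itself needs the library, so it is left out; the duration 480 and the 87-note piano range are taken from the answer):

```java
import java.util.ArrayList;
import java.util.List;

public class NoteTiming {
    // Start tick of each note when notes of the given duration are laid
    // out back to back, starting at tick 0: 0, d, 2d, 3d, ...
    public static List<Long> startTicks(int noteCount, long duration) {
        List<Long> ticks = new ArrayList<Long>();
        long delay = 0;
        for (int i = 0; i < noteCount; i++) {
            ticks.add(delay);
            delay += duration;
        }
        return ticks;
    }

    public static void main(String[] args) {
        // 87 piano notes (pitches 21..107), 480 ticks each.
        List<Long> ticks = startTicks(87, 480);
        System.out.println("first note starts at tick " + ticks.get(0));
        System.out.println("last note starts at tick " + ticks.get(86));
    }
}
```

The first entry is 0, which is why computing delay this way removes the silence at the start of the track.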

Android Speech Recognition for specific sound pitch

Can we detect a 'scream' or 'loud sound', etc., using the Android speech recognition APIs?
Or is there any other software/third-party tool that can do the same?
Thanks,
Kaps
You mean implement a clapper?
There's no need for fancy math or the speech recognition API. Just use MediaRecorder and its getMaxAmplitude() method.
Here is some of the code you'll need.
The algorithm records for a period of time and then measures the amplitude difference. If it is large, the user probably made a loud sound.
public void recordClap()
{
    recorder.start();
    int startAmplitude = recorder.getMaxAmplitude();
    Log.d(D_LOG, "starting amplitude: " + startAmplitude);
    boolean ampDiff;
    do
    {
        Log.d(D_LOG, "waiting while taking in input");
        waitSome();
        int finishAmplitude = 0;
        try
        {
            finishAmplitude = recorder.getMaxAmplitude();
        }
        catch (RuntimeException re)
        {
            Log.e(D_LOG, "unable to get the max amplitude " + re);
        }
        ampDiff = checkAmplitude(startAmplitude, finishAmplitude);
        Log.d(D_LOG, "finishing amp: " + finishAmplitude + " difference: " + ampDiff);
    }
    while (!ampDiff && recorder.isRecording());
}

private boolean checkAmplitude(int startAmplitude, int finishAmplitude)
{
    int ampDiff = finishAmplitude - startAmplitude;
    Log.d(D_LOG, "amplitude difference " + ampDiff);
    return (ampDiff >= 10000);
}
If I were trying to detect a scream or loud sound, I would just look for a high root-mean-square (RMS) value in the audio coming through the microphone. I suppose you could try to train a speech recognition system to recognize a scream, but that seems like overkill.
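The RMS idea from the answer above can be sketched in plain Java; on Android you would fill the sample buffer from AudioRecord.read(). The threshold here is an arbitrary placeholder, not a tuned constant:

```java
public class LoudSoundDetector {
    // Root-mean-square of a block of 16-bit PCM samples.
    public static double rms(short[] samples) {
        if (samples.length == 0) {
            return 0.0;
        }
        double sumSquares = 0.0;
        for (short s : samples) {
            sumSquares += (double) s * (double) s;
        }
        return Math.sqrt(sumSquares / samples.length);
    }

    // A block counts as "loud" when its RMS exceeds the threshold.
    public static boolean isLoud(short[] samples, double threshold) {
        return rms(samples) > threshold;
    }

    public static void main(String[] args) {
        short[] quiet = {10, -12, 8, -9};
        short[] loud = {20000, -21000, 19500, -20500};
        System.out.println("quiet rms = " + rms(quiet));
        System.out.println("loud rms  = " + rms(loud));
    }
}
```

In practice you would call isLoud() on each buffer as it arrives and pick the threshold empirically, since microphone gain varies between devices.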
