Hey guys, I was wondering if it is possible to transcribe audio without having to call a recognizer intent (i.e. the dialog that says you are recording audio). I want to be able to retrieve the results of the voice recognition every 2 to 3 seconds or so, and I plan to use this with a bunch of ListViews. Is this possible? If so, any ideas? Thanks!
Edit: I forgot to mention that I am playing around with android.speech.SpeechRecognizer, but so far, in my implementation of the RecognitionListener interface, all I have been able to get from DDMS is that there is a client-side error. Nothing else seems to be called. Also, is it essential that I implement a RecognitionService? I know that the example in the API is just that. If so, how would I create and use this service? Thanks again.
Speech recognition does not work in the Emulator. You need a device.
I just posted a working code skeleton in another thread:
Voice Recognition Commands Android
The speech recognizer can be triggered every few seconds without any UI. You might need to write your own code to decide when it is a good time to record and when it is not (you get an audio buffer you could peek through) - or you could do something in your own UI.
I think you could re-trigger it over and over again. Not sure it'd work perfectly, but it's worth a try.
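If it helps, here is a rough, untested sketch of the no-dialog setup I mean, using SpeechRecognizer directly and re-triggering it from the listener callbacks. Only the RecognitionListener callbacks and RecognizerIntent extras are real API; the surrounding activity code and variable names are just illustrative, and a client-side error can simply mean a missing RECORD_AUDIO permission or a second startListening() call while one is already running.

final SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(this);
final Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_CALLING_PACKAGE, getPackageName());

recognizer.setRecognitionListener(new RecognitionListener() {
    @Override public void onResults(Bundle results) {
        ArrayList<String> matches =
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
        // ...feed matches into your ListViews here...
        recognizer.startListening(intent);   // re-trigger for the next utterance
    }
    @Override public void onError(int error) {
        // ERROR_CLIENT can mean a missing RECORD_AUDIO permission or calling
        // startListening() again before the previous session has finished
        recognizer.startListening(intent);
    }
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onPartialResults(Bundle partialResults) {}
    @Override public void onEvent(int eventType, Bundle params) {}
});

recognizer.startListening(intent);

(Implementing your own RecognitionService shouldn't be necessary for this; createSpeechRecognizer() binds to whichever recognition service the device already provides - you only write one if you're supplying your own engine.)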
It is impossible in Android < 2.1, and probably in 2.2 as well.
When I asked a Google support person, he said, "Maybe you can figure out what packets are being sent and then just make a direct web call"
Wow.
Our company is developing a full-featured app for blind people, which means it relies heavily on text-to-speech (TTS). We have noticed that the TTS voice simply stops speaking randomly. It usually works fine and we have no issues with speech, but once in a blue moon we get no voice output and the app doesn't know otherwise so it continues to work like usual, but without any voice. Users can still use the app for the most part, but they no longer hear the speech until the app is restarted and everything is reset.
Is there a reliable way to know if the voice fails to speak something?
I already utilized an utterance complete listener to handle certain scenarios with what it says, but that makes no difference when the TTS simply doesn't output the speech. It's as if the voice "thinks" it said it but we never hear it.
Is there an event we can capture that would be fired when the TTS engine tries to say something but fails?
In my experience, at the time of writing, the most reliable TTS engine is Google's own. This, of course, is a matter of opinion and not something Stack Overflow encourages. Currently, Google TTS is the only one that uses the latest Voice API correctly, whereas other engines crash, fail, or simply report incorrectly.
It's unfortunately all too common that a TTS Engine will believe it has spoken the utterance correctly, but hasn't.
To combat this, when I request the speech, I set a Runnable which checks whether onDone() fails to be called within a period relative to the length of the speech. I also check that onDone() is not called too quickly, which would suggest that the speech failed, unless the request was deliberately one of silence.
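Roughly, the check looks like this - just a sketch of the idea, not production code: the timing constants are guesses, and notifySpeechFailure() stands in for however you decide to alert the user.

private final Handler watchdog = new Handler(Looper.getMainLooper());

private void speakWithWatchdog(final TextToSpeech tts, final String text, final String utteranceId) {
    final long started = SystemClock.elapsedRealtime();
    // Rough upper bound for how long the utterance should take (guesswork).
    final long maxExpectedMs = 2000 + text.length() * 200L;

    final Runnable timeout = new Runnable() {
        @Override public void run() {
            // onDone() never arrived within the expected window - treat the utterance as lost.
            notifySpeechFailure(utteranceId);
        }
    };

    tts.setOnUtteranceProgressListener(new UtteranceProgressListener() {
        @Override public void onStart(String id) {}
        @Override public void onDone(String id) {
            watchdog.removeCallbacks(timeout);
            long elapsed = SystemClock.elapsedRealtime() - started;
            if (elapsed < 100 && text.trim().length() > 0) {
                // Finished suspiciously fast for a non-silent request - probably never audible.
                notifySpeechFailure(id);
            }
        }
        @Override public void onError(String id) {
            watchdog.removeCallbacks(timeout);
            notifySpeechFailure(id);
        }
    });

    watchdog.postDelayed(timeout, maxExpectedMs);

    HashMap<String, String> params = new HashMap<String, String>();
    params.put(TextToSpeech.Engine.KEY_PARAM_UTTERANCE_ID, utteranceId);
    tts.speak(text, TextToSpeech.QUEUE_ADD, params);
}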
Those two checks enable me to toast to the user if there's an issue - given that you are dealing with the blind, you will obviously have to find another way to communicate the problem! Perhaps a series of multiple small vibrates could generically denote an issue?
Hope that helps.
I have an activity that implements RecognitionListener. To make recognition continuous, every time onEndOfSpeech() is called I start the listener again:
speech.startListening(recognizerIntent);
But it takes some time (around half a second) until it starts again, so there is a half-second gap where nothing is listening. Therefore, I miss words that were spoken in that interval.
On the other hand, when I use Google's voice input to dictate messages instead of the keyboard, this time gap does not exist. Meaning: there is a solution.
What is it?
Thanks
I'd recommend using CMUSphinx to recognize speech continuously. To achieve continuous speech recognition with the Google speech recognition API, you would have to resort to a loop in a background service, which takes too many resources and drains the device battery.
On the other hand, Pocketsphinx works really well. It's fast enough to spot a key phrase and recognize voice commands behind the lock screen without users touching their device. And it does all this offline. You can try the demo.
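For reference, the key-phrase setup in the Pocketsphinx demo looks roughly like this - the asset names depend on which model you bundle, the key phrase is just an example, and myPocketsphinxListener stands in for your own listener:

private void startKeyphraseSpotting(Context context) throws IOException {
    // All classes here are from edu.cmu.pocketsphinx.*, not android.speech.
    Assets assets = new Assets(context);
    File assetsDir = assets.syncAssets();   // copies the bundled model files to storage

    SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
            .setAcousticModel(new File(assetsDir, "en-us-ptm"))
            .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
            .getRecognizer();

    recognizer.addListener(myPocketsphinxListener);   // your edu.cmu.pocketsphinx.RecognitionListener
    recognizer.addKeyphraseSearch("wakeup", "oh mighty computer");
    recognizer.startListening("wakeup");              // keeps spotting offline until stopListening()
}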
If you really want to use google's api, see this
Try looking at a couple of other APIs:
The speech demo: it has source here, is discussed here, and can be operated from the CLI here.
You could use the full-duplex Google API (it's rate-capped at 50 requests per day).
Or, if you like that general idea, check out IBM's Watson, discussed here.
IMO it's more complex, but not capped.
There are options like:
intent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS, 2000); // milliseconds of silence to wait for before considering the input complete
or
intent.putExtra(RecognizerIntent.EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS, 2000);
These ceased to work on Jelly Bean and above, but work on ICS and below - not sure if intended or a bug!
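For context, here is a sketch of how those extras sit alongside the rest of the recognizer intent (the 5000 ms values are arbitrary examples; speech is the SpeechRecognizer from the question):

Intent recognizerIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
recognizerIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
recognizerIntent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
recognizerIntent.putExtra(
        RecognizerIntent.EXTRA_SPEECH_INPUT_COMPLETE_SILENCE_LENGTH_MILLIS, 5000);
recognizerIntent.putExtra(
        RecognizerIntent.EXTRA_SPEECH_INPUT_POSSIBLY_COMPLETE_SILENCE_LENGTH_MILLIS, 5000);
speech.startListening(recognizerIntent);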
I have an idea to build an Android application for people who cannot speak, to help them answer phone calls. My thought was to convert text into voice and then transfer it through the call stream. Is it still not possible on the Android platform to play audio during a phone call so that the other party can hear it?
Sorry, the short answer still seems to be no.
I would love to be shown wrong on this.
It seems, however, that the issue goes back to the hardware level, and even where that is exposed, Android isn't coded to take advantage of it.
There's a good explanation here.
Good luck.
All Xperia phones have a built-in answering machine. When a call comes in, the phone can pick up automatically and answer with a pre-recorded voice message, which you can customize at any time. You may want to try these phones.
I have a voice recording on my iPhone. Suppose I call a friend's number; as soon as he takes the call, he should be able to hear the voice recording I made. Is this possible on iPhone and Android?
The answer is NO.
This is not possible as far as the iPhone is concerned. We lose control over our application when any call comes in or goes out, and control passes to the calling application.
Being a Java ME and Android developer I have no idea about the iPhone, but I guess you got the iPhone answer from Jennis's answer.
Coming to Android, let me guide you through some points.
Yes, you can achieve this requirement on Android. You need to use the TelephonyManager API. There is a state called CALL_STATE_OFFHOOK; this will help you achieve your goal.
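A minimal sketch of watching for that state is below (it needs the READ_PHONE_STATE permission, and note that knowing the call is off-hook does not by itself let you inject audio into the call's voice stream):

TelephonyManager tm = (TelephonyManager) getSystemService(Context.TELEPHONY_SERVICE);
tm.listen(new PhoneStateListener() {
    @Override
    public void onCallStateChanged(int state, String incomingNumber) {
        switch (state) {
            case TelephonyManager.CALL_STATE_RINGING:
                // an incoming call is ringing
                break;
            case TelephonyManager.CALL_STATE_OFFHOOK:
                // a call has been answered or dialled - this is where you would
                // try to start playback of the recorded message
                break;
            case TelephonyManager.CALL_STATE_IDLE:
                // no call in progress
                break;
        }
    }
}, PhoneStateListener.LISTEN_CALL_STATE);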
I noticed that Flash allows you to insert cues into a video file (FLV). Is something like this possible on Android? I have a video that runs locally in my Android app, and I would like to insert cues into the video which will give me callbacks when a certain portion of the video has been reached. If this is not possible, are there any other methods to do something similar? I have to be pretty precise with where the cue is located.
Thanks
Note:
I just found this same question on Stack Overflow. Can anyone verify that this is still the case? (That it is not possible, except by polling the video continually.) I did know of this way, but it's not the most accurate approach if you need to be precise and stitch dynamic pieces of video together seamlessly.
Android VideoView - Detect point of time in video
I'm working on this as well, along with a kind of cue/action script. For tutorials and instruction videos I need to keep track of the current position to serve, for example, questions and navigation menus appropriate for that point in time. It's easy when it's sufficient to act in response to user input, but otherwise firing up a thread to poll at some decent interval is the way to go. Accuracy might be acceptable, and it can be calibrated by sensing the actual position.
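For what it's worth, a rough sketch of that polling loop - the cue timestamps, the 100 ms interval, and the onCueReached() callback are all placeholders:

private final Handler cueHandler = new Handler(Looper.getMainLooper());
private final long[] cuePointsMs = {5000, 12500, 30000};   // example cue timestamps
private int nextCue = 0;

private final Runnable cuePoller = new Runnable() {
    @Override public void run() {
        int pos = videoView.getCurrentPosition();
        // fire every cue whose timestamp has been passed since the last poll
        while (nextCue < cuePointsMs.length && pos >= cuePointsMs[nextCue]) {
            onCueReached(nextCue);
            nextCue++;
        }
        cueHandler.postDelayed(this, 100);   // ~100 ms accuracy at best
    }
};

// once playback starts:
// videoView.start();
// cueHandler.post(cuePoller);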