I have used the code provided in this link for speech recognition. In the emulator it says "recognizer not present", so I installed the app on my phone. When I tap the Speak button it works, but when I say a name such as "rajesh" it shows various similar-sounding words rather than the name. I want to use the recognized input to select a contact from the address book and place a call, so please tell me how to proceed in that direction. One more thing: currently I have to build the code in Eclipse, install it on the phone, and only then check the output. Is there an alternative way to edit the app code in Eclipse and test it on the phone more directly?
Please share any relevant links. I want to develop a calling app for blind users; if voice recognition does not work out, what else could be used to take input from the user?
Names are hard for speech recognition. There are more possible names in the world than words in any dictionary, so recognising an arbitrary name is difficult, although common names are easier.
Anyway, if you want to recognise a customized list of words/names, you might want to look at Dragon Mobile from Nuance. Here is a copy-and-paste from another similar question I answered:
If you use 3rd party Android recognition from Nuance (the people behind DragonDictate), it supports a "grammar mode" where you can somewhat restrict the phrases that will be recognised during recognition.
Importantly, if you add unusual names into a Custom Vocabulary, they SHOULD become recognizable (complex pronunciation issues aside).
You can find information if you dig through http://dragonmobile.nuancemobiledeveloper.com, looking for 'Custom Vocabularies'. Grammar mode is essentially a special mode of custom vocabularies.
At the time of writing, there was a document here that makes some mention of grammar mode:
http://dragonmobile.nuancemobiledeveloper.com/downloads/custom_vocabulary/Guide_to_Custom_Vocabularies_v1.5.pdf - It only really becomes clear when you try to progress in their provisioning web GUI.
You have to set up an account, and jump through other hoops, but there is a free tier. This is the only potential way I have found to constrain a recognition vocabulary.
Well, short of running up PocketSphinx, but that is still described as 'Research'/'Pre-Alpha'.
No, I don't work for Nuance. Not sure anyone does. They may have all been eaten by zombies. You would guess as much reading their support forums. They never reply.
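Whichever recognizer you end up with, for the contact-dialling part of your question the usual pattern is to take the candidate strings the recognizer returns and match them against the names already in the phone's contacts, then dial the best match. Here is a minimal sketch, assuming the standard RecognizerIntent / ContactsContract APIs inside an Activity; the naive contains() match, the REQUEST_SPEECH constant and the projection shown are just placeholders for illustration (a real app would use something like edit distance):

    private static final int REQUEST_SPEECH = 1;  // arbitrary request code

    // Called from the Speak button's onClick.
    private void startSpeechInput() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode != REQUEST_SPEECH || resultCode != RESULT_OK) return;

        ArrayList<String> guesses =
                data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);

        // Compare every guess against every contact name and keep the first hit.
        // Needs READ_CONTACTS; the ACTION_CALL below needs CALL_PHONE.
        Cursor c = getContentResolver().query(
                ContactsContract.CommonDataKinds.Phone.CONTENT_URI,
                new String[] { ContactsContract.CommonDataKinds.Phone.DISPLAY_NAME,
                               ContactsContract.CommonDataKinds.Phone.NUMBER },
                null, null, null);
        String number = null;
        while (c.moveToNext() && number == null) {
            String name = c.getString(0).toLowerCase();
            for (String guess : guesses) {
                if (name.contains(guess.toLowerCase())) { // naive match, placeholder
                    number = c.getString(1);
                    break;
                }
            }
        }
        c.close();

        if (number != null) {
            startActivity(new Intent(Intent.ACTION_CALL, Uri.parse("tel:" + number)));
        }
    }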
Related
I would like to implement offline voice recognition in my app, for two purposes:
For a small set of commands (play, stop, previous, next and a couple of others);
For a list of a few hundred bird names.
To implement (1), it seems to me a bad idea (slower and more resource-consuming) to use the full voice recognition power of Android. In my mind, it would be easier to tell my app to interpret only a few words, that is, to use my own dictionary and tell my app to "use only these 10 words".
Implementing (2) is similar to (1), but with a few hundred words instead of 10.
Does this make sense, and if so, is there an easy way to implement it? Is it worth it?
Thanks!
L.
You can implement your app using CMUSphinx on Android. The CMUSphinx tutorial is here:
http://cmusphinx.sourceforge.net/wiki/tutorial
The language models for recognizing a limited set of words are described here:
http://cmusphinx.sourceforge.net/wiki/tutoriallm
You can use keyword spotting mode to recognize a few commands.
Pocketsphinx on Android is described here:
http://cmusphinx.sourceforge.net/wiki/tutorialandroid
The demo includes the way to switch recognition modes, so you can go from your 10 commands to a few hundred bird names as you intend; see the sketch below.
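For orientation, setting this up with pocketsphinx-android looks roughly like the following. The asset file names (en-us-ptm, cmudict-en-us.dict, commands.gram, birds.lm) follow the demo's layout and are assumptions here, and exception handling is omitted:

    // Note: this SpeechRecognizer is edu.cmu.pocketsphinx.SpeechRecognizer,
    // not android.speech.SpeechRecognizer.
    Assets assets = new Assets(context);          // copies model files from the APK's assets
    File assetsDir = assets.syncAssets();

    SpeechRecognizer recognizer = SpeechRecognizerSetup.defaultSetup()
            .setAcousticModel(new File(assetsDir, "en-us-ptm"))
            .setDictionary(new File(assetsDir, "cmudict-en-us.dict"))
            .getRecognizer();
    recognizer.addListener(listener);             // your RecognitionListener

    // A single keyphrase, cheap enough to listen for continuously.
    recognizer.addKeyphraseSearch("wakeup", "hello bird guide");

    // A small JSGF grammar holding the ~10 playback commands.
    recognizer.addGrammarSearch("commands", new File(assetsDir, "commands.gram"));

    // A statistical language model built from the few hundred bird names.
    recognizer.addNgramSearch("birds", new File(assetsDir, "birds.lm"));

    // Switching modes is just stopping and starting with a different search name.
    recognizer.startListening("commands");
    // ... later ...
    recognizer.stop();
    recognizer.startListening("birds");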
I am building a speech recognition Android app that will act as a virtual personal assistant, with tasks such as:
Make appointments/Reminders
Weather Info
General queries to Wolfram|Alpha / Wikipedia (e.g. who directed Ghostbusters, what's the £/$ exchange rate)
My question is whether to use PocketSphinx or the Google API.
Originally I set this up with "android.speech.RecognitionListener", which worked great; however, I want to implement keyword spotting so the user doesn't need any interaction other than just speaking.
Apparently the Google API doesn't support this, so I looked into using PocketSphinx for the keyword spotting while still using Google for the rest of the app (as I heard PocketSphinx is not as accurate?).
However, the two don't get along, as they can't both occupy the microphone at the same time.
Is there a nice way to switch between recognizers? (I can't even import both into the same project.)
Should I just go with PocketSphinx and deal with the lower accuracy?
Suggestions would be helpful
Cheers
For anybody who wants to implement a similar project, I have found a workaround. It's a bit hacky and not entirely clean, but it works.
Use the Android speech recognizer with a toggle on/off switch, as in many examples across the web. When onResults comes back, check the string for the "hotword": if it is not present, discard the string; if it is, process it. Once the query has been processed and the text-to-speech is responding, programmatically re-click the toggle button, ensuring constant listening.
Do the same in "onError" as well.
I also had this in onPartialResults, but that appeared to crash the thread; I'm not entirely sure why, but once it was removed everything worked nicely.
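For reference, a stripped-down version of that workaround, keeping the listener alive by restarting it from onResults/onError, could look like this. The HOTWORD constant and the handleQuery() helper are placeholders, and this sketch restarts the recognizer directly rather than re-clicking a toggle button:

    private SpeechRecognizer recognizer;          // android.speech.SpeechRecognizer
    private Intent listenIntent;
    private static final String HOTWORD = "assistant";   // placeholder hotword

    private void startAlwaysListening() {
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        listenIntent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        listenIntent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> text =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                if (text != null && !text.isEmpty()
                        && text.get(0).toLowerCase().contains(HOTWORD)) {
                    handleQuery(text.get(0));     // placeholder: process hotword queries
                }
                recognizer.startListening(listenIntent);   // listen again straight away
            }
            @Override public void onError(int error) {
                recognizer.startListening(listenIntent);   // restart on errors too
            }
            // Remaining callbacks intentionally left empty.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
        recognizer.startListening(listenIntent);
    }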
Due to its relatively poor accuracy, PocketSphinx is only really usable for recognizing a predefined set of commands (you have to prepare your own dictionary and language model). On the other hand, PocketSphinx works offline, which is a big plus for some projects.
Google, by contrast, is very accurate, but it's not free and works only online.
I'm looking for an automatic speech recognition solution for Android that can handle several speakers at the same time. I need a solution that can tell who is saying what.
For example, if I have two users in front of my app and they are speaking at the same time, I need to know who said "YES" and who said "NO". My grammar is very simple, if that helps...
I've already tested both the Android RecognizerIntent and a SpeechRecognizer implementation, so I can already recognize the words, but I cannot associate them with specific speakers.
By the way, if you know of this kind of solution for other platforms, such as iOS or a web-based app, I'll take it as well :).
Thank you.
Is it possible to restrict the voice search widget to look for a match close to a given set of words? For example, if I am using it to search over a list of names, the results are not meaningful, as names are often corrected to ordinary dictionary words.
If you use 3rd party Android recognition from Nuance (the people behind DragonDictate), it supports a "grammar mode" where you can somewhat restrict the phrases that will be recognised during recognition.
Importantly, if you add unusual names into a Custom Vocabulary, they SHOULD become recognizable (complex pronunciation issues aside).
You can find information if you dig through http://dragonmobile.nuancemobiledeveloper.com, looking for 'Custom Vocabularies'. Grammar mode is essentially a special mode of custom vocabularies.
At the time of writing, there was a document here that makes some mention of grammar mode:
http://dragonmobile.nuancemobiledeveloper.com/downloads/custom_vocabulary/Guide_to_Custom_Vocabularies_v1.5.pdf - It only really becomes clear when you try to progress in their provisioning web GUI.
You have to set up an account, and jump through other hoops, but there is a free tier. This is the only potential way I have found to constrain a recognition vocabulary.
Well, short of running up PocketSphinx, but that is still described as 'Research'/'Pre-Alpha'.
No, I don't work for Nuance. Not sure anyone does. They may have all been eaten by zombies. You would guess as much reading their support forums. They never reply.
No, unfortunately this is not possible.
You could also look at recognition from AT&T.
They have a very feature-rich web API, including full grammar support. I only found out about it recently!
1,000,000 transactions per month for free. Generous!
Look for 'AT&T API Program'. Weird name.
Link at time of writing:
http://developer.att.com/apis/speech
Unfortunately for me, no Australian accent language models at time of writing. Only US and UK English. Boooo.
EDIT: Some months after I submitted the above, AT&T retired the service mentioned. It seems everyone just wants a 'dumbed down' API where you just call a recognizer, and it returns words. Sure, that is of course the holy grail, but a properly designed, constrained grammar will generally work better. As someone with speech skills, the minimalism of the common Speech APIs today is really frustrating...
I want to know how to control system resources and services like Bluetooth, SMS, phone contacts, etc.
Specifically, I want to know how to control SMS usage based on user behaviour, block an incoming call, or switch the phone to vibrate mode without the user noticing, things like that.
I need it for my assignment, a paper on context-aware access control.
I chose Android for the implementation, but I am afraid I won't be able to submit my paper in time if I have to learn Android from scratch entirely on my own.
No offence, but I want to avoid mistakes. My head swells every time a "force close" error shows up, and I need this urgently.
As Willytete said, the developer site is the best resource for you.
There you can find:
Application Fundamentals
Download the Android SDK and start programming
The first-program tutorial, where you can start with Hello World
The Notepad Tutorial, which gives you a lot of ideas
The list of sample apps, which contains a lot of code
Getting the Samples, which explains how to use them
You will get all the information you need from the developer site as you move from beginner to expert.
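As a small concrete starting point for the kind of control asked about, here is a minimal sketch of two of the mentioned actions (switching the phone to vibrate and reacting to incoming SMS). The SmsReceiver class name is just an example, RECEIVE_SMS must be declared in the manifest alongside a receiver entry, and newer Android versions add further restrictions:

    // Silently switch the ringer to vibrate-only.
    AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    am.setRingerMode(AudioManager.RINGER_MODE_VIBRATE);

    // React to incoming SMS (register this receiver in the manifest for
    // the android.provider.Telephony.SMS_RECEIVED broadcast).
    public class SmsReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context context, Intent intent) {
            // Inspect the "pdus" extra and apply your context-aware policy here,
            // e.g. log the message, notify the user, or adjust settings.
        }
    }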