I'm starting my final-year project. I will build an Android application that takes commands from the user and then processes the input in order to show results.
My question is: what approaches can I use to process the input (by input I mean the data or text obtained after converting speech to text)?
I have found some approaches, such as matching the input against already-stored data (template matching), but I'm looking for something smarter than that (and I'd welcome any suggested references).
Thanks
I would suggest you start with a very basic and clearly defined set of keyword rules of your own:
@Override
public void onResults(final Bundle results) {
    final ArrayList<String> heardVoice = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    if (heardVoice != null && !heardVoice.isEmpty()) {
        for (String result : heardVoice) {
            if (result.contains("bluetooth")) {
                if (result.contains("on")) {
                    // turn on bluetooth
                    break;
                } else if (result.contains("off")) {
                    // turn off bluetooth
                    break;
                }
            }
        }
    }
}
Once you've got these basic keyword rules working, you can then look into using a Natural Language Processing (NLP) model to improve the flexibility and performance of your code.
There are many options out there, but Apache OpenNLP is a good place to start, with comprehensive documentation.
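For instance, OpenNLP's document categorizer can map a recognized sentence onto one of your own command categories. The sketch below is only an illustration of the idea, not a drop-in solution: the model file "commands.bin" and the category names are hypothetical, and you would first need to train such a model on your own labelled command sentences.
import java.io.FileInputStream;
import java.io.InputStream;
import opennlp.tools.doccat.DoccatModel;
import opennlp.tools.doccat.DocumentCategorizerME;
import opennlp.tools.tokenize.SimpleTokenizer;

public class CommandClassifier {
    public static void main(String[] args) throws Exception {
        // "commands.bin" is a hypothetical model trained on your own labelled
        // command sentences (categories like BLUETOOTH_ON, BLUETOOTH_OFF, ...).
        try (InputStream in = new FileInputStream("commands.bin")) {
            DoccatModel model = new DoccatModel(in);
            DocumentCategorizerME categorizer = new DocumentCategorizerME(model);

            String heard = "please switch the bluetooth on";
            String[] tokens = SimpleTokenizer.INSTANCE.tokenize(heard);

            // Score every category and pick the most likely command
            double[] outcomes = categorizer.categorize(tokens);
            System.out.println("Predicted command: " + categorizer.getBestCategory(outcomes));
        }
    }
}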
I am trying to implement speech recognition. I keep getting the error:
ERROR_NO_MATCH - No recognition result matched - 7
I can't find anything that explains what this means.
What does "No recognition result matched" mean?
You need to enable partial results first, and then read the UNSTABLE_TEXT extra.
// When creating the intent, set the partial flag to true
intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS,true);
Then use the partialResults bundle returned in onPartialResults(): in that bundle, SpeechRecognizer.RESULTS_RECOGNITION holds all the terms except the last one, and "android.speech.extra.UNSTABLE_TEXT" holds that last, still-unstable term.
@Override
public void onPartialResults(Bundle partialResults) {
    ArrayList<String> data =
            partialResults.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
    ArrayList<String> unstableData =
            partialResults.getStringArrayList("android.speech.extra.UNSTABLE_TEXT");
    // Stitch the stable terms and the last (unstable) term back together
    if (data != null && !data.isEmpty() && unstableData != null && !unstableData.isEmpty()) {
        mResult = data.get(0) + unstableData.get(0);
    }
}
You can follow the link below for a better understanding:
speech recognition
"No recognition result matched" means my Superbox S3 cannot use voice commands; the message "no recognition result matched" comes up on screen.
I'm using Android's TextToSpeech class. Everything is working normally. However, there are languages/locales that aren't installed by default but are supported by the TTS engine, and I can't capture the state of missing voice data.
With the internet on, when I try to setLanguage to a new locale whose voice data hasn't been downloaded, it simply downloads the voice data and performs the speak method normally/successfully.
However, with the internet off, when I try to setLanguage to a new locale whose voice data hasn't been downloaded, it attempts to download the voice data. With no internet, it just indicates "downloading" on the "TTS voice data" settings screen under "Language and input" for the selected locale, without any progress. And, as expected, the speak method doesn't work since the voice data isn't downloaded. When this happens, I would expect the TTS methods setLanguage/isLanguageAvailable to return LANG_MISSING_DATA so I could capture this state; however, they simply return LANG_COUNTRY_AVAILABLE. The situation is shown in this image:
I want to be able to detect when the voice data of the chosen locale isn't downloaded/is missing, and either give a toast message or direct the user to download it. I have seen several posts suggesting the use of isLanguageAvailable, like this one. I also looked at the Android documentation, and it seems like isLanguageAvailable's return values should capture the state of missing voice data with LANG_MISSING_DATA.
I also tried sending an intent with ACTION_CHECK_TTS_DATA as the other way to check for missing data as suggested in the Android documentation I linked. However, the resultCode again didn't capture/indicate that the voice data is missing (CHECK_VOICE_DATA_FAIL) but returned CHECK_VOICE_DATA_PASS instead.
In this case, how should I capture the state of a language/locale being available/supported but with the voice data missing? I'm also curious why CHECK_VOICE_DATA_FAIL and LANG_MISSING_DATA aren't the values returned. When the voice data is missing, shouldn't it return these values? Thanks!
Below is the return value when I try to use setLanguage and isLanguageAvailable on locales whose voice data hasn't been downloaded (0 and 1 are the values returned by the methods shown in the logs; -1 is the one that corresponds to missing voice data):
You can find all the available Locales of the device using the following code; I hope it helps you.
Locale[] availableLocales = Locale.getAvailableLocales();
boolean available = false;
for (Locale locale : availableLocales)
{
    if (locale.getDisplayLanguage().equals("your_locale_language"))
    {
        available = true;
        // TODO:
    }
}
I have this implementation as part of a wrapper class for working with TextToSpeech; I hope it helps:
public boolean isLanguageAvailable(Locale language)
{
    if (language == null) return false;
    boolean available = false;
    switch (tts.isLanguageAvailable(language))
    {
        case TextToSpeech.LANG_AVAILABLE:
        case TextToSpeech.LANG_COUNTRY_AVAILABLE:
        case TextToSpeech.LANG_COUNTRY_VAR_AVAILABLE:
            if (Build.VERSION.SDK_INT >= 21) {
                // Temporarily switch to the requested language to inspect its voice
                tts.setLanguage(language);
                Voice voice = tts.getVoice();
                if (voice != null) {
                    Set<String> features = voice.getFeatures();
                    if (features != null && !features.contains(TextToSpeech.Engine.KEY_FEATURE_NOT_INSTALLED))
                        available = true;
                } else available = false;
                // Restore the language the wrapper was using before the check
                tts.setLanguage(this.language);
            }
            break;
        case TextToSpeech.LANG_MISSING_DATA:
        case TextToSpeech.LANG_NOT_SUPPORTED:
        default:
            break;
    }
    return available;
}
It looks like a long-standing question, but anyway: it seems that you have to check the voice features to find this out:
Set<String> features = voice.getFeatures();
if (features.contains(TextToSpeech.Engine.KEY_FEATURE_NOT_INSTALLED)) {
//Voice data needs to be downloaded
...
}
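If it isn't obvious where that voice object comes from, here is a minimal sketch for API 21+ that looks one up by iterating tts.getVoices() (the target locale is just an example):
Locale wanted = Locale.GERMANY; // example locale
boolean needsDownload = false;
for (Voice v : tts.getVoices()) {
    // Compare language codes only; some engines report ISO-3 style codes such as "deu"
    if (v.getLocale().getLanguage().startsWith(wanted.getLanguage())) {
        Set<String> features = v.getFeatures();
        if (features != null && features.contains(TextToSpeech.Engine.KEY_FEATURE_NOT_INSTALLED)) {
            needsDownload = true; // the voice exists but its data is not installed
        }
        break;
    }
}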
I have a standard TextToSpeech, android.speech.tts.TextToSpeech.
I initialize it and set a language by using tts.setLanguage(Locale.getDefault()).
That default Locale is de_DE (for Germany, correct).
Right after setting it, I ask the TTS to give me its language with tts.getLanguage().
Now it tells me that it's set to "deu_DEU".
There is no Locale with that setting, so I can't even check whether it's set to the right language, because I can't find a Locale object with matching values.
The issue might be related to Android 4.3, but I didn't find any info.
The background is that I need to show values with the same decimal symbol, but the TTS needs the correct symbol, or it says "dot" in German, which makes NO sense at all.
Conclusion:
A Locale is a container holding a string composed of a language, a country, and an optional variant. Every text-to-speech engine can return a custom Locale like "eng_USA_texas".
Furthermore, the Locale returned by the TTS engine may only be a "close match" to the wanted Locale, e.g. "en_US" instead of "en_GB".
However, Locale has a method called getLanguage(), and it returns the first part of the above-mentioned string, e.g. "en" or "eng". Those language codes are regulated by ISO, and one can hope that everyone sticks to them (see the link in the accepted answer).
So checking tts.getLanguage().getLanguage().startsWith("en") should always be true if it's some form of English language setting and the ISO standards are followed.
It is important to mention that Locales should not be compared with locale_a == locale_b, as they can be different objects yet have the same content; they are containers of sorts.
Always compare with locale_a.equals(locale_b).
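For example, a quick sketch of both checks (the English target and the German locales are just examples):
// Check whether the engine ended up on some form of English,
// whatever variant string it reports ("en", "eng", "eng_USA_texas", ...).
Locale applied = tts.getLanguage();
boolean isEnglish = applied != null && applied.getLanguage().startsWith("en");

// Compare Locale objects by content, never by identity.
Locale a = new Locale("de", "DE");
Locale b = new Locale("de", "DE");
boolean sameContent = a.equals(b); // true
boolean sameObject  = (a == b);    // false, they are different objects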
I hope this helps people sort out some problems with TTS and languages.
You're right, it's frustrating how the locale codes the TTS object uses are different to those of the device locale. I don't understand why this decision was made.
To add further complication, the TTS Engine can supply all kinds of different locales, such as eng_US_sarah or en-US-female etc. It's down to the TTS Engine how these are stored and displayed.
I've had to write additional code to iterate through the returned locales and attempt to match them to a locale the system can use, or vice versa.
To start with, take a look at how the engines you have installed return their locale information. You can then start to collate in your code a list that associates 'deu_DEU' with 'de_DE'.
This can often be done simply using split("_") and startsWith(String), but unfortunately not for all locales.
Here's some base code I've used to analyse the installed TTS Engines' locale structure.
private void getEngines() {
    final Intent ttsIntent = new Intent();
    ttsIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
    final PackageManager pm = getActivity().getPackageManager();
    final List<ResolveInfo> list = pm.queryIntentActivities(ttsIntent, PackageManager.GET_META_DATA);
    final ArrayList<Intent> intentArray = new ArrayList<Intent>(list.size());
    for (int i = 0; i < list.size(); i++) {
        // Build one CHECK_TTS_DATA intent per installed engine package
        final Intent getIntent = new Intent();
        getIntent.setAction(TextToSpeech.Engine.ACTION_CHECK_TTS_DATA);
        getIntent.setPackage(list.get(i).activityInfo.applicationInfo.packageName);
        intentArray.add(getIntent);
    }
    for (int i = 0; i < intentArray.size(); i++) {
        // The available voices come back in onActivityResult()
        startActivityForResult(intentArray.get(i), i);
    }
}
@Override
public void onActivityResult(final int requestCode, final int resultCode, final Intent data) {
    try {
        if (data != null) {
            // Each engine reports its supported voices/locales in this extra
            System.out.print(data.getStringArrayListExtra("availableVoices").toString());
        }
    } catch (NullPointerException e) {
        e.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
From the above ISO-3 codes and the device locale format, you should be able to come up with something for the locales you are concerned with.
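As an illustration, here's a rough sketch of a helper (the method name is my own) that resolves an engine-style code such as "deu_DEU" back to a device Locale by comparing ISO-3 codes; it won't cover every engine's format, but it handles the split("_")/ISO-3 case discussed above:
private Locale resolveEngineLocale(String engineCode) {
    String[] parts = engineCode.split("_");
    for (Locale candidate : Locale.getAvailableLocales()) {
        try {
            if (candidate.getISO3Language().equalsIgnoreCase(parts[0])
                    && (parts.length < 2 || candidate.getISO3Country().equalsIgnoreCase(parts[1]))) {
                return candidate; // e.g. "deu_DEU" resolves to de_DE
            }
        } catch (MissingResourceException e) {
            // some locales have no ISO-3 mapping; skip them
        }
    }
    return null; // no match found for this engine code
}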
I've been intending to submit an enhancement request to AOSP for a while, as all TTS Engines need to use constant values, and extras such as gender need to be added so that the TTS Engines can be used to their full capabilities.
EDIT: Further to your edit, note the wording regarding setLanguage(). The individual TTS Engine will try to match the requested locale as closely as possible, but the applied locale may be completely wrong, depending on how lenient the engine provider is in their code and their response.
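A small defensive check along those lines (API 21+, assuming tts has finished initializing; the requested locale is just an example) might look like this:
Locale requested = Locale.GERMANY; // example
int result = tts.setLanguage(requested);
if (result == TextToSpeech.LANG_MISSING_DATA || result == TextToSpeech.LANG_NOT_SUPPORTED) {
    // fall back or prompt the user to install voice data
} else if (Build.VERSION.SDK_INT >= 21 && tts.getVoice() != null) {
    Locale applied = tts.getVoice().getLocale();
    // Compare language codes only; the engine may report "deu_DEU" style variants
    boolean closeEnough = applied.getLanguage().startsWith(requested.getLanguage());
}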
After creating a TextToSpeech object, you should configure it (or check its available state/values) in TextToSpeech.OnInitListener's onInit() callback. You will get reliable information about your TextToSpeech object there.
Check my answer here:
https://stackoverflow.com/a/65620221/7835969
I am writing an app which will run on tablets. The tablet will be connected to an ACR1222L NFC reader.
I am using their Android library to interact with the reader. I can detect the USB reader and also read the reader's name.
But I am struggling to read data from the NFC tag. In fact, I have no clue where to start or which classes/methods to use.
Is there anyone who has already worked with the ACR1222L and its Android library?
Some guidelines, sample code, or a tutorial would save my life.
EDIT:
Well, I got a little smarter now; I can read the UID. This is how to do it:
@Override
protected void onCreate(Bundle savedInstanceState) {
    // ............... your code
    mReader = new Reader(mManager);
    mReader.setOnStateChangeListener(new OnStateChangeListener() {
        @Override
        public void onStateChange(int slotNum, int prevState, int currState) {
            // This APDU command requests the card UID
            byte[] command = {(byte) 0xFF, (byte) 0xCA, 0x00, 0x00, 0x00};
            byte[] response = new byte[300];
            int responseLength;
            if (currState == Reader.CARD_PRESENT) {
                try {
                    mReader.power(slotNum, Reader.CARD_WARM_RESET);
                    mReader.setProtocol(slotNum, Reader.PROTOCOL_T0 | Reader.PROTOCOL_T1);
                    responseLength = mReader.transmit(slotNum, command, command.length, response, response.length);
                    // Here I have the card UID if I send the proper command
                    responsedata = NfcUtils.convertBinToASCII(response);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    });
}
But I am still struggling to read the payload from the tag. I have also looked into the nfctools library, but I don't know where to start. It would be great if anyone could guide me through the library.
Yes, this is possible - and it's much the same as working with the ACR122, as the API is almost identical.
I've developed a (commercially available) library which probably does most of what you're looking for, or can serve as a starting point for your own implementation.
I am trying to add a few commands to the default Android VoiceDialer app. It has commands like OPEN, DIAL, CALL, REDIAL, etc., and I want to add, let's say, FIND. I have downloaded the source code from here and compiled it in Eclipse. The application sets up a grammar for the arguments of these commands; for example, it stores the names and phone numbers of the people in the contact list so it can generate intents when their names are recognized in a "CALL JOHN" voice command. For CALL in this command, it simply compares the first word of the resulting recognized string to "CALL".
I added "FIND" as an extra else-if condition in the onRecognitionSuccess() function, as shown below:
public class CommandRecognizerEngine extends RecognizerEngine
{
    ............
    protected void onRecognitionSuccess(RecognizerClient recognizerClient) throws InterruptedException
    {
        .....................
        if ("DIAL".equalsIgnoreCase(commands[0]))
        {
            Uri uri = Uri.fromParts("tel", commands[1], null);
            String num = formatNumber(commands[1]);
            if (num != null)
            {
                addCallIntent(intents, uri, literal.split(" ")[0].trim() + " " + num, "", 0);
            }
        }
        ................
        else if ("FIND".equalsIgnoreCase(commands[0]))
        {
            if (Config.LOGD)
                Log.d(TAG, "FIND detected...");
        }
    } // end onRecognitionSuccess
} // end CommandRecognizerEngine
But my app can't recognize it. Does anyone know how the recognizer detects commands like OPEN or CALL, or can you refer me to the appropriate documentation?
Thanks.
As it has been over a year, I doubt you need this answer anymore. However, some other people might find this through Google, as I did.
Right now, the best way to apply grammars to speech recognition on Android is to set the number of results higher, and then filter the results based on your grammar. It is not perfect, as the word recognized may not have passed a threshold to be included in the list, but it does greatly improve the accuracy of all speech recognition applications where the types of things you can say are somewhat limited.
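As a rough sketch of that approach (the command keywords here are hypothetical placeholders for your own grammar): ask the recognizer for more alternatives, then keep the first alternative whose leading word is one of your known commands.
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Locale;
import java.util.Set;
import android.content.Intent;
import android.speech.RecognizerIntent;

public class GrammarFilter {

    // Hypothetical grammar: the leading keywords your app understands.
    private static final Set<String> COMMANDS =
            new HashSet<>(Arrays.asList("open", "dial", "call", "redial", "find"));

    /** Build an intent that asks the recognizer for several alternatives. */
    public static Intent buildRecognizerIntent() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 10);
        return intent;
    }

    /** Return the first alternative whose leading word is a known command, or null. */
    public static String matchCommand(List<String> candidates) {
        if (candidates == null) return null;
        for (String candidate : candidates) {
            String firstWord = candidate.trim().toLowerCase(Locale.US).split("\\s+")[0];
            if (COMMANDS.contains(firstWord)) {
                return candidate;
            }
        }
        return null; // nothing matched the grammar
    }
}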