Using Google speech recognition on Android Wear without showing a specific UI - android

This is my first question here on Stack Overflow, so sorry if I am doing something wrong.
I have an Android Wear app where the user selects an item from a list.
I want to enable the user to select the item via voice commands. I managed to do it as suggested in the Google documentation, but I have to implement a button to start speech recognition, and it shows up as a fullscreen activity.
I want the user to be able to see the list while speech recognition is active. Is there any solution for this?
EDIT: I might have found the solution I was looking for: the SpeechRecognizer class, which seems to do just what I want. I will have to dig into it and will update this post if it is the solution.
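
For context, the RecognizerIntent flow from the documentation, which is what takes over the screen, looks roughly like this. A minimal sketch: the WearListActivity name and request code are made up, and the list-matching step is a placeholder.

import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;

import java.util.ArrayList;

public class WearListActivity extends Activity {

    private static final int SPEECH_REQUEST_CODE = 0; // arbitrary request code

    // Launches the system recognizer as a separate, fullscreen activity,
    // which is exactly what hides the list underneath.
    private void startSpeechRecognition() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        startActivityForResult(intent, SPEECH_REQUEST_CODE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == SPEECH_REQUEST_CODE && resultCode == RESULT_OK) {
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            String spokenText = results.get(0);
            // placeholder: match spokenText against the list items here
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}

The SpeechRecognizer route mentioned in the edit avoids launching that system activity entirely; a sketch of it appears under the "custom voice recognizer" question below.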

Related

How to use Android Speech to text as Service?

In Android I am trying to develop an application where I can open messages, make a call, etc. through voice. But I want my application to keep running in the background. How can I convert the Google speech-to-text API into a service? I will provide my work progress only if you need it, as posting it did not help me in previous questions. I have taken help from this link so far: http://www.androidhive.info/2014/07/android-speech-to-text-tutorial/ I will share my code if you want.
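
One common way to keep recognition running without a visible activity is to host a SpeechRecognizer inside a Service. This is only a minimal sketch under my own assumptions (the VoiceCommandService name is made up), not the tutorial's code; it still needs the RECORD_AUDIO permission and a <service> entry in the manifest.

import android.app.Service;
import android.content.Intent;
import android.os.Bundle;
import android.os.IBinder;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

import java.util.ArrayList;

public class VoiceCommandService extends Service {

    private SpeechRecognizer recognizer;

    @Override
    public void onCreate() {
        super.onCreate();
        // SpeechRecognizer must be used from the main thread;
        // a Service's lifecycle callbacks already run there.
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> matches = results.getStringArrayList(
                        SpeechRecognizer.RESULTS_RECOGNITION);
                // dispatch "open messages", "call ...", etc. from matches here
            }
            @Override public void onError(int error) {
                // optionally restart listening on recoverable errors
            }
            // the remaining callbacks are not needed for this sketch
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        Intent listen = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        listen.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(listen);
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        recognizer.destroy();
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}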

Android create custom voice recognizer

I want to create a custom voice recognizer and design it the way I want.
This is my current one:
http://i.imgur.com/FhHEOZR.png?1
I want to create a custom one as in this app:
http://i.imgur.com/Alc3ggJ.png?1
I saw a few apps with a custom voice recognizer like this, such as Indigo, Dragon, AIVC, etc.
I tried the Google developer solution for SpeechRecognizer, but it didn't work for me.
Any help please?
From this question: How can I use speech recognition without the annoying dialog in android phones
Use the SpeechRecognizer interface. Your app needs to have the
RECORD_AUDIO permission, and you can then create a SpeechRecognizer,
give it a RecognitionListener and then call its startListening method.
You will get callbacks to the listener when the speech recognizer is
ready to begin listening for speech and as it receives speech and
converts it to text.
And then use whatever layout you want, and even animate it.
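Putting the quoted steps together, a minimal sketch of an Activity that shows a live transcript in its own, freely styled layout could look like this. The CustomVoiceActivity name is made up, and the bare TextView stands in for whatever designed layout you want.

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.widget.TextView;

import java.util.ArrayList;

public class CustomVoiceActivity extends Activity implements RecognitionListener {

    private SpeechRecognizer recognizer;
    private TextView transcript; // replace with your own designed layout

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        transcript = new TextView(this);
        setContentView(transcript);

        // Requires the RECORD_AUDIO permission.
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(this);

        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
        recognizer.startListening(intent);
    }

    @Override
    public void onPartialResults(Bundle partialResults) {
        // Live preview while the user is still speaking.
        ArrayList<String> texts = partialResults.getStringArrayList(
                SpeechRecognizer.RESULTS_RECOGNITION);
        if (texts != null && !texts.isEmpty()) {
            transcript.setText(texts.get(0));
        }
    }

    @Override
    public void onResults(Bundle results) {
        // Final recognized text.
        ArrayList<String> texts = results.getStringArrayList(
                SpeechRecognizer.RESULTS_RECOGNITION);
        if (texts != null && !texts.isEmpty()) {
            transcript.setText(texts.get(0));
        }
    }

    @Override
    protected void onDestroy() {
        recognizer.destroy();
        super.onDestroy();
    }

    // The remaining callbacks are not needed for this sketch.
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onBufferReceived(byte[] buffer) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onError(int error) {}
    @Override public void onEvent(int eventType, Bundle params) {}
}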
See also:
Android (RecognitionListener) live speech to text preview
and the RecognitionListener reference, which may be useful to you:
https://developer.android.com/reference/android/speech/RecognitionListener
By the way, you said in your comment that you don't fully understand: RecognitionListener is a listener that runs in the background, so there are no unnecessary dialogs. I'm trying to automatically add subtitles to the video that opens, so I'm trying to do something like what you want to do, but I haven't been able to do it yet.

Adding the "ok glass contextual voice menu" within an immersion activity

Is there a way to insert the "ok, glass" trigger into an immersion activity on Glass? I want to make the launch of my application as seamless and quick as possible. Making an immersion application seemed to be the way, but I cannot find a way to bring up the "ok, glass" footer trigger within my activity to launch my application menu so it can be navigated hands-free. Any clue as to how this works?
Note: I have a voice trigger to launch the app from the Glass home screen.
I'm not creating a card but rather just using an XML layout, as I'm changing text on the screen dynamically in response to user interaction using an AsyncTask. Any advice would be great.
Contextual voice commands are not yet supported by the platform, feel free to file a feature request in our issues tracker.
UPDATE: as mentioned in the comments and the other answer, contextual voice commands are now part of the platform for Immersion: https://developers.google.com/glass/develop/gdk/voice#contextual_voice_commands
As of XE18.1 and GDK Preview 19, contextual voice commands are available in the GDK. Documentation is available at https://developers.google.com/glass/develop/gdk/voice.
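
Following the linked documentation, the pattern looks roughly like this. A sketch: R.layout.main and R.menu.voice_menu are assumed to exist in your project, and the activity name is made up.

import android.app.Activity;
import android.os.Bundle;
import android.view.Menu;
import android.view.MenuItem;

import com.google.android.glass.view.WindowUtils;

public class VoiceImmersionActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Request the "ok glass" voice command feature before setContentView.
        getWindow().requestFeature(WindowUtils.FEATURE_VOICE_COMMANDS);
        setContentView(R.layout.main); // your dynamically updated XML layout
    }

    @Override
    public boolean onCreatePanelMenu(int featureId, Menu menu) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            getMenuInflater().inflate(R.menu.voice_menu, menu);
            return true;
        }
        return super.onCreatePanelMenu(featureId, menu);
    }

    @Override
    public boolean onMenuItemSelected(int featureId, MenuItem item) {
        if (featureId == WindowUtils.FEATURE_VOICE_COMMANDS) {
            // react to the spoken command, e.g. switch on item.getItemId()
            return true;
        }
        return super.onMenuItemSelected(featureId, item);
    }
}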

Why does Android (Jelly Bean) ignore an additional RecognizerIntent (Kõnele)?

I installed the open source Kõnele (source code) for the purpose of studying how to write and register a custom speech recognition service. As a first step, before delving deep into the source code, I tried to verify that it indeed works as I expected. So, I went to my phone's System settings > Language & input > Voice recognizer and selected Kõnele as the system's recognizer:
I then tried various applications on the phone that present the keyboard with the microphone option, expecting that when I touch the mic symbol, Kõnele will be used. Instead, however, the system always pops up Google's built-in voice search.
Why is that?
Have I missed additional settings that I need to configure (as a user) in order to make this work?
Is this a "by design" limitation of the Android OS?
Is it possible to tell Android to always use a different RecognizerIntent that isn't Google Voice Search? If so, how?
Update: I managed to find one app that seems not to ignore the additional RecognizerIntent: Google Maps:
To me that suggests that this has something to do with Android intent resolution. But then why do some apps not trigger that "Complete action using" dialog, while Google Maps does?
I think you have done everything that you can as a user, but an app that wants to use the speech recognizer is of course free to ignore your configuration. E.g. it can directly choose a particular speech recognizer implementation by constructing the recognizer with something like this:
SpeechRecognizer.createSpeechRecognizer(this,
        new ComponentName("com.google",
                "com.google.Recognizer"));
In this case, your only option is to uninstall or disable this particular implementation and hope that the app falls back to the general method:
SpeechRecognizer.createSpeechRecognizer(this);
Unfortunately, at some point Google started promoting the idea that apps directly link to the Google speech recognizer (see Add Voice Typing To Your IME). So many keyboard apps now do that (see e.g. the issue that I raised with SwiftKey), and your only option is to find one that does not...
It can also be that the app sends an intent that Kõnele does not support (the supported intents are listed in the manifest), but which would make sense to support in a speech recognition app. In this case it would be a feature request for Kõnele.
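As a quick diagnostic, you can also list every recognizer implementation installed on the device by asking the PackageManager for services that implement the RecognitionService interface. A small sketch; the class name is made up:

import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.speech.RecognitionService;
import android.util.Log;

import java.util.List;

public final class RecognizerLister {

    // Logs each installed recognition service, so you can check whether
    // Kõnele's service is visible to apps at all.
    public static void listRecognizers(Context context) {
        PackageManager pm = context.getPackageManager();
        List<ResolveInfo> services = pm.queryIntentServices(
                new Intent(RecognitionService.SERVICE_INTERFACE), 0);
        for (ResolveInfo info : services) {
            Log.d("Recognizers",
                    info.serviceInfo.packageName + "/" + info.serviceInfo.name);
        }
    }
}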

Implement QSB like suggestions in android app

I need to provide a search box in my Android app. As the user starts typing the search text, I need to show relevant suggestions. (As we see in the Google Search widget on the home screen. From the logs, com.android.quicksearchbox/.SearchActivity is started with the android.search.action.GLOBAL_SEARCH intent and it searches the following corpora: [web, apps, com.android.contacts/.activities.PeopleActivity].)
The only thing is, I need the suggestions to come from the web and my application's DB.
Any idea how to implement this? Do I need to implement my own SuggestionsProvider, or can I directly use the native implementation? If so, how?
I think I figured it out myself.
I went through the Searchable Dictionary and QuickSearchBox code in the Android source.
I need to start two tasks on a background thread: one searches for the search term in my DB, and the other searches for the same term in Google. All the results are then shown in the suggestion list.
The Google Suggest API provides suggestions as the user enters text.
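
If you do end up implementing your own SuggestionsProvider, a sketch of a ContentProvider that merges rows from your DB with rows from a web source might look like this. The queryLocalDb and queryWebSuggestions helpers are placeholders for your actual lookups, and the provider still has to be registered in the manifest and referenced from android:searchSuggestAuthority in your searchable.xml.

import android.app.SearchManager;
import android.content.ContentProvider;
import android.content.ContentValues;
import android.database.Cursor;
import android.database.MatrixCursor;
import android.net.Uri;
import android.provider.BaseColumns;

import java.util.Collections;
import java.util.List;

public class MySuggestionsProvider extends ContentProvider {

    private static final String[] COLUMNS = {
            BaseColumns._ID,
            SearchManager.SUGGEST_COLUMN_TEXT_1, // text shown in the list
            SearchManager.SUGGEST_COLUMN_QUERY,  // query sent when tapped
    };

    @Override
    public Cursor query(Uri uri, String[] projection, String selection,
                        String[] selectionArgs, String sortOrder) {
        String query = uri.getLastPathSegment();
        MatrixCursor cursor = new MatrixCursor(COLUMNS);
        int id = 0;
        for (String hit : queryLocalDb(query)) {        // rows from your DB
            cursor.addRow(new Object[]{id++, hit, hit});
        }
        for (String hit : queryWebSuggestions(query)) { // rows from the web
            cursor.addRow(new Object[]{id++, hit, hit});
        }
        return cursor;
    }

    // Placeholder: replace with a real query against your application DB.
    private List<String> queryLocalDb(String query) {
        return Collections.emptyList();
    }

    // Placeholder: replace with a (possibly cached) web suggestion lookup.
    private List<String> queryWebSuggestions(String query) {
        return Collections.emptyList();
    }

    @Override public boolean onCreate() { return true; }

    @Override public String getType(Uri uri) {
        return SearchManager.SUGGEST_MIME_TYPE;
    }

    @Override public Uri insert(Uri uri, ContentValues values) { return null; }
    @Override public int delete(Uri uri, String selection, String[] args) { return 0; }
    @Override public int update(Uri uri, ContentValues values, String selection,
                                String[] args) { return 0; }
}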
