Implementing Android Auto voice command support - android

I am working on implementing voice commands for Android Auto using media player callback methods. I am having trouble getting Auto to recognize that I am asking to play a title in my application: “Play [x] on MyApp”. I understand that it takes a few days after the app is published for this command to work, but I should still be able to say “Play [x]” while the media session is already running, and it should use the onPrepareFromSearch and onPlayFromSearch methods to search for and play content.
When I say “Play [x] on MyApp” I get the response “I looked for [x] on MyApp on Google Play Music but it either isn’t available or it can’t be played right now”.
When I say “Play [x]” while my media service is running it will usually redirect me to Google Play Music.
I am able to pause, resume, skip forward and skip backward using voice commands, and I see those being logged, but when I try to perform a voice search, neither onPrepareFromSearch nor onPlayFromSearch is called.
Things I have tried:
Added an <intent-filter> for MEDIA_PLAY_FROM_SEARCH to an activity in my application to mark it as available for media search
Implemented onPrepareFromSearch and onPlayFromSearch in my MediaSessionCompat.Callback
Added ACTION_PLAY_FROM_SEARCH and ACTION_PREPARE_FROM_SEARCH as supported actions.
Set the media session flags FLAG_HANDLES_MEDIA_BUTTONS and FLAG_HANDLES_TRANSPORT_CONTROLS
Added support for Android Auto as per the documentation here: https://developer.android.com/training/cars/media/auto
Is there a step I am missing in order to get this to work?
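For reference, the wiring described above lives in my media service and looks roughly like this (a simplified sketch; prepareSearchResults and playFirstSearchResult are placeholders for my own code):

MediaSessionCompat session = new MediaSessionCompat(this, "MyAppSession");
session.setFlags(MediaSessionCompat.FLAG_HANDLES_MEDIA_BUTTONS
        | MediaSessionCompat.FLAG_HANDLES_TRANSPORT_CONTROLS);

session.setCallback(new MediaSessionCompat.Callback() {
    @Override
    public void onPrepareFromSearch(String query, Bundle extras) {
        // Expected to be called for voice searches, but never is.
        prepareSearchResults(query);   // placeholder for my own lookup
    }

    @Override
    public void onPlayFromSearch(String query, Bundle extras) {
        // Never called either; only pause/resume/skip callbacks fire.
        playFirstSearchResult(query);  // placeholder for my own playback
    }
});

// Advertise the search actions alongside the transport controls.
session.setPlaybackState(new PlaybackStateCompat.Builder()
        .setActions(PlaybackStateCompat.ACTION_PLAY
                | PlaybackStateCompat.ACTION_PAUSE
                | PlaybackStateCompat.ACTION_SKIP_TO_NEXT
                | PlaybackStateCompat.ACTION_SKIP_TO_PREVIOUS
                | PlaybackStateCompat.ACTION_PLAY_FROM_SEARCH
                | PlaybackStateCompat.ACTION_PREPARE_FROM_SEARCH)
        .build());
session.setActive(true);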

Related

Playing sounds sequentially on android's google chrome (given the new restrictions on playing sound)

I have a small app that plays sequential sounds (a teaching app playing the syllables of a word).
This could be accomplished by firing an event right after each sound stopped playing. Something like:
var sounds = new Array(new Audio("1.mp3"), new Audio("2.mp3"));
var i = -1;
playSnd();

function playSnd() {
    i++;
    if (i == sounds.length) return;
    sounds[i].addEventListener('ended', playSnd);
    sounds[i].play();
}
(source)
However, Android Chrome has now implemented new restrictions on how sound can be played: all sound playback must be initiated by a user action.
So, when I run code very similar to the above, the first sound plays, and then I get
Uncaught (in promise) DOMException: play() can only be initiated by a user gesture.
How can a sequence of sounds, determined at run time, be played on Android's Chrome?
To start with, Google Chrome on Android does not allow pages to play HTML audio without an explicit action by the user. This is different from how most stock browsers handle it.
The reason, as the Chromium project puts it, is that autoplay is not honored on Android because it costs the user data.
You can find more details on that here.
Apart from the wasted bandwidth, this also makes some sense: mobile devices are used in public and at home, where unsolicited sound from random websites could be a nuisance.
However, in later versions this idea was overruled and Chrome on Android started allowing autoplay of HTML audio and video. After another round of reviews and discussions, the feature was reverted, again making a user action mandatory to start HTML audio and video in Chrome for Android.
Here is something more I found on the topic. As it says, the stated reason was that "We're going to gather some data about how users react to autoplaying videos in order to decide whether to keep this restriction", and hence playback without a user action was reverted.
You can also find more about the blocking of autoplay of audio and video on Forbes and The Verge.
However, here is something you can try that should help you achieve what you intend. Copy one of the following URLs and paste it into Chrome for Android; it takes you to the flag that, by default, prevents HTML audio and video from playing without user interaction:
chrome://flags/#disable-gesture-requirement-for-media-playback
OR
about:flags/#disable-gesture-requirement-for-media-playback
If the above procedure doesn't work for you, you can do this:
Go into chrome://flags or about:flags (which will redirect you to chrome://flags) and enable the "Disable gesture requirement for media playback" option (which is the same flag the URLs above point to).

Retrieving the name of the application that took audio focus

I can't seem to find anything related to finding out what application got audio focus. I can correctly determine from my application what type of focus change it was, but not from any other application. Is there any way to determine what application received focus?
"What am I wanting to do?"
I have managed to record internal sound whether it be music or voice. If I am currently recording audio no matter the source, I want to determine what application took the focus over to determine what my application need's to do next.
Currently I am using the AudioManager.OnAudioFocusChangeListener for my application to stop recording internal sounds once the focus changes, but I want the application's name that gained the focus.
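For reference, what I have now looks roughly like this (a simplified sketch; stopRecording is a placeholder for my own code):

// Give up recording when audio focus changes; I can tell *that* focus was
// lost, but not *who* took it.
AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);

AudioManager.OnAudioFocusChangeListener listener =
        new AudioManager.OnAudioFocusChangeListener() {
            @Override
            public void onAudioFocusChange(int focusChange) {
                if (focusChange == AudioManager.AUDIOFOCUS_LOSS
                        || focusChange == AudioManager.AUDIOFOCUS_LOSS_TRANSIENT) {
                    stopRecording();  // placeholder for my own code
                }
            }
        };

am.requestAudioFocus(listener, AudioManager.STREAM_MUSIC,
        AudioManager.AUDIOFOCUS_GAIN);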
Short Answer: There's no good solution... and Android probably intended it this way.
Explanation:
Looking at the source code, AudioManager has no APIs (not even hidden ones) for checking who has audio focus. AudioManager wraps calls to AudioService, which holds the real audio state. The API that AudioService exposes through its Stub when AudioManager binds to it also has no method for querying the current audio focus holder. Thus, even through reflection or system-level permissions, you won't be able to get the information you want.
If you're curious how focus changes are tracked, you can look at MediaFocusControl, an instance of which is a member variable of AudioService here.
Untested Hacky Heuristic:
You might be able to get some useful information by looking at UsageStats timestamps. Then, once you have the apps that were used within, say, ~500 ms of you losing audio focus, you can cross-check them against apps holding audio permissions. You can follow this post to get the permissions of any installed app.
This is clearly a heuristic and could require some tuning. It also requires the user to grant your app permissions to get access to the usage stats. Mileage may vary.
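A rough, untested sketch of that heuristic (it assumes the user has granted usage access, focusLossTimeMillis is the timestamp you recorded when you lost focus, and RECORD_AUDIO is just one example of an audio-related permission to cross-check):

// Candidates: apps used around the moment we lost audio focus.
UsageStatsManager usm =
        (UsageStatsManager) context.getSystemService(Context.USAGE_STATS_SERVICE);
long now = System.currentTimeMillis();
List<UsageStats> stats = usm.queryUsageStats(
        UsageStatsManager.INTERVAL_BEST, now - 60000, now);

PackageManager pm = context.getPackageManager();
for (UsageStats s : stats) {
    // Only keep apps used within ~500 ms of the focus loss.
    if (Math.abs(s.getLastTimeUsed() - focusLossTimeMillis) > 500) continue;

    // Cross-check: does the candidate hold an audio-related permission?
    if (pm.checkPermission(Manifest.permission.RECORD_AUDIO, s.getPackageName())
            == PackageManager.PERMISSION_GRANTED) {
        Log.d("FocusHeuristic", "Possible focus taker: " + s.getPackageName());
    }
}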
Look at the MediaController class (new in Lollipop, available in the support library for older versions).
There are these two methods that look interesting:
https://developer.android.com/reference/android/media/session/MediaController.html#getPackageName()
https://developer.android.com/reference/android/media/session/MediaController.html#getSessionActivity()
getPackageName supposedly returns the current session's package name:
http://androidxref.com/5.1.1_r6/xref/frameworks/base/media/java/android/media/session/MediaController.java#397
getSessionActivity gives you a PendingIntent with an activity to start (if one is supplied), where you could get the package as well.
Used together with your audio listener and a broadcast receiver for the phone state (to detect whether the phone is currently ringing), you might be able to use this to get more fine-grained detection than you currently have. As Trevor Carothers pointed out above, there is no general way to get the app that holds audio focus.
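For example, on Lollipop and above you can combine this with MediaSessionManager.getActiveSessions() to list the packages that currently own a media session; a rough sketch (it requires an enabled notification listener, and MyNotificationListener is a placeholder for your own NotificationListenerService):

// List packages that currently have an active media session.
// The user must grant notification access to MyNotificationListener first.
MediaSessionManager msm =
        (MediaSessionManager) context.getSystemService(Context.MEDIA_SESSION_SERVICE);
ComponentName listener = new ComponentName(context, MyNotificationListener.class);
List<MediaController> controllers = msm.getActiveSessions(listener);
for (MediaController c : controllers) {
    Log.d("Sessions", "Active session from: " + c.getPackageName());
}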
You can use dumpsys audio to find out who is using audio focus. You can also look at the output of dumpsys media_session.
And if you want to find out who is playing music, you can use dumpsys media.audio_flinger. I switched to this command myself.

Cast Receiver App does not show subtitles

According to the release notes (of July 8), the docs for the Sender, and the updated answer to this question, the Styled Media Receiver of Google Cast now supports Closed Captioning / subtitle tracks.
However, when I tell the Default or the Styled Media Receiver to show a text track, nothing happens. It does not even load the .vtt from the server, as I can see in the logs.
I can tell the receiver app got the text tracks just fine, but even using the Android example app, the subtitles never show up. According to all the logs, they are being sent and the receiver app is told to show them - but they never appear, they are never even loaded.
The MediaTrack is being created as follows:
new MediaTrack.Builder(2, MediaTrack.TYPE_TEXT)
        .setName("Deutsch")
        .setSubtype(MediaTrack.SUBTYPE_CAPTIONS)
        .setContentId("https://example.com/video/caption_de.vtt")
        .setContentType("text/vtt")
        .setLanguage("de")
        .build();
I have checked thrice that the file exists and is being loaded with the type text/vtt. But that does not matter, as the file is never even requested by the player. I have tried both MediaTrack.SUBTYPE_CAPTIONS and MediaTrack.SUBTYPE_SUBTITLES.
So I need to know, is this claimed support of CC in the Styled Media Receiver simply a lie? Or is there some undocumented trick required to make it possible?
If there is still a custom receiver required, I would like to know how to convert the example player to support subtitles, as it doesn't seem to support them either.
First, I suggest you change your wording in future posts (re: "..is simply a lie.."); that is not appropriate at all. Secondly, it works, and you can test that with the CastVideos-android app (or the iOS variant of it, for that matter); the first three videos have CC. Lastly, we have documentation on that subject on our documentation site (https://developers.google.com/cast/docs/android_sender, under "Using the Tracks API").
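For what it's worth, the sender-side flow under "Using the Tracks API" boils down to attaching the track to the MediaInfo you load and then activating it by id. A rough sketch against the Cast v2 sender API, assuming mApiClient and mRemoteMediaPlayer are your already-connected Cast objects and the URLs are placeholders:

// Attach the text track to the MediaInfo, load it, then activate the track.
MediaTrack germanSubs = new MediaTrack.Builder(2, MediaTrack.TYPE_TEXT)
        .setName("Deutsch")
        .setSubtype(MediaTrack.SUBTYPE_SUBTITLES)
        .setContentId("https://example.com/video/caption_de.vtt")
        .setContentType("text/vtt")
        .setLanguage("de")
        .build();

MediaInfo mediaInfo = new MediaInfo.Builder("https://example.com/video/video.mp4")
        .setStreamType(MediaInfo.STREAM_TYPE_BUFFERED)
        .setContentType("video/mp4")
        .setMediaTracks(Collections.singletonList(germanSubs))
        .build();

mRemoteMediaPlayer.load(mApiClient, mediaInfo, true).setResultCallback(
        new ResultCallback<RemoteMediaPlayer.MediaChannelResult>() {
            @Override
            public void onResult(RemoteMediaPlayer.MediaChannelResult result) {
                if (result.getStatus().isSuccess()) {
                    // Activate the subtitle track by the id given to the Builder (2).
                    mRemoteMediaPlayer.setActiveMediaTracks(mApiClient, new long[] {2});
                }
            }
        });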

Google ChromeCast Playlist from Android Device

I'm using a sample app for the RemotePlaybackClient from #commonsware to play a video from a URL on a Google Chromecast dongle. The app works like a charm, but I would like to implement a playlist. Any idea how to send a playlist to the Chromecast from an Android device?
As usual, I don't need code, just links, tutorials, etc. Thanks.
Are you using a custom receiver?
If so, you can pass JSON with your playlist to that receiver and manage the list together with the playback state.
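For example, with the Cast v2 sender API you could ship the playlist to a custom receiver over a custom namespace; a rough sketch (the namespace, URLs and message format are placeholders that your receiver would have to define):

try {
    // Build the playlist message; the schema is whatever your receiver expects.
    JSONObject playlist = new JSONObject();
    JSONArray items = new JSONArray();
    items.put("https://example.com/video1.mp4");   // placeholder URLs
    items.put("https://example.com/video2.mp4");
    playlist.put("items", items);
    playlist.put("startIndex", 0);

    // mApiClient is your connected GoogleApiClient for the Cast session.
    Cast.CastApi.sendMessage(mApiClient,
            "urn:x-cast:com.example.playlist",     // your receiver's custom namespace
            playlist.toString());
} catch (JSONException e) {
    Log.e("Playlist", "Could not build playlist message", e);
}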
You might try looking at the "mediaList" object here. That's your playlist object.
This is a totally different project (not the mediaRouter API but CCL instead) that I used because I wanted to implement a playlist and did NOT want to take on my own receiver app. I wanted to see whether the default receiver could collaborate with an existing GitHub sender sample, altered slightly to manipulate a playlist implemented in the "mediaList" AND to send appropriate, successive PLAY instructions to the default receiver app when that app's state, as relayed in normal "consumer" message traffic, indicated state=ready.
D/ccl_VideoCastManager(31057): onApplicationStatusChanged() reached: Ready To Cast
So, when the default receiver fires the "ready" message, the sender app can just call getNext to return an entry from "mediaList" and then send a "play(mediaInfo.entry)" to the default receiver.
onApplicationStatusChanged() is the interface used by the CCL to communicate and sync player state between the local and remote players. When the default remote state changes to "ready to cast", you can use "VideoCastManager" and its base class to select the next MediaInfo entry and format a message for the remote to play it...
this.startCastControllerActivity(this.mContext, nextMediaInfo, 0, true);
The code above, from the sender/CCL base, tells the receiver to play the item that the sender has determined is next in the list.
Note: I was advised to implement the playlist in a custom receiver app that I would write. I'm not that ambitious and found a very simple hack on the sender/CCL classes that was reliable enough for me.
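If I recall the CCL API correctly, the hack boils down to something like the sketch below (mContext and getNextFromMediaList are placeholders for your own context and queue):

// Very rough sketch of the sender-side hack described above.
final VideoCastManager castManager = VideoCastManager.getInstance();
castManager.addVideoCastConsumer(new VideoCastConsumerImpl() {
    @Override
    public void onApplicationStatusChanged(String appStatus) {
        if ("Ready To Cast".equals(appStatus)) {
            MediaInfo next = getNextFromMediaList();  // placeholder for your queue
            if (next != null) {
                // Hand the next entry to the default receiver.
                castManager.startVideoCastControllerActivity(mContext, next, 0, true);
            }
        }
    }
});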

Recorded sound file (à la Google Now, Google Keep) - RecognizerIntent/Listener

I have been developing an application that uses the RecognizerIntent to get voice input. However, since Jelly Bean was launched, I have not been able to get the actual sound file from my voice input.
In the RecognitionListener (http://developer.android.com/reference/android/speech/RecognitionListener.html) there is a method called onBufferReceived. However, there is no guarantee that this method will be called, and when I implemented it, it never got called. Is there any way to force this method to execute, or what is the best-practice approach to get hold of the sound file that the recognizerIntent analyzes?
It should be possible, since Google Now can do it with the voice command "note to self", and Google Keep's voice notes do the same.
Thanks
I don't think there is a way to force it. It clearly depends on the recognition service implementation. If Google decides not to call onBufferReceived, then there is no way to get the actual audio data that is used. Note that the mentioned Google apps don't use the (public) Intent/Service API to access speech recognition but seem to use a private API within their apps (the speech recognition might be bundled within them).
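For completeness, the only hook the public API offers is onBufferReceived on a RecognitionListener attached to a SpeechRecognizer; a minimal sketch, with the caveat that whether the buffers ever arrive is entirely up to the recognition service:

// Wire up onBufferReceived and hope the service delivers audio; in practice
// Google's recognizer typically never calls it.
final ByteArrayOutputStream audioBuffer = new ByteArrayOutputStream();

SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
recognizer.setRecognitionListener(new RecognitionListener() {
    @Override public void onBufferReceived(byte[] buffer) {
        // Would contain raw audio if the service chose to deliver it.
        audioBuffer.write(buffer, 0, buffer.length);
    }
    // Remaining callbacks are required by the interface; no-ops here.
    @Override public void onReadyForSpeech(Bundle params) {}
    @Override public void onBeginningOfSpeech() {}
    @Override public void onRmsChanged(float rmsdB) {}
    @Override public void onEndOfSpeech() {}
    @Override public void onError(int error) {}
    @Override public void onResults(Bundle results) {}
    @Override public void onPartialResults(Bundle partialResults) {}
    @Override public void onEvent(int eventType, Bundle params) {}
});

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
recognizer.startListening(intent);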
