popup notifications for google glass - android

How do I make custom popup notifications for Google Glass the way they are shown in the Google Glass commercial? I want my service to run in the background and send timely popups only when necessary.
Link for reference
http://www.mobile88.com.sg/news/read.asp?file=/2012/5/11/20120511113022&phone=Google_Glass_augmented_reality__futuristic_concept
Is there a way to do this, or should I just use Toast notifications or invoke activities through intents?

The "popup" design from that conceptual video was tested by the Glass team before it was released to the public and found to be overly distracting. It evolved into the concept of timeline notifications which are not as invasive, but which still act as timely alerts only when appropriate.
These timeline notifications can generally be done in one of two ways, depending on your exact use case (both approaches are sketched below):
Your app or webapp can use the Mirror API to insert a timeline item at the appropriate time with a notification sound. Users will hear the sound and, if they are in a position to do so, can wake Glass to see the card.
Your GDK app can create a LiveCard and publish it with LiveCard.PublishMode.REVEAL, which will automatically display the card right after you publish it.
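For the Mirror API route, a minimal sketch using the Mirror API Java client might look like this. It assumes you already have an authorized Mirror service instance; the credential and service setup is omitted, and the class name here is just for illustration:

import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.NotificationConfig;
import com.google.api.services.mirror.model.TimelineItem;
import java.io.IOException;

public class TimelineNotifier {
    // Inserts a timeline card that plays the notification sound on Glass.
    public static void insertAlert(Mirror mirror, String message) throws IOException {
        TimelineItem item = new TimelineItem()
                .setText(message)
                // The DEFAULT notification level is what triggers the audible chime.
                .setNotification(new NotificationConfig().setLevel("DEFAULT"));
        mirror.timeline().insert(item).execute();
    }
}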
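For the GDK route, here is a sketch of a Service that publishes a LiveCard with REVEAL. The layout resource and menu activity names are hypothetical stand-ins:

import android.app.PendingIntent;
import android.app.Service;
import android.content.Intent;
import android.os.IBinder;
import android.widget.RemoteViews;
import com.google.android.glass.timeline.LiveCard;
import com.google.android.glass.timeline.LiveCard.PublishMode;

public class AlertService extends Service {
    private LiveCard liveCard;

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        if (liveCard == null) {
            liveCard = new LiveCard(this, "timely_alert");  // the tag identifies this card
            liveCard.setViews(new RemoteViews(getPackageName(), R.layout.alert_card)); // hypothetical layout
            Intent menuIntent = new Intent(this, AlertMenuActivity.class);             // hypothetical menu activity
            liveCard.setAction(PendingIntent.getActivity(this, 0, menuIntent, 0));
            // REVEAL displays the card immediately instead of publishing it silently.
            liveCard.publish(PublishMode.REVEAL);
        }
        return START_STICKY;
    }

    @Override
    public void onDestroy() {
        if (liveCard != null && liveCard.isPublished()) {
            liveCard.unpublish();
        }
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}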

Related

Can we handle voice commands from an Assistant widget?

I'm able to integrate Android widgets with Google Assistant, and I want to add some voice command experience.
For example, take the CREATE_CALL intent: if the user tries to call Alice by saying "call Alice with some app", and there are two Alices in my app, is it possible for me to respond with a widget showing both Alices, ask the user by voice, and let the user choose which one to actually call, all by voice? Can this be done with the SpeechRecognizer API?
Broadly speaking, App Actions do not have a voice conversation experience. There are some tricks you can pull that might head in that direction, but they are largely outside of the App Action Widget experience itself.
Can I respond with a widget showing that there are multiple matches?
Yes, you can send back a Control Widget that might allow them to choose which user they mean.
Can they speak which user?
Probably not in the way you're thinking. To use your example, they can re-invoke the CREATE_CALL BII using any of its phrases, but you can't prompt them with "Who did you mean, exactly?" and have them just say the name.
Can I use the SpeechRecognizer API?
Not as part of a widget.
Widgets get embedded in the conversation with the Assistant.
In theory (and this is on my list to eventually test and figure out), you should be able to deep link to an Android Intent in cases such as this and open a view. While there, you could use SpeechRecognizer or just open the microphone to send audio somewhere. But this isn't done using the Widget itself.
In this scenario, SpeechRecognizer just does the Speech To Text (STT) or Automatic Speech Recognition (ASR) part of the processing. To actually match this up to phrases to determine an Intent, you would need a Natural Language Understanding (NLU) module such as Dialogflow. (But you may not need the SpeechRecognizer in that particular case, since Dialogflow can also take an audio stream to do the ASR part for you.)
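As a rough illustration of that last point, here is a minimal sketch of the SpeechRecognizer part, assuming you have deep-linked into an Activity of your own and the RECORD_AUDIO permission is already granted. The activity name is hypothetical, and the NLU hand-off is only indicated in a comment:

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import java.util.ArrayList;

public class DisambiguationActivity extends Activity {
    private SpeechRecognizer recognizer;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        recognizer = SpeechRecognizer.createSpeechRecognizer(this);
        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> matches =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                // Hand the transcript to an NLU (e.g. Dialogflow) to decide which
                // "Alice" the user meant -- SpeechRecognizer only gives you text.
            }
            @Override public void onError(int error) { /* retry or fall back to touch */ }
            // The remaining callbacks are no-ops in this sketch.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onPartialResults(Bundle partialResults) {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });

        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        recognizer.startListening(intent);
    }

    @Override
    protected void onDestroy() {
        recognizer.destroy();  // release the microphone and service connection
        super.onDestroy();
    }
}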

What is this persistent in-call/in-progress bar called when a mobile app is backgrounded (iOS and Android)?

At least for iOS, apps like Spotify, Hangouts, and Google Maps have a way of showing that they're still active when in the background (when you temporarily leave them to check a different app, etc.). This typically appears as a thicker status bar in iOS.
What is this bar called, and is it possible to implement for Android? I have a video chat app (in both Google Play and the App Store) that uses TokBox (essentially WebRTC), and it already renders this bar on iOS when backgrounded, but not on Android.
The consistent way to indicate an ongoing phone call on Android is a notification; the Android phone app itself uses one in this scenario.
To prevent the user from dismissing this notification, you can use FLAG_ONGOING_EVENT or FLAG_NO_CLEAR; see https://developer.android.com/reference/android/app/Notification#FLAG_ONGOING_EVENT
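A minimal sketch of such a non-dismissable notification follows. The icon resource and in-call activity are hypothetical, and NotificationCompat's setOngoing(true) is what applies FLAG_ONGOING_EVENT under the hood:

import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.app.PendingIntent;
import android.content.Context;
import android.content.Intent;
import android.os.Build;
import androidx.core.app.NotificationCompat;

public class CallNotifications {
    public static void showOngoingCall(Context context) {
        NotificationManager nm =
                (NotificationManager) context.getSystemService(Context.NOTIFICATION_SERVICE);
        // A notification channel is required on Android 8.0+.
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            nm.createNotificationChannel(
                    new NotificationChannel("calls", "Calls", NotificationManager.IMPORTANCE_LOW));
        }
        Intent returnIntent = new Intent(context, CallActivity.class); // hypothetical in-call screen
        NotificationCompat.Builder builder = new NotificationCompat.Builder(context, "calls")
                .setSmallIcon(R.drawable.ic_call)          // hypothetical icon resource
                .setContentTitle("Call in progress")
                .setContentIntent(PendingIntent.getActivity(
                        context, 0, returnIntent, PendingIntent.FLAG_IMMUTABLE))
                .setOngoing(true);  // applies FLAG_ONGOING_EVENT, so it can't be swiped away
        nm.notify(1, builder.build());
    }
}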
On iOS, you might want to set a specific Audio Mode and Audio Category to get the right status bar look.
See https://developer.apple.com/documentation/audiotoolbox/audio_session_services/1618405-audio_session_modes
and https://developer.apple.com/documentation/audiotoolbox/audio_session_services/1618427-audio_session_categories
Further, the Background Audio capability is required; otherwise, your app's audio session will be stopped after the app moves to the background.
To achieve an even more consistent user experience, you might want to integrate iOS's CallKit or Android's ConnectionService; see
- https://developer.apple.com/documentation/callkit
- https://developer.android.com/reference/android/telecom/ConnectionService
As noted above, this can be implemented on Android by posting a notification with the FLAG_NO_CLEAR flag set.

Google Home to send card to the Google Home app

I made a small Google Home app, and my service returns a response with a SimpleMessage + Card.
It works perfectly when running the app in the console.actions.google.com simulator: I get the card, all good.
But when I test by talking to the Google Home, it only sends the text; there is no trace of the cards anywhere.
However, if I talk to the Google Home app on my phone, it does send the card correctly.
Is there something to enable to be able to receive cards sent by Google Home? Is it possible at all?
Cards sent while the user is talking through Google Home can't be made visible, but there are several techniques that you, as a developer, can use if cards are necessary.
First of all, good design suggests that cards should be used to supplement the conversation, not be the focus of it. Make sure the voice conversation itself is primary, and use visual elements only when necessary. If your Action is overly visual, it may be better suited as a mobile or web app rather than an Action.
If your Action requires a screen, you can set this in the Actions Console when you configure it. This will, however, prevent it from being used on a Google Home device.
If you don't want to go this route, and want to allow it to be used on a smart speaker, but still take advantage of a screen where it is available, you have a few options.
The first is that you can just send the cards. As you've discovered, they won't show up, but they won't cause any problems either.
If you want to act slightly differently when a screen is available, you can check the surface capabilities of the user's Assistant at that moment. If you're using the node.js library, you can use a call such as
let hasScreen = app.hasSurfaceCapability(app.SurfaceCapabilities.SCREEN_OUTPUT);
to determine if a screen is available and take action based on the hasScreen variable. If you're using JSON, you need to check the array at surface.capabilities or data.google.surface.capabilities to see if "actions.capability.SCREEN_OUTPUT" is one of the available capabilities.
If not, and you get to a point in the conversation where you feel you need to send a visual result, you can also request to continue the conversation on a device that does support screen output.
First, you'll need to make sure that they have a screen available. You'll do this with the node.js library with something like
const screenAvailable = app.hasAvailableSurfaceCapabilities(app.SurfaceCapabilities.SCREEN_OUTPUT);
or by checking the availableSurfaces.capabilities or data.google.availableSurfaces.capabilities parameters in JSON.
If one is available, you can request to continue the conversation there with something like
app.askForNewSurface(context, notif, [app.SurfaceCapabilities.SCREEN_OUTPUT]);
where context is the message that will be said on the Google Home, and notif is the notification that will appear on their mobile device (for example) to let them continue the conversation. If you're using JSON, you'll need to use an actions.intent.NEW_SURFACE next intent.
Either way, the user will get a notification on their mobile device. Selecting the notification will start up the Assistant on that device and your Action will be called again with parameters that let you check if they are on the new surface. If so - you can send the card.

Android TurnBased Multiplayer not calling UpdateMatchResult

I am writing a multiplayer game. If I start the app from its launcher icon and another player takes a turn, the game updates correctly via the UpdateMatchResult callback. However, suppose I don't have the app open and Google Play Games notifies me that a turn has been taken. If I respond to that notification, let Google Play Games take me to the match in my app, take a turn, and then wait for another player to move, I keep getting notifications even though I am actually in the app.
What could be causing this? Ideally, I'd like to see the update in the game, as I do when I go in via the normal icon.
This was a simple coding issue. I have solved this now.

Voice recognition APIs, will Google Voice do this?

I've got an idea for an Android app: I want to be able to say commands and have the application listen for them and perform some action.
For example, I want my app to sit idle and listen for my voice; when it hears me say "start", the app will start doing something until I say "stop".
The idea is to lay the phone down and not have to physically touch it in order to control my app.
Would this be possible with any current APIs? If so which ones should I look into?
You can take a look at Google Voice Actions:
http://www.google.com/mobile/voice-actions/
Alternatively, if you want to customise your application, you can use the Google voice recognition service and write an activity that invokes it and returns the result to you.
Check out the link below for a sample application:
http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html
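For reference, here is a minimal sketch along the lines of that sample, using the stock RecognizerIntent. The "start"/"stop" keyword matching is a simplistic assumption for this use case, and the activity name is hypothetical:

import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;
import java.util.ArrayList;

public class VoiceCommandActivity extends Activity {
    private static final int REQUEST_SPEECH = 100;

    // Launches the system speech-recognition dialog.
    private void promptForCommand() {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Say start or stop");
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && matches.contains("start")) {
                // Begin doing work until a later "stop" command arrives.
            }
        }
    }
}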
