I have an Android application that uses the Microsoft Band 2's sensors and displays the processed data.
The app works fine; I just want to add voice commands to the app via the microphone of the Band. Is that possible?
I am using Microsoft Cognitive Services, specifically the Speech Recognition Service, to get the voice command, transform it into text, and then process it. I know that this API works fine with the Android microphone, but I would like to know whether it is possible to use the Band's mic, or maybe to integrate Cortana.
Thanks.
Access to the microphone of the Band is not currently supported by Microsoft's SDK. However, the Band 2 does work with Cortana on Android. So, while I am not familiar with the capabilities of Cortana for Android, your best bet is likely to see whether there is a way to integrate with that application.
Related
I am a deaf software developer and have developed an app that converts voice to text using a speech recognition API (it requires internet access), but it does not run continuously. I want to make an app for both deaf and hard-of-hearing people, like Innocaption or Rogervoice, designed for users residing in the USA. Can you tell me where I can find technology that may help me develop a captioning phone app for Android, and maybe iOS? Thanks in advance.
I have a game programmed in Unity3D that I ultimately want to publish to iOS and Android.
Suppose the game app is installed on the phone and I want to control it through an external Bluetooth Low Energy device (for example, a heart rate sensor). What would be the best way to design the architecture so that it is cross-platform?
For example, so that it is as easy as possible to implement on both iOS and Android?
*External*                                      *Mobile*
Bluetooth Low Energy device --> Something... --> Unity
Something = what cross-platform framework would you advise me to use that can interface with Bluetooth Low Energy (and, in the future, with a database) and pass the data on to Unity3D?
Thanks a lot
Unity does NOT have a Bluetooth API for using Bluetooth on iOS and Android. The way to do this is to make a plugin.
For iOS, you can write a plugin or functions in Objective-C or C++. You then place the .cpp or .mm file in your Assets/Plugins/iOS directory.
For Android, you have to write the plugin in Java or C++, compile it into a .jar, and place the jar file in Assets/Plugins/Android.
Each plugin should expose the same function names so that they are interchangeable. When you compile for Android or iOS, Unity will automatically choose the correct plugin folder for the targeted mobile platform.
A good way to build such a plugin is to first write a whole application in iOS or Android and test your functions in Xcode or Android Studio. If they work, you can go ahead and convert the code into a plugin for Unity. This saves time.
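As a minimal illustration of the Android side, a plugin can be an ordinary Java class; the package, class, and method names below are hypothetical, and this sketch only shows the calling pattern, not an actual BLE implementation:

    package com.example.bleplugin; // hypothetical package name

    import android.bluetooth.BluetoothAdapter;

    // Build this into a .jar and drop it in Assets/Plugins/Android.
    public class BlePlugin {
        // Static methods are easy to call from Unity via AndroidJavaClass.
        public static boolean isBluetoothEnabled() {
            BluetoothAdapter adapter = BluetoothAdapter.getDefaultAdapter();
            return adapter != null && adapter.isEnabled();
        }
    }

On the C# side you would then call it with something like new AndroidJavaClass("com.example.bleplugin.BlePlugin").CallStatic<bool>("isBluetoothEnabled"), and the iOS plugin would expose a function with the same name.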
EDIT:
I don't like advertising products, but I know this will help you, so here is a plugin that you can use for Bluetooth LE communication that works on Android and iOS: https://www.assetstore.unity3d.com/en/#!/content/26661
This plugin is new but it has good reviews. That's why I posted it.
Please bear in mind that most heart rate sensors come with built-in security and a proprietary communication protocol. So, even if a sensor uses Bluetooth, you may have to reverse engineer the protocol and then write a C# class on top of the Bluetooth plugin linked above to communicate with it. You can reverse engineer any heart rate sensor. Just buy one of the well-known ones, then get an iPhone and an Android phone.
Jailbreak the iPhone and root the Android device. Install a Bluetooth sniffing app and let the heart rate sensor talk to either the iPhone or the Android phone. You can then read what is being sent to the sensor from the phone, and what the phone is sending to the sensor; that is what you need to send from your C# class to make it work. Another way of doing this is to decompile the app that comes with the heart rate sensor (not recommended). You can see what's going on from there.
If the heart rate sensor has an API for Android or iOS, you will still have to write the plugin yourself, but on top of the API they provide.
If you are making your own heart rate sensor device, that should be easier. You can build a prototype with an Arduino and communicate with it using the plugin linked above. If it works, you can then move on to making it a real product.
It looks like the Polar H6 and H7 are the most popular ones. They also have a developer page. It should be easy to communicate with these because they use the standard protocol instead of a proprietary one like some others do.
http://developer.polar.com/wiki/H6_and_H7_Heart_rate_sensors
Android heart rate sensor code (standard):
http://developer.android.com/guide/topics/connectivity/bluetooth-le.html
Polar H6/H7 example code (vendor wiki, not the standard docs):
http://developer.polar.com/wiki/H6_and_H7_Heart_rate_sensors#HR_example_code_for_Android
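As an illustration of what the standard profile buys you, here is a sketch (my own, not taken from the pages above) of subscribing to the Heart Rate Measurement characteristic on Android; the UUIDs are the Bluetooth SIG assigned numbers for the Heart Rate service, the measurement characteristic, and the client configuration descriptor:

    import java.util.UUID;
    import android.bluetooth.BluetoothGatt;
    import android.bluetooth.BluetoothGattCallback;
    import android.bluetooth.BluetoothGattCharacteristic;
    import android.bluetooth.BluetoothGattDescriptor;

    public class HeartRateCallback extends BluetoothGattCallback {
        // Bluetooth SIG assigned numbers for the Heart Rate profile.
        static final UUID HR_SERVICE = UUID.fromString("0000180d-0000-1000-8000-00805f9b34fb");
        static final UUID HR_MEASURE = UUID.fromString("00002a37-0000-1000-8000-00805f9b34fb");
        static final UUID CCC_DESC   = UUID.fromString("00002902-0000-1000-8000-00805f9b34fb");

        @Override
        public void onServicesDiscovered(BluetoothGatt gatt, int status) {
            BluetoothGattCharacteristic hr =
                    gatt.getService(HR_SERVICE).getCharacteristic(HR_MEASURE);
            // Ask Android for notifications, then tell the sensor to send them
            // by writing the Client Characteristic Configuration descriptor.
            gatt.setCharacteristicNotification(hr, true);
            BluetoothGattDescriptor ccc = hr.getDescriptor(CCC_DESC);
            ccc.setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE);
            gatt.writeDescriptor(ccc);
        }

        @Override
        public void onCharacteristicChanged(BluetoothGatt gatt,
                                            BluetoothGattCharacteristic c) {
            // Byte 0 is a flags field; bit 0 says whether the value is UINT8 or UINT16.
            int flags = c.getIntValue(BluetoothGattCharacteristic.FORMAT_UINT8, 0);
            int format = (flags & 0x01) != 0
                    ? BluetoothGattCharacteristic.FORMAT_UINT16
                    : BluetoothGattCharacteristic.FORMAT_UINT8;
            int bpm = c.getIntValue(format, 1); // heart rate in beats per minute
        }
    }

The same parsing logic can live in your C# class once the plugin delivers the raw notification bytes to Unity.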
You can also email the developer of the plugin I linked and ask whether they could support or help you implement the heart rate sensor. They might even do it.
Good luck.
Basically, I want to turn the Tizen smartwatch into a Bluetooth headset for a period of time. We have a Tizen developer and an Android developer handy, and we're willing to build anything necessary to make this work.
This kind of process seems to work with built-in Android applications like the standard Phone app, but there doesn't seem to be any documentation online on how an app developer could stream the mic.
It should be noted that we do need to get the audio into the microphone input on the phone for our third party software to work. It's not as simple as just getting the audio to the phone.
Any help, even someone telling us what isn't possible, will be greatly appreciated.
It is possible to play sound with the HTML audio tag: http://developer.samsung.com/forum/board/thread/view.do?boardName=SDK&messageId=269002&startId=zzzzz~&searchSubId=0000000032&searchType=ALL&searchText=sound
It is possible to capture the sound in a host Android application (see the sketch below).
It is possible to exchange data bytes over Bluetooth with the Accessory SDK: http://developer.samsung.com/samsung-mobile#accessory
The data transfer is quick and efficient, so low-quality sound may work with little delay.
So it certainly is possible, but you'll have to write (or find compatible JavaScript and Android libraries for) all the streaming code yourself, which is quite a lot of work.
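For the "capture the sound in a host Android application" step, a sketch with AudioRecord might look like the following; the sample rate and the send step are assumptions, since the actual transport would go through the Accessory SDK:

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class MicStreamer {
        private volatile boolean streaming = true;

        public void streamMicrophone() {
            int sampleRate = 8000; // a low rate keeps the Bluetooth payload small
            int bufSize = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    sampleRate, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, bufSize);
            byte[] buffer = new byte[bufSize];
            recorder.startRecording();
            while (streaming) {
                int n = recorder.read(buffer, 0, buffer.length);
                // hand the first n bytes to the accessory connection here
            }
            recorder.stop();
            recorder.release();
        }
    }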
I have just acquired an Android phone... wonderful stuff. I'm starting to look at the OS guts and how to program the thing.
The voice-recognition-for-dictation is good too... given that this is an open-source OS, is there any way of harnessing the Android-Google speech recognition? My current understanding is that the voice trace has to be sent to the Google servers to be processed, i.e. the software is not on the machine. But I may be wrong!
Either way, does anybody have any idea whether such harnessing for one's own apps (on Android or another OS on a full-size 'puter, for example) is possible?
If you are talking about using voice recognition in your code, then you can do that with the help of the SpeechRecognizer class (http://developer.android.com/reference/android/speech/SpeechRecognizer.html) and RecognizerIntent.
But you can only use the existing functionality to a certain extent.
As for the confusion about whether recognition happens on the device or not: try using voice recognition after turning off the internet on your phone. It won't work.
You can also look at the API Demos for an example:
sdk\samples\android-10\ApiDemos\src\com\example\android\apis\app\VoiceRecognition.java
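For a concrete starting point, here is a sketch of the RecognizerIntent approach inside an Activity; the prompt text, class name, and request code are arbitrary:

    import android.app.Activity;
    import android.content.Intent;
    import android.speech.RecognizerIntent;
    import java.util.ArrayList;

    public class VoiceDemoActivity extends Activity {
        private static final int REQUEST_SPEECH = 100; // arbitrary request code

        private void startVoiceRecognition() {
            // Launches the platform recognizer UI; results come back in
            // onActivityResult below.
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speak now");
            startActivityForResult(intent, REQUEST_SPEECH);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK) {
                ArrayList<String> matches =
                        data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
                String spokenText = matches.get(0); // most confident result first
            }
        }
    }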
I want to develop a speech recognizer on Android that works offline. As Android's built-in speech recognizer uses a Google server, which needs internet access, I want an alternative that works in the absence of internet.
Please suggest a way to achieve this.
We used to recommend pocketsphinx, but now more advanced technology based on the Kaldi toolkit is available.
The demo is here: Vosk API. You can simply load it in Android Studio and run it. Full disclosure: I am the primary author of Vosk.
It supports speech recognition in 7 major languages - English, Chinese, Spanish, Portuguese, German, French and Russian.
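In outline, the Android usage looks like the sketch below; the model path is illustrative, and the exact setup (including unpacking the model from assets) is omitted, so check the demo project for the canonical version:

    import java.io.IOException;
    import org.vosk.Model;
    import org.vosk.Recognizer;
    import org.vosk.android.RecognitionListener;
    import org.vosk.android.SpeechService;

    public class VoskExample {
        public void startListening() throws IOException {
            // Load an offline model, then let SpeechService pull audio
            // from the microphone and feed it to the recognizer.
            Model model = new Model("/path/to/vosk-model-small-en-us"); // illustrative path
            Recognizer recognizer = new Recognizer(model, 16000.0f);
            SpeechService speechService = new SpeechService(recognizer, 16000.0f);
            speechService.startListening(new RecognitionListener() {
                @Override public void onPartialResult(String hypothesis) { /* live text */ }
                @Override public void onResult(String hypothesis)       { /* JSON with "text" */ }
                @Override public void onFinalResult(String hypothesis)  { }
                @Override public void onError(Exception e)              { }
                @Override public void onTimeout()                       { }
            });
        }
    }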
If the speech recognizer has a limited vocabulary (as in a simple voice user interface) and is limited to a few samples, it may be possible. Applications such as transcription are not likely to be feasible on Android in offline mode. DSP is also required for voice recognition... A limited vocabulary with very few samples might be your best bet.
If you really want to invest time and manpower in this goal, look at the Java project Java Speech API 2.0 (JSR 113).
It is used on "normal" mobile phones for voice commands and works offline.
Unfortunately, the project is discontinued.
You can download Google's offline speech recognition language packs for later use.
On your mobile: Settings -> "Language and Input" -> "Voice Search" -> "Download offline speech recognition" -> choose the language pack.
Or you can use other programs, such as
Dragon Mobile Assistant
https://play.google.com/store/apps/details?id=com.nuance.balerion&hl=en
You're not going to be happy with this workaround, but here goes: record the speech and store it for later. When an internet connection is available, play back the recorded speech and convert it to text.
Hey, it's the easiest way I can think of and might work for some applications, like dictation and memos.
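If it helps, here is a sketch of the recording half with MediaRecorder; the output path and format are just examples:

    import java.io.IOException;
    import android.media.MediaRecorder;

    public class MemoRecorder {
        private final MediaRecorder recorder = new MediaRecorder();

        public void start(String outputPath) throws IOException {
            recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            recorder.setOutputFile(outputPath); // e.g. a file under getFilesDir()
            recorder.prepare();
            recorder.start();
        }

        public void stop() {
            // Call when the user finishes speaking; the file is then ready
            // to be uploaded for conversion once a connection is available.
            recorder.stop();
            recorder.release();
        }
    }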