I am a deaf software developer and have built an app that converts voice to text using a speech recognition API (it requires internet access), but the recognition is not continuous. I want to make an app for both deaf and hard-of-hearing people, like InnoCaption or RogerVoice, designed for users residing in the USA. Can you tell me where I can find technology that may help me develop a captioning phone app for Android and maybe iOS? Thanks in advance.
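One common way to approximate continuous captioning with the stock Android recognizer is to restart SpeechRecognizer whenever it delivers a result or an error. A minimal sketch of that pattern (the CaptionSession class and the restart policy are illustrative, not taken from any of the apps mentioned; RECORD_AUDIO permission and, for the stock recognizer, internet access are still required):

```java
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognitionListener;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;

import java.util.ArrayList;

public class CaptionSession {
    private final SpeechRecognizer recognizer;
    private final Intent intent;

    public CaptionSession(Context context) {
        recognizer = SpeechRecognizer.createSpeechRecognizer(context);
        intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);

        recognizer.setRecognitionListener(new RecognitionListener() {
            @Override public void onResults(Bundle results) {
                ArrayList<String> phrases =
                        results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION);
                // Append phrases.get(0) to the caption view here, then listen again.
                restart();
            }
            @Override public void onError(int error) { restart(); }
            @Override public void onPartialResults(Bundle partialResults) { /* live caption update */ }
            // Remaining callbacks are no-ops for this sketch.
            @Override public void onReadyForSpeech(Bundle params) {}
            @Override public void onBeginningOfSpeech() {}
            @Override public void onRmsChanged(float rmsdB) {}
            @Override public void onBufferReceived(byte[] buffer) {}
            @Override public void onEndOfSpeech() {}
            @Override public void onEvent(int eventType, Bundle params) {}
        });
    }

    public void start() { recognizer.startListening(intent); }

    private void restart() {
        recognizer.cancel();
        recognizer.startListening(intent);
    }
}
```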
iOS App Clips support Bluetooth, but Android Instant Apps do not.
Is it possible to make an Android Instant App with a WebView that uses the Web Bluetooth API?
This is more a theoretical question out of curiosity than a practical proposal, so stability and UX should not be taken into consideration.
I have an Android application that uses the Microsoft Band 2's sensors and displays the processed data.
The app works fine; I just want to add voice commands to the app via the microphone of the Band. Is that possible?
I am using Microsoft Cognitive Services (the Speech Recognition service) to get the voice command, transform it into text and then process it. I know that this API works fine with the Android microphone, but I would like to know whether it is possible to use the Band's mic, or maybe to integrate Cortana.
Thanks.
Access to the Band's microphone is not currently supported by Microsoft's SDK. However, the Band 2 does work with Cortana on Android, so while I am not familiar with Cortana for Android's capabilities, your best bet is likely to see whether there is a way to work with that application in some way.
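Since the Band's microphone is not exposed, one fallback is to capture the command through the phone's microphone. A rough sketch using the current Microsoft Speech SDK for Java (the subscription key and region placeholders are hypothetical, and this SDK has superseded the older Cognitive Services speech endpoints referred to in the question):

```java
import com.microsoft.cognitiveservices.speech.ResultReason;
import com.microsoft.cognitiveservices.speech.SpeechConfig;
import com.microsoft.cognitiveservices.speech.SpeechRecognitionResult;
import com.microsoft.cognitiveservices.speech.SpeechRecognizer;

public class VoiceCommand {
    // Captures a single utterance from the device's default microphone
    // (the phone's, not the Band's) and returns the recognized text.
    public static String listenOnce() throws Exception {
        SpeechConfig config = SpeechConfig.fromSubscription("<subscription-key>", "<service-region>");
        SpeechRecognizer recognizer = new SpeechRecognizer(config);

        SpeechRecognitionResult result = recognizer.recognizeOnceAsync().get(); // blocks for one utterance
        String text = (result.getReason() == ResultReason.RecognizedSpeech)
                ? result.getText()   // hand this to the existing command handling
                : null;

        recognizer.close();
        config.close();
        return text;
    }
}
```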
I'm preparing to create an open-source mobile app for learning to type in Braille. The app will need to:
Access up to 6 simultaneous touch points
Generate text-to-speech audio on the fly
Play MIDI sounds with 6 simultaneous channels, generated on the fly
Connect with a Bluetooth device
Ideally, I would like to create the app once in one development environment and then deploy it to Android, iOS and other devices. So far I have looked at:
PhoneGap
Titanium
LiveCode
However, as far as I can tell from my research, none of these gives me access to all the native features that my project will need.
I would be interested to hear from developers who are working in these and similar development environments on how easy it is to handle the four requirements I have listed above.
With all those requirements I'd go for Xamarin. I know it can deal with text-to-speech, Bluetooth and multiple touch inputs, but you have to check whether it supports your MIDI requirement.
And of course you can port to all platforms.
Bluetooth Chat Sample Application
TextToSpeech class
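For the text-to-speech requirement on native Android, the TextToSpeech class linked above generates audio on the fly. A minimal sketch (the locale and utterance ID are arbitrary choices for illustration):

```java
import android.content.Context;
import android.speech.tts.TextToSpeech;

import java.util.Locale;

public class Speaker {
    private final TextToSpeech tts;

    public Speaker(Context context) {
        // Initialization is asynchronous; speak only after onInit reports success.
        tts = new TextToSpeech(context, status -> {
            if (status == TextToSpeech.SUCCESS) {
                tts.setLanguage(Locale.US);
            }
        });
    }

    public void say(String text) {
        // QUEUE_FLUSH interrupts anything currently being spoken.
        tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "braille-lesson");
    }

    public void shutdown() {
        tts.shutdown();
    }
}
```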
I am currently thinking about the best multi-platform language to build a multiplayer app with, and I was wondering whether anyone knows if AIR supports multiplayer locally between devices, i.e. over a LAN or Bluetooth. Would I need to run some aspects of the game via a server?
Not to give too much away (of the game idea), but it would be similar to a "Simon"-type game, with the only information passed between devices being a score, the number of moves to beat, or some other simple piece of data.
Thanks
Adobe AIR supports the ServerSocket class, so yes, it's more than possible.
Edit
As @davivid accurately pointed out, ServerSocket doesn't seem to be implemented on mobile devices. You're not out of luck here, though: you can use Native Extensions for AIR and still accomplish your end goal. See this official Adobe page for more info and a ton of downloadable examples.
Multiplayer on devices connected to the same local network is supported in Adobe AIR including iOS and Android. You use NetConnection.connect().
Example with source code.
If you are developing on iOS, it is best to use GameKit, which comes with iOS. GameKit is also connected to Game Center, so players can challenge friends near them or play against someone over the Internet. This is all handled for you by the API, so you don't need to worry about matchmaking or even low-level socket communication.
AIR doesn't support GameKit out of the box, but there are some Native Extensions that support Multiplayer Gameplay. The one I use is at: http://airextensions.net/shop/extensions/game-kit-by-vitapoly/
I want to develop a speech recognizer for Android that works offline. Since Android's built-in speech recognizer uses Google's servers, which require internet access, I want an alternative that works in the absence of internet.
Please suggest some way to achieve this.
We used to recommend PocketSphinx, but more advanced technology based on the Kaldi toolkit is now available.
The demo is here: Vosk API; you can simply load it in Android Studio and run it. Full disclosure: I am the primary author of Vosk.
It supports speech recognition in 7 major languages - English, Chinese, Spanish, Portuguese, German, French and Russian.
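A rough sketch of how the Vosk Android library is typically wired up, based on its demo project (the model path and listener bodies are illustrative; check the Vosk documentation for the exact current API):

```java
import org.vosk.Model;
import org.vosk.Recognizer;
import org.vosk.android.RecognitionListener;
import org.vosk.android.SpeechService;

import java.io.IOException;

public class OfflineRecognition implements RecognitionListener {
    private SpeechService speechService;

    public void start(String modelDir) throws IOException {
        // modelDir points at an unpacked Vosk model, e.g. a small English
        // model bundled with the app or downloaded on first run.
        Model model = new Model(modelDir);
        Recognizer recognizer = new Recognizer(model, 16000.0f);
        speechService = new SpeechService(recognizer, 16000.0f);
        speechService.startListening(this);   // microphone capture, fully offline
    }

    @Override public void onPartialResult(String hypothesis) { /* live partial text (JSON) */ }
    @Override public void onResult(String hypothesis)        { /* finalized utterance (JSON) */ }
    @Override public void onFinalResult(String hypothesis)   { /* stream finished */ }
    @Override public void onError(Exception e)               { /* handle recognizer errors */ }
    @Override public void onTimeout()                        { /* optional silence timeout */ }

    public void stop() {
        if (speechService != null) speechService.stop();
    }
}
```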
If the speech recognizer has a limited vocabulary (as in a simple voice user interface) and is limited to a few samples, it may be possible. An application such as transcription is not a task likely to be performed well on Android in offline mode, and voice recognition also requires significant DSP work. A limited vocabulary restricted to very few samples is probably your best bet.
If you really want to invest time and manpower in this, look at the Java Speech API 2.0 project (JSR 113).
It is used on "normal" mobile phones for voice commands and works offline.
Unfortunately, the project is discontinued.
You can download Google's offline speech recognition language packs for later use.
On your phone: Settings -> "Language and Input" -> "Voice Search" -> "Download offline speech recognition" -> choose the language pack.
Or you can use other programs, such as
Dragon Mobile Assistant
https://play.google.com/store/apps/details?id=com.nuance.balerion&hl=en
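Once an offline language pack is installed, the stock recognizer can be asked to prefer it. A small sketch using the standard recognition intent (EXTRA_PREFER_OFFLINE needs API 23+; whether recognition actually stays offline depends on the installed pack):

```java
import android.app.Activity;
import android.content.Intent;
import android.speech.RecognizerIntent;

import java.util.ArrayList;

public class OfflineDictation {
    private static final int REQUEST_SPEECH = 42;   // arbitrary request code

    public static void start(Activity activity) {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PREFER_OFFLINE, true);   // API 23+
        activity.startActivityForResult(intent, REQUEST_SPEECH);
    }

    // Call this from the Activity's onActivityResult.
    public static String extractText(int requestCode, int resultCode, Intent data) {
        if (requestCode == REQUEST_SPEECH && resultCode == Activity.RESULT_OK && data != null) {
            ArrayList<String> matches =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (matches != null && !matches.isEmpty()) {
                return matches.get(0);
            }
        }
        return null;
    }
}
```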
You're not going to be happy with this workaround, but here goes: record the speech and store it for later. When an internet connection is available, connect, play back the recorded speech and convert it to text.
Hey, it's the easiest way I can think of, and it might work for some applications, like dictation and memos.
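A minimal sketch of the record-now half of that workaround with MediaRecorder (the output path, format and codec are arbitrary choices; the later playback or upload step is omitted):

```java
import android.media.MediaRecorder;

import java.io.IOException;

public class DeferredDictation {
    private MediaRecorder recorder;

    public void startRecording(String outputPath) throws IOException {
        // Capture from the microphone into a compressed file for later transcription.
        recorder = new MediaRecorder();
        recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
        recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
        recorder.setOutputFile(outputPath);   // e.g. a file under getExternalFilesDir()
        recorder.prepare();
        recorder.start();
    }

    public void stopRecording() {
        // The saved file can be fed to an online recognizer once connectivity returns.
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}
```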