I have a set of messages that should be spoken by an Android app. I could use something like Svox, but I don't need to read user input.
I was thinking about using prerecorded words and putting them together myself - could you please show me a proper way to do this?
Android has had a built-in Text To Speech feature since API level 4.
Go through this tutorial for a step-by-step guide.
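For reference, the basic usage is quite small. A minimal sketch (the activity name and the spoken message are placeholders of mine):

import android.app.Activity;
import android.os.Bundle;
import android.speech.tts.TextToSpeech;
import java.util.Locale;

public class SpeakActivity extends Activity implements TextToSpeech.OnInitListener {

    private TextToSpeech tts;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // The engine initializes asynchronously; speak only after onInit fires.
        tts = new TextToSpeech(this, this);
    }

    @Override
    public void onInit(int status) {
        if (status == TextToSpeech.SUCCESS) {
            tts.setLanguage(Locale.US);
            // QUEUE_FLUSH drops anything already queued before speaking.
            tts.speak("Your message here", TextToSpeech.QUEUE_FLUSH, null);
        }
    }

    @Override
    protected void onDestroy() {
        tts.shutdown(); // release the engine when done
        super.onDestroy();
    }
}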
Clone this git project.
It's a well-architected example for recognizing spoken commands (English/Estonian).
If you like the approach, there is a library project to use as a service.
If you can take the time to follow an open-source project, this approach to implementing recognized commands is quite good.
In the Android Google speech-to-text SDK, the voice recording is controlled by the SDK.
I need a manual button to start and stop voice recording for speech to text. For example: clicking a start button begins voice recognition, which keeps recording audio until the stop button is clicked. In the Android SDK, however, recording stops automatically and the recorded audio is passed on for processing.
I created an updated version of the Android sample application with Start and Stop, and posted it here:
https://github.com/Avilaaiops/SpeechRecognitionClient
It updates Gradle to 4.0.1, Kotlin to 1.3.72, and the Speech SDK to 1.24.0.
This should help people looking for an up-to-date sample that isn't on the alpha SDK anymore.
As far as I know, this is how it's intended to work. There is no continuous speech recognition. To implement something like you requested, you need to use third-party libraries like this or this one.
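To illustrate the limitation, here is a minimal sketch of wiring your own start/stop buttons to the stock SpeechRecognizer (the button ids are placeholders, and RECORD_AUDIO permission is required). Even with your own stop button, the engine may still end the session by itself when it detects silence:

import android.content.Intent;
import android.speech.RecognizerIntent;
import android.speech.SpeechRecognizer;
import android.view.View;
import android.widget.Button;

// Inside an Activity's onCreate:
final SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(this);
// recognizer.setRecognitionListener(...) is needed to actually receive results.

final Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
        RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);

Button startButton = (Button) findViewById(R.id.start_button);
Button stopButton = (Button) findViewById(R.id.stop_button);

startButton.setOnClickListener(new View.OnClickListener() {
    @Override public void onClick(View v) {
        recognizer.startListening(intent);
    }
});

stopButton.setOnClickListener(new View.OnClickListener() {
    @Override public void onClick(View v) {
        // Ends audio capture, but the engine may already have stopped
        // itself after detecting silence - there is no true continuous mode.
        recognizer.stopListening();
    }
});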
There is no official documentation on how to transcribe audio from streaming input on Android yet, only for Java, C#, Go, Python and Node.js. However, there is a sample Android app for the API. You can use it as a starting point and convert the Java code into Android-native code.
Note: Even though Android uses Java, it is a different version, designed to fit the Android architecture.
Using the approach I mentioned above requires extra effort and research skill, as well as solid Java and Android fundamentals. #thisisthehardway
The easier way is to apply an external library like Droid Speech (as mentioned by @kAliert).
From the documentation of Droid Speech:
Droid Speech aims to close this gap and provide unparalleled optimisation of continuous speech recognition without any of the above said issues. It is developed keeping in mind all the loopholes which needs to be blocked to have the speech recognition run seamlessly in an android device.
This would be relatively easy, but it is made by a third party, so you do not have full control over it.
Cheers!
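For what it's worth, usage goes roughly along these lines, based on my reading of the project's README; the exact constructor, method, and listener names are assumptions on my part and may differ between versions:

// Inside an Activity that implements the library's speech listener interface.
DroidSpeech droidSpeech = new DroidSpeech(this, null);
droidSpeech.setOnDroidSpeechListener(this);

// Start continuous listening; the library restarts the platform recognizer
// internally so recognition keeps running across silence.
droidSpeech.startDroidSpeechRecognition();

// Later, when you are done with recognition:
droidSpeech.closeDroidSpeechOperations();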
I searched a lot for this and didn't find any solution for implementing the Microsoft speech-to-text API. Finally I found a solution that worked for me, and I hope it will work for you or help somebody else who is searching. I am just mentioning the git repository link: pick up the MainActivity.java, the Gradle files (app and project level), and the layout XML, and put them in your project. Execute and enjoy the solution.
The git repository link is: MircoSoftSpeechToText
I want to use the Vuforia SDK's text recognition module in my new app.
I have been successful in building and running their sample apps. However, doing that was apparently not enough for me to figure out how to properly start using the Vuforia Android SDK in my new project.
The IDE is Android Studio. I have the license key for my vuforia account.
Could someone help me get started? I just need to start using Vuforia as a library in my own personal project. I do not want to develop directly on top of their sample apps.
I am sorry if the context of the question doesn't fit this particular community.
I wanted to do exactly what you said. I didn't want to build on top of their samples. I struggled to find a tutorial, but there is none, at least I was not able to find one. It turns out it is really easy. Go to the Vuforia folder (wherever you installed it), head to vuforia-sdk-android-X-XX-X\build\java\vuforia, copy the .jar file, and paste it into your new project in Android Studio, inside your libs folder. Then add this line to your build.gradle
compile files('libs/Vuforia.jar')
inside your dependencies block. That's it. You can access all the Vuforia classes from there. Don't forget to instantiate them just like in the example, with your own key from the Vuforia developer console.
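From memory of the sample apps, initialization then looks roughly like this; the flag constant and the progress loop are assumptions of mine, and the samples' SampleApplicationSession class shows the exact calls (usually run off the UI thread in an AsyncTask):

import com.vuforia.Vuforia;

// licenseKey holds the key from your Vuforia developer console.
Vuforia.setInitParameters(activity, Vuforia.GL_20, licenseKey);

int progress = 0;
while (progress >= 0 && progress < 100) {
    progress = Vuforia.init(); // returns a percentage, or a negative value on error
}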
I don't know if this is still relevant, but... This link shows what you need to do, code-wise, for creating a new Vuforia App: How can I build a basic Vuforia app. This, along with the links you have already seen regarding the setup, should be enough for you to get started from scratch.
Instead of relying on Vuforia's module, I built my own real-time OCR from scratch, using the Camera (1) API and Google's Mobile Vision Library.
My app - Optical Dictionary & Vocabulary Teacher - performs a lot better than Vuforia's module. It does real-time scanning of words, shows them in a friendlier manner, and does far better validation of words. It even lemmatizes them (e.g. fooled -> fool, thieves -> thief, etc.).
Needless to say, this also gave me complete control over my module.
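For anyone curious, the Mobile Vision part of such a setup is quite compact. A minimal sketch for a single bitmap (the real-time version feeds camera preview frames instead, and the validation/lemmatization is a separate step):

import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;
import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.text.TextBlock;
import com.google.android.gms.vision.text.TextRecognizer;

public static void recognizeText(Context context, Bitmap bitmap) {
    TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
    if (!recognizer.isOperational()) {
        return; // native OCR libraries not yet downloaded by Play Services
    }
    Frame frame = new Frame.Builder().setBitmap(bitmap).build();
    SparseArray<TextBlock> blocks = recognizer.detect(frame);
    for (int i = 0; i < blocks.size(); i++) {
        String text = blocks.valueAt(i).getValue();
        // validate / lemmatize 'text' here
    }
    recognizer.release();
}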
Here is a Video Demo of my app.
If anyone wants to build something similar, feel free to contact me for assistance.
There are very few examples on the web showing how to work with Notification Hubs and Android. Besides that, the samples seem to use an old version of the Android SDK. I'm trying to use a notification hub with tags (first with Android), but I could not find a good resource to learn how to do that. I'm wondering if anyone has already done this and could show me some code.
I'm following this article: http://www.windowsazure.com/en-us/manage/services/notification-hubs/get-started-notification-hubs-android/
but as I said, I would like to notify users according to tags.
I'd appreciate any help.
PS: I'm not an Android/Java programmer, so a working project would be awesome.
Are you able to run the getting-started sample that you linked?
Even though it refers to API version 17, it should work the same (you can use Google API 18 without problems).
From the sample code, in order to use tags, simply add your tags in the call:
hub.register(regid, tags); // with tags a list of strings such as ["Yankees", "RedSox"]
Then when you send a notification you have to specify the tag you want to target.
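Put together, the registration side looks like this (the hub name, connection string, and registration id are placeholders; register() is blocking, so call it off the UI thread):

import com.microsoft.windowsazure.messaging.NotificationHub;

NotificationHub hub = new NotificationHub("<hub name>",
        "<DefaultListenSharedAccessSignature connection string>", context);

try {
    // register() accepts the GCM registration id plus any number of tags.
    hub.register(regid, "Yankees", "RedSox");
} catch (Exception e) {
    // registration failed; log and retry
}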
Unfortunately, we do not have the full Breaking News tutorial for Android yet (only Windows and iOS): http://www.windowsazure.com/en-us/manage/services/notification-hubs/breaking-news-dotnet/
The Breaking News tutorial also provides a lot of insight into what has to be done for most tag-based scenarios.
I am making an app that will allow the user to take a picture and then apply various effect filters to it. Basically, I want to create an app similar to Pudding Camera.
I researched a lot and came across 3 options:
1) Use OpenCV and implement all the effects manually [not my first priority, as it takes a lot of time, but I will do this if all else is unfruitful].
2) Use a library like ImageMagick / ImageJ / Marvin by porting it to Android via the NDK.
3) Use a library like JJIL.
Now I want to know which is the best way to proceed. My priorities are:
1) I want to be able to tweak the effects and maybe create new custom effects of my own.
2) I want it to run fast, as I want my app to be quick and responsive.
3) I want to use a library that is easy to learn, as I am not an expert in image processing.
Please help!
OpenCV works well for Android 2.3 and beyond. You can also consider FastCV by Qualcomm, which is like OpenCV but more optimized for Qualcomm chips.
I do not recommend JJIL; it hasn't been updated in a long time and only works on very old versions of Android.
The best library to use and to learn is the Catalano Framework. Check this article; you will learn it quickly with a few lines of code, and it contains several examples. There are several filters that run multithreaded; you can check them in the namespace Catalano.Imaging.Concurrent.Filters.
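To give a flavour of it, applying a filter goes roughly like this (based on the article; I am assuming the FastBitmap wrapper API here, so check the class names against your version):

import android.graphics.Bitmap;
import Catalano.Imaging.FastBitmap;
import Catalano.Imaging.Filters.Grayscale;

// Wrap the Android bitmap, apply the filter in place, and unwrap the result.
FastBitmap fb = new FastBitmap(bitmap);
Grayscale grayscale = new Grayscale();
grayscale.applyInPlace(fb);
Bitmap result = fb.toBitmap();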
I am very new to Eclipse and Android development in general and need help with the following. I have built two Android projects in Eclipse with the Android SDK:
"ORF401 Project" - targets Android 2.2 platform
"Map Project" - targets Google APIs 2.2 platform
I have followed the steps in the standard Hello World Google Maps for Android tutorial and have gotten the Google map to display in the emulator when I run the second project.
I have a menu set up in the first project, and one of its options should load the map. However, I'm not sure how to load the map within this project, since only one build target can be specified for each project, so I cannot specify the Google Maps API as an additional build target. Is there a way to call the main .java class from the second project within the first project? I see where a reference can be made to the second project under the project properties, but I'm not sure how to make use of it. One possible solution I found on the web was to add the following code under the switch case in the first project:
Intent intent = new Intent(this, GoogleMapActivity.class); // GoogleMapActivity being the map screen's class
startActivity(intent);
I presume this would require an additional GoogleMapActivity class in the first project and also another activity declaration, but I can't get it to work. Can anyone suggest how I can accomplish this?
If the code for either or both projects would help, I'll be happy to post it. Thanks.
The main idea with projects is to have one project per application.
I assume your application needs to do something with maps, as well as something else. There is no need to split those ideas. Keep them in one project, because together they make up the single application that you are developing.
The first thing I would suggest: read carefully about activities and intents. Head to http://developer.android.com - everything is clearly explained there.
With that cleared up, you will see the point in making a button which, when tapped, opens a new screen with the map feature you've developed, and then lets you go back or do something else, like open another screen, a browser, etc.
And give up trying to call one project from another :) I don't think it's the way you want to do things.
Just to make sure I'm not misunderstood - of course you might want to have two projects. But those will most probably result in two separate applications. Luckily, applications may also interact via intents, content providers, and a couple of other mechanisms. Just see how the Contacts app takes you to the GMail app when you want to send a mail. If that's what you want to achieve - you still need to read about intents.
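For the single-project setup described above, opening the map screen is then just an ordinary intent (MyMapActivity is a placeholder name of mine; the activity must be declared in your manifest):

import android.content.Intent;

// In the menu handler of the main activity:
Intent intent = new Intent(this, MyMapActivity.class);
startActivity(intent);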
edit:
Here's the link I mentioned in the comments:
http://android-developers.blogspot.com/2010/07/how-to-have-your-cupcake-and-eat-it-too.html
It explains how to achieve the 'additional target' that you would like to have.
There are ways to call a class from one project in another project, but there are bigger problems here. The first project can run on any Android device. The second, however, requires the Google Maps APIs. You won't be able to invoke it anyway, because it can't be installed unless the environment supports the Google APIs. There really is no benefit to doing this, unless you have additional functionality in project 1 such that it can exist without project 2.
I would suggest using the Intent method rather than trying to hack something together that allows you to access another class. Regardless, coupling these two together like this seems overly complicated and error-prone. I would suggest simply embedding the mapping functionality in project 1 and requiring the Google APIs. Most mainstream devices support them anyway.
If you're married to the idea of having two separate projects with different build targets, I would suggest looking into using BroadcastReceivers with a custom intent that you broadcast from application 1. I don't believe startActivity will work across application boundaries because of class loader issues, but I could be wrong about that.
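To sketch that idea (the action string, receiver, and activity names are all made up here for illustration): application 1 fires a custom broadcast, and application 2 declares a receiver for that action in its manifest and launches its own activity:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;

// Application 1: fire the custom broadcast from wherever is appropriate.
Intent intent = new Intent("com.example.maps.ACTION_SHOW_MAP");
sendBroadcast(intent);

// Application 2: a receiver registered in the manifest with an
// intent-filter for "com.example.maps.ACTION_SHOW_MAP".
public class ShowMapReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        Intent launch = new Intent(context, MapActivity.class);
        // NEW_TASK is required when starting an activity from outside an Activity.
        launch.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
        context.startActivity(launch);
    }
}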