Handwritten text dictionary - Android

I don't know if I am even asking the right question: is there an Apple/Android app that allows non-native English speakers to draw a word and look up its meaning at the same time? I would highly value any suggestions or responses to my question. Thanks

The Google Translate application can do this for you. There is a handwriting (squiggly line) button at the lower right of the window on startup that lets you draw a word there and get a translation or description.
I believe the same application does this on iOS as well. It can even use the camera to translate what it sees, so if the word is in front of you or in printed material you don't have to write it out to get a translation.

Related

Question regarding Arabic on Google ML Kit

I have a question about harakah in Arabic: the result I get from ML Kit doesn't include Arabic with harakah in its text, only Arabic without it.
I am building a feature for my Flutter app that needs to recognize Arabic words from gestures the user paints, so I am using google_mlkit_digital_ink_recognition. The example works great and I downloaded the Arabic model, but the problem is that it cannot read the Arabic words with harakah that I input (paint) on the screen. I tried reading the documentation and searching for this, but no dice.
The result I get is Arabic without harakah only, and the feature I plan to make needs it. The backend will send a text (1 to 4 Arabic letters, with or without harakah), the user has to paint it on the screen, ML Kit recognizes the strokes the user made, and then the app decides whether what the user painted is correct or not; that is what I plan to build. Is there a way for ML Kit digital ink to input and output Arabic with harakah using this plugin? Is there an alternative or a better way to handle this? Do I need to build and train a custom model for this, or am I missing something that needs to be done first?
There are some apps on Google Play that are a bit similar; I think I could achieve a similar result with a clipper and a path, but those apps don't have harakah, my app needs to know whether the drawing is correct or not, and the text I need to match comes from the backend, which may vary and be more complicated. There is another alternative I thought of: instead of text, the backend sends me an image, I use scribble to generate an image of what the user paints, and then match it against the backend image with image_compare. I'm not sure about that, since I haven't tried it yet; the chance of matching them correctly might be low, and it would burden the other team, since they would have to make the images one by one instead of just sending text. Right now the fastest way I can think of is using ML Kit, since I need to work on this feature late this month or early next month. I hope you guys can help. Thanks.
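For illustration of the comparison step only, here is a small Python sketch of the logic (not Flutter code; the function names are made up for this sketch). It assumes, as observed above, that the recognizer returns only the bare letters, so the app can at least check the base letters by stripping the harakah (Unicode non-spacing marks) from both strings before comparing. This does not verify the harakah themselves, which is exactly the missing part.

    # Sketch of the "is what the user painted correct?" check, assuming the
    # recognizer output never contains harakah. Function names are hypothetical.
    import unicodedata

    def strip_harakah(text):
        # Remove Arabic diacritics (Unicode non-spacing marks, category "Mn").
        return "".join(c for c in text if unicodedata.category(c) != "Mn")

    def base_letters_match(target_from_backend, recognized):
        # Compare only the base letters; harakah on the backend text are ignored.
        return strip_harakah(target_from_backend) == strip_harakah(recognized)

    # The backend sends a vowelled word, the recognizer returns it bare.
    print(base_letters_match("بَاب", "باب"))  # True - base letters match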

Android Translate One Language to Another and Text to Speech

I am trying to make an application in which the user can translate text from one language to another and hear the translated text.
I have somehow managed to translate from one language to another using the Microsoft Translate API.
But I am still unable to speak the translated text aloud.
Please guide me on how to achieve this functionality.
You can also suggest a totally different API that does both translation and speech, and I can convert my project to that.
Any help is appreciated.
Regards
You can use Google's text to speech. See this.

Is it possible to scan logic gates from a hand-drawn image

I am thinking of a project for my university; the teachers liked it, but I am not sure if it's even possible.
I am trying to make an Android app.
What I want to do is take a picture of a hand-drawn logic circuit (with AND, OR, NOT, ... gates), recognize the gates, build the circuit on the mobile device, and run it on all possible inputs.
Example of a logic circuit (assume it is hand drawn)
For this I will have to build a simulator on the mobile, which I don't think is the hard part. The problem is how to recognize the gates from a picture.
I found out that there is an edge detection plugin in Java, but I still don't think it's enough to recognize the gates. Please share any algorithm, technique, or tool that I can use to make this work.
This is actually for my FYP; I can't find any good ideas and have to present this on Thursday.
You will need to do some kind of object recognition. The easiest way (conceptually) to identify gates is to simply do a correlation between the image and a bank of gate templates, an "alphabet". You run each gate template over the entire image and look for the highest correlation; a high correlation means the region matches the template closely, so you have likely found the gate of interest. Here are a few interesting SO posts:
Simple text reader (OCR) in Matlab
MATLAB Optical character recognition - need help
On its own this could be a daunting task, but you can simplify the problem by adding constraints.
For instance, the user must draw on graph paper and can only put one gate per grid cell. This ensures you won't have to check a large variety of sizes for each gate.
If you use graph paper with colored lines (like blue) and the user is only allowed to use a non-blue pen/pencil, you MAY be able to remove the grid easily when processing the image by filtering out the blue channel, and still have a clean image to work with.
Of course there are more advanced methods than correlation, but as I said before, this model is conceptually very easy to understand. Hope that helps.
Edit
I just realized both of my examples are in MATLAB; the important point here is the logic/process used, not the exact code.
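To make the correlation idea concrete outside MATLAB, here is a rough sketch using Python with OpenCV (an assumption; any library with normalized cross-correlation would work, and the file names and threshold are placeholders). It slides one gate template over the whole image and reports the locations where the match score is high.

    import cv2
    import numpy as np

    # Placeholder file names: a photo of the circuit and a template of one gate.
    image = cv2.imread("circuit.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("and_gate.png", cv2.IMREAD_GRAYSCALE)

    # Normalized cross-correlation: result[y, x] is the match score with the
    # template's top-left corner placed at (x, y).
    result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)

    # Keep every location whose score clears a hand-tuned threshold.
    threshold = 0.7
    ys, xs = np.where(result >= threshold)
    for x, y in zip(xs, ys):
        print("Possible AND gate at (%d, %d), score %.2f" % (x, y, result[y, x]))

The blue-grid trick described above would go just before the matching step: read the photo in color and drop (or threshold away) the blue channel before converting to grayscale.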

Can LiveCode recognise the characters that I draw?

I want my app to be able to recognise characters that I draw on screen, but I really don't know where to start. Is there an external for this, etc.?
I did some basic character recognition for my 'Spell With Kyle' app. It currently only recognises one character at a time, but the idea could be worked on if you need something more complex. There's an explanation of the technique and an example stack at http://splash21.com/Gestures.html
HTH :D
(It's just LiveCode - no external)

How to get strings and coordinates from a screenshot using Python or monkeyrunner

Let's say I captured a screenshot using monkeyrunner. This screenshot contains some text, e.g. Contacts, Dialer, etc. I want to extract the strings and their coordinates from the screenshot, so that in my monkeyrunner script I can search for a string, get its coordinates, and then use monkeyrunner to tap on those coordinates.
This would solve the problem of searching for a piece of text on the screen and tapping on it.
Can somebody help me with this?
This is a question of OCR.
Try here:
https://code.google.com/p/pytesser/
It is probably easier to access the low-level user interface elements than to try to figure out what the screenshot says. However, the question lacks information about the software, operating system, etc. being used.
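As a sketch of how the strings and coordinates could come out of the screenshot, here is an example using pytesseract (a more current Tesseract wrapper than pytesser; treat that substitution and the file name as assumptions). image_to_data returns each recognized word with its bounding box, from which a tap point can be computed and handed back to monkeyrunner.

    from PIL import Image
    import pytesseract

    # Run Tesseract on the screenshot saved by monkeyrunner (placeholder name).
    data = pytesseract.image_to_data(Image.open("screenshot.png"),
                                     output_type=pytesseract.Output.DICT)

    target = "Contacts"
    for i, word in enumerate(data["text"]):
        if word.strip() == target:
            # Centre of the word's bounding box - a reasonable point to tap.
            x = data["left"][i] + data["width"][i] // 2
            y = data["top"][i] + data["height"][i] // 2
            print("Found '%s' at (%d, %d)" % (target, x, y))
            # In the monkeyrunner script this would become something like:
            # device.touch(x, y, MonkeyDevice.DOWN_AND_UP)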
