Tracing an SVG path - android

I'm currently working on an app that will teach users how to write foreign characters (character tracing / alphabet tracing) such as Kanji, Hangul, and Arabic. I created the characters in Adobe Illustrator and imported them into Android as XML files. The SVG then serves as a guide for tracing the strokes and for detecting the user's gesture: the user should follow the stroke, and the stroke should be filled once it is traced correctly; otherwise the app should show which stroke the user needs to trace first.
Please see the sample image below:
The red line below is my gesture, while the green line shows the correct way of tracing the character before proceeding to the other strokes.
Does anyone here have experience working on this kind of project? Is it possible to do it using native Android gesture detection? Thanks in advance.
Disclaimer: The screenshot below is from the app Japanese Kanji Study, developed by Chase Colburn

If I were you, I probably wouldn't use SVG <path> elements. I would use a sequence (array) of points (i.e. the equivalent of an SVG <polyline>). The points should be close enough together that they look like a smooth line when drawn. Or you could apply some smoothing when you render them.
The advantage of the points array is that it is much easier to find the closest point to your touch location, than it is to find the closest point on an arbitrary <path>. And when you are "tracing" with the finger, you just need to draw a line through all the points up to the one closest to your touch location.
Obviously for most characters you would actually have two or more point arrays. But you would just work with each array in sequence.
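A minimal sketch of that idea in Java (the Stroke class and its members are hypothetical names, not from the question): store each stroke as a densely sampled list of points, find the index of the point nearest the touch, and draw a polyline through the points up to that index.

import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.PointF;

import java.util.List;

// Hypothetical helper: one stroke of a character, stored as densely sampled points.
class Stroke {
    final List<PointF> points;

    Stroke(List<PointF> points) {
        this.points = points;
    }

    // Index of the stroke point closest to the current touch location.
    int closestIndexTo(float touchX, float touchY) {
        int best = 0;
        float bestDist = Float.MAX_VALUE;
        for (int i = 0; i < points.size(); i++) {
            float dx = points.get(i).x - touchX;
            float dy = points.get(i).y - touchY;
            float dist = dx * dx + dy * dy;
            if (dist < bestDist) {
                bestDist = dist;
                best = i;
            }
        }
        return best;
    }

    // Draw the traced part of the stroke: a polyline through points[0..endIndex].
    void drawTracedPortion(Canvas canvas, Paint paint, int endIndex) {
        Path path = new Path();
        path.moveTo(points.get(0).x, points.get(0).y);
        for (int i = 1; i <= endIndex; i++) {
            path.lineTo(points.get(i).x, points.get(i).y);
        }
        canvas.drawPath(path, paint);
    }
}

In onTouchEvent you would call closestIndexTo(...) for the active stroke and only advance the traced index while the touch stays within some tolerance of the stroke; otherwise show the guide stroke again.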

Actually, just take a look at HTML and get some ideas. In HTML you can define a coordinate area inside an image map and make it clickable. The following link elaborates on what I am trying to say.
After defining the areas, you can order them: area 1, area 2, ... area n. Give the first area a flag; when the flag is true and the area is touched, change its background color and set the next area's flag to true. It is all up to you - this is just one possible solution. But the main point is that in XML you can create a mapped image with coordinates. A rough sketch of this ordering idea follows below.
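A rough Android analogue of that image-map idea (the class and method names here are made up for illustration): keep an ordered list of Rect regions and accept a touch only on the region whose turn it is.

import android.graphics.Rect;

import java.util.List;

// Hypothetical sketch: ordered touch regions, unlocked one after another.
class OrderedRegions {
    private final List<Rect> regions; // region 1, region 2, ... region n
    private int nextIndex = 0;        // only this region currently accepts touches

    OrderedRegions(List<Rect> regions) {
        this.regions = regions;
    }

    // Returns true if the touch hit the currently expected region,
    // in which case the next region becomes active.
    boolean onTouch(int x, int y) {
        if (nextIndex >= regions.size()) return false;
        if (regions.get(nextIndex).contains(x, y)) {
            nextIndex++;   // "set the next area's flag to true"
            return true;   // caller can change the background color here
        }
        return false;
    }
}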

Related

Motion to draw numbers on android

I'm new to Android, and I have to develop an application that can help children with autism learn numbers. I have a few ideas and I've been trying to learn and implement the code, but it failed. The question is: how can I apply motion code or a sprite to draw a number or letter? For example, I want to make the penguin move along the line and draw the number nine.
There is an example from mybringback.com in which an image moves to draw a rectangle. How can I adapt it to draw a number? I'm sorry if I'm asking too much; I'm just trying to get some ideas.
I think you should first build a utility program in order to create the "path vector".
What I mean by a path vector is simply a vector of Points (where a point has an x value and a y value). Your utility should let you draw whatever you want with a simple pen: draw on a surface and store points while the mouse is down, and ignore points when the mouse is up.
Then, in the main program, you will just have to read the path of your number/letter.
I tried to implement something like this for the Sugar OLPC platform, without serializing the paths into files: I was able to draw and to view the animation, and I used exactly the process I've just described.
Hope it can help you.
P.S.: I used the word mouse, but you guessed that I'm talking about a finger...
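A minimal sketch of such a recording utility as a custom Android View (the class name and drawing details are my own, not the answerer's code): store a point on every DOWN/MOVE event and stop recording on UP.

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.PointF;
import android.view.MotionEvent;
import android.view.View;

import java.util.ArrayList;
import java.util.List;

// Hypothetical utility view: records the finger path as a vector of points.
public class PathRecorderView extends View {
    private final List<PointF> recordedPath = new ArrayList<>();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public PathRecorderView(Context context) {
        super(context);
        paint.setStrokeWidth(6f);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_MOVE:
                // Finger is down: store the point.
                recordedPath.add(new PointF(event.getX(), event.getY()));
                invalidate();
                return true;
            case MotionEvent.ACTION_UP:
                // Finger is up: stop storing; recordedPath now holds the "path vector".
                return true;
        }
        return super.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Draw the recorded points so you can see what was captured.
        for (PointF p : recordedPath) {
            canvas.drawCircle(p.x, p.y, 3f, paint);
        }
    }
}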
There are various ways to achieve animation effects. One approach that is quite versatile involves creating a custom View or SurfaceView in which you override the onDraw method. Various tutorials on this can be found; the official Android discussion of it is here:
http://developer.android.com/guide/topics/graphics/2d-graphics.html#on-view
Your implementation will look something like this:
// Find elapsed time since previous draw
// Compute new position of drawable/bitmap along figure
// Draw bitmap in appropriate location
// Add line to buffer containing segments of curve drawn so far
// Render all segments in curve buffer
// Take some action to call for the rendering of the next frame (this may be done in another thread)
Obviously a simplification. For a very simplistic tutorial, see here:
http://www.techrepublic.com/blog/software-engineer/bouncing-a-ball-on-androids-canvas/1733/
Note that different implementations of this technique will require different levels of involvement by you; for example, if you use a SurfaceView, you are in charge of calling the onDraw method, whereas subclassing the normal View lets you leave Android in charge of redrawing (at the expense of limiting your ability to draw on a different thread). In this respect, Google remains your friend =]
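A hedged sketch of the View-based variant of that outline (the class name, point arrays and timing constant are my own illustration, not code from the linked tutorial):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

// Hypothetical sketch: a sprite advances along a precomputed point list,
// leaving a trail of the segments already drawn.
public class TracingAnimationView extends View {
    private final float[] xs, ys;     // points along the figure, in order
    private final Bitmap sprite;      // e.g. the penguin
    private final Paint trailPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Path trail = new Path();

    private int currentIndex = 0;
    private long lastFrameTime = 0;
    private static final long MS_PER_POINT = 30; // assumed animation speed

    public TracingAnimationView(Context context, float[] xs, float[] ys, Bitmap sprite) {
        super(context);
        this.xs = xs;
        this.ys = ys;
        this.sprite = sprite;
        trailPaint.setStrokeWidth(8f);
        trailPaint.setStyle(Paint.Style.STROKE);
        trail.moveTo(xs[0], ys[0]);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        long now = System.currentTimeMillis();
        // Find elapsed time since previous draw and advance along the figure.
        if (lastFrameTime != 0 && now - lastFrameTime >= MS_PER_POINT
                && currentIndex < xs.length - 1) {
            currentIndex++;
            trail.lineTo(xs[currentIndex], ys[currentIndex]); // buffer of segments drawn so far
        }
        lastFrameTime = now;

        // Render all segments in the buffer, then the sprite at its new position.
        canvas.drawPath(trail, trailPaint);
        canvas.drawBitmap(sprite, xs[currentIndex], ys[currentIndex], null);

        // Ask for the next frame.
        if (currentIndex < xs.length - 1) {
            postInvalidateDelayed(MS_PER_POINT);
        }
    }
}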

Android: How to detect these objects in images? (Image included). Tried OpenCV and the metaio SDK, but neither works well enough

I have recently been working on object detection/recognition in images captured from an Android device's camera.
The objects I am trying to detect are all kinds of buttons that look like this:
Picture of buttons
So far I have tried OpenCV and also the metaio SDK. Results:
OpenCV always detected something, but gave lots of false hits, and it is too much work to collect all the pictures needed for what I have in mind. I tried three approaches with OpenCV:
Feature detection (SURF, ORB and so on) -> way too slow, and not enough features on my objects.
Template matching -> seems to work only when the template is an exact crop of the scene image.
Training classifiers -> this worked best so far, but is too much work for my goal, and still gives too many false detections.
The metaio SDK worked OK when I took my reference images (the icon part of each button) out of a picture like the one shown above, printed the full image, and pointed my Android device's camera at the printout. But when I tried with the real buttons (not a picture of them), almost nothing was detected anymore. The metaio documentation says that reference images need to have lots of features and color differences and should not consist only of white text. Well, as you can see, my reference images are exactly the opposite of what they should be - but that's just how the buttons look ;)
So, my question is: does anyone have a suggestion for what else I could try in order to detect and recognize each of those buttons when I point my Android camera at them?
As a suggestion, you could try the following approach:
Class-Specific Hough Forest for Object Detection
They provide a C code implementation. Compile and run it, look at the results, then replace the positive and negative training images with your own, according to the following rules:
You will need to define the following 3 areas:
target region (the image you provided is a good representation of a target region)
nearby working area (this area carries information about the target's relative location); I would recommend an area 3-5 times the size of the target region, around the target, as a good working area
everything outside the above can be used as negative images
Then:
Use "many" positive images (100-1000) at different viewing angles (-30 to +30 degrees) and at various distances.
You will have to make assumptions about the viewing angles and distances at which your users will use the application. The stricter these assumptions are, the better performance you will get. A simple "hint" camera overlay can give people a good idea of where you expect the working area to be.
Use a negative image set that is a few times (3-5x) larger and includes pictures of things that might appear in the camera but should not contribute any target position information.
Do not use big images; somewhere around 100-300 px in width should be enough.
Assemble the database and modify the configuration file that the code comes with. Run the program and see whether the performance is OK for your needs.
The program will return a voting map for the object you are looking for. Apply a Gaussian blur to it and then threshold it (you will have to make another assumption for this threshold value); a small code sketch of this step follows at the end of this answer.
The extracted mask will define the area you are looking for. The size of the masked region gives you a good estimate of the object's scale. Given this information, it will be much easier to select the proper template and perform template matching.
(Also some thoughts) You could also try a small trick: use the goodFeaturesToTrack function with the mask you obtained to get a set of locations, and compare them with the corresponding locations on a template. Construct a sum-of-squared-differences (SSD) cost and solve it for rotation, scale and translation parameters by minimizing the alignment error (though I'm not sure this approach will work).
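A small sketch of the blur-and-threshold step mentioned above, using OpenCV's Java bindings (the voting-map Mat, kernel size and threshold value are assumptions):

import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class VotingMapMask {
    // Hypothetical helper: blur and threshold a voting map to get a mask of the likely object region.
    static Mat extractMask(Mat votingMap, double threshold) {
        Mat blurred = new Mat();
        // The kernel size is an assumption; tune it to the amount of noise in the voting map.
        Imgproc.GaussianBlur(votingMap, blurred, new Size(15, 15), 0);

        Mat mask = new Mat();
        Imgproc.threshold(blurred, mask, threshold, 255, Imgproc.THRESH_BINARY);
        // The connected regions of this mask indicate where (and roughly how large)
        // the object is, which narrows down the subsequent template matching.
        return mask;
    }
}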

Mission impossible? Google Maps mask effects on iPhone or Android

I've come up with an idea that uses a mask effect in Google Maps to highlight a city, but I've searched the internet again and again and found no documentation about this.
So, is it possible to create mask effects like the one below in Android or iOS? And how?
http://cadgis-blog.blogspot.com/2011/10/google-maps-create-cool-mask-effect-on.html
So there are two problems there. The first is where to get the boundary data from, and the second is how to draw it on a map.
Answering the second part first, if you're using the iOS map view (MKMapView): you'll want to look into MKPolygonView. You can definitely highlight an arbitrary polygon, but the usual highlighting effect looks like a coloured overlay inside the polygon.
The thing to do, therefore, would be to make a huge polygon that encompasses the entire country, with your region as a hole in the polygon. That is, I believe, what your demo does. You can make an MKPolygon with the polygonWithPoints:count:interiorPolygons: method, and pass in your 'hole' as an interior polygon to be cut out.
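The question also asks about Android; with the Google Maps Android API the same trick can be done using a Polygon with a hole. A minimal sketch (the outer ring coordinates and fill color are placeholders):

import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.model.LatLng;
import com.google.android.gms.maps.model.PolygonOptions;

import java.util.Arrays;
import java.util.List;

public class MaskOverlay {
    // Dim everything except the highlighted region by drawing a huge polygon
    // with the region's border cut out as a hole.
    static void addMask(GoogleMap map, List<LatLng> regionBorder) {
        PolygonOptions mask = new PolygonOptions()
                // Outer ring roughly covering the whole area of interest (placeholder coordinates).
                .addAll(Arrays.asList(
                        new LatLng(85, -180), new LatLng(85, 180),
                        new LatLng(-85, 180), new LatLng(-85, -180)))
                // The city border becomes a hole, so it stays un-dimmed.
                .addHole(regionBorder)
                .fillColor(0x88000000) // semi-transparent black
                .strokeWidth(0);
        map.addPolygon(mask);
    }
}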
Regarding the first part, how to get the data: what you need is a set of latitude/longitude pairs that make up the border for your region. Your example used this dataset, which is administrative borders for Switzerland. You'll need to find yourself a dataset that encompasses the borders for cities in the country you're interested in. I would imagine that you would store the coordinates of the borders in a database embedded in your app.

Android Bitmap Stretching/Pinching

I am hoping this question/issue is not too vague, as I tried asking something earlier but seemed to have hit a dead end.
Basically I am looking at stretching/pinching parts of a Bitmap within my Android project. Coordinates would be passed to the function to indicate where the move needs to take place (x, y).
I need to find a way to shift pixels up and down (in either a line or an arc) and have the pixels in between warp accordingly (not disappear or hide).
A sample image of what I am trying to achieve would be something like:
I would paste an image here but apparently am not allowed yet. (Image URL: http://t1.gstatic.com/images?q=tbn:ANd9GcQtLEHS-ZRQs3p7XmeU2TM6Vwgfh7DGnh-5nDIDu3Yd7zTIR0zX)
(Just grabbed a random Google face warp)
I have read about things like OpenCV and JavaCV, but they seem like overkill. I am simply looking for something that might allow me to move an array of coordinates from a source point to a destination and produce a smooth warp.
Any help/information is greatly appreciated.
Brad,
Here's a link on how to create a smudge tool. That should help you create images like the one you included in your post.
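For reference, Android's Canvas.drawBitmapMesh can also produce this kind of warp by displacing mesh vertices near the touch point. A minimal sketch of the idea (the grid resolution and falloff radius are assumptions, not taken from the linked tutorial):

import android.graphics.Bitmap;
import android.graphics.Canvas;

public class WarpSketch {
    // Warp a bitmap by pulling mesh vertices near (touchX, touchY) toward (destX, destY).
    static void drawWarped(Canvas canvas, Bitmap bitmap,
                           float touchX, float touchY, float destX, float destY) {
        final int meshWidth = 20, meshHeight = 20; // assumed grid resolution
        float[] verts = new float[(meshWidth + 1) * (meshHeight + 1) * 2];

        int i = 0;
        for (int y = 0; y <= meshHeight; y++) {
            for (int x = 0; x <= meshWidth; x++) {
                float vx = bitmap.getWidth() * x / (float) meshWidth;
                float vy = bitmap.getHeight() * y / (float) meshHeight;

                // Simple falloff: vertices close to the touch point move the most.
                float dx = vx - touchX, dy = vy - touchY;
                float dist = (float) Math.sqrt(dx * dx + dy * dy);
                float influence = Math.max(0f, 1f - dist / 200f); // assumed radius

                verts[i++] = vx + (destX - touchX) * influence;
                verts[i++] = vy + (destY - touchY) * influence;
            }
        }
        canvas.drawBitmapMesh(bitmap, meshWidth, meshHeight, verts, 0, null, 0, null);
    }
}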
Hope that helps!

Custom layout in Android: scrollable graphic with selectable elements over top

I'm fairly new to the Android platform and was wondering if I could get some advice for my current head scratcher:
I'm making an app in which one view will need an image that can be scrolled on one axis, with a load of selectable points over the top of it. Each point needs to be positionable in x and y (unlikely to change once the app is running, but I'll need to fine-tune the positions while I'm developing it).
I'd like to be able to let the user select each point and have a graphic drawn on the point the user has selected or just draw a graphic on one/more points without user intervention.
I thought that for the selectable points I could extend the checkbox with a custom image for the selected state - does that sound right, or is there a better way of doing this? Is there anything I can read up on about doing this? I can't seem to find anything on the net about replacing the default images.
I was going to use AbsoluteLayout, but I see that it has been deprecated and I can't find anything to replace it.
Can anyone give me some code or advice on where to read up on what I need to do?
Thank you in advance
This really feels like something you should be doing with the Canvas and 2D graphics, rather than trying to twist the widget framework to fit.
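A minimal sketch of that approach (the class name, colors and touch radius are my own illustration): draw the bitmap offset by a scroll amount and draw the points on top, toggling a point when it is tapped.

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.PointF;
import android.view.MotionEvent;
import android.view.View;

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a scrollable background image with tappable points drawn on top.
public class PointsOverlayView extends View {
    private final Bitmap background;
    private final List<PointF> points = new ArrayList<>();   // positions in image coordinates
    private final List<Boolean> selected = new ArrayList<>();
    private final Paint pointPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float scrollOffsetX = 0f;  // one-axis scroll offset, updated by your drag handling

    public PointsOverlayView(Context context, Bitmap background) {
        super(context);
        this.background = background;
    }

    public void addPoint(float x, float y) {
        points.add(new PointF(x, y));
        selected.add(false);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            float x = event.getX() + scrollOffsetX; // convert to image coordinates
            float y = event.getY();
            for (int i = 0; i < points.size(); i++) {
                float dx = points.get(i).x - x, dy = points.get(i).y - y;
                if (dx * dx + dy * dy < 40 * 40) {     // assumed touch radius
                    selected.set(i, !selected.get(i));  // toggle the point
                    invalidate();
                    return true;
                }
            }
        }
        return super.onTouchEvent(event);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(background, -scrollOffsetX, 0, null);
        for (int i = 0; i < points.size(); i++) {
            pointPaint.setColor(selected.get(i) ? 0xFF00AA00 : 0xFFAA0000);
            canvas.drawCircle(points.get(i).x - scrollOffsetX, points.get(i).y, 20f, pointPaint);
        }
    }
}

The scroll offset here would be updated by your own drag handling (omitted), and the points can be hard-coded while you fine-tune their positions during development.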
