Android Bitmap Stretching/Pinching

I am hoping this question/issue is not too vague, as I tried asking something similar earlier but seem to have come to a dead end.
Basically I am looking at stretching/pinching parts of a Bitmap within my Android project. There would be coordinates passed to the function in order to indicate where the move would need to take place (x,y).
I need to find a way to shift pixels up and down (in either a line or arc type format) and allow the pixels in between to be warped accordingly (not disappear or hide).
A sample image of what I am trying to achieve would be something like:
I would paste an image here but apparently am not allowed yet. (Image URL: http://t1.gstatic.com/images?q=tbn:ANd9GcQtLEHS-ZRQs3p7XmeU2TM6Vwgfh7DGnh-5nDIDu3Yd7zTIR0zX)
(Just grabbed a random Google face warp)
I have read about a few things like OpenCV and JavaCV, but they seem like overkill. I am simply looking for something that would let me move an array of coordinates from a source point to a destination and produce a smooth warp.
Any help/information is greatly appreciated.

Brad,
Here's a link on how to create a smudge tool. That should help you create images like the one you included in your post.
Hope that helps!
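If it helps, here is a rough, untested sketch of the general idea using Android's Canvas.drawBitmapMesh: displace the vertices of a mesh near the touch point and let the bitmap warp with them. The method name, the 20x20 mesh, and the falloff are just assumptions for illustration:

import android.graphics.Bitmap;
import android.graphics.Canvas;

// Minimal sketch: warp a bitmap by displacing mesh vertices near (touchX, touchY).
// warpBitmap, the mesh size and the falloff are illustrative, not from the question.
public static Bitmap warpBitmap(Bitmap src, float touchX, float touchY,
                                float dragX, float dragY, float radius) {
    final int mesh = 20;                                   // 20x20 grid of mesh cells
    float[] verts = new float[(mesh + 1) * (mesh + 1) * 2];
    int i = 0;
    for (int y = 0; y <= mesh; y++) {
        for (int x = 0; x <= mesh; x++) {
            float vx = src.getWidth() * x / (float) mesh;
            float vy = src.getHeight() * y / (float) mesh;
            // Displace vertices near the touch point, fading out with distance.
            float d = (float) Math.hypot(vx - touchX, vy - touchY);
            if (d < radius) {
                float falloff = (radius - d) / radius;     // 1 at the touch point, 0 at the edge
                vx += dragX * falloff;
                vy += dragY * falloff;
            }
            verts[i++] = vx;
            verts[i++] = vy;
        }
    }
    Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
    new Canvas(out).drawBitmapMesh(src, mesh, mesh, verts, 0, null, 0, null);
    return out;
}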

Related

Tracing an SVG path

I'm currently working on an app that will teach users how to write foreign characters (character tracing/alphabet tracing) such as Kanji, Hangul, Arabic, etc. I made the characters in Adobe Illustrator and imported them as an XML file into Android. The SVG then serves as a guide for tracing the strokes and detecting the user's gesture: the user should follow the stroke, and it should be filled in once it is traced correctly; otherwise the app should show which stroke the user should trace first.
Please see the sample image below:
The red line below is my gesture while the green line shows the correct way of tracing the character before proceeding to the other strokes.
Has anyone here already worked on this kind of project? Is it possible to do it using native Android gesture detection? Thanks in advance.
Disclaimer: The screenshot below is from the app Japanese Kanji Study, developed by Chase Colburn
If I were you, I probably wouldn't use SVG <path> elements. I would use a sequence (array) of points (i.e. the equivalent of an SVG <polyline>). The points should be close enough together that they look like a smooth line when drawn, or you could apply some smoothing when you render them.
The advantage of the points array is that it is much easier to find the closest point to your touch location, than it is to find the closest point on an arbitrary <path>. And when you are "tracing" with the finger, you just need to draw a line through all the points up to the one closest to your touch location.
Obviously for most characters you would actually have two or more point arrays. But you would just work with each array in sequence.
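For what it's worth, a minimal sketch of the closest-point search over such a points array might look like this (the stroke list and method name are hypothetical):

import android.graphics.PointF;
import java.util.List;

// Sketch only: find the index of the stroke point closest to the touch location.
static int closestPointIndex(List<PointF> stroke, float touchX, float touchY) {
    int best = 0;
    float bestDist = Float.MAX_VALUE;
    for (int i = 0; i < stroke.size(); i++) {
        PointF p = stroke.get(i);
        float dx = p.x - touchX, dy = p.y - touchY;
        float d = dx * dx + dy * dy;              // squared distance is enough for comparison
        if (d < bestDist) { bestDist = d; best = i; }
    }
    return best;
}

In onDraw you would then build a Path through the points up to the returned index and draw it, which gives the partially traced stroke.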
Actually, just take a look at HTML and get some ideas. In HTML you can define a coordinate area inside an image map and make it clickable. The following link elaborates on what I am trying to say.
So after defining the areas, you can give them an order: area 1, area 2, ... area n. Then you only need the first area to be clickable; give it a flag, and if the flag is true, change the background color when it is touched. When the first area is touched, set the next area's flag to true. It is all up to you; this is just one possible solution. But the main point is that in XML you can create a MappedImage with coordinates.

Motion to draw numbers on Android

I'm new to Android, and I have to develop an application that can help an autistic child learn numbers. I have a few ideas and have been trying to learn and implement the code, but it has failed. The question is: how can I apply motion code or a sprite to draw a number or letter? For example, I want to make the penguin move along the line and draw a number nine.
There is an example from mybringback.com in which an image moves to draw a rectangle. How can I adapt it to draw a number? I'm sorry if I'm asking too much; I'm just trying to get some ideas.
I think you should first build a utility program in order to create the "path vector".
What I mean by a path vector is simply a vector of Points (where a point has an x value and a y value). Your utility should let you draw whatever you want with a simple pen: draw on a surface and store points while the mouse is down, and ignore points while the mouse is up.
Then, in the main program, you will just have to read back the path of your number/letter.
I've tried to implement something like this for the Sugar OLPC platform, without serializing the path into files: I was able to draw and to view the animation, and I used the process I've just described.
Hope it can help you.
P.S.: I used the word mouse, but you guessed that I mean the finger...
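A minimal sketch of the point-recording idea described above, assuming it lives inside a custom View subclass (the recordedPath field name is just illustrative):

import android.graphics.PointF;
import android.view.MotionEvent;
import java.util.ArrayList;
import java.util.List;

// Sketch: store points while the finger is down, stop when it is lifted.
private final List<PointF> recordedPath = new ArrayList<>();

@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
        case MotionEvent.ACTION_MOVE:
            recordedPath.add(new PointF(event.getX(), event.getY()));
            invalidate();    // redraw so the stroke appears while drawing
            return true;
        case MotionEvent.ACTION_UP:
            return true;     // finger lifted: stop recording
    }
    return super.onTouchEvent(event);
}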
There are various ways to achieve animation effects. One approach that is quite versatile involves creating a custom View or SurfaceView in which you override the onDraw method. Various tutorials can be found on this; the official Android discussion of it is here:
http://developer.android.com/guide/topics/graphics/2d-graphics.html#on-view
Your implementation will look something like this:
// Find elapsed time since previous draw
// Compute new position of drawable/bitmap along figure
// Draw bitmap in appropriate location
// Add line to buffer containing segments of curve drawn so far
// Render all segments in curve buffer
// Take some action to call for the rendering of the next frame (this may be done in another thread)
Obviously a simplification. For a very simplistic tutorial, see here:
http://www.techrepublic.com/blog/software-engineer/bouncing-a-ball-on-androids-canvas/1733/
Note that different implementations of this technique will require different levels of involvement from you; for example, if you use a SurfaceView, you are in charge of driving the drawing yourself (typically from your own thread), whereas subclassing the normal View lets you leave Android in charge of redrawing (at the expense of limiting your ability to draw on a different thread). In this respect, Google remains your friend =]
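To make the steps above concrete, here is a rough sketch of a custom View that moves a sprite along a Path (e.g. the digit 9) and leaves a trail behind it, using PathMeasure. The class name, field names and the 3-second duration are assumptions for illustration, not anything from the linked tutorials:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.PathMeasure;
import android.os.SystemClock;
import android.view.View;

// Sketch only: animate a sprite along "figurePath" and draw the traced part of the figure.
public class TracingView extends View {
    private final Path figurePath = new Path();      // the shape of the number/letter
    private final Path drawnSoFar = new Path();      // buffer of segments already drawn
    private final PathMeasure measure;
    private final Paint strokePaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Bitmap sprite;                      // e.g. the penguin
    private final long startTime = SystemClock.uptimeMillis();
    private static final long DURATION_MS = 3000;

    public TracingView(Context context, Path figure, Bitmap sprite) {
        super(context);
        this.figurePath.set(figure);
        this.sprite = sprite;
        this.measure = new PathMeasure(figurePath, false);
        strokePaint.setStyle(Paint.Style.STROKE);
        strokePaint.setStrokeWidth(8);
        strokePaint.setColor(Color.BLUE);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // Elapsed time since the animation started -> fraction of the figure to reveal.
        float t = Math.min(1f, (SystemClock.uptimeMillis() - startTime) / (float) DURATION_MS);
        float[] pos = new float[2];
        measure.getPosTan(t * measure.getLength(), pos, null);     // position along the figure

        // Add the segment drawn so far to the buffer and render it.
        drawnSoFar.reset();
        measure.getSegment(0, t * measure.getLength(), drawnSoFar, true);
        canvas.drawPath(drawnSoFar, strokePaint);

        // Draw the sprite at the current position.
        canvas.drawBitmap(sprite, pos[0] - sprite.getWidth() / 2f,
                pos[1] - sprite.getHeight() / 2f, null);

        if (t < 1f) {
            postInvalidateOnAnimation();                            // ask for the next frame
        }
    }
}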

Android: How to detect these objects in images? (Image included). Tried OpenCV and metaio SDK, but neither works well enough

I have recently been working on object detection/recognition in images captured from an Android device's camera.
The objects I am trying to detect are all kinds of buttons that look like this:
Picture of buttons
So far I have tried OpenCV and also the metaio SDK. Results:
OpenCV was always detecting something, but gave lots of false hits. It is also too much work to collect all the pictures needed for what I have in mind. I have tried three approaches with OpenCV:
Feature detection (SURF, ORB and so on) -> way too slow, and my objects don't have enough features.
Template matching -> seems to work only when the template is cut exactly out of the scene image.
Training classifiers -> this worked the best so far, but it is too much work for my goal and still gives too many false detections.
The metaio SDK worked OK when I took my reference images (the icon part of each button) from a picture like the one shown above, then printed the full image and pointed my Android device's camera at the printed picture. But when I tried with the real buttons (not a picture of them), almost nothing was detected anymore. The metaio documentation says the reference images need to have lots of features and color differences, and should not consist only of white text. Well, as you can see, my reference images are exactly the opposite of what they should be, but that's just how the buttons look ;)
So my question is: does anyone have a suggestion for what else I could try in order to detect and recognize each of these buttons when I point my Android camera at them?
As a suggestion, could you try the following approach:
Class-Specific Hough Forest for Object Detection
They provide a C implementation. Compile and run it to see the results, then replace the positive and negative training images with your own, according to the following rules:
In a car you will need to define the following 3 areas:
target region (the image you provided is a good representation of a target region)
nearby working area (this area carries information about your target's relative location); I would recommend an area 3-5 times the size of the target region, around the target, as a good working area
everything outside the above can be used as negative images
then,
Use "many" positive images (100-1000) at different viewing angles (-30 - +30 degrees) and various distances.
You will have to make assumptions at which viewing angles and distances your users will use the application. The more strict they are the better performance you will get. A simple "hint" camera overlay can give a good idea to people what you expect the working area to be.
Use few times (3-5) more different negative image set which includes pictures of things that might be in the camera but should not contribute any target position information.
Do not use big images, somewhere around 100-300px in width should be enough
Assemble the database and modify the configuration file that the code comes with. Run the program and see whether the performance is OK for your needs.
The program will return a voting map (cloud) for the object you are looking for. Add a Gaussian blur to it and apply a threshold (you will have to make another assumption for this threshold value).
The extracted mask will define the area you are looking for. The size of the masked region can give you a good estimate of the object's scale. Given this information, it will be much easier to select a proper template and perform template matching.
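For illustration, a sketch of that blur/threshold/bounding-box step with the OpenCV Java bindings might look like the following; votingMap, the kernel size and the 0.6 threshold are placeholder assumptions:

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

// Sketch: smooth the voting map, threshold it, and take the bounding box of the largest
// blob as a rough estimate of the object's location and scale.
static Rect estimateObjectRegion(Mat votingMap) {
    Mat blurred = new Mat();
    Imgproc.GaussianBlur(votingMap, blurred, new Size(15, 15), 0);

    Mat mask = new Mat();
    Imgproc.threshold(blurred, mask, 0.6, 1.0, Imgproc.THRESH_BINARY);
    mask.convertTo(mask, CvType.CV_8U, 255);           // findContours needs an 8-bit image

    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    Rect best = null;
    for (MatOfPoint c : contours) {
        Rect r = Imgproc.boundingRect(c);
        if (best == null || r.area() > best.area()) best = r;   // keep the largest region
    }
    return best;                                        // null if nothing passed the threshold
}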
(Also, some thoughts.) You could also try a small trick: use the goodFeaturesToTrack function with the mask you got to obtain a set of locations, and compare them with the corresponding locations on a template. Construct an SSD and solve it for rotation, scale and translation parameters by minimizing the alignment error (though I am not sure whether this approach will work).

Is there a way to make custom path effects in Android?

I'd like to make an Android app that will display a drawing of lines, and I'd like to make something beautiful (big challenge).
Basically the app will show a drawing based on x/y coordinates. The result must be identical to another drawing in a web-based app. For the web-based app, we'll be using this library.
I know how to draw paths on a canvas in an Android app, but I don't know how to apply a custom effect to them. I've noticed I could use a PathEffect with this method:
paint.setPathEffect(myEffect);
but I'm not sure how to create an effect other than the available ones (ComposePathEffect, CornerPathEffect, DashPathEffect, DiscretePathEffect, PathDashPathEffect, SumPathEffect).
Any tips and help would be much appreciated!!
One option would be to look at the source code of these path effects to understand how they work and what the underlying idea is, and from that derive how you could make your own path effect.
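As far as I know, the built-in effects are backed by native code, so you cannot simply subclass PathEffect and override its drawing in Java. A practical starting point is to compose the existing ones, for example stamping your own shape along the path with PathDashPathEffect. A small sketch (the triangle stamp and all numeric values are arbitrary examples):

import android.graphics.ComposePathEffect;
import android.graphics.CornerPathEffect;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.PathDashPathEffect;
import android.graphics.PathEffect;

// Sketch: a "custom-looking" line built by composing existing path effects.
static Paint makeStampedStrokePaint() {
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setStyle(Paint.Style.STROKE);

    Path stamp = new Path();               // a small triangle repeated along the line
    stamp.moveTo(0, -6);
    stamp.lineTo(12, 0);
    stamp.lineTo(0, 6);
    stamp.close();

    PathEffect dashes = new PathDashPathEffect(stamp, 16, 0, PathDashPathEffect.Style.ROTATE);
    PathEffect rounded = new CornerPathEffect(24);
    paint.setPathEffect(new ComposePathEffect(dashes, rounded));   // rounds corners first, then stamps
    return paint;
}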

Rectify a rectangle to get area

I need to get the area of a known object inside a scene in order to get the distance to it. The problem is rectifying it so that the area is independent of the viewing angle.
I'm using OpenCV (on Android) with some Java code that is equivalent to this:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
In other words: how do I get the area of the object as it would be observed perpendicularly from that distance, given the H matrix?
Thank you in advance, and sorry for my poor English... :)
You can call cvCalibrateCamera, but I am not sure whether it works with only one image. The algorithm it is based on can cope with the one-image case; see section 3.1, where it says "if n=1...". So in a pinch you can re-implement it.
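Separately, as a rough sketch (an assumption on my part, not the calibration route above): if H maps the fronto-parallel reference image of the object onto the scene, as in the linked tutorial, then transforming the detected corners back through H^-1 puts them in the reference frame, where the measured area no longer depends on the viewing angle. With the OpenCV Java bindings (sceneCorners is a hypothetical MatOfPoint2f holding the object's corners found in the scene):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

// Sketch: map the scene corners back into the fronto-parallel reference frame and
// measure the quadrilateral's area there (in reference-image pixel units).
static double rectifiedArea(MatOfPoint2f sceneCorners, Mat H) {
    MatOfPoint2f rectified = new MatOfPoint2f();
    Core.perspectiveTransform(sceneCorners, rectified, H.inv());
    return Imgproc.contourArea(rectified);
}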
