I want to create a magic wand tool in Android like the one implemented in Photoshop. Is there an open-source library for this? And if not, can anyone point me in the right direction?
OpenCV has floodFill, which with a little work can give you the magic wand functionality.
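For example, a tap-to-select sketch using OpenCV's Java bindings for Android (the tolerance parameter and the RGBA-to-RGB conversion are assumptions about your input; treat this as a starting point, not a finished tool):

```java
import org.opencv.android.Utils;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;

import android.graphics.Bitmap;

public final class MagicWand {

    // Returns a binary mask (CV_8UC1) of the region connected to the tapped
    // pixel whose color stays within `tolerance` per channel.
    public static Mat select(Bitmap bitmap, int tapX, int tapY, double tolerance) {
        Mat image = new Mat();
        Utils.bitmapToMat(bitmap, image);
        Imgproc.cvtColor(image, image, Imgproc.COLOR_RGBA2RGB);

        // floodFill requires the mask to be 2 px larger in each dimension.
        Mat mask = Mat.zeros(image.rows() + 2, image.cols() + 2, CvType.CV_8UC1);

        int flags = 4                               // 4-connectivity
                | Imgproc.FLOODFILL_MASK_ONLY       // don't modify the image itself
                | Imgproc.FLOODFILL_FIXED_RANGE     // compare against the seed color
                | (255 << 8);                       // value written into the mask

        Scalar diff = new Scalar(tolerance, tolerance, tolerance);
        Imgproc.floodFill(image, mask, new Point(tapX, tapY),
                new Scalar(0), new Rect(), diff, diff, flags);

        // Crop the 1 px border so the mask lines up with the original image.
        return mask.submat(1, mask.rows() - 1, 1, mask.cols() - 1);
    }
}
```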
Basically, you need access to the pixels of the image, which you can get in numerous ways (e.g. through a Canvas). Then your algorithm is a bit like the A* pathfinding algorithm (but not really):
1. Set a color-difference threshold.
2. Define a starting point.
3. Check every pixel around the current point; if it passes the threshold, save its coordinates.
4. For every pixel that passed the threshold, repeat step 3 with that pixel as the new current point.
The pixel-color difference that is tested against the threshold is in essence the Pythagorean theorem applied between the color of the original starting point and the color of the pixel you are comparing: d = sqrt((r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2), i.e. the Euclidean distance between the two colors in RGB space.
Of course Photoshop has a number of extremely efficient algorithms, but essentially it boils down to the above.
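In code, those steps map onto a plain breadth-first flood fill. A minimal sketch, assuming you have the ARGB pixel array from Bitmap.getPixels() (the class and method names are made up for illustration):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public final class WandFill {

    // Marks every pixel connected to (startX, startY) whose Euclidean RGB
    // distance from the seed color is below `threshold`.
    public static boolean[] select(int[] pixels, int width, int height,
                                   int startX, int startY, double threshold) {
        boolean[] selected = new boolean[pixels.length];
        boolean[] visited = new boolean[pixels.length];
        int seed = pixels[startY * width + startX];

        Deque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{startX, startY});
        visited[startY * width + startX] = true;

        int[][] neighbors = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            int idx = p[1] * width + p[0];
            // Pixels that fail the threshold are not selected and not expanded.
            if (distance(seed, pixels[idx]) > threshold) continue;
            selected[idx] = true;
            for (int[] d : neighbors) {
                int nx = p[0] + d[0], ny = p[1] + d[1];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int nIdx = ny * width + nx;
                if (!visited[nIdx]) {
                    visited[nIdx] = true;
                    queue.add(new int[]{nx, ny});
                }
            }
        }
        return selected;
    }

    // Euclidean distance between two ARGB colors in RGB space.
    private static double distance(int c1, int c2) {
        int dr = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
        int dg = ((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF);
        int db = (c1 & 0xFF) - (c2 & 0xFF);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }
}
```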
I'm working on a university project in which I need to visualize, on a smartphone, data from pressure sensors built into an insole.
I need to draw on a View, as a background, a footprint, something like the image below but just for one foot.
I don't want to use a static image because it could lose too much quality at different screen resolutions, so I'm trying to do it in code.
The main problem is that I'm not very skilled in graphics programming, so I don't have a smart approach to this problem.
The first and only idea I had was to take the insole's CAD representation, with its real dimensions, scale it according to the screen's dimensions, and assemble it from the simple shapes (arcs, circles, etc.) available in Android.
This way I'm creating a Path that composes the whole footprint, which I then draw with a Canvas.
This method would let me do the work, but it's awful and needs an exceptional amount of time and effort to set up every part.
I have searched for some similar questions but I haven't found anything to solve my problem.
Is there (of course there is) a smarter way to do this, saving time and energy?
Thank you
Of course, you can always use OpenGL ES.
This method might not save you time and energy, but it will give you better graphics and room to improve things later.
The concept is to create a FloatBuffer with all the points of your footprint and then draw it with connected lines on a GLSurfaceView.
To get started with OpenGL ES you can use this tutorial: https://developer.android.com/guide/topics/graphics/opengl.html
And here is an example: https://developer.android.com/training/graphics/opengl/index.html
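To make the idea concrete, here is a rough sketch of such a line-drawing class with GLES 2.0 (the class name is mine, and it assumes your outline points are already in normalized device coordinates; construct it in onSurfaceCreated, where a GL context is current):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

public class FootprintOutline {

    private static final String VERTEX_SHADER =
            "attribute vec4 aPosition; void main() { gl_Position = aPosition; }";
    private static final String FRAGMENT_SHADER =
            "precision mediump float; void main() { gl_FragColor = vec4(1.0); }";

    private final FloatBuffer vertexBuffer;
    private final int vertexCount;
    private final int program;

    // `outline` holds x,y pairs in normalized device coordinates (-1..1),
    // e.g. digitized from the insole CAD drawing.
    public FootprintOutline(float[] outline) {
        vertexCount = outline.length / 2;
        vertexBuffer = ByteBuffer.allocateDirect(outline.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        vertexBuffer.put(outline).position(0);

        int vs = compile(GLES20.GL_VERTEX_SHADER, VERTEX_SHADER);
        int fs = compile(GLES20.GL_FRAGMENT_SHADER, FRAGMENT_SHADER);
        program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, vs);
        GLES20.glAttachShader(program, fs);
        GLES20.glLinkProgram(program);
    }

    public void draw() {
        GLES20.glUseProgram(program);
        int position = GLES20.glGetAttribLocation(program, "aPosition");
        GLES20.glEnableVertexAttribArray(position);
        GLES20.glVertexAttribPointer(position, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer);
        // GL_LINE_LOOP connects the points and closes the outline.
        GLES20.glDrawArrays(GLES20.GL_LINE_LOOP, 0, vertexCount);
        GLES20.glDisableVertexAttribArray(position);
    }

    private static int compile(int type, String source) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, source);
        GLES20.glCompileShader(shader);
        return shader;
    }
}
```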
I'm new to this Android thing, and I have to develop an application that can help autistic children learn numbers. I have a few ideas and I've been trying to learn and implement the code, but I've failed. The question is: how can I apply motion code or a sprite to draw a number or letter? For example, like this: I want to make the penguin move along the line and draw a number nine.
There is an example from mybringback.com in which an image moves to draw a rectangle. How can I adapt it to draw a number? I'm sorry if I'm asking too much, I'm just trying to get some ideas.
I think that you should first build a utility program in order to create the "path vector".
What I mean by path vector is simply a vector of Points (where a point has an x value and a y value). Your utility should let you draw whatever you want with a simple pen: draw on a surface, store points while the mouse is down, and ignore points while the mouse is up.
Then, in the main program, you will just have to read the path of your number/letter.
I've implemented something like this for the Sugar OLPC platform, without serializing paths into files: I was able to draw and to view the animation, and I used exactly the process I've just described.
Hope it can help you.
P.S.: I used the word mouse, but you guessed that I'm talking about a finger...
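A minimal sketch of the recording utility described above (the class name is mine; a real version would also separate strokes when the finger lifts):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.PointF;
import android.view.MotionEvent;
import android.view.View;
import java.util.ArrayList;
import java.util.List;

public class PathRecorderView extends View {

    private final List<PointF> pathPoints = new ArrayList<>();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public PathRecorderView(Context context) {
        super(context);
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(5f);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_MOVE:
                // Finger is down: record the point.
                pathPoints.add(new PointF(event.getX(), event.getY()));
                invalidate();
                return true;
            default:
                // Finger is up: ignore.
                return super.onTouchEvent(event);
        }
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (pathPoints.isEmpty()) return;
        Path path = new Path();
        path.moveTo(pathPoints.get(0).x, pathPoints.get(0).y);
        for (int i = 1; i < pathPoints.size(); i++) {
            path.lineTo(pathPoints.get(i).x, pathPoints.get(i).y);
        }
        canvas.drawPath(path, paint);
    }

    // The recorded "path vector" the main program will replay.
    public List<PointF> getPathPoints() {
        return pathPoints;
    }
}
```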
There are various ways to achieve animation effects. One quite versatile approach involves creating a custom View or SurfaceView in which you override the onDraw method. Various tutorials on this can be found; the official Android discussion of it is here:
http://developer.android.com/guide/topics/graphics/2d-graphics.html#on-view
Your implementation will look something like this:
// Find elapsed time since previous draw
// Compute new position of drawable/bitmap along figure
// Draw bitmap in appropriate location
// Add line to buffer containing segments of curve drawn so far
// Render all segments in curve buffer
// Take some action to call for the rendering of the next frame (this may be done in another thread)
Obviously a simplification. For a very simplistic tutorial, see here:
http://www.techrepublic.com/blog/software-engineer/bouncing-a-ball-on-androids-canvas/1733/
Note that different implementations of this technique require different levels of involvement on your part; for example, if you use a SurfaceView, you are in charge of calling the onDraw method, whereas subclassing the normal View lets you leave Android in charge of redrawing (at the expense of limiting your ability to draw on a different thread). In this respect, Google remains your friend =]
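Putting the outline above into an actual View subclass might look roughly like this (a sketch; the class name, the float[] curve representation, and the 30 ms-per-point pacing are all assumptions):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

public class TracingView extends View {

    private final Bitmap sprite;   // e.g. the penguin
    private final float[] curve;   // x,y pairs along the figure to trace
    private final Path drawnSoFar = new Path();
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private long startTime = -1;
    private int lastIndex = 0;

    public TracingView(Context context, Bitmap sprite, float[] curve) {
        super(context);
        this.sprite = sprite;
        this.curve = curve;
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(8f);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        // 1. Elapsed time since the animation started.
        if (startTime < 0) startTime = System.currentTimeMillis();
        long elapsed = System.currentTimeMillis() - startTime;

        // 2. New position along the figure (one curve point per 30 ms).
        int index = Math.min((int) (elapsed / 30), curve.length / 2 - 1);
        float x = curve[index * 2];
        float y = curve[index * 2 + 1];

        // 3. Extend the buffer of segments drawn so far.
        if (drawnSoFar.isEmpty()) drawnSoFar.moveTo(curve[0], curve[1]);
        for (int i = lastIndex + 1; i <= index; i++) {
            drawnSoFar.lineTo(curve[i * 2], curve[i * 2 + 1]);
        }
        lastIndex = index;

        // 4. Render the curve so far, then the sprite on top of it.
        canvas.drawPath(drawnSoFar, paint);
        canvas.drawBitmap(sprite, x - sprite.getWidth() / 2f,
                y - sprite.getHeight() / 2f, null);

        // 5. Ask for the next frame until the end of the curve is reached.
        if (index < curve.length / 2 - 1) postInvalidateOnAnimation();
    }
}
```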
I have been working with object detection/recognition in images captured by an Android device camera recently.
The objects I am trying to detect are all kinds of buttons that look like this:
Picture of buttons
So far I have been trying with OpenCV and also with the Metaio SDK. Results:
OpenCV was always detecting something, but gave lots of false hits, and it is too much work to collect all the pictures for what I have in mind. I have tried three approaches with OpenCV:
Feature detection (SURF, ORB and so on): way too slow, and there are not enough features on my objects.
Template matching: seems to work only when the template is exactly a part of the scene image.
Training classifiers: this worked best so far, but it is too much work for my goal, and it still gives too many false detections.
The Metaio SDK worked OK when I took my reference images (the icon part of each button) out of a picture like the one shown above, then printed the full image and pointed my Android device camera at the printout. But when I tried it with the real buttons (not a picture of them), almost nothing was detected anymore. The Metaio documentation says that reference images need lots of features and color differences and should not consist only of white text. Well, as you can see, my reference images are exactly the opposite of what they should be. But that's just how the buttons look ;)
So, my question is: does anyone have a suggestion for what else I could try in order to detect and recognize each of those buttons when I point my Android camera at them?
As a suggestion, can you try the following approach:
Class-Specific Hough Forests for Object Detection
They provide a C code implementation. Compile and run it to see the results, then replace the positive and negative training images with your own according to the following rules:
In your case you will need to define the following 3 areas:
target region (the image you provided is a good representation of a target region)
nearby working area (this area carries information about your target's relative location); I would recommend an area 3-5 times the size of the target region, around the target, as a good working area
everything outside the above can be used as negative images
Then:
Use "many" positive images (100-1000) at different viewing angles (-30 to +30 degrees) and various distances.
You will have to make assumptions about the viewing angles and distances at which your users will use the application. The stricter these assumptions are, the better performance you will get. A simple "hint" camera overlay can give people a good idea of what you expect the working area to be.
Use a negative image set that is a few times (3-5) larger and includes pictures of things that might appear in the camera view but should not contribute any target position information.
Do not use big images; somewhere around 100-300 px in width should be enough.
Assemble the database and modify the configuration file that the code comes with. Run the program and see whether the performance is OK for your needs.
The program will return a voting map for the object you are looking for. Add Gaussian blur to it, and apply a threshold (you will have to make another assumption for this threshold value).
The extracted mask will define the area you are looking for. The size of the masked region gives you a good estimate of the object's scale. Given this information, it will be much easier to select a proper template and perform template matching.
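As an illustration of that blur-threshold-mask step in OpenCV's Java bindings (the 15x15 kernel and the 0.6-of-maximum threshold are placeholder values you would tune, per the assumption mentioned above):

```java
import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public final class VotingMapPostprocess {

    // Turns a single-channel voting map into candidate bounding boxes.
    public static List<Rect> candidates(Mat votingMap) {
        // Smooth the votes so nearby maxima merge into one blob.
        Mat smoothed = new Mat();
        Imgproc.GaussianBlur(votingMap, smoothed, new Size(15, 15), 0);

        // Threshold relative to the strongest vote (0.6 is a guess to tune).
        Core.MinMaxLocResult mm = Core.minMaxLoc(smoothed);
        Mat mask = new Mat();
        Imgproc.threshold(smoothed, mask, 0.6 * mm.maxVal, 255, Imgproc.THRESH_BINARY);
        mask.convertTo(mask, CvType.CV_8UC1);

        // Each connected region of the mask is a detection candidate; its
        // bounding box also gives a scale estimate for template matching.
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(mask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        List<Rect> boxes = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            boxes.add(Imgproc.boundingRect(contour));
        }
        return boxes;
    }
}
```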
(Also some thoughts) You can try a small trick as well: use the goodFeaturesToTrack function with the mask you got to obtain a set of locations, and compare them with the corresponding locations on a template. Construct an SSD and solve it for the rotation, scale and translation parameters by minimizing the alignment error (though I am not sure this approach will work).
I want to move an image in a 3-dimensional way in my Android application according to my device's movement. I am getting the x, y, z coordinate values through a SensorEvent, but I am unable to find APIs for moving an image in 3 dimensions. Could anyone please point me to a way (any APIs) to achieve this?
Depending on the particulars of your application, you could consider using OpenGL ES for manipulations in three dimensions. A quite common approach then would be to render the image onto a 'quad' (basically a flat surface consisting of two triangles) and manipulate that using matrices you construct based on the accelerometer data.
An alternative might be to look into extending the standard ImageView, which out of the box supports manipulation by 3x3 matrices. For rotation this will be sufficient, but you will obviously need an extra dimension for translation, which you're probably after, given your remark about 'moving' an image.
If you decide to go with the first suggestion, this example code should be quite useful to start with. You'll probably be able to plug your sensor data straight into that and simply add the required math for the matrix manipulations.
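For the sensor-to-matrix part of the first suggestion, a minimal sketch using android.opengl.Matrix (the class name and the 0.05f sensor-to-GL scale factor are assumptions to tune):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.opengl.Matrix;

public class SensorModelMatrix implements SensorEventListener {

    private final float[] modelMatrix = new float[16];

    public SensorModelMatrix() {
        Matrix.setIdentityM(modelMatrix, 0);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;

        // Map the three sensor axes onto a translation in GL space.
        // 0.05f is an arbitrary sensor-units-to-GL-units factor; tune it.
        Matrix.setIdentityM(modelMatrix, 0);
        Matrix.translateM(modelMatrix, 0,
                event.values[0] * 0.05f,
                event.values[1] * 0.05f,
                event.values[2] * 0.05f);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    // Multiply this into your view/projection matrix each frame before
    // drawing the quad that carries the image.
    public float[] getModelMatrix() {
        return modelMatrix;
    }
}
```

Register an instance with SensorManager.registerListener for the accelerometer, and combine the returned matrix with your view and projection matrices in the renderer.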
I usually play a game called Burako.
It has colored playing pieces with numbers from 1-13.
After a match finishes you have to count your points.
For example:
1 == 15 points
2 == 20 points
I want to create an app that takes a picture and counts the pieces for me.
So I need something that recognizes an image inside an image.
I was about to read up on OpenCV, since there is an Android port, but it feels like there should be something simpler for this.
What do you think?
I haven't used the Android port, but I think it's doable under good lighting conditions.
I would obtain the minimal bounding box of each piece and rotate it accordingly, so you can compare it with a model image.
Another way could be to get the contours of the numbers written on the pieces (which I guess are colored) and do some contour matching against reference numbers.
OpenCV is a big and complex framework, but it's also suitable for simple tasks like this.
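For the contour-matching idea, the core could look like this in OpenCV's Java bindings (a sketch; the binary-input assumption and the 0.1 similarity cutoff are mine, and extracting the reference contour from a model image is left to the caller):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgproc.Imgproc;

import java.util.ArrayList;
import java.util.List;

public final class PieceCounter {

    // Compares each contour found in the photo against a reference contour
    // of a digit. `binaryPhoto` is expected to be a binary image (e.g. after
    // thresholding the numbers by color).
    public static int countMatches(Mat binaryPhoto, MatOfPoint referenceContour) {
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(binaryPhoto, contours, new Mat(),
                Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

        int matches = 0;
        for (MatOfPoint contour : contours) {
            // matchShapes compares Hu moments: lower means more similar.
            double similarity = Imgproc.matchShapes(contour, referenceContour,
                    Imgproc.CONTOURS_MATCH_I1, 0);
            if (similarity < 0.1) matches++;
        }
        return matches;
    }
}
```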