I have used OpenCV 2.3.1 with Android 2.2 to find contours in bitmaps, which seems to be working fine on a Samsung Galaxy Ace, but now I need help with moving those contours. My aim is to make a selected contour follow the user's finger when dragged to a different location. Help of any kind would be appreciated.
EDIT:
I am now able to move the contours based on the user's touch, but then they don't stay at the new position. So I assume I need to erase the image at the original position and redraw it at the new one. Moreover, it's only the surrounding contour that moves, not the pixels of the image within the contour. I am more concerned about the image pixels. How can I get the image pixels to move to the new location? It would also be great if I could somehow get the coordinates of the pixels within the contour.
Sorry, I wanted to upload an image, but it seems new members can't upload images at this stage. For example, I have the contour surrounding the line in pink. When I drag, only the contour moves and the black pixels of the line do not move at all. Is there any way I can get the black pixels within the pink contour to move?
Another problem is that when I try my code on a closed figure like a circle or a square, I get two contours: one for the inner boundary and one for the outer boundary. But again, as I said earlier, I am more interested in the image pixels. Please help.
P.S. - The image can be anything, any shape. I have just taken the example of a line.
First of all you have to add a TouchListener/ClickListener (or something else, I don't know the Android API) to your bitmap or canvas.
When the user touches the screen (the listener is fired), you have to identify which contour the user selected. For this, use the pointPolygonTest function.
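For example, with the OpenCV Java bindings on Android, a minimal sketch of that lookup could look like this (the helper name findTouchedContour is made up, and the touch point is assumed to already be in image coordinates):

import java.util.List;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

// Returns the index of the contour that contains the touched point,
// or -1 if the touch is outside every contour.
static int findTouchedContour(List<MatOfPoint> contours, Point touch) {
    for (int i = 0; i < contours.size(); i++) {
        MatOfPoint2f c2f = new MatOfPoint2f(contours.get(i).toArray());
        // pointPolygonTest: > 0 inside, == 0 on the edge, < 0 outside
        if (Imgproc.pointPolygonTest(c2f, touch, false) >= 0) {
            return i;
        }
    }
    return -1;
}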
About moving: a contour is just a sequence (vector) of Points, so if you want to shift (move) a contour you have to do the following (C++ code):
void moveContour(vector<Point>& contour, int dx, int dy)
{
    for (size_t i = 0; i < contour.size(); i++)
    {
        contour[i].x += dx;
        contour[i].y += dy;
    }
}
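Since the question targets Android, a rough equivalent with the OpenCV Java bindings could look like the sketch below (MatOfPoint stores its points in a Mat, so this converts to an array and back; the method name is just illustrative):

import org.opencv.core.MatOfPoint;
import org.opencv.core.Point;

// Shift every point of the contour by (dx, dy)
static void moveContour(MatOfPoint contour, int dx, int dy) {
    Point[] pts = contour.toArray();
    for (Point p : pts) {
        p.x += dx;
        p.y += dy;
    }
    contour.fromArray(pts);
}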
Hope it helps.
I need a basic idea of how I can warp an image on touch of a particular area. Image filters apply the warp to the whole image, but I want to warp around a single point: for example, if I want to warp the eye of a person, I will touch that point. So I need a basic idea of how to approach this.
I have tried this one, but it also applies filters to the whole image.
https://github.com/Jtfinlay/PhotoWarp
App:
https://play.google.com/store/apps/details?id=hu.tonuzaba.android&hl=en
A warp is not just at a "single point" but over some area that you deform in a smooth way.
To achieve this, you need a geometric transform of the coordinates that works in some neighborhood of the touched point. One way to do this is by applying a square grid on the image and moving the grid nodes around the touched points with some law of yours (for instance, apply a displacement vector to all nodes, with a decaying factor such that far away nodes don't move).
Then you need a resampling function that computes the new coordinates of every pixel and copies the color of the source pixel.
For good results, you must actually work in reverse: scan the destination image and for every pixel retrieve the source coordinates and source pixels. Apply bilinear or bicubic resampling to avoid aliasing.
For ease of implementation, the gridding idea should be adapted as well: rather than deforming the destination grid, keep it unchanged and apply the inverse deformation to the source grid.
Last thing: in the grid approach, see the displacements of the grid nodes as two scalar functions DX(i, j) and DY(i, j) that you can handle separately. From the knowledge of the displacements at the nodes, you can estimate the displacement of any pixel by interpolation (bicubic would be appropriate here).
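As a rough illustration of the reverse-mapping idea (this is not the linked app's code), the sketch below skips the grid and applies one decaying displacement directly: every destination pixel near the touch is pulled from a displaced source position, with a linear falloff so pixels at the edge of the radius stay fixed. All names (warpAroundTouch, dx, dy, radius) are made up, and the nearest-neighbour lookup should be replaced by bilinear or bicubic sampling as described above:

import android.graphics.Bitmap;

static Bitmap warpAroundTouch(Bitmap src, float touchX, float touchY,
                              float dx, float dy, float radius) {
    int w = src.getWidth(), h = src.getHeight();
    Bitmap dst = src.copy(Bitmap.Config.ARGB_8888, true);
    int x0 = Math.max(0, (int) (touchX - radius));
    int x1 = Math.min(w - 1, (int) (touchX + radius));
    int y0 = Math.max(0, (int) (touchY - radius));
    int y1 = Math.min(h - 1, (int) (touchY + radius));
    for (int y = y0; y <= y1; y++) {
        for (int x = x0; x <= x1; x++) {
            float d = (float) Math.hypot(x - touchX, y - touchY);
            if (d >= radius) continue;          // outside the deformed area
            float falloff = 1f - d / radius;    // simple linear decay
            // Inverse mapping: which source pixel ends up at (x, y)?
            int sx = Math.round(x - dx * falloff);
            int sy = Math.round(y - dy * falloff);
            if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                // Nearest-neighbour lookup; replace with bilinear/bicubic
                // sampling to avoid aliasing, as described above.
                dst.setPixel(x, y, src.getPixel(sx, sy));
            }
        }
    }
    return dst;
}

A grid-based version would instead interpolate the DX(i, j) and DY(i, j) node displacements to get the per-pixel displacement, but the inverse-mapping loop stays the same.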
You can use the canvas to detect that portion and suppress the action for that portion in the onTouchListener.
Code sample:
Bitmap pricetagBmp = BitmapFactory.decodeResource(getActivity().getResources(), R.drawable.ic_tag_circle_24dp);
// canvas.drawBitmap(pricetagBmp,left + (right - left) / 2, top + (bottom - top) / 2 - (bounds.height() / 2),circlePaint);
float imageStartX = (left + ((right-left)/2)) - (pricetagBmp.getWidth()/2);
float imageStartY = (top + ((bottom - top) / 2)) - (pricetagBmp.getHeight()/2);
canvas.drawBitmap(pricetagBmp, imageStartX, imageStartY,circlePaint);
And in the onTouchListener, if a touch in that region is detected, you can simply perform no action.
Note: you can replace drawBitmap with drawRect or something else drawn with an invisible color.
I would like to detect collisions between shapes dynamically drawn on a canvas (SurfaceView) for an Android game.
I can easily use the intersect method of Rect or RectF objects, but the result is not very good (see the picture below, where I have a "false" detection).
I don't want to use Bitmap so it's impossible to use the "pixel perfect" method.
Do you know a way to do this for circle, rect, triangle and other basic shape intersections?
Thx for help ;)
For good collision detection you have to create your own models behind the shapes. In those models you specify the conditions under which two objects collide.
For example, a circle is described by its center position and its radius. A square is described by its bottom-left corner and its edge length.
You don't have to describe all possible polygons; you can use so-called bounding boxes, meaning that for a complex random polygon you can use a square or whatever shape fits it best (you can also use multiple shapes for a single object).
After you have the objects in mind, you compute the condition under which each one of them collides with every other shape, including itself.
In your example, the circle and the square collide if the distance from the circle's center to the closest point of the square is less than or equal to the circle's radius.
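A sketch of that circle vs. axis-aligned rectangle condition (parameter names are illustrative): clamp the circle's centre to the rectangle to get the closest point, then compare that distance with the radius.

static boolean circleIntersectsRect(float cx, float cy, float r,
                                    float left, float top, float right, float bottom) {
    // Closest point of the rectangle to the circle's centre
    float nearestX = Math.max(left, Math.min(cx, right));
    float nearestY = Math.max(top, Math.min(cy, bottom));
    float dx = cx - nearestX;
    float dy = cy - nearestY;
    // Collision if that closest point lies within the radius
    return dx * dx + dy * dy <= r * r;
}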
Here you can read more http://devmag.org.za/2009/04/13/basic-collision-detection-in-2d-part-1/
This problem can get very complex, keep it simple if you want something simple.
Here is a directly applicable method I use in my own game to detect circle and rectangle intersection. It takes the ball (which is a view in this case) and the rectangle (also a view) to be checked for collision with the ball as parameters. You can put the method in a Timer and set the interval you want the circle and rectangle to be checked for collision.
Here is the method:
public boolean intersects(BallView ball, Rectangle rect) {
    boolean intersects = false;
    // Overlap test on both axes (screen coordinates: y grows downwards)
    if (ball.getX() + ball.getR() >= rect.getTheLeft() &&
        ball.getX() - ball.getR() <= rect.getTheRight() &&
        ball.getY() + ball.getR() >= rect.getTheTop() &&
        ball.getY() - ball.getR() <= rect.getTheBottom())
    {
        intersects = true;
    }
    return intersects;
}
getR() gets the circle's radius
getX() gets the center of the circle's X position value
getTheLeft() gets the rectangle's left X value
getTheRight() gets the rectangle's right X value
getTheTop() gets the rectangle's top Y value
getTheBottom() gets the rectangle's bottom Y value
If you can't directly use this method in your code, you can still take the logic it entails and implement it in a way that works for you. It detects all collisions without using pseudo-collision detection like a collision box for the circle.
Good luck! And if you have any questions feel free to ask, I'm here to help!
To know if a 2D polygon is colliding with a circle, you can test, for each of its edges, which point on the edge's line is closest to the center of the circle (this might help).
Then check that the point you found lies between the two corners that make up that edge (that is, that the point is actually on the segment and not just on its continuation), and that the distance from that point to the center of the circle is smaller than or equal to the radius of the circle. If both are true for any edge of the polygon, you have a collision. You also have to check the edge cases where a corner of the polygon is inside, or touching, the circle.
For two circles this is easier: check the distance between the centers and compare it to the sum of their radii. If the distance is smaller than or equal to the sum, you have a collision.
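A sketch of both tests described above (names are illustrative; for a polygon you would run the segment test for every edge, plus a separate point-in-polygon check for the case where the circle lies entirely inside):

// Circle vs. segment: find the point on segment (x1,y1)-(x2,y2) closest to
// the circle centre (cx,cy) and compare its distance with the radius.
static boolean circleTouchesSegment(float cx, float cy, float r,
                                    float x1, float y1, float x2, float y2) {
    float vx = x2 - x1, vy = y2 - y1;
    float lenSq = vx * vx + vy * vy;
    // Projection of the centre onto the line, clamped to [0,1] so the
    // closest point stays between the two corners of the segment.
    float t = lenSq == 0 ? 0 : ((cx - x1) * vx + (cy - y1) * vy) / lenSq;
    t = Math.max(0f, Math.min(1f, t));
    float px = x1 + t * vx, py = y1 + t * vy;
    float dx = cx - px, dy = cy - py;
    return dx * dx + dy * dy <= r * r;
}

// Circle vs. circle: compare the centre distance with the sum of the radii.
static boolean circlesTouch(float x1, float y1, float r1,
                            float x2, float y2, float r2) {
    float dx = x2 - x1, dy = y2 - y1;
    float sum = r1 + r2;
    return dx * dx + dy * dy <= sum * sum;
}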
I'm trying to recognize hand positions in OpenCV for Android. I'd like to reduce a detected hand shape to a set of simple lines (= point sequences). I'm using a thinning algorithm to find the skeleton lines of detected hand shapes. Here's an exemplary result (image of my left hand):
In this image I'd like to get the coordinates of the skeleton lines, i.e. "vectorize" the image. I've tried HoughLinesP but this only produces huge sets of very short lines, which is not what I want.
My second approach uses findContours:
// Get contours
Mat skeletonFrame; //image above
ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(skeletonFrame, contours, new Mat(), Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
// Find longest contour
double maxLen = 0;
MatOfPoint max = null;
for (MatOfPoint c : contours) {
    double len = Imgproc.arcLength(Util.convert(c), true); //Util.convert converts between MatOfPoint and MatOfPoint2f
    if (len > maxLen) {
        maxLen = len;
        max = c;
    }
}
// Simplify detected contour
MatOfPoint2f result = new MatOfPoint2f();
Imgproc.approxPolyDP(Util.convert(max), result, 5.0, false);
This basically works; however, the contours returned by findContours are always closed, which means that all the skeleton lines are represented twice.
Exemplary result: (gray lines = detected contours, not skeleton lines of first image)
So my question is: How can I avoid these closed contours and only get a collection of "single stroke" point sequences?
Did I miss something in the OpenCV docs? I'm not necessarily asking for code, a hint for an algorithm I could implement myself would also be great. Thanks!
I would start with a real hand skeleton as kinematics:
find the finger endpoints, the hand/wrist base and the perimeter boundary (red)
solve the inverse kinematics, for example by CCD, to match the finger endpoints without leaving the image; this way you should obtain an anatomically correct result
for simplification you can use kinematics like this
You should handle male/female/child hands differently or use some kind of calibration or measurement, because of the different finger lengths. As you can see, I skip the hand/wrist base bones; they are not that important. The red outline can be found where the perimeter has a smaller curve radius.
How to solve your problem in your current implementation?
The first thinning approach is better, so once you have the huge set of lines, connect them into polylines and then compute the angle of each line. If two joined lines have a similar angle (up to a threshold), join them; that should do what you want. But do not expect to get lines similar to human bones, especially for curves the result will be quite different, in both the number of lines and the shape.
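A minimal sketch of that joining step, assuming you already have an ordered polyline as a list of OpenCV Points (the name mergeByAngle and the threshold are made up):

import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Point;

// Walk an ordered polyline and drop interior points where the two adjacent
// segments point in almost the same direction (angle difference below the
// threshold, in radians), so long runs of short segments collapse into one.
static List<Point> mergeByAngle(List<Point> polyline, double angleThreshold) {
    if (polyline.size() < 3) return new ArrayList<Point>(polyline);
    List<Point> out = new ArrayList<Point>();
    out.add(polyline.get(0));
    for (int i = 1; i < polyline.size() - 1; i++) {
        Point prev = out.get(out.size() - 1);
        Point cur = polyline.get(i);
        Point next = polyline.get(i + 1);
        double a1 = Math.atan2(cur.y - prev.y, cur.x - prev.x);
        double a2 = Math.atan2(next.y - cur.y, next.x - cur.x);
        double diff = Math.abs(a1 - a2);
        if (diff > Math.PI) diff = 2 * Math.PI - diff;  // wrap-around
        if (diff > angleThreshold) out.add(cur);        // keep real corners only
    }
    out.add(polyline.get(polyline.size() - 1));
    return out;
}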
For a better result you need to use geometrical thinning.
But I have no idea if it is present in OpenCV (I do not use this lib). The idea is to find the perimeter line and shift it perpendicularly inwards by some small step, similar to this. Stop when the desired width is reached.
When the shifted perimeter leads to a too-thin shape, stop there and connect to the thinned point from the previous step (yellow line). This is all done on vectors (polylines), not on image pixels!!! The width can be computed as the smallest perpendicular distance to any nearby line.
I am using CCTMXTiledMap on cocos2d-x 2.2. I created and added the tiled map like this:
// TileMap
CCTMXTiledMap *m_pTileMap = CCTMXTiledMap::create("tilesets/my-isometric-small.tmx");
float fPosX = m_pTileMap->getPositionX();
float fPosY = m_pTileMap->getPositionY();
CCLOG( "TileMapPos: %f, %f", fPosX, fPosY );
this->addChild(m_pTileMap);
The tiled map is created and rendered successfully, but out of position. I use CCTMXTiledMap::getPosition, CCTMXLayer::positionAt, and also examine the CCSprite that I get from CCTMXLayer::tileAt... all of them return the correct value based on the cocos2d screen coordinate system { (0, 0) at the bottom left, increasing upward and rightward }. However, when viewed on the screen, there is always a slight offset and I can't figure out where it comes from. All the m_obOffsetPosition values are confirmed to be zero...
By correct value, I mean the tiles are positioned in the pink area (I call getPosition on each tile, create a CCSprite for each, setPosition each sprite to the tile's position and add it to the screen... they show up in the pink area).
The image is supposed to be positioned at the shaded pink boxes, but is instead positioned in the blue area (the entire blue sea is the whole map).
Any ideas are highly appreciated... Thanks!!
After wasting days trying to dissect tilemap_parallax_nodes in cocos2d-x, I finally figured out the culprit: it is the layer property cc_vertexz that causes it to be rendered off position. I haven't had the time to figure out how and why it works that way, and since I'm not going to use it anyway (I just need a flat, single layer, so no need for z-order etc.), I just removed that property from all of my layers and the problem is gone.
Hope it helps someone... Thanks!
I need to apply click/touch events to only the visible part of a View. Say, for example, an image of size 200x200. Apart from the center 50x50, the remaining part is transparent. I want to get touch events only for that visible 50x50 part, not for the remaining transparent part.
In the above image (it's a single image), only the inner diamond is the visible part; everything apart from that diamond is a transparent area. So, if I touch the diamond I want to do something, otherwise ignore the touch.
Edit :
Rachita's link helped me. I went through it and got an idea of how I can implement this. But I could not understand some constants like 320, 240 etc. used while creating the Points. In my case, I know the diamond's (in the above image) x and y points (hard-coded values actually). So, using those, how can I determine whether I touched inside the diamond or outside?
My diamond points are as below:
pointA = new Point(0, 183);
pointB = new Point(183, 0);
pointC = new Point(366, 183);
pointD = new Point(183, 366);
Edit :
Finally got the solution from Luksprog. It is based on checking the touched point's pixel color. If the color is 0, you touched the transparent layer; otherwise you touched some colored part of the image. Simple, but very effective. Have a look at it here.
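For reference, a rough sketch of that kind of pixel check (not Luksprog's exact code) could look like this, assuming imageView is the ImageView showing the bitmap unscaled; otherwise the touch coordinates have to be mapped into bitmap coordinates first:

imageView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        Bitmap bmp = ((BitmapDrawable) ((ImageView) v).getDrawable()).getBitmap();
        int x = (int) event.getX();
        int y = (int) event.getY();
        if (x < 0 || y < 0 || x >= bmp.getWidth() || y >= bmp.getHeight()) {
            return false;                       // outside the bitmap
        }
        if (bmp.getPixel(x, y) == 0) {          // fully transparent pixel
            return false;                       // ignore touches outside the diamond
        }
        // Touched the visible (diamond) part: handle the event here
        return true;
    }
});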
AFAIK you cannot implement this with an OnClickListener or any other direct way. You will have to use an OnTouchListener.
1. First, set your view dynamically at a specific (x, y) position using this: How can I dynamically set the position of view in Android?
2. Calculate the region your diamond will occupy (you should know the size of the image in order to calculate the area of the diamond).
3. Trigger an action in the onTouchListener only when x, y fall in the required region. Use How to get the Touch position in android?
4. Check this link to calculate if a given point lies in the required square.
EDIT
To understand the coordinate system of android refer to this link How do android screen coordinates work?
Display mdisp = getWindowManager().getDefaultDisplay();
int maxX= mdisp.getWidth();
int maxY= mdisp.getHeight();
(x,y) :-
1) (0,0) is top left corner.
2) (maxX,0) is top right corner
3) (0,maxY) is bottom left corner
4) (maxX,maxY) is bottom right corner
Here maxX and maxY are the screen's maximum width and height in pixels, which we retrieved in the code given above.
Remember, if you want to support multiple devices with different screen sizes, make sure you use relative values for x and y, i.e. some ratio of the screen width or height, as different devices have different ppi.
Check if the touched point lies in the required polygon.
I think these links might help you determine whether the touched point (you can get x and y from the onTouch event, e.g. event.getX()) lies in the required polygon whose points you mentioned in the question: determine if a given point is inside the polygon and How can I determine whether a 2D Point is within a Polygon?
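For completeness, a standard ray-casting test (not taken from the linked answers) that works with the four diamond points from the question could look like this:

// polygon holds the corners in order, e.g. new Point[]{pointA, pointB, pointC, pointD}
// (android.graphics.Point, as used for the diamond above)
static boolean pointInPolygon(Point[] polygon, float x, float y) {
    boolean inside = false;
    for (int i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
        float xi = polygon[i].x, yi = polygon[i].y;
        float xj = polygon[j].x, yj = polygon[j].y;
        // Toggle "inside" every time a horizontal ray from (x, y) crosses an edge
        if ((yi > y) != (yj > y)
                && x < (xj - xi) * (y - yi) / (yj - yi) + xi) {
            inside = !inside;
        }
    }
    return inside;
}

Calling pointInPolygon(new Point[]{pointA, pointB, pointC, pointD}, event.getX(), event.getY()) inside onTouch would then tell you whether the touch landed inside the diamond.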