Get coordinates of non-geometric lines in a binary image - android

I'm trying to recognize hand positions in OpenCV for Android. I'd like to reduce a detected hand shape to a set of simple lines (= point sequences). I'm using a thinning algorithm to find the skeleton lines of detected hand shapes. Here's an exemplary result (image of my left hand):
In this image I'd like to get the coordinates of the skeleton lines, i.e. "vectorize" the image. I've tried HoughLinesP but this only produces huge sets of very short lines, which is not what I want.
My second approach uses findContours:
// Get contours
Mat skeletonFrame; // the thinned image above
ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(skeletonFrame, contours, new Mat(), Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);

// Find longest contour
double maxLen = 0;
MatOfPoint max = null;
for (MatOfPoint c : contours) {
    double len = Imgproc.arcLength(Util.convert(c), true); // Util.convert converts between MatOfPoint and MatOfPoint2f
    if (len > maxLen) {
        maxLen = len;
        max = c;
    }
}

// Simplify detected contour
MatOfPoint2f result = new MatOfPoint2f();
Imgproc.approxPolyDP(Util.convert(max), result, 5.0, false);
This basically works; however, the contours returned by findContours are always closed, which means that all the skeleton lines are represented twice.
Exemplary result: (gray lines = detected contours, not skeleton lines of first image)
So my question is: How can I avoid these closed contours and only get a collection of "single stroke" point sequences?
Did I miss something in the OpenCV docs? I'm not necessarily asking for code, a hint for an algorithm I could implement myself would also be great. Thanks!

I would start with a real hand skeleton as a kinematic model:
find the finger endpoints, the hand/wrist base, and the perimeter boundary (red)
solve the inverse kinematics,
for example by CCD, so that the finger endpoints match and the bones stay inside the image. This way you should obtain an anatomically correct answer.
For simplification you can use a kinematic model like this.
You should handle male/female/child hands differently (different finger lengths), or use some kind of calibration or measurement to account for the differences. As you can see, I skip the hand/wrist base bones; they are not that important. The red outline can be found where the perimeter has a smaller curve radius.
How can you solve this within your current implementation?
Your first thinning approach is the better starting point. When you get the huge set of short lines from HoughLinesP, connect them into polylines; after that, compute the angle of each line. If two joined lines have a similar angle (up to a threshold), merge them. That should do what you want, but do not expect the result to resemble human bones; especially for curves it will differ considerably, in both the number of lines and their shape. A sketch of this merging step follows below.
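A rough Java sketch of that merging step (assuming lines is the Mat returned by Imgproc.HoughLinesP; the 10 px endpoint distance and 15 degree angle threshold are arbitrary starting values, and only end-to-start joins are checked):

// requires org.opencv.core.Mat, java.util.ArrayList, java.util.List
static double angleOf(double[] s) {
    return Math.atan2(s[3] - s[1], s[2] - s[0]);
}

static boolean canJoin(double[] a, double[] b) {
    double dist = Math.hypot(b[0] - a[2], b[1] - a[3]); // end of a to start of b
    double dAng = Math.abs(angleOf(a) - angleOf(b));
    dAng = Math.min(dAng, Math.PI - dAng);              // ignore direction
    return dist < 10.0 && dAng < Math.toRadians(15);
}

static List<double[]> mergeSegments(Mat lines) {
    List<double[]> segs = new ArrayList<>();
    // OpenCV 3.x stores one segment {x1,y1,x2,y2} per row;
    // on 2.4 iterate lines.cols() and use lines.get(0, i) instead
    for (int i = 0; i < lines.rows(); i++)
        segs.add(lines.get(i, 0));
    boolean merged = true;
    while (merged) { // repeat until no more pairs can be joined
        merged = false;
        outer:
        for (int i = 0; i < segs.size(); i++)
            for (int j = 0; j < segs.size(); j++) {
                if (i == j) continue;
                double[] a = segs.get(i), b = segs.get(j);
                if (canJoin(a, b)) {
                    segs.set(i, new double[]{a[0], a[1], b[2], b[3]}); // span both
                    segs.remove(j);
                    merged = true;
                    break outer;
                }
            }
    }
    return segs;
}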
For a better result you need to use geometrical thinning.
I have no idea whether it is present in OpenCV (I do not use this lib). The idea is to find the perimeter line and shift it perpendicularly inwards by some small step, similar to this, stopping once the desired width is reached.
When the shifted perimeter leads to a shape that is too thin, stop there and connect to the thinned point from the previous step (yellow line). This is all done on vectors (polylines), not on image pixels! The width can be computed as the smallest perpendicular distance to any nearby line.

Related

OpenCV different approach on detecting go board

I am working on an Android app that will recognize a Go board and create an SGF file from it.
I made a version that is able to detect a board and warp the perspective to make it square (code and example image below); unfortunately it gets a bit harder when adding stones (image below).
Important things about an average Go board:
round black and white stones
black lines on the board
board color ranges from white to light brown and sometimes with a wood grain
stones are placed on intersections of two lines
Correct me if I am wrong, but I think my current approach is not a good one.
Does somebody have a general idea on how I can separate the stones and lines from the rest of the picture?
My code:
Mat input = inputFrame.rgba(); // original image
Mat gray = new Mat();          // grayscale image
// convert image to grayscale
Imgproc.cvtColor(input, gray, Imgproc.COLOR_RGB2GRAY);
// try to improve the histogram (more contrast)
Imgproc.equalizeHist(gray, gray);
// blur image
Size s = new Size(5, 5);
Imgproc.GaussianBlur(gray, gray, s, 0);
// apply adaptive threshold
Imgproc.adaptiveThreshold(gray, gray, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 11, 2);
// add a secondary threshold, removes a lot of noise
Imgproc.threshold(gray, gray, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
Some images:
(images: eightytwo.axc.nl)
EDIT: 05-03-2016
Yay! I managed to detect lines, stones, and colors correctly. Precondition: the picture has to show only the board itself, without any other background visible.
I use HoughLinesP (60 lines) and HoughCircles (17 circles); the duration on my phone (1st gen Moto G) is about 5 seconds.
Detecting the board and warping it turns out to be quite a challenge when it has to work under different angles and lighting conditions. Still working on that.
Suggestions for different approaches are still welcome!
(image: eightytwo.axc.nl)
EDIT: 15-03-2016
I found a nice way to get line intersections with cross-type morphological transformations; it works amazingly well when the picture is taken directly above the board, but unfortunately not at an angle (see below).
(image: eightytwo.axc.nl)
In my last update I showed line and stone detection with a picture taken from directly above; since then I have been working on detecting the board and warping it so that my line and stone detection becomes useful.
Harris corner detection
I struggled to get the right parameter settings, and I am still not sure whether they are optimal; I can't find much information on how to prepare an image before using Harris corners. Right now it detects too many corners to be useful, though it feels like it could work (upper row of pictures in the example).
Mat corners = new Mat();
Imgproc.cornerHarris(image, corners, 5, 3, 0.03);
Mat mask = new Mat(corners.size(), CvType.CV_8U, new Scalar(1));
Core.MinMaxLocResult mmr = Core.minMaxLoc(corners);
Core.inRange(corners, new Scalar(mmr.maxVal * 0.01), new Scalar(mmr.maxVal), mask);
Cross-type morphological transformations
This works great when the picture is taken directly from above, but not from an angle or with a rotated board (middle row of pictures in the example).
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);
int morph_elem = 1;     // 0: Rect, 1: Cross, 2: Ellipse
int morph_size = 5;
int morph_operator = 0; // 0: Opening, 1: Closing, 2: Gradient, 3: Top Hat, 4: Black Hat
Mat element = Imgproc.getStructuringElement(morph_elem, new Size(2 * morph_size + 1, 2 * morph_size + 1), new Point(morph_size, morph_size));
// +2 maps the operator index onto Imgproc.MORPH_OPEN (2) .. MORPH_BLACKHAT (6)
Imgproc.morphologyEx(image, image, morph_operator + 2, element);
Contours and HoughLines
If there are no stones on the outer board line and the lighting conditions are not too harsh, it works pretty well. Quite often, though, the contours cover only part of the board (lower row of pictures in the example).
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);
Mat hierarchy = new Mat();
MatOfPoint biggest = null;
int contourId = 0;
double biggestArea = 0;
double minSize = 2000;
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// find the biggest contour
for (int x = 0; x < contours.size(); x++) {
    double area = Imgproc.contourArea(contours.get(x));
    if (area > minSize && area > biggestArea) {
        biggestArea = area;
        biggest = contours.get(x);
        contourId = x;
    }
}
Given the right picture, all three methods work, but not well enough to be reliable. Any thoughts on parameters, image pre-processing, different approaches, or anything else that might improve the detection are welcome. =)
link to picture
EDIT: 31-03-2016
Detecting lines and stones is pretty much solved, so I will close this question; I created a new one for detecting and warping accurately.
For anybody interested in my progress: this is my GOSU Snap Alpha channel. Don't expect too much of it right now!
EDIT: 16-10-2016
Update: I saw some people are still following this question.
I tested some more things and started using TensorFlow; my neural network looks promising. You can have a look at it here.
A lot of work still has to be done; my current image dataset is awful, and right now I am working on building a big one.
The app works best with a square board with thick lines and decent lighting.
Assuming that you don't want to "force" your end user to take the cleanest possible picture (for example with an overlay, like some QR code scanners use), perhaps you could use some morphological transformations with different kernels:
opening and closing with a rectangular kernel for the lines
opening and closing with an elliptical kernel to get the stones (it should be possible to invert the image at some point to get back either the white or the black ones; see the sketch below)
Take a look at http://docs.opencv.org/2.4/doc/tutorials/imgproc/opening_closing_hats/opening_closing_hats.html (sorry, this one is in C++, but it is almost the same in Java).
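A minimal OpenCV4Android sketch of that idea; the kernel sizes are rough guesses to tune for your board resolution, and binary is assumed to be your thresholded board image:

// opening with a long thin rectangle keeps (here) horizontal line pixels;
// swap the kernel to 1 x 9 for the vertical lines
Mat lineKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(9, 1));
Mat lines = new Mat();
Imgproc.morphologyEx(binary, lines, Imgproc.MORPH_OPEN, lineKernel);

// opening with an ellipse keeps round blobs (the stones); invert the
// binary image first to pick up stones of the other colour
Mat stoneKernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(15, 15));
Mat stones = new Mat();
Imgproc.morphologyEx(binary, stones, Imgproc.MORPH_OPEN, stoneKernel);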
I tried these operations to remove the grid from a Sudoku image to avoid noise in cell extraction, and it worked like a charm.
Let me know if this information was useful to you (this is for sure a very interesting case).
I'm working on the same program. I avoid finding lines altogether.
First use a perspective transform to get the board into a square, as you have done, and find the edges of the 19x19 grid. Then, assuming the board is 19x19, you can just compute the positions of the lines. This works well for me. You then compute the intersection closest to the center of each stone to determine which row and column the stone is on (a sketch of this snapping step is below). Works pretty well for me. The only problem is calibrating the program for different lighting conditions and different colors of stones and boards.
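A minimal sketch of that snapping step, assuming the warped board is boardSize x boardSize pixels with the 19 lines evenly spaced and a half-cell margin at the edges (the margin assumption may not match your warp, so adjust the offset):

// snap a detected stone centre (cx, cy) to the nearest grid intersection
final int GRID = 19;
double cell = boardSize / (double) GRID;
double offset = cell / 2; // distance from the image edge to the first line

int col = (int) Math.round((cx - offset) / cell);
int row = (int) Math.round((cy - offset) / cell);
col = Math.max(0, Math.min(GRID - 1, col)); // clamp to the board
row = Math.max(0, Math.min(GRID - 1, row));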

How detect long edges of wall to prepare mask and recolor

The main idea is to allow the user to recolor a specific wall based on their selection.
Currently I have implemented this feature using cvFloodFill (which helps to prepare the mask image); it lets me change the relative HSV value of the wall so I can retain edges. The problem with this solution is that it works on color, so all walls get repainted instead of only the single wall selected by the user.
I have also tried Canny edge detection, but it is only able to detect edges, not to convert them into an area.
Please find below the code I am currently using for the repaint function.
Prepare the mask:
cvFloodFill(mask, new CvPoint(295, 75), new CvScalar(255, 255, 255, 0), cvScalarAll(1), cvScalarAll(1), null, 4, null);
Split the channels:
cvSplit(hsvImage, hChannel, sChannel, vChannel, null);
Change the color:
cvAddS(vChannel, new CvScalar(255 * (0.76 - 0.40), 0, 0, 0), vChannel, mask);
How can we detect the edges and the corresponding areas in the image?
I am looking for a solution that may use something other than OpenCV, but it has to be feasible on both iPhone and Android.
Edit
I am able to achieve a somewhat usable result, shown in the image below, using the following steps:
cvCvtColor(image, gray, CV_BGR2GRAY);
cvSmooth(gray,smooth,CV_GAUSSIAN,7,7,0,0);
cvCanny(smooth, canny, 10, 250, 5);
There are two problems with this output, and I am not sure how to resolve them:
1. close nearby edges
2. remove small edges
You could try something like:
Mat imageOut = Mat::zeros(imageIn.rows, imageIn.cols, CV_8UC3);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(imageIn, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
// walk the top-level contours via the hierarchy's "next sibling" links
for (int idx = 0; idx >= 0; idx = hierarchy[idx][0])
{
    Scalar color(rand() & 255, rand() & 255, rand() & 255);
    drawContours(imageOut, contours, idx, color, CV_FILLED, 8, hierarchy);
}
It should draw the walls in different colors. If it works, that means that in hierarchy each wall is identified as a contour; you then will have to find out which one the user selected on the touch screen and do your color-tuning processing.
You may have to adjust the various parameters of findContours.
You will also need to smooth the input image before the contour detection to avoid being annoyed with details or textures.
Hope that helps,
Thomas
I think I might have the solution for you!
There is a sample file called watershed.cpp in OpenCV; just run it and you'll get this result:
You can make your user draw on the screen to discriminate each wall.
Then, if you want something more precise, you can outline the areas (without touching other lines) like this:
And TADA!:
With a little work you can make it user-friendly (cancel the last line, connect areas, etc.).
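The same marker-based watershed is available from OpenCV4Android directly; a rough sketch (the marker drawing is up to your UI, and selectedLabel is a placeholder for the stroke label the user picked):

// rgb must be 8UC3; markers is 32-bit signed, one channel, with each user
// stroke drawn in as a distinct positive label (1, 2, 3, ...) and 0 elsewhere
Mat markers = new Mat(rgb.size(), CvType.CV_32SC1, new Scalar(0));
// ... draw the user's strokes into markers here, e.g. with Imgproc.line ...
Imgproc.watershed(rgb, markers);
// markers now holds the region label of every pixel (-1 on boundaries),
// so a mask for the selected wall is just an equality test:
Mat wallMask = new Mat();
Core.inRange(markers, new Scalar(selectedLabel), new Scalar(selectedLabel), wallMask);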
Hope that helps!
I think you can use the Canny edge detection algorithm to find the edges. Some links:
StackOverFlow
StackOverFlow
OpenCV QA
OpenCV
Native Tutorial
I hope this can help you out. Thanks.
Here is some OpenCV4Android code to find the largest contour in a Mat called image, which we'll assume is in the RGBA colour space. To find contours, it's first necessary to threshold or binarize the image (convert to black and white). Using a Gaussian Blur on the image before thresholding reduces the number of small contours that are produced. The size parameters to the blur and threshold must be odd numbers; you can play around to find which value gives the best results (here, I've used 7 for both).
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat BW = new Mat();
Mat hierarchy = new Mat();
MatOfPoint largestContour = null;

Imgproc.cvtColor(image, image, Imgproc.COLOR_RGBA2GRAY); // convert to grayscale
Imgproc.GaussianBlur(image, BW, new Size(7, 7), 0);
Imgproc.adaptiveThreshold(BW, BW, 255,
        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 7, 2.0);
Imgproc.findContours(BW, contours, hierarchy, Imgproc.RETR_EXTERNAL,
        Imgproc.CHAIN_APPROX_SIMPLE);

double maxArea = 0;
for (MatOfPoint contour : contours) {
    double area = Imgproc.contourArea(contour);
    if (area > maxArea) {
        maxArea = area;
        largestContour = contour;
    }
}
"there are two problems with this output, not sure how to resolve them: 1. close nearby edges 2. remove small edges"
You can use morphological operations to close the edges. Look for the dilation and closing operators.
You can remove small edges by doing labeling: count the number of pixels in each region (connected white pixels), then remove any region whose pixel count is less than some threshold. I don't use OpenCV, but most libraries have a labeling function that creates an image where each set of touching pixels of a single color is assigned a unique label in the output image.
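In OpenCV (3.0 and later) both steps fit in a few lines; a sketch, with the size threshold as a placeholder to tune, where binary is the edge image from Canny:

// 1. close small gaps in the edges
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(5, 5));
Imgproc.morphologyEx(binary, binary, Imgproc.MORPH_CLOSE, kernel);

// 2. label connected regions and erase the small ones
Mat labels = new Mat(), stats = new Mat(), centroids = new Mat();
int n = Imgproc.connectedComponentsWithStats(binary, labels, stats, centroids);
int minPixels = 100; // placeholder threshold
for (int label = 1; label < n; label++) { // label 0 is the background
    int area = (int) stats.get(label, Imgproc.CC_STAT_AREA)[0];
    if (area < minPixels) {
        Mat mask = new Mat();
        Core.inRange(labels, new Scalar(label), new Scalar(label), mask);
        binary.setTo(new Scalar(0), mask); // zero out this region
    }
}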

Is an Android Region always a rectangular area, or can it be polygonal or curvy?

Does an Android Region (android.graphics.Region) always have a rectangular area, or can it be polygonal or rounded (curvy)?
Actually, I have to perform some Region.Op.UNION and Region.Op.INTERSECT operations with multiple regions.
I want to know the shape of the resulting output Region: does it still have a rectangular area or not?
It can be complex (isComplex()), i.e. it can consist of more than one rectangle. Not sure what you mean by "curvy", but it can be polygonal. If I understand it correctly, you can use getBoundaryPath() to get a Path describing the resulting shape.
Nothing in the documentation would lead one to conclude that a Region can be anything but a rectangle, it being constructed from either a rectangle, an x,y coordinate plus width and height, or another region.
One can describe a rectangle with a path, so getBoundaryPath() does not necessarily imply that a non-rectangle is possible; an encompassing rectangular boundary may instead be implied.
The isComplex() property only says that the region consists of multiple rectangles. Are they all bound by a single exterior, defining rectangle? If so, how do we separate them? In the absence of sufficient documentation, one cannot tell without experimentation:
The following code describes a path and creates a polygonal region. We start with an array of any number of coordinate pairs. Then:
//describe a path corresponding to the transformed polygon
Path transformPath = new Path();
//starting point
transformPath.moveTo(arrayCoordinates[0], arrayCoordinates[1]);
//draw a line from one point to the next
for (int i = 2; i < arrayCoordinates.length; i = i + 2) {
    transformPath.lineTo(arrayCoordinates[i], arrayCoordinates[i + 1]);
}
//then end at the starting point to close the polygon
transformPath.lineTo(arrayCoordinates[0], arrayCoordinates[1]);

//describe a region (clip area) corresponding to the game area (my example is a game app)
Region clip = new Region(0, 0, gameSurfaceWidth, gameSurfaceHeight);
//describe a region corresponding to the transformed polygon path
Region transformRegion = new Region();
transformRegion.setPath(transformPath, clip);
If you display the region as a string, you will see the several pairs of coordinates that make up the polygonal shape.
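For the union/intersection part of the question, a tiny example (coordinates are arbitrary):

// two overlapping rectangular regions...
Region a = new Region(0, 0, 100, 100);
Region b = new Region(50, 50, 150, 150);
// ...combined with UNION give an L-shape: isRect() is now false
a.op(b, Region.Op.UNION);
// getBoundaryPath() returns the polygonal outline of the combined shape
Path outline = a.getBoundaryPath();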

Move contours on Android

I have used OpenCV 2.3.1 with Android 2.2 to find contours in bitmaps, which seems to be working fine on a Samsung Galaxy Ace, but now I need help with moving those contours. My aim is to make a selected contour follow the user's finger when dragged to a different location. Help of any kind would be appreciated.
EDIT:
I am now able to move the contours based on the user's touch, but then they don't stay at the new position. So I assume I need to erase the image from the original position and redraw it at the new one. Moreover, it's only the surrounding contour which moves, and not the pixels of the image within the contour. I am more concerned about the image pixels. How can I get the image pixels to move to the new location? It would also be great if I could somehow get the coordinates of the pixels within the contour.
Sorry, I wanted to upload an image, but it seems new members can't upload images at this stage. For example: I have the contour surrounding the line in pink. When I drag, only the contour moves, and the black pixels of the line do not move at all. Is there any way by which I can get the black pixels within the pink contour to move?
Another problem is that when I try my code on a closed figure like a circle or a square, I get two contours: one for the inner boundary and one for the outer boundary. But again, as I said earlier, I am more interested in the image pixels. Please help.
P.S. - The image can be anything, any shape. I have just taken the example of a line.
First of all you have to add a TouchListener/ClickListener (or something else; I don't know the Android API) to your bitmap or canvas.
When the user touches the screen (the listener fires), you have to identify which contour the user selected. For this, use the pointPolygonTest function.
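In OpenCV4Android that test looks roughly like this (a sketch; contours and the touch point tx, ty are assumed to come from your own code):

// find which contour contains the touch point (tx, ty);
// pointPolygonTest returns > 0 inside, 0 on the edge, < 0 outside
int selected = -1;
MatOfPoint2f c2f = new MatOfPoint2f();
for (int i = 0; i < contours.size(); i++) {
    contours.get(i).convertTo(c2f, CvType.CV_32F); // MatOfPoint -> MatOfPoint2f
    if (Imgproc.pointPolygonTest(c2f, new Point(tx, ty), false) >= 0) {
        selected = i;
        break;
    }
}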
About moving: a contour is just a sequence (vector) of Points, so if you want to shift (move) some contour you have to do the following (C++ code):
void moveContour(vector<Point>& contour, int dx, int dy)
{
    for (size_t i = 0; i < contour.size(); i++)
    {
        contour[i].x += dx;
        contour[i].y += dy;
    }
}
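A Java equivalent, plus one way to move the pixels inside the contour as well, since that is what the question is really after (image, dx, dy, and the white background are illustrative assumptions; requires java.util.Arrays and the org.opencv classes):

// shift every point of a contour by (dx, dy)
void moveContour(MatOfPoint contour, int dx, int dy) {
    Point[] pts = contour.toArray();
    for (Point p : pts) {
        p.x += dx;
        p.y += dy;
    }
    contour.fromArray(pts);
}

// to move the pixels too: rasterize the contour into a mask, cut the
// patch out, translate patch and mask, and paste the patch back
Mat mask = Mat.zeros(image.size(), CvType.CV_8UC1);
Imgproc.drawContours(mask, Arrays.asList(contour), -1, new Scalar(255), -1); // thickness -1 = filled
Mat patch = new Mat();
image.copyTo(patch, mask);          // grab the pixels inside the contour
image.setTo(new Scalar(255), mask); // erase the original (assumes a white background)

Mat M = new Mat(2, 3, CvType.CV_64F); // 2x3 translation matrix
M.put(0, 0, 1, 0, dx, 0, 1, dy);
Mat movedPatch = new Mat(), movedMask = new Mat();
Imgproc.warpAffine(patch, movedPatch, M, image.size());
Imgproc.warpAffine(mask, movedMask, M, image.size());
movedPatch.copyTo(image, movedMask); // paste at the new position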
Hope it helps.

Collision Detection working half the time

I am trying to check collisions between two arrays, one of moving rectangles and the other of stationary boundaries (trying to get the rectangles to bounce off the walls).
The problem is that I wrote a nested for loop that seems to work for 2 out of 4 boundaries. Is my loop not reaching all possible combinations?
Here is my loop:
for (int n = 0; n < _f; n++) {
    for (int m = 0; m < _b; m++) {
        if (farr[n].inter(barr[m]))
            farr[n].setD();
    }
}
_f counts the moving rectangles (it starts at 0 and increases as each one is added) and _b counts the boundaries. inter() is a method I am using to detect collisions, and it has worked in all other parts of my program.
Any help would be greatly appreciated,
Thanks in advance!!!
public boolean inter(Rect rect) {
    return Rect.intersects(rect, rec);
}
The setD() method:
public void setD() {
    if (_d == 0)
        _d = 2;
    if (_d == 1)
        _d = 3;
    if (_d == 2)
        _d = 0;
    if (_d == 3)
        _d = 1;
}
The move method where _d is used:
public void moveF() {
    if (_d == 0) { _l += _s; _r += _s; }
    if (_d == 1) { _t += _s; _b += _s; }
    if (_d == 2) { _l -= _s; _r -= _s; }
    if (_d == 3) { _t -= _s; _b -= _s; }
}
_l is the left side, _t the top, _r the right, _b the bottom, and _s is how many pixels it moves per iteration (set to 1 in all cases).
Assuming _f, _b, farr, and barr do not change during the execution of the loop, your loop checks all combinations exactly once. So how is it that you "check some collisions twice"? Does setD() do something sneaky? Do you mean that once a rectangle collides there is no need to check more boundaries? If so, that can be fixed with a simple break statement. Otherwise, there is likely a problem with your inter() method, regardless of whether it appears to work elsewhere. Can you post your inter() implementation?
There is a possibility of another problem, that of assuming continuous properties in a discrete space. As my amazing ascii art (titled: ball and wall) skills demonstrate...
Frame 1:
o__|_
Frame 2:
_o_|_
Frame 3:
__o|_
Frame 4:
___|o
Notice that the ball passed through the wall! In no frame did the ball intersect the wall. This happens when the distance moved per frame is roughly the same as, or larger than, the characteristic size of your moving object. It is difficult to catch with a simple intersection check; you actually need to check the path the ball occupied between frames.
If your rectangles and barriers are oriented without rotation, this is still a fairly easy check: use the bounding rectangle of the moving rectangle's positions in the two frames and intersect that with the barriers, for example:
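A sketch of that swept check for axis-aligned android.graphics.Rect values, where prev and curr are the rectangle's positions in the previous and current frame:

// the union of the two positions covers the whole path of a translating
// rect; note this is conservative and can report a hit the exact path missed
boolean sweptIntersects(Rect prev, Rect curr, Rect barrier) {
    Rect swept = new Rect(prev); // copy so prev is left untouched
    swept.union(curr);           // grow to cover both positions
    return Rect.intersects(swept, barrier);
}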
Other ideas:
You are double colliding, switching the direction twice.
Your rectangles are in two different coordinate spaces.
Some other thread is screwing with your rects.
But basically, your code looks good. How many rectangles do you have? Can you make them distinct colors? Then, in your loop, when you collide, call setD and output the color of the rectangle that collided, and where it was. Then, when you notice a problem, kill the code and look at the output. If you see two collisions in a row (causing the rect to switch directions twice), you'll have caught your bug. Outputting the coordinates might also help, on the off chance that you are in two different coordinate spaces.
If it's a threading issue, then it's time to brush up on critical sections.
Found your mistake:
public void setD() {
    if (_d == 0)
        _d = 2;
    if (_d == 1)
        _d = 3;
    if (_d == 2)
        _d = 0;
    if (_d == 3)
        _d = 1;
}
Each of these needs to be else if, otherwise you update 0 to become 2 and then 2 to become 0 in the same call.
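In other words, the same method with the chain fixed:

public void setD() {
    if (_d == 0)
        _d = 2;
    else if (_d == 1)
        _d = 3;
    else if (_d == 2)
        _d = 0;
    else if (_d == 3)
        _d = 1;
}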
