I am working on an Android app that will recognize a Go board and create an SGF file from it.
I made a version that is able to detect a board and warp the perspective to make it square (code and example image below). Unfortunately, it gets a bit harder when stones are added (image below).
Important things about an average Go board:
round black and white stones
black lines on the board
board color ranges from white to light brown, sometimes with a wood grain
stones are placed on intersections of two lines
Correct me if I am wrong, but I think my current approach is not a good one.
Does anybody have a general idea on how I can separate the stones and lines from the rest of the picture?
My code:
Mat input = inputFrame.rgba(); // original RGBA camera frame
Mat gray = new Mat();          // grayscale image

// convert image to grayscale (the frame is RGBA, so use COLOR_RGBA2GRAY)
Imgproc.cvtColor(input, gray, Imgproc.COLOR_RGBA2GRAY);

// try to improve histogram (more contrast)
Imgproc.equalizeHist(gray, gray);

// blur image to reduce noise before thresholding
Size s = new Size(5, 5);
Imgproc.GaussianBlur(gray, gray, s, 0);

// apply adaptive threshold
Imgproc.adaptiveThreshold(gray, gray, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 11, 2);

// add a secondary Otsu threshold, removes a lot of noise
Imgproc.threshold(gray, gray, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
Some images:
[board and stones example images - eightytwo.axc.nl]
EDIT: 05-03-2016
Yay! I managed to detect lines, stones, and stone color correctly. Precondition: the picture has to contain only the board itself, without any other background visible.
I use HoughLinesP (60 lines) and HoughCircles (17 circles); it takes about 5 seconds on my phone (1st-gen Moto G).
Detecting the board and warping it turns out to be quite a challenge when it has to work under different angles and lighting conditions... still working on that.
Suggestions for different approaches are still welcome!!
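For reference, the two Hough calls look roughly like this (a sketch; the thresholds, distances and radii here are placeholder guesses, not my tuned values, and "binary"/"gray" are the Mats from the preprocessing above):

// grid lines from the thresholded image
Mat lines = new Mat();
Imgproc.HoughLinesP(binary, lines, 1, Math.PI / 180, 80, 100, 10);

// stones from the blurred grayscale image
// (HOUGH_GRADIENT is the OpenCV 3.x Java constant; 2.4 calls it CV_HOUGH_GRADIENT)
Mat circles = new Mat();
Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT, 1, 20, 100, 30, 8, 25);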
[line and stone detection result image - eightytwo.axc.nl]
EDIT: 15-03-2016
I found a nice way to get line intersections with cross-type morphological transformations. It works amazingly well when the picture is taken directly above the board, but unfortunately not at an angle (see below).
[intersection detection image - eightytwo.axc.nl]
In my last update I showed line and stone detection with a picture taken from directly above. Since then I have been working on detecting the board and warping it in a way that makes my line and stone detection useful.
Harris corner detection
I struggled to get the right parameter settings, and I am still not sure if they are optimal; I can't find much information on how to prepare an image before using Harris corners. Right now it detects too many corners to be useful, though it feels like it could work. (upper row of pictures in the example)
Mat corners = new Mat();
Imgproc.cornerHarris(image, corners, 5, 3, 0.03); // blockSize 5, aperture 3, k 0.03

// keep only responses above 1% of the strongest corner
Mat mask = new Mat(corners.size(), CvType.CV_8U, new Scalar(1));
Core.MinMaxLocResult mmr = Core.minMaxLoc(corners);
Core.inRange(corners, new Scalar(mmr.maxVal * 0.01), new Scalar(mmr.maxVal), mask);
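One idea for thinning out the flood of corners (a sketch I have not validated on this app) is non-maximum suppression: keep a response only if it is also the local maximum of its neighbourhood.

// compare each Harris response against a dilated copy of itself;
// only local maxima survive the comparison
Mat dilatedCorners = new Mat();
Imgproc.dilate(corners, dilatedCorners, new Mat()); // default 3x3 kernel
Mat localMax = new Mat();
Core.compare(corners, dilatedCorners, localMax, Core.CMP_GE);
Core.bitwise_and(mask, localMax, mask); // strong AND locally maximal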
Cross-type morphological transformations
Works great when the picture is taken directly from above; used at an angle or with a rotated board it does not work. (middle row of pictures in the example)
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);

int morph_elem = 1;     // kernel shape - 0: Rect, 1: Cross, 2: Ellipse
int morph_size = 5;
int morph_operator = 0; // 0: Opening, 1: Closing, 2: Gradient, 3: Top Hat, 4: Black Hat

Mat element = Imgproc.getStructuringElement(morph_elem, new Size(2 * morph_size + 1, 2 * morph_size + 1), new Point(morph_size, morph_size));
// +2 maps the operator index onto Imgproc.MORPH_OPEN .. MORPH_BLACKHAT
Imgproc.morphologyEx(image, image, morph_operator + 2, element);
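To turn the blobs that survive the cross opening into intersection coordinates, something like this should work (a sketch against the OpenCV 3 Java bindings; variable names are mine):

// each intersection is left as a small blob; contour centroids give the grid points
List<MatOfPoint> blobs = new ArrayList<>();
Imgproc.findContours(image, blobs, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
List<Point> intersections = new ArrayList<>();
for (MatOfPoint blob : blobs) {
    Moments m = Imgproc.moments(blob);
    if (m.m00 > 0) {
        intersections.add(new Point(m.m10 / m.m00, m.m01 / m.m00));
    }
}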
Contours and Hough lines
If there are no stones on the outer board line and the lighting conditions are not too harsh, it works pretty well. Quite often, though, the contours cover only part of the board. (lower row of pictures in the example)
Imgproc.GaussianBlur(image, image, new Size(5, 5), 0);
Imgproc.adaptiveThreshold(image, image, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 2);

Mat hierarchy = new Mat();
MatOfPoint biggest = null;
int contourId = 0;
double biggestArea = 0;
double minSize = 2000;
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(image, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

// find the biggest contour, assumed to be the board outline
for (int x = 0; x < contours.size(); x++) {
    double area = Imgproc.contourArea(contours.get(x));
    if (area > minSize && area > biggestArea) {
        biggestArea = area;
        biggest = contours.get(x);
        contourId = x;
    }
}
Given the right picture, all three methods work, but not well enough to be reliable. Any thoughts on parameters, image pre-processing, different approaches, or anything else that might improve the detection are welcome =)
link to picture
EDIT: 31-03-2016
Detecting lines and stones is pretty much solved, so I will close this question. I created a new one for detecting and warping accurately.
Anybody interested in my progress: this is my GOSU Snap Alpha channel. Don't expect too much of it right now!
EDIT: 16-10-2016
Update: I saw that some people are still following this question.
I tested some more things and started using TensorFlow; my neural network looks promising, you can have a look at it here.
A lot of work still has to be done; my current image dataset is awful, and right now I am working on building a big one.
The app works best with a square board with thick lines and decent lighting.
Assuming that you don't want to "force" your end user to take the cleanest possible pictures (e.g. by using an overlay, like some QR code scanners do), perhaps you could use some morphological transformations with different kernels:
Opening and closing with a rectangular kernel for the lines
Opening and closing with an ellipse kernel to get the stones (it should be possible to invert the image at some point to get back the white or the black ones)
Take a look at http://docs.opencv.org/2.4/doc/tutorials/imgproc/opening_closing_hats/opening_closing_hats.html (sorry, this one is in C++, but I think it is almost the same in Java); a sketch of the idea follows below.
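A minimal Java sketch of what I mean (the kernel sizes are rough guesses you would need to tune; "binary" is assumed to be a thresholded board image with stones and lines in white):

// long, thin rectangular kernels isolate the horizontal and vertical grid lines
Mat horizKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(15, 1));
Mat vertKernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(1, 15));
Mat hLines = new Mat();
Mat vLines = new Mat();
Imgproc.morphologyEx(binary, hLines, Imgproc.MORPH_OPEN, horizKernel);
Imgproc.morphologyEx(binary, vLines, Imgproc.MORPH_OPEN, vertKernel);
Mat lines = new Mat();
Core.bitwise_or(hLines, vLines, lines);

// an elliptical kernel keeps the round stones and drops the thin lines;
// invert the binary image and repeat to pick up the other stone colour
Mat disk = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(11, 11));
Mat stones = new Mat();
Imgproc.morphologyEx(binary, stones, Imgproc.MORPH_OPEN, disk);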
I tried these operations to remove a grid from a Sudoku board to avoid noise in cell extraction, and it worked like a charm.
Let me know if this information was useful to you (this is for sure a very interesting case).
I'm working on the same program. I avoid finding lines at all.
First, use a perspective transform to get the board into a square, as you have done. Find the edges of the 19x19 grid. Then, assuming the board is 19x19, you can just compute the positions of the lines. This works well for me. Then you compute the intersection closest to the center of each stone to determine which row and column the stone is on. That works pretty well for me too. The only problem is calibrating the program for different lighting conditions and different colors of stones and boards.
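A rough sketch of that computation (the image size, margin, and the stone center cx/cy are assumptions; use whatever your warp and stone detection produce):

int gridSize = 19;      // standard Go board
int imgSize = 608;      // assumed side length of the warped, square board image
double margin = 32;     // assumed distance from the image border to the first line
double step = (imgSize - 2 * margin) / (gridSize - 1);

// snap a detected stone center (cx, cy) to the nearest computed intersection
int col = (int) Math.round((cx - margin) / step);
int row = (int) Math.round((cy - margin) / step);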
Related
I am trying to develop an app that detects cards (MasterCard, Visa, customer cards, etc.) using the Android camera. For that purpose I used OpenCV4Android version 3.0.0. To achieve this task, I did the following:
1- converted the frame taken from the camera to grayscale using
Imgproc.cvtColor(this.mMatInputFrame, this.mMatGray, Imgproc.COLOR_BGR2GRAY);
2- blurred the frame using
Imgproc.blur(this.mMatGray, this.mMatEdges, new Size(7, 7));
3- applied the Canny edge detector as follows
Imgproc.Canny(this.mMatEdges, this.mMatEdges, 2, 900, 7, true);
4- to show Canny's result on the real image, I did the following
this.mDest = new Mat(new Size(this.mMatInputFrame.width(), this.mMatInputFrame.height()), CvType.CV_8U, Scalar.all(0));
this.mMatInputFrame.copyTo(this.mDest, this.mMatEdges);
5- dilated the image using
dilated = new Mat();
// note: getStructuringElement expects a kernel *shape* (MORPH_RECT/CROSS/ELLIPSE),
// not the MORPH_DILATE operation constant
Mat dilateElement = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
Imgproc.dilate(mMatEdges, dilated, dilateElement);
6- found the contours of the detected card as follows:
ArrayList<MatOfPoint> contours = new ArrayList<>();
hierachy = new Mat();
Imgproc.findContours(dilated, contours, hierachy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);
for (int i = 0; i < contours.size(); i++) {
    if (Imgproc.contourArea(contours.get(i), true) > 90000) {
        Rect rect = Imgproc.boundingRect(contours.get(i));
        if (rect.height > 60) {
            Imgproc.rectangle(mMatInputFrame, new Point(rect.x, rect.y), new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 0, 0));
        }
    }
}
When I run the app:
Case 1
If the card to be detected is of a homogeneous color (the entire card is painted with the same color), Canny produces well-defined edges which are easily detected, as shown in the images "same-color-0" and "same-color-1".
Moreover, when I place a card of a homogeneous color on a table and move the camera around it, the edges keep getting detected properly despite the camera movement; in other words, the red frame that surrounds the edges of the card stays fixed around them and never disappears.
Case 2
If the card is not of a homogeneous color (it has mixed colors), then the edge detection is bad, as shown in the images "mixed-color-0" and "mixed-color-1"; moreover, the red frame that surrounds the edges of the card disappears quite often.
Another case extending from this one: when the card has two colors, one light and one dark, the edge detector detects only the dark part of the card, because its edges are well defined, as shown in image "mixed-color-2".
Please let me know how to get well-defined, card-sized edges of the cards regardless of their color.
Is there any other, more accurate way to do edge detection?
[images: same-color-0, same-color-1, mixed-color-0, mixed-color-1, mixed-color-2, plus the original images]
You can use Structured Edge Detection.
I got these results by running the C++ code in my other answer. They seem like a good and robust result to me.
To use this in Java, you should know that Structured Edge Detection lives in the contrib module ximgproc.
You probably need to recompile OpenCV to use it: Build OpenCV with contrib modules and Java wrapper
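If the rebuild exposes ximgproc the way the other Java modules are exposed, usage should look roughly like this (a hedged sketch; the class and method names are auto-generated from the C++ API, so verify them against your build):

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.ximgproc.StructuredEdgeDetection;
import org.opencv.ximgproc.Ximgproc;

Mat src = Imgcodecs.imread("card.jpg");
Mat srcFloat = new Mat();
src.convertTo(srcFloat, CvType.CV_32FC3, 1.0 / 255.0); // detector expects float pixels in [0,1]

// the pre-trained model file ships with the OpenCV extra data
StructuredEdgeDetection detector = Ximgproc.createStructuredEdgeDetection("model.yml.gz");
Mat edges = new Mat();
detector.detectEdges(srcFloat, edges); // edges is CV_32F, values in [0,1]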
I am using matchTemplate() in OpenCV to search within a small region of the camera's frame, let's say the 128x128 region in the top-left corner, for a smaller template image, let's say of size 32x32.
I'm having a weird issue. When drawing the rectangle at minLoc, I sometimes have perfectly smooth and normal tracking/matching, so I know that my code (mostly) works.
The problem, however, is that I pick random templates to match on each initialization, and 90% of the time the center of the searchRegion is where the match is (incorrectly) detected. No matter where I move the camera, the center of the image is being "matched" (with minor fluctuations at random points every so many frames).
Am I missing something about the way matchTemplate/normalize works? Why is the center of the source image wrongly being selected as a match?
Here is some code summarizing what I'm doing.
Mat searchRgn = frame.submat(searchRgnRect);

// the result map is (W - w + 1) x (H - h + 1)
int result_cols = searchRgn.cols() - foi_img.cols() + 1;
int result_rows = searchRgn.rows() - foi_img.rows() + 1;
Mat result = new Mat(result_rows, result_cols, CvType.CV_32FC1);

Imgproc.matchTemplate(searchRgn, foi_img, result, Imgproc.TM_SQDIFF_NORMED);
Core.normalize(result, result, 0, 1, Core.NORM_MINMAX, -1);
Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
// draw rectangle at mmr.minLoc (for TM_SQDIFF_NORMED the best match is the minimum)
I'm trying to recognize hand positions in OpenCV for Android. I'd like to reduce a detected hand shape to a set of simple lines (i.e. point sequences). I'm using a thinning algorithm to find the skeleton lines of detected hand shapes. Here's an exemplary result (image of my left hand):
In this image I'd like to get the coordinates of the skeleton lines, i.e. "vectorize" the image. I've tried HoughLinesP, but it only produces huge sets of very short lines, which is not what I want.
My second approach uses findContours:
// Get contours
Mat skeletonFrame; // image above
ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(skeletonFrame, contours, new Mat(), Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);

// Find longest contour
double maxLen = 0;
MatOfPoint max = null;
for (MatOfPoint c : contours) {
    double len = Imgproc.arcLength(Util.convert(c), true); // Util.convert converts between MatOfPoint and MatOfPoint2f
    if (len > maxLen) {
        maxLen = len;
        max = c;
    }
}

// Simplify detected contour
MatOfPoint2f result = new MatOfPoint2f();
Imgproc.approxPolyDP(Util.convert(max), result, 5.0, false);
This basically works; however, the contours returned by findContours are always closed, which means that every skeleton line is represented twice.
Exemplary result (gray lines = detected contours, not the skeleton lines of the first image):
So my question is: how can I avoid these closed contours and get only a collection of "single stroke" point sequences?
Did I miss something in the OpenCV docs? I'm not necessarily asking for code; a hint about an algorithm I could implement myself would also be great. Thanks!
I would start with a real hand skeleton as a kinematic model:
find the finger endpoints, the hand/wrist base, and the perimeter boundary (red)
solve the inverse kinematics
for example with CCD, to match the finger endpoints without overlapping the image. This way you should obtain an anatomically correct answer.
For simplification you can use a kinematic model like this.
You should handle male/female/child hands differently (different finger lengths), or use some kind of calibration or measurement, because of the varying finger lengths. As you can see, I skip the hand/wrist base bones; they are not that important. The red outline can be found where the perimeter has a smaller curve radius.
How to solve your problem in your current implementation?
The first thinning approach is better. When you've got the huge set of lines, connect them into polylines, then compute the angle of each line. If two joined lines have a similar angle (up to a threshold), join them. That should do what you want, but do not expect the lines to resemble human bones, especially for curves; the result will differ considerably in both line count and shape.
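A sketch of the angle test for that joining step (plain Java; segments are assumed to be in the {x1, y1, x2, y2} form that HoughLinesP returns):

// returns true if two segments a and b, each given as {x1, y1, x2, y2},
// point in roughly the same direction (angle difference below the threshold)
static boolean similarAngle(double[] a, double[] b, double thresholdRad) {
    double angA = Math.atan2(a[3] - a[1], a[2] - a[0]);
    double angB = Math.atan2(b[3] - b[1], b[2] - b[0]);
    double d = Math.abs(angA - angB) % Math.PI; // direction, ignoring orientation
    return Math.min(d, Math.PI - d) < thresholdRad;
}
// walk the polyline and merge consecutive segments that pass this test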
For a better result you need to use geometrical thinning.
I have no idea if it is present in OpenCV (I do not use this lib). The idea is to find the perimeter line and shift it perpendicularly inwards by some small step, similar to this; stop when the desired width is reached.
When the shifted perimeter leads to a too-thin shape, stop there and connect to the thinned point from the previous step (yellow line). This is all done on vectors (polylines), not on image pixels!!! The width can be computed as the smallest perpendicular distance to any nearby line.
The main idea is to allow the user to recolor a specific wall based on their selection.
Currently I have implemented this feature using cvFloodFill (which helps prepare the mask image); it changes the relative HSV value of the wall so I can retain the edges. The problem with this solution is that it works on color, so all walls are repainted instead of just the single wall selected by the user.
I have also tried Canny edge detection, but it is only able to detect the edges, not convert them into an area.
Please find below the code which I am currently using for the repaint function.
Prepare mask
cvFloodFill(mask, new CvPoint(295, 75), new CvScalar(255, 255, 255,0), cvScalarAll(1), cvScalarAll(1), null, 4, null);
Split channels
cvSplit(hsvImage, hChannel, sChannel, vChannel, null);
Change color
cvAddS(vChannel, new CvScalar(255*(0.76-0.40),0,0,0), vChannel, mask);
How can we detect edges and the corresponding area in the image?
I am looking for a solution which can be something other than OpenCV, but it should be possible on iPhone and Android.
Edit
I am able to achieve somewhat of a result, as in the image below, using the following steps:
cvCvtColor(image, gray, CV_BGR2GRAY);
cvSmooth(gray,smooth,CV_GAUSSIAN,7,7,0,0);
cvCanny(smooth, canny, 10, 250, 5);
There are two problems with this output that I am not sure how to resolve:
1. close nearby edges
2. remove small edges
You could try something like:
Mat imageOut = Mat::zeros(imageIn.rows, imageIn.cols, CV_8UC3);

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(imageIn, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);

// iterate over the top-level contours via the hierarchy "next" links
for (int idx = 0; idx >= 0; idx = hierarchy[idx][0])
{
    Scalar color(rand() & 255, rand() & 255, rand() & 255);
    drawContours(imageOut, contours, idx, color, CV_FILLED, 8, hierarchy);
}
It should draw the walls in different colors. If it works, that means that in "hierarchy" each wall is identified as a contour; you then will have to find out which one the user selected on the touch screen and apply your color-tuning processing.
You may have to change the different parameters of "findContours" (see the linked docs).
You will also need to smooth the input image before the contour detection to avoid being swamped by details or textures.
Hope that helps,
Thomas
I think I might have the solution for you!
There is a sample file called watershed.cpp in OpenCV; just run it and you'll get this result:
You can make your user draw on his screen to discriminate each wall.
Then, if you want something more precise, you can outline the areas (without touching the other lines) like this:
And TADA! :
With a little work you can make it user-friendly (cancel last line, connect areas, etc.)
Hope that helps!
I think you can use the Canny edge detection algorithm to find the edge differences. Some links:
StackOverFlow
StackOverFlow
OpenCV QA
OpenCV
Native Tutorial
I hope this can help you out. Thanks.
Here is some OpenCV4Android code to find the largest contour in a Mat called image, which we'll assume is in the RGBA colour space. To find contours, it's first necessary to threshold, or binarize, the image (convert it to black and white). Using a Gaussian blur on the image before thresholding reduces the number of small contours that are produced. The size parameters of the blur and threshold must be odd numbers, and you can play around to find which values give the best results (here, I've used 7 for both).
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat BW = new Mat();
Mat hierarchy = new Mat();
MatOfPoint largestContour = null; // must be initialized before use in the loop

Imgproc.cvtColor(image, image, Imgproc.COLOR_RGBA2GRAY); // convert to grayscale
Imgproc.GaussianBlur(image, BW, new Size(7, 7), 0);
Imgproc.adaptiveThreshold(BW, BW, 255,
        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 7, 2.0);
Imgproc.findContours(BW, contours, hierarchy, Imgproc.RETR_EXTERNAL,
        Imgproc.CHAIN_APPROX_SIMPLE);

double maxArea = 0;
for (MatOfPoint contour : contours) {
    double area = Imgproc.contourArea(contour);
    if (area > maxArea) {
        maxArea = area;
        largestContour = contour;
    }
}
"There are two problems with this output, not sure how to resolve them: 1. close nearby edges 2. remove small edges"
You can use morphological operations to close the edges. Look at the dilation and closing operators.
You can remove small edges by doing labeling. Count the number of pixels in each region (connected white pixels), and remove any region whose pixel count is below some threshold. I don't use OpenCV, but most libraries have a labeling function that creates an image in which each set of touching pixels of a single color is assigned a unique color in the output.
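For what it's worth, OpenCV 3.x exposes this kind of labeling as connectedComponentsWithStats; a hedged Java sketch (the area threshold is just an example value):

// edges: binary 8-bit image; remove connected regions smaller than minArea
Mat labels = new Mat(), stats = new Mat(), centroids = new Mat();
int n = Imgproc.connectedComponentsWithStats(edges, labels, stats, centroids);
int minArea = 50; // example threshold, tune for your image size
for (int label = 1; label < n; label++) { // label 0 is the background
    if (stats.get(label, Imgproc.CC_STAT_AREA)[0] < minArea) {
        Mat region = new Mat();
        Core.compare(labels, new Scalar(label), region, Core.CMP_EQ);
        edges.setTo(new Scalar(0), region); // erase the small region
    }
}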
I am taking a screenshot with glReadPixels to perform a "cross-over" effect between two images.
On the Marmalade SDK simulator, the screenshot is taken just fine and the "cross-over" effect works a treat:
However, this is how it looks on iOS and Android devices - corrupted: [image: eikona.info]
I always read the screen as RGBA, 1 byte per channel, since the documentation says this format is ALWAYS accepted.
Here is the code used to take the screenshot:
uint8* Gfx::ScreenshotBuffer(int& deviceWidth, int& deviceHeight, int& dataLength) {
    /// width/height
    deviceWidth = IwGxGetDeviceWidth();
    deviceHeight = IwGxGetDeviceHeight();

    int rowLength = deviceWidth * 4; /// data always returned by GL as RGBA, 1 byte each
    dataLength = rowLength * deviceHeight;

    // set the pixel storage alignment before reading the framebuffer
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);

    uint8* buffer = new uint8[dataLength];
    glReadPixels(0, 0, deviceWidth, deviceHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    return buffer;
}

void Gfx::ScreenshotImage(CIwImage* img, uint8*& pbuffer) {
    int deviceWidth, deviceHeight, dataLength;
    pbuffer = ScreenshotBuffer(deviceWidth, deviceHeight, dataLength);

    img->SetFormat(CIwImage::ABGR_8888);
    img->SetWidth(deviceWidth);
    img->SetHeight(deviceHeight);
    img->SetBuffers(pbuffer, dataLength, 0, 0);
}
That is a driver bug. Simple as that.
The driver got the pitch of the surface in video memory wrong. You can clearly see this in the upper lines. The garbage you see at the lower part of the image is memory where the driver thinks the image is stored, but which actually holds different data - textures or vertex data, maybe.
And sorry, I know of no way to fix that. You may have better luck with a different surface format or by enabling/disabling multisampling.
In the end, it was a lack of memory: "new uint8[dataLength]" never returned a valid pointer, so the whole process became corrupted.
TomA, your idea of clearing the buffer actually helped me solve the problem. Thanks.
I don't know about Android or the SDK you're using, but on iOS, when I take a screenshot I have to make the buffer the size of the next POT texture, something like this:
int x = NextPot((int)screenSize.x * retina);
int y = NextPot((int)screenSize.y * retina);
void* buffer = malloc(x * y * 4);
glReadPixels(0, 0, x, y, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
The function NextPot just gives me the next POT size, so if the screen size was 320x480, x and y would be 512x512.
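(NextPot is my own helper; one possible implementation, sketched in Java, just rounds up to the next power of two:)

// round v up to the next power of two (returns v if it already is one)
static int NextPot(int v) {
    int p = Integer.highestOneBit(Math.max(1, v));
    return (p == v) ? v : p << 1;
}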
Maybe what you're seeing is the wraparound of the buffer, because glReadPixels is expecting a bigger buffer size?
This could also be a reason for it working in the simulator and not on the device: my desktop graphics card doesn't have the POT size limitation, and I get a similar (weird-looking) result.
What I assume is happening is that you are trying to use glReadPixels on a window that is covered. If the view area is covered, the result of glReadPixels is undefined.
See "How do I use glDrawPixels() and glReadPixels()?" and "The Pixel Ownership Problem".
As said here:
The solution is to make an offscreen buffer (FBO) and render to the FBO.
Another option is to make sure the window is not covered when you use glReadPixels.
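If you go the FBO route, a rough sketch using Android's GLES20 bindings (the texture format and filters here are just reasonable defaults, not a requirement):

// render into an offscreen FBO so glReadPixels is not affected
// by window pixel ownership
int[] fbo = new int[1], tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
// ... draw the scene, then call glReadPixels while the FBO is bound ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);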
I am taking screenshots of my Android game without any problems on Android devices using glReadPixels.
I am not sure yet what the problem is in your case; I need more information.
So let's start:
I would recommend that you not specify the PixelStore format. I am worried about your alignment of 1 byte - do you really "use it"/"know what it does"? It seems you get exactly what you specify: one extra byte (look at your image; there is one extra pixel all the time!) instead of a fully packed image. So try to remove this:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
I am not sure about the C code, as I have only worked in Java, but this looks like a possible problem point:
// width/height
deviceWidth = IwGxGetDeviceWidth();
deviceHeight = IwGxGetDeviceHeight();
Are you getting the device size? You should use your OpenGL surface size instead, like this:
public void onSurfaceChanged(GL10 gl, int width, int height) {
    int surfaceWidth = width;
    int surfaceHeight = height;
}
What are you doing next with the captured image? Are you aware that the memory block you get from OpenGL is RGBA, but almost all non-OpenGL image operations expect ARGB?
For example, here in your code you expect alpha to be the first byte, not the last:
img->SetFormat(CIwImage::ABGR_8888);
In case 1, 2 and 3 did not help, you might want to save the captured screen to the phone's SD card to examine later. I have a program that converts an OpenGL RGBA block into a normal bitmap to examine on a PC; I can share it with you.
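For reference, a rough Java sketch of that RGBA-to-ARGB conversion, including the vertical flip glReadPixels needs (GL rows come out bottom-up); "buffer", "width" and "height" are assumed to come from the capture step:

// buffer: byte[] of RGBA pixels from glReadPixels, bottom row first
int[] pixels = new int[width * height];
for (int y = 0; y < height; y++) {
    int srcRow = (height - 1 - y) * width * 4; // flip vertically
    for (int x = 0; x < width; x++) {
        int i = srcRow + x * 4;
        int r = buffer[i] & 0xFF, g = buffer[i + 1] & 0xFF,
            b = buffer[i + 2] & 0xFF, a = buffer[i + 3] & 0xFF;
        pixels[y * width + x] = (a << 24) | (r << 16) | (g << 8) | b; // ARGB
    }
}
Bitmap bmp = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.ARGB_8888);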
I don't have a solution for fixing glReadPixels. My suggestion is that you change your algorithm to avoid the need to read the data back from the screen.
Take a look at this page. These guys have done a page-flip effect entirely in Flash. It's all in 2D; the illusion is achieved just with shadow gradients.
I think you can use a similar approach, but a little better, in 3D. Basically you have to split the effect into three parts: the front-facing top page (clouds), the bottom page (the girl), and the back side of the front page. You have to draw each part separately. You can easily draw the front-facing top page and the bottom page together on the same screen; you just need to invoke the drawing code for each with a preset clipping region aligned with the split line where the top page bends. Once you have the top and bottom page sections drawn, you can draw the gray back-facing portion on top, also aligned to the split line.
With this approach the only thing you lose is the little bit of deformation where the clouds image starts to bend up; with my method, of course, no deformation will occur. Hopefully that will not diminish the effect; I think the shadows are far more important for conveying depth, and they will hide this minor inconsistency.