I am working on an app that compares histograms in the hope of matching faces.
The app allows the user to take a photo, select a few key points in the image and then the app draws circles around those points. I then detect the circles using the OpenCV Hough Circle Transform functions. Up to this point the app works great.
What I need to implement now is one of two options:
1. Detect the circles and create a separate histogram for the area inside each circle.
2. Detect the circles, black out the area(s) around them, and create one histogram.
I'm leaning towards method 2, but I'm not sure how to mask/color/paint the area outside of the circles after they are detected. Any input would be appreciated. Thanks.
Instead of painting the area outside the circles in the original image, why not create a new image and copy the content of the circles to it?
Another point is that histograms are invariant to translation, so it does not matter whether you copy the circles to their exact original locations in the new image.
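In fact, for a single histogram over just the circle interiors (the asker's method 2), note that calcHist accepts a mask, so nothing needs to be painted or copied at all. A minimal sketch with the OpenCV Java bindings (OpenCV 3+; in 2.4 the drawing call lives in Core instead of Imgproc); image and circlesXYR are placeholder names for your grayscale Mat and your detected circles:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.Arrays;

    // start from an all-black mask and paint a filled white disc per circle;
    // circlesXYR is a hypothetical list of {x, y, r} triples from Hough
    Mat mask = Mat.zeros(image.size(), CvType.CV_8UC1);
    for (float[] c : circlesXYR) {
        Imgproc.circle(mask, new Point(c[0], c[1]), Math.round(c[2]),
                new Scalar(255), -1);
    }

    // one histogram over everything inside the circles; outside pixels are ignored
    Mat hist = new Mat();
    Imgproc.calcHist(Arrays.asList(image), new MatOfInt(0), mask, hist,
            new MatOfInt(256), new MatOfFloat(0, 256));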
Do clarify if I did not answer your question, or if you have other questions now.
So everyone, this is my first question on Stack Overflow.
I have been working with Android and OpenCV for a month and was able to successfully implement template matching. Now the next task is to detect all the rectangles in the image and get their coordinates (I actually want the color of every rectangle) for research purposes. Kindly help. I tried using the Hough transform with Canny edge detection, but unfortunately it doesn't detect the small rectangles, which are the primary concern right now.
Thank you! [Image: have to detect all the rectangles, small and big ones]
So I'm really proud to post an answer to my own question; hope this helps someone in the future. There are obviously a lot of ways to do this, but the most accurate was to use template matching on the main image to find the coordinates of the biggest rectangle; since all the other rectangles are equidistant from the corner points, the center of every rectangle can be computed, which gives the desired colors.
The thin strip in the middle was also located by template matching, and a gradient operator was then run across it; every peak in the gradient corresponds to one of the rectangles.
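The matching call itself is standard OpenCV; a generic sketch with the Java bindings, where image and template are placeholder Mats rather than my research data:

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    // slide the template over the image and score every position
    Mat result = new Mat();
    Imgproc.matchTemplate(image, template, result, Imgproc.TM_CCOEFF_NORMED);

    // for TM_CCOEFF_NORMED the best match is the maximum of the result map
    Core.MinMaxLocResult mm = Core.minMaxLoc(result);
    Point topLeft = mm.maxLoc;
    // the centers of the remaining rectangles follow from their fixed offsets
    // relative to topLeft, and image.get(row, col) reads out each color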
Kindly comment for the code. For research purposes I cannot post it publicly.
I am trying to break an image into shattered pieces, but I cannot work out the logic. Please show me a way to achieve this.
I hope the image below conveys my idea: breaking the bitmap into shattered pieces, like triangles or any other shape. Later I will shuffle those bitmap pieces and give the end user a puzzle, to rearrange them in order.
OK, if you want to rearrange the pieces (like in a jigsaw) then each triangle/polygon will have to appear in a rectangular bitmap with a transparent background, because that's how drawing bitmaps works in Java/Android (and most other environments).
There is a way to do this sort of masking in Android; it's called Porter-Duff compositing. The Android documentation is poor to non-existent, but there are many articles on its use in Java.
Basically you create a rectangular transparent bitmap just large enough to hold your cut-out. Then you draw onto this bitmap a filled triangle (with transparency non-zero) representing the cut-out. It can be any colour you like. Then draw the cutout on top of the source image at the correct location using the Porter-Duff mode which copies the transparency data but not the RGB data. You will be left with your cutout against a transparent background.
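A rough sketch of one way to implement this, assuming the cutout bitmap is the same size as the source (as recommended below). Here the source is drawn over the triangle mask with SRC_IN, the mirror image of the mode described above; the corner coordinates are placeholders:

    import android.graphics.*;

    // cut a triangular piece out of src against a transparent background
    static Bitmap cutTriangle(Bitmap src, float x1, float y1, float x2, float y2,
                              float x3, float y3) {
        Bitmap cutout = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
                Bitmap.Config.ARGB_8888);   // starts fully transparent
        Canvas canvas = new Canvas(cutout);

        // the filled triangle is the mask; its color does not matter
        Path triangle = new Path();
        triangle.moveTo(x1, y1);
        triangle.lineTo(x2, y2);
        triangle.lineTo(x3, y3);
        triangle.close();
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        canvas.drawPath(triangle, paint);

        // SRC_IN keeps the drawn image only where the mask alpha is non-zero
        paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
        canvas.drawBitmap(src, 0, 0, paint);
        return cutout;
    }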
This is much easier if you make the cutout bitmap the same size as the source image. I would recommend getting this working first. The downsides of this are twofold. Firstly you will be moving around large bitmaps to move around small cutouts, so the UI will be slower. Secondly you will use a lot of memory for bitmaps, and on some versions of Android you may well run out of memory.
But once you have it working for bitmaps the same size as the source image, it should be pretty straightforward to change it to work with smaller bitmaps. Most of your "mucking about" will be in finding and using the correct Porter-Duff mode. As there are only 16 of them, it's no great effort to try them all and see what they do. They may also suggest other puzzle ideas.
I note your cutout sections are all polygons. With only a tiny amount of extra complexity, you could make them any shape you like, including looking like regular jigsaw pieces. To do this, use the Path class to define the shapes used for cutouts. The Path class works fine with Porter-Duff compositing, allowing cutouts of almost any shape you can imagine. I use this extensively in one of my apps.
I am not sure what puzzle game you are trying to make, but if there are no special requirements on the shattered pieces,
only on the total number of them, which together must span the whole rectangle, you may try the following steps.
The idea is based on the fact that n non-intersecting lines, each with its two endpoints on any of the 4 edges of the rectangle, form n+1 disjoint areas.
1. Create an array and store the line information.
2. Repeat n times: randomly pick two endpoints that lie on the 4 edges of the rectangle.
2a. Try to join these two points: start from either endpoint; if you hit an intersection with a line you drew before, stop at the intersection, otherwise stop at the other endpoint.
3. You will get n+1 disjoint areas from the n lines drawn.
You may constrain the choice of lines if you have special requirements for the areas.
For implementation details, you may want to have a look at the dot product (for the intersection tests) and Euler's formula for planar graphs (which justifies the n+1 count).
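A small sketch of the geometric building blocks in Java (android.graphics.PointF). The answer mentions the dot product; the cross-product form below is the variant I'd reach for in the segment test of step 2a, where each segment's endpoints must lie on opposite sides of the other segment:

    import android.graphics.PointF;
    import java.util.Random;

    // pick a random point on one of the 4 edges of a w x h rectangle (step 2)
    static PointF randomEdgePoint(float w, float h, Random rnd) {
        switch (rnd.nextInt(4)) {
            case 0:  return new PointF(rnd.nextFloat() * w, 0);   // top edge
            case 1:  return new PointF(rnd.nextFloat() * w, h);   // bottom edge
            case 2:  return new PointF(0, rnd.nextFloat() * h);   // left edge
            default: return new PointF(w, rnd.nextFloat() * h);   // right edge
        }
    }

    // cross product of (b - a) and (c - a); its sign tells which side of the
    // line a->b the point c lies on, the core of the intersection test in 2a
    static float cross(PointF a, PointF b, PointF c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }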
I need pixel-perfect collision detection for my Android game. I've written some code to detect collisions between "normal" (unrotated) bitmaps, and it works fine. However, I can't get it to work for rotated bitmaps. Unfortunately, Java doesn't have a class for rotated rectangles, so I implemented one myself. It holds the positions of the four corners relative to the screen and describes the exact location/layer of its bitmap; I called it "itemSurface". My plan for solving the detection was to:
1. Detect the intersection of the different itemSurfaces.
2. Calculate the overlapping areas.
3. Map these areas into the local coordinates of each itemSurface/bitmap.
4. Compare each pixel of one overlap region with the corresponding pixel of the other bitmap.
Well, I'm having trouble with the first and second steps. Does anybody have an idea, or some code? Maybe there is already something in the Java/Android libraries and I just didn't find it.
I understand that you want collision detection between rectangles rotated in different ways. You don't need to calculate the overlapping area, and comparing every pixel would be inefficient anyway.
Implement a static boolean isCollision function that tells you whether there is a collision between one rectangle and another. First take a piece of paper and do some geometry to work out the exact formulas. For performance reasons, do not wrap a rectangle in some Rectangle class; just use primitive types like doubles.
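If you'd rather not derive the formulas from scratch, the separating axis theorem is the standard test for convex shapes such as rotated rectangles. A sketch on primitive arrays (the helper names are mine; each rectangle is given as parallel arrays of its 4 corner coordinates in order):

    // Separating Axis Theorem for two convex quadrilaterals
    static boolean isCollision(double[] ax, double[] ay, double[] bx, double[] by) {
        return !hasSeparatingAxis(ax, ay, bx, by) && !hasSeparatingAxis(bx, by, ax, ay);
    }

    static boolean hasSeparatingAxis(double[] px, double[] py, double[] qx, double[] qy) {
        for (int i = 0; i < px.length; i++) {
            int j = (i + 1) % px.length;
            double nx = -(py[j] - py[i]);   // normal of edge i->j is the candidate axis
            double ny = px[j] - px[i];
            double[] p = project(px, py, nx, ny);
            double[] q = project(qx, qy, nx, ny);
            if (p[1] < q[0] || q[1] < p[0]) return true;   // gap: a separating axis
        }
        return false;
    }

    // [min, max] of the corners' scalar projections onto the (unnormalized) axis
    static double[] project(double[] xs, double[] ys, double nx, double ny) {
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (int i = 0; i < xs.length; i++) {
            double d = xs[i] * nx + ys[i] * ny;
            if (d < min) min = d;
            if (d > max) max = d;
        }
        return new double[] { min, max };
    }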
Then (pseudocode):

    for (every rectangle a)
        for (every rectangle b)
            if (a != b && isCollision(a, b))
                bounce(a, b)
This is O(n^2), where n is the number of rectangles; there are better algorithms if you need more performance. The bounce function changes the velocity vectors of the moving rectangles to imitate a collision. If the weights of the objects are the same (you can approximate weight by the size of the rectangles), you just need to swap the two velocity vectors, as sketched below.
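With the rectangles kept in primitive parallel arrays (a layout assumed here, not prescribed above), the equal-weight swap could look like:

    // equal weights: an elastic collision reduces to swapping velocity vectors
    static void bounce(double[] vx, double[] vy, int a, int b) {
        double tx = vx[a], ty = vy[a];
        vx[a] = vx[b];  vy[a] = vy[b];
        vx[b] = tx;     vy[b] = ty;
    }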
To bounce elements correctly you may need to store an auxiliary table, boolean alreadyBounced[][], to mark which rectangles do not need a change of their vectors after a bounce (collision) because they were already bounced.
One more tip:
If you are making a game on Android you have to be careful not to allocate memory during gameplay, because it will invoke the GC more often, which takes a long time and slows down your game. I recommend watching this video and related ones. Good luck.
I'm making an android app that takes an image of a billiards game in progress and detects the positions of the various balls. The image is taken from someone's phone, so of course I don't have a perfect overhead view of the table. Right now I'm using houghcircles to find the balls, and it's doing an ok job, but it seems to miss a few balls here and there, and then there are the false positives.
My biggest problem right now is: how do I cut down on the false positives found outside the table? I'm using an ROI to cut off the top portion of the image because it's mostly wasted space, but I can't make it any smaller or I risk cutting off portions of the table, since it's a trapezoidal shape. My current idea is to overlay the guide that the user sees when taking the picture on top of the image, but the problem is that I don't know what the resolution of their cameras will be, so the overlay might cover up the wrong spots. Ideally I think I would want to use HoughLines, but when I tried it my app crashed, from what I believe was a lack of memory. Any ideas?
Here is a link to the results I'm getting:
http://graphiquest.com/cvhoughcircles.html
Here is my code:
    // load the photo and convert it to a single-channel grayscale image
    IplImage img = cvLoadImage("/sdcard/DCIM/test/picture" + i + ".jpg", 1);
    IplImage gray = opencv_core.cvCreateImage(
            opencv_core.cvSize(img.width(), img.height()),
            opencv_core.IPL_DEPTH_8U, 1);
    cvCvtColor(img, gray, opencv_imgproc.CV_RGB2GRAY);

    // crop 15% off the top and 5% off the bottom, then blur to suppress noise
    cvSetImageROI(gray, cvRect(0, (int) (img.height() * .15), img.width(),
            (int) (img.height() - img.height() * .20)));
    cvSmooth(gray, gray, opencv_imgproc.CV_GAUSSIAN, 9, 9, 2, 2);

    // run the Hough circle transform; the storage holds the detected circles
    CvMemStorage circles = CvMemStorage.create();
    CvSeq seq = cvHoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2.5d,
            (double) gray.height() / 30, 70d, 100d, 0, 80);

    // draw each detected circle and its center onto the grayscale image
    for (int j = 0; j < seq.total(); j++) {
        CvPoint3D32f point = new CvPoint3D32f(cvGetSeqElem(seq, j));
        CvPoint center = new CvPoint(Math.round(point.x()), Math.round(point.y()));
        int radius = Math.round(point.z());
        cvCircle(gray, center, 3, CvScalar.GREEN, -1, 8, 0);
        cvCircle(gray, center, radius, CvScalar.BLUE, 3, 8, 0);
    }

    // overwrite any previous debug output and save the annotated image
    String path = "/sdcard/DCIM/test/";
    File photo = new File(path, "picture" + i + "_2.jpg");
    if (photo.exists()) {
        photo.delete();
    }
    cvSaveImage(path + "picture" + i + "_2.jpg", gray);
There are some very helpful constraints you could apply. In addition to doing a rectangular region of interest, you should mask your results with the actual trapezoidal shape of the pool table. Use the color information of the image to find the pool table region. You know that the pool table is a solid color. It doesn't have to be green - you can use some histogram techniques in HSV color space to find the most prevalent color in the image, perhaps favoring pixels toward the center. It's very likely to detect the color of the pool table. Select pixels matching this color, perform morphological operations to remove noise, and then you can treat the mask as a contour, and find its convexHull. Fill the hull to remove the holes created by the pool balls.
What I've said so far should suggest a different approach than Hough circles. Hough circles is probably not working too well since the billiard balls are not evenly illuminated. So, another way to find billiard balls is to subtract the pool table color mask from its convexHull. You'll be left with the areas of the table that are obscured by balls.
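A sketch of that masking pipeline using the newer OpenCV Java bindings (the question's code uses the older JavaCV wrappers, so treat this as the idea rather than drop-in code; img is a placeholder for the photo as a BGR Mat):

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;
    import java.util.Arrays;

    // find the most prevalent hue, assumed here to be the table cloth
    Mat hsv = new Mat();
    Imgproc.cvtColor(img, hsv, Imgproc.COLOR_BGR2HSV);
    Mat hueHist = new Mat();
    Imgproc.calcHist(Arrays.asList(hsv), new MatOfInt(0), new Mat(), hueHist,
            new MatOfInt(180), new MatOfFloat(0, 180));
    double clothHue = Core.minMaxLoc(hueHist).maxLoc.y;   // peak hue bin

    // select pixels near that hue, then clean the mask up morphologically
    Mat mask = new Mat();
    Core.inRange(hsv, new Scalar(clothHue - 10, 60, 60),
            new Scalar(clothHue + 10, 255, 255), mask);
    Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(7, 7));
    Imgproc.morphologyEx(mask, mask, Imgproc.MORPH_OPEN, kernel);
    // ...then findContours, convexHull on the largest contour, and
    // fillConvexPoly to close the holes left by the balls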
I've thought about working on this problem, too, since I play pool and snooker.
A few points:
1. Judging from the Hough circle fits, it looks like you're not filtering the edge points, or your threshold for edge strength isn't high enough. Are you using a simple binary indicator for edge points, or are you selecting edge points based on edge strength?
2. Can you work in RGB space? That would help with detecting the table bed and the rails, and also with identifying the balls. A blue blob on the table bed could be the 2-ball, the 10-ball, or maybe a hunk of chalk.
3. In your parameter space, you should be able to limit the search to circles within a very narrow radius range. This is helped in part by the next point.
4. Detect the table surface and the rails. A Stroke Width Transform could help you find the rails, especially if you search in a color plane (green) in which the rails will have high contrast. You can also use the six pockets (or at least three of them) to help identify the pose (position and orientation) of the table.
5. Once the rails are detected, you can use a perspective transform (homography) to correct the perspective distortion (see the sketch after this list). You'll need to do this anyway to place the balls with any sort of accuracy, especially if you want the placement to satisfy a serious pool player such as someone who plays One Pocket or Straight Pool. Once you have that transform, you can set fairly tight tolerances for the radius in your Hough parameter space.
6. Once you've detected the table bed, you can perform an initial segmentation (that is, region labeling or blob finding) and search only for blobs of a certain area and roundness.
7. A strong, even, diffuse overhead light could help eliminate shadows.
8. You can help filter edge points by accepting (or at least favoring) edge points whose gradients point toward other edge points with parallel gradients. If a local collection of edge-point pairs "point" at each other via their edge gradients, they are good candidates for circle detection.
9. Once you've detected a candidate ball, perform further processing to accept or reject it. A ball should have a relatively uniform hue (the cue ball, balls 1 through 8, or a stripe viewed from the proper angle), or it should show a detectable color stripe plus white. The ball surface will also not be highly textured like the wood grain of the table.
10. Offer an option for the user to take two pictures from slightly different angles. You then get two chances to find the balls, and could conceivably solve the correspondence problem of matching the tables and balls across the two images to help locate the balls in the 2D space of the table bed.
11. Consider a second algorithm such as normalized cross-correlation (simple template matching) to help identify balls, or at least likely ball locations.
12. Insist that the center point of the image lie somewhere within the table bed. This helps you identify the positions of the rails, since you can then search radially outward for their edges; once four (or even just three) rails are found, you can reject edge points at radial distances beyond them.
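For point 5, a sketch of the perspective correction once the four rail corners are known (OpenCV Java bindings; the corner Points, and the output size W x H you choose, are assumptions):

    import org.opencv.core.*;
    import org.opencv.imgproc.Imgproc;

    // warp the camera image to a W x H overhead view, given the four detected
    // rail corners in clockwise order starting at the top-left (placeholders)
    static Mat toOverhead(Mat img, Point tl, Point tr, Point br, Point bl,
                          int W, int H) {
        MatOfPoint2f srcCorners = new MatOfPoint2f(tl, tr, br, bl);
        MatOfPoint2f dstCorners = new MatOfPoint2f(
                new Point(0, 0), new Point(W, 0), new Point(W, H), new Point(0, H));
        Mat homography = Imgproc.getPerspectiveTransform(srcCorners, dstCorners);
        Mat overhead = new Mat();
        Imgproc.warpPerspective(img, overhead, homography, new Size(W, H));
        return overhead;
    }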
Good luck! It's a fun problem.
EDIT:
I was reading another StackOverflow post and came across this paper, which gives a much more thorough introduction to the technique I suggested for filtering edge points (item 8):
"Fast Circle Detection Using Gradient Pair Vectors" by Rad, Faez, and Qaragozlou
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.121.9956
I haven't implemented their algorithm myself yet, but it looks promising. Here's the post where the paper was mentioned:
Three Dimensional Hough Space