I am trying to detect the hexagon fields of a board-game map with OpenCV for Android. The map looks something like this:
[Sample map]
I tried getting contours using only the Value channel from HSV and managed to get some of the hexagons, but unfortunately not all of them; I mostly had trouble detecting hexagons that had rivers or roads passing through them.
I managed to get something like this:
[Detected hexagons]
I even tried making the borders thicker, but it didn't help much.
To detect all of the hexagons, I thought of averaging the approximate size of the detected ones and then going pixel by pixel, trying to detect a change to a color close to black. Later I would like to detect hexagons even in photos of a map, so then I couldn't really rely on the size of the other hexagons.
What do you think would be the best way to solve this problem?
EDIT:
Thank you! I have just begun implementing your idea and it works great; for now I've got the horizontal lines:
You are dealing with a regular grid, so you only need to detect a few hexagons, or even just one, to compute all the others. More is better, because you'll be able to compute a mean and it will be more accurate. To find the contours, it might be useful to look at the color gradient.
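To illustrate the "detect one, compute the rest" idea: once one hexagon's center and circumradius are known, the remaining cell centers follow from hex-grid geometry. A minimal sketch, assuming flat-top hexagons and a grid anchored at the detected cell (the function name and layout are my own assumptions, not the poster's code):

```python
import math

def hex_grid_centers(x0, y0, r, cols, rows):
    """Given the center (x0, y0) and circumradius r of one detected
    flat-top hexagon, compute the centers of all cells in a
    cols x rows grid anchored at that hexagon."""
    dx = 1.5 * r              # horizontal distance between column centers
    dy = math.sqrt(3) * r     # vertical distance between row centers
    centers = []
    for c in range(cols):
        # odd columns are shifted down by half a cell
        y_off = dy / 2 if c % 2 else 0.0
        for row in range(rows):
            centers.append((x0 + c * dx, y0 + y_off + row * dy))
    return centers
```

With several detected hexagons you would average their radii and fitted offsets before generating the grid, which is why detecting more than one improves accuracy.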
So everyone, this is my first question on Stack Overflow.
I have been working with Android and OpenCV for a month and was able to successfully implement template matching. Now the next task is to detect all the rectangles in the image and get their coordinates (I actually want the color of every rectangle) for research purposes. Kindly help. I tried using the Hough transform with Canny edge detection, but unfortunately it doesn't detect the small rectangles, which is the primary concern now.
Thank you! [Have to detect all the rectangles, small and big ones]
So I'm really proud to post an answer to my own question. Hope this helps someone in the future. There are obviously a lot of ways to do this, but the most accurate was to use template matching on the main image to find the coordinates of the biggest rectangle; since all the other rectangles are equidistant from the corner points, the center of every rectangle can be found, which gives the desired colors.
The thin strip in the middle was also recognized by template matching, and then a gradient operator revealed the various rectangles: every peak in the gradient represents a rectangle boundary.
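The gradient-peak step can be sketched like this. This is a plain-Python illustration of the idea, not the poster's actual code; the profile values and threshold are made up:

```python
def gradient_peaks(profile, threshold):
    """Find positions where the absolute gradient of a 1-D intensity
    profile exceeds a threshold and is a local maximum; each such
    peak marks a rectangle boundary along the strip."""
    grad = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    peaks = []
    for i in range(1, len(grad) - 1):
        if grad[i] >= threshold and grad[i] >= grad[i - 1] and grad[i] > grad[i + 1]:
            peaks.append(i)
    return peaks
```

In practice the profile would be a row or column of pixel intensities sampled across the strip, and consecutive peak pairs delimit one rectangle each.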
Kindly comment for the code; for research purposes I cannot post it publicly.
I have been working with object detection/recognition in images captured from an Android device camera recently.
The objects I am trying to detect are all kinds of buttons that look like this:
Picture of buttons
So far I have been trying with OpenCV and also with the metaio SDK. Results:
OpenCV was always detecting something, but gave lots of false hits, and it is too much work to collect all the pictures for what I have in mind. I have tried three ways with OpenCV:
Feature detection (SURF, ORB and so on) -> was way too slow, and there are not enough features on my objects.
Template matching -> seems to only work when the template is an exact part of the scene image.
Training classifiers -> this worked best so far, but it is too much work for my goal and still gives too many false detections.
The metaio SDK worked OK when I took my reference images (the icon part of each button) out of a picture like the one shown above, then printed the full image and pointed my Android device camera at the printout. But when I tried with the real buttons (not a picture of them), almost nothing was detected anymore. The metaio documentation says that reference images need to have lots of features and color differences and should not consist only of white text. Well, as you can see, my reference images are exactly the opposite of what they should be. But that's just how the buttons look ;)
So, my question is: does anyone have a suggestion about what else I could try in order to detect and recognize each of those buttons when I point my Android camera at them?
As a suggestion, you could try the following approach:
Class-Specific Hough Forest for Object Detection
They provide a C code implementation. Compile and run it and see the results, then replace the positive and negative training images with your own, according to the following rules:
For each training image you will need to define the following 3 areas:
target region (the image you provided is a good representation of a target region)
nearby working area (this area holds information about your target's relative location); an area 3-5 times the size of the target region, around the target, makes a good working area
everything outside the above can be used as negative images
then,
Use "many" positive images (100-1000) at different viewing angles (-30 - +30 degrees) and various distances.
You will have to make assumptions about the viewing angles and distances at which your users will use the application. The stricter they are, the better performance you will get. A simple "hint" camera overlay can give people a good idea of what you expect the working area to be.
Use a negative image set a few times (3-5) larger, including pictures of things that might appear in the camera but should not contribute any target-position information.
Do not use big images; somewhere around 100-300 px in width should be enough.
Assemble the database and modify the configuration file that the code comes with. Run the program and see whether the performance is OK for your needs.
The program will return a voting map for the object you are looking for. Apply a Gaussian blur to it, then threshold it (you will have to make another assumption for this threshold value).
The extracted mask will define the area you are looking for. The size of the masked region can give you a good estimate of the object's scale. Given this information, it will be much easier to select the proper template and perform template matching.
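The blur-threshold-mask step can be sketched in plain Python. This is only an illustration of the idea: a 3x3 box blur stands in for the Gaussian, and the voting map and threshold are made up:

```python
def mask_from_votes(votes, threshold):
    """Smooth a 2-D voting map with a 3x3 box blur (standing in for
    the Gaussian), threshold it, and return the bounding box of the
    resulting mask, whose size estimates the object scale."""
    h, w = len(votes), len(votes[0])
    blurred = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += votes[yy][xx]
                        n += 1
            blurred[y][x] = acc / n
    pts = [(x, y) for y in range(h) for x in range(w) if blurred[y][x] >= threshold]
    if not pts:
        return None  # no votes exceeded the threshold
    xs, ys = [p[0] for p in pts], [p[1] for p in pts]
    return (min(xs), min(ys), max(xs), max(ys))
```

The width and height of the returned box are what you would feed into template selection as the scale estimate.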
(Also some thoughts) You can also try a small trick: use the goodFeaturesToTrack function with the mask you got to obtain a set of locations, and compare them with the corresponding locations on a template. Construct an SSD (sum of squared differences) and solve it for the rotation, scale and translation parameters by minimizing the alignment error (though I'm not sure this approach will work).
I am trying to implement a map (not Google Maps) like the one in the image below, which will have hypothetical regions (not administrative) with different colors to indicate population density. The regions will also be clickable, and clicking one will open a small info overlay.
For now I have sliced images for each region in multiple colors (which color to use is determined from an API request). But I am not exactly sure how I can implement this in Android. I've been doing research for the past couple of days but couldn't find anything satisfactory so far.
Things that I am having trouble implementing:
Putting together all those images to form the map
Detecting "tap/click" events in the regions
In a brief conversation, a guy mentioned something about a "greyscale overlay map that is not visible to the user and that determines the right area by testing against the greyscale color index"; frankly, I didn't understand what he meant.
Here's what I am trying to achieve:
Any help or pointer to the right direction would be of great help.
Thanks for your time.
Check out www.trimaps.com, I think this is what you want. Sadly it isn't free.
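The "greyscale overlay map" idea mentioned in the question can also be sketched directly: keep a hidden index image in which every region is painted with a unique grey value, and on a tap, read the pixel under the finger to find the region. A minimal illustration (the data and names here are invented, not from any library):

```python
def region_at(index_map, regions, x, y):
    """Look up which map region a tap at (x, y) hits, using a hidden
    greyscale index image where each region is painted with a unique
    grey value. Returns None if the tap hit the background."""
    grey = index_map[y][x]   # row-major: index_map[row][col]
    return regions.get(grey)

# Tiny 3x3 "index image": value 10 = one region, 20 = another, 0 = background.
index_map = [
    [0, 0, 10],
    [0, 10, 10],
    [20, 20, 20],
]
regions = {10: "north", 20: "south"}
```

In Android you would load the index image as a Bitmap, scale the touch coordinates to its size, and call getPixel; the principle is exactly this lookup.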
What would be the best way to compare a gesture made on an Android device's screen with a stored gesture? For example, if in my application I draw a triangle with my finger, the screen should turn blue, and if I draw a circle, the screen should turn red. How could that be done? The only thing I have been able to think of so far is to somehow generate an image file and compare it to an image of a triangle or circle, checking for similarities. But that wouldn't really account for differently sized or offset shapes. Any ideas on how this could be implemented? Thanks!
There is no need to compare/match the shape of a gesture against an image. The better way is to mathematically determine which of the recognized shapes the user drew. http://developer.android.com/resources/articles/gestures.html provides a great reference for implementing gestures.
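To make the "mathematically" part concrete: the usual trick (Android's gesture library does something similar internally) is to normalize both paths for position and size and then measure point-to-point distance, which directly fixes the "different sized or offset shapes" problem from the question. A rough sketch, assuming both gestures have already been resampled to the same number of points:

```python
import math

def normalize(points):
    """Translate a gesture path so its centroid is at the origin and
    scale it to unit size, so comparison ignores position and size."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    scale = max(math.hypot(x, y) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def gesture_distance(a, b):
    """Mean point-to-point distance between two equally sampled,
    normalized gesture paths; smaller means more similar."""
    a, b = normalize(a), normalize(b)
    return sum(math.hypot(ax - bx, ay - by)
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)
```

You would pick the stored template with the smallest distance, rejecting the match if even the best distance exceeds some threshold.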
HTH,
Akshay
I'm fairly new to the Android platform and was wondering if I could get some advice for my current head scratcher:
I'm making an app in which one view will need an image, scrollable on one axis, with a load of selectable points over the top of it. Each point needs to be positionable on the x and y axes (unlikely to change once the app is running, but I'll need to fine-tune the positions while I'm developing it).
I'd like to let the user select each point and have a graphic drawn at the point the user selected, or just draw a graphic at one or more points without user intervention.
I thought that for the selectable points I could extend the checkbox with a custom image for the selected state; does that sound right, or is there a better way of doing this? Is there anything I can read up on for this? I can't seem to find anything on the net about replacing the default images.
I was going to use AbsoluteLayout, but I see that it's been deprecated and I can't find anything to replace it.
Can anyone give me some code or advice on where to read up on what I need to do?
Thank you in advance
This really feels like something you should be doing with the Canvas and 2D graphics, rather than trying to twist the widget framework to fit.
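With the Canvas approach, the core of the interaction is just drawing the points yourself and, on each touch, finding which point (if any) lies within a tap radius. The geometry is language-agnostic; here it is sketched in Python (names and the radius value are my own, not from the Android API):

```python
import math

def hit_test(points, tx, ty, radius):
    """Return the index of the first point within `radius` pixels of a
    touch at (tx, ty), or None. This is the per-touch lookup a custom
    Canvas view would perform in its touch handler."""
    for i, (px, py) in enumerate(points):
        if math.hypot(px - tx, py - ty) <= radius:
            return i
    return None
```

In the Android version you would run this in onTouchEvent, toggle the hit point's selected flag, and call invalidate() so onDraw repaints it with the selected graphic.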