I am working on an image processing project. I need to segment only the required object from the image and make the background white. To do this I detect the edges in the image, find the contours, draw the contour with the maximum area, and finally copy it onto a white background image.
The issue is that the detected edges are broken. On closer inspection I realized the edges break exactly where the foreground and background intensities are the same: there is no intensity change at those points, so no edge is detected. How do I fix this?
Actually, to fix this I am calculating the histogram of the image and then finding the median from the histogram, but I am unable to get this working.
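In case it helps, here is a minimal sketch of the usual way a histogram median is turned into Canny thresholds (the common "auto-Canny" heuristic), assuming OpenCV's Java bindings; the `sigma` fudge factor, the threshold formula, and the class/method names are illustrative choices, not something taken from the question:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfFloat;
import org.opencv.core.MatOfInt;
import org.opencv.imgproc.Imgproc;
import java.util.Arrays;

public class AutoCanny {
    // Compute the median grey level from a 256-bin histogram and derive
    // Canny thresholds from it, so the thresholds adapt to image brightness.
    static Mat autoCanny(Mat grey, double sigma) {
        Mat hist = new Mat();
        Imgproc.calcHist(Arrays.asList(grey), new MatOfInt(0), new Mat(),
                hist, new MatOfInt(256), new MatOfFloat(0f, 256f));

        // Walk the histogram until half of all pixels have been counted.
        long total = (long) grey.rows() * grey.cols();
        long count = 0;
        int median = 0;
        for (int i = 0; i < 256; i++) {
            count += (long) hist.get(i, 0)[0];
            if (count >= total / 2) { median = i; break; }
        }

        // Spread the two thresholds around the median by a fudge factor.
        double lower = Math.max(0, (1.0 - sigma) * median);
        double upper = Math.min(255, (1.0 + sigma) * median);
        Mat edges = new Mat();
        Imgproc.Canny(grey, edges, lower, upper);
        return edges;
    }
}
```

Even with adaptive thresholds, closing the edge map afterwards (a dilate followed by an erode) is a common way to bridge the remaining small gaps before finding contours.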
The problem that I am trying to solve is to detect cubes and read their colours. I use live images captured by an Android phone camera, and recognition has to be fast (under 1 s). Example of a cube: [image omitted]
I also have differently coloured cubes, and they can be placed randomly (for example, touching each other).
I can easily detect one cube, and in some cases even two, but the problems start with three or more cubes, or with two cubes that are really close to each other.
Currently the processing looks like this (a rough code sketch follows the list):
blur image with Gaussian
convert to hsv and use only s channel
detect edges with Canny
dilate and erode edges
use HoughLinesP to get lines
from the lines (rejecting those that are too long or too short) calculate intersection points, and from those derive the corners of the cubes
knowing the corners (which must be precise), read off the colours
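For reference, a minimal sketch of the steps above, assuming OpenCV's Java bindings; every threshold, kernel size, and length limit here is a placeholder that would need tuning, and the class/method names are mine:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class CubePipeline {
    // Rough sketch of the described pipeline, up to the Hough line stage.
    static List<double[]> detectLines(Mat bgr) {
        // 1. Blur with a Gaussian to suppress sensor noise.
        Mat blurred = new Mat();
        Imgproc.GaussianBlur(bgr, blurred, new Size(5, 5), 0);

        // 2. Convert to HSV and keep only the saturation channel.
        Mat hsv = new Mat();
        Imgproc.cvtColor(blurred, hsv, Imgproc.COLOR_BGR2HSV);
        List<Mat> channels = new ArrayList<>();
        Core.split(hsv, channels);
        Mat s = channels.get(1);

        // 3. Canny edge detection (thresholds are placeholders).
        Mat edges = new Mat();
        Imgproc.Canny(s, edges, 50, 150);

        // 4. Dilate then erode to join broken edge fragments.
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
        Imgproc.dilate(edges, edges, kernel);
        Imgproc.erode(edges, edges, kernel);

        // 5. Probabilistic Hough transform to get line segments.
        Mat lines = new Mat();
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 50, 30, 10);

        // Collect segments, rejecting those that are too short or too long.
        List<double[]> kept = new ArrayList<>();
        for (int i = 0; i < lines.rows(); i++) {
            double[] l = lines.get(i, 0); // x1, y1, x2, y2
            double len = Math.hypot(l[2] - l[0], l[3] - l[1]);
            if (len > 20 && len < 200) kept.add(l);
        }
        return kept;
    }
}
```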
Example results (images omitted): in one frame nothing is detected; in another, 2 cubes are detected (red and orange points are corners, cyan points are intersection points, black lines are the lines found by Hough); in a third, nothing is detected although some lines are found.
Basically, what I need is to find the correct corners of the cubes. I tried Imgproc.goodFeaturesToTrack and Imgproc.cornerHarris, but they find too many corners and usually not the important ones.
I also tried findContours without success, even for just two objects, and it was crashing my app after about a minute of running. At some point I tried Feature Matching + Homography against a grayscale image of a cube, but the results were messy. Template Matching didn't give me good results either.
Do you have any idea how to make the detection more reliable and precise?
Thanks for any help.
What is the best way to detect blinks from a close-up image of an eye? I am getting the frames from a head-mounted camera (example frame omitted).
I have tried:
Template Matching, which doesn't always give accurate results.
Looking for frames in which the pupil is not visible, which is also not always accurate.
Hi, you can use the Hough transform for circle detection.
It returns a list of circles found in the image.
The first circle in the list should be the pupil.
Then, if the list is empty or the circle found is smaller than x pixels, the eye is closed.
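A minimal sketch of that check, assuming OpenCV's Java bindings; the blur size, the Hough parameters, and the radius bounds are all guesses that must be tuned to your camera:

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class BlinkDetector {
    // Returns true if no pupil-sized circle is found, i.e. the eye is
    // probably closed. minRadius/maxRadius encode the "x pixels" cutoff.
    static boolean eyeClosed(Mat greyEyeFrame, int minRadius, int maxRadius) {
        // Smooth first so HoughCircles is not confused by noise.
        Mat blurred = new Mat();
        Imgproc.GaussianBlur(greyEyeFrame, blurred, new Size(9, 9), 2);

        Mat circles = new Mat();
        Imgproc.HoughCircles(blurred, circles, Imgproc.HOUGH_GRADIENT,
                1,                    // accumulator resolution = image resolution
                greyEyeFrame.rows(),  // min distance between circle centres
                100, 30,              // Canny high threshold, accumulator threshold
                minRadius, maxRadius);

        // HoughCircles already discards circles below minRadius, so an
        // empty result covers the "smaller than x pixels" case too.
        return circles.cols() == 0;
    }
}
```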
I am developing an app which will change the colour of your eye. I need some help with detecting the eyeball. Currently I have a selector that is used to reduce the ROI; it looks like this: [image omitted]
So how can we detect the eyeball within that selected region? I was thinking of converting the image to grayscale and then detecting the big dark spot in it, and then changing its colour as the next step. I'll really appreciate any help.
Your way of thinking about a recurring pattern is a good start. I am doing some work at a pattern recognition chair as well, so here is some help for your task:
Using grayscale is a good start, btw ;)
There are some "facts" that always apply to a non-pathological eye (a minimal sketch using the first of them follows this list):
the centre is dark
the left and right sides surrounding the dark ball are almost white (depending on how open the eye is)
do not forget: you have 2 eyes; link them together in some way (they usually lie on an approximately horizontal line)
there is usually motion in the eyes while the other regions of the picture stay relatively calm
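Here is a minimal sketch of the first fact, assuming an OpenCV Java grayscale ROI; the threshold value of 40 and the largest-blob rule are illustrative guesses:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class PupilFinder {
    // Threshold the grayscale ROI so dark pixels become white, then take
    // the largest dark blob as the eyeball candidate.
    static Rect findDarkBlob(Mat greyRoi) {
        Mat dark = new Mat();
        Imgproc.threshold(greyRoi, dark, 40, 255, Imgproc.THRESH_BINARY_INV);

        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(dark, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        // Keep the contour with the largest area.
        Rect best = null;
        double bestArea = 0;
        for (MatOfPoint c : contours) {
            double area = Imgproc.contourArea(c);
            if (area > bestArea) {
                bestArea = area;
                best = Imgproc.boundingRect(c);
            }
        }
        return best; // null if nothing dark enough was found
    }
}
```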
Of course a full implementation would blow up this whole post, but I hope this helps you in some way.
I found some links where these guys detect the pupil of the eye. Maybe this will help you. See here and here.
You could use the template matching method from OpenCV.
This will help you find the eye in most cases.
Another solution would be to convert your image into an edge image, e.g. with OpenCV's Canny edge detector, and then search for the pattern with the template matcher. Using edges makes you independent of colour, and using grayscale images will also simplify the procedure.
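A minimal sketch of the edge-based variant, assuming OpenCV's Java bindings; the Canny thresholds and the choice of TM_CCOEFF_NORMED are illustrative:

```java
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class EyeTemplateMatcher {
    // Run Canny on both the scene and the template, then slide the edge
    // template over the edge image and return the best match location.
    static Point findEye(Mat sceneGrey, Mat templateGrey) {
        Mat sceneEdges = new Mat();
        Mat templEdges = new Mat();
        Imgproc.Canny(sceneGrey, sceneEdges, 50, 150);
        Imgproc.Canny(templateGrey, templEdges, 50, 150);

        // result holds a match score for every template position.
        Mat result = new Mat();
        Imgproc.matchTemplate(sceneEdges, templEdges, result,
                Imgproc.TM_CCOEFF_NORMED);

        // For TM_CCOEFF_NORMED the best match is the maximum score.
        Core.MinMaxLocResult mm = Core.minMaxLoc(result);
        return mm.maxLoc;
    }
}
```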
I want to implement a red-eye removal application on Android. Is there any API or built-in Android method to do this? If not, then please tell me how to detect eyes in an image. I know how to remove the red colour, but I'm having difficulty detecting the eyes in the image.
Use OpenCV to detect the eyes, and then, in the circular region where you expect the pupils to be, take each pixel value and set its red channel to, say, 20% of its original value while leaving the green and blue channels untouched.
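The correction step could look roughly like this on a mutable Android Bitmap; the circle centre and radius would come from whatever eye detector you use, and the 20% factor is the suggestion above:

```java
import android.graphics.Bitmap;
import android.graphics.Color;

public class RedEyeFixer {
    // Inside a circle of radius r around (cx, cy), cut the red channel
    // to 20% and keep green/blue as-is. The bitmap must be mutable.
    static void desaturateRed(Bitmap bitmap, int cx, int cy, int r) {
        for (int y = cy - r; y <= cy + r; y++) {
            for (int x = cx - r; x <= cx + r; x++) {
                if (x < 0 || y < 0 || x >= bitmap.getWidth() || y >= bitmap.getHeight())
                    continue;
                // Only touch pixels inside the circle, not the bounding square.
                if ((x - cx) * (x - cx) + (y - cy) * (y - cy) > r * r)
                    continue;
                int c = bitmap.getPixel(x, y);
                int red = (int) (Color.red(c) * 0.2);
                bitmap.setPixel(x, y, Color.argb(Color.alpha(c), red,
                        Color.green(c), Color.blue(c)));
            }
        }
    }
}
```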
There is also FaceDetector.findFaces(), which works on Bitmaps. However, it only gives you the rough position of each face (the midpoint between the eyes and the eye distance), but from that it should be easy to search the eye regions for red-saturated pixels and desaturate the colour as Alexander suggested. This way you don't necessarily need another library.
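A minimal sketch of that approach with the built-in android.media.FaceDetector; note the Bitmap must be in RGB_565 format (and have an even width) for findFaces() to work, and the maxFaces value here is arbitrary:

```java
import android.graphics.Bitmap;
import android.graphics.PointF;
import android.media.FaceDetector;

public class FaceFinder {
    // Locate faces and derive approximate eye positions from the
    // midpoint and eye distance that FaceDetector reports.
    static void findEyes(Bitmap rgb565Bitmap) {
        int maxFaces = 4;
        FaceDetector detector = new FaceDetector(
                rgb565Bitmap.getWidth(), rgb565Bitmap.getHeight(), maxFaces);
        FaceDetector.Face[] faces = new FaceDetector.Face[maxFaces];
        int found = detector.findFaces(rgb565Bitmap, faces);

        for (int i = 0; i < found; i++) {
            PointF mid = new PointF();
            faces[i].getMidPoint(mid);         // point between the eyes
            float eyeDist = faces[i].eyesDistance();
            // The pupils sit roughly eyeDist/2 left and right of mid;
            // search those neighbourhoods for red-saturated pixels.
        }
    }
}
```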
I need a graphical needle gauge (like a speedometer, etc.) for my app, but such a UI widget is not part of the SDK, so I will probably have to create it myself.
My idea is to have the background with the tick marks and coloured fields (green, yellow, red) as one bitmap, and the needle as another bitmap drawn on top of the background, rotated to the appropriate angle.
In my book, Professional Android 2 Application Development, there is a somewhat similar example with a compass rose, although that one is drawn using line graphics, not pre-fabricated images like I will have to use to get the desired look.
However, in the compass example the whole canvas is rotated before the tick marks are drawn. I cannot use this approach, as it would also rotate the gauge background. So I need to somehow rotate the needle image (which should be transparent) before superimposing it, but I don't know how to accomplish this.
Can anyone lead me in the right direction on how to proceed with the needle gauge? Also, if there is a better way to build the meter than sketched above, please let me know.
You can divide your gauge into different layers: one for the background and one for the tick marks. The tick-mark layer can be rotated to draw each mark, then rotated back and combined with the background layer. The same trick works for the needle itself; a rough sketch follows.
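A minimal sketch of the rotation, assuming you already have the background and needle bitmaps; the pivot point and the way the needle bitmap is anchored are assumptions about your artwork:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;

public class GaugeView {
    // Draw the static background, then rotate the canvas only while
    // drawing the needle, and restore it afterwards.
    static void drawGauge(Canvas canvas, Bitmap background, Bitmap needle,
                          float angleDegrees, float pivotX, float pivotY) {
        // Layer 1: background with tick marks, never rotated.
        canvas.drawBitmap(background, 0, 0, null);

        // Layer 2: needle, rotated around its pivot point.
        canvas.save();
        canvas.rotate(angleDegrees, pivotX, pivotY);
        canvas.drawBitmap(needle,
                pivotX - needle.getWidth() / 2f,    // centre needle on the pivot
                pivotY - needle.getHeight(), null); // assumes tip points up at 0°
        canvas.restore();
    }
}
```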
You can see the layer technique described above in this example: http://mindtherobot.com/blog/534/android-ui-making-an-analog-rotary-knob/
P.S. This is not my blog; I just found this technique there.