OpenCV Convex Hull coordinates - android

I wanted to find the convex hull in order to even out the edges of a hand-drawn triangle on paper. Smoothing with image processing was not enough, because I needed to detect this triangle too, and a hand-drawn triangle tends to produce more than three points when the approxPolyDP function is used. The convex hull of such a triangle, however, is correctly identified by approxPolyDP.
The problem is that I have other shapes in the image too, on which a convex hull is also created.
Before convex hull is used: notice the contour labelled 3
After convex hull is used: the end points have been joined and the contour labelled 3 forms a triangle
Now I want to somehow exclude contour 3 from being detected as a triangle.
To do this, my strategy was to remove this contour altogether from the ArrayList named hullMop. This is because my triangle-detection function uses the contours from hullMop, so it wouldn't even check the contour labelled 3.
extcontours are the contours before the convex hull is used.
The function below checks whether each point from hullMop lies inside extcontours. If it doesn't, that contour must be removed from hullMop, because those are the extra points generated by the convex hull, in other words the red line in the second image.
At this point I feel there is a hole in my reasoning. The OpenCV documentation says that convexHull returns a subset of the points of the original array, in other words a subset of the points of extcontours.
My question is: how do I get the points of the red line created by the convexHull function? I don't want to use findContours again, because I feel there is a better way.
private void RemoveFalseHullTriangles(ArrayList<MatOfPoint> extcontours, ArrayList<MatOfPoint> hullMop, int width, int height) {
    // If a point of a hull contour neither touches nor lies inside extcontours,
    // that point must belong to the red line, so drop the whole hull contour.
    MatOfPoint2f Contours2f = new MatOfPoint2f();
    int hullCounter = 0;

    A: for (int i = 0; i < extcontours.size(); i++) {
        MatOfPoint ExtCnt = extcontours.get(i);
        MatOfPoint HullCnt = hullMop.get(hullCounter);
        ExtCnt.convertTo(Contours2f, CvType.CV_32F);

        B: for (int j = 0; j < HullCnt.rows(); j++) {
            double[] pt = new double[2];
            pt[0] = HullCnt.get(j, 0)[0];
            pt[1] = HullCnt.get(j, 0)[1];
            // Distance of the hull point from the original contour; a large
            // distance means this segment was introduced by convexHull (the red line).
            if (Math.abs(Imgproc.pointPolygonTest(Contours2f, new Point(pt), true)) > 40) {
                // Remove this contour from hullMop
                hullMop.remove(hullCounter);
                hullCounter--;
                break B;
            }
        }
        hullCounter++;
    }
}
Because hullMop contains only a subset of the points of extcontours, I may never know the points of the red line of the contour labelled 3 after the convex hull is applied.
Is there any way to get the coordinates of that red line generated by convexHull, other than using findContours?

As referenced by Alexandar Reynolds, the problem really was to detect open contours first and exclude those contours before finding the convex hull.
The method to find open contours is explained here:
Recognize open and closed shapes opencv
Basically, if an outer contour has no child contour in the hierarchy, then it is an open contour and must be excluded before finding the convex hull (for my case).
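For illustration, here is a minimal sketch (mine, not the original answer's code) of that idea with the OpenCV Java bindings: contours are retrieved together with a hierarchy, outer contours without children are skipped as open shapes, and hulls are built only from the rest. The input Mat edges and the variable names are assumptions.
// Sketch only: exclude open contours via the hierarchy, then take convex hulls.
// Assumes findContours is called with RETR_CCOMP, so each hierarchy entry is
// [next, previous, firstChild, parent].
ArrayList<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(edges, contours, hierarchy,
        Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE);

ArrayList<MatOfPoint> closedHulls = new ArrayList<>();
for (int i = 0; i < contours.size(); i++) {
    double[] h = hierarchy.get(0, i);   // [next, previous, firstChild, parent]
    if (h[3] != -1) continue;           // not an outer contour
    if (h[2] == -1) continue;           // outer contour with no child: open shape, skip it

    // convexHull returns indices into the original contour; map them back to points
    MatOfInt hullIdx = new MatOfInt();
    Imgproc.convexHull(contours.get(i), hullIdx);
    Point[] src = contours.get(i).toArray();
    Point[] hullPts = new Point[(int) hullIdx.total()];
    for (int j = 0; j < hullPts.length; j++) {
        hullPts[j] = src[(int) hullIdx.get(j, 0)[0]];
    }
    closedHulls.add(new MatOfPoint(hullPts));
}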

Related

Take photo when pattern is detected in image with Android OpenCV

Hello Stack Overflow community, I would appreciate some guidance on the following question: I want to make an application that takes a photo when it detects a sheet with 3 marks (black squares in the corners), similar to what a QR code would have. I have read a little about OpenCV, which I think could help me, but I am not very clear on it yet.
Here is my example
Once you obtain your binary image, you can find contours and filter using contour approximation and contour area. If the approximated contour has a length of four, then it must be a square, and if its area is within a lower and upper bound, then we have detected a mark. We keep a count of the marks, and if there are three marks in the image, we can take the photo. Here's a visualization of the process.
We apply Otsu's threshold to obtain a binary image with the objects to detect in white.
From here we find contours using cv2.findContours and filter using contour approximation (cv2.approxPolyDP) in addition to contour area (cv2.contourArea).
Detected marks highlighted in teal
I implemented it in Python, but you can adapt the same approach.
Code
import cv2

# Load image, grayscale, Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Find contours and filter using contour approximation and contour area
marks = 0
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
for c in cnts:
    area = cv2.contourArea(c)
    peri = cv2.arcLength(c, True)
    approx = cv2.approxPolyDP(c, 0.04 * peri, True)
    if len(approx) == 4 and area > 250 and area < 400:
        x,y,w,h = cv2.boundingRect(c)
        cv2.rectangle(image, (x, y), (x + w, y + h), (200,255,12), 2)
        marks += 1

# Sheet has 3 marks
if marks == 3:
    print('Take photo')

cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
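Since the question is about Android, a rough Java (OpenCV) adaptation of the same pipeline might look like the sketch below. It is not from the original answer; the class name, the hard-coded area range carried over from the Python code, and the assumption of OpenCV 3.x+ Java bindings are all mine.
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;

public class MarkDetector {
    // Counts square marks: 4-point polygon approximation within an area range.
    static int countMarks(Mat imageBgr) {
        // Grayscale, then Otsu's threshold so the marks come out white
        Mat gray = new Mat();
        Imgproc.cvtColor(imageBgr, gray, Imgproc.COLOR_BGR2GRAY);
        Mat thresh = new Mat();
        Imgproc.threshold(gray, thresh, 0, 255,
                Imgproc.THRESH_BINARY_INV + Imgproc.THRESH_OTSU);

        // External contours, filtered by polygon approximation and area
        List<MatOfPoint> cnts = new ArrayList<>();
        Imgproc.findContours(thresh, cnts, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        int marks = 0;
        for (MatOfPoint c : cnts) {
            double area = Imgproc.contourArea(c);
            MatOfPoint2f c2f = new MatOfPoint2f(c.toArray());
            double peri = Imgproc.arcLength(c2f, true);
            MatOfPoint2f approx = new MatOfPoint2f();
            Imgproc.approxPolyDP(c2f, approx, 0.04 * peri, true);
            if (approx.total() == 4 && area > 250 && area < 400) {
                marks++;    // same hard-coded area range as the Python example
            }
        }
        return marks;
    }
}
A caller would then trigger the capture when countMarks(frame) == 3, mirroring the marks == 3 check above.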

Why is the drawContour() in OpenCV generating this strange Mask?

I started by reading in this Mat.
Then I converted it to greyscale and applied Imgproc.Canny() to it, getting the following mask.
Then I used Imgproc.findContours() to find the contours, and Imgproc.drawContours() and Core.putText() to label the contours with numbers:
Then I did
Rect boundingRect = Imgproc.boundingRect(contours.get(0));
Mat submatrix = new Mat();
submatrix = originalMat.submat(boundingRect);
to get the following submatrix:
So far so good. The problem starts after this:
Now I needed a mask of the submatrix, so I decided to use Imgproc.drawContours() to get the mask:
Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);
List<MatOfPoint> contourList = new ArrayList<>();
contourList.add(contours.get(0));
Imgproc.drawContours(mask, contourList, 0, new Scalar(255), -1);
I got the following mask:
What I was expecting was a diamond shape filled in white on a black background.
Why am I getting this unexpected result?
EDIT:
When I replaced Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1); with Mat mask = Mat.zeros(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);, the last mask with the white-colored garbage was replaced by an empty black mask without any white on it. I got the following submat and mask:
I was getting the first contour in the list of contours (named contours) by contours.get(0), and using this first contour both to calculate Imgproc.boundingRect() and in contourList.add(contours.get(0)); later (where contourList is the list of just one contour which is used in the last drawContours()).
Then I changed contours.get(0) to contours.get(1) in Imgproc.boundingRect() as well as in contourList.add(); (just before Imgproc.drawContours()). That resulted in this submat and mask:
Then I changed back to contours.get(0) in Imgproc.boundingRect() and left contourList.add(contours.get(1)); in place. I got the following submat and mask:
Now I am completely unable to understand what is happening here.
I am not sure how this is handled in Java (I usually use OpenCV in C++ or Python), but there is an error in your code...
The contours variable is a list of lists of points, and those points refer to the original image. This means that if figure one is at, let's say, x=300, y=300, width=100, height=100, then when you take your submatrix it will try to draw those points in a smaller image... so when it tries to draw point (300,300) in a 100 x 100 image, it will simply fail: it probably throws an error or simply doesn't draw anything.
A solution for this is to loop over the contour and subtract from each point the top-left point of the bounding rect (in my example, (300,300)).
As for why there is some garbage drawn: you never initialize the matrix. I'm not sure about Java, but in C++ you have to set it to 0 yourself.
I think it should be something like this:
Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1, new Scalar(0));
I hope this helps :)
EDIT
I think I did not explain myself clearly before.
Your contours are arrays of (x, y) points. These are the coordinates of the points that represent each contour in the original image. That image has a size, and your submatrix has a smaller size, so the points fall outside the smaller image's boundaries...
You should do something like this to fix it:
for (int j = 0; j < contours[0].length; j++) {
    contours[0][j].x -= boundingrect.x;
    contours[0][j].y -= boundingrect.y;
}
and then you can draw the contours, since they will be within the boundaries of the submat.
I think in Java it is also possible to subtract the OpenCV points directly:
for (int j = 0; j < contours[0].length; j++) {
    contours[0][j] -= boundingrect.tl();
}
but in this case I am not sure, since I have tried it in C++ only.
boundingrect.tl() gives you the top-left point of the rect.
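To make that concrete in Java (my own sketch, not the answerer's code), the shift by the bounding rect's top-left before drawing the mask could look like this:
// Shift the chosen contour into the submatrix's coordinate frame, then draw it.
Rect boundingRect = Imgproc.boundingRect(contours.get(0));
Point[] pts = contours.get(0).toArray();
for (int j = 0; j < pts.length; j++) {
    pts[j] = new Point(pts[j].x - boundingRect.x, pts[j].y - boundingRect.y);
}
MatOfPoint shifted = new MatOfPoint(pts);

// Zero-initialized mask the size of the bounding rect
Mat mask = Mat.zeros(boundingRect.height, boundingRect.width, CvType.CV_8UC1);
List<MatOfPoint> shiftedList = new ArrayList<>();
shiftedList.add(shifted);
Imgproc.drawContours(mask, shiftedList, 0, new Scalar(255), -1);  // filled white shape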

PDFTron : Drawing Ink Annotation programmatically

I am drawing an ink annotation from points stored in a database, where those points were extracted from a shape previously drawn over the PDF. I have referred to this example given by PDFTron, but I am not able to see the annotation drawn on the page properly.
Actual Image
Drawn Programmatically
Here is the code I have used for drawing the annotation.
for (Integer integer : uniqueShapeIds) {
    Config.debug("Shape Id's unique " + integer);
    pdftron.PDF.Annots.Ink ink = pdftron.PDF.Annots.Ink.create(
            mPDFViewCtrl.getDoc(),
            getAnnotationRect(pointsArray, integer));
    for (SaveAnnotationState annot : pointsArray) {
        Config.debug("Draw " + annot.getxCord() + " " + annot.getyCord() + " " + annot.getPathIndex() + " " + annot.getPointIndex());
        Point pt = new Point(annot.getxCord(), annot.getyCord());
        ink.setPoint(annot.getPathIndex(), annot.getPointIndex(), pt);
        ink.setColor(
                new ColorPt(annot.getR() / 255, annot.getG() / 255, annot.getB() / 255), 3);
        ink.setOpacity(annot.getOpacity());
        BorderStyle border = ink.getBorderStyle();
        border.setWidth(annot.getThickness());
        ink.setBorderStyle(border);
    }
    ink.refreshAppearance();
    Page page = mPDFViewCtrl.getDoc().getPage(mPDFViewCtrl.getCurrentPage());
    Annot mAnnot = ink;
    page.annotPushBack(mAnnot);
    mPDFViewCtrl.update(mAnnot, mPDFViewCtrl.getCurrentPage());
}
Can anyone tell me what is going wrong here?
On a typical PDF page, the bottom-left corner of the page is coordinate 0,0. However, for annotations the origin is the bottom-left corner of the rectangle specified in the BBox entry. The BBox entry is the 3rd parameter of your call to Ink.create, which unfortunately is called pos.
This means the Rect passed into Ink.create is supposed to be the minimum axis-aligned bounding box of all the points that make up the Ink annotation.
I suspect that in your call to getAnnotationRect you start with Rect(), which is really Rect(0,0,0,0), so when you union in all the other points you end up with an inflated Rect.
What you should do is store the BBox in your database, by calling Annot.getRect().
If this is not possible, or too late, then initialize the Rect with the first point in your database.
Rect rect = new Rect(pt.x, pt.y, pt.x, pt.y);
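For illustration, a sketch (mine, not the answerer's) of deriving that minimum bounding box from the stored points; shapePoints below stands for the subset of pointsArray that belongs to one shape, and Rect is pdftron.PDF.Rect:
// Track min/max in plain doubles, then build the pdftron Rect once at the end,
// so the box never starts from an implicit Rect(0,0,0,0).
double x1 = Double.MAX_VALUE, y1 = Double.MAX_VALUE;
double x2 = -Double.MAX_VALUE, y2 = -Double.MAX_VALUE;
for (SaveAnnotationState annot : shapePoints) {
    x1 = Math.min(x1, annot.getxCord());
    y1 = Math.min(y1, annot.getyCord());
    x2 = Math.max(x2, annot.getxCord());
    y2 = Math.max(y2, annot.getyCord());
}
Rect bbox = new Rect(x1, y1, x2, y2);   // pass this as the 3rd argument to Ink.create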
API:
http://www.pdftron.com/pdfnet/mobile/docs/Android/pdftron
http://www.pdftron.com/pdfnet/mobile/docs/Android/pdftron/PDF/Annot.html#getRect%28%29
http://www.pdftron.com/pdfnet/mobile/docs/Android/pdftron/PDF/Annot.html#create%28pdftron.SDF.Doc,%20int,%20pdftron.PDF.Rect%29

OpenCV - Detect hand-drawing shapes

Could OpenCV detect geometric shapes which are drawn by hand, as below? The shape can be a rectangle, triangle, circle, curve, arc, polygon, ...
I am going to develop an Android application which detects these shapes.
Well, I tried it in a hurry. Normally you need to skeletonize the input. Anyway, you can reason about the shapes based on their points: normally a square has 4, a triangle 3, etc.
Effort results:
Canny results:
Polygonal approximation:
Console output:
contour points:11
contour points:6
contour points:4
contour points:5
Here is the code:
Mat src = imread("WyoKM.png");
Mat src_gray(src.size(), CV_8UC1);
if (src.empty()) exit(-10);
imshow("img", src);

/// Convert image to gray and blur it
cvtColor(src, src_gray, CV_BGR2GRAY);
// binary threshold (the original passed src_gray.type(), which happens to equal THRESH_BINARY)
threshold(src_gray, src_gray, 100, 255, THRESH_BINARY);
imshow("img2", src_gray);

Mat canny_output;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;

/// Detect edges using canny
int thresh = 100;
Canny(src_gray, canny_output, thresh, thresh * 2, 3);
imshow("canny", canny_output);
imwrite("canny.jpg", canny_output);

/// Find contours
findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

// testing the approximate polygon
cv::Mat result(src_gray.size(), CV_8U, cv::Scalar(255));
for (int i = 0; i < contours.size(); i = i + 4) // for testing reasons. Skeletonize input.
{
    std::vector<cv::Point> poly;
    poly.clear();
    cv::approxPolyDP(cv::Mat(contours[i]), poly,
                     5,     // accuracy of the approximation
                     true); // yes it is a closed shape

    // Iterate over each segment and draw it
    std::vector<cv::Point>::const_iterator itp = poly.begin();
    cout << "\ncontour points:" << poly.size();
    while (itp != (poly.end() - 1)) {
        cv::line(result, *itp, *(itp + 1), cv::Scalar(0), 2);
        ++itp;
    }
    // last point linked to first point
    cv::line(result,
             *(poly.begin()),
             *(poly.end() - 1), cv::Scalar(20), 2);
}
imshow("result", result);
imwrite("results.jpg", result);
cvWaitKey();

How can I tell if a closed path contains a given point?

In Android, I have a Path object which I happen to know defines a closed path, and I need to figure out if a given point is contained within the path. What I was hoping for was something along the lines of
path.contains(int x, int y)
but that doesn't seem to exist.
The specific reason I'm looking for this is that I have a collection of shapes on screen defined as paths, and I want to figure out which one the user clicked on. If there is a better way to approach this, such as using different UI elements rather than doing it "the hard way" myself, I'm open to suggestions.
I'm open to writing an algorithm myself if I have to, but that means different research I guess.
Here is what I did and it seems to work:
RectF rectF = new RectF();
path.computeBounds(rectF, true);
region = new Region();
region.setPath(path, new Region((int) rectF.left, (int) rectF.top, (int) rectF.right, (int) rectF.bottom));
Now you can use the region.contains(x,y) method.
Point point = new Point();
mapView.getProjection().toPixels(geoPoint, point);
if (region.contains(point.x, point.y)) {
    // Within the path.
}
** Update on 6/7/2010 **
The region.setPath method will cause my app to crash (no warning message) if the rectF is too large. Here is my solution:
// Get the screen rect. If this intersects with the path's rect
// then lets display this zone. The rectF will become the
// intersection of the two rects. This will decrease the size therefor no more crashes.
Rect drawableRect = new Rect();
mapView.getDrawingRect(drawableRect);
if (rectF.intersects(drawableRect.left, drawableRect.top, drawableRect.right, drawableRect.bottom)) {
    // ... Display Zone.
}
The android.graphics.Path class doesn't have such a method. The Canvas class does have a clipping region that can be set to a path, but there is no way to test it against a point. You might try Canvas.quickReject, testing against a single-point rectangle (or a 1x1 Rect). I don't know whether that would really check against the path or just the enclosing rectangle, though.
The Region class clearly only keeps track of the containing rectangle.
You might consider drawing each of your regions into an 8-bit alpha-layer Bitmap, with each Path filled in its own 'color' value (make sure anti-aliasing is turned off in your Paint). This creates a kind of mask for each path, filled with an index to the path that filled it. Then you could just use the pixel value as an index into your list of paths.
Bitmap lookup = Bitmap.createBitmap(width, height, Bitmap.Config.ALPHA_8);
//do this so that regions outside any path have a default
//path index of 255
lookup.eraseColor(0xFF000000);
Canvas canvas = new Canvas(lookup);
Paint paint = new Paint();
//these are defaults, you only need them if reusing a Paint
paint.setAntiAlias(false);
paint.setStyle(Paint.Style.FILL);
for (int i = 0; i < paths.size(); i++) {
    paint.setColor(i << 24); // use only alpha value for color 0xXX000000
    canvas.drawPath(paths.get(i), paint);
}
Then look up points,
int pathIndex = lookup.getPixel(x, y);
pathIndex >>>= 24;
Be sure to check for 255 (no path) if there are unfilled points.
WebKit's SkiaUtils has a C++ work-around for Randy Findley's bug:
bool SkPathContainsPoint(SkPath* originalPath, const FloatPoint& point, SkPath::FillType ft)
{
    SkRegion rgn;
    SkRegion clip;
    SkPath::FillType originalFillType = originalPath->getFillType();
    const SkPath* path = originalPath;
    SkPath scaledPath;
    int scale = 1;

    SkRect bounds = originalPath->getBounds();

    // We can immediately return false if the point is outside the bounding rect
    if (!bounds.contains(SkFloatToScalar(point.x()), SkFloatToScalar(point.y())))
        return false;

    originalPath->setFillType(ft);

    // Skia has trouble with coordinates close to the max signed 16-bit values
    // If we have those, we need to scale.
    //
    // TODO: remove this code once Skia is patched to work properly with large
    // values
    const SkScalar kMaxCoordinate = SkIntToScalar(1 << 15);
    SkScalar biggestCoord = std::max(std::max(std::max(bounds.fRight, bounds.fBottom), -bounds.fLeft), -bounds.fTop);

    if (biggestCoord > kMaxCoordinate) {
        scale = SkScalarCeil(SkScalarDiv(biggestCoord, kMaxCoordinate));

        SkMatrix m;
        m.setScale(SkScalarInvert(SkIntToScalar(scale)), SkScalarInvert(SkIntToScalar(scale)));
        originalPath->transform(m, &scaledPath);
        path = &scaledPath;
    }

    int x = static_cast<int>(floorf(point.x() / scale));
    int y = static_cast<int>(floorf(point.y() / scale));
    clip.setRect(x, y, x + 1, y + 1);

    bool contains = rgn.setPath(*path, clip);

    originalPath->setFillType(originalFillType);
    return contains;
}
I know I'm a bit late to the party, but I would solve this problem by treating it as determining whether or not a point is in a polygon.
http://en.wikipedia.org/wiki/Point_in_polygon
The math computes more slowly when you're looking at Bezier splines instead of line segments, but drawing a ray from the point still works.
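As an illustration of that ray-casting idea (my own sketch, not part of the original answer), one can flatten the Path into sampled points with android.graphics.PathMeasure and run an even-odd crossing test; the sampling step of roughly 4 px is an arbitrary choice:
// Rough sketch: approximate the Path with sampled points, then do even-odd
// ray casting. Assumes a single-contour path; use PathMeasure.nextContour()
// if the path has more than one contour.
static boolean pathContainsPoint(Path path, float px, float py) {
    PathMeasure pm = new PathMeasure(path, true);     // forceClosed = true
    float length = pm.getLength();
    int samples = Math.max(16, (int) (length / 4f));  // ~one sample every 4 px
    float[] xs = new float[samples];
    float[] ys = new float[samples];
    float[] pos = new float[2];
    for (int i = 0; i < samples; i++) {
        pm.getPosTan(length * i / samples, pos, null);
        xs[i] = pos[0];
        ys[i] = pos[1];
    }
    // Even-odd rule: cast a ray to the right and count edge crossings.
    boolean inside = false;
    for (int i = 0, j = samples - 1; i < samples; j = i++) {
        boolean crosses = (ys[i] > py) != (ys[j] > py);
        if (crosses && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
            inside = !inside;
        }
    }
    return inside;
}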
For completeness, I want to make a couple of notes here:
As of API 19, there is an intersection operation for Paths. You could create a very small square path around your test point, intersect it with the Path, and see if the result is empty or not (a sketch of this follows these notes).
You can convert Paths to Regions and do a contains() operation. However, Regions work in integer coordinates, and I think they use transformed (pixel) coordinates, so you'll have to work with that. I also suspect that the conversion process is computationally intensive.
The edge-crossing algorithm that Hans posted is good and quick, but you have to be very careful about certain corner cases, such as when the ray passes directly through a vertex, or intersects a horizontal edge, or when round-off error is a problem, which it always is.
The winding number method is pretty much foolproof, but involves a lot of trig and is computationally expensive.
This paper by Dan Sunday gives a hybrid algorithm that's as accurate as the winding number but as computationally simple as the ray-casting algorithm. It blew me away how elegant it was.
See https://stackoverflow.com/a/33974251/338479 for my code, which does point-in-path calculation for a path consisting of line segments, arcs, and circles.
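A sketch of the API 19+ intersection note above (my own code, not the answerer's): intersect a tiny square around the test point with the path and check whether the result is empty.
// Requires API 19 (Path.op). The 1x1 probe square size is an arbitrary choice.
static boolean pathContains(Path path, float x, float y) {
    Path probe = new Path();
    probe.addRect(x - 0.5f, y - 0.5f, x + 0.5f, y + 0.5f, Path.Direction.CW);
    Path result = new Path();
    // op() returns false if the boolean operation itself could not be performed.
    return result.op(probe, path, Path.Op.INTERSECT) && !result.isEmpty();
}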
