Perspective Transform OpenCV - android

I'm new to OpenCV on Android and am trying to do a perspective transform, but I don't know how to use the getPerspectiveTransform() and warpPerspective() functions. I can detect a rectangle in an image, but I don't know how to warp it.
Here is the rectangle detection function:
Mat tempMat = new Mat();
Mat src = new Mat();
Utils.bitmapToMat(image, tempMat);
Imgproc.cvtColor(tempMat, src, Imgproc.COLOR_BGR2RGB);

Mat blurred = src.clone();
Imgproc.medianBlur(src, blurred, 9);

Mat gray0 = new Mat(blurred.size(), CvType.CV_8U), gray = new Mat();

List<MatOfPoint> contours = new ArrayList<MatOfPoint>();

List<Mat> blurredChannel = new ArrayList<Mat>();
blurredChannel.add(blurred);
List<Mat> gray0Channel = new ArrayList<Mat>();
gray0Channel.add(gray0);

MatOfPoint2f approxCurve = new MatOfPoint2f();

double maxArea = 0;
int maxId = -1;

for (int c = 0; c < 3; c++) {
    int ch[] = { c, 0 };
    Core.mixChannels(blurredChannel, gray0Channel, new MatOfInt(ch));

    int thresholdLevel = 1;
    for (int t = 0; t < thresholdLevel; t++) {
        if (t == 0) {
            Imgproc.Canny(gray0, gray, 50, 50, 3, true); // true ?
            Imgproc.dilate(gray, gray, new Mat(), new Point(-1, -1), 1); // 1 ?
        } else {
            Imgproc.adaptiveThreshold(gray0, gray, thresholdLevel,
                    Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
                    Imgproc.THRESH_BINARY,
                    (src.width() + src.height()) / 200, t);
        }

        Imgproc.findContours(gray, contours, new Mat(),
                Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

        for (MatOfPoint contour : contours) {
            MatOfPoint2f temp = new MatOfPoint2f(contour.toArray());
            double area = Imgproc.contourArea(contour);
            approxCurve = new MatOfPoint2f();
            Imgproc.approxPolyDP(temp, approxCurve,
                    Imgproc.arcLength(temp, true) * 0.02, true);

            if (approxCurve.total() == 4 && area >= maxArea) {
                double maxCosine = 0;
                List<Point> curves = approxCurve.toList();
                for (int j = 2; j < 5; j++) {
                    double cosine = Math.abs(angle(curves.get(j % 4),
                            curves.get(j - 2), curves.get(j - 1)));
                    maxCosine = Math.max(maxCosine, cosine);
                }
                if (maxCosine < 0.45) {
                    maxArea = area;
                    maxId = contours.indexOf(contour);
                }
            }
        }
    }
}
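The code above calls an angle() helper that is not shown. For reference, a common version, adapted from OpenCV's squares.cpp sample, returns the cosine of the angle between the vectors pt0->pt1 and pt0->pt2:

private static double angle(Point pt1, Point pt2, Point pt0) {
    double dx1 = pt1.x - pt0.x;
    double dy1 = pt1.y - pt0.y;
    double dx2 = pt2.x - pt0.x;
    double dy2 = pt2.y - pt0.y;
    // cosine = dot product / product of magnitudes (1e-10 avoids division by zero)
    return (dx1 * dx2 + dy1 * dy2)
            / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
}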
I draw the rectangle with this statement:
if (maxId >= 0) {
    Rect rect = Imgproc.boundingRect(contours.get(maxId));
    Imgproc.rectangle(src, rect.tl(), rect.br(), new Scalar(255, 0, 0, .8), 4);
}
After that I convert the Mat to a Bitmap and show it in an ImageView.
Here is the screenshot:
So my problem is the warping: how can I warp the rectangle and rotate it?
And, if it is possible, how can I improve the rectangle detection? Any hints?
(OpenCV Android SDK version 3.4.1, Android Studio version 3.0.1)

If you are looking to warp the detected contour into a rectangle:
Get the contour of the rectangle.
Find the convex hull of the contour.
Using approxPolyDP, reduce the convex hull points to 4 points.
Fit lines to consecutive points (e.g., if pts is the array, the lines are fit as l1 = lineBetween(pts[0], pts[1]), l2 = lineBetween(pts[1], pts[2]), l3 = lineBetween(pts[2], pts[3]), l4 = lineBetween(pts[3], pts[0])).
Find the intersections between these lines; you'll end up with four points.
Order the points in clockwise order (inputCorners = topLeft, topRight, bottomRight, bottomLeft); see the sketch after this answer.
Create an output image with the needed resolution, with its corner points in the same clockwise order ((0, 0), (cols, 0), (cols, rows), (0, rows)).
Find the homography using the function:
Mat homography = Calib3d.findHomography(inputCorners, imageCorners, Calib3d.RANSAC, 10);
Using the output homography matrix, warp the input image using the function:
Imgproc.warpPerspective(image, outputMat, homography, new Size(image.cols(), image.rows()));
You can refer to the following link.
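A minimal sketch of the clockwise corner-ordering step above, assuming pts holds the four intersection points (orderClockwise is a hypothetical helper, not part of the original answer):

static Point[] orderClockwise(Point[] pts) {
    // For a convex quadrilateral: the top-left corner has the smallest x + y sum,
    // the bottom-right the largest; the top-right has the smallest y - x difference,
    // the bottom-left the largest.
    Point tl = pts[0], tr = pts[0], br = pts[0], bl = pts[0];
    for (Point p : pts) {
        if (p.x + p.y < tl.x + tl.y) tl = p;
        if (p.x + p.y > br.x + br.y) br = p;
        if (p.y - p.x < tr.y - tr.x) tr = p;
        if (p.y - p.x > bl.y - bl.x) bl = p;
    }
    return new Point[]{ tl, tr, br, bl };
}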

This is my Kotlin extension version; you can use it in your projects.
fun Bitmap.perspectiveTransform(srcPoints: List<org.opencv.core.Point>): Bitmap {
    val dstWidth = max(
        srcPoints[0].distanceFrom(srcPoints[1]),
        srcPoints[2].distanceFrom(srcPoints[3])
    )
    val dstHeight = max(
        srcPoints[0].distanceFrom(srcPoints[2]),
        srcPoints[1].distanceFrom(srcPoints[3])
    )
    val dstPoints: List<org.opencv.core.Point> = listOf(
        org.opencv.core.Point(0.0, 0.0),
        org.opencv.core.Point(dstWidth, 0.0),
        org.opencv.core.Point(0.0, dstHeight),
        org.opencv.core.Point(dstWidth, dstHeight)
    )
    return try {
        val srcMat = Converters.vector_Point2d_to_Mat(srcPoints)
        val dstMat = Converters.vector_Point2d_to_Mat(dstPoints)
        val perspectiveTransformation = Imgproc.getPerspectiveTransform(srcMat, dstMat)
        val inputMat = Mat(this.height, this.width, CvType.CV_8UC1)
        Utils.bitmapToMat(this, inputMat)
        val outPutMat = Mat(dstHeight.toInt(), dstWidth.toInt(), CvType.CV_8UC1)
        Imgproc.warpPerspective(
            inputMat,
            outPutMat,
            perspectiveTransformation,
            Size(dstWidth, dstHeight)
        )
        val outPut = Bitmap.createBitmap(
            dstWidth.toInt(),
            dstHeight.toInt(), Bitmap.Config.RGB_565
        )
        //Imgproc.cvtColor(outPutMat, outPutMat, Imgproc.COLOR_GRAY2BGR)
        Utils.matToBitmap(outPutMat, outPut)
        outPut
    } catch (e: Exception) {
        e.printStackTrace()
        this
    }
}
To use distanceFrom, I wrote another extension function:
fun org.opencv.core.Point.distanceFrom(srcPoint: org.opencv.core.Point): Double {
    val w1 = this.x - srcPoint.x
    val h1 = this.y - srcPoint.y
    val distance = w1.pow(2) + h1.pow(2)
    return sqrt(distance)
}
Also, in this answer the correct srcPoints indices are:
0: topLeft
1: topRight
2: bottomLeft
3: bottomRight
Good luck

Related

opencv deskewing a contour

Input image:
Result image:
I have been able to filter the largest contour in the image to detect the token.
I have applied warpPerspective, but it only crops the image at the edges of the contour, nothing else.
I want the detected token to be cropped out of the rest of the image entirely and deskewed while keeping proportions, so the result image should be upright and straight. Then I will move on to finding the blobs in the token to detect the dates marked inside it.
private Mat processMat(Mat srcMat) {
    Mat processedMat = new Mat();
    Imgproc.cvtColor(srcMat, processedMat, Imgproc.COLOR_BGR2GRAY);
    Imgproc.GaussianBlur(processedMat, processedMat, new Size(5, 5), 5);
    Imgproc.threshold(processedMat, processedMat, 127, 255, Imgproc.THRESH_BINARY);
    List<MatOfPoint> contours = new ArrayList<>();
    Mat hierarchy = new Mat();
    Imgproc.findContours(processedMat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    double maxVal = 0;
    int maxValIdx = 0;
    for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
        double contourArea = Imgproc.contourArea(contours.get(contourIdx));
        if (maxVal < contourArea) {
            maxVal = contourArea;
            maxValIdx = contourIdx;
        }
    }
    if (!contours.isEmpty()) {
        Imgproc.drawContours(srcMat, contours, maxValIdx, new Scalar(0, 255, 0), 3);
        Rect rect = Imgproc.boundingRect(contours.get(maxValIdx));
        Log.e("rect", "" + rect);
        int top = srcMat.height();
        int left = srcMat.width();
        int right = 0;
        int bottom = 0;
        if (rect.x < left) {
            left = rect.x;
        }
        if (rect.x + rect.width > right) {
            right = rect.x + rect.width;
        }
        if (rect.y < top) {
            top = rect.y;
        }
        if (rect.y + rect.height > bottom) {
            bottom = rect.y + rect.height;
        }
        Point topLeft = new Point(left, top);
        Point topRight = new Point(right, top);
        Point bottomRight = new Point(right, bottom);
        Point bottomLeft = new Point(left, bottom);
        return warp(srcMat, topLeft, topRight, bottomLeft, bottomRight);
    }
    return null;
}
Mat warp(Mat inputMat, Point topLeft, Point topRight, Point bottomLeft, Point bottomRight) {
    int resultWidth = (int) (topRight.x - topLeft.x);
    int bottomWidth = (int) (bottomRight.x - bottomLeft.x);
    if (bottomWidth > resultWidth) {
        resultWidth = bottomWidth;
    }
    int resultHeight = (int) (bottomLeft.y - topLeft.y);
    int bottomHeight = (int) (bottomRight.y - topRight.y);
    if (bottomHeight > resultHeight) {
        resultHeight = bottomHeight;
    }
    Mat outputMat = new Mat(resultWidth, resultHeight, CvType.CV_8UC1);
    List<Point> source = new ArrayList<>();
    source.add(topLeft);
    source.add(topRight);
    source.add(bottomLeft);
    source.add(bottomRight);
    Mat startM = Converters.vector_Point2f_to_Mat(source);
    Point ocvPOut1 = new Point(0, 0);
    Point ocvPOut2 = new Point(resultWidth, 0);
    Point ocvPOut3 = new Point(0, resultHeight);
    Point ocvPOut4 = new Point(resultWidth, resultHeight);
    List<Point> dest = new ArrayList<>();
    dest.add(ocvPOut1);
    dest.add(ocvPOut2);
    dest.add(ocvPOut3);
    dest.add(ocvPOut4);
    Mat endM = Converters.vector_Point2f_to_Mat(dest);
    Mat perspectiveTransform = Imgproc.getPerspectiveTransform(startM, endM);
    Imgproc.warpPerspective(inputMat, outputMat, perspectiveTransform, new Size(resultWidth, resultHeight));
    return outputMat;
}
UPDATE 1
Replaced This:
return warp(srcMat, topLeft, topRight, bottomLeft, bottomRight);
With This:
return warp(srcMat, topLeft, topRight, bottomRight, bottomLeft);
Result Update 1:
UPDATE 2
public Mat warp(Mat inputMat, MatOfPoint selectedContour) {
    MatOfPoint2f new_mat = new MatOfPoint2f(selectedContour.toArray());
    MatOfPoint2f approxCurve_temp = new MatOfPoint2f();
    int contourSize = (int) selectedContour.total();
    Imgproc.approxPolyDP(new_mat, approxCurve_temp, contourSize * 0.05, true);
    double[] temp_double;
    temp_double = approxCurve_temp.get(0, 0);
    Point p1 = new Point(temp_double[0], temp_double[1]);
    temp_double = approxCurve_temp.get(1, 0);
    Point p2 = new Point(temp_double[0], temp_double[1]);
    temp_double = approxCurve_temp.get(2, 0);
    Point p3 = new Point(temp_double[0], temp_double[1]);
    temp_double = approxCurve_temp.get(3, 0);
    Point p4 = new Point(temp_double[0], temp_double[1]);
    List<Point> source = new ArrayList<Point>();
    source.add(p1);
    source.add(p2);
    source.add(p3);
    source.add(p4);
    Mat startM = Converters.vector_Point2f_to_Mat(source);
    int resultWidth = 846;
    int resultHeight = 2048;
    Mat outputMat = new Mat(resultWidth, resultHeight, CvType.CV_8UC4);
    Point ocvPOut1 = new Point(0, 0);
    Point ocvPOut2 = new Point(0, resultHeight);
    Point ocvPOut3 = new Point(resultWidth, resultHeight);
    Point ocvPOut4 = new Point(resultWidth, 0);
    List<Point> dest = new ArrayList<Point>();
    dest.add(ocvPOut1);
    dest.add(ocvPOut2);
    dest.add(ocvPOut3);
    dest.add(ocvPOut4);
    Mat endM = Converters.vector_Point2f_to_Mat(dest);
    Mat perspectiveTransform = Imgproc.getPerspectiveTransform(startM, endM);
    Imgproc.warpPerspective(inputMat, outputMat, perspectiveTransform, new Size(resultWidth, resultHeight),
            Imgproc.INTER_CUBIC);
    return outputMat;
}
Result Update 2:
I have changed my warp function a bit and the code is attached. However, the resulting image is somehow rotated in the wrong direction. Can you guide me on the correct way to do this?
The Android device orientation is set to portrait, and the input image is in portrait as well.
UPDATE 3
I have managed to straighten the token by sorting the corners like so:
List<Point> source = new ArrayList<Point>();
source.add(p2);
source.add(p3);
source.add(p4);
source.add(p1);
Mat startM = Converters.vector_Point2f_to_Mat(source);
Result Update 3:
However, the resulting image is cropped on the left side, and I have no idea how to tackle that.
I have managed to straighten the input image when the token is tilted to the right or left, and the output image is straight nonetheless. However, if the input image already has the token centered and straight up, the same code rotates the token like so:
Issue Update 3:
The transformation needed to deskew the ticket is close to an affine one. You can obtain it by approximating the outline with a parallelogram: find the vertices of the parallelogram as the leftmost, topmost, rightmost and bottommost points.
Actually, you just need three vertices (the fourth can be recomputed from these). Maybe a least-squares fitting of the parallelogram is possible, I don't know.
Another option is to consider a homographic transform, which is defined from four points (but the computation is much more complex). It will take perspective into account. (You might get some insight here: https://www.codeproject.com/Articles/674433/Perspective-Projection-of-a-Rectangle-Homography.)
To straighten up the image, it suffices to apply the inverse transform and retrieve a rectangle. In any case, you will notice that the size of this rectangle is unknown, so you can scale it arbitrarily. The hardest issue is to find a suitable aspect ratio.
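A minimal sketch of the parallelogram-vertex step suggested above, assuming contour is the largest contour already found in processMat(). (The fourth vertex of a parallelogram can also be recomputed from the other three: p4 = p1 + p3 - p2 for consecutive vertices p1, p2, p3.)

Point leftmost = null, topmost = null, rightmost = null, bottommost = null;
for (Point p : contour.toList()) {
    if (leftmost == null || p.x < leftmost.x) leftmost = p;
    if (rightmost == null || p.x > rightmost.x) rightmost = p;
    if (topmost == null || p.y < topmost.y) topmost = p;
    if (bottommost == null || p.y > bottommost.y) bottommost = p;
}
// These four extreme points approximate the parallelogram; map them to an
// upright rectangle with getPerspectiveTransform, as in warp() above.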

OpenCV, Java: Color Detection in a certain specified area

I currently have this image:
I managed to detect the black object. Now I want to detect the green object, but I only want the application to look for the green object below the black object. I already have the code to detect the green tape, and it's working. I just need to set it so that it only looks in the area below the black object.
The resulting image should still look like this:
P.S. Some of the variables are named "Blue"; rest assured it's using green scalar values.
Code:
// Detect black
private Bitmap findCombine(Bitmap sourceBitmap) {
    Bitmap roiBitmap = null;
    Scalar green = new Scalar(0, 255, 0, 255);
    Mat sourceMat = new Mat(sourceBitmap.getWidth(), sourceBitmap.getHeight(), CvType.CV_8UC3);
    Utils.bitmapToMat(sourceBitmap, sourceMat);
    Mat roiTmp = sourceMat.clone();
    bitmapWidth = sourceBitmap.getWidth();
    Log.e("bitmapWidth", String.valueOf(bitmapWidth));
    final Mat hsvMat = new Mat();
    sourceMat.copyTo(hsvMat);
    // convert mat to HSV format for Core.inRange()
    Imgproc.cvtColor(hsvMat, hsvMat, Imgproc.COLOR_RGB2HSV);
    Scalar lowerb = new Scalar(85, 50, 40); // lower color border for BLUE
    Scalar upperb = new Scalar(135, 255, 255); // upper color border for BLUE
    Scalar lowerblack = new Scalar(0, 0, 0); // lower color border for BLACK
    Scalar upperblack = new Scalar(180, 255, 40); // upper color border for BLACK
    Scalar testRunL = new Scalar(60, 50, 40); // lower GREEN (83 100 51)
    Scalar testRunU = new Scalar(90, 255, 255); // upper GREEN
    Core.inRange(hsvMat, lowerblack, upperblack, roiTmp); // select only black pixels
    // find contours
    List<MatOfPoint> contours = new ArrayList<>();
    List<RotatedRect> boundingRects = new ArrayList<>();
    Imgproc.findContours(roiTmp, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    // find appropriate bounding rectangles
    for (MatOfPoint contour : contours) {
        MatOfPoint2f areaPoints = new MatOfPoint2f(contour.toArray());
        RotatedRect boundingRect = Imgproc.minAreaRect(areaPoints);
        double rectangleArea = boundingRect.size.area();
        // test min ROI area in pixels
        if (rectangleArea > 1300 && rectangleArea < 500000) { //400000
            Point rotated_rect_points[] = new Point[4];
            boundingRect.points(rotated_rect_points);
            Rect rect3 = Imgproc.boundingRect(new MatOfPoint(rotated_rect_points));
            Log.e("blackArea", String.valueOf(rect3.area()));
            // test horizontal ROI orientation
            if (rect3.height > rect3.width) {
                Imgproc.rectangle(sourceMat, rect3.tl(), rect3.br(), green, 3);
                xBlack = rect3.br().x;
                xBlackCenter = (rect3.br().x + rect3.tl().x) / 2;
                yBlack = rect3.br().y; // bottom
                battHeight = (rect3.br().y - rect3.tl().y); // batt height in pixels
                Log.e("BLACKBR, TL", String.valueOf(rect3.br().y) + "," + String.valueOf(rect3.tl().y));
            }
        }
    }
    roiBitmap = Bitmap.createBitmap(sourceMat.cols(), sourceMat.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(sourceMat, roiBitmap);
    // Set area to detect green
    Point leftPoint = new Point(0, yBlack); // far left, black object height
    Point rightPoint = new Point(roiBitmap.getWidth(), roiBitmap.getHeight()); // bottom right of entire bitmap
    Rect bottomRect = new Rect(leftPoint, rightPoint);
    double rectWidth = sourceBitmap.getWidth() - 0;
    double rectHeight = sourceBitmap.getHeight() - yBlack;
    Log.e("rectWidth", String.valueOf(rectWidth));
    Log.e("rectHeight", String.valueOf(rectHeight));
    Mat sourceMatT = new Mat(roiBitmap.getWidth(), roiBitmap.getHeight(), CvType.CV_8UC3);
    Utils.bitmapToMat(roiBitmap, sourceMatT);
    Bitmap C = Bitmap.createBitmap(sourceMatT.cols(), sourceMatT.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(sourceMatT, C);
    Mat dumbMat = sourceMatT.clone();
    Log.e("sourceMatT, BottomRect", "SMT " + String.valueOf(sourceMatT.size()) + " bottomRect " + String.valueOf(bottomRect.size()));
    Mat cropMat = new Mat(dumbMat, bottomRect);
    ImageView imgCropped = (ImageView) findViewById(R.id.cropped_image_view);
    //Utils.matToBitmap(cropMat, C);
    imgCropped.setImageBitmap(C);
    // Detect green
    Bitmap roiBitmap2 = null;
    Mat sourceMat2 = new Mat(C.getWidth(), C.getHeight(), CvType.CV_8UC3);
    Utils.bitmapToMat(C, sourceMat2);
    Mat roiTmp2 = sourceMat2.clone();
    final Mat hsvMat2 = new Mat();
    sourceMat.copyTo(hsvMat2);
    // convert mat to HSV format for Core.inRange()
    Imgproc.cvtColor(hsvMat2, hsvMat2, Imgproc.COLOR_RGB2HSV);
    Core.inRange(hsvMat2, testRunL, testRunU, roiTmp2); // select only green pixels
    // find contours
    List<MatOfPoint> contours2 = new ArrayList<>();
    List<RotatedRect> boundingRects2 = new ArrayList<>();
    Imgproc.findContours(roiTmp2, contours2, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    // find appropriate bounding rectangles
    for (MatOfPoint contour2 : contours2) {
        MatOfPoint2f areaPoints2 = new MatOfPoint2f(contour2.toArray());
        RotatedRect boundingRect2 = Imgproc.minAreaRect(areaPoints2);
        double rectangleArea2 = boundingRect2.size.area();
        // test min ROI area in pixels
        if (rectangleArea2 > 40) { //214468.32402064091 // 20000
            Point rotated_rect_points2[] = new Point[4];
            boundingRect2.points(rotated_rect_points2);
            Rect rect = Imgproc.boundingRect(new MatOfPoint(rotated_rect_points2));
            Log.e("area green", String.valueOf(boundingRect2.size.area()));
            // test vertical ROI orientation
            if (rect.width > rect.height) {
                if (numRect < 2) {
                    Imgproc.rectangle(sourceMat2, rect.tl(), rect.br(), green, 3);
                    xBlue = (rect.br().x + rect.tl().x) / 2; // center
                    yBlue = rect.br().y; // bottom
                    Log.e("GREEN br,tl", String.valueOf(rect.br().y) + " " + String.valueOf(rect.tl().y));
                }
            }
        }
    }
    Point firstPoint = new Point(xBlackCenter, yBlack);
    Point secondPoint = new Point(xBlackCenter, yBlue);
    Point middlePoint = new Point(firstPoint.x,
            firstPoint.y + 0.5 * (secondPoint.y - firstPoint.y));
    Scalar lineColor = new Scalar(255, 0, 0, 255);
    int lineWidth = 3;
    Scalar textColor = new Scalar(255, 0, 0, 255);
    // height of bounce = batt height IRL / batt height in pixels * line height in pixels
    double lineHeightCm = (4.65 / battHeight) * findHeight(yBlack, yBlue);
    Log.e("PixelBatt/PixelBounce", "BattH: " + battHeight + " find height " + String.valueOf(findHeight(xBlack, xBlue)) + "!");
    Log.e("Blacky-blueY", String.valueOf(xBlue - xBlack));
    Imgproc.line(sourceMat2, firstPoint, secondPoint, lineColor, lineWidth);
    Imgproc.putText(sourceMat2, String.valueOf(lineHeightCm), middlePoint,
            Core.FONT_HERSHEY_PLAIN, 3.5, textColor);
    roiBitmap2 = Bitmap.createBitmap(sourceMat2.cols(), sourceMat2.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(sourceMat2, roiBitmap2);
    TextView tvR = (TextView) findViewById(R.id.tvR);
    tvR.setText("Bounce Height = " + lineHeightCm + "cm");
    return roiBitmap2;
}
Error:
CvException [org.opencv.core.CvException: /build/master_pack-android/opencv/modules/java/generator/src/cpp/utils.cpp:97: error: (-215) src.dims == 2 && info.height == (uint32_t)src.rows && info.width == (uint32_t)src.cols in function void Java_org_opencv_android_Utils_nMatToBitmap2(JNIEnv*, jclass, jlong, jobject, jboolean)
There is no need to find green objects in a specific area: you can find the green contours in the entire image, then just test their coordinates relative to the black rectangle. Something like this:
First, find the black rectangle (a sketch of a possible findBlackRect() helper appears at the end of this answer):
Rect blackRect = findBlackRect();
Then find the contours of ALL green objects (the same way you find the black one):
// find green contours
List<MatOfPoint> greenContours = new ArrayList<>();
Imgproc.findContours(roiMat, greenContours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
Then test which of the green contours lie below the black rect (i.e. have a greater Y coordinate):
// find appropriate bounding rectangles
for (MatOfPoint contour : greenContours) {
    MatOfPoint2f areaPoints = new MatOfPoint2f(contour.toArray());
    RotatedRect boundingRect = Imgproc.minAreaRect(areaPoints);
    Point rotated_rect_points[] = new Point[4];
    boundingRect.points(rotated_rect_points);
    Rect rect = Imgproc.boundingRect(new MatOfPoint(rotated_rect_points));
    // test whether the top left Y coord of the green contour's bounding
    // rectangle is greater than the top left Y coord of the black rectangle
    if (rect.tl().y > blackRect.tl().y) {
        // that is a green contour under the black rectangle
        // just draw it
        Imgproc.rectangle(sourceMat, rect.tl(), rect.br(), green, 3);
    }
}
And so on...
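For reference, a possible findBlackRect() sketch, assuming the same HSV thresholds the question already uses for black (this helper is not part of the original answer):

private Rect findBlackRect(Mat hsvMat) {
    // Threshold to a black mask, using the question's lowerblack/upperblack bounds.
    Mat blackMask = new Mat();
    Core.inRange(hsvMat, new Scalar(0, 0, 0), new Scalar(180, 255, 40), blackMask);
    // Take the bounding rect of the largest black contour.
    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(blackMask, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    Rect largest = null;
    double largestArea = 0;
    for (MatOfPoint contour : contours) {
        Rect rect = Imgproc.boundingRect(contour);
        if (rect.area() > largestArea) {
            largestArea = rect.area();
            largest = rect;
        }
    }
    return largest;
}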

how to get the coordinates of a contour

I am using OpenCV4Android version 2.4.11 and I am detecting any rectangular shapes in frames retrieved by the camera. As shown in image_1 below, I draw a contour in black around the detected object, and what I am trying to do is get all the coordinates of the drawn contour, the one that is ONLY drawn in black. What I attempted, as shown in code_1 below, is to get the largest contour and the index of the largest contour and save them in "largestContour" and "largest_contour_index" respectively. Then I draw the contour using
Imgproc.drawContours(mMatInputFrame, contours, largest_contour_index, new Scalar(0, 0, 0), 2, 8, hierachy, 0, new Point());
and then I pass the points of the largest contour to the class FindCorners, because I want to find the specific coordinates of the contour drawn in black, as follows:
this.mFindCorners = new FindCorners(largestContour.toArray());
double[] cords = this.mFindCorners.getCords();
The following line of code:
double[] cords = this.mFindCorners.getCords();
should give me the smallest x-coordinate, smallest y-coordinate, largest x-coordinate and largest y-coordinate. But when I draw a circle around the coordinates I got from "this.mFindCorners.getCords();" I get something like image_2 below, which is just the corners of the boundingRect.
Actually, I do not want any coordinates from the boundingRect; I want access to the coordinates of the contour that is drawn around the detected object in black.
Please let me know how to get the coordinates of the contour itself.
code_1:
if (contours.size() > 0) {
    for (int i = 0; i < contours.size(); i++) {
        contour2f = new MatOfPoint2f(contours.get(i).toArray());
        approxDistance = Imgproc.arcLength(contour2f, true) * .01; //.02
        approxCurve = new MatOfPoint2f();
        Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
        points = new MatOfPoint(approxCurve.toArray());
        double area = Math.abs(Imgproc.contourArea(points, true));
        if (points.total() >= 4 && area >= 40000 && area <= 200000) {
            if (area > largest_area) {
                largest_area = area;
                largest_contour_index = i;
                pointsOfLargestContour = points;
                largestContour = contours.get(i);
            }
        }
    }
    if (largest_area > 0) {
        Imgproc.drawContours(mMatInputFrame, contours, largest_contour_index, new Scalar(0, 0, 0), 2, 8, hierachy, 0, new Point());
        this.mFindCorners = new FindCorners(largestContour.toArray());
        double[] cords = this.mFindCorners.getCords();
        Core.circle(mMatInputFrame, new Point(cords[0], cords[1]), 10, new Scalar(255, 0, 0));
        Core.circle(mMatInputFrame, new Point(cords[2], cords[3]), 10, new Scalar(255, 255, 0));
    }
}
FindCorners:
public class FindCorners {
    private final static String TAG = FragOpenCVCam.class.getSimpleName();
    private ArrayList<Double> mlistXCords = null;
    private ArrayList<Double> mlistYCords = null;
    private double mSmallestX;
    private double mSmallestY;
    private double mLargestX;
    private double mLargestY;
    private double[] mCords = null;

    public FindCorners(Point[] points) {
        this.mlistXCords = new ArrayList<>();
        this.mlistYCords = new ArrayList<>();
        this.mCords = new double[4];
        Log.d(TAG, "points.length: " + points.length);
        for (int i = 0; i < points.length; i++) {
            this.mlistXCords.add(points[i].x);
            this.mlistYCords.add(points[i].y);
        }
        // ascending
        Collections.sort(this.mlistXCords);
        Collections.sort(this.mlistYCords);
        this.mSmallestX = this.mlistXCords.get(0);
        this.mSmallestY = this.mlistYCords.get(0);
        this.mLargestX = this.mlistXCords.get(this.mlistXCords.size() - 1);
        this.mLargestY = this.mlistYCords.get(this.mlistYCords.size() - 1);
        this.mCords[0] = this.mSmallestX;
        this.mCords[1] = this.mSmallestY;
        this.mCords[2] = this.mLargestX;
        this.mCords[3] = this.mLargestY;
    }

    public double[] getCords() {
        return this.mCords;
    }
}
image_1:
image_2:
Update
I do not want the coordinates of the bounding rect; what I want is the exact coordinates of the black contour. As shown in image_3, the coordinates I am getting from my code are where the red and yellow circles are, but I am looking for access to the coordinates of the black line (the contour) so I can draw some circles on it, as shown in image_3. The green spots are just to show you where I want to have coordinates.
image_3:
Your problem is that you have sorted the x's and y's separately, so it is no surprise that your algorithm finds the red and yellow points.
I can suggest the following algorithm:
double minX = Double.MAX_VALUE, minY = Double.MAX_VALUE;
double maxX = -1, maxY = -1;
int minXIndex = -1, minYIndex = -1, maxXIndex = -1, maxYIndex = -1;
for (int i = 0; i < points.length; i++) {
    if (points[i].x < minX) { minX = points[i].x; minXIndex = i; }
    if (points[i].x > maxX) { maxX = points[i].x; maxXIndex = i; }
    if (points[i].y < minY) { minY = points[i].y; minYIndex = i; }
    if (points[i].y > maxY) { maxY = points[i].y; maxYIndex = i; }
}
Point corner1 = points[minXIndex]; // leftmost point
Point corner2 = points[minYIndex]; // topmost point
Point corner3 = points[maxXIndex]; // rightmost point
Point corner4 = points[maxYIndex]; // bottommost point
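And if the goal (per the update) is to place circles on the black contour line itself, one option is simply to iterate the contour's own points. A sketch using the question's variables:

for (Point p : largestContour.toList()) {
    // draw a small filled circle at every point along the contour
    Core.circle(mMatInputFrame, p, 5, new Scalar(0, 255, 0), -1);
}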

Opencv - select rectangle and apply transformation from image in android?

I want to detect a paper sheet in an image. I applied median blur, Canny, dilate, threshold, etc. algorithms to find it. I am able to find the sheet but don't know how to crop the rectangle and apply the transformation.
This is my code:
Mat blurred = new Mat();
Imgproc.medianBlur(src, blurred, 9);

// Set up images to use.
Mat gray0 = new Mat(blurred.size(), CvType.CV_8U);
Mat gray = new Mat();

// For Core.mixChannels.
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
List<MatOfPoint2f> rectangles = new ArrayList<MatOfPoint2f>();

List<Mat> sources = new ArrayList<Mat>();
sources.add(blurred);
List<Mat> destinations = new ArrayList<Mat>();
destinations.add(gray0);

// To filter rectangles by their areas.
int srcArea = src.rows() * src.cols();

// Find squares in every color plane of the image.
for (int c = 0; c < 3; c++) {
    int[] ch = {c, 0};
    MatOfInt fromTo = new MatOfInt(ch);
    Core.mixChannels(sources, destinations, fromTo);

    // Try several threshold levels.
    for (int l = 0; l < N; l++) {
        if (l == 0) {
            // HACK: Use Canny instead of zero threshold level.
            // Canny helps to catch squares with gradient shading.
            // NOTE: No kernel size parameters on Java API.
            Imgproc.Canny(gray0, gray, 0, CANNY_THRESHOLD);
            // Dilate Canny output to remove potential holes between edge segments.
            Imgproc.dilate(gray, gray, Mat.ones(new Size(3, 3), 0));
        } else {
            int threshold = (l + 1) * 255 / N;
            Imgproc.threshold(gray0, gray, threshold, 255, Imgproc.THRESH_BINARY);
        }

        // Find contours and store them all as a list.
        Imgproc.findContours(gray, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

        int i = 0;
        for (MatOfPoint contour : contours) {
            MatOfPoint2f contourFloat = GeomUtils.toMatOfPointFloat(contour);
            double arcLen = Imgproc.arcLength(contourFloat, true) * 0.02;

            // Approximate polygonal curves.
            MatOfPoint2f approx = new MatOfPoint2f();
            Imgproc.approxPolyDP(contourFloat, approx, arcLen, true);

            if (isRectangle(approx, srcArea)) {
                Imgproc.drawContours(src, contours, i, new Scalar(255, 0, 0), 3);
                //rectangles.add(approx);
                /*Rect rect = Imgproc.boundingRect(contour);
                Log.e("Rectangle Finder:-" + i, "Height:-" + rect.height + ", Width:-" + rect.width + " and Area:-" + rect.area() + "\nX:-" + rect.x + ",Y:-" + rect.y);*/
            }
            i++;
        }
    }
}
I want to select only the white paper sheet. Please help me.
Thanks in advance.

Why is pointPolygonTest() method of OpenCV4Android returning -1 for every pixel?

In the following code, I have carried out the following steps:
Loaded an image from the sdcard.
Converted it to HSV format.
Used the inRange function to mask out the red color.
Used findContours to find the contours.
Found the largest of those contours.
Created an ROI around the largest contour using the boundingRect and submat functions.
Converted this ROI Mat to HSV format.
Iterated through the ROI Mat and checked for each pixel whether it lies within the largest contour. I used the method pointPolygonTest to find this out, but it returns -1 for every pixel, as can be seen from the Log.i output I have pasted here. The question is why? How can I correct this?
private Scalar detectColoredBlob() {
    rgbaFrame = Highgui.imread("/mnt/sdcard/DCIM/rgbaMat4Mask.bmp");
    Mat hsvImage = new Mat();
    Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_BGR2HSV);
    Highgui.imwrite("/mnt/sdcard/DCIM/hsvImage.bmp", hsvImage); // check

    Mat maskedImage = new Mat();
    Core.inRange(hsvImage, new Scalar(0, 100, 100), new Scalar(10, 255, 255), maskedImage);
    Highgui.imwrite("/mnt/sdcard/DCIM/maskedImage.bmp", maskedImage); // check

    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(maskedImage, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

    // We will use only the largest contour. Other contours (any other possible blobs of this color range) will be ignored.
    MatOfPoint largestContour = contours.get(0);
    double largestContourArea = Imgproc.contourArea(largestContour);
    for (int i = 1; i < contours.size(); ++i) { // NB Notice the prefix increment.
        MatOfPoint currentContour = contours.get(i);
        double currentContourArea = Imgproc.contourArea(currentContour);
        if (currentContourArea > largestContourArea) {
            largestContourArea = currentContourArea;
            largestContour = currentContour;
        }
    }
    MatOfPoint2f largestContour2f = new MatOfPoint2f(largestContour.toArray()); // Required on Line 289. See http://stackoverflow.com/questions/11273588/how-to-convert-matofpoint-to-matofpoint2f-in-opencv-java-api

    Rect detectedBlobRoi = Imgproc.boundingRect(largestContour);
    Mat detectedBlobRgba = rgbaFrame.submat(detectedBlobRoi);
    Highgui.imwrite("/mnt/sdcard/DCIM/detectedBlobRgba.bmp", detectedBlobRgba); // check

    Mat detectedBlobHsv = new Mat();
    Imgproc.cvtColor(detectedBlobRgba, detectedBlobHsv, Imgproc.COLOR_BGR2HSV);
    Highgui.imwrite("/mnt/sdcard/DCIM/roiHsv.bmp", detectedBlobHsv); // check

    for (int firstCoordinate = 0; firstCoordinate < detectedBlobHsv.rows(); firstCoordinate++) {
        for (int secondCoordinate = 0; secondCoordinate < detectedBlobHsv.cols(); secondCoordinate++) {
            Log.i(TAG, "HAPPY " + Arrays.toString(detectedBlobHsv.get(firstCoordinate, secondCoordinate)));
            if (Imgproc.pointPolygonTest(largestContour2f, new Point(firstCoordinate, secondCoordinate), false) == -1) {
                Log.i(TAG, "HAPPY ....................... OUTSIDE");
            }
        }
    }
    Highgui.imwrite("/mnt/sdcard/DCIM/processedcontoured.bmp", detectedBlobHsv); // check
EDIT:
I am doing this because I need to compute the average HSV color of the pixels lying inside the contour (i.e. the average HSV color of the biggest red blob). If I computed the average color of the ROI detectedBlobHsv by the normal formula, I would do something like:

Scalar averageHsvColor = new Scalar(256);
Scalar sumHsvOfPixels = new Scalar(256);
sumHsvOfPixels = Core.sumElems(detectedBlobHsv);
int numOfPixels = detectedBlobHsv.width() * detectedBlobHsv.height();
for (int channel = 0; channel < sumHsvOfPixels.val.length; channel++) {
    averageHsvColor.val[channel] = sumHsvOfPixels.val[channel] / numOfPixels;
}
So somebody here on SO (probably you?) had suggested a way to exclude pixels outside my contour a while back. I'd implement that like:

// Give pixels outside the contour of interest an HSV value of double[]{0,0,0}, so that
// they don't affect the computation of sumHsvOfPixels while computing the average, and
// keep track of the number of pixels removed this way, so we can subtract that number
// from numOfPixels during the computation of the average.
int pixelsRemoved = 0;
for (int row = 0; row < detectedBlobHsv.rows(); row++) {
    for (int col = 0; col < detectedBlobHsv.cols(); col++) {
        if (Imgproc.pointPolygonTest(largestContour2f, new Point(row, col), false) == -1) {
            detectedBlobHsv.put(row, col, new double[]{0, 0, 0});
            pixelsRemoved++;
        }
    }
}
Then compute the average like:

Scalar averageHsvColor = new Scalar(256);
Scalar sumHsvOfPixels = new Scalar(256);
sumHsvOfPixels = Core.sumElems(detectedBlobHsv); // This will now exclude pixels outside the contour
int numOfPixels = (detectedBlobHsv.width() * detectedBlobHsv.height()) - pixelsRemoved;
for (int channel = 0; channel < sumHsvOfPixels.val.length; channel++) {
    averageHsvColor.val[channel] = sumHsvOfPixels.val[channel] / numOfPixels;
}
EDIT 1:
Towards the end of the following method, I have created the mask with a list of MatOfPoints which contains the largest contour only. When I wrote it to SDCard, I got
I don't know where I messed up!
private Scalar detectColoredBlob() {
//Highgui.imwrite("/mnt/sdcard/DCIM/rgbaFrame.jpg", rgbaFrame);// check
rgbaFrame = Highgui.imread("/mnt/sdcard/DCIM/rgbaMat4Mask.bmp");
//GIVING A UNIFORM VALUE OF 255 TO THE V CHANNEL OF EACH PIXEL (255 IS THE MAXIMUM VALUE OF V ALLOWED - Simulating a maximum light condition)
for (int firstCoordinate = 0; firstCoordinate < rgbaFrame.rows(); firstCoordinate++) {
for (int secondCoordinate = 0; secondCoordinate < rgbaFrame.cols(); secondCoordinate++) {
double[] pixelChannels = rgbaFrame.get(firstCoordinate, secondCoordinate);
pixelChannels[2] = 255;
rgbaFrame.put(firstCoordinate, secondCoordinate, pixelChannels);
}
}
Mat hsvImage = new Mat();
Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_BGR2HSV);
Highgui.imwrite("/mnt/sdcard/DCIM/hsvImage.bmp", hsvImage);// check
Mat maskedImage = new Mat();
Core.inRange(hsvImage, new Scalar(0, 100, 100), new Scalar(10, 255, 255), maskedImage);
Highgui.imwrite("/mnt/sdcard/DCIM/maskedImage.bmp", maskedImage);// check
// Mat dilatedMat = new Mat();
// Imgproc.dilate(maskedImage, dilatedMat, new Mat());
// Highgui.imwrite("/mnt/sdcard/DCIM/dilatedMat.jpg", dilatedMat);// check
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(maskedImage, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
//FINDING THE BIGGEST CONTOUR
// \/ We will use only the largest contour. Other contours (any other possible blobs of this color range) will be ignored.
MatOfPoint largestContour = contours.get(0);
double largestContourArea = Imgproc.contourArea(largestContour);
for (int i = 1; i < contours.size(); ++i) {// NB Notice the prefix increment.
MatOfPoint currentContour = contours.get(i);
double currentContourArea = Imgproc.contourArea(currentContour);
if (currentContourArea > largestContourArea) {
largestContourArea = currentContourArea;
largestContour = currentContour;
}
}
Rect detectedBlobRoi = Imgproc.boundingRect(largestContour);
Mat detectedBlobRgba = rgbaFrame.submat(detectedBlobRoi);
Highgui.imwrite("/mnt/sdcard/DCIM/detectedBlobRgba.bmp", detectedBlobRgba);// check
Mat detectedBlobHsv = new Mat();
Imgproc.cvtColor(detectedBlobRgba, detectedBlobHsv, Imgproc.COLOR_BGR2HSV);
Highgui.imwrite("/mnt/sdcard/DCIM/roiHsv.bmp", detectedBlobHsv);// check
List<MatOfPoint> largestContourList = new ArrayList<>();
largestContourList.add(largestContour);
Mat roiWithMask = new Mat(detectedBlobHsv.rows(), detectedBlobHsv.cols(), CvType.CV_8UC3);
roiWithMask.setTo(new Scalar(0,0,0));
Imgproc.drawContours(roiWithMask, largestContourList, 0, new Scalar(0, 255, 255), -1);//TODO Using -1 instead of CV_FILLED.
Highgui.imwrite("/mnt/sdcard/DCIM/roiWithMask.bmp", roiWithMask);// check
// CALCULATING THE AVERAGE COLOR OF THE DETECTED BLOB
// STEP 1:
double [] averageHsvColor = new double[]{0,0,0};
int numOfPixels = 0;
for (int firstCoordinate = 0; firstCoordinate < detectedBlobHsv.rows(); ++firstCoordinate) {
for (int secondCoordinate = 0; secondCoordinate < detectedBlobHsv.cols(); ++secondCoordinate) {
double hue = roiWithMask.get(firstCoordinate, secondCoordinate)[0];
double saturation = roiWithMask.get(firstCoordinate, secondCoordinate)[1];
double value = roiWithMask.get(firstCoordinate, secondCoordinate)[2];
averageHsvColor[0] += hue;
averageHsvColor[1] += saturation;
averageHsvColor[2] += value;
numOfPixels++;
}
}
averageHsvColor[0] /= numOfPixels;
averageHsvColor[1] /= numOfPixels;
averageHsvColor[1] /= numOfPixels;
return new Scalar(averageHsvColor);
}
EDIT 2:
I corrected my 3-channel mask and made a single-channel mask:
Mat roiMask = new Mat(rgbaFrame.rows(), rgbaFrame.cols(), CvType.CV_8UC1);
roiMask.setTo(new Scalar(0));
Imgproc.drawContours(roiMask, largestContourList, 0, new Scalar(255), -1);
and this resulted in the correct roiMask:
Then, before the comment // CALCULATING THE AVERAGE COLOR OF THE DETECTED BLOB, I added:
Mat newImageWithRoi = new Mat(rgbaFrame.rows(), rgbaFrame.cols(), CvType.CV_8UC3);
newImageWithRoi.setTo(new Scalar(0, 0, 0));
rgbaFrame.copyTo(newImageWithRoi, roiMask);
Highgui.imwrite("/mnt/sdcard/DCIM/newImageWithRoi.bmp", newImageWithRoi);//check
This resulted in:
Now again I don't know how to proceed. :s
You don't need to use pointPolygonTest, because you already have the mask.
You can simply sum up the values that lie on the mask. Something along these lines (not able to test this; the full-image mask from EDIT 2 is assumed to be cropped to the ROI first):

// Initialize at 0!!!
Scalar averageHsvColor = new Scalar(0, 0, 0);
int numOfPixels = 0;
// NOTE: assumes the full-image mask is cropped down to the same size as the ROI:
Mat maskRoi = roiMask.submat(detectedBlobRoi);
for (int r = 0; r < detectedBlobHsv.height(); ++r) {
    for (int c = 0; c < detectedBlobHsv.width(); ++c) {
        if (maskRoi.get(r, c)[0] > 0) { // pixel lies inside the mask
            double[] hsv = detectedBlobHsv.get(r, c); // H, S, V values of pixel at (r, c)
            // Sum values
            averageHsvColor.val[0] += hsv[0];
            averageHsvColor.val[1] += hsv[1];
            averageHsvColor.val[2] += hsv[2];
            // Increment number of pixels inside mask
            numOfPixels++;
        }
    }
}
// Compute average
averageHsvColor.val[0] /= numOfPixels;
averageHsvColor.val[1] /= numOfPixels;
averageHsvColor.val[2] /= numOfPixels;
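For reference, the same masked average can also be computed in a single call with OpenCV's Core.mean, assuming maskRoi is the single-channel mask cropped to the same size as detectedBlobHsv:

Scalar averageHsvColor = Core.mean(detectedBlobHsv, maskRoi);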
