I am using the following code to detect edges in a given document.
private Mat edgeDetection(Mat src) {
    Mat edges = new Mat();
    Imgproc.cvtColor(src, edges, Imgproc.COLOR_BGR2GRAY);
    Imgproc.GaussianBlur(edges, edges, new Size(5, 5), 0);
    Imgproc.Canny(edges, edges, 10, 30);
    return edges;
}
Then I find the document in these edges by taking the largest contour.
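That step is essentially the usual largest-contour-plus-approxPolyDP pattern; roughly like this (a minimal sketch, not necessarily my exact code, using the edges Mat returned above):
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(edges, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
// keep the contour with the largest area
MatOfPoint largest = null;
double maxArea = 0;
for (MatOfPoint c : contours) {
    double area = Imgproc.contourArea(c);
    if (area > maxArea) { maxArea = area; largest = c; }
}
if (largest != null) {
    // approximate it to (hopefully) the 4 corners of the document
    MatOfPoint2f curve = new MatOfPoint2f(largest.toArray());
    MatOfPoint2f approx = new MatOfPoint2f();
    Imgproc.approxPolyDP(curve, approx, 0.02 * Imgproc.arcLength(curve, true), true);
}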
My problem is that I can find the document in the following picture:
but not in this one:
How can I improve this edge detection?
I use Python, but the main idea is the same.
If you directly do cvtColor: bgr -> gray for img2, you are bound to fail, because in grayscale the regions become difficult to distinguish:
Related answers:
How to detect colored patches in an image using OpenCV?
Edge detection on colored background using OpenCV
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
In your image the paper is white while the background is colored, so it's better to detect the paper in the Saturation channel of the HSV color space. For HSV, refer to https://en.wikipedia.org/wiki/HSL_and_HSV#Saturation.
Main steps:
Read the image into BGR
Convert the image from BGR to HSV color space
Threshold the S channel
Then find the largest external contour (or do Canny or HoughLines if you prefer; I chose findContours), and approximate it to get the corners.
This is the first result:
This is the second result:
The Python code (Python 3.5 + OpenCV 3.3):
#!/usr/bin/python3
# 2017.12.20 10:47:28 CST
# 2017.12.20 11:29:30 CST
import cv2
import numpy as np
##(1) read into bgr-space
img = cv2.imread("test2.jpg")
##(2) convert to hsv-space, then split the channels
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h,s,v = cv2.split(hsv)
##(3) threshold the S channel (a fixed threshold here; `THRESH_OTSU` would also work)
th, threshed = cv2.threshold(s, 50, 255, cv2.THRESH_BINARY_INV)
##(4) find all the external contours on the threshed S
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
canvas = img.copy()
#cv2.drawContours(canvas, cnts, -1, (0,255,0), 1)
## sort and choose the largest contour
cnts = sorted(cnts, key = cv2.contourArea)
cnt = cnts[-1]
## approximate the contour to get the corner points
arclen = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.02* arclen, True)
cv2.drawContours(canvas, [cnt], -1, (255,0,0), 1, cv2.LINE_AA)
cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
## Ok, you can see the result as tag(6)
cv2.imwrite("detected.png", canvas)
In OpenCV there is a function called dilate that thickens the bright regions, and with them the lines, so try code like the one below.
private Mat edgeDetection(Mat src) {
    Mat edges = new Mat();
    Imgproc.cvtColor(src, edges, Imgproc.COLOR_BGR2GRAY);
    Imgproc.dilate(edges, edges, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(10, 10)));
    Imgproc.GaussianBlur(edges, edges, new Size(5, 5), 0);
    Imgproc.Canny(edges, edges, 15, 15 * 3);
    return edges;
}
Related
I use the OpenCV library to detect a white page of a book. On dark surfaces I find 4 corners, but on a white background I find only 3 corners. How can I find all 4 corners, or otherwise read a page on a white background?
Or could you suggest another library I could use besides OpenCV?
I use the following code to find the contours.
Mat grayImage = new Mat(imageMat.size(), CvType.CV_8UC4);
Mat cannedImage = new Mat(imageMat.size(), CvType.CV_8UC4);
Mat dilate = new Mat(imageMat.size(), CvType.CV_8UC4);
Imgproc.cvtColor(imageMat, imageMat, Imgproc.COLOR_RGBA2GRAY);
Imgproc.GaussianBlur(imageMat, imageMat, new Size(3, 3), 0);
Imgproc.cvtColor(imageMat, grayImage, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur(grayImage, grayImage, new Size(5.0, 5.0), 0.0);
Imgproc.threshold(grayImage, grayImage, 20.0, 255.0, Imgproc.THRESH_TRIANGLE);
Imgproc.Canny(grayImage, cannedImage, 75.0, 200.0);
Imgproc.dilate(cannedImage, dilate, Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(10.0, 10.0)));
How can I change this code?
I tested my code with these photos.
Thanks,
Have a nice day.
import numpy as np
import cv2
if __name__ == '__main__':
    # read the image and convert to Lab color space
    image = cv2.imread('image.jpg', cv2.IMREAD_UNCHANGED)
    lab = cv2.cvtColor(image, cv2.COLOR_BGR2Lab)
    lab = cv2.split(lab)

    # adaptive threshold on the b channel, then dilate to close small gaps
    binary = cv2.adaptiveThreshold(lab[2], 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 7, 7)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_DILATE, kernel, iterations=3)

    # bounding box of all contour points ([-2] works for both OpenCV 3.x and 4.x)
    contours = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)[-2]
    points = np.concatenate(contours)
    (x, y, w, h) = cv2.boundingRect(points)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255))

    cv2.imshow('image', image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
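If you need the same thing on Android, a rough Java equivalent of the Python code above would be the following (a sketch only, assuming OpenCV 3.x Java bindings and a 3-channel BGR input named imageMat; the parameters mirror the Python code):
Mat lab = new Mat();
Imgproc.cvtColor(imageMat, lab, Imgproc.COLOR_BGR2Lab);
List<Mat> labChannels = new ArrayList<Mat>();
Core.split(lab, labChannels);

// adaptive threshold on the b channel, then dilate 3 times to close gaps
Mat binary = new Mat();
Imgproc.adaptiveThreshold(labChannels.get(2), binary, 255,
        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 7, 7);
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
Imgproc.dilate(binary, binary, kernel, new Point(-1, -1), 3);

// bounding box of all contour points
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(binary, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
List<Point> allPoints = new ArrayList<Point>();
for (MatOfPoint c : contours) {
    allPoints.addAll(c.toList());
}
Rect box = Imgproc.boundingRect(new MatOfPoint(allPoints.toArray(new Point[0])));
Imgproc.rectangle(imageMat, box.tl(), box.br(), new Scalar(0, 0, 255));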
I am creating a project in which I have to remove the background from the image and detect the object.
I am using Canny edge detection to detect edges, then finding contours and drawing the contours on a masked image, but after Canny edge detection I am getting broken edges. How can I fix that?
For the Canny threshold parameters I have tried Otsu's method for the higher and lower thresholds, but it doesn't give an appropriate result. I have also tried computing the mean pixel value d and setting
double high_threshold = 1.33 * d;
double low_threshold = 0.66 * d;
but that is not accurate either. What else can I do?
Mat rgba = new Mat();
Utils.bitmapToMat(bitmap, rgba);
Mat edges = new Mat(rgba.size(), CvType.CV_8UC1);
Imgproc.cvtColor(rgba, edges, Imgproc.COLOR_RGB2GRAY, 4);
Imgproc.GaussianBlur(edges, edges, new Size(3, 3), 2);
Mat thresh = new Mat();
double upper_threshold = Imgproc.threshold(edges, thresh, 0, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C | Imgproc.THRESH_OTSU);
double lower_threshold = 0.1 * upper_threshold;
Imgproc.Canny(edges, edges, upper_threshold, lower_threshold, 3, false);
Mat mDilatedMat = new Mat();
Mat Meroded = new Mat();
double erosion_size = 5;
double dilation_size = 4;
Mat e = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(2 * erosion_size + 1, 2 * erosion_size + 1));
Mat f = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(2 * dilation_size + 1, 2 * dilation_size + 1));
Imgproc.dilate(edges, mDilatedMat, e);
Imgproc.erode(mDilatedMat, Meroded, f);
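For reference, the mean-based variant I mentioned looks roughly like this (a sketch; d is just the mean gray level from Core.mean, and the 1.33/0.66 factors are the ones above):
double d = Core.mean(edges).val[0];          // mean intensity of the blurred gray image
double high_threshold = 1.33 * d;
double low_threshold = 0.66 * d;
Imgproc.Canny(edges, edges, low_threshold, high_threshold);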
You can improve the edge image produced by Sobel, Canny, or another algorithm by applying an edge-linking algorithm.
Many edge-linking algorithms are available, such as the Hough transform,
the ant colony algorithm, etc.
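For example, a very simple Hough-based linking step is to detect line segments in the broken Canny output with some gap tolerance and draw them back onto the edge map (a sketch, assuming OpenCV 3.x Java bindings where Imgproc.line is available; the thresholds are placeholders to tune):
Mat lines = new Mat();
// maxLineGap = 10 lets HoughLinesP bridge small breaks in the edges
Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 50, 30, 10);
for (int i = 0; i < lines.rows(); i++) {
    double[] l = lines.get(i, 0);            // x1, y1, x2, y2
    Imgproc.line(edges, new Point(l[0], l[1]), new Point(l[2], l[3]), new Scalar(255), 2);
}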
I started with the following image, named rgbaMat4Mask.bmp:
Then I converted it to HSV, and then did inRange() to find contours, and got the following Mat named maskedMat:
Then I went on to draw the first contour (the bigger one), on a newly created empty Mat named newMatWithMask, which has been given the same size as that of the first image I started with:
So far so good, but the problem starts now. I created a new Mat and gave it the same size as the first contour (the bigger one), and then set its background color to new Scalar(120, 255, 255). Then I copied rgbaMat4Mask to it using the copyTo function, with newMat4Mask as the mask. But neither is the size of the resulting Mat the same as that of the contour, nor is its background set to new Scalar(120, 255, 255), which is blue.
Instead it is an image with the same size as the entire mask, and it has a black background. Why? What am I doing wrong?
public void doProcessing(View view) {
// READING THE RGBA MAT
Mat rgbaMat4Mask = Highgui.imread("/mnt/sdcard/DCIM/rgbaMat4Mask.bmp");
// CONVERTING TO HSV
Mat hsvMat4Mask = new Mat();
Imgproc.cvtColor(rgbaMat4Mask, hsvMat4Mask, Imgproc.COLOR_BGR2HSV);
Highgui.imwrite("/mnt/sdcard/DCIM/hsvMat4Mask.bmp", hsvMat4Mask);//check
// CREATING A FILTER/MASK FOR RED COLORED BLOB
Mat maskedMat = new Mat();
Core.inRange(hsvMat4Mask, new Scalar(0, 100, 100), new Scalar(10, 255, 255), maskedMat);
Highgui.imwrite("/mnt/sdcard/DCIM/maskedMat.bmp", maskedMat);// check
// COPYING THE MASK TO AN EMPTY MAT
// STEP 1:
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(maskedMat, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
//STEP 2:
Mat newMat4Mask = new Mat(rgbaMat4Mask.rows(), rgbaMat4Mask.cols(), CvType.CV_8UC1);
newMat4Mask.setTo(new Scalar(0));
Imgproc.drawContours(newMat4Mask, contours, 0, new Scalar(255), -1);//TODO Using -1 instead of CV_FILLED.
Highgui.imwrite("/mnt/sdcard/DCIM/newMatWithMask.bmp", newMat4Mask);// check
//STEP 3
Log.i(TAG, "HAPPY rows:"+contours.get(0).rows()+" columns:"+contours.get(0).cols());
Mat newMatwithMaskFinished = new Mat(contours.get(0).rows(), contours.get(0).cols(), CvType.CV_8UC3);
newMatwithMaskFinished.setTo(new Scalar(120, 255, 255));
rgbaMat4Mask.copyTo(newMatwithMaskFinished, newMat4Mask);
Highgui.imwrite("/mnt/sdcard/DCIM/newMatwithMaskFinished.bmp", newMatwithMaskFinished);//check*/
}
Your newMatwithMaskFinished should have the same size as rgbaMat4Mask and newMat4Mask.
Mat newMatwithMaskFinished = new Mat(rgbaMat4Mask.rows(), rgbaMat4Mask.cols(), CvType.CV_8UC3);
If you want to have a Mat of the bigger circle only, with transparent background, then you need to:
1) create newMatwithMaskFinished with type CV_8UC4
Mat newMatwithMaskFinished = new Mat(rgbaMat4Mask.rows(), rgbaMat4Mask.cols(), CvType.CV_8UC4);
2) set a transparent background:
newMatwithMaskFinished.setTo(new Scalar(0, 0, 0, 0));
3) Compute the bounding box of the contour you're interested in, with boundingRect.
4) Convert rgbaMat4Mask to 4 channels (unless it already is), with cvtColor(..., COLOR_BGR2BGRA); let's call this rgba.
5) Copy rgba to newMatwithMaskFinished, with mask newMat4Mask.
6) Crop newMatwithMaskFinished to the bounding box, using the submat method (see the sketch below).
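Putting steps 1-6 together, a rough sketch (variable names follow the question's code):
// 1) + 2) 4-channel Mat with a fully transparent background
Mat newMatwithMaskFinished = new Mat(rgbaMat4Mask.rows(), rgbaMat4Mask.cols(), CvType.CV_8UC4);
newMatwithMaskFinished.setTo(new Scalar(0, 0, 0, 0));
// 3) bounding box of the contour you're interested in
Rect box = Imgproc.boundingRect(contours.get(0));
// 4) give the source an alpha channel
Mat rgba = new Mat();
Imgproc.cvtColor(rgbaMat4Mask, rgba, Imgproc.COLOR_BGR2BGRA);
// 5) copy only the masked pixels
rgba.copyTo(newMatwithMaskFinished, newMat4Mask);
// 6) crop to the bounding box
Mat cropped = newMatwithMaskFinished.submat(box);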
Using OpenCV4Android, how can I get the HSV channels of the first pixel of the masked region in a masked image (dilatedMat in the following snippet)? I know that we'd get the HSV channel values of the first pixel with hsvMat.get(0,0), but I don't know how to apply this to the masked region only, rather than the entire Mat.
For example, following is a function to which a camera frame is passed as an argument, and I have generated a mask, but how should I proceed from there?
NOTE: Please keep in mind that the masked region is Not a rectangle, it has an irregular shape.
private void detectColoredBlob (Mat rgbaFrame) {
Mat hsvImage = new Mat();
Imgproc.cvtColor(rgbaFrame, hsvImage, Imgproc.COLOR_RGB2HSV_FULL);
Mat maskedImage = new Mat();
Scalar lowerThreshold = new Scalar(85, 50, 20);
Scalar upperThreshold = new Scalar(135, 255, 77);
Core.inRange(hsvImage, lowerThreshold, upperThreshold, maskedImage);
Mat dilatedMat= new Mat();
Imgproc.dilate(maskedImage, dilatedMat, new Mat() );
//****************WHAT NOW???**************
}
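One possible way to restrict this to the masked region (a sketch, assuming dilatedMat is the single-channel mask from above): Core.findNonZero lists the coordinates of all non-zero mask pixels, and those coordinates can then be used with hsvImage.get.
MatOfPoint maskPoints = new MatOfPoint();
Core.findNonZero(dilatedMat, maskPoints);                 // coordinates of all masked pixels
Point[] pts = maskPoints.toArray();
if (pts.length > 0) {
    Point first = pts[0];                                 // first masked pixel (row-major order)
    double[] hsv = hsvImage.get((int) first.y, (int) first.x);
    Log.i("detectColoredBlob", "H=" + hsv[0] + " S=" + hsv[1] + " V=" + hsv[2]);
}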
I am developing an application in which I have to detect a rectangular object and draw an outline around it. I am using the OpenCV Android library.
I successfully detect circles and draw the outline on the image, but I repeatedly fail to detect a square or rectangle and draw it. Here is my code for circles:
Bitmap imageBmp = BitmapFactory.decodeResource(MainActivityPDF.this.getResources(),R.drawable.loadingplashscreen);
Mat imgSource = new Mat(), imgCirclesOut = new Mat();
Utils.bitmapToMat(imageBmp , imgSource);
//grey opencv
Imgproc.cvtColor(imgSource, imgSource, Imgproc.COLOR_BGR2GRAY);
Imgproc.GaussianBlur( imgSource, imgSource, new Size(9, 9), 2, 2 );
Imgproc.HoughCircles( imgSource, imgCirclesOut, Imgproc.CV_HOUGH_GRADIENT, 1, imgSource.rows()/8, 200, 100, 0, 0 );
float circle[] = new float[3];
for (int i = 0; i < imgCirclesOut.cols(); i++)
{
imgCirclesOut.get(0, i, circle);
org.opencv.core.Point center = new org.opencv.core.Point();
center.x = circle[0];
center.y = circle[1];
Core.circle(imgSource, center, (int) circle[2], new Scalar(255,0,0,255), 4);
}
Bitmap bmp = Bitmap.createBitmap(imageBmp.getWidth(), imageBmp.getHeight(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(imgSource, bmp);
ImageView frame = (ImageView) findViewById(R.id.imageView1);
//Bitmap bmp = Bitmap.createBitmap(100, 100, Bitmap.Config.ARGB_8888);
frame.setImageBitmap(bmp);
Any help with detecting a square/rectangle on Android? I have been stuck on this for 2 days; every example is in C++ and I can't get through that language.
Thanks.
There are many ways of detecting a rectangle using OpenCV; the most appropriate is to find the contours after applying Canny edge detection.
The steps are as follows:
1. Convert the image to a Mat
2. Grayscale the image
3. Apply Gaussian blur
4. Apply morphology to fill any holes
5. Apply Canny edge detection
6. Find the contours of the image
7. Find the largest contour
8. Draw the largest contour
The code is as follows:
1. Convert the image to a Mat
Utils.bitmapToMat(image,src)
2. Grayscale the image
val gray = Mat(src.rows(), src.cols(), src.type())
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY)
3. Apply Gaussian blur
Imgproc.GaussianBlur(gray, gray, Size(5.0, 5.0), 0.0)
4. Apply morphology to fill any holes, and also dilate the image
val kernel = Imgproc.getStructuringElement(
Imgproc.MORPH_ELLIPSE, Size(
5.0,
5.0
)
)
Imgproc.morphologyEx(
gray,
gray,
Imgproc.MORPH_CLOSE,
kernel
) // fill holes
Imgproc.morphologyEx(
gray,
gray,
Imgproc.MORPH_OPEN,
kernel
) //remove noise
Imgproc.dilate(gray, gray, kernel)
5. Apply Canny edge detection
val edges = Mat(src.rows(), src.cols(), src.type())
Imgproc.Canny(gray, edges, 75.0, 200.0)
6. Find the contours of the image
val contours = ArrayList<MatOfPoint>()
val hierarchy = Mat()
Imgproc.findContours(
edges, contours, hierarchy, Imgproc.RETR_LIST,
Imgproc.CHAIN_APPROX_SIMPLE
)
7. Find the largest contour
public int findLargestContour(ArrayList<MatOfPoint> contours) {
double maxVal = 0;
int maxValIdx = 0;
for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
double contourArea = Imgproc.contourArea(contours.get(contourIdx));
if (maxVal < contourArea) {
maxVal = contourArea;
maxValIdx = contourIdx;
}
}
return maxValIdx;
}
8. Draw the largest contour, which is the rectangle
Imgproc.drawContours(src, contours, idx, Scalar(0.0, 255.0, 0.0), 3)
There you go, you have found the rectangle.
If you run into problems along the way, try resizing the source image to half of its height and width.
Have a look at the links below for complete Java code of the approach explained above:
https://github.com/dhananjay-91/DetectRectangle
Also,
https://github.com/aashari/android-opencv-rectangle-detector
You are on the right track with the Hough transformation. Instead of HoughCircles you have to use HoughLines and check the obtained lines for intersections. If you really have to find rectangles (and not just 4-edged polygons), look for lines with the same angle (± a small offset); once you have found at least one such pair, look for lines perpendicular to them, find a pair there as well, and check for intersections. It should not be a big deal to perform the angle and intersection tests using vectors (endpoint - startpoint) of the lines.
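A rough sketch of that idea using HoughLinesP (segments are easier to test for angles and intersections than the rho/theta form); it assumes imgSource is the blurred grayscale Mat from the question, the rectangle is roughly axis-aligned, and all thresholds are placeholders:
Mat edges = new Mat();
Imgproc.Canny(imgSource, edges, 75, 200);
Mat lines = new Mat();
Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 80, 50, 10);

// bucket the segments into two roughly perpendicular families
List<double[]> horizontal = new ArrayList<double[]>();
List<double[]> vertical = new ArrayList<double[]>();
for (int i = 0; i < lines.rows(); i++) {
    double[] l = lines.get(i, 0);                          // x1, y1, x2, y2
    double angle = Math.abs(Math.toDegrees(Math.atan2(l[3] - l[1], l[2] - l[0]))) % 180;
    if (angle < 10 || angle > 170) horizontal.add(l);
    else if (Math.abs(angle - 90) < 10) vertical.add(l);
}

// intersect every horizontal-ish line with every vertical-ish line
List<Point> corners = new ArrayList<Point>();
for (double[] a : horizontal) {
    for (double[] b : vertical) {
        double d = (a[0] - a[2]) * (b[1] - b[3]) - (a[1] - a[3]) * (b[0] - b[2]);
        if (Math.abs(d) < 1e-6) continue;                  // parallel, no intersection
        double t = ((a[0] - b[0]) * (b[1] - b[3]) - (a[1] - b[1]) * (b[0] - b[2])) / d;
        corners.add(new Point(a[0] + t * (a[2] - a[0]), a[1] + t * (a[3] - a[1])));
    }
}
// corners now holds candidate rectangle corners; keep only intersections that
// lie near both segments to end up with the 4 corners of the rectangle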