Not sure if this is the right way to ask, but please help. I have an image of a dented car. I have to process it and highlight the dents and return the number of dents. I was able to do it reasonably well with the following result:
The MATLAB code is:
% convert to grayscale and show it
img2=rgb2gray(i1);
imshow(img2);
% top-hat filtering with a disk structuring element of radius 15
img3=imtophat(img2,strel('disk',15));
% stretch the contrast
img4=imadjust(img3);
layer=img4(:,:,1);
% keep mid-intensity pixels, fill holes, drop blobs smaller than 5 px
img5=layer>100 & layer<250;
img6=imfill(img5,'holes');
img7=bwareaopen(img6,5);
% label connected components; the second output is the component count
[L,ans]=bwlabeln(img7);
imshow(img7);
The function is called with:
I=imread(i1);
Ians=CarDentIdentification(I);
However, when I try to do this using OpenCV, I get this:
With the following code:
Imgproc.cvtColor(source, middle, Imgproc.COLOR_RGB2GRAY);
Imgproc.equalizeHist(middle, middle);
Imgproc.threshold(middle, middle, 150, 255, Imgproc.THRESH_OTSU);
Please tell me how I can obtain better results in OpenCV, and also how to count the dents. I tried findContours(), but it returns a very large number. I tried other images as well, and I'm still not getting proper results.
Please help.
Basically, from the MATLAB documentation, imtophat does the following: top-hat filtering computes the morphological opening of the image (using imopen) and then subtracts the result from the original image.
You could do this in OpenCV with the following steps:
Step 1: Get the disk structuring element (note that MATLAB's strel('disk', 15) takes a radius, so a 31 x 31 ellipse would be the closer match)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
Step 2: Compute the opening of the image and subtract the result from the original (here img is the grayscale input)
tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)
This gives the following result:
Step 3: Now you could just threshold it manually or use Otsu:
ret, thresh = cv2.threshold(tophat, 17, 255, cv2.THRESH_BINARY)
which gives you the following image:
Since the OP wants the code in Java, here is a probable Java equivalent:
private Mat topHat(Mat image)
{
    // 15x15 elliptical structuring element; omitting the anchor keeps it centred,
    // matching MATLAB's centred disk (an explicit (0, 0) anchor would shift the result)
    Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(15, 15));
    Mat dst = new Mat();
    Imgproc.morphologyEx(image, dst, Imgproc.MORPH_TOPHAT, element);
    return dst;
}
Make sure you do this on a grayscale image (CvType.CV_8UC1); you can then threshold it suitably.
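To answer the counting part too, here is one possible way to threshold the top-hat output and count the dents, mirroring the MATLAB bwareaopen/bwlabeln steps. This is only a sketch, not tested on the OP's images; countDents is a hypothetical helper that reuses topHat from above, and the 5 px area cutoff simply copies bwareaopen(img6, 5):
private int countDents(Mat gray)
{
    // top-hat, then Otsu threshold (stands in for the fixed 100..250 band used in MATLAB)
    Mat bw = new Mat();
    Imgproc.threshold(topHat(gray), bw, 0, 255, Imgproc.THRESH_BINARY | Imgproc.THRESH_OTSU);
    // count external contours, dropping blobs under 5 px like bwareaopen(img6, 5)
    List<MatOfPoint> contours = new ArrayList<>();
    Imgproc.findContours(bw, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    int dents = 0;
    for (MatOfPoint c : contours) {
        if (Imgproc.contourArea(c) >= 5) {
            dents++;
        }
    }
    return dents;
}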
Related
I'm developing an Android app which uses a background Service to programmatically capture a screenshot of whatever is on the screen currently. I obtain the screenshot as a Bitmap.
Next, I successfully imported OpenCV into my Android project.
What I need to do now is blur a subset of this image, i.e. not the entire image itself, but a [rectangular] area or sub-region within the image. I have an array of Rect objects representing the rectangular regions that I need to blur within the screenshot.
I've been looking around for a tutorial on doing this with OpenCV in Java, and I haven't found a clear answer. The Mat and Imgproc classes are obviously the ones of interest, and there's the Mat.submat() method, but I've been unable to find a clear, straightforward tutorial on getting this done.
I've googled a lot, and none of the examples I've found are complete. I need to do this in Java, within the Android runtime.
What I need is: Bitmap >>> Mat >>> Imgproc >>> Rect >>> Bitmap with the ROI blurred.
Any experienced OpenCV devs out here, can you point me in the right direction? This is the only thing I'm stuck at.
Related:
Gaussian blurring with OpenCV: only blurring a subregion of an image?.
How to blur a rectangle with OpenCV.
How to blur some portion of Image in Android?.
The C++ code to achieve this task is shared below with comments and sample images:
// load an input image
Mat img = imread("C:\\elon_tusk.png");
img:
// extract subimage
Rect roi(113, 87, 100, 50);
Mat subimg = img(roi);
subimg:
// blur the subimage
Mat blurred_subimage;
GaussianBlur(subimg, blurred_subimage, Size(0, 0), 5, 5);
blurred_subimage:
// copy the blurred subimage back to the original image
blurred_subimage.copyTo(img(roi));
img:
Android equivalent:
Mat img = Imgcodecs.imread("elon_tusk.png");
Rect roi = new Rect(113, 87, 100, 50);
Mat subimg = img.submat(roi).clone();
Imgproc.GaussianBlur(subimg, subimg, new Size(0,0), 5, 5);
subimg.copyTo(img.submat(roi));
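Since the question starts from a Bitmap and an array of Rects, the full round trip could look roughly like the sketch below. It assumes the regions are already OpenCV Rects lying inside the image bounds (android.graphics.Rect objects would need converting first):
Bitmap blurRegions(Bitmap screenshot, Rect[] regions) {
    // Bitmap -> Mat (RGBA)
    Mat img = new Mat();
    Utils.bitmapToMat(screenshot, img);
    // blur each region in place; submat() returns a view, so the blur writes straight into img
    for (Rect roi : regions) {
        Mat sub = img.submat(roi);
        Imgproc.GaussianBlur(sub, sub, new Size(0, 0), 5);
    }
    // Mat -> Bitmap
    Bitmap out = Bitmap.createBitmap(img.cols(), img.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(img, out);
    return out;
}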
You could just implement your own helper function, let's call it roi (region of interest).
Since images in OpenCV's Python bindings are numpy ndarrays, you can do something like this:
def roi(image: np.ndarray, region: QRect) -> np.ndarray:
    # numpy images are indexed [row, col], i.e. [y, x]
    y1 = region.topLeft().y()
    y2 = region.bottomRight().y()
    x1 = region.topLeft().x()
    x2 = region.bottomRight().x()
    return image[y1:y2, x1:x2]
Then just use this helper function to extract the subregions of the image you are interested in, blur them, and put the result back into the original picture.
I am kind of stuck with this problem. I know there are many questions about it on Stack Overflow, but in my case nothing gives the expected result.
The Context:
I'm using Android OpenCV along with Tesseract so I can read the MRZ area of a passport. When the camera is started, I pass the input frame to an AsyncTask, the frame is processed, and the MRZ area is extracted successfully. I pass the extracted MRZ area to a function prepareForOCR(inputImage) that takes the MRZ area as a gray Mat and outputs a bitmap with the thresholded image that I then pass to Tesseract.
The problem:
The problem is in thresholding the image. I use adaptive thresholding with blockSize = 13 and C = 15, but the result is not always the same; it depends on the lighting and the general conditions under which the frame is captured.
What I have tried:
First, I resize the image to a fixed size (871, 108) so the input is always the same and not dependent on which phone is used.
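That resize is a single OpenCV call; a minimal sketch of the step, assuming it is applied to the grayscale inputFrame from the code below:
// normalize the extracted MRZ crop to a fixed 871x108 size
Imgproc.resize(inputFrame, inputFrame, new Size(871, 108));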
After resizing, I try different blockSize and C values:
// toOcr contains the extracted MRZ area
Bitmap toOCRBitmap = Bitmap.createBitmap(bitmap);
Mat inputFrame = new Mat();
Mat toOcr = new Mat();
Utils.bitmapToMat(toOCRBitmap, inputFrame);
Imgproc.cvtColor(inputFrame, inputFrame, Imgproc.COLOR_BGR2GRAY);
TesseractResult lastResult = null;
// brute-force blockSize/C combinations (primes, so blockSize stays odd) until the MRZ parses
for (int B = 11; B < 70; B++) {
    for (int C = 11; C < 70; C++) {
        if (IsPrime(B) && IsPrime(C)) {
            Imgproc.adaptiveThreshold(inputFrame, toOcr, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, B, C);
            Bitmap toOcrBitmap = OpenCVHelper.getBitmap(toOcr);
            TesseractResult result = TesseractInstance.extractFrame(toOcrBitmap, "ocrba");
            if (result.getMeanConfidence() > 70) {
                if (MrzParser.tryParse(result.getText())) {
                    Log.d("Main2Activity", "Best result with " + B + " : " + C);
                    return result;
                }
            }
        }
    }
}
Using the code above, the thresholded result is a black-on-white image which gives a confidence greater than 70. I can't really post the whole image for privacy reasons, but here's a clipped one and one from a dummy passport.
Using the MrzParser.tryParse function, which adds checks for character position and validity within the MRZ, I'm able to correct some occurrences (like a name containing an 8 instead of a B) and get a good result, but it takes a long time. That is to be expected, because I'm thresholding almost 255 images in the loop, plus a Tesseract call for each.
I already tried getting a list of the C and B values that occur most often, but the results vary from frame to frame.
The question:
Is there a way to choose C and blockSize values that always give the same result, perhaps by adding more OpenCV calls to normalize the input image (increasing contrast and so on)? I have searched the web for two weeks now and can't find a viable solution; this is the only approach that gives accurate results.
You can use a clustering algorithm to cluster the pixels based on color. The characters are dark and there is good contrast in the MRZ region, so a clustering method will most probably give you a good segmentation if you apply it to the MRZ region.
Here I demonstrate it with MRZ regions obtained from sample images that can be found on the internet.
I use color images, apply some smoothing, convert to the Lab color space, then cluster the a, b channel data using k-means with k = 2. The code is in Python, but you can easily adapt it to Java. Due to the randomized nature of the k-means algorithm, the segmented characters will have label 0 or 1. You can easily sort this out by inspecting the cluster centers: the center corresponding to the characters should have a dark value in the color space you are using.
I just used the Lab color space here. You can use RGB, HSV or even GRAY and see which one is better for you.
After segmenting like this, I think you can even find good values for B and C for your adaptive threshold using the stroke-width properties of the characters (if you think the adaptive threshold gives better-quality output).
import cv2
import numpy as np

im = cv2.imread('mrz1.png')
# smooth, then convert to Lab
lab = cv2.cvtColor(cv2.GaussianBlur(im, (3, 3), 1), cv2.COLOR_BGR2Lab)
# cluster only the a and b (chroma) channels
im32f = np.array(lab[:, :, 1:3], dtype=np.float32)
k = 2  # 2 clusters
term_crit = (cv2.TERM_CRITERIA_EPS, 30, 0.1)
ret, labels, centers = cv2.kmeans(im32f.reshape([im.shape[0]*im.shape[1], -1]),
                                  k, None, term_crit, 10, cv2.KMEANS_RANDOM_CENTERS)
# segmented image: labels are 0/1 per pixel, scaled to 0/255 for display
labels = labels.reshape([im.shape[0], im.shape[1]]).astype(np.uint8) * 255
Some results:
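Since the OP works in Java, a rough adaptation of the same steps with the OpenCV Java bindings might look like the sketch below (untested; mrz is assumed to be the extracted MRZ region as a BGR Mat). Which of the two labels corresponds to the characters can again be decided by inspecting centers:
// smooth, convert to Lab, and k-means cluster the a/b channels with k = 2
Imgproc.GaussianBlur(mrz, mrz, new Size(3, 3), 1);
Mat lab = new Mat();
Imgproc.cvtColor(mrz, lab, Imgproc.COLOR_BGR2Lab);
// keep only the a and b (chroma) channels
List<Mat> channels = new ArrayList<>();
Core.split(lab, channels);
Mat ab = new Mat();
Core.merge(channels.subList(1, 3), ab);
// one 2-channel float row per pixel, the layout cv2.kmeans expects
Mat samples = ab.reshape(2, lab.rows() * lab.cols());
samples.convertTo(samples, CvType.CV_32F);
Mat labels = new Mat();
Mat centers = new Mat();
TermCriteria crit = new TermCriteria(TermCriteria.EPS, 30, 0.1);
Core.kmeans(samples, 2, labels, crit, 10, Core.KMEANS_RANDOM_CENTERS, centers);
// labels holds 0/1 per pixel; reshape back to image size and scale for display
Mat seg = labels.reshape(1, lab.rows());
seg.convertTo(seg, CvType.CV_8U, 255);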
I started by reading in this Mat.
Then I converted it to grayscale and applied Imgproc.Canny() to it, getting the following mask.
Then I used Imgproc.findContours() to find the contours, and Imgproc.drawContours() and Core.putText() to label the contours with numbers:
Then I did:
Rect boundingRect = Imgproc.boundingRect(contours.get(0));
Mat submatrix = originalMat.submat(boundingRect);
to get the following submatrix:
So far so good. The problem starts hereafter:
NOW I NEEDED A MASK OF THE submatrix. So I decided to use Imgproc.drawContours() to get the mask:
Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);
List<MatOfPoint> contourList = new ArrayList<>();
contourList.add(contours.get(0));
Imgproc.drawContours(mask, contourList, 0, new Scalar(255), -1);
I got the following mask:
WHAT I WAS EXPECTING was a filled (in white color) diamond shape on black background.
Why am I getting this unexpected result?
EDIT:
When I replaced Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1); with Mat mask = Mat.zeros(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);, the mask with the white garbage was replaced by an empty black mask without any white on it. I got the following submat and mask:
I was getting the first contour in the list of contours (named contours) by contours.get(0), and using this first contour both to calculate Imgproc.boundingRect() and in contourList.add(contours.get(0)); later (where contourList is the list of just one contour, used in the final drawContours()).
Then I changed contours.get(0) to contours.get(1) in Imgproc.boundingRect() as well as in contourList.add(); (just before Imgproc.drawContours()). That resulted in this submat and mask:
Then I changed back to contours.get(0) in Imgproc.boundingRect(); and left contourList.add(contours.get(1)); as it was. Got the following submat and mask:
NOW I am completely unable to understand what is happening here.
I am not sure how this is handled in Java (I usually use OpenCV in C++ or Python), but there is an error in your code.
The contours variable holds a list of lists of points, and those points refer to the original image. This means that if a figure sits at, let's say, x=300, y=300, width=100, height=100, then when you take your submatrix and try to draw those points into the smaller image, drawing point (300, 300) into a 100 x 100 image simply fails: it probably throws an error or just doesn't draw anything.
A solution is to loop over the contour and subtract the top-left point of the bounding rect from each contour point ((300, 300) in my example).
As for why some garbage is drawn: you never initialize the matrix. I am not sure about Java, but in C++ you have to set it to 0.
I think it should be something like this:
Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1, new Scalar(0));
I hope this helps :)
EDIT
I think I did not explain myself clearly before.
Your contours are arrays of (x, y) points: the coordinates of the points that represent each contour in the original image. That image has one size and your submatrix has a smaller one, so the points fall outside the small image's boundaries.
you should do something like this to fix it:
// shift the first contour so its points are relative to the bounding rect
Point[] pts = contours.get(0).toArray();
for (int j = 0; j < pts.length; j++) {
    pts[j].x -= boundingRect.x;
    pts[j].y -= boundingRect.y;
}
contours.get(0).fromArray(pts);
and then you can draw the contours, since they will be within the boundaries of the submat.
In C++ you can subtract the OpenCV points directly:
for (int j = 0; j < contours[0].size(); j++) {
    contours[0][j] -= boundingrect.tl();
}
but Java has no operator overloading, so there you subtract boundingRect.x and boundingRect.y explicitly, as above. boundingRect.tl() gives you the top-left point of the rect.
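A related note: instead of shifting the points yourself, drawContours has an overload with an offset parameter that shifts every contour point before drawing. A minimal sketch against the question's variable names (untested):
// zero-initialized mask the size of the submatrix
Mat mask = Mat.zeros(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);
List<MatOfPoint> contourList = new ArrayList<>();
contourList.add(contours.get(0));
// thickness -1 = filled; the offset moves the contour into the submat's coordinate frame
Imgproc.drawContours(mask, contourList, 0, new Scalar(255), -1, 8,
        new Mat(), 0, new Point(-boundingRect.x, -boundingRect.y));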
I am searching for how to extract a digital number from an image in Android. I have to take a picture and then get the numbers from the image. OpenCV is an option. Can we use OpenCV on Android? Kindly suggest a proper way; I will be grateful.
There are many OCR libraries for Android. Check these links:
https://github.com/rmtheis/android-ocr
https://github.com/GautamGupta/Simple-Android-OCR
http://www.abbyy.com/mobileocr/android/
best OCR (Optical character recognition) example in android
OpenCV supports the Android platform. You have to set up OpenCV4Android; step-by-step instructions are here:
http://docs.opencv.org/doc/tutorials/introduction/android_binary_package/O4A_SDK.html
However, OpenCV alone is only a step, not the whole solution. You then have to use a character recognition engine; the most popular is Tesseract-OCR. But it is really not an easy task.
Also, such engines generally recognize all characters; once recognition works, extracting just the digits is the easiest part in Java.
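For that last step, plain Java is enough once the engine has returned its text (recognizedText is a placeholder for the OCR output):
// keep only the digit characters from the recognized text
String digits = recognizedText.replaceAll("\\D+", "");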
This works for me; you just need to specify the number size:
ArrayList<Mat> output = new ArrayList<>();
cvtColor(input, input, COLOR_BGRA2GRAY);
Mat img_threshold = new Mat();
threshold(input, img_threshold, 60, 255, THRESH_BINARY_INV);
// copy() and sort() are the poster's own helpers (clone the Mat / order the contours)
Mat img_contours = copy(img_threshold);
// find contours of possible characters
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
findContours(img_contours, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE); // all pixels of each contour
contours = sort(contours);
// draw blue contours on a white image
Mat result = copy(img_threshold);
cvtColor(result, result, COLOR_GRAY2BGR);
drawContours(result, contours,
        -1,                    // draw all contours
        new Scalar(0, 0, 255), // in blue
        1);                    // with a thickness of 1
// iterate over each contour found
ListIterator<MatOfPoint> itc = contours.listIterator();
// remove patches that fall outside the aspect-ratio and area limits
while (itc.hasNext())
{
    // create the bounding rect of the object
    MatOfPoint mp = new MatOfPoint(itc.next().toArray());
    Rect mr = boundingRect(mp);
    rectangle(result, new Point(mr.x, mr.y), new Point(mr.x + mr.width, mr.y + mr.height), new Scalar(0, 255, 0));
    Mat auxRoi = new Mat(img_threshold, mr);
    if (OCR_verifySizes(auxRoi))
    {
        output.add(preprocessChar(auxRoi));
    }
}
return output;
I stumbled upon a weird problem with OpenCV drawContours on android.
Sometimes (without apparent pattern) drawContours produces this:
(screenshot: http://img17.imageshack.us/img17/9031/screenshotgps.png)
while it should obviously produce just the white part.
To put it in context:
I detect edges using the Canny algorithm and then find contours with
Imgproc.findContours(dil, contours, dummy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
Then I select several contours that fit some requirements and add them to a list:
List<MatOfPoint> goodContours = new ArrayList<MatOfPoint>();
After that I randomly select one contour, draw it (filled with white) on a Mat, and convert it to an Android Bitmap:
Mat oneContour = new Mat(orig.rows(), orig.cols(), CvType.CV_8UC1);
int index = (int) (Math.random() * goodContours.size());
Imgproc.drawContours(oneContour, goodContours, index, new Scalar(255, 255, 255), -1);
Bitmap oneContourBitmap = Bitmap.createBitmap(oneContour.cols(), oneContour.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(oneContour, oneContourBitmap);
Most of the time I get what I expect: a white patch on a pure black background. But sometimes I get the above, and I'm totally at a loss. I suspect a memory leak, but I try hard to release all Mats immediately after they are no longer needed (I also tried releasing them at the end of the function where it all happens, without effect), and I'm unable to pinpoint the source of the problem.
Has anyone had similar issues?
I first discovered this on OpenCV 2.4.0 but it stays the same on 2.4.3.
Any suggestion is appreciated.
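Given the earlier submat answer on this page, one likely culprit: new Mat(orig.rows(), orig.cols(), CvType.CV_8UC1) allocates the pixel buffer without clearing it, so whatever was previously in that memory can show through. A zero-initialized canvas would rule that out (a sketch of the one-line change):
// allocate the canvas already cleared to black
Mat oneContour = Mat.zeros(orig.rows(), orig.cols(), CvType.CV_8UC1);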