Why am I getting wrong values with colour detection (OpenCV) - Android

I have a problem with conversion from BGR to HSV.
I'm programming with Android Studio and testing with my Xperia Z5.
In my code snippet, I'm getting totally wrong colour values:
Scalar LOWER_RED = new Scalar(0, 0, 0);
Scalar HIGHER_RED = new Scalar(30, 255, 255);
Mat src = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC4);
Mat hsv = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC4);
Utils.bitmapToMat(bitmap, src);
Imgproc.cvtColor(src, hsv, Imgproc.COLOR_BGR2HSV);
Core.inRange(hsv, LOWER_RED, HIGHER_RED, hsv);
Utils.matToBitmap(hsv, bitmap);
I want to capture red colour. What did I do wrong?
Edit:
I tried all the advice, and my code snippet now looks like this:
Scalar LOWER_RED = new Scalar(0, 10, 100);
Scalar HIGHER_RED = new Scalar(10, 255, 255);
Mat src = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC3);
Mat hsv = new Mat(bitmap.getHeight(), bitmap.getWidth(), CvType.CV_8UC3);
Utils.bitmapToMat(bitmap, src);
Imgproc.cvtColor(src, hsv, Imgproc.COLOR_BGR2HSV);
Core.inRange(hsv, LOWER_RED, HIGHER_RED, hsv);
Utils.matToBitmap(hsv, bitmap);
The outcome is a black screen (no matches). With
Core.inRange(hsv, new Scalar(0, 0, 0), new Scalar(10, 255, 255), HighRedRange);
Core.inRange(hsv, new Scalar(160, 100, 100), new Scalar(179, 255, 255), LowRedRange);
Core.addWeighted(LowRedRange, 1.0, HighRedRange, 1.0, 0.0, hsv);
(ranges 0,0,0 - 10,255,255 and 160,100,100 - 179,255,255), the vegetables are black and the white background stays white in the HSV result.
If I use a range from 110,100,100 to 135,255,255, then the red pepper is white and the background black (correctly detected).
Source Picture:
And I don't understand any of this...

There is a good tutorial here. It's for C++, but the general idea is the same; I tried it and it works. The problem is that your range is too broad. In OpenCV the Hue channel runs from 0 to 180 (each unit is 2 degrees), so your upper limit of 30 corresponds to 60 degrees, which already includes nearly the whole yellow range.
I set the Hue range from 0 to 10, but remember that you may also want the 160-179 range, which covers the other part of red. For that, you just need a second mask and then combine the two with simple addition.
The example code in Python:
import cv2
import numpy as np

img = cv2.imread('peppers.jpg', 1)
im_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

thresh_low = np.array([0, 100, 100])
thresh_high = np.array([10, 255, 255])
mask = cv2.inRange(im_hsv, thresh_low, thresh_high)

im_masked = cv2.bitwise_and(img, img, mask=mask)
cv2.imshow('Masked', im_masked)
cv2.waitKey(0)
Original image:
Result:
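Since the question is about Android, here is a rough Java sketch of the same idea with both red ranges combined. Note that Utils.bitmapToMat produces an RGBA Mat, so the conversion here goes through RGB rather than BGR; the variable bitmap and the threshold values are illustrative, not taken from the question.
// Sketch only: assumes `bitmap` is the input Bitmap and OpenCV has been initialized.
// Uses org.opencv.core.*, org.opencv.imgproc.Imgproc and org.opencv.android.Utils.
Mat rgba = new Mat();
Utils.bitmapToMat(bitmap, rgba);          // RGBA, not BGR

Mat rgb = new Mat();
Imgproc.cvtColor(rgba, rgb, Imgproc.COLOR_RGBA2RGB);

Mat hsv = new Mat();
Imgproc.cvtColor(rgb, hsv, Imgproc.COLOR_RGB2HSV);

// Red wraps around the Hue axis, so build two masks and combine them.
Mat maskLow = new Mat();
Mat maskHigh = new Mat();
Core.inRange(hsv, new Scalar(0, 100, 100), new Scalar(10, 255, 255), maskLow);
Core.inRange(hsv, new Scalar(160, 100, 100), new Scalar(179, 255, 255), maskHigh);

Mat redMask = new Mat();
Core.bitwise_or(maskLow, maskHigh, redMask);

// redMask is a single-channel 8-bit image: white where red was detected.
Utils.matToBitmap(redMask, bitmap);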

I now know my problem. It's this line:
Imgproc.cvtColor(src, hsv, Imgproc.COLOR_BGR2HSV);
With COLOR_RGB2HSV all values are correct.
I thought BGR was used on Android smartphones? (As it turns out, Utils.bitmapToMat gives an RGBA Mat, so the channel order is RGB, not BGR.)
Anyway, big thanks for all the answers.
I wish all of you a great day :)

Related

Improving threshold result for Tesseract

I am kind of stuck with this problem, and I know there are many questions about it on Stack Overflow, but in my case nothing gives the expected result.
The Context:
I am using OpenCV for Android along with Tesseract to read the MRZ area of a passport. When the camera is started, I pass the input frame to an AsyncTask; the frame is processed and the MRZ area is extracted successfully. I then pass the extracted MRZ area, as a gray Mat, to a function prepareForOCR(inputImage) that outputs a bitmap with the thresholded image, which I pass to Tesseract.
The problem:
The problem arises while thresholding the image. I use adaptive thresholding with blockSize = 13 and C = 15, but the result is not always the same; it depends on the lighting of the image and the general conditions under which the frame is taken.
What I have tried:
First, I resize the image to a specific size (871, 108) so the input image is always the same and not dependent on which phone is used.
After resizing, I try different blockSize and C values:
// toOcr contains the extracted MRZ area
Bitmap toOCRBitmap = Bitmap.createBitmap(bitmap);
Mat inputFrame = new Mat();
Mat toOcr = new Mat();
Utils.bitmapToMat(toOCRBitmap, inputFrame);
Imgproc.cvtColor(inputFrame, inputFrame, Imgproc.COLOR_BGR2GRAY);

TesseractResult lastResult = null;
for (int B = 11; B < 70; B++) {
    for (int C = 11; C < 70; C++) {
        if (IsPrime(B) && IsPrime(C)) {
            Imgproc.adaptiveThreshold(inputFrame, toOcr, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, B, C);
            Bitmap toOcrBitmap = OpenCVHelper.getBitmap(toOcr);
            TesseractResult result = TesseractInstance.extractFrame(toOcrBitmap, "ocrba");
            if (result.getMeanConfidence() > 70) {
                if (MrzParser.tryParse(result.getText())) {
                    Log.d("Main2Activity", "Best result with " + B + " : " + C);
                    return result;
                }
            }
        }
    }
}
Using the code above, the thresholded result is a black-on-white image that gives a confidence greater than 70. I can't really post the whole image for privacy reasons, but here's a clipped one and one from a dummy passport.
Using the MrzParser.tryParse function, which checks each character's position and validity within the MRZ, I am able to correct some occurrences (like a name containing an 8 instead of a B) and get a good result, but it takes a lot of time. That is expected, because I am thresholding almost 255 images inside the loop, on top of the Tesseract calls.
I already tried collecting the C and B values that occur most often, but the results differ from image to image.
The question:
Is there a way to choose C and blockSize values that always give the same result, perhaps by adding more OpenCV calls on the input image (increasing contrast and so on)? I have searched the web for two weeks now and can't find a viable solution; the loop above is the only approach that gives accurate results.
You can use a clustering algorithm to cluster the pixels based on color. The characters are dark and there is a good contrast in the MRZ region, so a clustering method will most probably give you a good segmentation if you apply it to the MRZ region.
Here I demonstrate it with MRZ regions obtained from sample images that can be found on the internet.
I use color images, apply some smoothing, convert to Lab color space, then cluster the a, b channel data using kmeans (k=2). The code is in python but you can easily adapt it to java. Due to the randomized nature of the kmeans algorithm, the segmented characters will have label 0 or 1. You can easily sort it out by inspecting cluster centers. The cluster-center corresponding to characters should have a dark value in the color space you are using.
I just used the Lab color space here. You can use RGB, HSV or even GRAY and see which one is better for you.
After segmenting like this, I think you can even find good values for B and C of your adaptive-threshold using the properties of the stroke width of the characters (if you think the adaptive-threshold gives a better quality output).
import cv2
import numpy as np

im = cv2.imread('mrz1.png')

# smooth and convert to Lab
lab = cv2.cvtColor(cv2.GaussianBlur(im, (3, 3), 1), cv2.COLOR_BGR2Lab)

# cluster only the a and b channels
im32f = np.array(lab[:, :, 1:3], dtype=np.float32)

k = 2  # 2 clusters: characters vs. background
term_crit = (cv2.TERM_CRITERIA_EPS, 30, 0.1)
ret, labels, centers = cv2.kmeans(im32f.reshape([im.shape[0]*im.shape[1], -1]),
                                  k, None, term_crit, 10, 0)

# segmented image
labels = labels.reshape([im.shape[0], im.shape[1]]) * 255
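For an Android port, roughly the same thing can be done with the Java bindings. A minimal sketch (assuming a BGR input Mat; the method name segmentMrz and the file-free, Mat-in/Mat-out shape are my own, not part of the answer above):
import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

// Sketch only: segments an MRZ crop into 2 clusters using the a/b channels of Lab.
static Mat segmentMrz(Mat im) {
    Mat blurred = new Mat();
    Imgproc.GaussianBlur(im, blurred, new Size(3, 3), 1);

    Mat lab = new Mat();
    Imgproc.cvtColor(blurred, lab, Imgproc.COLOR_BGR2Lab);

    // Keep only the a and b channels, one pixel per row, as 32-bit floats.
    List<Mat> channels = new ArrayList<>();
    Core.split(lab, channels);
    Mat ab = new Mat();
    Core.merge(channels.subList(1, 3), ab);
    Mat samples = new Mat();
    ab.reshape(1, im.rows() * im.cols()).convertTo(samples, CvType.CV_32F);

    // k-means with k = 2 (characters vs. background).
    Mat labels = new Mat();
    Mat centers = new Mat();
    TermCriteria crit = new TermCriteria(TermCriteria.EPS + TermCriteria.MAX_ITER, 30, 0.1);
    Core.kmeans(samples, 2, labels, crit, 10, Core.KMEANS_RANDOM_CENTERS, centers);

    // Reshape the labels back to image size and scale 0/1 to 0/255 for display.
    Mat seg = new Mat();
    labels.reshape(1, im.rows()).convertTo(seg, CvType.CV_8U, 255);
    return seg;
}
As in the Python version, which cluster corresponds to the characters depends on the random initialization, so inspect the centers Mat to decide whether to invert the result.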
Some results:

Thresholding in Android using OpenCV

Not sure if this is the right way to ask, but please help. I have an image of a dented car. I have to process it, highlight the dents, and return the number of dents. I was able to do it reasonably well, with the following result:
The MATLAB code is:
img2=rgb2gray(i1);
imshow(img2);
img3=imtophat(img2,strel('disk',15));
img4=imadjust(img3);
layer=img4(:,:,1);
img5=layer>100 & layer<250;
img6=imfill(img5,'holes');
img7=bwareaopen(img6,5);
[L,ans]=bwlabeln(img7);
imshow(img7);
I=imread(i1);
Ians=CarDentIdentification(I);
However, when I try to do this using opencv, I get this:
With the following code:
Imgproc.cvtColor(source, middle, Imgproc.COLOR_RGB2GRAY);
Imgproc.equalizeHist(middle, middle);
Imgproc.threshold(middle, middle, 150, 255, Imgproc.THRESH_OTSU);
Please tell me how I can obtain better results in OpenCV, and also how to count the dents. I tried findContours(), but it gives a very large number. I tried other images as well, but I'm not getting proper results.
Please help.
Basically, from the MATLAB documentation, imtophat performs top-hat filtering: it computes the morphological opening of the image (using imopen) and then subtracts the result from the original image.
You could do this in OpenCV with the following steps:
Step 1: Get the disk structuring element
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
Step 2: Compute the opening of the image and then subtract the result from the original image
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)  # gray is the grayscale input image
This gives the following result -
Step 3 - Now you could just manually threshold it or use Otsu -
ret, thresh = cv2.threshold(tophat, 17, 255, 0)
which gives you the following image -
Since the OP wants the code in Java, here is the probable code in Java:
private Mat topHat(Mat image)
{
    Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(15, 15));
    Mat dst = new Mat();
    Imgproc.morphologyEx(image, dst, Imgproc.MORPH_TOPHAT, element);
    return dst;
}
Make sure you do this on a grayscale image (CvType.CV_8UC1), and then you can threshold it suitably.
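To also get the dent count asked about in the question, one rough option (a sketch, not tested on the poster's images; the threshold value and the grayImage name are illustrative) is to threshold the top-hat output and count the external contours, ignoring tiny specks much like bwareaopen in the MATLAB version:
// Uses org.opencv.core.* and org.opencv.imgproc.Imgproc.
Mat tophat = topHat(grayImage);   // grayImage: the CV_8UC1 input

Mat thresh = new Mat();
Imgproc.threshold(tophat, thresh, 17, 255, Imgproc.THRESH_BINARY);

// Each sufficiently large external contour is treated as one candidate dent.
List<MatOfPoint> contours = new ArrayList<>();
Mat hierarchy = new Mat();
Imgproc.findContours(thresh, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

int dentCount = 0;
for (MatOfPoint c : contours) {
    if (Imgproc.contourArea(c) > 5) {
        dentCount++;
    }
}
Log.d("Dents", "Number of dents: " + dentCount);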

How to remove black borders around a license plate using OpenCV for an Android app

I want to remove the black borders around a license plate. I am using OpenCV + Android.
Please reply with code I can use to remove the borders.
I have also attached the image: image 1
You can perform Difference of Gaussians (DoG) to detect the high-frequency details in your image. By high frequency I mean distinct edges and corners.
Here is the code as requested. The explanations are placed as comments by the side:
import cv2
img = cv2.imread('number_plate.jpg') #---Reading the image---
img1 = img.copy() #----The final contour will be drawn on the copy of the original image---
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) #---converting to gray scale---
Before performing DoG, I enhanced the grayscale image by applying adaptive histogram equalization:
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8,8))
enhanced = clahe.apply(gray_img)
cv2.imshow('enhanced_gray_img', enhanced)
Now I performed Gaussian blur using two separate kernels and subtracted the resulting images as follows:
blur1 = cv2.GaussianBlur(enhanced, (15, 15), 0)
blur2 = cv2.GaussianBlur(enhanced, (25, 25), 0)
difference = blur2 - blur1   #---note: plain uint8 subtraction wraps around rather than saturating---
cv2.imshow('Difference_of_Gaussians', difference)
Then I performed a binary threshold on the image above and found contours. I drew the contour with the largest area:
ret, th = cv2.threshold(difference, 127, 255, 0)   #---performed binary threshold---
_, contours, hierarchy = cv2.findContours(th, cv2.RETR_EXTERNAL, 1)   #---Find contours---
cnts = contours
max = 0   #----Variable to keep track of the largest area----
c = 0     #----Variable to store the contour having the largest area---
for i in range(len(contours)):
    if cv2.contourArea(cnts[i]) > max:
        max = cv2.contourArea(cnts[i])
        c = i

rep = cv2.drawContours(img1, contours[c], -1, (0, 255, 0), 3)   #----Draw the contour having the largest area----
cv2.imshow('Final_Image.jpg', rep)
And voila! There you go.
Now you can obtain bounding rectangles for the contours you found and feed those coordinates as regions to the OCR to extract the text present.
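Since the question asks for Android code, here is a hedged Java sketch of the same pipeline (CLAHE, two Gaussian blurs, subtraction, threshold, largest contour, bounding rect). Core.subtract saturates at 0 instead of wrapping around like the NumPy subtraction above, so the intermediate image will look somewhat different; the variable src and the parameter values are illustrative:
// Assumes `src` is a BGR Mat of the plate image.
Mat gray = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);

// Adaptive histogram equalization (CLAHE).
CLAHE clahe = Imgproc.createCLAHE(3.0, new Size(8, 8));
Mat enhanced = new Mat();
clahe.apply(gray, enhanced);

// Difference of Gaussians.
Mat blur1 = new Mat();
Mat blur2 = new Mat();
Imgproc.GaussianBlur(enhanced, blur1, new Size(15, 15), 0);
Imgproc.GaussianBlur(enhanced, blur2, new Size(25, 25), 0);
Mat difference = new Mat();
Core.subtract(blur2, blur1, difference);   // saturating subtraction

// Threshold and keep the largest contour.
Mat th = new Mat();
Imgproc.threshold(difference, th, 127, 255, Imgproc.THRESH_BINARY);
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(th, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);

double maxArea = 0;
int maxIdx = -1;
for (int i = 0; i < contours.size(); i++) {
    double area = Imgproc.contourArea(contours.get(i));
    if (area > maxArea) {
        maxArea = area;
        maxIdx = i;
    }
}

if (maxIdx >= 0) {
    // Bounding rectangle of the largest contour: the region to feed to the OCR.
    Rect plateRegion = Imgproc.boundingRect(contours.get(maxIdx));
    Imgproc.rectangle(src, plateRegion.tl(), plateRegion.br(), new Scalar(0, 255, 0), 3);
}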

Detect black ink blob on paper - OpenCV Android

I'm new to OpenCV; I've been working through the samples provided for Android.
My goal is to detect color blobs, so I started with the color-blob-detection sample.
I'm converting the color image to grayscale and then thresholding it using a binary threshold.
The background is white and the blobs are black. I want to detect those black blobs. Also, I would like to draw their contours in color, but I'm not able to do that because the image is black and white.
I've managed to accomplish this in grayscale, but I don't like how the contours are drawn; it's as if the color tolerance is too high and the contour is bigger than the actual blob (maybe the blobs are too small?). I guess this 'tolerance' has something to do with setHsvColor, but I don't quite understand that method.
Thanks in advance! Best Regards
UPDATE MORE INFO
The image I want to track is of ink splits. Imagine a white piece of paper with black ink splits. Right now I'm doing it in real-time (camera view). The actual app would take a picture and analyse that picture.
As I said above, I took the color-blob-detection sample (Android) from the OpenCV GitHub repo, and I added this code in the onCameraFrame method (in order to convert the frame to black and white in real time). The conversion is done so that it doesn't matter whether the ink is black, blue, or red:
mRgba = inputFrame.rgba();
/**************************************************************************/
/** BLACK AND WHITE **/
// Convert to Grey
Imgproc.cvtColor(inputFrame.gray(), mRgba, Imgproc.COLOR_GRAY2RGBA, 4);
Mat blackAndWhiteMat = new Mat ( H, W, CvType.CV_8U, new Scalar(1));
double umbral = 100.0;
Imgproc.threshold(mRgba, blackAndWhiteMat , umbral, 255, Imgproc.THRESH_BINARY);
// convert back to bitmap for displaying
Bitmap resultBitmap = Bitmap.createBitmap(mRgba.cols(), mRgba.rows(), Bitmap.Config.ARGB_8888);
blackAndWhiteMat.convertTo(blackAndWhiteMat, CvType.CV_8UC1);
Utils.matToBitmap(blackAndWhiteMat, resultBitmap);
/**************************************************************************/
This may not be the best way but it works.
Now I want to detect the black blobs (ink splits). I guess they are being detected, because Logcat (the sample app's log) shows the number of contours detected, but I'm not able to see them: the image is black and white, and I want the contours drawn in red, for example.
Here's an example image:
And here is what I get using RGB (color-blob-detection as is, not the black and white image). Notice how the small blobs are not detected. (Is it possible to detect them, or are they too small?)
Thanks for your help! If you need more info I would gladly update this question
UPDATE: GitHub repo of color-blob-detection sample (second image)
GitHub Repo of openCV sample for Android
The solution is based on a combination of adaptive image thresholding and the connected-component algorithm.
Assumption - The paper is the most lit area of the image whereas the ink spots on the paper are darkest regions.
from random import Random
import numpy as np
import cv2

def random_color(random):
    """
    Return a random color
    """
    icolor = random.randint(0, 0xFFFFFF)
    return [icolor & 0xff, (icolor >> 8) & 0xff, (icolor >> 16) & 0xff]

# Read as grayscale
img = cv2.imread('1-input.jpg', 0)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)

# Median blur to remove noisy regions; comment out to see its effect.
img = cv2.medianBlur(img, 5)

# Find average intensity to distinguish the paper region
avgPixelIntensity = cv2.mean(img)
print("Average intensity of image:", avgPixelIntensity[0])

# Generate mask to distinguish the paper region
# 0.8 - used to ignore ill-illuminated regions of the paper
mask = cv2.inRange(img, avgPixelIntensity[0]*0.8, 255)
mask = 255 - mask
cv2.imwrite('2-maskedImg.jpg', mask)

# Approach 1
# You need to choose 4 or 8 for the connectivity type (border pixels)
connectivity = 8
# Perform the operation (labels must be CV_32S or CV_16U)
output = cv2.connectedComponentsWithStats(mask, connectivity, cv2.CV_32S)
# The first cell is the number of labels
num_labels = output[0]
# The second cell is the label matrix
labels = output[1]
# The third cell is the stat matrix
stats = output[2]
# The fourth cell is the centroid matrix
centroids = output[3]
cv2.imwrite("3-connectedcomponent.jpg", labels.astype(np.uint8))  # cast so the label image can be written
print("Number of labels", num_labels, labels)

# create the random number generator
random = Random()
for i in range(1, num_labels):
    print(stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP],
          stats[i, cv2.CC_STAT_WIDTH], stats[i, cv2.CC_STAT_HEIGHT])
    cv2.rectangle(cimg,
                  (stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]),
                  (stats[i, cv2.CC_STAT_LEFT] + stats[i, cv2.CC_STAT_WIDTH],
                   stats[i, cv2.CC_STAT_TOP] + stats[i, cv2.CC_STAT_HEIGHT]),
                  random_color(random), 2)
cv2.imwrite("4-OutputImage.jpg", cimg)
The Input Image
Masked Image from thresholding and invert operation.
Use of connected component.
Overlaying output of connected component on input image.
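On Android, the same approach can be expressed with the Java bindings. A rough sketch (assuming a grayscale CV_8UC1 Mat named gray; the 0.8 factor mirrors the Python answer above, and drawing in red is just one way to make the blobs visible):
// Uses org.opencv.core.* and org.opencv.imgproc.Imgproc.
// Smooth and estimate the paper brightness.
Mat blurred = new Mat();
Imgproc.medianBlur(gray, blurred, 5);
double avg = Core.mean(blurred).val[0];

// Paper is the bright region; invert so the ink blobs become white.
Mat mask = new Mat();
Core.inRange(blurred, new Scalar(avg * 0.8), new Scalar(255), mask);
Core.bitwise_not(mask, mask);

// Connected components with stats; label 0 is the background.
Mat labels = new Mat();
Mat stats = new Mat();
Mat centroids = new Mat();
int numLabels = Imgproc.connectedComponentsWithStats(mask, labels, stats, centroids, 8, CvType.CV_32S);

// Draw a red bounding box around each blob on a color copy of the frame.
Mat color = new Mat();
Imgproc.cvtColor(gray, color, Imgproc.COLOR_GRAY2BGR);
for (int i = 1; i < numLabels; i++) {
    int x = (int) stats.get(i, Imgproc.CC_STAT_LEFT)[0];
    int y = (int) stats.get(i, Imgproc.CC_STAT_TOP)[0];
    int w = (int) stats.get(i, Imgproc.CC_STAT_WIDTH)[0];
    int h = (int) stats.get(i, Imgproc.CC_STAT_HEIGHT)[0];
    Imgproc.rectangle(color, new Point(x, y), new Point(x + w, y + h), new Scalar(0, 0, 255), 2);
}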

Why is drawContours() in OpenCV generating this strange mask?

I started by reading in this Mat.
Then I converted it to greyscale and applied Imgproc.Canny() to it, getting the following mask.
Then I used Imgproc.findContours() to find the contours, and Imgproc.drawContours() and Core.putText() to label the contours with numbers:
Then I did
Rect boundingRect = Imgproc.boundingRect(contours.get(0));
Mat submatrix = new Mat();
submatrix = originalMat.submat(boundingRect);
to get the following submatrix:
So far so good. The Problem starts hereafter:
NOW I NEEDED A MASK OF THE submatrix. So I decided to use Imgproc.drawContours() to get the mask:
Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);
List<MatOfPoint> contourList = new ArrayList<>();
contourList.add(contours.get(0));
Imgproc.drawContours(mask, contourList, 0, new Scalar(255), -1);
I got the following mask:
WHAT I WAS EXPECTING was a filled (in white color) diamond shape on black background.
Why am I getting this unexpected result?
EDIT:
When I replaced Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1); with Mat mask = Mat.zeros(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);, the mask with the white-colored garbage was replaced by an empty black mask without any white on it. I got the following submat and mask:

I was getting the first contour in the list of contours (named contours) by contours.get(0), and using this first contour to calculate Imgproc.boundingRect() as well as in contourList.add(contours.get(0)); later (where contourList is the list of just one contour which is used in the last drawContours()).

Then I changed contours.get(0) to contours.get(1) in Imgproc.boundingRect() as well as in contourList.add(); (just before Imgproc.drawContours()). That resulted in this submat and mask:

Then I changed back to contours.get(0) in Imgproc.boundingRect() and left contourList.add(contours.get(1)); as it was. I got the following submat and mask:

Now I am completely unable to understand what is happening here.
I am not sure how this is handled in Java (I usually use OpenCV in C++ or Python), but there is an error in your code.
The contours list holds a list of points for each contour, and those points refer to the original image. So if, say, figure one sits at x=300, y=300 with width=100 and height=100, then when you take your submatrix and try to draw those points in a smaller image, drawing point (300, 300) in a 100 x 100 image will simply fall outside of it; it probably throws an error or just doesn't draw anything.
A solution for this is to do a for loop and subtract the top-left point of the bounding rect (in my example, (300, 300)) from each point of the contour.
As for why some garbage is drawn: you never initialize the matrix. I'm not sure about Java, but in C++ you have to set it to 0.
I think it should be something like this:
Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1, new Scalar(0));
I hope this helps :)
EDIT
I think I did not explain myself clearly before.
Your contours are arrays of points (x, y). These are the coordinates of the points that make up each contour in the original image. That image has a size, and your submatrix has a smaller size, so the points fall outside the boundaries of the smaller image.
you should do something like this to fix it:
for (int j = 0; j < contours[0].length; j++) {
    contours[0][j].x -= boundingrect.x;
    contours[0][j].y -= boundingrect.y;
}
and then you can draw the contours, since they will be within the boundaries of the submat.
I think in Java it is also possible to subtract the OpenCV points directly:
for (int j = 0; j < contours[0].length; j++) {
    contours[0][j] -= boundingrect.tl();
}
but in this case I am not sure, since I have only tried it in C++.
boundingrect.tl() -> gives you the top left point of the rect
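For the Java bindings specifically, a hedged sketch of the point-shifting step (operator subtraction as in the C++-style snippet above is not available in Java, so the contour has to be unpacked into a Point[] first; the names follow the question's code):
// Shift the contour so its coordinates are relative to the bounding rect / submatrix.
Rect boundingRect = Imgproc.boundingRect(contours.get(0));
MatOfPoint contour = contours.get(0);
Point[] points = contour.toArray();
for (int j = 0; j < points.length; j++) {
    points[j].x -= boundingRect.x;
    points[j].y -= boundingRect.y;
}
contour.fromArray(points);

// Now the shifted contour can be drawn onto a zero-initialized mask of the submatrix size.
Mat mask = Mat.zeros(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);
List<MatOfPoint> contourList = new ArrayList<>();
contourList.add(contour);
Imgproc.drawContours(mask, contourList, 0, new Scalar(255), -1);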
