Digit detection from an image in Android

I am looking for a way to extract digits from an image on Android. I have to take a picture and then read the numbers out of it. OpenCV is an option; can OpenCV be used on Android? Please suggest a proper approach. I will be grateful.

There are many OCR libraries for Android; check these links:
https://github.com/rmtheis/android-ocr
https://github.com/GautamGupta/Simple-Android-OCR
http://www.abbyy.com/mobileocr/android/
best OCR (Optical character recognition) example in android

OpenCV supports the Android platform. You have to set up OpenCV4Android; the step-by-step instructions are here:
http://docs.opencv.org/doc/tutorials/introduction/android_binary_package/O4A_SDK.html
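Once the SDK is linked into your project, a quick sanity check that the native library actually loads; a minimal sketch using the SDK's org.opencv.android.OpenCVLoader helper:
if (!OpenCVLoader.initDebug()) {
    // Native library missing: any OpenCV call after this point would crash
    Log.e("OpenCV", "Failed to load OpenCV native library");
}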
However, OpenCV by itself is not the solution, only a step: you still need a character recognition engine on top of it. The most popular one is Tesseract-ocr, but this is really not an easy task.
Also, these engines usually recognize all characters, not just digits. Once recognition works, extracting the digits will be the easiest part in Java.
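If you go the Tesseract route, tess-two (used by the first link above) lets you whitelist characters so that only digits are recognized. A minimal sketch, where datapath and capturedBitmap are placeholders for your tessdata directory and camera photo:
TessBaseAPI baseApi = new TessBaseAPI();
baseApi.init(datapath, "eng"); // datapath must contain tessdata/eng.traineddata
baseApi.setVariable(TessBaseAPI.VAR_CHAR_WHITELIST, "0123456789"); // digits only
baseApi.setImage(capturedBitmap);
String digits = baseApi.getUTF8Text();
baseApi.end();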

This works for me; you just need to specify the expected digit size (sort, OCR_verifySizes and preprocessChar are my own helper methods):
List<Mat> output = new ArrayList<Mat>();
Imgproc.cvtColor(input, input, Imgproc.COLOR_BGRA2GRAY);
Mat img_threshold = new Mat();
Imgproc.threshold(input, img_threshold, 60, 255, Imgproc.THRESH_BINARY_INV);
Mat img_contours = img_threshold.clone(); // findContours modifies its input, so work on a copy
// Find contours of possible characters
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(img_contours, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE); // all pixels of each contour
contours = sort(contours); // order the contours left to right (helper method)
// Draw blue contours on a copy of the thresholded image
Mat result = img_threshold.clone();
Imgproc.cvtColor(result, result, Imgproc.COLOR_GRAY2BGR);
Imgproc.drawContours(result, contours,
        -1,                    // draw all contours
        new Scalar(255, 0, 0), // in blue (BGR channel order)
        1);                    // with a thickness of 1
// Start iterating over each contour found
ListIterator<MatOfPoint> itc = contours.listIterator();
// Remove patches that are outside the aspect-ratio and area limits
while (itc.hasNext())
{
    // Create the bounding rect of the object
    MatOfPoint mp = new MatOfPoint(itc.next().toArray());
    Rect mr = Imgproc.boundingRect(mp);
    Core.rectangle(result, new Point(mr.x, mr.y), new Point(mr.x + mr.width, mr.y + mr.height), new Scalar(0, 255, 0));
    Mat auxRoi = new Mat(img_threshold, mr); // ROI view into the thresholded image
    if (OCR_verifySizes(auxRoi))
    {
        output.add(preprocessChar(auxRoi));
    }
}
return output;
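Note that sort, OCR_verifySizes and preprocessChar above are the answerer's own helpers and are not shown. Purely as a hypothetical illustration, the size check could look something like this (every threshold is a guess to tune for your digits):
private boolean OCR_verifySizes(Mat roi) {
    float aspect = (float) roi.cols() / (float) roi.rows();
    float fill = (float) Core.countNonZero(roi) / (roi.cols() * roi.rows()); // fraction of white pixels
    return aspect > 0.2f && aspect < 1.2f && fill > 0.15f && roi.rows() > 10;
}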

Related

Blurring a Rect within a screenshot

I'm developing an Android app which uses a background Service to programmatically capture a screenshot of whatever is on the screen currently. I obtain the screenshot as a Bitmap.
Next, I successfully imported OpenCV into my Android project.
What I need to do now is blur a subset of this image, i.e. not the entire image itself, but a [rectangular] area or sub-region within the image. I have an array of Rect objects representing the rectangular regions that I need to blur within the screenshot.
I've been looking around for a tutorial on doing this with OpenCV in Java, and I haven't found a clear answer. The Mat and Imgproc classes are obviously the ones of interest, and there's the Mat.submat() method, but I've been unable to find a clear, straightforward tutorial on getting this done.
I've googled a lot, and none of the examples I've found are complete. I need to do this in Java, within the Android runtime.
What I need is: Bitmap >>> Mat >>> Imgproc >>> Rect >>> Bitmap with the ROI blurred.
Any experienced OpenCV devs out here, can you point me in the right direction? This is the only thing I'm stuck at.
Related:
Gaussian blurring with OpenCV: only blurring a subregion of an image?
How to blur a rectangle with OpenCV.
How to blur some portion of an image in Android?
The C++ code to achieve this task is shared below with comments; the img:, subimg: and blurred_subimage: lines mark where the sample images appear in the original answer.
// load an input image
Mat img = imread("C:\\elon_tusk.png");
img: [input image]
// extract subimage
Rect roi(113, 87, 100, 50);
Mat subimg = img(roi);
subimg: [cropped region]
// blur the subimage
Mat blurred_subimage;
GaussianBlur(subimg, blurred_subimage, Size(0, 0), 5, 5);
blurred_subimage: [blurred crop]
// copy the blurred subimage back to the original image
blurred_subimage.copyTo(img(roi));
img: [original with the ROI now blurred]
Android equivalent:
Mat img = Imgcodecs.imread("elon_tusk.png");
Rect roi = new Rect(113, 87, 100, 50);
Mat subimg = img.submat(roi).clone();
Imgproc.GaussianBlur(subimg, subimg, new Size(0, 0), 5, 5);
subimg.copyTo(img.submat(roi));
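Since the question asks for Bitmap in and Bitmap out, here is a minimal sketch of the conversions around the blur, using OpenCV's org.opencv.android.Utils helpers; rectsToBlur stands in for your array of org.opencv.core.Rect regions:
Mat mat = new Mat();
Utils.bitmapToMat(screenshot, mat); // screenshot is the captured Bitmap
for (Rect r : rectsToBlur) {
    Mat sub = mat.submat(r); // a view that shares pixels with mat
    Imgproc.GaussianBlur(sub, sub, new Size(0, 0), 5, 5); // blur in place, writes through to mat
}
Bitmap out = Bitmap.createBitmap(mat.cols(), mat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(mat, out);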
You could just implement your own helper function; let's call it roi (region of interest).
In the Python bindings, OpenCV images are numpy ndarrays, so you can do something like this (with the region given as a Qt QRect):
def roi(image: np.ndarray, region: QRect) -> np.ndarray:
    x1 = region.topLeft().x()
    y1 = region.topLeft().y()
    x2 = region.bottomRight().x()
    y2 = region.bottomRight().y()
    return image[y1:y2, x1:x2]  # numpy indexes as [rows, cols], i.e. [y, x]
And just use this helper function to extract the subregions of the image that you are interested in, blur them, and copy the result back onto the original picture.

Thresholding in Android using OpenCV

Not sure if this is the right way to ask, but please help. I have an image of a dented car. I have to process it and highlight the dents and return the number of dents. I was able to do it reasonably well with the following result:
The MATLAB code is:
img2=rgb2gray(i1);
imshow(img2);
img3=imtophat(img2,strel('disk',15));
img4=imadjust(img3);
layer=img4(:,:,1);
img5=layer>100 & layer<250;
img6=imfill(img5,'holes');
img7=bwareaopen(img6,5);
[L,ans]=bwlabeln(img7);
imshow(img7);
I=imread(i1);
Ians=CarDentIdentification(I);
However, when I try to do this using OpenCV, I get this:
With the following code:
Imgproc.cvtColor(source, middle, Imgproc.COLOR_RGB2GRAY);
Imgproc.equalizeHist(middle, middle);
Imgproc.threshold(middle, middle, 150, 255, Imgproc.THRESH_OTSU);
Please tell me how I can obtain better results in OpenCV, and also how to count the dents. I tried findContours(), but it returns a very large number of contours. I tried other images as well, and I'm not getting proper results there either.
Please help.
From the MATLAB documentation: imtophat performs top-hat filtering, which computes the morphological opening of the image (using imopen) and then subtracts the result from the original image.
You could do this in OpenCV with the following steps:
Step 1: Get the disk structuring element
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
Step 2: Compute opening of the image and then subtract the result from the original image
tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)  # gray is the grayscale input image
This gives the following result:
Step 3: Now you could just threshold it manually or use Otsu:
ret, thresh = cv2.threshold(tophat, 17, 255, cv2.THRESH_BINARY)
which gives you the following image:
Since the OP wants the code in Java, here is the probable equivalent in Java:
private Mat topHat(Mat image)
{
    // Default (centered) anchor matches MATLAB's centered disk element
    Mat element = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, new Size(15, 15));
    Mat dst = new Mat();
    Imgproc.morphologyEx(image, dst, Imgproc.MORPH_TOPHAT, element);
    return dst;
}
Make sure you do this on a grayscale image (CvType.CV_8UC1), and then you can threshold suitably.
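Putting the steps together, a sketch of the whole pipeline in Java with Otsu thresholding and a dent count that drops tiny blobs; the area cutoff of 5 mirrors the bwareaopen(img6,5) call and is only a starting point:
Mat gray = new Mat();
Imgproc.cvtColor(source, gray, Imgproc.COLOR_RGB2GRAY);
Mat tophat = topHat(gray); // the method above
Mat thresh = new Mat();
Imgproc.threshold(tophat, thresh, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);
List<MatOfPoint> blobs = new ArrayList<MatOfPoint>();
Imgproc.findContours(thresh, blobs, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
int dentCount = 0;
for (MatOfPoint c : blobs) {
    if (Imgproc.contourArea(c) > 5) dentCount++; // ignore specks, like bwareaopen
}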

Why is the drawContour() in OpenCV generating this strange Mask?

I started by reading in this Mat.
Then I converted it to grayscale and applied Imgproc.Canny() to it, getting the following mask.
Then I used Imgproc.findContours() to find the contours, and Imgproc.drawContours() together with Core.putText() to draw the contours and label them with numbers:
Then I did:
Rect boundingRect = Imgproc.boundingRect(contours.get(0));
Mat submatrix = originalMat.submat(boundingRect);
to get the following submatrix:
So far so good. The problem starts hereafter:
Now I needed a mask of the submatrix, so I decided to use Imgproc.drawContours() to get it:
Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);
List<MatOfPoint> contourList = new ArrayList<>();
contourList.add(contours.get(0));
Imgproc.drawContours(mask, contourList, 0, new Scalar(255), -1);
I got the following mask:
What I was expecting was a diamond shape filled in white on a black background.
Why am I getting this unexpected result?
EDIT:
When I replaced Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1); with Mat mask = Mat.zeros(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);, the mask with the white garbage was replaced by an empty black mask without any white on it at all. I got the following submat and mask:
I was getting the first contour in the list of contours (named contours) with contours.get(0), and using this first contour both to calculate Imgproc.boundingRect() and in contourList.add(contours.get(0)); later (where contourList is the single-contour list used in the final drawContours()).
Then I went ahead and changed contours.get(0) to contours.get(1) both in Imgproc.boundingRect() and in contourList.add(); (just before Imgproc.drawContours()). That resulted in this submat and mask:
Then I changed back to contours.get(0) in Imgproc.boundingRect() and left contourList.add(contours.get(1)); in place. I got the following submat and mask:
Now I am completely unable to understand what is happening here.
I am not sure how this is handled in Java (I usually use OpenCV in C++ or Python), but there is an error in your code.
The contours list is a list of lists of points, and those points refer to coordinates in the original image. This means that if figure one is at, say, x=300, y=300, width=100, height=100, then when you take your submatrix, drawContours will try to draw those points into a much smaller image; when it tries to draw point (300,300) in a 100 x 100 image it simply fails, probably throwing an error or just drawing nothing.
A solution is to loop over the contour and subtract the top-left point of the bounding rect (in my example, (300,300)) from each contour point.
As for why some garbage is drawn: you never initialize the matrix. I am not sure about Java, but in C++ you have to set it to 0.
I think it should be something like this:
Mat mask = new Mat(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1, new Scalar(0));
I hope this helps :)
EDIT
I think I did not explain myself clearly before.
Your contours are arrays of (x,y) points: the coordinates of the points that make up each contour in the original image. That image has a certain size, and your submatrix is smaller, so the contour points fall outside the small image's boundaries.
You should do something like this to fix it (in Java, a MatOfPoint has to be unpacked into a Point[] before you can edit it):
Point[] pts = contours.get(0).toArray();
for (int j = 0; j < pts.length; j++) {
    pts[j].x -= boundingRect.x;
    pts[j].y -= boundingRect.y;
}
contours.set(0, new MatOfPoint(pts));
and then you can draw the contours, since they will now lie within the boundaries of the submat.
In C++ you could subtract the OpenCV points directly, contours[0][j] -= boundingrect.tl(); (boundingrect.tl() gives you the top-left point of the rect), but Java has no operator overloading, so you have to adjust x and y explicitly as shown above.
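Putting both fixes together, a sketch: zero-initialize the mask and shift the contour into the submatrix coordinate frame before drawing:
Mat mask = Mat.zeros(submatrix.rows(), submatrix.cols(), CvType.CV_8UC1);
Point[] pts = contours.get(0).toArray();
for (int j = 0; j < pts.length; j++) {
    pts[j].x -= boundingRect.x; // move into the submat's coordinate frame
    pts[j].y -= boundingRect.y;
}
List<MatOfPoint> contourList = new ArrayList<MatOfPoint>();
contourList.add(new MatOfPoint(pts));
Imgproc.drawContours(mask, contourList, 0, new Scalar(255), -1); // filled white shape on black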

How to reduce wrong recognitions using a cascade classifier

Hello, I'm trying to recognize a car using a cascade classifier with Android and the OpenCV library. My problem is that my phone marks almost everything as a car.
I've created my code based on:
https://www.youtube.com/watch?v=WEzm7L5zoZE
and the face detection sample. My app behaves very strangely: the marking looks random. I don't even know whether the car marking is ever correct or whether it is just random behaviour. At the moment it even marks my keyboard as a car. I'm not sure what I can improve; I don't see any difference between training it with 5 stages and with 14.
I've trained my file up to 14 stages
my code looks like this:
@Override
public Mat onCameraFrame(Mat aInputFrame) {
    // return FrameAnalyzer.analyzeFrame(aInputFrame);
    // Create a grayscale image
    Imgproc.cvtColor(aInputFrame, grayscaleImage, Imgproc.COLOR_RGBA2RGB);
    MatOfRect objects = new MatOfRect();
    // Use the classifier to detect cars
    if (cascadeClassifier != null) {
        cascadeClassifier.detectMultiScale(grayscaleImage, objects, 1.1, 1,
                2, new Size(absoluteObjectSize, absoluteObjectSize),
                new Size());
    }
    Rect[] dataArray = objects.toArray();
    for (int i = 0; i < dataArray.length; i++) {
        Core.rectangle(aInputFrame, dataArray[i].tl(), dataArray[i].br(),
                new Scalar(0, 255, 0, 255), 3);
    }
    return aInputFrame;
}
Try changing the following:
Using COLOR_RGBA2RGB with cvtColor, as in the sample code, will not give a grayscale image. Try COLOR_RGBA2GRAY.
Increase minNeighbors in detectMultiScale. In your call it is only 1 (the fourth argument; the 2 after it is the flags parameter). More neighbors means more confidence in each result.
Make sure there are enough samples to train with. A quick search and reading through books give the impression that thousands of images are needed for training: for example, around 10000 images are used for OCR Haar training, and 3000 to 5000 samples for face training.
More importantly, decide whether you really want to use Haar training for identifying a car at all. There may be better methods of vehicle identification; for example, for a moving vehicle you could use optical-flow-based techniques.
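A sketch of onCameraFrame with both suggestions applied: the grayscale conversion fixed and minNeighbors raised from 1 to 4 (a guess; tune it for your cascade):
@Override
public Mat onCameraFrame(Mat aInputFrame) {
    // Actually convert to grayscale this time
    Imgproc.cvtColor(aInputFrame, grayscaleImage, Imgproc.COLOR_RGBA2GRAY);
    MatOfRect objects = new MatOfRect();
    if (cascadeClassifier != null) {
        cascadeClassifier.detectMultiScale(grayscaleImage, objects, 1.1, 4, 2,
                new Size(absoluteObjectSize, absoluteObjectSize), new Size());
    }
    for (Rect r : objects.toArray()) {
        Core.rectangle(aInputFrame, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 3);
    }
    return aInputFrame;
}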

Can't find correct FAST-SURF matches when using OpenCV for Android

I'm using OpenCV for Android to implement a logo detection algorithm. My goal is to find a predefined logo in a picture I've taken with the Android camera.
I can't get ANY right matches, which seems very strange to me considering I'm almost exclusively using OpenCV library functions.
First I detect keypoints using the FAST detector; my images are 500x500 in size.
Afterwards I use SURF to describe these keypoints.
With kNN I ask for the 2 best matches and eliminate those whose ratio (first.distance / second.distance) is not smaller than 0.6.
That leaves me with around 10 matches, but they are all wrong; and when I draw every match (100+), they all seem wrong too.
I can't see what I'm doing wrong here. Does anyone have the same problem, or know what I'm doing wrong?
FeatureDetector FAST = FeatureDetector.create(FeatureDetector.FAST);
// extract keypoints
MatOfKeyPoint keypoints = new MatOfKeyPoint();
MatOfKeyPoint logoKeypoints = new MatOfKeyPoint();
FAST.detect(image1, keypoints);
FAST.detect(image2, logoKeypoints);
// describe the keypoints with SURF
DescriptorExtractor SurfExtractor = DescriptorExtractor
        .create(DescriptorExtractor.SURF);
Mat descriptors = new Mat();
Mat logoDescriptors = new Mat();
SurfExtractor.compute(image1, keypoints, descriptors);
SurfExtractor.compute(image2, logoKeypoints, logoDescriptors);
// knn() is my own helper doing 2-NN matching plus the 0.6 ratio test
List<DMatch> matches = knn(descriptors, logoDescriptors);
MatOfDMatch matchesMat = new MatOfDMatch();
matchesMat.fromList(matches);
Mat rgbout = new Mat();
Scalar blue = new Scalar(0, 0, 255);
Scalar red = new Scalar(255, 0, 0);
Features2d.drawMatches(image2, logoKeypoints, image1, keypoints,
        matchesMat, rgbout, blue, red, new MatOfByte(), Features2d.NOT_DRAW_SINGLE_POINTS);
I think the problem is the matcher you are using. For float-based descriptors such as SURF's, use FLANN or brute-force as the matcher. Also try to use the same feature method for both detection and description, i.e. SURF descriptors on SURF keypoints.
Read this post on Stack Overflow, and the articles linked in it, for a better understanding:
How Does OpenCV ORB Feature Detector Work?
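For illustration, a sketch of FLANN-based matching with the same 0.6 ratio test done through OpenCV's own DescriptorMatcher, reusing descriptors and logoDescriptors from the question:
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.FLANNBASED);
List<MatOfDMatch> knnMatches = new ArrayList<MatOfDMatch>();
matcher.knnMatch(descriptors, logoDescriptors, knnMatches, 2); // 2 best matches per query descriptor
List<DMatch> good = new ArrayList<DMatch>();
for (MatOfDMatch m : knnMatches) {
    DMatch[] pair = m.toArray();
    // Lowe's ratio test: keep a match only if it clearly beats the runner-up
    if (pair.length == 2 && pair[0].distance < 0.6f * pair[1].distance) {
        good.add(pair[0]);
    }
}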
