Image Matching on an Android Game's Board

The question comes first: I am looking for a FAST approach to match images.
Now, the use case: I am developing a detector to detect orbs on a 6x5 Match-3 game board on the Android platform. I have an array of orb icons with transparent backgrounds, but the orbs on the screen (screenshot) have a different background color and probably a different size too. I have to compare each orb on the screen with my array of icons (69 icons, specifically), so that is 69x30 = 2070 comparisons in the worst case. I tried a lazy implementation and grouped almost-similar icons together to reduce the steps, but it still takes a long time (10 s at most) to compute. I also tried checking the channels and depth of the images, resizing the images to the same size, and tweaking the threshold value, but still no luck.
I have tried Histogram Matching (separate channels, grayscale), Template Matching (CCOEFF, SQDIFF, CCORR), AKAZE, ORB (unbounded, bounded), and PHash, all using OpenCV. Histogram matching and PHash give erroneous results (too many false positives), Template Matching takes 10 s+ (considered too slow for a user to wait), while AKAZE and ORB give better results than all the other methods but still need 6 s+ per try. Is there any other method that can help me cut the computation time down to somewhere near 1 s and give better results, considering the worst-case scenario of 2070 comparisons?
References I have read that compare the performance of different feature-matching algorithms:
A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. It shows that ORB and BRISK should on average be better than the other approaches compared, while AKAZE is moderately good for most cases. I deleted my histogram-comparison code as it is not really helpful, but you can find the rest of it below.
Mat source = Utils.loadResource(this, R.drawable.orb_icon, Imgcodecs.CV_LOAD_IMAGE_UNCHANGED);
Mat tmp = new Mat();
Bitmap cropped_img = Bitmap.createBitmap(screenshot, x, y, width, height);
Utils.bitmapToMat(cropped_img, tmp);
//template matching code
int r_rows = source.rows() - tmp.rows() + 1;
int r_cols = source.cols() - tmp.cols() + 1;
Mat result = new Mat();
result.create(r_rows, r_cols, CvType.CV_32F);
Imgproc.matchTemplate(source, tmp, result, Imgproc.TM_CCOEFF_NORMED);
Core.MinMaxLocResult mmr = Core.minMaxLoc(result);
double maxVal = mmr.maxVal;
return maxVal;
//AKAZE
MatOfKeyPoint kp1 = new MatOfKeyPoint();
MatOfKeyPoint kp2 = new MatOfKeyPoint();
Mat desc1 = new Mat();
Mat desc2 = new Mat();
AKAZE akaze = AKAZE.create();
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
akaze.detectAndCompute(source, new Mat(), kp1, desc1);
akaze.detectAndCompute(tmp, new Mat(), kp2, desc2);
List<MatOfDMatch> knnMatches = new ArrayList<>();
matcher.knnMatch(desc1, desc2, knnMatches, 2);
float threshold = 0.7f;
int count = 0;
for (int i = 0; i < knnMatches.size(); i++) {
    if (knnMatches.get(i).rows() > 1) {
        DMatch[] matches = knnMatches.get(i).toArray();
        if (matches[0].distance < threshold * matches[1].distance) {
            count++;
        }
    }
}
//ORB
ORB orb = ORB.create();
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
MatOfKeyPoint kp1 = new MatOfKeyPoint();
MatOfKeyPoint kp2 = new MatOfKeyPoint();
Mat desc1 = new Mat();
Mat desc2 = new Mat();
orb.detectAndCompute(source, new Mat(), kp1, desc1);
orb.detectAndCompute(tmp, new Mat(), kp2, desc2);
List<MatOfDMatch> knnMatches = new ArrayList<>();
matcher.knnMatch(desc1, desc2, knnMatches, 2);
float threshold = 0.8f;
int count = 0;
for (int i = 0; i < knnMatches.size(); i++) {
    if (knnMatches.get(i).rows() > 1) {
        DMatch[] matches = knnMatches.get(i).toArray();
        if (matches[0].distance < threshold * matches[1].distance) {
            count++;
        }
    }
}
//PHash
Mat hash_source = new Mat();
Mat hash_tmp = new Mat();
Img_hash.pHash(tmp, hash_tmp);
Img_hash.pHash(source, hash_source);
// compare the hashes (not the original images) with the Hamming distance
Core.norm(hash_source, hash_tmp, Core.NORM_HAMMING);
Edit: As suggested, below is the game board, icon image, and orb screenshot sample.
ICON vs orb screenshot
Also, you may observe the simulation result of each approach by comparing the result (smaller icon overlaid) on top of the orbs on the board: Histogram Matching, Template Matching, and AKAZE (similar to ORB).
After moving the variable initialization out of my comparison function into the base class, detecting keypoints and computing the PHash of the source icon images at class initialization, and running the detect-and-compute calls in batches using a List to reduce individual function calls, the image-matching process still takes 4 s+. Time consumption is reduced, but accuracy is still a major problem. You may observe my heap stack below.
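For reference, a minimal sketch of the precomputation described above: AKAZE descriptors are computed once per icon at initialization and reused for every screenshot. The field and method names (iconDescriptors, bestIconFor) are illustrative, not the original code.
// Assumed imports: org.opencv.core.*, org.opencv.features2d.AKAZE, org.opencv.features2d.DescriptorMatcher
private final AKAZE akaze = AKAZE.create();
private final DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
private final List<Mat> iconDescriptors = new ArrayList<>();

// Run once, e.g. in the constructor: the descriptors of the 69 icons never change.
void precomputeIconDescriptors(List<Mat> iconMats) {
    for (Mat icon : iconMats) {
        MatOfKeyPoint kp = new MatOfKeyPoint();
        Mat desc = new Mat();
        akaze.detectAndCompute(icon, new Mat(), kp, desc);
        iconDescriptors.add(desc);
    }
}

// Per board cell: compute its descriptor once, then compare against the cached icon descriptors.
int bestIconFor(Mat cellDescriptor) {
    int bestIdx = -1, bestCount = -1;
    for (int i = 0; i < iconDescriptors.size(); i++) {
        List<MatOfDMatch> knn = new ArrayList<>();
        matcher.knnMatch(cellDescriptor, iconDescriptors.get(i), knn, 2);
        int count = 0;
        for (MatOfDMatch m : knn) {
            DMatch[] pair = m.toArray();
            if (pair.length > 1 && pair[0].distance < 0.7f * pair[1].distance) count++;  // ratio test
        }
        if (count > bestCount) { bestCount = count; bestIdx = i; }
    }
    return bestIdx;
}
With the 69 icon descriptors cached, each board scan only needs detectAndCompute on the 30 cropped cells.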

Related

Command .copyto is failing

I'm a student and I'm working on an application that will scan a sudoku puzzle and solve it. I take a picture, then find the biggest contour; that part works. The problem begins when I want to extract that biggest contour onto an empty Mat (it has a white background): the application doesn't show the activity with the picture it should (with other images it does) but returns to my main activity. I was using this tutorial for the extraction: https://bytefish.de/blog/extracting_contours_with_opencv/.
mat4 = mat1; // mat1 is the current camera frame
transpose(mat4, mat4);
flip(mat4, mat4, +1);
mat5 = mat4;
Mat okraje = new Mat();
Mat hiearchy = new Mat();
Imgproc.cvtColor(mat5, mat5, Imgproc.COLOR_BGR2GRAY);
List<MatOfPoint> contourList = new ArrayList<MatOfPoint>();
Imgproc.Canny(mat5, okraje, 80, 100);
Imgproc.findContours(okraje, contourList, hiearchy, Imgproc.RETR_CCOMP, Imgproc.CHAIN_APPROX_SIMPLE); // it draws a full square, and only briefly!!
for (int ab = 0; ab < contourList.size(); ab++) {
    a = contourArea(contourList.get(ab), false);
    if (a > largest_area) {
        b = ab;
        largest_area = a;
        largest_contour_index = ab;
        bounding_rect = boundingRect(contourList.get(ab));
    }
}
Mat len_sudkoku = new Mat();
len_sudkoku.create(mat5.rows(), mat5.cols(), CvType.CV_8UC3);
len_sudkoku.setTo(new Scalar(255, 255, 255));
Mat lskere = new Mat();
lskere.create(okraje.cols(), okraje.rows(), CvType.CV_8UC1); // note: rows and cols are swapped here (see the fix below)
Random r = new Random();
Imgproc.drawContours(lskere, contourList, largest_contour_index, new Scalar(r.nextInt(255), r.nextInt(255), r.nextInt(255)), -1);
mat5.copyTo(len_sudkoku, lskere); // it crashes here!!
Bitmap bm = Bitmap.createBitmap(len_sudkoku.cols(), len_sudkoku.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(len_sudkoku, bm);
setContentView(R.layout.activity_fotka_ukaz);
ImageView IMW = findViewById(R.id.imageView);
IMW.setImageBitmap(bm);
I expected it to work like in the tutorial I posted, where the author extracts an apple and places it on another background. The thing I noticed is that the application returns to the main activity (instead of displaying the image) when I use the command
mat5.copyTo(len_sudkoku, lskere)
OK, I solved it. The problem wasn't the copyTo command (obviously) but the declared sizes of the different Mats and Bitmaps: in some declarations the first argument was a width where rows were expected, so it crashed. I now pass an explicit size to the Mats hiearchy and okraje, and changed lskere.create(okraje.cols(), okraje.rows(), ...) to lskere.create(okraje.rows(), okraje.cols(), ...).
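For reference, a sketch of the corrected allocation (Mat dimensions are rows first, then columns), reusing the variable names from the question:
// Mat.create() takes (rows, cols, type) - rows first. Allocating the mask with
// width and height swapped made its size differ from mat5, which is why copyTo() crashed.
Mat lskere = new Mat();
lskere.create(okraje.rows(), okraje.cols(), CvType.CV_8UC1);
lskere.setTo(new Scalar(0));                                   // start from an all-black mask
Imgproc.drawContours(lskere, contourList, largest_contour_index,
        new Scalar(255), -1);                                   // fill the largest contour
mat5.copyTo(len_sudkoku, lskere);                               // sizes now match, the copy succeeds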

Android: Images alignment

I need to align different images in my android application, using the OpenCV library. I found a solution in this thread.
public static Bitmap alignImagesHomography(Bitmap A, Bitmap B)
{
final int warp_mode = MOTION_HOMOGRAPHY;
Mat matA = new Mat(A.getHeight(), A.getWidth(), CvType.CV_8UC3);
Mat matAgray = new Mat(A.getHeight(), A.getWidth(), CvType.CV_8U);
Mat matB = new Mat(B.getHeight(), B.getWidth(), CvType.CV_8UC3);
Mat matBgray = new Mat(B.getHeight(), B.getWidth(), CvType.CV_8U);
Mat matBaligned = new Mat(A.getHeight(), A.getWidth(), CvType.CV_8UC3);
Mat warpMatrix = Mat.eye(3, 3, CV_32F);
Utils.bitmapToMat(A, matA);
Utils.bitmapToMat(B, matB);
Imgproc.cvtColor(matA, matAgray, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(matB, matBgray, Imgproc.COLOR_BGR2GRAY);
int numIter = 5;
double terminationEps = 1e-10;
TermCriteria criteria = new TermCriteria(TermCriteria.COUNT + TermCriteria.EPS, numIter, terminationEps);
findTransformECC(matAgray, matBgray, warpMatrix, warp_mode, criteria, matBgray);
Imgproc.warpPerspective(matA, matBaligned, warpMatrix, matA.size(), Imgproc.INTER_LINEAR + Imgproc.WARP_INVERSE_MAP);
Bitmap alignedBMP = Bitmap.createBitmap(A.getWidth(), A.getHeight(), Bitmap.Config.RGB_565);
Utils.matToBitmap(matBaligned, alignedBMP);
return alignedBMP;
}
public static Bitmap alignImagesEuclidean(Bitmap A, Bitmap B)
{
final int warp_mode = MOTION_EUCLIDEAN;
Mat matA = new Mat(A.getHeight(), A.getWidth(), CvType.CV_8UC3);
Mat matAgray = new Mat(A.getHeight(), A.getWidth(), CvType.CV_8U);
Mat matB = new Mat(B.getHeight(), B.getWidth(), CvType.CV_8UC3);
Mat matBgray = new Mat(B.getHeight(), B.getWidth(), CvType.CV_8U);
Mat matBaligned = new Mat(A.getHeight(), A.getWidth(), CvType.CV_8UC3);
Mat warpMatrix = Mat.eye(2,3,CV_32F);
Utils.bitmapToMat(A, matA);
Utils.bitmapToMat(B, matB);
Imgproc.cvtColor(matA, matAgray, Imgproc.COLOR_BGR2GRAY);
Imgproc.cvtColor(matB, matBgray, Imgproc.COLOR_BGR2GRAY);
int numIter = 5;
double terminationEps = 1e-10;
TermCriteria criteria = new TermCriteria(TermCriteria.COUNT + TermCriteria.EPS, numIter, terminationEps);
findTransformECC(matAgray, matBgray, warpMatrix, warp_mode, criteria, matBgray);
Imgproc.warpAffine(matA, matBaligned, warpMatrix, matA.size(), Imgproc.INTER_LINEAR + Imgproc.WARP_INVERSE_MAP);
Bitmap alignedBMP = Bitmap.createBitmap(A.getWidth(), A.getHeight(), Bitmap.Config.RGB_565);
Utils.matToBitmap(matBaligned, alignedBMP);
return alignedBMP;
}
public static Bitmap alignExposures(Bitmap A, Bitmap B) {
Mat matA = new Mat(A.getHeight(), A.getWidth(), CvType.CV_8UC3);
Mat matB = new Mat(B.getHeight(), B.getWidth(), CvType.CV_8UC3);
Utils.bitmapToMat(A, matA);
Utils.bitmapToMat(B, matB);
List<Mat> src = new ArrayList<>();
src.add(matA);
src.add(matB);
Bitmap output = Bitmap.createBitmap(A.getWidth(),A.getHeight(), Bitmap.Config.RGB_565);
AlignMTB align = createAlignMTB(8, 4, false);
align.process(src,src);
for (int i = 1; i < src.size(); i++) {
    add(src.get(0), src.get(i), src.get(0));
}
Utils.matToBitmap(src.get(0),output);
return output;
}
I tried all three methods written by the user wegenerEDV. However, the first two methods return the same picture as the Bitmap A given as input; the third method actually aligns the pictures, but the resulting image is overexposed:
original: https://i.imgur.com/cknHM23.jpg
aligned: https://i.imgur.com/kXCQl6x.jpg
Has anybody found a different solution? Or do these methods actually work and I'm doing something wrong?
The best solution for me would be to fix the alignImagesHomography method. It does do something, because it takes around 30 seconds to process the final picture, but the result is then exactly equal to the input image.
I've never used findTransformECC(), used by your first and second methods, and I'm not familiar with that algorithm. The only difference between those two methods is the type of transform findTransformECC() is asked to find; homographic transforms are a superset of Euclidean transforms so the first method (using MOTION_HOMOGRAPHY) would be most robust for your use case, though it might also be slower.
the first two methods return the same picture as the "Bitmap A" given as input
If these two methods are working correctly, the result should look almost identical to Image A, even though it is produced from pixels of Image B. Have you checked that the result is bitwise identical to Bitmap A, and does not just look similar?
I think I can see the same bug in both methods: findTransformECC() finds a mapping from matBgray onto matAgray (see the docs), but warpPerspective() and warpAffine(), respectively, are used to apply the resulting transform to matA, storing the result in matBaligned; they should be applied to matB. The only way I can see that you would get a result that is bitwise identical to Image A is if your homography calculation fails, so that the result homography ("warpMatrix") still contains its initial state, which is the identity matrix. The identity homography, incorrectly applied to matA (because of the above bug), will of course give another exact copy of matA in matBaligned. You could check this by printing warpMatrix after calculation and seeing if it is the identity matrix. You should also add some error checking, because at the moment you don't know whether the methods you call are failing at all, or why (eg. bad input parameters, unable to find any correspondences, etc etc).
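A minimal sketch of that fix, reusing the variable names from the question; the key change is warping matB (the image the transform was estimated for) instead of matA:
// findTransformECC(matAgray, matBgray, ...) estimates how to map B onto A,
// so the resulting warpMatrix must be applied to matB. The WARP_INVERSE_MAP
// flag is kept, as in the OpenCV ECC example.
findTransformECC(matAgray, matBgray, warpMatrix, warp_mode, criteria, matBgray);
Imgproc.warpPerspective(matB, matBaligned, warpMatrix, matA.size(),
        Imgproc.INTER_LINEAR + Imgproc.WARP_INVERSE_MAP);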
Your third method, alignExposures(), uses AlignMTB which is intended for HDR imaging, and I don't know how it does alignment. It may only handle 2D translation. The loop in that method is adding the output images back onto one of the source images, so it will saturate to white. If the result you want is the aligned images averaged together (is that what you want?) you should create a new output matrix with a larger datatype (eg. CV_16UC3) to accumulate the result images in and calculate the average, then use convertTo() to reduce that buffer back to the original datatype (eg. CV_8UC3).
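If an average of the aligned frames is what you want, here is a rough sketch of that accumulate-and-divide idea, assuming src is the list of aligned 8-bit frames from the question:
// Accumulate in 16-bit so the sum of 8-bit frames cannot saturate, then divide
// by the frame count and convert back to 8 bits for display.
Mat acc = new Mat(src.get(0).size(), CvType.CV_16UC(src.get(0).channels()), Scalar.all(0));
for (Mat frame : src) {
    Mat wide = new Mat();
    frame.convertTo(wide, CvType.CV_16U);   // convertTo keeps the channel count
    Core.add(acc, wide, acc);
}
Core.divide(acc, Scalar.all(src.size()), acc);
Mat average = new Mat();
acc.convertTo(average, CvType.CV_8U);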
Another algorithm I have used successfully for homographic alignment is:
Create a feature detector, eg. SIFT, from the features2d or xfeatures2d library. I used AKAZE, because SIFT is patented.
Detect features and descriptors in both your images using detectAndCompute().
Create a BFMatcher and perform brute force matching of feature points from both images using match().
If you want to at this point, you can print the matching feature points, draw them onto the images and view them, etc etc.
Organize the found correspondences into two lists (source and destination points) suitable for findHomography().
Call findHomography() using the RANSAC flag.
Apply the homography using warpPerspective().
With this method there are more opportunities to inspect and validate your data, which will help with debugging.
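A compact Java sketch of those steps, assuming two already-loaded Mats; the names and the RANSAC reprojection threshold are illustrative, not a drop-in implementation:
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.AKAZE;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public static Mat alignWithHomography(Mat reference, Mat moving) {
    // 1-2. Detect keypoints and compute descriptors in both images.
    AKAZE akaze = AKAZE.create();
    MatOfKeyPoint kpRef = new MatOfKeyPoint(), kpMov = new MatOfKeyPoint();
    Mat descRef = new Mat(), descMov = new Mat();
    akaze.detectAndCompute(reference, new Mat(), kpRef, descRef);
    akaze.detectAndCompute(moving, new Mat(), kpMov, descMov);

    // 3. Brute-force matching (Hamming distance suits AKAZE's binary descriptors).
    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    matcher.match(descMov, descRef, matches);

    // 5. Organize correspondences into source/destination point lists.
    List<Point> srcPts = new ArrayList<>(), dstPts = new ArrayList<>();
    List<KeyPoint> movList = kpMov.toList(), refList = kpRef.toList();
    for (DMatch m : matches.toList()) {
        srcPts.add(movList.get(m.queryIdx).pt);
        dstPts.add(refList.get(m.trainIdx).pt);
    }
    MatOfPoint2f src = new MatOfPoint2f(); src.fromList(srcPts);
    MatOfPoint2f dst = new MatOfPoint2f(); dst.fromList(dstPts);

    // 6. Robust homography with RANSAC; 7. warp the moving image onto the reference.
    Mat h = Calib3d.findHomography(src, dst, Calib3d.RANSAC, 3.0);
    Mat aligned = new Mat();
    Imgproc.warpPerspective(moving, aligned, h, reference.size());
    return aligned;
}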

Detection of four corners of a document under different circumstances

I have tried two methodologies, as follows:
conversion of image to Mat
apply gaussian blur
then canny edge detection
find contours
The problem with this method is:
too many contours are detected
mostly open contours
doesn't detect what I want to detect
Then I changed my approach and tried adaptive thresholding after a Gaussian/median blur. It is much better, and I am able to detect the corners in about 50% of cases.
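For illustration, that pipeline might look roughly like this in OpenCV Java; the blur and adaptive-threshold parameters are placeholders to tune, and src is the input Mat:
Mat gray = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_RGB2GRAY);
Imgproc.medianBlur(gray, gray, 5);                         // suppress texture/noise first
Mat bin = new Mat();
Imgproc.adaptiveThreshold(gray, bin, 255,
        Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, 11, 2);
List<MatOfPoint> contours = new ArrayList<>();
Imgproc.findContours(bin, contours, new Mat(),
        Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE); // then pick the largest contour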
The current problem I am facing is that the page detection requires a contrasting, plain background without any reflections. I think that's too idealistic for real-world use.
This is where I would like some help; even a direction towards the solution is highly appreciated, especially in Java. Thanks in anticipation.
It works absolutely fine with a significantly contrasting background like this:
Detected 4 corners
This picture gives trouble because the background isn't very contrasting:
Initial largest contour found
Update: The median blur did not help much, so I traced the cause and found that the page boundary was being detected in bits and pieces rather than as a single contour, and the biggest contour found was therefore only part of the page boundary. I performed some morphological operations to close the relatively small gaps, and the resulting largest contour is definitely improved, but it's still not optimal. Any ideas how I can close the bigger gaps?
morphed original picture
largest contour found in the morphed image
PS: Morphing the image in ideal scenarios has led to the detection of false contour boundaries. A condition that can be checked before morphing an image would also be a bonus. Thank you.
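For reference, the morphological closing described in the update can be as small as the sketch below, where bin is the thresholded (or edge) image and the kernel size is a placeholder to tune:
// Close small gaps in the page boundary so it becomes one contour instead of fragments.
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(9, 9));
Imgproc.morphologyEx(bin, bin, Imgproc.MORPH_CLOSE, kernel);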
If you use methods like these:
public static RotatedRect getBestRectByArea(List<RotatedRect> boundingRects) {
    RotatedRect bestRect = null;
    if (boundingRects.size() >= 1) {
        RotatedRect boundingRect;
        Point[] vertices = new Point[4];
        Rect rect;
        double maxArea;
        int ixMaxArea = 0;
        // find best rect by area
        boundingRect = boundingRects.get(ixMaxArea);
        boundingRect.points(vertices);
        rect = Imgproc.boundingRect(new MatOfPoint(vertices));
        maxArea = rect.area();
        for (int ix = 1; ix < boundingRects.size(); ix++) {
            boundingRect = boundingRects.get(ix);
            boundingRect.points(vertices);
            rect = Imgproc.boundingRect(new MatOfPoint(vertices));
            if (rect.area() > maxArea) {
                maxArea = rect.area();
                ixMaxArea = ix;
            }
        }
        bestRect = boundingRects.get(ixMaxArea);
    }
    return bestRect;
}

private static Bitmap findROI(Bitmap sourceBitmap) {
    Bitmap roiBitmap = Bitmap.createBitmap(sourceBitmap.getWidth(), sourceBitmap.getHeight(), Bitmap.Config.ARGB_8888);
    Mat sourceMat = new Mat(sourceBitmap.getWidth(), sourceBitmap.getHeight(), CV_8UC3);
    Utils.bitmapToMat(sourceBitmap, sourceMat);
    final Mat mat = new Mat();
    sourceMat.copyTo(mat);
    Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGB2GRAY);
    Imgproc.threshold(mat, mat, 146, 250, Imgproc.THRESH_BINARY);
    // find contours
    List<MatOfPoint> contours = new ArrayList<>();
    List<RotatedRect> boundingRects = new ArrayList<>();
    Imgproc.findContours(mat, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    // find appropriate bounding rectangles
    for (MatOfPoint contour : contours) {
        MatOfPoint2f areaPoints = new MatOfPoint2f(contour.toArray());
        RotatedRect boundingRect = Imgproc.minAreaRect(areaPoints);
        boundingRects.add(boundingRect);
    }
    RotatedRect documentRect = getBestRectByArea(boundingRects);
    if (documentRect != null) {
        Point rect_points[] = new Point[4];
        documentRect.points(rect_points);
        for (int i = 0; i < 4; ++i) {
            Imgproc.line(sourceMat, rect_points[i], rect_points[(i + 1) % 4], ROI_COLOR, ROI_WIDTH);
        }
    }
    Utils.matToBitmap(sourceMat, roiBitmap);
    return roiBitmap;
}
you can achieve for your source images results like this:
or that:
If you adjust threshold values and apply filters you can achieve even better results.
You can pick a single contour by using one or both of:
Use BoundingRect and ContourArea to evaluate the squareness of each contour. boundingRect() returns axis-aligned rects; to handle arbitrary rotation better, use minAreaRect(), which returns optimally rotated ones.
Use Cv.ApproxPoly iteratively to reduce the contour to a 4-sided shape:
var approxIter = 1;
while (true)
{
    var approxCurve = Cv.ApproxPoly(largestContour, 0, null, ApproxPolyMethod.DP, approxIter, true);
    var approxCurvePointsTmp = new[] { approxCurve.Select(p => new CvPoint2D32f((int)p.Value.X, (int)p.Value.Y)).ToArray() }.ToArray();
    if (approxCurvePointsTmp[0].Length == 4)
    {
        corners = approxCurvePointsTmp[0];
        break;
    }
    else if (approxCurvePointsTmp[0].Length < 4)
        throw new InvalidOperationException("Failed to decimate corner points");
    approxIter++;
}
However, neither of these will help if the contour detection gives you two separate contours due to noise or poor contrast.
I think it would be possible to use the Hough line transform to help detect cases where a line has been split across two contours.
If so, the search could be repeated for all combinations of joined contours to see if a bigger or more rectangular match is found.
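A rough sketch of that idea with the probabilistic Hough transform, assuming edges is the Canny output and debug is a Mat to draw on (both hypothetical names, thresholds to tune):
// Collinear fragments from a boundary that was split into two contours show up
// as (near-)collinear segments, which can then be merged before re-assembling the outline.
Mat lines = new Mat();
Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 60, 50, 20);
for (int i = 0; i < lines.rows(); i++) {
    double[] l = lines.get(i, 0);   // x1, y1, x2, y2
    Imgproc.line(debug, new Point(l[0], l[1]), new Point(l[2], l[3]), new Scalar(0, 255, 0), 2);
}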
Stop relying on edge detection, the worst methodology in the universe, and switch to some form of image segmentation.
The paper is white, the background is contrasted, this is the information that you should use.

How can i improve openCV people detecting algorithm

I am trying to write a human detector. It works now, but it sometimes reacts to cats, boxes, etc., and I only get around 5 fps. So the question is: how can I improve my algorithm for better fps and detection accuracy?
I have tried to use this one:
http://www.pyimagesearch.com/2015/11/09/pedestrian-detection-opencv/
But I couldn't find any way to use it on Android.
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    List<MatOfPoint> list = new ArrayList<>();
    Mat frame = new Mat();
    Mat gray = new Mat();
    Mat hierarchy = new Mat();
    Mat originalFrame = inputFrame.rgba();
    Imgproc.medianBlur(originalFrame, originalFrame, 3);
    Imgproc.cvtColor(originalFrame, gray, Imgproc.COLOR_RGB2GRAY, 0);
    HOGDescriptor hog = new HOGDescriptor();
    // Get the default people detector and set it on our descriptor
    MatOfFloat descriptors = HOGDescriptor.getDefaultPeopleDetector();
    hog.setSVMDetector(descriptors);
    MatOfRect locations = new MatOfRect();
    MatOfDouble weights = new MatOfDouble();
    hog.detectMultiScale(gray, locations, weights);
    Point rectPoint1 = new Point();
    Point rectPoint2 = new Point();
    Point fontPoint = new Point();
    if (locations.rows() > 0) {
        List<Rect> rectangles = locations.toList();
        for (Rect rect : rectangles) {
            rectPoint1.x = rect.x;
            rectPoint1.y = rect.y;
            fontPoint.x = rect.x;
            fontPoint.y = rect.y - 4;
            rectPoint2.x = rect.x + rect.width;
            rectPoint2.y = rect.y + rect.height;
            final Scalar rectColor = new Scalar(0, 0, 0);
            // Draw the detected information onto the image
            Imgproc.rectangle(originalFrame, rectPoint1, rectPoint2, rectColor, 2);
        }
    }
    frame.release();
    gray.release();
    hierarchy.release();
    list.clear();
    return originalFrame;
}
You're using the HOG+SVM approach to detect people; it is inherently going to be quite slow. Nevertheless, you can use some of the suggestions in this question: How to speed up svm.predict?
Depending on your problem, i.e. if the camera is static and the pedestrians are moving, you could opt for a background-subtraction approach. This is probably the most efficient way, but bear in mind that it will pick up any objects that are moving in the scene, so you could add thresholds to remove small objects. Some background-subtraction algorithms include Mixture of Gaussians (MOG), MOG2, or GMG. An important thing to note is that these approaches rely on building a background model of the scene, i.e. they assume pixels that are static over time to be part of the background; hence, when a pedestrian stands still for a while, they get absorbed into the background, resulting in missed detections. There are many papers that propose solutions to that problem, so you might want to have a look at them; here is one that produces decent results: Static and Moving Object Detection Using Flux Tensor with Split Gaussian Models.
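As a rough illustration of the background-subtraction route, here is a minimal MOG2 sketch in OpenCV Java; the shadow threshold, blur size, and minimum blob area are placeholder values:
// Assumed imports: org.opencv.video.Video, org.opencv.video.BackgroundSubtractorMOG2
// Create the subtractor once, outside the per-frame callback.
BackgroundSubtractorMOG2 mog2 = Video.createBackgroundSubtractorMOG2();

public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    Mat frame = inputFrame.rgba();
    Mat fgMask = new Mat();
    mog2.apply(frame, fgMask);                                           // foreground mask (0/127/255)
    Imgproc.threshold(fgMask, fgMask, 200, 255, Imgproc.THRESH_BINARY);  // drop shadow pixels (127)
    Imgproc.medianBlur(fgMask, fgMask, 5);                               // suppress speckle noise
    List<MatOfPoint> blobs = new ArrayList<>();
    Imgproc.findContours(fgMask, blobs, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    for (MatOfPoint blob : blobs) {
        Rect box = Imgproc.boundingRect(blob);
        if (box.area() > 2000) {                                         // ignore small moving objects
            Imgproc.rectangle(frame, box.tl(), box.br(), new Scalar(0, 255, 0), 2);
        }
    }
    return frame;
}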
Additionally, you could opt for a data-driven approach: either get a good pre-trained model and do your detection using that, or train one yourself using TensorFlow, Caffe, or Torch, and use the dnn module from opencv_contrib to run the detection.
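For the data-driven route, a hedged sketch of running a pre-trained SSD-style Caffe model through the OpenCV dnn module; the model file names and the 300x300 / 127.5 preprocessing are assumptions that depend on the specific model you download:
// Assumed imports: org.opencv.dnn.Dnn, org.opencv.dnn.Net
// The file names below are hypothetical - use whatever pre-trained detector you choose.
Net net = Dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt", "MobileNetSSD_deploy.caffemodel");

Mat blob = Dnn.blobFromImage(frame, 1.0 / 127.5, new Size(300, 300),
        new Scalar(127.5, 127.5, 127.5), true, false);
net.setInput(blob);
Mat detections = net.forward();
// detections is a 1x1xNx7 blob: [imageId, classId, confidence, left, top, right, bottom],
// with coordinates relative to the input size, so they need rescaling to the frame.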

Performance Issues in OpenCV for Android Keypoint Matching and threshold using ORB and RANSAC

I recently started developing an app in Android Studio and I just finished writing the code. The accuracy I get is more than satisfactory, but the time taken by the device is a lot. I followed some tutorials on how to monitor performance in Android Studio and saw that one small part of my code is taking 6 seconds, which is half the time my app takes to display the entire result. I have seen a lot of posts (Java OpenCV - extracting good matches from knnMatch, OpenCV filtering ORB matches on OpenCV/JavaCV) but haven't come across anyone asking about this problem. The OpenCV link http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html does provide a good tutorial, but the RANSAC function in the Java bindings takes different arguments for the keypoints compared to C++.
Here is my code:
public Mat ORB_detection (Mat Scene_image, Mat Object_image){
    /*This function is used to find the reference card in the captured image with the help of
     * the reference card saved in the application
     * Inputs - Captured image (Scene_image), Reference Image (Object_image)*/
    FeatureDetector orb = FeatureDetector.create(FeatureDetector.DYNAMIC_ORB);
    /*1.a Keypoint Detection for Scene Image*/
    //convert input to grayscale
    channels = new ArrayList<Mat>(3);
    Core.split(Scene_image, channels);
    Scene_image = channels.get(0);
    //Sharpen the image
    Scene_image = unsharpMask(Scene_image);
    MatOfKeyPoint keypoint_scene = new MatOfKeyPoint();
    //Convert image to eight bit, unsigned char
    Scene_image.convertTo(Scene_image, CvType.CV_8UC1);
    orb.detect(Scene_image, keypoint_scene);
    channels.clear();
    /*1.b Keypoint Detection for Object image*/
    //convert input to grayscale
    Core.split(Object_image, channels);
    Object_image = channels.get(0);
    channels.clear();
    MatOfKeyPoint keypoint_object = new MatOfKeyPoint();
    Object_image.convertTo(Object_image, CvType.CV_8UC1);
    orb.detect(Object_image, keypoint_object);
    //2. Calculate the descriptors/feature vectors
    //Initialize orb descriptor extractor
    DescriptorExtractor orb_descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    Mat Obj_descriptor = new Mat();
    Mat Scene_descriptor = new Mat();
    orb_descriptor.compute(Object_image, keypoint_object, Obj_descriptor);
    orb_descriptor.compute(Scene_image, keypoint_scene, Scene_descriptor);
    //3. Matching the descriptors using Brute-Force
    DescriptorMatcher brt_frc = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    brt_frc.match(Obj_descriptor, Scene_descriptor, matches);
    //4. Calculating the max and min distance between Keypoints
    float max_dist = 0, min_dist = 100, dist = 0;
    DMatch[] for_calculating;
    for_calculating = matches.toArray();
    for (int i = 0; i < Obj_descriptor.rows(); i++) {
        dist = for_calculating[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    System.out.print("\nInterval min_dist: " + min_dist + ", max_dist:" + max_dist);
    //-- Use only "good" matches (i.e. whose distance is less than 2.5*min_dist)
    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    double ratio_dist = 2.5;
    ratio_dist = ratio_dist * min_dist;
    int i, iter = matches.toArray().length;
    matches.release();
    for (i = 0; i < iter; i++) {
        if (for_calculating[i].distance <= ratio_dist)
            good_matches.addLast(for_calculating[i]);
    }
    System.out.print("\n done Good Matches");
    /*Necessary type conversion for drawing matches
    MatOfDMatch goodMatches = new MatOfDMatch();
    goodMatches.fromList(good_matches);
    Mat matches_scn_obj = new Mat();
    Features2d.drawKeypoints(Object_image, keypoint_object, new Mat(Object_image.rows(), keypoint_object.cols(), keypoint_object.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawKeypoints(Scene_image, keypoint_scene, new Mat(Scene_image.rows(), Scene_image.cols(), Scene_image.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawMatches(Object_image, keypoint_object, Scene_image, keypoint_scene, goodMatches, matches_scn_obj);
    SaveImage(matches_scn_obj, "drawing_good_matches.jpg");
    */
    if (good_matches.size() <= 6) {
        ph_value = "7";
        System.out.println("Wrong Detection");
        return Scene_image;
    }
    else {
        //5. RANSAC thresholding for finding the optimum homography
        Mat outputImg = new Mat();
        LinkedList<Point> objList = new LinkedList<Point>();
        LinkedList<Point> sceneList = new LinkedList<Point>();
        List<org.opencv.core.KeyPoint> keypoints_objectList = keypoint_object.toList();
        List<org.opencv.core.KeyPoint> keypoints_sceneList = keypoint_scene.toList();
        //getting the object and scene points from good matches
        for (i = 0; i < good_matches.size(); i++) {
            objList.addLast(keypoints_objectList.get(good_matches.get(i).queryIdx).pt);
            sceneList.addLast(keypoints_sceneList.get(good_matches.get(i).trainIdx).pt);
        }
        good_matches.clear();
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);
        objList.clear();
        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);
        sceneList.clear();
        float RANSAC_dist = (float) 2.0;
        Mat hg = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, RANSAC_dist);
        for (i = 0; i < hg.cols(); i++) {
            String tmp = "";
            for (int j = 0; j < hg.rows(); j++) {
                Point val = new Point(hg.get(j, i));
                tmp = tmp + val.x + " ";
            }
        }
        Mat scene_image_transformed_color = new Mat();
        Imgproc.warpPerspective(original_image, scene_image_transformed_color, hg, Object_image.size(), Imgproc.WARP_INVERSE_MAP);
        processing(scene_image_transformed_color, template_match);
        return outputImg;
    }
}
and this is the part that takes 6 seconds at runtime:
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
double ratio_dist = 2.5;
ratio_dist = ratio_dist * min_dist;
int i, iter = matches.toArray().length;
matches.release();
for (i = 0; i < iter; i++) {
    if (for_calculating[i].distance <= ratio_dist)
        good_matches.addLast(for_calculating[i]);
}
System.out.print("\n done Good Matches");
I was thinking maybe I could write this part of the code in C++ using the NDK, but I just wanted to be sure that the language is the problem and not the code itself.
Please don't be strict, first question! Any criticism is much appreciated!
So the problem was that logcat was giving me false timing results. The lag was due to a huge Gaussian blur later in the code. Instead of System.out.print, I used System.currentTimeMillis, which showed me the real bottleneck.
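For anyone with the same issue, the timing check can be as simple as wrapping the suspect block:
// Assumed import: android.util.Log
long t0 = System.currentTimeMillis();
// ... block under suspicion, e.g. the good-matches loop or the Gaussian blur ...
long elapsedMs = System.currentTimeMillis() - t0;
Log.d("Timing", "block took " + elapsedMs + " ms");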
