OpenCV Homography - the 4 lines are not around the object - Android

I am trying to do this, and I found this question. That question is not exactly what I want to ask, but I want to do the same thing. I can find features and draw the feature descriptors on the image, but the bounding box around the object is very strange. Sorry, I can't post my result here. The lines that come out do not form a rectangle and are not around the object. Here is my result. What am I doing wrong, or is there another way to do it?
Sorry for my poor English; thanks for the help.
private void Featrue_found() {
    MatOfKeyPoint templateKeypoints = new MatOfKeyPoint();
    MatOfKeyPoint keypoints = new MatOfKeyPoint();
    MatOfDMatch matches = new MatOfDMatch();
    // load the template (Object) and the scene (Resource) images
    Object = Highgui.imread(Environment.getExternalStorageDirectory() + "/Android/data/" + getApplicationContext().getPackageName() + "/Files/Object.jpg", Highgui.CV_LOAD_IMAGE_UNCHANGED);
    Resource = Highgui.imread(Environment.getExternalStorageDirectory() + "/Android/data/" + getApplicationContext().getPackageName() + "/Files/Resource.jpg", Highgui.CV_LOAD_IMAGE_UNCHANGED);
    Mat imageOut = Resource.clone();
    // detect ORB keypoints in both images
    FeatureDetector myFeatures = FeatureDetector.create(FeatureDetector.ORB);
    myFeatures.detect(Resource, keypoints);
    myFeatures.detect(Object, templateKeypoints);
    // compute ORB descriptors
    DescriptorExtractor Extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    Mat descriptors1 = new Mat();
    Mat descriptors2 = new Mat();
    Extractor.compute(Resource, keypoints, descriptors1);
    // note: this computes the template descriptors from the scene image (Resource), not from Object
    Extractor.compute(Resource, templateKeypoints, descriptors2);
    // add feature descriptors
    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
    matcher.match(descriptors1, descriptors2, matches);
    List<DMatch> matches_list = matches.toList();
    MatOfDMatch good_matches = new MatOfDMatch();
    double max_dist = 0;
    double min_dist = 100;
    //-- Quick calculation of max and min distances between keypoints
    for (int i = 0; i < descriptors1.rows(); i++) {
        double dist = matches_list.get(i).distance;
        if (dist < min_dist)
            min_dist = dist;
        if (dist > max_dist)
            max_dist = dist;
    }
    //-- Draw only "good" matches (i.e. whose distance is less than 3*min_dist)
    for (int i = 0; i < descriptors1.rows(); i++) {
        if (matches_list.get(i).distance < 3 * min_dist) {
            MatOfDMatch temp = new MatOfDMatch();
            temp.fromArray(matches.toArray()[i]);
            good_matches.push_back(temp);
        }
    }
    MatOfByte drawnMatches = new MatOfByte();
    Features2d.drawMatches(Resource, keypoints, Object, templateKeypoints, good_matches, imageOut, Scalar.all(-1), Color_Red, drawnMatches, Features2d.NOT_DRAW_SINGLE_POINTS);
    // without feature descriptors:
    //Features2d.drawMatches(Resource, keypoints, Object, templateKeypoints, matches, imageOut);
    LinkedList<Point> objList = new LinkedList<Point>();
    LinkedList<Point> sceneList = new LinkedList<Point>();
    List<DMatch> good_matches_list = good_matches.toList();
    List<KeyPoint> keypoints_objectList = templateKeypoints.toList();
    List<KeyPoint> keypoints_sceneList = keypoints.toList();
    for (int i = 0; i < good_matches_list.size(); i++) {
        // note: match() was called as match(descriptors1, descriptors2), so queryIdx indexes
        // the scene keypoints and trainIdx the template keypoints - the opposite of this usage
        objList.addLast(keypoints_objectList.get(good_matches_list.get(i).queryIdx).pt);
        sceneList.addLast(keypoints_sceneList.get(good_matches_list.get(i).trainIdx).pt);
    }
    MatOfPoint2f obj = new MatOfPoint2f();
    obj.fromList(objList);
    MatOfPoint2f scene = new MatOfPoint2f();
    scene.fromList(sceneList);
    // findHomography
    Mat hg = Calib3d.findHomography(obj, scene);
    Mat obj_corners = new Mat(4, 1, CvType.CV_32FC2);
    Mat scene_corners = new Mat(4, 1, CvType.CV_32FC2);
    obj_corners.put(0, 0, new double[]{0, 0});
    obj_corners.put(1, 0, new double[]{Object.cols(), 0});
    obj_corners.put(2, 0, new double[]{Object.cols(), Object.rows()});
    obj_corners.put(3, 0, new double[]{0, Object.rows()});
    // project the object corners into the scene (obj_corners: input)
    Core.perspectiveTransform(obj_corners, scene_corners, hg);
    Core.line(imageOut, new Point(scene_corners.get(0, 0)), new Point(scene_corners.get(1, 0)), new Scalar(0, 255, 0), 4);
    Core.line(imageOut, new Point(scene_corners.get(1, 0)), new Point(scene_corners.get(2, 0)), new Scalar(0, 255, 0), 4);
    Core.line(imageOut, new Point(scene_corners.get(2, 0)), new Point(scene_corners.get(3, 0)), new Scalar(0, 255, 0), 4);
    Core.line(imageOut, new Point(scene_corners.get(3, 0)), new Point(scene_corners.get(0, 0)), new Scalar(0, 255, 0), 4);
    Highgui.imwrite(Environment.getExternalStorageDirectory() + "/Android/data/" + getApplicationContext().getPackageName() + "/Files/result_match.jpg", imageOut);
}

There are not enough matches in your case, and the matching is not correct. For best results I would suggest using RANSAC in the homography calculation and changing the threshold for good matches.
if (matches_list.get(i).distance < 3 * min_dist) {
    MatOfDMatch temp = new MatOfDMatch();
    temp.fromArray(matches.toArray()[i]);
    good_matches.push_back(temp);
}
Change 3*min_dist to 4*min_dist and check whether the feature matching improves. If not, you should opt for some other feature detector (SURF, SIFT, Harris). Harris might give noisy results, but it will produce the maximum number of feature points. If none of these works, use a better-quality image.
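As a minimal sketch against the question's variables (hedged: the 4*min_dist multiplier and the 5.0 reprojection threshold are starting points to tune, not definitive values):
DMatch[] all = matches.toArray();
MatOfDMatch good_matches = new MatOfDMatch();
for (int i = 0; i < all.length; i++) {
    // a looser multiplier keeps more candidate matches
    if (all[i].distance < 4 * min_dist) {
        MatOfDMatch temp = new MatOfDMatch();
        temp.fromArray(all[i]);
        good_matches.push_back(temp);
    }
}
// RANSAC rejects outlier correspondences that a plain least-squares fit would absorb
Mat hg = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 5.0);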

Related

Unexpected results in attempt to perform Image Alignment | Image Registration | OpenCV | Android

I attempted to implement image alignment using OpenCV for Android. It worked on one of my test cases but failed on another.
Test case #1 - for which my code worked:
Given two images, for example a scene image and an object image,
I want the output object image to be aligned to the scene image. Check out the final output.
Test case #2 - for which my code didn't work:
Given the scene image and the object image,
I got this feature matching result, and this as my output of warpPerspective.
My question is: can anyone please tell me why I am unable to align the images in the second test case?
Mat mObjectMat = new Mat();
Mat mSceneMat = new Mat();
Mat img3 = mSceneMat.clone();
//Bitmap to Mat
Utils.bitmapToMat(inputImage2, mObjectMat);
Utils.bitmapToMat(inputImage1, mSceneMat);
//rgb to gray
Imgproc.cvtColor(mObjectMat, mObjectMat, Imgproc.COLOR_RGBA2GRAY);
Imgproc.cvtColor(mSceneMat, mSceneMat, Imgproc.COLOR_RGBA2GRAY);
//find interest points/keypoints in an image
MatOfKeyPoint keypoints_object = new MatOfKeyPoint();
MatOfKeyPoint keypoints_scene = new MatOfKeyPoint();
FeatureDetector fd = FeatureDetector.create(FeatureDetector.ORB);
fd.detect(mObjectMat, keypoints_object);
fd.detect(mSceneMat, keypoints_scene);
//extract descriptor
Mat descriptors_object = new Mat();
Mat descriptors_scene = new Mat();
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
extractor.compute(mObjectMat, keypoints_object, descriptors_object);
extractor.compute(mSceneMat, keypoints_scene, descriptors_scene);
//match keypoint descriptors
MatOfDMatch matches = new MatOfDMatch();
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
matcher.match( descriptors_object, descriptors_scene, matches);
List<DMatch> matchesList = matches.toList();
//Calculate max and min distances between keypoints
Double maxDistance = 0.0;
Double minDistance = 100.0;
for (int i = 0; i < descriptors_object.rows(); i++) {
    Double dist = (double) matchesList.get(i).distance;
    if (dist < minDistance) minDistance = dist;
    if (dist > maxDistance) maxDistance = dist;
}
//display
Toast.makeText(getApplicationContext(), "[ Max dist : " + maxDistance + " ] [ Min dist : " + minDistance + " ]", Toast.LENGTH_LONG).show();
//find good matches
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
MatOfDMatch gm = new MatOfDMatch();
//Draw only good matches
for (int i = 0; i < descriptors_object.rows(); i++) {
    if (matchesList.get(i).distance < 3 * minDistance) {
        good_matches.addLast(matchesList.get(i));
    }
}
gm.fromList(good_matches);
//display matches on imageView
Mat opt = new Mat();
Scalar RED = new Scalar(255,0,0);
Scalar GREEN = new Scalar(0,255,0);
MatOfByte drawnMatches = new MatOfByte();
Features2d.drawMatches(mObjectMat,keypoints_object,mSceneMat,keypoints_scene,gm,opt,GREEN, RED, drawnMatches, Features2d.NOT_DRAW_SINGLE_POINTS);
List<KeyPoint> keypoints_objectList = keypoints_object.toList();
List<KeyPoint> keypoints_sceneList = keypoints_scene.toList();
LinkedList<Point> objList = new LinkedList<Point>();
LinkedList<Point> sceneList = new LinkedList<Point>();
MatOfPoint2f obj = new MatOfPoint2f();
MatOfPoint2f scene = new MatOfPoint2f();
//Localize the object & find the keypoints from the good matches
//separate corresponding points for both images
for (int i = 0; i < good_matches.size(); i++) {
    objList.addLast(keypoints_objectList.get(good_matches.get(i).queryIdx).pt);
    sceneList.addLast(keypoints_sceneList.get(good_matches.get(i).trainIdx).pt);
}
obj.fromList(objList);
scene.fromList(sceneList);
//Find homography - perspective transformation between two planes
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC);
//perform perspective warp
Mat imgWarped = new Mat();
Imgproc.warpPerspective(mObjectMat, imgWarped, H, mSceneMat.size());
//blend the warped object image with the scene image
Mat finalImage = new Mat();
Core.add(mSceneMat, imgWarped,finalImage);
//add image to imageView
Bitmap imageMatched = Bitmap.createBitmap(finalImage.cols(), finalImage.rows(), Bitmap.Config.RGB_565);//need to save bitmap
Utils.matToBitmap(finalImage, imageMatched);
imageView1.setImageBitmap(imageMatched);
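One thing worth checking before the warp (a hedged aside: the minimum of 10 is an illustrative threshold, not a magic number): findHomography needs at least 4 correspondences, and with barely more than that, or with mostly wrong matches, it will happily fit a degenerate transform that produces exactly this kind of garbage output.
// guard the warp: skip it when too few good matches survive the filter
if (good_matches.size() < 10) {
    Toast.makeText(getApplicationContext(),
            "Only " + good_matches.size() + " good matches - alignment unreliable",
            Toast.LENGTH_LONG).show();
    return;
}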

Android OpenCV people count

I am developing an Android application to count the number of people in a real-time video. I used OpenCV to detect people but haven't found a way to count them. If anyone knows how to do it, please help me.
Here is the code to detect people in the incoming video frame:
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    List<MatOfPoint> list = new ArrayList<>();
    Mat frame = new Mat();
    Mat gray = new Mat();
    Mat hierarchy = new Mat();
    Mat originalFrame = inputFrame.rgba();
    Core.transpose(originalFrame, mRgbaT);
    Imgproc.resize(mRgbaT, mRgbaF, mRgbaF.size(), 0, 0, 0);
    Core.flip(mRgbaF, originalFrame, 1);
    Imgproc.medianBlur(originalFrame, originalFrame, 3);
    // originalFrame is RGBA, so convert RGBA -> GRAY
    Imgproc.cvtColor(originalFrame, gray, Imgproc.COLOR_RGBA2GRAY);
    HOGDescriptor hog = new HOGDescriptor();
    // Get the default people detector and set it on our HOG descriptor
    MatOfFloat descriptors = HOGDescriptor.getDefaultPeopleDetector();
    hog.setSVMDetector(descriptors);
    MatOfRect locations = new MatOfRect();
    MatOfDouble weights = new MatOfDouble();
    hog.detectMultiScale(gray, locations, weights);
    Point rectPoint1 = new Point();
    Point rectPoint2 = new Point();
    Point fontPoint = new Point();
    if (locations.rows() > 0) {
        List<Rect> rectangles = locations.toList();
        for (Rect rect : rectangles) {
            rectPoint1.x = rect.x;
            rectPoint1.y = rect.y;
            fontPoint.x = rect.x;
            fontPoint.y = rect.y - 4;
            rectPoint2.x = rect.x + rect.width;
            rectPoint2.y = rect.y + rect.height;
            final Scalar rectColor = new Scalar(0, 0, 0);
            // Draw the detection onto the image
            Imgproc.rectangle(originalFrame, rectPoint1, rectPoint2, rectColor, 2);
        }
    }
    frame.release();
    gray.release();
    hierarchy.release();
    int i = list.size();
    list.clear();
    return originalFrame;
}
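Counting is then just the size of the detection list, since each Rect in locations is one detected person in the current frame. A minimal sketch (the overlay position and font are illustrative choices; Imgproc.putText matches the OpenCV 3.x Java API already used by the Imgproc.rectangle call above):
// count this frame's detections and draw the number onto the output frame
List<Rect> people = locations.toList();
int peopleCount = people.size();
Imgproc.putText(originalFrame, "People: " + peopleCount, new Point(10, 30),
        Core.FONT_HERSHEY_SIMPLEX, 1.0, new Scalar(255, 0, 0, 255), 2);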

Performance issues in OpenCV for Android keypoint matching and thresholding using ORB and RANSAC

I recently started developing an app in Android Studio and I just finished writing the code. The accuracy I get is more than satisfactory, but the time taken by the device is a lot. I followed some tutorials on how to monitor performance in Android Studio, and I saw that one small part of my code takes 6 seconds, which is half the time my app takes to display the entire result. I have seen a lot of posts (Java OpenCV - extracting good matches from knnMatch, OpenCV filtering ORB matches on OpenCV/JavaCV) but haven't come across anyone asking about this problem. The OpenCV tutorial at http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html is good, but the RANSAC function in OpenCV for Java takes different arguments for the keypoints compared to C++.
Here is my code
public Mat ORB_detection(Mat Scene_image, Mat Object_image) {
    /* This function is used to find the reference card in the captured image with the help of
     * the reference card saved in the application.
     * Inputs - Captured image (Scene_image), Reference image (Object_image) */
    FeatureDetector orb = FeatureDetector.create(FeatureDetector.DYNAMIC_ORB);
    /* 1.a Keypoint detection for the scene image */
    //convert input to grayscale
    channels = new ArrayList<Mat>(3);
    Core.split(Scene_image, channels);
    Scene_image = channels.get(0);
    //sharpen the image
    Scene_image = unsharpMask(Scene_image);
    MatOfKeyPoint keypoint_scene = new MatOfKeyPoint();
    //convert image to eight-bit, unsigned char
    Scene_image.convertTo(Scene_image, CvType.CV_8UC1);
    orb.detect(Scene_image, keypoint_scene);
    channels.clear();
    /* 1.b Keypoint detection for the object image */
    //convert input to grayscale
    Core.split(Object_image, channels);
    Object_image = channels.get(0);
    channels.clear();
    MatOfKeyPoint keypoint_object = new MatOfKeyPoint();
    Object_image.convertTo(Object_image, CvType.CV_8UC1);
    orb.detect(Object_image, keypoint_object);
    //2. Calculate the descriptors/feature vectors
    //Initialize the ORB descriptor extractor
    DescriptorExtractor orb_descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    Mat Obj_descriptor = new Mat();
    Mat Scene_descriptor = new Mat();
    orb_descriptor.compute(Object_image, keypoint_object, Obj_descriptor);
    orb_descriptor.compute(Scene_image, keypoint_scene, Scene_descriptor);
    //3. Match the descriptors using brute force
    DescriptorMatcher brt_frc = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    brt_frc.match(Obj_descriptor, Scene_descriptor, matches);
    //4. Calculate the max and min distance between keypoints
    float max_dist = 0, min_dist = 100, dist = 0;
    DMatch[] for_calculating = matches.toArray();
    for (int i = 0; i < Obj_descriptor.rows(); i++) {
        dist = for_calculating[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    System.out.print("\nInterval min_dist: " + min_dist + ", max_dist:" + max_dist);
    //-- Use only "good" matches (i.e. whose distance is less than 2.5*min_dist)
    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    double ratio_dist = 2.5;
    ratio_dist = ratio_dist * min_dist;
    int i, iter = matches.toArray().length;
    matches.release();
    for (i = 0; i < iter; i++) {
        if (for_calculating[i].distance <= ratio_dist)
            good_matches.addLast(for_calculating[i]);
    }
    System.out.print("\n done Good Matches");
    /* Necessary type conversion for drawing matches
    MatOfDMatch goodMatches = new MatOfDMatch();
    goodMatches.fromList(good_matches);
    Mat matches_scn_obj = new Mat();
    Features2d.drawKeypoints(Object_image, keypoint_object, new Mat(Object_image.rows(), keypoint_object.cols(), keypoint_object.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawKeypoints(Scene_image, keypoint_scene, new Mat(Scene_image.rows(), Scene_image.cols(), Scene_image.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawMatches(Object_image, keypoint_object, Scene_image, keypoint_scene, goodMatches, matches_scn_obj);
    SaveImage(matches_scn_obj, "drawing_good_matches.jpg");
    */
    if (good_matches.size() <= 6) {
        ph_value = "7";
        System.out.println("Wrong Detection");
        return Scene_image;
    } else {
        //5. RANSAC thresholding for finding the optimum homography
        Mat outputImg = new Mat();
        LinkedList<Point> objList = new LinkedList<Point>();
        LinkedList<Point> sceneList = new LinkedList<Point>();
        List<org.opencv.core.KeyPoint> keypoints_objectList = keypoint_object.toList();
        List<org.opencv.core.KeyPoint> keypoints_sceneList = keypoint_scene.toList();
        //getting the object and scene points from good matches
        for (i = 0; i < good_matches.size(); i++) {
            objList.addLast(keypoints_objectList.get(good_matches.get(i).queryIdx).pt);
            sceneList.addLast(keypoints_sceneList.get(good_matches.get(i).trainIdx).pt);
        }
        good_matches.clear();
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);
        objList.clear();
        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);
        sceneList.clear();
        float RANSAC_dist = (float) 2.0;
        Mat hg = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, RANSAC_dist);
        for (i = 0; i < hg.cols(); i++) {
            String tmp = "";
            for (int j = 0; j < hg.rows(); j++) {
                Point val = new Point(hg.get(j, i));
                tmp = tmp + val.x + " ";
            }
        }
        Mat scene_image_transformed_color = new Mat();
        Imgproc.warpPerspective(original_image, scene_image_transformed_color, hg, Object_image.size(), Imgproc.WARP_INVERSE_MAP);
        processing(scene_image_transformed_color, template_match);
        return outputImg;
    }
}
And this is the part that takes 6 seconds at runtime:
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
double ratio_dist = 2.5;
ratio_dist = ratio_dist * min_dist;
int i, iter = matches.toArray().length;
matches.release();
for (i = 0; i < iter; i++) {
    if (for_calculating[i].distance <= ratio_dist)
        good_matches.addLast(for_calculating[i]);
}
System.out.print("\n done Good Matches");
I was thinking maybe I could write this part of the code in C++ using the NDK, but I wanted to be sure that the language is the problem and not the code itself.
Please don't be too strict, this is my first question! Any criticism is much appreciated!
So the problem was that logcat was giving me false timing results. The lag was due to a huge Gaussian blur later in the code. Instead of System.out.print, I used System.currentTimeMillis, which showed me the bug.
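For anyone hitting the same thing, a minimal sketch of that timing approach (the log tag is illustrative; Log is android.util.Log):
long t0 = System.currentTimeMillis();
// ... section under test, e.g. the good-matches loop or the Gaussian blur ...
long t1 = System.currentTimeMillis();
Log.d("Timing", "section took " + (t1 - t0) + " ms"); // wall-clock time, independent of logcat ordering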

Feature Matching Match Rate Between Two Images

I'm making an app which will match an input image against images from a database.
I'm using this code:
String path = Environment.getExternalStorageDirectory().getAbsolutePath();
Bitmap objectbmp = BitmapFactory.decodeFile(path+"/Sample/Template.jpg");
Bitmap scenebmp = BitmapFactory.decodeFile(path+"/Sample/Input.jpg");
Mat object = new Mat(); //from the database
Mat scene = new Mat(); //user's input image
// convert bitmap to MAT
Utils.bitmapToMat(objectbmp, object);
Utils.bitmapToMat(scenebmp, scene);
//Feature Detection
FeatureDetector orbDetector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor orbextractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
MatOfKeyPoint keypoints_object = new MatOfKeyPoint();
MatOfKeyPoint keypoints_scene = new MatOfKeyPoint();
Mat descriptors_object = new Mat();
Mat descriptors_scene = new Mat();
//Getting the keypoints
orbDetector.detect( object, keypoints_object );
orbDetector.detect( scene, keypoints_scene );
//Compute descriptors
orbextractor.compute( object, keypoints_object, descriptors_object );
orbextractor.compute( scene, keypoints_scene, descriptors_scene );
//Match with Brute Force
MatOfDMatch matches = new MatOfDMatch();
DescriptorMatcher matcher;
matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
matcher.match( descriptors_object, descriptors_scene, matches );
double max_dist = 0;
double min_dist = 100;
List<DMatch> matchesList = matches.toList();
//-- Quick calculation of max and min distances between keypoints
for (int i = 0; i < descriptors_object.rows(); i++) {
    double dist = matchesList.get(i).distance;
    if (dist < min_dist) min_dist = dist;
    if (dist > max_dist) max_dist = dist;
}
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
for (int i = 0; i < descriptors_object.rows(); i++) {
    if (matchesList.get(i).distance <= 3 * min_dist) {
        good_matches.addLast(matchesList.get(i));
    }
}
I am able to produce and count good matches. What I want is to know the match rate between two matched images, like:
Input - Template1 = 35%
Input - Template2 = 12%
...
How to do this?
You could compute a matching rate like goodMatches/totalMatches, which is the accuracy of the matching.
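A minimal sketch of that rate, using the variables from the question's code (the output formatting is illustrative):
// fraction of all matches that survived the "good" filter, as a percentage
int total = matchesList.size();
double matchRate = (total == 0) ? 0.0 : 100.0 * good_matches.size() / total;
System.out.println("Input - Template = " + Math.round(matchRate) + "%");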
Actually there are different ways to do that. The common ones are:
- Cross check: if T1 matches with T2, check that T2 also matches with T1.
- Ratio check: as in SIFT, if the best template matching T1 is T2, consider the second-best match T2_2 and accept the first match only if the ratio between the two match distances is good enough.
- Geometric validation: compute the homography between the templates and discard the matches that don't agree with it.
I've implemented the first two in Java in an Android application (I used ORB as features); a sketch of the third follows the code below.
private List<MatOfDMatch> crossCheck(List<DMatch> matches12, List<DMatch> matches21, List<MatOfDMatch> knn_matches) {
    List<MatOfDMatch> good_matches = new ArrayList<MatOfDMatch>();
    for (int i = 0; i < matches12.size(); i++) {
        DMatch forward = matches12.get(i);
        DMatch backward = matches21.get(forward.trainIdx);
        if (backward.trainIdx == forward.queryIdx)
            good_matches.add(knn_matches.get(i)); //k=2
    }
    return good_matches;
}

private List<MatOfDMatch> ratioCheck(List<MatOfDMatch> knn_matches, float ratio) {
    List<MatOfDMatch> good_matches = new ArrayList<MatOfDMatch>();
    for (int i = 0; i < knn_matches.size(); i++) {
        List<DMatch> subList = knn_matches.get(i).toList();
        if (subList.size() >= 2) {
            Float first_distance = subList.get(0).distance;
            Float second_distance = subList.get(1).distance;
            if ((first_distance / second_distance) <= ratio)
                good_matches.add(knn_matches.get(i));
        }
    }
    return good_matches;
}
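For the third option, geometric validation, a minimal sketch (hedged: the helper name and the 5-pixel reprojection tolerance are illustrative, and it assumes the matched points have already been collected into obj and scene as in the questions above):
private int countGeometricInliers(MatOfPoint2f obj, MatOfPoint2f scene) {
    // estimate the homography, then keep only matches whose reprojected
    // object point lands near its corresponding scene point
    Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3);
    if (H.empty()) return 0;
    MatOfPoint2f projected = new MatOfPoint2f();
    Core.perspectiveTransform(obj, projected, H);
    Point[] p = projected.toArray();
    Point[] s = scene.toArray();
    int inliers = 0;
    for (int i = 0; i < p.length; i++) {
        double dx = p[i].x - s[i].x, dy = p[i].y - s[i].y;
        if (Math.sqrt(dx * dx + dy * dy) <= 5.0) inliers++;
    }
    return inliers;
}
Matches that fail this check can be discarded, and the inlier count itself is another usable numerator for the match rate.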

Android OpenCV FeatureDetection -> runFeatureHomography giving messed up result image after matching

I'm pretty new to OpenCV, and at the moment I'm trying to get object detection running on an Android device. What I basically do is display a camera preview in my app; when I click on it, it captures a picture. This picture is then given to the runFeatureHomography method, which first grabs the second image that the taken image has to be compared to. Then the method finds the keypoints in both pictures, computes their descriptors, and matches them into one Mat called img_matches.
As basic as it can be, I guess.
The object I'm trying to detect at the moment is a kind of card, just like the format of a credit card. The card is blue and has lots of white and yellow text on it. I can only post one link, that's why I can't show pictures of them.
I don't know why, but when I display the result at the end, or save the result as a bitmap to my phone, it always looks kind of like this:
http://oi44.tinypic.com/oaqel0.jpg <-- result image after everything is done.
This shows me that the object I wanted to detect was indeed detected, but I don't know why there is a black background and not the pictures of the cards. Why doesn't it show my two images the way they are, just with all the lines on them?
In my code I'm using these three:
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE);
Here is some of my code:
private void runFeatureHomography(Bitmap image)
{
    Mat img_object = getObjectImage();
    Mat img_scene = newEmptyMat();
    Bitmap myimg = image.copy(Bitmap.Config.ARGB_8888, true);
    Utils.bitmapToMat(myimg, img_scene);
    MatOfKeyPoint keyPoints_object = detectObjectKeyPoints();
    MatOfKeyPoint keyPoints_scene = new MatOfKeyPoint();
    this.detector.detect(img_scene, keyPoints_scene);
    Mat descriptors_object = calculateObjectDescriptor();
    Mat descriptors_scene = newEmptyMat();
    this.extractor.compute(img_scene, keyPoints_scene, descriptors_scene);
    MatOfDMatch matches = new MatOfDMatch();
    matcher.match(descriptors_object, descriptors_scene, matches);
    double min_dist = Double.MAX_VALUE;
    for (int i = 0; i < descriptors_object.rows(); i++)
    {
        double dist = matches.toArray()[i].distance;
        if (dist < min_dist)
        {
            min_dist = dist;
        }
    }
    List<DMatch> good_matches = new ArrayList<DMatch>();
    for (int i = 0; i < descriptors_object.rows(); i++)
    {
        if (matches.toArray()[i].distance <= 3 * min_dist)
        {
            good_matches.add(matches.toArray()[i]);
        }
    }
    System.out.println("4");
    Mat img_matches = newEmptyMat();
    Features2d.drawMatches(
            img_object,
            keyPoints_object,
            img_scene,
            keyPoints_scene,
            new MatOfDMatch(good_matches.toArray(new DMatch[good_matches.size()])),
            img_matches, Scalar.all(-1), Scalar.all(-1), new MatOfByte(),
            Features2d.NOT_DRAW_SINGLE_POINTS);
    List<Point> object = new ArrayList<Point>();
    List<Point> scene = new ArrayList<Point>();
    for (int i = 0; i < good_matches.size(); i++)
    {
        object.add(keyPoints_object.toArray()[good_matches.get(i).queryIdx].pt);
        scene.add(keyPoints_scene.toArray()[good_matches.get(i).trainIdx].pt);
    }
    Mat H = Calib3d.findHomography(
            new MatOfPoint2f(object.toArray(new Point[object.size()])),
            new MatOfPoint2f(scene.toArray(new Point[scene.size()])),
            Calib3d.RANSAC, 3);
    Point[] object_corners = new Point[4];
    object_corners[0] = new Point(0, 0);
    object_corners[1] = new Point(img_object.cols(), 0);
    object_corners[2] = new Point(img_object.cols(), img_object.rows());
    object_corners[3] = new Point(0, img_object.rows());
    MatOfPoint2f scene_corners2f = new MatOfPoint2f();
    Core.perspectiveTransform(new MatOfPoint2f(object_corners), scene_corners2f, H);
    Point[] scene_corners = scene_corners2f.toArray();
    // shift the projected corners right by the object image width, because
    // drawMatches places the scene image to the right of the object image
    Point[] scene_corners_norm = new Point[4];
    scene_corners_norm[0] = new Point(scene_corners[0].x + img_object.cols(), scene_corners[0].y);
    scene_corners_norm[1] = new Point(scene_corners[1].x + img_object.cols(), scene_corners[1].y);
    scene_corners_norm[2] = new Point(scene_corners[2].x + img_object.cols(), scene_corners[2].y);
    scene_corners_norm[3] = new Point(scene_corners[3].x + img_object.cols(), scene_corners[3].y);
    Core.line(img_matches, scene_corners_norm[0], scene_corners_norm[1], new Scalar(0, 255, 0), 4);
    Core.line(img_matches, scene_corners_norm[1], scene_corners_norm[2], new Scalar(0, 255, 0), 4);
    Core.line(img_matches, scene_corners_norm[2], scene_corners_norm[3], new Scalar(0, 255, 0), 4);
    Core.line(img_matches, scene_corners_norm[3], scene_corners_norm[0], new Scalar(0, 255, 0), 4);
    bmp = Bitmap.createBitmap(img_matches.cols(), img_matches.rows(), Bitmap.Config.ARGB_8888);
    Intent resultIntent = new Intent("com.example.capturetest.Result");
    startActivity(resultIntent);
}

private volatile Mat cachedObjectDescriptor = null;
private volatile MatOfKeyPoint cachedObjectKeyPoints = null;
private volatile Mat cachedObjectImage = null;

private Mat calculateObjectDescriptor()
{
    Mat objectDescriptor = this.cachedObjectDescriptor;
    if (objectDescriptor == null)
    {
        Mat objectImage = getObjectImage();
        MatOfKeyPoint objectKeyPoints = detectObjectKeyPoints();
        objectDescriptor = newEmptyMat();
        this.extractor.compute(objectImage, objectKeyPoints, objectDescriptor);
        this.cachedObjectDescriptor = objectDescriptor;
    }
    return objectDescriptor;
}

private MatOfKeyPoint detectObjectKeyPoints()
{
    MatOfKeyPoint objectKeyPoints = this.cachedObjectKeyPoints;
    if (objectKeyPoints == null)
    {
        Mat objectImage = getObjectImage();
        objectKeyPoints = new MatOfKeyPoint();
        this.detector.detect(objectImage, objectKeyPoints);
        this.cachedObjectKeyPoints = objectKeyPoints;
    }
    return objectKeyPoints;
}

private Mat getObjectImage()
{
    Mat objectImage = this.cachedObjectImage;
    if (objectImage == null)
    {
        objectImage = newEmptyMat();
        Bitmap bitmap = ((BitmapDrawable) iv.getDrawable()).getBitmap();
        Bitmap img = bitmap.copy(Bitmap.Config.ARGB_8888, false);
        Utils.bitmapToMat(img, objectImage);
        this.cachedObjectImage = objectImage;
    }
    return objectImage;
}

private Mat newEmptyMat()
{
    return new Mat();
}
After the line matcher.match(descriptors_object, descriptors_scene, matches); I tried converting the three Mats img_object, img_scene and matches to bitmaps and saved them to my Android device just for checking. They all look as they are supposed to, so up to this point everything is fine.
But after this part...
Mat img_matches = newEmptyMat();
Features2d.drawMatches(
        img_object,
        keyPoints_object,
        img_scene,
        keyPoints_scene,
        new MatOfDMatch(good_matches.toArray(new DMatch[good_matches.size()])),
        img_matches, Scalar.all(-1), Scalar.all(-1), new MatOfByte(),
        Features2d.NOT_DRAW_SINGLE_POINTS);
... I tried to convert the Mat img_matches (which is supposed to contain all the information of the two input pictures, if I get it right) to a bitmap and save it on my Android device, but the picture looks like the one in the link above (black pictures with lines instead of card pictures with lines).
Does any of you know what I'm doing wrong here? I seem to be stuck at the moment.
Thanks in advance, guys.
Edit:
Just wanted to let you know that I got the same code running and WORKING as a normal Java program on my desktop, with the picture taken from the webcam there. The result image is displayed absolutely correctly in the desktop program, with cards and lines instead of black and lines ;)
Alright, I found a working way:
Imgproc.cvtColor(img_object, img_object, Imgproc.COLOR_RGBA2RGB);
Imgproc.cvtColor(img_scene, img_scene, Imgproc.COLOR_RGBA2RGB);
It seems that after converting my Bitmaps to Mats I have to use the above two lines to convert them from RGBA to RGB. It also works with RGBA to GRAY if you prefer gray pictures.
It seems the RGBA format does not work in this case.
Hope this helps anybody coming here from Google.
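Applied to the code from the question, the conversion goes right after each Bitmap-to-Mat step (a sketch using the question's variable names):
Utils.bitmapToMat(myimg, img_scene);
// Bitmaps arrive as 4-channel RGBA; drop the alpha channel before drawMatches
Imgproc.cvtColor(img_scene, img_scene, Imgproc.COLOR_RGBA2RGB);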
