I'm using opencv4android 3.2. To find a known object I'm using the findHomography method. When I capture an image where the object is farther away from the camera it gives me a good result, but when I capture an image where the object is closer to the camera I get a bad result.
object to find
result
This is my code:
// Matches current frame's descriptors to template's
descriptorMatcher.match(descriptors, templateDescriptors, matches);
List<DMatch> matchesList = matches.toList();
double maxDistance = 0;
double minDistance = 100;
int rowCount = descriptors.rows();
for (int i = 0; i < rowCount; i++) {
double dist = matchesList.get(i).distance;
if (dist < minDistance) minDistance = dist;
if (dist > maxDistance) maxDistance = dist;
}
List<DMatch> goodMatchesList = new ArrayList<>();
double upperBound = 3 * minDistance;
for (int i = 0; i < rowCount; i++) {
if (matchesList.get(i).distance <= upperBound) {
goodMatchesList.add(matchesList.get(i));
}
}
// Iterate through good matches and put the 2D points of the object (template) and frame (scene) into a list
List<KeyPoint> objKpList = templateKeypoints.toList();
List<KeyPoint> sceneKpList = keypoints.toList();
LinkedList<Point> objList = new LinkedList<>();
LinkedList<Point> sceneList = new LinkedList<>();
for (int i = 0; i < goodMatchesList.size(); i++) {
objList.addLast(objKpList.get(goodMatchesList.get(i).trainIdx).pt);
sceneList.addLast(sceneKpList.get(goodMatchesList.get(i).queryIdx).pt);
}
MatOfPoint2f obj = new MatOfPoint2f();
MatOfPoint2f scene = new MatOfPoint2f();
obj.fromList(objList);
scene.fromList(sceneList);
// Calculate the homography
Mat H = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, 3);
...
try {
Core.perspectiveTransform(objCorners, sceneCorners, H);
Point p1 = new Point(sceneCorners.get(0, 0));
Point p2 = new Point(sceneCorners.get(1, 0));
Point p3 = new Point(sceneCorners.get(2, 0));
Point p4 = new Point(sceneCorners.get(3, 0));
final List<Point> source = new ArrayList<Point>();
source.add(p1);
source.add(p2);
source.add(p3);
source.add(p4);
Mat startM = Converters.vector_Point2f_to_Mat(source);
flip(startM, startM, -1);
Mat result = warp(rgba, startM);
Bitmap bmppp = Bitmap.createBitmap(result.width(), result.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(result, bmppp);
keypoints.release();
SaveImage(result);
} catch (CvException e) {
e.printStackTrace();
Log.e(TAG, "perspectiveTransform returned an assertion
failed error.");
return false;
}
Why don't I get a good result from findHomography?
Thanks.
This is the template used to find the card object.
I was having the same problem. It sounds silly, but have you tried switching the srcPoints with the destinationPoints?
I'm using homography to control the navigation of a robot. I originally had:
M, mask = cv2.findHomography(robotPoints, visionPoints)
However, by switching the robotPoints with the visionPoints it worked. I was reverse mapping the planes.
M, mask = cv2.findHomography(visionPoints, robotPoints)
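Applied to the Java code in the question, that would just mean swapping the two point sets passed to findHomography. A sketch only, reusing the question's variable names:
// If the homography maps the wrong way round, swap the source and destination sets:
Mat H = Calib3d.findHomography(scene, obj, Calib3d.RANSAC, 3);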
Related
I am new to OpenCV and I am trying to write Android code using OpenCV to compare two images for similarity. For my example I loaded two images from the Drawable folder, as you can see in the code, but I am not able to complete the code in order to get a percentage of matching between the images and to set a threshold or something. So please, can anyone help me solve my issue? Thank you in advance. Below is my code:
public class MainActivity extends AppCompatActivity {
//TextView textView;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
System.loadLibrary("opencv_java3");
// textView =(TextView)findViewById(R.id.textView);
// Imgcodecs.imread() expects a file path, so it cannot load a drawable this way;
// decode the drawables to Bitmaps and convert them to Mats instead.
Bitmap bmp1 = BitmapFactory.decodeResource(getResources(), R.drawable.image1);
Mat m1 = new Mat();
Utils.bitmapToMat(bmp1, m1);
Bitmap bmp2 = BitmapFactory.decodeResource(getResources(), R.drawable.image2);
Mat m2 = new Mat();
Utils.bitmapToMat(bmp2, m2);
//Imgproc.cvtColor(m1, m1, Imgproc.COLOR_RGB2BGRA);
//Imgproc.cvtColor(m2, m2, Imgproc.COLOR_RGB2BGRA);
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
detector.detect(m1, keypoints1);
FeatureDetector detector2 = FeatureDetector.create(FeatureDetector.ORB);
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
detector2.detect(m2, keypoints2);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
Mat descriptors1 = new Mat();
extractor.compute(m1, keypoints1, descriptors1);
DescriptorExtractor extractor2 = DescriptorExtractor.create(DescriptorExtractor.ORB);
Mat descriptors2 = new Mat();
extractor2.compute(m2, keypoints2, descriptors2);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1, descriptors2, matches);
List<DMatch> matchesList = matches.toList();
double maxDistance = 0;
double minDistance = 1000;
int rowCount = matchesList.size();
for (int i = 0; i < rowCount; i++)
{
double dist = matchesList.get(i).distance;
if (dist < minDistance) minDistance = dist;
if (dist > maxDistance) maxDistance = dist;
}
List<DMatch> goodMatchesList = new ArrayList<DMatch>();
double upperBound = 1.6 * minDistance;
for (int i = 0; i < rowCount; i++)
{
if (matchesList.get(i).distance <= upperBound)
{
goodMatchesList.add(matchesList.get(i));
}
}
MatOfDMatch goodMatches = new MatOfDMatch();
goodMatches.fromList(goodMatchesList);
}
}
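One way to finish this, at the end of onCreate() once goodMatches is built, is to score similarity as the share of good matches among all matches. A minimal sketch; the 60% threshold is an arbitrary example value you would tune for your images:
// Percentage of matches that survived the distance filter.
double matchPercentage = matchesList.isEmpty()
        ? 0.0
        : 100.0 * goodMatchesList.size() / matchesList.size();
Log.d("MATCH", "similarity: " + matchPercentage + "%");
boolean similarEnough = matchPercentage >= 60.0; // example threshold, tune empirically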
I am trying to use this code http://androiderstuffs.blogspot.com/2016/06/detecting-rectangle-using-opencv-java.html to detect a card. But instead of putting the card on a plane surface, I will be holding the card in my hand in front of my head. The problem is, it's not detecting the card rectangle. I am new to OpenCV. See my code below; this code will highlight all found rectangles in the output image. The problem is, it never finds the card rectangle.
private void findRectangleOpen(Bitmap image) throws Exception {
Mat tempor = new Mat();
Mat src = new Mat();
Utils.bitmapToMat(image, tempor);
Imgproc.cvtColor(tempor, src, Imgproc.COLOR_BGR2RGB);
Mat blurred = src.clone();
Imgproc.medianBlur(src, blurred, 9);
Mat gray0 = new Mat(blurred.size(), CvType.CV_8U), gray = new Mat();
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
List<Mat> blurredChannel = new ArrayList<Mat>();
blurredChannel.add(blurred);
List<Mat> gray0Channel = new ArrayList<Mat>();
gray0Channel.add(gray0);
MatOfPoint2f approxCurve;
int maxId = -1;
for (int c = 0; c < 3; c++) {
int ch[] = {c, 0};
Core.mixChannels(blurredChannel, gray0Channel, new MatOfInt(ch));
int thresholdLevel = 1;
for (int t = 0; t < thresholdLevel; t++) {
if (t == 0) {
Imgproc.Canny(gray0, gray, 10, 20, 3, true); // true?
Imgproc.dilate(gray, gray, new Mat(), new Point(-1, -1), 1); // 1?
} else {
Imgproc.adaptiveThreshold(gray0, gray, thresholdLevel,
Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C,
Imgproc.THRESH_BINARY,
(src.width() + src.height()) / 200, t);
}
Imgproc.findContours(gray, contours, new Mat(),
Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
int i = 0;
for (MatOfPoint contour : contours) {
MatOfPoint2f temp = new MatOfPoint2f(contour.toArray());
double area = Imgproc.contourArea(contour);
approxCurve = new MatOfPoint2f();
Imgproc.approxPolyDP(temp, approxCurve,
Imgproc.arcLength(temp, true) * 0.02, true);
if (approxCurve.total() == 4 && area >= 200 && area <= 40000) {
double maxCosine = 0;
List<Point> curves = approxCurve.toList();
for (int j = 2; j < 5; j++) {
double cosine = Math.abs(angle(curves.get(j % 4),
curves.get(j - 2), curves.get(j - 1)));
maxCosine = Math.max(maxCosine, cosine);
}
if (maxCosine < 0.3) {
Imgproc.drawContours(src, contours, i, new Scalar(255, 0, 0), 3);
Bitmap bmp;
bmp = Bitmap.createBitmap(src.cols(), src.rows(),
Bitmap.Config.ARGB_8888);
Utils.matToBitmap(src, bmp);
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bmp.compress(Bitmap.CompressFormat.PNG, 100, stream);
byte[] byteArray = stream.toByteArray();
//File origFile = getFileForSaving();
savePhoto(byteArray);
bmp.recycle();
}
}
i++;
}
}
}
}
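// Returns the cosine of the angle at vertex p0 formed by the rays p0->p1 and p0->p2.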
private static double angle(org.opencv.core.Point p1, org.opencv.core.Point p2, org.opencv.core.Point p0) {
double dx1 = p1.x - p0.x;
double dy1 = p1.y - p0.y;
double dx2 = p2.x - p0.x;
double dy2 = p2.y - p0.y;
return (dx1 * dx2 + dy1 * dy2) / Math.sqrt((dx1 * dx1 + dy1 * dy1) * (dx2 * dx2 + dy2 * dy2) + 1e-10);
}
Sample output image is:
Output of detecting rectangle
I'm an OpenCV learner. I was trying image comparison. I have used OpenCV 2.4.13.3.
I have these two images 1.jpg and cam1.jpg.
When I use the following code in OpenCV:
File sdCard = Environment.getExternalStorageDirectory();
String path1, path2;
path1 = sdCard.getAbsolutePath() + "/1.jpg";
path2 = sdCard.getAbsolutePath() + "/cam1.jpg";
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.BRIEF);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
Mat img1 = Highgui.imread(path1);
Mat img2 = Highgui.imread(path2);
Mat descriptors1 = new Mat();
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
detector.detect(img1, keypoints1);
extractor.compute(img1, keypoints1, descriptors1);
//second image
// Mat img2 = Imgcodecs.imread(path2);
Mat descriptors2 = new Mat();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
detector.detect(img2, keypoints2);
extractor.compute(img2, keypoints2, descriptors2);
//matcher image descriptors
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1,descriptors2,matches);
// Filter matches by distance
MatOfDMatch filtered = filterMatchesByDistance(matches);
int total = (int) matches.size().height;
int Match= (int) filtered.size().height;
Log.d("LOG", "total:" + total + " Match:"+Match);
The method filterMatchesByDistance:
static MatOfDMatch filterMatchesByDistance(MatOfDMatch matches){
List<DMatch> matches_original = matches.toList();
List<DMatch> matches_filtered = new ArrayList<DMatch>();
int DIST_LIMIT = 30;
// Check all the matches distance and if it passes add to list of filtered matches
Log.d("DISTFILTER", "ORG SIZE:" + matches_original.size() + "");
for (int i = 0; i < matches_original.size(); i++) {
DMatch d = matches_original.get(i);
if (Math.abs(d.distance) <= DIST_LIMIT) {
matches_filtered.add(d);
}
}
Log.d("DISTFILTER", "FIL SIZE:" + matches_filtered.size() + "");
MatOfDMatch mat = new MatOfDMatch();
mat.fromList(matches_filtered);
return mat;
}
Log output:
total:122 Match:30
As we can see from the log, Match is 30. But as we can see, both images have the same visual element (in). How can I get Match=90 using OpenCV? It would be great if somebody could help with a code snippet. If it is not possible using OpenCV, then what other alternatives can we look for?
But as we can see, both images have the same visual element (in).
So we should compare not the whole images, but the "same visual element" in them. You can improve the Match value further if you compare not the "template" and "camera" images themselves, but "template" and "camera" images processed the same way (converted to binary black/white, for example). For example, try to find the blue square (the background of the template logo) in both the "template" and "camera" images and compare those squares (Regions Of Interest). The code may be something like this:
Bitmap bmImageTemplate = <get your template image Bitmap>;
Bitmap bmTemplate = findLogo(bmImageTemplate); // process template image
Bitmap bmImage = <get your camera image Bitmap>;
Bitmap bmLogo = findLogo(bmImage); // process camera image same way
compareBitmaps(bmTemplate, bmLogo);
where
private Bitmap findLogo(Bitmap sourceBitmap) {
Bitmap roiBitmap = null;
Mat sourceMat = new Mat(sourceBitmap.getWidth(), sourceBitmap.getHeight(), CvType.CV_8UC3);
Utils.bitmapToMat(sourceBitmap, sourceMat);
Mat roiTmp = sourceMat.clone();
final Mat hsvMat = new Mat();
sourceMat.copyTo(hsvMat);
// convert mat to HSV format for Core.inRange()
Imgproc.cvtColor(hsvMat, hsvMat, Imgproc.COLOR_RGB2HSV);
Scalar lowerb = new Scalar(85, 50, 40); // lower color border for BLUE
Scalar upperb = new Scalar(135, 255, 255); // upper color border for BLUE
Core.inRange(hsvMat, lowerb, upperb, roiTmp); // select only blue pixels
// find contours
List<MatOfPoint> contours = new ArrayList<>();
List<Rect> squares = new ArrayList<>();
Imgproc.findContours(roiTmp, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
// find appropriate bounding rectangles
for (MatOfPoint contour : contours) {
MatOfPoint2f areaPoints = new MatOfPoint2f(contour.toArray());
RotatedRect boundingRect = Imgproc.minAreaRect(areaPoints);
double rectangleArea = boundingRect.size.area();
// test min ROI area in pixels
if (rectangleArea > 400) {
Point rotated_rect_points[] = new Point[4];
boundingRect.points(rotated_rect_points);
Rect rect = Imgproc.boundingRect(new MatOfPoint(rotated_rect_points));
double aspectRatio = rect.width > rect.height ?
(double) rect.height / (double) rect.width : (double) rect.width / (double) rect.height;
if (aspectRatio >= 0.9) {
squares.add(rect);
}
}
}
Mat logoMat = extractSquareMat(roiTmp, getBiggestSquare(squares));
roiBitmap = Bitmap.createBitmap(logoMat.cols(), logoMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(logoMat, roiBitmap);
return roiBitmap;
}
The method extractSquareMat() just extracts the Region of Interest (the logo) from the whole image:
public static Mat extractSquareMat(Mat sourceMat, Rect rect) {
Mat squareMat = null;
int padding = 50;
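// Assumes rect is larger than 2*padding in both dimensions; a smaller rect
// would give truncatedRect a negative size and make the Mat constructor throw.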
if (rect != null) {
Rect truncatedRect = new Rect((int) rect.tl().x + padding, (int) rect.tl().y + padding,
rect.width - 2 * padding, rect.height - 2 * padding);
squareMat = new Mat(sourceMat, truncatedRect);
}
return squareMat ;
}
and compareBitmaps() is just a wrapper for your code:
private void compareBitmaps(Bitmap bitmap1, Bitmap bitmap2) {
Mat mat1 = new Mat(bitmap1.getWidth(), bitmap1.getHeight(), CvType.CV_8UC3);
Utils.bitmapToMat(bitmap1, mat1);
Mat mat2 = new Mat(bitmap2.getWidth(), bitmap2.getHeight(), CvType.CV_8UC3);
Utils.bitmapToMat(bitmap2, mat2);
compareMats(mat1, mat2);
}
Your code as a method:
private void compareMats(Mat img1, Mat img2) {
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.BRIEF);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
Mat descriptors1 = new Mat();
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
detector.detect(img1, keypoints1);
extractor.compute(img1, keypoints1, descriptors1);
//second image
// Mat img2 = Imgcodecs.imread(path2);
Mat descriptors2 = new Mat();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
detector.detect(img2, keypoints2);
extractor.compute(img2, keypoints2, descriptors2);
//matcher image descriptors
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1,descriptors2,matches);
// Filter matches by distance
MatOfDMatch filtered = filterMatchesByDistance(matches);
int total = (int) matches.size().height;
int Match= (int) filtered.size().height;
Log.d("LOG", "total:" + total + " Match:" + Match);
}
static MatOfDMatch filterMatchesByDistance(MatOfDMatch matches){
List<DMatch> matches_original = matches.toList();
List<DMatch> matches_filtered = new ArrayList<DMatch>();
int DIST_LIMIT = 30;
// Check all the matches distance and if it passes add to list of filtered matches
Log.d("DISTFILTER", "ORG SIZE:" + matches_original.size() + "");
for (int i = 0; i < matches_original.size(); i++) {
DMatch d = matches_original.get(i);
if (Math.abs(d.distance) <= DIST_LIMIT) {
matches_filtered.add(d);
}
}
Log.d("DISTFILTER", "FIL SIZE:" + matches_filtered.size() + "");
MatOfDMatch mat = new MatOfDMatch();
mat.fromList(matches_filtered);
return mat;
}
As a result, for the resized (scaled to 50%) images saved from your question, the result is:
D/DISTFILTER: ORG SIZE:237
D/DISTFILTER: FIL SIZE:230
D/LOG: total:237 Match:230
NB! This is a quick and dirty example, just to demonstrate the approach for the given template only.
P.S. getBiggestSquare() can be like this (based on comparison by area):
public static Rect getBiggestSquare(List<Rect> squares) {
Rect biggestSquare = null;
if (squares != null && squares.size() >= 1) {
Rect square;
double maxArea;
int ixMaxArea = 0;
square = squares.get(ixMaxArea);
maxArea = square.area();
for (int ix = 1; ix < squares.size(); ix++) {
square = squares.get(ix);
if (square.area() > maxArea) {
maxArea = square.area();
ixMaxArea = ix;
}
}
biggestSquare = squares.get(ixMaxArea);
}
return biggestSquare;
}
I have tried the code below using the OpenCV functions cvtColor, Canny and HoughLinesP, but I am not able to get an accurate result, or it does not work in some cases.
private boolean opencvProcessCount(Uri picFileUri) {
hairCount = 0;
totalC = 0;
//Log.e(">>>>>>>>","count " + picFileUri);
try {
InputStream iStream = getContentResolver().openInputStream(picFileUri);
byte[] im = getBytes(iStream);
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inDither = true;
opt.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap image = BitmapFactory.decodeByteArray(im, 0, im.length);
Mat mYuv = new Mat();
Utils.bitmapToMat(image, mYuv);
Mat mRgba = new Mat();
Imgproc.cvtColor(mYuv, mRgba, Imgproc.COLOR_RGB2GRAY, 4);
Imgproc.Canny(mRgba, mRgba, 80, 90);
Mat lines = new Mat();
int threshold = 80;
int minLineSize = 30;
int lineGap = 100;
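// rho = 1 px, theta = 1 degree; each row of 'lines' holds one detected
// segment as {x1, y1, x2, y2}.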
Imgproc.HoughLinesP(mRgba, lines, 1, Math.PI/180, threshold, minLineSize, lineGap);
for (int x = 0; x < lines.rows(); x++)
{
double[] vec = lines.get(x, 0);
double x1 = vec[0],
y1 = vec[1],
x2 = vec[2],
y2 = vec[3];
Point start = new Point(x1, y1);
Point end = new Point(x2, y2);
double dx = x1 - x2;
double dy = y1 - y2;
double dist = Math.sqrt (dx*dx + dy*dy);
totalC ++;
Log.e(">>>>>>>>","dist " + dist);
if(dist>300.d)
{
hairCount ++;
// Log.e(">>>>>>>>","count " + x);
Imgproc.line(mRgba, start, end, new Scalar(0, 255, 0, 255), 5);
} // draw only those lines that have length greater than 300
}
Log.e(">>>>>>>>",totalC+" out hairCount " + hairCount);
// Imgproc.
} catch (Throwable e) {
// Log.e(">>>>>>>>","count " + e.getMessage());
e.printStackTrace();
}
return false;
}
Below are sample images for counting hair:
I think you will find this article interesting:
http://www.cs.ubc.ca/~lowe/papers/aij87.pdf
They take a 2D bitmap, apply a Canny edge detector and then regroup segments of the different edges based on how likely they are to belong to the same object, in this case hair (and they give criteria for such regrouping).
I think you could use this to know how many objects there are in the image, and if the image contains only hair, you'd have a count for the hair.
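As a rough illustration of that direction (not the paper's algorithm, just a first approximation in OpenCV for Android, assuming a single-channel Mat named gray holding the input image): run Canny and count the connected edge components, treating each component as one grouped object.
// Count groups of connected edge pixels; touching hairs merge into one group,
// so this only approximates the per-object grouping described in the paper.
Mat edges = new Mat();
Imgproc.Canny(gray, edges, 80, 90);
Mat labels = new Mat();
int groups = Imgproc.connectedComponents(edges, labels) - 1; // minus the background label
Log.d("HAIR", "edge groups: " + groups);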
I'm writing code that performs a projective transform (https://math.stackexchange.com/questions/296794/finding-the-transform-matrix-from-4-projected-points-with-javascript) on an image using 4 user-selected points.
In doing so I have to use very large arrays (300k+ indices). When I run it, my phone screen blacks out and after a few seconds gives the message " has stopped working." However, it continues to print Log messages to my AndroidStudio logcat containing information about the array that it's working on, letting me know that it's still running.
I'm not very knowledgeable about computational efficiency, so I might be making some fatal mistake involving matrix manipulation. The part of the code that it breaks on is the final portion of transform(), and the logcat prints the "rounded" values while the phone shows a "stopped working" message.
I've included the relevant code. Any advice on anything I'm doing wrong (related or not) is appreciated as this is my first experience with Android development.
I'm more or less just following the transformation provided in the math.stackexchange link.
public class projTransform extends Activity{
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_proj_transform);
Intent parent_intent = getIntent();
Uri imgUri = parent_intent.getData();
pointArray = parent_intent.getDoubleArrayExtra("points");
//dimens[0-3]: width, height, minX, minY
dimens = parent_intent.getIntArrayExtra("dimens");
transform(imgUri,pointArray, dimens);
}
//A*B = C
private static double[][] mMult(double[][] A, double[][] B){
int mA = A.length;
int nA = A[0].length;
int mB = B.length;
int nB = B[0].length;
if (nA != mB) throw new RuntimeException("Illegal matrix dimensions.");
double[][] C = new double[mA][nB];
for (int i = 0; i < mA; i++)
for (int j = 0; j < nB; j++)
for (int k = 0; k < nA; k++)
C[i][j] += A[i][k] * B[k][j];
return C;
}
//A*x = y
private static double[] mMult(double[][] A, double[] x){
int m = A.length;
int n = A[0].length;
if (x.length != n) throw new RuntimeException("Illegal matrix dimensions.");
double[] y = new double[m];
for (int i = 0; i < m; i++)
for (int j = 0; j < n; j++)
y[i] += A[i][j] * x[j];
return y;
}
//https://en.wikipedia.org/wiki/Invertible_matrix#Inversion_of_3.C3.973_matrices
//A^(-1)
private static double[][] mInvert3x3(double[][] X){
double[][] Y = new double[3][3];
double A,B,C,D,E,F,G,H,I,detX;
A = X[1][1]*X[2][2] - X[1][2]*X[2][1];
B = -(X[1][0]*X[2][2] - X[1][2]*X[2][0]);
C = X[1][0]*X[2][1] - X[1][1]*X[2][0];
D = -(X[0][1]*X[2][2] - X[0][2]*X[2][1]);
E = X[0][0]*X[2][2] - X[0][2]*X[2][0];
F = -(X[0][0]*X[2][1] - X[0][1]*X[2][0]);
G = X[0][1]*X[1][2] - X[0][2]*X[1][1];
H = -(X[0][0]*X[1][2] - X[0][2]*X[1][0]);
I = X[0][0]*X[1][1] - X[0][1]*X[1][0];
detX = X[0][0]*A + X[0][1]*B + X[0][2]*C;
Y[0][0] = A/detX;
Y[1][0] = B/detX;
Y[2][0] = C/detX;
Y[0][1] = D/detX;
Y[1][1] = E/detX;
Y[2][1] = F/detX;
Y[0][2] = G/detX;
Y[1][2] = H/detX;
Y[2][2] = I/detX;
return Y;
}
private void transform(Uri data, double[] sourceArray, int[] dimens){
if (data != null) {
try {
InputStream imgStream = getContentResolver().openInputStream(data);
tempBmp = BitmapFactory.decodeStream(imgStream);
} catch (FileNotFoundException e) {
e.printStackTrace();
}
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedbmp = Bitmap.createBitmap(tempBmp, 0, 0, tempBmp.getWidth(), tempBmp.getHeight(), matrix, true);
crop = new int[dimens[0] * dimens[1]];
rotatedbmp.getPixels(crop, 0, dimens[0], dimens[2], dimens[3], dimens[0], dimens[1]);
//map for original bmp
double[][] sourceMap = tMap(sourceArray);
Log.e("sourceMap",toString(sourceMap));
//map for transformed bmp
double[] destArray = new double[] {0,0,0,destHeight,destWidth,0,destHeight,destWidth};
double[][] destMap = tMap(destArray);
Log.e("destMap",toString(destMap));
// C = B*[A^(-1)]
double[][] finalMap = mMult(sourceMap, mInvert3x3(destMap));
Log.e("width", String.valueOf(dimens[0]));
int[] destPixels = new int[destHeight*destWidth];
int[] temp;
for(int i=0; i<destHeight-1; i++){
for(int j=0; j<destWidth-1; j++){
temp = pixelMap(finalMap,i,j);
Log.e("rounded", String.valueOf(temp[0]) + ", " + String.valueOf(temp[1]));
destPixels[(i*destWidth)+j] = crop[(temp[0]*dimens[0]) + temp[1]];
}
}
display(destPixels, destWidth, destHeight);
}
}
//produces mapping matrix given corners
//A,B in SE post
private double[][] tMap(double[] pointArray){
double[][] tempArray = new double[3][3];
tempArray[0][0] = pointArray[0];
tempArray[1][0] = pointArray[1];
tempArray[0][1] = pointArray[2];
tempArray[1][1] = pointArray[3];
tempArray[0][2] = pointArray[4];
tempArray[1][2] = pointArray[5];
for(int i=0; i<3; i++){
tempArray[2][i] = 1;
}
//Log.e("tempArray",toString(tempArray));
double[] tempVector = new double[] {pointArray[6], pointArray[7], 1};
//Log.e("tempVector",toString(tempVector));
double[][] inverted = mInvert3x3(tempArray);
//Log.e("inverted",toString(inverted));
double[] coef = mMult(inverted, tempVector);
//Log.e("coef",toString(coef));
double[][] tran = new double[3][3];
for(int i=0; i<3; i++){
for (int j=0; j<3; j++){
tran[i][j] = tempArray[i][j]*coef[j];
}
}
return tran;
}
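// Maps a destination pixel (x, y) through the homography and rounds to the nearest source pixel.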
private int[] pixelMap(double[][] map, double x, double y){
double[] tempVector = new double[] {x,y,1};
double[] primeVector = mMult(map,tempVector);
return new int[] {(int) Math.round(primeVector[0]/primeVector[2]), (int) Math.round(primeVector[1]/primeVector[2])};
}
You don't need to divide by the determinant; the adjoint instead of the inverse is sufficient. Apart from that, you're creating a lot of objects.
The tight loop is the one with the two nested loops around the pixelMap call. Try to inline pixelMap there, and try to avoid creating new arrays. Use separate variables for all the individual coordinates. Use the fact that you know the dimensions.
x = finalMap[0][0]*i + finalMap[0][1]*j + finalMap[0][2];
and so on. If you want to, you can move finalMap into a set of 9 local variables to help the optimizer. Also disable the logging line, since caching and transferring that much log output takes considerable resources. Do one log line when you start the loop, and another when you're done. In the end, the loop would look somewhat like this:
for(int i=0; i<destHeight-1; i++){
for(int j=0; j<destWidth-1; j++){
double x = finalMap00*i + finalMap01*j + finalMap02;
double y = finalMap10*i + finalMap11*j + finalMap12;
double z = finalMap20*i + finalMap21*j + finalMap22;
int xi = (int)Math.round(x/z), yi = (int)Math.round(y/z);
destPixels[(i*destWidth)+j] = crop[(xi*srcWidth) + yi];
}
}
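Hoisting finalMap into those nine locals before the loop could look like this (a sketch matching the names used above):
// Read the 3x3 map once so the loop body touches no arrays except crop and destPixels.
final double finalMap00 = finalMap[0][0], finalMap01 = finalMap[0][1], finalMap02 = finalMap[0][2];
final double finalMap10 = finalMap[1][0], finalMap11 = finalMap[1][1], finalMap12 = finalMap[1][2];
final double finalMap20 = finalMap[2][0], finalMap21 = finalMap[2][1], finalMap22 = finalMap[2][2];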