Rotation and translation at a specific point - android

I need to rotate an image around a specific point and then translate it to a specific point on the screen.
The point I want to rotate it around is the center of the picture.
The translation works, but the rotation doesn't.
I have a Vector of bitmap and I'm using Canvas and Matrix.
Code:
for (Bitmap image : images) {
    // rotation
    double angle = Math.toDegrees(rotation);
    Matrix matrix = new Matrix();
    matrix.postRotate((float) angle, finalMap.getWidth() / 2 - 1, 0);
    // transform
    matrix.setTranslate(position.x, position.y);
    // print on screen
    c.drawBitmap(image, matrix, paint);
}

Try changing your rotation/translation calls like this (in exactly this order):
matrix.setTranslate(position.x,position.y);
matrix.preRotate((float)angle,finalMap.getWidth()/2-1,0);
The reason it doesn't work the way you have it currently is that your setTranslate() call discards the rotation you did before it and replaces it with a translation applied to an identity matrix. Matrix methods starting with the "set" prefix reset the matrix and apply only that transformation, as if nothing had happened before them.
If you want to read more, this is a useful answer: https://stackoverflow.com/a/8197896/2464728
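Applied to the loop from the question, the fix might look like this (a sketch; it pivots around the bitmap's actual center, since that is the stated goal, rather than the (width/2 - 1, 0) point used above):

for (Bitmap image : images) {
    Matrix matrix = new Matrix();
    // Place the bitmap at its target position first...
    matrix.setTranslate(position.x, position.y);
    // ...then pre-rotate about the bitmap's center, so the rotation is
    // applied before the translation.
    matrix.preRotate((float) Math.toDegrees(rotation),
            image.getWidth() / 2f, image.getHeight() / 2f);
    c.drawBitmap(image, matrix, paint);
}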

What does 'the rotation doesn't' mean? How is it failing to do what you expect?
I would initially wonder about the use of 0 for the y rotation point.

Related

Getting a scale factor of a Bitmap after rotating it with a matrix

I've got the following problem, which I have been trying to solve all day.
I load a Bitmap picture with its corresponding height and width into an ImageView.
Then I use a matrix for moving, scaling and rotating the image.
For scaling I'm using the postScale method.
For moving I'm using the postTranslate method.
Only for Rotating I'm using the preRotate method.
Now I need to get the factor I scaled the image with, because I later need this factor in another program.
Using the MSCALE_X and MSCALE_Y values of the matrix only works as long as I haven't rotated the image. Once I rotate it, the scale values no longer match (because the matrix has been multiplied according to the formula shown in the API).
Now my Question is:
How can I still get the scale factor of the image after rotating it?
For the rotation factor (degrees) it is simple, because I store it in an extra variable which is incremented/decremented while rotating.
But this does not work for the scale factor: if I first scale an image down to 50% and then scale it back up to 150%, I applied a factor of 3 in that step, but the overall scaling factor relative to the original is only 1.5.
Another example: even if I don't rescale the picture at all, its scale values change when I rotate it.
//Edit:
Finally I solved the problem on my own :) (doing a bit of math, I figured out something interesting, or let's say obvious).
Here my solution:
I figured out that the values MSCALE_X and MSCALE_Y are calculated using the cosine function (yeah, the basic math...). At 0° rotation, MSCALE_X and MSCALE_Y hold the correct scalingWidth and scalingHeight; at 90° and 270° they become 0, and at 180° they equal the scalingWidth/Height multiplied by -1.
This led me to the idea of writing the following function:
The function saves the current matrix in a new matrix, then rotates that copy back to the start state (0°). Now we can read the untouched MSCALE_X and MSCALE_Y values from the copy, which are the correct scaling factors.
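A minimal sketch of what that function could look like (my reconstruction, since the original code isn't shown here; it assumes the rotation was applied with preRotate, so undoing it with preRotate(-angle) leaves pure scale in the matrix):

// Returns {scaleX, scaleY} by undoing the rotation on a copy of the
// matrix before reading MSCALE_X / MSCALE_Y.
private float[] getScale(Matrix matrix, float currentRotationDegrees) {
    Matrix copy = new Matrix(matrix);
    // Rotate the copy back to the start state (0 degrees) so the scale
    // entries are no longer mixed with cosine terms.
    copy.preRotate(-currentRotationDegrees);
    float[] values = new float[9];
    copy.getValues(values);
    return new float[] { values[Matrix.MSCALE_X], values[Matrix.MSCALE_Y] };
}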
I had the same problem. This is a simple way and the logic is sound (in my mind).
For: xScale==yScale
float scale = matrix.mapRadius(1f) - matrix.mapRadius(0f);
For: xScale != yScale
float[] points={0f,0f,1f,1f};
matrix.mapPoints(points);
float scaleX=points[2]-points[0];
float scaleY=points[3]-points[1];
If you are not translating, you may be able to get away with just a single 1f vector/point. I've tested the xScale == yScale (mapRadius) variant and it seems to work.
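A related variant (my own addition, not from the answer above) uses mapVectors, which ignores translation entirely and keeps working when the matrix contains a rotation:

float[] vecs = { 1f, 0f,    // unit vector along x
                 0f, 1f };  // unit vector along y
matrix.mapVectors(vecs);
// The length of each mapped unit vector is the scale along that axis,
// regardless of any rotation or translation in the matrix.
float scaleX = (float) Math.hypot(vecs[0], vecs[1]);
float scaleY = (float) Math.hypot(vecs[2], vecs[3]);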
I had a similar problem with an app I'm writing. I couldn't see an obvious and simple solution, so I just created a RectF object that had the same initial coords as the bitmap. Then, every time I adjusted the matrix, I'd apply the transformation to the RectF as well (using Matrix.mapRect()). This worked perfectly for me. It also allowed me to keep track of the absolute position of the edges of the bitmap.
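A minimal sketch of that idea (class and method names are my own):

import android.graphics.Matrix;
import android.graphics.RectF;

// Keeps the transformed bounds of a bitmap in sync with its matrix.
class BitmapBoundsTracker {
    private final RectF original;                 // untransformed bitmap bounds
    private final RectF transformed = new RectF();

    BitmapBoundsTracker(int bitmapWidth, int bitmapHeight) {
        original = new RectF(0, 0, bitmapWidth, bitmapHeight);
    }

    // Call this after every matrix change (drag, zoom, rotate).
    RectF update(Matrix matrix) {
        matrix.mapRect(transformed, original);
        return transformed;                       // absolute position of the bitmap's edges
    }
}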

Billboarding in Android OpenGL ES 1.0

I'm trying to get billboarding to work, but having trouble with the last step.
After following these directions from NeHe's tutorials (http://nehe.gamedev.net/data/articles/article.asp?article=19) I have my look, right, and up vectors, and I have already translated the modelview matrix to the centerpoint of the billboard by using glTranslatef().
float[] m = { right.x, right.y, right.z, 0f,
              up.x,    up.y,    up.z,    0f,
              look.x,  look.y,  look.z,  0f,
              pos.x,   pos.y,   pos.z,   1f }; // pos is the centerpoint position
gl.glMultMatrixf(m, 0);
When I try to create a multiplication matrix out of these like so, the billboards are displayed all over the place in the wrong positions and orientations.
I guess my problem is that I don't know how to correctly create and multiply the matrix. I tried doing this instead, but then half the lines (the ones that need to rotate counter-clockwise) are rotated in the wrong direction:
//normal is the vector that the billboard faces before any manipulations.
float angle = look.getAngleDeg(normal); //returns angle between 0 and 180.
gl.glRotatef(angle, up.x,up.y,up.z);
Got it, using my second method. Calculating the angle between vectors (arccos of the dot product) can only give an angle between 0 and 180, so half the time you want to negate the angle so the rotation is in the opposite direction.
This is easy to check...since I already have the right vector, I can just check if the angle between the right vector and the normal is acute. If it's acute, then you want to negate the original angle.
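In code, that sign check might look roughly like this (a sketch; Vec3 stands in for whatever vector class the look/right/up/normal vectors use, and all vectors are assumed normalized):

static class Vec3 { float x, y, z; }

static float dot(Vec3 a, Vec3 b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Returns the signed angle (in degrees) to pass to glRotatef around the
// up vector, given the billboard's default facing direction 'normal'.
static float billboardAngle(Vec3 look, Vec3 right, Vec3 normal) {
    // arccos of the dot product only gives 0..180 degrees...
    float angle = (float) Math.toDegrees(Math.acos(dot(look, normal)));
    // ...so if the normal makes an acute angle with the right vector,
    // rotate the other way.
    if (dot(right, normal) > 0f) {
        angle = -angle;
    }
    return angle;
}

// Usage: gl.glRotatef(billboardAngle(look, right, normal), up.x, up.y, up.z);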

Android imageview matrix operations

I used an ImageView to display an image.
I set the scale type to ScaleType.MATRIX.
There are options for scaling (zooming), dragging and so on. All of this is done with matrix manipulations, mainly postTranslate and postScale.
My problem is that the image can be dragged so far that it is no longer on the screen.
So how do I find out how much it has been dragged?
In brief:
I have two matrices (android.graphics.Matrix): one from the initial state and the other from after the drag and zoom. From these 2 matrices I want to calculate how much the image moved in the x-direction and y-direction.
What matrix operation do I need here?
Thank you
You can get a float array of the matrix's values using the getValues function. The values at indices Matrix.MTRANS_X (2) and Matrix.MTRANS_Y (5) are the translations in the x and y directions.
http://developer.android.com/reference/android/graphics/Matrix.html
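For example, a rough sketch of computing the drag offset from the two matrices mentioned in the question (startMatrix and currentMatrix are assumed names):

float[] start = new float[9];
float[] current = new float[9];
startMatrix.getValues(start);
currentMatrix.getValues(current);

// How far the image has been translated since the initial state.
float dx = current[Matrix.MTRANS_X] - start[Matrix.MTRANS_X];
float dy = current[Matrix.MTRANS_Y] - start[Matrix.MTRANS_Y];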
You need to apply the matrix to get the transformed coordinates. The Matrix class has a few methods that will give you the transformed coordinates.
Look at
mapPoints
mapVectors
mapRect
Once you have the points (original and transformed) you can easily calculate the distance between them using the Euclidean distance.
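For instance, a sketch that maps the image's top-left corner and measures how far it moved (the corner choice and the matrix name are just examples):

// Original point (the image's top-left corner) and a copy to transform.
float[] original = { 0f, 0f };
float[] mapped = original.clone();
currentMatrix.mapPoints(mapped);

// Euclidean distance between the original and transformed point.
float dx = mapped[0] - original[0];
float dy = mapped[1] - original[1];
float distance = (float) Math.hypot(dx, dy);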

3D rotation while object being translated

I've been playing with Android animation framework and I found the following 3D rotation sample code:
http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/animation/Transition3d.html
http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/animation/Rotate3dAnimation.html
It does pretty much what I want, but I want the ImageView to rotate while it's being translated from point A to point B, and it should rotate around its own center (which is moving) instead of the center of the container or screen.
Does anyone know how to do that?
-Rachel
Well, it's pretty close to what you posted. Essentially you're multiplying the rotation matrix by the translation matrix; that's what happens under the covers. Android hides that detail from you with its API:
Matrix matrix = transformation.getMatrix();
matrix.rotate( rotateX, rotateY );
matrix.postTranslate( transX, transY );
Rotating first and then translating will rotate the image around its own axis before translating it.
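Note that android.graphics.Matrix itself has no two-argument rotate() method, so the snippet above is best read as pseudocode. A hedged sketch in the spirit of the Rotate3dAnimation sample, rotating the view around its own moving center while sliding it from A to B (class and field names are my own, not the sample's):

import android.graphics.Camera;
import android.graphics.Matrix;
import android.view.animation.Animation;
import android.view.animation.Transformation;

// Rotates a view around the Y axis about its own center while it slides
// by (fromXDelta, fromYDelta) -> (toXDelta, toYDelta) relative to its
// layout position, TranslateAnimation-style.
class RotateWhileTranslating extends Animation {
    private final float fromXDelta, fromYDelta, toXDelta, toYDelta;
    private final float degrees, centerX, centerY;
    private final Camera camera = new Camera();

    RotateWhileTranslating(float fromXDelta, float fromYDelta,
                           float toXDelta, float toYDelta,
                           float degrees, float centerX, float centerY) {
        this.fromXDelta = fromXDelta; this.fromYDelta = fromYDelta;
        this.toXDelta = toXDelta; this.toYDelta = toYDelta;
        this.degrees = degrees;
        this.centerX = centerX; this.centerY = centerY;
    }

    @Override
    protected void applyTransformation(float t, Transformation transformation) {
        Matrix matrix = transformation.getMatrix();

        camera.save();
        camera.rotateY(degrees * t);   // rotation for this frame
        camera.getMatrix(matrix);
        camera.restore();

        // Pivot the rotation around the view's own center...
        matrix.preTranslate(-centerX, -centerY);
        // ...and put that center back at its current position along A -> B.
        matrix.postTranslate(centerX + fromXDelta + (toXDelta - fromXDelta) * t,
                             centerY + fromYDelta + (toYDelta - fromYDelta) * t);
    }
}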

How to use an OpenCV rotation and translation vector with OpenGL ES in Android?

I'm working on a basic augmented reality application on Android. What I have done so far is detect a square with OpenCV and then, using cvFindExtrinsicCameraParams2(), calculate a rotation and translation vector. For this I used 4 object points, which are just the corners of a square around (0,0,0), and the 4 corners of the square in the image.
This yields a pretty good rotation and translation matrix. I also calculated the rotation matrix with cvRodrigues2(), since using this is easier than the rotation vector. As long as I use these to draw some points in the image, everything works fine. My next step, however, is to pass these vectors and the matrix back to Java and then use them with OpenGL to draw a square in an OpenGLView. The square should be exactly around the square in the image, which is displayed behind the OpenGLView.
My problem is that I cannot find the correct way of using the rotation matrix and translation vector in OpenGL. I started off with exactly the same object points as used for the OpenCV functions. Then I applied the rotation matrix and translation vector in pretty much every possible way I could think of. Sadly none of these approaches produce a result which is anywhere near what I hoped for. Can anyone tell me how to use them correctly?
So far the "closest" result I have gotten was when randomly multiplying the whole matrix by -1. But most of the time the squares still look mirrored or rotated by 180 degrees. So I guess it was just a lucky hit, not the right approach.
Okay, after some more testing I finally managed to get it to work. While I don't understand why... it does 'work'. For anyone who needs to do this in the future, here is my solution.
float rv[3];     // the rotation vector
float rotMat[9]; // rotation matrix
float tv[3];     // translation vector

rv[1] = -1.0f * rv[1];
rv[2] = -1.0f * rv[2];
// Convert the rotation vector into a matrix here.

// Complete matrix ready to use for OpenGL
float RTMat[] = { rotMat[0], rotMat[3], rotMat[6], 0.0f,
                  rotMat[1], rotMat[4], rotMat[7], 0.0f,
                  rotMat[2], rotMat[5], rotMat[8], 0.0f,
                  tv[0],     -tv[1],    -tv[2],    1.0f };
As genpfault said in his comment, everything needs to be transposed since OpenGL needs column-major order. (Thanks for the comment, I had seen that page earlier already.) Furthermore, the y and z rotation angles as well as the y and z translations need to be multiplied by -1. This is what I find a bit weird. Why only those and not the x values too?
This works as it should, I guess. But the corners don't match exactly. I guess this is caused by some wrong OpenGLView configuration. So even though I am still not 100% happy with my solution, I guess it is the answer to my question.
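On the Java/OpenGL ES 1.0 side, loading such a 16-float column-major matrix could look roughly like this (a sketch; rtMat is assumed to hold the RTMat values passed up from native code):

// Inside GLSurfaceView.Renderer#onDrawFrame(GL10 gl)
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glMultMatrixf(rtMat, 0);   // apply the pose computed by OpenCV
// ... draw the square's vertices here ...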
Pandoro's method really works! In case someone is wondering "how to convert the rotation vector into a rotation matrix", here's how I did it. By the way, I've used these in OpenGL 2, not ES.
// use the rotation vector generated from OpenCV's cvFindExtrinsicCameraParams2()
float rv[] = { rotation->data.fl[0], rotation->data.fl[1], rotation->data.fl[2] };
// use the translation vector generated from OpenCV's cvFindExtrinsicCameraParams2()
float tv[] = { translation->data.fl[0], translation->data.fl[1], translation->data.fl[2] };

float rm[9];
// rotation matrix
CvMat* rotMat = cvCreateMat(3, 3, CV_32FC1);
// rotation vectors can be converted to a 3-by-3 rotation matrix
// by calling cvRodrigues2() - Source: O'Reilly Learning OpenCV
cvRodrigues2(rotation, rotMat, NULL);
for (int i = 0; i < 9; i++) {
    rm[i] = rotMat->data.fl[i];
}

rv[1] = -1.0f * rv[1];
rv[2] = -1.0f * rv[2];

// Complete matrix ready to use for OpenGL
float RTMat[] = { rm[0], rm[3], rm[6], 0.0f,
                  rm[1], rm[4], rm[7], 0.0f,
                  rm[2], rm[5], rm[8], 0.0f,
                  tv[0], -tv[1], -tv[2], 1.0f };
Good luck!
