When calling Matrix.postScale(sx, sy, px, py), the matrix gets scaled and also translated (depending on the given pivot point px, py). That makes this method well suited for zooming into images, because I can easily keep one specific point in focus.
The Android docs describe the method like this:
Postconcats the matrix with the specified scale. M' = S(sx, sy, px, py) * M
At first glance this seems ridiculous, because M is supposed to be a 3x3 matrix. Digging around, I found out that Android uses a 4x4 matrix for its computations internally (while only exposing a 3x3 matrix in its API). Since this code is native (C/C++), I'm having a hard time understanding what is actually happening.
What I actually want to know: how can I apply this kind of scaling (around a focus point) to the 3x3 Matrix that I can access from my Java code?
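For reference, the pivot variant of the scale is just the composition T(px, py) * S(sx, sy) * T(-px, -py) post-concatenated onto M, so it can be reproduced with the plain 3x3 android.graphics.Matrix calls alone. A minimal sketch (the values are made up):

import android.graphics.Matrix;

Matrix m = new Matrix();
// ...any transformations already applied to m...

float sx = 2f, sy = 2f;     // scale factors
float px = 100f, py = 50f;  // the focus point to keep fixed

// equivalent to m.postScale(sx, sy, px, py):
m.postTranslate(-px, -py);  // move the focus point to the origin
m.postScale(sx, sy);        // scale about the origin
m.postTranslate(px, py);    // move the focus point back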
I have an OpenGL ES 1.x Android 1.5 app that shows a square with a perspective projection in the center of the screen.
I need to move the camera (NOT THE SQUARE) when the user moves a finger on the screen. For example, if the user moves the finger to the right, the camera must be moved to the left; it must look as if the user is moving the square.
I need to do it without translating the square. The square must always stay at the OpenGL position (0, 0, -1).
I DON'T WANT to rotate the camera around the square; what I want is to move the camera from side to side. Code examples are welcome, my OpenGL skills are very low, and I can't find good examples for this on Google.
I know that I must use this function: public static void gluLookAt (GL10 gl, float eyeX, float eyeY, float eyeZ, float centerX, float centerY, float centerZ, float upX, float upY, float upZ), but I don't understand where and how to get the values for the parameters. Because of this, I would appreciate code examples for doing this.
For example:
I have a cube at the position (0, 0, -1). I want my camera to point at the cube. I tried this: GLU.gluLookAt(gl, 0, 0, 2, 0, 0, 0, 0, 0, 1);, but the cube is not on the screen; I just don't understand what I'm doing wrong.
First of all, you have to understand that in OpenGL there are no distinct model and view matrices; there is only a combined modelview matrix. So OpenGL doesn't care (or even know) whether you translate the camera (what is a camera, anyway?) or the object, which means your requirement not to move the square is entirely artificial. It may well be a valid requirement, and the distinction between model and view transformations is often very practical; just don't think that translating the square is any different from translating the camera from OpenGL's point of view.
Likewise, you don't necessarily need to use gluLookAt. Like glOrtho, glFrustum or gluPerspective, this function just modifies the currently selected matrix (usually the modelview matrix), no different from the glTranslate, glRotate or glScale functions. The gluLookAt function comes in handy when you want to position a classical camera, but its functionality can also be achieved by calls to glTranslate and glRotate without problems, and sometimes (depending on your requirements) this is even easier than artificially mapping your view parameters onto gluLookAt parameters.
Now to your problem, which is indeed solvable quite easily without gluLookAt: what you want to do is move the camera in a direction parallel to the screen plane, which is equivalent to moving the camera in the x-y-plane in view space (or camera space, if you want). And this in turn is equivalent to moving the scene in the opposite direction in the x-y-plane in view space.
So all that needs to be done is
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x, y, 0.0f);
//camera setup...
Where (x, y) is the movement vector determined from the touch events, appropriately scaled (for example, try dividing the touch deltas by the screen dimensions). After this glTranslate come whatever other camera or scene transformations you already have (be it gluLookAt or just some glTranslate/glRotate/glScale calls). Just make sure that the glTranslate(x, y, ...) is the first transformation applied to the modelview matrix after setting it to identity, since we want to move in view space.
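A minimal sketch of that wiring, assuming the touch handler lives in a GLSurfaceView subclass that shares panX/panY with the renderer (the names, signs, and scaling divisors are illustrative and may need tuning):

private float panX = 0f, panY = 0f; // accumulated movement, view-space units
private float lastX, lastY;         // previous touch position, in pixels

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_MOVE) {
        // scale raw pixel deltas down to view-space units
        panX += (event.getX() - lastX) / getWidth();
        panY -= (event.getY() - lastY) / getHeight(); // screen y grows downward
    }
    lastX = event.getX();
    lastY = event.getY();
    return true;
}

// in the renderer:
public void onDrawFrame(GL10 gl) {
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glTranslatef(panX, panY, 0f); // the view-space translation from above
    // ...camera setup and drawing as before...
}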
So you don't even need gluLookAt. From your other questions I know your code already looks something like
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x, y, z);
glRotatef(...);
...
So all you need to do is plug the x and y values determined from the touch movement into the first glTranslate call (or add them to already existing x and y values), since multiple translations are perfectly commutative.
For more insight into OpenGL's transformation pipeline (which is definitely needed before progressing further), you may also look at the answers to this question.
EDIT: If you indeed want to use gluLookAt (be it instead of or in addition to the above-mentioned translation), here are a few words about how it works. It defines a camera using three 3D vectors (each passed in as 3 consecutive values). First comes the camera's position (in your case (0, 0, 2)), then the point at which the camera looks (in your case (0, 0, 0), though (0, 0, 1) or (0, 0, -42) would result in the same camera, since only the direction matters). And last comes an up-vector, defining the approximate up-direction of the camera (which is further orthogonalized by gluLookAt to construct a proper orthogonal camera frame).
But since the up-vector in your case is the z-axis, which is also the negative viewing direction, this results in a singular matrix. You probably want the y-axis as up-direction, which would mean a call to
gluLookAt(0,0,2, 0,0,0, 0,1,0);
which is in turn equivalent to a simple
glTranslate(0, 0, -2);
since you use the negative z-axis as viewing direction, which is also OpenGL's default.
I've got the following problem, which I've been trying to solve all day.
I load a Bitmap with its corresponding height and width into an ImageView.
Then I use a matrix for moving, scaling, and rotating the image.
For scaling I'm using the postScale method.
For moving I'm using the postTranslate method.
Only for rotating am I using the preRotate method.
Now I need to get the factor I scaled the image with, because I later need this factor in another program.
Using the MSCALE_X and MSCALE_Y values of the matrix only works as long as I haven't done any rotation. Once I rotate the image, the scale values no longer fit (because the matrix gets multiplied according to the formula shown in the API docs).
Now my Question is:
How can I still get the scale factor of the image after rotating it?
For the rotation factor (degrees) it is simple, because I store it in an extra variable which is incremented/decremented while rotating.
But for the scale factor this does not work: if I first scale an image down to 50% and then scale it back up to 150%, I applied a factor of 3 in the second step, but the overall scaling factor relative to the original is only 1.5.
Another example: even if I don't rescale the picture at all, its scale values change when I rotate it.
//Edit:
Finally I solved the problem on my own :) (doing a bit of math and then figuring out something interesting, or let's say obvious).
Here my solution:
I figured out that the values MSCALE_X and MSCALE_Y are computed using the cosine function (yeah, basic math...). At 0° rotation they contain the correct scalingWidth and scalingHeight; at 90° and 270° they become 0; and at 180° they contain the scalingWidth/Height multiplied by -1.
This led me to the idea of writing the following function:
This function saves the current matrix into a new matrix, then rotates that new matrix back to the start state (0°). Now we can read the untouched MSCALE_X and MSCALE_Y values from the matrix (which are the correct scaling factors again).
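The function itself didn't survive into this post, but based on the description it presumably looked something like this (rotationDegrees is the separately tracked angle mentioned above):

import android.graphics.Matrix;

private float[] getScaleFactors(Matrix matrix, float rotationDegrees) {
    Matrix copy = new Matrix(matrix);   // don't touch the original
    copy.postRotate(-rotationDegrees);  // rotate back to the 0° state
    float[] v = new float[9];
    copy.getValues(v);
    // with the rotation undone, these hold the plain scale factors again
    return new float[] { v[Matrix.MSCALE_X], v[Matrix.MSCALE_Y] };
}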
I had the same problem. This is a simple way and the logic is sound (in my mind).
For: xScale==yScale
float scale = matrix.mapRadius(1f) - matrix.mapRadius(0f);
For: xScale != yScale
float[] points={0f,0f,1f,1f};
matrix.mapPoints(points);
float scaleX=points[2]-points[0];
float scaleY=points[3]-points[1];
If you are not translating, you may be able to get away with mapping just the single 1f vector/point. I've tested the xScale==yScale (mapRadius) variant and it seems to work.
I had a similar problem with an app I'm writing. I couldn't see an obvious and simple solution, so I just created a RectF object that had the same initial coords as the bitmap. Then, every time I adjusted the matrix, I'd apply the transformation to the RectF as well (using Matrix.mapRect()). This worked perfectly for me. It also allowed me to keep track of the absolute position of the edges of the bitmap.
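A minimal sketch of that idea (the names are illustrative); note that Matrix.mapRect() produces the bounding box of the mapped rectangle, so after a rotation it tracks the box around the bitmap rather than the bitmap outline itself:

RectF bounds = new RectF(0, 0, bitmap.getWidth(), bitmap.getHeight());
RectF mapped = new RectF();

// after every matrix adjustment, recompute the transformed bounds
matrix.mapRect(mapped, bounds);

// mapped.left, mapped.top, etc. now give the absolute edge positions,
// and mapped.width() / bounds.width() recovers the scale factor as long
// as the matrix contains no rotation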
I was just going through the documentation on developer.android.com, and while reading the Canvas class I found this method named scale. I searched for its documentation and found the following:
public void scale (float sx, float sy)
Since: API Level 1
Preconcat the current matrix with the specified scale.
Parameters
sx The amount to scale in X
sy The amount to scale in Y
What matrix are they talking about here? How is the matrix associated with the canvas, and what difference does it make whether I scale my canvas or not?
Canvas makes a lot of native calls under the covers and delegates its work to GL. I believe there is a default Matrix already associated with it. If you want in-depth knowledge of what's happening I'd recommend looking through the source code for the graphics stuff.
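A small sketch of the effect, assuming a custom View with a paint field: every draw call is run through the canvas's current matrix, so scaling the canvas scales everything drawn afterwards:

@Override
protected void onDraw(Canvas canvas) {
    canvas.save();                        // remember the current matrix
    canvas.scale(2f, 2f);                 // preconcat a 2x scale
    canvas.drawRect(0, 0, 50, 50, paint); // lands on screen as 100x100
    canvas.restore();                     // back to the unscaled matrix
}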
I'm using Matrix to scale and rotate Bitmaps. Now I'm wondering what the difference between preconcat & postconcat is, or more precisely the difference between:
postRotate
preRotate
setRotate
From what I could figure out so far, setRotate always overwrites the whole matrix, while with preRotate and postRotate I can apply multiple changes to a matrix (e.g. scaling + rotation). However, using either postRotate or preRotate didn't produce different results in the cases where I used them.
The answer to your question isn't really specific to Android; it's a graphics and math question. There's lots of theory in this answer--you've been warned! For a superficial answer to your question, skip to the bottom. Also, because this is such a long-winded tirade, I might have a typo or two making things unclear. I apologize in advance if that's the case.
In computer graphics, we can represent pixels (or in 3D, vertices) as vectors. If your screen is 640x480, here's a 2D vector for the point in the middle of your screen (forgive my shoddy markup):
[320]
[240]
[ 1]
I'll explain why the 1 is important later. Transformations are often represented using matrices because it's then very simple (and very efficient) to chain them together, like you mentioned. To scale the point above by a factor of 1.5, you can left-multiply it by the following matrix:
[1.5 0 0]
[ 0 1.5 0]
[ 0 0 1]
You'll get this new point:
[480]
[360]
[ 1]
Which represents the original point, scaled by 1.5 relative to the corner of your screen (0, 0). This is important: scaling is always done with respect to the origin. If you want to scale with some other point as your center (such as the middle of a sprite), you need to "wrap" the scale in translations to and from the origin. Here's the matrix to translate our original point to the origin:
[1 0 -320]
[0 1 -240]
[0 0 1]
Which yields:
[320*1 + 1*-320]   [0]
[240*1 + 1*-240] = [0]
[      1*1      ]  [1]
You'll recognize the above as the identity matrix with the displacement coordinates slapped in the upper-right corner. That's why the 1 (the "homogeneous coordinate") is necessary: to make room for these coordinates, thus making it possible to translate using multiplication. Otherwise it would have to be represented by matrix addition, which is more intuitive to humans, but would make graphics cards even more complicated than they already are.
Now, matrix multiplication generally isn't commutative, so when "adding" a transformation (by multiplying your matrix) you need to specify whether you're left-multiplying or right-multiplying. The difference it makes is what order your transformations are chained in. By right-multiplying your matrix (using preRotate()) you're indicating that the rotation step should happen before all the other transformations that you've just asked for. This might be what you want, but it usually isn't.
Often, it doesn't matter. If you only have one transformation, for example, it never matters :) Sometimes your transformations can happen in either order with the same effect, such as uniform scaling and rotation; my linear algebra is rusty, but I believe the multiplication actually is commutative in this case because a uniform scale matrix is just a multiple of the identity, which commutes with everything. But really, just think about it: if I rotate some picture 10 degrees clockwise and then scale it to 200%, it looks the same as if I scaled it first, then rotated it. (With a non-uniform scale, the two orders generally differ.)
If you were doing some weirder compound transformations, you'd begin to notice a discrepancy. My advice is to stick with postRotate().
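To make the order difference concrete, here is a small sketch with android.graphics.Matrix where pre and post visibly diverge (a translation combined with a rotation, checked via mapPoints):

Matrix a = new Matrix();
a.setTranslate(100, 0);
a.postRotate(90);        // A = R * T: rotate AFTER translating

Matrix b = new Matrix();
b.setTranslate(100, 0);
b.preRotate(90);         // B = T * R: rotate BEFORE translating

float[] pa = {0, 0};
a.mapPoints(pa);         // pa becomes {0, 100}

float[] pb = {0, 0};
b.mapPoints(pb);         // pb becomes {100, 0}: the rotation is a no-op at the origin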
I answered this question yesterday, but today I felt something was wrong, so I'm correcting the answer here:
matrix: float[] values = {1.2f, 0.5f, 30, 0.5f, 1.2f, 30, 0, 0, 1};
matrix2: float[] values2 = {1f, 0, 0, 0, 1f, 0, 0, 0, 1}; // the identity matrix, i.e. no transformation applied
Let's say our two matrices hold the values above.
1. When we apply the transformation below to matrix:
matrix.preTranslate(-50, -50);
it is equivalent to applying the following sequence of transformations to matrix2:
matrix2.postTranslate(-50, -50);
matrix2.postSkew(0.5f/1.2f,0.5f/1.2f);// note here
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
2. When we apply the transformation below:
matrix.preRotate(50);
it is equivalent to applying the following sequence to matrix2:
matrix2.postRotate(50);
matrix2.postSkew(0.5f/1.2f,0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
3. When we apply the transformation below:
matrix.preScale(1.3f, 1.3f);
it is equivalent to applying the following sequence to matrix2:
matrix2.postScale(1.3f,1.3f);
matrix2.postSkew(0.5f/1.2f,0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
4. When we apply the transformation below:
matrix.preSkew(0.4f, 0.4f);
it is equivalent to applying the following sequence to matrix2:
matrix2.postSkew(0.4f,0.4f);
matrix2.postSkew(0.5f/1.2f,0.5f/1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);
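A quick way to check the first equivalence above is to build both matrices with android.graphics.Matrix and compare their values (a sketch; the same pattern works for the other three cases):

Matrix matrix = new Matrix();
matrix.setValues(new float[] {1.2f, 0.5f, 30, 0.5f, 1.2f, 30, 0, 0, 1});
matrix.preTranslate(-50, -50);

Matrix matrix2 = new Matrix(); // starts out as the identity
matrix2.postTranslate(-50, -50);
matrix2.postSkew(0.5f / 1.2f, 0.5f / 1.2f);
matrix2.postScale(1.2f, 1.2f);
matrix2.postTranslate(30, 30);

float[] v1 = new float[9], v2 = new float[9];
matrix.getValues(v1);
matrix2.getValues(v2);
// v1 and v2 should agree up to floating point error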
I am working on a basic augmented reality application on Android. What I have done so far is detect a square with OpenCV and then, using cvFindExtrinsicCameraParams2(), calculate a rotation and translation vector. For this I used 4 object points, which are just the corners of a square around (0,0,0), and the 4 corners of the square in the image.
This yields me a pretty good rotation and translation matrix. I also calculated the rotation matrix with cvRodrigues2(), since using it is easier than using the rotation vector. As long as I use these to draw some points in the image, everything works fine. However, my next step is to pass these vectors and the matrix back to Java and then use them with OpenGL to draw a square in an OpenGLView. The square should sit exactly around the square in the image, which is displayed behind the OpenGLView.
My problem is that I cannot find the correct way of using the rotation matrix and translation vector in OpenGL. I started off with exactly the same object points as used for the OpenCV functions. Then I applied the rotation matrix and translation vector in pretty much every possible way I could think of. Sadly, none of these approaches produced a result that was anywhere near what I hoped for. Can anyone tell me how to use them correctly?
So far the "closest" result I got was when randomly multiplying the whole matrix by -1. But most of the time the squares still look mirror-inverted or rotated by 180 degrees. So I guess it was just a lucky hit, not the right approach.
Okay, after some more testing I finally managed to get it to work. While I don't understand it... it does 'work'. For anyone who needs to do this in the future, here is my solution.
float rv[3];     // the rotation vector
float rotMat[9]; // rotation matrix
float tv[3];     // translation vector

rv[1] = -1.0f * rv[1]; rv[2] = -1.0f * rv[2];
// Convert the rotation vector into a matrix here.

// Complete matrix ready to use for OpenGL
float RTMat[] = {rotMat[0], rotMat[3], rotMat[6], 0.0f,
                 rotMat[1], rotMat[4], rotMat[7], 0.0f,
                 rotMat[2], rotMat[5], rotMat[8], 0.0f,
                 tv[0],     -tv[1],    -tv[2],    1.0f};
As genpfault said in his comment, everything needs to be transposed, since OpenGL needs column-major order. (Thanks for the comment, I saw that page earlier already.) Furthermore, the y and z rotation components as well as the y and z translation need to be multiplied by -1. This is what I find a bit weird. Why only those and not the x values too? (Presumably because negating y and z amounts to a 180° rotation about the x axis, which maps OpenCV's camera frame, y down and z forward, onto OpenGL's, y up and z toward the viewer.)
This works as it should, I guess. But the corners don't match exactly; I suspect this is caused by some wrong OpenGLView configuration. So even though I am still not 100% happy with my solution, I guess it is the answer to my question.
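For completeness, loading such a matrix in OpenGL ES 1.x from Java would presumably look like this (assuming RTMat has been passed back from native code as a float[16]):

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadMatrixf(RTMat, 0); // already column-major, as built above
// ...draw the square at its original object-space coordinates...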
Pandoro's method really works! In case someone is wondering how to convert the rotation vector into a rotation matrix, here's how I did it. By the way, I used these in OpenGL 2, not ES.
// use the rotation vector generated from OpenCV's cvFindExtrinsicCameraParams2()
float rv[] = {rotation->data.fl[0], rotation->data.fl[1], rotation->data.fl[2]};
// use the translation vector generated from OpenCV's cvFindExtrinsicCameraParams2()
float tv[] = {translation->data.fl[0], translation->data.fl[1], translation->data.fl[2]};

// negate the y and z rotation components BEFORE the Rodrigues conversion
// (as in Pandoro's answer above) and write them back into the CvMat
rv[1] = -1.0f * rv[1]; rv[2] = -1.0f * rv[2];
rotation->data.fl[1] = rv[1];
rotation->data.fl[2] = rv[2];

float rm[9];
// rotation matrix
CvMat* rotMat = cvCreateMat(3, 3, CV_32FC1);
// a rotation vector can be converted to a 3-by-3 rotation matrix
// by calling cvRodrigues2() - Source: O'Reilly Learning OpenCV
cvRodrigues2(rotation, rotMat, NULL);
for (int i = 0; i < 9; i++) {
    rm[i] = rotMat->data.fl[i];
}

// complete (transposed, column-major) matrix ready to use with OpenGL
float RTMat[] = {rm[0], rm[3], rm[6], 0.0f,
                 rm[1], rm[4], rm[7], 0.0f,
                 rm[2], rm[5], rm[8], 0.0f,
                 tv[0], -tv[1], -tv[2], 1.0f};
Good luck!