Difference between Camera.translate and Matrix.preTranslate or Matrix.postTranslate? - android

We use Camera to do 3D transformations on a Canvas. We usually rotate the Camera, get its Matrix, and then translate that matrix. But Camera also has a translate method, and the results of the two approaches are different.
My question is: what is the difference between Camera.translate and Matrix.preTranslate or Matrix.postTranslate?

The reason there are both is that matrix multiplication must be done in a certain order to achieve the proper result (as you may already know).
The sequence of translations/rotations/scales is applied in the reverse of the order in which you write them.
So if you do something like this:
Camera.rotate(15, 0, 0);
Camera.scale(.5f, .5f, .5f);
Camera.translate(70, 70, 70);
You're first translating by (70, 70, 70), then scaling by 50% in all directions, then rotating 15 degrees about the X axis.
So Matrix has a pre and post translate (well, pre and post everything), because maybe you actually want to rotate first by 15 degrees, then translate it, and only then scale it.
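For example, a minimal sketch with android.graphics.Matrix (the numbers just echo the ones above):
Matrix a = new Matrix();
a.setRotate(15);
a.preTranslate(70, 70);   // a = R * T: points are translated first, then rotated
Matrix b = new Matrix();
b.setRotate(15);
b.postTranslate(70, 70);  // b = T * R: points are rotated first, then translated
Mapping the same points through a and b gives different results purely because of where the translation lands in the multiplication.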
So that covers pre versus post translate. The reason Camera also has a plain rotate and translate is for people who already know how this works (like me!), so I never use Matrix, or Camera for that matter, because I can do my rotations and translations directly on the Canvas. You can too, as long as you remember that translations, scales, and rotates are applied in reverse order.
Knowing this also gives you more power. You can chain a sequence of ten transformations without wrapping each one in its own Matrix object. For example, say you want a swing motion that swings outward and also rotates about its center to simulate centrifugal force: instead of multiple rotates and translates spread across several Matrix objects passed into one another, you can simply issue a series of .translate(), .rotate(), and .scale() calls, as sketched below.
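A rough sketch of that swing done directly on a Canvas (pivotX, pivotY, armLength, swingAngle, spinAngle, bitmap and paint are all illustrative names, not from any real project):
canvas.save();
canvas.translate(pivotX, pivotY);   // position the swing's pivot
canvas.rotate(swingAngle);          // swing the whole arm outward
canvas.translate(0, armLength);     // move out along the arm to the object's center
canvas.rotate(spinAngle);           // spin the object about its own center
canvas.drawBitmap(bitmap, -bitmap.getWidth() / 2f, -bitmap.getHeight() / 2f, paint);
canvas.restore();
Remember the reverse-order rule: the bitmap is effectively spun about its own center first, then pushed out along the arm, then swung, then placed at the pivot.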
This information is especially useful if you ever do 3D graphics, because that's when these matrices give people headaches.
I hope this helps!

The result is visually the same whether you, for example, leave the canvas alone and rotate the camera by 90 degrees, or keep the camera still and rotate the canvas it looks at by -90 degrees.

Related

Android Camera2 frame horizontal inversion

I want to horizontally invert frames coming to a Surface object made from a TextureView. What I can do is set a transformation matrix on the TextureView instance, where I postScale by -1 for x and 1 for y (leaving y unchanged), and then postTranslate dx by the full width of the view, leaving dy unchanged (0F).
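A minimal sketch of that transform, assuming a TextureView called textureView (the name is just for illustration):
Matrix mirror = new Matrix();
mirror.postScale(-1f, 1f);                         // flip horizontally
mirror.postTranslate(textureView.getWidth(), 0f);  // shift the flipped frame back into view
textureView.setTransform(mirror);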
But the problem appears when I rotate my device by 90 degrees (held horizontally) with screen rotation off: the image gets rotated by 180 degrees, and of course that makes perfect sense, because the x and y axes did not change.
How can this be solved? Is it possible to play with the translation matrix in such a way as to resolve this problem? Or maybe with OpenGL ES tools?
P.S.: strangely, the rotation is twice the device rotation itself, e.g. when I rotate the device by 90 degrees, the preview is rotated by 180.
P.P.S.: I tried to invert the preview using Matrix's setPolyToPoly method... and got exactly the same result.
P.P.P.S.: I also played with OpenGL to achieve the goal, using simple scale and rotation transformations for the model and projection matrices, and got exactly the same result!
Update:
These are screenshots that describe the default behavior of the front camera: frame inversion is applied by the HAL by default, so I can't read the text; still, whether I rotate the device or not, the "frame orientation" does not change.
And these are screenshots from when I apply, e.g., Matrix.scaleM(modelMatrix, 0, -1F, 1F, 1F); and then apply this matrix to every coordinate that comes into the vertex shader. Now I can read the text, because my own inversion cancels out the HAL's inversion. But when I rotate my device (with orientation change on rotation disabled, of course, and that's the point) I see myself flipped upside down, which again makes perfect sense, because the device's coordinate system doesn't change. Still, I want to somehow avoid the image rotating when the device itself rotates (as in the default mode, where the preview just "does not care" whether the device is rotated or not), while still being able to read the text (I mean, as in portrait mode).
Do you run your tests on the front or the back camera? Usually only the front camera frame needs to be flipped horizontally, not the back camera.
Anyway, if you need different transformations based on device orientation, you need to detect orientation changes in your Activity, as sketched below. The camera doesn't do anything by itself when the device is rotated; you will get the same results whichever transformation method you choose.
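One common way to detect those changes (a rough sketch, assuming it runs inside an Activity; it is not tied to the asker's code):
OrientationEventListener listener = new OrientationEventListener(this) {
    @Override
    public void onOrientationChanged(int orientation) {
        // orientation is in degrees (0-359), or ORIENTATION_UNKNOWN when the device is flat;
        // recompute and re-apply the TextureView/GL transform here
    }
};
if (listener.canDetectOrientation()) {
    listener.enable();
}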
Update #2
I just realised that this problem can be solved if the transformation is applied at the right moment in the pipeline. I will show a working example; it uses OpenGL extensively, but I hope you can find something of use. See this code fragment, which uses a vertex shader to transform the coordinates. In drawFrame() there's a call to getTransformMatrix(). Instead of performing this call, get a matrix from the method I wrote when performing a similar task.
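The gist of that adjustment is to combine the SurfaceTexture's transform with an extra horizontal flip before handing it to the shader. Roughly (a sketch with android.opengl.Matrix; the variable names are illustrative):
float[] texMatrix = new float[16];
float[] flip = new float[16];
float[] combined = new float[16];
surfaceTexture.getTransformMatrix(texMatrix);
// Flip the texture coordinates horizontally: u -> 1 - u
android.opengl.Matrix.setIdentityM(flip, 0);
android.opengl.Matrix.translateM(flip, 0, 1f, 0f, 0f);
android.opengl.Matrix.scaleM(flip, 0, -1f, 1f, 1f);
// combined = texMatrix * flip, so the flip is applied to the coordinates first
android.opengl.Matrix.multiplyMM(combined, 0, texMatrix, 0, flip, 0);
// pass 'combined' to the vertex shader instead of texMatrix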

How to use the numbers from Game Rotation Vector in Android?

I am working on an AR app that needs to move an image depending on device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values that I get from the GRV sensor mean. For instance, in order to reach the same value on the Z axis I have to rotate the device 720 degrees. This seems odd.
If I could somehow convert these numbers to angles from the reference frame of the device towards the x,y,z coordinates my problem would be solved.
I have googled this issue for days and haven't found any sensible information on the meaning of the GRV coordinates and how to use them.
TL;DR: What do the numbers of the GRV sensor represent? And how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector, represented by three components:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This is confusing at first. Here (x, y, z) is the unit vector of the rotation axis and θ (theta) is the single angle of rotation around that axis, so the three components together describe one axis-angle rotation rather than three independent angles, which isn't obvious at all.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason it's given to us in this format is that it can be used directly in matrix rotations, especially as a quaternion. In fact, these are the three vector components of a unit quaternion; the remaining scalar component is cos(θ/2), not a distance from the origin. So the sensor is essentially handing you a ready-made quaternion describing the device's rotation.
These are directly usable in OpenGL which is the Android (and the rest of the world's) 3D library of choice. Check this tutorial out for some OpenGL rotations info, this one for some general quaternion theory as applied to 3D programming in general, and this example by Google for Android which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
TL;DR: this example is excellent.
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D: it's really a segment of a plane, like a sheet of paper, and you map your image (which in the jargon is called a texture) onto that plane segment.
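For reference, that conversion looks like this, and SensorManager can also hand you plain angles if that's all you need (a sketch, assuming a SensorEventListener registered for TYPE_GAME_ROTATION_VECTOR; since the game rotation vector ignores the magnetic field, the azimuth is relative to an arbitrary reference, not north):
private final float[] rotationMatrix = new float[16];
private final float[] orientationAngles = new float[3];

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_GAME_ROTATION_VECTOR) {
        // Fills a 4x4 rotation matrix, directly usable by OpenGL as well
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        // azimuth, pitch, roll in radians
        SensorManager.getOrientation(rotationMatrix, orientationAngles);
        float azimuthDegrees = (float) Math.toDegrees(orientationAngles[0]);
    }
}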
Here is a tutorial on texturing cubes in OpenGL for Android, with example code and an in-depth discussion. From cubes it's a short step to a plane segment: it's just one face of a cube! In fact that's a good resource for getting to grips with OpenGL on Android; I'd recommend reading the previous and subsequent tutorial steps too.
You mentioned translation as well. Look at the onDrawFrame method in the Google code example: note that there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate.
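In outline, that pattern looks something like this (a sketch with GL10, where rotationMatrix would be the float[16] produced by getRotationMatrixFromVector):
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0f, 0f, -3f);          // push the textured quad away from the camera
gl.glMultMatrixf(rotationMatrix, 0);   // then apply the sensor-derived rotation
// ... draw the quad here ...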
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations. (There, the command move is a translation.)

OpenGL ES rotation causing stretching of rectangle

I am working on a project that uses scale/translate/rotate. I know that I need to rotate first (bottom of the list of transformations). I am currently pushing the matrix before every object's draw and popping it afterwards; this is done in the draw() method I created. I have tested by removing the drawing of objects systematically and by not using any transformations other than rotate on the object in question.
This is my problem: when I rotate the object to 0/360 degrees it is a perfect square, but as it rotates it stretches along the x-axis (still maintaining the properties of a rectangle, if you follow me). At 270 degrees (straight down the x-axis) the stretch reaches its highest point; the stretch corresponds directly to the angle the object is facing. I am setting the point of origin to 0,0 before I re-rotate.
I am wondering whether this is a common newbie mistake, whether it is working as intended and I need to compensate, or whether my code has a flaw that I couldn't find in a few hours of research and digging. I will post code if requested, but because of the nature of the problem and the checking I have already done, it may take up quite a bit of space.
Any input would be greatly appreciated.
Thanks in advance!
I changed the image so that I could see more of what was happening. It doesn't appear to be rotating the way I intended so that might be the cause of all this.
After fixing the angle I have noticed that the slope of where it's being drawn from/to is off. I am assuming that I have a problem with my points.

Open GL Android 3D Object Rotation Issue

I am having trouble rotating my 3D objects in Open GL. I start each draw frame by loading the identity (glLoadIdentity()) and then I push and pop on the stack according to what I need (for the camera, etc). I then want 3D objects to be able to roll, pitch and yaw and then have them displayed correctly.
Here is the catch... I want to be able to do incremental rotations, as if I were flying an airplane. So every time the up button is pushed, the object rotates around its own x axis. But if the object is pitched down and then chooses to yaw, the rotation should be around the object's up vector and not the Y axis.
I've tried doing the following:
glRotatef(pitchTotal, 1,0,0);
glRotatef(yawTotal, 0,1,0);
glRotatef(rollTotal, 0,0,1);
and those don't seem to work (keeping in mind that the vectors are being computed correctly). I've also tried...
glRotatef(pitchTotal, 1,0,0);
glRotatef(yawTotal, 0,1,0);
glRotatef(rollTotal, 0,0,1);
and I still get weird rotations.
Long story short... What is the proper way to rotate a 3D object in Open GL using the object's look, right and up vector?
You need to do the yaw rotation (around Y) before you do the pitch one. Otherwise, the pitch will be off.
E.g. say you have a 45-degree downward pitch and a 180-degree yaw. By doing the pitch first and then rotating the yaw around the airplane's Y vector, the airplane would end up pointing up and backwards despite the pitch being downwards. By doing the yaw first, the plane points backwards, and then the pitch around the plane's X vector makes it point downwards correctly.
The same logic applies for roll, which needs to be applied last.
So your code should be :
glRotatef(yawTotal, 0,1,0);
glRotatef(pitchTotal, 1,0,0);
glRotatef(rollTotal, 0,0,1);
Cumulative rotations will suffer from gimbal lock. Look at it this way: suppose you are in an aeroplane, flying level. You apply a yaw of 90 degrees anticlockwise. You then apply a roll of 90 degrees clockwise. You then apply a yaw of 90 degrees clockwise.
Your plane is now pointing straight downward — the total effect is a pitch of 90 degrees clockwise. But if you just tried to add up the different rotations then you'd end up with a roll of 90 degrees, and no pitch whatsoever because you at no point applied pitch to the plane.
Trying to store and update rotation as three separate angles doesn't work.
Common cited solutions are to use a quaternion or to store the object orientation directly as a matrix. The matrix solution is easier to build because you can prototype it with OpenGL's built-in matrix stacks. Most people also seem to find matrices easier to understand than quaternions.
So, assuming you want to go matrix, your prototype might do something like (please forgive my lack of decent Java knowledge; I'm going to write C essentially):
GLfloat myOrientation[16];
// to draw the object:
glMultMatrixf(myOrientation);
/* drawing here */
// to apply roll, assuming the modelview stack is active:
glPushMatrix(); // backup what's already on the stack
glLoadIdentity(); // start with the identity
glRotatef(angle, 0, 0, 1);
glMultMatrixf(myOrientation); // premultiply the current orientation by the roll
// update our record of orientation
glGetFloatv(GL_MODELVIEW_MATRIX, myOrientation);
glPopMatrix();
You possibly don't want to use the OpenGL stack in shipping code because it's not really built for this sort of use and so performance may be iffy. But you can prototype and profile rather than making an assumption. You also need to consider floating point precision problems — really you should be applying a step that ensures myOrientation is still orthonormal after it has been adjusted.
It's probably easiest to check Google for that, but briefly: you use the dot product to remove the erroneous crosstalk from two of the axes out of the third, then remove the crosstalk from one of the first two axes out of the second, and then renormalise all three.
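One common variant of that clean-up step, as a rough sketch on a column-major float[16] (this one rebuilds the third axis with a cross product instead of projecting it out, which also leaves you with three mutually perpendicular unit axes):
static void orthonormalize(float[] m) {
    // Column-major: X axis = m[0..2], Y axis = m[4..6], Z axis = m[8..10]
    normalize(m, 0);
    float d = dot(m, 4, m, 0);          // how much Y has drifted towards X
    m[4] -= d * m[0];
    m[5] -= d * m[1];
    m[6] -= d * m[2];
    normalize(m, 4);
    m[8]  = m[1] * m[6] - m[2] * m[5];  // Z = X cross Y
    m[9]  = m[2] * m[4] - m[0] * m[6];
    m[10] = m[0] * m[5] - m[1] * m[4];
}

static float dot(float[] m, int a, float[] n, int b) {
    return m[a] * n[b] + m[a + 1] * n[b + 1] + m[a + 2] * n[b + 2];
}

static void normalize(float[] m, int i) {
    float len = (float) Math.sqrt(dot(m, i, m, i));
    m[i] /= len;
    m[i + 1] /= len;
    m[i + 2] /= len;
}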
Thanks for the responses. The first response pointed me in the right direction and the second helped a little too, but ultimately it boiled down to a combination of both. Initially, your 3D object should have a member variable which is a float array of size 16 ([0-15]), initialised to the identity matrix. The member methods of your 3D object, like yawObject(float amount), then just know that you are yawing the object from the object's point of view rather than the world's, which is what allows the incremental rotation. Inside the yawObject method (or the pitch/roll equivalents) you call Matrix.rotateM(myfloatarray, 0, angle, 0, 1, 0), which stores the new rotation matrix (as described in the first response). Then, when you are about to draw your object, you multiply the model matrix by the myfloatarray matrix using gl.glMultMatrixf.
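A compact sketch of what that ends up looking like (the class and method names here are just for illustration, not from the original code):
import javax.microedition.khronos.opengles.GL10;
import android.opengl.Matrix;

public class Object3D {
    private final float[] orientation = new float[16];

    public Object3D() {
        Matrix.setIdentityM(orientation, 0);
    }

    // Incremental rotations about the object's own axes; rotateM post-multiplies,
    // so each step is applied in the object's local frame, not the world frame.
    public void yawObject(float amount)   { Matrix.rotateM(orientation, 0, amount, 0f, 1f, 0f); }
    public void pitchObject(float amount) { Matrix.rotateM(orientation, 0, amount, 1f, 0f, 0f); }
    public void rollObject(float amount)  { Matrix.rotateM(orientation, 0, amount, 0f, 0f, 1f); }

    public void draw(GL10 gl) {
        gl.glPushMatrix();
        gl.glMultMatrixf(orientation, 0);   // model matrix * orientation
        // ... the object's actual draw calls go here ...
        gl.glPopMatrix();
    }
}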
Good luck and let me know if you need more information than that.

Android: How can I take advantage of Android's ability to rotate/scale a canvas

I'm using a SurfaceView (actually I'm tweaking the Android sample app "LunarView"). I've changed the doDraw() method of that sample app so that I can draw my own things to the canvas provided in the call. This is working fine, I can draw my own things to the canvas and they're showing up in the emulator fine.
What I would like to do, if possible, is have the canvas adapt to the X and Y scale orientation my app naturally uses. For example, my app needs to draw a simple X-Y graph, but I need the X axis to be "down" the screen and the Y axis to be "to the right". (In other words, a typical graph but rotated 90 degrees clockwise.)
I thought that the Matrix class, with its setRectToRect(...) method would be just the ticket for this, but it isn't working for me. I've tried a whole bunch of different invocations of setRectToRect(...) and whenever I call it my canvas shows nothing. If I comment out the calls, my canvas shows what I expect.
The canvas class has some super-powerful methods for scaling and translating, so it just seems natural to me that it would also support the type of axis swap I need, but for the life of me I can't figure out how to do it!
Any help would be great,
Thanks
Rich
Rich,
You can use canvas functions for this.
You need to tell the canvas where to rotate; using the center coordinates usually works best. Find the center of the canvas and call: canvas.rotate(90, cx, cy);
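A sketch of that inside doDraw() (cx and cy just name the center coordinates):
float cx = canvas.getWidth() / 2f;
float cy = canvas.getHeight() / 2f;
canvas.save();
canvas.rotate(90, cx, cy);   // everything drawn after this is rotated 90 degrees about the center
// ... draw the graph with its usual x/y logic here ...
canvas.restore();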
