I want to horizontally invert frames coming to a Surface object made from a TextureView. What I can do is set a transformation matrix on this TextureView instance, where I postScale by -1 for x and 1 for y (leaving it unchanged), and then postTranslate dx by the full width of the view, leaving dy unchanged (0F).
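Concretely, the transform I set looks like this (a minimal sketch; the helper wrapper and its name are just for illustration):

import android.graphics.Matrix;
import android.view.TextureView;

final class MirrorHelper {
    // Mirror the preview horizontally: scale x by -1, then translate by the
    // view's width so the flipped image lands back inside the view bounds.
    static void applyHorizontalMirror(TextureView view) {
        Matrix m = new Matrix();
        m.postScale(-1f, 1f);
        m.postTranslate(view.getWidth(), 0f);
        view.setTransform(m);
    }
}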
But the problem appears when I rotate my device by 90 degrees (horizontally) with screen rotation off: the image is rotated by 180 degrees, which of course makes perfect sense, because the x and y axes did not change.
How can this be solved? Is it possible to play with the transformation matrix in such a way as to resolve this problem? Or maybe with OpenGL ES tools?
P.S.: strangely, the rotation is twice the device rotation itself, e.g. when I rotate the device by 90 degrees, the preview is rotated by 180.
P.P.S.: I tried to invert the preview using Matrix's setPolyToPoly method... and got exactly the same result.
P.P.P.S.: I also played with OpenGL to achieve the goal, using simple scale and rotation transformations for the model and projection matrices, and got exactly the same result!
Update:
These are screenshots that describe the default behavior of the front camera - frame inversion is applied by the HAL by default and I can't read the text; still, whether I rotate the device or not, the "frame orientation" does not change:
And these are screenshots of when I apply, e.g., Matrix.scaleM(modelMatrix, 0, -1F, 1F, 1F); and then apply this matrix to every coordinate that comes into the vertex shader. I can now read the text, because the HAL's inversion combined with my custom inversion results in no inversion at all. But when I rotate my device (with orientation change on rotation disabled, of course, and that's the point), I see myself flipped upside down, which again makes perfect sense, because the device's coordinate system doesn't change. Still, I want to somehow avoid the image rotating when the device itself rotates (like in the default mode, where the preview image just "does not care" whether the device is rotated or not), while still being able to read the text (I mean, like in portrait mode).
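For reference, the manual inversion I describe looks roughly like this (a sketch; program and the uniform/attribute names are placeholders of mine):

import android.opengl.GLES20;
import android.opengl.Matrix;

void drawFrame(int program) {
    float[] modelMatrix = new float[16];
    Matrix.setIdentityM(modelMatrix, 0);
    Matrix.scaleM(modelMatrix, 0, -1f, 1f, 1f);        // mirror around the x axis
    int uModel = GLES20.glGetUniformLocation(program, "uModel");
    GLES20.glUniformMatrix4fv(uModel, 1, false, modelMatrix, 0);
    // the vertex shader then does: gl_Position = uModel * aPosition;
}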
Do you run your tests on the front or back camera? Usually only the front camera frame needs to be flipped horizontally, not the back camera's.
Anyway, if you need different transformations based on device orientation, you need to detect orientation changes in your Activity. The camera doesn't do anything by itself when the device is rotated; you will get the same results whichever transformation method you choose.
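A minimal sketch of such detection, assuming you react inside your Activity (what you do in the callback depends on your pipeline):

import android.view.OrientationEventListener;

// e.g. in your Activity's onCreate():
OrientationEventListener listener = new OrientationEventListener(this) {
    @Override
    public void onOrientationChanged(int degrees) {
        if (degrees == ORIENTATION_UNKNOWN) return;
        // degrees is 0..359 relative to the device's natural orientation
        int quadrant = ((degrees + 45) / 90) % 4;   // nearest 0/90/180/270
        // ... choose and apply your transform for this quadrant ...
    }
};
listener.enable();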
Update #2
I just realised that this problem can be solved if the transformation is applied at the right moment in the pipeline. I will show a working example; it uses OpenGL extensively, but I hope you can find something of use. See this code fragment: it uses a vertex shader to transform the coordinates. In drawFrame() there's a call to getTransformMatrix(). Instead of performing this call, get a matrix from this method I wrote when performing a similar task.
Related
If I have a custom coordinate system (X - left/right, Y - forward/backward, Z - up/down) that is represented on my PC screen inside my Unreal project, how would I map the accelerometer values so that when I move my phone toward the PC screen (regardless of the phone's orientation), my Y value goes up, and likewise for the other axes?
I got something similar working for rotation by taking a "reference" rotation quaternion, inverting it, and multiplying it by the current rotation quaternion, but I'm stuck on how to transform movement. Roughly, the rotation part looks like the sketch below.
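(A sketch of that rotation step; quaternions are stored as {w, x, y, z}, and the helper names are mine:)

// Hamilton product a*b
static float[] multiply(float[] a, float[] b) {
    return new float[] {
        a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3],
        a[0]*b[1] + a[1]*b[0] + a[2]*b[3] - a[3]*b[2],
        a[0]*b[2] - a[1]*b[3] + a[2]*b[0] + a[3]*b[1],
        a[0]*b[3] + a[1]*b[2] - a[2]*b[1] + a[3]*b[0]
    };
}

// for unit quaternions the conjugate is the inverse
static float[] conjugate(float[] q) {
    return new float[] { q[0], -q[1], -q[2], -q[3] };
}

// rotation of the current frame relative to the reference frame
float[] qRel = multiply(conjugate(qRef), qCur);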
An example of my problem: if I move my phone up with the screen pointing at the sky, my Z value increases, which is what I want; but when I point the phone's screen at my PC screen and move it forward, the Z value again goes up, whereas in this case I would want my Y value to increase.
There is a similar question, Acceleration from device's coordinate system into absolute coordinate system, but it doesn't really solve my problem, since I don't want Y to depend on the location of north, and so on.
Clarification of question intent
It sounds like what you want is the acceleration of your device with respect to your laptop. As you correctly mentioned, the similar question Acceleration from device's coordinate system into absolute coordinate system maps the local accelerometer data of a device with respect to a global frame of reference (FoR) (the Cartesian "flat" Earth FoR to be specific - as opposed to the ultra-realistic spherical Earth FoR).
What you know
From your device, you know the local Phone FoR, and from the link above, you can also find the behavior of your device with respect to a flat Earth FoR via a rotation matrix, which I'll call R_EP, for Rotation in Earth FoR from Phone FoR. In order to represent the acceleration of your device with respect to your laptop, you will need to know how your laptop is oriented and positioned with respect to either your phone's FoR (A), the flat Earth FoR (B), or some other FoR known to both your laptop and your phone; I'll ignore this last case because the method is identical to B.
What you'll need
In the first case, A, this will allow you to construct a rotation matrix which I'll call R_LP for Rotation in Laptop FoR from Phone FoR - and that would be super convenient because that's your answer. But alas, life isn't fun without a little bit of a challenge.
In the second case, B, this will allow you to construct a rotation matrix which I'll call R_LE for Rotation in Laptop FoR from Earth FoR. Because the Hamilton product is associative (but NOT commutative: Are quaternions generally multiplied in an order opposite to matrices?), you can find the acceleration of your phone with respect to your laptop by daisy-chaining the rotations, like so:
a_P]L = R_LE * R_EP * a_P]P
Where the ] means "in the frame of", and a_P is acceleration of the Phone. So a_P]L is the acceleration of the Phone in the Laptop FoR, and a_P]P is the acceleration of the Phone in the Phone's FoR.
NOTE When "daisy-chaining" rotation matrices, it's important that they follow a specific order. Always make sure that the rotation matrices are multiplied in the correct order, see Sections 2.6 and 3.1.4 in [1] for more information.
Hint
To define your laptop's FoR (orientation and position) with respect to the global "flat" Earth FoR, you can place your phone on your laptop and set the current orientation and position as your laptop's FoR. This will let you construct R_LE.
Misconceptions
A rotation quaternion, q, is NEITHER the orientation NOR attitude of one frame of reference relative to another. Instead, it represents a "midpoint" vector normal to the rotation plane about which vectors from one frame of reference are rotated to the other. This is why defining quaternions to rotate from a GLOBAL frame to a local frame (or vice-versa) is incredibly important. The ENU to NED rotation is a perfect example, where the rotation quaternion is [0; sqrt(2)/2; sqrt(2)/2; 0], a "midpoint" between the two abscissa (X) axes (in both the global and local frames of reference). If you do the "right hand rule" with your three fingers pointing along the ENU orientation, and rapidly switch back and forth from the NED orientation, you'll see that the rotation from both FoR's is simply a rotation about [1; 1; 0] in the Global FoR.
References
I cannot recommend the following open-source reference highly enough:
[1] "Quaternion kinematics for the error-state Kalman filter" by Joan Solà. https://hal.archives-ouvertes.fr/hal-01122406v5
For a "playground" to experiment with, and gain a "hands-on" understanding of quaternions:
[2] "Visualizing quaternions, an explorable video series" by Grant Sanderson (lessons) and Ben Eater (technology). https://eater.net/quaternions
I am developing a game for Android using the Android NDK Vulkan APIs. The code is, for the most part, C++14. In most cases things work fine; however, on some devices I have a problem where the x and y coordinates are switched: I draw on what I think of as the top of the screen, and the objects are drawn on the side. Also, when I do anything with the view point (the view matrix), x and y are reversed: if I move the view point in the x direction, it actually moves in the y direction.
Also, the width and height reported by the swap chain are reversed, so if I plug these values into the perspective matrix like so:
glm::perspective(glm::radians(60.0f), swapchainRetrievedWidth / (float) swapchainRetrievedHeight, 0.1f, 10.0f);
it will draw horribly skewed objects. But if I reverse the width and height, like so:
glm::perspective(glm::radians(60.0f), swapchainRetrievedHeight / (float) swapchainRetrievedWidth, 0.1f, 10.0f);
The objects look fine.
One device where this happens uses an Adreno 530, API version 1.0.49, driver version 35.143.1455, OS: Android 8.0, phone vendor: HTC. For this device, these symptoms only occur if the device is in split-screen mode with the device held in landscape orientation (the app forces portrait mode). I've seen this happen on other devices too, and in full-screen (not split-screen) mode, so I don't think it is the way I reinitialize the swap chain, pipeline, depth buffer, render pass and command buffers when the screen size changes, since the screen would not change size on the devices where this problem occurs in full-screen mode.
Am I doing something wrong? Is there a bug? I am willing to give more information on this problem, but do not know what is needed.
I tried the same thing in OpenGL ES 2.0 on the same device under the same circumstances, and these symptoms do not occur. Thanks for all your support and help.
(Answer based on discussion in comments)
This happens when you set VkSwapchainCreateInfo::preTransform to something other than VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR, but don't actually apply that transform during rendering. The safe thing to do is to always use VK_SURFACE_TRANSFORM_IDENTITY_BIT_KHR, but if your window is being rotated by the system compositor this is suboptimal from a performance/power point of view.
It's more efficient to look at what transform the system compositor is applying (VkSurfaceCapabilitiesKHR::currentTransform), apply that transform yourself during rendering, and let the compositor know you did so by setting VkSwapchainCreateInfo::preTransform accordingly. The idea is sketched below.
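A sketch of the pre-rotation itself, written with Android's matrix helpers for brevity (your NDK renderer would do the same with glm); the constants are the VkSurfaceTransformFlagBitsKHR values, and for the 90/270 cases you also swap width and height in the aspect ratio, which matches the swapped extents you observed:

import android.opengl.Matrix;

static final int ROTATE_90  = 0x2;  // VK_SURFACE_TRANSFORM_ROTATE_90_BIT_KHR
static final int ROTATE_180 = 0x4;  // VK_SURFACE_TRANSFORM_ROTATE_180_BIT_KHR
static final int ROTATE_270 = 0x8;  // VK_SURFACE_TRANSFORM_ROTATE_270_BIT_KHR

// pre-multiply the compositor's transform into the projection matrix
static void preRotate(float[] projection, int currentTransform) {
    float degrees = currentTransform == ROTATE_90  ?  90f
                  : currentTransform == ROTATE_180 ? 180f
                  : currentTransform == ROTATE_270 ? 270f : 0f;
    float[] rot = new float[16];
    float[] out = new float[16];
    Matrix.setRotateM(rot, 0, degrees, 0f, 0f, 1f);  // rotate about z (the screen normal)
    Matrix.multiplyMM(out, 0, rot, 0, projection, 0);
    System.arraycopy(out, 0, projection, 0, 16);
}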
I am working on an AR app that needs to move an image depending on the device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values I get from the GRV sensor represent. For instance, in order to get back to the same value on the Z axis, I have to rotate the device by 720 degrees. This seems odd.
If I could somehow convert these numbers to angles from the device's reference frame towards the x, y, z axes, my problem would be solved.
I have googled this issue for days and didn't find any sensible information on the meaning of the GRV coordinates or how to use them.
TL;DR: What do the numbers of the GRV sensor represent? And how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector, represented as three components:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This is confusing at first, however. Here (x, y, z) is a unit vector along the axis of rotation, and θ (pronounced theta) is the single angle through which the device has rotated about that axis; each component above is the corresponding axis coordinate scaled by sin(θ/2).
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason it's given to us in this format is that it can easily be used in matrix rotations, especially as a quaternion. In fact, these are the first three components of a quaternion, the vector part that encodes the rotation axis. The 4th, scalar component is cos(θ/2), so together the four numbers pack the entire rotation, axis and angle, into a single value.
These are directly usable in OpenGL, which is Android's (and the rest of the world's) 3D library of choice. Check out this tutorial for some OpenGL rotation info, this one for some general quaternion theory as applied to 3D programming, and this example by Google for Android, which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
TLDR; This example is excellent.
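To answer the TL;DR directly: converting the raw vector into a rotation matrix and then into angles can look like this (a minimal sketch; the sensor registration is omitted, and the event is assumed to come from a TYPE_GAME_ROTATION_VECTOR sensor):

import android.hardware.SensorEvent;
import android.hardware.SensorManager;

public void onSensorChanged(SensorEvent event) {
    float[] rotationMatrix = new float[9];
    SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
    float[] angles = new float[3];
    SensorManager.getOrientation(rotationMatrix, angles);
    // angles[0] = azimuth (yaw), angles[1] = pitch, angles[2] = roll, in radians
    float yawDegrees = (float) Math.toDegrees(angles[0]);
}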
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D, so it's actually a segment of a plane, like a sheet of paper. So you'd map your image, which in the jargon is called a texture, onto this plane segment.
Here is a tutorial on texturing cubes in OpenGL for Android with example code and an in depth discussion. From cubes it's a short step to a plane segment - it's just one face of a cube! In fact that's a good resource for getting to grips with OpenGL on Android, I'd recommend reading the previous and subsequent tutorial steps too.
You mentioned translation as well. Look at the onDrawFrame method in the Google code example: note that there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate, roughly as sketched below.
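(A sketch of such an onDrawFrame; rotationMatrix is assumed to be the 16-element matrix produced by getRotationMatrixFromVector:)

import javax.microedition.khronos.opengles.GL10;

public void onDrawFrame(GL10 gl) {
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glTranslatef(0f, 0f, -3f);         // move the plane segment into view
    gl.glMultMatrixf(rotationMatrix, 0);  // then apply the sensor rotation
    // ... draw the textured plane segment here ...
}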
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations. Obviously the command move is a translation.
We use Camera to do 3D transformations on a Canvas. We usually rotate the camera, get its Matrix, and then translate it. But Camera also has a translate method, and the results of the two methods are different.
My question is: what is the difference between Camera.translate and Matrix.preTranslate or Matrix.postTranslate?
The reason there are both is that matrix multiplication must be done in a certain order to achieve the proper result (as you may already know). The sequence of translations/rotations/scales is applied in the reverse of the order in which you write them.
So if you do something like this:
camera.rotate(15, 0, 0);
camera.scale(.5f, .5f, .5f); // note: Camera has no scale method; this line is illustrative only
camera.translate(70, 70, 70);
You're first translating by (70, 70, 70), then scaling by 50% in all directions, then rotating 15 degrees about the X axis.
So Matrix has a pre and post translate (well, pre and post everything), because maybe you want to actually rotate first by 15 degrees, then translate it, and only then scale it. The difference is sketched below.
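(A minimal sketch with android.graphics.Matrix:)

import android.graphics.Matrix;

Matrix pre = new Matrix();
pre.setRotate(15f);
pre.preTranslate(70f, 70f);    // M * T: points are translated first, then rotated

Matrix post = new Matrix();
post.setRotate(15f);
post.postTranslate(70f, 70f);  // T * M: points are rotated first, then translated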
So that answers the pre and post translates. Now, the reason Camera has a straight rotate and translate is for people who already know how this works (like me!). I never use Matrix, or Camera for that matter, because I can simply do my rotations and translations directly on the Canvas. You can too, as long as you know that translations, scales, and rotations are applied in reverse order.
Also, knowing this gives you more power. You can do a sequence of 10 transformations without surrounding each one in its own Matrix object (for example, a swing motion that swings outward AND rotates about the center, to simulate centrifugal force). That would otherwise need to be done with multiple rotates and translations surrounded by multiple Matrix objects passed into one another, but if you know how each translate works, you can simply do a series of .translate(), .rotate(), and .scale() calls.
This information is especially useful if you ever do 3D graphics, because that's when these matrices give people headaches.
I hope this helps!
The result would be visually the same if you, for example, do not touch the canvas but rotate the camera by 90 degrees, or keep the camera still but rotate the canvas it looks at by -90 degrees.
I am having trouble rotating my 3D objects in OpenGL. I start each drawn frame by loading the identity (glLoadIdentity()) and then push and pop on the stack according to what I need (for the camera, etc.). I then want 3D objects to be able to roll, pitch and yaw, and to have them displayed correctly.
Here is the catch... I want to be able to do incremental rotations, as if I were flying an airplane. So every time the up button is pushed, the object rotates around its own x axis. But then, if the object is pitched down and chooses to yaw, the rotation should be around the object's up vector, not the world Y axis.
I've tried doing the following:
glRotatef(pitchTotal, 1,0,0);
glRotatef(yawTotal, 0,1,0);
glRotatef(rollTotal, 0,0,1);
and those don't seem to work (keeping in mind that the vectors are being computed correctly). I've also tried...
glRotatef(pitchTotal, 1,0,0);
glRotatef(yawTotal, 0,1,0);
glRotatef(rollTotal, 0,0,1);
and I still get weird rotations.
Long story short... what is the proper way to rotate a 3D object in OpenGL using the object's look, right and up vectors?
You need to do the yaw rotation (around Y) before you do the pitch one. Otherwise, the pitch will be off.
E.g. suppose you have a 45-degree downward pitch and a 180-degree yaw. Doing the pitch first and then rotating the yaw around the airplane's Y vector, the airplane would end up pointing up and backwards, despite the pitch being downwards. Doing the yaw first, the plane points backwards, and then the pitch around the plane's X vector makes it point downwards correctly.
The same logic applies for roll, which needs to be applied last.
So your code should be :
glRotatef(yawTotal, 0,1,0);
glRotatef(pitchTotal, 1,0,0);
glRotatef(rollTotal, 0,0,1);
Cumulative rotations will suffer from gimbal lock. Look at it this way: suppose you are in an aeroplane, flying level. You apply a yaw of 90 degrees anticlockwise. You then apply a roll of 90 degrees clockwise. You then apply a yaw of 90 degrees clockwise.
Your plane is now pointing straight downward — the total effect is a pitch of 90 degrees clockwise. But if you just tried to add up the different rotations then you'd end up with a roll of 90 degrees, and no pitch whatsoever because you at no point applied pitch to the plane.
Trying to store and update rotation as three separate angles doesn't work.
Commonly cited solutions are to use a quaternion or to store the object orientation directly as a matrix. The matrix solution is easier to build because you can prototype it with OpenGL's built-in matrix stacks. Most people also seem to find matrices easier to understand than quaternions.
So, assuming you want to go with the matrix approach, your prototype might do something like this (please forgive my lack of decent Java knowledge; I'm going to write essentially C):
GLfloat myOrientation[16]; // initialised to the identity matrix
// to draw the object:
glMultMatrixf(myOrientation);
/* drawing here */
// to apply roll, assuming the modelview stack is active:
glPushMatrix(); // backup what's already on the stack
glLoadIdentity(); // start with the identity
glRotatef(angle, 0, 0, 1);
glMultMatrixf(myOrientation); // premultiply the current orientation by the roll
// update our record of orientation
glGetFloatv(GL_MODELVIEW_MATRIX, myOrientation);
glPopMatrix();
You possibly don't want to use the OpenGL stack in shipping code because it's not really built for this sort of use and so performance may be iffy. But you can prototype and profile rather than making an assumption. You also need to consider floating point precision problems — really you should be applying a step that ensures myOrientation is still orthonormal after it has been adjusted.
It's probably easiest to check Google for that, but briefly: you use the dot product to remove the erroneous crosstalk of the first axis from the other two, then remove the crosstalk of the second axis from the third, and finally renormalise all three. A sketch of that cleanup follows.
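(A sketch of that step on the rotation part of a column-major 4x4, in Java since that's what you'll ship on Android; helper names are mine, and the third axis is rebuilt with a cross product, which achieves the same result:)

static float dot(float[] a, float[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

static void normalize(float[] v) {
    float len = (float) Math.sqrt(dot(v, v));
    v[0] /= len; v[1] /= len; v[2] /= len;
}

static float[] cross(float[] a, float[] b) {
    return new float[] { a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0] };
}

static void orthonormalize(float[] m) {
    float[] x = { m[0], m[1], m[2] };   // right vector
    float[] y = { m[4], m[5], m[6] };   // up vector
    normalize(x);
    float d = dot(x, y);                // remove x's crosstalk from y
    y[0] -= d * x[0]; y[1] -= d * x[1]; y[2] -= d * x[2];
    normalize(y);
    float[] z = cross(x, y);            // rebuild the look vector
    m[0] = x[0]; m[1] = x[1]; m[2]  = x[2];
    m[4] = y[0]; m[5] = y[1]; m[6]  = y[2];
    m[8] = z[0]; m[9] = z[1]; m[10] = z[2];
}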
Thanks for the responses. The first response pointed me in the right direction, and the second helped a little too, but ultimately it boiled down to a combination of both. Initially, your 3D object should have a member variable which is a float array of size 16 ([0-15]), initialized to the identity matrix. Member methods of your 3D object like yawObject(float amount) then know that you are yawing the object from the object's point of view, not the world's, which allows the incremental rotation. Inside the yawObject (or pitchObject, rollObject) method, call Matrix.rotateM(myFloatArray, 0, angle, 0, 1, 0). That stores the new rotation matrix (as described in the first response). Then, when you are about to draw your object, multiply the model matrix by the myFloatArray matrix using gl.glMultMatrixf. A condensed sketch follows.
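(The class and method names here are mine; this is just a sketch of what's described above:)

import android.opengl.Matrix;
import javax.microedition.khronos.opengles.GL10;

public class SpaceObject {
    private final float[] orientation = new float[16];

    public SpaceObject() {
        Matrix.setIdentityM(orientation, 0);    // start at the identity
    }

    // rotateM post-multiplies, so each rotation is about the OBJECT's own axis
    public void yawObject(float amount)   { Matrix.rotateM(orientation, 0, amount, 0f, 1f, 0f); }
    public void pitchObject(float amount) { Matrix.rotateM(orientation, 0, amount, 1f, 0f, 0f); }
    public void rollObject(float amount)  { Matrix.rotateM(orientation, 0, amount, 0f, 0f, 1f); }

    public void draw(GL10 gl) {
        gl.glMultMatrixf(orientation, 0);       // apply before drawing geometry
        // ... draw the model here ...
    }
}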
Good luck and let me know if you need more information than that.