Using Rotation Matrix to rotate points in space - android

I'm using android's rotation matrix to rotate multiple points in space.
Work so far
I start by reading the matrix from the SensorManager.getRotationMatrix function. Next I transform the rotation matrix into a quaternion using the explanation given in this link. I'm doing this because I read that Euler angles can lead to the gimbal lock problem and that operations with a 3x3 matrix can be computationally expensive. source
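For reference, here is a minimal sketch of one common matrix-to-quaternion conversion (only the simple case where the trace is positive; the full algorithm in the linked explanation has extra branches), assuming m is the 9-element row-major matrix filled by getRotationMatrix and the result is ordered {w, x, y, z}:
public static float[] matrixToQuaternion(float[] m) {
    // w = sqrt(1 + trace) / 2, valid when (1 + m00 + m11 + m22) > 0
    float w = (float) Math.sqrt(1f + m[0] + m[4] + m[8]) / 2f;
    float x = (m[7] - m[5]) / (4f * w);
    float y = (m[2] - m[6]) / (4f * w);
    float z = (m[3] - m[1]) / (4f * w);
    return new float[]{w, x, y, z};
}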
Problem
Now what I want to do is: imagine the phone is the origin of the reference frame, and given a set of points (lat/lng coordinates projected into an XYZ coordinate system; see the method below) I want to rotate them so I can check which ones are in my line of sight. For that I'm using this SO question, which returns an X and Y (left and top respectively) to display the point on screen. It works fine, but only when facing North, because it doesn't take orientation into account and my projected vector uses North/South as X and East/West as Z. So my thought was to rotate all the objects. Also, even though the initial altitude (Y) is 0, I want to be able to position the point up/down according to the phone's orientation.
I think part of the solution may be in this post, but since it uses Euler angles I don't think that's the best method.
Conclusion
So, if it really is better to rotate each point's position, how can I achieve that using the rotation quaternion? If not, what is the better approach?
I'm sorry if I said anything wrong in this post. I'm not good at physics.
Code
// this function returns a 3D vector (0 for Y since I'm discarding altitude) from 2 coordinates
public static float[] convLocToVec(LatLng source, LatLng destination)
{
    float[] z = new float[1];
    z[0] = 0;
    Location.distanceBetween(source.latitude, source.longitude,
            destination.latitude, source.longitude, z);
    float[] x = new float[1];
    Location.distanceBetween(source.latitude, source.longitude,
            source.latitude, destination.longitude, x);
    if (source.latitude < destination.latitude)
        z[0] *= -1;
    if (source.longitude > destination.longitude)
        x[0] *= -1;
    return new float[]{x[0], (float) 0, z[0]};
}
Thanks for your help and have a nice day.
UPDATE 1
According to Wikipedia:
Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for rotating a vector.
Should I really just use the rotation matrix to rotate a vector?

Since no one answered I'm here to answer myself.
After some research (a lot, actually) I came to the conclusion that yes, it is possible to rotate a vector with a quaternion directly, but it is better to convert the quaternion into a rotation matrix first.
Rotation matrix - 9 multiplications and 6 additions
Quaternion - 15 multiplications and 15 additions
Source: Performance comparisons
It's better to use the rotation matrix provided by Android. And if you are going to use a quaternion anyway (Sensor.TYPE_ROTATION_VECTOR + SensorManager.getQuaternionFromVector, for example), you can (and should) convert it into a rotation matrix: SensorManager.getRotationMatrixFromVector converts the rotation vector into a matrix. Once you have the rotation matrix, you just have to multiply it by the projected vector you want. You can use this function for that:
public float[] multiplyByVector(float[][] A, float[] x) {
    int m = A.length;
    int n = A[0].length;
    if (x.length != n) throw new RuntimeException("Illegal matrix dimensions.");
    float[] y = new float[m];
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            y[i] += (A[i][j] * x[j]);
    return y;
}
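For completeness, here is how the pieces above might fit together inside onSensorChanged for a TYPE_ROTATION_VECTOR event. getRotationMatrixFromVector fills a flat 9-element row-major array, so it has to be repacked into the 3x3 shape that multiplyByVector expects (myLocation and targetLocation are hypothetical LatLng values standing in for your own data):
float[] rotMat = new float[9];
SensorManager.getRotationMatrixFromVector(rotMat, event.values);

// repack the flat row-major array into 3x3 form
float[][] R = {
        {rotMat[0], rotMat[1], rotMat[2]},
        {rotMat[3], rotMat[4], rotMat[5]},
        {rotMat[6], rotMat[7], rotMat[8]}
};

float[] point = convLocToVec(myLocation, targetLocation); // {x, 0, z} from the method above
float[] rotated = multiplyByVector(R, point);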
Although I'm still not able to get this running correctly, I will mark this as the answer.

Related

ARCore - Get camera facing angle to world forward (-Z axis)

Is there a way to get the angle that the ARCore camera is facing relative to one of ARCore's axes, in Euler angles? I understand that we can get the forward vector of the ARCore camera, but I am unsure how to calculate the correct angle from it.
I was probably overthinking the problem. Here is a simple way to get the angle the camera is facing relative to the -Z axis (a positive angle means the camera faces to the right of the -Z axis). The same method can also be used to figure out the angle the camera is facing relative to a different axis.
// get the camera
Camera arCamera = arFragment.getArSceneView().getScene().getCamera();
// get the forward vector of the camera
Vector3 cameraPos = arCamera.getWorldPosition();
Vector3 cameraForward = Vector3.add(cameraPos, arCamera.getForward().scaled(1.0f));
Vector3 forwardVector = Vector3.subtract(cameraForward, cameraPos);
forwardVector = new Vector3(forwardVector.x, 0, forwardVector.z).normalized();
double degreesFromCamToNegZAxis = Vector3.angleBetweenVectors(Vector3.forward(), forwardVector);
// take the dot product between the two vectors
// to see on which side of the -Z axis the camera is facing
float dotProduct = Vector3.dot(Vector3.right(), forwardVector);
// value is between -1 and 1
// if negative, the camera is facing to the left of the -Z axis, so make the angle negative
if (dotProduct < 0) {
    degreesFromCamToNegZAxis = -degreesFromCamToNegZAxis;
}
If you want to figure out the angle from a different axis, just change the world vector you are relating it to, like so. In this example a positive angle means the camera faces to the right of the positive Z axis.
double degreesFromCamToZAxis = Vector3.angleBetweenVectors(Vector3.back(), forwardVector);
// take the dot product between the two vectors
// to see on which side of the Z axis the camera is facing
float dotProduct = Vector3.dot(Vector3.left(), forwardVector);
// value is between -1 and 1
// if negative, the forward vector is facing to the left of the Z axis
if (dotProduct < 0) {
    degreesFromCamToZAxis = -degreesFromCamToZAxis;
}
Here is an image that visualizes the coordinate system ARCore uses (same as Android's Sensors).
ARCore Coordinate System Reference

How to tell what part of a texture on a 3d cube was touched [duplicate]

I have a renderer using DirectX and OpenGL, and a 3D scene. The viewport and the window have the same dimensions.
How do I implement picking given mouse coordinates x and y in a platform independent way?
If you can, do the picking on the CPU by calculating a ray from the eye through the mouse pointer and intersect it with your models.
If this isn't an option I would go with some type of ID rendering. Assign each object you want to pick a unique color, render the objects with these colors and finally read out the color from the framebuffer under the mouse pointer.
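If it helps, here is a rough sketch of the readback step written with Android's GLES20 bindings, since the rest of this page is Android-flavoured; touchX, touchY and viewportHeight are assumed to come from your own input handling, and the IDs are assumed to have been packed into the RGB channels when the objects were drawn:
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
// GL's window origin is bottom-left, so flip the touch y coordinate
GLES20.glReadPixels(touchX, viewportHeight - touchY, 1, 1,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
int r = pixel.get(0) & 0xFF;
int g = pixel.get(1) & 0xFF;
int b = pixel.get(2) & 0xFF;
int objectId = (r << 16) | (g << 8) | b; // reverse of however you encoded the ID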
EDIT: If the question is how to construct the ray from the mouse coordinates, you need the following: a projection matrix P and the camera transform C. If the coordinates of the mouse pointer are (x, y) and the size of the viewport is (width, height), one position in clip space along the ray is:
mouse_clip = [
float(x) * 2 / float(width) - 1,
1 - float(y) * 2 / float(height),
0,
1]
(Notice that I flipped the y-axis since the origin of the mouse coordinates is often in the upper left corner.)
The following is also true:
mouse_clip = P * C * mouse_worldspace
Which gives:
mouse_worldspace = inverse(C) * inverse(P) * mouse_clip
We now have:
p = C.position(); //origin of camera in worldspace
n = normalize(mouse_worldspace - p); //unit vector from p through mouse pos in worldspace
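As a rough sketch of the same computation in code, using android.opengl.Matrix (which works on column-major float[16] arrays); proj and view stand for P and C above, and (x, y), width, height are the mouse position and viewport size from your own code:
float[] pv = new float[16];
float[] pvInverse = new float[16];
android.opengl.Matrix.multiplyMM(pv, 0, proj, 0, view, 0); // P * C
android.opengl.Matrix.invertM(pvInverse, 0, pv, 0);        // inverse(P * C)

float[] mouseClip = { (float) x * 2f / width - 1f,
                      1f - (float) y * 2f / height,
                      0f, 1f };
float[] mouseWorld = new float[4];
android.opengl.Matrix.multiplyMV(mouseWorld, 0, pvInverse, 0, mouseClip, 0);

// perspective divide, then build the ray direction n from the camera position p
for (int i = 0; i < 3; i++) mouseWorld[i] /= mouseWorld[3];
float[] n = { mouseWorld[0] - p[0], mouseWorld[1] - p[1], mouseWorld[2] - p[2] };
// normalize n before use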
Here's the viewing frustum:
First you need to determine where on the nearplane the mouse click happened:
rescale the window coordinates (0..640,0..480) to [-1,1], with (-1,-1) at the bottom-left corner and (1,1) at the top-right.
'undo' the projection by multiplying the scaled coordinates by what I call the 'unview' matrix: unview = (P * M).inverse() = M.inverse() * P.inverse(), where M is the ModelView matrix and P is the projection matrix.
Then determine where the camera is in worldspace, and draw a ray starting at the camera and passing through the point you found on the nearplane.
The camera is at M.inverse().col(4), i.e. the final column of the inverse ModelView matrix.
Final pseudocode:
normalised_x = 2 * mouse_x / win_width - 1
normalised_y = 1 - 2 * mouse_y / win_height
// note the y pos is inverted, so +y is at the top of the screen
unviewMat = (projectionMat * modelViewMat).inverse()
near_point = unviewMat * Vec(normalised_x, normalised_y, 0, 1)
camera_pos = ray_origin = modelViewMat.inverse().col(4)
ray_dir = near_point - camera_pos
Well, pretty simple; the theory behind this is always the same:
1) Unproject your 2D coordinate into 3D space twice (each API has its own function, but you can implement your own if you want): once at min Z and once at max Z.
2) From these two points, compute the vector that goes from min Z toward max Z.
3) With that vector and a starting point you have the ray from min Z to max Z.
4) Now that you have a ray, you can do a ray-triangle/ray-plane/ray-whatever intersection and get your result (see the sketch below).
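Here is what steps 1-3 might look like on Android with android.opengl.GLU.gluUnProject, unprojecting once at the near plane (winZ = 0) and once at the far plane (winZ = 1); modelView, projection (float[16]) and viewport (int[4]) are assumed to be the same ones you render with, and Android's GLU variant leaves the homogeneous w in the fourth output element, so divide by it:
float winX = touchX;
float winY = viewport[3] - touchY; // GL's window origin is bottom-left

float[] near = new float[4];
float[] far = new float[4];
GLU.gluUnProject(winX, winY, 0f, modelView, 0, projection, 0, viewport, 0, near, 0);
GLU.gluUnProject(winX, winY, 1f, modelView, 0, projection, 0, viewport, 0, far, 0);

// ray origin at the near point, direction from near toward far (steps 2 and 3)
float[] origin = { near[0] / near[3], near[1] / near[3], near[2] / near[3] };
float[] dir = { far[0] / far[3] - origin[0],
                far[1] / far[3] - origin[1],
                far[2] / far[3] - origin[2] };
// normalize dir, then intersect the ray with your geometry (step 4)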
I have little DirectX experience, but I'm sure it's similar to OpenGL. What you want is the gluUnProject call.
Assuming you have a valid Z buffer you can query the contents of the Z buffer at a mouse position with:
// obtain the viewport, modelview matrix and projection matrix
// you may keep the viewport and projection matrices throughout the program if you don't change them
GLint viewport[4];
GLdouble modelview[16];
GLdouble projection[16];
glGetIntegerv(GL_VIEWPORT, viewport);
glGetDoublev(GL_MODELVIEW_MATRIX, modelview);
glGetDoublev(GL_PROJECTION_MATRIX, projection);
// obtain the Z position (not world coordinates but in range 0 - 1)
GLfloat z_cursor;
glReadPixels(x_cursor, y_cursor, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z_cursor);
// obtain the world coordinates
GLdouble x, y, z;
gluUnProject(x_cursor, y_cursor, z_cursor, modelview, projection, viewport, &x, &y, &z);
If you don't want to use GLU, you can also implement gluUnProject yourself; its functionality is relatively simple and is described at opengl.org.
OK, this topic is old but it was the best I found on the subject, and it helped me a bit, so I'll post here for those who are following ;-)
This is the way I got it to work without having to compute the inverse of Projection matrix:
void Application::leftButtonPress(u32 x, u32 y) {
    GL::Viewport vp = GL::getViewport(); // just a call to glGet GL_VIEWPORT
    vec3f p = vec3f::from(
            ((float)(vp.width - x) / (float)vp.width),
            ((float)y / (float)vp.height),
            1.);
    // alternatively vec3f p = vec3f::from(
    //         ((float)x / (float)vp.width),
    //         ((float)(vp.height - y) / (float)vp.height),
    //         1.);
    p *= vec3f::from(APP_FRUSTUM_WIDTH, APP_FRUSTUM_HEIGHT, 1.);
    p += vec3f::from(APP_FRUSTUM_LEFT, APP_FRUSTUM_BOTTOM, 0.);
    // now p elements are in (-1, 1)
    vec3f near = p * vec3f::from(APP_FRUSTUM_NEAR);
    vec3f far = p * vec3f::from(APP_FRUSTUM_FAR);
    // ray in world coordinates
    Ray ray = { _camera->getPos(), -(_camera->getBasis() * (far - near).normalize()) };
    _ray->set(ray.origin, ray.dir, 10000.); // this is a debugging vertex array to see the Ray on screen
    Node* node = _scene->collide(ray, Transform());
    cout << "node is : " << node << endl;
}
This assumes a perspective projection, but the question never arises for the orthographic one in the first place.
I've got the same situation with ordinary ray picking, but something is wrong. I've performed the unproject operation the proper way, but it just doesn't work. I think I've made some mistake, but can't figure out where. My matrix multiplication, inverse, and vector-by-matrix multiplication all seem to work fine; I've tested them.
In my code I'm reacting to WM_LBUTTONDOWN. So lParam returns the [Y][X] coordinates as 2 words in a dword. I extract them, then convert to normalized space; I've checked that this part also works fine. When I click the lower left corner I get values close to -1, -1 and good values for all 3 other corners. I'm then using the line_points.vtx array for debugging, and it's not even close to reality.
unsigned int x_coord=lParam&0x0000ffff; //X RAW COORD
unsigned int y_coord=client_area.bottom-(lParam>>16); //Y RAW COORD
double xn=((double)x_coord/client_area.right)*2-1; //X [-1 +1]
double yn=1-((double)y_coord/client_area.bottom)*2;//Y [-1 +1]
_declspec(align(16))gl_vec4 pt_eye(xn,yn,0.0,1.0);
gl_mat4 view_matrix_inversed;
gl_mat4 projection_matrix_inversed;
cam.matrixProjection.inverse(&projection_matrix_inversed);
cam.matrixView.inverse(&view_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&projection_matrix_inversed);
gl_mat4::vec4_multiply_by_matrix4(&pt_eye,&view_matrix_inversed);
line_points.vtx[line_points.count*4]=pt_eye.x-cam.pos.x;
line_points.vtx[line_points.count*4+1]=pt_eye.y-cam.pos.y;
line_points.vtx[line_points.count*4+2]=pt_eye.z-cam.pos.z;
line_points.vtx[line_points.count*4+3]=1.0;

Strange Matrix transformation for SVG rotate

I have Java code for SVG drawing. It processes transforms including rotate, and does this very well, as far as I can see from numerous test pictures compared against their rendering in Chrome. What I need next is the actual object location, which in many images is declared via transforms. So I decided just to read X and Y from the Matrix used for drawing. Unfortunately I get incorrect values for the rotate transform, that is, they do not correspond to the real object location in the image.
The stripped down code looks like this:
Matrix matrix = new Matrix();
float cx = 1000; // suppose this is an object X coordinate
float cy = 300; // this is its Y coordinate
float angle = -90; // rotate counterclockwise, got from "rotate(-90, 1000, 300)"
// shift to -X,-Y, so object is in the center
matrix.postTranslate(-cx, -cy);
// rotate actually
matrix.postRotate(angle);
// shift back
matrix.postTranslate(cx, cy);
// debug goes here
float[] values = new float[9];
matrix.getValues(values);
Log.v("HELLO", values[Matrix.MTRANS_X] + " " + values[Matrix.MTRANS_Y]);
The log outputs the values 700 and 1300 respectively. I'd expect 0 and 0, because I see the object rotated in place in my image (that is, there is no movement at all), and the postTranslate calls should compensate for each other. Of course, I see how these values are formed from 1000 and 300, but I don't understand why. Once again, I point out that the matrix with these strange values is used for the actual object drawing, and it looks correct. Could someone explain what happens here? Am I missing something? So far I have only one solution to my problem: just do not try to obtain the position from rotate; do it only for explicit matrix and translate transforms. But this approach lacks generality, and anyway I thought the matrix should have reasonable values (including offsets) for any transformation type.
The answer is that the matrix is an operator for space transformation and should not be used to read the object position off directly. The composed matrix is T(cx, cy) · R(-90°) · T(-cx, -cy), so its translation component is (cx, cy) - R·(cx, cy) = (1000 - 300, 300 + 1000) = (700, 1300), even though the point (1000, 300) itself maps back onto itself. Instead, one should take the initial object coordinates, as specified in the x and y attributes of the SVG tag, and apply the matrix to them:
float[] src = new float[2];
src[0] = cx;
src[1] = cy;
matrix.mapPoints(src);
After this call, src holds the proper location values.

Android: axes vectors from orientation/rotational angles?

So there's a couple methods in the Android SensorManager to get your phone's orientation:
float[] rotational = new float[9];
float[] orientation = new float[3];
SensorManager.getRotationMatrix(rotational, whatever, whatever, whatever);
SensorManager.getOrientation(rotational, orientation);
This gives you a rotation matrix called "rotational" and an array of 3 orientation angles called "orientation". However, I can't use the angles in my AR program - what I need is the actual vectors which represent the axes.
For example, in this image from Wikipedia:
I'm basically being given the α, β, and γ angles (though not exactly since I don't have an N - I'm being given the angles from each of the blue axes), and I need to find vectors which represents the X, Y, and Z axes (red in the image). Does anyone know how to do this conversion? The directions on Wikipedia are very complicated, and my attempts to follow them have not worked. Also, I think the data that Android gives you may be in a slightly different order or format than what the conversion directions on Wikipedia expect.
Or as an alternative to these conversions, does anyone know any other ways to get the X, Y, and Z axes from the camera's perspective? (Meaning, what vector is the camera looking down? And what vector does the camera consider to be "up"?)
The rotation matrix in Android provides a rotation from the body (a.k.a. device) frame to the world (a.k.a. inertial) frame. A normal back-facing camera appears in landscape mode on the screen. This is the native mode for a tablet, so it has the following axes in the device frame:
camera_x_tablet_body = (1,0,0)
camera_y_tablet_body = (0,1,0)
camera_z_tablet_body = (0,0,1)
On a phone, where portrait is the native mode, rotating the device into landscape with the top turned to point left gives:
camera_x_phone_body = (0,-1,0)
camera_y_phone_body = (1,0,0)
camera_z_phone_body = (0,0,1)
Now applying the rotation matrix will put this in the world frame, so (for rotation matrix R[] of size 9):
camera_x_tablet_world = (R[0],R[3],R[6]);
camera_y_tablet_world = (R[1],R[4],R[7]);
camera_z_tablet_world = (R[2],R[5],R[8]);
In general, you can use SensorManager.remapCoordinateSystem(), which for the phone example above (Display.getRotation() == Surface.ROTATION_90) would give the axes listed above. But if the device is rotated differently (ROTATION_270, for example) the remap will be different; see the sketch below.
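For example, a sketch of the remap for a device whose display reports Surface.ROTATION_90, following the usual pattern from the SensorManager documentation (gravityVals and geoMagVals are the accelerometer and magnetometer readings, as in the snippet below):
float[] inR = new float[9];
float[] outR = new float[9];
SensorManager.getRotationMatrix(inR, null, gravityVals, geoMagVals);
SensorManager.remapCoordinateSystem(inR, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, outR);
// then read the camera axes out of outR's columns, exactly as above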
Also, an aside: the best way to get orientation in Android is to listen for Sensor.TYPE_ROTATION_VECTOR events. These are filled with the best possible orientation on most (i.e. Gingerbread or newer) platforms. The event values are actually the vector part of the quaternion. You can get the full quaternion using this (and the last two lines are a way to get the rotation matrix):
float[] vec = event.values.clone();
float[] quat = new float[4];
SensorManager.getQuaternionFromVector(quat, vec);
float[] rotMat = new float[9];
// note: getRotationMatrixFromVector expects the rotation vector itself, not the quaternion
SensorManager.getRotationMatrixFromVector(rotMat, vec);
More information at: http://www.sensorplatforms.com/which-sensors-in-android-gets-direct-input-what-are-virtual-sensors
SensorManager.getRotationMatrix(rotational, null, gravityVals, geoMagVals);
// camera's x-axis
Vector u = new Vector(-rotational[1], -rotational[4], -rotational[7]); // right of phone (in landscape mode)
// camera's y-axis
Vector v = new Vector(rotational[0], rotational[3], rotational[6]); // top of phone (in landscape mode)
// camera's z-axis (negative into the scene)
Vector n = new Vector(rotational[2], rotational[5], rotational[8]); // front of phone (the screen)
// world axes (x,y,z):
// +x is East
// +y is North
// +z is sky
The orientation matrix that you receive from getRotationMatrix should be based on the gravity field and the magnetic field - in other words X points East, Y points North, and Z points toward the sky, away from the center of the Earth. (http://developer.android.com/reference/android/hardware/SensorManager.html)
To the point of your question, I think the three rotation values can be used directly as a vector, but provide the values in reverse order:
"For either Euler or Tait-Bryan angles, it is very simple to convert from an intrinsic (rotating axes) to an extrinsic (static axes) convention, and vice-versa: just swap the order of the operations. An (α, β, γ) rotation using X-Y-Z intrinsic convention is equivalent to a (γ, β, α) rotation using Z-Y-X extrinsic convention; this is true for all Euler or Tait-Bryan axis combinations."
Source: Wikipedia
I hope this helps!

How to calculate the direction the screen is facing

How do you calculate the direction that your camera is pointing towards in Android? Azimuth works only if the device is vertical. How do you account for the pitch and roll?
If R is the rotation matrix, I want to find something like:
getRotationMatrix(R, I, grav, mag);
float[] rotated = R {0, 0, -1} ; //not sure how to do matrix multiplication
float direction = Math.atan(rotated[0]/rotated[1]);
I would look up the Accelerometer API. Also, check out this article, it might help: http://www.anddev.org/convert_android_accelerometer_values_and_get_tilt_from_accel-t6595.html
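For what it's worth, here is a minimal sketch of the computation from the question, assuming R is the 9-element row-major matrix filled by getRotationMatrix: the back camera looks along the device's -Z axis, so its world-space direction is R * (0, 0, -1), i.e. the negated third column, and the heading then follows from atan2 rather than atan:
SensorManager.getRotationMatrix(R, I, grav, mag);
float east  = -R[2]; // world X component (East)
float north = -R[5]; // world Y component (North)
// heading of the camera in radians, clockwise from North
float direction = (float) Math.atan2(east, north);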
