I'm using ARToolKit for Android to build an AR app. I can apply the Projection Matrix and the Marker Transformation Matrix in OpenGL without problems, as explained in the ARSimple example. However, I have not found a way to apply these correctly to the jPCT-AE camera. Here is what I did for the camera:
Camera cam = world.getCamera();
Matrix projMatrix = new Matrix();
projMatrix.transformToGL();
projMatrix.setDump(ARToolKit.getInstance().getProjectionMatrix());
cam.setPosition(projMatrix.getTranslation());
cam.setBack(projMatrix);
and for the object:
Matrix objMat = new Matrix();
objMat.transformToGL();
objMat.setDump(ARToolKit.getInstance().queryMarkerTransformation(markerID));
cube.setTranslationMatrix(objMat);
cube.setRotationMatrix(objMat);
It almost works: I can see the 3D object if the marker is placed at the center of the screen. However, when I move the marker, the object quickly disappears off screen. Also, the cube (and the other models I tried to load) seems to render in some sort of "inverted" way.
From what I read on the web, the ARToolKit matrices are expressed in OpenGL world coordinates (while jPCT-AE uses its own coordinate system), and the projection matrix of jPCT-AE is built internally from the FOV, the near and far clipping planes, and the camera position and rotation, so I cannot set it directly.
How do I translate the projection matrix and marker matrix to the jPCT-AE engine?
Reviewing my code, it seems jPCT-AE does not pick up the position and back vector correctly when given the full matrix (although I see no reason why it should not), but it does when you split them into separate vectors. These are just my findings from trial and error.
This is how I did it for the camera, using the direction and up vectors.
// Note the order: setDump first, then transformToGL
float[] projection = ARToolKit.getInstance().getProjectionMatrix();
Matrix projMatrix = new Matrix();
projMatrix.setDump(projection);
projMatrix.transformToGL();
SimpleVector translation = projMatrix.getTranslation();
SimpleVector dir = projMatrix.getZAxis();
SimpleVector up = projMatrix.getYAxis();
mCamera.setPosition(translation);
mCamera.setOrientation(dir, up);
And then for the model I extract the translation and rotation. It is important to clear the translation first, since the marker translation is not an absolute position but a modification of the current position. I think this may be the main reason why your objects move off screen.
float[] transformation = ARToolKit.getInstance().queryMarkerTransformation(markerID);
Matrix dump = new Matrix();
dump.setDump(transformation);
dump.transformToGL();
mModel.clearTranslation();
mModel.translate(dump.getTranslation());
mModel.setRotationMatrix(dump);
Also, you should call transformToGL after setDump; I think doing it the other way around is why your models render inverted.
Finally, as an optimization, you should reuse the Matrix objects across frames instead of creating new ones every frame, as in the sketch below.
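A minimal sketch of that reuse, with the marker matrix held as a field (the field and method names here are my own):

private final Matrix markerMatrix = new Matrix(); // reused every frame

public void updateMarkerPose() { // hypothetical per-frame hook
    markerMatrix.setDump(ARToolKit.getInstance().queryMarkerTransformation(markerID));
    markerMatrix.transformToGL(); // again: setDump first, then transformToGL
    mModel.clearTranslation();
    mModel.translate(markerMatrix.getTranslation());
    mModel.setRotationMatrix(markerMatrix);
}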
Related
I'm currently working on Google VR Sdk integration with LibGDX. As a starting point, I used this one: https://github.com/yangweigbh/Libgdx-CardBoard-Extension
However, I was not really satisfied with it, as it does not follow the general project structure of LibGDX projects, so I started to refactor and move things around. So far so good: I have a demo running showing a rotating cube.
Now I wanted to add a Skybox, starting from here: LibGDX 0.9.9 - Apply cubemap in environment
The image is drawn, but does not move with head rotation like the other objects, so I looked deeper into the CardboardCamera class, especially the part about setting the projection matrix.
The Skybox class from above gets a quaternion from the camera's view matrix. However, this matrix is never set within the CardboardCamera class; instead, the projection matrix is set directly, leaving the view matrix unchanged.
So my question is: given the projection matrix, how can I either get the correct quaternion to use for the Skybox, or calculate the view matrix so that its getRotation() method returns the correct values? If neither makes sense, where could I get the correct getRotation() data from?
Relevant code of the CardboardCamera class:
public void setEyeProjection(Matrix4 projection) {
    this.projection.set(projection);
}

final Matrix4 tmpMatrix = new Matrix4();
final Vector3 tmpVec = new Vector3();

@Override
public void update(boolean updateFrustum) {
    // below line does not make much sense as position, direction and up are never set...
    view.setToLookAt(position, tmpVec.set(position).add(direction), up);
    tmpMatrix.set(eyeViewAdjustMatrix);
    Matrix4.mul(tmpMatrix.val, view.val);
    combined.set(projection);
    Matrix4.mul(combined.val, tmpMatrix.val);
    if (updateFrustum) {
        invProjectionView.set(combined);
        Matrix4.inv(invProjectionView.val);
        frustum.update(invProjectionView);
    }
}
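For reference, here is the direction I am exploring; an untested sketch. After update() has run, tmpMatrix holds eyeViewAdjustMatrix * lookAt, i.e. the per-eye view matrix, so its rotation component should be what the Skybox needs. getEyeViewMatrix() is a hypothetical accessor I would add to CardboardCamera:

public Matrix4 getEyeViewMatrix() {
    return tmpMatrix; // only valid after update() has run
}

// Caller side: extract the rotation for the Skybox.
Quaternion q = new Quaternion();
cardboardCamera.getEyeViewMatrix().getRotation(q);
// Depending on the Skybox's convention, q or its conjugate (q.conjugate())
// is the orientation to apply.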
I am trying to create a 2D game. Because I am using OpenGL ES I have to plot everything in 3D, but I just fix the z coordinate, which is fine. Now what I want to do is calculate the angle between the two vectors CP and CT (C = player center, P = point just above the player, T = touch point) so that I can make the player face that direction. I know how to get the angle between two vectors, but my problem is getting all the points onto the same plane (by translating T).
I know that T exists on a plane where (0,0) is the upper left and UP is actually DOWN (visually). I also know that C and P's UP is actually UP, and that their X and Y live on a completely different plane in 3D than T does. I need to get either C and P onto T's plane (which I have tried below) or T onto C and P's plane. Can anyone help me? I am using the standard OpenGL projection model, and I am zoomed out to (0,0,-4), looking directly at (0,0,0). My 2D objects all sit on the z = 1 plane.
private float getRotation(float touch_x, float touch_y)
{
    //center_x = this.getWidth() / 2;
    //center_y = this.getHeight() / 2;
    float cx, cy, tx, ty, ux, uy;
    cx = (player.x * _renderer.centerx);
    cy = (player.y * -_renderer.centery);
    ux = cx;
    uy = cy + 1.0f;
    tx = (touch_x - _renderer.centerx);
    ty = (touch_y - _renderer.centery);
    Log.d(TAG, "center x: " + cx + " y: " + cy);
    Log.d(TAG, "up x: " + ux + " y: " + uy);
    Log.d(TAG, "touched x: " + tx + " y: " + ty);
    float P12 = length(cx, cy, tx, ty);
    float P13 = length(cx, cy, ux, uy);
    float P23 = length(tx, ty, ux, uy);
    // law of cosines: the divisor (2 * P12 * P13) must be parenthesized
    return (float) Math.toDegrees(Math.acos((P12 * P12 + P13 * P13 - P23 * P23) / (2.0 * P12 * P13)));
}
Basically I want to know if there is a way I can translate (tx, ty, -4) to (x, y, 1) using the standard view frustum.
I have tried some other things now. In my touch event I am trying to do this:
float[] coords = new float[4];
GLU.gluUnProject(touch_x, touch_y, -4.0f, renderer.model, 0, renderer.project, 0, renderer.view, 0, coords, 0);
This throws an exception. I am setting up the model, projection and view matrices in the onSurfaceChanged of the Renderer object:
GL11 gl11 = (GL11) gl;
model = new float[16];
project = new float[16];
view = new int[4]; // assign the field here, not a new local, or renderer.view stays null
// GL_MODELVIEW/GL_PROJECTION are matrix-mode tokens; querying the actual
// matrices needs the *_MATRIX tokens:
gl11.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, model, 0);
gl11.glGetFloatv(GL11.GL_PROJECTION_MATRIX, project, 0);
gl11.glGetIntegerv(GL11.GL_VIEWPORT, view, 0);
I have several textbooks on OpenGL, and after dusting one off I found that the term for what I want to do is called picking. Once I knew what I was asking, I found a lot of good web sites and references:
http://www.lighthouse3d.com/opengl/picking/
OpenGL ES (iPhone) Touch Picking
Coordinate Picking with OpenGL ES 2.0
Android OpenGL 3D picking
converting 2D mouse coordinates to 3D space in OpenGL ES
Ray-picking in OpenGL ES 2.0
Android: GLES20: Called unimplemented OpenGL ES API
...
The list is almost innumerable. There are 700 ways to do this, and none of them worked for me. Ultimately I have decided to go back to basics and do a thorough OpenGL ES learning stint, to which end I have bought the book here: http://www.amazon.com/Graphics-Programming-Android-Programmer-ebook/dp/B0070D83W2/ref=sr_1_2?s=digital-text&ie=UTF8&qid=1362250733&sr=1-2&keywords=opengl+es+2.0+android
One thing I have already learnt is that I was most definitely using the wrong type of projection: I should not use full 3D for a 2D game. In order to do picking in a full 3D environment, I would have to cast a ray from the screen point onto the plane where the game takes place. In addition to being a horrendous waste of resources (ray casting per click), there were other tell-tale signs. I would render my player with a circle encompassing her, and as I moved her, the circle would drift off center from the player. That is due to rendering a full 3D environment onto a 2D plane; it just will not produce a professional result. I need to use an orthographic projection, sketched below.
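For the record, a minimal sketch of the orthographic setup I plan to switch to (GL10 fixed pipeline, one GL unit per pixel; width and height come from onSurfaceChanged). With this projection, picking reduces to a simple y-flip of the touch coordinates:

gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0, width, 0, height, -1, 1);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
// picking: worldX = touchX; worldY = height - touchY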
I think you're trying to do too much all at once. I can understand each sentence of your question separately; but strung all together, it's very confusing.
For the exceptions, you probably need to pass identity matrices instead of zero matrices to get a basic 1-to-1 projection.
Then I'd suggest that you scale the y dimension by -1 so all the UPs and DOWNs match at least.
I hope this helps, because I'm not 100% sure what you're trying to do. Particularly, "translate (tx, ty, -4) to (x, y, 1) using the standard view frustum" doesn't make sense to me. You can translate with a translation matrix. You can clip to a view frustum, or project an object from the frustum to a plane (usually the view plane). But if all your Zs are constant, you can just discard them, right? So, assuming x = tx and y = ty, then tz += 5?
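If you do want to stay with the perspective setup, here's an untested sketch of unprojecting the touch point onto your z = 1 plane, assuming model, project and view were captured after the matrices were actually loaded. Note that Android's GLU.gluUnProject writes homogeneous coordinates, so you must divide by w yourself:

private float[] touchToGamePlane(float touchX, float touchY,
                                 float[] model, float[] project, int[] view) {
    float winY = view[3] - touchY; // GL window y grows upward, touch y grows downward
    float[] near = new float[4];
    float[] far = new float[4];
    // winZ is a depth in [0, 1], not a world-space z
    GLU.gluUnProject(touchX, winY, 0f, model, 0, project, 0, view, 0, near, 0);
    GLU.gluUnProject(touchX, winY, 1f, model, 0, project, 0, view, 0, far, 0);
    for (int i = 0; i < 3; i++) {
        near[i] /= near[3];
        far[i] /= far[3];
    }
    // intersect the near->far ray with the plane z == 1
    float t = (1f - near[2]) / (far[2] - near[2]);
    return new float[] {
            near[0] + t * (far[0] - near[0]),
            near[1] + t * (far[1] - near[1]),
            1f
    };
}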
So there's a couple methods in the Android SensorManager to get your phone's orientation:
float[] rotational = new float[9];
float[] orientation = new float[3];
// (gravity and geomagnetic are the latest accelerometer and magnetometer readings)
SensorManager.getRotationMatrix(rotational, null, gravity, geomagnetic);
SensorManager.getOrientation(rotational, orientation);
This gives you a rotation matrix called "rotational" and an array of 3 orientation angles called "orientation". However, I can't use the angles in my AR program - what I need is the actual vectors which represent the axes.
For example, in this image from Wikipedia:
I'm basically being given the α, β, and γ angles (though not exactly, since I don't have an N; I'm being given the angles from each of the blue axes), and I need to find the vectors which represent the X, Y, and Z axes (red in the image). Does anyone know how to do this conversion? The directions on Wikipedia are very complicated, and my attempts to follow them have not worked. Also, I think the data that Android gives you may be in a slightly different order or format than what the conversion directions on Wikipedia expect.
Or as an alternative to these conversions, does anyone know any other ways to get the X, Y, and Z axes from the camera's perspective? (Meaning, what vector is the camera looking down? And what vector does the camera consider to be "up"?)
The rotation matrix in Android provides a rotation from the body (a.k.a. device) frame to the world (a.k.a. inertial) frame. A normal back-facing camera appears in landscape mode on the screen. This is the native mode for a tablet, so the camera has the following axes in the device frame:
camera_x_tablet_body = (1,0,0)
camera_y_tablet_body = (0,1,0)
camera_z_tablet_body = (0,0,1)
On a phone, where portrait is native mode, a rotation of the device into landscape with top turned to point left is:
camera_x_phone_body = (0,-1,0)
camera_y_phone_body = (1,0,0)
camera_z_phone_body = (0,0,1)
Now applying the rotation matrix will put this in the world frame, so (for rotation matrix R[] of size 9):
camera_x_tablet_world = (R[0],R[3],R[6]);
camera_y_tablet_world = (R[1],R[4],R[7]);
camera_z_tablet_world = (R[2],R[5],R[8]);
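As a sketch of how these can be used (untested, assuming android.opengl.Matrix and an eye at the origin), the world-frame axes feed straight into a look-at view matrix:

float[] viewM = new float[16];
// look along the camera's -z axis, with the camera's y axis as "up"
Matrix.setLookAtM(viewM, 0,
        0f, 0f, 0f,          // eye position
        -R[2], -R[5], -R[8], // center = eye - camera_z_world
        R[1], R[4], R[7]);   // up = camera_y_world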
In general, you can use SensorManager.remapCoordinateSystem(); for the phone example above (Display.getRotation() == Surface.ROTATION_90) it yields the axes listed above.
But if the device is rotated differently (ROTATION_270, for example), the remap will be different; a sketch follows.
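A sketch of that remap for the landscape phone case (untested; the axis pair depends on which way the device was turned):

float[] outR = new float[9];
// phone rotated to landscape with the top pointing left (ROTATION_90)
SensorManager.remapCoordinateSystem(rotational,
        SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, outR);
// then read the camera axes from the columns of outR, as above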
Also, an aside: the best way to get orientation in Android is to listen for Sensor.TYPE_ROTATION_VECTOR events. These are filled with the best available orientation on most (i.e. Gingerbread or newer) platforms. The event values are actually the vector part of a quaternion. You can get the full quaternion as follows (the last two lines are one way to get the rotation matrix):
float[] vec = event.values.clone();
float[] quat = new float[4];
SensorManager.getQuaternionFromVector(quat, vec);
float[] rotMat = new float[9];
// note: getRotationMatrixFromVector takes the rotation vector itself, not the quaternion
SensorManager.getRotationMatrixFromVector(rotMat, vec);
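Registering for those events is the usual listener setup; a sketch, assuming sensorManager came from getSystemService(Context.SENSOR_SERVICE):

Sensor rotVec = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
sensorManager.registerListener(listener, rotVec, SensorManager.SENSOR_DELAY_GAME);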
More information at: http://www.sensorplatforms.com/which-sensors-in-android-gets-direct-input-what-are-virtual-sensors
SensorManager.getRotationMatrix(rotational, null, gravityVals, geoMagVals);
// ("Vector" below is any simple 3-component vector class of your own)
// camera's x-axis
Vector u = new Vector(-rotational[1], -rotational[4], -rotational[7]); // right of phone (in landscape mode)
// camera's y-axis
Vector v = new Vector(rotational[0], rotational[3], rotational[6]); // top of phone (in landscape mode)
// camera's z-axis (negative into the scene)
Vector n = new Vector(rotational[2], rotational[5], rotational[8]); // front of phone (the screen)
// world axes (x,y,z):
// +x is East
// +y is North
// +z is sky
The rotation matrix that you receive from getRotationMatrix is based on the gravity field and the magnetic field; in other words, X points East, Y points North, and Z points up, away from the center of the Earth (http://developer.android.com/reference/android/hardware/SensorManager.html).
To the point of your question, I think the three rotation values can be used directly as a vector, provided you supply the values in reverse order:
"For either Euler or Tait-Bryan angles, it is very simple to convert from an intrinsic (rotating axes) to an extrinsic (static axes) convention, and vice-versa: just swap the order of the operations. An (α, β, γ) rotation using X-Y-Z intrinsic convention is equivalent to a (γ, β, α) rotation using Z-Y-X extrinsic convention; this is true for all Euler or Tait-Bryan axis combinations."
Source: Wikipedia.
I hope this helps!
How do I get the current translate position from a Canvas? I am trying to draw stuff where my coordinates are a mix of relative (to each other) and absolute (to the canvas).
Let's say I want to do:
canvas.translate(x1, y1);
canvas.drawSomething(0, 0); // will show up at (x1, y1), all good
// now i want to draw a point at x2,y2
canvas.translate(x2, y2);
canvas.drawSomething(0, 0); // will show up at (x1+x2, y1+y2)
// i could do
canvas.drawSomething(-x1, -y1);
// but i don't always know those coords
This works but is dirty:
private static Point getCurrentTranslate(Canvas canvas) {
float [] pos = new float [2];
canvas.getMatrix().mapPoints(pos);
return new Point((int)pos[0], (int)pos[1]);
}
...
Point p = getCurrentTranslate(canvas);
canvas.drawSomething(-p.x, -p.y);
The Canvas has a getMatrix() method, and Matrix has a setTranslate() but no getTranslate(). I don't want to use canvas.save() and canvas.restore(), because the way I'm drawing things is a little tricky (and probably messy ...).
Is there a cleaner way to get these current coordinates?
You need to reset the transformation matrix first. I'm not an Android developer, but looking at the Android Canvas docs, there is no "reset matrix" method; there is, however, setMatrix(android.graphics.Matrix). It says that if the given matrix is null, it sets the current matrix to the identity matrix, which is what you want. So I think you can reset your position (and scale and skew) with:
canvas.setMatrix(null);
It would also be possible to get the current translation through getMatrix(): mapping the point [0, 0] with mapPoints() (as you already do) tells you where the origin ends up, which is exactly the translation. Note that mapVectors() would not work here, since it ignores the translation component. But in your case I think resetting the matrix is best, as sketched below.
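A sketch of the absolute-coordinate drawing this enables (assuming no scale or skew needs to survive the reset):

canvas.setMatrix(null);   // back to identity: (0, 0) is the canvas origin again
canvas.translate(x2, y2); // now absolute, not relative to the earlier translate
canvas.drawSomething(0, 0);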
I have written a first person camera class for android.
The class is really simple: the camera object has its three axes, X, Y and Z, and there are functions to create the ModelView matrix (i.e. calculateModelViewMatrix()), rotate the camera around its X and Y axes, and translate the camera along its Z axis.
I think my ModelView matrix calculation is correct, and I can also translate the camera along the Z axis.
Rotation around the X axis seems to work, but around the Y axis it gives strange results.
Another problem with the rotation is that instead of the camera being rotated, my 3D model starts rotating around its own axis instead.
I have written another implementation based on the look-at point, using OpenGL ES's GLU.gluLookAt() function to obtain the ModelView matrix, but it seems to suffer from exactly the same problems.
EDIT
First of all thanks for your reply.
I have actually made a second implementation of the Camera class, this time using the rotation functions provided in the android.opengl.Matrix class, as you said.
I have provided the code below, which is much simpler.
To my surprise, the results are "exactly" the same.
This means that my rotation functions and Android's rotation functions produce the same results.
I did a simple test and looked at my data.
I rotated the LookAt point 1 degree at a time around the Y axis and looked at the coordinates. It seems that my LookAt point lags behind the exact rotation angle, e.g. at 20 degrees it has only rotated 10 to 12 degrees, and after 45 degrees it starts reversing back.
There is a class android.opengl.Matrix, which is a collection of static methods that do everything you need on a float[16] you pass in. I highly recommend you use those functions instead of rolling your own. You'd probably want setLookAtM, with the look-at point calculated from your camera angles (using sin and cos, as you are doing in your code; I assume you know how to do this).
-- edit in response to new answer --
(you should probably have edited your original question, by the way - your answer posted as another question confused me for a bit)
Ok, so here's one way of doing it. This is uncompiled and untested. I decided to build the matrix manually instead; perhaps that'll give a bit more information about what's going on...
class TomCamera {
    // These are our inputs - eye position, and the orientation of the camera.
    public float mEyeX, mEyeY, mEyeZ; // position
    public float mYaw, mPitch, mRoll; // Euler angles

    // this is the outputted matrix to pass to OpenGL.
    public float[] mCameraMatrix = new float[16];

    // convert inputs to outputs.
    public void createMatrix() {
        // create a camera matrix (YXZ order is pretty standard)
        // you may want to negate some of these constant 1s to match expectations.
        Matrix.setRotateM(mCameraMatrix, 0, mYaw, 0, 1, 0);
        Matrix.rotateM(mCameraMatrix, 0, mPitch, 1, 0, 0);
        Matrix.rotateM(mCameraMatrix, 0, mRoll, 0, 0, 1);
        Matrix.translateM(mCameraMatrix, 0, -mEyeX, -mEyeY, -mEyeZ);
    }
}
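Hypothetical per-frame usage, combining the result with a projection matrix you already have:

camera.createMatrix();
float[] mvp = new float[16];
// combined = projection * view
Matrix.multiplyMM(mvp, 0, projectionMatrix, 0, camera.mCameraMatrix, 0);
// hand mvp to your shader (or glLoadMatrixf in the fixed pipeline)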