Android Matrix.multiplyMV not changing my result by the translation

I am trying to translate a set of points from one coordinate system to another for my Android OpenGL project.
Assume matrix is a float array of 16 elements.
Assume points is a reference to an array of float arrays, each with 4 elements (points = new float[8][4]).
I set the matrix to an identity matrix, translate it, and multiply it by the game object's rotation. I then try to translate each of the 8 vertices into the new matrix's coordinate system, but none of the points change.
Matrix.setIdentityM(matrix, 0);
Matrix.translateM(matrix, 0, go.getPosition().getX(), go.getPosition().getY(), go.getPosition().getZ());
Matrix.multiplyMM(matrix, 0, matrix, 0, go.getRotationArray(), 0);
//Matrix.rotateM(matrix, 0, 30f, 1.5f, -5f, 0f); // testing/debug purposes
for (int i = 0; i < 8; i++)
{
    Matrix.multiplyMV(points[i], 0, matrix, 0, points[i], 0);
}
I am basically trying to do what Android's 2D Canvas/Matrix 'mapPoints' does.

When you have a vector, the last element of the 4-element vector must be 1, not 0! So none of the translation data was being applied the way I wanted. (Well, it was being multiplied correctly, just not how I wanted it.) So when android.opengl.Matrix asks for a four-element vector whose first three elements represent your x, y, z, make sure the fourth element is set to 1!
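A minimal sketch of that fix, reusing the points and matrix arrays from the question (the temporary array is mine, added so the vector isn't written while it is being read):
float[] transformed = new float[4];
for (int i = 0; i < 8; i++) {
    points[i][3] = 1f;  // w component: 1 for positions so the translation is applied, 0 for directions
    Matrix.multiplyMV(transformed, 0, matrix, 0, points[i], 0);
    System.arraycopy(transformed, 0, points[i], 0, 4);
}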

Related

Trouble mapping device coordinate system to real-world (rotation vector) coordinate system in Processing Android

I know this question has been asked many many times, but with all the knowledge out there I still can't get it to work for myself in the specific setting I now find myself in: Processing for Android.
The coordinate systems involved are (1) the real-world coordinate system as per Android's view: y is tangential to the ground and pointing north, z goes up into the sky, and x goes to your right, if you're standing on the ground and looking north; and (2) the device coordinate system as per Processing's view: x points to the right of the screen, y down, and z comes out of the screen.
The goal is simply to draw a cube on the screen and have it rotate on device rotation such that it appears stable in actual space. That is: I want a map between the two coordinate systems so that I can draw in terms of the real-world coordinates instead of the screen coordinates.
In the code I'm using the Ketai sensor library and subscribe to the onRotationVectorEvent(float x, float y, float z) event. I also have a simple quaternion class lying around that I got from https://github.com/kynd/PQuaternion. So far I have the following code, in which I try the mapping in two different ways that coincide, but neither works as I want it to:
import ketai.sensors.*;
KetaiSensor sensor;
PVector rotationAngle = new PVector(0, 0, 0);
Quaternion rot = new Quaternion();

void setup() {
  fullScreen(P3D);
  sensor = new KetaiSensor(this);
  sensor.start();
}

void draw() {
  background(#333333);
  translate(width/2, height/2);
  lights();
  // method 1: draw lines for real-world axes in terms of processing's coordinates
  PVector rot_x_axis = rot.mult(new PVector(400, 0, 0));
  PVector rot_y_axis = rot.mult(new PVector(0, 0, -400));
  PVector rot_z_axis = rot.mult(new PVector(0, 400, 4));
  stroke(#ffffff);
  strokeWeight(8); line(0, 0, 0, rot_x_axis.x, rot_x_axis.y, rot_x_axis.z);
  strokeWeight(5); line(0, 0, 0, rot_y_axis.x, rot_y_axis.y, rot_y_axis.z);
  strokeWeight(2); line(0, 0, 0, rot_z_axis.x, rot_z_axis.y, rot_z_axis.z);
  // method 2: first rotate appropriately
  fill(#f4f7d2);
  rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, rotationAngle.y, rotationAngle.z);
  box(200, 200, 200);
}

void onRotationVectorEvent(float x, float y, float z) {
  rotationAngle = new PVector(x, y, z);
  // I believe these two do the same thing.
  rot.set(x, y, z, cos(asin(rotationAngle.mag())));
  //rot.setAngleAxis(asin(rotationAngle.mag())*2, rotationAngle);
}
The above works well enough that the real-world axis lines coincide with the drawn cube, and both rotate in an interesting way. But still, there seems to be some "gimbal stuff" going on: when I rotate my device up and down while standing one way, the cube also rotates up and down, but standing another way, the cube rotates sideways, as if I'm applying the rotations in the wrong order. However, I'm working with quaternions precisely to avoid gimbal problems; how can they still apply?
I've solved it now, simply with a "click to test next configuration" UI that cycles through all 6 * 8 possible configurations of rotate(asin(rotationAngle.mag()) * 2, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>); the solution turned out to be 0, -1, 2, i.e.:
rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, -rotationAngle.y, rotationAngle.z);
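If the quaternion path (method 1) should match that, presumably the same sign flip applies there too; a sketch of that guess, reusing the question's own rot.set(...) form (an untested assumption, not part of the original answer):
void onRotationVectorEvent(float x, float y, float z) {
  rotationAngle = new PVector(x, y, z);
  // Same flip of the y component as in the rotate() call above,
  // so the drawn real-world axes agree with the rotated cube.
  rot.set(x, -y, z, cos(asin(rotationAngle.mag())));
}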

Scale, rotate, translate with matrices in OpenGL ES 2.0

I'm working with OpenGL ES 2.0 and trying to build my object class with some methods to rotate/translate/scale objects.
I set up my object at (0,0,0) and move it afterwards to the desired position on the screen. Below are my methods to move it separately. After that I run buildObjectModelMatrix to combine all the matrices into one objectMatrix, so I can take the vertices, multiply them by my modelMatrix/objectMatrix, and render afterwards.
What I think is right is that I have to multiply my matrices in this order:
[scale] x [rotation] = [temp]
[temp] x [translation] = [objectMatrix]
I've found some literature. Maybe I'll get it in a few minutes; if I do, I'll update this.
Beginning Android 3D
http://gamedev.stackexchange.com
setIdentityM(scaleMatrix, 0);
setIdentityM(translateMatrix, 0);
setIdentityM(rotateMatrix, 0);

public void translate(float x, float y, float z) {
    translateM(translateMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

public void rotate(float angle, float x, float y, float z) {
    rotateM(rotateMatrix, 0, angle, x, y, z);
    buildObjectModelMatrix();
}

public void scale(float x, float y, float z) {
    scaleM(scaleMatrix, 0, x, y, z);
    buildObjectModelMatrix();
}

private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, scaleMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, translateMatrix, 0);
}
SOLVED:
The problem with the whole thing is that if you scale before you translate, you get a difference in the distance you translate! The correct code for multiplying your matrices should be (correct me if I'm wrong):
private void buildObjectModelMatrix() {
    multiplyMM(tempM, 0, translateMatrix, 0, rotateMatrix, 0);
    multiplyMM(objectMatrix, 0, tempM, 0, scaleMatrix, 0);
}
With this you translate and rotate first; afterwards you can scale the object.
Tested with multiple objects... so I hope this helps :)
You know, this is the most common issue people run into when they begin dealing with matrix operations. Matrix multiplication works as if you were looking from the object's first-person view and receiving commands: for instance, if you began at (0,0,0) facing toward the positive X axis with up being the positive Y axis, then translate (a,0,0) would mean "go forward", translate (0,0,a) would mean "go left", and rotate (a, 0, 1, 0) would mean "turn left"...
So if in your case you scaled by 3 units, rotated by 90 degrees, and then translated by (2,0,0), what happens is: you first enlarge yourself by a factor of 3, then turn 90 degrees so you are now facing positive Z while still being quite large. Then you go forward by 2 units measured in your own coordinate system, which means you actually move 2*3 units in the world. So you end up at (0,0,6) looking toward the positive Z axis.
I believe this is the best way to imagine what goes on when dealing with such operations, and it might save your life when you are hunting a bug in the order of matrix operations.
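To make the ordering concrete, here is a small sketch using android.opengl.Matrix (whose translateM/rotateM/scaleM calls post-multiply, so they read in exactly this first-person command order); the scenario is the one above, and the variable names are mine:
float[] m = new float[16];
Matrix.setIdentityM(m, 0);
Matrix.scaleM(m, 0, 3f, 3f, 3f);        // enlarge yourself by 3
Matrix.rotateM(m, 0, 90f, 0f, 1f, 0f);  // turn 90 degrees about the Y axis
Matrix.translateM(m, 0, 2f, 0f, 0f);    // go "forward" 2 units along your local X

float[] origin = {0f, 0f, 0f, 1f};      // w = 1: a position, not a direction
float[] world = new float[4];
Matrix.multiplyMV(world, 0, m, 0, origin, 0);
// world is now roughly 6 units along the Z axis (the sign depends on the
// rotation convention), not 2 units along world X: the translation happened
// along the rotated, scaled local axis.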
You should know that although this kind of matrix handling is normal when you are beginning with a 3D scene, you should try to move to a better system as soon as possible. What I mostly use is an object structure/class which contains 3 vectors: position, forward and up (this is much like using glLookAt, but not exactly the same). With these 3 vectors you can simply set a specific position or rotation using trigonometry or your matrix tools, multiplying the vectors with matrices instead of matrices with matrices. Or you can work with them internally (first person), where for instance "go forward" would be done as position = position + forward*scale, and "turn left" would be rotating the forward vector around the up vector. Anyway, I hope you can work out how to manipulate those 3 vectors to get the desired effect... To reconstruct the matrix from those 3 vectors, you need to generate a fourth vector, right, which is the cross product of up and forward; the model matrix then consists of:
right.x, right.y, right.z, .0
up.x, up.y, up.z, .0
forward.x, forward.y, forward.z, .0
position.x, position.y, position.z, 1.0
Just note the row-column order may change depending on what you are working with.
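As a sketch of that layout (assuming the column-major storage used by android.opengl.Matrix and OpenGL ES; the helper and its parameters are hypothetical, each vector a float[3]):
private void buildModelMatrix(float[] m, float[] right, float[] up,
                              float[] forward, float[] position) {
    // Each basis vector becomes one column of the 4x4 column-major matrix;
    // the position goes into the last column (elements 12..14).
    m[0] = right[0];     m[1] = right[1];     m[2] = right[2];     m[3] = 0f;
    m[4] = up[0];        m[5] = up[1];        m[6] = up[2];        m[7] = 0f;
    m[8] = forward[0];   m[9] = forward[1];   m[10] = forward[2];  m[11] = 0f;
    m[12] = position[0]; m[13] = position[1]; m[14] = position[2]; m[15] = 1f;
}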
I hope this gives you some better understanding...

Zoom in OpenGL ES

I want to implement zoom in/out of my plane object.
Right now I try scaling:
Matrix.translateM(mModelMatrix, 0, mFocalPoint.x, mFocalPoint.y, 0f);
Matrix.scaleM(mModelMatrix, 0, mCurrentScaleFactor, mCurrentScaleFactor, 1f);
Matrix.translateM(mModelMatrix, 0, -mFocalPoint.x, -mFocalPoint.y, 0f);
The first zoom gives the right result, but on the next zoom I have a problem: it looks like the focal point is calculated based on the old matrix.
Here is how I calculate the focal point:
float glX = detector.getFocusX() * mScaleCoefX - mGLSceneWidth/2;
float glY = mGLSceneHeight - detector.getFocusY() * mScaleCoefY - mGLSceneHeight/2;
mFocalPoint = new PointF(glX, glY);
Also, I save my model matrix after each zoom and restore it before each draw.
So I have a question: why doesn't my zoom work if I save the matrix after each zoom and start scaling from the new matrix?
Also, should I maybe recalculate my mFocalPoint?
Each time you use a matrix, be sure to initialize it with the identity matrix before you apply translate, scale or rotate. I don't see that in your code. The calculated matrix should then be multiplied with the projection matrix, either before the vertex shader or inside it.
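A minimal sketch of that suggestion applied to the question's zoom code (mModelMatrix is assumed to be a float[16]; the mMVPMatrix/mProjectionMatrix names at the end are hypothetical):
// Rebuild the model matrix from identity on every scale change, instead of
// accumulating new scales on top of the matrix saved from the previous zoom.
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, mFocalPoint.x, mFocalPoint.y, 0f);
Matrix.scaleM(mModelMatrix, 0, mCurrentScaleFactor, mCurrentScaleFactor, 1f);
Matrix.translateM(mModelMatrix, 0, -mFocalPoint.x, -mFocalPoint.y, 0f);
// Then combine with the projection matrix before drawing, e.g.
// Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mModelMatrix, 0);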

Rotate an object, but always translate the object along its own front axis

I want to program a racing game for Android. My problem is that when I rotate the car and then want to translate its position, it doesn't translate in the car's new direction, but always along the world's X axis.
Here is my wrong code... thank you
gl.glTranslatef(car.position.x, car.position.y, car.position.z);
gl.glRotatef(car.currentAngle, 0, 1, 0);
OpenGL uses matrices to create images.
Matrix multiplication is not commutative, so the order matters. Therefore when you rotate an object and then translate it, the object ends up in a different position than if you had translated it first.
A solution is to keep the rotation and the translation separate: translate the object to where you want it, then apply the rotation on top, so you can translate anywhere you want without worrying about the rotation reordering your axes.
To see the effect of this order-dependence on your object, try this: alternate rotating and translating your object about 8 times each. You will notice that your object drifts away and eventually disappears from view instead of simply rotating in place while moving, because each translation is applied along the freshly rotated axes.
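For instance, with the fixed-function calls from the question, these two orders behave differently (a sketch; gl is the GL10 instance and car is the question's object):
// Translate in world space first, then rotate: the car spins where it stands.
gl.glTranslatef(car.position.x, car.position.y, car.position.z);
gl.glRotatef(car.currentAngle, 0, 1, 0);

// Rotate first, then translate: the translation now happens along the rotated
// axes, so the same position values put the car somewhere else for every angle.
// gl.glRotatef(car.currentAngle, 0, 1, 0);
// gl.glTranslatef(car.position.x, car.position.y, car.position.z);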
OK, I have the solution. All I have to do is translate my car along the new direction vector, which changes with the car's new angle :)
if (accel < 0)
    position.add((float) Math.sin(currentAngle * Math.PI/180)/5, 0, (float) Math.cos(currentAngle * Math.PI/180)/5);
if (accel > 0)
    position.sub((float) Math.sin(currentAngle * Math.PI/180)/5, 0, (float) Math.cos(currentAngle * Math.PI/180)/5);
and in the rendering class
gl.glTranslatef(car.position.x, car.position.y, car.position.z);
gl.glRotatef(car.currentAngle, 0, 1, 0);

How to find the current translate position in Canvas?

How do I get the current translate position from a Canvas? I am trying to draw stuff where my coordinates are a mix of relative (to each other) and absolute (to canvas).
Let's say I want to do:
canvas.translate(x1, y1);
canvas.drawSomething(0, 0); // will show up at (x1, y1), all good
// now I want to draw a point at (x2, y2)
canvas.translate(x2, y2);
canvas.drawSomething(0, 0); // will show up at (x1+x2, y1+y2)
// I could do
canvas.drawSomething(-x1, -y1);
// but I don't always know those coords
This works but is dirty:
private static Point getCurrentTranslate(Canvas canvas) {
    float[] pos = new float[2];
    canvas.getMatrix().mapPoints(pos);
    return new Point((int) pos[0], (int) pos[1]);
}
...
Point p = getCurrentTranslate(canvas);
canvas.drawSomething(-p.x, -p.y);
The Canvas has a getMatrix method, and the Matrix has setTranslate but no getTranslate. I don't want to use canvas.save() and canvas.restore(), because the way I'm drawing things is a little tricky (and probably messy...).
Is there a cleaner way to get these current coordinates?
You need to reset the transformation matrix first. I'm not an Android developer, but looking at the Android Canvas docs there is no reset-matrix method; there is, however, setMatrix(android.graphics.Matrix). It says that if the given matrix is null, it sets the current matrix to the identity matrix, which is what you want. So I think you can reset your position (and scale and skew) with:
canvas.setMatrix(null);
It would also be possible to get the current translation through getMatrix: mapping the point [0,0] with mapPoints() (as your helper already does) tells you where the origin ends up, which is your translation. But in your case I think resetting the matrix is best.
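If you do want to read the translation directly instead of resetting, one sketch (untested) pulls it out of the matrix values via android.graphics.Matrix.getValues and its MTRANS_X/MTRANS_Y constants:
private static Point getCurrentTranslate(Canvas canvas) {
    float[] values = new float[9];
    canvas.getMatrix().getValues(values);
    // MTRANS_X / MTRANS_Y index the translation entries of the 3x3 matrix,
    // i.e. where the point (0, 0) is mapped to.
    return new Point((int) values[Matrix.MTRANS_X], (int) values[Matrix.MTRANS_Y]);
}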
