I am rendering several square objects in a sphere (I am at the center of the sphere and the objects are around me), and I am using the phone's rotation sensor to look at the objects in the sphere.
All the objects start from a position around (0,0,0), but at render time I rotate each object to a different angle in the sphere.
This is how I handle the matrices from the moment I get the rotation matrix:
public void position(float[] rotationMatrix, float[] projectionMatrix, float rotationAngle,
                     float xRotation, float yRotation, float zRotation,
                     float xTranslation, float yTranslation, float zTranslation) {
    float[] MVP = new float[16];
    float[] modelMatrix = new float[16];
    System.arraycopy(rotationMatrix, 0, modelMatrix, 0, rotationMatrix.length);
    Matrix.rotateM(modelMatrix, 0, -90f, 1f, 0f, 0f); // correct the axes so that Y points to the sky and Z points forward
    Matrix.rotateM(modelMatrix, 0, rotationAngle, xRotation, yRotation, zRotation); // rotate the object around an axis
    // used to control the viewing distance of the object (currently only the z translation is used)
    Matrix.translateM(modelMatrix, 0, xTranslation, yTranslation, zTranslation);
    Matrix.multiplyMM(MVP, 0, projectionMatrix, 0, modelMatrix, 0);
    textureShaderProgram.setUniforms(MVP, texture);
}
Then in the shader I multiply each object's position (which is basically the same position for every object) by this MVP matrix, and they are rendered around me in a sphere-like world.
This works well. Now what I would like to do is identify when an object is right in front of me: get each object's location at all times, and when I am looking at a certain object, make it selectable or highlighted.
But since every object is multiplied by several matrices, how can I know its location and when I am actually looking at it?
The translation is always stored in the last column of a transformation matrix (or in the last row, depending on whether the matrix is stored column-major or row-major).
Thus the position of an object in world space is the last column of its model matrix.
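For the matrices in the position() method above, a minimal sketch of the "is it in front of me" test could look like this. It assumes the column-major float[16] layout of android.opengl.Matrix (translation at indices 12, 13, 14), and that, because modelMatrix already contains the sensor rotation, the extracted position is in eye space, where the camera looks down the negative Z axis; the 0.95 threshold is an arbitrary example value.

// Minimal sketch: extract the object's eye-space position from the composed
// model matrix and test how close it is to the view direction (0, 0, -1).
float x = modelMatrix[12];
float y = modelMatrix[13];
float z = modelMatrix[14];

float length = (float) Math.sqrt(x * x + y * y + z * z);
float cosAngle = -z / length;        // dot of the normalized position with (0, 0, -1)
boolean inFront = cosAngle > 0.95f;  // inside a cone of roughly 18 degrees; tune as needed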
I'm trying to create an app for Android using OpenGL ES, but I'm having trouble handling touch input.
I've created a class CubeGLRenderer which spawns a Cube. CubeGLRenderer is in charge of the projection and view matrices, and Cube is in charge of its model matrix. The Cube is moving along the positive X axis, with no movement in Y or Z.
CubeGLRenderer updates the view matrix each frame in order to move along with the cube, making the cube look stationary on screen:
Matrix.setLookAtM(mViewMatrix, 0, 0.0f, cubePos.y, -10.0f, 0.0f, cubePos.y, 0.0f, 0.0f, 1.0f, 0.0f);
The projection matrix is recalculated whenever the screen dimensions change (i.e. when the orientation of the device changes). The two matrices are then multiplied and passed to Cube.draw(), where the cube applies its model matrix and renders itself to screen.
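As a sketch of that per-frame flow (mVPMatrix is a hypothetical temporary; the other names follow the snippets in this question):

// Combine projection and view, then hand the result to the cube,
// which applies its own model matrix before drawing.
float[] mVPMatrix = new float[16];
Matrix.multiplyMM(mVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);
cube.draw(mVPMatrix);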
So far, so good. Let's move on to the problem.
I want to touch the screen and calculate the angle from the cube's center in screen coordinates to the point on the screen that I touched.
I thought I'd just accomplish this using GLU.gluProject(), but I'm either not using it correctly or simply haven't understood it at all.
Here's the code I use to calculate the screen coordinates from the cube's world coordinates:
public boolean onTouchEvent(MotionEvent e) {
    Vec3 cubePos = cube.getPos();
    float[] modelMatrix = cube.getModelMatrix();
    float[] modelViewMatrix = new float[16];
    Matrix.multiplyMM(modelViewMatrix, 0, mViewMatrix, 0, modelMatrix, 0);

    int[] view = {0, 0, width, height};
    float[] screenCoordinates = new float[3];
    GLU.gluProject(cubePos.x, cubePos.y, cubePos.z, modelViewMatrix, 0,
            mProjectionMatrix, 0, view, 0, screenCoordinates, 0);

    switch (e.getAction()) {
        case MotionEvent.ACTION_DOWN:
            Log.d("CUBEAPP", "screenX: " + String.valueOf(screenCoordinates[0]));
            break;
    }
    return true;
}
What am I doing wrong?
The same calculation you do in the vertex shader to render the cube should be used to translate the cube's center into screen space.
Normally you would multiply each vertex of the cube by the modelViewProjection matrix and then send it to the fragment shader.
You should use the exact same matrix you use in the vertex shader and multiply the center of the cube with it.
However, multiplying a 4x4 matrix with a Vec4 vertex (x, y, z, 1) gives you a Vec4 result (x2, y2, z2, w).
In order to get screen-space coordinates you need to divide x2 and y2 by w!
After you divide by w, your xy coordinates are supposed to be within the [-1..1]x[-1..1] range.
In order to get the exact pixel you need to normalize x2/w and y2/w into [0..1] and then multiply by the screen width and height.
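As a concrete sketch of those steps with the names from the question (mvpMatrix is a hypothetical temporary; the cube center is taken as the object-space origin, assuming the cube's vertices are modeled around it; the Y flip at the end assumes Android screen coordinates, whose Y axis points down):

// Manual projection of the cube center, following the steps above.
float[] mvpMatrix = new float[16];
Matrix.multiplyMM(mvpMatrix, 0, mProjectionMatrix, 0, modelViewMatrix, 0);

float[] clip = new float[4];
Matrix.multiplyMV(clip, 0, mvpMatrix, 0,
        new float[]{0f, 0f, 0f, 1f}, 0); // cube center in object space

// Perspective divide: clip space -> normalized device coordinates in [-1..1]
float ndcX = clip[0] / clip[3];
float ndcY = clip[1] / clip[3];

// Viewport transform: NDC -> pixels
float screenX = (ndcX * 0.5f + 0.5f) * width;
float screenY = (1f - (ndcY * 0.5f + 0.5f)) * height; // flip Y for Android screen coords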
Hope this helps.
I have a View which is rotated around the X and Y axes using View.setRotationX and View.setRotationY. I have used View.getMatrix() and modified the values of the Matrix. I would now like to apply the Matrix back to the View, but I have not found a good way of doing this without using the legacy Animation API in Android.
Basically, what I need is to convert the Matrix values into View transformation values.
Example:
float[] src = new float[]{0, 0, view.getWidth(), 0, 0, view.getHeight(), view.getWidth(), view.getHeight()};
float[] dst = new float[8];
Matrix matrix = view.getMatrix();
matrix.mapPoints(dst, src);
dst[7] = NEW_Y_COORD_OF_CORNER;
matrix.setPolyToPoly(src, 0, dst, 0, dst.length >> 1);
//The matrix is now changed but the View is not.
So I would like to get the rotation and translation from the Matrix to apply them back to the View:
float newRotationX = convertMatrixRotationToViewRotationX(matrix);          // method I need
float newRotationY = convertMatrixRotationToViewRotationY(matrix);          // method I need
float newTranslationX = convertMatrixTranslationToViewTranslationX(matrix); // method I need
float newTranslationY = convertMatrixTranslationToViewTranslationY(matrix); // method I need
view.setRotationX(newRotationX);
view.setRotationY(newRotationY);
view.setX(newTranslationX);
view.setY(newTranslationY);
This might seem like a far too complex way of transforming the View, but I need to do it this way in order to be able to set the x,y corner coordinates of the View. For more info on getting the corner coordinates of a View, see my previous post here: https://stackoverflow.com/questions/24330073/how-to-get-all-4-corner-coordinates-of-a-view-in-android?noredirect=1#comment37672700_24330073
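For the translation part at least, the values can be read straight out of an android.graphics.Matrix with getValues(); here is a partial sketch (recovering rotationX and rotationY from the flattened 3x3 matrix is the hard part, and is not attempted here):

// Partial sketch: translation only. android.graphics.Matrix stores nine
// values in row-major order; MTRANS_X and MTRANS_Y index the translation.
float[] values = new float[9];
matrix.getValues(values);
view.setX(values[Matrix.MTRANS_X]);
view.setY(values[Matrix.MTRANS_Y]);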
I'm moving a bitmap along a path on a Canvas. The path has various curves and the bitmap follows along. pm.getMatrix does a really lovely job of handling the position and rotation adjustments along the path when it's passed the PathMeasure.POSITION_MATRIX_FLAG and TANGENT_MATRIX_FLAG; however, it rotates the bitmap pivoted on the (0,0) coordinate. I need it to pivot on the center of the bitmap.
I cracked open the matrix in the debugger, and it appears that there is indeed no spoon. There are, however, three arrays of floats, each containing three floats. I'm guessing that if I can get those values, I can probably figure out which of them describes the rotation of the object, and there's probably some way to alter the pivot point? I see no other way to do it... Would love some guidance on at least what those three float arrays actually describe.
PathMeasure pm = new PathMeasure(playerPath, false);
float fSegmentLen = pm.getLength() / numSteps;
Matrix mxTransform = new Matrix();
pm.getMatrix(fSegmentLen * iCurStep, mxTransform,
PathMeasure.POSITION_MATRIX_FLAG + PathMeasure.TANGENT_MATRIX_FLAG );
canvas.drawBitmap(playerCar, mxTransform, null);
try this:
private void setDrawingMatrix(float distance) {
    pm.getMatrix(distance, mxTransform,
            PathMeasure.POSITION_MATRIX_FLAG | PathMeasure.TANGENT_MATRIX_FLAG);
    // shift the bitmap so that its center, not its (0,0) corner, lands on the path point
    mxTransform.preTranslate(-playerCar.getWidth() / 2.0f, -playerCar.getHeight() / 2.0f);
}
and then in onDraw method:
canvas.drawBitmap(playerCar, mxTransform, null);
happy driving...
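The preTranslate is what moves the pivot: because it is prepended, the half-width/half-height shift is applied to the bitmap before the path matrix's rotation and translation, so the rotation pivots on the bitmap's center instead of on its (0,0) corner.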
I'm building a "navigation type" app for android.
For the navigation part I'm building an Activity where the user can move and zoom the map (which is a bitmap) using touch events, and also the map rotate around the center of the screen using the compass.
I'm using Matrix to scale, transpose and rotate the image, and than I draw it to the canvas.
Here is the code called on loading of the view, to center the image in the screen:
image = new Matrix();
image.setScale(zoom, zoom);
image_center = new PointF(bmp.getWidth() / 2, bmp.getHeight() / 2);
float centerScaledWidth = image_center.x * zoom;
float centerScaledHeight = image_center.y * zoom;
image.postTranslate(
        screen_center.x - centerScaledWidth,
        screen_center.y - centerScaledHeight);
The rotation of the image is done using the postRotate method.
Then in the onDraw() method I only call
canvas.drawBitmap(bmp, image, drawPaint);
The problem is that when the user touches the screen, I want to get the touched point on the image, but apparently I can't get the correct position.
I tried to invert the image matrix and map the touched points through it, but it isn't working.
Does somebody know how to translate the point coordinates?
EDIT
I'm using this code for the translation.
dx and dy are translation values obtained from the onTouch listener.
new_center is an array of float values of the form {x0, y0, x1, y1, ...}
Matrix translated = new Matrix();
Matrix inverted = new Matrix();
translated.set(image);
translated.postTranslate(dx, dy);
translated.invert(inverted);
inverted.mapPoints(new_center);
translated.mapPoints(new_center);
Log.i("new_center", new_center[0]+" "+new_center[1]);
Actually I tried using new_center = {0, 0}:
Applying only the translated matrix, I get, as expected, the distance between the (0,0) point of the bmp and the (0,0) point of the screen, but it seems not to take the rotation into account.
Applying the inverted matrix to the points, I get these results while moving the image in every possible way:
12-26 13:26:08.481: I/new_center(11537): 1.9073486E-6 -1.4901161E-7
12-26 13:26:08.581: I/new_center(11537): 0.0 -3.874302E-7
12-26 13:26:08.631: I/new_center(11537): 1.9073486E-6 1.2516975E-6
12-26 13:26:08.781: I/new_center(11537): -1.9073486E-6 -5.364418E-7
12-26 13:26:08.951: I/new_center(11537): 0.0 2.682209E-7
12-26 13:26:09.093: I/new_center(11537): 0.0 7.003546E-7
Instead I was expecting the coordinates translated onto the image.
Is my line of thought correct?
OK, I got it.
First I separated the rotation from the translation and zooming of the image.
Because I created a custom ImageView, this was simple: I apply the rotation to the canvas of the ImageView, and the other transformations to the matrix of the image.
I keep track of the canvas's matrix through a global matrix variable.
Some code:
To get the correct movement for the corresponding onTouch event, I first "rotate back" the points passed from onTouch (the start and stop points) using the inverse of the canvas's matrix.
Then I calculate the differences in x and y and apply them to the image matrix:
float[] movement = {start.x, start.y, stop.x, stop.y};
Matrix c_t = new Matrix();
canvas.invert(c_t);      // "canvas" is the global matrix tracking the canvas rotation
c_t.mapPoints(movement); // rotate the touch points back
float dx = movement[2] - movement[0];
float dy = movement[3] - movement[1];
image.postTranslate(dx, dy);
If instead you want to check that the image movement doesn't exceed its size, put this code before the image.postTranslate(dx, dy):
float[] new_center = {screen_center.x, screen_center.y};
Matrix copy = new Matrix();
copy.set(image);
copy.postTranslate(dx, dy);
Matrix translated = new Matrix();
copy.invert(translated);
translated.mapPoints(new_center);
if ((new_center[0] > 0) && (new_center[0] < bmp.getWidth()) &&
        (new_center[1] > 0) && (new_center[1] < bmp.getHeight())) {
    // you can remove the image.postTranslate and copy the "copy" matrix instead
    image.set(copy);
    ...
It's important to note that:
A) The rotation center of the image is the center of the screen, so it will not change coordinates during the canvas rotation
B) You can use the coordinates of the center of the screen to get the rotation center of the image.
With this method you can also convert every touch event to image coordinates.
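For example, here is a sketch of that conversion, assuming canvas is the global matrix variable mentioned above and touchX/touchY come from the touch event:

// Map a screen touch point into bitmap coordinates by undoing the
// canvas rotation first, then the image scale/translation.
float[] point = {touchX, touchY};

Matrix inverseCanvas = new Matrix();
canvas.invert(inverseCanvas);
inverseCanvas.mapPoints(point); // undo the rotation around the screen center

Matrix inverseImage = new Matrix();
image.invert(inverseImage);
inverseImage.mapPoints(point);  // undo scale and translation

// point[0], point[1] are now coordinates on the bitmap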
As a beginner to Android and OpenGL ES 2.0, I'm testing simple things and seeing how they go.
I downloaded the sample at http://developer.android.com/training/graphics/opengl/touch.html .
I changed the code to check whether I could animate a rotation of the camera around the (0,0,0) point, the center of the square.
So I did this:
public void onDrawFrame(GL10 unused) {
    // Draw background color
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    // Set the camera position (View matrix)
    long time = SystemClock.uptimeMillis() % 4000L;
    float angle = ((float) (2 * Math.PI) / (float) 4000) * ((int) time);
    Matrix.setLookAtM(mVMatrix, 0, (float) (3 * Math.sin(angle)), 0, (float) (3.0f * Math.cos(angle)),
            0, 0, 0, 0f, 1.0f, 0.0f);

    // Calculate the projection and view transformation
    Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);

    // Draw square
    mSquare.draw(mMVPMatrix);
}
I expected the camera to always look at the center of the square (the (0,0,0) point), but that's not what happens. The camera is indeed rotating around the square, but the square does not stay in the center of the screen; instead it moves along the X axis.
I also expected that if we gave eyeX and eyeY the same values as centerX and centerY, like this:
Matrix.setLookAtM(mVMatrix, 0, 1, 1, -3, 1, 1, 0, 0f, 1.0f, 0.0f);
the square would keep its shape (I mean, your field of vision would be dragged along a plane parallel to the square), but that's also not what happens.
This is my projection matrix:
float ratio = (float) width / height;
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 2, 7);
What is going on here?
Looking at the source code of the example you downloaded, I can see why you're having that problem: it has to do with the order of the matrix multiplication.
Typically in OpenGL source you see matrices set up such that
transformed vertex = projMatrix * viewMatrix * modelMatrix * input vertex
However, in the example program that you downloaded, the shader is set up like this:
" gl_Position = vPosition * uMVPMatrix;"
with the position on the other side of the matrix. You can work with OpenGL in this way, but it requires that you reverse the lhs/rhs of your matrix multiplications.
Long story short, in your case, you should change your shader to read:
" gl_Position = uMVPMatrix * vPosition;"
and then I believe you will get the expected behavior.
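A side note on why the original version misbehaves: in GLSL, vPosition * uMVPMatrix is equivalent to transpose(uMVPMatrix) * vPosition, so the sample was effectively applying the transposed matrix. The consistent column-vector pairing is:

// CPU side: compose right to left, projection * view (* model, if there were one)
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
// Shader side: gl_Position = uMVPMatrix * vPosition;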