Visually correct rotation with OpenGL and touch screen capabilities - Android

I have been building a Rubik's Cube for Android and have a question about rotations. I want to rotate the figure in a visually correct way: if the user touches the screen and moves a finger to the right, the figure should rotate to the right from the observer's point of view. But after a few rotations the figure starts moving in the wrong direction. I understand this happens because the axes change their orientation along with the model. I tried using the inverse model matrix to get the necessary coordinates, but without result so far. Could anybody give me an example, or a link, for visually correct rotation of a 3D figure with a mouse or touch screen? Here is my current code:
//get vector 3D of touch
Vector3f touchVector = getRubikSystemCoordinates(mTouchX,mTouchY,square.rubikRotationMatrix);
//Get vector 3D of move
Vector3f moveVector = getRubikSystemCoordinates(mMoveX,mMoveY,square.rubikRotationMatrix);
//get motion vector (note: this must be a vector, not a float)
Vector3f direction = touchVector.substractFrom(moveVector);
//get axis for rotation
Vector3f axis = touchVector.vectorProductTo(moveVector);
//normalize axis
axis.normalize();
//get angle of rotation from the length of the motion vector
float angle = direction.length();
//make an identity quaternion
Quaternion quad = new Quaternion();
//make a rotation quaternion from the angle and axis
quad.makeRotationKvaternion(angle,axis);
//from the quaternion receive a matrix
Matrix4f matrix = quad.toMatrix();
//multiply to current modelview matrix
gl.glMultMatrixf(matrix.returnArray(),0);
//save rotation matrix
square.rotationMatrix = square.rotationMatrix.multiply(matrix);
//save modelView matrix
square.saveModelView(square.initMatrix.returnArray());
// touch coords to current modelView coords
private Vector3f getRubikSystemCoordinates(float x, float y, Matrix4f matrix){
// touch coords to normal coords of screen
Vector2f normalCoords = (new Vector2f(x,y)).toNormalScreenCoordinates(Settings.viewPort[2],Settings.viewPort[3]);
// to sphere coords in 3D
Vector3f sphereVector = new Vector3f(normalCoords.x,normalCoords.y, FloatMath.sqrt(2-normalCoords.x*normalCoords.x-normalCoords.y*normalCoords.y));
//Get inverse matrix from ModelView Matrix
Matrix4f m = matrix.inverseMatrix();
//transform the sphere vector into the model's coordinate system
Vector3f vector = m.multiplyToVector(sphereVector);
// make normalize vector
vector.normalize();
return vector;
}
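One fragile spot in the mapping above: `FloatMath.sqrt(2 - x*x - y*y)` returns NaN whenever the touch lands outside the sphere's silhouette. The classic trackball mapping guards against this by switching to a hyperbolic sheet outside the sphere. Here is a plain-Java sketch of that idea; the class and method names are illustrative, not part of the question's API:

```java
// Project normalized screen coordinates onto a unit sphere.
// Inside the silhouette the point lies on the sphere; outside it,
// fall back to a hyperbola so sqrt never sees a negative argument.
class Trackball {
    static float[] toSphere(float x, float y) {
        float d2 = x * x + y * y;
        float z = d2 <= 0.5f
                ? (float) Math.sqrt(1.0 - d2)     // on the sphere
                : 0.5f / (float) Math.sqrt(d2);   // on the hyperbola
        float len = (float) Math.sqrt(d2 + z * z);
        // return a unit vector, ready for cross/dot products
        return new float[] { x / len, y / len, z / len };
    }
}
```

A touch at the center maps to (0, 0, 1); a touch far outside the silhouette still yields a finite unit vector instead of NaN.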

What you're looking for is called arcball rotation. You'll find plenty of Java resources for it around the internet.

You are probably storing your rotation as three angles. Use a matrix instead: create a separate transformation matrix just for this rotation, and every time the user rotates the object, apply the rotation to that matrix. This way the movement will always be relative to the current orientation.
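As a sketch of that accumulation idea, here is a minimal plain-Java orientation store built on a quaternion (which the question's code already constructs). Each incremental drag is pre-multiplied onto the current orientation, so it always acts in the observer's frame. All names are illustrative, not the Android or question API:

```java
// Accumulate drag rotations in a single quaternion, then convert to a
// column-major matrix for OpenGL. No Android dependencies.
class ArcballState {
    // orientation quaternion (w, x, y, z), starts at identity
    float w = 1, x = 0, y = 0, z = 0;

    // apply an incremental world-space rotation of `angle` radians about a unit axis
    void rotate(float angle, float ax, float ay, float az) {
        float h = angle * 0.5f;
        float s = (float) Math.sin(h), c = (float) Math.cos(h);
        float qw = c, qx = ax * s, qy = ay * s, qz = az * s;
        // pre-multiply: newOrientation = increment * current
        float nw = qw*w - qx*x - qy*y - qz*z;
        float nx = qw*x + qx*w + qy*z - qz*y;
        float ny = qw*y - qx*z + qy*w + qz*x;
        float nz = qw*z + qx*y - qy*x + qz*w;
        w = nw; x = nx; y = ny; z = nz;
    }

    // column-major 4x4 rotation matrix, suitable for glMultMatrixf
    float[] toMatrix() {
        return new float[] {
            1-2*(y*y+z*z), 2*(x*y+w*z),   2*(x*z-w*y),   0,
            2*(x*y-w*z),   1-2*(x*x+z*z), 2*(y*z+w*x),   0,
            2*(x*z+w*y),   2*(y*z-w*x),   1-2*(x*x+y*y), 0,
            0, 0, 0, 1
        };
    }
}
```

Calling `rotate` twice with 90 degrees about z and reading `toMatrix()` gives the 180-degree z-rotation matrix, i.e. the increments compose relative to the current orientation rather than fixed axes.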

Related

Mapping real-world coordinates to the OpenGL coordinate system

I'm currently trying to implement an AR browser based on indoor maps, but I'm facing several problems. Let's take a look at the figure:
In this figure, I've already changed the coordinate to OpenGL's right-handed coordinate system.
In our real-world scenario, given the angle FOV/2 and the camera height h, I can get the nearest visible point P(0,0,-n). Given the angle B and the camera height h, I can get a point Q(0,0,-m) between the nearest and the farthest visible points.
Here comes the problem: when I finish setting up my vertices (including P and Q) and use the method Matrix.setLookAtM like
Matrix.setLookAtM(modelMatrix, 0, 0f,h,0f,0f,-2000f,0f,0f,1f,0f);
the aspect ratio is incorrect.
If the camera height h is set to 0.92 and the FOV is set to 68 degrees, n should be 1.43, but in OpenGL the coordinate of the nearest point is not (0,0,-1.43f). So I'm wondering how to fix this problem: how do I map real-world coordinates to OpenGL's coordinate system?
In a rendering, each mesh of the scene is usually transformed by the model matrix, the view matrix and the projection matrix.
Model matrix:
The model matrix defines the location, orientation and relative size of a mesh in the scene. The model matrix transforms the vertex positions of the mesh to world space.
View matrix:
The view matrix describes the direction and position from which the scene is looked at. The view matrix transforms from world space to view (eye) space. In the view-space coordinate system, the X-axis points to the right, the Y-axis up, and the Z-axis out of the view (note: in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
The view matrix can be set up by Matrix.setLookAtM
Projection matrix:
The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. The projection matrix transforms from view space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates. With perspective projection, the projection matrix describes the mapping from 3D points in the world, as seen from a pinhole camera, to 2D points of the viewport. The eye-space coordinates within the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
The perspective projection matrix can be set up by Matrix.perspectiveM
You can set up a separate view matrix and a separate projection matrix and finally multiply them. The aspect ratio and the field of view are parameters to Matrix.perspectiveM:
float[] viewM = new float[16];
Matrix.setLookAtM(viewM, 0, 0f, 0f, 0f, 0f, -2000f, 0f, 0f, 1.0f, 0.0f);
float[] prjM = new float[16];
Matrix.perspectiveM(prjM, 0, fovy, aspect, zNear, zFar);
float[] viewPrjM = new float[16];
Matrix.multiplyMM(viewPrjM, 0, prjM, 0, viewM, 0);
Thanks to @Rabbid76's support, I finally figured it out myself.
Figure 1: Real-life scenario
Figure 2: OpenGL scenario
In real life, if we are facing north, our coordinate system would be like:
x points to the east
y points to the north
z points to the sky
So given a camera held by a user, assuming its height is 1.5 meters and its field of view is 68 degrees, we can deduce that the nearest visible point is located at P(0, 2.223, 0). We can set the angle B to 89 degrees, so the segment QP will be the visible ground on the smartphone screen.
How can we map the coordinate of real-life to OpenGL coordinate system? I found that we must go through several steps:
Assign the camera position to be the origin (e.g. C in figure2).
Because OpenGL always draws from (1,1) to (-1,-1), we must set the distance from C to C' to be 1, so that C' is (0, -1, 0).
Finally, we calculate the scale factor between the camera height in real life and the segment CC' in OpenGL, and apply it to the other coordinates.
By doing the steps above, we can map real-world coordinates to the OpenGL coordinate system.
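Steps 2 and 3 boil down to a uniform scale by 1/h, where h is the camera height. A minimal sketch of that arithmetic, with an illustrative class and method name:

```java
// Map a real-world point (meters, camera at the origin) into the OpenGL
// scene by scaling so the real camera height becomes one unit (C to C' = 1).
class WorldToGl {
    static float[] toGlCoords(float[] realPoint, float cameraHeightMeters) {
        float scale = 1.0f / cameraHeightMeters; // e.g. 1 / 1.5 for a hand-held phone
        return new float[] {
            realPoint[0] * scale,
            realPoint[1] * scale,
            realPoint[2] * scale
        };
    }
}
```

With h = 1.5, a point 1.5 units below the camera maps to distance 1, which is exactly the C-to-C' convention described above.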

Android opengl: Arbitrary axis rotation fails to apply rotation

I'm working on Android with OpenGL ES 2.0 to render an object and allow the user to manipulate it. The user can correctly rotate the object around the fixed axes the object begins with, but after an initial rotation has been applied, I'm unable to rotate the object around a new arbitrary axis.
As soon as there is a touch event, I reset the modelMatrix and apply the rotations that were already inputted by the user. The variable angle is a three element vector that contains the angle rotation for each axis.
Matrix.setIdentityM(modelMatrix,0);
Matrix.rotateM(modelMatrix,0,angle[0],1.0f,0.0f,0.0f);
Matrix.rotateM(modelMatrix,0,angle[1],0.0f,1.0f,0.0f);
Matrix.rotateM(modelMatrix,0,angle[2],0.0f,0.0f,1.0f);
Then, I apply a new incremental rotation that was inputted during the touch event. This seems to work.
Matrix.rotateM(modelMatrix,0,newYAngle,1.0f,0.0f,0.0f);
Matrix.rotateM(modelMatrix,0,newXAngle,0.0f,1.0f,0.0f);
Afterwards, I try and calculate the arbitrary axis to add the incremental rotation into the angle variable. This seems to be where problems arise.
Matrix.multiplyMV(newAxisX,0,modelMatrix,0,new float[] {0.0f,1.0f,0.0f,1.0f},0);
Matrix.multiplyMV(newAxisY,0,modelMatrix,0,new float[] {1.0f,0.0f,0.0f,1.0f},0);
float lengthX = (float) Math.sqrt(Math.pow(newAxisX[0]/newAxisX[3],2)+Math.pow(newAxisX[1]/newAxisX[3],2)+Math.pow(newAxisX[2]/newAxisX[3],2));
float lengthY = (float) Math.sqrt(Math.pow(newAxisY[0]/newAxisY[3],2)+Math.pow(newAxisY[1]/newAxisY[3],2)+Math.pow(newAxisY[2]/newAxisY[3],2));
angleChanges[0] = (newXAngle*((newAxisX[0]/newAxisX[3])/lengthX)) + (newYAngle*((newAxisY[0]/newAxisY[3])/lengthY));
angleChanges[1] = (newXAngle*((newAxisX[1]/newAxisX[3])/lengthX)) + (newYAngle*((newAxisY[1]/newAxisY[3])/lengthY));
angleChanges[2] = (newXAngle*((newAxisX[2]/newAxisX[3])/lengthX)) + (newYAngle*((newAxisY[2]/newAxisY[3])/lengthY));
After this code executes, the onDraw() method is called and the modelMatrix is multiplied by the viewMatrix. That is then multiplied by the projection matrix and the result is fed into my shape class.
This causes a curving when rotating. For example, if I were to rotate the object 90° upwards with a y angle (on the 0.0,1.0,0.0 axis), then attempt to rotate the object 90° to the right (from the user perspective after the first rotation was applied), the object will curve downwards. I've logged the data from the axis vectors I'm using when applying the rotation, and what seems to happen (with the x axis vector) is that it starts out close to 0,0,1, which is correct, then slowly transforms to 1,0,0, which causes problems.
This is my current approach, but I've tried reversing the initial rotation and then applying the incremental rotation based on the vectors generated from that with no success.
Any help would be greatly appreciated.
I was able to solve this problem in the short time since posting (even though it has stumped me for several days).
All I had to do was create a global rotation matrix that stores all the rotations performed on the object. I pass the changes in x and y in as global variables from my touch event. These are fed into a temporary rotation matrix, which is then multiplied into the global rotation matrix. The global rotation matrix is then multiplied into the model matrix.
Matrix.setIdentityM(tempRotMatrix,0);
Matrix.rotateM(tempRotMatrix,0,newX,0.0f,1.0f,0.0f);
Matrix.rotateM(tempRotMatrix,0,newY,1.0f,0.0f,0.0f);
newX = 0.0f;
newY = 0.0f;
// Matrix.multiplyMM must not alias its result with either input,
// so accumulate through a scratch matrix
float[] scratch = new float[16];
Matrix.multiplyMM(scratch,0,tempRotMatrix,0,rotationMatrix,0);
System.arraycopy(scratch,0,rotationMatrix,0,16);
// the model matrix here is just the accumulated rotation
System.arraycopy(rotationMatrix,0,modelMatrix,0,16);

glRotatef is going back to the initial position

I'm having some issues with glRotatef. The problem is the following:
I'm drawing 3 lines that represent a Cartesian coordinate system; they represent the point of view of a camera.
This camera can rotate its point of view. When it rotates, it sends a message with the angle and the axis it rotated around.
My app gets this message and passes the axis and the angle to the method shown below.
So far everything is working; the problem starts after this.
If I send a message to rotate around the X axis by some angle and then around the Z axis by some angle, only the Z rotation is applied.
While debugging I noticed that it first rotates around X by the given angle, but when it rotates around Z, it goes back to the original position first and then rotates Z, losing the X rotation.
Like this example:
Initial position:
rotate X by 90°:
rotate Z by 90° (what should happen):
rotate Z by 90° (what actually happens):
What I want is to rotate around X and then around Z without losing the X rotation.
This is how I call the rotations:
openGl.rotateX((float) x);
openGl.rotateZ((float) z);
The methods rotateX and rotateZ:
public void rotateX(float grau) {
mCubeRotation = grau;
eixoX = 1.0f;
eixoY = 0.0f;
eixoZ = 0.0f;
surfaceView.requestRender(); // line to call onDrawFrame
}
public void rotateZ(float grau) {
mCubeRotation = grau;
eixoX = 0.0f;
eixoY = 0.0f;
eixoZ = 1.0f;
surfaceView.requestRender(); // line to call onDrawFrame
}
This is the code of rotation:
@Override
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(0.0f, 0.0f, -8.0f);
// When i rotate x eixoX equal 1 and the other axis is 0.
// When i rotate z eixoZ equal 1 and the other axis is 0.
gl.glRotatef(mCubeRotation, eixoX, eixoY, eixoZ);
mCube.draw(gl);
gl.glLoadIdentity();
lock = false;
}
If someone can help me, I will appreciate it.
Your rotate methods are explicitly clobbering the global eixo* variables set by the "other" rotate method, and your draw method is resetting the rendering state matrix via glLoadIdentity, so that is why you only ever get a single rotation.
In general you can't do what you're trying to do with three simple variables and a single call to glRotatef on top of an identity matrix. Rotations don't concatenate that neatly unless they are trivial, because you need to order them correctly: applying two rotations around world-space X and then Y is not the same as two rotations around world-space Y and then X - ordering matters. This is especially true across frames, where you generally want to start from an arbitrary existing rotation. In general you want to apply a series of glRotatef calls on top of an existing matrix to build up the rotation incrementally.
Slightly off topic - if you have any choice here, I would really suggest switching to OpenGL ES 2.x and maintaining the complete transformation matrix locally in your application. The OpenGL ES 1.1 API is old and relatively inefficient, and its push/pop matrix stack assumes a very specific way of writing a scene-graph rendering engine; OpenGL ES 2.x and shaders are the way forward and available on all modern devices.
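The "series of glRotatef calls on top of an existing matrix" approach can be sketched without GL at all. The `rotate` method below mimics what a glRotatef call does to the current matrix (post-multiplying a column-major axis-angle rotation), with the matrix persisting across calls so earlier rotations are never lost. All names are illustrative:

```java
// Keep one persistent model matrix and multiply each new rotation into it,
// instead of rebuilding from identity every frame; both the X and the Z
// rotations then survive across messages.
class RotationState {
    final float[] model = {1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1}; // column-major identity

    // post-multiply an axis-angle rotation (degrees, unit axis), like glRotatef
    void rotate(float deg, float x, float y, float z) {
        double a = Math.toRadians(deg);
        float c = (float) Math.cos(a), s = (float) Math.sin(a), t = 1 - c;
        float[] r = { t*x*x + c,   t*x*y + s*z, t*x*z - s*y, 0,
                      t*x*y - s*z, t*y*y + c,   t*y*z + s*x, 0,
                      t*x*z + s*y, t*y*z - s*x, t*z*z + c,   0,
                      0, 0, 0, 1 };
        float[] out = new float[16];
        for (int col = 0; col < 4; col++)
            for (int row = 0; row < 4; row++)
                for (int k = 0; k < 4; k++)
                    out[col*4 + row] += model[k*4 + row] * r[col*4 + k];
        System.arraycopy(out, 0, model, 0, 16);
    }
}
```

Rotating 90 degrees about X and then 90 degrees about Z leaves both rotations baked into `model`, unlike the question's code, which rebuilds the matrix from identity for each message.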

How do I know where on my image I have touched if I have moved and resized my image with pinch, drag and zoom?

I have an image that I'm using as an image map. If the image were fixed there would be no problem, but I need to zoom and drag this image and get the coordinates of where it was clicked.
Do I need to keep track of exactly how much this image has moved and been resized, or can I just get the (0,0) point of my image (the top-left corner)?
Is there another way to do it?
I should add that I've based my image manipulation on this excellent tutorial: http://www.zdnet.com/blog/burnette/how-to-use-multi-touch-in-android-2/1747?tag=rbxccnbzd1
You can get the point using the same transformation matrix that is being applied to the image. You want to transform the point between the screen's coordinate system to the image's coordinate system, reversing the effect of the original matrix.
Specifically, you want to transform the x,y coordinates where the user clicked on the screen to the corresponding point in the original image, using the inverse of the matrix that was used to transform the image onto the screen.
A bit of pseudocode assuming matrix contains the transformation that was applied to the image:
// pretend user clicked the screen at {20.0, 15.0}
float x = 20.0;
float y = 15.0;
float[] pts = new float[2];
pts[0] = x;
pts[1] = y;
// get the inverse of the transformation matrix
// (a matrix that transforms back from destination to source)
Matrix inverse = new Matrix();
if(matrix.invert(inverse)) {
// apply the inverse transformation to the points
inverse.mapPoints(pts);
// now pts[0] is x relative to image left
// pts[1] is y relative to image top
}
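To make the inverse mapping concrete, here is the simplest special case (a pure scale plus translate) in plain Java; `android.graphics.Matrix.invert` plus `mapPoints` performs the same computation for a general affine transform. The class and parameter names are illustrative:

```java
// Undo a scale-then-translate image transform to recover image coordinates
// from a screen touch: screen = image * scale + offset, so
// image = (screen - offset) / scale.
class ScreenToImage {
    static float[] toImageCoords(float screenX, float screenY,
                                 float scale, float offsetX, float offsetY) {
        return new float[] {
            (screenX - offsetX) / scale,
            (screenY - offsetY) / scale
        };
    }
}
```

For example, image point (10, 5) drawn at scale 2 with offset (3, 4) lands on screen at (23, 14); feeding (23, 14) back through `toImageCoords` recovers (10, 5).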

Android OpenGL ES: rotate a cube around an (x, y, z) point, and around the cube's center

I want to rotate my cube around an (x, y, z) center point, and, second, around my cube's own center point. How can I do this?
Assuming you have an OpenGL ES GL10 object called gl, in your onDrawFrame or similar:
// Push matrix so we can pop later
gl.glPushMatrix();
// Translate to the centre of your cube
// (or to whatever x,y,z point you want to rotate around)
gl.glTranslatef(centreX, centreY, 0);
// rotation = degrees to rotate
// x,y,z define the axis the rotation takes place around,
// e.g. x=0.0 y=0.0 z=1.0 rotates around the z-axis
gl.glRotatef(rotation, x, y, z);
// CUBE DRAWING FUNCTION HERE
// Popmatrix so we undo translation and rotation for
// rest of opengl calls
gl.glPopMatrix();
I suggest looking at the Android ports of the NeHe OpenGL tutorials, as they are brilliant guides to getting started with OpenGL on Android.
