An image always says more than a ton of text, so here's what I'm trying to do:
The center of the circle is the user's phone position (the origin). The app displays a custom camera view and also shows an OpenGL scene (depending on where you are looking). The OpenGL scene consists of a single cube, which is rendered when the user is looking in the right direction.
I'm pretty new to OpenGL and I managed to display the cube in front of the camera, but it's static: I can't look around the 360° view.
I get the orientation of the device from the sensors:
int type = event.sensor.getType();
float[] data;
if (type == Sensor.TYPE_ACCELEROMETER) {
    data = mGData;
} else if (type == Sensor.TYPE_MAGNETIC_FIELD) {
    data = mMData;
} else {
    // we should not be here.
    return;
}
for (int i = 0; i < 3; i++)
    data[i] = event.values[i];
SensorManager.getRotationMatrix(mR, mI, mGData, mMData);
SensorManager.getOrientation(mR, mOrientation);
As far as I understand, the three orthogonal rotation angles are stored in mOrientation. But what then? I wanted to do something like GLU.gluLookAt(0, 0, 0, X, Y, Z, ?, ?, ?) in the onDrawFrame method, but it didn't work. I want to do what that person said he couldn't do (see the last paragraph here: https://stackoverflow.com/a/9114246/1304830).
Here's the code I use in onDrawFrame:
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL10.GL_MODELVIEW);
// Look in a direction (with the sensors)
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0.0f, 0.0f, 0.0f, ?, ?, ?, ?, ?, ?); // Where I need help
gl.glPushMatrix();
// Place the cube in the scene
gl.glTranslatef(0, 0, -5);
mCube.draw(gl);
Thanks for your help
This will do what that person was trying to do, but the problem with his idea is that the up vector always points up along the y-axis, so if you roll the phone the camera won't roll with it.
// SensorManager.getOrientation() returns angles in radians,
// which is exactly what Math.cos()/Math.sin() expect, so no conversion is needed.
float yaw = orientation[0];
float pitch = orientation[1];
float roll = orientation[2];
// Convert yaw and pitch to a unit direction vector to look along
float x = (float) (Math.cos(yaw) * Math.cos(pitch));
float y = (float) (Math.sin(yaw) * Math.cos(pitch));
float z = (float) (Math.sin(pitch));
GLU.gluLookAt(gl, 0.0f, 0.0f, 0.0f, x, y, z, 0.0f, 1.0f, 0.0f);
Using the three glRotates is a better option IMO, unless you want to lock the roll for some reason.
Note: I'm not sure which direction Android treats as "up" relative to the phone's screen, so I may have yaw, pitch and roll mixed up.
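If you go the three-rotation route instead, a minimal sketch could look like the following (this is my illustration, not the original poster's code; it assumes mOrientation holds the azimuth/pitch/roll radians from SensorManager.getOrientation(), and the rotation order and signs may need adjusting for your device):
gl.glLoadIdentity();
float rad2deg = (float) (180.0 / Math.PI);
// Apply the device orientation as three rotations (order is a guess and may need tweaking)
gl.glRotatef(mOrientation[2] * rad2deg, 0.0f, 0.0f, 1.0f); // roll
gl.glRotatef(mOrientation[1] * rad2deg, 1.0f, 0.0f, 0.0f); // pitch
gl.glRotatef(mOrientation[0] * rad2deg, 0.0f, 1.0f, 0.0f); // yaw / azimuth
// Place the cube in the scene as before
gl.glTranslatef(0, 0, -5);
mCube.draw(gl);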
Related
I have an object which moves on a terrain, and a third-person camera follows it. After I move it for some distance in different directions, it begins to shake or vibrate, even if it is not moving and the camera is just rotating around it. This is the movement code for the object:
double& delta = engine.getDeltaTime();
GLfloat velocity = delta * movementSpeed;
glm::vec3 t(glm::vec3(0, 0, 1) * (velocity * 3.0f));
//translate the object's matrix before rendering
matrix = glm::translate(matrix, t);
//get the forward vector of the matrix
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f = glm::normalize(f);
f = f * (velocity * 3.0f);
f = -f;
camera.translate(f);
and the camera rotation is
void Camera::rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c, double& delta, GLboolean constrainpitch) {
    xoffset *= (delta * this->rotSpeed);
    yoffset *= (delta * this->rotSpeed);
    pitch += yoffset;
    yaw += xoffset;
    if (constrainpitch) {
        if (pitch >= maxPitch) {
            pitch = maxPitch;
            yoffset = 0;
        }
        if (pitch <= minPitch) {
            pitch = minPitch;
            yoffset = 0;
        }
    }
    glm::quat Qx(glm::angleAxis(glm::radians(yoffset), glm::vec3(1.0f, 0.0f, 0.0f)));
    glm::quat Qy(glm::angleAxis(glm::radians(xoffset), glm::vec3(0.0f, 1.0f, 0.0f)));
    glm::mat4 rotX = glm::mat4_cast(Qx);
    glm::mat4 rotY = glm::mat4_cast(Qy);
    view = glm::translate(view, c);
    view = rotX * view;
    view = view * rotY;
    view = glm::translate(view, -c);
}
float is sometimes not enough.
I use double-precision matrices on the CPU side to avoid such problems, but as you are on Android that might not be possible. For the GPU, use floats again, as there are no 64-bit interpolators yet.
Big numbers are usually the problem
If your world is big, then you are passing big numbers into the equations, multiplying any errors, and only at the final stage is everything translated relative to the camera position. That means the errors stay multiplied while the numbers get clamped, so the error-to-data ratio becomes large.
To reduce this problem, convert all vertices to a coordinate system with its origin at or near your camera before rendering. You can ignore rotations; just offset the positions.
This way the larger errors occur only far away from the camera, where they are not visible under perspective anyway. For more info see:
ray and ellipsoid intersection accuracy improvement
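To illustrate the re-origin idea (my sketch in Java, not the original poster's code; every name here is made up), the offset can be applied on the CPU before the vertex data is used:
// Hypothetical helper: shift world-space vertices so the camera sits at the origin,
// keeping the numbers fed into the matrix math small.
static float[] toCameraRelative(float[] worldVertices, float camX, float camY, float camZ) {
    float[] local = new float[worldVertices.length];
    for (int i = 0; i < worldVertices.length; i += 3) {
        local[i]     = worldVertices[i]     - camX;
        local[i + 1] = worldVertices[i + 1] - camY;
        local[i + 2] = worldVertices[i + 2] - camZ;
    }
    return local; // render these with a view matrix whose translation part is zero
}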
Use cumulative transform matrix instead of Euler angles
For more info, see Understanding 4x4 homogeneous transform matrices and all the links at the bottom of that answer.
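As a rough illustration of the cumulative-matrix idea (sketched with Android's Matrix API even though the question uses GLM; the delta variables and matrix names are my assumptions): fold each frame's small rotation into a stored matrix instead of rebuilding the orientation from Euler angles every frame.
float[] cameraMatrix = new float[16]; // persistent; initialise once with Matrix.setIdentityM
float[] deltaMatrix = new float[16];
float[] tempMatrix = new float[16];
// Build this frame's incremental rotation from small angle deltas (hypothetical values)
Matrix.setIdentityM(deltaMatrix, 0);
Matrix.rotateM(deltaMatrix, 0, yawDeltaDeg, 0.0f, 1.0f, 0.0f);
Matrix.rotateM(deltaMatrix, 0, pitchDeltaDeg, 1.0f, 0.0f, 0.0f);
// Accumulate it into the stored camera matrix
Matrix.multiplyMM(tempMatrix, 0, deltaMatrix, 0, cameraMatrix, 0);
System.arraycopy(tempMatrix, 0, cameraMatrix, 0, 16);
// Re-orthonormalize cameraMatrix occasionally to keep numerical drift in check.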
This sounds like a numerical effect to me. Even small offsets coming from your game object will influence the rotation of the following camera, and those small movements/rotations look like a vibrating object/camera.
So what you can do is:
Check whether the movement is above a threshold value before calculating a new rotation for your camera
When you are above this threshold, do a linear interpolation between the old and the new rotation, using a lerp on the quaternion (see this Unity answer to get a better idea of what your code can look like: Unity lerp discussion) - a rough sketch follows this list
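A minimal sketch of that threshold-plus-blend idea (my own illustration in plain Java, with made-up names and tuning values; a real implementation would slerp the camera quaternion rather than a single angle):
static final float THRESHOLD_DEG = 0.5f; // ignore changes smaller than this
static final float SMOOTHING = 0.2f;     // fraction of the remaining gap closed per frame

float smoothAngle(float currentDeg, float targetDeg) {
    float delta = targetDeg - currentDeg;
    if (Math.abs(delta) < THRESHOLD_DEG) {
        return currentDeg;                 // below the threshold: treat as jitter, keep the old value
    }
    return currentDeg + SMOOTHING * delta; // lerp towards the target
}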
I have a sphere created using the Rajawali3D OpenGL ES library, with the camera placed inside the sphere at (0, 0, 0). The user can rotate this sphere on swipe.
I want to get the 3D co-ordinates of the spot the user touches on the sphere
Currently I am using the unproject method to get points on the near and far planes, calculate the ray direction, and find the intersection point with the sphere. Here is the code:
mNearPos4 = new double[4];
mFarPos4 = new double[4];
mNearPos = new Vector3();
mFarPos = new Vector3();
mNewPos = new Vector3();
// near plane
GLU.gluUnProject(x, getViewportHeight() - y, 0,
mViewMatrix.getDoubleValues(), 0,
mProjectionMatrix.getDoubleValues(), 0, mViewport, 0,
mNearPos4, 0);
// far plane
GLU.gluUnProject(x, getViewportHeight() - y, 1.0f,
mViewMatrix.getDoubleValues(), 0,
mProjectionMatrix.getDoubleValues(), 0, mViewport, 0,
mFarPos4, 0);
// transform 4D to 3D
mNearPos.setAll(mNearPos4[0] / mNearPos4[3], mNearPos4[1]
/ mNearPos4[3], mNearPos4[2] / mNearPos4[3]);
mFarPos.setAll(mFarPos4[0] / mFarPos4[3],
mFarPos4[1] / mFarPos4[3], mFarPos4[2] / mFarPos4[3]);
Vector3 dir = new Vector3(mFarPos.x - mNearPos.x, mFarPos.y - mNearPos.y, mFarPos.z - mNearPos.z);
dir.normalize();
// compute the intersection with the sphere centered at (0, 0, 0)
double a = Math.pow(dir.x, 2) + Math.pow(dir.y, 2) + Math.pow(dir.z, 2);
double b = 2 * (dir.x * mNearPos.x + dir.y * mNearPos.y + dir.z * mNearPos.z);
double c = Math.pow(mNearPos.x, 2) + Math.pow(mNearPos.y, 2) + Math.pow(mNearPos.z, 2) - radSquare;
double D = Math.pow(b, 2) - 4 * a * c;
// since the camera is inside the sphere, one root is negative (behind the ray start)
// and one is positive; take the positive root, which lies in front of the camera
double t = (-b + Math.sqrt(D)) / (2 * a);
// mNewPos is used as the position of the touched point
mNewPos.setAll(mNearPos.x + dir.x * t, mNearPos.y + dir.y * t, mNearPos.z + dir.z * t);
The problem is that I am getting the same range of co-ordinates when I rotate the sphere. For example, if I get the co-ordinates (a, b, c) on one side of the sphere, I get the same on the opposite side of the sphere.
How do I solve this problem and get the correct co-ordinates for all sides?
I am using Rajawali 1.0.232 snapshot
SOLVED: The problem was that I was caching the camera's projection and view matrices in variables.
So when a call was made to unproject() to convert the 2D point to 3D, it used the old values, and hence the point was not plotted correctly.
So the solution is to get the camera's view and projection matrices on demand without caching them:
mViewport = new int[]{0, 0, getViewportWidth(), getViewportHeight()};
Vector3 position3D = new Vector3();
mapToSphere(event.getX(), event.getY(), position3D, mViewport,
mCam.getViewMatrix(), mCam.getProjectionMatrix());
where the mapToSphere() function performs the unprojection as follows:
public static void mapToSphere(float x, float y, Vector3 position, int[] viewport,
Matrix4 viewMatrix, Matrix4 projectionMatrix) {
//please refer for explanation in case of openGL
//http://myweb.lmu.edu/dondi/share/cg/unproject-explained.pdf
double[] tempPosition = new double[4];
GLU.gluUnProject(x, viewport[3] - y, 0.7f,
viewMatrix.getDoubleValues(), 0,
projectionMatrix.getDoubleValues(), 0, viewport, 0,
tempPosition, 0);
// the co-ordinates are stored in tempPosition as 4d (x, y, z, w)
// convert to 3D by dividing x, y, z by w
// the minus (-) for the z co-ordinate worked for me
position.setAll(tempPosition[0] / tempPosition[3], tempPosition[1]
/ tempPosition[3], -tempPosition[2] / tempPosition[3]);
}
I'm using OpenGL ES 2.0 for Android. I'm translating and rotating a model using the touch screen. My translations are only in the (x, y) plane, and my rotation is only about the z-axis. Imagine looking directly down at a map on a table and moving to various coordinates on the map, and being able to rotate the map around the point you are looking at.
The problem is that after I rotate, my subsequent translations no longer match the motion of the pointer on the screen; the axes are different.
Everything I've tried gives me one of two behaviors. One is equivalent to:
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, Xposition, Yposition, 0.0f);
Matrix.rotateM(mModelMatrix, 0, rotationAngle, 0.0f, 0.0f, 1.0f);
This allows me to translate as expected (up/down on the screen moves the model up and down, left/right moves model left and right), regardless of rotation. The problem is that the rotation is about the center of the object, and I need the rotation to be about the point that I am looking at, which is different than the center of the object.
The other behavior I can get is equivalent to:
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.rotateM(mModelMatrix, 0, rotationAngle, 0.0f, 0.0f, 1.0f);
Matrix.translateM(mModelMatrix, 0, Xposition, Yposition, 0.0f);
This gives me the rotation that I want, always about the point I am looking at. The problem is that after a rotation, the translations are wrong. Left/right on the screen translates the object at a different angle, along the rotated axes.
I need some way to get both behaviors at the same time. It needs to rotate about the point I am looking at, and translate in the direction that the finger moves on the screen.
Is this even possible? Am I basically trying to reconcile quantum mechanics with Newtonian physics, and doomed to failure?
I don't want to list all of the tricks that I've tried, because I want to consider all possibilities with a fresh perspective.
EDIT:
I'm still completely stuck on this.
I have an object that starts at (0, 0, 0) in world coordinates. My view is looking down the z-axis at the object, and I want to limit translation to the x/y plane. I also want to rotate the object about the z-axis only. The center of the rotation must always be the center of the screen.
I am controlling the translation with the touch screen, so I need the object to move the same way the finger moves, regardless of how it is rotated.
As soon as I rotate, all of my translations start happening in the rotated coordinate system, which means the object does not move with the pointer on the screen. I've tried to do a second translation as Hugh Fisher recommended, but I can't figure out how to calculate that second translation. Is there another way?
I had the same problem, although I was using C# with OpenGL (SharpGL) and a rotation matrix.
A translation after the rotation was required to keep the rotation point at the center of the screen, as a CAD-type application does.
The problem was that mouse translations are not always parallel to the screen after rotations.
I found a fix here.
(Xposition, Yposition) = (Xposition, Yposition) + mRotation.transposed() * (XIncr, YIncr)
or
NewTranslationPosition = oldTranslationPosition + rotationMatrix.Transposed * UserTranslationIncrement.
Many many thanks to reto.koradi (at OpenGL)!
So I roughly coded it in 3D like this:
double gXposition = 0;
double gYposition = 0;
double gZposition = 0;
double gXincr = 0;
double gYincr = 0;
double gZincr = 0;
float[,] rotMatrix = new float[4, 4]; //Rotational matrix (may need flattening to a float[16] for gl.MultMatrix)
private void openGLControl_OpenGLDraw(object sender, PaintEventArgs e)
{
OpenGL gl = openGLControl.OpenGL;
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
gl.LoadIdentity();
gl.MultMatrix(rotMatrix); //This is my rotation, using a rotation matrix
gl.Translate(gXposition, gYposition, gZposition); //translate second to keep rotation at center of screen
DrawCube(ref gl);
}
private void buttonTransLeft_Click(object sender, EventArgs e)
{
double tX = -0.1;
double tY = 0;
double tZ = 0;
TransposeRotMatrixFindPoint(ref tX, ref tY, ref tZ);
gXposition = gXposition + tX;
gYposition = gYposition + tY;
gZposition = gZposition + tZ;
}
private void buttonTransRight_Click(object sender, EventArgs e)
{
double tX = 0.1;
double tY = 0;
double tZ = 0;
TransposeRotMatrixFindPoint(ref tX, ref tY, ref tZ);
gXposition = gXposition + tX;
gYposition = gYposition + tY;
gZposition = gZposition + tZ;
}
public void TransposeRotMatrixFindPoint(ref double x, ref double y, ref double z)
{
    //Multiply [x, y, z] by the transposed rotation matrix to generate a new [x, y, z]
    double Xt = 0; //Temporary variable
    double Yt = 0; //Temporary variable
    Xt = (x * rotMatrix[0, 0]) + (y * rotMatrix[0, 1]) + (z * rotMatrix[0, 2]);
    Yt = (x * rotMatrix[1, 0]) + (y * rotMatrix[1, 1]) + (z * rotMatrix[1, 2]);
    z = (x * rotMatrix[2, 0]) + (y * rotMatrix[2, 1]) + (z * rotMatrix[2, 2]);
    //or try this
    //Xt = (x * rotMatrix[0, 0]) + (y * rotMatrix[1, 0]) + (z * rotMatrix[2, 0]);
    //Yt = (x * rotMatrix[0, 1]) + (y * rotMatrix[1, 1]) + (z * rotMatrix[2, 1]);
    //z = (x * rotMatrix[0, 2]) + (y * rotMatrix[1, 2]) + (z * rotMatrix[2, 2]);
    x = Xt;
    y = Yt;
}
This is an old post, but I'm posting the solution that worked best for me for posterity.
The solution was to keep a separate model matrix that accumulates transformations as they occur, and multiply each transformation by this matrix in the onDrawFrame() method.
//Initialize the model matrix for the current transformation
Matrix.setIdentityM(mModelMatrixCurrent, 0);
//Apply the current transformations
Matrix.translateM(mModelMatrixCurrent, 0, cameraX, cameraY, cameraZ);
Matrix.rotateM(mModelMatrixCurrent, 0, mAngle, 0.0f, 0.0f, 1.0f);
//Multiply the accumulated transformations by the current transformations
Matrix.multiplyMM(mTempMatrix, 0, mModelMatrixCurrent, 0, mModelMatrixAccumulated, 0);
System.arraycopy(mTempMatrix, 0, mModelMatrixAccumulated, 0, 16);
Then the accumulated matrix is used to position the object.
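For illustration only (the view/projection matrices and the uniform handle below are my assumptions, not part of the original answer), the accumulated matrix would then typically feed the final model-view-projection product in onDrawFrame():
// Combine the accumulated model matrix with the view and projection matrices
Matrix.multiplyMM(mMVMatrix, 0, mViewMatrix, 0, mModelMatrixAccumulated, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVMatrix, 0);
// Hand the result to the vertex shader (the handle name is hypothetical)
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);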
Here's how I think about translations and rotations: you are not moving the object, you are moving the origin of the coordinate system. Thinking about it this way, you'll need an extra translation on your first behaviour.
The finger motion is a translation that should be aligned to the XY axes of the screen, so as you've worked out, should be done before rotation. Then your rotation takes place, which rotates the coordinate system of the object around that point. If you want the object to be drawn somewhere else relative to that point, you'll need to do another translation first to move the origin there.
So I think your final sequence should be something like
translate(dx, dy) ; rotate(A) ; translate(cx, cy) ; draw()
where cx and cy are the distance between the centre of the map and the point being looked at. (Might simplify to -dx, -dy)
Hope this helps.
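A rough sketch of that sequence with android.opengl.Matrix (my guess at how it maps onto the variables above; cx and cy are as described):
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, dx, dy, 0.0f);                 // finger motion, screen-aligned
Matrix.rotateM(mModelMatrix, 0, rotationAngle, 0.0f, 0.0f, 1.0f); // rotate about that point
Matrix.translateM(mModelMatrix, 0, cx, cy, 0.0f);                 // offset to where the map is drawn
// draw the map with mModelMatrix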
You should use your first method, although mathematically the second one makes more sense. There is a difference between how OpenGL and Android store matrices. They are arrays after all, but are the first 4 values a row or a column?
That's why it is "backwards". Check this for more info, or read about row-major vs. column-major matrix operations.
I noticed that the first, "backwards" method works as intended.
Mathematically:
Suppose you want to rotate around a point (x1, y1, z1). The origin of your object is (Ox, Oy, Oz).
Set Origin:
Matrix.setIdentityM(mModelMatrix, 0);
Then move the point you want to rotate about to the origin:
Matrix.translateM(mModelMatrix, 0, -x1, -y1, -z1);
Then
Matrix.rotateM(mModelMatrix, 0, rotationAngle, 0.0f, 0.0f, 1.0f);
Then move it back:
Matrix.translateM(mModelMatrix, 0, x1, y1, z1);
Then move it where you want:
Matrix.translateM(mModelMatrix, 0, x, y, z);
However, with the backwards way of thinking, you do it in reverse order.
Try:
Set Origin:
Matrix.setIdentityM(mModelMatrix, 0);
Then do stuff in reverse order:
Matrix.translateM(mModelMatrix, 0, x, y, z);
Matrix.translateM(mModelMatrix, 0, x1, y1, z1);
Matrix.rotateM(mModelMatrix, 0, rotationAngle, 0.0f, 0.0f, 1.0f);
Matrix.translateM(mModelMatrix, 0, -x1, -y1, -z1);
I hope this helps.
Edit
I may have misunderstood the question. Here is what worked for me:
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, x1, y1, z1);
Matrix.rotateM(mModelMatrix, 0, rotationAngle, 0.0f, 0.0f, 1.0f);
Matrix.multiplyMM(mViewProjection, 0, mProjection, 0, mCameraView, 0); //something like this, but do you have the right order?
In my shader, I have mViewProjection * mModelMatrix * a_Position. Are you using the vertex shader to do the final multiplication?
Try a static translate/rotate (with constant values) instead of controlling the translation with the touch screen. If that works fine, you probably have a bug somewhere else.
I'm trying to rotate a model (terrain) situated at (X, Y, Z) = (0, 0, 0) in my OpenGL world. +X is east, -Z is north, +Y is altitude. The view looks along -Y because the device Euler angles are (0°, 0°, 0°) while the device is lying on the table pointing north.
The Problem:
I need to switch the axes of my device so that the angle measured around device -y is applied around device z.
Currently, rotation around the device x-axis works (it results in rotation around the world X-axis).
But rotation around device y leads to a rotation around world Y instead of world -Z, and rotation around device z leads to a rotation around world Z instead of world Y.
By now I'm running out of ideas on how to solve this. Could anybody give me a hint, please?
(OpenGL) (Device)
^ +Y-axis ^ +z-axis
* *
* *
* * ^ +y-axis (North)
* * *
* * *
* * *
************> + X-axis ************> +x-axis
*
*
v +Z-axis (Minus North)
What I've tried so far:
Using Euler angles from SensorManager.getOrientation and switching the angles works fine, though I run into gimbal lock close to a pitch of 90 deg. So I'm looking for another solution (the SensorManager rotation matrix or quaternions).
SensorManager.remapCoordinateSystem in almost every possible combination -> doesn't help
Changing the columns/rows of my rotation matrix from SensorManager.getRotationMatrix -> doesn't help
Well, I've found a pretty easy solution using quaternions.
The axes are being switched at the end.
See also:
Based on this solution
SensorEvent.values (--> Sensor.TYPE_ROTATION_VECTOR)
SensorManager.getQuaternionFromVector
Now I'm still wondering how to enable pitch angles above 90 deg (in portrait mode). Any ideas?
Activity:
private SensorManager sensorManager;
private Sensor rotationSensor;
public void onCreate(Bundle savedInstanceState) {
...
rotationSensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
...
}
// Register/unregister SensorManager Listener at onResume()/onPause() !
public void onSensorChanged(SensorEvent event) {
if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
// Set rotation vector
MyRenderer.setOrientationVector(event.values);
}
}
MyRenderer:
float[] orientationVector = new float[3];
public void setOrientationVector (float[] vector) {
// Rotation vector to quaternion
float[] quat = new float[4];
SensorManager.getQuaternionFromVector(quat, vector);
// Switch quaternion from [w,x,y,z] to [x,y,z,w]
float[] switchedQuat = new float[] {quat[1], quat[2], quat[3], quat[0]};
// Quaternion to rotation matrix
float[] rotationMatrix = new float[16];
SensorManager.getRotationMatrixFromVector(rotationMatrix, switchedQuat);
// Rotation matrix to orientation vector
SensorManager.getOrientation(rotationMatrix, orientationVector);
}
public void onDrawFrame(GL10 unused) {
...
// Rotate model matrix (note the axes being switched!)
Matrix.rotateM(modelMatrix, 0,
(float) (orientationVector[1] * 180/Math.PI), 1, 0, 0);
Matrix.rotateM(modelMatrix, 0,
(float) (orientationVector[0] * 180/Math.PI), 0, 1, 0);
Matrix.rotateM(modelMatrix, 0,
(float) (orientationVector[2] * 180/Math.PI), 0, 0, 1);
...
}
I have an orthographic projection which I initialize like so:
gl.glViewport(0, 0, Constants.SCREEN_WIDTH, Constants.SCREEN_HEIGHT);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0,Constants.GAME_AREA_WIDTH, Constants.GAME_AREA_HEIGHT, 0, 1, 10);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
What I want to do here is have a square start off the top of the screen (at something like (x, -100, z)), and that square should descend (on y) while at the same time rotating (on z).
The square's upper-left corner is what I use as the reference for the square's position.
OK, now, I think I get how to rotate it around itself: I translate the thing to (-squareSize/2, -squareSize/2, z), rotate it around z, then translate back. And indeed, if I test only this rotation, it works fine:
gl.glLoadIdentity();
angle = angle + 3;
if(angle>360) {
angle = angle - 360;
}
gl.glTranslatef(xCurrent+size/2, yCurrent+size/2,0);
gl.glRotatef(angle, 0, 0, 1);
gl.glTranslatef(-(xCurrent+size/2), -(yCurrent+size/2),0);
//omitted: enable client state, draw elements, disable client state.
With just this, no matter where I place my square (even small negative values for x and y which only make it partially show on the screen), it will rotate around its center.
However, I can't figure out how to add the downward translation on y. If I do something like this:
angle = angle + 3;
if(angle>360) {
angle = angle - 360;
}
gl.glTranslatef(xCurrent+size/2, yCurrent+size/2,0);
gl.glRotatef(angle, 0, 0, 1);
gl.glTranslatef(-(xCurrent+size/2), -(yCurrent+size/2),0);
yCurrent = yCurrent + realSpeed;
if(yCurrent>Constants.GAME_AREA_HEIGHT+size) {
yCurrent=-size;
}
gl.glTranslatef(0f, yCurrent,0f);
it will only work OK if my square starts at (0, 0, z), in which case it will move down and rotate around its center.
If, however, I start it at any non-zero value (positive or negative) for either x or y, it will still move down, but it does a weird spiral motion instead of rotating around its center.
The OpenGL matrix stack post-multiplies, which effectively means that you should issue the most local transformation last.
So what you probably want to do is perform a glTranslatef to the tile's current position first, then do the translate/rotate/untranslate sequence to effect your rotation.
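In GL10 calls, that ordering might look roughly like this (a sketch, not the final code; xCenter/yCenter stand for the center of the square's vertex data, and the poster's edit below ends up with essentially this structure):
gl.glLoadIdentity();
gl.glTranslatef(0, yCurrent, 0);        // linear descent, issued first
gl.glTranslatef(xCenter, yCenter, 0);   // then the rotate-about-center sequence
gl.glRotatef(angle, 0, 0, 1);
gl.glTranslatef(-xCenter, -yCenter, 0);
// draw the square here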
Editor's Note: This answer was moved from a question edit, it is written by the Original Poster.
First off, what Tommy says in the answer below is right: I should first code the translation to the new position, and THEN add the lines of code that do translate/rotate/translate.
Also, the values I assigned to x and y when trying to translate the center of the square to the coordinates (0, 0, z) were simply wrong; I miscalculated them. The basic idea here is this. Let's say a square has the following vertices:
private static float xLeft = -0.75f;
private static float xRight = +0.25f;
private static float yTop = 2f;
private static float yBottom = 1f;
protected static float vertices[] = {
//x y z
xLeft, yTop, -5f, //Top left triangle1-1 triangle2-1
xRight, yTop, -5f, //Top right triangle1-2
xLeft, yBottom, -5f, //Bottom left triangle2-3
xRight, yBottom, -5f //Bottom right triangle1-3 triangle2-2
};
then the translation amounts needed to place this square's center at (0,0,z) are:
private float xCenterTranslation = (xRight+xLeft)/2f;
private float yCenterTranslation = (yTop+yBottom)/2f;
and the code for translating the square on the y-axis while at the same time rotating it around its center is:
gl.glTranslatef(0, translationAmountLinearY, 0); //translate on y
//decrement Y translation for next rendering
translationAmountLinearY+=translationDeltaLinearY;
gl.glTranslatef(xCenterTranslation, yCenterTranslation, 0);//translate BACK from center
gl.glRotatef(rotationAmountZDegrees, 0, 0, 1);//rotate
gl.glTranslatef(-xCenterTranslation, -yCenterTranslation, 0);//translate to center
//increment z rotation for next rendering:
rotationAmountZDegrees+=0.04f;