I'm rotating a model in Android's OpenGL.
Why don't the two examples below produce the same result? I thought there was no difference between rotating about the x axis and then y, or about y and then x.
// Example 1
gl.glRotatef(_angleY, 0f, 1f, 0f); // ROLL
gl.glRotatef(_angleX, 1f, 0f, 0f); // ELEVATION
gl.glRotatef(_angleZ, 0f, 0f, 1f); // AZIMUTH

// Example 2
gl.glRotatef(_angleX, 1f, 0f, 0f); // ELEVATION
gl.glRotatef(_angleY, 0f, 1f, 0f); // ROLL
gl.glRotatef(_angleZ, 0f, 0f, 1f); // AZIMUTH
Unless those rotations are all applied simultaneously, order definitely matters.
If I had a cube and rotated it around the x axis so that the front face moved to the top, a subsequent rotation around the y axis would leave that original front face on top.
If I instead rotated around the y axis first, the original front face would be moved off to the side, so rotating around the x axis afterwards would NOT bring that face to the top.
So the order of rotation does matter.
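A quick way to convince yourself in code: build the two products with android.opengl.Matrix and apply them to the same point; the results differ. A minimal sketch (the 90-degree angles and the test point are just examples):

// Uses android.opengl.Matrix, android.util.Log and java.util.Arrays.
float[] rotX = new float[16], rotY = new float[16];
float[] xThenY = new float[16], yThenX = new float[16];
Matrix.setRotateM(rotX, 0, 90f, 1f, 0f, 0f);   // 90 degrees about x
Matrix.setRotateM(rotY, 0, 90f, 0f, 1f, 0f);   // 90 degrees about y

// The multiplication order mirrors the order of the glRotatef calls.
Matrix.multiplyMM(xThenY, 0, rotX, 0, rotY, 0); // like glRotatef about x, then about y
Matrix.multiplyMM(yThenX, 0, rotY, 0, rotX, 0); // like glRotatef about y, then about x

// Apply both to the same point: (0, 0, 1) ends up at (1, 0, 0) in one order
// and at (0, -1, 0) in the other.
float[] p = {0f, 0f, 1f, 1f};
float[] a = new float[4], b = new float[4];
Matrix.multiplyMV(a, 0, xThenY, 0, p, 0);
Matrix.multiplyMV(b, 0, yThenX, 0, p, 0);
Log.d("RotationOrder", Arrays.toString(a) + " vs " + Arrays.toString(b));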
Related
I'm trying to write a VR application using OpenGL on Android. I know it's very simple with the Google Cardboard SDK, but I want to do it entirely in OpenGL to understand it properly. There are a couple of things I am not clear about, and I hope someone can help me clarify them.
What is off-axis and on-axis projection? Does Google Cardboard use off-axis projection?
I know that in order to create a stereo view for VR, the camera should be translated by d/2, where d is the distance between the two eyes. I tried something like this:
Matrix.setLookAtM(mViewMatrix, 0, 1, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);  // translate +1 along the x axis for the right eye
Matrix.setLookAtM(mViewMatrix, 0, -1, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f); // translate -1 along the x axis for the left eye
Now, suppose the real value of d is 5 cm, so d/2 = 2.5 cm. How should I translate the camera correctly? I don't know how to map 5 cm in the real world to OpenGL coordinates.
I'm looking forward to your help. Sorry for my bad English. Thank you!
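A common convention (an assumption on my part, nothing in OpenGL fixes it) is to model the whole scene so that 1 OpenGL unit equals 1 meter; then d = 5 cm becomes 0.05 units and each eye is shifted by half of that along the camera's x axis. A minimal sketch based on the setLookAtM calls above:

// Assumption: the scene is modeled with 1 OpenGL unit == 1 meter.
float eyeSeparation = 0.05f;        // d = 5 cm expressed in world units
float halfD = eyeSeparation / 2f;   // 0.025 units per eye

// Shifting the look-at point by the same amount keeps the two view directions
// parallel; true off-axis stereo additionally uses an asymmetric projection
// frustum per eye (that is what the off-axis/on-axis distinction refers to).
// Render the right eye before overwriting mViewMatrix, or keep two matrices.
Matrix.setLookAtM(mViewMatrix, 0,  halfD, 0f, -3f,  halfD, 0f, 0f,  0f, 1f, 0f); // right eye
Matrix.setLookAtM(mViewMatrix, 0, -halfD, 0f, -3f, -halfD, 0f, 0f,  0f, 1f, 0f); // left eye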
I am developing a 2D game where I can rotate my view using the rotation sensor and view different textures on the screen.
I am scattering all the textures using this method:
public void position(ShaderProgram program, float[] rotationMatrix, float[] projectionMatrix, float longitude, float latitude, float radius)
{
    this.radius = radius;
    viewMat = new float[MATRIX_SIZE];
    mvpMatrix = new float[MATRIX_SIZE];

    // correct the coordinate system to fit the landscape orientation
    SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, viewMat);

    // correct the axes so that the Y axis points to the sky and Z points to the front
    Matrix.rotateM(viewMat, 0, -90f, 1f, 0f, 0f);

    // first rotation - longitude
    Matrix.rotateM(viewMat, 0, longitude, 0f, 1f, 0f);

    // second rotation - latitude
    Matrix.rotateM(viewMat, 0, latitude, 1f, 0f, 0f);

    // used to control the viewing distance of the texture (currently only the z translation is used)
    Matrix.translateM(viewMat, 0, 0f, 0f, radius);

    // multiply the adjusted view matrix with the projection matrix
    Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, viewMat, 0);

    // send the MVP matrix to the shader
    GLES20.glUniformMatrix4fv(program.getMatrixLocation(), 1, false, mvpMatrix, 0);
}
However, when I render a large number of textures, the framerate becomes very laggy, so I thought about using culling.
How should I perform the culling test when I have a different view matrix for every texture?
What I mean is: how do I check whether the matrix representing where I'm currently looking intersects with the matrix representing each texture, so that I can decide whether or not to draw it?
There are many ways of doing this, but each of them will need more than just a matrix. A matrix alone (assuming the center of the object is at 0,0 before any matrix is applied) will not handle cases where you may see only part of the object.
You may define the boundaries of the original object with 8 points, as for a cube. If you drew these 8 points with the same matrix as the object, they would appear around the object, defining a surface that boxes the object itself.
These points may then be multiplied by your resulting matrix (the whole MVP matrix), which projects them into the OpenGL drawable part of the coordinate system. Now you only need to check whether any of these points lies inside [-1, 1] on every axis; if so, you must draw the object. That is, x, y and z must all be between -1 and 1.
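A minimal sketch of that check, assuming the 8 corners of the object's local bounding box are already available and android.opengl.Matrix is used (the method and variable names here are placeholders, not from the code above):

// Returns true if any of the 8 bounding-box corners, transformed by the MVP
// matrix, lands inside the [-1, 1] cube of normalized device coordinates.
// (See the update below: this alone can still miss some intersection cases.)
private boolean anyCornerVisible(float[] mvpMatrix, float[][] corners) {
    float[] point = new float[4];
    float[] clip = new float[4];
    for (float[] c : corners) {              // 8 corners, each {x, y, z}
        point[0] = c[0];
        point[1] = c[1];
        point[2] = c[2];
        point[3] = 1f;                       // homogeneous w
        Matrix.multiplyMV(clip, 0, mvpMatrix, 0, point, 0);
        float w = clip[3];
        if (w <= 0f) continue;               // corner is behind the camera
        float x = clip[0] / w, y = clip[1] / w, z = clip[2] / w;
        if (x >= -1f && x <= 1f && y >= -1f && y <= 1f && z >= -1f && z <= 1f) {
            return true;                     // at least one corner is on screen
        }
    }
    return false;                            // candidate for culling
}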
Update:
Actually, that will not be enough, as an intersection may happen even if all of the 8 points are outside those coordinates. You will need a proper algorithm to find the intersection of the 2 shapes...
I am very confused about the use of the GLU.gluLookAt(eyeX, eyeY, eyeZ, Xpos, Ypos, Zpos, upX, upY, upZ) method. All I want is to zoom the 3D cube.
When I increase/decrease the value of eyeZ, the camera moves forward/backward relative to the cube. It's all fine up to a certain limit of eyeZ, but when I increase the value beyond that limit, the effect reverses: instead of zooming in it starts zooming out.
I might not know enough OpenGL to understand the above method, but could anyone tell me the basic reason behind this?
I referred to this link
http://jerome.jouvie.free.fr/opengl-tutorials/Tutorial8.php
Here is the relevant part of my code:
public void onDrawFrame(GL10 gl)
{
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    //gl.glTranslatef(xPos, yPos, -zoomFactor);
    GLU.gluLookAt(gl, eyeX, eyeZ, eyeZ, 0f, 0f, 0f, 0f, 1f, 0f);

    gl.glRotatef(mAngleX, 0, 1, 0);
    gl.glRotatef(mAngleY, -1, 0, 0);

    // Draw the model
    cube.draw(gl);
}
This is the method where I am using gluLookAt.
GLU.gluLookAt(gl, eyeX, eyeZ, eyeZ, 0f, 0f, 0f, 0f, 1f, 0f); is a function that points your camera at a particular spot, in this case (0, 0, 0) (I think; I can't remember exactly which way round the parameters are, but I'm assuming the last 3 are your up vector). So if you keep moving your camera towards the point you are looking at, it will eventually pass through it and come out the other side. Since you are using GLU.gluLookAt, the camera then turns to face the object now behind it, giving you the impression that you are zooming out even though you keep moving in the same direction.
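If the goal is just zooming, one way to avoid that flip is to clamp the eye distance so the camera never reaches the point it is looking at. A rough sketch under that assumption (zoomDistance is a hypothetical variable, not from the code above):

// Keep the camera at least minDistance in front of the cube so it can never
// pass through the look-at point and turn around.
float minDistance = 1.0f;                             // assumed lower bound, tune to the scene
float clamped = Math.max(minDistance, zoomDistance);  // zoomDistance shrinks as the user zooms in
GLU.gluLookAt(gl,
        0f, 0f, clamped,   // eye stays on the +z axis, 'clamped' units from the origin
        0f, 0f, 0f,        // always look at the cube's center
        0f, 1f, 0f);       // up vector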
Would there be a performance increase if my app modified the object's vertices instead of using glTranslatef?
The vertices of the NPC object are set as follows; this allows them to be 1/10th of the screen width because of a previous call to gl.glScalef():
protected float[] vertices = {
0f, 0f, -1f, //Bottom Left
1f, 0f, -1f, //Bottom Right
0f, 1f, -1f, //Top Left
1f, 1f, -1f //Top Right
};
At the moment I have a collection of NPC objects which are drawn on the screen; when they move, their X and Y values are updated, and my onDraw accesses those values to draw the NPCs in the correct place.
public void onDraw(GL10 gl) {
    for (int i = 0; i < npcs.size(); i++) {
        NPC npc = npcs.get(i);
        npc.move();
        translate(npc.x, npc.y);
        npc.draw(gl);
    }
}
translate(x, y) - pushes and pops the matrix around a call to gl.glTranslatef(), making its calculations in relation to the screen size and ratio
npc.draw(gl) - enables the client state and draws the arrays
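For reference, a sketch of the push/translate/pop pattern described above, written inline in the loop (the 'ratio' factor stands in for whatever screen-size math the real translate(x, y) does):

gl.glPushMatrix();                                  // isolate this NPC's translation
gl.glTranslatef(npc.x * ratio, npc.y * ratio, 0f);  // move to the NPC's position
npc.draw(gl);                                       // draw with the translated model-view matrix
gl.glPopMatrix();                                   // restore the matrix for the next NPC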
Would there be an increase in performance if the move function changed the vertices of the NPC object directly? For example:
move() {
    // ... do normal movement calculations
    float[] newVertices = {
        x,             y,              z,  // Bottom Left
        x + npc.width, y,              z,  // Bottom Right
        x,             y + npc.height, z,  // Top Left
        x + npc.width, y + npc.height, z   // Top Right
    };
    vertexBuffer.put(newVertices);
    vertexBuffer.position(0);
}
I am about to create a short test to see if I can see any performance increase, but I wanted to ask if anyone had any previous experience with this.
The best way is simply to use the translate function, since translating the model-view matrix only involves manipulating 3 float values, while changing the vertex data costs work directly proportional to the number of vertices you have.
With all due respect, the way you proposed is very inconvenient, and you should stick to matrix manipulation instead of vertex manipulation.
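To make the comparison concrete, here is roughly what the two options cost per object per frame (illustrative only; dx and dy are hypothetical per-frame offsets, the rest of the names are taken from the question):

// Option 1: translate - one fixed-cost matrix operation, independent of the
// number of vertices the object has.
gl.glTranslatef(npc.x, npc.y, 0f);

// Option 2: rewrite the vertex data - touches every vertex on every move and
// re-uploads the buffer.
for (int i = 0; i < vertices.length; i += 3) {
    vertices[i]     += dx;   // x component
    vertices[i + 1] += dy;   // y component
}
vertexBuffer.clear();
vertexBuffer.put(vertices);
vertexBuffer.position(0);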
You can refer to this document for more information about matrix changes during translation operations:
http://www.songho.ca/opengl/gl_transform.html
Cheers
Maurizio
After creating a test state in my current OpenGL app, there seems to be no performance increase when changing the vertices directly compared to using gl.glTranslatef().
As Maurizio Benedetti pointed out, you will start to see a difference only when your vertex count is sufficiently large.
I am showing a textured quad, centered around [0, 0, -10], with width and height = 10000. The camera is positioned at [0, 0, 0] and looks down the negative z-axis (eyepoint = [0, 0, 0], center = [0, 0, -1]):
GLU.gluLookAt(gl, 0f, 0f, 0f, 0f, 0f, -1f, 0f, 1f, 0f);
Lighting and depth testing are disabled.
In orthographic mode, the quad is displayed perfectly, with texture and all - I can even zoom and pan around.
However, when switching to perspective mode via:
GLU.gluPerspective(gl, 60.0f, w / h, 1.0f, 1000.0f);
then the view is just blank. Has anybody got any idea what could cause this?
UPDATE:
Using glFrustum instead of gluPerspective, it works:
gl.glFrustumf(-scaledHalfW, scaledHalfW, -scaledHalfH, scaledHalfH, 1.0f, 100.0f);
But why does gluPerspective not show anything?
Is w / h an integer division maybe?
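If w and h are ints, w / h is indeed integer division (1280 / 720 gives 1, for example), so gluPerspective receives a wrong aspect ratio. Casting to float before dividing avoids that; a sketch, assuming that is the issue:

// Compute the aspect ratio in floating point so e.g. 1280/720 becomes 1.777...
// instead of 1.
GLU.gluPerspective(gl, 60.0f, (float) w / (float) h, 1.0f, 1000.0f);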