Is it necessary to use a projection matrix like so:
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
and
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation and store results in mMVPMatrix
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
I'm having no end of trouble doing a simple 2d (sprite) rotation around the z axis.
The most success I've had so far is to manipulate the rotation matrix (rotate and translate) and pass it directly to the vertex shader.
It's not perfect and carries some shearing/skewing/distortion, but at least it lets me move the 'pivot'/centre point of the quad. If I put the above lines in, the whole thing breaks and I get all kinds of odd results.
What is the actual purpose of the lines above (I have read the Android docs but I don't understand them), and are they necessary? Do people write OpenGL apps without them?
Thanks!!
OpenGL is a C API, but many frameworks wrap its functions into other functions to make life easier. For example, in OpenGL ES 2.0 you must create and pass matrices to OpenGL, but OpenGL does not provide you with any tools to actually build and calculate these matrices. This is where many other libraries exist to do the matrix creation for you; you then pass the constructed matrices to OpenGL -- or the function may well pass the matrix to OpenGL for you, after making the calculation. It just depends on the library.
You can easily not use these frameworks and do it yourself, which is a great way to learn the math in 3D graphics -- and the math is really key to everything in this area.
I'm sure you have direct access to the OpenGL API in Android, but you are choosing to use a library that Android provides natively (similar to how Apple provides GLKit, a recent addition to their frameworks for iOS). That doesn't mean you must use that library, but it might make development faster if you know what the library is doing.
In this case, the three functions above appear to be pretty generic matrix/graphics utilities. You have a frustum function that sets the projection in 3D space. You have the lookAt function that determines the view of the camera -- where it is looking, and where the camera is while it looks there.
And you have a matrix multiplication function, since in the end all matrices must be combined before they are applied to the vertices of your 3D object.
It's important to understand that a typical modelview matrix will include the camera orientation/location but it will also include the rotation and scaling of your object. So just sending a modelview based on the camera (from LookAt) is not enough, unless you want your object to remain at the center of the screen, with no rotation.
If you were to expand all the math that goes into matrix multiplication, it might look like this for a typical setup:
Frustum * Camera * Translation * Rotation * Vertices
Those middle three, Camera, Translation, Rotation, are usually combined together into your modelview, so multiply those together for that particular matrix, then multiply the modelview by your frustum projection matrix, and this whole result can be applied to your vertices.
You must be very careful about the order of the matrix multiplication. Multiplying a frustum by a modelview is not the same as multiplying a modelview by a frustum.
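To see the order dependence concretely, here is a minimal sketch in plain Java (no Android dependency; the class and helper names are our own, chosen to mirror the column-major layout that `android.opengl.Matrix` uses). Translate-then-rotate and rotate-then-translate move the same point to two different places:

```java
// Minimal column-major 4x4 matrix helpers, mirroring the conventions of
// android.opengl.Matrix.multiplyMM. Class and method names are illustrative only.
public class MatrixOrderDemo {
    // result = lhs * rhs, both column-major float[16]
    public static float[] multiplyMM(float[] lhs, float[] rhs) {
        float[] r = new float[16];
        for (int col = 0; col < 4; col++) {
            for (int row = 0; row < 4; row++) {
                float sum = 0f;
                for (int k = 0; k < 4; k++) {
                    sum += lhs[k * 4 + row] * rhs[col * 4 + k];
                }
                r[col * 4 + row] = sum;
            }
        }
        return r;
    }

    // Translation by (tx, ty, tz), column-major
    public static float[] translation(float tx, float ty, float tz) {
        return new float[] { 1,0,0,0,  0,1,0,0,  0,0,1,0,  tx,ty,tz,1 };
    }

    // 90-degree rotation about the z axis, column-major
    public static float[] rotZ90() {
        return new float[] { 0,1,0,0,  -1,0,0,0,  0,0,1,0,  0,0,0,1 };
    }

    // Apply matrix to a homogeneous point (x, y, z, w)
    public static float[] apply(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row]*v[0] + m[4+row]*v[1] + m[8+row]*v[2] + m[12+row]*v[3];
        }
        return r;
    }
}
```

With the point (1, 0, 0): `T * R` rotates first and then translates, landing at (1, 1, 0), while `R * T` translates first and then rotates, landing at (0, 2, 0). The rightmost matrix in the product is applied to the vertices first, which is why `Frustum * Camera * Translation * Rotation * Vertices` is written in that order.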
Now you mention skewing, distortion, etc. One possible reason for this is your viewport. I'm sure somewhere in your API is an option to set the viewport's height and width, which are usually the height and width of your screen. If they are set differently, you will get an improper aspect ratio and some skewing that you see. Just one possible explanation. Or it could be that your parameters to your frustum aren't quite right, since that will certainly affect things like skew also.
Related
I am learning OpenGL ES 2.0, without ever having learned OpenGL or OpenGL ES 1.x.
I'm applying non-uniform scaling to my modelViewMatrix, so the tutorials tell me that I need to take special steps to compute a normalMatrix. In my application the modelViewMatrix has dimension 4x4.
Some tutorials say that for the normalMatrix I need to simply calculate transpose(inverse(modelViewMatrix)).
Other instructions say that I need to first take the upper left 3x3 sub-matrix of my modelViewMatrix and then compute transpose(inverse(submatrix)).
Is there any difference? Do they lead to the same result?
Right now I'm using method 1, and then in the vertex shader I extract a vec3 after applying the transformation:
vec3 vNormalEyespace = vec3(normalMatrix * vec4(vertexNormal, 1.0));
I am doubting this because I see strange effects in my diffuse lighting. I'm thinking about trying method 2, but the android.opengl.Matrix class does not offer methods for inverting or transposing 3x3 matrices...
My actual code in renderFrame() is as follows:
final float[] normalMatrix = new float[16];
final float[] unscaledModelViewMatrix = modelViewMatrix_Vuforia.getData();
Matrix.invertM(normalMatrix, 0, unscaledModelViewMatrix, 0);
Matrix.transposeM(normalMatrix, 0, normalMatrix, 0);
// pass the normalMatrix to the shader
GLES20.glUniformMatrix4fv(normalMatrixHandleBottle, 1, false, normalMatrix, 0);
A 3x3 matrix is enough to transform the normals.
The primary purpose of using 4x4 matrices that operate on homogenous coordinates for positions is that they can express translations. A 3x3 matrix applied to a 3-member vector can not express translations. You can easily confirm that because it will always map the origin back to the origin.
Since normals are vectors, and not positions, we specifically do not want to apply the translation part of the modelview matrix to them. They describe directions, and directions do not change when a translation is applied.
So the cleanest approach is to use a 3x3 matrix for normals, and set it to the inverse-transpose of the top-left 3x3 elements of the modelview matrix.
In the case where you only have rotations and uniform scaling in the modelview matrix (which does not apply to your specific situation), people sometimes use the same modelview matrix that they also use for the positions. Which is correct, as long as the translation part is not applied. This can be done by setting the w component of the vector to zero for the multiplication:
vec3 transformedNormal = (modelViewMatrix * vec4(originalNormal, 0.0)).xyz;
With the w component being zero, the matrix elements that encode the translation have no effect on the result, and this corresponds to using only the top-left 3x3 part of the matrix. As long as this is the same as the inverse-transpose of the matrix, which is the case for rotations and uniform scaling, this is a valid shortcut.
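The difference the inverse-transpose makes can be checked numerically. Below is a hedged sketch in plain Java (the 3x3 helpers are our own, since `android.opengl.Matrix` only handles 4x4): under a non-uniform scale, the inverse-transpose keeps the normal perpendicular to the transformed surface, while transforming the normal with the model matrix itself does not:

```java
// Small 3x3 helpers (row-major float[9]) for demonstrating the normal matrix.
// All names here are illustrative, not part of any Android API.
public class NormalMatrixDemo {
    // Inverse of a 3x3 matrix; assumes the determinant is nonzero.
    public static float[] invert3x3(float[] m) {
        float a=m[0], b=m[1], c=m[2], d=m[3], e=m[4], f=m[5], g=m[6], h=m[7], i=m[8];
        float det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g);
        float inv = 1f / det;
        return new float[] {
            (e*i - f*h)*inv, (c*h - b*i)*inv, (b*f - c*e)*inv,
            (f*g - d*i)*inv, (a*i - c*g)*inv, (c*d - a*f)*inv,
            (d*h - e*g)*inv, (b*g - a*h)*inv, (a*e - b*d)*inv
        };
    }

    public static float[] transpose3x3(float[] m) {
        return new float[] { m[0],m[3],m[6],  m[1],m[4],m[7],  m[2],m[5],m[8] };
    }

    public static float[] mulVec(float[] m, float[] v) {
        return new float[] {
            m[0]*v[0] + m[1]*v[1] + m[2]*v[2],
            m[3]*v[0] + m[4]*v[1] + m[5]*v[2],
            m[6]*v[0] + m[7]*v[1] + m[8]*v[2]
        };
    }

    public static float dot(float[] a, float[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }
}
```

For the scale `diag(2, 1, 1)`, a surface tangent (1, -1, 0) maps to (2, -1, 0). The normal (1, 1, 0) transformed by the inverse-transpose becomes (0.5, 1, 0), which is still perpendicular to the transformed tangent; transformed naively it becomes (2, 1, 0), which is not.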
I'm working on an OpenGL application, specifically for Android with OpenGL ES (2.0 I think). I can currently draw objects independently and rotate the scene all at once. I need to be able to translate/rotate individual objects independently and then rotate/translate the whole scene together. How can I accomplish this? I've read several threads explaining how to push/pop matrices but I'm pretty sure this functionality was deprecated along with the fixed function pipeline of OpenGL 1.1.
To give some perspective, below is the onDrawFrame method for my renderer. Field, Background and Track are all classes I've made that encapsulate vertex data; their draw methods apply the appropriate matrices and draw to the supplied context, in this case GL10 'gl'.
//clear the screen
gl.glClear(GL10.GL_COLOR_BUFFER_BIT |GL10.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0, 0, 60, 0, 0, 0, 0, 2, 0);//setup camera
//apply rotations
long time = SystemClock.uptimeMillis();
float angle = ((int)time)/150.0f;
gl.glRotatef(55.0f, -1, 0, 0);//rotates whole scene
gl.glRotatef(angle, 0, 0, -1);//rotates whole scene
MainActivity.Background.draw(gl);
MainActivity.Track.draw(gl);
MainActivity.Field.draw(gl);
*Update: As it turns out, I can push and pop matrices. Is there anything wrong with the pushing/popping method? It seems to be a very simple way of independently rotating and translating objects, which is exactly what I need.*
There should be nothing wrong with using glPushMatrix and glPopMatrix. Note that your code calls them through the GL10 interface, which is OpenGL ES 1.x: the matrix stack is a standard part of that fixed-function API, so push/pop will behave exactly as documented. (It is ES 2.0 that removed the matrix stack along with the rest of the fixed-function pipeline; there you would maintain the matrices yourself and pass them to your shaders as uniforms.)
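If you do eventually move to ES 2.0, where glPushMatrix/glPopMatrix no longer exist, the stack is easy to replicate in application code. A minimal sketch (class name and API are our own invention):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A hand-rolled modelview stack replicating the glPushMatrix/glPopMatrix idiom
// for ES 2.0, where the fixed-function matrix stack is gone. Matrices are
// column-major float[16]; pass top() to your shader as a uniform each draw call.
public class MatrixStack {
    private final Deque<float[]> stack = new ArrayDeque<>();
    private float[] current = identity();

    public static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    public float[] top() { return current; }

    public void push() { stack.push(current.clone()); } // save a copy, like glPushMatrix
    public void pop()  { current = stack.pop(); }       // restore, like glPopMatrix

    // Post-multiply a translation, equivalent to glTranslatef on the current matrix.
    public void translate(float x, float y, float z) {
        current[12] += current[0]*x + current[4]*y + current[8]*z;
        current[13] += current[1]*x + current[5]*y + current[9]*z;
        current[14] += current[2]*x + current[6]*y + current[10]*z;
    }
}
```

Usage mirrors the fixed-function pattern: push, translate/rotate for one object, draw, pop, then repeat for the next object, so each object's transform is isolated from the others.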
This question is about OpenGL ES 1.x programming for Android.
I followed this tutorial and tested the code on a Samsung Galaxy Ace, and it lagged a bit.
Some code from that tutorial:
public void onDrawFrame(GL10 gl) {
// Clears the screen and depth buffer.
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Replace the current matrix with the identity matrix
gl.glLoadIdentity();
// Translates 10 units into the screen.
gl.glTranslatef(0, 0, -10);
// SQUARE A
// Save the current matrix.
gl.glPushMatrix();
// Rotate square A counter-clockwise.
gl.glRotatef(angle, 0, 0, 1);
// Draw square A.
square.draw(gl);
// Restore the last matrix.
gl.glPopMatrix();
// SQUARE B
// Save the current matrix
gl.glPushMatrix();
// Rotate square B before moving it, making it rotate around A.
gl.glRotatef(-angle, 0, 0, 1);
// Move square B.
gl.glTranslatef(2, 0, 0);
// Scale it to 50% of square A
gl.glScalef(.5f, .5f, .5f);
// Draw square B.
square.draw(gl);
// SQUARE C
// Save the current matrix
gl.glPushMatrix();
// Make the rotation around B
gl.glRotatef(-angle, 0, 0, 1);
gl.glTranslatef(2, 0, 0);
// Scale it to 50% of square B
gl.glScalef(.5f, .5f, .5f);
// Rotate around its own center.
gl.glRotatef(angle*10, 0, 0, 1);
// Draw square C.
square.draw(gl);
// Restore to the matrix as it was before C.
gl.glPopMatrix();
// Restore to the matrix as it was before B.
gl.glPopMatrix();
// Increase the angle.
angle++;
}
What are the weak parts here?
What should one do to optimize OpenGL ES program for Android?
Should I rather use NDK in big graphics projects?
Is it worth going directly to OpenGL ES 2.0?
Since I couldn't find any good, comprehensive book on OpenGL ES 1.x programming for Android, I address this question to the honorable users of Stack Overflow.
Would appreciate any help.
Define lag? It might be helpful to look at the framerate to get a better sense of performance.
But TBH, as long as square.draw(gl) is doing what it implies, this is a very simple program. There is nothing performance-heavy about this code.
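If you want a number rather than a feeling, a tiny frame timer is enough to quantify "lag". A sketch (class and method names are ours); call tick() once per onDrawFrame, passing System.nanoTime():

```java
// A tiny frame-rate meter: records the interval between consecutive frames
// and reports the average frames per second over everything seen so far.
public class FpsMeter {
    private long lastNanos = -1;
    private double totalSeconds = 0;
    private int frames = 0;

    // Call once per rendered frame with a monotonic nanosecond timestamp.
    public void tick(long nowNanos) {
        if (lastNanos >= 0) {
            totalSeconds += (nowNanos - lastNanos) / 1e9;
            frames++;
        }
        lastNanos = nowNanos;
    }

    // Average FPS over all recorded frames (0 before two ticks have arrived).
    public double fps() {
        return frames == 0 ? 0 : frames / totalSeconds;
    }
}
```

Log the result every second or so; a steady value near the display's refresh rate means the "lag" is elsewhere (touch handling, GC pauses), not in the draw code.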
I get the sense, though, that this is more of a speculative question for a bigger project. Some things to consider are what kind of graphical effects you are trying to achieve. Will OpenGL ES 1.x be powerful enough for you? If you need to write custom shader code, you must use ES 2.0. Remember, though, that 2.0 requires you to write everything as a shader. It rips out many of the 1.x fixed-function features and leaves them to the developer to implement and customize, so development will be more complex and more time consuming.
As a warning, do not dive straight into the NDK as a starting point. All of these OpenGL calls are already native. It will be much (much much) easier to write an Android app in Java land than in C/C++ using JNI.
As a final word, premature optimization is the root of all evil. Once you have selected your technologies, implemented a solution, and measured its performance, you can then worry about optimizing the code!
I have an OpenGL ES 1.x Android 1.5 app that shows a square with perspective projection at the center of the screen.
I need to move the camera (NOT THE SQUARE) when the user moves a finger on the screen. For example, if the user moves the finger to the right, the camera must move to the left, so it looks like the user is moving the square.
I need to do it without translating the square. The square must always stay at the OpenGL position 0,0,-1.
I DON'T WANT to rotate the camera around the square; what I want is to move the camera side to side. Code examples are welcome, my OpenGL skills are very low, and I can't find good examples for this on Google.
I know that I must use this function: public static void gluLookAt (GL10 gl, float eyeX, float eyeY, float eyeZ, float centerX, float centerY, float centerZ, float upX, float upY, float upZ), but I don't understand where and how to get the values for the parameters. Because of this, I would appreciate code examples.
For example:
I have a cube at the position 0,0,-1. I want my camera to point at the cube. I tried this: GLU.gluLookAt(gl, 0, 0, 2, 0, 0, 0, 0, 0, 1); but the cube is not on the screen, and I just don't understand what I'm doing wrong.
First of all, you have to understand that in OpenGL there are not distinct model and view matrices. There is only a combined modelview matrix. So OpenGL doesn't care (or even know) if you translate the camera (what is a camera anyway?) or the object, so your requirement not to move the square is entirely artificial. Though it may be that this is a valid requirement and the distinction between model and view transformation often is very practical, just don't think that translating the square is any different from translating the camera from OpenGL's point of view.
Likewise, you don't necessarily need to use gluLookAt. Like glOrtho, glFrustum or gluPerspective, this function just modifies the currently selected matrix (usually the modelview matrix), no different from the glTranslate, glRotate or glScale functions. The gluLookAt function comes in handy when you want to position a classical camera, but its functionality can also be achieved by calls to glTranslate and glRotate without problems, and sometimes (depending on your requirements) this is even easier than artificially mapping your view parameters to gluLookAt parameters.
Now to your problem, which is indeed solvable quite easily without gluLookAt: What you want to do is move the camera in a direction parallel to the screen plane and this in turn is equivalent to moving the camera in the x-y-plane in view space (or camera space, if you want). And this in turn is equivalent to moving the scene in opposite direction in the x-y-plane in view space.
So all that needs to be done is
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x, y, 0.0f);
//camera setup...
Where (x, y) is the movement vector determined from the touch events, appropriately scaled (try dividing the touch coords you get by the screen dimensions or something similar for example). After this glTranslate comes whatever other camera or scene transformations you already have (be it gluLookAt or just some glTranslate/glRotate/glScale calls). Just make sure that the glTranslate(x, y, ...) is the first transformation you do on the modelview matrix after setting it to identity, since we want to move in view space.
So you don't even need gluLookAt. From your other questions I know your code already looks something like
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x, y, z);
glRotatef(...);
...
So everything you need to do is plug the x and y values determined from the touch movement into the first glTranslate call (or add them to already existing x and y values), since multiple translations are perfectly commutative.
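As a sketch of that plumbing (class, parameter names, and the scale factor are our assumptions, to be tuned for your scene), here is one way to map a pixel drag to the (x, y) pair passed to the first glTranslatef call:

```java
// Converts a touch drag measured in pixels into a modelview-space translation.
// worldUnitsPerScreen says how far a full-screen drag should move the scene;
// the sign flips below are assumptions to make the square appear to follow
// the finger, and may need reversing depending on the desired feel.
public class TouchPan {
    public static float[] dragToTranslation(float dxPixels, float dyPixels,
                                            float screenW, float screenH,
                                            float worldUnitsPerScreen) {
        // Normalize the drag by the screen size, then scale into world units.
        // y is flipped because screen coordinates grow downward while GL's y grows upward.
        float tx = -(dxPixels / screenW) * worldUnitsPerScreen;
        float ty =  (dyPixels / screenH) * worldUnitsPerScreen;
        return new float[] { tx, ty };
    }
}
```

Accumulate the returned values across touch events and feed the running totals into glTranslatef(x, y, 0) right after glLoadIdentity, as described above.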
For more insight into OpenGL's transformation pipeline (which is definitely needed before progressing further), you may also look at the answers to this question.
EDIT: If you indeed want to use gluLookAt (be it instead of or after the above-mentioned translation), here are a few words about how it works. It defines a camera using three 3D vectors (passed in as 3 consecutive values each). First comes the camera's position (in your case (0, 0, 2)), then the point at which the camera looks (in your case (0, 0, 0), though (0, 0, 1) or (0, 0, -42) would result in the same camera, since only the direction matters). And last comes an up-vector, defining the approximate up-direction of the camera (which is further orthogonalized by gluLookAt to construct a proper orthogonal camera frame).
But since the up-vector in your case is the z-axis, which is also the negative viewing direction, this results in a singular matrix. You probably want the y-axis as up-direction, which would mean a call to
gluLookAt(0,0,2, 0,0,0, 0,1,0);
which is in turn equivalent to a simple
glTranslate(0, 0, -2);
since you use the negative z-axis as viewing direction, which is also OpenGL's default.
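To make that equivalence concrete, here is a small stand-alone reimplementation of the gluLookAt math (our own, for illustration; it follows the standard forward/side/up construction and produces a column-major matrix). For this particular eye/center/up, the rotation part comes out as the identity and only the translate(0, 0, -2) remains:

```java
// Stand-alone gluLookAt-style matrix builder (column-major float[16]).
// Written to demonstrate that lookAt(eye=(0,0,2), center=origin, up=+y)
// reduces to a plain translation by (0, 0, -2).
public class LookAtDemo {
    public static float[] lookAt(float ex, float ey, float ez,
                                 float cx, float cy, float cz,
                                 float ux, float uy, float uz) {
        // forward = normalize(center - eye)
        float fx = cx - ex, fy = cy - ey, fz = cz - ez;
        float fl = (float) Math.sqrt(fx*fx + fy*fy + fz*fz);
        fx /= fl; fy /= fl; fz /= fl;
        // side = normalize(forward x up)
        float sx = fy*uz - fz*uy, sy = fz*ux - fx*uz, sz = fx*uy - fy*ux;
        float sl = (float) Math.sqrt(sx*sx + sy*sy + sz*sz);
        sx /= sl; sy /= sl; sz /= sl;
        // recomputed up = side x forward (guarantees an orthogonal frame)
        float rx = sy*fz - sz*fy, ry = sz*fx - sx*fz, rz = sx*fy - sy*fx;
        // Rotation rows (side, up, -forward), then translate by -eye.
        return new float[] {
            sx, rx, -fx, 0,
            sy, ry, -fy, 0,
            sz, rz, -fz, 0,
            -(sx*ex + sy*ey + sz*ez), -(rx*ex + ry*ey + rz*ez), (fx*ex + fy*ey + fz*ez), 1
        };
    }
}
```

Note also that the asker's original up-vector (0, 0, 1) is parallel to the viewing direction here, so the cross product above would be zero-length: that is exactly the singular-matrix problem described in the answer.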
I'm trying to rotate a sprite using the draw texture extension, but nothing happens. I'm using the following code:
gl.glRotatef(90, 0, 0, 1.0f);
gl.glBindTexture(GL10.GL_TEXTURE_2D, TextureID);
((GL11Ext) gl).glDrawTexfOES(x, y, z, width, height);
The texture is drawn to the screen but it is not rotated... Anyone? :)
From the OES_draw_texture extension:
Xs and Ys are given directly in window (viewport) coordinates.
So the passed in coordinates are not transformed by the modelview and projection matrices, which is what glRotatef changes. In short, this extension does not support rotated sprites.
If you want those, the simplest is to draw standard rotated quads instead.
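A sketch of the vertex math for such a quad (plain Java, names ours): rotate the four corners about the sprite's center, then feed the resulting positions to glDrawArrays as a triangle fan or strip along with the usual texture coordinates:

```java
// Computes the four corners of a sprite quad rotated about its center,
// for drawing with ordinary vertex arrays once glDrawTexfOES is off the table.
public class RotatedQuad {
    // Returns 8 floats: x0,y0, x1,y1, x2,y2, x3,y3, counter-clockwise
    // starting from the bottom-left corner.
    public static float[] corners(float cx, float cy, float w, float h, float degrees) {
        double rad = Math.toRadians(degrees);
        float c = (float) Math.cos(rad), s = (float) Math.sin(rad);
        float hw = w / 2, hh = h / 2;
        float[] local = { -hw,-hh,  hw,-hh,  hw,hh,  -hw,hh };
        float[] out = new float[8];
        for (int i = 0; i < 4; i++) {
            float x = local[2*i], y = local[2*i + 1];
            // Standard 2D rotation, then translate to the sprite's center.
            out[2*i]     = cx + x*c - y*s;
            out[2*i + 1] = cy + x*s + y*c;
        }
        return out;
    }
}
```

Changing which point the `local` offsets are measured from also gives you an arbitrary pivot, which glDrawTexfOES never could.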
After testing quite a few different ways to do this, I found the answer was right in front of me the whole time... I was using the SpriteMethodTest example as my codebase, but I had ignored the VBO extension part there, which basically has all the needed functionality.
SpriteMethodTest: http://code.google.com/p/apps-for-android/source/browse/trunk/#trunk/SpriteMethodTest