I'm working on an OpenGL application, specifically for Android with OpenGL ES (2.0, I think). I can currently draw objects independently and rotate the scene all at once. I need to be able to translate/rotate individual objects independently and then rotate/translate the whole scene together. How can I accomplish this? I've read several threads explaining how to push/pop matrices, but I'm pretty sure that functionality was deprecated along with the rest of the ES 1.1 fixed-function pipeline.
To give some perspective, below is the onDrawFrame method for my renderer. Field, Background and Track are all classes I've made that encapsulate vertex data, and their draw methods draw the appropriate geometry to the supplied context, in this case the GL10 'gl'.
//clear the screen
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0, 0, 60, 0, 0, 0, 0, 2, 0);//setup camera
//apply rotations
long time = SystemClock.uptimeMillis();
float angle = ((int)time)/150.0f;
gl.glRotatef(55.0f, -1, 0, 0);//rotates whole scene
gl.glRotatef(angle, 0, 0, -1);//rotates whole scene
MainActivity.Background.draw(gl);
MainActivity.Track.draw(gl);
MainActivity.Field.draw(gl);
*Update:* As it turns out, I can push and pop matrices. Is there anything wrong with the push/pop approach? It seems to be a very simple way of independently rotating and translating objects, which is exactly what I need.
There should be nothing wrong with using glPushMatrix and glPopMatrix. Your renderer is written against the GL10 interface, so you're on the ES 1.x fixed-function pipeline, where the matrix stack (and push/pop) is fully supported; it's the standard way to apply per-object transforms on top of a shared scene transform.
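For example, here is a minimal sketch based on your onDrawFrame; the per-object values (sceneAngle, trackX, trackY, trackAngle, fieldAngle) are made-up placeholders for illustration:
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    // Scene-level transforms (camera + whole-scene rotation) go onto the stack first.
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    GLU.gluLookAt(gl, 0, 0, 60, 0, 0, 0, 0, 2, 0);
    gl.glRotatef(55.0f, -1, 0, 0);       // whole-scene tilt
    gl.glRotatef(sceneAngle, 0, 0, -1);  // whole-scene spin
    // Background only needs the scene transform.
    MainActivity.Background.draw(gl);
    // Track gets its own translation/rotation layered on top of the scene transform.
    gl.glPushMatrix();                   // save the scene matrix
    gl.glTranslatef(trackX, trackY, 0);  // per-object translation
    gl.glRotatef(trackAngle, 0, 0, 1);   // per-object rotation
    MainActivity.Track.draw(gl);
    gl.glPopMatrix();                    // restore the scene matrix
    // Field likewise.
    gl.glPushMatrix();
    gl.glRotatef(fieldAngle, 0, 0, 1);
    MainActivity.Field.draw(gl);
    gl.glPopMatrix();
}
Each push/pop pair keeps the per-object transform from leaking into the objects drawn after it, while everything still inherits the camera and whole-scene rotation set up before the pushes.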
Related
As I understand it, OpenGL ES 2 no longer has a built-in matrix stack, so I have to create my own matrices.
What I want to do is just draw some simple 2D graphics, like a couple of rectangles.
Most of the code I find uses OpenGL ES 1 or older desktop OpenGL, where there was still a matrix stack, so I can't use it directly in 2.0.
I believe I want code that does something like this
public void onSurfaceCreated(GL10 unused, EGLConfig eglConfig) {
// Set the background frame color
GLES20.glClearColor(0.1f, 0.3f, 0.5f, 1.0f);
// Set 2D drawing mode
GLES20.glViewport(0, 0, windowWidth, windowHeight);
GLES20.glMatrixMode(GL_PROJECTION);
GLES20.glLoadIdentity();
GLES20.glOrtho(0, windowWidth, windowHeight, 0, -1, 1);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
}
but there are no longer any methods glMatrixMode, glLoadIdentity, glOrtho.
How would I translate this into OpenGL ES 2 to set it up for 2D drawing? I believe I can use the Matrix class provided by Android, but I am not sure how.
Basically, you don't "set" any matrices in OpenGL ES 2.0 (the way you set up other state, like the viewport or disabling GL_DEPTH_TEST). Instead, you create and manage the matrices yourself and pass them to your shaders on each frame render.
You can just create an orthographic projection matrix, then pass it to your shader as a uniform (e.g. with glUniformMatrix4fv).
I can't comment on exactly how to do that on Android, but if you have a Matrix class, it should have a function to create an orthographic projection matrix. You would then pass the data (i.e. the 16 floats of a 4x4 matrix) to glUniformMatrix4fv before calling glDrawArrays/glDrawElements/etc.
So your setup function above would be much smaller:
public void onSurfaceCreated(GL10 unused, EGLConfig eglConfig) {
// Set the background frame color
GLES20.glClearColor(0.1f, 0.3f, 0.5f, 1.0f);
// Set 2D drawing mode
GLES20.glViewport(0, 0, windowWidth, windowHeight);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
}
Your render functions, however, would look different (you could still create your ortho projection matrix in the setup above, just making sure to update it when necessary, e.g. on screen resizing, orientation changes, etc.).
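As a rough sketch of what that might look like, assuming you already have a compiled and linked shader program and a handle to a mat4 uniform (the field names program and uMvpHandle below are placeholders), android.opengl.Matrix can build the ortho matrix for you:
private final float[] projectionMatrix = new float[16];

public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // Pixel-based 2D projection, origin at the top-left, like glOrtho(0, w, h, 0, -1, 1).
    Matrix.orthoM(projectionMatrix, 0, 0, width, height, 0, -1, 1);
}

public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glUseProgram(program);  // your shader program (assumed to exist)
    // Upload the projection matrix to the shader before drawing.
    GLES20.glUniformMatrix4fv(uMvpHandle, 1, false, projectionMatrix, 0);
    // ... bind vertex attributes and call glDrawArrays/glDrawElements here ...
}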
This page covers it pretty well for Android:
http://www.learnopengles.com/android-lesson-one-getting-started/
Is it necessary to use a projection matrix like so:
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
and
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation and store results in mMVPMatrix
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
I'm having no end of trouble doing a simple 2D (sprite) rotation around the z axis.
The most success I've had so far is to manipulate the rotation matrix (rotate and translate) and pass it directly to the vertex shader.
It's not perfect and carries some shearing/skewing/distortion with it, but at least it allows me to move the 'pivot'/centre point of the quad. If I put the lines above in, the whole thing breaks and I get all kinds of odd results.
What is the actual purpose of the lines above (I have read the Android docs but I don't understand them), and are they necessary? Do people write OpenGL apps without them?
Thanks!!
OpenGL is a C API, but many frameworks wrap its functions to make life easier. For example, in OpenGL ES 2.0 you must create and pass matrices to OpenGL yourself, but OpenGL does not give you any tools to actually build and calculate those matrices. That is where other libraries come in: they do the matrix construction for you, and you then pass the constructed matrices to OpenGL (or the library function may pass the matrix to OpenGL for you after making the calculation; it just depends on the library).
You can easily skip these frameworks and do it yourself, which is a great way to learn the math of 3D graphics, and the math really is key to everything in this area.
I'm sure you have direct access to the OpenGL API in Android, but you are choosing to use a library that Android provides natively (similar to how Apple provides GLKit, a recent addition to their frameworks for iOS). You don't have to use that library, but it can make development faster if you understand what it is doing.
In this case, the three functions above appear to be pretty generic matrix/graphics utilities. You have a frustum function that sets the projection in 3D space, and you have the lookAt function that determines the camera's view: where the camera is and where it is looking.
And you have a matrix multiplication function, since in the end all matrices must be combined before they are applied to the vertices of your 3D object.
It's important to understand that a typical modelview matrix will include the camera orientation/location but it will also include the rotation and scaling of your object. So just sending a modelview based on the camera (from LookAt) is not enough, unless you want your object to remain at the center of the screen, with no rotation.
If you were to expand all the math that goes into matrix multiplication, it might look like this for a typical setup:
Frustum * Camera * Translation * Rotation * Vertices
Those middle three, Camera, Translation, Rotation, are usually combined together into your modelview, so multiply those together for that particular matrix, then multiply the modelview by your frustum projection matrix, and this whole result can be applied to your vertices.
You must be very careful about the order of the matrix multiplication. Multiplying a frustum by a modelview is not the same as multiplying a modelview by a frustum.
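For instance, with android.opengl.Matrix the combined matrix might be built up like this (objX, objY, objAngle and uMvpHandle are illustrative names; mVMatrix and mProjMatrix are the ones from your snippet):
float[] modelMatrix = new float[16];
float[] mvMatrix = new float[16];
float[] mvpMatrix = new float[16];

// Model: translate then rotate the object (the rightmost transform is applied to the vertices first).
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, objX, objY, 0f);
Matrix.rotateM(modelMatrix, 0, objAngle, 0f, 0f, 1f);

// Modelview = View (from setLookAtM) * Model
Matrix.multiplyMM(mvMatrix, 0, mVMatrix, 0, modelMatrix, 0);

// MVP = Projection (from frustumM/orthoM) * Modelview
Matrix.multiplyMM(mvpMatrix, 0, mProjMatrix, 0, mvMatrix, 0);

// Upload the result; the shader multiplies each vertex by this single matrix.
GLES20.glUniformMatrix4fv(uMvpHandle, 1, false, mvpMatrix, 0);
Swapping the order of the arguments to multiplyMM gives a different (and usually wrong) result, which is exactly the ordering issue described above.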
Now, you mention skewing, distortion, etc. One possible cause is your viewport. Somewhere in the API there will be a call to set the viewport's width and height, which are usually the width and height of your screen. If they are set to something else, you will get an incorrect aspect ratio and the kind of skewing you describe. That is just one possible explanation; it could also be that the parameters to your frustum aren't quite right, since that will certainly affect things like skew as well.
This question is about OpenGL ES 1.x programming for Android.
I followed this tutorial and tested the code on a Samsung Galaxy Ace, and it lagged a bit.
Some code from that tutorial:
public void onDrawFrame(GL10 gl) {
// Clears the screen and depth buffer.
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Replace the current matrix with the identity matrix
gl.glLoadIdentity();
// Translates 10 units into the screen.
gl.glTranslatef(0, 0, -10);
// SQUARE A
// Save the current matrix.
gl.glPushMatrix();
// Rotate square A counter-clockwise.
gl.glRotatef(angle, 0, 0, 1);
// Draw square A.
square.draw(gl);
// Restore the last matrix.
gl.glPopMatrix();
// SQUARE B
// Save the current matrix
gl.glPushMatrix();
// Rotate square B before moving it, making it rotate around A.
gl.glRotatef(-angle, 0, 0, 1);
// Move square B.
gl.glTranslatef(2, 0, 0);
// Scale it to 50% of square A
gl.glScalef(.5f, .5f, .5f);
// Draw square B.
square.draw(gl);
// SQUARE C
// Save the current matrix
gl.glPushMatrix();
// Make the rotation around B
gl.glRotatef(-angle, 0, 0, 1);
gl.glTranslatef(2, 0, 0);
// Scale it to 50% of square B
gl.glScalef(.5f, .5f, .5f);
// Rotate around its own center.
gl.glRotatef(angle*10, 0, 0, 1);
// Draw square C.
square.draw(gl);
// Restore to the matrix as it was before C.
gl.glPopMatrix();
// Restore to the matrix as it was before B.
gl.glPopMatrix();
// Increase the angle.
angle++;
}
What are the weak parts here?
What should one do to optimize OpenGL ES program for Android?
Should I rather use NDK in big graphics projects?
Is it worth going directly to OpenGL ES 2.0?
Since I haven't found any good, comprehensive book on OpenGL ES 1.x programming for Android, I'm addressing this question to the honorable users of Stack Overflow.
Would appreciate any help.
Define lag? It might be helpful to look at framerate to get a better sense of performance.
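As a minimal sketch of how you might measure that (the field names are illustrative), you can count frames in onDrawFrame and log the total once per second:
private int frameCount = 0;
private long lastLogTime = SystemClock.uptimeMillis();

public void onDrawFrame(GL10 gl) {
    // ... existing drawing code ...
    frameCount++;
    long now = SystemClock.uptimeMillis();
    if (now - lastLogTime >= 1000) {
        Log.d("Renderer", "FPS: " + frameCount);
        frameCount = 0;
        lastLogTime = now;
    }
}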
But TBH, so long as square.draw(gl) is doing what it implies, this is a very simple program. There is nothing performance-heavy about this code.
I get the sense, though, that this is more of a speculative question for a bigger project. Some things to consider are what kind of graphical effects you are trying to achieve and whether OpenGL ES 1.x will be powerful enough for you. If you need to write custom shader code, you must use ES 2.0. Remember, though, that 2.0 requires you to write everything as a shader; it rips out many of the 1.x fixed-function features and leaves them for the developer to implement and customize. So development will be more complex and more time consuming.
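To give a sense of what "everything as a shader" means, here is a minimal ES 2.0 vertex/fragment pair for flat-coloured geometry (the uniform/attribute names are just illustrative); even the matrix transform you get for free in 1.x has to be spelled out:
private static final String VERTEX_SHADER =
        "uniform mat4 u_MVPMatrix;                   \n" +
        "attribute vec4 a_Position;                  \n" +
        "void main() {                               \n" +
        "    gl_Position = u_MVPMatrix * a_Position; \n" +
        "}                                           \n";

private static final String FRAGMENT_SHADER =
        "precision mediump float;                    \n" +
        "uniform vec4 u_Color;                       \n" +
        "void main() {                               \n" +
        "    gl_FragColor = u_Color;                 \n" +
        "}                                           \n";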
As a warning, do not dive straight into the NDK as a starting point. All of these OpenGL calls are already native, and it will be much (much, much) easier to write an Android app in Java land than in C/C++ using JNI.
As a final word, premature optimization is the root of all evil. Once you have selected your technologies, implemented a solution, and measured its performance, you can then worry about optimizing the code!
This is my first post here, so I apologize for any blunders.
I'm developing a simple action game using OpenGL ES 2.0 and Android 2.3. The game framework I'm currently working on is based on two-dimensional sprites that exist in a three-dimensional world. My world entities carry information such as their position in the world, a rotation value in the form of a float[] matrix, an OpenGL texture handle and an Android Bitmap handle (I'm not sure the latter is necessary, since the rasterisation is done by OpenGL, but for the time being it is there for my convenience). That is briefly the background; now to the problematic issue.
At the moment I'm stuck on pixel-based collision detection, as I'm not sure which object (the OpenGL texture or the Android Bitmap) I need to sample. I've already tried sampling the Android Bitmap, but it didn't work at all for me: many run-time crashes from reading outside the bitmap. To be able to read the pixels of a properly rotated sprite, I used the Bitmap.createBitmap method. Here's the code snippet:
android.graphics.Matrix m = new android.graphics.Matrix();
if(o1.angle != 0.0f) {
m.setRotate(o1.angle);
b1 = Bitmap.createBitmap(b1, 0, 0, b1.getWidth(), b1.getHeight(), m, false);
}
Another issue, which might add to the problem or even be the main problem, is that my rectangle of intersection (the two-dimensional region shared by both objects) is built up from parts of two bounding boxes that were computed with OpenGL's Matrix.multiplyMV (code below). Could it be that the Android and OpenGL matrix computations aren't equivalent?
Matrix.rotateM(mtxRotate, 0, -angle, 0, 0, 1);
// original bitmap size, equal to sprite size in it's model space,
// as well as in world's space
float[] rect = new float[] {
origRect.left, origRect.top, 0.0f, 1.0f,
origRect.right, origRect.top, 0.0f, 1.0f,
origRect.left, origRect.bottom, 0.0f, 1.0f,
origRect.right, origRect.bottom, 0.0f, 1.0f
};
android.opengl.Matrix.multiplyMV(rect, 0, mtxRotate, 0, rect, 0);
android.opengl.Matrix.multiplyMV(rect, 4, mtxRotate, 0, rect, 4);
android.opengl.Matrix.multiplyMV(rect, 8, mtxRotate, 0, rect, 8);
android.opengl.Matrix.multiplyMV(rect, 12, mtxRotate, 0, rect, 12);
// computation of object's bounding box (it is necessary as object has been
// rotated second ago and now it's bounding rectangle doesn't match it's host
float left = rect[0];
float top = rect[1];
float right = rect[0];
float bottom = rect[1];
for(int i = 4; i < 16; i += 4) {
left = Math.min(left, rect[i]);
top = Math.max(top, rect[i+1]);
right = Math.max(right, rect[i]);
bottom = Math.min(bottom, rect[i+1]);
}
Cheers,
First, note that there is a bug in your code. You cannot use Matrix.multiplyMV() with the source and destination vector being the same: the function calculates the x coordinate correctly and overwrites it in the source vector, but it still needs the original x to calculate the y, z and w coordinates, which therefore come out wrong. Also note that it would be easier to use bounding spheres for a first collision-detection step, as they do not require such complicated matrix-transformation code.
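A minimal sketch of the fix is to write the transformed corners into a separate destination array:
float[] src = new float[] {
        origRect.left,  origRect.top,    0.0f, 1.0f,
        origRect.right, origRect.top,    0.0f, 1.0f,
        origRect.left,  origRect.bottom, 0.0f, 1.0f,
        origRect.right, origRect.bottom, 0.0f, 1.0f
};
float[] dst = new float[16];
for (int i = 0; i < 16; i += 4) {
    // Destination and source must be different arrays, otherwise the freshly
    // written x is used to compute y, z and w.
    android.opengl.Matrix.multiplyMV(dst, i, mtxRotate, 0, src, i);
}
// ... then compute the bounding box from dst instead of rect ...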
Then, the collision detection. You should not read from bitmaps or textures. What you should do instead is build a silhouette for your object (that is pretty easy: a silhouette is just a list of positions). After that you need to build convex objects that fill the (non-convex) silhouette. This can be achieved with, e.g., the ear clipping algorithm; it may not be the fastest, but it is very easy to implement and only has to be done once. Once you have the convex objects, you can transform their coordinates with a matrix and detect collisions against your world (there are many nice articles on ray-triangle intersections you can use), and you get the same precision as if you had used pixel-based collision detection; see the sketch below.
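For the 2D overlap test between two transformed convex pieces, a separating-axis check is one common approach; a rough sketch in plain Java (not part of your framework's API):
// Returns true if two convex 2D polygons (flat x,y pairs, already transformed
// to world space) overlap, using the separating axis test.
static boolean convexOverlap(float[] a, float[] b) {
    return !hasSeparatingAxis(a, b) && !hasSeparatingAxis(b, a);
}

static boolean hasSeparatingAxis(float[] poly, float[] other) {
    int n = poly.length / 2;
    for (int i = 0; i < n; i++) {
        // Edge from vertex i to vertex i+1, and its perpendicular (the candidate axis).
        int j = (i + 1) % n;
        float ex = poly[2 * j] - poly[2 * i];
        float ey = poly[2 * j + 1] - poly[2 * i + 1];
        float ax = -ey, ay = ex;
        // Project both polygons onto the axis and compare the intervals.
        float[] p1 = project(poly, ax, ay);
        float[] p2 = project(other, ax, ay);
        if (p1[1] < p2[0] || p2[1] < p1[0]) {
            return true; // a gap exists on this axis, so the polygons do not overlap
        }
    }
    return false;
}

static float[] project(float[] poly, float ax, float ay) {
    float min = Float.MAX_VALUE, max = -Float.MAX_VALUE;
    for (int i = 0; i < poly.length; i += 2) {
        float d = poly[i] * ax + poly[i + 1] * ay;
        min = Math.min(min, d);
        max = Math.max(max, d);
    }
    return new float[] { min, max };
}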
I hope it helps ...
I'm trying to rotate a sprite using the draw texture extension, but nothing happens. I'm using the following code:
gl.glRotatef(90, 0, 0, 1.0f);
gl.glBindTexture(GL10.GL_TEXTURE_2D, TextureID);
((GL11Ext) gl).glDrawTexfOES(x, y, z, width, height);
The texture is drawn to the screen but it is not rotated... Anyone? :)
From the OES_draw_texture extension:
Xs and Ys are given directly in window (viewport) coordinates.
So the passed in coordinates are not transformed by the modelview and projection matrices, which is what glRotatef changes. In short, this extension does not support rotated sprites.
If you want rotated sprites, the simplest approach is to draw ordinary textured quads with a rotation applied instead.
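A rough sketch of that route with the same GL10 context (this draws in modelview/projection space rather than raw window coordinates, so x, y, z, width and height here are in your world units, and in real code the buffer would be built once rather than every frame):
// A unit quad centred on the origin as a triangle strip, interleaved as x, y, s, t.
float[] quad = {
    -0.5f, -0.5f,  0f, 1f,
     0.5f, -0.5f,  1f, 1f,
    -0.5f,  0.5f,  0f, 0f,
     0.5f,  0.5f,  1f, 0f,
};
FloatBuffer buf = ByteBuffer.allocateDirect(quad.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
buf.put(quad).position(0);

gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBindTexture(GL10.GL_TEXTURE_2D, TextureID);

gl.glPushMatrix();
gl.glTranslatef(x, y, z);          // position the sprite
gl.glRotatef(90, 0, 0, 1.0f);      // now the rotation actually applies
gl.glScalef(width, height, 1.0f);  // size the unit quad

gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
buf.position(0);
gl.glVertexPointer(2, GL10.GL_FLOAT, 4 * 4, buf);   // 2 position floats, 16-byte stride
buf.position(2);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 4 * 4, buf); // 2 texcoord floats, same stride
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);

gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glPopMatrix();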
After testing quite a few different ways to do this, I found the answer was right in front of me the whole time... I was using the SpriteMethodTest example as my codebase, but I had ignored the VBO extension part of it, which basically has all the needed functionality.
SpriteMethodTest: http://code.google.com/p/apps-for-android/source/browse/trunk/#trunk/SpriteMethodTest