How to draw OpenGL elements more efficiently - Android

I've posted my drawing method which is called each frame.
I change the vertices each frame to move the object (which is basically a sprite/textured quad).
As you can see, I was initially creating a new array each frame, but I have changed this now: I create the array once and just update it every frame. However, I'm wondering if I can do anything more to improve the efficiency. Although I'm getting about 90fps, the sprite does not move smoothly all the time; every now and then it just pauses for a split second. I can't see the garbage collector running, but I'm guessing the stutter is due to allocation.
As I add more sprites/quads the jerkiness gets worse, but even at 100+ quads, although the smoothness has all but gone, my frame rate is still around 60fps, so I can't understand what is slowing this down.
I've also added a screencap from Allocation Tracker
Any help would be appreciated.
public void drawTest(float x, float y, float[] mvpMatrix){
//Convert Co-ordinates
//Left
xPlotLeft = (-MyGLRenderer.ratio)+((x)*MyGLRenderer.coordStepAmountWidth);
//Top
yPlotTop = +1-((y)*MyGLRenderer.coordStepAmountHeight);
//Right
xPlotRight = xPlotLeft+((quadWidth)*MyGLRenderer.coordStepAmountWidth);
//Bottom
yPlotBottom = yPlotTop-((quadHeight)*MyGLRenderer.coordStepAmountHeight);
// Following has been changed as per below. I am now declaring the array initially and just updating it every frame.
// float[] vertices = {
//Top Left
// xPlotLeft,yPlotTop,0, 0,0,
//Top Right
// xPlotRight,yPlotTop,0, 1,0,
//Bottom Left
// xPlotLeft,yPlotBottom,0, 0,1,
//Bottom Right
// xPlotRight,yPlotBottom,0, 1,1
// };
vertices[0]=xPlotLeft;
vertices[1]=yPlotTop;
vertices[2]=0;
vertices[3]=0;
vertices[4]=0;
vertices[5]=xPlotRight;
vertices[6]=yPlotTop;
vertices[7]=0;
vertices[8]=1;
vertices[9]=0;
vertices[10]=xPlotLeft;
vertices[11]=yPlotBottom;
vertices[12]=0;
vertices[13]=0;
vertices[14]=1;
vertices[15]=xPlotRight;
vertices[16]=yPlotBottom;
vertices[17]=0;
vertices[18]=1;
vertices[19]=1;
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
//GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
//Bind texture
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
//Use program
GLES20.glUseProgram(iProgId);
// Combine the rotation matrix with the projection and camera view
Matrix.multiplyMM(mvpMatrix2, 0, mvpMatrix, 0, mRotationMatrix, 0);
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix2, 0);
//Set starting position for vertices (0 for position)
vertexBuf.position(0);
//Specify attributes for vertex
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
//Enable attribute for position
GLES20.glEnableVertexAttribArray(iPosition);
//Set starting position for vertices (3 for texture)
vertexBuf.position(3);
//Specify attributes for vertex
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, vertexBuf);
//Enable attribute for texture
GLES20.glEnableVertexAttribArray(iTexCoords);
//Enable Alpha blending and set blending function
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
//Draw
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
//Disable Alpha blending
GLES20.glDisable(GLES20.GL_BLEND);
}

ByteBuffer.allocateDirect() allocates a new buffer in memory every frame; create the buffer once and overwrite its contents instead. Just call rewind() or position(0) before put().
To improve matters further, use a VBO (vertex buffer object; there are many tutorials online, and several questions on SO on this topic) and glBufferSubData to update the buffer. A rough sketch of both ideas follows.
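Something along these lines, reusing the vertexBuf and vertices fields from the question's code (the vboId handle and the GL_DYNAMIC_DRAW usage hint are assumptions for illustration, not part of the original):
// One-time setup (e.g. in the constructor or onSurfaceCreated):
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();

int[] vbo = new int[1];
GLES20.glGenBuffers(1, vbo, 0);
int vboId = vbo[0];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
// Reserve space once; GL_DYNAMIC_DRAW hints that the contents change often.
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, vertices.length * 4, null,
        GLES20.GL_DYNAMIC_DRAW);

// Per frame in drawTest(): overwrite the existing buffer, no new allocations.
vertexBuf.position(0);
vertexBuf.put(vertices).position(0);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, vertices.length * 4, vertexBuf);

// With a VBO bound, the attribute pointers take byte offsets instead of a Buffer:
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 5 * 4, 0);
GLES20.glVertexAttribPointer(iTexCoords, 2, GLES20.GL_FLOAT, false, 5 * 4, 3 * 4);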

Related

What is the proper way to enable and disable GLES20 atributes?

Firstly, what am I creating: a 2D tile-based RPG game.
I will post what I am currently doing here, and comment the spots where I am not sure I am using things correctly.
In GlSurfaceViewRenderer:
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
//I enable some attributes
//Don't know if it's needed in GLES20; there isn't a GL20.GL_PERSPECTIVE_CORRECTION_HINT attribute at all
GLES20.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_FASTEST);
//next I don't really know if I need them all:
GLES20.glClearColor(0, 0, 0, 1);
GLES20.glClearDepthf(1.0f);
GLES20.glDisable(GLES20.GL_CULL_FACE);// No culling of back faces
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_TEXTURE_2D);
GLES20.glDisable(GLES20.GL_DITHER);
GLES20.glDisable(GL10.GL_LIGHTING);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// for transparent pixels
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
//Load shaders (1 vertex and 1 fragment shader, which I hope is enough)
iProgId = Utils.LoadProgram(vertexShaderCode, fragmentShaderCode);
GLES20.glUseProgram(iProgId);
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(iProgId, "vPosition");
// get handle to textures shader's a_TexCoordinate member
mTextureCoordinateHandle = GLES20.glGetAttribLocation(iProgId, "a_TexCoordinate");
// get handle to transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(iProgId, "uMVPMatrix");
//Now in here not sure: some people enable arrays in every draw frame, I do it once:
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Enable a handle to the texture vertices
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);
}
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
GLES20.glViewport(0, 0, width, height);
Matrix.frustumM(mProjMatrix, 0, ratio, -ratio, 1, -1, 1, 10000);
//.... and other
}
@Override
public void onDrawFrame(GL10 gl) {
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
//set camera position
Matrix.setLookAtM(...);
Matrix.multiplyMM(...);
//begin drawing:
for(Sprite spr : Sprite_list){
spr.draw(){
//whats happening in draw:
//First setting the location of the sprite:
Matrix.setIdentityM(...x & y...);
Matrix.translateM(...);
Matrix.setIdentityM(...);
Matrix.multiplyMM(...);
//Some developers enable vertex attrib arrays here and then disable them at the end of this drawing method. But I enable them in onSurfaceCreated and don't disable them; maybe it's faster this way, not sure.
// Prepare the triangle coordinate data
GLES20.glVertexAttribPointer(GLRenderer.mPositionHandle, DIMENSION, GLES20.GL_FLOAT, false, vertexStride, vertexBuffer);
// Prepare the texture coordinate data
GLES20.glVertexAttribPointer(GLRenderer.mTextureCoordinateHandle, DIMENSION, GLES20.GL_FLOAT, false, vertexStride, textureBuffer);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textID);
GLES20.glUniformMatrix4fv(GLRenderer.mMVPMatrixHandle, 1, false, GLRenderer.mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
}
}
And everything works, for now, but I'm really not sure whether I'm making any mistakes here. Can someone elaborate?
From what I can see, the code itself does not necessarily do anything incorrect.
Apart from GLES20.glDisable(GL10.GL_LIGHTING);, that is, since lighting is not a feature available in ES 2.
The code is quite limited in what it will be able to do, though. The reason is that you're setting most of the render state in onSurfaceCreated, not at the point where you actually need it or need to change it.
For example:
//Now in here not sure: some people enable arrays in every draw frame, I do it once:
// Enable a handle to the triangle vertices
GLES20.glEnableVertexAttribArray(mPositionHandle);
// Enable a handle to the texture vertices
GLES20.glEnableVertexAttribArray(mTextureCoordinateHandle);
It's perfectly fine to enable the attribute arrays in just one place; however, if you wanted to use a different shader program for a particular piece of geometry in your scene, this would become quite problematic.
Consider this: if you want to change program, you potentially need to enable/disable the attributes specific to that program, and you also need to bind the program (glUseProgram).
That is why, as you noted yourself in the code comment, others normally enable/disable these during rendering.
And it's not just attribute streams but all kinds of render state: changing program, enabling culling, blending and so forth.
That said, one shouldn't go overboard and re-set all of this state before every draw call, as state changes are expensive.
Instead, try to batch together all the draw calls that use the same render state and resources (textures, programs and so forth) to minimize the number of state changes.
So in your render loop you would render the sorted scene of renderable objects, setting only the state that needs to change or has been invalidated, roughly as in the sketch below.
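As a loose illustration (the programA/programB handles, the per-program sprite lists and the attribute names are placeholders, not your actual fields), the loop would switch state once per batch rather than per draw call:
GLES20.glUseProgram(programA);
int posA = GLES20.glGetAttribLocation(programA, "vPosition");
int texA = GLES20.glGetAttribLocation(programA, "a_TexCoordinate");
GLES20.glEnableVertexAttribArray(posA);
GLES20.glEnableVertexAttribArray(texA);
for (Sprite spr : spritesUsingProgramA) {
    spr.draw();  // only per-sprite uniforms/textures change inside the batch
}
GLES20.glDisableVertexAttribArray(posA);
GLES20.glDisableVertexAttribArray(texA);

// The expensive state change happens once, between batches:
GLES20.glUseProgram(programB);
// ...enable programB's attributes, draw its sorted sprites, disable again...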

Low FPS with OpenGL on Android

I'm posting because I'm trying to use OpenGL on Android in order to make a 2D game :)
Here is my way of working:
I have a class GlRenderer
public class GlRenderer implements Renderer
In this class, on onDrawFrame I do
GameRender() and GameDisplay()
And on gameDisplay() I have:
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// Reset the Modelview Matrix
gl.glMatrixMode(GL10.GL_PROJECTION); //Select The Projection Matrix
gl.glLoadIdentity(); //Reset The Projection Matrix
// Point to our buffers
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Set the face rotation
gl.glFrontFace(GL10.GL_CW);
for(Sprites...)
{
sprite.draw(gl, att.getX(), att.getY());
}
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
And in the draw method of sprite I have:
_vertices[0] = x;
_vertices[1] = y;
_vertices[3] = x;
_vertices[4] = y + height;
_vertices[6] = x + width;
_vertices[7] = y;
_vertices[9] = x + width;
_vertices[10] = y + height;
if(vertexBuffer != null)
{
vertexBuffer.clear();
}
// fill the vertexBuffer with the vertices
vertexBuffer.put(_vertices);
// set the cursor position to the beginning of the buffer
vertexBuffer.position(0);
// bind the previously generated texture
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
// Point to our vertex buffer
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer.mByteBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer.mByteBuffer);
// Draw the vertices as triangle strip
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, _vertices.length / 3);
My problem is that I have a low frame rate: even at 30 FPS I lose some frames sometimes with only 1 sprite (and it is the same with 50).
Am I doing something wrong? How can I improve FPS?
In general, you should not be changing your vertex buffer for every sprite drawn. And by "in general", I pretty much mean "never," unless you're making a particle system. And even then, you would use proper streaming techniques, not write a quad at a time.
For each sprite, you have a pre-built quad. To render it, you use shader uniforms to transform the sprite from a neutral position to the actual position you want to see it on screen.
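A minimal sketch of that approach, assuming an ES 2.0 setup (the uOffset/uScale uniforms, the unit quad and the handle names are illustrative, not from the question's code):
// Vertex shader excerpt: the quad geometry never changes, only the uniforms do.
//   attribute vec2 aPosition;
//   uniform vec2 uOffset;
//   uniform vec2 uScale;
//   ...
//   gl_Position = uMVPMatrix * vec4(aPosition * uScale + uOffset, 0.0, 1.0);

// Built once at startup: a unit quad shared by every sprite.
float[] unitQuad = {
        0f, 0f,   // bottom left
        1f, 0f,   // bottom right
        0f, 1f,   // top left
        1f, 1f }; // top right
FloatBuffer quadBuf = ByteBuffer.allocateDirect(unitQuad.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
quadBuf.put(unitQuad).position(0);

// Per sprite, per frame: no vertex data is rewritten, only two uniforms.
GLES20.glUniform2f(uOffsetLoc, spriteX, spriteY);
GLES20.glUniform2f(uScaleLoc, spriteWidth, spriteHeight);
GLES20.glVertexAttribPointer(aPositionLoc, 2, GLES20.GL_FLOAT, false, 0, quadBuf);
GLES20.glEnableVertexAttribArray(aPositionLoc);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);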

OpenGL ES 1.1 strange lighting problems

I am examining an interesting problem I'm facing with OpenGL lighting on Android. I'm working on a 3D viewer where you can add and manipulate 3D objects. You can also set a light with different attributes. The problem I was facing with my viewer was that the highlight on the 3D objects from the light (it is a point light) behaved strangely. If the light source is at the exact same point as the camera, the highlight moves in the opposite direction to what you would expect. (So if you move the object to the left, the highlight moves to the left edge of the object as well, instead of the right, which is what I was expecting.)
So to narrow the problem down further, I've created a small sample application that only renders a square, and then I rotate that square around the camera position (the origin), which is also where the light is placed. This should result in all squares facing the camera directly, so that they would be completely highlighted. The result, though, looked like this:
Can it be that these artifacts appear because of the distortion you get on the border due to the projection?
In the first image the distance between the sphere and the camera is about 20 units and the size of the sphere is about 2. If I move the light closer to the object the highlight looks a lot better, in the way I'm expecting it.
In the second image the radius in which the squares are located is 25 units.
I'm using OpenGL ES 1.1 (since I was struggling to get it to work with shaders in ES 2.0) on Android 3.1
Here is some of the code I'm using:
public void onDrawFrame(GL10 gl) {
// Setting the camera
GLU.gluLookAt(gl, 0, 0, 0, 0f, 0f, -1f, 0f, 1.0f, 0.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
for (int i = 0; i < 72; i++) {
gl.glPushMatrix();
gl.glRotatef(5f * i, 0, 1, 0);
gl.glTranslatef(0, 0, -25);
draw(gl);
gl.glPopMatrix();
}
}
public void draw(GL10 gl) {
setMaterial(gl);
gl.glEnable(GL10.GL_NORMALIZE);
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glFrontFace(GL10.GL_CCW);
// Enable the vertex and normal state
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mVertexBuffer);
gl.glNormalPointer(GL10.GL_FLOAT, 0, mNormalBuffer);
gl.glDrawElements(GL10.GL_TRIANGLES, mIndexBuffer.capacity(), GL10.GL_UNSIGNED_SHORT, mIndexBuffer);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_NORMAL_ARRAY);
}
// Setting the light
private void drawLights(GL10 gl) {
// Point Light
float[] position = { 0, 0, 0, 1 };
float[] diffuse = { .6f, .6f, .6f, 1f };
float[] specular = { 1, 1, 1, 1 };
float[] ambient = { .2f, .2f, .2f, 1 };
gl.glEnable(GL10.GL_LIGHTING);
gl.glEnable(GL10.GL_LIGHT0);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glLightfv(GL10.GL_LIGHT0, GL_POSITION, position, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL_DIFFUSE, diffuse, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL_AMBIENT, ambient, 0);
gl.glLightfv(GL10.GL_LIGHT0, GL_SPECULAR, specular, 0);
}
private void setMaterial(GL10 gl) {
float shininess = 30;
float[] ambient = { 0, 0, .3f, 1 };
float[] diffuse = { 0, 0, .7f, 1 };
float[] specular = { 1, 1, 1, 1 };
gl.glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, diffuse, 0);
gl.glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, ambient, 0);
gl.glMaterialfv(GL_FRONT_AND_BACK, GL_SPECULAR, specular, 0);
gl.glMaterialf(GL_FRONT_AND_BACK, GL_SHININESS, shininess);
}
I'm setting the light once at the beginning, when the activity is started (in onSurfaceCreated), and the material every time I draw a square.
The effect in your second example (with the squares) is rather due to the default non-local viewer that OpenGL uses. By default the eye-space view vector (the vector from the vertex to the camera, used for the specular highlight computation) is just taken to be the (0, 0, 1) vector, instead of the normalized vertex position. This approximation is only correct if the vertex is in the middle of the screen, but it gets more and more incorrect the farther you move toward the boundary of the screen.
To change this and let OpenGL use the real vector from the vertex to the camera, just use the glLightModel function, especially
glLightModeli(GL_LIGHT_MODEL_LOCAL_VIEWER, GL_TRUE);
I'm not sure if this is also the cause for your first problem (with the sphere), but maybe, just try it.
EDIT: It seems you cannot use GL_LIGHT_MODEL_LOCAL_VIEWER in OpenGL ES. In this case there is no way around this problem, except switching to OpenGL ES 2.0 and doing all lighting computations yourself, of course.
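To give an idea of what that would involve, here is a rough ES 2.0 fragment shader sketch (written as a Java string in the style of the other snippets here; all uniform/varying names are made up) that uses the real per-fragment view vector instead of the fixed (0, 0, 1) approximation:
private static final String FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform vec3 uLightPosEye;   // light position in eye space\n" +
        "uniform float uShininess;\n" +
        "varying vec3 vPosEye;        // interpolated eye-space position\n" +
        "varying vec3 vNormalEye;     // interpolated eye-space normal\n" +
        "void main() {\n" +
        "    vec3 n = normalize(vNormalEye);\n" +
        "    vec3 l = normalize(uLightPosEye - vPosEye);\n" +
        "    vec3 v = normalize(-vPosEye);   // true view vector, not (0,0,1)\n" +
        "    vec3 h = normalize(l + v);      // half vector for the specular term\n" +
        "    float diff = max(dot(n, l), 0.0);\n" +
        "    float spec = pow(max(dot(n, h), 0.0), uShininess);\n" +
        "    vec3 color = vec3(0.0, 0.0, 0.7) * diff + vec3(1.0) * spec;\n" +
        "    gl_FragColor = vec4(color, 1.0);\n" +
        "}\n";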
Your light is probably moving when you're moving your object.
Take a look at this answer http://www.opengl.org/resources/faq/technical/lights.htm#ligh0050
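The gist of that FAQ entry, adapted loosely to the code above (a sketch, not the poster's code): GL_POSITION is transformed by the modelview matrix in effect at the moment glLightfv is called, so when you set it determines what the light is attached to.
float[] lightPos = { 0, 0, 0, 1 };
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();

// Option A: light fixed relative to the camera -- set it while the
// modelview matrix is still the identity.
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPos, 0);

// Option B: light fixed in the world -- apply the camera transform first,
// then set the position, then apply the per-object transforms and draw.
GLU.gluLookAt(gl, 0, 0, 0, 0f, 0f, -1f, 0f, 1f, 0f);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPos, 0);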

Increasing drawing performance with a hexagon grid

I'm working on my very first OpenGL game, inspired by the game "Greed Corp" on the PlayStation Network. It's a turn-based strategy game that is based on a hex grid. Each hexagon tile has its own height and texture.
I'm currently drawing a hexagon based on some examples and tutorials I've read. Here's my HexTile class:
public class HexTile
{
private float height;
private int[] textures = new int[1];
private float vertices[] = { 0.0f, 0.0f, 0.0f, //center
0.0f, 1.0f, 0.0f, // top
-1.0f, 0.5f, 0.0f, // left top
-1.0f, -0.5f, 0.0f, // left bottom
0.0f, -1.0f, 0.0f, // bottom
1.0f, -0.5f, 0.0f, // right bottom
1.0f, 0.5f, 0.0f, // right top
};
private short[] indices = { 0, 1, 2, 3, 4, 5, 6, 1};
//private float texture[] = { };
private FloatBuffer vertexBuffer;
private ShortBuffer indexBuffer;
//private FloatBuffer textureBuffer;
public HexTile()
{
ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
vertexBuffer = vbb.asFloatBuffer();
vertexBuffer.put(vertices);
vertexBuffer.position(0);
ByteBuffer ibb = ByteBuffer.allocateDirect(indices.length * 2);
ibb.order(ByteOrder.nativeOrder());
indexBuffer = ibb.asShortBuffer();
indexBuffer.put(indices);
indexBuffer.position(0);
/*ByteBuffer tbb = ByteBuffer.allocateDirect(texture.length * 4);
tbb.order(ByteOrder.nativeOrder());
textureBuffer = tbb.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);*/
}
public void setHeight(float h)
{
height = h;
}
public float getHeight()
{
return height;
}
public void draw(GL10 gl)
{
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
//gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawElements(GL10.GL_TRIANGLE_FAN, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer);
}
public void loadGLTexture(GL10 gl, Context context)
{
textures[0] = -1;
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.hex);
while(textures[0] <= 0)
gl.glGenTextures(1, textures, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
}
Every frame I'm looping through all visible tiles to draw them, that would be 11 * 9 tiles MAX. This however drops my framerate to 38, and that is without even drawing textures on them, just the flat hexagons.
Now I'm trying to figure out how to increase performance. I figured drawing the whole grid at once could be faster, but I have no idea how to do that, since each tile can have a different height, and will most likely have a different texture than a neighboring tile.
I'd really appreciate some help on this, because I'd like to get started on the actual game ^.^
Assuming your hex grid is static you can just spin over all your hexagons once, generate their geometry, and append everything to one (or more, if you have more than 2^16 vertices) large VBO that you can draw in one go.
As far as textures go you may be able to use a texture atlas.
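A rough sketch of that batching, using the GL11 VBO entry points (the tiles list, the appendVertices helper and VERTS_PER_HEX are assumptions for illustration; each tile's triangle fan is expanded into plain triangles so the whole grid can live in one buffer):
// Build one interleaved x,y,z,u,v array for the whole static grid, once.
float[] grid = new float[tiles.size() * VERTS_PER_HEX * 5];
int offset = 0;
for (HexTile tile : tiles) {
    // Assumed helper: writes the tile's triangles at its height, with UVs
    // pointing into the shared texture atlas.
    offset = tile.appendVertices(grid, offset);
}
FloatBuffer gridBuf = ByteBuffer.allocateDirect(grid.length * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer();
gridBuf.put(grid).position(0);

// Upload once to a VBO (requires a GL11 context).
GL11 gl11 = (GL11) gl;
int[] vbo = new int[1];
gl11.glGenBuffers(1, vbo, 0);
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, vbo[0]);
gl11.glBufferData(GL11.GL_ARRAY_BUFFER, grid.length * 4, gridBuf, GL11.GL_STATIC_DRAW);

// Per frame: bind the atlas texture and the VBO, then one draw call.
gl11.glBindBuffer(GL11.GL_ARRAY_BUFFER, vbo[0]);
gl11.glVertexPointer(3, GL10.GL_FLOAT, 5 * 4, 0);
gl11.glTexCoordPointer(2, GL10.GL_FLOAT, 5 * 4, 3 * 4);
gl11.glDrawArrays(GL10.GL_TRIANGLES, 0, tiles.size() * VERTS_PER_HEX);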
I'm currently learning OpenGL as well; I've produced one Android OpenGL application called 'Cloud Stream', where I found similar performance issues.
As a general answer to performance concerns, there are a few things that can help. The hardware vertex pipeline is apparently more efficient when passed larger numbers of vertices at once. Calling glVertexPointer for each hex tile is not as efficient as calling it once with the vertices of all the tiles.
This makes things harder to code, as you essentially draw all your tiles at once, but it does seem to speed things up a bit. In my application, all of the clouds are drawn in one call.
Another avenue to try would be to store the vertex positions in a VBO, which I found quite tricky, at least when trying to cater for 2.1 users. These days things might be easier, I'm not sure. The idea is to keep the vertex array for your tile in video memory, and you get back a handle the same way you do with your textures. As you can imagine, not sending your vertex array up each frame speeds things up a little for each tile draw. Even if things aren't static it's a good approach, since I doubt things are changing every single frame.
Another suggestion I came across online was to use shorts instead of floats for your vertices, and then scale the finished rendering to get your desired size. This lowers the size of your vertex data and speeds things up a little... not a huge amount, but still worth trying I would say.
One last thing I would like to add: if you end up using any transparency, be aware that you must paint back to front for it to work, which has a performance impact as well. If you draw front to back, the rendering engine automatically knows not to draw fragments that are not visible; drawing back to front means everything is drawn. Keep this in mind and always try to draw front to back when possible.
Good luck with your game, I'd love to know how you got on... I'm just starting my game and I'm quite excited to start. If you haven't already come across this... I think it's worth a read. http://www.codeproject.com/KB/graphics/hexagonal_part1.aspx

Problems drawing in OpenGL ES 2D Orthographic (Ortho) mode

I've been beating my head against the desk trying to figure this out for days now, and after scouring Stack Overflow and the web, I haven't found any examples that have worked for me. I've finally got code that seems close, so maybe you guys (and gals?) can help me figure this out.
My first problem is that I'm trying to implement a motion blur by taking a screen grab as a texture, then drawing the texture over the next frame with transparency -- or use more frames for more blur. (If anyone's interested, this is the guide I followed: http://www.codeproject.com/KB/openGL/MotionBlur.aspx)
I've got the screen saving to a texture working fine. The issue I'm having is drawing in ortho mode on top of the screen. After much head banging, I finally got a basic square drawing, but my lack of OpenGL ES understanding and of an easy-to-follow example is holding me back now. I need to take the texture I saved and draw it onto the square I drew. Nothing I've tried seems to work.
Also, my second problem is drawing more complex 3D models in ortho mode. I can't seem to get any models to draw. I'm using the (slightly customized) min3d framework (http://code.google.com/p/min3d/), and I'm trying to draw Object3d's in ortho mode just like I draw them in perspective mode. As I understand it, they should draw the same, just without depth. Yet I don't seem to see them at all.
Here's the code I'm working with. I've tried a ton of different things and this is the closest I've gotten (actually drawing something on the screen that can be seen). I still have no idea how to get a proper 3d model drawing in the ortho view. I'm sure I'm doing something horribly wrong and probably completely misunderstanding some basic aspects of OpenGL drawing. Let me know if there's any other code I need to post.
// Gets called once, before all drawing occurs
//
private void reset()
{
// Reset TextureManager
Shared.textureManager().reset();
// Do OpenGL settings which we are using as defaults, or which we will not be changing on-draw
// Explicit depth settings
_gl.glEnable(GL10.GL_DEPTH_TEST);
_gl.glClearDepthf(1.0f);
_gl.glDepthFunc(GL10.GL_LESS);
_gl.glDepthRangef(0,1f);
_gl.glDepthMask(true);
// Alpha enabled
_gl.glEnable(GL10.GL_BLEND);
_gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
// "Transparency is best implemented using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
// with primitives sorted from farthest to nearest."
// Texture
_gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST); // (OpenGL default is GL_NEAREST_MIPMAP)
_gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR); // (is OpenGL default)
// CCW frontfaces only, by default
_gl.glFrontFace(GL10.GL_CCW);
_gl.glCullFace(GL10.GL_BACK);
_gl.glEnable(GL10.GL_CULL_FACE);
// Disable lights by default
for (int i = GL10.GL_LIGHT0; i < GL10.GL_LIGHT0 + NUM_GLLIGHTS; i++) {
_gl.glDisable(i);
}
//
// Scene object init only happens here, when we get GL for the first time
//
}
// Called every frame
//
protected void drawScene()
{
if(_scene.fogEnabled() == true) {
_gl.glFogf(GL10.GL_FOG_MODE, _scene.fogType().glValue());
_gl.glFogf(GL10.GL_FOG_START, _scene.fogNear());
_gl.glFogf(GL10.GL_FOG_END, _scene.fogFar());
_gl.glFogfv(GL10.GL_FOG_COLOR, _scene.fogColor().toFloatBuffer() );
_gl.glEnable(GL10.GL_FOG);
} else {
_gl.glDisable(GL10.GL_FOG);
}
// Sync all of the object drawing so that updates in the mover
// thread can be synced if necessary
synchronized(Renderer.SYNC)
{
for (int i = 0; i < _scene.children().size(); i++)
{
Object3d o = _scene.children().get(i);
if(o.animationEnabled())
{
((AnimationObject3d)o).update();
}
drawObject(o);
}
}
//
//
//
// Draw the blur
// Set Up An Ortho View
_switchToOrtho();
_drawMotionBlur();
// Switch back to the previous view
_switchToPerspective();
_saveScreenToTexture("blur", 512);
}
private void _switchToOrtho()
{
// Set Up An Ortho View
_gl.glDisable(GL10.GL_DEPTH_TEST);
_gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
_gl.glPushMatrix(); // Push The Matrix
_gl.glLoadIdentity(); // Reset The Matrix
_gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
//_gl.glOrthof(0f, 480f, 0f, 800f, -100f, 100f);
_gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix
_gl.glPushMatrix(); // Push The Matrix
_gl.glLoadIdentity(); // Reset The Matrix
}
private void _switchToPerspective()
{
// Switch back to the previous view
_gl.glEnable(GL10.GL_DEPTH_TEST);
_gl.glMatrixMode(GL10.GL_PROJECTION);
_gl.glPopMatrix();
_gl.glMatrixMode(GL10.GL_MODELVIEW);
_gl.glPopMatrix(); // Pop The Matrix
}
private void _saveScreenToTexture(String $textureId, int $size)
{
// Save the screen as a texture
_gl.glViewport(0, 0, $size, $size);
_gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureManager.getGlTextureId($textureId));
_gl.glCopyTexImage2D(GL10.GL_TEXTURE_2D,0,GL10.GL_RGB,0,0,512,512,0);
_gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
_gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
_gl.glViewport(0, 0, 480, 800);
}
private void _drawMotionBlur()
{
// Vertices
float squareVertices[] = {
-3f, 0f, // Bottom Left
475f, 0f, // Bottom Right
475f, 800f, // Top Right
-3f, 800f // Top Left
};
ByteBuffer vbb = ByteBuffer.allocateDirect(squareVertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
FloatBuffer vertexBuffer = vbb.asFloatBuffer();
vertexBuffer.put(squareVertices);
vertexBuffer.position(0);
//
//
// Textures
FloatBuffer textureBuffer; // buffer holding the texture coordinates
float texture[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(squareVertices.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
//
//
//
_gl.glLineWidth(3.0f);
_gl.glTranslatef(5.0f, 0.0f, 0.0f);
_gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
_gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
_gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, 4);
//_gl.glTranslatef(100.0f, 0.0f, 0.0f);
//_gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, 4);
//_gl.glTranslatef(100.0f, 0.0f, 0.0f);
//_gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, 4);
_gl.glEnable(GL10.GL_TEXTURE_2D);
_gl.glEnable(GL10.GL_BLEND);
_gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
_gl.glLoadIdentity();
//
//
//
_gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureManager.getGlTextureId("blur"));
_gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
_gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
//
//
//
_gl.glDisable(GL10.GL_BLEND);
_gl.glDisable(GL10.GL_TEXTURE_2D);
_gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
}
EDIT: Here's a simpler example, it's all in the one function and doesn't include any of the saving the screen to a texture stuff. This is just drawing a 3d scene, switching to Ortho, drawing a square with a texture, then switching back to perspective.
// Called every frame
//
protected void drawScene()
{
// Draw the 3d models in perspective mode
// This part works (uses min3d) and draws a 3d scene
//
for (int i = 0; i < _scene.children().size(); i++)
{
Object3d o = _scene.children().get(i);
if(o.animationEnabled())
{
((AnimationObject3d)o).update();
}
drawObject(o);
}
// Set Up The Ortho View to draw a square with a texture
// over the 3d scene
//
_gl.glDisable(GL10.GL_DEPTH_TEST);
_gl.glMatrixMode(GL10.GL_PROJECTION); // Select Projection
_gl.glPushMatrix(); // Push The Matrix
_gl.glLoadIdentity(); // Reset The Matrix
_gl.glOrthof(0f, 480f, 0f, 800f, -1f, 1f);
_gl.glMatrixMode(GL10.GL_MODELVIEW); // Select Modelview Matrix
_gl.glPushMatrix(); // Push The Matrix
_gl.glLoadIdentity(); // Reset The Matrix
// Draw A Square With A Texture
// (Assume that the texture "blur" is already created properly --
// it is as I can use it when drawing my 3d scene if I apply it
// to one of the min3d objects)
//
float squareVertices[] = {
-3f, 0f, // Bottom Left
475f, 0f, // Bottom Right
475f, 800f, // Top Right
-3f, 800f // Top Left
};
ByteBuffer vbb = ByteBuffer.allocateDirect(squareVertices.length * 4);
vbb.order(ByteOrder.nativeOrder());
FloatBuffer vertexBuffer = vbb.asFloatBuffer();
vertexBuffer.put(squareVertices);
vertexBuffer.position(0);
FloatBuffer textureBuffer; // buffer holding the texture coordinates
float texture[] = {
// Mapping coordinates for the vertices
0.0f, 1.0f, // top left (V2)
0.0f, 0.0f, // bottom left (V1)
1.0f, 1.0f, // top right (V4)
1.0f, 0.0f // bottom right (V3)
};
ByteBuffer byteBuffer = ByteBuffer.allocateDirect(squareVertices.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
byteBuffer = ByteBuffer.allocateDirect(texture.length * 4);
byteBuffer.order(ByteOrder.nativeOrder());
textureBuffer = byteBuffer.asFloatBuffer();
textureBuffer.put(texture);
textureBuffer.position(0);
_gl.glLineWidth(3.0f);
_gl.glTranslatef(5.0f, 0.0f, 0.0f);
_gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
_gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
_gl.glDrawArrays(GL10.GL_LINE_LOOP, 0, 4);
_gl.glEnable(GL10.GL_TEXTURE_2D);
_gl.glEnable(GL10.GL_BLEND);
_gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
_gl.glLoadIdentity();
_gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureManager.getGlTextureId("blur"));
_gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
_gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
_gl.glDisable(GL10.GL_BLEND);
_gl.glDisable(GL10.GL_TEXTURE_2D);
_gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
// Switch Back To The Perspective Mode
//
_gl.glEnable(GL10.GL_DEPTH_TEST);
_gl.glMatrixMode(GL10.GL_PROJECTION);
_gl.glPopMatrix();
_gl.glMatrixMode(GL10.GL_MODELVIEW);
_gl.glPopMatrix(); // Pop The Matrix
}
EDIT2: Thanks to Christian's answer, I removed the second glVertexPointer and _gl.glBlendFunc (GL10.GL_ONE, GL10.GL_ONE); (I deleted them from the sample code above as well so it wouldn't confuse the question). I now have a texture rendering, but only in one of the triangles that make up the square. So I'm seeing a triangle in the left portion of the screen that has the texture applied. Why is it not being applied to both halves of the square? I think it's because I have only one of these calls: gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4); so I'm literally only drawing one triangle.
First, you set the blend function to (GL_ONE, GL_ONE), which will just add the blur texture to the framebuffer and make the whole scene overbright. You probably want to use (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), but then you have to make sure your blur texture has the correct alpha, either by configuring the texture environment to use a constant value for the alpha (instead of the texture's), or by using GL_MODULATE with a (1, 1, 1, 0.5)-coloured square. Alternatively, use a fragment shader.
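For instance, the GL_MODULATE route could look roughly like this (a sketch only; the 0.5f alpha is just an illustrative blur strength):
_gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
_gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
_gl.glColor4f(1f, 1f, 1f, 0.5f);   // constant alpha modulates the blur texture
// ...then bind the "blur" texture and draw the quad as before...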
Second, you specify a size 3 in the second call to glVertexPointer, but your data are 2d vectors (the first call is right).
glOrtho is not necessarily 2D; it's just a camera without perspective distortion (farther objects don't get smaller). The parameters to glOrtho specify your screen plane size in view coordinates. Thus if your scene covers the world in the unit cube, an ortho of 480x800 is just too large (this is no problem if you draw objects other than the perspective ones, such as your square or UI elements, but when you want to draw the same 3D objects the scales have to match). Another thing is that in ortho the near and far distances still matter; everything that falls outside is clipped away. So if your camera is at (0,0,0) and you view along -z with a glOrtho of (0,480,0,800,-1,1), you will only see those objects that intersect the (0,0,-1)-(480,800,1) box.
So keep in mind that glOrtho and glFrustum (or gluPerspective) all define a 3D viewing volume. In ortho it's a box, and in frustum it's, you guessed it, a frustum (a capped pyramid). Consult some more introductory texts on transformations and viewing if this was not clear enough.
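As a quick illustrative example (the numbers are made up): if your perspective scene sits in a roughly unit-sized region in front of the camera, an ortho volume on a similar scale keeps those same objects inside the clip box, whereas the 480x800 one only suits screen-space quads and UI.
// Ortho volume on the same scale as the 3d scene (illustrative values only).
float ratio = 480f / 800f;
_gl.glOrthof(-ratio, ratio, -1f, 1f, 0.1f, 100f);
// By contrast, glOrthof(0, 480, 0, 800, -1, 1) only shows what intersects
// that 480x800x2 box -- fine for the full-screen blur quad, not for the models.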
