GLES20 calls in onDrawFrame cause a long delay - Android

I have written a game in which, when onDrawFrame() is called, I first update the game state (the game logic and the buffers used for drawing) and then do the actual drawing. On the Moto G and Nexus 7 everything runs smoothly and each onDrawFrame() call takes only 1-5 ms. However, on the Samsung Galaxy S3, 90% of the time the onDrawFrame() call takes as long as 30-50 ms.
Investigating further, I found that the problem lies entirely in the first render method, which I attach below:
(Edit: the stall is now on glClear(), having removed the unnecessary calls to get the handles; please refer to the comments.)
public void render(float[] m, FloatBuffer vertex, FloatBuffer texture, ShortBuffer indices, int TextureNumber) {
    GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertex);
    GLES20.glEnableVertexAttribArray(mPositionHandle);
    GLES20.glVertexAttribPointer(mTexCoordLoc, 2, GLES20.GL_FLOAT, false, 0, texture);
    GLES20.glEnableVertexAttribArray(mTexCoordLoc);
    GLES20.glUniformMatrix4fv(mtrxhandle, 1, false, m, 0);
    int mSamplerLoc = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "s_texture");
    GLES20.glUniform1i(mSamplerLoc, TextureNumber);
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.capacity(), GLES20.GL_UNSIGNED_SHORT, indices);
    // Disable vertex arrays used
    GLES20.glDisableVertexAttribArray(mPositionHandle);
    GLES20.glDisableVertexAttribArray(mTexCoordLoc);
}
The above method is called 5 times in each onDrawFrame() to draw things from different texture atlases (the first 4 calls are actually the game's background, which is one long rectangle). Having logged the time each line of code takes, I found that the lag I am having on the S3 always resides in one of the lines below:
int mPositionHandle = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "vPosition");
or
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.capacity(), GLES20.GL_UNSIGNED_SHORT, indices);
The lag only occurs in the first call to render(); the subsequent calls take about 1-2 ms each. I have tried disabling the first render() call, but then the second call, which previously took only 1-2 ms, becomes the source of the lag, on the same lines.
Does anyone have an idea what is wrong in my code that the S3 cannot handle? It seems that the S3 has trouble starting GL calls at the beginning of each onDrawFrame(), but why this behavior occurs puzzles me. What can be done to decrease the "startup" delay?
Many thanks for your patience in reading.
Edited the code to apply Selvin's recommendation.
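For context, the edit moves the handle lookups out of render() so they run only once after the shader program is linked. A minimal sketch of that caching, using the attribute/uniform names from the code above where they are known and placeholder names ("a_texCoord", "uMVPMatrix") where they are not:
// Done once, e.g. in onSurfaceCreated() after the program has linked.
// mPositionHandle, mTexCoordLoc, mtrxhandle and mSamplerLoc are the fields used in render().
mPositionHandle = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "vPosition");
mTexCoordLoc = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "a_texCoord"); // placeholder name
mtrxhandle = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "uMVPMatrix"); // placeholder name
mSamplerLoc = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "s_texture");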

I have fixed the issue: when I moved all my PNG image files from /drawable to /drawable-nodpi, the game speed went back to normal on the S3.
I assume the density auto-scaling done by Android gave the raw bitmaps I supplied to OpenGL "bad" sizes, causing unnecessary texture filtering work on each onDrawFrame. That makes the frame miss vsync, and the long delay then shows up in glClear while waiting for the next vsync. Please correct me if I am wrong, I am still a newbie to graphics.
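For anyone who prefers to keep the images in /drawable, the density scaling can also be disabled at decode time. The snippet below is only a sketch of that idea; the class name, resource id and texture handle are placeholders, not names from the question.
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.opengl.GLES20;
import android.opengl.GLUtils;

public final class TextureLoader {
    // Decodes a drawable at its original pixel size (no density scaling)
    // and uploads it into the given GL texture.
    public static void loadUnscaled(Context context, int resId, int textureHandle) {
        BitmapFactory.Options opts = new BitmapFactory.Options();
        opts.inScaled = false; // keep the bitmap at the size it was authored at
        Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resId, opts);

        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        bitmap.recycle();
    }
}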

Related

Additive blending without glClear

I want to do additive blending on the camera preview's SurfaceTexture, which is bound to my OpenGL context.
I get a weirdly rendered texture when I enable blending (20 noisy squares get rendered). If I call GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); the preview is proper, but I lose my additive blending because I have cleared the buffer!
With glClear call
Without glClear call
I have no clue what the problem is; I am a newbie to OpenGL ES. Any suggestions?
I am putting the relevant code only; ask for any other part of the code if necessary.
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    // compile shader, link program, etc. omitted for brevity

    // enable additive blending
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);
    GLES20.glBlendEquation(GLES30.GL_MAX);
}
public void onDrawFrame(GL10 unused) {
    // Do not want to do this, but if I do, the preview is normal but I lose my blending
    //GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    surfaceTexture.updateTexImage();
    GLES20.glUseProgram(_onscreenShader);
    int th = GLES20.glGetUniformLocation(_onscreenShader, "sTexture");
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, _rawVideoTexture);
    GLES20.glUniform1i(th, 0);
    GLES20.glVertexAttribPointer(_onscreenPositionAttribute, 2, GLES20.GL_FLOAT, false, 4 * 2, pVertex);
    GLES20.glVertexAttribPointer(_onscreenUVAttribute, 2, GLES20.GL_FLOAT, false, 4 * 2, pTexCoord);
    GLES20.glEnableVertexAttribArray(_onscreenPositionAttribute);
    GLES20.glEnableVertexAttribArray(_onscreenUVAttribute);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
Most Android devices have a PowerVR, Mali, Adreno or Vivante GPU core, which are all tile-based (deferred) renderers. This architecture relies on a glClear at the start of each frame to tell the OpenGL ES driver when to clear its internal triangle-binning queues as well as the framebuffer. If you don't do the glClear, this caching design does not work properly and you will get weird results that differ from one GPU type to another. So you really must do the glClear.
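If the goal is to accumulate additively across frames while still clearing the default framebuffer every frame, one common approach (not part of the answer above, just a sketch with placeholder variable names) is to blend into an offscreen framebuffer texture that is cleared only once, and then draw that texture to the screen:
// One-time setup: a texture-backed FBO that survives from frame to frame.
int[] fbo = new int[1];
int[] tex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); // clear the accumulation target once

// Every frame: blend the new camera image into the FBO, then present the FBO.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
// ... draw the camera texture here with glBlendFunc(GL_ONE, GL_ONE) ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); // the per-frame clear the tiler needs
// ... draw tex[0] as a full-screen quad to the default framebuffer ...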

Starfield optimization in libgdx

I want to create a static starfield in libgdx.
My first way was: create a Decal and a DecalBatch over it.
When I draw the Decal I use a billboarding technique on the Decal:
star.decal.setRotation(camera.direction, camera.up);
Next, I wanted to animate the alphas on the decals, so I set them in a random way from time to time:
star.decal.setColor(1, 1, 1, 0.6f+((float) Math.random()*0.4f) );
It is working, but my FPS went down from 55 to 25 (because of my 500-1000 stars).
Can I do this with only one batch call somehow? Maybe a particle material with only a single vertex list, drawn in GL_POINT mode so it always faces the camera?
How can I do this in libgdx?
The batch is far more complex than what you need; on every frame it has to copy all the vertices of the sprites into another array and do calculations on them to find the scale, rotation, etc.
As you suspect, GL_POINT sprites will be much faster, and a mid-range device should be able to render something like 2000 points with different positions and colors at 60 fps.
Here is some old code of mine; it is in C and uses OpenGL ES 1.1, and there is probably a simpler way to do it in libgdx (see the sketch after the snippet).
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glEnable (GL_POINT_SPRITE_OES);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, TXTparticle);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(30);
glColorPointer(4, GL_FLOAT, 32, particlesC);//particlesC the vertices color
glVertexPointer(3, GL_FLOAT, 24, particlesV);//particlesV the vertices
glDrawArrays(GL_POINTS, 0, vertvitLenght/6);
glDisable( GL_POINT_SPRITE_OES );
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
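The snippet above is GLES 1.1 client-state code. In libgdx with GL ES 2.0 the rough equivalent would be a single static Mesh drawn as GL_POINTS with a small shader that writes gl_PointSize. This is only a sketch under those assumptions (attribute names, point size and vertex layout are made up), not tested code:
// One static mesh holding every star: position (x, y, z) + color (r, g, b, a).
Mesh starMesh = new Mesh(true, starCount, 0,
        new VertexAttribute(VertexAttributes.Usage.Position, 3, "a_position"),
        new VertexAttribute(VertexAttributes.Usage.ColorUnpacked, 4, "a_color"));
starMesh.setVertices(starVertices); // 7 floats per star, filled once

// The vertex shader must write gl_PointSize for point rendering in ES 2.0.
String vertexShader =
        "attribute vec3 a_position;\n" +
        "attribute vec4 a_color;\n" +
        "uniform mat4 u_projTrans;\n" +
        "varying vec4 v_color;\n" +
        "void main() {\n" +
        "  v_color = a_color;\n" +
        "  gl_PointSize = 3.0;\n" +
        "  gl_Position = u_projTrans * vec4(a_position, 1.0);\n" +
        "}\n";
String fragmentShader =
        "#ifdef GL_ES\nprecision mediump float;\n#endif\n" +
        "varying vec4 v_color;\n" +
        "void main() { gl_FragColor = v_color; }\n";
ShaderProgram shader = new ShaderProgram(vertexShader, fragmentShader);

// Per frame: a single draw call for the whole starfield.
shader.begin();
shader.setUniformMatrix("u_projTrans", camera.combined);
starMesh.render(shader, GL20.GL_POINTS);
shader.end();
To animate the alphas, update the color components in starVertices and call setVertices() again; it is still one draw call per frame.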

Using VBOs/IBOs in OpenGL ES 2.0 on Android

I am trying to create a simple test program on Android (API 10) using OpenGL ES 2.0 to draw a simple rectangle. I can get this to work with float buffers referencing the vertices directly, but I would rather do it with VBOs/IBOs. I have looked for countless hours trying to find a simple explanation (tutorial), but have yet to come across one. My code compiles and runs just fine, but nothing shows up on the screen other than the clear color.
Here are some code chunks to help explain how I have it set up right now.
Part of onSurfaceChanged():
int[] buffers = new int[2];
GLES20.glGenBuffers(2, buffers, 0);
rectVerts = buffers[0];
rectInds = buffers[1];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, rectVerts);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, (rectBuffer.limit()*4), rectBuffer, GLES20.GL_STATIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, rectInds);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, (rectIndices.limit()*4), rectIndices, GLES20.GL_STATIC_DRAW);
Part of onDrawFrame():
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, rectVerts);
GLES20.glEnableVertexAttribArray(0);
GLES20.glVertexAttribPointer(0, 3, GLES20.GL_FLOAT, false, 0, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, rectInds);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, 6, GLES20.GL_INT, 0);
I don't see anything immediately wrong, but here are some ideas you can touch on.
1) 'Compiling and running fine' is a useless metric for an OpenGL program. Errors are reported through actively calling glGetError and checking the compile and link status of the shaders with glGet(Shader|Program)iv. Do you check for errors anywhere?
2) You shouldn't be assuming that 0 is the correct index for vertices. It may work now but will likely break later if you change your shader. Get the correct index with glGetAttribLocation.
3) You're binding the verts buffer onDraw, but I don't see anything about the indices buffer. Is that always bound?
4) You could also try drawing your VBO with glDrawArrays to get the index buffer out of the equation, just to help debugging to see which part is wrong.
Otherwise what you have looks correct as far as I can tell in that small snippet. Maybe something else outside of it is going wrong.
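As an illustration of points 2 and 4, a debugging path might look roughly like the snippet below. The attribute name "vPosition", the program handle and the triangle-fan vertex order are assumptions for illustration, not taken from the question.
// Look up the attribute index instead of assuming it is 0.
int positionHandle = GLES20.glGetAttribLocation(shaderProgram, "vPosition");

// Draw straight from the VBO, skipping the index buffer entirely.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, rectVerts);
GLES20.glEnableVertexAttribArray(positionHandle);
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 0, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4); // assumes fan-ordered corners

// Check for errors after suspicious calls.
int err = GLES20.glGetError();
if (err != GLES20.GL_NO_ERROR) {
    Log.e("GL", "GL error: 0x" + Integer.toHexString(err));
}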

OpenGL Depth Buffer issue on Android

I am developing a 3D rendering engine for Android and have experienced some issues with the depth buffer. I am drawing some cubes, one big one and two small ones that fall on top of the bigger one. While rendering I can see that something with the depth buffer is obviously wrong, as seen in this screenshot:
This screenshot was taken on an HTC Hero (running Android 2.3.4) with OpenGL ES 1.1. The whole application is (still) targeted at OpenGL ES 1.1, and it looks the same on the emulator.
These are the calls in my onSurfaceCreated method in the renderer:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    Log.d(TAG, "onsurfacecreated method called");

    int[] depthbits = new int[1];
    gl.glGetIntegerv(GL_DEPTH_BITS, depthbits, 0);
    Log.d(TAG, "Depth Bits: " + depthbits[0]);

    gl.glDisable(GL_DITHER);
    gl.glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    gl.glClearColor(1, 1, 1, 1);
    gl.glClearDepthf(1f);
    gl.glEnable(GL_CULL_FACE);
    gl.glShadeModel(GL_SMOOTH);
    gl.glEnable(GL_DEPTH_TEST);

    gl.glMatrixMode(GL_PROJECTION);
    gl.glLoadMatrixf(
            GLUtil.matrix4fToFloat16(mFrustum.getProjectionMatrix()), 0);

    setLights(gl);
}
The GL call for the depth bits returns 16 on the device and 0 on the emulator. It would have made sense if it only failed on the emulator, since there is obviously no depth buffer present there. (I've tried setting the EGLConfigChooser to true so it would create a config with as close to a 16-bit depth buffer as possible, but that didn't help on the emulator; it wasn't necessary on the device.)
In my onDrawFrame method I make the following OpenGL Calls:
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl.glClearDepthf(1);
And then for each of the cubes:
gl.glEnableClientState(GL_VERTEX_ARRAY);
gl.glFrontFace(GL_CW);
gl.glVertexPointer(3, GL_FIXED, 0, mVertexBuffer);
// gl.glColorPointer(4, GL_FIXED, 0, mColorBuffer);
gl.glEnableClientState(GL_NORMAL_ARRAY);
gl.glNormalPointer(GL_FIXED, 0, mNormalBuffer);
// gl.glEnable(GL_TEXTURE_2D);
// gl.glTexCoordPointer(2, GL_FLOAT, 0, mTexCoordsBuffer);
gl.glDrawElements(GL_TRIANGLES, mIndexBuffer.capacity(),
GL_UNSIGNED_SHORT, mIndexBuffer);
gl.glDisableClientState(GL_NORMAL_ARRAY);
gl.glDisableClientState(GL_VERTEX_ARRAY);
What am I missing? If more code is needed just ask.
Thanks for any advice!
I got it to work correctly now. The problem was not OpenGL; it was (as Banthar mentioned) a problem with the projection matrix. I am managing the projection matrix myself, and the calculation of the final matrix was somehow corrupted (or at least not what OpenGL expects). I can't remember where I got the algorithm for my calculation, but once I changed it to the way OpenGL calculates the projection matrix (or directly called glFrustumf(...)) it worked fine.
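For reference, a minimal sketch of setting up the projection the way OpenGL ES 1.x expects; the aspect ratio, near and far values below are arbitrary placeholders, not values from the question.
// Either let GL build the frustum directly ...
float aspect = (float) viewWidth / viewHeight;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-aspect, aspect, -1f, 1f, 1f, 100f);

// ... or, when managing the matrix yourself, android.opengl.Matrix
// produces the same column-major layout that glLoadMatrixf expects.
float[] proj = new float[16];
Matrix.frustumM(proj, 0, -aspect, aspect, -1f, 1f, 1f, 100f);
gl.glLoadMatrixf(proj, 0);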
Try enabling:
glDepthFunc(GL_LEQUAL);
glDepthMask(true);
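For completeness, explicitly requesting a depth buffer on the GLSurfaceView (rather than the boolean setEGLConfigChooser(true) mentioned in the question) looks roughly like this; the channel sizes are typical values, not taken from the question.
// In the Activity, before setRenderer(): ask EGL for an RGBA8888 config with a 16-bit depth buffer.
GLSurfaceView view = new GLSurfaceView(this);
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0); // r, g, b, a, depth, stencil
view.setRenderer(renderer);

// In onSurfaceCreated(): make sure depth testing is actually enabled.
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glDepthMask(true);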

(Android, OpenGL ES) glTexSubImage2D hiccup the first time it is called after the first draw

I have implemented a tile-based layer in my game, but after the first time it is drawn, the very first time I try to update it (to add tiny decals like blood splats, craters, etc., things that are added to the map and that I don't want to draw separately every loop) I get a huge hiccup of ~3 seconds.
After some tests, I found the single call that hangs:
gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, x, y, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer);
The decal is really tiny (32 * 32 pixels), there is no OpenGL error after this call, and I really don't get it (I mean, the creation of the whole tile layer takes much less than 1 second and is done with a thousand glTexSubImage2D calls on a large blank texture; 3 seconds is pretty huge).
I already have a workaround (a "fake" update just before the splash screen goes away), but I really want to understand this odd (for me, at least) behaviour.
(I am sorry for my English, I hope it is understandable.)
METHOD:
public static void replaceSubImg(GL10 gl, int hwdId, int x, int y, int width, int height, Buffer pixelBuffer) {
    gl.glFinish();
    gl.glBindTexture(GL10.GL_TEXTURE_2D, hwdId);
    long time = System.currentTimeMillis();
    gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, x, y, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer);
    Log.d("testApps", "openglUtil,######################### subImg replaced in " + (System.currentTimeMillis() - time) + " ms");
}
LOG:
DEBUG/testApps(3809): openglUtil,######################### subImg replaced in 2811 ms
DEBUG/testApps(3809): openglUtil,######################### subImg replaced in 1 ms (this is the 2nd run)
After a ton of tests, I have isolated the whole thing, and it looks like just a bug in the OpenGL implementation of the latest CyanogenMod (or just Gingerbread, I haven't tested that yet):
CyanogenMod (Gingerbread, 7 RC2):
uploading the texture: ~1 ms
altering the texture: ~1 ms
drawing the texture: ~1 ms
any edit, even 1 px by 1 px, for every texture: a huge delay (depending on how big the texture is), in my case ~3000 ms
any further edit, regardless of size: back to ~1 ms
On Froyo (stock, FRF91):
uploading the texture: ~1 ms
altering the texture: ~1 ms
drawing the texture: ~1 ms
any edit, even 1 px by 1 px, for every texture: a small delay (depending on how big the texture is), in my case ~80 ms
any further edit, regardless of size: back to ~0 ms
There is still a little delay, but an understandable one this time.
I'm not sure if I'm understanding you correctly, but:
Do you call gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, x, y, width, height, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer); in every iteration? That call is pretty expensive and it shouldn't be made every frame (it's like creating the texture every time you draw).
You need to load your textures in the onSurfaceCreated callback, store your GL pointer and assign that pointer to a Texture object (an object that just holds the pointer into OpenGL memory).
After that you can just bind it using glBindTexture(GL10.GL_TEXTURE_2D, yourPointer);.
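A minimal sketch of that kind of one-time setup in onSurfaceCreated (the bitmap source and the textureId field are placeholders, not code from the question):
// Create the texture once and keep its id for later glBindTexture calls.
int[] ids = new int[1];
gl.glGenTextures(1, ids, 0);
textureId = ids[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); // full upload happens only here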
OpenGL instructions can be pipelined and may not be executed immediately. It seems likely that the glTexSubImage2D call itself is fast, but has to wait for some other operation to complete.
To figure this out, it might be instructive to call glFinish after some major operations, and see when that takes a long time. You may find that the actual time is spent elsewhere... for example, in a thousand glTexSubImage2D calls?
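A sketch of that kind of measurement: glFinish drains everything already queued before the timer starts, and a second glFinish makes sure the timed call has really completed (variable names are from the question's helper method):
gl.glFinish(); // flush out work queued by earlier calls
long start = System.nanoTime();
gl.glTexSubImage2D(GL10.GL_TEXTURE_2D, 0, x, y, width, height,
        GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, pixelBuffer);
gl.glFinish(); // block until the upload itself has completed
Log.d("testApps", "upload took " + ((System.nanoTime() - start) / 1000000) + " ms");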
