I'm writing an OpenGL ES 2.0 game in C++ running on multiple (mobile) platforms.
On iOS, Android, etc., basically everything runs fine, except on one device:
The computation for one frame in a specific scene takes about 8 ms on an HTC Desire.
Another device, a Samsung Galaxy Nexus, which is much newer, takes 18-20 ms.
I dug into the problem and found that it is related to enabling/disabling GL_DEPTH_TEST.
If I comment out all glEnable(GL_DEPTH_TEST)/glDisable(GL_DEPTH_TEST) calls, the time needed for one frame drops to 1-2 ms on the Nexus.
So I optimized the glEnable()/glDisable() calls to occur only when absolutely needed. I have 3D and 2D parts in my scene and therefore need to render the 2D parts without the depth test and the 3D parts with it.
Currently I enable the depth test, draw the 3D parts, disable the depth test, and draw the 2D parts (see the sketch below),
but the frame still takes 18-20 ms on the Nexus.
I also checked whether the depth buffer is cleared more often than needed, but it is only cleared once at the start of the frame.
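For reference, the per-frame sequence now looks roughly like this (a minimal sketch; drawScene3D()/drawScene2D() are placeholders for my actual draw calls):

#include <GLES2/gl2.h>

void drawScene3D();  // placeholder: 3D draw calls, provided elsewhere
void drawScene2D();  // placeholder: 2D draw calls, provided elsewhere

void RenderFrame() {
    // Clear color and depth exactly once, at the start of the frame.
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glEnable(GL_DEPTH_TEST);   // 3D pass rendered with depth testing
    drawScene3D();

    glDisable(GL_DEPTH_TEST);  // 2D overlay rendered without depth testing
    drawScene2D();
}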
Is it possible that toggling the depth test takes that much time?
Does anyone have other ideas about what to check?
UPDATE
I found out that the 3D object I render is somehow responsible for the slow computation.
If I remove the 3D object, performance is good,
but the same 3D object is used in another scene without causing such trouble.
And the weirdest thing: the Nexus runs Android 4.2, which has an option in the developer options to visualize the CPU load as an overlay. If I enable this setting and start the game, the frame time is 5-6 ms instead of 18-20 ms. How can this be related?
Related
I'm developing an app that renders the camera preview straight to a custom GLSurfaceView I have created.
It's pretty basic for someone who uses OpenGL on a regular basis.
The problem I'm experiencing is a low fps on some devices, and I came up with a solution: choosing which shader to apply at runtime. Now, I don't want to load an OpenGL program, measure the fps, and then change the program to a lighter shader, because that would create noticeable lag.
What I would like to do is somehow determine the GPU's strength before linking the GL program (right after creating the OpenGL context).
After some hours of investigation I pretty much understood that this won't be easy, mostly because rendering time depends on hidden device details such as GPU memory and an OpenGL pipeline that may be implemented differently on different devices.
As I see it I have only one or two options. The first: render a texture off-screen and measure its rendering time (sketched below); if it takes longer than 16 ms (the frame time recommended by Romain Guy in this post), I'll use the lighter shader.
Or: check the OS version and available RAM (though that is really inaccurate).
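The first option would look roughly like this (a sketch under my own assumptions; kProbeFrames and drawProbeQuad() are placeholders I made up, not a real API):

#include <GLES2/gl2.h>
#include <chrono>

constexpr int kProbeFrames = 10;  // hypothetical number of probe frames
void drawProbeQuad();             // placeholder: fullscreen quad using the heavy shader

double MeasureOffscreenMs(GLuint fbo, int width, int height) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);  // off-screen render target
    glViewport(0, 0, width, height);
    glFinish();  // drain pending work so the timing window is clean
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kProbeFrames; ++i)
        drawProbeQuad();
    glFinish();  // block until the GPU has actually finished
    auto t1 = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count() / kProbeFrames;
}

If the average comes out above roughly 16 ms, I'd fall back to the lighter shader.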
I really hope for a more elegant and accurate solution.
I am constantly getting 60 frames per second and doing no CPU-intensive operations in a simple application displaying a single rotating triangle. Unfortunately, when using OpenGL ES 2.0 on a Samsung Galaxy Express I am seeing slight hiccups in the rendering, as if some frames are not being drawn.
The funny thing is that with OpenGL ES 1.0 there is no such hiccup, so I know it does not have to do with the rotation method or the use of System.nanoTime() as a measure of the elapsed time between frames. I use the exact same method in both, and following my research here and the "Fix Your Timestep!" article I have smoothed the game loop (roughly as sketched below). I get the same results.
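The loop has roughly this shape (sketched in C++ for brevity; updateGame()/renderFrame() are placeholders for my actual methods):

#include <chrono>

void updateGame(double dt);  // placeholder: advance the simulation
void renderFrame();          // placeholder: issue the draw calls
bool running = true;

void GameLoop() {
    const double kStepSec = 1.0 / 60.0;  // fixed simulation step
    double accumulator = 0.0;
    auto previous = std::chrono::steady_clock::now();
    while (running) {
        auto now = std::chrono::steady_clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;
        while (accumulator >= kStepSec) {  // catch up in fixed steps
            updateGame(kStepSec);
            accumulator -= kStepSec;
        }
        renderFrame();  // render once per loop iteration
    }
}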
To make matters even funnier, the spinning-triangle example from Google's own developer.android.com introduction to OpenGL has the same issue, as do the Play Store examples from the website 'learnopengles', both of which render a simple spinning triangle.
As per research into similar threads here, I have tried continuous rendering as well as dirty rendering. The slight stutter remains.
After 3 days I am wondering whether it is my device, or whether all OpenGL ES 2.0 applications have a slight stutter that is more easily noticeable on a single spinning triangle.
I have no other test device and no money for one, so I cannot say whether it is a problem with my device, or the Samsung Galaxy Express in general, or something else.
Is there anything else I can do to fix this issue?
Is this slight stutter normal behavior?
Are there any examples of code I can test myself that does not exhibit this behavior?
Does the Samsung Galaxy Express have a known issue with OpenGL ES 2.0?
Thank you for reading.
On some Android test devices, when rendering with OpenGL ES 2.0, the screen flashes.
I was able to track the problem to the GLSurfaceView class at the point where eglSwapBuffers is called: the flashing is produced on each iteration, where on one frame the screen becomes black and on the next it shows the image I've drawn. So it seems that eglSwapBuffers is not preserving the back buffer on each call, producing this flashing behaviour.
Is there any way to preserve the back buffer? I've found that maybe I could use the EGL_SWAP_BEHAVIOR_PRESERVED_BIT flag, but I can't figure out how to set it on Android, nor how to use it on old APIs such as Gingerbread.
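This is what I've pieced together so far (untested; display and surface stand for my EGL handles, and I still don't know how to reach this from a stock GLSurfaceView):

#include <EGL/egl.h>

// Request a window surface whose back buffer survives eglSwapBuffers
// (EGL 1.4). EGLDisplay display and EGLSurface surface come from the
// usual EGL setup, omitted here.
const EGLint configAttribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT | EGL_SWAP_BEHAVIOR_PRESERVED_BIT,
    EGL_NONE
};
// ...choose a config with these attributes, create the surface, then:
// eglSurfaceAttrib(display, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);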
Thanks
You should not need to modify GLSurfaceView. It's much more likely that your problem is caused by the drivers or configuration of your system. I would try a different test device with different graphics drivers. What happens when you run it on an AVD?
It could be that your test device is not making enough memory available to the underlying Linux framebuffer device to get the normal triple buffering. Most systems will fall back to single buffering in that case. I recommend that you check these fb device parameters. The virtual_size should be large enough for 2 or 3 buffers for the display mode you are using:
cat /sys/class/graphics/fb0/mode
U:1024x768p-60
cat /sys/class/graphics/fb0/virtual_size
800,1440
I have a cross-platform code base (iOS and Android) that uses a standard render-to-texture setup. Each frame (after initialization), the following sequence occurs:
glBindFramebuffer of a framebuffer with a texture color attachment
Render some stuff
*
glBindFramebuffer of the default framebuffer (0 on Android, usually 2 on iOS)
glBindTexture of the texture that was the color attachment to the first framebuffer
Render using the bound texture
On iOS and some Android devices (including the emulator), this works fine and as expected. On other devices (currently sitting in front of a Samsung Galaxy Note running 4.0.4), the second-phase rendering that uses the texture looks "jumpy". Other animations continue to run at 60 fps on the same screen as the "jumpy" bits; my conclusion is that the changes to the target texture are not always visible in the second rendering pass.
To test this theory, I inserted a glFinish() at the step marked with a * above (sketched below). With that, all devices behave correctly. Interestingly, glFlush() does NOT fix the problem. But glFinish() is expensive, and I haven't seen any documentation suggesting it should be necessary.
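Concretely, the per-frame sequence with the workaround looks like this (a sketch; the FBO/texture handles and draw helpers are placeholders for my actual code):

#include <GLES2/gl2.h>

void renderStuff();        // placeholder: first-pass draw calls
void renderWithTexture();  // placeholder: second-pass draw calls

void Frame(GLuint offscreenFbo, GLuint offscreenTexture, GLuint defaultFbo) {
    glBindFramebuffer(GL_FRAMEBUFFER, offscreenFbo); // texture color attachment
    renderStuff();

    glFinish();  // the workaround; glFlush() alone does NOT fix it

    glBindFramebuffer(GL_FRAMEBUFFER, defaultFbo);   // 0 on Android, app FBO on iOS
    glBindTexture(GL_TEXTURE_2D, offscreenTexture);  // the first pass's attachment
    renderWithTexture();
}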
So, here's my question: What must I do when finished rendering to a texture to make sure that the most-recently-drawn texture is available in later rendering passes?
The code you describe should be fine.
As long as you are using a single context, and not opting in to any extensions that relax synchronization behavior (such as EXT_map_buffer_range), every command must appear to execute in exactly the order specified through the API; and in your API usage, you render to the texture before reading from it.
Given that, you are probably encountering driver bugs on those devices. Can you list which devices are encountering the issue? You'll probably find common hardware or drivers.
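If glFinish() proves too expensive as a workaround, one possible lighter alternative (my suggestion, and only where the EGL_KHR_fence_sync extension is available) is to flush and wait on a fence placed right after the first pass:

#include <EGL/egl.h>
#include <EGL/eglext.h>

// Hedged sketch: the extension entry points must be fetched at runtime.
void WaitForFirstPass(EGLDisplay display) {
    PFNEGLCREATESYNCKHRPROC createSync =
        (PFNEGLCREATESYNCKHRPROC)eglGetProcAddress("eglCreateSyncKHR");
    PFNEGLCLIENTWAITSYNCKHRPROC clientWaitSync =
        (PFNEGLCLIENTWAITSYNCKHRPROC)eglGetProcAddress("eglClientWaitSyncKHR");
    PFNEGLDESTROYSYNCKHRPROC destroySync =
        (PFNEGLDESTROYSYNCKHRPROC)eglGetProcAddress("eglDestroySyncKHR");

    EGLSyncKHR sync = createSync(display, EGL_SYNC_FENCE_KHR, NULL);
    clientWaitSync(display, sync, EGL_SYNC_FLUSH_COMMANDS_BIT_KHR,
                   EGL_FOREVER_KHR);  // waits only for commands issued so far
    destroySync(display, sync);
}

Whether this is actually cheaper than glFinish() depends on the driver, so treat it as something to measure, not a guaranteed win.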
Our Android game has an issue which appears unique to the Galaxy S2.
Occasionally the render will stutter. By this I mean it basically seems to render the last two frames repeatedly (as though it's swapping the last two render buffers without updating either).
What's really odd about this is that the game continues to update, so if the stutter lasts for 2 seconds, the game will have progressed 2 seconds behind the scenes.
This is odd because our code is basically like this:
void Update()
{
    DoGameLogic();  // advance the simulation one step
    DoRender();     // draw the frame that was just updated
}
So this means that if the game has updated, it has also rendered. The maximum delta time is capped to one frame, so there must have been more than one Update, and thus multiple renders, during the stutter.
My current theory is that on most devices the game lags during rendering, but on the S2 the render calls are executed yet "fall through" without updating the render buffer.
Has anyone run into this problem? I would really appreciate any suggestions about what this could be.
We found out what the problem is.
The Galaxy S2 was for some reason running out of GL memory. This wasn't apparent on the devices we were testing with, but on other devices it would crash on some OpenGL call - not the offending call, mind you.
Eventually we tracked it down to our use of point-sprite VBOs. As the S2 is a powerful device, we replaced the point sprites with quads mimicking point sprites as a workaround (sketched below).
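The replacement amounts to something like this (a sketch; the vertex layout is our own, not anything standard):

#include <vector>

// Each point sprite (center, size) becomes 4 corner vertices; a vertex
// shader then offsets each corner by 0.5 * size along its corner direction.
struct QuadVertex { float x, y; float cornerX, cornerY; float size; };
const float kCorners[4][2] = { {-1,-1}, {1,-1}, {1,1}, {-1,1} };

void EmitQuad(std::vector<QuadVertex>& out, float x, float y, float size) {
    for (int c = 0; c < 4; ++c)
        out.push_back({ x, y, kCorners[c][0], kCorners[c][1], size });
    // Index pattern 0,1,2 / 0,2,3 (per quad) forms the two triangles.
}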
Incidentally, SoundPool would also run out of memory on this device, requiring another workaround.