I have a cross-platform code base (iOS and Android) that uses a standard render-to-texture setup. Each frame (after initialization), the following sequence occurs:
glBindFramebuffer of a framebuffer with a texture color attachment
Render some stuff
*
glBindFramebuffer of the default framebuffer (0 on Android, usually 2 on iOS)
glBindTexture of the texture that was the color attachment to the first framebuffer
Render using the bound texture
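In code, that per-frame sequence looks roughly like this (a sketch; offscreenFbo, offscreenTex, and the render helpers are illustrative names, not my actual code):

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, offscreenFbo);
renderSomeStuff();                                   // first pass, into the texture
// * (see below)
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);  // default framebuffer on Android
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, offscreenTex);
renderUsingTexture();                                // second pass, sampling the texture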
On iOS and some Android devices (including the emulator), this works as expected. On other devices (right now I'm sitting in front of a Samsung Galaxy Note running 4.0.4), the second-phase rendering that uses the texture looks "jumpy". Other animations continue to run at 60 fps on the same screen as the "jumpy" bits; my conclusion is that the changes to the target texture are not always visible in the second rendering pass.
To test this theory, I inserted a glFinish() at the step marked with a * above. With that change, all devices behave correctly. Interestingly, glFlush() does NOT fix the problem. But glFinish() is expensive, and I haven't seen any documentation suggesting that it should be necessary.
So, here's my question: What must I do when finished rendering to a texture to make sure that the most-recently-drawn texture is available in later rendering passes?
The code you describe should be fine.
As long as you are using a single context, and not opting in to any extensions that relax synchronization behavior (such as EXT_map_buffer_range), then every command must appear to execute as if it had executed in exactly the same order specified in the API, and in your API usage you're rendering to the texture before reading from it.
Given that, you are probably encountering driver bugs on those devices. Can you list which devices are encountering the issue? You'll probably find common hardware or drivers.
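That said, if you need a workaround cheaper than glFinish() on the affected devices, and you can require OpenGL ES 3.0, a fence sync at the * step is worth trying (a sketch, untested on the devices in question; per the spec it should not be needed on a single context, so this only papers over a driver bug):

// At the step marked * above, after rendering to the texture:
long fence = GLES30.glFenceSync(GLES30.GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
// Before the draw that samples the texture, wait for the first pass
// to actually complete (flushing pending commands first):
GLES30.glClientWaitSync(fence, GLES30.GL_SYNC_FLUSH_COMMANDS_BIT,
        GLES30.GL_TIMEOUT_IGNORED);
GLES30.glDeleteSync(fence);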
Related
I have some code that uses glBlendFuncSeparateOES and glBlendEquationSeparateOES in order to render onto framebuffers with alpha.
However, I've found that a couple of my target devices do NOT appear to support these functions. They fail silently; all that happens is that your blend mode doesn't get set. My "Kinda Fire" cheapie tablet and an older Samsung both exhibit this behavior.
Is there a good way, on Android, to query whether they're actually implemented? I have tried eglGetProcAddress, but it returns an address for any string you throw at it!
Currently, I just have the game, on startup, do a quick render on a small FBO to see if the transparency is correct or if it has artifacts. It works, but it's a very kludgy method.
I'd much prefer if there was something like glIsBlendFuncSeparateSupported().
You can get a list of all available extensions using glGetString(GL_EXTENSIONS), which returns a space-separated list of supported extension names. For more details, see the Khronos specification.
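For example, on the GL thread (a sketch; it assumes the extensions you care about are GL_OES_blend_func_separate and GL_OES_blend_equation_separate):

// Must run on the GL thread, after the context is current.
String extensions = GLES10.glGetString(GLES10.GL_EXTENSIONS);
boolean hasBlendFuncSeparate = extensions != null
        && extensions.contains("GL_OES_blend_func_separate");
boolean hasBlendEquationSeparate = extensions != null
        && extensions.contains("GL_OES_blend_equation_separate");

This is also why eglGetProcAddress is not a usable check: EGL is allowed to return a non-NULL address for a function the driver does not actually export.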
On some Android test devices, when rendering with OpenGL ES 2.0, the screen flashes.
I was able to track the problem to the GLSurfaceView class at the point where "eglSwapBuffers" is called, so the flashing is produced on each iteration: on one the screen becomes black, and on the next it has the image I've drawn. So it seems that eglSwapBuffers is not preserving the back buffer on each call, producing this flashing behaviour.
Is there any way to preserve the back buffer? I've read that maybe I could use the EGL_SWAP_BEHAVIOR_PRESERVED_BIT flag, but I can't figure out how to set it on Android, nor how to use it on old API levels such as Gingerbread.
Thanks
You should not need to modify GLSurfaceView. It's much more likely that your problem is caused by the drivers or configuration of your system. I would try a different test device with different graphics drivers. What happens when you run it on an AVD?
It could be that your test device is not making enough memory available to the underlying Linux framebuffer device for the usual triple buffering; most systems fall back to single buffering in that case. I recommend checking these fb device parameters. The virtual_size should be large enough for two or three buffers of the display mode you are using (e.g., triple-buffering a 1024x768 mode needs a virtual size of at least 1024,2304):
cat /sys/class/graphics/fb0/mode
U:1024x768p-60
cat /sys/class/graphics/fb0/virtual_size
800,1440
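As for the EGL_SWAP_BEHAVIOR_PRESERVED_BIT mentioned in the question: as far as I know there is no public way to request it on Gingerbread-era API levels, but on API 17+ you could try switching the surface to preserved swaps via EGL14 (a sketch; many drivers simply do not support it):

// Call on the GL thread once the window surface is current.
EGLDisplay display = EGL14.eglGetCurrentDisplay();
EGLSurface surface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
if (!EGL14.eglSurfaceAttrib(display, surface,
        EGL14.EGL_SWAP_BEHAVIOR, EGL14.EGL_BUFFER_PRESERVED)) {
    // The driver refused; back buffer contents stay undefined after a swap.
}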
I'm writing an OpenGL ES 2.0 game in C++ running on multiple (mobile) platforms.
On iOS, Android, ... basically everything runs fine, except on one device:
The computation for one frame of a specific scene takes about 8 ms on an HTC Desire.
Another device, a Samsung Galaxy Nexus, which is much newer, takes 18-20 ms.
I dug into the problem and found that it is related to enabling/disabling GL_DEPTH_TEST.
If I comment out all glEnable(GL_DEPTH_TEST)/glDisable(GL_DEPTH_TEST) calls, the time needed for one frame drops to 1-2 ms on the Nexus.
So I optimized the glEnable()/glDisable() calls to occur only where absolutely needed. I have 3D and 2D parts in my scene and therefore need to render 2D without depth test and 3D with depth test.
Currently I enable the depth test, draw the 3D parts, disable the depth test, and draw the 2D parts, as sketched below.
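In outline (a sketch; draw3d()/draw2d() are placeholders for my actual passes):

GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

GLES20.glEnable(GLES20.GL_DEPTH_TEST);   // the only enable this frame
draw3d();                                // placeholder: 3D pass with depth testing
GLES20.glDisable(GLES20.GL_DEPTH_TEST);  // the only disable this frame
draw2d();                                // placeholder: 2D pass without depth testing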
But the computation still takes 18-20 ms on the Nexus.
I also checked whether the depth buffer is cleared more often than needed, but it is only cleared at the start of the frame.
Is it possible that toggling the depth test takes that much time?
Does anyone have other ideas about what to check?
UPDATE
I found out that the 3D object I render is somehow responsible for the slow computation.
If I remove the 3D object, the performance is good.
But the same 3D object is used in another scene without causing such trouble.
And the weirdest thing: the Nexus runs Android 4.2, which has a developer option to visualize the CPU load as an overlay. If I enable this setting and start the game, the computation time is 5-6 ms instead of 18-20 ms. How can this be related?
I have a problem with very slow rendering on an Android tablet using the NDK and the EGL commands. I have timed calls to eglSwapBuffers; they take a variable amount of time, frequently exceeding the device's frame period. I know it synchronizes to the refresh, but that is around 60 FPS, and the rates here drop well below that.
The only command I issue between calls to swap is glClear, so I know it isn't anything I'm drawing that causes the problem. Even when doing nothing but clearing, the frame rate drops to 30 FPS (and is erratic).
On the same device a simple GL program in Java easily renders at 60 FPS, so I know it isn't fundamentally a hardware issue. I've looked through the Android Java code for setting up the GL context and can't see any significant difference. I've also played with every config attribute; while some alter the speed slightly, none (that I can find) fixes this horrible frame rate drop.
To ensure that event polling wasn't an issue, I moved the rendering into a thread. That thread now does nothing but render: it just calls clear and swap repeatedly. The slow performance persists.
I'm out of ideas what to check and am looking for suggestions as to what the problem might be.
There's really not enough info here (what device you are testing on, what your exact config was, etc.) to answer this 100% reliably, but this kind of behavior is usually caused by a mismatch between the window and surface pixel formats, e.g. 16-bit (RGB565) vs. 32-bit. On the NDK side, the usual fix is to query EGL_NATIVE_VISUAL_ID from your chosen EGLConfig and pass it to ANativeWindow_setBuffersGeometry so the window format matches the surface.
The FB_MULTI_BUFFER=3 environment variable enables multi-buffering on a Freescale i.MX 6 (Sabrelite) board with some recent LTIB builds (without X). Your GFX driver may need something like this.
Foreword: This severe bug can cause Android devices to lock up (unable to press Home/Back buttons, needs hard reset). It is associated with OpenGL surfaces and audio playback. Logcat repeats something to the effect of
W/SharedBufferStack( 398): waitForCondition(LockCondition) timed out (identity=9, status=0). CPU may be pegged. trying again.
once every second, hence the name of this bug. The fundamental cause of this is likely due to a deadlock when buffering data, be it sound or graphics.
I occasionally get this bug when testing my app on the Asus EEE Transformer tablet. The crash occurs when the sound thread populates MediaPlayer objects using MediaPlayer.create(context, R.raw.someid); and the GLSurface thread loads textures from bitmaps using
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
        R.drawable.textureMap, opts);
gl.glGenTextures(1, texAtlas, 0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, texAtlas[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
I don't think the cause is the audio, as the audio does in fact still play (the thread that loads the audio plays it after x amount of time). If so, the cause lies in the OpenGL ES buffering performed by the code above.
Related Material
This SO post refers to this bug. They use OpenGL ES 2.0 and the NDK; I use OpenGL ES 1.1 and no NDK (albeit most devices emulate 1.1 on top of 2.0, so technically they are using 2.0). Further, they use Android 2.1, while my crash occurs on Android 3.2.1.
This site links the bug to the AudioTrack object. However, I do not use that in my app.
The Android Bug Tracker lists this as a known bug, but as of yet there is no solution (and it's not fixed in Honeycomb+).
Common Elements
Freeze occurs when buffering. The thing being buffered is usually quite large, so images (the bug occurs more frequently the larger the image) or audio are typically affected.
Freeze only occurs on some devices.
Freeze is not related to a specific Android version - has been recorded on 2.1 and 3.2.1, among others.
Freeze is not related to use of the NDK.
Freeze is not related to a single programming practice (order of buffering, file types, etc.).
My question is pretty simple: is there a workaround for this issue? If you can't prevent it, is there a way to fail gracefully and stop the whole device from locking up?
In the case of my game, the "waitForCondition" problem has been noticed on a Samsung Galaxy S (Android 2.3.3). I don't know whether the issue has been seen on different devices, but the problem probably exists there too. Fortunately I was able to reproduce it, as one of my friends has the device and was kind enough to lend me one for a week.
In my case the game is written practically entirely in Java (a few calls through the NDK to OpenGL functions), so I'm not really sure whether this will apply to your problem too.
Anyway, the problem seems to be somehow related to OpenGL's internal buffers. In the code presented below, the line that was commented out at (1) has been replaced by (2), manual config selection. I haven't tested it thoroughly yet, but since that change I haven't noticed any freezes, so there is hope.
UPDATE 1: As additional info, I think I read somewhere that somebody had the same CPU-gets-pegged problem, and his solution was to set all the OpenGL surface components to 8 bits (the alpha component too) rather than 565 or 4 bits (I don't remember exactly which configuration was faulty).
UPDATE 2: One may also consider using the following implementation of the EGLConfigChooser: GdxEglConfigChooser.java. If this doesn't help, try the approach presented in GLSurfaceView20.java.
UPDATE 3: Additionally, simplifying the shader programs as much as possible helped a bit too.
// in Activity...
glView = new GLSurfaceView(this);
glView.setEGLContextClientVersion(2); // OpenGL ES 2.0
// glView.setEGLConfigChooser(false); // (1) false - no depth buffer
glView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
glView.setEGLConfigChooser(8,8,8,8,0,0); // (2) TODO: crashes on devices that don't support this configuration; see the fallback sketch below
glView.setRenderer(new MyRenderer(this));
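Regarding the TODO at (2): to avoid crashing on devices without an 8888 config, one option is a custom chooser that tries 8888 first and falls back to 565, in the spirit of the GdxEglConfigChooser linked above (a sketch using the javax.microedition.khronos.egl types; it replaces the call at (2) and must run before setRenderer):

glView.setEGLConfigChooser(new GLSurfaceView.EGLConfigChooser() {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        final int EGL_OPENGL_ES2_BIT = 4; // not exposed as an EGL10 constant
        int[][] attempts = {
                // RGBA8888 first...
                {EGL10.EGL_RED_SIZE, 8, EGL10.EGL_GREEN_SIZE, 8,
                 EGL10.EGL_BLUE_SIZE, 8, EGL10.EGL_ALPHA_SIZE, 8,
                 EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL10.EGL_NONE},
                // ...then fall back to RGB565.
                {EGL10.EGL_RED_SIZE, 5, EGL10.EGL_GREEN_SIZE, 6,
                 EGL10.EGL_BLUE_SIZE, 5,
                 EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL10.EGL_NONE},
        };
        EGLConfig[] config = new EGLConfig[1];
        int[] num = new int[1];
        for (int[] spec : attempts) {
            if (egl.eglChooseConfig(display, spec, config, 1, num) && num[0] > 0) {
                return config[0];
            }
        }
        throw new IllegalArgumentException("No usable EGLConfig found");
    }
});

If the 565 fallback gets chosen, the TRANSLUCENT holder format set above should probably be changed to match (see UPDATE 1 about 8888 vs. 565 mismatches).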
Increasing the virtual memory of the device reduces how often this issue occurs. Of course, that is not an option unless you are the manufacturer of the device.