On some Android test devices, when rendering with OpenGL ES 2.0, the screen flashes.
I was able to track the problem to the GLSurfaceView class, at the point where "eglSwapBuffers" is called. The flashing happens on every iteration: on one frame the screen becomes black, and on the next it shows the image I've drawn. So it seems that eglSwapBuffers is not preserving the back buffer on each call, which produces this flashing behaviour.
Is there any way to preserve the back buffer? I've read that I might be able to use the EGL_SWAP_BEHAVIOR_PRESERVED_BIT flag, but I can't figure out how to set it on Android, nor how to use it on old API levels such as Gingerbread.
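From what I've found, on devices with EGL 1.4 it would apparently be something like the sketch below, but I haven't managed to get this working, and it doesn't seem to be available on Gingerbread (untested):

#include <EGL/egl.h>

/* Sketch only: needs EGL 1.4 and an EGLConfig whose EGL_SURFACE_TYPE includes
 * EGL_SWAP_BEHAVIOR_PRESERVED_BIT; display and surface come from the existing
 * EGL setup. */
static EGLBoolean request_preserved_swap(EGLDisplay display, EGLSurface surface) {
    return eglSurfaceAttrib(display, surface,
                            EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);
}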
Thanks
You should not need to modify GLSurfaceView. It's much more likely that your problem is caused by the drivers or configuration of your system. I would try a different test device with different graphics drivers. What happens when you run it on an AVD?
It could be that your test device is not making enough memory available to the underlying Linux framebuffer device to get the normal triple buffering. Most systems will fall back to single buffering in that case. I recommend that you check these fb device parameters. The virtual_size should be large enough to hold 2 or 3 buffers for the display mode you are using:
cat /sys/class/graphics/fb0/mode
U:1024x768p-60
cat /sys/class/graphics/fb0/virtual_size
800,1440
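Roughly speaking, and assuming the extra buffers are stacked vertically as they usually are, double buffering for a 1024x768 mode needs a virtual height of at least 2 x 768 = 1536, and triple buffering needs 3 x 768 = 2304; if virtual_size reports less than that, the driver only has room for a single buffer.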
I have a Google Pixel 4, rooted, and I have the AOSP code building successfully. Inside an Android app, I'd like to gralloc an extra-large area of memory (probably 4x as large as the 1080x2280 screen) and draw a simple graphic (for example, a square) into the middle of the buffer. Then, on each frame, I just want to slide the pointer around on the buffer to make it look like the square is moving around on the screen (since the rest of the buffer will be blank).
I'm not sure if this is feasible. So far, I have a completely native Android app. At the beginning of android_main(), I malloc a region 4x as large as the screen, and I draw a red square in the middle. On each frame, I call
ANativeWindow_lock()
to get the pointer to the gralloced memory, which is accessed in ANativeWindow_Buffer->bits. Then I use memcpy to copy pixels from my big malloced buffer into the bits address, adjusting the src pointer in my memcpy call to slide around within the buffer and make it seem like the square is moving around on the screen. Finally, I call
ANativeWindow_unlockAndPost()
to release CPU access and post the buffer to the display.
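The per-frame copy is roughly the sketch below (simplified; big_buf, big_stride and the offsets stand in for my actual bookkeeping, and I'm assuming the 2-bytes-per-pixel RGB_565 format mentioned below):

#include <stdint.h>
#include <string.h>
#include <android/native_window.h>

/* Simplified sketch of the per-frame copy. 'big_buf' and 'big_stride' describe
 * the large malloc'ed buffer (both in pixels), and (off_x, off_y) is the
 * current offset that makes the square appear to move; 'window' is the app's
 * ANativeWindow. The copy is row by row because the strides differ. */
static void present_frame(ANativeWindow *window, const uint16_t *big_buf,
                          int big_stride, int off_x, int off_y) {
    ANativeWindow_Buffer out;
    if (ANativeWindow_lock(window, &out, NULL) != 0)
        return;                               /* lock failed, skip this frame */

    uint16_t *dst = (uint16_t *)out.bits;     /* RGB_565: 2 bytes per pixel */
    for (int y = 0; y < out.height; ++y) {
        const uint16_t *src = big_buf + (size_t)(off_y + y) * big_stride + off_x;
        memcpy(dst, src, (size_t)out.width * sizeof(uint16_t));
        dst += out.stride;                    /* stride is in pixels, not bytes */
    }

    ANativeWindow_unlockAndPost(window);      /* hand the buffer to the display */
}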
However, I'm trying to render as fast as possible. There are 2462400 pixels (2 bytes each) for the screen, so memcpy is copying 5MB of data for each frame, which takes ~10ms. So, I want to avoid the memcpy and access the ORIGINAL pointer to the dma_buf, or whatever it is that Gralloc3 originally allocates for the GraphicBuffer (the ANativeWindow is basically just a Surface which uses a GraphicBuffer, which uses GraphicBufferMapper, which uses GrallocMapper in gralloc). This is complicated by the fact that the system appears to be triple-buffering, putting three gralloced buffers in BufferQueue and rotating between them.
By adding log statements to my AOSP build, I can see when Gralloc3 allocates buffers and how big they are. So I can allocate extra-large buffers. Where I'm getting stuck is manually adjusting the pointers and having that change reflected on the display. ANativeWindow_lock() gets a copy of the original pointer to the pixel buffer, so I figured that if I can trace that call all the way down, I can find the original pointer. I traced it down into hardware/interfaces/graphics/mapper/3.0/IMapper.hal and IAllocator.hal, which are used by Gralloc3 to interact with memory. But I don't know where to go after this. The HAL file is basically a header that's implemented by some other vendor-specific file, I guess....
Checking out ANativeWindow_lock() using Android Studio's CPU profiler
Based on this picture, it seems like some QtiMapper.cpp file might be implementing the HAL. There are a few files called QtiMapper, in hardware/qcom. (I'm guessing I should be looking in the sm8150 folder because the Pixel 4 uses Snapdragon 855.) Then lower down, it looks like the IonAlloc::CleanBuffer and BufferManager::LockBuffer might be in the gr_ files in hardware/qcom/display/msmxxxx/gralloc folders. But I can't be sure where the calls are being routed exactly because if I try to modify or add log statements to these files, I get problems with the AOSP build. Any directions on how to mod these would be very helpful, too. If these are the actual files being used by the system, it looks like I could possibly use them for my app because I can see the ioctl and mmap calls in them.
Using the Linux Direct Rendering Manager, I was able to write directly to the display in a C file by shutting down SurfaceFlinger and mmapping some memory. See my demo here. So if I shut down the Android framework, I can accomplish what I want to do. But I want to keep Android up, because I'm looking to use the accelerometers and maybe other APIs in my app. (The goal is to use the accelerometer readings to stabilize text on the display as fast as possible.) It's also annoying because starting up the display server again does some kind of reboot.
First of all, is what I want to do even worth it? It seems like it should be, because the display on the Pixel can refresh every 10 milliseconds, and taking the time to copy the pixel memory is pointless in this case.
Second of all, does anyone know of a way I can, within my app, adjust the low-level pointer to the pixel buffer in memory and still make it push to the display?
Is there any way to debug shaders (fragment and vertex) in an Android application with OpenGL ES 2?
Since we only pass a String with the shader code and a bunch of variables that get replaced by handles, it is very tedious to find the proper changes that need to be made.
Is it possible to write to the Android log, as with Log.d()?
Is it possible to use breakpoints and inspect the current values in the shader calculations?
I am simply not used to writing code with a pen anymore, and that's what writing code inside the shader string feels like.
This is an old question but since it appears first in searches and the old answer can be expanded upon, I'm leaving an alternative answer:
While printing or stepping through code the way we do in Java or Kotlin is not possible, that doesn't mean shaders cannot be debugged. There used to be a tool in the now-deprecated Android Monitor that let you see a trace of your GPU execution frame by frame, including inspecting calls and geometry.
Right now the official GPU debugger is the Android GPU Inspector, which has some useful performance metrics and will include debugging frame by frame in a future update.
If the Android GPU Inspector doesn't have what you need, you can go with vendor-specific debuggers depending on your device (Mali Graphics Debugger, Snapdragon Debugger, etc.)
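Also, if part of what you want is just getting the GLSL compiler's output into logcat (the closest thing to Log.d() for shader code), you can dump the shader info log after compiling. A minimal native-side sketch (the Java GLES20 bindings expose the same calls):

#include <android/log.h>
#include <GLES2/gl2.h>

/* Dump the GLSL compiler output for one shader to logcat (link with -lGLESv2
 * and -llog). This only covers compile diagnostics, not runtime values. */
static void log_shader_info(GLuint shader) {
    GLint ok = 0, len = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &len);
    if (len > 1) {
        char buf[4096];
        glGetShaderInfoLog(shader, sizeof(buf), NULL, buf);
        __android_log_print(ok ? ANDROID_LOG_INFO : ANDROID_LOG_ERROR,
                            "ShaderLog", "%s", buf);
    }
}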
No. Remember that the GPU is going to execute every program millions of times (once per vertex, and once per fragment), often with hundreds of threads running concurrently, so any concept of "connect a debugger" is pretty much impossible.
I have a cross-platform code base (iOS and Android) that uses a standard render-to-texture setup. Each frame (after initialization), the following sequence occurs:
glBindFramebuffer of a framebuffer with a texture color attachment
Render some stuff
*
glBindFramebuffer of the default framebuffer (0 on Android, usually 2 on iOS)
glBindTexture of the texture that was the color attachment to the first framebuffer
Render using the bound texture
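In rough GLES2 code, the per-frame sequence is essentially the sketch below (simplified; the draw calls and FBO setup stand in for the real cross-platform code):

#include <GLES2/gl2.h>

void draw_scene_into_texture(void);    /* placeholders for the real rendering */
void draw_pass_using_texture(void);

/* 'fbo' is the texture-backed framebuffer, 'tex' its color attachment, and
 * 'default_fbo' the on-screen framebuffer (0 on Android, usually 2 on iOS). */
static void render_frame(GLuint fbo, GLuint tex, GLuint default_fbo) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    draw_scene_into_texture();

    /* * -- adding glFinish() here hides the problem; glFlush() does not */

    glBindFramebuffer(GL_FRAMEBUFFER, default_fbo);
    glBindTexture(GL_TEXTURE_2D, tex);
    draw_pass_using_texture();
}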
On iOS and some Android devices (including the emulator), this works fine and as expected. On other devices (currently sitting in front of a Samsung Galaxy Note running 4.0.4), the second-phase rendering that uses the texture looks "jumpy". Other animations continue to run at 60 fps on the same screen as the "jumpy" bits; my conclusion is that the changes to the target texture are not always visible in the second rendering pass.
To test this theory, I insert a glFinish() at the step marked with a * above. With that, the behavior is correct on all devices. Interestingly, glFlush() does NOT fix the problem. But glFinish() is expensive, and I haven't seen any documentation that suggests it should be necessary.
So, here's my question: What must I do when finished rendering to a texture to make sure that the most-recently-drawn texture is available in later rendering passes?
The code you describe should be fine.
As long as you are using a single context, and not opting in to any extensions that relax synchronization behavior (such as EXT_map_buffer_range), then every command must appear to execute as if it had executed in exactly the same order specified in the API, and in your API usage you're rendering to the texture before reading from it.
Given that, you are probably encountering driver bugs on those devices. Can you list which devices are encountering the issue? You'll probably find common hardware or drivers.
I have a problem with very slow rendering on an Android tablet using the NDK and the EGL calls. I have timed the calls to eglSwapBuffers and they take a variable amount of time, frequently exceeding the device's frame period. I know it synchronizes to the refresh, but that is around 60 FPS, and the frame rate here drops well below that.
The only command I issue between calls to swap is glClear, so I know it isn't anything I'm drawing that causes the problem. Even with just a clear, the frame rate drops to 30 FPS (and is erratic).
On the same device a simple GL program in Java easily renders at 60FPS, thus I know it isn't fundamentally a hardware issue. I've looked through the Android Java code for setting up the GL context and can't see any significant difference. I've also played with every config attribute, and while some alter the speed slightly, none (that I can find) change this horrible frame rate drop.
To make sure event polling wasn't the issue, I moved the rendering into its own thread. That thread does nothing but render: it just calls clear and swap repeatedly. The slow performance persists.
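For reference, the stripped-down render thread is essentially just the following (with the swap timing I mentioned; display and surface come from my EGL setup code):

#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <time.h>
#include <android/log.h>

/* The whole body of the test render thread: clear, swap, and log how long the
 * swap took. */
static void render_loop(EGLDisplay display, EGLSurface surface) {
    for (;;) {
        glClear(GL_COLOR_BUFFER_BIT);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        eglSwapBuffers(display, surface);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        __android_log_print(ANDROID_LOG_DEBUG, "SwapTiming", "%.2f ms", ms);
    }
}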
I'm out of ideas what to check and am looking for suggestions as to what the problem might be.
There's really not enough info here (like what device you are testing on, what your exact config was, etc.) to answer this 100% reliably, but this kind of behavior is usually caused by a mismatch between the window and surface pixel formats, e.g. 16-bit (RGB565) vs. 32-bit.
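If that turns out to be the cause, a common NDK-side fix is to force the window's buffer format to match the chosen EGLConfig before creating the surface, roughly like this (a sketch; display, config and window come from the existing setup code):

#include <EGL/egl.h>
#include <android/native_window.h>

/* Make the ANativeWindow's buffer format match the EGLConfig so the window
 * and surface pixel formats agree. */
static EGLSurface create_matched_surface(EGLDisplay display, EGLConfig config,
                                         ANativeWindow *window) {
    EGLint format = 0;
    eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
    ANativeWindow_setBuffersGeometry(window, 0, 0, format);
    return eglCreateWindowSurface(display, config, window, NULL);
}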
The FB_MULTI_BUFFER=3 environment variable enables multi-buffering on the Freescale i.MX 6 (Sabrelite) board with a recent LTIB build (without X). Your GFX driver may need something similar.
Hi, I was researching the possibility of transporting the not-yet-rendered drawing calls from Android to a second screen. While researching, I found out that Skia is behind SurfaceFlinger and the Canvas.draw() method. So my question now is: what would be the best interception point to branch off these calls in order to use them for a second screen / machine? The second device does not have to be a pure replay device; it can be another Android device.
At first I used VNC for this concept, but I quickly found out that it performs badly due to the double-buffering effect. It is also possible to modify the Android code so that it omits the double buffering, but it is still of interest to actually use the pre-rendered drawing calls on a second, possibly scaled, device.
Thanks