AOSP / Android 7: How is EGL utilized in detail?

I am trying to understand the Android (7) graphics system from the system integrator's point of view. My main focus is the minimum functionality that needs to be provided by libegl.
I understand that SurfaceFlinger is the main actor in this domain. SurfaceFlinger initializes EGL, creates the actual EGL surface, and acts as a consumer for buffers (frames) produced by the app. The app, in turn, executes the bulk of the required GLES calls. Obviously, this leads to restrictions, as SurfaceFlinger and apps live in separate processes, which is not the typical use case for GLES/EGL.
Things I do not understand:
Do apps on Android 7 always render into EGL_KHR_image buffers which are sent to SurfaceFlinger? As far as I understand, this would mean there is always an extra copy step (even when no composition is needed)... Or is there also some kind of optimized fullscreen mode, where apps render directly into the final EGL surface?
Which inter-process sharing mechanisms are used here? My guess is that EGL_KHR_image, used with EGL_NATIVE_BUFFER_ANDROID, defines the exact binary format, so that an image object may be created in each process while the memory is shared via ashmem (see the sketch below). Is this already the complete/correct picture, or am I missing something here?
I'd guess these are the main points I currently lack confident knowledge about. I certainly have follow-up questions (e.g. how do gralloc and composition fit into this?), but, in keeping with this platform, I'd like to keep the question as compact as possible. Still, besides the main documentation page, I am missing documentation clearly targeted at system integrators, so further links would be really appreciated.
My current focus is on the typical use cases that cover the vast majority of apps compatible with Android 7. If there are corner cases like long-deprecated compatibility shims, I'd like to ignore them for now.
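To make that guess concrete, here is roughly the consumer-side path I have in mind (a sketch only: the EGL/GLES extension entry points are real, but error handling is omitted, and how the native buffer arrives - normally via the internal android::GraphicBuffer class in libui, which is not NDK API - is left out):

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* native_buffer is an ANativeWindowBuffer*, e.g. obtained from the internal
     * android::GraphicBuffer::getNativeBuffer() (libui, not part of the NDK).
     * The pixel memory stays in the gralloc allocation; nothing is copied here. */
    GLuint texture_from_native_buffer(EGLDisplay dpy, EGLClientBuffer native_buffer)
    {
        PFNEGLCREATEIMAGEKHRPROC createImage =
            (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
        PFNGLEGLIMAGETARGETTEXTURE2DOESPROC imageTargetTexture2D =
            (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");

        /* Wrap the gralloc-backed buffer in an EGLImage... */
        EGLImageKHR image = createImage(dpy, EGL_NO_CONTEXT,
                                        EGL_NATIVE_BUFFER_ANDROID,
                                        native_buffer, NULL);

        /* ...and sample it as an external texture. */
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
        imageTargetTexture2D(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)image);
        return tex;
    }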

Related

Is an OpenGL function loader needed?

On desktop OSes, OpenGL function loaders like GLEW, GLAD, etc. are used to load functions at runtime. But what about on Android? How are functions loaded? I've looked at a few apps and all of them seem to depend on EGL and GLES. But AFAIK EGL isn't a loading library, but an interface. Well, an interface to an interface, as GLES itself is an interface.
This leads to another question: How come Android uses EGL when it is generally not used on desktops?
Back when I used Android a bit, you could either link to the GLES 2.0 library or to the GLES 3.0 library, so, roughly speaking, the library provided the function pointers for you. If you linked GLES 3.0 but the phone you ran on only supported 2.0, your app would not load. To work around this, I always linked to GLES 2.0 and wrote my own function loader using eglGetProcAddress to extract the GLES 3.0 API if available. This is pretty much how function loaders on Windows/Linux work (using wglGetProcAddress or glXGetProcAddress).
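As a rough illustration of that loader (not production code): link only against libGLESv2 and resolve the ES 3.0 entry points at runtime. The function-pointer typedefs and the load_gles3 name are my own, and it assumes an actual ES 3.x context is current (i.e. eglCreateContext was asked for client version 3 and succeeded):

    #include <EGL/egl.h>
    #include <GLES2/gl2.h>
    #include <string.h>

    /* Hand-rolled pointer types for two ES 3.0 core functions. */
    typedef void (*PFN_glDrawArraysInstanced)(GLenum mode, GLint first,
                                              GLsizei count, GLsizei instanceCount);
    typedef void (*PFN_glVertexAttribDivisor)(GLuint index, GLuint divisor);

    static PFN_glDrawArraysInstanced my_glDrawArraysInstanced = NULL;
    static PFN_glVertexAttribDivisor my_glVertexAttribDivisor = NULL;

    /* Returns 1 if the ES 3.0 entry points could be resolved, 0 if the
     * device/context only supports ES 2.0. Call with a current context. */
    static int load_gles3(void)
    {
        const char *version = (const char *)glGetString(GL_VERSION);
        if (!version || strncmp(version, "OpenGL ES 3.", 12) != 0)
            return 0;

        my_glDrawArraysInstanced =
            (PFN_glDrawArraysInstanced)eglGetProcAddress("glDrawArraysInstanced");
        my_glVertexAttribDivisor =
            (PFN_glVertexAttribDivisor)eglGetProcAddress("glVertexAttribDivisor");

        return my_glDrawArraysInstanced && my_glVertexAttribDivisor;
    }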
GLES has always been a stripped-down version of full desktop GL. It targets a smaller subset of the API (removing all of the legacy cruft), which simplifies the OpenGL driver code somewhat, which in turn should reduce memory usage and save a little battery life. Basically, it's just better suited to a low-power system where battery life is a concern.
How come Android uses EGL when it is generally not used on desktops?
It was widely used on embedded electronics prior to Android, so I suspect it was the path of least resistance for a new smartphone OS, as all GPU vendors already had an EGL implementation.

Is it possible to debug shaders in Android OpenGL ES 2?

Is there a possibility to debug the shaders (fragment and vertex) in an Android Application with OpenGL-ES 2?
Since we only pass a String of source code and a bunch of variables to bind to handles, it is very tedious to figure out what needs to change.
Is it possible to write to the Android log, as with Log.d()?
Is it possible to use breakpoints and inspect the current values in the shader calculations?
I am simply not used to writing code on paper anymore, and that is what writing shader code inside a string feels like.
This is an old question but since it appears first in searches and the old answer can be expanded upon, I'm leaving an alternative answer:
While printing or stepping through code the way we do in Java or Kotlin is not possible, this doesn't mean shaders cannot be debugged at all. There used to be a tool in the now-deprecated Android Monitor that let you see a trace of your GPU execution frame by frame, including inspecting calls and geometry.
Right now the official GPU debugger is the Android GPU Inspector, which has some useful performance metrics and will include debugging frame by frame in a future update.
If the Android GPU Inspector doesn't have what you need, you can go with vendor-specific debuggers depending on your device (Mali Graphics Debugger, Snapdragon Debugger, etc.)
No. Remember that the GPU is going to execute every program millions of times (once per vertex, and once per fragment), often with hundreds of threads running concurrently, so any concept of "connect a debugger" is pretty much impossible.
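That said, you can at least get the driver's compile and link errors into logcat from the application side. A minimal sketch (the log tag and helper name are just for illustration):

    #include <GLES2/gl2.h>
    #include <android/log.h>

    #define TAG "ShaderDebug"  /* illustrative log tag */

    /* Compile a shader and dump the driver's info log to logcat
     * (the native counterpart of Log.d()). Returns 0 on failure. */
    static GLuint compile_shader(GLenum type, const char *source)
    {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);

        GLint ok = GL_FALSE;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);

        char log[1024];
        GLsizei len = 0;
        glGetShaderInfoLog(shader, sizeof(log), &len, log);
        if (len > 0)
            __android_log_print(ANDROID_LOG_DEBUG, TAG, "shader log: %s", log);

        if (!ok) {
            glDeleteShader(shader);
            return 0;
        }
        return shader;
    }

Beyond that, the usual trick for inspecting values inside a running shader is to write the intermediate value you care about to the output color and look at the result on screen.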

How to prevent OpenGL context loss when a new activity is loaded

I am developing a 3D Android app where I need to render in two different activities (normal rendering in one activity and VR rendering in the other). I found that once I move from one activity to another, my 3D model data (vertices, indices) is lost. If I come back to the first activity I have to reload all the data from files. Is there any workaround for this specific issue? Also, which format should I save the models in to get the quickest loading speed?
You can use GLSurfaceView.setPreserveEGLContextOnPause. While preserving the EGL context is not guaranteed to be supported, it is widely available on modern Android devices.
As for model loading speed - you're treading dangerously into 'opinion-based' territory. But a model format laid out exactly as your GLES buffers expect on the device could be streamed directly from disk, without any modification - so that would likely be your fastest loading solution. However, many developers use some other format (e.g. FBX, OBJ, etc.) because those are more flexible and export directly from DCC tools.
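To sketch what "laid out exactly as your GLES buffers expect" could mean in practice (the header layout and field names here are made up, not any standard format):

    #include <GLES2/gl2.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical binary model file: a tiny header followed by raw vertex
     * and index blobs that go straight into glBufferData, no parsing. */
    struct ModelHeader {
        uint32_t vertex_bytes;   /* size of the vertex blob in bytes */
        uint32_t index_bytes;    /* size of the index blob (GL_UNSIGNED_SHORT) */
    };

    static int load_model(const char *path, GLuint *vbo, GLuint *ibo)
    {
        FILE *f = fopen(path, "rb");
        if (!f) return 0;

        struct ModelHeader hdr;
        if (fread(&hdr, sizeof hdr, 1, f) != 1) { fclose(f); return 0; }

        void *verts = malloc(hdr.vertex_bytes);
        void *indices = malloc(hdr.index_bytes);
        int ok = fread(verts, 1, hdr.vertex_bytes, f) == hdr.vertex_bytes &&
                 fread(indices, 1, hdr.index_bytes, f) == hdr.index_bytes;
        fclose(f);

        if (ok) {
            /* The blobs are uploaded as-is - no per-vertex conversion. */
            glGenBuffers(1, vbo);
            glBindBuffer(GL_ARRAY_BUFFER, *vbo);
            glBufferData(GL_ARRAY_BUFFER, hdr.vertex_bytes, verts, GL_STATIC_DRAW);

            glGenBuffers(1, ibo);
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, *ibo);
            glBufferData(GL_ELEMENT_ARRAY_BUFFER, hdr.index_bytes, indices, GL_STATIC_DRAW);
        }

        free(verts);
        free(indices);
        return ok;
    }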

Accessing the memory of the default framebuffer on Android

I have a setup with OpenGL ES 2.0 and EGL on Android 4.4.2 (API level 19).
My goal is to access the buffer of the window (the default framebuffer in OpenGL terms) directly from the CPU / user space.
I have tried using ANativeWindow_fromSurface to get an ANativeWindow from the Surface of a GLSurfaceView. Trying to access the buffer with ANativeWindow_lock then fails with status -22. Logcat gives
03-25 10:50:25.363: E/BufferQueue(171): [SurfaceView](this:0xb8d5d978,id:32,api:1,p:6488,c:171) connect: already connected (cur=1, req=2)
From this discussion it seems you can't do that with GLSurfaceView, because EGL has already acquired the surface.
How could you get to the memory of the window? Can you somehow do it through an EGLSurface? I am willing to use android::GraphicBuffer, even though it is not part of the NDK.
If this is not possible, can you go the other direction, by first creating an android::GraphicBuffer and then binding it to an EGLSurface and the displayed window?
Android devices may not have a framebuffer (i.e. /dev/graphics/fb). It's still widely used by the recovery UI, but it's being phased out.
If it does have a framebuffer, it will be opened and held by the Hardware Composer unless the app framework has been shut down. Since you're trying to use the NDK, I assume the framework is still running.
If your NDK code is running as root or system, you can request a top window from SurfaceFlinger. The San Angeles demo provides an example.
Additional information can be found here, here, and here. If you want to work with graphics at a low level, you should also read the graphics architecture doc.
This is not doable with just the NDK API; you will need to pull in some OS headers that are not guaranteed to be stable.
You will need to subclass ANativeWindow, similarly to what is done in frameworks/native/include/ui/FramebufferNativeWindow.h.
However, you may need to construct your own buffer queue using your own android::GraphicBuffer objects, and properly respond to all dequeue() and enqueue() requests.
On enqueue() you will need to sync (the GPU renders asynchronously) and then map the enqueued buffer to CPU memory.
Note that this approach may perform poorly, due to the explicit GPU<->CPU sync needed.
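For the sync part, one option (sketch only) is an EGL fence via the EGL_KHR_fence_sync extension, falling back to glFinish() where it is unavailable; only after this returns is it safe to read the buffer from the CPU:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>

    /* Block until the GPU has finished all rendering submitted so far. */
    static void wait_for_gpu(EGLDisplay dpy)
    {
        PFNEGLCREATESYNCKHRPROC createSync =
            (PFNEGLCREATESYNCKHRPROC)eglGetProcAddress("eglCreateSyncKHR");
        PFNEGLCLIENTWAITSYNCKHRPROC clientWaitSync =
            (PFNEGLCLIENTWAITSYNCKHRPROC)eglGetProcAddress("eglClientWaitSyncKHR");
        PFNEGLDESTROYSYNCKHRPROC destroySync =
            (PFNEGLDESTROYSYNCKHRPROC)eglGetProcAddress("eglDestroySyncKHR");

        if (createSync && clientWaitSync && destroySync) {
            EGLSyncKHR sync = createSync(dpy, EGL_SYNC_FENCE_KHR, NULL);
            clientWaitSync(dpy, sync, EGL_SYNC_FLUSH_COMMANDS_BIT_KHR,
                           EGL_FOREVER_KHR);
            destroySync(dpy, sync);
        } else {
            glFinish();  /* heavyweight, but always available */
        }
    }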

How to show an image without SurfaceFlinger on Android

How can you draw something before SurfaceFlinger starts on Android?
In this situation it is essentially traditional Linux with a framebuffer device, so accessing the framebuffer directly should be OK.
What about using HWComposer directly, and what about using EGL directly?
If SurfaceFlinger isn't running, you can just open the framebuffer device and write to it (assuming you're on a device that has a framebuffer device).
For an example of this, see the implementation of the "recovery" UI. The key file there is minui/graphics.c. The current implementation relies on libpixelflinger.
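For illustration, the fbdev path that minui uses boils down to roughly this (standard Linux fbdev ioctls; error handling is trimmed and the solid fill stands in for real drawing):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Only works while nothing else (SurfaceFlinger/HWComposer) holds
         * the device. The node is /dev/graphics/fb0 on Android. */
        int fd = open("/dev/graphics/fb0", O_RDWR);
        if (fd < 0) return 1;

        struct fb_var_screeninfo vi;
        struct fb_fix_screeninfo fi;
        ioctl(fd, FBIOGET_VSCREENINFO, &vi);
        ioctl(fd, FBIOGET_FSCREENINFO, &fi);

        size_t size = fi.line_length * vi.yres;
        uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) { close(fd); return 1; }

        /* Assume a 32bpp mode and fill the visible frame with opaque gray. */
        for (uint32_t y = 0; y < vi.yres; y++) {
            uint32_t *row = (uint32_t *)(fb + y * fi.line_length);
            for (uint32_t x = 0; x < vi.xres; x++)
                row[x] = 0xFF808080u;
        }

        munmap(fb, size);
        close(fd);
        return 0;
    }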
Using OpenGL ES / EGL will be a bit trickier. Some of the early GLES tests, such as San Angeles, use the FramebufferWindow class, but that uses a fair bit of the framework. (FWIW, an upcoming release is expected to deprecate FramebufferWindow and switch the OpenGL tests that use it to a new library that talks to SurfaceFlinger.)
Update: the upcoming release happened, and you can see the replacement for FramebufferWindow ("WindowSurface") here.
If SurfaceFlinger isn't running you can talk to HardwareComposer directly, using the internal interface. There are some old tests that exercise it, but I don't know if they still work. The code in SurfaceFlinger is probably a better example at this point. Only one process can open HardwareComposer at a time, so SurfaceFlinger must not be running.
