How can you draw to the screen before SurfaceFlinger starts on Android?
In this situation the system is essentially a traditional Linux with a framebuffer device, so accessing the framebuffer directly should be OK.
What about using HWComposer directly, or using EGL directly?
If SurfaceFlinger isn't running, you can just open the framebuffer device and write to it (assuming you're on a device that has a framebuffer device).
For an example of this, see the implementation of the "recovery" UI. The key file there is minui/graphics.c. The current implementation relies on libpixelflinger.
Using OpenGL ES / EGL will be a bit more tricky. Some of the early GLES tests, such as San Angeles, use the FramebufferWindow class, but that uses a fair bit of the framework. (FWIW, an upcoming release is expected to deprecate FramebufferWindow and switch the OpenGL tests that use them to a new library that talks to SurfaceFlinger.)
Update: the upcoming release happened, and you can see the replacement for FramebufferWindow ("WindowSurface") here.
If SurfaceFlinger isn't running you can talk to HardwareComposer directly, using the internal interface. There are some old tests that exercise it, but I don't know if they still work. The code in SurfaceFlinger is probably a better example at this point. Only one process can open HardwareComposer at a time, so SurfaceFlinger must not be running.
I am trying to understand the Android (7) graphics system from the system integrator's point of view. My main focus is the minimum functionality that needs to be provided by libegl.
I understand that SurfaceFlinger is the main actor in this domain. SurfaceFlinger initializes EGL, creates the actual EGL surface, and acts as a consumer for buffers (frames) created by the app. The app, in turn, executes the main part of the required GLES calls. Obviously, this leads to restrictions, as SurfaceFlinger and apps live in separate processes, which is not the typical use case for GLES/EGL.
Things I do not understand:
Do apps on Android 7 always render into EGL_KHR_image buffers which are sent to SurfaceFlinger? This would mean there is always an extra copy step (even when no composition is needed), as far as I understand... Or is there also some kind of optimized fullscreen mode where apps render directly into the final EGL surface?
Which inter-process sharing mechanisms are used here? My guess is that EGL_KHR_image, used with EGL_NATIVE_BUFFER_ANDROID, defines the exact binary format, so that an image object may be created in each process, with the memory shared via ashmem. Is this already the complete/correct picture, or am I missing something here?
I'd guess these are the main points I currently lack confident knowledge about. I do have follow-up questions (for example, how do gralloc and composition fit into this?), but in keeping with this platform I'd like to keep this question as compact as possible. Still, beyond the main documentation page, I am missing documentation clearly targeted at system integrators, so further links would be much appreciated.
My current focus is on typical use cases that cover the vast majority of apps compatible with Android 7. If there are corner cases like long-deprecated compatibility shims, I'd like to ignore them for now.
On desktop OSes, OpenGL function loaders like GLEW, GLAD, etc. are used to load functions at runtime. But what about on Android? How are functions loaded? I've looked at a few apps and all of them seem to depend on EGL and GLES. But AFAIK EGL isn't a loading library but an interface; well, an interface to an interface, since GLES is itself an interface.
This leads to another question: How come Android uses EGL when it is generally not used on desktops?
Back when I used Android a bit, you could link against either the GLES 2.0 library or the GLES 3.0 library, so the function pointers were more or less provided for you. If you linked against GLES 3.0 but the phone you ran on only supported 2.0, your app would not load. To work around this, I always linked against GLES 2.0 and wrote my own function loader using eglGetProcAddress to pull in the GLES 3.0 API if it was available. This is pretty much how function loaders on Windows/Linux work (using wglGetProcAddress or glXGetProcAddress).
GLES has always been a stripped-down version of the full-blown desktop GL. It has always targeted a smaller subset of the full API (removing all of the legacy cruft), which in turn simplifies the OpenGL driver code somewhat, which in turn should reduce memory usage and save a little bit of battery life. Basically, it's just better suited to a low-power system where battery life is a concern.
How come Android uses EGL when it is generally not used on desktops?
It was widely used on embedded electronics prior to Android, so I suspect it was path of least resistance for a new smartphone OS as all GPU vendors had an EGL implementation already.
Is it possible to debug the shaders (fragment and vertex) in an Android application with OpenGL ES 2?
Since we only pass a string of code and a bunch of variables to replace with handles, it is very tedious to figure out the proper changes that need to be made.
Is it possible to write to the Android log, as with Log.d()?
Is it possible to use break points and to inspect the current values in the shader calculations?
I am simply not used to writing code with a pen anymore, and that's what coding inside the shader text feels like.
This is an old question but since it appears first in searches and the old answer can be expanded upon, I'm leaving an alternative answer:
While printing or debugging the way we do in Java or Kotlin is not possible, this doesn't mean shaders cannot be debugged. There used to be a tool in the now-deprecated Android Monitor that let you see a trace of your GPU execution frame by frame, including inspecting calls and geometry.
Right now the official GPU debugger is the Android GPU Inspector, which has some useful performance metrics and will include debugging frame by frame in a future update.
If the Android GPU Inspector doesn't have what you need, you can go with vendor-specific debuggers depending on your device (Mali Graphics Debugger, Snapdragon Debugger, etc.)
No. Remember that the GPU is going to execute every program millions of times (once per vertex, and once per fragment), often with hundreds of threads running concurrently, so any concept of "connect a debugger" is pretty much impossible.
I have a setup with OpenGL ES 2.0 and EGL on Android 4.4.2 (API level 19).
My goal is to access the buffer of the window (the default framebuffer in OpenGL terms) directly from the CPU / user space.
I have tried using ANativeWindow_fromSurface to get ANativeWindow from the Surface of a GLSurfaceView. Then trying to get access to the buffer with ANativeWindow_lock fails with status -22. Logcat gives
03-25 10:50:25.363: E/BufferQueue(171): [SurfaceView](this:0xb8d5d978,id:32,api:1,p:6488,c:171) connect: already connected (cur=1, req=2)
From this discussion it seems you can't do that with GLSurfaceView, because EGL has already acquired the surface.
How could you get to the memory of the window? Can you somehow do it through an EGLSurface? I am willing to use android::GraphicBuffer, even though it is not part of the NDK.
If this is not possible, can you use the other direction, by first creating an android::GraphicBuffer and then binding it to an EGLSurface and the displayed window?
Android devices may not have a framebuffer (i.e. /dev/graphics/fb). It's still widely used by the recovery UI, but it's being phased out.
If it does have a framebuffer, it will be opened and held by the Hardware Composer unless the app framework has been shut down. Since you're trying to use the NDK, I assume the framework is still running.
If your NDK code is running as root or system, you can request a top window from SurfaceFlinger. The San Angeles demo provides an example.
Additional information can be found here, here, and here. If you want to work with graphics at a low level, you should also read the graphics architecture doc.
This is not doable with just the NDK API; you will need to pull in some OS headers that are not guaranteed to be stable.
You will need to subclass ANativeWindow, similarly to what is done in frameworks/native/include/ui/FramebufferNativeWindow.h.
However, you may need to construct your own buffer queue using your own android::GraphicBuffer objects, and properly respond to all dequeue() and enqueue() requests.
On enqueue() you will need to sync (the GPU renders asynchronously) and then map the enqueued buffer into CPU memory.
Note that this approach may perform poorly, due to the explicit GPU<->CPU synchronization needed.
Writing directly to the framebuffer no longer works. Is there any way to write to the display in the NDK? I might use ANativeWindow, but that requires an existing surface. Is there a better way? Or is the only way to create a surface natively and then use ANativeWindow?
The display is owned by SurfaceFlinger and Hardware Composer, so unless you're planning to halt the Android framework you will need to work through them. (See the graphics architecture doc for more details.)
If you're developing a stand-alone command that is running as "shell" or "root", and you don't mind using non-public interfaces, you can just ask SurfaceFlinger for a window and draw on that. As of 5.0 "Lollipop" the old GLES tests were updated to work this way. See this answer for pointers; the San Angeles demo is illustrative.
If you're developing a regular app, you have to create a Surface and render to that through ANativeWindow. Regular apps aren't allowed exclusive access to the displays.