I found some information about the Android graphics stack and saw this picture:
But I cannot understand why OpenGL is used twice. Is it possible to use OpenGL only once, after SurfaceFlinger? And at what point in this picture does the EGL library execute?
OpenGL ES can be used for drawing both 2D and 3D shapes in current systems. In the picture you are showing, in the application stack, GL can be used by the application to render objects. Say this output goes to a buffer "B". There can be many such applications, so they all create buffers, say B1, B2, B3. Now there needs to be some framework that is responsible for deciding which of these buffers, or what combination of them, gets shown on the display screen. This is popularly called a "compositor". In the compositor, GL is again used to put the content onto the display.
So, GL can be used in both applications and compositors, which is what is shown in the stack above.
EGL is an API (from Khronos, like OpenGL and OpenGL ES) for interfacing with the window system, in this case the Android window system. It creates the buffers B1, B2, etc., into which applications can draw, and also the final display buffers.
So EGL creates/manages the buffers and the display, while GL is a platform-independent API responsible for the 2D/3D drawing. Hope this helps.
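To make that division of labour concrete, here is a minimal sketch using Android's EGL14 bindings. It assumes display, config and context were already created with the usual eglInitialize / eglChooseConfig / eglCreateContext sequence, and surfaceView is a hypothetical SurfaceView; error checking is omitted, and in real code the window surface would be created once, not per frame:

    import android.opengl.*;
    import android.view.SurfaceView;

    void renderOneFrame(EGLDisplay display, EGLConfig config, EGLContext context,
                        SurfaceView surfaceView) {
        // EGL's job: wrap the window's buffer in a surface the GL driver can target.
        EGLSurface windowSurface = EGL14.eglCreateWindowSurface(
                display, config, surfaceView.getHolder().getSurface(),
                new int[] { EGL14.EGL_NONE }, 0);
        EGL14.eglMakeCurrent(display, windowSurface, windowSurface, context);

        // GL's job: draw into the currently bound buffer.
        GLES20.glClearColor(0f, 0f, 0f, 1f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        // EGL again: hand the finished buffer to the compositor (SurfaceFlinger).
        EGL14.eglSwapBuffers(display, windowSurface);
    }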
I'm using a Texture widget, rendering its content from native code using OpenGL ES. In native code I call ANativeWindow_fromSurface and from that create an EGL surface. AIUI what happens is:
The ANativeWindow represents the producer side of a buffer queue.
Calling eglSwapBuffers causes a texture to be sent to this queue.
Flutter receives the texture and renders it using Skia when the TextureLayer is painted.
The texture is scaled to match the size of the TextureLayer (the scaling happens in AndroidExternalTextureGL::Paint()).
I'm trying to figure out how to synchronise the OpenGL rendering. I think I can use the choreographer to synchronise with the display vsync, but I'm unclear on how much latency this bufferqueue-then-render-with-skia mechanism introduces. I don't see any means to explicitly synchronise my native code's generation of textures with the TextureLayer's painting of them.
The scaling appears to be a particularly tricky aspect. I would like to avoid it entirely, by ensuring that the textures the native code generates are always of the right size. However there doesn't appear to be any direct link between the size of the TextureLayer and the size of the Surface/ANativeWindow. I could use a SizeChangedLayoutNotifier (or one of various alternative hacks) to detect changes in the size and communicate them to the native code, but I think this would lag by at least a frame so scaling would still take place when resizing.
I did find this issue, which talks about similar resizing challenges, but in the context of using an OEM web view. I don't understand Hixie's detailed proposal in that issue, but it appears to be specific to embedding of OEM views so I don't think it would help with my case.
Perhaps using a Texture widget here is the wrong approach. It seems to be designed mainly for displaying things like videos and camera previews. Is there another way to host natively rendered, interactive OpenGL graphics in Flutter?
I want to do image processing on a raw image without displaying it on screen (I want to do some calculations based on the image data and display some results on screen based on these calculations.) I found an interesting answer to this question, shown here:
Do your actual processing on the GPU: Set up an OpenGL context (OpenGL ES 2 tutorial), and create a SurfaceTexture object in that context. Then pass that object to setPreviewTexture, and start preview. Then, in your OpenGL code, you can call SurfaceTexture.updateTexImage, and the texture ID associated with the SurfaceTexture will be updated to the latest preview frame from the camera. You can also read back the RGB texture data to the CPU for further processing using glReadPixels, if desired.
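For concreteness, a rough sketch of the quoted approach using the old android.hardware.Camera API (the helper methods below are hypothetical; a GL context must already be current on the calling thread):

    import android.graphics.SurfaceTexture;
    import android.hardware.Camera;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    // One-time setup: bind the camera preview to an "external" OES texture.
    SurfaceTexture startPreview(Camera camera) throws IOException {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        SurfaceTexture surfaceTexture = new SurfaceTexture(tex[0]);
        camera.setPreviewTexture(surfaceTexture);
        camera.startPreview();
        return surfaceTexture;
    }

    // Per frame, on the GL thread: latch the newest camera frame, render it
    // into a framebuffer, then read the pixels back for CPU-side processing.
    ByteBuffer readFrame(SurfaceTexture surfaceTexture, int width, int height) {
        surfaceTexture.updateTexImage();
        // ...draw the external texture into a framebuffer here...
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4);
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        return pixels;
    }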
I have a question on how to go about implementing it though.
Do I need to create a GLSurfaceView and a Renderer? I don't actually want to use OpenGL to draw anything on screen, so I am not sure if I need them. From what I have read online, though, they seem essential for setting up an OpenGL context. Any pointers anybody can give me on this?
You don't have to use a GLSurfaceView. GLSurfaceView is a convenience class written purely in Java. It simplifies the setup part for applications that want to use OpenGL rendering in Android, but all of its functionality is also available through lower level interfaces in the Android frameworks.
For purely offscreen rendering, you can use the EGL interfaces to create contexts, surfaces, etc. Somewhat confusingly, there are two versions in completely different parts of the Android frameworks:
EGL10 and EGL11 in the javax.microedition.khronos.egl package, available since API level 1.
EGL14 in the android.opengl package, available since API level 17.
They are fairly similar, but the newer EGL14 obviously has some more features. If you're targeting at least API level 17, I would go with the newer version.
Using the methods in the EGL14 class, you can then create contexts, surfaces, etc. For offscreen rendering, one option is to create a Pbuffer surface for rendering. To complete the setup, you will typically use functions like:
eglInitialize
eglChooseConfig
eglCreatePbufferSurface
eglCreateContext
eglMakeCurrent
The Android documentation does not really describe these functions, but you can find the actual documentation at the Khronos web site: https://www.khronos.org/registry/egl/sdk/docs/man/.
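Put together, the setup might look roughly like this (a minimal sketch assuming OpenGL ES 2 and an arbitrary 640x480 Pbuffer; all error checking omitted):

    import android.opengl.*;

    EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
    int[] version = new int[2];
    EGL14.eglInitialize(display, version, 0, version, 1);

    // Any ES 2 config that supports Pbuffer surfaces will do.
    int[] configAttribs = {
            EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
            EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
            EGL14.EGL_RED_SIZE, 8, EGL14.EGL_GREEN_SIZE, 8,
            EGL14.EGL_BLUE_SIZE, 8, EGL14.EGL_ALPHA_SIZE, 8,
            EGL14.EGL_NONE };
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfigs = new int[1];
    EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

    // The offscreen surface; pick whatever size your processing needs.
    int[] surfaceAttribs = { EGL14.EGL_WIDTH, 640, EGL14.EGL_HEIGHT, 480, EGL14.EGL_NONE };
    EGLSurface surface = EGL14.eglCreatePbufferSurface(display, configs[0], surfaceAttribs, 0);

    int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
    EGLContext context = EGL14.eglCreateContext(
            display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

    // GL calls on this thread now render into the Pbuffer.
    EGL14.eglMakeCurrent(display, surface, surface, context);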
Is there any tradeoff to adding some OpenGL to a "serious" (not game) Android app?
The reason I want to use OpenGL is to add some 3D behaviour to a few views.
According to http://developer.android.com/guide/topics/graphics/opengl.html, OpenGL ES 1.0 is available on every Android device and doesn't require any modification of the manifest file, so there should never be compatibility issues.
The only two things I can think of are: 1. maintainability by other developers who don't know OpenGL, and possibly 2. integration problems with other components / poor reusability (although I'm not sure about that).
Is there anything else: unexpected issues, overhead of some sort, complications, etc.?
I'm asking because it seems not to be a very popular practice; people seem to prefer to fake the 3D with 2D, or give up on it. I don't know if that's only because they don't want to learn OpenGL.
I use OpenGL for some visualization in a released app, and I have an uncaught exception handler in place that catches any exception coming from the GLThread and disables OpenGL the next time the app is run, since I had some crash reports from the internals of GLSurfaceView.java coming in from buggier devices. If the 3D rendering is not crucial to your app, this is one approach you can take so that users with those devices can continue to use it.
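A minimal sketch of that approach, installed early (e.g. in Application.onCreate()); the preference and key names here are hypothetical:

    final Thread.UncaughtExceptionHandler previous =
            Thread.getDefaultUncaughtExceptionHandler();
    Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
        @Override
        public void uncaughtException(Thread thread, Throwable e) {
            // GLSurfaceView names its render thread "GLThread <id>".
            if (thread.getName().startsWith("GLThread")) {
                // commit() rather than apply(), since the process is about to die.
                getSharedPreferences("graphics", MODE_PRIVATE)
                        .edit().putBoolean("disable_gl", true).commit();
            }
            if (previous != null) {
                previous.uncaughtException(thread, e); // keep normal crash reporting
            }
        }
    });
    // On the next launch, read "disable_gl" before creating the GLSurfaceView.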
From Android 3.0+ you can also preserve the EGL context by calling GLSurfaceView.setPreserveEGLContextOnPause(true). You'll only really need to do this if your renderer is very expensive to initialize, and it only works if you're not destroying the GLSurfaceView in between (destroying it is the default behavior of an activity when the device is rotated). If you're not loading that many resources, initializing OpenGL is usually fast enough.
From the SurfaceView docs (emphasis mine):
The surface is Z ordered so that it is behind the window holding its SurfaceView; the SurfaceView punches a hole in its window to allow its surface to be displayed. The view hierarchy will take care of correctly compositing with the Surface any siblings of the SurfaceView that would normally appear on top of it. This can be used to place overlays such as buttons on top of the Surface, though note however that it can have an impact on performance since a full alpha-blended composite will be performed each time the Surface changes.
The advantage is that your GL thread can update the screen independently of the UI thread (i.e. it doesn't need to render to a texture and render the texture to the screen); the disadvantage is that something needs to composite your view with the screen. If you're lucky, this can be done in the "hardware composer"; otherwise it is done on the GPU and may be a bit wasteful of GPU resources (see For Butter or Worse: Smoothing Out Performance in Android UIs at 27:32 and 40:23).
If your view is small, it may be better to use a TextureView. This renders to a texture and draws the texture as part of the normal view hierarchy, which might be better, but can increase latency. The downside is that it's only available since API level 14.
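For reference, hosting GL rendering in a TextureView looks roughly like this (a sketch; the render thread and its EGL setup are assumed to live elsewhere):

    TextureView textureView = new TextureView(context);
    textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
            // Hand `st` to your render thread; wrap it in a Surface and create
            // an EGL window surface from it (eglCreateWindowSurface).
        }

        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture st, int width, int height) {
            // Recreate or resize your EGL surface here.
        }

        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture st) {
            // Stop rendering; returning true releases the SurfaceTexture.
            return true;
        }

        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture st) { }
    });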
In OpenGL ES 1.1, I would like to take multiple texture Ids and combine them into a single textureId. Then I would be able to use this resulting texture multiple times in the future. My texture sources could be transparent PNGs that I want to stack together. This would be a huge optimization since I wouldn't have to render multiple textures every frame.
I have seen examples like the wiki Texture_Combiners, but it doesn't seem like the results are reusable.
Also, if there is a way to mask an image with another into a reusable texture, that would be extremely helpful too.
What you want to do is render to texture. If you're writing for iOS you're guaranteed that the OES framebuffer extension will be available, so you can use that. If you're writing for Android or another platform then the extension may be available but isn't guaranteed. If it isn't available you can fall back on glCopyTexImage2D.
So in the first case you'd create a frame buffer which has a texture as its colour buffer. Render to that then switch to another frame buffer and you can henceforth draw from the texture.
In the second you'd draw into whatever frame buffer you have, then use glCopyTexImage2D to copy from the current colour buffer into a texture. This will be a little slower because it's a copy, but it'll still probably be a lot faster than reading back the rendered content and then uploading it yourself.
ES 2.0 makes the functions contained in the framebuffer extension mandatory, so ES 2.0 capable GPUs are very likely to support the extension.
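A minimal sketch of the first path using Android's ES 1.1 bindings (width and height are assumed to be defined; extension and error checks omitted):

    import android.opengl.GLES11;
    import android.opengl.GLES11Ext;

    int[] id = new int[1];

    // Create the texture that will hold the composited result.
    GLES11.glGenTextures(1, id, 0);
    int texture = id[0];
    GLES11.glBindTexture(GLES11.GL_TEXTURE_2D, texture);
    GLES11.glTexImage2D(GLES11.GL_TEXTURE_2D, 0, GLES11.GL_RGBA, width, height,
            0, GLES11.GL_RGBA, GLES11.GL_UNSIGNED_BYTE, null);
    GLES11.glTexParameterf(GLES11.GL_TEXTURE_2D,
            GLES11.GL_TEXTURE_MIN_FILTER, GLES11.GL_LINEAR);

    // Attach it as the colour buffer of an OES framebuffer.
    GLES11Ext.glGenFramebuffersOES(1, id, 0);
    int fbo = id[0];
    GLES11Ext.glBindFramebufferOES(GLES11Ext.GL_FRAMEBUFFER_OES, fbo);
    GLES11Ext.glFramebufferTexture2DOES(GLES11Ext.GL_FRAMEBUFFER_OES,
            GLES11Ext.GL_COLOR_ATTACHMENT0_OES, GLES11.GL_TEXTURE_2D, texture, 0);

    // Draw the stacked PNG textures here; the output lands in `texture`.

    // Switch back to the default framebuffer; draw from `texture` from now on.
    GLES11Ext.glBindFramebufferOES(GLES11Ext.GL_FRAMEBUFFER_OES, 0);

On devices without the extension, the fallback described above would instead draw into the default framebuffer and then call GLES11.glCopyTexImage2D to copy the current colour buffer into the texture.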
I would just like to ask: is SurfaceFlinger always called for any type of drawing to the screen?
Example, displaying of JPG file to the screen.
SurfaceFlinger is not what draws your window. It allocates a frame buffer for your window, which the framework running in your application draws directly to without interacting with SurfaceFlinger. The only interaction SurfaceFlinger is involved with when you draw your window is to composite the final new frame buffer to the screen once you are done drawing a frame.
http://pierrchen.blogspot.jp/2014/02/what-is-surfaceflinger-in-android.html
SurfaceFlinger is an Android system service, responsible for compositing all the application and system surfaces into a single buffer that is finally displayed by the display controller.
Let's zoom in on that statement.
SurfaceFlinger is a system-wide service, but it is not directly available to application developers in the way that the Sensor or other services are. Every time you want to update your UI, SurfaceFlinger will kick in. This explains why SurfaceFlinger is a battery drainer.
Besides your application's surfaces, there are system surfaces, including the status bar, the navigation bar and, when rotation happens, surfaces created by the system for the rotation animation. Most applications have only one active surface - the one of the current foreground activity; others have more than one, when a SurfaceView is used in the view hierarchy or Presentation mode is used.
SurfaceFlinger is responsible for COMPOSITING all those surfaces. A common misunderstanding is that SurfaceFlinger is for DRAWING. That is not correct; drawing is the job of OpenGL. The interesting thing is that SurfaceFlinger uses OpenGL for compositing as well.
The composition result is put into a system buffer, or native window, which is the source the display controller fetches its data from. This is what you see on the screen.
Yes, SurfaceFlinger is Android's compositor, so it takes everything that will be displayed, figures out what the resulting frame will look like, and then sends it off to be shown on the screen via the graphics hardware's EGL interface.
You can get an idea of how it controls the result of everything you see from a post by Android developer Jeff Sharkey, where he tints the whole screen for night mode. I also found a Beamer presentation on this topic that looks good.