EGLImages with renderbuffer as source sibling and texture as target sibling - android

I am trying to setup an EGLImage where the source sibling is a GL_RENDERBUFFER (the EGLClientBuffer specified as an argument to eglCreateImageKHR). In another context, I create a GL_TEXTURE_2D and specify it as the target sibling of the EGLImage by using glEGLImageTargetTexture2DOES. Unfortunately, the latter call leads to a GL_INVALID_OPERATION. If both the source and target siblings are GL_TEXTURE_2D's, the setup works like a charm.
From my reading of the specification, this should be a permissible operation. It is also possible that my reduced test case has some other, orthogonal issue, though I doubt it since the setup works fine when both the source and target siblings are GL_TEXTURE_2Ds. Assuming this kind of usage of EGLImages is permissible, what could possibly lead to a GL_INVALID_OPERATION? Or am I just mistaken in my interpretation of the specification?
Referenced Extensions:
http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image.txt
http://www.khronos.org/registry/egl/extensions/KHR/EGL_KHR_image_base.txt
Clarifications:
I do check for the presence of all extensions in the specification (EGL_KHR_image, EGL_KHR_image_base, EGL_KHR_gl_texture_2D_image, EGL_KHR_gl_renderbuffer_image, etc.).
I also realize that there may be differences in the internal format of the EGLImage when I use a GL_RENDERBUFFER rather than a GL_TEXTURE_2D as the source. So I tried the OES_EGL_image_external extension, first with the texture as the source and then with the renderbuffer. The texture works fine as always; the renderbuffer produces the same GL_INVALID_OPERATION. Binding as an external image makes no difference to the error generated.
Both GL and EGL errors are checked after each call.
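For reference, a minimal sketch of the setup described above (in C; the display, the two contexts, the sizes and the extension entry points are assumed to be set up already, and the GL_RGBA4 format is just illustrative):

```c
#include <stdint.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* eglCreateImageKHR and glEGLImageTargetTexture2DOES are assumed to be
   resolved via eglGetProcAddress (or the *_PROTOTYPES defines). */

/* Call with context A current: create the renderbuffer source sibling and
   wrap it in an EGLImage. */
static EGLImageKHR create_image_from_renderbuffer(EGLDisplay dpy, EGLContext ctxA,
                                                  GLsizei w, GLsizei h)
{
    GLuint rbo;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, w, h);
    return eglCreateImageKHR(dpy, ctxA, EGL_GL_RENDERBUFFER_KHR,
                             (EGLClientBuffer)(intptr_t)rbo, NULL);
}

/* Call with context B current: create the texture target sibling. */
static GLuint create_texture_target(EGLImageKHR image)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* Generates GL_INVALID_OPERATION when the source sibling is a renderbuffer,
       but works when the source sibling is a GL_TEXTURE_2D. */
    glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
    return tex;
}
```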

I'm afraid this could be a legitimate failure point. A GL_INVALID_OPERATION error can arise if the driver is unable to create a texture from the EGLImage supplied.
http://www.khronos.org/registry/gles/extensions/OES/OES_EGL_image.txt
If the GL is unable to specify a texture object using the supplied eglImageOES (if, for example, <image> refers to a multisampled eglImageOES), the error INVALID_OPERATION is generated.
Do you call glFramebufferRenderbufferOES with the renderbuffer before passing it to eglCreateImageKHR? If so, I suggest you try tweaking how you create your renderbuffer (e.g. try a different format or size) to pin down which conditions produce this error.

I spent a lot more time on this, and on EGLImages in general, after running into this issue. alonorbach's hypothesis was correct.
If, for any reason, the driver cannot create a texture sibling from the supplied EGLImage, a rather ambiguous GL_INVALID_OPERATION is returned. I was under the impression that if I was able to create a valid EGLImage (i.e., not EGL_NO_IMAGE_KHR) and the appropriate extensions were supported, I would be able to bind to it using either a renderbuffer or texture sibling (per GL_OES_EGL_image). This is certainly not the case. It also seems to vary a lot from device to device: I was able to get this working on NVIDIA Tegra units but not on the Adreno 320.
Does this mean it is impossible to reliably use EGLImages on Android? Not quite. For the specific issue I was running into, I was able to bind a texture sibling to an EGLImage created from a renderbuffer source by specifying GL_RGBA8_OES (from the GL_OES_rgb8_rgba8 extension) as the internal format of the source renderbuffer (argument 2 to glRenderbufferStorage). This is less than ideal, but it proves the point that the internal formats of the source and target siblings must somehow match, and, in case of a mismatch, the driver is under no obligation to accommodate the variation and is free to give up.
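Concretely, the only change relative to the failing setup was the internal format passed as the second argument to glRenderbufferStorage (a sketch; check for GL_OES_rgb8_rgba8 in the extension string before relying on GL_RGBA8_OES):

```c
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   /* GL_RGBA8_OES, from GL_OES_rgb8_rgba8 */

/* Source renderbuffer for the EGLImage: an 8-bit-per-channel internal format
   is what allowed the texture sibling to bind on the Adreno 320 in my case. */
static GLuint create_source_renderbuffer(GLsizei w, GLsizei h)
{
    GLuint rbo;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, w, h);  /* instead of GL_RGBA4 etc. */
    return rbo;
}
```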
Another source of entropy when trying to use EGLImages successfully (at least generically on Android) is the way in which the EGLImages themselves are created. I found that it was far more reliable to create an EGLImage using the EGL_NATIVE_BUFFER_ANDROID target specified in the EGL_ANDROID_image_native_buffer extension. If such extensions are available for your platform, it is highly advisable to use them in a fallback manner.
In summary, the approach that seems to work reliably for me is to first try to create a valid EGLImage using any and all available extensions, in a fallback manner. Then, using that EGLImage, try binding to the desired target sibling kind. If this pair of operations yields no errors, that EGLImage/target-kind pair is supported on that device and can be used for subsequent operations. If either operation fails, the next item in the fallback chain is checked, as sketched below. If all else fails, you should probably have a code path that does not use EGLImages at all. I have not encountered such an Android device yet (fingers crossed).
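A rough sketch of that probing logic, with hypothetical function-pointer hooks standing in for the individual extension paths:

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>

typedef struct {
    EGLImageKHR (*create)(void);              /* one creation path, e.g. EGL_NATIVE_BUFFER_ANDROID
                                                 or EGL_GL_RENDERBUFFER_KHR */
    GLboolean   (*bind_target)(EGLImageKHR);  /* one target kind, e.g. texture or renderbuffer sibling */
} ImagePath;

/* Probe each creation/target pair once and remember the first one that works
   on this device; -1 means fall back to a code path without EGLImages. */
static int pick_working_path(const ImagePath *paths, int count)
{
    for (int i = 0; i < count; ++i) {
        EGLImageKHR img = paths[i].create();
        if (img == EGL_NO_IMAGE_KHR)
            continue;
        GLboolean ok = paths[i].bind_target(img);
        ok = ok && glGetError() == GL_NO_ERROR && eglGetError() == EGL_SUCCESS;
        eglDestroyImageKHR(eglGetCurrentDisplay(), img);
        if (ok)
            return i;
    }
    return -1;
}
```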
I am still discovering corner cases and optimizations and will keep this answer updated with findings.

How do I use GL_MAP_PERSISTENT_BIT in OpenGL ES 3.1 on Android?

I recently switched from using glBufferData to glMapBufferRange which gives me direct access to GPU memory rather than copying the data from CPU to GPU every frame.
This works just fine and in OpenGL ES 3.0 I do the following per frame:
Get a pointer to my GPU buffer memory via glMapBufferRange.
Directly update my buffer using this pointer.
Use glUnmapBuffer to unmap the buffer so that I can render.
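In code (shown here in C; the GLES30 Java bindings expose the same calls), that per-frame flow looks roughly like this, with illustrative buffer names and flags:

```c
#include <string.h>
#include <GLES3/gl3.h>

/* vbo is assumed to have been allocated earlier with
   glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_DYNAMIC_DRAW). */
static void update_and_draw(GLuint vbo, const void *vertices, GLsizeiptr size, GLsizei count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* 1. Get a pointer to the buffer memory. */
    void *ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, size,
                                 GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT);
    /* 2. Update the buffer directly through the pointer. */
    memcpy(ptr, vertices, (size_t)size);
    /* 3. Unmap so the buffer can be used for rendering. */
    glUnmapBuffer(GL_ARRAY_BUFFER);

    glDrawArrays(GL_TRIANGLES, 0, count);
}
```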
But some Android devices may have at least OpenGL ES 3.1 and, as I understand it, may also have the EXT_buffer_storage extension (please correct me if that's the wrong extension). Using this extension it's possible to set up persistent buffer pointers which do not require mapping/unmapping every frame, using the GL_MAP_PERSISTENT_BIT flag. But I can't figure out, or find much online about, how to access these features.
How exactly do I invoke glMapBufferRange with GL_MAP_PERSISTENT_BIT set in OpenGL ES 3.1 on Android?
Examining glGetString(GL_EXTENSIONS) does seem to show the extension is present on my device, but I can't find GL_MAP_PERSISTENT_BIT anywhere, e.g. in GLES31 or GLES31Ext, and I'm just not sure how to proceed.
The standard Android Java bindings for OpenGL ES only expose extensions that are guaranteed to be supported by all implementations on Android. If you want to expose less universally available vendor extensions you'll need to roll your own JNI bindings, using eglGetProcAddress() from native code compiled with the NDK to fetch the entry points.
For this one you want the extension entry point glBufferStorageEXT().
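A hedged sketch of what the native side might look like (the type and constants come from GLES2/gl2ext.h; the flag combination and the write-only usage are assumptions about your use case):

```c
#include <EGL/egl.h>
#include <GLES3/gl31.h>
#include <GLES2/gl2ext.h>   /* PFNGLBUFFERSTORAGEEXTPROC, GL_MAP_PERSISTENT_BIT_EXT, ... */

/* Returns a pointer that stays valid across frames, or NULL if the extension
   is unavailable. Call once, with the GL context current. */
static void *map_persistent(GLuint vbo, GLsizeiptr size)
{
    /* The Java bindings don't expose this entry point, so fetch it ourselves. */
    PFNGLBUFFERSTORAGEEXTPROC glBufferStorageEXT =
        (PFNGLBUFFERSTORAGEEXTPROC)eglGetProcAddress("glBufferStorageEXT");
    if (!glBufferStorageEXT)
        return NULL;   /* EXT_buffer_storage not actually supported */

    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* Immutable storage that permits a persistent, coherent write mapping. */
    const GLbitfield flags = GL_MAP_WRITE_BIT |
                             GL_MAP_PERSISTENT_BIT_EXT |
                             GL_MAP_COHERENT_BIT_EXT;
    glBufferStorageEXT(GL_ARRAY_BUFFER, size, NULL, flags);

    /* Map once; no per-frame glMapBufferRange/glUnmapBuffer needed afterwards. */
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
}
```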

AOSP / Android 7: How is EGL utilized in detail?

I am trying to understand the Android (7) graphics system from the system integrator's point of view. My main focus is the minimum functionality that needs to be provided by libegl.
I understand that SurfaceFlinger is the main actor in this domain. SurfaceFlinger initializes EGL, creates the actual EGL surface, and acts as a consumer for buffers (frames) created by the app. The app, in turn, executes the main part of the required GLES calls. Obviously, this leads to restrictions, as SurfaceFlinger and apps live in separate processes, which is not the typical use case for GLES/EGL.
Things I do not understand:
Do apps on Android 7 always render into EGL_KHR_image buffers which are then sent to SurfaceFlinger? This would mean there's always an extra copy step (even when no composition is needed), as far as I understand... Or is there also some kind of optimized fullscreen mode where apps render directly into the final EGL surface?
Which inter-process sharing mechanisms are used here? My guess is that EGL_KHR_image, used with EGL_NATIVE_BUFFER_ANDROID, defines the exact binary format, so that an image object may be created in each process, with the memory shared via ashmem. Is this already the complete/correct picture, or am I missing something here?
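For what it's worth, the EGL_ANDROID_image_native_buffer path mentioned above looks roughly like this from native code (a sketch; where the ANativeWindowBuffer comes from, e.g. a gralloc allocation via GraphicBuffer in the AOSP internals, is left out, and the attribute list is just a common choice):

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>   /* EGL_NATIVE_BUFFER_ANDROID, EGLImageKHR */

/* 'native_buffer' is an ANativeWindowBuffer* cast to EGLClientBuffer; the
   underlying gralloc handle is what can be shared across processes. */
static EGLImageKHR wrap_native_buffer(EGLDisplay dpy, EGLClientBuffer native_buffer)
{
    static const EGLint attribs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
    /* The context must be EGL_NO_CONTEXT for EGL_NATIVE_BUFFER_ANDROID sources. */
    return eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                             EGL_NATIVE_BUFFER_ANDROID, native_buffer, attribs);
}
```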
I'd guess these are the main points I currently lack confident knowledge about. For sure, I have some follow-up questions (like: how do gralloc and composition fit into this?), but, in keeping with this platform, I'd like to keep this question as compact as possible. Still, besides the main documentation page, I am missing documentation clearly targeted at system integrators, so further links would be really appreciated.
My current focus is on the typical use cases that cover the vast majority of apps compatible with Android 7. If there are corner cases like long-deprecated compatibility shims, I'd like to ignore them for now.

Can Three.js render accurately without certain WebGL Extensions?

In a recent build of Samsung's OS (I'm testing with a Galaxy S6) Chrome disables WebGL. The reason for this is documented here. I'm running Chrome without the blacklist using this flag:
--ignore-gpu-blacklist
These are the errors that Three.js spits out at me. I'm curious to know if Three.js can successfully run/render if these OES extensions don't exist:
OES_texture_float
OES_texture_float_linear
OES_texture_half_float
OES_texture_half_float_linear
OES_texture_filter_anisotropic
If so, how would I go about altering Three.js so that this could work? If not, where can I read up further on these extensions so I can get a better understanding of how they work?
All five of these extensions deal with textures in the following ways:
The first four extensions support floating-point textures, that is, textures whose components are floating-point values:
- OES_texture_float means 32-bit floating-point textures with nearest-neighbor filtering.
- OES_texture_float_linear means 32-bit floating-point textures with linear filtering.
- OES_texture_half_float means 16-bit floating-point textures with nearest-neighbor filtering.
- OES_texture_half_float_linear means 16-bit floating-point textures with linear filtering.
See texture_float, texture_float_linear, and ARB_texture_float (which OES_texture_float_linear is based on) in the OpenGL ES Registry.
Three.js checks for these extensions (and outputs error messages if necessary) in order to enable their functionality.
The last extension (actually called EXT_texture_filter_anisotropic) provides support for anisotropic filtering, which can give better quality when filtering some kinds of textures.
Registry: https://www.khronos.org/registry/gles/extensions/EXT/texture_filter_anisotropic.txt
This page includes a visualization of this filtering.
Here too, Three.js checks for this extension to see if it can be used.
For all these extensions, it depends on your application whether it makes sense to "go about altering Three.js". For instance, does your application require floating-point textures for some of its effects? (You can check for that by checking if you use THREE.HalfFloatType or THREE.FloatType.)
Although Three.js checks for these extensions, it doesn't inherently rely on them in order to work; only a few of its examples require their use. Therefore, the issue is not so much modifying Three.js as it is modifying your application. Nonetheless, here, in WebGLExtensions.js, is where the warning is generated.

Is compiling a shader during rendering a good / valid practice in OpenGL ES?

System: Android 4.0.3, OpenGL ES 2.0
Problem: When glAttachShader is invoked after the first frame has already been rendered with another program / shader, some devices (Galaxy S3) crash with a "GL_INVALID_VALUE" error (no further details are available in the error stack). Other devices (Asus eee TF101) are perfectly fine with that. The error does not always occur and sometimes it's also a "GL_INVALID_ENUM" instead.
If I force all shaders to be compiled right at the first call to onDrawFrame, it works on all (my) devices.
Questions:
Are there states in which the OpenGL (ES) machine is incapable of compiling a shader?
Is it possible that bound buffers, textures or enabled attribute arrays interfere with attaching a shader to a program?
If so, what is the ideal state one must ensure before attaching shaders and linking the program?
Is it even valid to compile shaders after other objects have already been rendered with other shaders?
Background: I'm developing an Android library that will allow me to use OpenGL graphics in a more object-oriented way (using objects like "scene", "material", "model", etc.), ultimately to make writing games easy. The scenes, models, etc. are created on a thread different from the one owning the GL context. Only when onDrawFrame encounters one of these objects does it do the buffer object binding, texture binding and shader compilation, on the right thread.
I would like to avoid compiling all shaders at the beginning of my code. The shader source is assembled depending on the requirements of the material, the model and the scene (e.g. material: include bump mapping; model: include matrix-palette skinning; scene: include fog). When a model is removed from a scene, I'm going to delete its shader again, and if I add another model, the new shader should be compiled ad hoc.
At this point I'm trying to be as concise as possible without posting code - you can imagine that extracting the relevant parts from this library is difficult.
It is perfectly valid to compile during rendering, although it is discouraged, as the driver needs to take resources (CPU) for that. Some driver states may trigger a shader recompile on the driver side, as some states are injected into the shader. It is wise to reorganize your draw calls into chunks sharing the same driver state (preferably grouped by shader program, as switching programs is one of the most expensive operations done by the driver).
TIP: Be sure to actually use all variables, uniforms and attributes declared in your shader; otherwise the Mali driver removes them during compilation, and when you try to get a uniform location, an attribute location and so on, the driver returns GL_INVALID_VALUE.
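A defensive pattern for that (a sketch; u_fogColor is just an example uniform name, and the program is assumed to be linked and made current with glUseProgram):

```c
#include <GLES2/gl2.h>
#include <android/log.h>

/* Uniforms and attributes the shader never actually uses may be optimized
   away, so treat a missing location as "optimized out" rather than passing
   a bogus location on to later calls. */
static void set_fog_color(GLuint program)
{
    GLint loc = glGetUniformLocation(program, "u_fogColor");  /* hypothetical uniform */
    if (loc < 0) {
        __android_log_print(ANDROID_LOG_WARN, "GL",
                            "u_fogColor inactive (unused or optimized out?)");
        return;
    }
    glUniform4f(loc, 0.5f, 0.5f, 0.5f, 1.0f);
}
```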
Hope that helps.
If you copy the BasicGLSurfaceView sample code that comes with the Android development kit to start your project, then the first call to checkGlError is after attaching the vertex shader. However, you might have used an invalid value or enum a lot earlier, or in a different location in your code. But this will only be picked up by this call, after glAttachShader.
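To localize the offending call while debugging, it helps to check glGetError() after every GL call, along the lines of the sample's checkGlError (sketched here in C rather than the sample's Java):

```c
#include <GLES2/gl2.h>
#include <android/log.h>

/* Drains all pending error flags and reports which operation they were
   detected after. Sprinkle liberally while debugging, strip for release. */
static void check_gl_error(const char *op)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError()) {
        __android_log_print(ANDROID_LOG_ERROR, "GL", "%s: glError 0x%x", op, err);
    }
}

/* usage: glAttachShader(program, vertexShader); check_gl_error("glAttachShader"); */
```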
In my case I deleted a texture which was still attached as the render target of a framebuffer. My older, slower Android device compiled the shader before the delete; my newer device somehow managed to call glFramebufferTexture2D before compiling the shader. The whole thing somehow comes down to queueEvent and my poor understanding of thread safety.
Thanks for your efforts, TraxNet and Prateek Nina.

How to properly use glDiscardFramebufferEXT

This question relates to the OpenGL ES 2.0 Extension EXT_discard_framebuffer.
It is unclear to me which cases justify the use of this extension. If I call glDiscardFramebufferEXT() and it puts the specified attachable images in an undefined state, this means that either:
- I don't care about the content anymore since it has been used with glReadPixels() already,
- I don't care about the content anymore since it has been used with glCopyTexSubImage() already,
- I shouldn't have made the render in the first place.
Clearly, only the first two cases make sense. Or are there other cases in which glDiscardFramebufferEXT() is useful? If so, which ones?
glDiscardFramebufferEXT is a performance hint to the driver. Mobile GPUs use tile-based (deferred) rendering. In that context, marking a framebuffer attachment as discarded saves the GPU work and memory bandwidth, because it does not need to write that attachment back to main memory.
Typically you will discard:
- the depth buffer, as it is not presented on screen; it is only used during rendering on the GPU.
- the MSAA buffer, as it is resolved to a smaller buffer for presentation on screen.
More generally, any buffer that is only used for rendering on the GPU should be discarded so it is not written back to main memory.
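For example, after the draw calls of an off-screen pass whose color result has already been consumed, you might discard the depth and stencil attachments (a sketch; fbo and the attachment list depend on your setup, and glDiscardFramebufferEXT is assumed to be resolved via eglGetProcAddress or the gl2ext.h prototypes):

```c
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>   /* GL_EXT_discard_framebuffer */

static void discard_depth_stencil(GLuint fbo)
{
    static const GLenum discards[] = { GL_DEPTH_ATTACHMENT, GL_STENCIL_ATTACHMENT };
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    /* Hint to the tiler that these attachments need not be written back to memory. */
    glDiscardFramebufferEXT(GL_FRAMEBUFFER, 2, discards);
}
```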
The main situation where I've seen DiscardFramebuffer used is when you have a multisampled renderbuffer that you have just resolved to a texture using BlitFramebuffer or ResolveMultisampleFramebufferAPPLE (on iOS), in which case you no longer care about the contents of the original buffer.
