I have some code that uses glBlendFuncSeparateOES and glBlendEquationSeparateOES in order to render onto framebuffers with alpha.
However, I've found that a couple of my target devices do NOT appear to support these functions. They fail silently; all that happens is that the blend state doesn't get set. My Kindle Fire cheapie tablet and an older Samsung both exhibit this behavior.
Is there a good way, on Android, to query whether they're actually implemented? I have tried eglGetProcAddress, but it returns a non-NULL address for any string you throw at it!
Currently, I just have the game, on startup, do a quick render on a small FBO to see if the transparency is correct or if it has artifacts. It works, but it's a very kludgy method.
I'd much prefer if there was something like glIsBlendFuncSeparateSupported().
You can get a list of all available extensions using glGetString(GL_EXTENSIONS). This returns a space-separated list of supported extensions; for more details, see the Khronos specification.
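For example, a check along these lines on Android (the helper class here is just illustrative; it must run on the GL thread once a context is current, and the names are the ES 1.1 extensions behind those entry points):

    import android.opengl.GLES10;

    // Sketch: check whether the separate blend func/equation extensions are advertised.
    // Call on the GL thread with a current context (e.g. from onSurfaceCreated).
    public final class BlendCaps {
        public static boolean supportsSeparateBlend() {
            String ext = GLES10.glGetString(GLES10.GL_EXTENSIONS);
            if (ext == null) {
                return false; // no current context
            }
            // Match whole tokens to avoid accidental substring hits.
            boolean func = false, equation = false;
            for (String name : ext.split(" ")) {
                if (name.equals("GL_OES_blend_func_separate")) func = true;
                if (name.equals("GL_OES_blend_equation_separate")) equation = true;
            }
            return func && equation;
        }
    }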
Is there a way to ensure, or at least determine at runtime, that the correct accelerator (CPU or GPU) is used when using the TensorFlow Lite library?
Although I followed the guide and set the Interpreter.Options() object to use a GPU delegate on a device with a GPU (a Samsung S9), it's highly likely to be using the CPU in some cases. For example, if you use a quantized model with a default delegate options object, it will default to the CPU, because quantizedModelsAllowed is set to false. I am almost sure that even though the options object passed to the interpreter had a GpuDelegate, the CPU was used instead. Unfortunately, I had to guess based on inference speed and accuracy.
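A rough sketch of that kind of setup with the TFLite Java API (the model buffer is assumed to be loaded elsewhere; the class name is illustrative):

    import java.nio.MappedByteBuffer;
    import org.tensorflow.lite.Interpreter;
    import org.tensorflow.lite.gpu.GpuDelegate;

    // Sketch: explicitly allow quantized models on the GPU delegate; with a
    // default options object a quantized model quietly runs on the CPU instead.
    public final class GpuInterpreterSetup {
        public static Interpreter create(MappedByteBuffer modelBuffer) {
            GpuDelegate.Options gpuOptions = new GpuDelegate.Options();
            gpuOptions.setQuantizedModelsAllowed(true);
            GpuDelegate gpuDelegate = new GpuDelegate(gpuOptions);
            Interpreter.Options options = new Interpreter.Options().addDelegate(gpuDelegate);
            return new Interpreter(modelBuffer, options);
        }
    }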
There was no warning; I just noticed slower inference time and improved accuracy (because in my case the GPU was acting weirdly, giving me wrong values, and I am trying to figure out why as a separate concern). Currently, I have to guess whether the GPU or CPU is being used and react accordingly. I think there are other cases like this where it falls back to the CPU, but I don't want to guess.
I have heard of AGI (Android GPU Inspector), but it currently only supports three Pixel devices. It would have been nice to use it to see the GPU being used in the profiler. I have also tried Samsung's GPUWatch, which simply does not work (on either OpenGL or Vulkan), as my app doesn't use either of those APIs (it doesn't render anything, it runs TensorFlow!).
I will place my results here after using the benchmark tool:
First, you can see the model running on the CPU without XNNPack:
Second, the model on the CPU with XNNPack:
Third, the model with GPU usage:
And lastly with the Hexagon or NNAPI delegate:
As you can see, the model is being processed by the GPU. I used two randomly selected phones; if you want a particular device, please let me know. Finally, you can download all the results from the benchmark tool here.
Answer from a TensorFlow advocate:
Q= What happens when we set a specific delegate but the phone cannot support it? Let's say I set the Hexagon delegate and the phone cannot use it. Is it going to fall back to CPU usage?
A= It should fall back to the CPU.
Q= What if I set the GPU delegate and it cannot support the specific model? Does it fall back to the CPU, or does it crash?
A= It should also fall back to the CPU, but the tricky thing is that sometimes a delegate "thinks" it can support an op at initialization time and only "realizes" at runtime that it can't support the particular configuration of that op in that model. In such cases, the delegate crashes.
Q= Is there a way to determine which delegate is used at runtime, regardless of what we set?
A= You can look at logcat, or use the benchmark tool to run the model on the particular phone to find out.
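Building on those answers, one pragmatic pattern (a sketch, not official TFLite guidance) is to treat the GPU path as best-effort and fall back to a plain CPU interpreter explicitly, so you always know which path you ended up on:

    import java.nio.MappedByteBuffer;
    import android.util.Log;
    import org.tensorflow.lite.Interpreter;
    import org.tensorflow.lite.gpu.GpuDelegate;

    // Sketch: best-effort GPU interpreter with an explicit, logged CPU fallback.
    public final class DelegateAwareInterpreter {
        public static Interpreter create(MappedByteBuffer model) {
            GpuDelegate gpuDelegate = null;
            try {
                gpuDelegate = new GpuDelegate();
                Interpreter.Options options = new Interpreter.Options().addDelegate(gpuDelegate);
                return new Interpreter(model, options); // may throw if the delegate rejects the model
            } catch (RuntimeException e) {
                if (gpuDelegate != null) {
                    gpuDelegate.close();
                }
                Log.w("DelegateAwareInterpreter", "GPU delegate failed, falling back to CPU", e);
                return new Interpreter(model, new Interpreter.Options());
            }
        }
    }

Note that this only covers failures surfaced when the interpreter is constructed; as the answer above points out, a delegate can still fail later, at inference time, for a particular op configuration.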
As Farmaker mentioned, TFLite's benchmarking & accuracy tooling is the best way for you to judge how a delegate will behave for your use-case (your model & device).
First, use the benchmark tool to check latencies with various configurations (for delegates, use params like --use_gpu=true). See this page for a detailed explanation of the tool and pre-built binaries you can run via adb. You can also pass --enable_op_profiling=true to see which ops from the graph get accelerated by the delegate.
Then, if you want to check accuracy/correctness of a delegate for your model (i.e. whether the delegate behaves like CPU would numerically), look at this documentation for tooling details.
In a recent build of Samsung's OS (I'm testing with a Galaxy S6) Chrome disables WebGL. The reason for this is documented here. I'm running Chrome without the blacklist using this flag:
--ignore-gpu-blacklist
These are the errors that Three.js spits out at me. I'm curious to know if Three.js can successfully run/render if these OES extensions don't exist:
OES_texture_float
OES_texture_float_linear
OES_texture_half_float
OES_texture_half_float_linear
OES_texture_filter_anisotropic
If so, how would I go about altering Three.js so that this could work? If not, where can I read up further on these extensions so I can get a better understanding of how they work?
All five of these extensions deal with textures in the following ways:
The first four extensions support floating-point textures, that is, textures whose components are floating-point values.
OES_texture_float means 32-bit floating-point textures with nearest-neighbor filtering.
OES_texture_float_linear means 32-bit floating-point textures with linear filtering.
OES_texture_half_float means 16-bit floating-point textures with nearest-neighbor filtering.
OES_texture_half_float_linear means 16-bit floating-point textures with linear filtering.
See texture_float, texture_float_linear, and ARB_texture_float (which OES_texture_float_linear is based on) in the OpenGL ES Registry.
Three.js checks for these extensions (and outputs error messages if necessary) in order to enable their functionality.
The last extension (called EXT_texture_filter_anisotropic) provides support for anisotropic filtering, which can give better quality when filtering some kinds of textures.
Registry: https://www.khronos.org/registry/gles/extensions/EXT/texture_filter_anisotropic.txt
This page includes a visualization of this filtering.
Here too, Three.js checks for this extension to see if it can be used.
For all these extensions, it depends on your application whether it makes sense to "go about altering Three.js". For instance, does your application require floating-point textures for some of its effects? (You can check for that by checking if you use THREE.HalfFloatType or THREE.FloatType.)
Although Three.js checks for these extensions, it doesn't inherently rely on them in order to work; only certain examples require their use. Therefore, the issue is not so much modifying Three.js as it is modifying your application. Nonetheless, here, in WebGLExtensions.js, is where the warning is generated.
I'm using my own GLSurfaceView and have been struggling with crashes related to the EGL config chooser for a while.
It seems as though requesting RGB_565 by calling setEGLConfigChooser(5, 6, 5, 0, 16, 0) should be the most widely supported. However, running this code on the emulator with the host GPU, I still get a crash, seemingly because my graphics card does not natively support RGB_565. Setting RGBA_8888 by calling setEGLConfigChooser(8, 8, 8, 8, 16, 0) seems to run fine on my emulator and my HTC Rezound, but I'm still seeing a small number of crash reports in the market.
My guess is that most phones support RGBA_8888 natively now but a small number of my users have phones which are only compatible with RGB_565, which prevents my config chooser from getting a config.
Seeing as how I don't need the alpha channel, is there a right way to try RGBA_8888 first and then fall back to RGB_565? Is there a cleaner way to just ask for any ol' config without caring about the alpha value?
I saw a possible solution to determine ahead of time what the default display config was and request that specifically here: https://stackoverflow.com/a/20918779/234256. Unfortunately, it looks like the suggested getPixelFormat function is deprecated as of API level 17.
In my experience, setEGLConfigChooser() does not actually work correctly, i.e. it has a bug in its implementation. Across a number of devices I have seen crashes where setEGLConfigChooser() fails to select a valid config even if the underlying surface is of the correct type.
I have found the only reliable way to choose an EGL config is with a custom EGLConfigChooser. This also has the added benefit of letting you choose a config based on your own rules, e.g. the surface must have depth and should preferably be RGB888 but can settle for RGB565. It is actually pretty straightforward: use eglChooseConfig() to get a list of possible configurations and then return one that matches your selection criteria.
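A minimal sketch of such a chooser (assuming a GLES 2.0 context; drop the EGL_RENDERABLE_TYPE pair for ES 1.x, and treat the attribute values as just one possible set of rules):

    import javax.microedition.khronos.egl.EGL10;
    import javax.microedition.khronos.egl.EGLConfig;
    import javax.microedition.khronos.egl.EGLDisplay;
    import android.opengl.GLSurfaceView;

    // Sketch: ask for RGBA8888 + 16-bit depth first, then fall back to RGB565.
    public class FallbackConfigChooser implements GLSurfaceView.EGLConfigChooser {
        private static final int EGL_OPENGL_ES2_BIT = 4;

        @Override
        public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
            EGLConfig config = tryConfig(egl, display, 8, 8, 8, 8, 16);
            if (config == null) {
                config = tryConfig(egl, display, 5, 6, 5, 0, 16);
            }
            if (config == null) {
                throw new IllegalArgumentException("No suitable EGL config found");
            }
            return config;
        }

        private EGLConfig tryConfig(EGL10 egl, EGLDisplay display,
                                    int r, int g, int b, int a, int depth) {
            int[] spec = {
                EGL10.EGL_RED_SIZE, r,
                EGL10.EGL_GREEN_SIZE, g,
                EGL10.EGL_BLUE_SIZE, b,
                EGL10.EGL_ALPHA_SIZE, a,
                EGL10.EGL_DEPTH_SIZE, depth,
                EGL10.EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
                EGL10.EGL_NONE
            };
            EGLConfig[] configs = new EGLConfig[1];
            int[] numConfigs = new int[1];
            // eglChooseConfig sorts matches by EGL's own rules; take the first one.
            if (!egl.eglChooseConfig(display, spec, configs, 1, numConfigs) || numConfigs[0] == 0) {
                return null;
            }
            return configs[0];
        }
    }

You would install it with setEGLConfigChooser(new FallbackConfigChooser()) before calling setRenderer(), which also covers the RGBA_8888-then-RGB_565 fallback asked about above.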
This GSoC code sample is for enabling MSAA, but it also contains code that selects configurations and checks whether they are available.
https://code.google.com/p/gdc2011-android-opengl/source/browse/trunk/src/com/example/gdc11/MultisampleConfigChooser.java#90
I have a cross-platform code base (iOS and Android) that uses a standard render-to-texture setup. Each frame (after initialization), the following sequence occurs:
glBindFramebuffer of a framebuffer with a texture color attachment
Render some stuff
*
glBindFramebuffer of the default framebuffer (0 on Android, usually 2 on iOS)
glBindTexture of the texture that was the color attachment to the first framebuffer
Render using the bound texture
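In GLES20 terms, a rough sketch of that per-frame sequence (the handles and the commented-out draw calls are placeholders):

    import android.opengl.GLES20;

    final class RenderToTextureFrame {
        // Sketch of the per-frame sequence; the FBO and texture are created at init time.
        static void renderFrame(int offscreenFbo, int offscreenTexture, int defaultFramebuffer) {
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, offscreenFbo);
            // ... render some stuff into the texture ...

            // * (the step where glFinish() was inserted as an experiment; see below)

            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, defaultFramebuffer); // 0 on Android
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, offscreenTexture);
            // ... render using the bound texture ...
        }
    }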
On iOS and some Android devices (including the emulator), this works fine and as expected. On other devices (currently sitting in front of a Samsung Galaxy Note running 4.0.4), the second-phase rendering that uses the texture looks "jumpy". Other animations continue to run at 60 fps on the same screen as the "jumpy" bits; my conclusion is that the changes to the target texture are not always visible in the second rendering pass.
To test this theory, I inserted a glFinish() at the step marked with a * above. With that change, all devices behave correctly. Interestingly, glFlush() does NOT fix the problem. But glFinish() is expensive, and I haven't seen any documentation suggesting it should be necessary.
So, here's my question: What must I do when finished rendering to a texture to make sure that the most-recently-drawn texture is available in later rendering passes?
The code you describe should be fine.
As long as you are using a single context and not opting in to any extensions that relax synchronization behavior (such as EXT_map_buffer_range), every command must appear to execute as if it had executed in exactly the order specified in the API, and in your API usage you're rendering to the texture before reading from it.
Given that, you are probably encountering driver bugs on those devices. Can you list which devices are encountering the issue? You'll probably find common hardware or drivers.
I want to port a 3D program written in OpenGL on Windows to Android, but I wonder whether it will run smoothly on typical Android devices, so I want to estimate how much hardware it needs to run well. It is something like the recommended hardware requirements a company publishes for a piece of software or a 3D game. I don't know how to work out the hardware requirements of my program when porting it to Android.
I used gDEBugger and it gave me some information, but I don't think that is enough. Does anyone have an idea or a solution? Many thanks in advance!
If your program is simple enough, you could write up some estimates about texture fill rate, which is a pretty basic (and old) metric of rendering performance. Nearly every 3D chip comes with a theoretical fill rate, so you can get the theoretical numbers of both your desktop system and some Android phones.
The texture memory footprint is another thing that you can estimate, especially using gDEBugger. Once again, these numbers are known for most chips.
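As a back-of-the-envelope example, a single texture's footprint is roughly width × height × bytes per pixel, plus about a third more if it is mipmapped; a sketch:

    // Sketch: rough texture memory estimate.
    final class TextureFootprint {
        static long estimateBytes(int width, int height, int bytesPerPixel, boolean mipmapped) {
            long base = (long) width * height * bytesPerPixel;
            return mipmapped ? base + base / 3 : base; // full mip chain adds ~1/3
        }
    }

For instance, a 1024x1024 RGBA8888 texture comes to 4 MiB, or roughly 5.3 MiB with mipmaps.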
This is a quick way to produce some numbers, obviously without any real life performance guarantees.
The best way would be to test it on an actual device, and get an idea of what hardware works well. You could distribute a beta app and get some feedback too.
It depends on the feature set you use. For example, if you use FBOs, the device will have to support the framebuffer extension. If you use MSAA or smooth lines, the device will have to support the corresponding extensions.
After listing your requirements, you can use glGet to check for device support:
http://www.opengl.org/sdk/docs/man/xhtml/glGet.xml
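For instance, a sketch of such a check on Android (the extension name and the limit queried are just examples of what a requirements list might contain):

    import android.opengl.GLES20;
    import android.util.Log;

    // Sketch: query extensions and implementation limits once a context is current
    // (e.g. from GLSurfaceView.Renderer.onSurfaceCreated).
    final class DeviceCapabilities {
        static void logCapabilities() {
            String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
            boolean anisotropic = extensions != null
                    && extensions.contains("GL_EXT_texture_filter_anisotropic");

            int[] maxTextureSize = new int[1];
            GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);

            Log.i("Caps", "anisotropic filtering: " + anisotropic
                    + ", max texture size: " + maxTextureSize[0]);
        }
    }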