Is it possible to debug shaders in Android OpenGL ES 2?

Is there any way to debug shaders (fragment and vertex) in an Android application with OpenGL ES 2?
Since we only pass a String of shader code and bind a handful of variables to handles, it is very tedious to track down the changes that need to be made.
Is it possible to write to the Android log from a shader, as with Log.d()?
Is it possible to set breakpoints and inspect the current values in the shader calculations?
I am simply not used to writing code with pen and paper anymore, and that's what it feels like to write code inside the shader strings.

This is an old question, but since it appears first in searches and the old answer can be expanded upon, I'm leaving an alternative answer:
While printing or debugging the way we do in Java or Kotlin is not possible, that doesn't mean shaders cannot be debugged at all. The now-deprecated Android Monitor used to include a tool that let you trace your GPU execution frame by frame, including inspecting calls and geometry.
Right now the official GPU debugger is the Android GPU Inspector, which has some useful performance metrics and will include frame-by-frame debugging in a future update.
If the Android GPU Inspector doesn't have what you need, you can fall back to vendor-specific debuggers depending on your device (Mali Graphics Debugger, Snapdragon Debugger, etc.).

No. Remember that the GPU executes those shader programs millions of times per frame (the vertex shader once per vertex, the fragment shader once per fragment), often with hundreds of threads running concurrently, so any notion of "connecting a debugger" is pretty much impossible.
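That said, there are two practical workarounds that come close to "printf debugging". First, always read back the compile and link logs from the Java side; the driver's info log usually points at the offending line. Second, "print" a value by writing it into gl_FragColor so it shows up as a colour on screen. Below is a minimal sketch of both, assuming plain GLES20 calls; the class, tag, and variable names are illustrative, not taken from the question.

    import android.opengl.GLES20;
    import android.util.Log;

    public class ShaderDebug {
        private static final String TAG = "ShaderDebug";

        // Compile a shader and dump the driver's info log to logcat.
        public static int compile(int type, String source) {
            int shader = GLES20.glCreateShader(type);
            GLES20.glShaderSource(shader, source);
            GLES20.glCompileShader(shader);

            int[] status = new int[1];
            GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
            if (status[0] == 0) {
                // The info log usually names the line that failed to compile.
                Log.e(TAG, "Compile failed: " + GLES20.glGetShaderInfoLog(shader));
                GLES20.glDeleteShader(shader);
                return 0;
            }
            return shader;
        }

        // "printf" for fragment shaders: write the value you want to inspect
        // into gl_FragColor and read the colour off the screen. Here a normal's
        // components are remapped from [-1, 1] to [0, 1] so they are visible.
        public static final String DEBUG_FRAGMENT_SHADER =
                "precision mediump float;\n" +
                "varying vec3 vNormal;            // value under inspection\n" +
                "void main() {\n" +
                "    gl_FragColor = vec4(vNormal * 0.5 + 0.5, 1.0);\n" +
                "}\n";
    }

For the colour trick, anything outside [0, 1] gets clamped, so values usually have to be remapped (as done for the normal above) before they are readable on screen.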

How to determine (at runtime) if TensorFlow Lite is using a GPU or not?

Is there a way for me to ensure, or at least determine at runtime, that the correct accelerator (CPU, GPU) is used with the TensorFlow Lite library?
Although I followed the guide and set the Interpreter.Options() object to use a GPU delegate on a device with a GPU (Samsung S9), it's highly likely to be using the CPU in some cases. For example, if you use a quantized model with a default delegate options object, it will default to using the CPU, because quantizedModelsAllowed is set to false. I am almost sure that even though the options object passed to the interpreter had a GpuDelegate, the CPU was used instead. Unfortunately, I had to guess based on inference speed and accuracy.
There was no warning; I just noticed slower inference time and improved accuracy (because in my case the GPU was acting strangely, giving me wrong values, and I am trying to figure out why as a separate concern). Currently, I have to guess whether the GPU or the CPU is being used and react accordingly. I think there are other cases like this where it falls back to the CPU, but I don't want to guess.
I have heard of AGI (Android GPU Inspector), but it currently only supports three Pixel devices. It would have been nice to use it to see the GPU being used in the profiler. I have also tried Samsung's GPUWatch; it simply does not work (on both OpenGL and Vulkan), since my app doesn't use either of those APIs (it doesn't render anything, it uses TensorFlow!).
I will place my results here from using the benchmark tool:
First, the model running on the CPU without XNNPack:
Second, the model on the CPU with XNNPack:
Third, the model running on the GPU:
And lastly, with the Hexagon or NNAPI delegate:
As you can see, the model is being processed by the GPU. I also used two randomly selected phones; if you want any particular device, please tell me. Finally, you can download all the results from the benchmark tool here.
Answer by TensorFlow advocate:
Q= What happens when we set a specific delegate but the phone cannot support it? Let's say I set a Hexagon delegate and the phone cannot use it. Is it going to fall back to CPU usage?
A= It should fall back to the CPU.
Q= What if I set the GPU delegate and it cannot support the specific model? Does it fall back to the CPU, or does it crash?
A= It should also fall back to the CPU, but the tricky thing is that sometimes a delegate "thinks" it can support an op at initialization time and only at runtime "realizes" that it can't support the particular configuration of that op in the particular model. In such cases, the delegate crashes.
Q= Is there a way to determine which delegate is used at runtime, regardless of what we have set?
A= You can look at logcat, or use the benchmark tool to run the model on the particular phone and find out.
As Farmaker mentioned, TFLite's benchmarking & accuracy tooling is the best way for you to judge how a delegate will behave for your use-case (your model & device).
First, use the benchmark tool to check latencies with various configurations (for delegates, use params like --use_gpu=true). See this page for a detailed explanation of the tool and pre-built binaries you can run via adb. You can also pass --enable_op_profiling=true to see which ops in the graph get accelerated by the delegate.
Then, if you want to check accuracy/correctness of a delegate for your model (i.e. whether the delegate behaves like CPU would numerically), look at this documentation for tooling details.
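To make "did I actually get the GPU?" less of a guessing game on the app side, you can also ask the GPU delegate's CompatibilityList whether the device is supported before adding the delegate, and fall back to CPU threads yourself. A rough sketch follows, assuming the org.tensorflow:tensorflow-lite-gpu dependency is available; modelBuffer, the class name, and the thread count are placeholders, not values from the question.

    import java.nio.ByteBuffer;
    import org.tensorflow.lite.Interpreter;
    import org.tensorflow.lite.gpu.CompatibilityList;
    import org.tensorflow.lite.gpu.GpuDelegate;

    public class DelegateSetup {
        // Sketch: prefer the GPU delegate when the device supports it,
        // otherwise fall back to multi-threaded CPU execution.
        // modelBuffer is your memory-mapped .tflite file (placeholder).
        public static Interpreter buildInterpreter(ByteBuffer modelBuffer) {
            Interpreter.Options options = new Interpreter.Options();
            CompatibilityList compatList = new CompatibilityList();

            if (compatList.isDelegateSupportedOnThisDevice()) {
                // Options recommended for this device; unlike the defaults
                // mentioned in the question, these may enable quantized models.
                GpuDelegate.Options delegateOptions = compatList.getBestOptionsForThisDevice();
                options.addDelegate(new GpuDelegate(delegateOptions));
            } else {
                options.setNumThreads(4); // CPU fallback
            }
            return new Interpreter(modelBuffer, options);
        }
    }

Even with this check, a delegate can still reject individual ops at runtime, as described in the answer above, so the benchmark tool with --enable_op_profiling=true remains the most direct way to see what actually ran where.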

AOSP / Android 7: How is EGL utilized in detail?

I am trying to understand the Android (7) graphics system from a system integrator's point of view. My main focus is the minimum functionality that needs to be provided by libegl.
I understand that SurfaceFlinger is the main actor in this domain. SurfaceFlinger initializes EGL, creates the actual EGL surface, and acts as a consumer for the buffers (frames) created by the app. The app, in turn, executes the main part of the required GLES calls. Obviously this leads to restrictions, as SurfaceFlinger and apps live in separate processes, which is not the typical use case for GLES/EGL.
Things I do not understand:
Do apps on Android 7 always render into EGL_KHR_image buffers which are then sent to SurfaceFlinger? As far as I understand, this would mean there is always an extra copy step (even when no composition is needed)... Or is there also some kind of optimized fullscreen mode where apps render directly into the final EGL surface?
Which inter-process sharing mechanisms are used here? My guess is that EGL_KHR_image, used with EGL_NATIVE_BUFFER_ANDROID, defines the exact binary format, so that an image object may be created in each process, with the memory shared via ashmem. Is this already the complete/correct picture, or am I missing something here?
I'd guess these are the main points I lack confident knowledge about at the moment. I certainly have follow-up questions (like how gralloc and composition fit into this), but in keeping with this platform I'd like to keep the question as compact as possible. Still, apart from the main documentation page, I am missing documentation clearly targeted at system integrators, so further links would be really appreciated.
My current focus is the typical use cases that cover the vast majority of apps compatible with Android 7. If there are corner cases, such as long-deprecated compatibility shims, I'd like to ignore them for now.

OpenGL debugging

When I write OpenGL (1.0) programs for Android, I find them hard to debug. OpenGL ES 1.0 is just a fixed pipeline of several steps which process vertex coordinates. Is there any way to peek at the results of the consecutive steps of the pipeline?
Added: Mad Scientist, thanks for your advice. However, I tried to use the Tracer for OpenGL ES (following the instructions from http://developer.android.com/tools/help/gltracer.html) and I'm still not sure how to find the information I need. I can see a slider to choose a frame, and then a list of the functions called. When I choose one of the functions, I can see the GL state. But when I look (inside this GL state) at context 0 -> vertex array data -> generic vertex attributes, all the coordinates I can see are zeroes. Is that normal? My main hope is that in those situations where I can see nothing but a black screen, I would be able to see the vertices' coordinates before and after multiplication by the matrices, and that this would help me find out why they are invisible.
Use the OpenGL Tracer for Android; it is part of the Android SDK. Just start the Monitor program and trace your app.
I found the Tracer a bit temperamental and it does not work with my Nexus 10, but when it works it provides a lot of information.
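If the Tracer keeps showing zeroes, a low-tech alternative for the "black screen" case is to apply the same matrices on the CPU with android.opengl.Matrix and log where a vertex ends up: if the result lies outside the clip volume (|x|, |y| or |z| greater than w), that explains why nothing is visible. A small sketch, assuming you keep CPU-side copies of the matrices you load into the fixed pipeline; the class name is illustrative.

    import android.opengl.Matrix;
    import android.util.Log;

    public class VertexCheck {
        private static final String TAG = "VertexCheck";

        // Multiply a single vertex by projection * modelView on the CPU and
        // log the result, mimicking what the fixed pipeline does on the GPU.
        public static void logTransformed(float[] projection, float[] modelView,
                                          float x, float y, float z) {
            float[] mvp = new float[16];
            Matrix.multiplyMM(mvp, 0, projection, 0, modelView, 0);

            float[] in = { x, y, z, 1f };
            float[] out = new float[4];
            Matrix.multiplyMV(out, 0, mvp, 0, in, 0);

            // The vertex is inside the clip volume only if |x|, |y|, |z| <= w here.
            Log.d(TAG, String.format("clip coords: (%f, %f, %f, w=%f)",
                    out[0], out[1], out[2], out[3]));
        }
    }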

eglSwapBuffers is erratic/slow

I have a problem with very slow rendering on an Android tablet using the NDK and the EGL commands. I have timed the calls to eglSwapBuffers and they take a variable amount of time, frequently exceeding the device's frame time. I know it synchronizes to the display refresh, but that refresh is around 60 FPS, and the frame rate here drops well below that.
The only command I issue between calls to swap is glClear, so I know it isn't anything I'm drawing that causes the problem. Even with nothing but a clear, the frame rate drops to 30 FPS (and erratically at that).
On the same device a simple GL program in Java easily renders at 60 FPS, so I know it isn't fundamentally a hardware issue. I've looked through the Android Java code for setting up the GL context and can't see any significant difference. I've also played with every config attribute, and while some alter the speed slightly, none (that I can find) changes this horrible frame rate drop.
To make sure event polling wasn't the issue, I moved the rendering into a thread. That thread now does nothing but rendering, calling clear and swap repeatedly. The slow performance persists.
I'm out of ideas what to check and am looking for suggestions as to what the problem might be.
There's really not enough info here (like what device you are testing on, what your exact config was, etc.) to answer this 100% reliably, but this kind of behavior is usually caused by a mismatch between the window and surface pixel formats, e.g. 16-bit (RGB565) vs 32-bit.
The FB_MULTI_BUFFER=3 environment variable enables multi-buffering on the Freescale i.MX 6 (Sabrelite) board with some recent LTIB builds (without X). Your GFX driver may need something like this.
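The question is about the NDK path, but the format-mismatch theory above is straightforward to verify: choose the EGLConfig explicitly instead of taking whatever matches first, and make the window buffers use the same format (via ANativeWindow_setBuffersGeometry in native code, or SurfaceHolder.setFormat from Java). As an illustration only, here is a sketch of that check using the Java EGL14 bindings; the class name is made up, and the same attribute list and EGL_NATIVE_VISUAL_ID query carry over unchanged to the C API.

    import android.opengl.EGL14;
    import android.opengl.EGLConfig;
    import android.opengl.EGLDisplay;

    public class ConfigCheck {
        // Ask explicitly for an RGBA8888, window-renderable, ES2 config and
        // report the native visual id that the window buffers must match.
        public static int chooseRgba8888(EGLDisplay display) {
            int[] attribs = {
                EGL14.EGL_RED_SIZE, 8,
                EGL14.EGL_GREEN_SIZE, 8,
                EGL14.EGL_BLUE_SIZE, 8,
                EGL14.EGL_ALPHA_SIZE, 8,
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
                EGL14.EGL_SURFACE_TYPE, EGL14.EGL_WINDOW_BIT,
                EGL14.EGL_NONE
            };
            EGLConfig[] configs = new EGLConfig[1];
            int[] numConfigs = new int[1];
            EGL14.eglChooseConfig(display, attribs, 0, configs, 0, 1, numConfigs, 0);
            if (numConfigs[0] == 0) {
                return -1; // no matching config on this device
            }

            int[] visualId = new int[1];
            EGL14.eglGetConfigAttrib(display, configs[0],
                    EGL14.EGL_NATIVE_VISUAL_ID, visualId, 0);
            // In the NDK, pass this id to ANativeWindow_setBuffersGeometry so the
            // window buffers and the EGL surface end up with the same pixel format.
            return visualId[0];
        }
    }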

Detect causes of performance problems?

I have an Android app that is getting fairly large and complex now, and it seems to have intermittent performance problems. One time I run the app it's fine; another time it struggles when switching views.
How can I detect the causes of the performance problem using debugging tools so that I may correct it?
Use the ddms tool that comes with the SDK. It has a nice feature called the Allocation Tracker that lets you see, in real time, how much memory your code is allocating and which specific line is responsible.
In most cases your app will slow down because of bad adapter implementations, poor layout-inflation techniques, or not using a caching scheme for decoded Bitmaps (such as SoftReference).
Take a look at this article for a brief explanation: Tracking Memory Allocations
In addition to the tool Cristian mentioned, Traceview is another helpful one. It's not very well documented but it can give you information about how often methods are being called, and which methods are taking a lot of time.
Another good memory tracking tool is MAT, here is a page that describes how to use it with Android: http://ttlnews.blogspot.com/2010/01/attacking-memory-problems-on-android.html
Both the tracing and the heap dumps can be done through the DDMS panel if you prefer not to work with the command line. In Eclipse, in the Devices panel, under the device/emulator you are using, click on your app (listed by package name); you can then use Start/Stop Method Profiling to get a trace and Dump HPROF to get a heap dump. Note that the dumps need to be converted before they will work with the MAT plugin; the attacking-memory-problems-on-android article above describes how to do that.
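When the problem is intermittent, it can also help to start and stop the trace from code around the suspect path (for example the view switch) rather than from the DDMS buttons, so the trace covers exactly the slow moment. A small sketch using android.os.Debug; the trace name, dump path, and class name are placeholders.

    import android.os.Debug;
    import java.io.IOException;

    public class PerfProbe {
        // Wrap the suspect code path (e.g. the view switch) with method tracing.
        // The resulting .trace file can be opened in Traceview.
        public static void traceViewSwitch(Runnable viewSwitch) {
            Debug.startMethodTracing("view_switch"); // placeholder trace name
            try {
                viewSwitch.run();
            } finally {
                Debug.stopMethodTracing();
            }
        }

        // Capture a heap dump at a point where memory looks suspicious.
        public static void dumpHeap(String absolutePath) {
            try {
                // Convert with hprof-conv before loading into MAT.
                Debug.dumpHprofData(absolutePath);
            } catch (IOException e) {
                // ignored in this sketch
            }
        }
    }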
