GPU Profiling and callbacks in OpenGL ES - android

Is there a way to add markers/callbacks in OpenGL ES similar to what DirectX has? I'm trying to profile GPU performance, so I want to figure out how long it took the GPU to execute certain parts of a frame.
Ideally, I "push" a marker/callback, then issue a bunch of GL draw calls, then push another marker, and a frame later find out how many milliseconds passed between those two markers.
(Any other ways to profile GPU performance would be helpful too.)

Each GPU vendor provides a nice profiler for Android. In my experience, they require root privileges.
Adreno™ Profiler for Qualcomm Snapdragon
PerfHUD ES for NVIDIA Tegra 2

Use the DDMS feature in your Eclipse environment; it's installed by default.
It is a very powerful graphical profiling utility. You can also inspect threads, the heap, method profiling, object allocations, and more.
Check the DDMS documentation for how to use it.
Hope it helps ;)

Related

Is OpenGL function loader needed?

On desktop OSes, OpenGL function loaders like GLEW, GLAD, etc. are used to load functions at runtime. But what about on Android? How are functions loaded? I've looked at a few apps and all of them seem to depend on EGL and GLES. But AFAIK EGL isn't a loading library, but an interface. Well, an interface to an interface, since GLES is itself an interface.
This leads to another question: How come Android uses EGL when it is generally not used on desktops?
Back when I used Android a bit, you could link against either the GLES 2.0 library or the GLES 3.0 library, so it's kinda as if they provide the function pointers for you. Ish. If you linked against GLES 3.0 but the phone you ran on only supported 2.0, your app would not load. To work around this, I always linked against GLES 2.0 and wrote my own function loader using eglGetProcAddress to extract the GLES 3.0 API if available. This is pretty much how function loaders on Windows/Linux work (using wglGetProcAddress or glXGetProcAddress).
GLES has always been a stripped-down version of full-blown desktop GL. It targets a smaller subset of the API (removing all of the legacy cruft), which in turn simplifies the OpenGL driver code somewhat, which in turn should reduce memory usage and save a little battery life. Basically it's just better suited to a low-power system where battery life is a concern.
How come Android uses EGL when it is generally not used on desktops?
It was widely used on embedded electronics prior to Android, so I suspect it was path of least resistance for a new smartphone OS as all GPU vendors had an EGL implementation already.

How can I load up / stress test GPU on Android device?

I've been searching and trying stuff for a few days with no luck. I have an embedded system using a Snapdragon SoC. It is running Android 5.0 and using OpenGL ES 3.0. It is not a phone and does not have a display, but I am able to use the Vysor Chrome extension to see and work with the Android GUI.
Since it's not a phone and in a rather tight physical package, and I will eventually be doing some intensive encoding/decoding stuff, I am trying to test thermal output and properties under load. I am using Snapdragon Profiler to monitor CPU utilization and temperature.
I have been able to successfully load up the CPU and get a good idea of thermal output. I just made some test code that encodes a bunch of bitmaps to jpeg using standard Android SDK calls (using the CPU).
Now I want to see what happens if I do some GPU intensive stuff. The idea being that if I leverage the GPU for some encoding chores maybe things won't get so hot because the GPU can more efficiently handle some types of jobs.
I have been reading and from what I gather, there are a few ways I can eventually leverage the GPU. I could use some library such as FFMPEG or Android's MediaCodec stuff that uses hardware acceleration. I could also use openCV or RenderScript.
Before I go down any of those paths I want to just get some test code running and profile the hardware.
What's a quick, easy way to do this? I have done a little OpenGL ES shader programming, but since this is not really a 3D graphics thing, I am not sure I can use shaders to test it. Since shaders are part of the graphics pipeline, will OpenGL let me do some GPU-intensive work in them? Or will it just drop frames or crash if I start doing heavy work there? What can I do to load up the GPU if I try shaders? Just a long while loop or something?
If shaders aren't the best way to load up the GPU, what is? I think shaders are the only programmable part of OpenGL ES. With RenderScript, can I explicitly run operations on the GPU, or does the framework just automatically decide how to run the code?
Finally, what is the metric I should be probing to determine GPU usage? In my profiler I have CPU Utilization but there is no GPU utilization. I have available the following GPU metrics:
but I am able to use Vysor Chrome extension to see and work with the Android GUI.
If you have Chrome working on the platform with a network connection, and don't care too much about what is being rendered then https://www.shadertoy.com/ is a quick and dirty way of getting some complex graphics running via WebGL.
I could use some library such as FFMPEG or Android's MediaCodec stuff that uses hardware acceleration. I could also use openCV or RenderScript.
FFmpeg and MediaCodec will be hardware accelerated, but likely not on the 3D GPU; they use a separate dedicated video encoder/decoder block.

Is it possible to program GPU for Android

I am programming on Android and I wonder whether we can use the GPU for general-purpose computing (GPGPU) now. I once heard that RenderScript might execute on the GPU in the future, but is it possible for us to program the GPU directly today? And if it is possible, where can I find some tutorials or sample programs? Thank you for your help and suggestions.
So far I know that the OpenGL ES library is GPU-accelerated, but I want to use the GPU for computing. What I want is to accelerate computation, so I hope to use an API such as OpenCL.
2021-April Update
Google has announced deprecation of the RenderScript API in favor of Vulkan with Android 12.
The option for manufacturers to include the Vulkan API was made available in Android 7.0 Compatibility Definition Document - 3.3.1.1. Graphic Libraries.
Original Answer
Actually, RenderScript Compute doesn't use the GPU at this time, but it is designed for it.
From Romain Guy who works on the Android platform:
Renderscript Compute is currently CPU bound but with the for_each construct it will take advantage of multiple cores immediately
Renderscript Compute was designed to run on the GPU and/or the CPU
Renderscript Compute avoids having to write JNI code and gives you architecture independent, high performance results
Renderscript Compute can, as of Android 4.1, benefit from SIMD optimizations (NEON on ARM)
https://groups.google.com/d/msg/android-developers/m194NFf_ZqA/Whq4qWisv5MJ
Yes, it is possible.
You can use either RenderScript or OpenGL ES 2.0.
RenderScript is available on Android 3.0 and above, and OpenGL ES 2.0 is available on about 95% of devices.
As of Android 4.2, Renderscript can involve GPU in computations (in certain cases).
More information here: http://android-developers.blogspot.com/2013/01/evolution-of-renderscript-performance.html
As I understand it, ScriptIntrinsic subclasses are well optimized to run on the GPU on compatible hardware (for example, the Nexus 10 with its Mali-T604). Documentation:
http://developer.android.com/reference/android/renderscript/ScriptIntrinsic.html
Of course you can decide to use OpenCL, but RenderScript is guaranteed (by Google, being part of Android itself) to run even on hardware which doesn't support GPGPU computation, and it will use whatever other acceleration means the hardware it runs on does support.
There are several options:
You can use OpenGL ES 2.0, which is supported by almost all devices but has limited functionality for GPGPU.
You can use OpenGL ES 3.0, with which you can do much more in terms of GPU processing.
Or you can use RenderScript, but this is platform-specific and furthermore does not give you any influence over whether your algorithms run on the GPU or the CPU.
A summary of this topic can be found in the master's thesis Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You should also check out ogles_gpgpu, which allows GPGPU via OpenGL ES 2.0 on Android and iOS.

Can Android renderscript run on GPU?

Are there any Android devices where renderscript executes on the GPU instead of the CPU, or is this something not yet implemented anywhere?
As of Jelly Bean (Android 4.2) there is direct GPU integration for RenderScript. See this and this.
I cannot confirm this with any official documentation from Google, but I work with RenderScript all day, every day, and each time I run it I see logcat report loading drivers for the graphics chips in my devices, most notably Tegra 2. Google has really lagged in documenting RenderScript, and I would not at all be surprised if they simply haven't corrected this omission in their discussion.
Currently the compute side of Renderscript will only run on the CPU:
For now, compute Renderscripts can only take advantage of CPU cores, but in the future, they can potentially run on other types of processors such as GPUs and DSPs.
Taken from Renderscript dev guide.
The graphics side of Renderscript sits on top of OpenGL ES so the shaders will run on the GPU.
ARM's Mali-T604 GPU will provide a target for the compute side of Renderscript (in a future Android release?) (see ARM Blog entry).
RenderScript was designed to run on the GPU; that was the main purpose of adding the new language. I assume there are devices where it runs on the CPU due to lack of support, but on most devices it runs on the GPU.
I think this may depend on whether you're doing graphics or compute operations. The graphics operations will likely get executed on the GPU but the compute operations won't as far as I understand.
When you use the forEach construct the computation will run in multiple threads on the CPU, not the GPU (you can see this in the ICS source code). In future releases this may change (see https://events.linuxfoundation.org/slides/2011/lfcs/lfcs2011_llvm_liao.pdf) but I haven't seen any announcements.
Currently, only the Nexus 10 seems to support Renderscript GPU compute.

When should I use the CPU or the GPU compiler option in Flash?

I've read through this section of Adobe's excellent 10.1 optimization tips. I found the statement below to be very helpful. Is there anything else to look out for? Is the dumbed-down difference just: use the GPU for raster and CPU for vector graphics?
The GPU is only effective for bitmaps, solid shapes, and display objects that have cacheAsBitmap and cacheAsBitmapMatrix set. When the GPU is used for other display objects, this generally results in poor rendering performance.
Just note that you ask to run on the GPU rather than the CPU with this line in your application descriptor XML:
<renderMode>gpu</renderMode>
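For context, in an AIR application descriptor that renderMode element sits inside initialWindow; a minimal fragment (the content file name is illustrative):

```xml
<initialWindow>
    <content>Main.swf</content>
    <renderMode>gpu</renderMode> <!-- other values: cpu, auto -->
</initialWindow>
```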
I would generally say that if you are going to make use of the new Molehill (Stage3D) API then GPU is a must, and if you are making a mobile application with any kind of animation then you should rasterize it and enable GPU mode.
Other than that, you probably won't notice much difference from CPU mode on a standard desktop PC.
See GPU rendering in mobile AIR applications. Basically, if you need smooth animation of static DisplayObjects, want to take the time to optimize everything, and don't have video, use GPU. Otherwise, use CPU.
