How can I load up / stress test GPU on Android device?

I've been searching and trying stuff for a few days with no luck. I have an embedded system using a Snapdragon SoC. It is running Android 5.0 and using openGL ES 3.0. It is not a phone and does not have a display, but I am able to use Vysor Chrome extension to see and work with the Android GUI.
Since it's not a phone and in a rather tight physical package, and I will eventually be doing some intensive encoding/decoding stuff, I am trying to test thermal output and properties under load. I am using Snapdragon Profiler to monitor CPU utilization and temperature.
I have been able to successfully load up the CPU and get a good idea of thermal output. I just made some test code that encodes a bunch of bitmaps to jpeg using standard Android SDK calls (using the CPU).
Now I want to see what happens if I do some GPU intensive stuff. The idea being that if I leverage the GPU for some encoding chores maybe things won't get so hot because the GPU can more efficiently handle some types of jobs.
I have been reading and from what I gather, there are a few ways I can eventually leverage the GPU. I could use some library such as FFMPEG or Android's MediaCodec stuff that uses hardware acceleration. I could also use openCV or RenderScript.
Before I go down any of those paths I want to just get some test code running and profile the hardware.
What's a quick, easy way to do this? I have done a little bit of openGL ES shader programming, but since this is not really a 3D graphics thing, I am not sure I can use shaders to test this. Since it is part of the graphics pipeline, will openGL allow me to do some GPU intensive stuff in the shaders? Or will it just drop frames or crash if I start doing some heavy stuff in there? What can I do to load up the GPU if I try shaders? Just a long while loop or something?
If shaders aren't the best way to load up GPU, what is? I think shaders are the only programmable part of openGL ES. Using RenderScript can I explicitly run operations on the GPU or does the framework just automatically determine how to run the code?
Finally, what is the metric I should be probing to determine GPU usage? In my profiler I have CPU Utilization but there is no GPU utilization. I have available the following GPU metrics:

but I am able to use Vysor Chrome extension to see and work with the Android GUI.
If you have Chrome working on the platform with a network connection, and don't care too much about what is being rendered then https://www.shadertoy.com/ is a quick and dirty way of getting some complex graphics running via WebGL.
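If you'd rather drive the load from your own code, a fragment shader with a long arithmetic loop will happily saturate the GPU's ALUs when drawn over a full-screen quad; it won't crash, your frame rate just falls as the load rises. A minimal sketch (the loop body and iteration count are arbitrary, chosen only to generate load):

```java
public final class BurnShader {
    private BurnShader() {}

    /** Builds a GLES 2.0 fragment shader whose per-fragment loop does
     *  `iterations` rounds of transcendental math, purely to generate
     *  GPU load when drawn over a full-screen quad. */
    public static String fragmentShaderSource(int iterations) {
        return "precision highp float;\n"
             + "uniform float u_time;\n"   // vary per frame so the driver can't cache the result
             + "void main() {\n"
             + "  float acc = u_time;\n"
             + "  for (int i = 0; i < " + iterations + "; i++) {\n"
             + "    acc = sin(acc) * 1.1 + cos(acc * 0.7);\n"
             + "  }\n"
             + "  gl_FragColor = vec4(fract(acc), 0.0, 0.0, 1.0);\n"
             + "}\n";
    }
}
```

The accumulated value must feed gl_FragColor, otherwise the shader compiler may optimize the whole loop away; you scale the load by raising the iteration count or the render-target resolution.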
I could use some library such as FFMPEG or Android's MediaCodec stuff that uses hardware acceleration. I could also use openCV or RenderScript.
FFMPEG and MediaCodec will be hardware accelerated, but likely not on the 3D GPU; rather, on a separate dedicated video encoder/decoder.

Related

renderscript sample run on gpu

I'm wondering if anybody has developed a Renderscript Program that runs on GPU. I've tried some simple implementations, like doing IntrinsicBlur via RS but it turned out that it runs on CPU rather than GPU.
Intrinsics will always run on the processor that will do them the fastest. If it is running on the CPU, that means that the GPU is not suitable for running it quickly. Reasons for this might be that the GPU is usually used for drawing the screen (which takes a lot of effort too), and so there isn't additional compute bandwidth there.

Is RenderScript the only device-independent way to run GPGPU code on Android?

Is RenderScript the only device-independent way to run GPGPU code on Android ?
I don't count Tegra, as only a few phones have it.
RenderScript is the official Android compute platform. As a result it will be on all Android devices. It was designed specifically to address the problem of running one code base across many different devices.
Well, using RenderScript doesn't necessarily mean that your code will run on the GPU. It might also use the CPU and (hopefully) parallelize tasks on several CPU cores and use CPU vector instructions. But as far as I know, you can never be sure about that, and the decision process is kind of a black box.
If you want to make sure that your code runs on the GPU, you can "simulate" some GPGPU functions with OpenGL ES 2.0 shaders. This will run on all devices that support OpenGL ES 2.0. It depends on what you want to do, but for example many image processing functions can be implemented very efficiently this way. There is a library called ogles_gpgpu that provides an architecture for GPGPU on Android and iOS systems: https://github.com/internaut/ogles_gpgpu
OpenGL ES 3.1 also supports "Compute Shaders", but few devices support it yet.

Is it possible to program GPU for Android

I am programming on Android and I wonder whether we can use GPGPU for Android now. I once heard that Renderscript might eventually execute on the GPU. But is it possible for us to do GPGPU programming now? And if it is possible to program the GPU on Android, where can I find some tutorials or sample programs? Thank you for your help and suggestions.
So far I know that the OpenGL ES library is accelerated using the GPU, but I want to use the GPU for computing. What I want to do is to accelerate computation, so I hope to use some library or API such as OpenCL.
2021-April Update
Google has announced deprecation of the RenderScript API in favor of Vulkan with Android 12.
The option for manufacturers to include the Vulkan API was made available in Android 7.0 Compatibility Definition Document - 3.3.1.1. Graphic Libraries.
Original Answer
Actually, Renderscript Compute doesn't use the GPU at this time, but it is designed for it.
From Romain Guy who works on the Android platform:
Renderscript Compute is currently CPU bound but with the for_each construct it will take advantage of multiple cores immediately
Renderscript Compute was designed to run on the GPU and/or the CPU
Renderscript Compute avoids having to write JNI code and gives you architecture independent, high performance results
Renderscript Compute can, as of Android 4.1, benefit from SIMD optimizations (NEON on ARM)
https://groups.google.com/d/msg/android-developers/m194NFf_ZqA/Whq4qWisv5MJ
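The for_each construct mentioned above is a plain data-parallel map: one kernel invocation per element of an Allocation. As a CPU-only analogy (this is ordinary JVM code, not RenderScript itself), the same shape looks like:

```java
import java.util.stream.IntStream;

public final class ForEachSketch {
    private ForEachSketch() {}

    /** Applies a per-element kernel to every pixel in parallel,
     *  mirroring how RenderScript's forEach dispatches one kernel
     *  invocation per cell of an Allocation. */
    public static int[] mapKernel(int[] pixels, java.util.function.IntUnaryOperator kernel) {
        int[] out = new int[pixels.length];
        IntStream.range(0, pixels.length)
                 .parallel()              // spread across CPU cores, as the CPU-bound forEach does
                 .forEach(i -> out[i] = kernel.applyAsInt(pixels[i]));
        return out;
    }
}
```

Because every invocation is independent, the runtime is free to schedule it on multiple cores, with SIMD, or on a GPU; that independence is what makes the model portable.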
Yes, it is possible.
You can use either RenderScript or OpenGL ES 2.0.
RenderScript is available on Android 3.0 and above, and OpenGL ES 2.0 is available on about 95% of devices.
As of Android 4.2, Renderscript can involve GPU in computations (in certain cases).
More information here: http://android-developers.blogspot.com/2013/01/evolution-of-renderscript-performance.html
As I understand, ScriptIntrinsic subclasses are well-optimized to run on GPU on compatible hardware (for example, Nexus10 with Mali T604). Documentation:
http://developer.android.com/reference/android/renderscript/ScriptIntrinsic.html
Of course you can decide to use OpenCL, but Renderscript is guaranteed (by Google, as a part of Android itself) to run even on hardware which doesn't support GPGPU computation, and it will use whatever other acceleration means the hardware it is running on supports.
There are several options: You can use OpenGL ES 2.0, which is supported by almost all devices but has limited functionality for GPGPU. You can use OpenGL ES 3.0, with which you can do much more in terms of GPU processing. Or you can use RenderScript, but this is platform-specific and furthermore does not give you any influence on whether your algorithms run on the GPU or the CPU. A summary about this topic can be found in this master's thesis: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You should also check out ogles_gpgpu, which allows GPGPU via OpenGL ES 2.0 on Android and iOS.

What is the best method to render video frames?

What is the best choice for rendering video frames obtained from a decoder bundled into my app (FFmpeg, etc.)?
I would naturally tend to choose OpenGL as mentioned in Android Video Player Using NDK, OpenGL ES, and FFmpeg.
But in OpenGL in Android for video display, a comment notes that OpenGL isn't the best method for rendering video.
What then? The jnigraphics native library? And a non-GL SurfaceView?
Please note that I would like to use a native API for rendering the frames, such as OpenGL or jnigraphics. But Java code for setting up a SurfaceView and such is ok.
PS: MediaPlayer is irrelevant here, I'm talking about decoding and displaying the frames by myself. I can't rely on the default Android codecs.
I'm going to attempt to elaborate on and consolidate the answers here based on my own experiences.
Why openGL
When people think of rendering video with openGL, most are attempting to exploit the GPU to do color space conversion and alpha blending.
For instance, converting YV12 video frames to RGB. Color space conversions like YV12 -> RGB require that you calculate the value of each pixel individually. For a 1280 x 720 frame, that is 921,600 pixels to process, every frame.
What I've just described is really what SIMD was made for - performing the same operation on multiple pieces of data in parallel. The GPU is a natural fit for color space conversion.
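To make the per-pixel work concrete, here is one common integer approximation of the BT.601 YUV -> RGB step; the exact coefficients vary between implementations, so treat this variant as illustrative rather than canonical:

```java
public final class Yuv2Rgb {
    private Yuv2Rgb() {}

    /** Converts one YUV sample triple (BT.601 integer approximation,
     *  video-range luma) to a packed 0xRRGGBB pixel. */
    public static int toRgb(int y, int u, int v) {
        int c = y - 16, d = u - 128, e = v - 128;
        int r = clamp((298 * c + 409 * e + 128) >> 8);
        int g = clamp((298 * c - 100 * d - 208 * e + 128) >> 8);
        int b = clamp((298 * c + 516 * d + 128) >> 8);
        return (r << 16) | (g << 8) | b;
    }

    private static int clamp(int x) { return x < 0 ? 0 : (x > 255 ? 255 : x); }
}
```

Three multiplies, a handful of adds and clamps, run roughly a million times per HD frame: exactly the kind of embarrassingly parallel kernel a GPU (or NEON) eats for breakfast.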
Why !openGL
The downside is the process by which you get texture data into the GPU. Consider that for each frame you have to Load the texture data into memory (CPU operation) and then you have to Copy this texture data into the GPU (CPU operation). It is this Load/Copy that can make using openGL slower than alternatives.
If you are playing low resolution videos then I suppose it's possible you won't see the speed difference because your CPU won't bottleneck. However, if you try with HD you will more than likely hit this bottleneck and notice a significant performance hit.
The way this bottleneck has been traditionally worked around is by using Pixel Buffer Objects (allocating GPU memory to store texture Loads). Unfortunately GLES2 does not have Pixel Buffer Objects.
Other Options
For the above reasons, many have chosen to use software-decoding combined with available CPU extensions like NEON for color space conversion. An implementation of YUV 2 RGB for NEON exists here. The means by which you draw the frames, SDL vs openGL should not matter for RGB since you are copying the same number of pixels in both cases.
You can determine if your target device supports NEON enhancements by running cat /proc/cpuinfo from adb shell and looking for NEON in the features output.
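That check is easy to script; a small parser for the Features line (fed here from a string, but in practice from the output of adb shell cat /proc/cpuinfo):

```java
public final class CpuInfo {
    private CpuInfo() {}

    /** Returns true if any "Features" line in the given /proc/cpuinfo
     *  text lists the neon flag. */
    public static boolean hasNeon(String cpuinfo) {
        for (String line : cpuinfo.split("\n")) {
            String[] kv = line.split(":", 2);
            if (kv.length == 2 && kv[0].trim().equalsIgnoreCase("Features")) {
                for (String flag : kv[1].trim().split("\\s+")) {
                    if (flag.equals("neon")) return true;
                }
            }
        }
        return false;
    }
}
```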
I have gone down the FFmpeg/OpenGLES path before, and it's not very fun.
You might try porting ffplay.c from the FFmpeg project, which has been done before using an Android port of the SDL. That way you aren't building your decoder from scratch, and you won't have to deal with the idiosyncrasies of AudioTrack, which is an audio API unique to Android.
In any case, it's a good idea to do as little NDK development as possible and rely on porting, since the ndk-gdb debugging experience is pretty lousy right now in my opinion.
That being said, I think OpenGLES performance is the least of your worries. I found the performance to be fine, although I admit I only tested on a few devices. The decoding itself is fairly intensive, and I wasn't able to do very aggressive buffering (from the SD card) while playing the video.
Actually, I have deployed a custom video player system, and almost all of my work was done on the NDK side. We are getting full-frame video at 720p and above, including our custom DRM system. OpenGL is not your answer, as pixel buffer objects are not supported on Android, so you are basically re-uploading your textures every frame, and that defeats OpenGL ES's caching system. You frankly need to shove the video frames through the natively supported Bitmap on Froyo and above; before Froyo, you're hosed. I also wrote a lot of NEON intrinsics for color conversion, rescaling, etc. to increase throughput. I can push 50-60 frames through this model on HD video.

GPU Profiling and callbacks in OpenGL ES

Is there a way to add callbacks in OpenGL ES similar to what DirectX has? I'm trying to profile the GPU performance, so I'm trying to figure out how long it took to execute certain parts of the GPU.
Ideally, I "push" a marker/callback, then call a bunch of GL draw calls, then push another marker, and then find out how many milliseconds passed inbetween those two markers a frame later.
(Any other ways to profile GPU performance would be helpful too.)
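On GL ES the closest equivalent is the EXT_disjoint_timer_query extension, which lets you bracket draw calls with GPU timestamp queries (glQueryCounterEXT with GL_TIMESTAMP_EXT) and read the results back a frame or two later. The CPU-side bookkeeping is the same either way; here is a sketch of the marker structure, fed with plain nanosecond timestamps rather than real query results:

```java
import java.util.ArrayList;
import java.util.List;

public final class GpuMarkers {
    private final List<String> names = new ArrayList<>();
    private final List<Long> stamps = new ArrayList<>();

    /** Records a named marker. With EXT_disjoint_timer_query this would
     *  issue glQueryCounterEXT(query, GL_TIMESTAMP_EXT) and the nanosecond
     *  value would be fetched from the query object a frame later. */
    public void push(String name, long timestampNanos) {
        names.add(name);
        stamps.add(timestampNanos);
    }

    /** Milliseconds elapsed between two previously pushed markers. */
    public double millisBetween(String from, String to) {
        long a = stamps.get(names.indexOf(from));
        long b = stamps.get(names.indexOf(to));
        return (b - a) / 1_000_000.0;
    }
}
```

You would push a marker before and after the draw calls of interest; reading query results immediately stalls the pipeline, which is why the deltas only become available a frame or two later.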
Each GPU maker provides a nice profiler for Android. In my experience, they require root privileges.
ADRENO™ Profiler for Qualcomm Snapdragon
PerfHUD ES for NVIDIA Tegra2
Use the DDMS feature in your Eclipse environment; it's installed by default.
It is a very powerful graphical profiling utility. You can also look up threads, heap, method profiling, object allocation, and more.
Check the how to use DDMS here.
Hope it helps ;)
