Which Android GPUs support "render to float texture"?

I believe that OpenGL ES 3.2 (and 3.1 + the Android Extension Pack, AEP) support it, but I've heard that some GPUs with previous versions (specifically 3.1 without AEP) also have this particular extension.
My question is: how can I tell which GPUs have that particular extension, enabling one to render to a float texture?
I've searched manufacturer sites, but haven't been able to find this info (maybe I'm looking in the wrong place?)
I'm also a little wary: I heard that one manufacturer added this ability in their driver... but I wonder if that is a software solution (and therefore much slower, defeating the purpose).
Further to this, of course it's possible to do the encoding/decoding in your own shader - but wouldn't this incur significant overhead? Or, maybe it's fine?
[BTW: the reason I'm asking is I want to purchase a phone to play around with general-purpose computing on mobile GPUs, and the latest phones are much more expensive]
Many thanks for any help! I've been trying to find this on-and-off for months...

Float rendering support is mandatory in OpenGL ES 3.2. It is not required for OpenGL ES 3.0 / 3.1 / 3.1 + AEP.
For earlier implementations you want to use a platform exposing the EXT_color_buffer_half_float and/or EXT_color_buffer_float extensions.
Note that floating point rendering is relatively expensive due to the additional bandwidth, even when supported natively in the hardware. For higher dynamic range consider using something like RGB10_A2 if you can, it's smaller and faster (and supported in 3.0 core).
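As a rough runtime check, here is a minimal sketch in Java (assuming an OpenGL ES 3.0 context is already current, e.g. inside GLSurfaceView.Renderer.onSurfaceCreated; the class and method names are my own). It scans the extension string and, as a second opinion, attaches a small half-float texture to a framebuffer object and tests for completeness:

import android.opengl.GLES20;
import android.opengl.GLES30;

public final class FloatRenderTargetProbe {

    // True if the driver advertises renderable (half-)float colour buffers.
    public static boolean hasFloatColorBufferExtension() {
        String ext = GLES20.glGetString(GLES20.GL_EXTENSIONS);
        return ext != null
                && (ext.contains("GL_EXT_color_buffer_float")
                    || ext.contains("GL_EXT_color_buffer_half_float"));
    }

    // Cross-check by actually attaching an RGBA16F texture to an FBO.
    public static boolean canRenderToHalfFloat() {
        int[] ids = new int[2];
        GLES20.glGenTextures(1, ids, 0);
        GLES20.glGenFramebuffers(1, ids, 1);

        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
        // ES 3.0 sized internal format; on a plain ES 2.0 context you would go
        // through OES_texture_half_float instead.
        GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA16F, 4, 4, 0,
                GLES30.GL_RGBA, GLES30.GL_HALF_FLOAT, null);

        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, ids[1]);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, ids[0], 0);
        boolean complete = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER)
                == GLES20.GL_FRAMEBUFFER_COMPLETE;

        // Clean up the probe objects.
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        GLES20.glDeleteFramebuffers(1, ids, 1);
        GLES20.glDeleteTextures(1, ids, 0);
        return complete;
    }
}

A complete framebuffer from the probe is a stronger signal than the extension string alone, although neither tells you whether the float path is handled natively or falls back to something slower.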

Related

Is it possible to program GPU for Android

I am programming on Android now and I wonder whether we can use GPGPU on Android yet. I once heard that RenderScript might be able to execute on the GPU in the future, but I wonder whether it is possible for us to program the GPU now. And if it is possible to program the Android GPU, where can I find some tutorials or sample programs? Thank you for your help and suggestions.
So far I know that the OpenGL ES library is GPU-accelerated, but I want to use the GPU for computing. What I want to do is accelerate computation, so I hope to use some library or API such as OpenCL.
2021-April Update
Google has announced deprecation of the RenderScript API in favor of Vulkan with Android 12.
The option for manufacturers to include the Vulkan API was made available in the Android 7.0 Compatibility Definition Document, section 3.3.1.1 Graphic Libraries.
Original Answer
Actually Renderscript Compute doesn't use the GPU at this time, but it is designed for it.
From Romain Guy, who works on the Android platform:
- Renderscript Compute is currently CPU bound, but with the for_each construct it will take advantage of multiple cores immediately
- Renderscript Compute was designed to run on the GPU and/or the CPU
- Renderscript Compute avoids having to write JNI code and gives you architecture-independent, high-performance results
- Renderscript Compute can, as of Android 4.1, benefit from SIMD optimizations (NEON on ARM)
https://groups.google.com/d/msg/android-developers/m194NFf_ZqA/Whq4qWisv5MJ
Yes, it is possible.
You can use either RenderScript or OpenGL ES 2.0.
RenderScript is available on Android 3.0 and above, and OpenGL ES 2.0 is available on about 95% of devices.
As of Android 4.2, Renderscript can involve the GPU in computations (in certain cases).
More information here: http://android-developers.blogspot.com/2013/01/evolution-of-renderscript-performance.html
As I understand it, ScriptIntrinsic subclasses are well optimized to run on the GPU on compatible hardware (for example, the Nexus 10 with its Mali-T604). Documentation:
http://developer.android.com/reference/android/renderscript/ScriptIntrinsic.html
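As a quick illustration of what that looks like in practice (a minimal sketch; the class name, method name and the assumption that you already have a Bitmap are mine), a built-in intrinsic can be dispatched in a few lines and the runtime decides where it runs:

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlur;

public final class BlurExample {
    // Blurs a bitmap with the built-in ScriptIntrinsicBlur kernel.
    public static Bitmap blur(Context context, Bitmap input, float radius) {
        RenderScript rs = RenderScript.create(context);
        Bitmap output = Bitmap.createBitmap(input.getWidth(), input.getHeight(), input.getConfig());

        Allocation in = Allocation.createFromBitmap(rs, input);
        Allocation out = Allocation.createFromBitmap(rs, output);

        ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        blur.setRadius(radius);   // valid range is (0, 25]
        blur.setInput(in);
        blur.forEach(out);        // the runtime schedules this on CPU cores, GPU or DSP

        out.copyTo(output);
        rs.destroy();
        return output;
    }
}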
Of course you can decide to use OpenCL, but Renderscript is guaranteed (by Google, being a part of Android itself) to run even on hardware which doesn't support GPGPU computation, and it will use any other acceleration means supported by the hardware it is running on.
There are several options: You can use OpenGL ES 2.0, which is supported by almost all devices but has limited functionality for GPGPU. You can use OpenGL ES 3.0, with which you can do much more in terms of GPU processing. Or you can use RenderScript, but this is platform-specific and furthermore does not give you any influence on whether your algorithms run on the GPU or the CPU. A summary about this topic can be found in this master's thesis: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You should also check out ogles_gpgpu, which allows GPGPU via OpenGL ES 2.0 on Android and iOS.

OpenGL ES 2.0 vs OpenGL 3 - Similarities and Differences

From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
What I'm curious to know is whether or not OpenGL 3 is comparable to OpenGL ES 2.0. In other words, given that I'm about to make a game engine for both desktop and Android, are there any differences I should be aware of in particular regarding OpenGL 3.x+ and OpenGL ES 2.0?
This can also include OpenGL 4.x versions as well.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I assumed from before.
Define "isn't anything like" it. Desktop GL 2.1 has a bunch of functions that ES 2.0 doesn't have. But there is a mostly common subset of the two that would work on both (though you'll have to fudge things for texture image loading, because there are some significant differences there).
Desktop GL 3.x provides a lot of functionality that unextended ES 2.0 simply does not. Framebuffer objects are core in 3.x, whereas they're extensions in 2.0 (and even then, you only get one destination image without another extension). There's transform feedback, integer textures, uniform buffer objects, and geometry shaders. These are all specific hardware features that either aren't available in ES 2.0, or are only available via extensions. Some of which may be platform-specific.
But there are also some good API convenience features available on desktop GL 3.x. Explicit attribute locations (layout(location=#)), VAOs, etc.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
It rather depends on how much work you intend to do and what you're prepared to do to make it work. At the very least, you should read up on what OpenGL ES 2.0 does, so that you can know how it differs from desktop GL.
It's easy to avoid the actual hardware features. Rendering to texture (or to multiple textures) is something that is called for by your algorithm. As is transform feedback, geometry shaders, etc. So how much you need it depends on what you're trying to do, and there may be alternatives depending on the algorithm.
The thing you're more likely to get caught on are the convenience features of desktop GL 3.x. For example:
layout(location = 0) in vec4 position;
This is not possible in ES 2.0. A similar definition would be:
attribute vec4 position;
That would work in ES 2.0, but it would not cause the position attribute to be associated with attribute index 0. That has to be done via code, using glBindAttribLocation before the program is linked. Desktop GL also allows this, but the book you linked to doesn't do it, for obvious reasons (it's a 3.3-based book, not one trying to maintain compatibility with older GL versions).
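For reference, a small sketch of that binding step on Android (Java, using android.opengl.GLES20; the program handle is assumed to already have its shaders attached, and "position" matches the attribute declaration above):

// Bind "position" to attribute index 0 *before* linking, which is the ES 2.0
// equivalent of the layout(location = 0) declaration in GL 3.x.
GLES20.glBindAttribLocation(program, 0, "position");
GLES20.glLinkProgram(program);

// Check the link status as usual.
int[] linked = new int[1];
GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linked, 0);
if (linked[0] == 0) {
    throw new RuntimeException(GLES20.glGetProgramInfoLog(program));
}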
Uniform buffers are another. The book makes liberal use of them, particularly for shared perspective matrices. It's a simple and effective technique for that. But ES 2.0 doesn't have that feature; it only has per-program uniforms.
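In ES 2.0 the usual workaround is simply to push the shared matrix into each program with ordinary uniform calls (a sketch; the uniform name u_Projection, the programs array and projectionMatrix are my assumptions):

// No uniform buffer objects in ES 2.0, so update the "shared" matrix per program.
for (int program : programs) {
    GLES20.glUseProgram(program);
    int location = GLES20.glGetUniformLocation(program, "u_Projection");
    GLES20.glUniformMatrix4fv(location, 1, false, projectionMatrix, 0);
}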
Again, you can code to the common subset if you like. That is, you can deliberately forgo using explicit attribute locations, uniform buffers, vertex array objects and the like. But that book isn't exactly going to help you do it either.
Will it be a waste of your time? Well, that book isn't for teaching you the OpenGL 3.3 API (it does do that, but that's not the point). The book teaches you graphics programming; it just so happens to use the 3.3 API. The skills you learn there (except those that are hardware based) transfer to any API or system you're using that involves shaders.
Put it this way: if you don't know graphics programming very much, it doesn't matter what API you use to learn. Once you've mastered the concepts, you can read the various documentation and understand how to apply those concepts to any new API easily enough.
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is that there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, anything using the variables that access the fixed-function data like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, gl_ModelViewMatrix and the various other matrices from the fixed-function pipeline.
If you use any of those features you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders. No "3d" is provided for you. You're required to write all projection, lighting, texture references, etc yourself.
If you're already doing that (which most modern games probably do), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very common (most tutorials still use that stuff), then you've got a bit of work cut out for you as you try to reproduce those features on your own.
There is an open source library, regal, which I think was started by NVidia. It's supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.

What Android devices don't support draw_texture?

I am developing an Android app using OpenGL ES for drawing, and I use the draw_texture extension as it's the fastest.
I read that you have to query the extension string to check whether the drawing method is supported on the phone and degrade gracefully if not. My main concern is: how common is it really to have a device which doesn't support this?
I mean, drawing textured quads (the only method standard in OpenGL) is so slow the game would hardly be enjoyable on these devices.
I'm just curious if it's worth the time to support these devices.
I don't know of an example of an Android device lacking the draw_texture extension, but it is likely that such devices exist in small numbers. It's definitely not worth dedicating effort to supporting them, but on the other hand it is nearly trivial to switch between drawTex and quads, especially if your code already supports rotated sprites.
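For what it's worth, the capability check and the two code paths it gates are only a few lines (a sketch in Java on an ES 1.x context; x, y, spriteWidth, spriteHeight and drawTexturedQuad are hypothetical names of mine):

// Query once at surface creation whether GL_OES_draw_texture is advertised.
String extensions = GLES10.glGetString(GLES10.GL_EXTENSIONS);
boolean useDrawTex = extensions != null && extensions.contains("GL_OES_draw_texture");

if (useDrawTex) {
    // Set the crop rectangle (in texels; flip the height if your texture is stored upside down).
    int[] crop = { 0, 0, spriteWidth, spriteHeight };
    GLES11.glTexParameteriv(GLES11.GL_TEXTURE_2D,
            GLES11Ext.GL_TEXTURE_CROP_RECT_OES, crop, 0);
    GLES11Ext.glDrawTexfOES(x, y, 0f, spriteWidth, spriteHeight);
} else {
    drawTexturedQuad(x, y, spriteWidth, spriteHeight);  // fallback: ordinary textured quad
}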

Now that we have Honeycomb, what is RenderScript?

So Android 3.0 (Honeycomb) is out now and boasts some kind of new hardware-accelerated 3D graphics engine called Renderscript.
Renderscript 3D graphics engine
Renderscript is a runtime 3D framework that provides both an API for building 3D scenes as well as a special, platform-independent shader language for maximum performance. Using Renderscript, you can accelerate graphics operations and data processing. Renderscript is an ideal way to create high-performance 3D effects for applications, wallpapers, carousels, and more.
And according to this blog post dating back to 2009, this graphics engine could already be found in use in a class named Fountain (even before Honeycomb). This last clue was helpful to me, because now I can easily find that class name in the Honeycomb code.
Can any of you supply any more insight into Renderscript, and ways to learn more about how to use it? Now that this has become a public API, I'm assuming that the people in the know may be able to get permission to talk about it freely (I hope).
RenderScript is several things:
- A language, similar to C99 with extra advanced features. It's pre-compiled on the host and re-compiled on the device, to achieve best performance.
- A rendering library (you can draw textured meshes, etc.)
- A compute library (to run heavy computations in RenderScript which can then be offloaded to the GPU, several CPUs or DSPs.)
Have you downloaded the Honeycomb SDK via the android tool? It downloads documentation, including the javadoc pages for Renderscript.

Floating point or fixed-point for Android NDK OpenGL apps?

I'm trying to decide on whether to primarily use floats or ints for all 3D-related elements in my app (which is C++ for the most part). I understand that most ARM-based devices have no hardware floating point support, so I figure that any heavy lifting with floats would be noticeably slower.
However, I'm planning to prep all data for the most part (i.e. have vertex buffers where applicable and transform using matrices that don't change a lot), so I'm just stuffing data down OpenGL's throat. Can I assume that this goes more or less straight to the GPU and will as such be reasonably fast? (Btw, the minimum requirement is OpenGL ES 2.0, so that presumably excludes older 1.x-based phones.)
Also - how big is the penalty when I mix and match ints and floats? Assuming that all my geometry is just pre-built float buffers, but I use ints for matrices since those do require expensive operations like matrix multiplications, how much wrath will I incur here?
By the way, I know that I should keep my expectations low (sounds like even asking for floats on the CPU is asking for too much), but is there anything remotely like 128-bit VMX registers?
(And I'm secretly hoping that fadden is reading this question and has an awesome answer.)
Older Android devices like the G1 and MyTouch have ARMv6 CPUs without floating point support. Most newer devices, like the Droid, Nexus One, and Incredible, use ARMv7-A CPUs that do have FP hardware. If your game is really 3D-intensive, it might demand more from the 3D implementation than the older devices can provide anyway, so you need to decide what level of hardware you want to support.
If you code exclusively in Java, your app will take advantage of the FP hardware when available. If you write native code with the NDK, and select the armv5te architecture, you won't get hardware FP at all. If you select the armv7-a architecture, you will, but your app won't be available on pre-ARMv7-A devices.
OpenGL from Java should be sitting on top of "direct" byte buffers now, which are currently slow to access from Java but very fast from the native side. (I don't know much about the GL implementation though, so I can't offer much more than that.)
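For illustration, this is the usual shape of that hand-off (a sketch; the triangle data and attribute index are mine, and the java.nio and android.opengl.GLES20 imports are assumed):

// Allocate a native-ordered direct buffer so GL can read the data without extra copies.
float[] vertices = {
        -1f, -1f, 0f,
         1f, -1f, 0f,
         0f,  1f, 0f,
};
FloatBuffer vertexBuffer = ByteBuffer
        .allocateDirect(vertices.length * 4)   // 4 bytes per float
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
vertexBuffer.put(vertices).position(0);

// Later, in the ES 2.0 draw path:
GLES20.glVertexAttribPointer(0, 3, GLES20.GL_FLOAT, false, 0, vertexBuffer);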
Some devices additionally support the NEON "Advanced SIMD" extension, which provides some fancy features beyond what the basic VFP support has. However, you must test for this at runtime if you want to use it (looks like there's sample code for this now -- see the NDK page for NDK r4b).
An earlier answer has some info about the gcc flags used by the NDK for "hard" fp.
Ultimately, the answer to "fixed or float" comes down to what class of devices you want your app to run on. It's certainly easier to code for armv7-a, but you cut yourself off from a piece of the market.
In my opinion you should stick with fixed-point as much as possible.
It's not only old phones that lack floating point support, but also newer ones such as the HTC Wildfire.
Also, if you choose to require ARMv7, please note that for example the Motorola Milestone (Droid for Europe) does feature an ARMv7 CPU, but because of the way Android 2.1 has been built for this device, the device will not use your armeabi-v7a libs (and might hide your app from the Market).
I personally worked around this by detecting ARMv7 support using the new cpufeatures library provided with NDK r4b, to load some armeabi-v7a lib on demand with dlopen().
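The cpufeatures/dlopen route is native code; a rough Java-level variant of the same idea (clearly a sketch, not what the author describes: it assumes you package two differently named libraries, here hypothetically libgame.so and libgame_neon.so, probes /proc/cpuinfo, and has the java.io imports in place) looks like this:

// Pick the NEON build of the native library at runtime when the CPU supports it.
static void loadNativeLib() {
    boolean hasNeon = false;
    try (BufferedReader reader = new BufferedReader(new FileReader("/proc/cpuinfo"))) {
        String line;
        while ((line = reader.readLine()) != null) {
            if (line.startsWith("Features") && line.contains("neon")) {
                hasNeon = true;
                break;
            }
        }
    } catch (IOException ignored) {
        // If the probe fails, stay on the portable build.
    }
    System.loadLibrary(hasNeon ? "game_neon" : "game");
}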
