What android devices don't support draw_texture?

I am developing an Android app that uses OpenGL ES for drawing, and I rely on the draw_texture extension because it's the fastest path.
I've read that you have to query the extension string to check whether this drawing method is supported on a given phone, and degrade gracefully if it isn't. My main concern is: how common is it really to run into a device that doesn't support it?
I mean, drawing textured quads (the only method that is standard in OpenGL ES) is so slow that the game would hardly be enjoyable on those devices anyway.
I'm just curious whether it's worth the time to support them.

I don't know of a specific Android device lacking the draw_texture extension, but if such devices exist at all, it is most likely in very small numbers. It's probably not worth dedicating much effort to supporting them; on the other hand, it is nearly trivial to switch between drawTex and quads, especially if your code already supports rotated sprites.
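As a rough illustration (a minimal sketch, not production code; the quad fallback is only stubbed out), the runtime check and the switch between the two paths could look like this:

    import javax.microedition.khronos.opengles.GL10;
    import android.opengl.GLES11Ext;

    public class SpriteRenderer {
        private boolean useDrawTexture;

        // Call once, e.g. from onSurfaceCreated(), while a GL context is current.
        public void detectDrawTexture(GL10 gl) {
            String extensions = gl.glGetString(GL10.GL_EXTENSIONS);
            useDrawTexture = extensions != null
                    && extensions.contains("GL_OES_draw_texture");
        }

        public void drawSprite(GL10 gl, int x, int y, int w, int h) {
            if (useDrawTexture) {
                // draw_texture path: screen-space, axis-aligned, no rotation.
                // Assumes the crop rectangle was set via GL_TEXTURE_CROP_RECT_OES.
                GLES11Ext.glDrawTexfOES(x, y, 0f, w, h);
            } else {
                drawTexturedQuad(gl, x, y, w, h);
            }
        }

        // Fallback: a standard two-triangle textured quad (details omitted).
        private void drawTexturedQuad(GL10 gl, int x, int y, int w, int h) {
            // glDrawArrays with a 4-vertex strip, transformed by the modelview matrix.
        }
    }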

Related

Which android GPUs support "render to float texture"?

I believe that OpenGL ES 3.2 (and 3.1 plus the Android Extension Pack, AEP) support it, but I've heard that some GPUs with earlier versions (specifically 3.1 without AEP) also expose this particular extension.
My question is: how can I tell which GPUs have that particular extension, enabling one to render to a float texture?
I've searched manufacturer sites, but haven't been able to find this info (maybe I'm looking in the wrong place?)
I'm also a little wary, I heard that one manufacturer added this ability in their driver... but I wonder if that is a software solution (and therefore much slower, defeating the purpose).
Further to this, it is of course possible to do the encoding/decoding in your own shader, but wouldn't that incur significant overhead? Or maybe it's fine?
[BTW: the reason I'm asking is I want to purchase a phone to play around with general-purpose computing on mobile GPUs, and the latest phones are much more expensive]
Many thanks for any help! I've been trying to find this on-and-off for months...
Float rendering support is mandatory in OpenGL ES 3.2. It is not required for OpenGL ES 3.0 / 3.1 / 3.1 + AEP.
For earlier implementations you want to use a platform exposing the EXT_color_buffer_half_float and/or EXT_color_buffer_float extension.
Note that floating point rendering is relatively expensive due to the additional bandwidth, even when supported natively in the hardware. For higher dynamic range consider using something like RGB10_A2 if you can, it's smaller and faster (and supported in 3.0 core).
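In practice you can also just test this at runtime instead of tracking down per-GPU documentation: query the extension string and check whether a float colour attachment actually comes back framebuffer-complete. A minimal GLES 3.0 sketch (class and method names are mine; it must run on a thread with a current GL context):

    import android.opengl.GLES30;

    public final class FloatTargetCheck {
        public static boolean supportsFloatRendering() {
            String ext = GLES30.glGetString(GLES30.GL_EXTENSIONS);
            return ext != null && (ext.contains("GL_EXT_color_buffer_float")
                    || ext.contains("GL_EXT_color_buffer_half_float"));
        }

        // Returns the FBO name, or 0 if a float target is not renderable on this device.
        public static int createFloatTarget(int width, int height) {
            int[] ids = new int[2];
            GLES30.glGenTextures(1, ids, 0);
            GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, ids[0]);
            // RGBA16F is usually enough and cheaper (bandwidth-wise) than RGBA32F.
            GLES30.glTexStorage2D(GLES30.GL_TEXTURE_2D, 1, GLES30.GL_RGBA16F, width, height);

            GLES30.glGenFramebuffers(1, ids, 1);
            GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, ids[1]);
            GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
                    GLES30.GL_TEXTURE_2D, ids[0], 0);

            int status = GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER);
            return status == GLES30.GL_FRAMEBUFFER_COMPLETE ? ids[1] : 0;
        }
    }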

iOS performance tips the same on Android?

I am looking through all of the iOS performance tips regarding OpenGL ES 2.0 drawing, and I am wondering: do they all apply equally to Android development?
Are the underlying processes the same regardless of platform, i.e. are these just 'pure' OpenGL tips?
Specifically, does this tip apply to Android development too? Because it would change the way I currently create my attribute data:
When you are designing your vertex structure, align the beginning of each attribute to an offset that is either a multiple of its component size or 4 bytes, whichever is larger. When an attribute is misaligned, iOS must perform additional processing before passing the data to the graphics hardware.
As a general principle, the same best practices apply to both platforms, since many of them are common-sense approaches (batching is one example).
On the other hand, the same golden rules do not necessarily apply to all Android devices, and I would like to explain what I mean.
The point is mainly the GPU architecture. The best practices published for iOS devices are written with PowerVR GPUs (excellent GPUs) in mind, and those have characteristics that do not necessarily match the other GPUs you will find across the huge range of Android devices.
For instance, on the Android market you will certainly find GPUs from the Adreno family, the Mali family, and so on.
This means that, to get the best out of all these devices, you should write your code according to each manufacturer's best practices.
In practice, that means studying the SDK recommendations of each of them.
So, to cut a long story short: some of those best practices apply to Android as well, while others may conflict with the architectural specifics of particular mobile GPUs.
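For what it's worth, the alignment tip quoted in the question is cheap to follow on Android as well. Here is a minimal sketch of an interleaved layout that satisfies it (the shader and attribute locations are assumed, and a VBO must be bound to GL_ARRAY_BUFFER before bind() is called):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import android.opengl.GLES20;

    public final class VertexLayout {
        // Per-vertex layout, stride = 24 bytes:
        //   position : 3 floats at offset 0  (4-byte components, 4-byte aligned)
        //   texcoord : 2 floats at offset 12 (4-byte aligned)
        //   color    : 4 unsigned bytes at offset 20 (1-byte components, padded to 4)
        public static final int STRIDE = 24;

        public static ByteBuffer allocate(int vertexCount) {
            return ByteBuffer.allocateDirect(vertexCount * STRIDE)
                    .order(ByteOrder.nativeOrder());
        }

        public static void bind(int posLoc, int uvLoc, int colorLoc) {
            GLES20.glVertexAttribPointer(posLoc, 3, GLES20.GL_FLOAT, false, STRIDE, 0);
            GLES20.glVertexAttribPointer(uvLoc, 2, GLES20.GL_FLOAT, false, STRIDE, 12);
            GLES20.glVertexAttribPointer(colorLoc, 4, GLES20.GL_UNSIGNED_BYTE, true, STRIDE, 20);
            GLES20.glEnableVertexAttribArray(posLoc);
            GLES20.glEnableVertexAttribArray(uvLoc);
            GLES20.glEnableVertexAttribArray(colorLoc);
        }
    }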

Android 3d Graphics library that supports shadow and reflection

I've been digging for a while through 3D graphics tutorials for Android. I tried raw OpenGL ES and I also tried min3d. I found out that things can be very complex, or very easy if someone else has done the math for you. Min3D is really great and easy to use, but it is also really minimalistic: I can't find how to make lights actually cast shadows (and I doubt it's supported), how to make a surface reflective, or how to make a surface more or less diffuse.
Is there any library that offers scene handling and supports shadows (including shadows cast by diffuse light), control over materials to achieve different levels of diffusion, reflections, and transparency?
Note: I forgot to mention that I need a free framework.
Since Eric's answer convinced me that realistic 3D is hard to achieve on mobile devices, I will also accept answers that explain how to fake these effects (or links). Again, the effects I need are:
shadows
reflections
from glossy material
from matte material
transparency (I think that's in the min3d examples, but it's here for completeness)
I've seen shadows in 3D Android games, although I'm not 100% convinced they are real (cast by the objects).
I can't think of any libraries that meet your requirements for a mobile platform, but there are several middleware products to choose from:
Unity3d : http://unity3d.com/
Unreal SDK : http://udk.com
Ogre : http://www.ogre3d.org/
All of these offer scene management, lighting, material management, etc., yet I doubt any of them is a silver bullet for what you are asking. Regardless of pricing and licensing (up to 3500 dollars for Unity3D Pro mobile), you will still have to do a serious amount of coding yourself, often in a language you may not be familiar with.
Also, keep in mind that a lot of the gfx in opengl-es based products (mostly games) are often faked. While it's perfectly possible to have dynamic lighting, shadows and transparency, these things can be crippling for your performance if you have a lot of geometry. After all, a phone or tablet is not the powerhouse that a desktop cpu/gpu is these days. Not yet anyway.
Another thing to note: I'm not sure what level of realism you are trying to achieve, but all the things you mention are typically associated with raytracing/raycasting. And that's a whole other bag of tricks as you can forget about real-time interaction, especially on mobile devices.
No way around it: coding and creating with real-time graphics in mind is hard and it's even harder on mobile platforms.
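To give one concrete example of faking it: a classic cheap shadow is to draw the mesh a second time, squashed flat onto the ground plane and rendered in a semi-transparent dark colour with blending enabled. A rough sketch using Android's Matrix helper (the surrounding renderer and shader are assumed):

    import android.opengl.Matrix;

    public final class FakeShadow {
        // Returns a model matrix that squashes the mesh onto the y = 0 plane,
        // approximating a shadow cast by a light directly overhead.
        public static float[] flattenOntoGround(float[] modelMatrix) {
            float[] flatten = new float[16];
            float[] shadowModel = new float[16];
            Matrix.setIdentityM(flatten, 0);
            flatten[5] = 0f; // zero the Y scale (column-major element [1][1])
            Matrix.multiplyMM(shadowModel, 0, flatten, 0, modelMatrix, 0);
            return shadowModel;
        }
    }

Draw the object normally first, then once more with this matrix, alpha blending enabled, and a flat dark colour. It is nowhere near a real shadow map, but it is almost free.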

How to estimate a program's hardware requirements when porting from Windows to Android?

I want to port a 3D program written in OpenGL on the Windows platform to Android, but I wonder whether it can run smoothly on typical Android devices, so I want to estimate how much hardware resource it needs to run smoothly. It's something like the recommended hardware requirements that a company publishes for a piece of software or a 3D game. I don't know how to work out such requirements for my program when porting it to Android.
I used gdebugger and it gave me some information, but I don't think that is enough. Does anyone have an idea or a solution? Many thanks in advance!
If your program is simple enough, you could write up some estimates about texture fill rate, which is a pretty basic (and old) metric of rendering performance. Nearly every 3D chip comes with a theoretical fill rate, so you can get the theoretical numbers of both your desktop system and some Android phones.
The texture memory footprint is another thing that you can estimate, especially using gdebugger. Once again, these numbers are known for most chips.
This is a quick way to produce some numbers, obviously without any real life performance guarantees.
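For example, a back-of-the-envelope fill-rate estimate could look like this (all the numbers below are illustrative assumptions, not measurements):

    public final class FillRateEstimate {
        public static void main(String[] args) {
            long width = 800, height = 480; // assumed target resolution
            int fps = 60;                   // desired frame rate
            double overdraw = 2.5;          // assumed average overdraw per pixel

            double requiredMPix = width * height * (double) fps * overdraw / 1e6;
            // ~57.6 Mpixels/s with these numbers; compare it against the
            // theoretical fill rate published for the target GPU.
            System.out.printf("Required fill rate: ~%.1f Mpixels/s%n", requiredMPix);
        }
    }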
The best way would be to test it on an actual device, and get an idea of what hardware works well. You could distribute a beta app and get some feedback too.
It depends on the feature set you use. For example, if you use FBOs, the device will have to support the framebuffer object extension; if you use MSAA or smooth lines, the device will have to support the corresponding extensions.
After listing your requirements, you can use glGet to check for device support:
http://www.opengl.org/sdk/docs/man/xhtml/glGet.xml
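As a small sketch of what such a check might look like (the specific extension and limit below are placeholders; substitute whatever your port actually needs):

    import android.opengl.GLES20;

    public final class CapabilityCheck {
        // Must be called on a thread with a current GL context.
        public static boolean meetsRequirements() {
            String ext = GLES20.glGetString(GLES20.GL_EXTENSIONS);

            int[] maxTextureSize = new int[1];
            GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);

            boolean hasDepthTexture = ext != null && ext.contains("GL_OES_depth_texture");
            return hasDepthTexture && maxTextureSize[0] >= 2048;
        }
    }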

Floating point or fixed-point for Android NDK OpenGL apps?

I'm trying to decide on whether to primarily use floats or ints for all 3D-related elements in my app (which is C++ for the most part). I understand that most ARM-based devices have no hardware floating point support, so I figure that any heavy lifting with floats would be noticeably slower.
However, I'm planning to prep all data for the most part (i.e. have vertex buffers where applicable and transform using matrices that don't change a lot), so I'm just stuffing data down OpenGL's throat. Can I assume that this goes more or less straight to the GPU and will as such be reasonably fast? (Btw, the minimum requirement is OpenGL ES 2.0, so that presumably excludes older 1.x-based phones.)
Also, what is the penalty when I mix and match ints and floats? Assuming that all my geometry is just pre-built float buffers, but I use ints for matrices since those do require expensive operations like matrix multiplication, how much wrath will I incur here?
By the way, I know that I should keep my expectations low (sounds like even asking for floats on the CPU is asking for too much), but is there anything remotely like 128-bit VMX registers?
(And I'm secretly hoping that fadden is reading this question and has an awesome answer.)
Older Android devices like the G1 and MyTouch have ARMv6 CPUs without floating point support. Most newer devices, like the Droid, Nexus One, and Incredible, use ARMv7-A CPUs that do have FP hardware. If your game is really 3D-intensive, it might demand more from the 3D implementation than the older devices can provide anyway, so you need to decide what level of hardware you want to support.
If you code exclusively in Java, your app will take advantage of the FP hardware when available. If you write native code with the NDK, and select the armv5te architecture, you won't get hardware FP at all. If you select the armv7-a architecture, you will, but your app won't be available on pre-ARMv7-A devices.
OpenGL from Java should be sitting on top of "direct" byte buffers now, which are currently slow to access from Java but very fast from the native side. (I don't know much about the GL implementation though, so I can't offer much more than that.)
Some devices additionally support the NEON "Advanced SIMD" extension, which provides some fancy features beyond what the basic VFP support has. However, you must test for this at runtime if you want to use it (looks like there's sample code for this now -- see the NDK page for NDK r4b).
An earlier answer has some info about the gcc flags used by the NDK for "hard" fp.
Ultimately, the answer to "fixed or float" comes down to what class of devices you want your app to run on. It's certainly easier to code for armv7-a, but you cut yourself off from a piece of the market.
In my opinion you should stick with fixed-point as much as possible.
It's not only old phones that lack floating point support; some newer ones, such as the HTC Wildfire, do too.
Also, if you choose to require ARMv7, note that, for example, the Motorola Milestone (the European Droid) does have an ARMv7 CPU, but because of the way Android 2.1 was built for this device, it will not use your armeabi-v7a libs (and the Market might even hide your app from it).
I personally worked around this by detecting ARMv7 support using the new cpufeatures library provided with NDK r4b, to load some armeabi-v7a lib on demand with dlopen().
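That workaround is done in native code with the cpufeatures library and dlopen(); for illustration only, a rough Java-side equivalent is to sniff /proc/cpuinfo (much as cpufeatures does) and then load one of two bundled native libraries, both packaged under armeabi/ so the Market/installer issue above doesn't bite (the library names here are hypothetical):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public final class NativeLoader {
        // Rough check for ARMv7/VFP support by reading /proc/cpuinfo,
        // mirroring what the NDK's cpufeatures library does natively.
        private static boolean cpuHasArmV7() {
            try (BufferedReader r = new BufferedReader(new FileReader("/proc/cpuinfo"))) {
                String line;
                while ((line = r.readLine()) != null) {
                    if (line.contains("ARMv7") || line.contains("vfpv3")) {
                        return true;
                    }
                }
            } catch (IOException ignored) {
            }
            return false;
        }

        public static void load() {
            if (cpuHasArmV7()) {
                System.loadLibrary("game_v7"); // VFP-optimised build
            } else {
                System.loadLibrary("game");    // soft-float / fixed-point build
            }
        }
    }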
