So Android 3.0 (Honeycomb) is out now and boasts some kind of new hardware-accelerated 3D graphics engine called Renderscript.
Renderscript 3D graphics engine
Renderscript is a runtime 3D framework that provides both an API for building 3D scenes as well as a special, platform-independent shader language for maximum performance. Using Renderscript, you can accelerate graphics operations and data processing. Renderscript is an ideal way to create high-performance 3D effects for applications, wallpapers, carousels, and more.
And according to this blog post dating from 2009, this graphics engine was already being used in a class named Fountain (even before Honeycomb). This last clue was helpful to me, because now I can easily find that class name in the Honeycomb code.
Can any of you supply more insight into Renderscript, and ways to learn more about how to use it? Now that this has become a public API, I'm assuming that the people in the know may be able to get permission to talk about it freely (I hope).
RenderScript is several things:
- A language, similar to C99 with extra advanced features. It's pre-compiled on the host and re-compiled on the device, to achieve best performance.
- A rendering library (you can draw textured meshes, etc.)
- A compute library (to run heavy computations written in RenderScript, which can then be offloaded to the GPU, multiple CPU cores, or DSPs; a small host-side sketch follows below.)
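To make the compute side more concrete, here is a minimal host-side sketch in Java. It assumes a kernel file (say invert.rs) from which the build tools reflect a ScriptC_invert class with a forEach_invert() method; the file name, kernel name, and bitmap handling are purely illustrative.

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.RenderScript;

// Hypothetical example: invert.rs declares a kernel named "invert"; the build
// tools reflect it into a ScriptC_invert class with a forEach_invert() method.
public class InvertFilter {
    public static Bitmap apply(Context context, Bitmap input) {
        RenderScript rs = RenderScript.create(context);

        // Wrap the bitmaps in Allocations so the runtime can move the data
        // to whichever processor ends up executing the kernel.
        Allocation in = Allocation.createFromBitmap(rs, input);
        Allocation out = Allocation.createTyped(rs, in.getType());

        ScriptC_invert script = new ScriptC_invert(rs);
        script.forEach_invert(in, out);  // run the kernel over every element

        Bitmap result = input.copy(input.getConfig(), true);
        out.copyTo(result);
        rs.destroy();
        return result;
    }
}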
Have you downloaded the Honeycomb SDK via the android tool? It downloads documentation, including the javadoc pages for Renderscript.
Related
I'm about to start working on an augmented reality project that will involve using the EPSON Moverio AR glasses or similar. Assuming we go for those glasses, they run on Android 4.0.4 - API 15. The app will involve near realtime analysis of video (frames from the glasses camera) for feature detection / tracking with markers and overlaying 3d objects in the 'real world' based on the markers.
So far then, on the technical side, it looks like we'll be dealing with:
API 15
OpenCV
OpenGLES2
Considering the above, I'm wondering if it's worth doing it all through the NDK, using a single NativeActivity with the android_native_app_glue code. When I say worth it, I mean performance-wise.
Sure, doing it all on the C/C++ side has for instance the advantage that the code could then potentially be ported with minimal modification to run on other environments. But OpenCV does have Java bindings for Android and GL can also be used to a certain extent from Java. So I'm just wondering if performance-wise it's worth it or it would be about the same as, say, using a GLSurfaceView.
I work in augmented reality. The vast majority of applications I've seen have been native. Google recommends avoiding native applications unless the gains are absolutely necessary. I think AR is one of the relatively few cases where it is necessary. The benefits I'm aware of are:
Native camera access will allow you to get a higher capture framerate. Passing the data to the Java layer slows this down considerably. Even OpenCV's non-native capture can be slower in Java because OpenCV primarily maintains the data in native objects. Your capture framerate is a limiting factor on how fast you can update the pose information for your objects. Beware though, OpenCV's native camera will not work on devices running Qualcomm's MSM-optimized fork of Android - this includes many Snapdragon devices.
Every call to an OpenGL method in Java not only has a cost related to dropping into native code, it also performs quite a few additional checks. Look through GLES20.cpp, which contains the native implementation of the GLES20 class's methods. You'll see that you could bypass quite a lot of logic by using native calls. This is fine in most mobile applications, but 3D rendering often gets a significant benefit from bypassing those checks and the JNI overhead. This is even more important in AR because you will already be swamping the system with CV processing.
You will very likely want your detection-related code in native. OpenCV has samples if you want to see the difference between native and Java detection performance. The former will use fewer resources and be more consistent. Using a native application means that you can call your native functions without paying the cost of passing large amounts of data from Java to native.
Native sensor access is more efficient, and the rate stays far more consistent in native thanks to the lack of garbage collection and JNI. This is relevant if you will be using IMU data in interesting ways.
You may be able to build a non-native application that has most of its code in native and runs well despite being Java-based, but it is considerably more difficult.
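A rough sketch of that "mostly native" middle ground: the application stays Java-based, but the heavy CV work lives in a native library reached through JNI. The library name, method signature, and frame format here are just placeholders.

// The app stays in Java, but the per-frame CV work is done in a native
// library built with the NDK. Names below are illustrative only.
public class NativeTracker {
    static {
        System.loadLibrary("tracker"); // loads libtracker.so
    }

    // Processes one camera frame (e.g. NV21 data from onPreviewFrame) and
    // writes the estimated 4x4 marker pose into outPose. Implemented in C++.
    public static native boolean processFrame(byte[] frame, int width,
                                              int height, float[] outPose);
}

The trade-off is that each call still crosses the JNI boundary with the frame data, which is exactly the overhead the fully native approach avoids.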
Is RenderScript the only device-independent way to run GPGPU code on Android?
I don't count Tegra, as only a few phones have it.
RenderScript is the official Android compute platform. As a result it will be on all Android devices. It was designed specifically to address the problem of running one code base across many different devices.
Well, using RenderScript doesn't necessarily mean that your code will run on the GPU. It might also use the CPU and (hopefully) parallelize tasks across several CPU cores and use CPU vector instructions. But as far as I know, you can never be sure about that, and the decision process is kind of a black box.
If you want to make sure that your code runs on the GPU, you can "simulate" some GPGPU functions with OpenGL ES 2.0 shaders. This will run on all devices that support OpenGL ES 2.0. It depends on what you want to do, but for example many image processing functions can be implemented very efficiently this way. There is a library called ogles_gpgpu that provides an architecture for GPGPU on Android and iOS systems: https://github.com/internaut/ogles_gpgpu
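To sketch what that looks like on the Java side (independent of ogles_gpgpu, and with a deliberately trivial placeholder shader): the "computation" is a fragment shader run over a full-screen quad rendered into a framebuffer object, with the result read back via glReadPixels. Compiling the shader looks roughly like this:

import android.opengl.GLES20;

public class ShaderCompute {
    // Placeholder fragment shader: a real GPGPU pass would do the actual
    // per-pixel computation here instead of a plain texture copy.
    private static final String FRAGMENT_SRC =
            "precision mediump float;\n" +
            "uniform sampler2D uInput;\n" +
            "varying vec2 vTexCoord;\n" +
            "void main() {\n" +
            "  gl_FragColor = texture2D(uInput, vTexCoord);\n" +
            "}\n";

    public static int compileFragmentShader() {
        int shader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
        GLES20.glShaderSource(shader, FRAGMENT_SRC);
        GLES20.glCompileShader(shader);

        int[] status = new int[1];
        GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
        if (status[0] == 0) {
            throw new RuntimeException(GLES20.glGetShaderInfoLog(shader));
        }
        return shader;
    }
}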
OpenGL ES 3.1 also supports "Compute Shaders", but few devices support it yet.
Renderscript is an Android computation engine that lets you use CPU/GPU native hardware acceleration in order to boost applications, for example in image processing and computer vision algorithms.
Is there a similar thing in iOS and Windows Phone 7/8?
The RenderScript compatibility library is designed to compile for almost any POSIX system. It would be very easy to get it running on other platforms.
I can't speak for Windows Phone but on iOS it would be vImage running on the Accelerate Framework. Just like Renderscript, it is dynamically optimized for the CPU on the target platform.
vImage optimizes image processing by using the CPU's vector processor. If a vector processor is not available, vImage uses the next best available option. This framework allows you to reap the benefits of vector processors without the need to write vectorized code.
https://developer.apple.com/library/mac/documentation/performance/Conceptual/vImage/Introduction/Introduction.html
I can't speak for Windows Phone, but on iOS it would be Apple Metal, its language specification being almost the same as Renderscript's C99.
For iOS, it is the newly introduced Swift, I guess.
Maybe it is worth trying out, but I'm not an iOS developer, so I can't say anything about its performance; the demos at WWDC looked promising, though. Also, unlike Renderscript, Swift seems to be designed for graphics: the Renderscript support for graphics has been deprecated and Renderscript has turned more into a general computation framework (which can of course be used as a backend for graphics calculations).
https://developer.apple.com/library/prerelease/ios/documentation/swift/conceptual/swift_programming_language/TheBasics.html
I am programming on Android now and I wonder whether we can use GPGPU on Android yet. I once heard that Renderscript may eventually execute on the GPU. Is it possible to program the GPU for general-purpose computation today? If so, where can I find some tutorials or sample programs? Thank you for your help and suggestions.
So far I know that the OpenGL ES library is GPU-accelerated, but I want to use the GPU for computing. What I want to do is accelerate computation, so I would like to use an API such as OpenCL.
2021-April Update
Google has announced deprecation of the RenderScript API in favor of Vulkan with Android 12.
The option for manufacturers to include the Vulkan API was made available in Android 7.0 Compatibility Definition Document - 3.3.1.1. Graphic Libraries.
Original Answer
Actually, Renderscript Compute doesn't use the GPU at this time, but it is designed for it.
From Romain Guy who works on the Android platform:
Renderscript Compute is currently CPU bound but with the for_each construct it will take advantage of multiple cores immediately
Renderscript Compute was designed to run on the GPU and/or the CPU
Renderscript Compute avoids having to write JNI code and gives you architecture independent, high performance results
Renderscript Compute can, as of Android 4.1, benefit from SIMD optimizations (NEON on ARM)
https://groups.google.com/d/msg/android-developers/m194NFf_ZqA/Whq4qWisv5MJ
Yes, it is possible.
You can use either Renderscript or OpenGL ES 2.0.
Renderscript is available on Android 3.0 and above, and OpenGL ES 2.0 is available on about 95% of devices.
As of Android 4.2, Renderscript can involve GPU in computations (in certain cases).
More information here: http://android-developers.blogspot.com/2013/01/evolution-of-renderscript-performance.html
As I understand it, ScriptIntrinsic subclasses are well optimized to run on the GPU on compatible hardware (for example, the Nexus 10 with its Mali T604). Documentation:
http://developer.android.com/reference/android/renderscript/ScriptIntrinsic.html
Of course you can decide to use OpenCL, but Renderscript is guaranteed (by Google, being part of Android itself) to run even on hardware which doesn't support GPGPU computation, and it will use whatever other acceleration means the hardware it is running on supports.
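To illustrate the intrinsics, here is roughly what a Gaussian blur through ScriptIntrinsicBlur looks like on API 17+ (the bitmap handling and radius are just placeholders). On hardware with a suitable driver the runtime may execute this on the GPU; otherwise it falls back to the CPU.

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlur;

public final class BlurHelper {
    public static Bitmap blur(Context context, Bitmap source, float radius) {
        RenderScript rs = RenderScript.create(context);
        Allocation in = Allocation.createFromBitmap(rs, source);
        Allocation out = Allocation.createTyped(rs, in.getType());

        // The intrinsic is a pre-built, pre-optimized kernel shipped with the platform.
        ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        blur.setRadius(radius);   // must be in (0, 25]
        blur.setInput(in);
        blur.forEach(out);

        Bitmap result = source.copy(source.getConfig(), true);
        out.copyTo(result);
        rs.destroy();
        return result;
    }
}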
There are several options: You can use OpenGL ES 2.0, which is supported by almost all devices but has limited functionality for GPGPU. You can use OpenGL ES 3.0, with which you can do much more in terms of GPU processing. Or you can use RenderScript, but this is platform-specific and furthermore does not give you any influence on whether your algorithms run on the GPU or the CPU. A summary about this topic can be found in this master's thesis: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You should also check out ogles_gpgpu, which allows GPGPU via OpenGL ES 2.0 on Android and iOS.
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I had assumed before.
What I'm curious to know is whether or not OpenGL 3 is comparable to OpenGL ES 2.0. In other words, given that I'm about to make a game engine for both desktop and Android, are there any differences I should be aware of in particular regarding OpenGL 3.x+ and OpenGL ES 2.0?
This can also include OpenGL 4.x versions as well.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
From what I've read, it appears that OpenGL ES 2.0 isn't anything like OpenGL 2.1, which is what I had assumed before.
Define "isn't anything like". Desktop GL 2.1 has a bunch of functions that ES 2.0 doesn't have. But there is a mostly common subset of the two that would work on both (though you'll have to fudge things for texture image loading, because there are some significant differences there).
Desktop GL 3.x provides a lot of functionality that unextended ES 2.0 simply does not. Framebuffer objects are core in 3.x, whereas they're extensions in 2.0 (and even then, you only get one destination image without another extension). There's transform feedback, integer textures, uniform buffer objects, and geometry shaders. These are all specific hardware features that either aren't available in ES 2.0, or are only available via extensions. Some of which may be platform-specific.
But there are also some good API convenience features available on desktop GL 3.x. Explicit attribute locations (layout(location=#)), VAOs, etc.
For example, if I start reading this book, am I wasting my time if I plan to port the engine to Android (using NDK of course ;) )?
It rather depends on how much work you intend to do and what you're prepared to do to make it work. At the very least, you should read up on what OpenGL ES 2.0 does, so that you can know how it differs from desktop GL.
It's easy to avoid the actual hardware features. Rendering to texture (or to multiple textures) is something that is called for by your algorithm. As is transform feedback, geometry shaders, etc. So how much you need it depends on what you're trying to do, and there may be alternatives depending on the algorithm.
The thing you're more likely to get caught on are the convenience features of desktop GL 3.x. For example:
layout(location = 0) in vec4 position;
This is not possible in ES 2.0. A similar definition would be:
attribute vec4 position;
That would work in ES 2.0, but it would not cause the position attribute to be associated with the attribute index 0. That has to be done via code, using glBindAttribLocation before the program is linked. Desktop GL also allows this, but the book you linked to doesn't do it. For obvious reasons (it's a 3.3-based book, not one trying to maintain compatibility with older GL versions).
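On Android that binding looks something like this from Java (the same call exists in C; the attribute name and shader handles are illustrative):

import android.opengl.GLES20;

public final class ProgramUtil {
    // ES 2.0 GLSL has no layout(location = ...) qualifier, so the attribute
    // index has to be assigned from code, before the program is linked.
    public static int link(int vertexShader, int fragmentShader) {
        int program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, vertexShader);
        GLES20.glAttachShader(program, fragmentShader);
        GLES20.glBindAttribLocation(program, 0, "position"); // attribute vec4 position;
        GLES20.glLinkProgram(program);
        return program;
    }
}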
Uniform buffers are another. The book makes liberal use of them, particularly for shared perspective matrices. It's a simple and effective technique for that. But ES 2.0 doesn't have that feature; it only has per-program uniforms.
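In ES 2.0 the shared perspective matrix therefore has to be uploaded into each program individually whenever it changes, roughly like this (the uniform name is illustrative):

import android.opengl.GLES20;

public final class UniformUtil {
    // Without uniform buffer objects, every program that needs the shared
    // matrix gets its own copy, set through an ordinary uniform.
    public static void setProjection(int program, float[] projection) {
        GLES20.glUseProgram(program);
        int loc = GLES20.glGetUniformLocation(program, "uProjection");
        GLES20.glUniformMatrix4fv(loc, 1, false, projection, 0);
    }
}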
Again, you can code to the common subset if you like. That is, you can deliberately forgo using explicit attribute locations, uniform buffers, vertex array objects and the like. But that book isn't exactly going to help you do it either.
Will it be a waste of your time? Well, that book isn't for teaching you the OpenGL 3.3 API (it does do that, but that's not the point). The book teaches you graphics programming; it just so happens to use the 3.3 API. The skills you learn there (except those that are hardware based) transfer to any API or system you're using that involves shaders.
Put it this way: if you don't know graphics programming very much, it doesn't matter what API you use to learn. Once you've mastered the concepts, you can read the various documentation and understand how to apply those concepts to any new API easily enough.
OpenGL ES 2.0 (and 3.0) is mostly a subset of Desktop OpenGL.
The biggest difference is that there is no legacy fixed-function pipeline in ES. What's the fixed-function pipeline? Anything having to do with glVertex, glColor, glNormal, glLight, glPushMatrix, glPopMatrix, glMatrixMode, etc., and, in GLSL, any of the variables that access the fixed-function data, like gl_Vertex, gl_Normal, gl_Color, gl_MultiTexCoord, gl_FogCoord, gl_ModelViewMatrix and the various other matrices from the fixed-function pipeline.
If you use any of those features you'll have some work cut out for you. OpenGL ES 2.0 and 3.0 are just plain shaders. No "3D" is provided for you. You're required to write all projection, lighting, texture references, etc., yourself.
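For example, the matrix work that gl_ModelViewProjectionMatrix used to do for you now happens in your own code, with the result handed to the shader as a plain uniform. On Android the android.opengl.Matrix helpers are one way to do it (a rough sketch; the uniform name is illustrative):

import android.opengl.GLES20;
import android.opengl.Matrix;

public final class Transforms {
    // Build the model-view-projection matrix on the CPU and upload it as a
    // uniform -- the work the fixed-function matrix stack used to do for you.
    public static void uploadMvp(int program, float[] model, float[] view,
                                 float[] projection) {
        float[] mv = new float[16];
        float[] mvp = new float[16];
        Matrix.multiplyMM(mv, 0, view, 0, model, 0);
        Matrix.multiplyMM(mvp, 0, projection, 0, mv, 0);

        int loc = GLES20.glGetUniformLocation(program, "uMvpMatrix");
        GLES20.glUniformMatrix4fv(loc, 1, false, mvp, 0);
    }
}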
If you're already doing that (which most modern games probably do), you might not have too much work. If, on the other hand, you've been using those old deprecated OpenGL features, which in my experience is still very common (most tutorials still use that stuff), then you've got a bit of work cut out for you as you try to reproduce those features on your own.
There is an open source library, regal, which I think was started by NVidia. It's supposed to reproduce that stuff. Be aware that the whole fixed-function system was fairly inefficient, which is one of the reasons it was deprecated, but it might be a way to get things working quickly.