For fast transfer of texels to/from an EGL surface, we have successfully used GraphicBuffer as described in this thread:
How to use GraphicBuffer in android ndk
However, on Android 7.0 that is no longer an option, because GraphicBuffer relies on the private library libui.so. So what replaces it? What is the Google-approved method of doing a fast transfer to/from an EGL surface?
In Android 8 (API level 26), the upcoming Oreo release, a HardwareBuffer wrapper has been introduced. I've compared the HardwareBuffer and GraphicBuffer classes: both provide an interface to create and access a shared buffer object, with the new HardwareBuffer being a generalised version of GraphicBuffer. From API 26 onward you therefore no longer need to link against the non-public libraries.
The only alternative I have seen for Android 7 is to manually bundle all the required libraries with the project's APK.
We will have to wait until Android 8 is released following its beta testing phase. The roadmap for release can be found here; the anticipated release is some time before the end of 2017. If you plan on updating your project with the new API features before the release date and want to test them out, you can use the Android O preview on a Google device.
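For reference, here is a minimal sketch of the new NDK path (C++, API 26+): allocating an AHardwareBuffer and mapping it for CPU writes, roughly the NDK counterpart of the old GraphicBuffer lock()/unlock() pattern. The dimensions are illustrative, the row stride is ignored for brevity, and error handling is trimmed.

    // Allocate a shared buffer and map it for CPU writes; the GPU can
    // later sample the same memory without an intermediate copy.
    #include <android/hardware_buffer.h>
    #include <cstring>

    AHardwareBuffer* allocateAndFill(const void* src, size_t bytes) {
        AHardwareBuffer_Desc desc = {};
        desc.width  = 1024;   // illustrative size
        desc.height = 1024;
        desc.layers = 1;
        desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
        desc.usage  = AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
                      AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;

        AHardwareBuffer* buffer = nullptr;
        AHardwareBuffer_allocate(&desc, &buffer);

        void* dst = nullptr;
        // -1 means "no acquire fence"; a real implementation should also
        // respect the stride reported by AHardwareBuffer_describe().
        AHardwareBuffer_lock(buffer, AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN,
                             -1, nullptr, &dst);
        std::memcpy(dst, src, bytes);
        AHardwareBuffer_unlock(buffer, nullptr);
        return buffer;
    }

The AHardwareBuffer entry points live in libnativewindow, so link with -lnativewindow.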
Related
I recently switched from using glBufferData to glMapBufferRange, which gives me direct access to GPU memory rather than copying the data from CPU to GPU every frame.
This works just fine, and in OpenGL ES 3.0 I do the following per frame (sketched in code below the list):
Get a pointer to my GPU buffer memory via glMapBufferRange.
Directly update my buffer using this pointer.
Use glUnmapBuffer to unmap the buffer so that I can render.
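In code, that per-frame pattern looks roughly like this (C++, ES 3.0; the buffer name and sizes are illustrative):

    #include <GLES3/gl3.h>
    #include <cstring>

    void updateVbo(GLuint vbo, const void* vertices, GLsizeiptr bytes) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Map, write, unmap -- every single frame.
        void* dst = glMapBufferRange(GL_ARRAY_BUFFER, 0, bytes,
                                     GL_MAP_WRITE_BIT |
                                     GL_MAP_INVALIDATE_BUFFER_BIT);
        if (dst != nullptr) {
            std::memcpy(dst, vertices, bytes);
            glUnmapBuffer(GL_ARRAY_BUFFER);  // required before rendering
        }
    }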
But some Android devices may have at least OpenGL ES 3.1 and, as I understand it, may also have the EXT_buffer_storage extension (please correct me if that's the wrong extension?). Using this extension it's possible to set up persistent buffer pointers via the GL_MAP_PERSISTENT_BIT flag, which do not require mapping/unmapping every frame. But I can't figure out, or find much online about, how to access these features.
How exactly do I invoke glMapBufferRange with GL_MAP_PERSISTENT_BIT set in OpenGL ES 3.1 on Android?
Examining glGetString(GL_EXTENSIONS) does seem to show the extension is present on my device, but I can't find GL_MAP_PERSISTENT_BIT anywhere, e.g. in GLES31 or GLES31Ext, and I'm just not sure how to proceed.
The standard Android Java bindings for OpenGL ES only expose extensions that are guaranteed to be supported by all implementations on Android. If you want to expose less universally available vendor extensions you'll need to roll your own JNI bindings, using eglGetProcAddress() from native code compiled with the NDK to fetch the entry points.
For this one you want the extension entry point glBufferStorageEXT().
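A minimal sketch of that native-side setup (C++), assuming the EXT_buffer_storage tokens are available from <GLES2/gl2ext.h>; if your headers lack them, the fallback values below are taken from the extension spec:

    #include <EGL/egl.h>
    #include <GLES3/gl31.h>
    #include <GLES2/gl2ext.h>
    #include <cstring>

    #ifndef GL_MAP_PERSISTENT_BIT_EXT
    #define GL_MAP_PERSISTENT_BIT_EXT 0x0040
    #define GL_MAP_COHERENT_BIT_EXT   0x0080
    #endif

    using PFNGLBUFFERSTORAGEEXT =
        void (GL_APIENTRY *)(GLenum, GLsizeiptr, const void*, GLbitfield);

    // Creates immutable storage for `buffer` and returns a pointer that
    // stays valid across frames, or nullptr if the extension is missing.
    void* createPersistentMapping(GLuint buffer, GLsizeiptr size) {
        const char* exts =
            reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        if (!exts || !std::strstr(exts, "GL_EXT_buffer_storage"))
            return nullptr;

        auto glBufferStorageEXT = reinterpret_cast<PFNGLBUFFERSTORAGEEXT>(
            eglGetProcAddress("glBufferStorageEXT"));
        if (!glBufferStorageEXT)
            return nullptr;

        const GLbitfield flags = GL_MAP_WRITE_BIT |
                                 GL_MAP_PERSISTENT_BIT_EXT |
                                 GL_MAP_COHERENT_BIT_EXT;

        glBindBuffer(GL_ARRAY_BUFFER, buffer);
        // Persistent mapping requires immutable storage, hence
        // glBufferStorageEXT rather than glBufferData.
        glBufferStorageEXT(GL_ARRAY_BUFFER, size, nullptr, flags);
        // Map once; no glUnmapBuffer needed every frame afterwards.
        return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
    }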
I am currently working on a camera-based OpenCV app with critical performance requirements.
We already have Java-based camera implementations: both the deprecated Camera (HAL 1) API and the camera2 API.
We use the camera1 implementation on platforms < 21 and the camera2 implementation on platforms >= 21.
These two implementations are already heavily optimized for performance; however, we believe we could still improve by moving to the new native NDK camera API (the main gain would be reducing the overhead of transferring image data over JNI to the native OpenCV processor).
In the Android 7.0 (API 24) release, NDK native camera support was introduced. However, the only NDK documentation available is this flat list of C headers.
Unfortunately, I am currently confused because there is no clear information about native camera platform support.
When I looked at the native API, I noticed it is very similar to the Java camera2 API.
This makes me (wishfully) think that the native API should be backward compatible with earlier platforms that support the camera2 Java API.
I have started an experimental project in an attempt to bust the myth; however, due to the generally lacking NDK documentation, progress is slow.
I am especially interested in whether anyone else has already attempted to leverage the native camera API and whether there is a relevant conclusion on the matter that could be shared.
On another track, I'm also curious whether the native camera API implementation is a reverse JNI binding to the camera2 Java API, or whether it is indeed a lower-level integration. It would also be interesting to know whether the camera2 Java API is a JNI binding to the native camera API.
There's more NDK documentation than just the C headers; if you click on one of the functions, for example, you can get reference docs.
However, I think you're correct in that the compatibility story isn't well-documented.
The short version is that if you call ACameraManager_getCameraIdList and it returns camera IDs, then you can open those with the NDK API.
If it doesn't return any IDs, then there are no supported cameras on that device.
The longer story is that the NDK API only supports camera devices that have a hardware level of LIMITED or higher. LEGACY devices are not supported.
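A quick probe along those lines (C++, API 24+), with error handling trimmed; everything here comes from <camera/NdkCameraManager.h>, and you link with -lcamera2ndk and -llog:

    #include <camera/NdkCameraManager.h>
    #include <android/log.h>

    bool hasNdkUsableCamera() {
        ACameraManager* mgr = ACameraManager_create();
        ACameraIdList* ids = nullptr;
        bool usable = false;
        if (ACameraManager_getCameraIdList(mgr, &ids) == ACAMERA_OK && ids) {
            // LEGACY-only devices yield an empty list here, so a
            // non-empty list means the NDK API can open a camera.
            usable = ids->numCameras > 0;
            for (int i = 0; i < ids->numCameras; ++i)
                __android_log_print(ANDROID_LOG_INFO, "cam",
                                    "NDK camera id: %s", ids->cameraIds[i]);
            ACameraManager_deleteCameraIdList(ids);
        }
        ACameraManager_delete(mgr);
        return usable;
    }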
As an optimization note: how are you passing data through JNI? While JNI isn't ridiculously fast, it's not that slow either, and as long as you're using passing mechanisms that don't copy data (such as direct access to a ByteBuffer via GetDirectBufferAddress), the JNI overhead should be small.
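For illustration, a zero-copy handoff of a direct ByteBuffer to native code might look like this (C++); the Java class and method names here are hypothetical:

    #include <jni.h>
    #include <cstdint>

    extern "C" JNIEXPORT void JNICALL
    Java_com_example_cam_NativeBridge_processFrame(JNIEnv* env, jclass,
                                                   jobject byteBuffer,
                                                   jint size) {
        // No copy: this returns the backing store of the direct ByteBuffer.
        auto* pixels = static_cast<uint8_t*>(
            env->GetDirectBufferAddress(byteBuffer));
        if (pixels == nullptr) return;  // not a direct buffer
        // ... hand `pixels`/`size` straight to the OpenCV processing ...
        (void)size;
    }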
The only way to profit from the fact that mobile devices have shared memory for CPU and GPU was GraphicBuffer. But since Android 7 restricts access to private native libraries (including gralloc), it is impossible to use it any more. The question: is there any alternative way to get direct memory access to a texture's pixel data?
I know that something similar can be done using a PBO (pixel buffer object), but it still performs an additional memory copy, which is undesirable, especially when we know there was a way to do it with zero copies.
There are many apps which used this feature, because it can heavily increase performance. I think many developers are stuck with this problem now.
Since Android 8 / API 26 (sorry, not for Android 7...)
The Hardware Buffer APIs are the alternative to GraphicBuffer.
The native hardware buffer API lets you directly allocate buffers to create your own pipelines for cross-process buffer management. You can allocate an AHardwareBuffer and use it to obtain an EGLClientBuffer resource type via the eglGetNativeClientBufferANDROID extension.
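Put together, that pipeline looks roughly like this (C++, API 26+). This sketch assumes an EGL context is current, relies on the extension prototypes that Android's libEGL/libGLESv2 export, and trims error handling; the buffer could come from an allocation like the one sketched earlier.

    #define EGL_EGLEXT_PROTOTYPES
    #define GL_GLEXT_PROTOTYPES
    #include <android/hardware_buffer.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    GLuint textureFromBuffer(AHardwareBuffer* buffer) {
        // Wrap the AHardwareBuffer as an EGLClientBuffer, then an EGLImage.
        EGLClientBuffer clientBuf = eglGetNativeClientBufferANDROID(buffer);
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        const EGLint attrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
        EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                              EGL_NATIVE_BUFFER_ANDROID,
                                              clientBuf, attrs);

        // Bind the image to a texture: the GPU samples the shared memory.
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
        return tex;
    }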
NDK revision history
The minimum NDK revision is r15c (July 2017):
Android NDK, Revision 15c (July 2017)
Added native APIs for Android 8.0.
* Hardware Buffer API
android/hardware_buffer_jni.h is in the directory (NDK)/sysroot/usr/include/
Refs:
NDK - Native Hardware Buffer (android/hardware_buffer_jni.h)
Android/Java - HardwareBuffer
GraphicBuffer-related article: Using OpenGL ES to Accelerate Apps with Legacy 2D GUIs
NB: for Android 7 / API 24
The Native APIs guide also says, in the Graphics/EGL section:
API level 24 added support for the EGL_KHR_mutable_render_buffer,
ANDROID_create_native_client_buffer, and
ANDROID_front_buffer_auto_refresh extensions.
and EGL_ANDROID_create_native_client_buffer is an EGL extension which provides eglCreateNativeClientBufferANDROID(), returning an EGLClientBuffer (see EGL/eglext.h).
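A sketch of that API 24 path (C++): creating an EGLClientBuffer without an AHardwareBuffer via EGL_ANDROID_create_native_client_buffer. The attribute list follows the extension spec; the PFNEGLCREATENATIVECLIENTBUFFERANDROIDPROC typedef is assumed to come from <EGL/eglext.h>, and the entry point is fetched at runtime:

    #include <EGL/egl.h>
    #include <EGL/eglext.h>

    EGLClientBuffer createClientBuffer(EGLint width, EGLint height) {
        const EGLint attrs[] = {
            EGL_WIDTH, width,
            EGL_HEIGHT, height,
            EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8,
            EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
            EGL_NATIVE_BUFFER_USAGE_ANDROID,
            EGL_NATIVE_BUFFER_USAGE_TEXTURE_BIT_ANDROID,
            EGL_NONE
        };
        // Extension entry point: resolve it at runtime.
        auto create =
            reinterpret_cast<PFNEGLCREATENATIVECLIENTBUFFERANDROIDPROC>(
                eglGetProcAddress("eglCreateNativeClientBufferANDROID"));
        return create ? create(attrs) : nullptr;
    }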
I think you can use SurfaceTexture: a SurfaceTexture can be created for MediaCodec, and MediaCodec can encode it directly. This approach can process 1080p video in 2-5 ms per frame.
I'm trying to port an iOS project to Android (Java). However, I've encountered a few ES 2.0 extension functions (OES) which do not appear in the Android GLES20 API:
glGenVertexArraysOES
glBindVertexArrayOES
glDeleteVertexArraysOES
It appears I have to call these functions from the NDK, dynamically bind the extensions at runtime, and check for support on devices. Not something I'd love to do.
While googling I found these functions in the GLES30 api. So my question is:
- is it possible to mix GLES20 and GLES30 calls?
- are these functions basically calls to the same api or is this completely different?
- any other suggestions?
Just looking at the API entry points, ES 3.0 is a superset of ES 2.0. So the transition is mostly smooth. You request API version 3 when making the GLSurfaceView.setEGLContextClientVersion() call, and your ES 2.0 code should still work. Then you can start using methods from GLES30 on top of the GLES20 methods.
There are some very subtle differences, e.g. related to slight differences in cube map sampling, but you're unlikely to run into them. If you want details, see appendix F.2 of the spec document. Some features like client side vertex arrays have been declared legacy, but are still supported.
The only thing you're likely to encounter are differences in GLSL. You can still use ES 2.0 shaders as long as you keep the #version 100 in the shader code. But if you want to use the latest GLSL version (#version 300 es), there are incompatible changes. The necessary code changes are simple, it's mostly replacing attribute and varying with in and out, and not using the built-in gl_FragColor anymore. You have to switch over to the new GLSL version if you want to take advantage of certain new ES 3.0 features, like multiple render targets.
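For illustration, here is the same trivial vertex/fragment pair in ES 2.0 GLSL (#version 100) and its ES 3.0 equivalent (#version 300 es), written as C++ string constants; the attribute and uniform names are arbitrary:

    // ES 2.0 style shader (#version 100).
    const char* kVertex100 = R"(#version 100
    attribute vec4 aPosition;
    varying vec2 vTexCoord;
    void main() {
        vTexCoord = aPosition.xy * 0.5 + 0.5;
        gl_Position = aPosition;
    })";

    // The same shader migrated to #version 300 es.
    const char* kVertex300 = R"(#version 300 es
    in vec4 aPosition;          // "attribute" becomes "in"
    out vec2 vTexCoord;         // "varying" becomes "out"
    void main() {
        vTexCoord = aPosition.xy * 0.5 + 0.5;
        gl_Position = aPosition;
    })";

    const char* kFragment300 = R"(#version 300 es
    precision mediump float;
    in vec2 vTexCoord;
    out vec4 fragColor;         // replaces the built-in gl_FragColor
    uniform sampler2D uTex;
    void main() {
        fragColor = texture(uTex, vTexCoord);  // texture2D() becomes texture()
    })";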
The downside of using ES 3.0 is of course that you're much more restricted in the devices your software runs on. While the latest higher-end devices mostly support ES 3.0, there are still plenty of devices out there that only support 2.0, and will stay at that level. According to the latest data from Google (http://developer.android.com/about/dashboards/index.html), 18.2% of all devices support ES 3.0 as of July 7, 2014.
As @vadimvolk explained, you will need to check whether the OpenGL driver supports the OES_vertex_array_object extension. More info here:
http://www.khronos.org/registry/gles/extensions/OES/OES_vertex_array_object.txt
If you stick to OpenGL ES 3.0, you can use these methods after checking that you've got an OpenGL ES 3.0 context. On Android you can mix calls to GLES20 and GLES30, because these APIs are backwards-compatible.
All you need is to create an OpenGL ES 2.0 context and check whether the returned context's version is 3.0 by reading the GL_VERSION string. If it is 3.0, you can mix GLES20 and GLES30 functions. Additional info: https://plus.google.com/u/0/+RomainGuy/posts/iJmTjpUfR5E
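A small sketch of that version probe (C++); the same check can be done from Java with GLES20.glGetString:

    #include <GLES2/gl2.h>
    #include <cstring>

    // Call with the EGL context current. On an ES 3.x context the string
    // starts with "OpenGL ES 3.", even if ES 2.0 was requested.
    bool contextIsAtLeastEs3() {
        const char* version =
            reinterpret_cast<const char*>(glGetString(GL_VERSION));
        return version && std::strncmp(version, "OpenGL ES 3.", 12) == 0;
    }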
The functions are the same. In GLES20 they exist only on some devices, as a non-mandatory extension.
In GLES30 they are mandatory.
If you use them through GLES30, your application will only run on devices that support GLES30 (roughly, devices running Android 4.3 or later with an ES 3.0-capable GPU).
It seems that this question has been asked before; I would just like to know whether there has been any update in Android.
I plan to write an audio application involving low-delay audio I/O (approx. < 10 ms). It does not seem to be achievable with the methods offered by the SDK, so is there, in the meantime, a way to reach this goal using the NDK?
There are currently no libraries in the NDK for accessing the Android sound system, at least none that are considered safe to use (i.e. stable).
Have you done any tests with the AudioTrack class? It's the lowest-latency option available at the moment.
Currently two main audio APIs are exposed in the NDK:
OpenSL ES (from Android 2.3, API level 9)
OpenMAX AL (from Android 4.0, API level 14)
A good starting point for learning about the OpenSL API on Android is the NDK sample code:
look at the "native-audio" sample, whose first steps are sketched below.
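As a taste of the API, here is a minimal OpenSL ES bootstrap (C++, API 9+): creating the engine and an output mix, which is how the "native-audio" sample begins. Error handling is trimmed; link with -lOpenSLES.

    #include <SLES/OpenSLES.h>

    SLObjectItf engineObj = nullptr;
    SLEngineItf engine = nullptr;
    SLObjectItf outputMix = nullptr;

    void initOpenSl() {
        // Create and realize the engine object, then get its interface.
        slCreateEngine(&engineObj, 0, nullptr, 0, nullptr, nullptr);
        (*engineObj)->Realize(engineObj, SL_BOOLEAN_FALSE);
        (*engineObj)->GetInterface(engineObj, SL_IID_ENGINE, &engine);

        // The output mix is the sink that audio player objects feed into.
        (*engine)->CreateOutputMix(engine, &outputMix, 0, nullptr, nullptr);
        (*outputMix)->Realize(outputMix, SL_BOOLEAN_FALSE);
    }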
Latency measurements were made in this blog:
http://audioprograming.wordpress.com/
In summary, the best latencies obtained were around 100-200 ms, far from your target.
But, according to the Android NDK documentation, the OpenSL interface is the one that will benefit most from hardware acceleration in the future, moving towards low latency.