ZSL feature on Android Lollipop with the camera2 API

I am trying to understand ZSL feature/capability support on Android 5.0, from the camera application, camera framework and libcameraservice implementation, as well as the camera HAL v3.2 specification.
As far as I understand, ZSL implementation in Android is possible in two ways:
Framework-implemented ZSL
In KitKat, only framework-implemented ZSL was supported, and it was pretty straightforward (using bidirectional streams for ZSL).
In Lollipop, framework-implemented ZSL is documented very clearly:
http://androidxref.com/5.0.0_r2/xref/hardware/libhardware/include/hardware/camera3.h#1076
Application-implemented ZSL
In Lollipop, they have introduced the concept of application-implemented ZSL. ZSL has been exposed as a capability to the application, as per the available documentation:
http://androidxref.com/5.0.0_r2/xref/system/media/camera/docs/docs.html
Under android.request.availableCapabilities, it says that:
For ZSL, "RAW_OPAQUE is supported as an output/input format"
In Lollipop, framework-implemented ZSL works the same way as in KitKat, with a Camera1 API application.
However, I could not find anywhere in the Camera2 API application code how to enable application- or framework-implemented ZSL.
http://androidxref.com/5.0.0_r2/xref/packages/apps/Camera2/
Hence, the questions:
1. Is it possible to enable framework-implemented ZSL in Android L with a Camera2 API application?
2. Is it possible to enable application-implemented ZSL in Android L, without RAW_OPAQUE support, with a Camera2 API application?
3. If either 1 or 2 is possible, what is required from the camera HAL to enable ZSL in Android L?
Any help appreciated.

1. No, the framework-layer ZSL only works with the old camera API.
2. No, unless it's sufficient to use the output buffer as-is, without sending it back to the camera device for final processing.
The longer answer is that the ZSL reprocessing APIs had to be cut out of the initial camera2 implementation, so currently there's no way for an application to send buffers back to the camera device, in any format (RAW_OPAQUE or otherwise).
Some of the documentation in camera3.h is misleading relative to the actual framework implementation, as well - only IMPLEMENTATION_DEFINED BIDIRECTIONAL ZSL is supported by the framework, and RAW_OPAQUE is not used anywhere.
Edit: As of Android 6.0 Marshmallow, reprocessing is available in the camera2 API, on devices that support it (such as Nexus 6P/5X).
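For reference, a rough sketch of that Android 6.0 reprocessing flow with the standard camera2 classes. This is a minimal outline under the assumption that the device advertises the PRIVATE_REPROCESSING capability; surface, handler and ring-buffer setup is elided:

    import android.graphics.ImageFormat;
    import android.hardware.camera2.CameraCaptureSession;
    import android.hardware.camera2.CameraDevice;
    import android.hardware.camera2.CaptureRequest;
    import android.hardware.camera2.TotalCaptureResult;
    import android.hardware.camera2.params.InputConfiguration;
    import android.media.Image;
    import android.media.ImageWriter;
    import android.os.Handler;
    import android.view.Surface;
    import java.util.Arrays;

    public class ZslReprocessSketch {
        private ImageWriter reprocessWriter;

        // 1. Create a reprocessable session: one opaque (PRIVATE) input stream plus the
        //    output surfaces (a PRIVATE ImageReader surface used as the ZSL ring buffer
        //    and a JPEG ImageReader surface for the final shot).
        void createSession(CameraDevice camera, Surface privateOutput, Surface jpegOutput,
                           int width, int height, Handler handler) throws Exception {
            InputConfiguration input = new InputConfiguration(width, height, ImageFormat.PRIVATE);
            camera.createReprocessableCaptureSession(
                    input,
                    Arrays.asList(privateOutput, jpegOutput),
                    new CameraCaptureSession.StateCallback() {
                        @Override public void onConfigured(CameraCaptureSession session) {
                            // The ImageWriter feeds previously captured buffers back to the camera.
                            reprocessWriter = ImageWriter.newInstance(session.getInputSurface(), 2);
                        }
                        @Override public void onConfigureFailed(CameraCaptureSession session) { }
                    },
                    handler);
        }

        // 2. At shutter press, pick a buffered Image (and its matching TotalCaptureResult)
        //    from the ZSL ring buffer and send it back to the camera for final processing.
        void reprocess(CameraDevice camera, CameraCaptureSession session, Image zslImage,
                       TotalCaptureResult zslResult, Surface jpegOutput, Handler handler)
                throws Exception {
            reprocessWriter.queueInputImage(zslImage);
            CaptureRequest.Builder builder = camera.createReprocessCaptureRequest(zslResult);
            builder.addTarget(jpegOutput);
            session.capture(builder.build(), null, handler);
        }
    }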

Related

Fast flipping between front and back cameras in Camera1 API?

I am aware that the Camera2 API allows access to both cameras; however, I am focused on reusing old retired phones as part of a sensor-network project. My understanding is that with the older Camera1 API only one camera could be opened at a time and then had to be released before making use of the second (except on a limited range of models whose manufacturers had elected to provide additional capabilities), which according to this post can take over one second. Is there any way around this in order to be able to flip between the two cameras on legacy devices? I'm looking to capture at least 4 FPS on each camera in order to do visual odometry.
Generally, no. Most devices only have one hardware processing pipeline, and it's shared between the two camera sensors. So one camera has to be shut down before the other can be started.
Camera2 doesn't guarantee simultaneous access, either, unfortunately. Both APIs can run multiple cameras at once if the device hardware allows. For the old API, there's no way to query if concurrent use is possible, so the only way to find out is to try it. But most likely, it won't.
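With the old API there is no capability query, so the only check is to try to open the second camera while holding the first and handle the failure. A minimal sketch (treating camera IDs 0 and 1 as the usual back/front indices is an assumption):

    import android.hardware.Camera;

    // Try to hold both cameras at once; on most devices the second open() will throw
    // because the processing pipeline is shared between the sensors.
    static boolean tryOpenBothCameras() {
        Camera back = Camera.open(0);        // typically the back camera
        try {
            Camera front = Camera.open(1);   // typically the front camera
            // Both opened: this particular device happens to allow concurrent use.
            front.release();
            back.release();
            return true;
        } catch (RuntimeException e) {
            // Expected on most hardware: release one camera before opening the other.
            back.release();
            return false;
        }
    }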

Is an OpenGL function loader needed?

On desktop OSes, OpenGL function loaders like GLEW, GLAD, etc. are used to load functions at runtime. But what about on Android? How are functions loaded? I've looked at a few apps and all of them seem to depend on EGL and GLES. But AFAIK EGL isn't a loading library; it's an interface to the window system, while GLES is the actual rendering interface.
This leads to another question: How come Android uses EGL when it is generally not used on desktops?
Back when I used Android a bit, you could link either to the GLES 2.0 library or to the GLES 3.0 library, so in a sense they provide the function pointers for you. The catch: if you linked against GLES 3.0 but the phone you ran it on only supported 2.0, your app would not load. To work around this, I always linked to GLES 2.0 and wrote my own function loader using eglGetProcAddress to extract the GLES 3.0 API if it was available. This is pretty much how function loaders on Windows/Linux work (using wglGetProcAddress or glXGetProcAddress).
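That describes a native-side loader; in Java there is no function loader as such, but the analogous runtime decision can be made with a version check before creating the context, and the render code then only touches the GLES30 bindings when the check passes. A sketch using the standard framework APIs:

    import android.app.ActivityManager;
    import android.content.Context;
    import android.opengl.GLSurfaceView;

    // Decide at runtime whether GLES 3.0 is available, instead of hard-requiring it.
    static void configureSurface(Context context, GLSurfaceView view) {
        ActivityManager am = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        int packed = am.getDeviceConfigurationInfo().reqGlEsVersion;  // e.g. 0x00030000 for 3.0
        boolean hasGles3 = packed >= 0x00030000;
        // Request the highest supported client version; branch on hasGles3 in the renderer.
        view.setEGLContextClientVersion(hasGles3 ? 3 : 2);
    }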
GLES has always been a stripped-down version of full desktop GL. It targets a smaller subset of the full API (removing all of the legacy cruft), which simplifies the OpenGL driver code somewhat, which in turn should reduce memory usage and save a little battery life. Basically, it's just better suited to a low-power system where battery life is a concern.
How come Android uses EGL when it is generally not used on desktops?
It was widely used on embedded electronics prior to Android, so I suspect it was the path of least resistance for a new smartphone OS, as all GPU vendors already had an EGL implementation.

Which are the android native camera supported platforms?

I am currently working on a camera-based OpenCV app with critical performance requirements.
We already have Java-based camera implementations: both the deprecated Camera1 (HAL 1) API and the camera2 API.
We use the camera1 implementation on platforms < 21 and the camera2 implementation on platforms >= 21.
These two implementations are already heavily optimized for performance; however, we believe we could still improve by moving to the new native NDK camera API (the main gain would be reducing the overhead of JNI image-data transfer to the native OpenCV processor).
In the Android 7.0 (API 24) release, NDK native camera support was introduced. However, the only NDK documentation available is this flat list of C headers.
Unfortunately, I am currently confused because there is no clear information about native camera platform support.
When I looked at the native API, I noticed it is very similar to the Java camera2 API.
This makes me (wishfully) think that the native API should be backward compatible with earlier platforms that support the camera2 Java API.
I have started an experimental project in an attempt to bust the myth, but due to the generally lacking NDK documentation, progress is slow.
I am especially interested in whether anyone else has already attempted to leverage the native camera API and has a relevant conclusion on this matter that could be shared.
On another track, I'm also curious to find out whether the native camera API implementation is a reverse JNI binding to the camera2 Java API or whether it is indeed a lower-level integration. It would also be interesting to know whether the camera2 Java API is a JNI binding to the native camera API.
There's more NDK documentation than just the C headers; if you click on one of the functions, for example, you can get reference docs.
However, I think you're correct in that the compatibility story isn't well-documented.
The short version is that if you call ACameraManager_getCameraIdList and it returns camera IDs, then you can open those with the NDK API.
If it doesn't return any IDs, then there are no supported cameras on that device.
The longer story is that the NDK API only supports camera devices that have a hardware level of LIMITED or higher. LEGACY devices are not supported.
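There is no dedicated "usable from the NDK" query on the Java side, but because the rule is "LIMITED or better", the camera2 characteristics let you predict which IDs the NDK list will contain. A Java sketch mirroring that rule:

    import android.content.Context;
    import android.hardware.camera2.CameraCharacteristics;
    import android.hardware.camera2.CameraManager;
    import android.util.Log;

    // Cameras at LEGACY hardware level are not usable from the NDK camera API;
    // LIMITED, FULL and higher levels are.
    static void listNdkUsableCameras(Context context) throws Exception {
        CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        for (String id : manager.getCameraIdList()) {
            Integer level = manager.getCameraCharacteristics(id)
                    .get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
            boolean ndkUsable = level != null
                    && level != CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY;
            Log.d("NdkCameraCheck", "camera " + id + " usable from NDK: " + ndkUsable);
        }
    }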
As an optimization note, how are you passing data through JNI? While JNI isn't ridiculously fast, it's not that slow either, as long as you use passing mechanisms that don't copy data (such as direct access to a ByteBuffer via GetDirectBufferAddress).
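For example, the Java side of a copy-free handoff can pass the Image planes' direct ByteBuffers straight into native code; processFrame below is a hypothetical native method whose C side would use GetDirectBufferAddress:

    import android.media.Image;
    import android.media.ImageReader;
    import java.nio.ByteBuffer;

    // Hypothetical native entry point; the native side reads the plane pointers
    // via GetDirectBufferAddress, so no pixel data is copied across JNI.
    static native void processFrame(ByteBuffer y, ByteBuffer u, ByteBuffer v,
                                    int width, int height, int yRowStride);

    static void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null) return;
        Image.Plane[] planes = image.getPlanes();   // YUV_420_888: Y, U and V planes
        processFrame(planes[0].getBuffer(),         // direct ByteBuffers, no copy
                     planes[1].getBuffer(),
                     planes[2].getBuffer(),
                     image.getWidth(), image.getHeight(),
                     planes[0].getRowStride());
        image.close();                              // hand the buffer back to the camera
    }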

Is it possible to program the GPU on Android?

I am currently programming on Android and I wonder whether we can use GPGPU on Android now. I once heard that Renderscript may be able to execute on the GPU in the future, but I wonder whether it is possible to program the GPU now. And if it is possible for me to program the GPU on Android, where can I find some tutorials or sample programs? Thank you for your help and suggestions.
So far I know that the OpenGL ES library is GPU-accelerated, but I want to use the GPU for computing. What I want to do is accelerate computation, so I hope to use some libraries or APIs such as OpenCL.
2021-April Update
Google has announced deprecation of the RenderScript API in favor of Vulkan with Android 12.
The option for manufacturers to include the Vulkan API was made available in the Android 7.0 Compatibility Definition Document, section 3.3.1.1 Graphic Libraries.
Original Answer
Actually, Renderscript Compute doesn't use the GPU at this time, but it is designed for it.
From Romain Guy, who works on the Android platform:
Renderscript Compute is currently CPU bound but with the for_each construct it will take advantage of multiple cores immediately
Renderscript Compute was designed to run on the GPU and/or the CPU
Renderscript Compute avoids having to write JNI code and gives you architecture independent, high performance results
Renderscript Compute can, as of Android 4.1, benefit from SIMD optimizations (NEON on ARM)
https://groups.google.com/d/msg/android-developers/m194NFf_ZqA/Whq4qWisv5MJ
Yes, it is possible.
You can use either Renderscript or OpenGL ES 2.0.
Renderscript is available on Android 3.0 and above, and OpenGL ES 2.0 is available on about 95% of devices.
As of Android 4.2, Renderscript can involve GPU in computations (in certain cases).
More information here: http://android-developers.blogspot.com/2013/01/evolution-of-renderscript-performance.html
As I understand it, ScriptIntrinsic subclasses are well optimized to run on the GPU on compatible hardware (for example, a Nexus 10 with its Mali-T604). Documentation:
http://developer.android.com/reference/android/renderscript/ScriptIntrinsic.html
Of course you can decide to use OpenCL, but Renderscript is guaranteed (by Google, being a part of Android itself) to run even on hardware which doesn't support GPGPU computation, and it will use whatever other acceleration means are supported by the hardware it is running on.
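As a concrete illustration of the ScriptIntrinsic route mentioned above, a minimal blur using the standard framework classes; whether the kernel actually lands on the GPU depends on the device:

    import android.content.Context;
    import android.graphics.Bitmap;
    import android.renderscript.Allocation;
    import android.renderscript.Element;
    import android.renderscript.RenderScript;
    import android.renderscript.ScriptIntrinsicBlur;

    // Blur a bitmap with ScriptIntrinsicBlur; the Renderscript runtime decides whether
    // the kernel runs on the CPU (possibly NEON-accelerated) or on the GPU.
    static Bitmap blur(Context context, Bitmap src, float radius) {
        RenderScript rs = RenderScript.create(context);
        Bitmap dst = Bitmap.createBitmap(src.getWidth(), src.getHeight(), src.getConfig());
        Allocation in = Allocation.createFromBitmap(rs, src);
        Allocation out = Allocation.createFromBitmap(rs, dst);
        ScriptIntrinsicBlur blurScript = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
        blurScript.setRadius(radius);   // valid range: 0 < radius <= 25
        blurScript.setInput(in);
        blurScript.forEach(out);        // run the kernel over every pixel
        out.copyTo(dst);                // copy the result back into the Bitmap
        rs.destroy();
        return dst;
    }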
There are several options: You can use OpenGL ES 2.0, which is supported by almost all devices but has limited functionality for GPGPU. You can use OpenGL ES 3.0, with which you can do much more in terms of GPU processing. Or you can use RenderScript, but this is platform-specific and furthermore does not give you any influence on whether your algorithms run on the GPU or the CPU. A summary about this topic can be found in this master's thesis: Parallel Computing for Digital Signal Processing on Mobile Device GPUs.
You should also check out ogles_gpgpu, which allows GPGPU via OpenGL ES 2.0 on Android and iOS.

Low delay audio on Android via NDK

It seems that this question has been asked before; I just would like to know whether there is an update in Android.
I plan to write an audio application involving low-delay audio I/O (approx. < 10 ms). It does not seem to be possible with the methods offered by the SDK, so is there, in the meantime, a way to achieve this goal using the NDK?
There are currently no libraries in the NDK for accessing the Android sound system, at least none that are considered safe (i.e. stable) to use.
Have you done any tests with the AudioTrack class? It's the lowest-latency option available at the moment.
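For reference, a minimal AudioTrack streaming setup; the buffer size returned by getMinBufferSize is the practical lower bound on output latency with this class:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    // Stream 16-bit mono PCM with the smallest buffer the platform allows.
    static AudioTrack createLowLatencyTrack(int sampleRate) {
        int minBuf = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBuf, AudioTrack.MODE_STREAM);
        track.play();
        return track;   // feed it from a dedicated thread with track.write(short[], 0, n)
    }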
Currently, two main audio APIs are exposed in the NDK:
OpenSL ES (from Android 2.3, API level 9)
OpenMAX AL (from Android 4.0, API level 14)
A good starting point for learning about the OpenSL ES API on Android is the sample code in the NDK:
look at the "native-audio" sample.
Latency measurements were made in this blog:
http://audioprograming.wordpress.com/
In summary, the best latencies obtained were around 100-200 ms, far from your target.
But, according to the Android NDK documentation, the OpenSL ES interface is the one that will benefit most from hardware acceleration in the future to move towards low latency.
