Low delay audio on Android via NDK

It seems this question has been asked before; I would just like to know whether there has been an update in Android.
I plan to write an audio application involving low-delay audio I/O (approx. < 10 ms). This does not seem to be possible with the methods the SDK provides, so is there, in the meantime, a way to achieve this goal using the NDK?

There are currently no libraries in the NDK for accessing the Android sound system, at least none that are considered safe to use (i.e. stable).
Have you done any tests with the AudioTrack class? It's the lowest-latency option available at the moment.

Currently two main audio APIs are exposed in the NDK:
OpenSL ES (from Android 2.3, API level 9)
OpenMAX AL (from Android 4.0, API level 14)
A good starting point for learning about the OpenSL ES API on Android is the sample code in the NDK:
look at the "native-audio" sample.
Latency measurements were made on this blog:
http://audioprograming.wordpress.com/
In summary, the best latencies obtained were around 100-200 ms, far from your target.
But, according to the Android NDK documentation, the OpenSL interface is the one that will benefit most from future hardware acceleration towards low latency.
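For orientation, the OpenSL ES bring-up from native code looks roughly like this (a minimal sketch following the native-audio sample's pattern; error checks omitted):

    #include <SLES/OpenSLES.h>

    SLObjectItf engineObject = nullptr;
    SLEngineItf engine = nullptr;
    SLObjectItf outputMix = nullptr;

    // One-time engine bring-up; audio players are then created against the mix.
    void createEngine() {
        slCreateEngine(&engineObject, 0, nullptr, 0, nullptr, nullptr);
        (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
        (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engine);
        (*engine)->CreateOutputMix(engine, &outputMix, 0, nullptr, nullptr);
        (*outputMix)->Realize(outputMix, SL_BOOLEAN_FALSE);
    }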

Related

Which platforms support the Android native camera API?

I am currently working on a camera-based OpenCV app with critical performance requirements.
We already have Java-based camera implementations: both the deprecated HAL 1 (camera1) and the camera2 API.
We use the camera1 implementation on platforms < 21 and the camera2 implementation on platforms >= 21.
These two implementations are already heavily optimized for performance; however, we believe we could still improve by moving to the new native NDK camera API (the main gain would be reducing the overhead of transferring image data over JNI to the native OpenCV processor).
NDK native camera support was introduced in the Android 7.0 (API 24) release. However, the only NDK documentation available is a flat list of C headers.
Unfortunately, I am currently confused because there is no clear information about native camera platform support.
When I looked at the native API, I noticed it is very similar to the Java camera2 API.
This makes me (wishfully) think that the native API should be backward compatible with earlier platforms that support the camera2 Java API.
I have started an experimental project in an attempt to bust the myth; however, due to the generally lacking NDK documentation, progress is slow.
I am especially interested in whether anyone else has already attempted to use the native camera API and can share a relevant conclusion on this matter.
On another track, I am also curious whether the native camera API implementation is a reverse JNI binding to the camera2 Java API, or whether it is a genuinely lower-level integration. It would also be interesting to know whether the camera2 Java API is a JNI binding to the native camera API.
There's more NDK documentation than just the C headers; if you click on one of the functions, for example, you can get reference docs.
However, I think you're correct that the compatibility story isn't well documented.
The short version is that if you call ACameraManager_getCameraIdList and it returns camera IDs, then you can open those with the NDK API.
If it doesn't return any IDs, then there are no supported cameras on that device.
The longer story is that the NDK API only supports camera devices that have a hardware level of LIMITED or higher; LEGACY devices are not supported.
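A minimal sketch of that check (API 24+, linking against libcamera2ndk; error handling trimmed):

    #include <camera/NdkCameraManager.h>

    // Returns true if this device exposes cameras through the NDK API
    // (i.e. has at least one non-LEGACY camera).
    bool hasNdkCamera() {
        ACameraManager* mgr = ACameraManager_create();
        ACameraIdList* ids = nullptr;
        bool ok = ACameraManager_getCameraIdList(mgr, &ids) == ACAMERA_OK &&
                  ids->numCameras > 0;
        // Any id in ids->cameraIds[] could now be opened with
        // ACameraManager_openCamera() and driven much like a camera2 device.
        if (ids) ACameraManager_deleteCameraIdList(ids);
        ACameraManager_delete(mgr);
        return ok;
    }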
As an optimization note, how are you passing data through JNI? While JNI isn't ridiculously fast, it's not that slow either, and as long as you use passing mechanisms that don't copy data (such as direct ByteBuffers accessed via GetDirectBufferAddress), the overhead should be small.
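For example, a native method that receives a direct ByteBuffer can reach the pixels without any copy. This is a sketch; the class and method names are hypothetical:

    #include <jni.h>
    #include <cstdint>

    // Hypothetical JNI entry point: the Java side allocates a direct ByteBuffer
    // once and passes it for every frame, so no per-frame copy crosses JNI.
    extern "C" JNIEXPORT void JNICALL
    Java_com_example_CameraBridge_processFrame(JNIEnv* env, jobject, jobject buffer) {
        auto* pixels = static_cast<uint8_t*>(env->GetDirectBufferAddress(buffer));
        jlong size = env->GetDirectBufferCapacity(buffer);
        if (pixels == nullptr || size <= 0) return;  // not a direct buffer
        // ... wrap pixels/size in a cv::Mat header and process in place ...
    }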

Android 7 GraphicBuffer alternative for direct access to OpenGL texture memory

The only way to take advantage of the fact that mobile devices have shared memory for the CPU and GPU used to be GraphicBuffer. But since Android 7 restricts access to private native libraries (including gralloc), it is impossible to use it any more. The question: is there any alternative way to get direct memory access to a texture's pixel data?
I know that something similar can be done using a PBO (pixel buffer object). But it still does an additional memory copy, which is undesirable, especially since we know there was a way to do it with zero copies.
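For reference, the PBO route looks roughly like this with GL ES 3.0 (a sketch; note the GPU still performs a copy into the PBO before the CPU can map it):

    #include <GLES3/gl3.h>

    // Asynchronous readback through a PBO; the GPU copies into the PBO first,
    // which is the extra copy GraphicBuffer avoided.
    void readPixelsViaPbo(int width, int height) {
        GLuint pbo;
        glGenBuffers(1, &pbo);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
        glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);

        // With a PBO bound, the last argument is an offset into the buffer.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);

        void* data = glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, width * height * 4,
                                      GL_MAP_READ_BIT);
        // ... consume data on the CPU ...
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
        glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
        glDeleteBuffers(1, &pbo);
    }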
There are many apps which used this feature, because it can heavily increase performance. I think many developers are stuck with this problem now.
Since Android 8 / API 26 (sorry, not Android 7...), the Hardware Buffer APIs are an alternative to GraphicBuffer.
The native hardware buffer API lets you
directly allocate buffers to create your own pipelines for
cross-process buffer management. You can allocate an AHardwareBuffer
and use it to obtain an EGLClientBuffer resource type via the
eglGetNativeClientBufferANDROID extension.
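A minimal sketch of that zero-copy path (API 26+; assumes an EGL context is current; error checks omitted, and depending on your headers, eglCreateImageKHR and glEGLImageTargetTexture2DOES may need to be loaded via eglGetProcAddress):

    #include <android/hardware_buffer.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    // Allocate a CPU+GPU visible buffer and bind it to a GL texture (zero copy).
    void uploadViaHardwareBuffer() {
        AHardwareBuffer_Desc desc = {};
        desc.width  = 1920;  // illustrative size
        desc.height = 1080;
        desc.layers = 1;
        desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
        desc.usage  = AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN |
                      AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE;

        AHardwareBuffer* buffer = nullptr;
        AHardwareBuffer_allocate(&desc, &buffer);

        // CPU side: write pixels straight into the shared allocation.
        void* pixels = nullptr;
        AHardwareBuffer_lock(buffer, AHARDWAREBUFFER_USAGE_CPU_WRITE_OFTEN,
                             -1 /* no fence */, nullptr, &pixels);
        // ... fill pixels ...
        AHardwareBuffer_unlock(buffer, nullptr);

        // GPU side: wrap the same memory as an EGLImage bound to a texture.
        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        EGLClientBuffer clientBuf = eglGetNativeClientBufferANDROID(buffer);
        EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                              EGL_NATIVE_BUFFER_ANDROID,
                                              clientBuf, nullptr);
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, (GLeglImageOES)image);
    }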
NDK revision history:
The minimum NDK revision is r15c (July 2017):
Android NDK, Revision 15c (July 2017)
Added native APIs for Android 8.0.
* Hardware Buffer API
android/hardware_buffer_jni.h is in the directory (NDK)/sysroot/usr/include/.
Refs:
NDK - Native Hardware Buffer (android/hardware_buffer_jni.h)
Android/Java - HardwareBuffer
GraphicBuffer-related article: Using OpenGL ES to Accelerate Apps with Legacy 2D GUIs
NB: for Android 7 / API 24,
the Native API guide also says, in the Graphics/EGL section:
API level 24 added support for the EGL_KHR_mutable_render_buffer,
ANDROID_create_native_client_buffer, and
ANDROID_front_buffer_auto_refresh extensions.
EGL_ANDROID_create_native_client_buffer is an EGL extension containing eglCreateNativeClientBufferANDROID(), which returns an EGLClientBuffer (EGL/eglext.h).
I think you can use SurfaceTexture: a SurfaceTexture can be created with MediaCodec and encoded directly by MediaCodec. This approach can process 1080p video in 2-5 ms per video frame.

Low-latency audio playback on Android

I'm currently attempting to minimize audio latency for a simple application:
I have a video on a PC, and I'm transmitting the video's audio through RTP to a mobile client. With a very similar buffering algorithm, I can achieve 90ms of latency on iOS, but a dreadful ±180ms on Android.
I'm guessing the difference stems from the well-known latency issues on Android.
However, after reading around a bit, I came upon this article, which states that:
Low-latency audio has been available since Android 4.1/4.2 on certain devices.
Low-latency audio can be achieved using libpd, which is the Pure Data library for Android.
I have two questions, directly related to those two statements:
Where can I find more information on the new low-latency audio in Jelly Bean? This is all I can find, but it's sorely lacking in specific information. Should the changes be transparent to me, or is there some new class or API call I should be using to notice any change in my application? I'm using the AudioTrack API, and I'm not even sure it will reap the benefits of this improvement, or whether I should be looking into some other mechanism for audio playback.
Should I look into using libpd? It seems to me like it's the only chance I have of achieving lower latencies, but since I've always thought of Pd as an audio synthesis utility, is it really suited for a project that just grabs frames from a network stream and plays them back? I'm not doing any synthesizing. Am I following the wrong trail?
As an additional note, before someone mentions OpenSL ES, this article makes it quite clear that no improvements in latency should be expected from using it:
"As OpenSL ES is a native C API, non-Dalvik application threads which
call OpenSL ES have no Dalvik-related overhead such as garbage
collection pauses. However, there is no additional performance benefit
to the use of OpenSL ES other than this. In particular, use of OpenSL
ES does not result in lower audio latency, higher scheduling priority,
etc. than what the platform generally provides."
For lowest latency on Android as of version 4.2.2, you should do the following, ordered from least to most obvious:
Pick a device that supports FEATURE_AUDIO_PRO if possible, or FEATURE_AUDIO_LOW_LATENCY if not. ("Low latency" is 50ms one way; pro is <20ms round trip.)
Use OpenSL. The Dalvik GC has a low amortized cost, but when it runs it takes more time than a low-latency audio thread can allow.
Process audio in a buffer queue callback. The system runs buffer queue callbacks in a thread that has more favorable scheduling than normal user-mode threads.
Make your buffer size a multiple of AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER). Otherwise your callback will occasionally get two calls per timeslice rather than one. Unless your CPU usage is really light, this will probably end up glitching. (On Android M, it is very important to use EXACTLY the system buffer size, due to a bug in the buffer handling code.)
Use the sample rate provided by AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE). Otherwise your buffers take a detour through the system resampler.
Never make a syscall or lock a synchronization object inside the buffer callback. If you must synchronize, use a lock-free structure. For best results, use a completely wait-free structure such as a single-reader single-writer ring buffer (see the sketch after this list). Loads of developers get this wrong and end up with glitches that are unpredictable and hard to debug.
Use vector instructions such as NEON, SSE, or whatever the equivalent instruction set is on your target processor.
Test and measure your code. Track how long it takes to run--and remember that you need to know the worst-case performance, not the average, because the worst case is what causes the glitches. And be conservative. You already know that if it takes more time to process your audio than it does to play it, you'll never get low latency. But on Android this is even more important, because the CPU frequency fluctuates so much. You can use perhaps 60-70% of CPU for audio, but keep in mind that this will change as the device gets hotter or cooler, or as the wifi or LTE radios start and stop, and so on.
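To make points 3 through 5 concrete, here is a minimal sketch (buffer sizes and names are illustrative, not prescriptive) of a buffer-queue callback fed from a wait-free single-reader single-writer ring buffer, with no locks, syscalls, or allocation on the audio thread:

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>
    #include <atomic>
    #include <cstring>

    // Wait-free single-reader single-writer ring buffer (capacity a power of two).
    struct SpscRing {
        static const size_t kCapacity = 1 << 14;  // samples, illustrative
        int16_t data[kCapacity];
        std::atomic<size_t> writePos{0};
        std::atomic<size_t> readPos{0};

        // Called by the producer thread (e.g. decoder/network) only.
        size_t write(const int16_t* src, size_t n) {
            size_t w = writePos.load(std::memory_order_relaxed);
            size_t r = readPos.load(std::memory_order_acquire);
            size_t free = kCapacity - (w - r);
            if (n > free) n = free;
            for (size_t i = 0; i < n; ++i) data[(w + i) & (kCapacity - 1)] = src[i];
            writePos.store(w + n, std::memory_order_release);
            return n;
        }

        // Called by the consumer (audio callback) only.
        size_t read(int16_t* dst, size_t n) {
            size_t r = readPos.load(std::memory_order_relaxed);
            size_t w = writePos.load(std::memory_order_acquire);
            size_t avail = w - r;
            if (n > avail) n = avail;
            for (size_t i = 0; i < n; ++i) dst[i] = data[(r + i) & (kCapacity - 1)];
            readPos.store(r + n, std::memory_order_release);
            return n;
        }
    };

    static SpscRing gRing;
    // One system burst: PROPERTY_OUTPUT_FRAMES_PER_BUFFER * channels, queried
    // from AudioManager.getProperty() on the Java side and passed down via JNI.
    static int16_t gBuffer[480 * 2];

    // Runs on the fast audio thread: no locks, no syscalls, no allocation.
    static void bqCallback(SLAndroidSimpleBufferQueueItf bq, void* /*context*/) {
        const size_t kSamples = sizeof(gBuffer) / sizeof(gBuffer[0]);
        size_t got = gRing.read(gBuffer, kSamples);
        // On underrun, pad with silence rather than blocking for more data.
        memset(gBuffer + got, 0, (kSamples - got) * sizeof(int16_t));
        (*bq)->Enqueue(bq, gBuffer, kSamples * sizeof(int16_t));
    }

Only the producer thread calls write() and only the audio thread calls read(), which is what makes the relaxed/acquire/release ordering sufficient here.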
Low-latency audio is no longer a new feature for Android, but it still requires device-specific changes in the hardware, drivers, kernel, and framework to pull off. This means that there's a lot of variation in the latency you can expect from different devices, and given how many different price points Android phones sell at, there probably will always be differences. Look for FEATURE_AUDIO_PRO or FEATURE_AUDIO_LOW_LATENCY to identify devices that meet the latency criteria your app requires.
From the link at your point 1:
"Low-latency audio
Android 4.2 improves support for low-latency audio playback, starting
from the improvements made in Android 4.1 release for audio output
latency using OpenSL ES, Soundpool and tone generator APIs. These
improvements depend on hardware support — devices that offer these
low-latency audio features can advertise their support to apps through
a hardware feature constant."
Your citation in its complete form:
"Performance
As OpenSL ES is a native C API, non-Dalvik application threads which
call OpenSL ES have no Dalvik-related overhead such as garbage
collection pauses. However, there is no additional performance benefit
to the use of OpenSL ES other than this. In particular, use of OpenSL
ES does not result in lower audio latency, higher scheduling priority,
etc. than what the platform generally provides. On the other hand, as
the Android platform and specific device implementations continue to
evolve, an OpenSL ES application can expect to benefit from any future
system performance improvements."
So, the API for communicating with the drivers and then the hardware is OpenSL (in the same fashion OpenGL does for graphics). Earlier versions of Android had badly designed drivers and/or hardware, though. These problems were addressed and corrected in versions 4.1 and 4.2, so if the hardware has the power, you can get low latency using OpenSL.
Again, from this note on the Pure Data library website, it is evident that the library itself uses OpenSL to achieve low latency:
Low latency support for compliant devices
The latest version of Pd for
Android (as of 12/28/2012) supports low-latency audio for compliant
Android devices. When updating your copy, make sure to pull the latest
version of both pd-for-android and the libpd submodule from GitHub.
At the time of writing, Galaxy Nexus, Nexus 4, and Nexus 10 provide a
low-latency track for audio output. In order to hit the low-latency
track, an app must use OpenSL, and it must operate at the correct
sample rate and buffer size. Those parameters are device dependent
(Galaxy Nexus and Nexus 10 operate at 44100Hz, while Nexus 4 operates
at 48000Hz; the buffer size is different for each device).
As is its wont, Pd for Android papers over all those complexities as
much as possible, providing access to the new low-latency features
when available while remaining backward compatible with earlier
versions of Android. Under the hood, the audio components of Pd for
Android will use OpenSL on Android 2.3 and later, while falling back
on the old AudioTrack/AudioRecord API in Java on Android 2.2 and
earlier.
When using OpenSL ES, you should fulfil the following requirements to get low-latency output on Jelly Bean and later versions of Android:
The audio should be mono or stereo, linear PCM.
The audio sample rate should be the same as the output's native rate (this might not actually be required on some devices, because the FastMixer is capable of resampling if the vendor configures it to do so; but in my tests I got very noticeable artifacts when upsampling from 44.1 kHz to 48 kHz in the FastMixer).
Your BufferQueue should have at least two buffers. (This requirement has since been relaxed; see this commit by Glenn Kasten. I'm not sure which Android version this first appeared in, but a guess would be 4.4.)
You can't use certain effects (e.g. Reverb, Bass Boost, Equalization, Virtualization, ...).
The SoundPool class will also attempt to make use of fast AudioTracks internally when possible (the same criteria as above apply, except for the BufferQueue part).
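For illustration, a player data source satisfying the requirements above might be declared like this (a sketch; SL_SAMPLINGRATE_48 stands in for whatever AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE) reports on the device):

    #include <SLES/OpenSLES.h>
    #include <SLES/OpenSLES_Android.h>

    // Buffer queue with two buffers, stereo linear PCM at the native rate.
    SLDataLocator_AndroidSimpleBufferQueue loc_bufq = {
        SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };

    SLDataFormat_PCM format_pcm = {
        SL_DATAFORMAT_PCM,
        2,                                              // channel count: stereo
        SL_SAMPLINGRATE_48,                             // native output rate (device dependent)
        SL_PCMSAMPLEFORMAT_FIXED_16,                    // bits per sample
        SL_PCMSAMPLEFORMAT_FIXED_16,                    // container size
        SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT, // channel mask
        SL_BYTEORDER_LITTLEENDIAN };

    SLDataSource audioSrc = { &loc_bufq, &format_pcm };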
For those of you more interested in Android's 10 Millisecond Problem, i.e. low-latency audio on Android: we at Superpowered created the Android Audio Path Latency Explainer. Please see here:
http://superpowered.com/androidaudiopathlatency/#axzz3fDHsEe56
Another database of audio latencies and buffer sizes used:
http://superpowered.com/latency/#table
Source code:
https://github.com/superpoweredSDK/SuperpoweredLatency
There is a new C++ library, Oboe, which helps with reducing audio latency. I have used it in my projects and it works well.
It has these features which help reduce audio latency:
Automatic latency tuning
Chooses the audio API (OpenSL ES on API 16+ or AAudio on API 27+)
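Getting a low-latency output stream takes only a few lines with Oboe (a sketch; the callback just renders silence here):

    #include <oboe/Oboe.h>
    #include <cstring>
    #include <memory>

    // Callback that renders audio on the high-priority stream thread.
    class MyCallback : public oboe::AudioStreamCallback {
    public:
        oboe::DataCallbackResult onAudioReady(oboe::AudioStream* stream,
                                              void* audioData,
                                              int32_t numFrames) override {
            // Fill the buffer; this sketch just outputs silence.
            memset(audioData, 0, numFrames * stream->getBytesPerFrame());
            return oboe::DataCallbackResult::Continue;
        }
    };

    MyCallback callback;
    std::shared_ptr<oboe::AudioStream> stream;

    void startAudio() {
        oboe::AudioStreamBuilder builder;
        builder.setPerformanceMode(oboe::PerformanceMode::LowLatency)
               ->setSharingMode(oboe::SharingMode::Exclusive)
               ->setFormat(oboe::AudioFormat::I16)
               ->setCallback(&callback);
        if (builder.openStream(stream) == oboe::Result::OK) {
            stream->requestStart();  // Oboe picks AAudio or OpenSL ES underneath
        }
    }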
An application for measuring sampleRate and bufferSize: https://code.google.com/p/high-performance-audio/source/checkout and a DB of results at http://audiobuffersize.appspot.com/

Recording/playing audio directly with libmedia/AudioFlinger

I'm checking out the possibility of interfacing directly with libmedia/AudioFlinger for playing/recording raw audio (like AudioTrack/AudioRecord do).
The purpose is to work around the minimum buffer size limitation of those two Java classes.
I know that 2.3 introduced OpenSL, but I want to do this for 2.2 and below.
Has anyone done that before? Is there any good reference implementation that uses it?
If not, how would you approach linking against this library and using it to work around the minimum buffer size?
Thanks
Unfortunately, there are only two supported audio APIs available, and you have mentioned both (AudioTrack and OpenSL). Going any lower than that would interfere with the audio mixing already being done by the device for things like sound effects and phone calls. Also, as there is no API for the lower audio layers, you would need to go hacking, which probably isn't what you want to do, for obvious compatibility reasons.

Android Audio Latency Workarounds

So anybody worth their salt in the Android development community knows about issue 3434, relating to low-latency audio in Android. For those who don't, you can educate yourself here: http://code.google.com/p/android/issues/detail?id=3434
I'm looking for any sort of temporary workaround for my personal project. I've heard tell of exposing private interfaces to the NDK by rolling your own build of android and modifying the NDK.
All I need is a way to access the low-level ALSA drivers which are already packaged with the standard 2.2 build. I'd like to have the ability to send PCM directly to the audio hardware on my device. I don't care that the resulting app won't be distributable through the marketplace, and likely won't run on any device other than mine.
Anybody have any useful ideas?
-Griff
EDIT: I should mention that I know AudioTrack provides this functionality, but I'd like much lower latency -- AudioTrack sits around 300 ms; I'd like somewhere around 20-30 ms.
Griff, that's just the problem: the NDK will not improve the known latency issue (that's even documented). The hardware abstraction layer in native code currently adds to the latency, so it's not just about access to the low-level drivers (by the way, you shouldn't rely on ALSA drivers being there anyway).
Android: sound API (deterministic, low latency) covers the tradeoffs pretty well. TL;DR: the NDK gives you a minor benefit because the threads can run at a higher priority, but this benefit is meaningless pre-Jelly Bean because the entire audio system is tuned for Java.
The Galaxy Nexus running 4.1 can get fairly close to 30ms of output latency.
