OpenSL slCreateEngine error on old Android devices

I'm writing an Android application that uses a Buffer Queue Audio Player via OpenSL ES. My application runs just fine on recent devices, but I'm having trouble with the HTC Wildfire S.
In particular, calling the slCreateEngine function returns SL_RESULT_RESOURCE_ERROR. There isn't a lot of information in the documentation about this error.
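For reference, here is roughly what the failing call looks like (a minimal sketch; the variable names and logging are illustrative and error handling is simplified):

#include <SLES/OpenSLES.h>
#include <android/log.h>

static SLObjectItf engineObject = nullptr;
static SLEngineItf engineEngine = nullptr;

bool createEngine() {
    // No options requested; thread safety left at the default.
    SLresult res = slCreateEngine(&engineObject, 0, nullptr, 0, nullptr, nullptr);
    if (res != SL_RESULT_SUCCESS) {
        // On the Wildfire S this is where SL_RESULT_RESOURCE_ERROR comes back.
        __android_log_print(ANDROID_LOG_ERROR, "OpenSL", "slCreateEngine failed: %u", (unsigned) res);
        return false;
    }
    // Realization can also fail with a resource error, so check it separately.
    res = (*engineObject)->Realize(engineObject, SL_BOOLEAN_FALSE);
    if (res != SL_RESULT_SUCCESS) return false;
    res = (*engineObject)->GetInterface(engineObject, SL_IID_ENGINE, &engineEngine);
    return res == SL_RESULT_SUCCESS;
}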
Right before that call to slCreateEngine, I am also seeing these errors in LogCat:
Failed to open MM_PARSER_LIB, dlerror = Cannot load library: load_library[1105]: Library 'libmmparser.so' not found
Failed to open MM_PARSER_LITE_LIB, dlerror = Cannot load library: load_library[1105]: Library 'libmmparser_lite.so' not found
Some theories I have:
The device is too memory-constrained to instantiate the engine object
The device does not support the OpenSL ES audio interface (this thread suggests that not all devices support it: https://groups.google.com/forum/#!topic/android-ndk/Px-7NvaLmjo)
The specs on the HTC Wildfire S are as follows:
Android 2.3.4 (as far as I can take it through carrier updates)
Qualcomm MSM7227 (ARMv6 processor)
512MB RAM
I know that OpenSL ES is supported as of Gingerbread, but that doesn't mean all devices have the capability. Due to other requirements for this application, I must handle my audio processing and playback with the NDK and cannot use MediaPlayer or AudioTrack.
Questions:
Does anyone know what could be causing the reported error on this device?
Is there a way of figuring out what devices are compatible with OpenSL ES?
Are ARMv6 devices necessarily incompatible with OpenSL ES?
EDIT
See my comments to HerrLip
The NativeAudio example from the NDK folder runs on this device and slCreateEngine executes successfully. This eliminates my suspicion that this device and other ARMv6 devices are not supported.
The libmmparser.so and libmmparser_lite.so libraries still cannot be loaded in the NativeAudio example, but this does not seem to be the problem, since the example works on the device.
My application has quite a bit more going on than the NativeAudio example. Perhaps memory constraints are preventing the slCreateEngine function call from getting the required resources.

According to the docs, that error code means "Operation failed due to a lack of resources (usually a result of object realization)." So, I'd assume you're right about constraints.
You could try somebody else's app and see what happens. (Try https://code.google.com/p/high-performance-audio/source/browse/audio-buffer-size )
As for whether ARMv6 devices are necessarily incompatible, I don't know.

Related

AVC HW encoder with MediaCodec Surface reliability?

I'm working on an Android app that uses MediaCodec to encode H.264 video using the Surface method. I am targeting Android 5.0 and I've followed all the examples and samples from bigflake.com (I started working on this project two years ago, so I have already been through all the gotchas and other issues).
Everything works well on the Nexus 6 (which uses the Qualcomm hardware encoder for this), and I'm able to record 1440p video with AAC audio flawlessly in real time, to a multitude of outputs (from local MP4 files up to HTTP streaming).
But when I try to use the app on a Sony Android TV (running Android 5.1) which uses a Mediatek chipset, all hell breaks loose, starting at the encoding level. To be more specific:
It's basically impossible to make the hardware encoder (that is, "OMX.MTK.VIDEO.ENCODER.AVC") work properly. With the most basic setup (which succeeds at MediaCodec's level), I will almost never get output buffers out of it, only weird, spammy logcat error messages stating that the driver has encountered errors each time a frame should be encoded, like this:
01-20 05:04:30.575 1096-10598/? E/venc_omx_lib: VENC_DrvInit failed(-1)!
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] cannot set param
01-20 05:04:30.575 1096-10598/? E/MtkOmxVenc: [ERROR] EncSettingH264Enc fail
Sometimes, configuring it to encode at 360 by 640 pixels will get the encoder to actually encode something, but the first problem I notice is that it only creates one keyframe: the very first video frame. After that, no more keyframes are ever created, only P-frames. Of course, the I-frame interval was set to a sensible value and works with no issues on other devices. Needless to say, this makes it impossible to create seekable MP4 files, or any kind of streamable solution on top.
Most of the time, after releasing the encoder, logcat will start spamming endlessly with "Waiting for input frame to be released...", which basically requires a reboot of the device, since nothing works from that point on anyway.
In the case where it doesn't go haywire after a simple release(), no problem - the hardware encoder makes sure it cannot be created a second time, and the app falls back to Google's generic software AVC encoder, which of course is basically a mock-up encoder that does little more than spit out an error when asked to encode anything larger than 160p video...
So, my question is: is there any hope of making this MediaCodec API actually work on such a device? My understanding was that there are CTS tests performed by Google/manufacturers (in this case, Sony) that would allow a developer to assume an API is supported on a device that prides itself on running Android 5.1. Am I missing something obvious here? Has anyone actually tried doing this (a simple MediaCodec video encoding test) and succeeded? It's really frustrating!
PS: it's worth mentioning that Sony doesn't yet provide a recording capability for this TV set either, which many people are complaining about anyway. So my guess is that this is more of a Mediatek problem, but still, what exactly is Android's CTS for in this case?

build screenrecord for android 4.2

Is it possible to build the screenrecord binary to run on 4.2, or are there too many missing APIs?
Can I work around the changed APIs and libraries? As far as I understand, the main parts (like MediaCodec) exist in 4.2. I don't need the muxer; I can use my own muxing for 4.2.
It will be difficult.
screenrecord shipped in Android 4.4, but except for one minor API (to tear down the virtual display) everything it needs is present in Android 4.3.
Android 4.2 lacks some important things. The MediaMuxer class didn't exist, and MediaCodec didn't yet have the createInputSurface() call. As you noted it's not hard to work around the former, but for the latter you either have to feed MediaCodec raw YUV buffers (which is difficult to do pre-4.3, and will reduce your frame rate), or (since screenrecord is using private internal APIs already) interface directly with libstagefright to implement your own "metadata mode" handling.
I'm not sure offhand what the state of virtual displays was in 4.2, but you'd also need those to work fully.
Companies like Kamcord and Everyplay advertise that they can do game recording back to Android 4.1, but I suspect they're recording the OpenGL ES rendering (rather than virtual display output) and doing a lot of internal plumbing themselves.

Low-latency audio playback on Android

I'm currently attempting to minimize audio latency for a simple application:
I have a video on a PC, and I'm transmitting the video's audio through RTP to a mobile client. With a very similar buffering algorithm, I can achieve 90ms of latency on iOS, but a dreadful ±180ms on Android.
I'm guessing the difference stems from the well-known latency issues on Android.
However, after reading around for a bit, I came upon this article, which states that:
Low-latency audio is available since Android 4.1/4.2 in certain devices.
Low-latency audio can be achieved using libpd, which is a Pure Data library for Android.
I have 2 questions, directly related to those 2 statements:
Where can I find more information on the new low-latency audio in Jellybean? This is all I can find, but it's sorely lacking in specific information. Should the changes be transparent to me, or are there new classes or API calls I should be using in order to notice any changes in my application? I'm using the AudioTrack API, and I'm not even sure if it should reap benefits from this improvement or if I should be looking into some other mechanism for audio playback.
Should I look into using libpd? It seems to me like it's the only chance I have of achieving lower latencies, but since I've always thought of PD as an audio synthesis utility, is it really suited for a project that just grabs frames from a network stream and plays them back? I'm not really doing any synthesizing. Am I following the wrong trail?
As an additional note, before someone mentions OpenSL ES, this article makes it quite clear that no improvements in latency should be expected from using it:
"As OpenSL ES is a native C API, non-Dalvik application threads which
call OpenSL ES have no Dalvik-related overhead such as garbage
collection pauses. However, there is no additional performance benefit
to the use of OpenSL ES other than this. In particular, use of OpenSL
ES does not result in lower audio latency, higher scheduling priority,
etc. than what the platform generally provides."
For lowest latency on Android as of version 4.2.2, you should do the following, ordered from least to most obvious:
Pick a device that supports FEATURE_AUDIO_PRO if possible, or FEATURE_AUDIO_LOW_LATENCY if not. ("Low latency" is 50ms one way; pro is <20ms round trip.)
Use OpenSL. The Dalvik GC has a low amortized cost, but when it runs it takes more time than a low-latency audio thread can allow.
Process audio in a buffer queue callback. The system runs buffer queue callbacks in a thread that has more favorable scheduling than normal user-mode threads.
Make your buffer size a multiple of AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER). Otherwise your callback will occasionally get two calls per timeslice rather than one. Unless your CPU usage is really light, this will probably end up glitching. (On Android M, it is very important to use EXACTLY the system buffer size, due to a bug in the buffer handling code.)
Use the sample rate provided by AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE). Otherwise your buffers take a detour through the system resampler.
Never make a syscall or lock a synchronization object inside the buffer callback. If you must synchronize, use a lock-free structure. For best results, use a completely wait-free structure such as a single-reader single-writer ring buffer (see the sketch after this list). Loads of developers get this wrong and end up with glitches that are unpredictable and hard to debug.
Use vector instructions such as NEON, SSE, or whatever the equivalent instruction set is on your target processor.
Test and measure your code. Track how long it takes to run--and remember that you need to know the worst-case performance, not the average, because the worst case is what causes the glitches. And be conservative. You already know that if it takes more time to process your audio than it does to play it, you'll never get low latency. But on Android this is even more important, because the CPU frequency fluctuates so much. You can use perhaps 60-70% of CPU for audio, but keep in mind that this will change as the device gets hotter or cooler, or as the wifi or LTE radios start and stop, and so on.
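To make the buffer-queue-callback and ring-buffer points above concrete, here is a minimal sketch (names such as RingBuffer, gRing, and kFramesPerBuffer are illustrative; the frame count is assumed to be the value of AudioManager.getProperty(PROPERTY_OUTPUT_FRAMES_PER_BUFFER) passed down from Java):

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>
#include <atomic>
#include <cstdint>
#include <cstring>

constexpr size_t kFramesPerBuffer = 240;          // device-specific value, passed in from Java
constexpr size_t kRingFrames      = kFramesPerBuffer * 16;

struct RingBuffer {                               // single producer, single consumer, wait-free
    int16_t data[kRingFrames * 2];                // stereo interleaved
    std::atomic<size_t> writePos{0}, readPos{0};

    // Called from the network/decoder thread only.
    bool write(const int16_t* src, size_t frames) {
        size_t w = writePos.load(std::memory_order_relaxed);
        size_t r = readPos.load(std::memory_order_acquire);
        if (kRingFrames - (w - r) < frames) return false;     // full: drop, never block
        for (size_t i = 0; i < frames * 2; ++i)
            data[(w * 2 + i) % (kRingFrames * 2)] = src[i];
        writePos.store(w + frames, std::memory_order_release);
        return true;
    }
    // Called from the OpenSL callback thread only.
    void read(int16_t* dst, size_t frames) {
        size_t r = readPos.load(std::memory_order_relaxed);
        size_t w = writePos.load(std::memory_order_acquire);
        size_t avail = w - r;
        size_t n = avail < frames ? avail : frames;
        for (size_t i = 0; i < n * 2; ++i)
            dst[i] = data[(r * 2 + i) % (kRingFrames * 2)];
        std::memset(dst + n * 2, 0, (frames - n) * 2 * sizeof(int16_t)); // underrun: output silence
        readPos.store(r + n, std::memory_order_release);
    }
};

static RingBuffer gRing;
static int16_t    gOutBuf[kFramesPerBuffer * 2];

// Buffer queue callback: no syscalls, no locks, no allocation; just copy and re-enqueue.
void playerCallback(SLAndroidSimpleBufferQueueItf bq, void* /*context*/) {
    gRing.read(gOutBuf, kFramesPerBuffer);
    (*bq)->Enqueue(bq, gOutBuf, sizeof(gOutBuf));
}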
Low-latency audio is no longer a new feature for Android, but it still requires device-specific changes in the hardware, drivers, kernel, and framework to pull off. This means that there's a lot of variation in the latency you can expect from different devices, and given how many different price points Android phones sell at, there probably will always be differences. Look for FEATURE_AUDIO_PRO or FEATURE_AUDIO_LOW_LATENCY to identify devices that meet the latency criteria your app requires.
From the link at your point 1:
"Low-latency audio
Android 4.2 improves support for low-latency audio playback, starting
from the improvements made in Android 4.1 release for audio output
latency using OpenSL ES, Soundpool and tone generator APIs. These
improvements depend on hardware support — devices that offer these
low-latency audio features can advertise their support to apps through
a hardware feature constant."
Your citation in complete form:
"Performance
As OpenSL ES is a native C API, non-Dalvik application threads which
call OpenSL ES have no Dalvik-related overhead such as garbage
collection pauses. However, there is no additional performance benefit
to the use of OpenSL ES other than this. In particular, use of OpenSL
ES does not result in lower audio latency, higher scheduling priority,
etc. than what the platform generally provides. On the other hand, as
the Android platform and specific device implementations continue to
evolve, an OpenSL ES application can expect to benefit from any future
system performance improvements."
So, the API used to communicate with the drivers and then the hardware is OpenSL (in the same way OpenGL does for graphics). The earlier versions of Android had a poor design in the drivers and/or hardware, though. These problems were addressed and corrected in the 4.1 and 4.2 releases, so if the hardware has the power, you get low latency using OpenSL.
Again, from this note on the Pure Data library website, it is evident that the library uses OpenSL itself to achieve low latency:
Low latency support for compliant devices
The latest version of Pd for
Android (as of 12/28/2012) supports low-latency audio for compliant
Android devices. When updating your copy, make sure to pull the latest
version of both pd-for-android and the libpd submodule from GitHub.
At the time of writing, Galaxy Nexus, Nexus 4, and Nexus 10 provide a
low-latency track for audio output. In order to hit the low-latency
track, an app must use OpenSL, and it must operate at the correct
sample rate and buffer size. Those parameters are device dependent
(Galaxy Nexus and Nexus 10 operate at 44100Hz, while Nexus 4 operates
at 48000Hz; the buffer size is different for each device).
As is its wont, Pd for Android papers over all those complexities as
much as possible, providing access to the new low-latency features
when available while remaining backward compatible with earlier
versions of Android. Under the hood, the audio components of Pd for
Android will use OpenSL on Android 2.3 and later, while falling back
on the old AudioTrack/AudioRecord API in Java on Android 2.2 and
earlier.
When using OpenSL ES you should fulfil the following requirements to get low latency output on Jellybean and later versions of Android:
The audio should be mono or stereo, linear PCM.
The audio sample rate should be the same sample rate as the output's native rate (this might not actually be required on some devices, because the FastMixer is capable of resampling if the vendor configures it to do so. But in my tests I got very noticeable artifacts when upsampling from 44.1 to 48 kHz in the FastMixer).
Your BufferQueue should have at least 2 buffers. (This requirement has since been relaxed. See this commit by Glenn Kasten. I'm not sure in which Android version this first appeared, but a guess would be 4.4).
You can't use certain effects (e.g. Reverb, Bass Boost, Equalization, Virtualization, ...).
The SoundPool class will also attempt to make use of fast AudioTracks internally when possible (the same criteria as above apply, except for the BufferQueue part).
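As a minimal sketch of an OpenSL ES player configured to meet these requirements (the function and variable names are illustrative; the native sample rate is assumed to be obtained on the Java side from AudioManager.getProperty(PROPERTY_OUTPUT_SAMPLE_RATE) and passed down):

#include <SLES/OpenSLES.h>
#include <SLES/OpenSLES_Android.h>

bool createFastTrackPlayer(SLEngineItf engine, SLObjectItf outputMix,
                           SLObjectItf* playerObj, SLuint32 nativeSampleRate /* e.g. 48000 */) {
    // Android simple buffer queue with 2 buffers (the minimum for the fast path historically).
    SLDataLocator_AndroidSimpleBufferQueue locBq = {
        SL_DATALOCATOR_ANDROIDSIMPLEBUFFERQUEUE, 2 };
    // Stereo, 16-bit linear PCM at the output's native rate (OpenSL expects milliHertz).
    SLDataFormat_PCM pcm = {
        SL_DATAFORMAT_PCM, 2,
        nativeSampleRate * 1000,
        SL_PCMSAMPLEFORMAT_FIXED_16, SL_PCMSAMPLEFORMAT_FIXED_16,
        SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT,
        SL_BYTEORDER_LITTLEENDIAN };
    SLDataSource src = { &locBq, &pcm };
    SLDataLocator_OutputMix locOut = { SL_DATALOCATOR_OUTPUTMIX, outputMix };
    SLDataSink sink = { &locOut, nullptr };

    // Only request the buffer queue interface: requesting effects
    // (reverb, bass boost, ...) would knock the track off the fast path.
    const SLInterfaceID ids[] = { SL_IID_ANDROIDSIMPLEBUFFERQUEUE };
    const SLboolean req[]     = { SL_BOOLEAN_TRUE };
    SLresult res = (*engine)->CreateAudioPlayer(engine, playerObj, &src, &sink, 1, ids, req);
    if (res != SL_RESULT_SUCCESS) return false;
    return (*playerObj)->Realize(*playerObj, SL_BOOLEAN_FALSE) == SL_RESULT_SUCCESS;
}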
For those of you more interested in Android's 10 Millisecond Problem, i.e. low-latency audio on Android: we at Superpowered created the Android Audio Path Latency Explainer. Please see here:
http://superpowered.com/androidaudiopathlatency/#axzz3fDHsEe56
Another database of audio latencies and buffer sizes used:
http://superpowered.com/latency/#table
Source code:
https://github.com/superpoweredSDK/SuperpoweredLatency
There is a new C++ library, Oboe, which helps with reducing audio latency. I have used it in my projects and it works well.
It has these features which help in reducing audio latency:
Automatic latency tuning
Chooses the audio API (OpenSL ES on API 16+ or AAudio on API 27+)
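A minimal usage sketch, assuming Oboe 1.x from the google/oboe repository (the callback body is illustrative; a real app would fill audioData with its own samples):

#include <oboe/Oboe.h>
#include <algorithm>

// Render callback: runs on a high-priority audio thread managed by Oboe.
class MyCallback : public oboe::AudioStreamCallback {
public:
    oboe::DataCallbackResult onAudioReady(oboe::AudioStream *stream,
                                          void *audioData, int32_t numFrames) override {
        float *out = static_cast<float *>(audioData);
        // Illustrative only: output silence. A real app writes its samples here.
        std::fill(out, out + numFrames * stream->getChannelCount(), 0.0f);
        return oboe::DataCallbackResult::Continue;
    }
};

static MyCallback callback;                  // must outlive the stream
static oboe::AudioStream *stream = nullptr;

bool startLowLatencyStream() {
    oboe::AudioStreamBuilder builder;
    builder.setPerformanceMode(oboe::PerformanceMode::LowLatency)
           ->setSharingMode(oboe::SharingMode::Exclusive)
           ->setFormat(oboe::AudioFormat::Float)
           ->setChannelCount(2)              // stereo
           ->setCallback(&callback);         // Oboe picks OpenSL ES or AAudio itself
    if (builder.openStream(&stream) != oboe::Result::OK) return false;
    return stream->requestStart() == oboe::Result::OK;
}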
An application for measuring sample rate and buffer size: https://code.google.com/p/high-performance-audio/source/checkout and a DB of results: http://audiobuffersize.appspot.com/

Variable video playback rate on Android

I need to make a video player that can gradually change playback speed from 0 to roughly 200%. It has to perform very fast, as it will be playing HD movies recorded at high frame rates (60 FPS). A lower resolution can be used if HD turns out to be impossible to support.
The code only needs to run on relatively high end Android tablets with hardware h264 decoder, and ICS (no Jelly Bean available for the target tablets).
I have not found any support for changing video playback rate in the Android system, and I suspect I need to dig pretty deep into the JNI to get there, but would like to ask here first if anyone has some code, suggestions or pointers that can help me.
I got a custom Android player from Vitamio. In it, the media player has an option to set the playback speed,
i.e.
mMediaPlayer.setPlaybackSpeed(speed);
Set video and audio playback speed
Parameters:
speed e.g. 0.8 or 2.0, default to 1.0, range in [0.5-2]
Please refer to the link: http://www.vitamio.org/en/docs/news/2013/0529/19.html
I have been looking into doing something similar, and here are some of my findings that may be useful to you:
If you have downloaded the Android NDK r7 or later, ndk -> samples -> native-media is a sample project that uses JNI to run a native Android media player.
This uses the OpenMAXAL.h library (which comes with the NDK): you will notice an interface called XA_IID_PLAYBACKRATE. There is a decent reference card, but it is thin on samples. It sounds like it should do what we want, though.
The sample indicates a minSdkVersion = 14, so it should work on your ICS devices.
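For reference, requesting and using that interface looks roughly like this (a sketch assuming XA_IID_PLAYBACKRATE was also requested when the media player was created, as in a modified native-media sample; the function name and rate value are illustrative):

#include <OMXAL/OpenMAXAL.h>

// playerObject: the already-realized XAObjectItf media player from the sample.
bool setHalfSpeed(XAObjectItf playerObject) {
    XAPlaybackRateItf rateItf;
    XAresult res = (*playerObject)->GetInterface(playerObject, XA_IID_PLAYBACKRATE, &rateItf);
    if (res != XA_RESULT_SUCCESS) return false;  // unsupported: compare the FEATURE_UNSUPPORTED error in the log below
    // Rates are in per-mille: 1000 = normal speed, 500 = half, 2000 = double.
    return (*rateItf)->SetRate(rateItf, 500) == XA_RESULT_SUCCESS;
}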
I tested this on the only ICS+ device available to me, a 16GB ASUS Nexus 7 running 4.2 (Jellybean), and I got the following notable output in my log (omitting my own debug statements):
01-15 14:19:33.384: W/libOpenSLES(6037): class MediaPlayer interface 1 requested but unavailable MPH=75
01-15 14:19:33.384: W/libOpenSLES(6037): Leaving Object::GetInterface (SL_RESULT_FEATURE_UNSUPPORTED)
01-15 14:19:33.384: A/libc(6037): jni/native-media-jni.c:409: Java_com_example_nativemedia_NativeMedia_createStreamingMediaPlayer: assertion "XA_RESULT_SUCCESS == res" failed
01-15 14:19:33.384: A/libc(6037): Fatal signal 11 (SIGSEGV) at 0xdeadbaad (code=1), thread 6037 (ple.nativemedia)
in the function that loads up a media stream (or file) and creates the native media player instance. These errors quite pointedly indicate that the feature isn't supported by my device/decoder, my OS, or my file type. I'm not actually sure which one (or which combination) it is, but if it's the first, it probably means there aren't very many devices out there that will support the feature you want. Maybe the Nexus 7 is an outlier, but it's still a sizable chunk of the tablet space, unfortunately, which means we can't expect much consistency from other devices.
If anyone follows these notes and has success in getting things to run, do comment - I will keep hacking away at this and try to get this working, and will update with any progress.

Android Audio Latency Workarounds

So anybody worth their salt in the Android development community knows about issue 3434, relating to low-latency audio in Android. For those who don't, you can educate yourself here: http://code.google.com/p/android/issues/detail?id=3434
I'm looking for any sort of temporary workaround for my personal project. I've heard tell of exposing private interfaces to the NDK by rolling your own build of Android and modifying the NDK.
All I need is a way to access the low-level ALSA drivers which are already packaged with the standard 2.2 build. I'd like to have the ability to send PCM directly to the audio hardware on my device. I don't care that the resulting app won't be distributable over the marketplace, and likely won't run on any device other than mine.
Anybody have any useful ideas?
-Griff
EDIT: I should mention, I know AudioTrack provides this functionality, but I'd like much lower latency -- AudioTrack sits around 300ms, I'd like somewhere around 20-30 ms.
Griff, that's just the problem: the NDK will not improve the known latency issue (that's even documented). The hardware abstraction layer in native code is currently adding to the latency, so it's not just about access to the low-level drivers (by the way, you shouldn't rely on ALSA drivers being there anyway).
Android: sound API (deterministic, low latency) covers the tradeoffs pretty well. TL;DR: NDK gives you a minor benefit because the threads can run at higher priority, but this benefit is meaningless pre-Jellybean because the entire audio system is tuned for Java.
The Galaxy Nexus running 4.1 can get fairly close to 30ms of output latency.
