People on the net quote latency figures of around 88 ms for a Galaxy Nexus running ICS and 72 ms for a Nexus 7 running JB 4.1.1. I have tried both AudioTrack and OpenSL ES and cannot get below 140 ms latency on either device. Am I missing something? I have set my output threads to URGENT_AUDIO priority, I pass the audio in small chunks (e.g. 160 shorts), and I use the minimum buffer size (in the AudioTrack case).
Are the quoted numbers only valid for short sounds played through SoundPool, and not applicable to streaming PCM? Just to be clear, I am talking about playback only, not recording.
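For reference, my playback path looks roughly like this (a trimmed sketch; the real code feeds decoded PCM instead of a silent chunk, and the 44.1 kHz rate is an assumption):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.os.Process;

// Trimmed sketch of the AudioTrack setup described above.
public class PlaybackThread extends Thread {
    @Override
    public void run() {
        Process.setThreadPriority(Process.THREAD_PRIORITY_URGENT_AUDIO);

        int sampleRate = 44100; // assumed; the real rate comes from the source
        int minBuf = AudioTrack.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
                sampleRate, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBuf,                          // minimum buffer size, as described
                AudioTrack.MODE_STREAM);
        track.play();

        short[] chunk = new short[160];          // small chunks, as described
        while (!isInterrupted()) {
            // fill 'chunk' from the audio source here
            track.write(chunk, 0, chunk.length); // blocks once the track buffer is full
        }
        track.release();
    }
}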
This is Android's dirty little secret. It is not fixed; all you need is an app, an ear, and a finger to find the truth.
I am attempting to record an MP4 by rendering video frames through an OpenGL SurfaceView and encoding them with a MediaCodec encoder in one thread, while in another thread I use AudioRecord to capture audio samples and a separate encoder to encode the audio data; finally, a MediaMuxer does the actual muxing.
On some devices (mostly higher-end ones) this method works completely fine and the output video is exactly as expected, with no errors or anything like that. On some other devices, however, that isn't the case.
What happens on some devices (e.g. the Moto Droid Turbo 2) is that the audio encoder (which runs in its own thread) processes a few audio samples and then returns MediaCodec.INFO_BUFFER_UNAVAILABLE when attempting to dequeue an input buffer; this starts happening after about 5 samples have been successfully encoded, while the video encoder runs completely fine. On other devices (e.g. the Samsung Galaxy Alpha) just the opposite happens: the video encoder begins returning MediaCodec.INFO_BUFFER_UNAVAILABLE when dequeuing an input buffer, while the audio encoder runs fine.
I guess my question is (and I have looked all over for a completely clear explanation of this): what causes a buffer to be unavailable? Other than not releasing a buffer before its next use, what can cause this? I am 98% certain that I am releasing the buffers when they need to be released, because the exact same code works on most of the devices I have tested (Nexus 5X, Nexus 6P, Samsung S6, Samsung Edge/Edge+, Moto X (2014), Moto X (2015), HTC One M9, HTC One M8, etc.).
Does anyone have any ideas? I didn't think posting code examples was necessary because the code works on most devices I have tested; my question is simply what, aside from not releasing a buffer, can cause MediaCodec.INFO_BUFFER_UNAVAILABLE to be returned, if anything?
EDIT:
Investigating the issue further, it gets even a little stranger. I was starting to think this had something to do with the processing power of the device, which would explain why the higher-end devices worked fine but the lower-end ones did not. HOWEVER, the Samsung Note Edge (which works fine) has the same CPU, GPU, chipset AND amount of RAM as the Moto Droid Turbo, which hits the error when dequeuing audio buffers from the audio encoder.
EDIT 2:
Yup, this was entirely something I was doing. The issue happened because I missed a release call on the buffer in the case where the muxer hadn't started yet. Instead of releasing the buffer in that case, I simply ignored it and moved on, which left the buffer hung up. Problem solved.
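For anyone hitting the same thing, the corrected drain loop looks roughly like this (a sketch, not my exact code; mEncoder, mMuxer, mTrackIndex and mMuxerStarted are my own fields, and TIMEOUT_US is an assumed constant):

// Sketch of the encoder output drain loop. The key point is that
// releaseOutputBuffer() runs for EVERY dequeued buffer, even when the
// muxer hasn't started yet; skipping it there is what starved the codec.
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = mEncoder.dequeueOutputBuffer(info, TIMEOUT_US);
while (outIndex >= 0) {
    if (mMuxerStarted) {
        ByteBuffer data = mEncoder.getOutputBuffers()[outIndex];
        data.position(info.offset);
        data.limit(info.offset + info.size);
        mMuxer.writeSampleData(mTrackIndex, data, info);
    }
    // Release whether or not the data was written.
    mEncoder.releaseOutputBuffer(outIndex, false);
    outIndex = mEncoder.dequeueOutputBuffer(info, TIMEOUT_US);
}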
I need to create a VOIP app and I'm using OpenSL ES. I need to capture and play PCM audio data at an 8 kHz sampling rate on all Android devices. But when I capture audio at 8 kHz and play it back at the same time (voice communication), it produces noise and the audio is distorted on some devices, such as the Samsung Galaxy S3 and S4. I know there is a specific preferred sampling rate for each device, and I want to know whether there is any workaround or way to work only at 8 kHz without distortion.
I tried increasing the buffer size and many other things, but failed to find an optimal, generic solution. I need audio data sampled at 8 kHz for my encoder and decoder. Resampling the audio before it is passed to my encoder or decoder was my second thought, but that's not the solution I'm looking for.
I found that CSipSimple uses OpenSL and I went through some of its code too, but I still couldn't find a solution; maybe I failed to understand where to concentrate.
I'm stuck here!
Here's how I solved my problem:
I was working on audio streaming for Android using OpenSL ES and this tutorial helped me a lot. I followed the instructions there and got things working. Then I found that audio streaming with this approach doesn't work very well on some devices (mostly Samsung devices). I tried many things, such as increasing the buffer size and disabling environmental reverb, and found this answer very useful for improving streaming performance.
Finally, I found that the audio was distorted because of the thread locks I had used to synchronize the buffer switches. A lock-free structure is recommended for better audio performance, so I went with Victor Lazzarini's approach of lock-free audio I/O. His article, Lock-free audio IO with OpenSL ES on Android, helped a lot in implementing a lock-free structure along with better audio performance.
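The core idea is a single-producer/single-consumer ring buffer in which the audio callback and the processing thread each own one index, so no locks are needed. Here's a conceptual sketch in Java (the real implementation lives in C next to the OpenSL ES callbacks, but the structure is the same):

// Single-producer/single-consumer lock-free ring buffer (conceptual sketch).
// One thread only ever writes, the other only ever reads, so two volatile
// indices are enough; no mutex can block the audio callback.
class LockFreeRingBuffer {
    private final short[] buf;
    private volatile int writePos = 0; // advanced only by the producer
    private volatile int readPos = 0;  // advanced only by the consumer

    LockFreeRingBuffer(int capacity) { buf = new short[capacity]; }

    // Producer side: returns false instead of blocking when full.
    boolean write(short sample) {
        int next = (writePos + 1) % buf.length;
        if (next == readPos) return false;  // full, caller drops or retries
        buf[writePos] = sample;
        writePos = next;                    // publish only after the store
        return true;
    }

    // Consumer side (the audio callback): returns false when empty.
    boolean read(short[] out) {
        if (readPos == writePos) return false;  // empty, caller plays silence
        out[0] = buf[readPos];
        readPos = (readPos + 1) % buf.length;
        return true;
    }
}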
I'm working on an Android app centered around audio playback, and I'm experiencing some erratic behavior (audio stuttering and hiccups) that I suspect might be inherent to certain devices, OS versions, or perhaps the native buffer size of the device.
About my implementation: I need low latency, so I process my audio in OpenSL ES callbacks and use a fairly small buffer size of 128 samples to enqueue the buffers. I do MP3 decoding during the callback, but thanks to my ring buffer size I do not need to decode during every callback cycle.
I'm using a remote testing service to gauge audio playback quality on a variety of devices and OS versions and here's a few examples of the inconsistencies I'm finding.
Samsung Galaxy S4 w/ Android 4.4 - no audio playback issues
Samsung Galaxy S4 w/ Android 4.3 - audio drop-outs/stuttering when locking/unlocking the device
Samsung Galaxy Note 2 w/ Android 4.1.2 - no issues
Samsung Galaxy Note 2 w/ Android 4.3 - audio drop-outs during playback and stuttering when locking/unlocking the screen
Personally, I have a Galaxy S3 w/ 4.1.2 and a Nexus 5 with 4.4 and don't ever experience these issues. I also have a few older 2.3.7 devices where these issues do not occur (2010 Droid Incredible, LG Optimus Elite).
I am fairly confident that I'm not overworking the processor, since the app runs just fine on the older Gingerbread devices.
My questions:
If I raise my minimum SDK to 4.2, I can detect the native buffer size of the hardware and use some multiple of it in my buffer queue callbacks. Would this make much of a difference in the cases where stuttering and drop-outs are problematic, especially during screen lock?
Is there a known bug in Android 4.3 where audio playback suffers, especially during screen-lock actions? Is this possibly just a Samsung issue?
Are there other ways of improving performance to avoid this problem? I absolutely need OpenSL ES for my app.
Thanks.
Increasing the buffer size solves some of the distortion and noise problems. Yes, on SDK 17 (Android 4.2) and above you can query the hardware's native buffer size like this:
String size = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
Yet another approach is to query the minimum buffer size for AudioRecord:
int minForAudioRecord = AudioRecord.getMinBufferSize(8000,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
And for audio playback with AudioTrack:
int minForAudioTrack = AudioTrack.getMinBufferSize(8000,
AudioFormat.CHANNEL_OUT_MONO,
AudioFormat.ENCODING_PCM_16BIT);
If your SDK version is 4.2 or higher, you can also query the device's preferred sample rate:
String rate = audioManager.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
I've found Samsung devices to be the worst to deal with here, because each device takes a different approach to its audio drivers.
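Putting those properties together, here's a hedged sketch of configuring an AudioTrack around the device-preferred values (getProperty() returns a String and may return null on some devices, hence the parsing and fallbacks; context is assumed to be an Activity or Service):

// Sketch: configure output around the device's preferred rate and burst size.
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

String rateStr = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
int sampleRate = (rateStr == null) ? 44100 : Integer.parseInt(rateStr);

String framesStr = am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER);
int framesPerBuffer = (framesStr == null) ? 256 : Integer.parseInt(framesStr);

int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT);

// Use a multiple of the native burst size, but never less than the minimum.
// 2 bytes per frame for mono 16-bit PCM.
int bufferBytes = Math.max(minBuf, 4 * framesPerBuffer * 2);

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferBytes, AudioTrack.MODE_STREAM);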
I'm new to the Android platform, and I want to develop an app that runs in the background, reads the microphone input, applies a transformation to it, and outputs the resulting audio to the speaker.
I'm wondering whether the user will perceive any lag in this process, or whether it's possible to do this in near real time so that the user hears the transformed audio in sync with the ambient audio. Thanks!
Yes, users will hear a severe latency lag or echo with attempts at real-time audio on current, unmodified Android devices using the provided APIs.
The summary is that Android devices are configured with fairly long audio buffers, reported to be somewhere in the range of 100 to 400 milliseconds depending on the particular device and the Android OS version it is running. (Shorter buffers might be possible on Android devices where one can build and install a modified custom build of the OS with custom audio drivers.)
(Humans hear echoes at roughly 25 ms and above. Audio buffers on iOS can be as short as 5.8 ms, so you may have better luck developing your near-real-time audio processing on a different device platform.)
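For context, the straightforward SDK-level loop that people usually try first looks like this (a minimal sketch; 'running' stands for an assumed stop flag, and the transformation is left as a comment). It is exactly this path that incurs the buffer latency described above:

// Minimal mic-to-speaker loop via AudioRecord/AudioTrack; expect the
// 100-400 ms round-trip latency discussed above. Needs RECORD_AUDIO.
int rate = 44100;
int recBuf = AudioRecord.getMinBufferSize(rate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
int playBuf = AudioTrack.getMinBufferSize(rate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, rate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, recBuf);
AudioTrack play = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        playBuf, AudioTrack.MODE_STREAM);

rec.startRecording();
play.play();
short[] buf = new short[recBuf / 2];
boolean running = true; // assumed flag, cleared elsewhere to stop
while (running) {
    int n = rec.read(buf, 0, buf.length);
    // apply your transformation to buf[0..n) here
    if (n > 0) play.write(buf, 0, n);
}
rec.stop(); rec.release();
play.stop(); play.release();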
Audio processing on Android isn't all that great; in fact, to be honest, it sucks. The out-of-the-box latency on Android devices for this kind of thing is pretty awful. You can, however, tinker with the NDK and try to put together something based on OpenSL ES, which will have significantly lower latency.
There is a similar StackOverflow question: Playing back sound coming from microphone in real-time
Some other helpful links:
http://arunraghavan.net/2012/01/pulseaudio-vs-audioflinger-fight/
http://www.musiquetactile.fr/android-is-far-behind-ios/
http://www.geardiary.com/2012/02/21/the-dismal-state-of-android-as-a-music-production-solution/
On the other side of the coin, Android mic quality is way better than iOS quality. I have a Galaxy S4 and a very low-end Huawei phone, and both have wonderful mic quality when recording.
I have an application that plays back AMR audio files that it has downloaded and cached locally.
This works fine — the basic MediaPlayer does its job.
However, the audio volume is generally very low, and manually increasing the volume with the hardware keys still doesn't make the playback quite loud enough.
The behaviour seems to vary across devices — Sony Ericssons are particularly low, HTC devices are reasonable, and the Samsung Galaxy S is actually very loud when the volume is turned up to the maximum.
Are there any relatively simple approaches, using the Android SDK, that could, say, double the volume while playing back from the AMR file?
I note that AudioTrack allows you to manipulate audio, but this seems to be for raw PCM streams.
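One approach is to decode the AMR to raw PCM yourself, boost the samples in software, and play them through AudioTrack. The gain step itself is simple; a sketch (the 2.0f factor is illustrative, and the clamping is essential, otherwise loud passages wrap around and crackle):

// Apply a software gain to 16-bit PCM samples before writing to AudioTrack.
static void applyGain(short[] samples, int count, float gain) {
    for (int i = 0; i < count; i++) {
        int v = (int) (samples[i] * gain);             // e.g. gain = 2.0f to double
        if (v > Short.MAX_VALUE) v = Short.MAX_VALUE;  // clamp, don't wrap
        if (v < Short.MIN_VALUE) v = Short.MIN_VALUE;
        samples[i] = (short) v;
    }
}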