I'm using OpenSL ES in one of my Android apps. When the app is in the foreground the callbacks are pretty regular: the mic callback is called approximately every 10ms and so is the speaker callback. However, if I put my app in the background and open up a browser (or another app, for that matter), I see that a "storm" of callbacks is triggered upon opening the browser (or browsing). Is there a way to get around that? And why does it happen? Is OpenSL ES compensating for a period of time when it wasn't able to execute the callbacks (like it's trying to catch up)?
My source code is in C and I'm on Jelly Bean 4.3.
I have tried increasing the priorities of the AudioTrack and AudioRecord threads, and it does seem to help, but I'm not sure that's the way to go.
ADDITIONAL QUESTIONS
So you're saying that even with increased thread priority you might get a burst of callbacks, and that you should discard those?
How is that a good solution? You'll be dropping mic packets (or draining the source of the speaker packets), right? If you don't drop mic packets, the receiver of the mic packets will interpret the burst of mic packets as excessive jitter, right?
More importantly: I manually increased the thread priority of AudioTrack and AudioRecord and changed the scheduling policy to round-robin. It required both root access and installation of BusyBox (which comes with a command-line utility for changing thread priorities/scheduling policy). How is this done programmatically from C? I want to make sure that it IS the individual thread priority that is increased, and not just the priority of my app (process).
Yes, this is by design. Pushing the thread priority high is the legitimate way to work around it. Make sure to work with the native buffer size and sample rate (see Low-latency audio playback on Android) for best results. You should still be prepared to discard bursts of callbacks, because there is no way to guarantee they will never happen. You should also try to reduce the overall CPU consumption and RAM footprint of your app while it is in the background.
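As for doing it programmatically from C: here is a minimal sketch, under the assumption that you call it from the audio thread itself (e.g. once, at the start of the OpenSL ES callback). setpriority() with a thread id affects only that thread, and the SCHED_RR part is the programmatic equivalent of the BusyBox/chrt trick, so it still needs root or CAP_SYS_NICE. The function name is just illustrative.

```c
/* Sketch: raise the priority of the *current* thread only (not the whole
 * process). Call it once from inside the OpenSL ES recorder/player callback
 * so it applies to the exact thread that runs the callbacks.
 * Uses Bionic's gettid(); SCHED_RR needs root or CAP_SYS_NICE. */
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

static void raise_audio_thread_priority(void)
{
    pid_t tid = gettid();                 /* kernel thread id, not the process id */

    /* Option 1: bump the nice value of this thread only. On Linux,
     * setpriority() applied to a thread id affects just that thread. */
    if (setpriority(PRIO_PROCESS, tid, -19) != 0)
        fprintf(stderr, "setpriority: %s\n", strerror(errno));

    /* Option 2 (root/CAP_SYS_NICE only): switch this thread to round-robin
     * realtime scheduling, i.e. what busybox chrt does from the shell. */
    struct sched_param sp;
    memset(&sp, 0, sizeof(sp));
    sp.sched_priority = 2;                /* modest realtime priority */
    if (sched_setscheduler(tid, SCHED_RR, &sp) != 0)
        fprintf(stderr, "sched_setscheduler: %s\n", strerror(errno));
}
```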
Related
I'm working on an app that uses a MediaPlayer object to play H.264 MP4 videos from a WallpaperService, as it is a live wallpaper app. Battery drain occurs while the device (Nexus 5, Android 6.0.1) is idle and sleeping if I pause/stop the MediaPlayer with mediaPlayer.pause() or mediaPlayer.stop(). The drain is about 3-7%/hour, as tested multiple times overnight. As soon as I release the media player with mediaPlayer.release(), the battery drain goes back to a more normal 1%/hour. I pause/stop the MediaPlayer when onVisibilityChanged is called with false. The phone reports that it is going to sleep in both the stock Android battery chart and Better Battery Stats.
How can this battery drain be explained if the CPU is going into a sleep state successfully?
EDIT: Something new I've discovered is that when calling mediaPlayer.setSurface(null) right before mediaPlayer.pause(), the idle battery use comes back to normal. I can then do mediaPlayer.setSurface(surface) to set it back before mediaPlayer.start(). The problem is there's some black artifacting for a couple of seconds after restarting.
I can't give you a precise answer but can give you what to look for. I suspect what is going on is that pause() is checking for events frequently enough to keep the processor from entering the deeper sleep/C-states. In contrast, stop() doesn't need to check for events and so allows the processor to enter a deep sleep state. I wrote an article on sleep states some years back.
I suspect that the writers of the function decided to check more frequently than is necessary. This is a very common mistake, caused by developers thinking that shorter sleeps / more frequent checking results in better responsiveness (which it almost never does). You can check this by using a processor power monitor that actually reads the hardware sleep states. Unfortunately, most don't, and only check processor-independent "equivalents".
So let's get back to your question: what can you do about it? I can give you some advice, but none of it is very satisfying:
Check for an API or data structure that allows you to set the checking interval for pause(). (By the way, I don't know of any.)
Write your own. Of course, this complicates writing platform-independent apps.
Use an alternative media player that has done this correctly.
Hammer on Google until it's fixed.
Like I said, none of this is very satisfying. By the way, searching the net, I found evidence that this has happened more than once with new Android releases.
Good luck and let us know what happens.
I want to modify the Android scheduler (CFS) myself.
I want to assign a real-time priority to user-interactive tasks, identified by a heuristic or whatever.
So I just want to modify the Android kernel, build my modified kernel, and measure the performance.
How can I do this?
Modifying the Android kernel's scheduling policy is unlikely to be allowed from a security point of view. But based on the various characteristics of "realtime", you can always make your program meet these requirements:
a. Responsiveness: ensure the input loop is as efficient as possible and always responds to input as quickly as possible. In the Linux kernel this is done through "voluntary preemption".
b. Low latency: break every job into pieces as small as possible so that control can be passed back to respond to input, or, in the case of audio, so that work can be issued at a precise clock tick (SCHED_DEADLINE scheduling; a sketch of requesting it from C follows below). Android does have some API for this:
http://source.android.com/devices/audio/latency_design.html
In general, changing priority is not an ideal way to meet realtime requirements (e.g., giving a higher priority to one process may leave another process starved of performance). What is actually done (e.g., in LynxOS, a realtime OS used in missile systems; it is not Linux, although some of its components, like the TCP/IP stack, come from FreeBSD) is to tune the system so that it runs with lots of spare hardware capacity. So in LynxOS many of the system threshold limits are set very low, so the hardware is always free enough to respond quickly to input events.
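That said, if you want to experiment with the SCHED_DEADLINE class mentioned under (b), here is a minimal sketch of requesting it from C. It is assumption-laden: it needs a kernel of 3.14 or later with the deadline class enabled plus root/CAP_SYS_NICE, and since Bionic offers no wrapper for sched_setattr() the raw syscall is used. The runtime/deadline/period values are only illustrative (a 10ms audio period with up to 2ms of CPU work per period).

```c
/* Sketch only: put the current thread into SCHED_DEADLINE.
 * Requires kernel >= 3.14 with the deadline class and CAP_SYS_NICE/root. */
#define _GNU_SOURCE
#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

struct sched_attr {                 /* mirrors the kernel's struct sched_attr */
    uint32_t size;
    uint32_t sched_policy;
    uint64_t sched_flags;
    int32_t  sched_nice;
    uint32_t sched_priority;
    uint64_t sched_runtime;         /* ns of CPU time allowed per period */
    uint64_t sched_deadline;        /* ns from period start to deadline */
    uint64_t sched_period;          /* ns between activations */
};

static int request_deadline_scheduling(void)
{
    struct sched_attr attr = {
        .size           = sizeof(attr),
        .sched_policy   = SCHED_DEADLINE,
        .sched_runtime  = 2  * 1000 * 1000,   /*  2 ms */
        .sched_deadline = 10 * 1000 * 1000,   /* 10 ms */
        .sched_period   = 10 * 1000 * 1000,   /* 10 ms */
    };
    /* pid 0 means "the calling thread"; the last argument is flags. */
    return (int)syscall(__NR_sched_setattr, 0, &attr, 0);
}
```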
https://github.com/keesj/gomo/wiki/AndroidScheduling
Android Low latency Audio using SoundPool
Low-latency audio playback on Android
We have an app with mobile audio clients written against low-level OpenSL ES to achieve low-latency input from the microphone. We then send 10ms frames encapsulated in UDP datagrams to a server.
On the server we do some post-processing that crucially depends on the assumption that frames from the mobile clients arrive at fixed intervals (e.g. 10ms per frame), so we can align them.
It seems that the internal crystal frequencies on mobile phones can vary a lot, and because of this we get perfect alignment at the beginning but poor alignment after a few minutes.
I know that ALSA on Linux can tell you the exact frequency of the crystal, so you can correct your counts based on it. Unfortunately I don't know how to get this information on Android.
Thanks for any help.
The essence of the problem you face is that you have an ADC and a DAC on separate systems with different local oscillators. You're presumably timing your packets against a 3rd (and possibly 4th) CPU clock.
The correct solution to this problem is some kind of clock-recovery algorithm. To do this properly you need a means of accurately timestamping (e.g. to bit accuracy) transmitted packets, and then a PLL to drive the clock rate of the receiver's sample clock. This is precisely the approach that both IEEE 1394 audio and MPEG-2 transport streams use.
Since you probably can't do either of these things, your approach is most likely going to involve dropping or repeating samples (or even entire packets) periodically to keep your receive buffer from under- or over-flowing.
USB Audio has a similar lack of hardware support for clock recovery, and the approaches used there may be applicable to your situation.
Relying on the transmission and reception timing of network packets is a terrible idea. The jitter on delivery times is horrendous, particularly with Wi-Fi or cellular connections. You'd be well advised not to rely on it at all, and instead do as both IEEE 1394 audio and MPEG-2 TS do: decouple audio data transport from consumption using a FIFO into which data is delivered in packets with unreliable timing and from which it is consumed at a constant rate.
As for ALSA, all it can do (unless it has an accurate external timing reference) is measure the drift between the sample clock of the audio interface and the CPU's clock. This does not yield "the exact frequency" of anything, as neither oscillator is likely to be accurate, and both may drift with temperature.
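If you do go the drop/repeat route, here is a rough sketch of what the consumer side can look like, assuming fixed 10ms mono frames at 48kHz. The type, names and watermark values are made up for illustration (and locking between the network thread and the audio thread is omitted):

```c
/* Crude drift absorption: keep the receive FIFO near a target depth and
 * occasionally drop or repeat a whole 10 ms frame when the sender's clock
 * runs fast or slow relative to ours. Illustrative only. */
#include <stdint.h>
#include <string.h>

#define FRAME_SAMPLES 480                 /* 10 ms of mono audio at 48 kHz */
#define MAX_FRAMES    16
#define TARGET_FRAMES 4
#define HIGH_WATER    (TARGET_FRAMES + 2)
#define LOW_WATER     (TARGET_FRAMES - 2)

typedef struct {
    int16_t frames[MAX_FRAMES][FRAME_SAMPLES];
    int     count;                        /* frames currently queued */
    int16_t last[FRAME_SAMPLES];          /* last frame handed to the player */
} fifo_t;

/* Network thread: queue one decoded 10 ms frame (dropped if the FIFO is full). */
static void push_frame(fifo_t *f, const int16_t *frame)
{
    if (f->count < MAX_FRAMES)
        memcpy(f->frames[f->count++], frame, sizeof(f->frames[0]));
}

/* Audio thread: called at its fixed 10 ms tick; returns the frame to play. */
static const int16_t *next_frame(fifo_t *f)
{
    if (f->count > HIGH_WATER) {          /* sender fast: drop the oldest frame */
        memmove(f->frames[0], f->frames[1],
                (size_t)(f->count - 1) * sizeof(f->frames[0]));
        f->count--;
    }
    if (f->count <= LOW_WATER)            /* sender slow or packet late: repeat */
        return f->last;

    memcpy(f->last, f->frames[0], sizeof(f->last));
    memmove(f->frames[0], f->frames[1],
            (size_t)(f->count - 1) * sizeof(f->frames[0]));
    f->count--;
    return f->last;
}
```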
The short version:
I'm developing a synth app using OpenSL ES with low latency. I was doing all the audio calculation in the OpenSL ES callback function (I know I shouldn't, but I did anyway). Now the calculations take about 75% CPU time on my Nexus 4, so the next step is to do the calculations in multiple threads instead.
The problem I ran into was that the audio started to stutter, since the callback thread obviously runs at a high priority while my new thread doesn't. If I use more/bigger buffers the problem goes away, but so does the low latency. Setting a higher priority on the new thread doesn't seem to work.
So, is it even possible to do threaded low-latency audio, or do I have to do everything in the callback for it to work?
I have a buffer of 256 samples, which is about 5ms; that should be ages for the thread scheduler to run my calc thread.
I think the fundamental problem lies in the performance of your synth engine. A decent channel count is achievable with a single core on a Cortex-A8 or -A9 CPU. What language have you implemented it in? If it happens to be Java, I recommend porting it to C++.
Using multiple threads for synthesis is certainly possible, but brings with it new problems - namely that each thread must synchronise before the generated audio can be mixed.
Unless you take an additional latency hit that would come from running the synthesis threads asynchronously, the likely set-up is that in your render call-back you'd signal the additional synthesis threads and then wait for them to complete before mixing the audio from all of them together.
(an obvious optimisation is that the render call-back runs some of the processing itself as it's already running on the CPU and would otherwise be doing nothing).
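A rough sketch of that signal-and-wait arrangement using unnamed POSIX semaphores (available in Bionic). Every name here, including render_voices(), is a hypothetical placeholder for your own synth code:

```c
/* The render callback wakes N pre-started worker threads, waits for all of
 * them to finish, then mixes their buffers into the output. */
#include <pthread.h>
#include <semaphore.h>
#include <stdint.h>

#define NUM_WORKERS   2
#define FRAME_SAMPLES 256

static sem_t   start_sem[NUM_WORKERS];
static sem_t   done_sem;
static int16_t worker_buf[NUM_WORKERS][FRAME_SAMPLES];

/* Hypothetical: renders this worker's share of the voices into out. */
extern void render_voices(int worker, int16_t *out, int samples);

static void *worker_main(void *arg)       /* started once with pthread_create */
{
    int id = (int)(intptr_t)arg;
    for (;;) {
        sem_wait(&start_sem[id]);         /* sleep until the callback kicks us */
        render_voices(id, worker_buf[id], FRAME_SAMPLES);
        sem_post(&done_sem);              /* report completion */
    }
    return NULL;
}

/* Called from the OpenSL ES buffer-queue callback with the output buffer. */
static void render_frame(int16_t *out)
{
    for (int i = 0; i < NUM_WORKERS; i++)
        sem_post(&start_sem[i]);          /* wake every worker */
    for (int i = 0; i < NUM_WORKERS; i++)
        sem_wait(&done_sem);              /* block until all have finished */

    for (int s = 0; s < FRAME_SAMPLES; s++) {   /* naive mix with clipping */
        int32_t acc = 0;
        for (int w = 0; w < NUM_WORKERS; w++)
            acc += worker_buf[w][s];
        if (acc >  32767) acc =  32767;
        if (acc < -32768) acc = -32768;
        out[s] = (int16_t)acc;
    }
}
```

Setup (sem_init(), pthread_create(), and raising the workers' priority) is omitted; the point is that the workers are created once and stay parked on their semaphores, so the callback never pays any thread-creation cost.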
Herein lies the problem. Unless you can be certain that your synth render threads run with real-time priority, you can potentially take a scheduling hit each time the render callback runs, and potentially another if you block the callback thread waiting for the synth render threads to catch up.
Last time I looked at audio on Android, Bionic lacked a means of setting real-time thread priority (e.g. SCHED_FIFO). In any case, whether this is even allowed is a matter of operating system policy: on a desktop Linux system you either need to be root or have adjusted the appropriate ulimit (as root). I'm not sure what Android does here, but I very much suspect that downloaded apps aren't given this permission by default. Nor the other useful permission, which is to mlock() the code and its likely stack needs into physical memory.
The idea is that Phone A sends a sound signal and a Bluetooth signal at the same time, and Phone B calculates the delay between the two signals.
In practice I am getting inconsistent results with delays from 90ms-160ms.
I tried optimizing both ends as much as possible.
On the output end:
Tone is generated once
Bluetooth and audio output each have their own thread
Bluetooth only outputs after AudioTrack.write, and AudioTrack is in streaming mode, so it should start outputting before the write has even completed.
On the receiving end:
Again two separate threads
System time is recorded before each AudioRecord.read
Sampling specs:
44.1 kHz
Reading the entire buffer
Sampling 100 samples at a time using an FFT
Taking into account how many samples have been transformed since the initial read() (see the detection sketch after this list)
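For reference, here's a sketch of the kind of per-block detection loop the specs above describe, but using a single-bin Goertzel filter rather than a full FFT (cheaper for one known tone). The tone frequency and threshold are assumptions:

```c
/* Scan a captured buffer in 100-sample blocks and report the offset, in
 * seconds, of the first block in which the probe tone appears. That offset
 * is added to the system time recorded just before AudioRecord.read(). */
#include <math.h>
#include <stdint.h>

#define SAMPLE_RATE 44100.0
#define BLOCK       100
#define TONE_HZ     1000.0                /* assumed probe tone */
#define THRESHOLD   1.0e8                 /* tune empirically */

static double goertzel_power(const int16_t *x, int n, double freq)
{
    double k = 2.0 * cos(2.0 * M_PI * freq / SAMPLE_RATE);
    double s0 = 0.0, s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++) {
        s0 = (double)x[i] + k * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - k * s1 * s2;   /* squared magnitude at freq */
}

static double find_tone_offset(const int16_t *buf, int total_samples)
{
    for (int start = 0; start + BLOCK <= total_samples; start += BLOCK)
        if (goertzel_power(buf + start, BLOCK, TONE_HZ) > THRESHOLD)
            return start / SAMPLE_RATE;
    return -1.0;                              /* tone not found in this buffer */
}
```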
Your method relies on basically zero latency throughout the whole pipeline, which is realistically impossible. You just can't synchronize it with that degree of accuracy. If you could get the delays down to 5-6ms, it might be possible, but you'll beat your head into your keyboard before that happens. Even then, it could only possibly be accurate to 1.5 meters or so.
Consider the lower end of the delays you're receiving. In 90ms, sound can travel slightly over 30m. That's the very end of the marketed bluetooth range, without even considering that you'll likely be in non-ideal transmitting conditions.
Here's a thread discussing low latency audio in Android. TL;DR is that it sucks, but is getting better. With the latest APIs and recent devices, you may be able to get it down to 30ms or so, assuming you run some hand-tuned audio functions. No simple AudioTrack here. Even then, that's still a good 10-meter circular error probability.
Edit:
A better approach, assuming you can synchronize the devices' clocks, would be to embed a timestamp in the audio signal, using simple AM/FM modulation or a pulse train. Then you could decode it at the other end and know when it was sent. You still have to deal with the latency problem, but it simplifies the whole thing nicely. There's no need for Bluetooth at all, since it isn't a reliable timing reference anyway; it can be regarded as having latency problems of its own.
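To make that concrete, here's one possible (entirely made-up) encoding: a 32-bit millisecond timestamp keyed onto a 1 kHz tone, one bit per 10ms slot, preceded by a preamble so the receiver can find the start. A sketch, not a robust modem:

```c
/* On/off keying of a 1 kHz tone at 44.1 kHz, 16-bit PCM. The caller's out
 * buffer must hold (PREAMBLE_BITS + 32) * SLOT_SAMPLES samples. */
#include <math.h>
#include <stdint.h>

#define SAMPLE_RATE   44100
#define SLOT_SAMPLES  441                 /* 10 ms per bit */
#define TONE_HZ       1000.0
#define PREAMBLE_BITS 8

static void encode_timestamp(uint32_t timestamp_ms, int16_t *out)
{
    int pos = 0;
    for (int bit = 0; bit < PREAMBLE_BITS + 32; bit++) {
        int on = (bit < PREAMBLE_BITS)
               ? 1                                            /* preamble: all ones */
               : (int)((timestamp_ms >> (31 - (bit - PREAMBLE_BITS))) & 1u);

        for (int s = 0; s < SLOT_SAMPLES; s++, pos++) {
            double t = (double)pos / SAMPLE_RATE;
            out[pos] = on ? (int16_t)(12000.0 * sin(2.0 * M_PI * TONE_HZ * t)) : 0;
        }
    }
}
```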
This gives you a pretty good approach
http://netscale.cse.nd.edu/twiki/pub/Main/Projects/Analyze_the_frequency_and_strength_of_sound_in_Android.pdf
You have to create a 1 kHz sound at some amplitude (measured in dB) and try to measure the amplitude of the sound arriving at the other device. From the attenuation you might be able to estimate the distance.
As I remember: a0 = 20*log10(4*pi*distance/lambda), where a0 is the attenuation and lambda is known (you can compute it from the 1 kHz frequency).
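Inverting that formula gives distance from a measured attenuation: d = (lambda / (4*pi)) * 10^(a0/20). A tiny sketch with example numbers (the 30 dB figure is just a placeholder):

```c
/* Distance from measured attenuation of the 1 kHz tone in air. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double speed_of_sound = 343.0;                 /* m/s in air at ~20 C */
    double frequency = 1000.0;                     /* Hz */
    double lambda = speed_of_sound / frequency;    /* ~0.343 m */

    double a0 = 30.0;                              /* example attenuation, dB */
    double distance = (lambda / (4.0 * M_PI)) * pow(10.0, a0 / 20.0);

    printf("attenuation %.1f dB -> distance %.2f m\n", a0, distance);
    return 0;
}
```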
But in such a sensitive environment, noise might spoil the whole thing. Just an idea; it's how I would do it if I were you.