Synchronized audio playback on separate Android devices

I'm curious how I would play the same audio on multiple Android devices, all in sync. Seedio for iOS is an example of what I'm talking about.
I can think of two possible approaches: cache the audio on each device and precisely synchronize the playback start time, or use an RTP-like protocol to synchronize the playback in real time.

My suggestion would be to cache the audio on every device and then synchronize the playback.
NTP can get you surprisingly good clock synchronization.
Then you could redo the clock sync every so often and restart playback from a common point in the file to account for drift in clock speeds. How often you need to do this will depend on how much the devices' clocks drift.
An interesting research project.
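To make the cache-and-schedule idea concrete, here is a rough Java sketch of the scheduling half. It assumes clockOffsetMs has already been measured against a shared time server (e.g. via an SNTP exchange, not shown), and all class, method, and parameter names here are illustrative, not an existing API:

```java
// A minimal sketch of the cache-and-schedule idea, assuming clockOffsetMs has
// already been measured against a shared time server (e.g. via an SNTP
// exchange, not shown). Names are illustrative.
import android.media.MediaPlayer;

public class SyncedStarter {
    // Offset such that System.currentTimeMillis() + clockOffsetMs == server time.
    private final long clockOffsetMs;

    public SyncedStarter(long clockOffsetMs) {
        this.clockOffsetMs = clockOffsetMs;
    }

    // Starts an already-prepared player at an agreed server-time instant,
    // from an agreed position in the locally cached file.
    public void startAt(MediaPlayer prepared, long serverStartTimeMs, int filePositionMs)
            throws InterruptedException {
        prepared.seekTo(filePositionMs);
        // Coarse wait, then spin for the last few milliseconds for accuracy.
        while (System.currentTimeMillis() + clockOffsetMs < serverStartTimeMs - 20) {
            Thread.sleep(5);
        }
        while (System.currentTimeMillis() + clockOffsetMs < serverStartTimeMs) { /* spin */ }
        prepared.start();
    }
}
```

The periodic re-sync then amounts to agreeing on a new (serverStartTimeMs, filePositionMs) pair: redo the NTP exchange, pause, and call startAt again from a common point in the file.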

Related

Android: play multiple mp3s simultaneously, with precise sync and independent volume control

I want to create an Android app that plays multiple mp3s simultaneously, with precise sync (less than 1/10 of a second off) and independent volume control. The size of each mp3 could be over 1MB, with a run time of up to several minutes. My understanding is that MediaPlayer will not do the precise sync, and SoundPool can't handle files over 1MB or 5 seconds of run time. I am experimenting with Superpowered and may end up using that, but I'm wondering if there's anything simpler, given that I don't need any processing (reverb, flange, etc.), which is Superpowered's focus.
I also ran across the YouTube video on Android high-performance audio from Google I/O 2016. I'm wondering if anyone has experience with this.
https://www.youtube.com/watch?v=F2ZDp-eNrh4
Superpowered was originally made for my DJ app (DJ Player in the App Store), where precisely syncing multiple tracks is a requirement.
Therefore, syncing multiple mp3s and independent volume control is definitely possible and core to Superpowered. All you need is the SuperpoweredAdvancedAudioPlayer class for this.
The CrossExample project in the SDK has two players playing in sync.
The built-in audio features in Android are highly device- and/or build-dependent. You can't get a consistent feature set with those. In general, the audio features of Android are not stable. That's why you need a specialized audio library which does everything "inside" your application (so it is not a "wrapper" around Android's audio features).
When you play compressed files (AAC, MP3, etc.) on Android, in most situations they are decoded in hardware to save power, except when the output goes to a USB audio interface. The hardware codec accepts data in big chunks (again, to save power). Since it's not possible to issue a command to start playing multiple streams at once, what often happens is that one stream has already sent a chunk of compressed audio to the hardware codec and starts playing while the others haven't yet sent their data.
You really need to decode these files in your app and mix the output into a single audio stream; then you can guarantee the desired synchronization. The built-in mixing facilities are mostly intended to let multiple apps share the same sound output; they are not designed for multitrack mixing.
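As a rough illustration of the decode-and-mix approach (not Superpowered's actual API), here is a minimal Java sketch that mixes two already-decoded 16-bit PCM buffers with independent gains into a single AudioTrack. The decoding step (e.g. via MediaExtractor/MediaCodec) is assumed to happen elsewhere, and a real app would stream in chunks instead of holding whole tracks in memory:

```java
// A minimal sketch of in-app mixing, assuming both tracks were already decoded
// into 16-bit mono PCM at the same sample rate.
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

public class TwoTrackMixer {
    private static final int SAMPLE_RATE = 44100;

    public void play(short[] trackA, short[] trackB, float volA, float volB) {
        int minBuf = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack out = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBuf, AudioTrack.MODE_STREAM);
        out.play();

        int len = Math.min(trackA.length, trackB.length);
        short[] mixed = new short[len];
        for (int i = 0; i < len; i++) {
            // Apply per-track gain, sum, and clamp to avoid wrap-around distortion.
            int s = (int) (trackA[i] * volA) + (int) (trackB[i] * volB);
            mixed[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, s));
        }
        out.write(mixed, 0, len); // one stream out, so the tracks stay sample-locked
        out.stop();
        out.release();
    }
}
```

Because both tracks pass through the same write, they stay sample-locked by construction; per-track volume is just the gain applied before summing.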

Android audio delay - how to calculate delay

I understand that there are some issues explaining why Android can't play low-latency audio and has a >100 ms delay on everything (well, actually vibrations are faster than audio! Shame on you!), but is there some way to figure out how much earlier I need to trigger the sound so that it actually plays on time?
In other words, how do I calculate the audio delay?
I'm creating a rhythm game and I need to play "ticks" in sync with music.
I'm using libGDX Sound (i.e. SoundPool) and its play() method right now.
Any suggestions?
Your app could emit a sound with the speaker and then use the microphone to detect the sound it emitted itself (something similar to remote.js).
Even though there are many variables involved (the mic will also have latency), you can make the device calibrate itself by repeatedly guessing how long the sound takes to be detected, emitting it again and again until the guess is fine-tuned.
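A rough sketch of that calibration idea in plain Android Java: play a click through the speaker, record with the microphone, and time how long the click takes to show up. The threshold and timeout are illustrative guesses, and the RECORD_AUDIO permission is required:

```java
// A rough sketch of the self-calibration idea: play a click, record with the
// mic, and time how long the click takes to appear in the input.
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

public class LatencyProbe {
    private static final int RATE = 44100;
    private static final short THRESHOLD = 12000; // amplitude that counts as "heard it"

    public long measureRoundTripMillis() {
        int recBuf = AudioRecord.getMinBufferSize(RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, recBuf);

        short[] click = new short[256]; // a short full-scale click
        for (int i = 0; i < click.length; i++) click[i] = Short.MAX_VALUE;

        int playBuf = AudioTrack.getMinBufferSize(RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack play = new AudioTrack(AudioManager.STREAM_MUSIC, RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                playBuf, AudioTrack.MODE_STREAM);

        rec.startRecording();
        long start = System.nanoTime();
        play.play();
        play.write(click, 0, click.length);

        short[] in = new short[recBuf / 2];
        long elapsedMs = -1;
        outer:
        while (System.nanoTime() - start < 2_000_000_000L) { // give up after 2 s
            int n = rec.read(in, 0, in.length);
            for (int i = 0; i < n; i++) {
                if (Math.abs(in[i]) > THRESHOLD) {
                    elapsedMs = (System.nanoTime() - start) / 1_000_000;
                    break outer;
                }
            }
        }
        rec.stop(); rec.release();
        play.stop(); play.release();
        return elapsedMs; // round-trip latency in ms, or -1 if not detected
    }
}
```

Averaging several runs and subtracting an estimate of the input-side latency makes the result usable as the lead time for the ticks.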

Sound echo while playing received sound in real time

I am trying to receive audio from the headset's microphone using AudioRecord and play it back in real time to the headphones using AudioTrack. I have implemented the required code, but the problem is that there is a disturbing echo. I'm not using speakers; I'm using headphones. So what's causing this echo? I used the device's echo canceller introduced in API level 11, and the echo decreased but didn't go away. I'm aware of the audio latency in Android devices, but I can't understand how the delay could cause an echo while I'm using headphones. Please guide me in the right direction.
I don't think there is a generic solution to this problem. The reasons are:
1) The headphone quality may be poor, and there may be internal coupling between the mic and the headphones, since the wires run very close together inside a headset.
2) The echo canceller in Android is not mandatory for all devices to implement, so query for it first before enabling it (see the sketch after this list); the echo canceller implementation may also vary from device to device.
3) Latency hurts the performance of the echo canceller a lot, because the algorithm has to adapt to the delay and buffer that much audio.
4) Lower versions of Android have horrible delay problems, acknowledged by Google itself. You may want to move to a higher Android version, as these things have improved greatly there.
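A minimal sketch of the query-first approach from point 2, using the platform's AcousticEchoCanceler (the session id comes from your AudioRecord via getAudioSessionId()):

```java
import android.media.audiofx.AcousticEchoCanceler;

// Returns the enabled canceller, or null if this device doesn't implement one.
public static AcousticEchoCanceler tryEnableAec(int audioSessionId) {
    if (!AcousticEchoCanceler.isAvailable()) {
        return null; // no AEC implementation shipped on this device
    }
    AcousticEchoCanceler aec = AcousticEchoCanceler.create(audioSessionId);
    if (aec != null) {
        aec.setEnabled(true);
    }
    return aec;
}
```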
In general, with any API that has direct hardware access, like the mic and camera, performance varies from device to device and cannot be guaranteed.
You may want to look at OpenSL ES for better audio performance and easier integration with an AEC library, if you are thinking of integrating one.
Please look at:
https://source.android.com/devices/latency_design.html
Low-latency audio playback on Android
https://www.youtube.com/watch?v=d3kfEeMZ65c
Hope this helps,
Regards,
Shrish

How to synchronize sound playback in android?

I'm writing my first Android app, trying to play back two 10-minute sound files synchronously (imagine an instrumental track and an a cappella), so that I can change the volume of each track independently. I am using two MediaPlayers for this, since SoundPool is targeted at shorter audio samples as far as I've read.
Now my problem is that, when pausing and resuming the playback, the players sometimes end up out of sync, even though I set their positions to the same value before resuming playback.
I know this is somewhat inevitable, because they cannot be started at exactly the same moment and may need different amounts of time to start playback, but: is there another approach that meets my requirements?
You can take a look at JetPlayer; it may accomplish the synchronization you want.
To use it, you author your audio channels (your instrument channel and vocal channel) as MIDI tracks in a JET file, and the player keeps them synchronized while allowing you to mute or unmute the different channels as appropriate.
The user guide for creating JET resources can be found here.
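A hedged sketch of what the JetPlayer approach looks like in code, assuming a .jet file authored with the JetCreator tool; the file path, segment numbers, and track index below are illustrative and depend entirely on how the JET content was authored:

```java
import android.media.JetPlayer;

public class JetMixer {
    private final JetPlayer jet = JetPlayer.getJetPlayer();

    public void start(String jetFilePath) {
        jet.loadJetFile(jetFilePath); // e.g. copied from assets to local storage
        // segment 0, library 0, no repeats, no transpose, all tracks audible, user ID 0
        jet.queueJetSegment(0, 0, 0, 0, 0, (byte) 0);
        jet.play();
    }

    public void setVocalsMuted(boolean muted) {
        // Mute or unmute one MIDI track; playback of the others continues in sync.
        jet.setMuteFlag(1 /* hypothetical vocal track index */, muted, true);
    }
}
```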

Outputting transformed microphone input in realtime?

I'm new to the Android platform, and I want to develop an app that runs in the background, reads the microphone input, applies a transformation to it, and outputs the resulting audio to the speaker.
I'm wondering if there is any lag perceived by the user in this process, or if it's possible to do it in near-realtime so that the user can hear the transformed audio in sync with the ambient audio. Thanks!
Yes, users will hear a severe latency lag or echo with attempts at real-time audio on current unmodified Android devices using the provided APIs.
The summary is that Android devices are configured for fairly long audio buffers, reported to be somewhere in the range of 100 to 400 milliseconds depending on the particular device and the Android OS version it is running. (Shorter buffers might be possible on Android devices where one can build and install a modified custom build of the OS with custom audio drivers.)
(Humans hear echoes at delays of roughly 25 ms and above. Audio buffers on iOS can be as short as 5.8 ms, so you may have better luck developing your near-real-time audio processing on a different device platform.)
Audio processing on Android isn't all that great; in fact, to be honest, it's poor. The out-of-the-box latency on Android devices for this kind of thing is pretty awful. You can, however, tinker with the NDK and try to put together something based on OpenSL ES, which will have significantly lower latency.
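For reference, here is roughly what the naive SDK-level loop the question describes looks like in Java (a sketch, not a low-latency solution): read from the mic, transform, write to the speaker. Even this simple path inherits the platform's large buffers, which is why the delay is audible. Requires the RECORD_AUDIO permission:

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

public class MicPassthrough implements Runnable {
    private static final int RATE = 44100;
    private volatile boolean running = true;

    @Override
    public void run() {
        int bufSize = AudioRecord.getMinBufferSize(RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize);
        AudioTrack play = new AudioTrack(AudioManager.STREAM_MUSIC, RATE,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                bufSize, AudioTrack.MODE_STREAM);

        short[] buf = new short[bufSize / 2];
        rec.startRecording();
        play.play();
        while (running) {
            int n = rec.read(buf, 0, buf.length);
            for (int i = 0; i < n; i++) {
                buf[i] = transform(buf[i]);
            }
            play.write(buf, 0, n);
        }
        rec.stop(); rec.release();
        play.stop(); play.release();
    }

    private short transform(short s) {
        return s; // identity for now; replace with your effect
    }

    public void stop() { running = false; }
}
```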
There is a similar StackOverflow question: Playing back sound coming from microphone in real-time
Some other helpful links:
http://arunraghavan.net/2012/01/pulseaudio-vs-audioflinger-fight/
http://www.musiquetactile.fr/android-is-far-behind-ios/
http://www.geardiary.com/2012/02/21/the-dismal-state-of-android-as-a-music-production-solution/
On the other side of the coin, Android mic quality is much better than iOS quality. I have a Galaxy S4 and a very low-end Huawei phone, and both have wonderful mic quality when recording.
