Android audio delay - how to calculate delay

I understand that there are some issues why Android can't play low-latency audio and has a >100ms delay on everything (well... actually vibrations are faster than audio! Shame on you!), but is there some way to figure out how much earlier I need to trigger the sound so that it actually plays on time?
In other words: how do I calculate the audio delay?
I'm creating a rhythm game and I need to play "ticks" in sync with music.
I'm using libGDX Sound (i.e., a SoundPool and play()) at the moment.
Any suggestions?

Your app could emit a sound with the speaker and then use the microphone to detect the sound emitted by itself (something similar to remote.js).
Even though there are many variables involved (the mic will also have a latency), you can make the device calibrate itself by continuously trying to guess how long the sound will take to be detected, and emitting it again and again until the guess gets "fine tuned".
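A minimal sketch of that round-trip measurement, assuming the RECORD_AUDIO permission is granted; the 1 kHz test tone and the 16000 amplitude threshold are arbitrary choices for illustration, not anything the platform prescribes:

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;

public class LatencyCalibrator {
    private static final int SAMPLE_RATE = 44100;

    // Plays a short 1 kHz burst through the speaker and returns the
    // measured round-trip delay in milliseconds, or -1 if the burst
    // was never detected within 2 seconds.
    public static long measureRoundTripMs() {
        int minRec = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minRec * 4);

        // 50 ms full-scale sine burst as the test signal.
        short[] burst = new short[SAMPLE_RATE / 20];
        for (int i = 0; i < burst.length; i++) {
            burst[i] = (short) (Short.MAX_VALUE
                    * Math.sin(2 * Math.PI * 1000 * i / SAMPLE_RATE));
        }
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT, burst.length * 2,
                AudioTrack.MODE_STATIC);
        track.write(burst, 0, burst.length); // MODE_STATIC: write before play

        recorder.startRecording();
        long start = System.nanoTime();
        track.play();

        long result = -1;
        short[] buf = new short[1024];
        long deadline = start + 2_000_000_000L; // give up after 2 s
        search:
        while (System.nanoTime() < deadline) {
            int n = recorder.read(buf, 0, buf.length);
            for (int i = 0; i < n; i++) {
                if (Math.abs(buf[i]) > 16000) { // crude amplitude detector
                    result = (System.nanoTime() - start) / 1_000_000;
                    break search;
                }
            }
        }
        recorder.stop();
        recorder.release();
        track.release();
        return result;
    }
}
```

The measured value lumps output latency, acoustic travel, and input latency together, so treat it as an upper bound on how early you need to schedule your ticks, and average several runs to smooth out scheduling jitter.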

Related

AudioRecord while also playing audio - accessing output playback data

I'm messing around in my app with a custom model for speech commands - I have it working fine recording and processing input audio from an AudioRecord, and I give feedback to the user through text-to-speech.
One issue I have is that I'd like this to work even while audio is playing - either my own text-to-speech or something else playing in the background (music, for instance). I realize this is going to be a non-trivial problem, but if I could get access in some way to the audio output data (what the phone is playing) and match that up with my microphone input data, I think I could at least adjust my model and improve my results.
However, based on Android - Can I get the audio data for playback from the audio mixer?, it sounds like that is impossible.
Two questions:
1) Is there any way that I'm missing to get access to the expected audio output/playback data through the Android API, or any options the Android API provides for dealing with this issue (the feedback loop between audio output and input)?
2) Outside of stopping all other playback, or waiting for other playback to finish - is there any other approach to this problem? I assume calling apps have a way of dealing with this when the user is on speakerphone; I'm just missing how to do it myself.
Thanks
Answers to 1 & 2: You want AcousticEchoCanceler.
A short lecture on why "deleting the speaker audio from the microphone input" is a non-trivial task that takes substantial signal-processing knowledge: it's more complicated than just time-shifting the speaker audio a little bit and subtracting it from the mic input. The spectrum of the audio changes drastically even as it leaves the speaker (most tiny speakers have a very peaky response centered around 3-4 kHz). The audio may bounce off multiple objects (walls, etc.) before it gets back to the mic (multipath interference). Different frequency components interfere at the microphone in different, impossible-to-predict ways, vastly changing the spectrum of the audio. And if anything in the room moves, say, if you put your hand near the phone, everything changes. That is why you don't want to try to write your own echo-cancellation filter. Android has provided one for you, so you can write cool speakerphone apps and such.
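A minimal sketch of attaching the platform canceler to a recording session; the sample rate and buffer sizing are arbitrary here, and AcousticEchoCanceler.isAvailable() can return false on devices without the effect:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.media.audiofx.AcousticEchoCanceler;

public class EchoFreeRecorder {
    // Open the mic with the platform echo canceler attached, so the
    // device's own playback is suppressed in the captured audio.
    public static AudioRecord openWithAec(int sampleRate) {
        int minBuf = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        // VOICE_COMMUNICATION hints to the platform that echo/noise
        // pre-processing is wanted on this input path.
        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.VOICE_COMMUNICATION,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);

        if (AcousticEchoCanceler.isAvailable()) {
            AcousticEchoCanceler aec =
                    AcousticEchoCanceler.create(recorder.getAudioSessionId());
            if (aec != null) {
                aec.setEnabled(true);
            }
        }
        recorder.startRecording();
        return recorder;
    }
}
```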

Stop an audio play at an exact time

This question has an iOS background, but I am also asking in general.
I am developing an app that needs to start and stop playing an audio file at specific points on the timeline. Apart from finding the duration of play by subtracting the start time from the stop time and using that duration to set a separate timer that stops the audio, is there a more effective (exact) way to control the stop?
Currently in my app, whether the stop point is near or far, the stop occasionally comes too late and part of the audio after the specified stop point is played.
Now, I have examined an audio editing program, namely Audacity: by selecting a range on the timeline and hitting play, Audacity starts and stops (at least to the naked ear or the feeble mind) precisely at the specified points. What is the underlying control mechanism, and how does it differ from the iOS API?
Could iOS employ the same or a similar mechanism? How about Android?
Much appreciated.
With Android you can start and stop the playback of an audio file at an exact time in milliseconds, just like in Audacity.
In Audacity the time is formatted like this: 00h00m00.000s.
The digits after the "." are the milliseconds, so you have to convert both times to milliseconds.
Then you can seek to the start time and stop after the (stop - start) time difference with a Handler, as in the sketch below.
See here: Android: How to stop media (mp3) in playing when specific milliseconds come?
But why not just cut the specific segment out in Audacity?
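A minimal sketch of that seek-then-stop approach with MediaPlayer and a Handler; note that postDelayed only guarantees the callback runs no earlier than the delay, so a few milliseconds of overshoot are still possible:

```java
import android.media.MediaPlayer;
import android.os.Handler;
import android.os.Looper;

public class SegmentPlayer {
    // Play only the part of the file between startMs and stopMs,
    // both given in milliseconds.
    public static void playSegment(final MediaPlayer player,
                                   int startMs, int stopMs) {
        player.seekTo(startMs);
        player.start();
        new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
            @Override
            public void run() {
                if (player.isPlaying()) {
                    player.pause(); // stop at (approximately) stopMs
                }
            }
        }, stopMs - startMs);
    }
}
```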

Synchronized audio playback on separate Android devices

I'm curious how I would play the same audio on multiple Android devices, all in sync. seedio for iOS is an example of what I'm talking about.
I can think of two possible scenarios: cache the audio on each device and exactly synchronize the playback start time, or use an RTP-like protocol to synchronize the playback in real time.
My suggestion would be to cache the audio on every device and then synchronize the playback.
NTP can get you surprisingly good clock synchronization.
Then you can redo the clock sync every so often and restart playback from a common point in the file to account for drift in clock speeds. How often you need to do this will vary based on how much clock drift there is between the devices.
An interesting research project.
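A minimal sketch of the scheduled-start part, assuming each device has already obtained its clock offset from an SNTP query (Android's internal SntpClient isn't public, so a small SNTP library would be needed in practice; ntpOffsetMs and startNtpMs are hypothetical names here):

```java
import android.media.MediaPlayer;
import android.os.Handler;
import android.os.Looper;

public class SyncedStart {
    // Start playback at a shared NTP wall-clock instant.
    // ntpOffsetMs = (NTP time) - (local System.currentTimeMillis()),
    // measured beforehand on each device; startNtpMs is the instant
    // all devices agreed on over the network.
    public static void startAtNtpTime(final MediaPlayer player,
                                      long startNtpMs, long ntpOffsetMs) {
        long startLocalMs = startNtpMs - ntpOffsetMs;
        long delayMs = startLocalMs - System.currentTimeMillis();
        new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
            @Override
            public void run() {
                player.start();
            }
        }, Math.max(0, delayMs));
    }
}
```

This only aligns the start; the periodic re-sync and seek to a common file position mentioned above would still be needed to correct for drift.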

How to synchronize sound playback in android?

I'm writing my first Android app, trying to play back two 10-minute sound files synchronously (imagine an instrumental track and an a cappella vocal track), to be able to change the volume of each track independently. I am using two MediaPlayers for this, since a SoundPool is targeted at shorter audio samples, as far as I've read.
Now my problem is that when pausing and resuming the playback, sometimes the players are not synchronous anymore, even though I set their positions to the same value before resuming playback.
I know that this is kind of inevitable, because they cannot be started at exactly the same moment and they may require different amounts of time to start playback, but: is there maybe another approach that meets my requirements?
You can take a look at JetPlayer; this may accomplish what you want as far as synchronization.
To use it you create your audio channels (the instrument channel and the vocal channel) as MIDI tracks in a JET file, and the player keeps them synchronized while allowing you to mute or unmute the different tracks as appropriate.
The user guide for creating JET resources can be found here.
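A minimal sketch of the playback side, assuming a .jet bundle (authored with the JetCreator tool) at a hypothetical path, with the vocal part on MIDI track 1:

```java
import android.media.JetPlayer;

public class DualTrackPlayer {
    private final JetPlayer jet = JetPlayer.getJetPlayer();

    // Load a JET bundle and queue its first segment.
    public void start(String jetFilePath) {
        jet.clearQueue();
        jet.loadJetFile(jetFilePath); // e.g. a file copied to local storage
        // segment 0 from library 0, play once, no transposition,
        // no tracks muted initially, user ID 0
        jet.queueJetSegment(0, 0, 1, 0, 0, (byte) 0);
        jet.play();
    }

    // Both tracks keep playing in sync; muting just silences one.
    public void setVocalsMuted(boolean muted) {
        jet.setMuteFlag(1, muted); // track 1 = vocal part (assumed)
    }

    public void release() {
        jet.release();
    }
}
```

The catch is that JET works with MIDI content, so this only helps if the two parts can be authored as MIDI rather than as recorded audio files.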

Android MediaPlayer causes game to freeze with "AudioHardware pcm playback is going to standby"

This is a tough one :/
I'm making a music-based Android game a la Audiosurf. It all works nicely, except that a few seconds before the end of a song (which is being played with a normal MediaPlayer) the music stops abruptly and the whole game (including the UI) freezes for several seconds.
Each time that happens I see an "AudioHardware pcm playback is going to standby" error in logcat.
Googling has led me to the conclusion that:
this could be an HTC Hero specific issue (it cannot be reproduced on the emulator or other devices)
this message is normally logged when an HTTP stream isn't fast enough for MediaPlayer
audio in Android sucks in general
As I am already decoding the MP3 with the NDK + libmpg123 for audio analysis, I might as well just play the audio myself (using a very ugly interface between NDK C code and an AudioTrack in Java).
Is there a fix/workaround for this bug, or should I really go that way? (I only have limited time left to complete this project.)
I appreciate every hint!
You might be stopping the music when you've buffered it all in your C code. Since the AudioTrack has a delay in it, you may need to wait longer for it to finish.
I'd need more detail about your code to help, though.
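In other words, drain the track instead of stopping it as soon as the last decoded buffer has been written. A minimal sketch, where framesWritten is a hypothetical running count kept by whatever code feeds decoded PCM into the track:

```java
import android.media.AudioTrack;

public class TrackDrainer {
    // Wait until the playback head has consumed every frame that was
    // written, then stop, instead of stopping right after the last write().
    public static void drainAndStop(AudioTrack track, int framesWritten) {
        while (track.getPlaybackHeadPosition() < framesWritten) {
            try {
                Thread.sleep(10); // poll until the hardware catches up
            } catch (InterruptedException e) {
                break;
            }
        }
        track.stop();
        track.release();
    }
}
```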
