How to use AudioTrack.getTimestamp() on Android to calculate latency?

I see many resources recommending that AudioTrack.getTimestamp() be used on modern Android versions to calculate audio latency for audio/video sync.
For instance:
https://stackoverflow.com/a/37625791/332798
https://developer.amazon.com/docs/fire-tv/audio-video-synchronization.html#section1-1
https://groups.google.com/forum/#!topic/android-platform/PoHfyNK54ps
However, none of these explain how to use the timestamp to calculate the latency. I'm struggling to figure out what to do with the timestamp's framePosition/nanoTime to come up with a latency number.

So prior to this API, you would use AudioTrack.getPlaybackHeadPosition(), which was just an approximation. To account for latency, you then had to offset that value with a latency figure from one of two hidden methods: AudioManager.getOutputLatency() or AudioTrack.getLatency().
With the new AudioTrack.getTimestamp() API, you get a snapshot of the playhead position at a given time, taken directly at the output. As such, it is fully accurate and already accounts for device latency. Thus there's no need to call any other APIs now to add/remove latency.
The caveat is that this timestamp is only a snapshot, and the docs recommend you don't call this new method very often. So the trick to getting the "current" position is to take your last snapshot and linearly extrapolate what the current value should be:
playheadPos = timestamp.framePosition +
        (System.nanoTime() - timestamp.nanoTime) * samplerate / 1e9;
This position can then be compared against how many frames you've written into the AudioTrack, by maintaining another counter which increments every time AudioTrack.write() completes:
int bytesWritten = track.write(...);
writtenPos += bytesWritten / pcmFrameSize;
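Putting the two counters together gives the latency number the question asks about. Here is a minimal sketch (the helper below is my own, not part of the framework API; it assumes the timestamp snapshot is recent and that writtenPos is maintained as described above):

// latency = frames written but not yet played, converted to time
double estimateLatencyMillis(AudioTimestamp timestamp, long writtenPos, int sampleRate) {
    // extrapolate the playhead from the last snapshot
    double playheadPos = timestamp.framePosition
            + (System.nanoTime() - timestamp.nanoTime) * sampleRate / 1e9;
    // frames queued in the pipeline but not yet presented at the output
    double pendingFrames = writtenPos - playheadPos;
    return pendingFrames * 1000.0 / sampleRate;
}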
If you're working with ENCODING_AC3, the playhead position reported by AudioTrack is still in terms of samples. You will either need to convert it to bytes, or convert the number of bytes you've written back into samples. Either way, you will need to know the bitrate of your AC3 stream (e.g. 384000 bps):
int bytesWritten = track.write(...);
writtenPos += bytesWritten * samplerate / (bitrate / 8);
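As a worked example (the numbers are illustrative assumptions, not from the answer above): at 384 kbps and a 48 kHz sample rate, one AC3 sync frame is 1536 bytes and decodes to exactly 1536 samples, so the conversion comes out even:

int bitrate = 384000;      // AC3 stream bitrate in bits per second
int samplerate = 48000;    // output sample rate in frames per second
int bytesWritten = 1536;   // one AC3 sync frame (32 ms) at this bitrate
long frames = (long) bytesWritten * samplerate / (bitrate / 8); // = 1536 frames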

Related

How to compare amplitude from Android with iOS

So I've read various posts and blogs on how to get the amplitude from the microphone on Android. Multiple posts suggested using this implementation:
minSize = AudioRecord.getMinBufferSize(
    44100,
    AudioFormat.CHANNEL_IN_MONO,
    AudioFormat.ENCODING_PCM_16BIT
)
audioRecord = AudioRecord(
    MediaRecorder.AudioSource.MIC,
    44100,
    AudioFormat.CHANNEL_IN_MONO,
    AudioFormat.ENCODING_PCM_16BIT,
    minSize
)
val buffer = ShortArray(minSize)
audioRecord.read(buffer, 0, minSize)
var max = 0
for (s in buffer) {
    if (abs(s.toInt()) > max) {
        max = abs(s.toInt())
    }
}
return max.toDouble() // this being the amplitude
First question: what is the unit of the value being returned? With "regular noise" a typical value might be 381. Is it, for example, millivolts (mV)?
On iOS you are able to get averagePower and peakPower from the AudioRecorder, which return the average or peak amplitude in dBFS.
Second question: Is it possible to do the same implementation that we have on Android on iOS?
Third question: Is it possible to do the same implementation that we have on iOS on Android?
To provide some context: as part of a research project we are looking for different sound patterns that might link a user to a specific context, and to do so at a larger scale we need to be able to compare the amplitude from Android and iOS.
Fourth question: Regardless of the implementations mentioned above, is there a better way to compare soundwaves from microphones of both iOS and Android devices?
The easiest way to get the max amplitude on Android is a lot less code: just call MediaRecorder.getMaxAmplitude(). It returns the max amplitude since the previous call. I'm not sure where you got the suggestion to use the code you did; that seems like the hard way.
The units aren't specified. It should correspond to the amount of pressure picked up by the mic, but as every model will have different mics, amps, DACs, etc., it isn't going to be the same on all devices. All you can be promised is that it will be between 0 and 2^(x-1) - 1, where x is the number of bits per sample you picked. I'm not sure why you'd think it would be in millivolts; this isn't an electrical measurement. Sound is typically measured in dB or pascals.
For comparing iOS to Android and trying to do matching: what are you actually trying to do? The code you have just finds the max value. That's kind of uninteresting, unless all you're building is an applause meter. If you're actually looking to compare soundwaves, wouldn't you be better off taking the Fourier transform and comparing in the frequency domain? The time domain is really messy for that.
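If the goal is just a number comparable to iOS's dBFS readings, one option (a sketch of mine, assuming 16-bit samples so full scale is 32767) is to normalize the linear amplitude and take 20*log10:

// Convert a linear amplitude (0..32767, e.g. from MediaRecorder.getMaxAmplitude())
// to dBFS, the unit iOS reports. 0 dBFS is full scale; quieter is negative.
double amplitudeToDbfs(int maxAmplitude) {
    if (maxAmplitude <= 0) {
        return Double.NEGATIVE_INFINITY; // silence
    }
    return 20.0 * Math.log10(maxAmplitude / 32767.0);
}

Note this only makes the units comparable; it does not calibrate away the device-specific differences in mics and gain stages described above.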

audio latency issues

In the application I want to create, I face some technical obstacles. I have two music tracks in the application. For example, a user imports a music background as the first track. The second track is a voice recorded by the user to the rhythm of the first track, played through the device speaker (or headphones). At this moment we face latency: after recording and playing back in the app, the user hears a loss of synchronisation between the tracks, which occurs because of the microphone and speaker latencies.
Firstly, I tried to detect the delay by filtering the input sound. I use Android's AudioRecord class and its read() method, which fills my short array with audio data.
I found that the initial values of this array are zeros, so I decided to cut them out before writing the data into the output stream.
I consider those zeros a "warm-up" latency of the microphone. Is this approach correct? This operation gives some results, but it doesn't resolve the problem, and at this stage I'm far away from that.
But the worse case is the delay between starting the speakers and playing the music. This delay I cannot filter or detect. I tried to create a calibration feature which measures the delay: I play a "beep" sound through the speakers, and when I start playing it I also begin measuring time. Then I start recording and listen for this sound being picked up by the microphone. When I recognise the sound in the app, I stop measuring time. I repeat this process several times, and the final value is the average of those results. That is how I try to measure the latency of the device. Once I have this value, I can simply shift the second track backwards to synchronise both recordings (I will lose some initial milliseconds of the recording, but I'll skip that case for now; there are ways to fix it).
I thought that this approach would resolve the problem, but it turned out not to be as simple as I thought. I found two issues here:
1. Delay while playing two tracks simultaneously
2. Random in device audio latency.
The first: I play two tracks using the AudioTrack class, calling the play() method like this:
val firstTrack = //creating a track
val secondTrack = //creating a track
firstTrack.play()
secondTrack.play()
This code causes a delay at the stage of playing the tracks. Now I don't even have to think about latency while recording; I cannot even play two tracks simultaneously without delays. I tested this with an external audio file (not recorded in my app): I start the same audio file twice using the code above, and I can hear a delay. I also tried it with the MediaPlayer class, with the same results. In that case I even tried starting the tracks from the OnPreparedListener callback:
val first = // MediaPlayer
val second = // MediaPlayer
second.setOnPreparedListener {
    first.start()
    second.start()
}
And it doesn’t help.
I know that there is one more class provided by Android called SoundPool. According to the documentation, it can be better at playing tracks simultaneously, but I can't use it because it only supports small audio files, and that limitation rules it out for me.
How can I resolve this problem? How can I start playing two tracks precisely at the same time?
The second: audio latency is not deterministic. Sometimes it is small and sometimes it's huge, and it's out of my hands. So measuring device latency can help, but again, it cannot fully resolve the problem.
To sum up: is there any solution which can give me the exact latency per device (or per app session), or some other trigger which detects the actual delay, to provide the best synchronisation while playing back two tracks at the same time?
Thank you in advance!
Synchronising audio for karaoke apps is tough. The main issue you seem to be facing is variable latency in the output stream.
This is almost certainly caused by "warm up" latency: the time it takes from hitting "play" on your backing track to the first frame of audio data being rendered by the audio device (e.g. headphones). This can have large variance and is difficult to measure.
The first (and easiest) thing to try is to use MODE_STREAM when constructing your AudioTrack and prime it with bufferSizeInBytes of data prior to calling play (more here). This should result in lower, more consistent "warm up" latency.
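A minimal sketch of that priming idea (my own code, not from the answer; the 48 kHz stereo 16-bit format parameters are illustrative assumptions):

// Build a streaming AudioTrack and fill its buffer with the first chunk
// of audio before play(), so playback starts with a full buffer.
AudioTrack createPrimedTrack(byte[] pcmAudio) {
    int sampleRate = 48000;
    int bufferSizeInBytes = AudioTrack.getMinBufferSize(
            sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(
            AudioManager.STREAM_MUSIC,
            sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO,
            AudioFormat.ENCODING_PCM_16BIT,
            bufferSizeInBytes,
            AudioTrack.MODE_STREAM);
    // prime: one full buffer of real audio before starting playback
    int primeBytes = Math.min(bufferSizeInBytes, pcmAudio.length);
    track.write(pcmAudio, 0, primeBytes);
    track.play(); // subsequent write() calls keep the stream fed
    return track;
}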
A better way is to use the Android NDK to have a continuously running audio stream which is just outputting silence until the moment you hit play, then start sending audio frames immediately. The only latency you have here is the continuous output latency.
If you decide to go down this route I recommend taking a look at the Oboe library (full disclosure: I am one of the authors).
To answer one of your specific questions...
Is there a way to calculate the latency of the audio output stream programmatically?
Yes. The easiest way to explain this is with a code sample (this is C++ for the AAudio API but the principle is the same using Java AudioTrack):
// Get the index and time that a known audio frame was presented for playing
int64_t existingFrameIndex;
int64_t existingFramePresentationTime;
AAudioStream_getTimestamp(stream, CLOCK_MONOTONIC, &existingFrameIndex, &existingFramePresentationTime);
// Get the write index for the next audio frame
int64_t writeIndex = AAudioStream_getFramesWritten(stream);
// Calculate the number of frames between our known frame and the write index
int64_t frameIndexDelta = writeIndex - existingFrameIndex;
// Calculate the time which the next frame will be presented
int64_t frameTimeDelta = (frameIndexDelta * NANOS_PER_SECOND) / sampleRate_;
int64_t nextFramePresentationTime = existingFramePresentationTime + frameTimeDelta;
// Assume that the next frame will be written into the stream at the current time
int64_t nextFrameWriteTime = get_time_nanoseconds(CLOCK_MONOTONIC);
// Calculate the latency
*latencyMillis = (double) (nextFramePresentationTime - nextFrameWriteTime) / NANOS_PER_MILLISECOND;
A caveat: This method relies on accurate timestamps being reported by the audio hardware. I know this works on Google Pixel devices but have heard reports that it isn't so accurate on other devices so YMMV.
Following the answer of donturner, here's a Java version (that also uses other methods depending on the SDK version)
/** The audio latency has not been estimated yet */
private static final long AUDIO_LATENCY_NOT_ESTIMATED = Long.MIN_VALUE + 1;

/** The audio latency default value if we cannot estimate it */
private static final long DEFAULT_AUDIO_LATENCY = 100L * 1000L * 1000L; // 100 ms

/**
 * Estimate the audio latency.
 *
 * Not accurate at all, depends on SDK version, etc. But that's the best
 * we can do.
 */
private static long estimateAudioLatency(AudioTrack track, long audioFramesWritten) {
    long estimatedAudioLatency = AUDIO_LATENCY_NOT_ESTIMATED;

    // First method. SDK >= 19.
    if (Build.VERSION.SDK_INT >= 19 && track != null) {
        AudioTimestamp audioTimestamp = new AudioTimestamp();
        if (track.getTimestamp(audioTimestamp)) {
            // Calculate the number of frames between our known frame and the write index
            long frameIndexDelta = audioFramesWritten - audioTimestamp.framePosition;
            // Calculate the time at which the next frame will be presented
            long frameTimeDelta = _framesToNanoSeconds(frameIndexDelta);
            long nextFramePresentationTime = audioTimestamp.nanoTime + frameTimeDelta;
            // Assume that the next frame will be written at the current time
            long nextFrameWriteTime = System.nanoTime();
            // Calculate the latency
            estimatedAudioLatency = nextFramePresentationTime - nextFrameWriteTime;
        }
    }

    // Second method. SDK >= 18.
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED && Build.VERSION.SDK_INT >= 18) {
        try {
            Method getLatencyMethod = AudioTrack.class.getMethod("getLatency", (Class<?>[]) null);
            estimatedAudioLatency = (Integer) getLatencyMethod.invoke(track, (Object[]) null) * 1000000L;
        } catch (Exception ignored) {}
    }

    // If no method has successfully given us a value, let's try a third method
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED) {
        AudioManager audioManager = (AudioManager) CRT.getInstance().getSystemService(Context.AUDIO_SERVICE);
        try {
            Method getOutputLatencyMethod = audioManager.getClass().getMethod("getOutputLatency", int.class);
            estimatedAudioLatency = (Integer) getOutputLatencyMethod.invoke(audioManager, AudioManager.STREAM_MUSIC) * 1000000L;
        } catch (Exception ignored) {}
    }

    // No method gave us a value. Let's use a default value. Better than nothing.
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED) {
        estimatedAudioLatency = DEFAULT_AUDIO_LATENCY;
    }
    return estimatedAudioLatency;
}

private static long _framesToNanoSeconds(long frames) {
    return frames * 1000000000L / SAMPLE_RATE;
}
The Android MediaPlayer class is notoriously slow to begin audio playback. I experienced an issue in an app I was creating where there was a greater-than-one-second delay before an audio clip began playing. I resolved it by switching to ExoPlayer, which got playback starting within 100 ms. I've also read that ffmpeg has an even faster audio startup time than ExoPlayer, but I haven't used it so I can't make any promises.

getMaxAmplitude() alternative for Visualizer

In my app I allow the user to record audio using the phone's camera; while the recording is in progress, I update a Path using time as the X value and a normalized form of getMaxAmplitude() for the Y value.
float amp = Math.min(mRecorder.getMaxAmplitude(), mMaxAmplitude)
        / (float) mMaxAmplitude;
This works rather well.
My problem occurs when I go to play back the audio (after transporting it over the network). I want to recreate the waveform generated while recording, but the MediaPlayer class does not possess the same getMaxAmplitude() method.
I have been attempting to use the Visualizer class provided by the framework, but am having a difficult time getting a usable result for the Y value. The byte array returned contains values between -128 and 127, but when I look at the actual values they do not appear to represent the waveform as I would expect it to be.
How do I use the values returned from the visualizer to get a value related to the loudness of the sound?
Your byte array is probably an array of 16-, 24- or 32-bit signed values. Assuming they are 16-bit signed, the bytes will alternate between the hi byte (with the MSB being the sign bit) and the lo byte. Or, depending on the endianness, it could be the lo byte followed by the hi byte. Moreover, if you have two channels of data, the samples are probably interleaved. Assuming 16 bits again, you can decode the samples in a manner similar to this:
short[] sample = new short[numBytes / 2];
for (int i = 0; i < numBytes / 2; ++i) {
    // mask the low byte to avoid sign extension when combining
    sample[i] = (short) ((bytes[i * 2] << 8) | (bytes[i * 2 + 1] & 0xFF));
}
According to the documentation of getMaxAmplitude, it returns the maximum absolute amplitude sampled since the last call. I take this to mean the peak amplitude, though it's not totally clear from the documentation. To compute the peak amplitude, just take the max of the absolute values of all the samples:
int maxPeak = 0;
for (int i = 0; i < numSamples; ++i) {
    maxPeak = Math.max(maxPeak, Math.abs(sample[i]));
}

convolution of audio signal

I'm using AudioRecord to record sound and do some processing in pseudo-realtime on an Android phone.
I'm facing a problem between the FFT and the convolution of an audio signal:
I perform an FFT on a known signal (a sine waveform), and using the FFT I always correctly find the single tone it contains.
Now I want to do the same thing using a convolution (it's an exercise): I perform 5000 convolutions of that signal using 5000 filters. Each filter is a sine waveform at a different frequency between 0 and 5000 Hz.
Then I search for the peak in each convolution output. This way I should find the maximum peak when using the filter with the same tone contained in the signal.
And indeed, with a 2 kHz tone I find the max with the 2 kHz filter.
The problem is that when I receive a 4 kHz tone, I find the max in the convolution with the 4200 Hz filter (while the FFT always works fine).
Is this mathematically possible?
What is the problem with my convolution?
This is the convolution function that I wrote:
// I do the convolution and return the max.
// in       - the array with the signal
// dataSize - the size of the array in
// kernel   - the filter containing the sine at the selected frequency
int convolveAndGetPeak(short[] in, int dataSize, double[] kernel) {
    // to avoid overflow, the kernel must have a maximum amplitude of 1/10 of the max
    int i, j, k;
    int kernelSize = kernel.length;
    int tmpSignalAfterFilter = 0;
    double out;
    // convolution from out[0] to out[kernelSize-2]
    for (i = 0; i < kernelSize - 1; ++i) {
        out = 0; // init to 0 before sum
        for (j = i, k = 0; j >= 0; --j, ++k)
            out += in[j] * kernel[k];
        if (Math.abs((int) out) > tmpSignalAfterFilter) {
            tmpSignalAfterFilter = Math.abs((int) out);
        }
    }
    // continue the convolution from out[kernelSize-1] to out[dataSize-1] (last)
    for (; i < dataSize; ++i) {
        out = 0; // initialize to 0 before accumulating
        for (j = i, k = 0; k < kernelSize; --j, ++k)
            out += in[j] * kernel[k];
        if (Math.abs((int) out) > tmpSignalAfterFilter) {
            tmpSignalAfterFilter = Math.abs((int) out);
        }
    }
    return tmpSignalAfterFilter;
}
The kernel, used as the filter, is generated this way:
// curFreq           - the frequency of the filter in Hz
// kernelSamplesSize - the desired length of the filter (number of samples);
//                     for time-precision reasons I'm using a 20-sample length
// sampleRate        - the sampling frequency
double[] generateKernel(int curFreq, int kernelSamplesSize, int sampleRate) {
    double[] curKernel = new double[kernelSamplesSize];
    for (int kernelIndex = 0; kernelIndex < curKernel.length; kernelIndex++) {
        // the part that makes this a sine wave...
        curKernel[kernelIndex] = Math.sin(2.0 * Math.PI * curFreq * kernelIndex / sampleRate);
    }
    return curKernel;
}
If you want to try the convolution yourself, the data contained in the in array is the following:
http://www.tr3ma.com/Dati/signal.txt
Note 1: the sampling frequency is 44100 Hz.
Note 2: the tone contained in the signal is a single 4 kHz tone (even though the convolution has its max peak with the 4200 Hz filter).
EDIT: I also repeated the test in an Excel sheet. The result is the same (of course, since I'm using the same algorithm), and the algorithm seems correct to me...
This is the Excel sheet I prepared, if you prefer to work in Excel: http://www.tr3ma.com/Dati/convolutions.xlsm
You change the bandwidth by two factors:
a) the length of your kernel (e.g. a length t of 5 ms produces a rough bandwidth of f >= 200 Hz, estimated via 1/0.005 because Δt·Δf >= 1, see "Heisenberg"), and
b) the window function, which you should definitely implement to make your algorithm work in real-world applications, because otherwise in some cases the sidelobes of some filter outputs could yield more energy than the main lobe of the expected filter output.
But you have another problem: you need to convolve with a 2nd kernel consisting of cosine waves (which means that you need the same waves as in the 1st kernel but shifted by 90 degrees). Why is that? Because with only the sine kernel, you get a phase-dependent modulation of the filter outputs (e.g. if the phase difference between the input signal and the kernel wave with the identical frequency is 90 degrees you get the amplitude 0).
Finally, you combine the outputs of both kernels with Pythagoras.
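Here is a minimal sketch of that sine-plus-cosine (quadrature) idea, reusing the generateKernel() helper from the question; the method name and loop structure are mine, and windowing is omitted for brevity:

// Quadrature detection: at each output position, combine the sine and
// cosine filter outputs per sample (sqrt(I^2 + Q^2)) so the phase
// dependence cancels out.
double quadraturePeak(short[] in, int curFreq, int kernelSize, int sampleRate) {
    double[] sinKernel = generateKernel(curFreq, kernelSize, sampleRate);
    double[] cosKernel = new double[kernelSize];
    for (int n = 0; n < kernelSize; n++) {
        // same wave as the sine kernel, shifted by 90 degrees
        cosKernel[n] = Math.cos(2.0 * Math.PI * curFreq * n / sampleRate);
    }
    double maxEnvelope = 0;
    // only full-overlap positions, to keep the sketch short
    for (int i = kernelSize - 1; i < in.length; i++) {
        double si = 0, ci = 0;
        for (int k = 0; k < kernelSize; k++) {
            si += in[i - k] * sinKernel[k];
            ci += in[i - k] * cosKernel[k];
        }
        double envelope = Math.sqrt(si * si + ci * ci); // "Pythagoras"
        maxEnvelope = Math.max(maxEnvelope, envelope);
    }
    return maxEnvelope;
}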
It seems all correct, apart from the number of samples of the kernel (the filter).
Increasing the size of the filter makes the result more accurate.
I don't know the exact formula for the bandwidth of this filter, but it seems clear to me that it's a matter of filter bandwidth. The filter bandwidth also depends on the number of samples of the filter used in the convolution, relative to the sampling frequency (and maybe also to the tone frequency). Unfortunately I cannot increase the number of samples of my filter too much, since otherwise the phone cannot perform the filtering in realtime.
Note: I need the convolution because I need to identify the precise moment when the tone was fired.
EDIT: I compared a filter with 20 samples against a filter with 40 samples. I don't know the formula for the filter bandwidth, but the difference between the two filters was clear in the comparison.
EDIT 2: A few days after posting this solution I found how to calculate the bandwidth of such a filter: it is simply the inverse of the filter duration. For example, a kernel of 40 samples at 44100 Hz lasts about 907 µs, so the filter bandwidth with this kernel and a window of the same length is 1/907 µs ≈ 1.1 kHz.

Explanation of how this MIDI lib for Android works

I'm using the library by @LeffelMania: https://github.com/LeffelMania/android-midi-lib
I'm a musician, but I've always made studio recordings, not MIDI, so I don't understand some things.
The thing I want to understand is this piece of code:
// 2. Add events to the tracks
// Track 0 is the tempo map
TimeSignature ts = new TimeSignature();
ts.setTimeSignature(4, 4, TimeSignature.DEFAULT_METER, TimeSignature.DEFAULT_DIVISION);
Tempo tempo = new Tempo();
tempo.setBpm(228);
tempoTrack.insertEvent(ts);
tempoTrack.insertEvent(tempo);
// Track 1 will have some notes in it
final int NOTE_COUNT = 80;
for (int i = 0; i < NOTE_COUNT; i++) {
    int channel = 0;
    int pitch = 1 + i;
    int velocity = 100;
    long tick = i * 480;
    long duration = 120;
    noteTrack.insertNote(channel, pitch, velocity, tick, duration);
}
OK, I have 228 beats per minute, and I know that I have to insert each note after the previous one. What I don't understand is the duration. Is it in milliseconds? That doesn't make sense if I keep duration = 120 and set my BPM to 60, for example. Nor do I understand the velocity.
MY GOAL
I want to insert notes of X pitch with Y duration.
Could anyone give me some clue?
The way MIDI files are designed, notes are in terms of musical length, not time. So when you insert a note, its duration is a number of ticks, not a number of seconds. By default, there are 480 ticks per quarter note. So that code snippet is inserting 80 sixteenth notes since there are four sixteenths per quarter and 480 / 4 = 120. If you change the tempo, they will still be sixteenth notes, just played at a different speed.
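To make the tick arithmetic concrete, here is a small sketch (my own, using insertNote() from the snippet above and the library's default of 480 ticks per quarter note):

// Insert one middle-C quarter note and compute its wall-clock duration
// at the tempo set above (228 bpm).
final int PPQ = 480;             // ticks per quarter note (library default)
final int bpm = 228;
long quarterNoteTicks = PPQ;     // a quarter note lasts exactly PPQ ticks
double msPerTick = 60000.0 / (bpm * (double) PPQ);
double durationMs = quarterNoteTicks * msPerTick; // about 263 ms at 228 bpm
// channel 0, pitch 60 (middle C), velocity 100, starting at tick 0
noteTrack.insertNote(0, 60, 100, 0L, quarterNoteTicks);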
If you think of playing a key on a piano, the velocity parameter is the speed at which the key is struck. The valid values are 1 to 127. A velocity of 0 means to stop playing the note. Typically a higher velocity means a louder note, but really it can control any parameter the MIDI instrument allows it to control.
A note in a MIDI file consists of two events: a Note On and a Note Off. If you look at the insertNote code you'll see that it is inserting two events into the track. The first is a Note On command at time tick with the specified velocity. The second is a Note On command at time tick + duration with a velocity of 0.
Pitch values also run from 0 to 127. If you do a Google search for "MIDI pitch numbers" you'll get dozens of hits showing you how pitch number relates to note and frequency.
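For reference, the standard mapping (equal temperament with A4 = 440 Hz = pitch 69; a general MIDI convention, not specific to this library) is:

// Convert a MIDI pitch number (0-127) to its frequency in Hz.
double midiPitchToFrequency(int pitch) {
    return 440.0 * Math.pow(2.0, (pitch - 69) / 12.0);
}
// midiPitchToFrequency(69) == 440.0 (A4); midiPitchToFrequency(60) ≈ 261.6 (middle C)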
There is a nice description of timing in MIDI files here. Here's an excerpt in case the link dies:
In a standard MIDI file, there’s information in the file header about “ticks per quarter note”, a.k.a. “parts per quarter” (or “PPQ”). For the purpose of this discussion, we’ll consider “beat” and “quarter note” to be synonymous, so you can think of a “tick” as a fraction of a beat. The PPQ is stated in the last word of information (the last two bytes) of the header chunk that appears at the beginning of the file. The PPQ could be a low number such as 24 or 96, which is often sufficient resolution for simple music, or it could be a larger number such as 480 for higher resolution, or even something like 500 or 1000 if one prefers to refer to time in milliseconds.
What the PPQ means in terms of absolute time depends on the designated tempo. By default, the time signature is 4/4 and the tempo is 120 beats per minute. That can be changed, however, by a “meta event” that specifies a different tempo. (You can read about the Set Tempo meta event message in the file format description document.) The tempo is expressed as a 24-bit number that designates microseconds per quarter-note. That’s kind of upside-down from the way we normally express tempo, but it has some advantages. So, for example, a tempo of 100 bpm would be 600000 microseconds per quarter note, so the MIDI meta event for expressing that would be FF 51 03 09 27 C0 (the last three bytes are the Hex for 600000). The meta event would be preceded by a delta time, just like any other MIDI message in the file, so a change of tempo can occur anywhere in the music.
Delta times are always expressed as a variable-length quantity, the format of which is explained in the document. For example, if the PPQ is 480 (standard in most MIDI sequencing software), a delta time of a dotted quarter note (720 ticks) would be expressed by the two bytes 85 50 (hexadecimal).
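The tempo arithmetic from the excerpt is easy to reproduce in code; here is a quick sketch (mine) of the 60,000,000 / bpm conversion:

// Build the 3-byte payload of a MIDI Set Tempo meta event (FF 51 03 tt tt tt)
// from a tempo in beats per minute.
byte[] tempoPayload(int bpm) {
    int microsPerQuarter = 60_000_000 / bpm; // e.g. 100 bpm -> 600000 = 0x09 27 C0
    return new byte[] {
            (byte) ((microsPerQuarter >> 16) & 0xFF),
            (byte) ((microsPerQuarter >> 8) & 0xFF),
            (byte) (microsPerQuarter & 0xFF)
    };
}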
