I'm using ExoPlayer to play a video in my application, and what I want to do is change the playback speed, for which ExoPlayer provides a straightforward solution:
val playbackParameters = PlaybackParameters(whateverSpeedFloat)
exoPlayer.setPlaybackParameters(playbackParameters)
Now this works, but the problem is that the effect is not immediate: when you change the speed, it takes a few frames for the actual speed to change. I guess some frames are preloaded or buffered, and the new playback parameters only affect the frames after those.
If I stop the video and change the speed from, say, 0.5x to 2x, then press play, there is a very obvious delay before the playback speed changes. But if I press stop, change the speed from 0.5x to 2x AND seek to a different point in the video, then press play, it works great; there is no delay. I guess it reloads/buffers the new frames with the right playback parameters. I tried calling
exoPlayer.clearVideoDecoderOutputBufferRenderer()
after changing speeds, to try to rebuffer the frames after setting the playback parameters, but it doesn't seem to change anything.
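For reference, the seek-based workaround described above can be written directly; this is just a sketch, assuming that seeking to the current position is enough to force a rebuffer:
// Sketch: apply the new speed, then seek in place so the already-buffered
// frames are reprocessed with the new playback parameters.
fun setSpeedImmediately(exoPlayer: SimpleExoPlayer, speed: Float) {
    exoPlayer.setPlaybackParameters(PlaybackParameters(speed))
    exoPlayer.seekTo(exoPlayer.currentPosition)
}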
Any ideas on how to fix this? Or other video player libraries that wouldn't have this problem?
UPDATE:
I still haven't found a solution. I took the problem to ExoPlayer and an issue was raised and "fixed" (https://github.com/google/ExoPlayer/issues/7982); however, I'm still having the same problem, so for now I'll just get back to them and wait.
However, they did mention that the delay is a known issue and that right now there is no solution:
Correct. We looked at options to address the delay a while ago but couldn't find a clean/general/easy to implement approach (there is no API we can use to process the audio just before the mixer, so we have to do it upstream of the audio track buffer, which introduces latency).
Instead, they suggested initialising ExoPlayer with a DefaultRenderersFactory that has audio-track playback parameters enabled:
val defaultRenderersFactory =
DefaultRenderersFactory(this).setEnableAudioTrackPlaybackParams(true)
exoPlayer = SimpleExoPlayer.Builder(this, defaultRenderersFactory).build()
And this does in fact get rid of the delay (not 100%, but I'd say around 80%, which is good enough), but then the video speed gets all clunky and starts freezing and changing speeds every time it is paused/played or seeked to a different point.
I also tried modifying the buffering configuration @GensaGames suggested, but even though I tested different configurations for a while, I never saw any change in the behaviour, so I discarded that solution and went to the ExoPlayer repo.
I'll update this question when I finally have a working video speed changer.
I think you can decrease the buffering time via the configuration set up while initializing ExoPlayer. Below is an example configuration; you can go through the documentation and check the possible values.
/* Instantiate a DefaultLoadControl.Builder. */
DefaultLoadControl.Builder builder = new DefaultLoadControl.Builder();

/* Maximum amount of media data to buffer (in milliseconds). */
final int loadControlMaxBufferMs = 60000;

/* Configure the DefaultLoadControl to use our setting for how many
   milliseconds of media data to buffer. */
builder.setBufferDurationsMs(
        DefaultLoadControl.DEFAULT_MIN_BUFFER_MS,
        loadControlMaxBufferMs,
        /* To reduce the startup time, also change the two values below. */
        DefaultLoadControl.DEFAULT_BUFFER_FOR_PLAYBACK_MS,
        DefaultLoadControl.DEFAULT_BUFFER_FOR_PLAYBACK_AFTER_REBUFFER_MS);

/* Build the actual DefaultLoadControl instance. */
DefaultLoadControl loadControl = builder.createDefaultLoadControl();

/* Instantiate ExoPlayer with our configured DefaultLoadControl. */
ExoPlayer player = ExoPlayerFactory.newSimpleInstance(
        new DefaultRenderersFactory(this),
        new DefaultTrackSelector(),
        loadControl);
There is also a good article about changing the buffer time options.
I tried to play a local mp4 video file on my TV box and found a weird issue with MediaPlayer's playback speed. Here are my logs:
19:30:09.346 E/MediaPlayerManager: currentMediaPlayer's duration = 16021
19:30:09.715 E/MediaPlayerManager: setOnInfoListener - MediaPlayer.MEDIA_INFO_VIDEO_RENDERING_START
19:30:27.982 E/MediaPlayerManager: onComplete
The media duration was 16 s, but it took about 18 s to complete the video's playback; there was always a 2 s delay before the onComplete listener got called. Does anyone have a solution for this?
After some experiments, I found out the issue was the large SurfaceView. If I replaced it with a TextureView or created the SurfaceView with a smaller size, there were no extra delays.
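For anyone else hitting this, a rough sketch of the TextureView route (a sketch only; textureView and videoPath are placeholders, and error handling is omitted):
// Sketch: render MediaPlayer output into a TextureView instead of a large
// SurfaceView. textureView and videoPath are placeholders.
textureView.surfaceTextureListener = object : TextureView.SurfaceTextureListener {
    override fun onSurfaceTextureAvailable(st: SurfaceTexture, w: Int, h: Int) {
        val player = MediaPlayer()
        player.setDataSource(videoPath)
        player.setSurface(Surface(st))       // surface backed by the TextureView
        player.setOnPreparedListener { it.start() }
        player.prepareAsync()
    }
    override fun onSurfaceTextureSizeChanged(st: SurfaceTexture, w: Int, h: Int) {}
    override fun onSurfaceTextureDestroyed(st: SurfaceTexture) = true
    override fun onSurfaceTextureUpdated(st: SurfaceTexture) {}
}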
P.S.: I knew about ExoPlayer, but for some specific reasons I could not use it.
In the application I want to create, I face some technical obstacles. I have two music tracks in the application. For example, a user imports a music background as the first track. The second track is a voice recorded by the user to the rhythm of the first track, which is played through the device's speaker (or headphones). At this moment we face latency: after recording and playing back in the app, the user hears a loss of synchronisation between the tracks, which occurs because of the microphone and speaker latencies.
First, I try to detect the delay by inspecting the input sound. I use Android's AudioRecord class and its read() method, which fills my short array with audio data.
I found that the initial values of this array are zeros, so I decided to cut them out before writing the data to the output stream.
So I consider those zeros the "warm-up" latency of the microphone. Is this approach correct? This operation gives some results, but it doesn't resolve the problem; at this stage, I'm far away from that.
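For illustration, the zero-trimming idea might look like this (a sketch; it assumes the leading zeros really are warm-up and not genuine silence, and recording, bufferSize, audioRecord, outputStream, and toBytes() are placeholders):
// Sketch: skip the leading all-zero samples that AudioRecord.read() delivers
// while the input pipeline is still warming up.
val buffer = ShortArray(bufferSize)
var warmedUp = false
while (recording) {
    val read = audioRecord.read(buffer, 0, buffer.size)
    var offset = 0
    if (!warmedUp) {
        while (offset < read && buffer[offset] == 0.toShort()) offset++
        if (offset < read) warmedUp = true
    }
    // toBytes() is a placeholder for short-to-byte conversion
    if (offset < read) outputStream.write(toBytes(buffer, offset, read - offset))
}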
But the worse case is the delay between starting the speakers and the music actually playing. This delay I cannot filter or detect, so I tried to create a calibration feature that measures it: I play a "beep" sound through the speakers, and the moment I start playing it, I also begin to measure time. Then I start recording and listen for this sound to be detected by the microphone. When I recognise the sound in the app, I stop measuring time. I repeat this process several times, and the final value is the average of those results. That is how I try to measure the latency of the device. Once I have this value, I can simply shift the second track backwards to achieve synchronisation of both recordings (I will lose some initial milliseconds of the recording, but I'll skip that case for now; there are ways to fix it).
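A rough sketch of that calibration loop (with simplified assumptions: 16-bit mono input, a fixed amplitude threshold, and single-shot detection):
// Sketch: play a beep, record from the mic, and measure the time until the
// beep is detected. The threshold and detection logic are assumptions.
fun measureRoundTripLatencyMs(): Long {
    val sampleRate = 44100
    val threshold = 8000                     // assumed detection amplitude
    val bufSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT)
    val recorder = AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufSize)
    val tone = ToneGenerator(AudioManager.STREAM_MUSIC, 100)

    recorder.startRecording()
    val start = System.nanoTime()
    tone.startTone(ToneGenerator.TONE_PROP_BEEP, 150)

    val buffer = ShortArray(bufSize)
    var elapsedMs = -1L
    outer@ while (System.nanoTime() - start < 2_000_000_000L) {  // 2 s timeout
        val read = recorder.read(buffer, 0, buffer.size)
        for (i in 0 until read) {
            if (Math.abs(buffer[i].toInt()) > threshold) {
                elapsedMs = (System.nanoTime() - start) / 1_000_000L
                break@outer
            }
        }
    }
    recorder.stop(); recorder.release(); tone.release()
    return elapsedMs                         // -1 if the beep was never detected
}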
I thought this approach would resolve the problem, but it turned out not to be as simple as I thought. I found two issues here:
1. A delay when playing two tracks simultaneously.
2. Randomness in the device's audio latency.
The first: I play two tracks using the AudioTrack class and call the play() method like this:
val firstTrack = //creating a track
val secondTrack = //creating a track
firstTrack.play()
secondTrack.play()
This code causes delays at the stage of playing the tracks. Now I don't even have to think about latency while recording; I cannot even play two tracks simultaneously without delays. I tested this with an external audio file (not recorded in my app): I start the same audio file twice using the code above, and I can hear a delay. I also tried it with the MediaPlayer class, with the same results. In that case, I even tried starting the tracks from the OnPreparedListener callback:
val firstTrack = // MediaPlayer
val secondTrack = // MediaPlayer
secondTrack.setOnPreparedListener {
    firstTrack.start()
    secondTrack.start()
}
And it doesn’t help.
I know that there is one more class provided by Android, called SoundPool. According to the documentation, it can be better at playing tracks simultaneously, but I can't use it because it only supports small audio files, and I can't accept that limitation.
How can I resolve this problem? How can I start playing two tracks precisely at the same time?
The second: audio latency is not deterministic. Sometimes it is small and sometimes it's huge, and it's out of my hands. So measuring the device latency can help, but again, it cannot fully resolve the problem.
To sum up: is there any solution that can give me the exact latency per device (or per app session), or some other trigger that detects the actual delay, to provide the best synchronisation while playing back two tracks at the same time?
Thank you in advance!
Synchronising audio for karaoke apps is tough. The main issue you seem to be facing is variable latency in the output stream.
This is almost certainly caused by "warm up" latency: the time it takes from hitting "play" on your backing track to the first frame of audio data being rendered by the audio device (e.g. headphones). This can have large variance and is difficult to measure.
The first (and easiest) thing to try is to use MODE_STREAM when constructing your AudioTrack and prime it with bufferSizeInBytes of data prior to calling play (more here). This should result in lower, more consistent "warm up" latency.
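A sketch of that priming step (assuming 16-bit mono PCM at 44.1 kHz; here the primer is simply silence):
// Sketch: create a streaming AudioTrack and fill its buffer before play().
val sampleRate = 44100
val bufferSizeInBytes = AudioTrack.getMinBufferSize(sampleRate,
    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT)
val track = AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
    bufferSizeInBytes, AudioTrack.MODE_STREAM)

val primer = ByteArray(bufferSizeInBytes)    // one buffer of silence
track.write(primer, 0, primer.size)          // prime before starting
track.play()                                 // playback starts with a full buffer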
A better way is to use the Android NDK to have a continuously running audio stream which just outputs silence until the moment you hit play, then starts sending audio frames immediately. The only latency you have here is the continuous output latency.
If you decide to go down this route I recommend taking a look at the Oboe library (full disclosure: I am one of the authors).
To answer one of your specific questions...
Is there a way to calculate the latency of the audio output stream programmatically?
Yes. The easiest way to explain this is with a code sample (this is C++ for the AAudio API but the principle is the same using Java AudioTrack):
// Get the index and time that a known audio frame was presented for playing
int64_t existingFrameIndex;
int64_t existingFramePresentationTime;
AAudioStream_getTimestamp(stream, CLOCK_MONOTONIC, &existingFrameIndex, &existingFramePresentationTime);
// Get the write index for the next audio frame
int64_t writeIndex = AAudioStream_getFramesWritten(stream);
// Calculate the number of frames between our known frame and the write index
int64_t frameIndexDelta = writeIndex - existingFrameIndex;
// Calculate the time which the next frame will be presented
int64_t frameTimeDelta = (frameIndexDelta * NANOS_PER_SECOND) / sampleRate_;
int64_t nextFramePresentationTime = existingFramePresentationTime + frameTimeDelta;
// Assume that the next frame will be written into the stream at the current time
int64_t nextFrameWriteTime = get_time_nanoseconds(CLOCK_MONOTONIC);
// Calculate the latency
*latencyMillis = (double) (nextFramePresentationTime - nextFrameWriteTime) / NANOS_PER_MILLISECOND;
A caveat: This method relies on accurate timestamps being reported by the audio hardware. I know this works on Google Pixel devices but have heard reports that it isn't so accurate on other devices so YMMV.
Following donturner's answer, here's a Java version (that also uses other methods depending on the SDK version):
/** The audio latency has not been estimated yet */
private static long AUDIO_LATENCY_NOT_ESTIMATED = Long.MIN_VALUE+1;
/** The audio latency default value if we cannot estimate it */
private static long DEFAULT_AUDIO_LATENCY = 100L * 1000L * 1000L; // 100ms
/**
* Estimate the audio latency
*
* Not accurate at all, depends on SDK version, etc. But that's the best
* we can do.
*/
private static long estimateAudioLatency(AudioTrack track, long audioFramesWritten) {
    long estimatedAudioLatency = AUDIO_LATENCY_NOT_ESTIMATED;

    // First method. SDK >= 19.
    if (Build.VERSION.SDK_INT >= 19 && track != null) {
        AudioTimestamp audioTimestamp = new AudioTimestamp();
        if (track.getTimestamp(audioTimestamp)) {
            // Calculate the number of frames between our known frame and the write index
            long frameIndexDelta = audioFramesWritten - audioTimestamp.framePosition;
            // Calculate the time at which the next frame will be presented
            long frameTimeDelta = _framesToNanoSeconds(frameIndexDelta);
            long nextFramePresentationTime = audioTimestamp.nanoTime + frameTimeDelta;
            // Assume that the next frame will be written at the current time
            long nextFrameWriteTime = System.nanoTime();
            // Calculate the latency
            estimatedAudioLatency = nextFramePresentationTime - nextFrameWriteTime;
        }
    }

    // Second method. SDK >= 18.
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED && Build.VERSION.SDK_INT >= 18) {
        Method getLatencyMethod;
        try {
            getLatencyMethod = AudioTrack.class.getMethod("getLatency", (Class<?>[]) null);
            estimatedAudioLatency = (Integer) getLatencyMethod.invoke(track, (Object[]) null) * 1000000L;
        } catch (Exception ignored) {}
    }

    // If no method has successfully given us a value, let's try a third method
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED) {
        AudioManager audioManager = (AudioManager) CRT.getInstance().getSystemService(Context.AUDIO_SERVICE);
        try {
            Method getOutputLatencyMethod = audioManager.getClass().getMethod("getOutputLatency", int.class);
            estimatedAudioLatency = (Integer) getOutputLatencyMethod.invoke(audioManager, AudioManager.STREAM_MUSIC) * 1000000L;
        } catch (Exception ignored) {}
    }

    // No method gave us a value. Let's use a default value. Better than nothing.
    if (estimatedAudioLatency == AUDIO_LATENCY_NOT_ESTIMATED) {
        estimatedAudioLatency = DEFAULT_AUDIO_LATENCY;
    }

    return estimatedAudioLatency;
}
private static long _framesToNanoSeconds(long frames) {
return frames * 1000000000L / SAMPLE_RATE;
}
The Android MediaPlayer class is notoriously slow to begin audio playback. I experienced an issue in an app I was creating where there was a greater-than-one-second delay before an audio clip began playing. I resolved it by switching to ExoPlayer, after which playback started within 100 ms. I've also read that ffmpeg has an even faster audio startup time than ExoPlayer, but I haven't used it, so I can't make any promises.
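For comparison, starting playback with ExoPlayer is only a few lines (a sketch against the 2.12-era API used elsewhere in this thread; clipUri is a placeholder):
// Sketch: minimal ExoPlayer audio playback.
val player = SimpleExoPlayer.Builder(context).build()
player.setMediaItem(MediaItem.fromUri(clipUri))
player.prepare()
player.playWhenReady = true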
My app needs to record video with a maximum time of 8 seconds. This is already implemented with MediaRecorder.setMaxDuration(long milliseconds).
The app also needs a progress bar in the top and a label with a count down of the remaining time.
The problem here is that there's an offset between the UI and the MediaRecorder progress, which leads to confusion for the user. For example, the user thinks they recorded something because the progress in the UI said so, but the media recorder cut off the video a second earlier.
The challenge is to start the progress bar and counter at the exact same time as the recorder actually starts recording.
I've tried starting the timer after MediaRecorder.start(), and in a callback when the created file is modified for the first time, but I haven't found a way to achieve this correctly. We tried setting a hard-coded offset on these values, but of course it didn't work the same on every device.
I wish there were a callback from MediaRecorder to tell us that it has actually started to record the video, or one that exposes the current recording length.
Is the problem clear? Has someone solved this before?
MediaRecorder has known issues with cutting off audio early. I implemented a recorder with a button; clicking the button to stop the recorder would actually yield an audio file with the last second cut off.
Not sure if your UI offset is a separate issue, but I would try extending the recording by half a second after the user attempts to end it. You can do this either by changing the maximum time to 8.5 seconds or by using this line of code:
android.os.SystemClock.sleep(500);
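A non-blocking variant of the same idea, combined with the max-duration callback that MediaRecorder does provide, might look like this (a sketch; stopUiTimer() and stopButton are placeholders):
// Sketch: end the countdown UI when the recorder reports max duration,
// and delay a manual stop by 500 ms without blocking the UI thread.
recorder.setOnInfoListener { _, what, _ ->
    if (what == MediaRecorder.MEDIA_RECORDER_INFO_MAX_DURATION_REACHED) {
        stopUiTimer()                        // keep the UI in sync with the cut-off
    }
}

stopButton.setOnClickListener {
    Handler(Looper.getMainLooper()).postDelayed({
        recorder.stop()
    }, 500)                                  // the half-second grace period
}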
When using MediaPlayer, I noticed that whenever my phone lags, MediaPlayer glitches and then continues playing from the position in the audio where it glitched.
This is bad for my implementation since I want the audio to be played at a specific time.
If I have a song of 1000 milliseconds length, what I want is the ability to set MediaPlayer to start playing at some specific time t, and then stop exactly at time t+1000.
This means that I actually need two things:
1) Start MediaPlayer at a specific time with a very small delay.
2) Make MediaPlayer skip the audio it glitched on and continue playing, in order to finish the song on time.
The latency of these functions is very important to me; I need the audio to be played (approximately) exactly at the time it is supposed to be played.
Thanks!
You will possibly need to use mp.getDuration() and/or mp.getCurrentPosition(), although it's impossible to know exactly what you mean by "I need the audio to be played exactly(~) at the time it was supposed to be played."
Something like this should get you started:
int a = (mp.getCurrentPosition() + b);
Thanks for the answer, Mike, but unfortunately this won't help me. Let's say I asked MediaPlayer to start playing a song of length 3:45 at 00:00. At 01:00 I started using the phone's resources, and due to the heavy usage my phone glitched, making MediaPlayer pause for 2 seconds.
Time:
00:00-01:00 - I heard the audio at 00:00-01:00
01:00-01:02 - I heard silence because the phone glitched
01:02-03:47 - I heard the audio at 01:00-03:45 with 2 second time skew
From what I understand now, MediaPlayer is a bad choice for this problem domain, since it only provides a high-level API. I am currently experimenting with the AudioTrack class, which should provide me with what I need:
// Create a new audio track
AudioTrack audioTrack = new AudioTrack(...);
// Get the start time
long start = System.currentTimeMillis();
// Loop until finished
for (...) {
    // Get the current time in the song
    long now = System.currentTimeMillis();
    long nowInSong = now - start;
    // Get a buffer from the song at time nowInSong with a length of 1 second
    byte[] b = getAudioBuffer(nowInSong);
    // Play 1 second of music
    audioTrack.write(b, 0, b.length);
    // Remove any unplayed data (note: flush() only discards data while the
    // track is paused or stopped, so the track would have to be paused here)
    audioTrack.flush();
}
Now if I glitch I only glitch for 1 second and then I correct myself by playing the right audio at the right time!
NOTE: I haven't tested this code, but it seems like the right way to do it. If it actually works, I will update this post again.
P.S. Seeking in MediaPlayer is:
1. A heavy operation that will surely delay my music (every millisecond counts here).
2. Not thread-safe, so it cannot be used from multiple threads (seeks, starts, etc.).
I am programming for Android 2.2 and am trying to use the SoundPool class to play several sounds simultaneously, but at what feel like random times, sound will stop coming out of the speakers.
For each sound that would have been played, this is printed in logcat:
AudioFlinger could not create track. status: -12
Error creating AudioTrack
Audio track delete
No exception is thrown, and the program continues to execute without any changes except for the lack of sound. I've had a really hard time tracking down what conditions cause the error or recreating it after it happens. I can't find the error documented anywhere and am pretty much at a loss.
Any help would be greatly appreciated!
Edit: I forgot to mention that I am loading mp3 files, not ogg.
I had almost this exact same problem with some sounds I was attempting to load and play recently. I even narrowed it down to a single mp3 that was causing this error.
One thing I noted: when I loaded it with a loop of -1, it would fail with the "status -12" error, but when I loaded it to loop 0 times, it would succeed; even attempting to loop once failed.
The final solution was to open the mp3 in an audio editor and re-export it at slightly lower quality, so that the file is now smaller and doesn't seem to take up as many resources in the system.
Finally, there is this discussion, which encourages calling release() on the objects you are using, because there is indeed a hard limit on the resources that can be used, and it is system-wide; if you use several of the resources, other apps will not be able to use them.
https://groups.google.com/forum/#!topic/android-platform/tyITQ09vV3s/discussion%5B1-25%5D
For audio, there's a hard limit of 32 active AudioTrack objects per device (not per app: you need to share those 32 with the rest of the system), and AudioTrack is used internally beneath SoundPool, ToneGenerator, MediaPlayer, native audio based on OpenSL ES, etc. But the actual AudioTrack limit is < 32; it depends more on soft factors such as memory, CPU load, etc. Also note that the limiter in the Android audio mixer does not currently have dynamic range compression, so it is possible to clip if you have a large number of active sounds and they're all loud.
For video players the limit is much, much lower due to the intense load that video puts on the device.
I'll use this as an opportunity to remind media developers: please remember to call release() for media objects when your app is paused. This frees up the underlying resources that other apps will need. Don't rely on the media objects being cleaned up in finalize by the garbage collector, as that has unpredictable timing.
I had a similar issue where the music tracker within my Android game would drop notes, and I got the AudioFlinger error (although my status was -22). I got it working, however, so this might help some people.
The problem occurred when a single sample was being output multiple times simultaneously, i.e., a single sample being played on two or more tracks. This seemed to occasionally deadlock or something, and one of the two notes would be dropped. The solution was to have two copies of the sample (two actual ogg files, identical, both in the assets). Then on each track, even though I was playing the same sample, it was coming from a different file. This totally fixed the issue for me.
Not sure why it works, as I cache the samples into memory; even loading the same file into two different sounds didn't fix it. Only when the samples came from two different files did the errors go away.
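In other words, something like this (a sketch; the resource names are placeholders, and note_a_copy is a byte-for-byte copy of note_a):
// Sketch: load the same audio from two physically distinct files and
// alternate between the two IDs whenever the note can overlap itself.
val noteA1 = soundPool.load(context, R.raw.note_a, 1)
val noteA2 = soundPool.load(context, R.raw.note_a_copy, 1)   // identical copy

var useFirst = true
fun playNoteA(volume: Float) {
    val id = if (useFirst) noteA1 else noteA2
    useFirst = !useFirst
    soundPool.play(id, volume, volume, 1, 0, 1f)
}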
I'm sure this won't help everyone and it's not the prettiest fix but it might help someone.
john.k.doe is right: you must reduce the size of your mp3 file. You should keep the size under 100 kB per file. I had to reduce my 200 kB file to 72 kB using a constant bit rate (CBR) of 32 kbps instead of the usual 128 kbps. That worked for me!
Try
final ToneGenerator tg = new ToneGenerator(AudioManager.STREAM_NOTIFICATION, 50);
tg.startTone(ToneGenerator.TONE_PROP_BEEP, 200);
tg.release();
Releasing should free your resources.
I had this problem. To solve it, I call the release() method on the SoundPool object after the sound finishes playing.
Here's my code:
SoundPool pool = new SoundPool(10, AudioManager.STREAM_MUSIC, 50);
final int teste = pool.load(this.ctx, this.soundS, 1);
pool.setOnLoadCompleteListener(new OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool sound, int sampleId, int status) {
        pool.play(teste, 20, 20, 1, 0, 1);
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(2000);
                    pool.release();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
});
Note that in my case my sounds were 1-2 seconds long at most, so I put 2000 milliseconds in Thread.sleep() in order to release the resources only after the player had finished.
As said above, there is a problem with looping: when I set repeat to -1 I get this error, but with 0 everything works properly.
I've noticed that some sounds give this error when I'm trying to play them one by one. For example:
mSoundPool.stop(mStreamID);
mStreamID = mSoundPool.play(mRandID, mVolume, mVolume, 1, -1, 1f);
In this case, the first track plays fine, but when I switch sounds, the next track gives this error. It seems that when looping, a buffer is somehow overloaded, and mSoundPool.stop cannot release the resources immediately.
Solution:
final Handler handler = new Handler();
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        mStreamID = mSoundPool.play(mRandID, mVolume, mVolume, 1, -1, 1f);
    }
}, 350);
And it works, but the delay differs between devices.
In my case, reducing the quality and thereby the file sizes of the MP3s to under 100 kB wasn't sufficient, as some 51 kB files worked while some longer-duration 41 kB files still did not.
What helped us was reducing the sample rate from 44100 Hz to 22050 Hz, or shortening the duration to less than 5 seconds.
I see too many overcomplicated answers here. Error -12 means that you did not release your resources.
I had the same problem after I played an OGG audio file 8 times.
This worked for me:
SoundPoolPlayer onBeep; // global variable

if (onBeep != null) {
    onBeep.release();
}
onBeep = SoundPoolPlayer.create(getContext(), R.raw.micon);
onBeep.setOnCompletionListener(
    new MediaPlayer.OnCompletionListener() {
        @Override
        public void onCompletion(MediaPlayer mp) { // mp will be null here
            loge("ON Beep! END");
            startGoogleASR_API_inner();
        }
    }
);
onBeep.play();
Releasing the variable right after play() would mess things up, and it is not possible to release it inside onCompletion, so notice how I release it before using it (checking for null to avoid NullPointerExceptions). It works like a charm!
A single SoundPool has an internal memory limit of 1 MB. You might be hitting this if your sounds are of very high quality. If you have many sounds and are hitting this limit, just create more SoundPools and distribute your sounds across them.
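A sketch of that distribution idea (using the API 21+ SoundPool.Builder; resIds and context are placeholders):
// Sketch: spread the sounds over several pools so no single pool exceeds
// its internal memory budget.
val pools = List(3) { SoundPool.Builder().setMaxStreams(8).build() }
val soundIds = resIds.mapIndexed { i, resId ->
    val pool = pools[i % pools.size]
    pool to pool.load(context, resId, 1)   // remember which pool owns the sound
}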
You may not even be able to reach the hard track limit if you are running out of memory before you get there.
That error appears not only when the stream or track limit has been reached but also when the memory limit is reached. SoundPool will stop playing old and/or de-prioritized sounds in order to play a new sound.