I created a simple application that generates a square wave of a given frequency and plays it using AudioTrack in STREAM mode (STREAM_MUSIC). Everything seems to be working fine and the sound plays okay; however, when the stream is finished I get these messages in the log:
W/AudioTrack( 7579): obtainBuffer() track 0x14c228 disabled, restarting ...
Even after calling the stop() function I still get these.
I believe I set the AudioTrack buffer size properly, based on the minimum size required by AudioTrack (in my case 6 × 1024). I feed it smaller buffers of 1024 shorts.
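For reference, a rough sketch of the setup (the 44100 Hz rate and variable names here are illustrative, not my exact code):

int minSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT); // returns bytes
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minSize, AudioTrack.MODE_STREAM);
track.play();
short[] chunk = new short[1024];
// fill chunk with square-wave samples, then:
track.write(chunk, 0, chunk.length);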
Is it okay that I'm getting these and should I leave it like that?
OK, I think the problem is solved. The warning is generated when the buffer is not completely filled with data in time (a buffer underrun). I have no idea what the timeout is, but if you experience this, make sure that:
You don't call the play method until you have some data in the buffer.
You can generate the data fast enough to beat the timeout.
After you have finished feeding the buffer with data, and before you call the stop() method, make sure that the "last" buffer was completely filled with data before the timeout.
I dealt with the last issue by always waiting a little (until the timeout), then sending one buffer full of zeroes, and finally calling the stop() function.
Keep in mind that you must always send the buffer in smaller chunks, even if you have the big chunk ready. It still bothers me a bit that I'm not 100% sure this is the right way, but the errors are gone, so I guess I can live with that :)
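A rough sketch of that shutdown sequence (the chunk size is from my setup, the sleep length is a guess, and exception handling is omitted):

// wait roughly until the already-queued data has played, then push one
// chunk of zeroes (silence) so the last real samples are flushed out
Thread.sleep(100); // "a little" - tune this for your buffer size and sample rate
short[] silence = new short[1024]; // Java zero-initializes arrays, so this is silence
track.write(silence, 0, silence.length);
track.stop();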
I've found that even when the buffer is technically long enough and filled with bytes, if they aren't properly formatted (audio shorts converted to a byte array), it will still give you that warning.
I was getting that warning when I instantiated the AudioTrack, called audioTrack.play(), and there was a slight delay between the play() call and audioTrack.write(). If I called play() right before write(), the warning disappeared.
I solved it like this:
// Only start playback if the track isn't already playing, and write
// immediately afterwards so the buffer doesn't sit empty.
if (mAudioTrack.getPlayState() != AudioTrack.PLAYSTATE_PLAYING)
    mAudioTrack.play();
mAudioTrack.write(b, 0, sz * 2);
mAudioTrack.stop();
mAudioTrack.flush();
The video decoding code of my app is typical, just like the example code in the MediaCodec documentation. Nothing special. The configure call looks like this:
myMediaCodec.configure(myMediaFormat, mySurface, null, 0);
Everything works fine. However, if I change the above code to the following to decode the video to a buffer instead of a surface:
myMediaCodec.configure(myMediaFormat, null, null, 0);
then the following code:
int iOutputBufferIndex = myMediaCodec.dequeueOutputBuffer(myBufferInfo, 100000);
will always return MediaCodec.INFO_TRY_AGAIN_LATER. Even more strangely, any subsequent call to myMediaCodec.stop() or myMediaCodec.release() will hang (i.e. the call never returns, nor does it throw an exception).
This happens on a generic (AGPTek) tablet (Allwinner A31S, 1.5 GHz Cortex-A7 quad core). On an emulator and on another tablet (Asus Memo Pad), everything works fine.
I'm asking for any tips to help get around this problem.
Do you provide a single input buffer's worth of data before trying this, or do you pass as many packets as you can, until dequeueInputBuffer also blocks or returns INFO_TRY_AGAIN_LATER? A decoder might not output data after only one packet of input (if the decoder has some delay), but if it works with Surface output it should probably behave in the same way there.
If that (queueing as many input buffers as possible) doesn't work, I would say that this sounds like a decoder bug.
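A rough sketch of what I mean by queueing as many input buffers as possible (readNextPacket, presentationTimeUs and the timeout values are placeholders, and this assumes the pre-API-21 getInputBuffers() style):

int inIndex;
while ((inIndex = myMediaCodec.dequeueInputBuffer(10000)) >= 0) {
    ByteBuffer inBuf = inputBuffers[inIndex]; // from myMediaCodec.getInputBuffers()
    inBuf.clear();
    int size = readNextPacket(inBuf);         // placeholder: fill from your stream
    myMediaCodec.queueInputBuffer(inIndex, 0, size, presentationTimeUs, 0);
}
// only then poll the output side
int iOutputBufferIndex = myMediaCodec.dequeueOutputBuffer(myBufferInfo, 100000);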
I'm trying to play some looping sound in Android, and I have that going pretty well for me. All good things must come to an end, though, and I would like for that to include my audio loop. However, if I call AudioTrack.release() after this loop, as I should, the end of my audio stream gets cut off - there is extra data that I know I'm supposed to hear, but don't.
I've verified this by putting in a Thread.sleep(2000) before the release - the sound plays correctly with that in there. My code looks something like this:
// Initialize the AudioTrack
int minBufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, 2 * minBufferSize, AudioTrack.MODE_STREAM);
mAudioTrack.play();

// Play the looping sound
while (stuff) {
    mAudioTrack.write(stuff);
}

// Play one last bit of sound before returning
mAudioTrack.write(lastSound);

// Block until the AudioTrack has played everything we've given it
Thread.sleep(2000);

// Get rid of the AudioTrack
mAudioTrack.release();
I suppose I could leave the Thread.sleep(2000) in there and call it a day, but that sounds messy and irresponsible to me. I'd like to either have a while() loop block for the most appropriate amount of time, or use AudioTrack.setPlaybackPositionUpdateListener() and put the release() in there.
If I go the first route, I need something to wait on, and AudioTrack.getPlayState() appears to always report the track as playing, so I'm stuck there.
If I go the second route, I need a way of getting the position in the AudioTrack buffer that was last written to, so I can tell the AudioTrack what position I'm waiting for it to play up to. I don't know the right way to get that information, though.
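(The closest thing I can think of is counting the frames I write myself and setting a notification marker there, roughly like this untested sketch, but I don't know if that's the intended approach.)

// for mono 16-bit PCM, one short == one frame, so accumulate
// totalFramesWritten += n after every write(buffer, 0, n)
mAudioTrack.setNotificationMarkerPosition(totalFramesWritten);
mAudioTrack.setPlaybackPositionUpdateListener(new AudioTrack.OnPlaybackPositionUpdateListener() {
    @Override
    public void onMarkerReached(AudioTrack track) {
        track.release(); // everything up to the marker has been played
    }
    @Override
    public void onPeriodicNotification(AudioTrack track) { }
});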
I guess I don't really care which way I do it, so any help towards solving the problem one way or the other would be much appreciated.
The problem is related to the buffer size in the AudioTrack.
Imagine minBufferSize is 8 KB. This means that the AudioTrack will only start playing once the buffer is full.
mAudioTrack.write(stuff);
If stuff is only 4 KB, the AudioTrack will wait until the next call to write() before it has enough data to play.
Conclusion: you need to keep track of how much data you have written, and at the end of your playback feed the AudioTrack some dummy bytes to complete minBufferSize. To make things easier, you could just feed a whole minBufferSize worth of silence bytes.
By the way, to feed dummy data or silence, just fill the buffer with zeroes.
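The easy variant, as a sketch (minBufferSize as in the question's code; for PCM, zeroed bytes are silence):

// pad with a full minBufferSize of silence so the last real samples get played
byte[] silence = new byte[minBufferSize]; // Java zero-fills new arrays
mAudioTrack.write(silence, 0, silence.length);
mAudioTrack.release();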
I am using the new MediaCodec API on Jelly Bean to decode an H.264 stream.
Using the code snippets on the developer page, I instantiated a decoder by name (taken from media_codec.xml), passed a surface, and configured the codec.
The problem I am facing is that dequeueOutputBuffer always returns -1.
I tried a negative timeout to wait indefinitely; no luck with that.
Whenever I got a -1, I refreshed the buffers using getOutputBuffers().
Please note that the same issue is seen when a custom app is used to parse the data from a media source and feed it to the decoder.
Any input on the above would be helpful.
I had faced the same problem. Incrementing the presentationTimeUs parameter of queueInputBuffer() on each call solved the issue.
For example,
codec.queueInputBuffer(inputBufferIndex, 0, data.size, time, 0)
time += 66 // incrementing by 1 works too
If anyone else is facing this problem (as I did today) while starting out with MediaCodec, make sure to release the output buffers after you're done with them:
mediaCodec.releaseOutputBuffer(index, render);
or else the codec will run out of available buffers pretty soon.
It may be necessary to feed several input buffers before obtaining data in an output buffer.
-1 is INFO_TRY_AGAIN_LATER, meaning the output buffer queue is still being prepared and you just need to call dequeueOutputBuffer again.
Try using a work loop that calls dequeueOutputBuffer repeatedly, similar to ExoPlayer:
while (drainOutputBuffer(positionUs, elapsedRealtimeUs)) {}
if (feedInputBuffer(true)) {
    while (feedInputBuffer(false)) {}
}
where drainOutputBuffer is a method that calls dequeueOutputBuffer.
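For reference, a minimal drain loop along those lines (the 10 ms timeout and the render flag are arbitrary choices, not ExoPlayer's actual code):

MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIndex = codec.dequeueOutputBuffer(info, 10000);
while (outIndex >= 0) {
    codec.releaseOutputBuffer(outIndex, true); // true = render to the surface
    outIndex = codec.dequeueOutputBuffer(info, 10000);
}
// a negative result here is INFO_TRY_AGAIN_LATER,
// INFO_OUTPUT_FORMAT_CHANGED, or INFO_OUTPUT_BUFFERS_CHANGED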
I am programming for Android 2.2 and am trying to use the SoundPool class to play several sounds simultaneously, but at what feel like random times, sound will stop coming out of the speakers.
For each sound that would have been played, this is printed in the logcat:
AudioFlinger could not create track. status: -12
Error creating AudioTrack
Audio track delete
No exception is thrown and the program continues to execute without any changes except for the lack of volume. I've had a really hard time tracking down what conditions cause the error or recreating it after it happens. I can't find the error in the documentation anywhere and am pretty much at a loss.
Any help would be greatly appreciated!
Edit: I forgot to mention that I am loading mp3 files, not ogg.
I had almost this exact same problem with some sounds I was attempting to load and play recently.
I even broke it down to loading a single mp3 that was causing this error.
One thing I noted: when I loaded it with a loop of -1, it would fail with the "status -12" error, but when I loaded it to loop 0 times, it would succeed. Even attempting to loop 1 time failed.
The final solution was to open the mp3 in an audio editor and re-export it at slightly lower quality, so that the file is now smaller and doesn't seem to take up quite as many resources in the system.
Finally, there is this discussion that encourages calling release() on the objects you are using, because there is indeed a hard limit on the resources that can be used, and it is system-wide, so if you use several of the resources, other apps will not be able to use them:
https://groups.google.com/forum/#!topic/android-platform/tyITQ09vV3s/discussion%5B1-25%5D
For audio, there's a hard limit of 32 active AudioTrack objects per device (not per app: you need to share those 32 with the rest of the system), and AudioTrack is used internally beneath SoundPool, ToneGenerator, MediaPlayer, native audio based on OpenSL ES, etc. But the actual AudioTrack limit is < 32; it depends more on soft factors such as memory, CPU load, etc. Also note that the limiter in the Android audio mixer does not currently have dynamic range compression, so it is possible to clip if you have a large number of active sounds and they're all loud.

For video players the limit is much, much lower due to the intense load that video puts on the device.

I'll use this as an opportunity to remind media developers: please remember to call release() for media objects when your app is paused. This frees up the underlying resources that other apps will need. Don't rely on the media objects being cleaned up in finalize by the garbage collector, as that has unpredictable timing.
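In practice that advice boils down to something like this sketch (mSoundPool is assumed to be a field in your Activity):

@Override
protected void onPause() {
    super.onPause();
    if (mSoundPool != null) {
        mSoundPool.release(); // give the AudioTracks back to the system
        mSoundPool = null;    // recreate and reload it in onResume()
    }
}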
I had a similar issue where the music tracker within my Android game would drop notes, and I got the AudioFlinger error (although my status was -22). I got it working, however, so this might help some people.
The problem occurred when a single sample was being output multiple times simultaneously. So in my case it was a single sample being played on two or more tracks. This seemed to occasionally deadlock or something, and one of the two notes would be dropped. The solution was to have two copies of the sample (two actual ogg files - identical, but both in the assets). Then on each track, even though I was playing the same sample, it was coming from a different file. This totally fixed the issue for me.
Not sure why it works as I cache the samples into memory, but even loading the same file into two different sounds didn't fix it. Only when the samples came out of two different files did the errors go away.
I'm sure this won't help everyone and it's not the prettiest fix but it might help someone.
john.k.doe is right. You must reduce the size of your mp3 file. You should keep the size under 100 KB per file. I had to reduce my 200 KB file to 72 KB using a constant bit rate (CBR) of 32 kbps instead of the usual 128 kbps. That worked for me!
Try
final ToneGenerator tg = new ToneGenerator(AudioManager.STREAM_NOTIFICATION, 50);
tg.startTone(ToneGenerator.TONE_PROP_BEEP, 200);
tg.release();
Releasing should free up your resources.
I had this problem too. To solve it, I call release() on the SoundPool object after the sound has finished playing.
Here's my code:
final SoundPool pool = new SoundPool(10, AudioManager.STREAM_MUSIC, 50);
final int teste = pool.load(this.ctx, this.soundS, 1);
pool.setOnLoadCompleteListener(new OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool sound, int sampleId, int status) {
        pool.play(teste, 20, 20, 1, 0, 1);
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(2000);
                    pool.release(); // free the SoundPool once playback is done
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
});
Note that in my case the sounds were 1-2 seconds long at most, so I used a value of 2000 milliseconds in Thread.sleep(), in order to release the resources only after the player has finished.
As said above, there is a problem with looping: when I set repeat to -1 I get this error, but with 0 everything works properly.
I've noticed that some sounds give this error when I'm trying to play them one by one. For example:
mSoundPool.stop(mStreamID);
mStreamID = mSoundPool.play(mRandID, mVolume, mVolume, 1, -1, 1f);
In such a case, the first track plays OK, but when I switch sounds, the next track gives this error. It seems that with looping, a buffer somehow gets overloaded, and mSoundPool.stop() cannot release the resources immediately.
Solution:
final Handler handler = new Handler();
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        mStreamID = mSoundPool.play(mRandID, mVolume, mVolume, 1, -1, 1f);
    }
}, 350);
And it's working, but the right delay differs between devices.
In my case, reducing the quality, and thereby the file sizes of the MP3s, to under 100 KB wasn't sufficient, as some 51 KB files worked while some longer-duration 41 KB files still did not.
What helped us was reducing the sample rate from 44100 Hz to 22050 Hz, or shortening the duration to less than 5 seconds.
I see too many overcomplicated answers. Error -12 means that you did not release your objects.
I had the same problem after I played an OGG audio file 8 times.
This worked for me:
SoundPoolPlayer onBeep; // global variable

if (onBeep != null) {
    onBeep.release();
}
onBeep = SoundPoolPlayer.create(getContext(), R.raw.micon);
onBeep.setOnCompletionListener(
    new MediaPlayer.OnCompletionListener() {
        @Override
        public void onCompletion(MediaPlayer mp) { // mp will be null here
            loge("ON Beep! END");
            startGoogleASR_API_inner();
        }
    }
);
onBeep.play();
Releasing the player right after .play() would mess things up, and it is not possible to release it inside onCompletion, so notice how I release it before using it (and check for null to avoid NullPointerExceptions).
It works like a charm!
A single SoundPool has an internal memory limit of 1 MB. You might be hitting this if your sounds are very high quality. If you have many sounds and are hitting this limit, just create more SoundPools and distribute your sounds across them.
You may not even be able to reach the hard track limit if you are running out of memory before you get there.
That error appears not only when the stream or track limit has been reached, but also when the memory limit has. SoundPool will stop playing old and/or de-prioritized sounds in order to play a new sound.
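A sketch of that workaround (the resource names are hypothetical):

// split the sounds across two pools so neither exceeds the ~1 MB limit
SoundPool poolA = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
SoundPool poolB = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
int explosionId = poolA.load(context, R.raw.explosion, 1);
int musicStabId = poolB.load(context, R.raw.music_stab, 1);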
I've got an AudioTrack in my application, which is set to Stream mode. I want to write audio which I receive over a wireless connection. The AudioTrack is declared like this:
mPlayer = new AudioTrack(STREAM_TYPE,
                         FREQUENCY,
                         CHANNEL_CONFIG_OUT,
                         AUDIO_ENCODING,
                         PLAYER_CAPACITY,
                         PLAY_MODE);
Where the parameters are defined as follows:
private static final int FREQUENCY = 8000,
        CHANNEL_CONFIG_OUT = AudioFormat.CHANNEL_OUT_MONO,
        AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT,
        PLAYER_CAPACITY = 2048,
        STREAM_TYPE = AudioManager.STREAM_MUSIC,
        PLAY_MODE = AudioTrack.MODE_STREAM;
However, when I write data to the AudioTrack with write(), the playback is choppy... The call
byte[] audio = packet.getData();
mPlayer.write(audio, 0, audio.length);
is made whenever a packet is received over the network connection. Does anybody have an idea on why it sounds choppy? Maybe it has something to do with the WiFi connection itself? I don't think so, as the sound doesn't sound horrible the other way around, when I send data from the Android phone to another source over UDP. The sound then sounds complete and not choppy at all... So does anybody have an idea on why this is happening?
Do you know how many bytes per second you are receiving, the average time between packets, and the maximum time between packets? If not, can you add code to calculate them?
You need to be averaging 8000 samples/second * 2 bytes/sample = 16,000 bytes per second in order to keep the stream filled.
A gap of more than 2048 bytes / (16000 bytes/second) = 128 milliseconds between incoming packets will cause your stream to run dry and the audio to stutter.
One way to prevent it is to increase the buffer size (PLAYER_CAPACITY). A larger buffer will be more able to handle variation in the incoming packet size and rate. The cost of the extra stability is a larger delay in starting playback while you wait for the buffer to initially fill.
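A sketch of sizing the buffer that way (the half-second figure is just an example, using the constants from the question):

// half a second of 8 kHz mono 16-bit audio = 8000 samples/s * 2 bytes/sample * 0.5 s
int halfSecondBytes = 8000;
int minSize = AudioTrack.getMinBufferSize(FREQUENCY, CHANNEL_CONFIG_OUT, AUDIO_ENCODING);
mPlayer = new AudioTrack(STREAM_TYPE, FREQUENCY, CHANNEL_CONFIG_OUT,
        AUDIO_ENCODING, Math.max(minSize, halfSecondBytes), PLAY_MODE);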
I have partially solved it by placing the mPlayer.write(audio, 0, audio.length); call in its own Thread. This takes away some of the choppiness (because write() is a blocking call), but it still sounds choppy after a good second or two. It still has a significant delay of 2-3 seconds.
new Thread() {
    public void run() {
        byte[] audio = packet.getData();
        mPlayer.write(audio, 0, audio.length);
    }
}.start();
Just a little anonymous Thread that does the writing now...
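(A variant I may try next, sketched and untested: one long-lived writer thread draining a queue from java.util.concurrent, instead of a new Thread per packet.)

final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<byte[]>();
new Thread() {
    public void run() {
        try {
            while (true) {
                byte[] audio = queue.take(); // blocks until a packet arrives
                mPlayer.write(audio, 0, audio.length);
            }
        } catch (InterruptedException e) {
            // interrupted: stop writing
        }
    }
}.start();
// ... and on every incoming packet:
queue.offer(packet.getData());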
Anybody have an idea on how to solve this issue?
Edit:
After some further checking and debugging, I've noticed that this is an issue with obtainBuffer.
I've looked at the Java code of AudioTrack and at the C++ code of AudioTrack, and I've noticed that the message can only originate in the C++ code.
if (__builtin_expect(result != NO_ERROR, false)) {
    LOGW("obtainBuffer timed out (is the CPU pegged?) "
         "user=%08x, server=%08x", u, s);
    mAudioTrack->start(); // FIXME: Wake up audioflinger
    timeout = 1;
}
I've noticed that there is a FIXME in this piece of code. :< But anyway, could anybody explain how this C++ code works? I've had some experience with it, but it was never as complicated as this...
Edit 2:
I've tried something different now: I buffer the data I receive, and when the buffer has filled up with some data, it is written to the player. However, the player keeps up with consumption for a few cycles, then the "obtainBuffer timed out (is the CPU pegged?)" warning kicks in, and no data at all is written to the player until it is kick-started back to life... After that, it continually gets data written to it until the buffer is emptied.
Another slight difference is that I stream a file to the player now. That is, I read it in chunks, then write those chunks to the buffer. This simulates the packets being received over WiFi...
I am beginning to wonder if this is just an OS issue that Android has, and it isn't something I can solve on my own... Anybody got any ideas on that?
Edit 3:
I've done more testing, but it doesn't help me any further. This test shows me that I only get lag when I write to the AudioTrack for the first time. That write takes somewhere between 1 and 3 seconds to complete. I measured it with the following bit of code:
long beforeTime = Utilities.getCurrentTimeMillis(), afterTime = 0;
mPlayer.write(data, 0, data.length);
afterTime = Utilities.getCurrentTimeMillis();
Log.e("WriteToPlayerThread", "Writing a package took " + (afterTime - beforeTime) + " milliseconds");
However, I get the following results:
(Logcat screenshot: http://img810.imageshack.us/img810/3453/logcatimage.png)
These show that the lag occurs only at the beginning, after which the AudioTrack gets data continuously... I really need to get this fixed...