I've got an app on the Android Market and have been using the SoundPool classes for the sound effects. I've noticed that, of all the parts of the Android API, this seems to have caused me the most problems. For example:
The HTC Desire has problems playing WAV files (it locks up randomly); using .ogg files fixes this.
On the Droid, if you exceed the number of channels in the init setup call:
mSoundPool = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
the handset would lock up. Imagine the difficulty of debugging that, on a handset I don't own! It required a lot of selfless help from my customers. Changing the '4' to '16' eliminated the problem. I have no doubt that if 16 sounds were played simultaneously it would still crash; thankfully the chances of that are low.
I'm also getting random crashes on various devices. I have a logcat capture from one of my customers which shows 'Heap overflow' errors pertaining to playing sounds.
I have now changed my sound manager to use MediaPlayer. This seems to be working out fine for now. I am just wondering if any other developers are experiencing these problems?
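For illustration, here is a stripped-down sketch of the MediaPlayer-based manager I moved to (names and structure are mine and simplified; real code needs error handling):

import android.content.Context;
import android.media.MediaPlayer;

// Minimal MediaPlayer-based effect player: one player at a time,
// released as soon as it is replaced or stopped.
public class SimpleSoundManager {
    private MediaPlayer mPlayer;

    public void play(Context context, int resId) {
        stop(); // release the previous player before creating a new one
        mPlayer = MediaPlayer.create(context, resId); // returns a prepared player, or null on failure
        if (mPlayer != null) {
            mPlayer.start();
        }
    }

    // Call this when playback is done, and from the Activity's onPause().
    public void stop() {
        if (mPlayer != null) {
            mPlayer.release(); // give the underlying AudioFlinger resources back
            mPlayer = null;
        }
    }
}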
It seems AudioFlinger can have up to 1 MB worth of audio in flight at any given time.
The heap errors occur if this limit is exceeded. This guess is based on some code I found in the AudioFlinger source:
AudioFlinger::Client::Client(const sp<AudioFlinger>& audioFlinger, pid_t pid)
    : RefBase(),
      mAudioFlinger(audioFlinger),
      mMemoryDealer(new MemoryDealer(1024*1024)),
      mPid(pid)
{
    // 1 MB of address space is good for 32 tracks, 8 buffers each, 4 KB/buffer
}
And this:
size_t size = sizeof(audio_track_cblk_t);
size_t bufferSize = frameCount*channelCount*sizeof(int16_t);
if (sharedBuffer == 0) {
    size += bufferSize;
}
mCblkMemory = client->heap()->allocate(size);
if (mCblkMemory != 0) {
    ...
} else {
    LOGE("not enough memory for AudioTrack size=%u", size);
    client->heap()->dump("AudioTrack");
}
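If that 1 MB client heap is the ceiling, the arithmetic is easy to sanity-check against a decoded sample (illustrative numbers of my own, ignoring the small audio_track_cblk_t header):

public class HeapCheck {
    public static void main(String[] args) {
        long sampleRate = 44100;   // Hz
        long channels = 2;         // stereo
        long bytesPerSample = 2;   // 16-bit PCM
        long durationSec = 6;

        long pcmBytes = sampleRate * channels * bytesPerSample * durationSec;
        System.out.println("Decoded PCM: " + pcmBytes + " bytes"); // 1,058,400
        // Already more than the 1,048,576-byte MemoryDealer heap above, so a
        // single long, high-quality sample could plausibly produce
        // "not enough memory for AudioTrack".
    }
}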
Anyone else better informed?
I upgraded my Samsung Galaxy S4 from the latest KitKat to Lollipop (5.0.1) yesterday, and my IR remote control app that I had used for months stopped working.
Since I was using a late copy of KitKat, the ConsumerIrManager.transmit() call was sending the number of pulses, computed with the code below. It worked very nicely.
private void irSend(int freqHz, int[] pulseTrainInMicroS) {
    int[] pulseCounts = new int[pulseTrainInMicroS.length];
    for (int i = 0; i < pulseTrainInMicroS.length; i++) {
        // cast before multiplying so microseconds * frequency can't overflow int
        long iValue = (long) pulseTrainInMicroS[i] * freqHz / 1000000;
        pulseCounts[i] = (int) iValue;
    }
    m_IRService.transmit(freqHz, pulseCounts);
}
When it stopped working yesterday, I began looking closely at it.
I noticed that the transmitted waveform bears no relationship to the requested pulse train; even the code below doesn't work correctly!
private void TestSend() {
    int[] pulseCounts = {100, 100, 100};
    m_IRService.transmit(38000, pulseCounts);
}
The resulting waveforms had so many problems that they are entirely useless:
The waveforms were entirely wrong.
The frequency was wrong and the pulse spacing was not regular.
They were not repeatable.
Looking at the demodulated waveform:
If my 100, 100, 100 were rendered correctly, I should have seen two pulses 2.6 ms long (100 carrier cycles / 38 kHz ≈ 2.63 ms; before 4.4.3(?), 100 µs). Instead I received (see attached) "[demodulated] not repeatable 1.BMP" and "[demodulated] not repeatable 2.BMP". Note that the waveform isn't 2 pulses... in fact, it's not even repeatable.
As for the captures below, the signal goes low when IR is detected.
We should have seen two pulses going low for 2.6 ms, with 2.6 ms between them (see green line below).
I also tried shorter pulses using 50, 50, 50 and observed that the first pulse isn't correct either (see below).
Looking at the modulated waveform:
The carrier frequency was not correct; it was about 18 kHz and irregular.
I'm quite experienced with this and have formal education in electronics.
It seems to me there's a bug in ConsumerIrManager.transmit()...
Curiously, the "WatchOn" application that comes with the phone still works.
Thank you for any insights you can give.
Test equipment:
Tektronix TDS-2014B, 100 MHz, used in peak-detect mode.
As @IvanTellez says, a change was made in Android in respect to this functionality. Strangely, when I had it outputting simple IR signals (for troubleshooting purposes), the function behaved as shown above (erratically, wrong carrier frequency, etc.). When I eventually returned to normal types of IR signals, it worked correctly.
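If I have the history right (hedged, since vendor builds muddy this further), transmit() took the pattern as carrier-cycle counts before 4.4.3 and takes microseconds from 4.4.3 onward, so a version-aware wrapper along these lines covers both. The takesMicroseconds() check is my own crude guess, since 4.4.2 and 4.4.3 share SDK_INT 19:

import android.hardware.ConsumerIrManager;
import android.os.Build;

public class IrCompat {
    // 4.4.2 and 4.4.3 both report SDK_INT 19, so the patch level has to be
    // guessed from the release string (crude, hypothetical check).
    private static boolean takesMicroseconds() {
        if (Build.VERSION.SDK_INT > Build.VERSION_CODES.KITKAT) return true; // 5.x and later
        return Build.VERSION.SDK_INT == Build.VERSION_CODES.KITKAT
                && Build.VERSION.RELEASE.compareTo("4.4.3") >= 0;
    }

    public static void irSendCompat(ConsumerIrManager ir, int freqHz, int[] pulseTrainInMicroS) {
        if (takesMicroseconds()) {
            ir.transmit(freqHz, pulseTrainInMicroS); // pattern is already in microseconds
        } else {
            // Old behavior: pattern expressed as counts of carrier cycles.
            int[] pulseCounts = new int[pulseTrainInMicroS.length];
            for (int i = 0; i < pulseTrainInMicroS.length; i++) {
                pulseCounts[i] = (int) ((long) pulseTrainInMicroS[i] * freqHz / 1000000L);
            }
            ir.transmit(freqHz, pulseCounts);
        }
    }
}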
I am transcoding videos based on the example given by Google (https://android.googlesource.com/platform/cts/+/master/tests/tests/media/src/android/media/cts/ExtractDecodeEditEncodeMuxTest.java)
Basically, transcoding of MP4 files works, but on some phones I get weird results. If, for example, I transcode a video with audio on an HTC One, the code gives no errors but the file cannot be played afterward on the phone. With a 10-second video, playback jumps almost to the last second and you only hear some crackling noise. If you play the video with VLC, the audio track is completely muted.
I did not alter the code in terms of encoding/decoding and the same code gives correct results on a Nexus 5 or MotoX for example.
Anybody having an idea why it might fail on that specific device?
Best regards and thank you,
Florian
I made it work on Android 4.4.2 devices with the following changes:
Set the AAC profile to AACObjectLC instead of AACObjectHE:
private static final int OUTPUT_AUDIO_AAC_PROFILE = MediaCodecInfo.CodecProfileLevel.AACObjectLC;
When creating the output audio format, use the sample rate and channel count of the input format instead of fixed values:
MediaFormat outputAudioFormat = MediaFormat.createAudioFormat(
        OUTPUT_AUDIO_MIME_TYPE,
        inputFormat.getInteger(MediaFormat.KEY_SAMPLE_RATE),
        inputFormat.getInteger(MediaFormat.KEY_CHANNEL_COUNT));
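One caveat on the snippet above (my own defensive addition, not from the original test code): getInteger() throws if a key is missing, so when an extractor reports an incomplete format it may be safer to fall back to defaults:

// Hypothetical helper: read an integer key from a MediaFormat,
// falling back to a default if the extractor didn't set it.
private static int getIntOrDefault(MediaFormat format, String key, int defaultValue) {
    return format.containsKey(key) ? format.getInteger(key) : defaultValue;
}

MediaFormat outputAudioFormat = MediaFormat.createAudioFormat(
        OUTPUT_AUDIO_MIME_TYPE,
        getIntOrDefault(inputFormat, MediaFormat.KEY_SAMPLE_RATE, 44100),
        getIntOrDefault(inputFormat, MediaFormat.KEY_CHANNEL_COUNT, 2));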
Put a check just before muxing the audio track to keep presentation timestamps monotonic (to avoid the "timestampUs X < lastTimestampUs X for Audio track" error):
if (audioPresentationTimeUsLast == 0) { // defined at the beginning of the method
    audioPresentationTimeUsLast = audioEncoderOutputBufferInfo.presentationTimeUs;
} else {
    if (audioPresentationTimeUsLast > audioEncoderOutputBufferInfo.presentationTimeUs) {
        audioEncoderOutputBufferInfo.presentationTimeUs = audioPresentationTimeUsLast + 1;
    }
    audioPresentationTimeUsLast = audioEncoderOutputBufferInfo.presentationTimeUs;
}
// Write data
if (audioEncoderOutputBufferInfo.size != 0) {
    muxer.writeSampleData(outputAudioTrack, encoderOutputBuffer, audioEncoderOutputBufferInfo);
}
Hope this helps...
If the original CTS tests fail, you need to go to the device vendor and ask for fixes.
I am programming for Android 2.2 and am trying to use the
SoundPool class to play several sounds simultaneously, but at what feel like random times, sound will stop coming out of the speakers.
For each sound that would have been played, this is printed in the logcat:
AudioFlinger could not create track. status: -12
Error creating AudioTrack
Audio track delete
No exception is thrown and the program continues to execute without any changes except for the lack of volume. I've had a really hard time tracking down what conditions cause the error or recreating it after it happens. I can't find the error in the documentation anywhere and am pretty much at a loss.
Any help would be greatly appreciated!
Edit: I forgot to mention that I am loading mp3 files, not ogg.
I had almost this exact same problem with some sounds I was attempting to load and play recently.
I even narrowed it down to loading a single mp3 that was causing this error.
One thing I noted: when I loaded it with a loop of -1, it would fail with the status: -12 error, but when I loaded it to loop 0 times, it would succeed. Even attempting to loop 1 time failed.
The final solution was to open the mp3 in an audio editor and re-encode it at slightly lower quality, so that the file is now smaller and doesn't seem to take up quite as many resources in the system.
Finally, there is this discussion, which encourages calling release() on the objects you are using, because there is indeed a hard limit on the resources that can be used, and it is system-wide, so if you use several of the resources, other apps will not be able to use them.
https://groups.google.com/forum/#!topic/android-platform/tyITQ09vV3s/discussion%5B1-25%5D
For audio, there's a hard limit of 32 active AudioTrack objects per device (not per app: you need to share those 32 with the rest of the system), and AudioTrack is used internally beneath SoundPool, ToneGenerator, MediaPlayer, native audio based on OpenSL ES, etc. But the actual AudioTrack limit is < 32; it depends more on soft factors such as memory, CPU load, etc. Also note that the limiter in the Android audio mixer does not currently have dynamic range compression, so it is possible to clip if you have a large number of active sounds and they're all loud.

For video players the limit is much, much lower due to the intense load that video puts on the device.

I'll use this as an opportunity to remind media developers: please remember to call release() for media objects when your app is paused. This frees up the underlying resources that other apps will need. Don't rely on the media objects being cleaned up in finalize by the garbage collector, as that has unpredictable timing.
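In practice that advice boils down to something like this minimal sketch (assuming the Activity holds a SoundPool in an mSoundPool field; the same pattern applies to MediaPlayer and friends):

@Override
protected void onPause() {
    super.onPause();
    if (mSoundPool != null) {
        mSoundPool.release(); // hand the AudioTrack slots back to the system
        mSoundPool = null;    // recreate and reload sounds in onResume()
    }
}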
I had a similar issue where the music tracker within my Android game would drop notes, and I got the AudioFlinger error (although my status was -22). I got it working, however, so this might help some people.
The problem occurred when a single sample was being output multiple times simultaneously; in my case, a single sample being played on two or more tracks. This seemed to occasionally deadlock or something, and one of the two notes would be dropped. The solution was to have two copies of the sample (two actual ogg files, identical, but both in the assets). Then on each track, even though I was playing the same sample, it was coming from a different file. This totally fixed the issue for me.
Not sure why it works as I cache the samples into memory, but even loading the same file into two different sounds didn't fix it. Only when the samples came out of two different files did the errors go away.
I'm sure this won't help everyone and it's not the prettiest fix but it might help someone.
john.k.doe is right. You must reduce the size of your mp3 file. You should keep the size under 100 KB per file. I had to reduce my 200 KB file to 72 KB using a constant bit rate (CBR) of 32 kbps instead of the usual 128 kbps. That worked for me!
Try
final ToneGenerator tg = new ToneGenerator(AudioManager.STREAM_NOTIFICATION, 50);
tg.startTone(ToneGenerator.TONE_PROP_BEEP, 200);
tg.release();
Releasing frees up the underlying resources.
I had this problem too. To solve it, I call release() on the SoundPool object after the sound finishes playing.
Here's my code:
final SoundPool pool = new SoundPool(10, AudioManager.STREAM_MUSIC, 50);
final int teste = pool.load(this.ctx, this.soundS, 1);
pool.setOnLoadCompleteListener(new OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool sound, int sampleId, int status) {
        pool.play(teste, 1f, 1f, 1, 0, 1f); // left/right volumes are floats in 0.0-1.0
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(2000);
                    pool.release();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
});
Note that in my case my sounds are 1-2 seconds long at most, so I pass 2000 milliseconds to Thread.sleep() in order to release the resources only after the sound has finished playing.
As said above, there is a problem with looping: when I set repeat to -1 I get this error, but with 0 everything works properly.
I've noticed that some sounds give this error when I'm trying to play them one by one. For example:
mSoundPool.stop(mStreamID);
mStreamID = mSoundPool.play(mRandID, mVolume, mVolume, 1, -1, 1f);
In this case, the first track plays OK, but when I switch sounds, the next track gives this error. It seems that with looping, a buffer somehow gets overloaded and mSoundPool.stop() cannot release the resources immediately.
Solution:
final Handler handler = new Handler();
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        mStreamID = mSoundPool.play(mRandID, mVolume, mVolume, 1, -1, 1f);
    }
}, 350);
And it's working, but the required delay differs from device to device.
In my case, reducing the quality, and thereby the file sizes, of the MP3s to under 100 KB wasn't sufficient: some 51 KB files worked while some longer-duration 41 KB files still did not.
What helped us was reducing the sample rate from 44100 to 22050 or shortening the duration to less than 5 seconds.
I see too many overcomplicated answers here. Error -12 means that you did not release the player objects.
I had the same problem after I played an OGG audio file 8 times.
This worked for me:
SoundPoolPlayer onBeep; // global variable

if (onBeep != null) {
    onBeep.release();
}
onBeep = SoundPoolPlayer.create(getContext(), R.raw.micon);
onBeep.setOnCompletionListener(
        new MediaPlayer.OnCompletionListener() {
            @Override
            public void onCompletion(MediaPlayer mp) { // mp will be null here
                loge("ON Beep! END");
                startGoogleASR_API_inner();
            }
        }
);
onBeep.play();
Releasing the player right after .play() would mess things up, and it is not possible to release it inside onCompletion, so notice how I release it before reusing it (checking for null to avoid NullPointerExceptions).
It works like a charm!
A single SoundPool has an internal memory limit of 1 (one) MB. You might be hitting this if your sounds are very high quality. If you have many sounds and are hitting this limit, just create more SoundPools and distribute your sounds across them, as in the sketch below.
You may not even be able to reach the hard track limit if you are running out of memory before you get there.
That error appears not only when the stream or track limit has been reached, but also when the memory limit has been reached. SoundPool will stop playing old and/or de-prioritized sounds in order to play a new sound.
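A sketch of what I mean by distributing sounds across pools (naming and structure are entirely my own; the handles are our own indices, because each SoundPool numbers its sample IDs independently):

import android.content.Context;
import android.media.AudioManager;
import android.media.SoundPool;
import java.util.ArrayList;
import java.util.List;

// Round-robin loader: spread samples over several SoundPools so no single
// pool hits its internal memory limit.
public class MultiSoundPool {
    private final List<SoundPool> mPools = new ArrayList<SoundPool>();
    private final List<SoundPool> mOwner = new ArrayList<SoundPool>();  // index = our handle
    private final List<Integer> mSampleIds = new ArrayList<Integer>();
    private int mNext = 0;

    public MultiSoundPool(int poolCount, int streamsPerPool) {
        for (int i = 0; i < poolCount; i++) {
            mPools.add(new SoundPool(streamsPerPool, AudioManager.STREAM_MUSIC, 0));
        }
    }

    public int load(Context context, int resId) {
        SoundPool pool = mPools.get(mNext);
        mNext = (mNext + 1) % mPools.size();       // round-robin across pools
        mOwner.add(pool);
        mSampleIds.add(pool.load(context, resId, 1));
        return mOwner.size() - 1;                  // our handle, not SoundPool's sample ID
    }

    public void play(int handle, float volume) {
        mOwner.get(handle).play(mSampleIds.get(handle), volume, volume, 1, 0, 1f);
    }

    public void release() {
        for (SoundPool pool : mPools) pool.release();
        mPools.clear();
        mOwner.clear();
        mSampleIds.clear();
    }
}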
Here is what I'm doing:
private SoundPool pool;
private int soundId;

@Override
public void onCreate(Bundle savedInstanceState)
{
    super.onCreate(savedInstanceState);
    // new SoundPool with three channels, STREAM_MUSIC, default quality
    pool = new SoundPool(3, AudioManager.STREAM_MUSIC, 0);
    // load sound with current context, click resource, priority 1
    soundId = pool.load(this, R.raw.click, 1);
    // originally I wasn't using this but it seemed to help a bit
    // set no loop for soundId
    pool.setLoop(soundId, 0);
}

private void play()
{
    Log.v(TAG, "Play the sound once!");
    // half volume L & R, priority 1, loop 0, rate 1
    pool.play(soundId, 0.5f, 0.5f, 1, 0, 1);
}

@Override
public void onDestroy()
{
    super.onDestroy();
    // release the pool
    pool.release();
}
Originally I was using a .wav file for the click sound and the problem would occur 90% of the time. I thought it might be a problem with loading .wav files, so I converted the file to .mp3. Now the problem happens 5% of the time, but it still happens. I built with and without the setLoop() call, and including it appears to help a tiny bit.
As you can see I have added a debug log message to ensure that I am not accidentally calling the play function twice. According to the log output, the sound is only played once but I hear the sound twice.
I have also used several different sound files. Some of the files seem to repeat more frequently than others. I don't see any correlations except it happens more frequently with .wav files.
I see the problem happening on Samsung Continuum running 2.1 (min & target API: level 7). I haven't experienced any extra looping with any market apps I've downloaded on the same device. Unfortunately, I don't have other devices to test with.
I have only found one other person experiencing this issue and he or she was also using a Samsung device.
Here is a link to the other issue reported:
https://stackoverflow.com/q/4873995/695336
At this point I guess I'll try making a release build to see if it still happens and then maybe convert to .ogg format. After that I'll probably try switching to MediaPlayer to see if I get the same results.
Thanks for your help.
I've got an AudioTrack in my application, which is set to Stream mode. I want to write audio which I receive over a wireless connection. The AudioTrack is declared like this:
mPlayer = new AudioTrack(STREAM_TYPE,
                         FREQUENCY,
                         CHANNEL_CONFIG_OUT,
                         AUDIO_ENCODING,
                         PLAYER_CAPACITY,
                         PLAY_MODE);
Where the parameters are defined like:
private static final int FREQUENCY = 8000,
        CHANNEL_CONFIG_OUT = AudioFormat.CHANNEL_OUT_MONO,
        AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT,
        PLAYER_CAPACITY = 2048,
        STREAM_TYPE = AudioManager.STREAM_MUSIC,
        PLAY_MODE = AudioTrack.MODE_STREAM;
However, when I write data to the AudioTrack with write(), the playback is choppy... The call
byte[] audio = packet.getData();
mPlayer.write(audio, 0, audio.length);
is made whenever a packet is received over the network connection. Does anybody have an idea why it sounds choppy? Maybe it has something to do with the WiFi connection itself? I don't think so, because audio sent the other way around, from the Android phone to another source over UDP, sounds complete and not choppy at all... So does anybody have an idea why this is happening?
Do you know how many bytes per second you are receiving, the average time between packets, and the maximum time between packets? If not, can you add code to calculate them?
You need to be averaging 8000 samples/second * 2 bytes/sample = 16,000 bytes per second in order to keep the stream filled.
A gap of more than 2048 bytes / (16000 bytes/second) = 128 milliseconds between incoming packets will cause your stream to run dry and the audio to stutter.
One way to prevent it is to increase the buffer size (PLAYER_CAPACITY). A larger buffer will be more able to handle variation in the incoming packet size and rate. The cost of the extra stability is a larger delay in starting playback while you wait for the buffer to initially fill.
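A sketch of that suggestion, sizing the track from getMinBufferSize() plus headroom for network jitter (the 4x multiplier and the one-second floor are starting points to tune, not magic numbers):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Size the streaming buffer from the platform minimum plus jitter headroom.
int minBuf = AudioTrack.getMinBufferSize(
        8000,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
int bufferSize = Math.max(4 * minBuf, 16000); // at least ~1 second at 16,000 bytes/second

AudioTrack player = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        8000,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        bufferSize,
        AudioTrack.MODE_STREAM);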
I have partially solved it by placing the mPlayer.write(audio, 0, audio.length) call in its own thread. This takes away some of the choppiness (because write() is a blocking call), but it still sounds choppy after a second or two, and it still has a significant delay of 2-3 seconds.
new Thread() {
    public void run() {
        byte[] audio = packet.getData();
        mPlayer.write(audio, 0, audio.length);
    }
}.start();
Just a little anonymous Thread that does the writing now...
Anybody have an idea on how to solve this issue?
Edit:
After some further checking and debugging, I've noticed that this is an issue with obtainBuffer.
I've looked at the Java code of AudioTrack and the C++ code of AudioTrack, and I've noticed that the warning can only originate in the C++ code.
if (__builtin_expect(result != NO_ERROR, false)) {
    LOGW("obtainBuffer timed out (is the CPU pegged?) "
         "user=%08x, server=%08x", u, s);
    mAudioTrack->start(); // FIXME: Wake up audioflinger
    timeout = 1;
}
I've noticed that there is a FIXME in this piece of code. :< But anyway, could anybody explain how this C++ code works? I've had some experience with C++, but never with anything as complicated as this...
Edit 2:
I've tried something different now: I buffer the data I receive, and once the buffer has filled with some data, it is written to the player. The player keeps up with consuming it for a few cycles, then the obtainBuffer timed out (is the CPU pegged?) warning kicks in, and no data at all is written to the player until it is kick-started back to life... After that, it continually gets data written to it until the buffer is emptied.
Another slight difference is that I am streaming a file to the player now. That is, I read it in chunks, then write those chunks to the buffer. This simulates packets being received over WiFi...
I am beginning to wonder if this is just an OS issue that Android has, and it isn't something I can solve on my own... Anybody got any ideas on that?
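For reference, this is roughly the buffering scheme from this edit, written as a producer/consumer sketch (the unbounded queue and the lack of a pre-fill threshold are simplifications of what I actually do):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Producer: the network thread puts received packets in. Consumer: a
// dedicated thread feeds the AudioTrack, so write() blocking can't stall
// the receive loop.
final BlockingQueue<byte[]> queue = new LinkedBlockingQueue<byte[]>();
// In the receive loop:  queue.put(packet.getData());

Thread playbackThread = new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            mPlayer.play();
            while (!Thread.currentThread().isInterrupted()) {
                byte[] chunk = queue.take(); // blocks until data arrives
                mPlayer.write(chunk, 0, chunk.length);
            }
        } catch (InterruptedException e) {
            // interrupted: stop feeding the track
        }
    }
});
playbackThread.start();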
Edit 3:
I've done more testing, but it doesn't help me any further. This test shows me that I only get lag when I write to the AudioTrack for the first time; it takes somewhere between 1 and 3 seconds to complete. I did this by using the following bit of code:
long beforeTime = Utilities.getCurrentTimeMillis(), afterTime = 0;
mPlayer.write(data, 0, data.length);
afterTime = Utilities.getCurrentTimeMillis();
Log.e("WriteToPlayerThread", "Writing a package took " + (afterTime - beforeTime) + " milliseconds");
However, I get the following results:
Logcat Image http://img810.imageshack.us/img810/3453/logcatimage.png
These show that the lag occurs only at the beginning, after which the AudioTrack keeps getting data continuously... I really need to get this one fixed...