I am struggling to play sounds with SoundPool in Flutter.
I have read/Googled about the dreaded message "AUDIO_OUTPUT_FLAG_FAST denied by server; frameCount 0 -> 48000".
So far, here are the steps I have taken:
The image running in the emulator is a Pixel 3a.
The files are WAV, about 90 KB each, one second long, sampled at 48 kHz (ffprobe says: Duration: 00:00:01.00, bitrate: 768 kb/s
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 48000 Hz, 1 channels, s16, 768 kb/s)
I pre-load them during app startup from the AssetBundle; here is the code (inside a foreach):
var content = await _bundle.load(notePath);
sounds[notePath] = content;
newNote.soundId = await _pool.load(content);
Timer(const Duration(milliseconds: 2000), () => print("loaded $notePath"));
I added this delay because otherwise some sounds were (randomly) not loaded (I have 72 sounds to load).
When I need to play a sound (upon a tap on the screen), I simply do:
_pool.stop(1);
_pool.play(noteToPlay.soundId);
This creates some crackle (someone suggested always playing a sound at volume 0 instead to avoid it, but that would drain the battery).
Now the problem:
The message appears in the console, but I could live with it if it were not for the following issue:
Randomly, some sounds are not played; the taps are simply ignored. The app never crashes. If I remove the soundPool.play call, then everything is OK (the corresponding file is found, the tap is handled...), so I gather that SoundPool gets lost during one of the Future playbacks. The player can tap very quickly, but even if I go slowly, it still fails.
SoundPool is instantiated like this:
class NoteService {
final Soundpool _pool = Soundpool(streamType: StreamType.notification);
[snip class definition]
Thank you for your insight.
Solved it by changing the streamType in Soundpool's constructor to StreamType.alarm:
pool = Soundpool(streamType: StreamType.alarm);
Related
I'm using the Android Oboe library for high-performance audio in a music game.
In the assets folder I have 2 .raw files (both 48000 Hz, 16-bit PCM WAVs, about 60 kB each):
std_kit_sn.raw
std_kit_ht.raw
These are loaded into memory as SoundRecordings and added to a Mixer. kSampleRateHz is 48000:
stdSN = SoundRecording::loadFromAssets(mAssetManager, "std_kit_sn.raw");
stdHT = SoundRecording::loadFromAssets(mAssetManager, "std_kit_ht.raw");
mMixer.addTrack(stdSN);
mMixer.addTrack(stdHT);
// Create a builder
AudioStreamBuilder builder;
builder.setFormat(AudioFormat::I16);
builder.setChannelCount(1);
builder.setSampleRate(kSampleRateHz);
builder.setCallback(this);
builder.setPerformanceMode(PerformanceMode::LowLatency);
builder.setSharingMode(SharingMode::Exclusive);
LOGD("After creating a builder");
// Open stream
Result result = builder.openStream(&mAudioStream);
if (result != Result::OK){
LOGE("Failed to open stream. Error: %s", convertToText(result));
}
LOGD("After openstream");
// Reduce stream latency by setting the buffer size to a multiple of the burst size
mAudioStream->setBufferSizeInFrames(mAudioStream->getFramesPerBurst() * 2);
// Start the stream
result = mAudioStream->requestStart();
if (result != Result::OK){
LOGE("Failed to start stream. Error: %s", convertToText(result));
}
LOGD("After starting stream");
They are triggered at the required times with standard code (as per the Google tutorials):
stdSN->setPlaying(true);
stdHT->setPlaying(true); //Nasty Sound
The audio callback is standard (as per Google tutorials):
DataCallbackResult SoundFunctions::onAudioReady(AudioStream *mAudioStream, void *audioData, int32_t numFrames) {
// Play the stream
mMixer.renderAudio(static_cast<int16_t*>(audioData), numFrames);
return DataCallbackResult::Continue;
}
The std_kit_sn.raw plays fine, but std_kit_ht.raw has a nasty distortion. Both play with low latency. Why does one play fine while the other is badly distorted?
I loaded your sample project and I believe the distortion you hear is caused by clipping/wraparound during mixing of sounds.
The Mixer object from the sample is a summing mixer. It just adds the values of each track together and outputs the sum.
You need to add some code to reduce the volume of each track so that the sum never exceeds the limits of an int16_t (although you're welcome to file a bug on the oboe project and I'll try to add this in an upcoming version). If you exceed this limit you'll get wraparound, which is what causes the distortion.
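To make that concrete, here is a minimal summing-mixer sketch (in Java for brevity, though the sample's Mixer is C++; the arithmetic is the same): accumulate into a 32-bit integer, then clamp before writing back to 16 bits so the sum cannot wrap around.
// Sum N 16-bit tracks into an int accumulator, then clamp to the short
// range so the mix cannot wrap around and distort.
static void mix(short[][] tracks, short[] out, int numFrames) {
    for (int i = 0; i < numFrames; i++) {
        int sum = 0;
        for (short[] track : tracks) {
            sum += track[i];
        }
        // Hard clamp; alternatively, scale each track by 1/N before summing.
        out[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, sum));
    }
}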
Additionally, your app is hardcoded to run at 22050 frames/sec. This will result in sub-optimal latency across most mobile devices because the stream is forced to upsample to the audio device's native frame rate. A better approach would be to leave the sample rate undefined when opening the stream - this will give you the optimal frame rate for the current audio device - then use a resampler on your source files to supply audio at this frame rate.
I have an app that uses OpenSL ES. When I try to use it on a Nexus 9 running 6.0.1, I hear noise, as if I had the wrong sampling rate. On other devices everything is OK.
My SLDataFormat_PCM structure:
SLDataFormat_PCM format_pcm = {
SL_DATAFORMAT_PCM,
aChannels,
48000 * 1000,
SL_PCMSAMPLEFORMAT_FIXED_16,
SL_PCMSAMPLEFORMAT_FIXED_16,
aChannels == 2 ? SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT
: SL_SPEAKER_FRONT_CENTER,
SL_BYTEORDER_LITTLEENDIAN
};
When I change the sample rate (+/- 1 Hz) in this structure, the output sounds OK, but I receive an AudioTrack debug message:
W/AudioTrack: AUDIO_OUTPUT_FLAG_FAST denied by client; transfer 1, track 47999 Hz, output 48000 Hz
Why do I have a problem in FAST mode if the Nexus 9 runs at 48000 Hz?
I checked it using this method:
jclass clazz = env.getEnv()->FindClass("android/media/AudioSystem");
jmethodID mid = env.getEnv()->GetStaticMethodID(clazz, "getPrimaryOutputSamplingRate", "()I");
int nSampleRate = env.getEnv()->CallStaticIntMethod(clazz, mid);
LOGDEBUG << "Sample Rate: " << nSampleRate;
[ DBG:c894860f] 11:16:14.902: Sample Rate: 48000
Is there a better method to get the device's sample rate?
Yes, there is a method to find the preferred sample rate for a device, though it only works on API level > 16. You can have a look at my answer here.
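The gist of that answer, assuming the standard AudioManager property approach (available from API 17), is a sketch like this:
import android.content.Context;
import android.media.AudioManager;

// Returns the device's preferred output sample rate, falling back to
// 44100 Hz when the property is unavailable.
static int preferredSampleRate(Context context) {
    AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
    String rate = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);
    return rate != null ? Integer.parseInt(rate) : 44100;
}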
As for your SLDataFormat_PCM structure: you've initialized the sample rate as 48000 * 1000. OpenSL ES takes this field in milliHertz, so that value does equal 48 kHz and is the same as the named constant SL_SAMPLINGRATE_48, but the constant is less error-prone. If you want to sample your PCM data at 48 kHz, try the code below.
// configure audio source
SLDataFormat_PCM format_pcm = {
SL_DATAFORMAT_PCM,
aChannels,
SL_SAMPLINGRATE_48,
SL_PCMSAMPLEFORMAT_FIXED_16,
SL_PCMSAMPLEFORMAT_FIXED_16,
aChannels == 2 ? SL_SPEAKER_FRONT_LEFT | SL_SPEAKER_FRONT_RIGHT
: SL_SPEAKER_FRONT_CENTER,
SL_BYTEORDER_LITTLEENDIAN
};
I haven't worked with a Nexus 9 before, so I don't know whether it supports a 48 kHz sampling rate, but you can check whether it does.
The problem was with a mutex in the callback function: the audio callback runs on a time-critical thread, so blocking on a lock there causes glitches.
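The usual fix, sketched here in Java even though the real callback is native (onBufferReady is a hypothetical stand-in for the buffer-queue callback), is to talk to the callback through lock-free primitives such as an atomic flag instead of a mutex:
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicBoolean;

// Control thread and audio callback share state through an atomic,
// so the callback can never block on a lock held by another thread.
final AtomicBoolean playing = new AtomicBoolean(false);

void onBufferReady(short[] buffer) {  // hypothetical callback
    if (!playing.get()) {
        Arrays.fill(buffer, (short) 0);  // output silence, never block
        return;
    }
    // ... render audio into buffer ...
}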
UPD:
OpenSLES Readme
Known issues
At 48000Hz, Galaxy Nexus and Nexus 10 produce glitchy output. At
44100Hz, Galaxy Nexus tends to glitch when switching activities or
bringing up large dialogs. Touch sounds occasionally cause OpenSL to
glitch. It's probably a good idea to disable touch sounds in audio
apps. These problems are not specific to opensl_stream and have been
reproduced in other settings.
I have a number of mp3 files that I use with Android MediaPlayer to play from certain offsets.
Using seekTo() seems to stop at the correct location, and player.getCurrentPosition() returns the correct offset, but in some cases the real position is off by as much as 200 ms. The files are about 3 minutes of recording each, and the incorrect offsets seem to appear near the end of some of the files.
I see the same effect whether I use an Android 4.0.3 device or a 4.3 emulator.
Does anybody have experience with "fine-tuning" MediaPlayer offsets, or any idea why MediaPlayer might not work correctly with some files? They are all CBR, stereo; some have a sampling frequency of 22050 Hz, some 44100 Hz, with different bitrates.
I set the offsets from another program and save them to the MP3 tags, then in case of doubt verify them manually using Audacity. Audacity agrees with my estimate of the correct offset; MediaPlayer seems to disagree.
I'm aware that I could use AudioTrack with raw sound files and have better control, but this might be impractical: there are many MP3 files, so using raw sound data would make for a pretty large application or many large data files.
The code is nothing fancy:
player.seekTo(start);
player.start();
CountDownTimer timer = new CountDownTimer(length, 100) {
@Override
public void onTick(long millisUntilFinished) {
if (player!=null) setInt(R.id.nLocation, player.getCurrentPosition());
}
@Override
public void onFinish() {
if (player!=null) {
if (player.isPlaying()) {
player.pause();
}
setInt(R.id.nLocation, player.getCurrentPosition());
player.stop();
player.release();
player = null;
}
}
};
timer.start();
I did not manage to find a rule for why MediaPlayer interprets the offset (seekTo) differently for a group of MP3 files. For example, when I created a new MP3 file with the same parameters in Audacity+LAME (MPEG-1, Layer III, 44100 Hz, 192 kb/s), it worked perfectly.
However, this can be reproduced: rip an MP3 file using Windows Media Player with the settings MP3, 192 kb/s. [added in edit]
I found a workaround that seems to work for any recording.
The background: in order to tell MediaPlayer to play from a certain offset, I store certain data in the MP3 tags. I use a separate program to set up the playback (in frames): label A, start frame = 1000, length = 100 frames; label B, start frame = 1500; etc. When I need to play it back, I read the MP3 headers, determine the frame length (for example 26.12245 ms/frame) and calculate the offset (1000 frames will be 26122 ms).
The workaround is to also store the frame count and the length in ms in the MP3 tag (or pass through the file again and count the frames). Then, when starting MediaPlayer, compare MediaPlayer.getDuration() (MediaPlayer's estimate) with the duration stored in the MP3 tag, and adjust the frame size:
adjustedFrameSizeMs = realFrameSizeMs + (player.getDuration()-storedDurationMs)/storedframeCount;
In my case (for the files with incorrect offsets) the adjusted frame length was always between 26.08 and 26.09 ms (instead of 26.12245 ms).
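Spelled out in Java (names mirror the description above; realFrameSizeMs, storedDurationMs and storedFrameCount are assumed to come from the stored MP3 tags):
import android.media.MediaPlayer;

// Correct the nominal frame size using MediaPlayer's own duration estimate,
// then convert a frame offset into a millisecond position for seekTo().
static double adjustedFrameSizeMs(MediaPlayer player, double realFrameSizeMs,
                                  long storedDurationMs, long storedFrameCount) {
    return realFrameSizeMs
            + (player.getDuration() - storedDurationMs) / (double) storedFrameCount;
}

static int frameToMs(double frameSizeMs, long startFrame) {
    return (int) Math.round(startFrame * frameSizeMs);
}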
I tried to check whether this is because Android plays the recording faster (so that it estimates the "real time", not the time according to frame size and frame count). It seems that it really does play faster - even faster than its own estimate. For example, for a recording of about 1 hour:
my estimate: 2448 s
MediaPlayer: 2444 s (4 sec difference)
Audacity: 2442 s (here we are in disagreement)
Foobar: 2448 s (another witness that agrees with my estimate :-)
MediaPlayer, real play time: 2438 s
The real play time was 6 s (0.25%) less than MediaPlayer's own estimate. Another attempt on a different sample gave the same percentage difference. However, the fact that Audacity and Foobar did not always agree with my estimates does not let me put all the blame on MediaPlayer.
I am programming for Android 2.2 and am trying to use the SoundPool class to play several sounds simultaneously, but at what feel like random times, sound will stop coming out of the speakers.
For each sound that would have been played, this is printed in logcat:
AudioFlinger could not create track. status: -12
Error creating AudioTrack
Audio track delete
No exception is thrown and the program continues to execute without any changes, except for the lack of sound. I've had a really hard time tracking down what conditions cause the error or recreating it after it happens. I can't find the error documented anywhere and am pretty much at a loss.
Any help would be greatly appreciated!
Edit: I forgot to mention that I am loading mp3 files, not ogg.
I had almost this exact same problem with some sounds I was attempting to load and play recently.
I even narrowed it down to a single mp3 that was causing this error.
One thing I noted: when I loaded it with a loop of -1, it would fail with the status -12 error, but when I loaded it to loop 0 times, it would succeed; even attempting to loop 1 time failed.
The final solution was to open the mp3 in an audio editor and re-export it at slightly lower quality, so that the file is smaller and doesn't seem to take up quite as many resources in the system.
Finally, there is this discussion that encourages calling release() on the objects you are using, because there is indeed a hard limit on those resources, and it is system-wide, so if you hold several of them, other apps will not be able to use them.
https://groups.google.com/forum/#!topic/android-platform/tyITQ09vV3s/discussion%5B1-25%5D
For audio, there's a hard limit of 32 active AudioTrack objects per
device (not per app: you need to share those 32 with rest of the system), and AudioTrack is used internally beneath SoundPool,
ToneGenerator, MediaPlayer, native audio based on OpenSL ES, etc. But
the actual AudioTrack limit is < 32; it depends more on soft factors
such as memory, CPU load, etc. Also note that the limiter in the
Android audio mixer does not currently have dynamic range compression,
so it is possible to clip if you have a large number of active sounds
and they're all loud.
For video players the limit is much much lower due to the intense load
that video puts on the device.
I'll use this as an opportunity to remind media developers: please
remember to call release() for media objects when your app is paused.
This frees up the underlying resources that other apps will need.
Don't rely on the media objects being cleaned up in finalize by the
garbage collector, as that has unpredictable timing.
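As a concrete example of that advice, a SoundPool owned by an Activity could be released in onPause (mSoundPool is assumed to be a field, recreated in onResume):
@Override
protected void onPause() {
    super.onPause();
    if (mSoundPool != null) {
        mSoundPool.release();  // give the underlying AudioTracks back to the system
        mSoundPool = null;
    }
}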
I had a similar issue where the music tracker within my Android game would drop notes, and I got the AudioFlinger error (although my status was -22). I got it working, however, so this might help some people.
The problem occurred when a single sample was being output multiple times simultaneously - in my case a single sample being played on two or more tracks. This seemed to occasionally deadlock or something, and one of the two notes would be dropped. The solution was to have two copies of the sample (two actual ogg files, identical, both in the assets). Then on each track, even though I was playing the same sample, it was coming from a different file. This totally fixed the issue for me.
I'm not sure why it works, as I cache the samples into memory; even loading the same file into two different sounds didn't fix it. Only when the samples came from two different files did the errors go away.
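If you want to try the same workaround, a sketch (R.raw.kick_a and R.raw.kick_b are hypothetical names for the two identical copies of the sample):
// Load the two identical files as independent samples, then alternate
// between them so consecutive triggers never share a sample.
int[] kickIds;
int next = 0;

void load(Context context, SoundPool soundPool) {
    kickIds = new int[] {
        soundPool.load(context, R.raw.kick_a, 1),
        soundPool.load(context, R.raw.kick_b, 1)
    };
}

void trigger(SoundPool soundPool) {
    soundPool.play(kickIds[next], 1f, 1f, 1, 0, 1f);
    next = (next + 1) % kickIds.length;
}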
I'm sure this won't help everyone and it's not the prettiest fix but it might help someone.
john.k.doe is right. You must reduce the size of your mp3 file. You should keep the size under 100 kB per file. I had to reduce my 200 kB file to 72 kB using a constant bit rate (CBR) of 32 kbps instead of the usual 128 kbps. That worked for me!
Try
final ToneGenerator tg = new ToneGenerator(AudioManager.STREAM_NOTIFICATION, 50);
tg.startTone(ToneGenerator.TONE_PROP_BEEP, 200);
tg.release();
Releasing afterwards keeps your resource usage down.
I had this problem. To solve it, I call the .release() method of the SoundPool object after the sound finishes playing.
Here's my code:
final SoundPool pool = new SoundPool(10, AudioManager.STREAM_MUSIC, 0);
final int teste = pool.load(this.ctx, this.soundS, 1);
pool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool sound, int sampleId, int status) {
        pool.play(teste, 1f, 1f, 1, 0, 1f); // left/right volume are floats in 0..1
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(2000); // wait for playback to finish
                    pool.release();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
});
Note that in my case the sounds were 1-2 seconds long at most, so I used a value of 2000 milliseconds in Thread.sleep(), in order to release the resources only after the player has finished.
As said above, there is a problem with looping: when I set repeat to -1 I get this error, but with 0 everything works properly.
I've noticed that some sounds give this error when I try to play them one after another. For example:
mSoundPool.stop(mStreamID);
mStreamID = mSoundPool.play(mRandID, mVolume, mVolume, 1, -1, 1f);
In such a case, the first track plays OK, but when I switch sounds, the next track gives this error. It seems that with looping, a buffer is somehow overloaded, and mSoundPool.stop cannot release the resources immediately.
Solution:
final Handler handler = new Handler();
handler.postDelayed(new Runnable() {
    @Override
    public void run() {
        mStreamID = mSoundPool.play(mRandID, mVolume, mVolume, 1, -1, 1f);
    }
}, 350);
And it works, but the required delay differs between devices.
In my case, reducing the quality and thereby the file sizes of the MP3s to under 100 kB wasn't sufficient: some 51 kB files worked, while some longer 41 kB files still did not.
What helped us was reducing the sample rate from 44100 Hz to 22050 Hz, or shortening the duration to less than 5 seconds.
I have seen too many overcomplicated answers. Error -12 means that you did not release the player objects.
I had the same problem after I played an OGG audio file 8 times.
This worked for me:
SoundPoolPlayer onBeep; // global variable (SoundPoolPlayer is a custom wrapper class, not part of the SDK)
if(onBeep!=null){
onBeep.release();
}
onBeep = SoundPoolPlayer.create(getContext(), R.raw.micon);
onBeep.setOnCompletionListener(
new MediaPlayer.OnCompletionListener() {
@Override
public void onCompletion(MediaPlayer mp) { //mp will be null here
loge("ON Beep! END");
startGoogleASR_API_inner();
}
}
);
onBeep.play();
Releasing the variable right after .play() would mess things up, and it is not possible to release it inside onCompletion, so note how I release it before using it (and check for null to avoid NullPointerExceptions).
It works like a charm!
A single SoundPool has an internal memory limit of 1 MB. You might be hitting this if your sounds are very high quality. If you have many sounds and are hitting this limit, just create more SoundPools and distribute the sounds across them.
You may not even be able to reach the hard track limit if you run out of memory before you get there.
That error appears not only when the stream or track limit has been reached, but also at the memory limit. SoundPool will stop playing old and/or de-prioritized sounds in order to play a new sound.
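A sketch of how sounds might be distributed across pools (the cap of 12 sounds per pool is an assumption; tune it so each pool stays under the memory limit for your sample sizes):
import java.util.ArrayList;
import java.util.List;
import android.media.AudioManager;
import android.media.SoundPool;

static final int SOUNDS_PER_POOL = 12;  // assumed cap, depends on sample sizes
List<SoundPool> pools = new ArrayList<>();

// Returns the pool responsible for the n-th sound, creating pools lazily.
SoundPool poolFor(int soundIndex) {
    int p = soundIndex / SOUNDS_PER_POOL;
    while (pools.size() <= p) {
        pools.add(new SoundPool(8, AudioManager.STREAM_MUSIC, 0));
    }
    return pools.get(p);
}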
Guys, I have an audio file that I read from the SD card and play on a button click. It plays successfully 6 or 7 times, but after that it shows "unable to load (null)" and "sample 1 not ready".
I am working on Android 2.1, i.e. API 7.
String path = "/sdcard/var/audio.mp3";
sound1 = mSoundPool.load(path,1);
mSoundPool.play(sound1, 1, 1, 1, time - 1, 1);
What is SoundPool indicating by saying "unable to load (null)" and "sample 1 not ready"?
How can I fix this? Please help me, I have been struggling with this for a long time.
I have had a similar problem with a wav file. Let's start by ruling some things out:
It is NOT a problem of waiting, although it may be in other cases. Please note that you have posted two DIFFERENT error messages as if they were one:
unable to load (null)
sample 1 not ready
The first error is raised when you try to load the sample into the SoundPool, the second one when you try to play it. But in this case the second error is clearly a consequence of the first: the sample cannot be ready if it has not been loaded.
So you should concentrate on the first error.
It is NOT related to MediaPlayer either, since you are using SoundPool, which is a different thing AFAIK.
So the source of the problem may be one of the following, which should be ruled out in order:
The file is not there.
The file is there, but it is not readable for some reason.
The file is there, and it is readable, but it is corrupt or not an audio file.
The file is there, it is readable, and it is a non-corrupted audio file, but SoundPool dislikes it.
This last one was my case. For some reason SoundPool was unable to load a wav sample that works in every other player I have used, so I simply ended up using another file with a different format. Here is the format of the offending file, as reported by mplayer on my GNU/Linux box:
Opening audio decoder: [pcm] Uncompressed PCM audio decoder
AUDIO: 96000 Hz, 1 ch, s16le, 1536.0 kbit/100.00% (ratio: 192000->192000)
Selected audio codec: [pcm] afm: pcm (Uncompressed PCM)
And here it is the one for the other file that does work:
Opening audio decoder: [pcm] Uncompressed PCM audio decoder
AUDIO: 22050 Hz, 2 ch, s16le, 705.6 kbit/100.00% (ratio: 88200->88200)
Selected audio codec: [pcm] afm: pcm (Uncompressed PCM)
There are many differences, but the point is that the second one works, so I simply discarded the first file and moved on to the second. Knowing this, if I needed the first sound sample, I would just adjust its rate and channels with an audio editor like Audacity to solve the problem.
Why didn't it just work in the first case? Who knows, but if it can be solved this easily... who cares?
You need to check that the file has loaded successfully before playing it, using SoundPool.setOnLoadCompleteListener:
public void loadSound(String strSound) {
    mSoundPool.setOnLoadCompleteListener(new SoundPool.OnLoadCompleteListener() {
        @Override
        public void onLoadComplete(SoundPool soundPool, int sampleId, int status) {
            if (status == 0) { // 0 means the sample loaded successfully
                soundPool.play(sampleId, streamVolume, streamVolume, 1, LOOP_1_TIME, 1f);
            }
        }
    });
    try {
        mSoundPool.load(aMan.openFd(strSound), 1);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
I think that's your problem: the whole file is not loaded yet and you're trying to play it. You have to wait until loading completes before playing it.
Have you tried calling mediaPlayer.release() and re-initializing? See also http://developer.android.com/reference/android/media/MediaPlayer.html