I am encountering problems playing looping sounds with SoundPool and .OGG files. I have this HashMap set up for finding a sound associated with a name and playing or stopping it:
public void playLoopSound(String soundName) {
    currentSound = (Integer) soundMap.get(soundName);
    if (currentSound != -1) {
        try {
            Logger.log("Playing Loop Sound: " + currentSound);
            loopingSound = soundPool.play(currentSound, 1, 1, 0, -1, 1);
        } catch (Exception e) {
            Logger.log("Sound Playing Error: " + e.getMessage());
        }
    } else {
        Logger.log("Sound Not Found");
    }
}

public void stopLoopSound() {
    soundPool.stop(loopingSound);
    loopingSound = 0;
}
This setup works fine: I start the loop when the character starts walking and stop it when the character stops walking.
However, the sound stops playing randomly, usually a minute or so after it has been in use (turned on and off a few times)...
Has anyone else encountered similar problems with SoundPool and looped sounds?
Reading the documentation on SoundPool clears up a lot:
- When playing a sound you pass the int that load() gave you for that sound. play() (and the other methods that start a sound playing) returns a different int: the ID of the stream on which the sound is now playing.
- When you want to stop the sound (or the looping sound) you have to pass the stream ID you got back when you started playing the sound, NOT the int of the sound itself!
So inside your class you keep the return value, e.g. int streamId = mSoundPool.play(soundID, leftVolume, rightVolume, priority, loop, rate); (that is the actual parameter order), and to then stop the sound (or pause it, or change whether it loops) you call mSoundPool.stop(streamId);.
TL;DR: there's a difference between the int that identifies your loaded sound and the internal int SoundPool uses to identify the stream on which your sound is currently playing.
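Applied to the code in the question, a minimal sketch of keeping the two IDs apart (reusing the soundPool, soundMap and loopingSound fields from above):

int soundId = (Integer) soundMap.get(soundName);            // ID returned by load()
loopingSound = soundPool.play(soundId, 1f, 1f, 0, -1, 1f);  // ID of the new stream
// ... later:
soundPool.stop(loopingSound);    // stop() takes the stream ID
// soundPool.stop(soundId);      // would be wrong: that's the sound ID, not a stream ID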
I am developing an app which at some point is supposed to display the loudness of the user's voice input over a time span of 5s. The app is meant for voice training so the user is supposed to be able to hold the sound on a certain loudness level for the whole time.
I used MediaRecorder to record the user's input, and on the emulator it works completely fine.
However, when I run the app on a physical device (a Motorola Moto G5S), the loudness reading seems to be regulated down to a very low constant value after about 2 seconds.
I thought the problem might be the AudioSource, so I switched it from AudioSource.MIC to AudioSource.UNPROCESSED, but then it didn't let me record anything on the mobile device (on the emulator it was still working).
Is this due to some filter in the audio hardware of the device? Is it possible to fix this, or might the problem lie somewhere completely different?
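Could it also be that the device simply doesn't advertise support for the unprocessed source? A rough check (a sketch, assuming the same AudioManager field am used in the code below) would be:

// Sketch: UNPROCESSED (API 24+) is only guaranteed on devices that report
// support for it via this system property.
String unprocessed = am.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED);
if ("true".equals(unprocessed)) {
    mediaRecorder.setAudioSource(MediaRecorder.AudioSource.UNPROCESSED);
} else {
    // Fall back to MIC; VOICE_RECOGNITION is another option that disables
    // automatic gain control on many devices (not verified on the Moto G5S).
    mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
}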
The Audio is recorded when a button is clicked:
public void onClick(View view) {
    // when record button is clicked MediaRecorder will record for 5 seconds
    // the time passed recording is shown to the user in the progress bar under the button
    if (view.getId() == R.id.btnRecord) {
        System.out.println("\n record was clicked \n");
        // scale volume to range [0:1]
        int max_val = Settings_menu.volumeMax;
        float noiseNow_scaled = (float) Settings_menu.noiseNow / max_val;
        try {
            // recording
            mediaRecorder = new MediaRecorder();
            nativeSampleRate = Integer.parseInt(am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE));
            System.out.println("\n Native sampling rate: " + nativeSampleRate);
            //nativeSampleBufSize = Integer.parseInt(am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER));
            mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
            mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
            path = getRecordingFilePath();
            mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
            mediaRecorder.setAudioSamplingRate(nativeSampleRate);
            mediaRecorder.setOutputFile(getRecordingFilePath());
            mediaRecorder.prepare();
            mediaRecorder.start();
            // play pink noise for masking auditory feedback
            streamID = soundPool.play(pinknoise, noiseNow_scaled, noiseNow_scaled, 0, 0, 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
        CountDownTimer timer = new CountDownTimer(5000, LoudnessCalibration.interval) {
            // the maximum loudness of the recording within a LoudnessCalibration.interval ms interval is computed
            // progress bar loads new progress every LoudnessCalibration.interval ms
            // progressbar is filled when 5000 is reached
            @Override
            public void onTick(long l) {
                counter += LoudnessCalibration.interval;
                pb.setProgress(counter);
                amplitudes.add((float) mediaRecorder.getMaxAmplitude());
                length++;
            }

            // when 5 seconds are over MediaRecorder has to be released & pink noise stops playing
            @Override
            public void onFinish() {
                mediaRecorder.stop();
                mediaRecorder.reset();
                mediaRecorder.release();
                mediaRecorder = null;
                soundPool.stop(streamID);
                pb.setProgress(0);
                counter = 0;
            }
        }.start();
        {...}
    }
Thank you for any ideas or thoughts on this!
I want to play float arrays on Android. I wanted to take what I thought was the easy way first, so I thought I'd use AudioTrack. I chose static mode because the audio stream is not continuous. The documentation says:
In static buffer mode, copies the data to the buffer starting at offset 0, and the write mode is ignored. Note that the actual playback of this data might occur after this function returns.
So initially my implementation looked like:
// constructor ..
audioTrack = new AudioTrack(
        new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_ASSISTANCE_NAVIGATION_GUIDANCE)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build(),
        new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
                .setSampleRate(sample_rate)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build(),
        sample_rate * 2 * 10,          // bufferSizeInBytes
        AudioTrack.MODE_STATIC,
        AudioManager.AUDIO_SESSION_ID_GENERATE);
[...]
public ... () { // function
    audioTrack.write(sound, 0, sound.length, AudioTrack.WRITE_NON_BLOCKING);
    audioTrack.play();
}
This led to the first problem: a sound could only be played once. I found I need to call stop() and reloadStaticData() after play(), but with that the first sound is not played at all, and any following sounds play incorrectly (the audio itself sounds wrong).
I tried putting stop() and reloadStaticData() after write() and before play(); then the first sound plays, but the next ones are wrong (they're a cut-off version of the first sound instead of a different sound).
From these experiments it seems to me like write() is not actually writing to the start of the buffer. In general, how to use AudioTrack is completely unclear to me: stop() seems to stop playback, yet I have to call it after play() to reset the playback head position.
Turns out my phone speakers couldn't play the sound... this is my solution
public void play_sound(float[] sound) {
    if (audioTrack.getPlayState() == AudioTrack.PLAYSTATE_PLAYING) {
        audioTrack.stop();
        audioTrack.reloadStaticData();
    }
    // copy into a zero-filled array covering the whole static buffer
    // (4 bytes per float), so leftovers from a previous write are cleared
    float[] buffer = new float[buffer_size_in_bytes / 4];
    for (int i = 0; i < sound.length; i++) {
        buffer[i] = sound[i];
    }
    // Last arg is ignored in static mode.
    audioTrack.write(buffer, 0, buffer.length, AudioTrack.WRITE_BLOCKING);
    audioTrack.play();
}
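In case it helps, a usage sketch (my assumptions: sample_rate is 48000, the track is the stereo PCM-float one built in the constructor above, and buffer_size_in_bytes matches what was passed there). It builds 200 ms of a 440 Hz tone as interleaved stereo floats and hands it to play_sound():

// Hypothetical caller: synthesize a short beep and play it through the static track.
float[] beep = new float[(int) (0.2f * sample_rate) * 2];   // 2 channels, interleaved
for (int frame = 0; frame < beep.length / 2; frame++) {
    float sample = (float) Math.sin(2 * Math.PI * 440 * frame / sample_rate);
    beep[2 * frame] = sample;        // left channel
    beep[2 * frame + 1] = sample;    // right channel
}
play_sound(beep);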
Basically, my program records user input via the microphone and stores it as a .pcm file in the sdcard/ directory; an existing file is overwritten. The file is later sent for playback and analysis (mainly FFT and RMS computation).
I have added another function that lets the program record system audio, so the user's mp3 files can be analyzed as well. It streams the system audio and stores it as a .pcm file for later playback and analysis.
It's all functioning well. However, there is a slight issue: when the program streams audio, it also captures input from the mic, so there is noise in the playback. I do not want this, as it affects the analysis readings. I googled for a solution and found that I can mute the mic. So now I want to mute the mic while the mp3 file is being streamed.
The code I have found is:
AudioManager.setMicrophoneMute(true);
I tried to implement it, but it just crashes my application. I have been trying to find a solution for the past few days but cannot seem to get anywhere.
Here is my code snippet for the part where I want to stream the system audio and mute the microphone before streaming starts.
//create a new AudioRecord object to record the audio data of an mp3 file
int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
audioRecord = new AudioRecord(AudioManager.STREAM_MUSIC,
frequency, channelConfiguration,
audioEncoding, bufferSize);
//a short array to store raw pcm data
short[] buffer = new short[bufferSize];
Log.i("decoder", "The audio record created fine ready to record");
try {
    audioManager.setMicrophoneMute(true);
} catch (Exception e) {
    e.printStackTrace();
}
audioRecord.startRecording();
isDecoding = true;
When the setMicrophoneMute(true) line is surrounded with a try-catch, the program only crashes when I try to play the recording back. The errors are as follows:
"AudioFlinger could not create track, status: -12"
"Error initializing AudioTrack"
"[android.media.AudioTrack] Error code -20 when initializing AudioTrack."
When it is not surrounded with a try-catch, the program crashes the moment I click the start-streaming button.
"Decoding failed" <- this is an error log from catching a throwable.
How can I mute the microphone input while streaming the system audio? Let me know if I can provide more code. Thank you!
EDIT:
I have implemented my microphone muting successfully, and isMicrophoneMute() even returns true; however, the mic is not actually muted, as it still records from the microphone. The true is a false positive.
Based on the suggested answer, I have already created a class for audio focus as below:
public class AudioFocus
{
    private final Context c;

    private final AudioManager.OnAudioFocusChangeListener changeListener =
            new AudioManager.OnAudioFocusChangeListener()
            {
                public void onAudioFocusChange(int focusChange)
                {
                    // nothing to do
                }
            };

    AudioFocus(Context context)
    {
        c = context;
    }

    public void grabFocus()
    {
        final AudioManager am = (AudioManager) c.getSystemService(Context.AUDIO_SERVICE);
        final int result = am.requestAudioFocus(changeListener,
                AudioManager.STREAM_MUSIC,
                AudioManager.AUDIOFOCUS_GAIN);
        Log.d("audiofocus", "Grab audio focus: " + result);
    }

    public void releaseFocus()
    {
        final AudioManager am = (AudioManager) c.getSystemService(Context.AUDIO_SERVICE);
        final int result = am.abandonAudioFocus(changeListener);
        Log.d("audiofocus", "Abandon audio focus: " + result);
    }
}
I then call the method from my Decoder class to request audio focus:
int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
audioFocus.grabFocus();
audioRecord = new AudioRecord(AudioManager.STREAM_MUSIC,
frequency, channelConfiguration,
audioEncoding, bufferSize);
//a short array to store raw pcm data
short[] buffer = new short[bufferSize];
Log.i("decoder", "The audio record created fine ready to record");
audioRecord.startRecording();
isDecoding = true;
Log.i("decoder", "Start recording fine");
And then release the focus when stop decoding is pressed:
// stops recording
public void stopDecoding() {
    isDecoding = false;
    Log.i("decoder", "Out of recording");
    audioRecord.stop();
    try {
        dos.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    mp.stop();
    mp.release();
    audioFocus.releaseFocus();
}
However, this makes my application crash. Where did I go wrong?
The following snippet requests permanent audio focus on the music audio stream. You should request audio focus immediately before you begin playback, such as when the user presses play. I think this would be the way to go rather than muting the input microphone. Check out the developer audio focus docs for more information.
AudioManager am = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
...
// Request audio focus for playback
int result = am.requestAudioFocus(afChangeListener,
        // Use the music stream.
        AudioManager.STREAM_MUSIC,
        // Request permanent focus.
        AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    am.registerMediaButtonEventReceiver(RemoteControlReceiver);
    // Start playback.
}
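Note that on API 26 and above this stream-based overload of requestAudioFocus() is deprecated; as far as I know, the equivalent permanent-focus request with AudioFocusRequest looks roughly like this (a sketch, reusing the same am and afChangeListener as above):

// API 26+ variant (sketch): same permanent focus request expressed with AudioFocusRequest.
AudioAttributes attrs = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .build();
AudioFocusRequest focusRequest = new AudioFocusRequest.Builder(AudioManager.AUDIOFOCUS_GAIN)
        .setAudioAttributes(attrs)
        .setOnAudioFocusChangeListener(afChangeListener)
        .build();
int result = am.requestAudioFocus(focusRequest);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    // Start playback; call am.abandonAudioFocusRequest(focusRequest) when done.
}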
What is the best way to play a sound and stop it?
I tried using RingtoneManager, MediaPlayer and SoundPool, but failed to stop the sound.
Is there a way to stop a sound when using RingtoneManager with TYPE_ALARM?
A simple snippet would be appreciated.
This is the last thing I tried:
SoundPool pool = new SoundPool(10, AudioManager.STREAM_MUSIC, 0);
List<Integer> streams = new ArrayList<Integer>();
int soundId = pool.load(getApplicationContext(), R.drawable.alarm, 1); // There are several versions of this, pick which fits your sound

try {
    if (myWifiInfo.getRssi() < -55) {
        /*
        Uri notification = RingtoneManager.getDefaultUri(RingtoneManager.TYPE_NOTIFICATION);
        Ringtone r = RingtoneManager.getRingtone(getApplicationContext(), notification);
        r.play();
        */
        Thread.sleep(1000);
        int streamId = -1;
        streamId = pool.play(soundId, 1.0f, 1.0f, 1, 0, 1.0f);
        streams.add(streamId);
        textRssi.setText(String.valueOf(myWifiInfo.getRssi() + " WARNING"));
    } else {
        Log.e("WIFI", "Usao");
        for (Integer stream : streams) {
            pool.stop(stream);
        }
        streams.clear();
        Log.e("WIFI", "Izasao");
    }
I'm assuming this function gets called many times? The problem is that you're adding the streamId to a local list. That list is recreated each time the function is called, so when you try to call stop() the list will always be empty.
If it's only being called once, then the problem is that you never call stop(), only play() (they're in separate branches of that if).
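A sketch of the first fix, with hypothetical field and method names: keep the pool, the stream list, and the loaded sound ID in fields so they survive between calls, and leave only the RSSI check inside the method.

// Sketch: SoundPool state lives in fields so stop() can see stream IDs from earlier calls.
private SoundPool pool;
private final List<Integer> streams = new ArrayList<Integer>();
private int soundId;

private void initSound() {                      // call once, e.g. in onCreate()
    pool = new SoundPool(10, AudioManager.STREAM_MUSIC, 0);
    soundId = pool.load(getApplicationContext(), R.drawable.alarm, 1);
}

private void onRssiChanged(int rssi) {          // call on every RSSI update
    if (rssi < -55) {
        streams.add(pool.play(soundId, 1.0f, 1.0f, 1, 0, 1.0f));
    } else {
        for (Integer stream : streams) {
            pool.stop(stream);
        }
        streams.clear();
    }
}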
I have searched Stack Overflow and cannot find a situation like mine. I am using four buttons, with each button playing a sound file.
I am using SoundPool:
SoundPool sound = new SoundPool(4, AudioManager.STREAM_MUSIC, 0);
I am also using an OnLoadCompleteListener(), which uses Log to write an info message to LogCat.
When I launch the program in the emulator I see all four samples complete loading. While the program runs, three of the sounds play; however, one will always produce:
WARN/SoundPool(4842): sample 0 not READY
Any ideas? I'm quite flabbergasted. The sound files are 16-bit PCM WAV files containing square-wave tones.
Load Code:
sound.setOnLoadCompleteListener(new OnLoadCompleteListener() {
    @Override
    public void onLoadComplete(SoundPool sound, int sampleId, int status) {
        if (status != 0)
            Log.e("SOUND LOAD", " Sound ID: " + sampleId + " Failed to load.");
        else
            Log.i("SOUND LOAD", " Sound ID: " + sampleId + " loaded.");
    }
});

soundID[0] = sound.load(this, R.raw.greennote, 1);
soundID[1] = sound.load(this, R.raw.rednote, 1);
soundID[2] = sound.load(this, R.raw.yellownote, 1);
soundID[3] = sound.load(this, R.raw.bluenote, 1);
Play Sound:
streamid.setStreamId(myActivity.sound.play(id, 0.5f, 0.5f, 0, 0, 1));
I'm having the same issue. From my experiments, it looks like there's something wrong with the ID handling: SoundPool just doesn't like a sound ID of 0.
So I found a workaround: start the sample IDs at 1, not 0. Hope this works.
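A sketch of that guard, with a hypothetical helper inside the activity that owns sound and soundID: a Java int array defaults to 0, and in my experience load() hands back positive IDs, so an entry that is still 0 can be treated as "not loaded yet" and skipped instead of being passed to play().

// Hypothetical helper: never pass 0 to play(), it only produces "sample 0 not READY".
private void playNote(int index) {
    int id = soundID[index];
    if (id == 0) {
        // never loaded, or load() has not completed yet
        Log.w("SOUND LOAD", "Sample " + index + " not ready yet, skipping play()");
        return;
    }
    sound.play(id, 0.5f, 0.5f, 0, 0, 1f);
}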