audioTrack.stop doesn't stop playing audio - android

I am creating an AudioTrack with the following definition.
audioTrack = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        44100,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        buffer.length * 2,
        AudioTrack.MODE_STATIC);
audioTrack.write(buffer, 0, buffer.length);
audioTrack.setPositionNotificationPeriod(500);
audioTrack.setNotificationMarkerPosition(buffer.length);
progressListener = new PlaybackProgress(buffer.length);
audioTrack.setPlaybackPositionUpdateListener(progressListener);
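For context, a minimal sketch of what the PlaybackProgress listener might look like (the actual class isn't shown here; this hypothetical version only wires up the callbacks):

// Hypothetical sketch of PlaybackProgress (the actual class isn't shown
// above); it implements the position-update callbacks used by the track.
private class PlaybackProgress implements AudioTrack.OnPlaybackPositionUpdateListener {
    private final int totalFrames;

    PlaybackProgress(int totalFrames) {
        this.totalFrames = totalFrames; // used to compute progress elsewhere
    }

    @Override
    public void onMarkerReached(AudioTrack track) {
        resetAudioPlayback(); // fires when playback reaches the end marker
    }

    @Override
    public void onPeriodicNotification(AudioTrack track) {
        // fires every 500 frames (see setPositionNotificationPeriod above)
    }
}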
When the audioTrack finishes, the following is called to stop the audio and reset the head position.
private void resetAudioPlayback() {
    ViewGroup.LayoutParams params = playbackView.getLayoutParams();
    params.width = 0;
    playbackView.setLayoutParams(params);
    audioTrack.stop();
    audioTrack.reloadStaticData();
    playImage.animate().alpha(100).setDuration(500).start();
}
The above code works perfectly fine on Android 5.1, but I am having issues on 4.4.4. audioTrack.stop() is called but the audio does not stop; since reloadStaticData rewinds the track back to the start position, it replays the audio. On 5.1 it correctly stops, resets the buffer back to the start of the playback, and plays from the beginning when the play button is pressed.
Can someone help me figure out how to fix this issue on Android 4.4.4?

I'm not absolutely certain this will solve your problem, but consider using pause() instead of stop(). According to the documentation, for MODE_STREAM, stop() will actually keep playing the remainder of the last buffer that was written. You're using MODE_STATIC, but it might be worth trying.
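A minimal sketch of what that reset might look like with pause() (flush() is documented as a no-op for MODE_STATIC, so it is included only for the streaming case):

// Hypothetical variant of the reset using pause() instead of stop():
audioTrack.pause();            // halt playback immediately
audioTrack.flush();            // discards queued data; no-op for MODE_STATIC
audioTrack.reloadStaticData(); // rewind the static buffer to the start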
Also (possibly unrelated): write() returns the number of elements actually written, so you shouldn't assume a single call fills the AudioTrack's entire buffer every time. Treat write() like an OutputStream write, in that it may not consume the entire array it was given; it's better to write a loop, check how much each call wrote, and continue writing from the new index in the array until the total written equals the length of the buffer.
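A minimal sketch of such a loop, assuming buffer is the short[] from the question:

// Hedged sketch: keep writing until the whole buffer has been consumed.
int offset = 0;
while (offset < buffer.length) {
    int written = audioTrack.write(buffer, offset, buffer.length - offset);
    if (written < 0) {
        // negative values are error codes (e.g. ERROR_INVALID_OPERATION)
        break;
    }
    offset += written;
}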

Related

Regulate Android AudioTrack playback speed

I'm currently trying to play back audio using AudioTrack. Audio is received over the network; the application continuously reads data and adds it to an internal buffer, while a separate thread consumes the data and plays it back with AudioTrack.
Problems:
Audio playback fluctuates (it feels like the audio drops at regular intervals), continuously making it unclear.
Playback speed is too high or too low, making it sound unrealistic.
In order to avoid network latency and other factors, I made the application wait until it has read enough data, then play it back at the end.
This makes the audio play really fast. Here is a basic sample of the logic I use.
sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT),
        AudioTrack.MODE_STREAM);
audioTrack.play();

short[] shortBuffer = new short[AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT)];
while (!stopRequested) {
    readData(shortBuffer);
    audioTrack.write(shortBuffer, 0, shortBuffer.length, AudioTrack.WRITE_BLOCKING);
}
Is it correct to say that the Android AudioTrack class doesn't have built-in functionality to control audio playback based on environmental conditions? If so, are there better libraries available that offer a simpler way to play back audio?
The first issue I see is the arbitrary sampling rate.
AudioTrack.getNativeOutputSampleRate returns the sampling rate used by the sound system. It may be 44100, 48000, 96000, 192000 or whatever. But it looks like you have audio data from an independent source, which produces that data at one exact sampling rate.
Let's say the audio from the source is sampled at 44100 samples per second. If you play it at 96000, it will be sped up and higher pitched.
So use the sampling rate, along with the number of channels, sample format, etc., as given by the source, rather than relying on system defaults.
The second: are you sure the readData procedure will always be fast enough to fill the buffer, however small the buffer is, and return before the buffer has finished playing?
You have created the AudioTrack with AudioTrack.getMinBufferSize passed as the bufferSizeInBytes parameter.
getMinBufferSize returns the minimum possible buffer size for those parameters. Let's say it returned a size corresponding to 10 ms of audio.
That means the new data must be prepared within that time interval, i.e. the interval between one write call returning control and the next write being performed must be shorter than the time length of the buffer.
So if the readData function ever takes longer than that interval, playback pauses for that time and you hear small gaps.
The reasons readData might stall can vary: if it reads from a file, it may block on IO; if it allocates Java objects, it may hit a garbage-collector pause; if it uses some kind of decoder or other audio source with its own buffering, it may periodically stall while refilling that buffer.
In any case, unless you're building a real-time synthesizer that must react to user input as fast as possible, use a reasonably large buffer size, and never less than what getMinBufferSize returns. I.e.:
sampleRate = 44100; // sampling rate of the source
int bufSize = sampleRate * 4; // 1 second of audio; 4 is the frame size: 2 channels * 2 bytes per sample
bufSize = Math.max(bufSize, AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT)); // not less than getMinBufferSize returns
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,
        AudioFormat.ENCODING_PCM_16BIT,
        bufSize,
        AudioTrack.MODE_STREAM);
Like user #pskink said, "Most likely your sampleRate (or any other parameter passed to the AudioTrack constructor) is invalid."
So I would start by checking what value you are actually passing as the sample rate.
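For instance, a quick hedged sanity check (audioTrack here stands for whatever track your code constructs):

// Hypothetical check: log the requested vs. native rate and confirm that
// the track actually initialized with the given parameters.
Log.d("Audio", "sampleRate = " + sampleRate + ", native = "
        + AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC));
if (audioTrack.getState() != AudioTrack.STATE_INITIALIZED) {
    Log.e("Audio", "AudioTrack failed to initialize; check constructor parameters");
}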
For reference, you can also set the speed of an AudioTrack by calling its setPlaybackParams method:
public void setPlaybackParams (PlaybackParams params)
If you check the AudioTrack docs, you will find the PlaybackParams docs, which let you set the speed and pitch of the output audio. The object is then passed to setPlaybackParams to apply those parameters to your AudioTrack.
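A minimal sketch, assuming API 23+ (where setPlaybackParams is available):

// Hypothetical example: play at 75% speed without changing pitch (API 23+).
PlaybackParams params = new PlaybackParams();
params.setSpeed(0.75f);
params.setPitch(1.0f);
audioTrack.setPlaybackParams(params);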
However, it is unlikely you will need this if your only issue is the sampleRate passed to the original constructor (we cannot see where the sampleRate variable comes from).

Way to fade out Android AudioTrack on .pause()?

I've been delving into Android AudioTrack against my better interest. I am trying to seamlessly transition between two AudioTrack's playback, that is, one should pause and the other should start and there should be no gap between the two.
This works okay, but I have noticed that when calling the .pause() method on AudioTrack, it will 'pop' or 'crackle' when stopping playback of the sound. This is unsurprising, as suddenly stopping the playback of a sound in this manner (especially if it is at a high point) is bound to create these kinds of artifacts.
However, if I could fade out the playback of the AudioTrack when pause is called, this would be a non-issue. This is easier said than done, however, because it appears Android AudioTracks cannot be modified in place. I also can't use .setVolume() because I am targeting API 17 as my minimum so Android 4.0 users can still use my app.
Is there any way of doing this? My immediate thought was to create a new pause(AudioTrack at) method that would modify the AudioTrack buffer, let it quickly fade out, and then call pause once it had faded. It isn't a huge deal for me if the pause occurs a few frames late, as long as the popping sound is gone. Unfortunately, I don't see an easy way to do this.
Here's what I have so far:
if (event.getAction() == MotionEvent.ACTION_DOWN) {
    audioTracks[noteToPlay].release();
    audioTracks[noteToPlay] = new AudioTrack(AudioManager.STREAM_MUSIC,
            sr, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, BUFFSIZE,
            AudioTrack.MODE_STATIC);
    writeSample(noteToPlay);
    audioTracks[noteToPlay].play();
}
else if (event.getAction() == MotionEvent.ACTION_UP) {
    short[] release = makeReleaseSample(noteToPlay);
    AudioTrack releaseTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
            sr, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, BUFFSIZE,
            AudioTrack.MODE_STATIC);
    releaseTrack.write(release, 0, release.length);
    audioTracks[noteToPlay].pause();
    releaseTrack.play();
}
As you can see, within the ACTION_UP handler I pause the audioTracks[noteToPlay] track and play the release track right after. The pop occurs on the pause, because audioTracks[noteToPlay] contains a sine wave and pause does not stop at a low point in the wave, so it creates artifacts.
Something to note is that the last frame of audioTrack and the first frame of release contain the same sample values, so I know it's not a case of jumping between mismatched points in the two tracks; I am fairly certain it is due to the sudden cut-off of the first audioTrack.
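For illustration, a minimal sketch of the fade-out idea described above (a hypothetical helper, not part of the code shown): apply a short linear ramp to the tail of the buffer before pausing.

// Hypothetical helper: linearly fade out the last rampFrames samples of a
// mono 16-bit buffer so the cut-off lands at (near) zero amplitude.
private static void applyFadeOut(short[] buffer, int rampFrames) {
    int start = Math.max(0, buffer.length - rampFrames);
    int rampLength = buffer.length - start;
    for (int i = start; i < buffer.length; i++) {
        // gain ramps from 1.0 at 'start' down toward 0.0 at the last sample
        float gain = 1f - (float) (i - start) / rampLength;
        buffer[i] = (short) (buffer[i] * gain);
    }
}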
Any ideas?

Android AudioTrack stream cuts out early

I'm trying to play some looping sound in Android, and I have that going pretty well for me. All good things must come to an end, though, and I would like for that to include my audio loop. However, if I call AudioTrack.release() after this loop, as I should, the end of my audio stream gets cut off - there is extra data that I know I'm supposed to hear, but don't.
I've verified this by putting in a Thread.sleep(2000) before the release - the sound plays correctly with that in there. My code looks something like this:
// Initialize the AudioTrack
int minBufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, 2 * minBufferSize, AudioTrack.MODE_STREAM);
mAudioTrack.play();

// Play looping sound
while (stuff) {
    mAudioTrack.write(stuff);
}

// Play one last bit of sound before returning
mAudioTrack.write(lastSound);

// Block until the AudioTrack has played everything we've given it
Thread.sleep(2000);

// Get rid of the AudioTrack
mAudioTrack.release();
I suppose I could leave the Thread.sleep(2000) in there and call it a day, but that seems messy and irresponsible to me. I'd rather either block in a while() loop for exactly the right amount of time, or use AudioTrack.setPlaybackPositionUpdateListener() and call release() from there.
If I go the first route, I need something to poll, and AudioTrack.getPlayState() appears to always report the track as playing. So I'm stuck there.
If I go the second route, I need a way of getting the last position written in the AudioTrack buffer, so I can tell the AudioTrack which position I'm waiting for it to play up to. I don't have any ideas as to how to get that information, though.
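For what it's worth, a hedged sketch of the second route (assuming mono 16-bit PCM, where the count returned by the short[] write equals the frame count):

// Hypothetical sketch: count frames as they are written, then ask the track
// to notify us when playback reaches the last written frame.
int framesWritten = 0;
// ... inside the write loop:
// framesWritten += mAudioTrack.write(buffer, 0, buffer.length);

mAudioTrack.setNotificationMarkerPosition(framesWritten);
mAudioTrack.setPlaybackPositionUpdateListener(new AudioTrack.OnPlaybackPositionUpdateListener() {
    @Override
    public void onMarkerReached(AudioTrack track) {
        track.release(); // everything queued has been played
    }

    @Override
    public void onPeriodicNotification(AudioTrack track) {
        // unused
    }
});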
I guess I don't really care which way I do it, so any help towards solving the problem one way or the other would be much appreciated.
The problem is related to the buffer size of the AudioTrack.
Imagine minBufferSize is 8k. This means the AudioTrack will only start playing once the buffer is full.
mAudioTrack.write(stuff);
If stuff is only 4k, the AudioTrack will wait for the next write call until it has enough data to play.
Conclusion: you need to keep track of how much data you have written, and at the end of playback feed the AudioTrack enough dummy bytes to complete minBufferSize. To make things easier, you could simply feed a whole minBufferSize worth of silence.
By the way, to feed dummy data or silence, just fill the buffer with zeroes.
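A minimal sketch of that padding, assuming the mono 16-bit setup from the question (minBufferSize is in bytes, so the short array holds half that many samples):

// Hypothetical flush: write one minBufferSize worth of silence so the final
// partial buffer is pushed out and actually played before releasing.
short[] silence = new short[minBufferSize / 2]; // Java arrays are zero-filled
mAudioTrack.write(silence, 0, silence.length);
mAudioTrack.stop();    // in MODE_STREAM, stop() plays out what was written
mAudioTrack.release();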

Android active noise cancellation

I'm working on a somewhat ambitious project to achieve active noise reduction on Android with earbuds or headphones on.
My objective is to record ambient noise with the Android phone's mic, invert the phase (a simple *-1 on the short values pulled from the AudioRecord?), and play that inverted waveform back through the headphones. If the latency and amplitude are close to correct, it should nullify a good amount of mechanical, structured noise in the environment.
Here's what I've got so far:
@Override
public void run()
{
    Log.i("Audio", "Running Audio Thread");
    AudioRecord recorder = null;
    AudioTrack track = null;
    short[][] buffers = new short[256][160];
    int ix = 0;

    /*
     * Initialize buffer to hold continuously recorded audio data, start recording, and start
     * playback.
     */
    try
    {
        int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10);
        //NoiseSuppressor ns = NoiseSuppressor.create(recorder.getAudioSessionId());
        //ns.setEnabled(true);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10, AudioTrack.MODE_STREAM);
        recorder.startRecording();
        track.play();
        /*
         * Loops until something outside of this thread stops it.
         * Reads the data from the recorder and writes it to the audio track for playback.
         */
        while (!stopped)
        {
            short[] buffer = buffers[ix++ % buffers.length];
            N = recorder.read(buffer, 0, buffer.length);
            for (int iii = 0; iii < buffer.length; iii++) {
                //Log.i("Data", "Value: " + buffer[iii]);
                buffer[iii] *= -1; // invert the phase of each sample
            }
            track.write(buffer, 0, buffer.length);
        }
    }
    catch (Throwable x)
    {
        Log.w("Audio", "Error reading voice audio", x);
    }
    /*
     * Frees the thread's resources after the loop completes so that it can be run again
     */
    finally
    {
        if (recorder != null) {
            recorder.stop();
            recorder.release();
        }
        if (track != null) {
            track.stop();
            track.release();
        }
    }
}
I was momentarily excited to find that the Android API already has a NoiseSuppressor (you'll see it commented out above). I tested it and found it wasn't doing much to null out constant tones, which leads me to believe it's actually just performing a band-pass filter at non-vocal frequencies.
So, my questions:
1) The above code takes about 250-500 ms from mic recording to playback in the headphones. That latency sucks, and it would be great to reduce it. Any suggestions would be appreciated.
2) Regardless of how tight the latency is, my understanding is that the playback waveform WILL have a phase offset from the actual ambient noise waveform. This suggests I need to perform some kind of waveform matching to calculate and compensate for this offset. Thoughts on how that gets calculated?
3) When it comes to compensating for latency, what would that look like? I've got an array of shorts coming in every cycle, so what would a 30 ms or 250 ms latency look like?
I'm aware of the fundamental problems with this approach, namely that the phone not being next to the head is likely to introduce some error, but I'm hopeful that with some dynamic or fixed latency correction it may be possible to overcome it.
Thanks for any suggestions.
Even if you were able to do something about the latency, this is a difficult problem: you don't know the distance of the phone from the ear, that distance is not fixed (the user will move the phone), and you don't have a microphone at each ear (so you can't know what the wave will be at an ear until after it has arrived there, even with zero latency).
Having said that, you might be able to cancel highly periodic waveforms. All you could really do is let the user manually adjust the time delay for each ear; since you have no microphones near the ears themselves, your code has no way of knowing whether it's making the problem better or worse.
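As a rough sketch of what such a manual delay adjustment might look like (hypothetical values, using the 8000 Hz mono format from the question; this only illustrates the "what does 30 ms look like" part):

// Hypothetical: convert a user-tuned delay in milliseconds into a sample
// offset, then delay playback once by writing that much silence up front.
int sampleRate = 8000;
int delayMs = 30;                               // user-adjustable
int delaySamples = sampleRate * delayMs / 1000; // 240 samples for 30 ms
short[] delayPad = new short[delaySamples];     // zero-filled = silence
track.write(delayPad, 0, delayPad.length);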

Playing back sound coming from microphone in real-time

I've been trying to get my application to record sound from the microphone and play it back in (approximately) real time, but without success.
I'm using the AudioRecord and AudioTrack classes for recording and playback, respectively. I've tried different approaches: recording the incoming sound and writing it to a file worked fine, and playing back sound from that file AFTERWARD with AudioTrack worked fine too. The problem is when I try to play the sound in real time, instead of reading a file after it's written.
Here is the code:
//variables
private int audioSource = MediaRecorder.AudioSource.MIC;
private int samplingRate = 44100; /* in Hz */
private int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
private int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
private int bufferSize = AudioRecord.getMinBufferSize(samplingRate, channelConfig, audioFormat);
private int sampleNumBits = 16;
private int numChannels = 1;
// …
byte[] data = new byte[bufferSize]; // capture buffer (declaration assumed; not shown in the original snippet)
AudioRecord recorder = new AudioRecord(audioSource, samplingRate, channelConfig, audioFormat, bufferSize);
recorder.startRecording();
isRecording = true;
AudioTrack audioPlayer = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize, AudioTrack.MODE_STREAM);
if (audioPlayer.getPlayState() != AudioTrack.PLAYSTATE_PLAYING)
    audioPlayer.play();

//capture data and play it back
int readBytes = 0, writtenBytes = 0;
do {
    readBytes = recorder.read(data, 0, bufferSize);
    if (AudioRecord.ERROR_INVALID_OPERATION != readBytes) {
        writtenBytes += audioPlayer.write(data, 0, readBytes);
    }
} while (isRecording);
A java.lang.IllegalStateException is thrown, caused by "play() called on a uninitialized AudioTrack".
However, if I change the AudioTrack initialization to use, for example, a sampling rate of 8000 Hz and an 8-bit sample format (instead of 16), it no longer throws the exception and the application runs, although it produces horrible noise.
When I play an AudioTrack from a file, there is no problem initializing it; I tried 44100 Hz and 16 bits and it worked properly, producing the correct sound.
Any help ?
All native Android audio is encoded. You can only play PCM formats in real time, or use a special streaming codec, which I don't think is trivial on Android.
The point is that if you want to record and play audio simultaneously, you have to create your own audio buffer and store raw PCM audio samples in it (I'm not sure if you're thinking "duh!" or whether this is all over your head, so I'll try to be clear without being patronizing).
PCM is a digital representation of an analog signal in which the audio samples are a set of "snapshots" of the original acoustic wave. Because all kinds of clever mathematicians and engineers saw the potential of reducing the number of bits used to represent this data, they came up with all sorts of encoders. The encoded (compressed) signal is represented very differently from the raw PCM signal and has to be decoded (en-cod-er + dec-oder = codec). Unless you're using special algorithms and media streaming codecs, it's impossible to play back an encoded signal the way you're trying to, because it's not encoded sample by sample but frame by frame, and you need the whole frame of samples, if not the complete signal, to decode that frame.
The way to do it is to manually store the audio samples coming from the microphone buffer and manually feed them to the output buffer. You will have to do some coding for that, but I believe there are some open-source apps whose source you can take a peek at (unless you're planning to sell your app later on, of course, but that's a whole different discussion).
If you're developing for Android 2.3 or later and are not too scared of programming in native code, you can try using OpenSL ES. The Android-specific features of OpenSL ES are listed here. This platform allows somewhat more flexible audio manipulation, and you might find just what you need if your app relies heavily on audio processing.
A java.lang.IllegalStateException is thrown, caused by "play() called on a uninitialized AudioTrack".
That is because the buffer size is too small. I tried bufferSize += 2048; and it worked after that.
I had a similar problem and I solved it by adding this permission to the manifest file:
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
Make sure that your data variable is large enough for the samplingRate.
For example, if you use a samplingRate of 44100, your data byte array's length should be 44101 or more.
