I've been delving into Android AudioTrack against my better judgment. I am trying to transition seamlessly between the playback of two AudioTracks; that is, one should pause and the other should start, with no gap between the two.
This works okay, but I have noticed that when calling the .pause() method on AudioTrack, it will 'pop' or 'crackle' when stopping playback of the sound. This is unsurprising, as suddenly stopping the playback of a sound in this manner (especially if it is at a high point) is bound to create these kinds of artifacts.
However, if I could fade out the playback of the AudioTrack when pause is called, this would be a non-issue. This is easier said than done, because it appears an AudioTrack's buffer cannot be modified in place. I also can't use .setVolume(), which was added in API 21, because I am targeting API 17 (Android 4.2) as my minimum so users on older devices can still use my app.
Is there any way of doing this? My immediate thought was to create a pause(AudioTrack at) method that would modify the AudioTrack's buffer so the sound quickly fades out, and then call pause() once it had faded. It isn't a huge deal for me if the pause occurs a few frames late if it means the popping sound will be gone. Unfortunately, I don't see an easy way to do this.
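To illustrate, the kind of tail fade I have in mind would be something like this (just a rough sketch run on the PCM data before writing it to the track; applyFadeOut and fadeFrames are my own placeholder names, not an existing API):

private static void applyFadeOut(short[] pcm, int fadeFrames) {
    // Ramp the last fadeFrames samples linearly down to zero so playback
    // ends at silence instead of cutting off mid-wave.
    int start = Math.max(0, pcm.length - fadeFrames);
    int span = pcm.length - start;
    if (span <= 0) return;
    for (int i = start; i < pcm.length; i++) {
        float gain = (float) (pcm.length - 1 - i) / span;
        pcm[i] = (short) (pcm[i] * gain);
    }
}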
Here's what I have so far:
if (event.getAction() == MotionEvent.ACTION_DOWN) {
    audioTracks[noteToPlay].release();
    audioTracks[noteToPlay] = new AudioTrack(AudioManager.STREAM_MUSIC,
            sr, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, BUFFSIZE,
            AudioTrack.MODE_STATIC);
    writeSample(noteToPlay);
    audioTracks[noteToPlay].play();
} else if (event.getAction() == MotionEvent.ACTION_UP) {
    short[] release = makeReleaseSample(noteToPlay);
    AudioTrack releaseTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
            sr, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, BUFFSIZE,
            AudioTrack.MODE_STATIC);
    releaseTrack.write(release, 0, release.length);
    audioTracks[noteToPlay].pause();
    releaseTrack.play();
}
As you can see, within the ACTION_UP handler I pause the audioTracks[noteToPlay] track and play the release track right after. The pop occurs on the pause, because audioTracks[noteToPlay] contains a sine wave and pause() does not stop at a low point of the wave, so it creates artifacts.
Something to note is that the last frame of audioTrack and the first frame of release contain the same sample, so I know it's not a case of jumping from one point in the first track to a different point in the second -- I am fairly certain it is due to the sudden cut-off of the first track.
Any ideas?
I would like to modify Android OS (an official image from AOSP) to add preprocessing to the playback audio of a normal phone call.
I've already achieved this filtering for app audio playback (by modifying HAL and audioflinger).
I'm OK with targeting only a specific device (Nexus 5X). Also, I only need to filter playback - I don't care about recording (uplink).
UPDATE #1:
To make it clear - I'm OK with modifying Qualcomm-specific drivers, or whatever part that it is that runs on Nexus 5X and can help me modify in-call playback.
UPDATE #2:
I'm attempting to create a Java layer app that routes the phone playback to the music stream in real time.
I've already succeeded in installing it as a system app, getting permissions for initializing AudioRecord with AudioSource.VOICE_DOWNLINK. However, the recording gives blank samples; it doesn't record the voice call.
This is the code inside my worker thread:
// Start recording
int recBufferSize = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mRecord = new AudioRecord(MediaRecorder.AudioSource.VOICE_DOWNLINK, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, recBufferSize);

// Start playback
int playBufferSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, playBufferSize, AudioTrack.MODE_STREAM);

mRecord.startRecording();
mTrack.play();

int bufSize = 1024;
short[] buffer = new short[bufSize];
int res;
while (!interrupted()) {
    // Pull recorded samples and play them back on the music stream.
    res = mRecord.read(buffer, 0, bufSize, AudioRecord.READ_NON_BLOCKING);
    if (res > 0) {
        // Only write what was actually read; res can be 0 or an error code.
        mTrack.write(buffer, 0, res, AudioTrack.WRITE_BLOCKING);
    }
}

// Stop recording
mRecord.stop();
mRecord.release();
mRecord = null;

// Stop playback
mTrack.stop();
mTrack.release();
mTrack = null;
I'm running on a Nexus 5X with my own AOSP custom ROM, Android 7.1.1. I need to find the place that will allow call recording to work - probably somewhere in hardware/qcom/audio/hal in the platform code.
I've also been looking at the function voice_check_and_set_incall_rec_usecase in hardware/qcom/audio/hal/voice.c. However, I wasn't able to make sense of it (how to make it work the way I want it to).
UPDATE #3:
I've opened a more specific question about using AudioSource.VOICE_DOWNLINK, which might draw the right attention and eventually help solve this question's problem as well.
There are several possible issues that come to mind. The blank buffer might indicate that you have the wrong source selected. Also, according to https://developer.android.com/reference/android/media/AudioRecord.html#AudioRecord(int,%20int,%20int,%20int,%20int), you won't always get an exception even if something is wrong with the configuration, so you should confirm that your object has actually been initialized properly. If all else fails, you could also call mRecord.setPreferredDevice() to route the phone's built-in earpiece directly to the input of your recorder. Yeah, it's kinda dirty and hacky, but perhaps it suits the purpose.
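To make both of those suggestions concrete, here's a rough sketch (untested; it assumes a Context is available, and whether the earpiece is exposed as a routable device for capture at all is device-dependent; note that setPreferredDevice() takes an AudioDeviceInfo rather than a bare type constant, and requires API 23+):

// 1. Confirm the AudioRecord actually initialized - the constructor does
//    not always throw on a bad configuration.
if (mRecord.getState() != AudioRecord.STATE_INITIALIZED) {
    Log.e("CallFilter", "AudioRecord failed to initialize"); // placeholder log tag
}

// 2. Look up the built-in earpiece and set it as the preferred device.
AudioManager am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
for (AudioDeviceInfo device : am.getDevices(AudioManager.GET_DEVICES_ALL)) {
    if (device.getType() == AudioDeviceInfo.TYPE_BUILTIN_EARPIECE) {
        mRecord.setPreferredDevice(device);
        break;
    }
}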
The other thing that puzzled me is that instead of using the builder class, you've configured the object directly via its constructor. Is there a specific reason why you don't want to use AudioRecord.Builder (there's even a nice example at https://developer.android.com/reference/android/media/AudioRecord.Builder.html) instead?
I am creating an AudioTrack with the following definition.
audioTrack = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        44100,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        buffer.length * 2,
        AudioTrack.MODE_STATIC);
audioTrack.write(buffer, 0, buffer.length);
audioTrack.setPositionNotificationPeriod(500);
audioTrack.setNotificationMarkerPosition(buffer.length);
progressListener = new PlaybackProgress(buffer.length);
audioTrack.setPlaybackPositionUpdateListener(progressListener);
When the audioTrack finishes, the following is called to stop the audio and reset the head position.
private void resetAudioPlayback() {
    ViewGroup.LayoutParams params = playbackView.getLayoutParams();
    params.width = 0;
    playbackView.setLayoutParams(params);
    audioTrack.stop();
    audioTrack.reloadStaticData();
    playImage.animate().alpha(1f).setDuration(500).start(); // alpha is a 0..1 fraction
}
The above code works perfectly fine on Android 5.1, but I am having issues on 4.4.4. audioTrack.stop() is called but the audio does not stop; since reloadStaticData() rewinds the audio back to the start position, it replays the audio. On 5.1 it correctly stops and resets the buffer back to the start of the playback, and when the play button is pressed, it plays from the beginning.
Can someone help me fix this issue on Android 4.4.4?
I'm not absolutely certain if this will solve your problem, but consider using pause() instead of stop(). According to the documentation, stop() for MODE_STREAM will actually keep playing the remainder of the last buffer that was written. You're using MODE_STATIC, but it might be worth trying.
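In resetAudioPlayback(), the swap would look like this (just a sketch of the substitution, untested on 4.4.4):

audioTrack.pause();            // pause() halts output immediately
audioTrack.reloadStaticData(); // then rewind the static buffer as before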
Also (possibly unrelated): write() returns the number of elements actually written, so you shouldn't depend on a single call filling the AudioTrack's entire buffer every time. Treat write() like an OutputStream write, in that it may not consume the whole array it was given. It's better to write a loop, check how much has been written on each call, and continue writing from a new index in the array until the sum of all the writes equals the length of the buffer.
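Such a write loop might look like this (a sketch assuming a short[] buffer of 16-bit samples and a track in MODE_STREAM):

int written = 0;
while (written < buffer.length) {
    int result = audioTrack.write(buffer, written, buffer.length - written);
    if (result < 0) {
        break; // an error code such as ERROR_INVALID_OPERATION; don't loop forever
    }
    written += result;
}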
I'm trying to play some looping sound in Android, and I have that going pretty well for me. All good things must come to an end, though, and I would like for that to include my audio loop. However, if I call AudioTrack.release() after this loop, as I should, the end of my audio stream gets cut off - there is extra data that I know I'm supposed to hear, but don't.
I've verified this by putting in a Thread.sleep(2000) before the release - the sound plays correctly with that in there. My code looks something like this:
// Initialize the AudioTrack
int minBufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, 2 * minBufferSize, AudioTrack.MODE_STREAM);
mAudioTrack.play();

// Play looping sound
while (stuff) {
    mAudioTrack.write(stuff);
}

// Play one last bit of sound before returning
mAudioTrack.write(lastSound);

// Block until the AudioTrack has played everything we've given it
Thread.sleep(2000);

// Get rid of the AudioTrack
mAudioTrack.release();
I suppose I could leave the Thread.sleep(2000) in there and call it a day, but that sounds messy and irresponsible to me. I'd like to either have a while() loop block for the most appropriate amount of time, or use AudioTrack.setPlaybackPositionUpdateListener() and put the release() in there.
If I go the first route, I need something to poll, and AudioTrack.getPlayState() appears to always report the track as playing. So I'm stuck there.
If I go the second route, I need a way of getting the position in the AudioTrack buffer that was written to last, so I can tell the AudioTrack what position I'm waiting for it to play up to. I don't have any ideas as to how to get that information, though.
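For what it's worth, here's roughly what I picture the second route looking like (a sketch; keeping totalFramesWritten accurate is exactly the part I don't know how to do):

// Hypothetical: totalFramesWritten is a running count of frames written
// (one short per frame for 16-bit mono).
mAudioTrack.setNotificationMarkerPosition(totalFramesWritten);
mAudioTrack.setPlaybackPositionUpdateListener(new AudioTrack.OnPlaybackPositionUpdateListener() {
    @Override
    public void onMarkerReached(AudioTrack track) {
        track.release(); // everything written has now been played
    }

    @Override
    public void onPeriodicNotification(AudioTrack track) {
        // not used
    }
});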
I guess I don't really care which way I do it, so any help towards solving the problem one way or the other would be much appreciated.
The problem is related to the buffer size of the AudioTrack.
Imagine minBufferSize is 8k. This means the AudioTrack will only start playing once it has a full buffer's worth of data.
mAudioTrack.write(stuff);
If stuff is only 4K, the AudioTrack will wait until the next call to write() before it has enough data to play.
Conclusion: you need to keep track of how much data you have written, and at the end of your playback feed the AudioTrack enough dummy data to complete minBufferSize. To make things easier, you could just feed a whole minBufferSize worth of silence.
By the way, to feed dummy or silence data, just fill the array with zeroes.
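A sketch of that padding step (assuming 16-bit samples, so minBufferSize bytes corresponds to minBufferSize / 2 shorts):

// Pad the stream with one full minimum buffer of silence so the last real
// samples are flushed out of the AudioTrack's internal buffer.
short[] silence = new short[minBufferSize / 2]; // Java zero-fills new arrays
int written = 0;
while (written < silence.length) {
    int result = mAudioTrack.write(silence, written, silence.length - written);
    if (result <= 0) break; // error code; bail out
    written += result;
}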
I'm working on a somewhat ambitious project to get active noise reduction achieved on Android with earbuds or headphones on.
My objective is to record ambient noise with the Android phone mic, invert the phase (a simple *-1 on the short values pulled from the AudioRecord?), and play back that inverted waveform through the headphones. If the latency and amplitude are close to correct, it should nullify a good amount of mechanical, structured noise in the environment.
Here's what I've got so far:
@Override
public void run()
{
    Log.i("Audio", "Running Audio Thread");
    AudioRecord recorder = null;
    AudioTrack track = null;
    short[][] buffers = new short[256][160];
    int ix = 0;

    /*
     * Initialize buffer to hold continuously recorded audio data, start recording, and start
     * playback.
     */
    try
    {
        int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10);
        //NoiseSuppressor ns = NoiseSuppressor.create(recorder.getAudioSessionId());
        //ns.setEnabled(true);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10, AudioTrack.MODE_STREAM);
        recorder.startRecording();
        track.play();
        /*
         * Loops until something outside of this thread stops it.
         * Reads the data from the recorder, inverts the phase, and writes it to the
         * audio track for playback.
         */
        while (!stopped)
        {
            short[] buffer = buffers[ix++ % buffers.length];
            N = recorder.read(buffer, 0, buffer.length);
            if (N > 0) {
                // Only invert and play back the samples that were actually read;
                // read() can return fewer than buffer.length, or an error code.
                for (int iii = 0; iii < N; iii++) {
                    buffer[iii] *= -1; // invert the phase
                }
                track.write(buffer, 0, N);
            }
        }
    }
    catch (Throwable x)
    {
        Log.w("Audio", "Error reading voice audio", x);
    }
    /*
     * Frees the thread's resources after the loop completes so that it can be run again.
     */
    finally
    {
        if (recorder != null) {
            recorder.stop();
            recorder.release();
        }
        if (track != null) {
            track.stop();
            track.release();
        }
    }
}
I was momentarily excited to find that the Android API already has a NoiseSuppressor effect (you'll see it commented out above). I tested with it and found that NoiseSuppressor wasn't doing much to null out constant tones, which leads me to believe it's actually just performing a band-pass filter at non-vocal frequencies.
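(For anyone trying those commented-out lines: NoiseSuppressor.create() can return null on devices that don't support the effect, so the safe form is:)

if (NoiseSuppressor.isAvailable()) {
    NoiseSuppressor ns = NoiseSuppressor.create(recorder.getAudioSessionId());
    if (ns != null) {
        ns.setEnabled(true);
    }
}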
So, my questions:
1) The above code takes about 250-500ms from mic record through playback in headphones. This latency sucks and it would be great to reduce it. Any suggestions there would be appreciated.
2) Regardless of how tight the latency is, my understanding is that the playback waveform WILL have phase offset from the actual ambient noise waveform. This suggests I need to execute some kind of waveform matching to calculate this offset and compensate. Thoughts on how that gets calculated?
3) When it comes to compensating for latency, what would that look like? I've got an array of shorts coming in every cycle, so what would a 30ms or 250ms latency look like?
I'm aware of a fundamental problem with this approach: since the phone is not right next to the head, its location is likely to introduce some error. But I'm hopeful that with either dynamic or fixed latency correction it may be possible to overcome it.
Thanks for any suggestions.
Even if you were able to do something about the latency, it's a difficult problem: you don't know the distance of the phone from the ear, that distance is not fixed (the user will move the phone), and you don't have a microphone at each ear (so you can't know what the wave will be at one ear until after it's got there, even with zero latency).
Having said that, you might be able to do something that cancels highly periodic waveforms. All you could do, though, is allow the user to manually adjust the time delay for each ear - as you have no microphones near the ears themselves, your code has no way to know whether it's making the problem better or worse.
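If you experiment with such a manual delay, a minimal sketch is a circular buffer between the read and the write (delaySamples being the user-adjustable value, assumed positive; at the 8 kHz sample rate above, 30 ms is 240 samples and 250 ms is 2000):

// Circular delay line: output lags input by delaySamples samples.
short[] delayLine = new short[delaySamples];
int pos = 0;

// Inside the read/write loop, after inverting the n samples just read:
for (int i = 0; i < n; i++) {
    short delayed = delayLine[pos]; // sample from delaySamples ago
    delayLine[pos] = buffer[i];     // store the current sample
    buffer[i] = delayed;            // emit the delayed sample instead
    pos = (pos + 1) % delayLine.length;
}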
I want to play a short sound (.ogg) on Android and tried SoundPool.
The sound should be played several times, so I used SoundPool's loop parameter. On my Nexus 4 (JB 4.3), the loop parameter gets ignored and the sound is only played once.
It seems to be a bug in soundpool:
Soundpool not looping in android 4.3
What is the best alternative for soundpool to play a short sound and repeating that sound?
This issue was discussed in the Android issue tracker (http://code.google.com/p/android/issues/detail?id=58113).
There is apparently no straightforward workaround.
With AudioTrack, sound looping can be achieved with the setLoopPoints() API call.
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRateInHz, AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, totalNumOfSamples,
        AudioTrack.MODE_STATIC);
audioTrack.write(pcmdata, 0, pcmdata.length);
// setLoopPoints() takes frame counts, not array lengths: for 16-bit stereo
// byte data one frame is 4 bytes (hence the division by 4 here); for 16-bit
// mono short data the end frame would simply be pcmdata.length.
audioTrack.setLoopPoints(0, pcmdata.length/4, -1);
audioTrack.play();
I have found a solution to the looping problem.
I don't know how, but it works with this limitation:
SoundPool can only loop sound files that are smaller than 1 MB.