I want to play float arrays on Android. I wanted to take what I thought was the easy route first, so I decided to use AudioTrack. I chose static mode because the audio stream is not continuous. The documentation says:
In static buffer mode, copies the data to the buffer starting at offset 0, and the write mode is ignored. Note that the actual playback of this data might occur after this function returns.
So initially my implementation looked like:
// constructor ...
audioTrack = new AudioTrack(
        new AudioAttributes.Builder()
                .setUsage(AudioAttributes.USAGE_ASSISTANCE_NAVIGATION_GUIDANCE)
                .setContentType(AudioAttributes.CONTENT_TYPE_SONIFICATION)
                .build(),
        new AudioFormat.Builder()
                .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
                .setSampleRate(sample_rate)
                .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
                .build(),
        sample_rate * 2 * 10, // buffer size in bytes
        AudioTrack.MODE_STATIC, AudioManager.AUDIO_SESSION_ID_GENERATE);
[...]
public ... () { // function that plays a sound
    audioTrack.write(sound, 0, sound.length, AudioTrack.WRITE_NON_BLOCKING);
    audioTrack.play();
}
This led to the first problem: a sound could only be played once. I found that I need to call stop() and reloadStaticData() after play(), but then the first sound is not played at all, and any following sounds are played wrongly (the audio itself is wrong).
I tried putting stop() and reloadStaticData() after write() and before play(); then the first sound is played, but the next ones are wrong (they are cut-off versions of the first sound instead of different sounds).
From these experiences it seems to me that write() is not actually writing to the start of the buffer. In general, how to use AudioTrack is completely unclear to me: stop() seems to stop playback, yet I have to call it after play() to reset the playback head position.
It turned out my phone speakers simply couldn't reproduce the sound I was testing with... this is my solution:
public void play_sound(float[] sound) {
    if (audioTrack.getPlayState() == AudioTrack.PLAYSTATE_PLAYING) {
        audioTrack.stop();
        audioTrack.reloadStaticData();
    }
    // Copy the sound into a zero-initialized buffer spanning the whole static buffer,
    // so leftover samples from a previous, longer sound are overwritten.
    float[] buffer = new float[buffer_size_in_bytes / 4]; // 4 bytes per float
    for (int i = 0; i < sound.length; i++) {
        buffer[i] = sound[i];
    }
    // Last arg is ignored in static mode.
    audioTrack.write(buffer, 0, buffer.length, AudioTrack.WRITE_BLOCKING);
    audioTrack.play();
}
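A note on why this seems to work: writing a full-length, zero-padded buffer each time should overwrite whatever a previous, longer sound left in the static buffer, which would explain why the "cut-off" artifacts described above disappear, and the stop()/reloadStaticData() pair resets the playback head so the same track can be played again.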
I have an Android app where some raw audio bytes are stored in a variable.
If I use an AudioTrack to play this audio data, it only works if I use AudioTrack.MODE_STREAM:
byte[] recordedAudioAsBytes;

public void playButtonPressed(View v) {
    // this verifies that audio data exists as expected
    for (int i = 0; i < recordedAudioAsBytes.length; i++) {
        Log.i("ABC", "byte[" + i + "] = " + recordedAudioAsBytes[i]);
    }

    // STREAM MODE ACTUALLY WORKS!!
    /*
    AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLERATE, CHANNELS,
            ENCODING, MY_CHOSEN_BUFFER_SIZE, AudioTrack.MODE_STREAM);
    player.play();
    player.write(recordedAudioAsBytes, 0, recordedAudioAsBytes.length);
    */

    // STATIC MODE DOES NOT WORK
    AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLERATE, PLAYBACK_CHANNELS,
            ENCODING, MY_CHOSEN_BUFFER_SIZE, AudioTrack.MODE_STATIC);
    player.write(recordedAudioAsBytes, 0, recordedAudioAsBytes.length);
    player.play();
}
If I use AudioTrack.MODE_STATIC, the output is glitchy: it just makes a nasty pop and sounds very short, with hardly anything audible.
So why is that? Does MODE_STATIC require the audio data to have a header?
That's all I can think of.
If you'd like to see all the code, check this question.
It seems to me that you are using the same MY_CHOSEN_BUFFER_SIZE for streaming and static mode. That might explain why it sounds so short.
To use AudioTrack's static mode, you have to use the size of your byte array (bigger will also work) as the buffer size. The audio will be treated as one big chunk of data.
See AudioTrack.Builder,
setBufferSizeInBytes(): "If using the AudioTrack in static mode (see AudioTrack#MODE_STATIC), this is the maximum size of the sound that will be played by this instance."
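As a rough sketch of that fix (untested), reusing the question's SAMPLERATE, ENCODING, and recordedAudioAsBytes, and assuming a mono channel mask, the static track could be built with its buffer sized to the data (AudioTrack.Builder requires API 23+):

// Static mode: size the buffer to the actual sound.
AudioTrack player = new AudioTrack.Builder()
        .setAudioFormat(new AudioFormat.Builder()
                .setSampleRate(SAMPLERATE)
                .setEncoding(ENCODING)
                .setChannelMask(AudioFormat.CHANNEL_OUT_MONO) // assumption: mono recording
                .build())
        .setBufferSizeInBytes(recordedAudioAsBytes.length)    // >= the size of the sound
        .setTransferMode(AudioTrack.MODE_STATIC)
        .build();
player.write(recordedAudioAsBytes, 0, recordedAudioAsBytes.length);
player.play();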
Sometimes I want my audio output in Oboe to play nothing, but I don't want it to stop; I just want it to be silent while no data arrives. I tried:
static void writeBlankData(float* pointer, int numFrames) {
    std::fill_n(pointer, numFrames, 0);
}

oboe::DataCallbackResult PlayRecordingCallback::onAudioReady(
        oboe::AudioStream *audioStream,
        void *audioData,
        int numFrames) {
    float *floatData = (float *) audioData;
    writeBlankData(floatData, numFrames);
    return oboe::DataCallbackResult::Continue;
}
but I hear buzzing on the audio output instead of silence. Shouldn't an array of 0s be silence? I also tried -1.0f, which gives a different buzzing.
The most likely cause is that the stream is stereo, so it has 2 samples per frame, while your current code assumes a mono stream.
Try changing:
writeBlankData(floatData, numFrames);
To:
writeBlankData(floatData, numFrames * audioStream->getChannelCount());
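To make the arithmetic concrete: a callback that delivers, say, 192 frames on an interleaved stereo float stream is actually handing you 192 * 2 = 384 floats. Zeroing only the first 192 floats clears just the first 96 frames and leaves the second half of the buffer holding whatever stale data was there, which is what comes out as buzzing.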
I'm having some trouble getting the AudioTrack class to do what I want it to do. I'm trying to make an app that first lets the user draw a single-cycle waveform on the screen and then outputs an arbitrary number of cycles of that waveform at an arbitrary frequency through the headphone out. For the primary use, the frequency will be below the audible range, something like 1/60 Hz to 20 Hz (if anyone's familiar with Eurorack/modular synths, I'm hoping to use the headphone out as a CV source).
The problem I'm having is that the AudioTrack outputs a highly inaccurate reproduction of the waveform at the low end of this frequency range, even though it outputs an accurate reproduction at the high end. The lower the frequency gets, the more the waveform gets 'squished' to the left on my oscilloscope. The pictures below show this on a waveform that is supposed to start each cycle low and rise linearly to a max value, but the same thing happens with other wave shapes too.
So far, I have the app set up (1) to create an ArrayList to hold the data points from the user's input, (2) to convert the ArrayList into a float[] to feed to the AudioTrack, and (3) to setup the AudioTrack and write the float[] to it when needed. Since the ArrayList and float[] both maintain an accurate reproduction of the waveform, I'm pretty sure my problem is at (3), or else it's a hardware limitation of my phone/scope.
Here's the relevant code for the AudioTrack.
Method that (re)initializes the AudioTrack:
public void updateTrackForWave(Waveform wave, Context context) {
    if (wave.getInterpolatedWaveData() == null) return;
    int sampleRate = Integer.parseInt(
            ((AudioManager) context.getSystemService(Context.AUDIO_SERVICE))
                    .getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE));
    if (wave.getOutputChannel() == Waveform.OutputChannelEnum.left) {
        if (mTrackL != null) mTrackL.release();
        mTrackL = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_FLOAT,
                Float.BYTES * wave.getInterpolatedWaveData().length,
                AudioTrack.MODE_STATIC);
        mTrackL.setVolume(AudioTrack.getMaxVolume());
    } else if (wave.getOutputChannel() == Waveform.OutputChannelEnum.right) {
        if (mTrackR != null) mTrackR.release();
        mTrackR = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_FLOAT,
                Float.BYTES * wave.getInterpolatedWaveData().length,
                AudioTrack.MODE_STATIC);
        mTrackR.setVolume(AudioTrack.getMaxVolume());
    }
}
Method that plays AudioTrack:
private void playWaveOnTrack(Waveform wave, AudioTrack track) {
    if (track == null) return;
    if (track.getPlayState() == AudioTrack.PLAYSTATE_PLAYING) {
        return;
    } else {
        if (wave.isCycleMode()) track.setLoopPoints(0,
                wave.getInterpolatedWaveData().length, -1);
        else if (wave.isOneShotMode()) track.setLoopPoints(0,
                wave.getInterpolatedWaveData().length, 0);
        if (track.getPlayState() == AudioTrack.PLAYSTATE_PAUSED) {
            track.play();
        } else if (track.getPlayState() == AudioTrack.PLAYSTATE_STOPPED) {
            track.write(wave.getInterpolatedWaveData(), 0,
                    wave.getInterpolatedWaveData().length, AudioTrack.WRITE_BLOCKING);
            track.play();
        }
    }
}
It's worth noting that wave.getInterpolatedWaveData() returns the float[] from (2) above.
Anyway, any help you can spare on this would be greatly appreciated! I'm not sure if I've made a mistake in the code I have, if there's some code I should add (maybe an AudioEffect of some kind?), if I'm asking too much of my phone, or what.
PS I'm new here and to Android programming in general, so please do point out any forum norms I should be following but am not, or any alternative coding approaches I might not know about.
Pictures: oscilloscope captures of the waveform at 100 Hz, at 10 Hz, and at 2 Hz.
It's a hardware limitation of your phone. To avoid a constant DC current flowing through each headphone speaker, the outputs are capacitively coupled: each channel is in series with one or more capacitors. This gives your output waveform a time constant; the average voltage level of the waveform always tends toward 0 V (GND), which makes it impossible to output DC, or even very low-frequency signals.
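To put a rough number on it: the coupling capacitor and the headphone load form a first-order high-pass filter with cutoff f_c = 1 / (2 * pi * R * C). With illustrative values (assumptions, not measurements of your phone) of C = 100 uF and R = 32 ohms, f_c = 1 / (2 * pi * 32 * 0.0001) ≈ 50 Hz, so your entire 1/60 Hz to 20 Hz target range sits below the cutoff, where the filter differentiates the waveform into the 'squished' ramps in your pictures.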
I'm trying to record from the MIC directly to a short array.
The goal is not to write the audio to a file, just to save it in a short array.
I've tried several methods, and the best I've found is recording with AudioRecord and playing it back with AudioTrack. I found a good class here:
Android: Need to record mic input
This class does everything I need; I just have to modify it to achieve my desired result, but... I don't get it quite right, I'm missing something...
Here's my modification (not working at all):
private class Audio extends Thread {
    private boolean stopped = false;

    /**
     * Give the thread high priority so that it's not canceled unexpectedly, and start it
     */
    private Audio() {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        start();
    }

    @Override
    public void run() {
        Log.i("Audio", "Running Audio Thread");
        AudioRecord recorder = null;
        AudioTrack track = null;
        //short[][] buffers = new short[256][160];
        int ix = 0;

        /*
         * Initialize buffer to hold continuously recorded audio data, start recording, and start
         * playback.
         */
        try {
            int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
            short[] buff = new short[N];
            recorder.startRecording();

            /*
             * Loops until something outside of this thread stops it.
             * Reads the data from the recorder and writes it to the audio track for playback.
             */
            while (!stopped) {
                //Log.i("Map", "Writing new data to buffer");
                //short[] buffer = buffer[ix++ % buffer.length];
                N = recorder.read(buff, 0, buff.length);
            }
            recorder.stop();
            recorder.release();

            track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
            track.play();
            for (int i = 0; i < buff.length; i++) {
                track.write(buff, i, buff.length);
            }
        } catch (Exception x) {
            //Log.e("Audio", x.getMessage());
            x.printStackTrace();
        } finally {
            track.stop();
            track.release();
        }
    }

    /**
     * Called from outside of the thread in order to stop the recording/playback loop
     */
    private void close() {
        stopped = true;
    }
}
What I need is to record the sound into the short array buffer and, when the user pushes a button, play it... but right now I'm trying to record the sound and, when the user pushes a button, stop the recording and start playing it...
Can anyone help me?
Thanks.
You need to restructure the code to do what you want it to do. If I understand correctly, you want to read sound until 'stopped' is set to true, then play the data.
Just so you understand: that is potentially a lot of buffered data, depending on how long the recording runs. You could write it to a file or store a series of buffers in some abstract data type.
Just to get something working, create a Vector of short[], allocate a new short[] buffer inside your 'while(!stopped)' loop, and stuff it into the Vector (see the sketch below).
After the while loop stops, you can iterate through the Vector and write the buffers to the AudioTrack.
As you now understand, the blip you were hearing was just the last 20 ms or so of audio, since your buffer only kept that last little bit.
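Here is a minimal sketch of that restructuring (illustrative, not tested), keeping the question's 8000 Hz mono 16-bit setup and the 'stopped' flag from your thread, and using an ArrayList in place of the Vector:

// Record chunks until stopped, keeping every chunk instead of only the last one.
java.util.List<short[]> chunks = new java.util.ArrayList<>();
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
while (!stopped) {
    short[] chunk = new short[N];                     // a fresh buffer per read, so nothing is overwritten
    int read = recorder.read(chunk, 0, chunk.length);
    if (read > 0) chunks.add(chunk);                  // (the last chunk may carry trailing zeros)
}
recorder.stop();
recorder.release();

// Later, e.g. when the user presses play: write every saved chunk to an AudioTrack.
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
track.play();
for (short[] chunk : chunks) {
    track.write(chunk, 0, chunk.length);              // blocking write in stream mode
}
track.stop();
track.release();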
I am encountering problems playing looping sounds with SoundPool and .OGG files. I have a HashMap set up to find the sound associated with a name and play or stop it:
public void playLoopSound(String soundName) {
    currentSound = (Integer) soundMap.get(soundName);
    if (currentSound != -1) {
        try {
            Logger.log("Playing Loop Sound: " + currentSound);
            loopingSound = soundPool.play(currentSound, 1, 1, 0, -1, 1);
        } catch (Exception e) {
            Logger.log("Sound Playing Error: " + e.getMessage());
        }
    } else {
        Logger.log("Sound Not Found");
    }
}

public void stopLoopSound() {
    soundPool.stop(loopingSound);
    loopingSound = 0;
}
This setup works fine: I start the loop when the character starts walking and stop it when it stops walking.
The sound would, however, stop playing randomly, usually a minute or so after it had been in use (being turned on and off)...
Has anyone else encountered similar problems with SoundPool and looped sounds?
Reading the documentation on SoundPool clears up a lot:
- To play a sound, you pass an int referring to the loaded sound. This method (and the others that start a sound playing), soundPool.play(int soundID, ...), returns another int: the stream ID of the stream in which the sound now plays.
- When you want to stop the sound (or the looping sound), you have to use the stream ID you got back when you started playing the sound, NOT the int of the sound itself!
In your class, then, you keep something like int streamID = mSoundPool.play(soundID, leftVolume, rightVolume, priority, loop, rate);
To then stop the sound (or pause it, or change its looping) you call mSoundPool.stop(streamID);
TLDR: there's a difference between the int that denotes your sound and the internal int SoundPool uses to denote the stream in which your sound is currently playing.
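A tiny sketch of the distinction (the resource name is made up for illustration):

// load() returns a sound ID; play() returns a stream ID. They are unrelated integers.
int soundID = soundPool.load(context, R.raw.walk_loop, 1);  // hypothetical OGG in res/raw
int streamID = soundPool.play(soundID, 1f, 1f, 0, -1, 1f);  // loop = -1: loop forever
// ... later, when the character stops walking ...
soundPool.stop(streamID);   // pass the stream ID...
// soundPool.stop(soundID); // ...NOT the sound ID; that would target the wrong stream (or nothing)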