Sample code for Android AudioTrack mixing

I have two PCM sound files in my resource folder. I read each one with an InputStream and converted it into a byte array.
Then I processed them by normalizing the samples, adding music1 and music2, and writing the result to the output byte array. Finally, I fed the output array to the AudioTrack.
I don't hear anything, so something is obviously wrong.
private void mixSound() throws IOException {
    InputStream in1 = getResources().openRawResource(R.raw.cheerapp2);
    InputStream in2 = getResources().openRawResource(R.raw.buzzer2);

    byte[] music1 = null;
    music1 = new byte[in1.available()];
    music1 = convertStreamToByteArray(in1);
    in1.close();

    byte[] music2 = null;
    music2 = new byte[in2.available()];
    music2 = convertStreamToByteArray(in2);
    in2.close();

    byte[] output = new byte[music1.length];
    audioTrack.play();

    for (int i = 0; i < output.length; i++) {
        float samplef1 = music1[i] / 128.0f; // 2^7=128
        float samplef2 = music2[i] / 128.0f;
        float mixed = samplef1 + samplef2;
        // reduce the volume a bit:
        mixed *= 0.8;
        // hard clipping
        if (mixed > 1.0f) mixed = 1.0f;
        if (mixed < -1.0f) mixed = -1.0f;
        byte outputSample = (byte) (mixed * 128.0f);
        output[i] = outputSample;
        audioTrack.write(output, 0, i);
    } // for loop
}

public static byte[] convertStreamToByteArray(InputStream is) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buff = new byte[10240];
    int i = Integer.MAX_VALUE;
    while ((i = is.read(buff, 0, buff.length)) > 0) {
        baos.write(buff, 0, i);
    }
    return baos.toByteArray(); // be sure to close the InputStream in the calling function
}

I tried your code (substituting in some audio files of my own). I initialised an AudioTrack instance like this, which is hopefully similar to how you did it:
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
        44100, AudioTrack.MODE_STREAM);
And I tried running it. It made a high-pitched noise that got lower as time went on. I checked the code, and the problem is that you are writing the entire output byte array to the AudioTrack on every iteration of the loop in your mixSound() method.
The line
audioTrack.write(output, 0, i);
needs to be moved outside the loop and changed to
audioTrack.write(output, 0, output.length);
So you mix both files together into the output byte array, then write the whole thing at once.
So the code for the working mixSound method looks like this:
private void mixSound() throws IOException {
    AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            44100, AudioTrack.MODE_STREAM);

    InputStream in1 = getResources().openRawResource(R.raw.track1);
    InputStream in2 = getResources().openRawResource(R.raw.track2);

    byte[] music1 = null;
    music1 = new byte[in1.available()];
    music1 = convertStreamToByteArray(in1);
    in1.close();

    byte[] music2 = null;
    music2 = new byte[in2.available()];
    music2 = convertStreamToByteArray(in2);
    in2.close();

    byte[] output = new byte[music1.length];
    audioTrack.play();

    for (int i = 0; i < output.length; i++) {
        float samplef1 = music1[i] / 128.0f; // 2^7=128
        float samplef2 = music2[i] / 128.0f;
        float mixed = samplef1 + samplef2;
        // reduce the volume a bit:
        mixed *= 0.8;
        // hard clipping
        if (mixed > 1.0f) mixed = 1.0f;
        if (mixed < -1.0f) mixed = -1.0f;
        byte outputSample = (byte) (mixed * 128.0f);
        output[i] = outputSample;
    } // for loop
    audioTrack.write(output, 0, output.length);
}
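One detail worth knowing about this arrangement: with AudioTrack.MODE_STREAM, the last constructor argument (44100 here) is the size of the track's internal buffer in bytes, and a blocking write() of a larger array simply waits until all the data has been queued, which is why writing the whole mixed array in one call works.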

Related

Implementing a high pass audio filter in android

I'm trying to implement a high-pass audio filter on the microphone data that I get from the AudioRecord.
The data I get from the microphone is a 16-bit PCM audio byte array. I was trying to use TarsosDSP, which provides an API for high-pass filtering. However, it requires a float array as input, so I converted the bytes into a float array and ran the high-pass filter. To check the results I saved the filtered data to a WAV file, but it sounds totally distorted.
public static byte[] highPassFilter(byte[] buffer, WaveHeader waveHeader, float frequency) {
    HighPass highPass = new HighPass(frequency, waveHeader.getSampleRate());
    TarsosDSPAudioFormat format = new TarsosDSPAudioFormat(waveHeader.getSampleRate(),
            waveHeader.getBitsPerSample(), waveHeader.getChannels(), true, false);
    AudioEvent audioEvent = new AudioEvent(format);
    float[] f_buffer = bytesToFloats(buffer);
    audioEvent.setFloatBuffer(f_buffer);
    highPass.process(audioEvent);
    buffer = audioEvent.getByteBuffer();
    byte[] data = PCMtoWav(buffer, waveHeader.getSampleRate(), waveHeader.getChannels(),
            waveHeader.getBitsPerSample());
    writeWavFile(data);
    return buffer;
}

public static float[] bytesToFloats(byte[] bytes) {
    float[] floats = new float[bytes.length / 2];
    for (int i = 0; i < bytes.length; i += 2) {
        floats[i / 2] = bytes[i] | (bytes[i + 1] < 128 ? (bytes[i + 1] << 8) : ((bytes[i + 1] - 256) << 8));
    }
    return floats;
}
The data in the waveHeader is:
Sample rate = 11025
getBitsPerSample = 16
getChannels = 1
My best guess is that the bytesToFloats conversion is wrong. To verify this, I just set the float buffer of the audioEvent with audioEvent.setFloatBuffer and then retrieved it with audioEvent.getByteBuffer, which also resulted in a totally distorted audio file.
The byte buffer is read from the AudioRecord:
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, 220500);
....
buffer = new byte[frameByteSize];
audioRecord.read(buffer, 0, frameByteSize);
Does anybody have any idea how to fix this, or suggestions for a different high-pass filter that I could use on a byte array in Android?
Update: I figured it out. This is my updated function to convert from bytes to floats:
public static float[] bytesToFloats(byte[] bytes) {
    float[] floats = new float[bytes.length / 2];
    short[] shorts = new short[bytes.length / 2];
    ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts);
    for (int i = 0; i < bytes.length; i += 2) {
        floats[i / 2] = shorts[i / 2] / 32768f;
    }
    return floats;
}
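(The LITTLE_ENDIAN order here matches the byte order AudioRecord delivers for 16-bit PCM on Android devices, which is presumably why this version works where the manual conversion did not.)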
Do the two-byte samples represent float values? They could be signed shorts within the range -32,768 to 32,767. Also, for floating-point representation of samples, values within the range -1.0 to 1.0 are common.
I would try:
short sample = (short) ((bytes[i] & 0xFF) | (bytes[i + 1] << 8)); // mask the low byte to avoid sign extension
floats[i / 2] = sample / 32768f;
You need to convert each pair of bytes into a signed short and then scale it to a float in the range -1.0 to 1.0.
One of the following lines, depending on the endianness of the data, will convert to a signed 16-bit sample:
short shortSample = (short) ((bytes[i] & 0xFF) | (bytes[i + 1] << 8)); // little-endian
short shortSample = (short) ((bytes[i] << 8) | (bytes[i + 1] & 0xFF)); // big-endian
And then scale to float:
float sample = shortSample / 32768f;
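For completeness, here is a minimal sketch of the reverse conversion (floats back to little-endian 16-bit PCM bytes), which would be needed before writing the filtered samples back out to a WAV file. The helper name floatsToBytes is made up for this sketch, and it assumes the same mono, 16-bit, little-endian format as the question:

// Sketch: convert floats in [-1.0, 1.0] back to little-endian 16-bit PCM bytes.
// Clamp before scaling so filter overshoot cannot wrap around.
public static byte[] floatsToBytes(float[] floats) {
    byte[] bytes = new byte[floats.length * 2];
    for (int i = 0; i < floats.length; i++) {
        float f = Math.max(-1.0f, Math.min(1.0f, floats[i]));
        short s = (short) (f * 32767f);
        bytes[2 * i] = (byte) (s & 0xFF);            // low byte first (little-endian)
        bytes[2 * i + 1] = (byte) ((s >> 8) & 0xFF); // high byte
    }
    return bytes;
}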

getting noise as output instead of mixed sounds

When I run the following code I get no sound as output; instead it gives me noise.
I have two audio files in my resource folder, and each is converted to a byte array using an InputStream. If I use MP3 files instead, the app crashes.
private void mixSound() throws IOException {
    AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            44100, AudioTrack.MODE_STREAM);
    Log.i(tag, "inside mixSound");

    InputStream in1 = getResources().openRawResource(R.raw.cut1);
    InputStream in2 = getResources().openRawResource(R.raw.cut2);

    byte[] music1 = null;
    music1 = new byte[in1.available()];
    Log.i(tag, "in1");
    music1 = convertStreamToByteArray(in1);
    in1.close();

    byte[] music2 = null;
    music2 = new byte[in2.available()];
    music2 = convertStreamToByteArray(in2);
    in2.close();

    byte[] output = new byte[music1.length];
    audioTrack.play();

    for (int i = 0; i < output.length; i++) {
        float samplef1 = music1[i] / 128.0f; // 2^7=128
        float samplef2 = music2[i] / 128.0f;
        float mixed = samplef1 + samplef2;
        // reduce the volume a bit:
        mixed *= 0.8;
        // hard clipping
        if (mixed > 1.0f) mixed = 1.0f;
        if (mixed < -1.0f) mixed = -1.0f;
        byte outputSample = (byte) (mixed * 128.0f);
        output[i] = outputSample;
    } // for loop
    audioTrack.write(output, 0, output.length);
}

public static byte[] convertStreamToByteArray(InputStream is) throws IOException {
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    byte[] buff = new byte[10240];
    int i = Integer.MAX_VALUE;
    Log.i(tag, "in csb");
    while ((i = is.read(buff, 0, buff.length)) > 0) {
        baos.write(buff, 0, i);
    }
    return baos.toByteArray();
}
Thank you in advance for your help.
A few issues here...
If you are working with 16-bit PCM audio (which, by your initialization of AudioTrack, it appears you are), then you should read your source audio and write to your AudioTrack in shorts (which are 16 bits) rather than bytes (8 bits). If you must read bytes from your source, you'll need to read two of them at a time in your loop and do something like
short curSample = (short) ((myByteArr[i] << 8) | (myByteArr[i + 1] & 0xFF));
and then write the result to your stored buffer. This assumes you have 16-bit samples stored in the files you're reading from, which you should. Better to just read them as what they are, though.
Using AudioTrack.MODE_STREAM implies you will write continuously to the buffer while audio is playing. The way you've done it here fills the entire buffer and then writes it to the AudioTrack. If this is a one-off playback, you should probably use AudioTrack.MODE_STATIC.
This is a corner case, but consider what happens if mixed == 1.0f. If you multiply that by 128.0f and truncate to a byte, you'll get 128, which is beyond the range of a signed byte (because of 0, the range is [-128, 127]).
I believe problem #1 is the source of your noise. You need to keep your 16-bit PCM data intact rather than splitting it up.
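To illustrate point 1, here is a minimal sketch of a mixing loop that keeps the 16-bit samples intact by combining each pair of little-endian bytes into a short before mixing. The helper name mixPcm16 and the little-endian assumption are mine, not from the original post:

// Sketch: mix two 16-bit little-endian PCM byte arrays sample by sample.
// Assumes both inputs use the same format; output length is the shorter of the two.
public static byte[] mixPcm16(byte[] a, byte[] b) {
    byte[] out = new byte[Math.min(a.length, b.length) & ~1]; // force even length
    for (int i = 0; i < out.length; i += 2) {
        short s1 = (short) ((a[i] & 0xFF) | (a[i + 1] << 8));
        short s2 = (short) ((b[i] & 0xFF) | (b[i + 1] << 8));
        int mixed = (int) ((s1 + s2) * 0.8f);      // attenuate a bit, then clip
        if (mixed > Short.MAX_VALUE) mixed = Short.MAX_VALUE;
        if (mixed < Short.MIN_VALUE) mixed = Short.MIN_VALUE;
        out[i] = (byte) (mixed & 0xFF);            // low byte (little-endian)
        out[i + 1] = (byte) ((mixed >> 8) & 0xFF); // high byte
    }
    return out;
}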

Android AudioRecord MP3 encoding AudioFormat.CHANNEL_IN_STEREO

I seem to be stuck with this problem. I am trying to get
https://github.com/yhirano/SimpleLameLibForAndroid
to work with channelConfig AudioFormat.CHANNEL_IN_STEREO.
The code below works perfectly if I call it with channelConfig = AudioFormat.CHANNEL_IN_MONO, but not with STEREO.
I have played around with
short[] buffer = new short[mSampleRate * (16 / 8) * nChannels * 5];
byte[] mp3buffer = new byte[(int) (7200 + buffer.length * 2 * 1.25)];
but cannot seem to get it working. I mean, it works, but the recorded sound is very, very slow. Listen to this example: https://dl.dropboxusercontent.com/u/1465252/1381762795295.mp3
There seems to be another similar question, Lame encoded mp3 audio slowed down - Android, without a solution.
Can anybody help?
Here is the code:
new Mp3Audio(MediaRecorder.AudioSource.MIC, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, 128);
public Mp3Audio(int audioSource, int sampleRate, int channelConfig, int audioFormat, int bitRate) {
    if (sampleRate <= 0) {
        throw new InvalidParameterException("Invalid sample rate specified.");
    }
    mSampleRate = sampleRate;
    mBitRate = bitRate;
    if (channelConfig == AudioFormat.CHANNEL_IN_MONO) {
        nChannels = 1;
    } else {
        nChannels = 2;
    }
    builder = new Builder(mSampleRate, nChannels, mSampleRate, mBitRate);
    //builder = new Builder(44100, 1, 44100, 128);
    builder.quality(6);
    mEncoder = builder.create();
    cAmplitude = 0;
    payloadSize = 0;
    aFormat = audioFormat;
    aSource = audioSource;
    mChannelConfig = channelConfig;
}
public void start() {
    final int minBufferSize = AudioRecord.getMinBufferSize(mSampleRate, mChannelConfig, aFormat) * mBufferSizeFactor;
    if (minBufferSize < 0) {
        AppHelper.Log(tag, "MSG_ERROR_GET_MIN_BUFFERSIZE");
        return;
    }
    AppHelper.Log(tag, "minBufferSize: " + AppHelper.humanReadableByteCount(minBufferSize, true));

    aRecorder = new AudioRecord(aSource, mSampleRate, mChannelConfig, aFormat, minBufferSize);

    short[] buffer = new short[mSampleRate * (16 / 8) * nChannels * 5]; // SampleRate[Hz] * 16 bit * nChannels * 5 sec
    AppHelper.Log(tag, "buffer: " + AppHelper.humanReadableByteCount(buffer.length, true));

    byte[] mp3buffer = new byte[(int) (7200 + buffer.length * 2 * 1.25)];
    AppHelper.Log(tag, "mp3buffer: " + AppHelper.humanReadableByteCount(mp3buffer.length, true));
......
.......
To give you a pointer: you need to invoke lame_encode_buffer_interleaved() if you record with 2 channels (stereo).
It took me a few days to figure it out; this is the code you can use:
if (lame_get_num_channels(glf) == 2) {
    result = lame_encode_buffer_interleaved(glf, j_buffer_l, samples / 2, j_mp3buf, mp3buf_size);
} else {
    result = lame_encode_buffer(glf, j_buffer_l, j_buffer_r, samples, j_mp3buf, mp3buf_size);
}
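(If I read the LAME API correctly, num_samples for lame_encode_buffer_interleaved() counts samples per channel, which would explain the samples / 2 in the interleaved call: the buffer holds samples shorts for both channels together. Encoding interleaved stereo as if it were mono doubles the apparent duration, which matches the "very, very slow" recording described above.)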

How to generate and play a 20Hz square wave with AudioTrack?

I'm trying to generate and play a square wave with AudioTrack (Android). I've read lots of tutorials, but I still have some confusion.
int sampleRate = 44100;
int channelConfig = AudioFormat.CHANNEL_IN_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
AudioTrack audioTrack;
int buffer = AudioTrack.getMinBufferSize(sampleRate, channelConfig, audioFormat);
audioTrack.write(short[] audioData, int offsetInShorts, int sizeInShorts);
In this code, what confuses me is how to build the short array "audioData"...
Can anyone help me? Thanks in advance!
You should use pulse-code modulation. The linked article has an example of encoding a sine wave; a square wave is even simpler. Remember that the maximum amplitude is encoded by the maximum value of short (32767), and that the "effective" frequency depends on your sampling rate.
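For instance, at a 44100 Hz sample rate a 20 Hz square wave has a period of 44100 / 20 = 2205 samples: the first half of each period at +amplitude, the second half at -amplitude. A minimal sketch along those lines (the amplitude and loop count are arbitrary choices, not from the answer above):

// Sketch: generate one period of a 20 Hz square wave and stream it repeatedly.
int sampleRate = 44100;
int samplesPerPeriod = sampleRate / 20; // 2205 samples per 20 Hz period
short amplitude = 16384;                // half of full scale, to leave headroom
short[] audioData = new short[samplesPerPeriod];
for (int i = 0; i < samplesPerPeriod; i++) {
    audioData[i] = (i < samplesPerPeriod / 2) ? amplitude : (short) -amplitude;
}

int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        Math.max(minBuf, samplesPerPeriod * 2), AudioTrack.MODE_STREAM);
track.play();
for (int n = 0; n < 100; n++) {         // 100 periods = 5 seconds of tone
    track.write(audioData, 0, audioData.length);
}
track.stop();
track.release();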
This method generates square, sine, and sawtooth waveforms:
// Process audio
protected void processAudio()
{
    short buffer[];

    int rate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
    int minSize = AudioTrack.getMinBufferSize(rate, AudioFormat.CHANNEL_OUT_MONO,
                                              AudioFormat.ENCODING_PCM_16BIT);

    // Find a suitable buffer size
    int sizes[] = {1024, 2048, 4096, 8192, 16384, 32768};
    int size = 0;
    for (int s : sizes)
    {
        if (s > minSize)
        {
            size = s;
            break;
        }
    }

    final double K = 2.0 * Math.PI / rate;

    // Create the audio track
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
                                AudioFormat.CHANNEL_OUT_MONO,
                                AudioFormat.ENCODING_PCM_16BIT,
                                size, AudioTrack.MODE_STREAM);
    // Check audiotrack
    if (audioTrack == null)
        return;

    // Check state
    int state = audioTrack.getState();
    if (state != AudioTrack.STATE_INITIALIZED)
    {
        audioTrack.release();
        return;
    }

    audioTrack.play();

    // Create the buffer
    buffer = new short[size];

    // Initialise the generator variables
    double f = frequency;
    double l = 0.0;
    double q = 0.0;

    while (thread != null)
    {
        // Fill the current buffer
        for (int i = 0; i < buffer.length; i++)
        {
            f += (frequency - f) / 4096.0;
            l += ((mute ? 0.0 : level) * 16384.0 - l) / 4096.0;
            q += (q < Math.PI) ? f * K : (f * K) - (2.0 * Math.PI);

            switch (waveform)
            {
            case SINE:
                buffer[i] = (short) Math.round(Math.sin(q) * l);
                break;

            case SQUARE:
                buffer[i] = (short) ((q > 0.0) ? l : -l);
                break;

            case SAWTOOTH:
                buffer[i] = (short) Math.round((q / Math.PI) * l);
                break;
            }
        }

        audioTrack.write(buffer, 0, buffer.length);
    }

    audioTrack.stop();
    audioTrack.release();
}
Credit goes to billthefarmer.
Complete Source code:
https://github.com/billthefarmer/sig-gen

Mix audio in android

I tried to follow this link:
http://mobilengineering.blogspot.com/2012/06/audio-mix-and-record-in-android.html?showComment=1369622288028#c2333829870074273419
But after mixing the audio files, the resulting file (mixed.wav) on the SD card cannot be played, and I do not know why.
Can you help me? Thank you very much.
This is my code:
public class MainActivity extends Activity {

    public static final int FREQUENCY = 44100;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        try {
            mixSound();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }

    private void mixSound() throws IOException {
        AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                44100, AudioTrack.MODE_STREAM);

        InputStream in1 = getResources().openRawResource(R.raw.media_b);
        InputStream in2 = getResources().openRawResource(R.raw.media_c);

        byte[] arrayMusic1 = null;
        arrayMusic1 = new byte[in1.available()];
        arrayMusic1 = createMusicArray(in1);
        in1.close();

        byte[] arrayMusic2 = null;
        arrayMusic2 = new byte[in2.available()];
        arrayMusic2 = createMusicArray(in2);
        in2.close();

        byte[] output = new byte[arrayMusic1.length];
        audioTrack.play();

        for (int i = 0; i < output.length; i++) {
            float samplef1 = arrayMusic1[i] / 128.0f;
            float samplef2 = arrayMusic2[i] / 128.0f;
            float mixed = samplef1 + samplef2;
            // reduce the volume a bit:
            mixed *= 0.8;
            // hard clipping
            if (mixed > 1.0f) mixed = 1.0f;
            if (mixed < -1.0f) mixed = -1.0f;
            byte outputSample = (byte) (mixed * 128.0f);
            output[i] = outputSample;
        }
        audioTrack.write(output, 0, output.length);
        convertByteToFile(output);
    }

    public static byte[] createMusicArray(InputStream is) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buff = new byte[10240];
        int i = Integer.MAX_VALUE;
        while ((i = is.read(buff, 0, buff.length)) > 0) {
            baos.write(buff, 0, i);
        }
        return baos.toByteArray(); // be sure to close the InputStream in the calling function
    }

    public static void convertByteToFile(byte[] fileBytes) throws FileNotFoundException {
        BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(
                Environment.getExternalStorageDirectory().getPath() + "/mixed.wav"));
        try {
            bos.write(fileBytes);
            bos.flush();
            bos.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
    }
}
What you're outputting is just the PCM data. A valid WAV file also needs a header:
Offset  Size  Name           Description
---------------------------------------------------------------------------
0       4     ChunkID        Contains the letters "RIFF" in ASCII form
                             (0x52494646 big-endian form).
4       4     ChunkSize      36 + SubChunk2Size, or more precisely:
                             4 + (8 + SubChunk1Size) + (8 + SubChunk2Size)
                             This is the size of the rest of the chunk
                             following this number. This is the size of the
                             entire file in bytes minus 8 bytes for the
                             two fields not included in this count:
                             ChunkID and ChunkSize.
8       4     Format         Contains the letters "WAVE"
                             (0x57415645 big-endian form).
12      4     Subchunk1ID    Contains the letters "fmt "
                             (0x666d7420 big-endian form).
16      4     Subchunk1Size  16 for PCM. This is the size of the
                             rest of the Subchunk which follows this number.
20      2     AudioFormat    PCM = 1 (i.e. linear quantization).
                             Values other than 1 indicate some
                             form of compression.
22      2     NumChannels    Mono = 1, Stereo = 2, etc.
24      4     SampleRate     8000, 44100, etc.
28      4     ByteRate       == SampleRate * NumChannels * BitsPerSample/8
32      2     BlockAlign     == NumChannels * BitsPerSample/8
                             The number of bytes for one sample including
                             all channels. I wonder what happens when
                             this number isn't an integer?
34      2     BitsPerSample  8 bits = 8, 16 bits = 16, etc.
        2     ExtraParamSize If PCM, then doesn't exist.
        X     ExtraParams    Space for extra parameters.
36      4     Subchunk2ID    Contains the letters "data"
                             (0x64617461 big-endian form).
40      4     Subchunk2Size  == NumSamples * NumChannels * BitsPerSample/8
                             This is the number of bytes in the data.
                             You can also think of this as the size
                             of the rest of the subchunk following this
                             number.
After this you write the PCM data.
(Reference).
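As a sketch of how that header could be assembled in Java for plain 16-bit PCM (field order and sizes follow the table above; the helper name pcmToWav and the use of java.nio.ByteBuffer are my choices, not from the original post):

// Sketch: prepend the canonical 44-byte RIFF/WAVE header to raw PCM data.
// Requires java.nio.ByteBuffer, java.nio.ByteOrder and java.nio.charset.StandardCharsets.
// All multi-byte header fields are little-endian.
public static byte[] pcmToWav(byte[] pcm, int sampleRate, int channels, int bitsPerSample) {
    int byteRate = sampleRate * channels * bitsPerSample / 8;
    int blockAlign = channels * bitsPerSample / 8;

    ByteBuffer header = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    header.put("RIFF".getBytes(StandardCharsets.US_ASCII));
    header.putInt(36 + pcm.length);                  // ChunkSize
    header.put("WAVE".getBytes(StandardCharsets.US_ASCII));
    header.put("fmt ".getBytes(StandardCharsets.US_ASCII));
    header.putInt(16);                               // Subchunk1Size (16 for PCM)
    header.putShort((short) 1);                      // AudioFormat: 1 = PCM
    header.putShort((short) channels);               // NumChannels
    header.putInt(sampleRate);                       // SampleRate
    header.putInt(byteRate);                         // ByteRate
    header.putShort((short) blockAlign);             // BlockAlign
    header.putShort((short) bitsPerSample);          // BitsPerSample
    header.put("data".getBytes(StandardCharsets.US_ASCII));
    header.putInt(pcm.length);                       // Subchunk2Size

    byte[] wav = new byte[44 + pcm.length];
    System.arraycopy(header.array(), 0, wav, 0, 44);
    System.arraycopy(pcm, 0, wav, 44, pcm.length);
    return wav;
}

Writing the returned array with convertByteToFile() instead of the raw output buffer should then produce a playable mixed.wav.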
