Record data from mobile phone's audio jack - android

I am trying to record data from my mobile phone's audio interface using the AudioRecord class. The following is my code:
public void Initialize() {
    buffersizebytes = AudioRecord.getMinBufferSize(SAMPPERSEC, channelConfiguration, audioEncoding); // 4096 on ion
    buffer = new short[buffersizebytes];
    buflen = buffersizebytes / 2;
    audioRecord = new AudioRecord(
            android.media.MediaRecorder.AudioSource.MIC, SAMPPERSEC,
            channelConfiguration, audioEncoding, buffersizebytes);
    acquire();
    for (int i = 0; i < 4096; i++) buffer[i] = 1;
}

public void acquire() {
    try {
        audioRecord.startRecording();
        mSamplesRead = audioRecord.read(buffer, 0, buffersizebytes);
        audioRecord.stop();
    } catch (Throwable t) {
        // Log.e("AudioRecord", "Recording Failed");
    }
}
I want to put my acquired data into a buffer of 4096 bytes, but my program only puts data into 1024 bytes. Also, the first 432 bytes are zeros, even though I am sending data continuously. What could be the issue?

getMinBufferSize, as the name implies, gives you the minimum buffer size. You can set anything bigger, including 4096.
As for the first samples after initialization, my phone gives two gigantic peaks that last for about 0.5 seconds, so I guess it is caused by the recorder starting up. Try skipping a few samples (say, 500) before processing real data.
Furthermore, since getMinBufferSize returns a size in bytes but buffer is a short array, the length of buffer should be buffersizebytes/2.
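Putting both points together, a minimal sketch of the corrected setup (same SAMPPERSEC, channelConfiguration and audioEncoding constants as in the question):
// getMinBufferSize returns a size in BYTES; a short[] holds two bytes per element
buffersizebytes = AudioRecord.getMinBufferSize(SAMPPERSEC, channelConfiguration, audioEncoding);
buffer = new short[buffersizebytes / 2];

audioRecord = new AudioRecord(
        android.media.MediaRecorder.AudioSource.MIC, SAMPPERSEC,
        channelConfiguration, audioEncoding, buffersizebytes);

audioRecord.startRecording();
// read() on a short[] takes a count in SHORTS, so use the array length
mSamplesRead = audioRecord.read(buffer, 0, buffer.length);
audioRecord.stop();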

Related

How To Record Sound in Android with Better Quality and Reduce Noise

I'm trying to build a music analytics app for the Android platform.
The app uses MediaRecorder.AudioSource.MIC to record music from the MIC and then encodes it to 16-bit PCM at 11025 Hz, but the recorded audio samples are very low quality. Is there any way to make it better and decrease the noise?
mRecordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, minBufferSize);
mRecordInstance.startRecording();
do {
    samplesIn += mRecordInstance.read(audioData, samplesIn, bufferSize - samplesIn);
    if (mRecordInstance.getRecordingState() == AudioRecord.RECORDSTATE_STOPPED)
        break;
} while (samplesIn < bufferSize);
Thanks in Advance
The solution above didn't work for me.
So, I searched around and found this article.
Long story short, I used MediaRecorder.AudioSource.VOICE_RECOGNITION instead of AudioSource.MIC, which gave me really good results and noise in the background did reduce very much.
The great thing about this solution is, it can be used with both AudioRecord and MediaRecorder class.
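With AudioRecord, for example, the only change is the first constructor argument (the remaining parameters are the same ones used in the question):
mRecordInstance = new AudioRecord(
        MediaRecorder.AudioSource.VOICE_RECOGNITION, // instead of AudioSource.MIC
        FREQUENCY, CHANNEL, ENCODING, minBufferSize);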
The best combination of sample rate and buffer size is very device dependent, so your results will vary depending on the hardware. I use this utility to figure out what the best combination is for devices running Android 4.2 and above:
public static DeviceValues getDeviceValues(Context context) {
    try {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        try {
            Method getProperty = AudioManager.class.getMethod("getProperty", String.class);
            Field bufferSizeField = AudioManager.class.getField("PROPERTY_OUTPUT_FRAMES_PER_BUFFER");
            Field sampleRateField = AudioManager.class.getField("PROPERTY_OUTPUT_SAMPLE_RATE");
            int bufferSize = Integer.valueOf((String) getProperty.invoke(am, (String) bufferSizeField.get(am)));
            int sampleRate = Integer.valueOf((String) getProperty.invoke(am, (String) sampleRateField.get(am)));
            return new DeviceValues(sampleRate, bufferSize);
        } catch (NoSuchMethodException e) {
            return selectBestValue(getValidSampleRates(context));
        }
    } catch (Exception e) {
        return new DeviceValues(DEFAULT_SAMPLE_RATE, DEFAULT_BUFFER_SIZE);
    }
}
This uses reflection to check whether the getProperty method is available, because this method was introduced in API level 17. If you are developing for a specific device type, you might want to experiment with various buffer sizes and sample rates. The defaults that I use as a fallback are:
private static final int DEFAULT_SAMPLE_RATE = 22050;
private static final int DEFAULT_BUFFER_SIZE = 1024;
Additionally, I check the various sample rates by seeing whether getMinBufferSize returns a reasonable value for use:
private static List<DeviceValues> getValidSampleRates(Context context) {
    List<DeviceValues> available = new ArrayList<DeviceValues>();
    for (int rate : new int[] {8000, 11025, 16000, 22050, 32000, 44100, 48000, 96000}) { // add the rates you wish to check against
        int bufferSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (bufferSize > 0 && bufferSize < 2048) {
            available.add(new DeviceValues(rate, bufferSize * 2));
        }
    }
    return available;
}
This relies on the logic that getMinBufferSize returns an error value (zero or negative) when the sample rate is not available on the device. You should experiment with these values for your particular use case.
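For completeness, DeviceValues and selectBestValue are not shown in the snippets above; a minimal sketch of what they might look like (choosing the candidate with the smallest minimum buffer is just one plausible heuristic, not the original author's code):
public class DeviceValues {
    public final int sampleRate;
    public final int bufferSize;

    public DeviceValues(int sampleRate, int bufferSize) {
        this.sampleRate = sampleRate;
        this.bufferSize = bufferSize;
    }
}

private static DeviceValues selectBestValue(List<DeviceValues> available) {
    if (available.isEmpty()) {
        return new DeviceValues(DEFAULT_SAMPLE_RATE, DEFAULT_BUFFER_SIZE);
    }
    DeviceValues best = available.get(0);
    for (DeviceValues candidate : available) {
        // prefer the rate whose minimum buffer is smallest (lowest latency)
        if (candidate.bufferSize < best.bufferSize) {
            best = candidate;
        }
    }
    return best;
}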
Though it is an old question, the following solution will be helpful.
We can use MediaRecorder to record audio with ease.
private void startRecording() {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    recorder.setAudioEncodingBitRate(96000);
    recorder.setAudioSamplingRate(44100);
    recorder.setOutputFile(".../audioName.m4a");
    try {
        recorder.prepare();
    } catch (IOException e) {
        Log.e(LOG_TAG, "prepare() failed");
    }
    recorder.start();
}
Note:
MediaRecorder.AudioEncoder.AAC is used because MediaRecorder.AudioEncoder.AMR_NB encoding is no longer supported on iOS. Reference
AudioEncodingBitRate should be either 96000 or 128000, as required for clarity of sound.
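Note that the snippet above never stops the recorder. Assuming recorder is promoted to a field (an adjustment to the original), a matching stop method would look something like:
private void stopRecording() {
    if (recorder != null) {
        recorder.stop();
        recorder.release(); // free the native recorder resources
        recorder = null;
    }
}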

Record MIC sound into byte array in android

I'm trying to record from the MIC directly to a short array.
The goal is not to write a file with the audio track, just to save it in a short array.
I've tried several methods, and the best I've found is recording with AudioRecord and playing it back with AudioTrack. I found a good class here:
Android: Need to record mic input
This class does everything I need; I just have to modify it to achieve my desired result, but... I don't quite get it, I'm missing something...
Here is my modification (not working at all):
private class Audio extends Thread {
    private boolean stopped = false;

    /**
     * Give the thread high priority so that it's not canceled unexpectedly, and start it
     */
    private Audio() {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        start();
    }

    @Override
    public void run() {
        Log.i("Audio", "Running Audio Thread");
        AudioRecord recorder = null;
        AudioTrack track = null;
        //short[][] buffers = new short[256][160];
        int ix = 0;

        /*
         * Initialize buffer to hold continuously recorded audio data, start recording, and start
         * playback.
         */
        try {
            int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
            short[] buff = new short[N];
            recorder.startRecording();

            /*
             * Loops until something outside of this thread stops it.
             * Reads the data from the recorder and writes it to the audio track for playback.
             */
            while (!stopped) {
                //Log.i("Map", "Writing new data to buffer");
                //short[] buffer = buffers[ix++ % buffers.length];
                N = recorder.read(buff, 0, buff.length);
            }
            recorder.stop();
            recorder.release();

            track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
            track.play();
            for (int i = 0; i < buff.length; i++) {
                track.write(buff, i, buff.length);
            }
        } catch (Exception x) {
            //Log.e("Audio", x.getMessage());
            x.printStackTrace();
        } finally {
            track.stop();
            track.release();
        }
    }

    /**
     * Called from outside of the thread in order to stop the recording/playback loop
     */
    private void close() {
        stopped = true;
    }
}
What I need is to record the sound into the short array buffer and, when the user pushes a button, play it... But right now, I'm trying to record the sound and, when the user pushes a button, have the recording stop and the sound start playing...
Can anyone help me?
Thanks.
You need to restructure the code to do what you want it to do. If I understand correctly, you want to read sound until 'stopped' is set to true, then play the data.
Just so you understand, that is potentially a lot of buffered data, depending on how long the recording time is. You could write it to a file or store a series of buffers in some abstract data type.
Just to get something working, create a Vector of short[], allocate a new short[] buffer in your 'while(!stopped)' loop, and then stuff it into the vector, as in the sketch below.
After the while loop stops, you can iterate through the vector and write the buffers to the AudioTrack.
As you now understand, the blip you were hearing was just the last 20 ms or so of audio, since your buffer only kept that last little bit.
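A minimal sketch of that approach, adapted from the loop in the question (N, recorder and track are set up exactly as before):
List<short[]> recorded = new ArrayList<short[]>(); // or a Vector<short[]> if accessed from several threads

while (!stopped) {
    short[] chunk = new short[N]; // fresh buffer each pass, so earlier audio is never overwritten
    int read = recorder.read(chunk, 0, chunk.length);
    if (read > 0) {
        recorded.add(chunk);
    }
}
recorder.stop();
recorder.release();

// later, when the user presses play:
track.play();
for (short[] chunk : recorded) {
    track.write(chunk, 0, chunk.length);
}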

Android sound recorder bad performance

//constructor
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

//thread run() method
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
try {
    while (!stopped) {
        //if not paused, upload audio
        if (uploadAudio == true) {
            short[][] buffers = new short[256][160];
            int ix = 0;
            //allocate buffer for audio data
            short[] buffer = buffers[ix++ % buffers.length];
            //read audio data from the recorder
            N = recorder.read(buffer, 0, buffer.length);
            //create byte array big enough to hold audio data
            byte[] bytes2 = new byte[buffer.length * 2];
            //convert audio data from short[] to byte[]
            ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
            //encode audio data as ulaw
            read(bytes2, 0, bytes2.length);
            //send audio data
            //os.write(bytes2,0,bytes2.length);
        }
    }
    os.close();
} catch (Throwable x) {
    Log.w("AudioWorker", "Error reading voice AudioWorker", x);
} finally {
    recorder.stop();
    recorder.release();
}
See here for the ulaw encoder code. I'm using the read, maxAbsPcm and encode methods.
So this works OK. The audio is sent in the proper format to the server and played at the opposite end. However, the audio often skips. Example: saying 1, 2, 3, 4 will play back with the 4 cut off.
I believe it to be a performance issue, because I have timed some of these methods: when they take essentially no time everything works, but they quite often take a couple of seconds, with the byte conversion and encoding taking the most.
Any idea how I can optimize this code to get better performance? Or maybe a way to deal with the lag (possibly build a cache)?
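One common way to attack this is exactly the cache idea from the question: let the recording loop do nothing but read(), and hand each buffer to a worker thread that does the conversion, encoding and sending. A minimal sketch of that split (the queue and thread structure are mine, not from the original code):
final BlockingQueue<short[]> pending = new LinkedBlockingQueue<short[]>();

// recording loop: reads only, never blocks on encoding or network I/O
while (!stopped) {
    short[] buffer = new short[160];
    recorder.read(buffer, 0, buffer.length);
    pending.offer(buffer);
}

// worker thread: drains the queue and does the slow work
new Thread(new Runnable() {
    @Override
    public void run() {
        try {
            while (true) {
                short[] buffer = pending.take();
                byte[] bytes2 = new byte[buffer.length * 2];
                ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
                // ulaw-encode and send bytes2 here, as in the original loop
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // exit the worker cleanly
        }
    }
}).start();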

Playing music with AudioTrack buffer by buffer on Eclipse - no sound

I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the general purpose is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried about everything, and there is still no sound coming out of my speakers (while Android's integrated MediaPlayer has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which implements the audio track. It receives a buffer in which the sound is contained.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Erreur lors de l'instantiation de AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (440 Hz), for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        //First fill about 300ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        //numFramesBegin is for example 10000
        //For debugging only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            //Afterwards fill like 200ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    //a pure A (frequency=440)
    //this is probably wrong due to sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine, it does its job;
buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's playing state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding short buffer input: the specified write size should be the number of elements in the buffer (buffer.length) and not the size in bytes.
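Applied to the addToQueue method above, the fix is to pass the element count. Note that Short.SIZE is in bits (16), so the original call overstated the size considerably:
audioTrack.write(buffer, 0, buffer.length); // size in shorts, not buffer.length * Short.SIZE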

AudioRecord and AudioTrack echo

I'm streaming the mic audio between two devices. Everything is working, but I have a bad echo.
Here is what I'm doing:
Reading thread
int sampleFreq = 22050;
int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int minBuffer = 2 * AudioTrack.getMinBufferSize(sampleFreq, channelConfig, audioFormat);

AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleFreq,
        channelConfig,
        audioFormat,
        minBuffer,
        AudioTrack.MODE_STREAM);
atrack.play();

byte[] buffer = new byte[minBuffer];
while (true) {
    try {
        // Read from the InputStream
        bytes = mmInStream.read(buffer);
        atrack.write(buffer, 0, buffer.length);
        atrack.flush();
    } catch (IOException e) {
        Log.e(TAG, "disconnected", e);
        break;
    }
}
Here is the recording thread:
int sampleRate = 22050;
int channelMode = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int buffersize = 2 * AudioTrack.getMinBufferSize(sampleRate, channelMode, audioFormat);

AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelMode,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);
buffer = new byte[buffersize];
arec.startRecording();

while (true) {
    arec.read(buffer, 0, buffersize);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                mOutputStream.write(buffer);
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }).start();
}
Am I doing something wrong?
You need echo cancellation logic. Here is what I did on my ARMv5 (WM8650) processor (Android 2.2) to remove the echo.
1. I wrapped Speex with JNI and called its echo processing routines before sending PCM frames to the encoder. No echo was cancelled, no matter what Speex settings I tried.
2. Because Speex is very sensitive to the delay between playback and echo frames, I implemented a queue and queued all packets sent to AudioTrack. The size of the queue should be roughly equal to the size of the internal AudioTrack buffer. This way, packets were sent to echo_playback roughly at the time when AudioTrack sent packets from its internal buffer to the sound card. The delay was removed with this approach, but the echo was still not cancelled.
3. I wrapped the WebRtc echo cancellation part with JNI and called its methods before sending packets to the encoder. The echo was still present, but the library was obviously trying to cancel it.
4. I applied the buffer technique described in point 2, and it finally started to work. The delay needs to be adjusted for each device, though. Note also that WebRtc has mobile and full versions of echo cancellation. The full version substantially slows the processor and should probably be run on ARMv7 only. The mobile version works, but with lower quality.
I hope this will help someone.
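For illustration, here is a rough sketch of the delay-queue idea from point 2. The echoPlayback binding stands in for whatever JNI wrapper you write around the Speex echo state; the method name and the queue depth are mine, not from the original:
// Hypothetical JNI binding around the Speex echo canceller (illustrative name only)
private static native void echoPlayback(short[] frame);

private final Queue<short[]> delayQueue = new ArrayDeque<short[]>();
private static final int QUEUE_FRAMES = 8; // tune until it roughly matches AudioTrack's internal buffering

void playFrame(AudioTrack track, short[] frame) {
    track.write(frame, 0, frame.length);
    delayQueue.add(frame);
    // hand the canceller the frame that is approximately being played right now,
    // i.e. one full queue depth behind the frame we just wrote
    if (delayQueue.size() >= QUEUE_FRAMES) {
        echoPlayback(delayQueue.poll());
    }
}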
Could be this:
bytes = mmInStream.read(buffer);
atrack.write(buffer, 0, buffer.length);
If the buffer remained full from a previous call and the new read does not fill it (so bytes < buffer.length), you replay the old tail of the track.
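A minimal fix along those lines, writing only the bytes actually received:
bytes = mmInStream.read(buffer);
if (bytes > 0) {
    atrack.write(buffer, 0, bytes); // play only what this read returned
}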
