Android AudioRecord: calculate the duration of a PCM buffer

I am sorry if this is a trivial question, but I am new to Android and have spent a few days searching without finding an answer that satisfies me.
I want to record an audio clip of approximately 3 seconds every 30 seconds using an Android phone. Each clip is sent to my PC (over TCP/IP) for further processing.
Here is the code on the Android side (I refer to the code of @TechEnd in this question: Android AudioRecord example):
private final int AUD_RECORDER_SAMPLERATE = 44100; // 44.1 kHz
private final int AUD_RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private final int AUD_RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private final int AUD_RECORDER_BUFFER_NUM_ELEMENTS = 131072; // ~~ 1.486 second ???
private final int AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT = 2;
private AudioRecord audioRecorder = null;
private boolean isAudioRecording = false;
private Runnable runnable = null;
private Handler handler = null;
private final int AUD_RECORDER_RECORDING_PERIOD = 30000; // one fire every 30 seconds
private byte[] bData = new byte[AUD_RECORDER_BUFFER_NUM_ELEMENTS * AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT];

public void start() {
    audioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, AUD_RECORDER_SAMPLERATE,
            AUD_RECORDER_CHANNELS, AUD_RECORDER_AUDIO_ENCODING,
            AUD_RECORDER_BUFFER_NUM_ELEMENTS * AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT);
    audioRecorder.startRecording();
    isAudioRecording = true;
    handler = new Handler();
    runnable = new Runnable() {
        @Override
        public void run() {
            if (isAudioRecording) {
                int nElementRead = audioRecorder.read(bData, 0, bData.length); // read(byte[], ...) returns a count of bytes
                net_send(bData, 0, nElementRead);
            }
            handler.postDelayed(this, AUD_RECORDER_RECORDING_PERIOD);
        }
    };
    handler.postDelayed(runnable, AUD_RECORDER_RECORDING_PERIOD);
}

public void stop() {
    isAudioRecording = false;
    if (audioRecorder != null) {
        audioRecorder.stop();
        audioRecorder.release();
        audioRecorder = null;
    }
    handler.removeCallbacks(runnable);
}

public void net_send(byte[] data, int offset, int nbytes) {
    try {
        dataOutputStream.writeInt(nbytes);            // 4-byte length prefix
        dataOutputStream.write(data, offset, nbytes); // raw PCM payload
    } catch (IOException e) {
        e.printStackTrace();
    }
}
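Note that DataOutputStream.writeInt() emits the length as a 4-byte big-endian prefix before the raw PCM payload, so the receiver must read exactly that many bytes per record. As a minimal sketch, the matching receive logic in Java would look like this (the actual server here is written in C, where the prefix would need ntohl()):

    // Read one length-prefixed record from the TCP stream.
    DataInputStream in = new DataInputStream(socket.getInputStream());
    int nbytes = in.readInt();        // 4-byte big-endian length prefix
    byte[] record = new byte[nbytes];
    in.readFully(record);             // blocks until the whole record has arrived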
On the PC side (a server written in C), after receiving a record (I checked, and they are all 262144 bytes), I first write the byte array to a binary file (with extension .raw) and open it with Free Audio Editor (http://www.free-audio-editor.com/); the reported duration is 1.486 seconds:
https://www.dropbox.com/s/xzml51jzvagl6dy/aud1.PNG?dl=0
Then I convert every two consecutive bytes into a two-byte integer using this function:
short bytes2short(const char num_buf[2])
{
    /* assemble a little-endian 16-bit sample: low byte first, high byte second */
    return (short)(((num_buf[1] & 0xFF) << 8) | (num_buf[0] & 0xFF));
}
and write the result to a file (131072 values) and plot the normalized values with Excel; a similar graph is obtained.
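As an aside, the same little-endian conversion could be done on the Java side with a ByteBuffer; a minimal sketch (not part of the original code):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Convert little-endian 16-bit PCM bytes (as recorded) into shorts.
    static short[] bytesToShorts(byte[] pcm) {
        short[] out = new short[pcm.length / 2];
        ByteBuffer.wrap(pcm).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(out);
        return out;
    }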
As I calculate it, the number of bytes recorded in one second is 44100 (samples/sec) * 1 (sec) * 2 (bytes/sample/channel) * 1 (channel) = 88200 bytes.
So with my buffer of 131072*2 bytes, the corresponding duration should be 262144/88200 = 2.97 seconds. But the result I obtain is just half of that. I tried on three different devices running Android OS versions 2.3.3, 2.3.4 and 4.3 and obtained the same result, so the problem must be on my side.
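For reference, here is the same arithmetic as a small helper (a sketch; the method name is mine, not from the code above):

    // Expected duration of a buffer of uncompressed linear PCM.
    static double pcmDurationSeconds(int numBytes, int sampleRate, int bytesPerSample, int channels) {
        return (double) numBytes / ((double) sampleRate * bytesPerSample * channels);
    }

    // pcmDurationSeconds(262144, 44100, 2, 1) ~ 2.97 s; a tool reporting 1.486 s
    // is interpreting the same bytes with different parameters (e.g. two channels)
    // or only half the expected data is actually present.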
Could anyone tell me where the problem is: in my calculation or in my code? Is my understanding correct?
Any comments or suggestions would be appreciated.

Related

Android Real time silence detection

I am trying to take real-time audio input from the user, and every 300 milliseconds I take the average of the samples. I am using 16-bit PCM samples and a 44100 Hz sample rate.
I read the data from the AudioRecord into a short array, and I print the average value of all the samples every 300 milliseconds.
But the problem is that the average value appears to be random, and it shows large values even when there is silence (which should not happen).
I want to know if I am using the short array in the right way.
public void startRecording() {
    recorder = new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION, SAMPLE_RATE,
            channels, AUDIO_FORMAT, Buffersize);
    recorder.startRecording();
    isRecording = true;
    NoiseSuppressor.create(recorder.getAudioSessionId());
    AcousticEchoCanceler.create(recorder.getAudioSessionId());
    AutomaticGainControl.create(recorder.getAudioSessionId());
    recordingThread = new Thread(new Runnable() {
        public void run() {
            writeAudioData();
        }
    });
    recordingThread.start();
}

private void writeAudioData() {
    short data[] = new short[Buffersize / 2];
    while (isRecording) {
        samplesread = 0;
        value = 0;
        long ftime = System.currentTimeMillis() + 300;
        while (ftime > System.currentTimeMillis()) { // loop for taking the average over 300 ms
            if (isRecording) {
                samplesread += recorder.read(data, 0, Buffersize / 2); // reads the data from the recorder and stores it in the short array
                int i = 0;
                while (i < Buffersize / 2) {
                    //value += (long) (data[i]);
                    value += (long) (data[i] & 0xFFFF); // since the PCM samples are signed, I tried masking to unsigned
                    i++;
                }
            }
        }
        show = value;
        div = samplesread;
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                TextView tv = (TextView) findViewById(R.id.textView);
                tv.setText("" + (show / (div)) + " " + div);
                TextView tv2 = (TextView) findViewById(R.id.textView2);
                tv2.setText(show + " " + System.currentTimeMillis());
            }
        }, 0);
    }
}
The array values consist of signed numbers, so the plain average did not show the desired result. It works when I take RMS values instead of averaging.
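For illustration, a minimal RMS computation over the samples actually read might look like this (a sketch, not the poster's exact code):

    // RMS: squaring removes the sign correctly, so positive and negative samples
    // do not cancel, and (unlike masking with 0xFFFF) small negative samples are
    // not turned into large positive values.
    static double rms(short[] samples, int count) {
        long sumOfSquares = 0;
        for (int i = 0; i < count; i++) {
            sumOfSquares += (long) samples[i] * samples[i];
        }
        return Math.sqrt((double) sumOfSquares / count);
    }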

Play sound from array on Android

Solved: I forgot the track.play(); at the end...
I want to play a sound on my Android smartphone (4.0.4, API level 15).
I tried to play some random noise, but it's not working:
public class Sound {
    private static int length = 22050 * 10; // 10 seconds long
    private static byte[] data = new byte[length];

    static void fillRandom() {
        new Random().nextBytes(data); // create some random noise to listen to
    }

    static void play() {
        fillRandom();
        final int TEST_SR = 22050; // this is from an example I found online
        final int TEST_CONF = AudioFormat.CHANNEL_OUT_MONO;
        final int TEST_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
        final int TEST_MODE = AudioTrack.MODE_STATIC; // I need static mode
        final int TEST_STREAM_TYPE = AudioManager.STREAM_ALARM;
        AudioTrack track = new AudioTrack(TEST_STREAM_TYPE, TEST_SR, TEST_CONF, TEST_FORMAT, length, TEST_MODE);
        track.write(data, 0, length);
    }
}
I have played around with the variables a little, but could not get it to work.
All you have left to do is play it. Add this line to the end of your play() function:
track.play();
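So the end of play() becomes (note that in MODE_STATIC the buffer must be written before play() is called):

    track.write(data, 0, length); // MODE_STATIC: load the whole buffer first...
    track.play();                 // ...then start playback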

Playing music with AudioTrack buffer by buffer on Eclipse - no sound

I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the general purpose is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here, because I have tried about everything and there is still no sound coming out of my speakers (while Android's integrated MediaPlayer has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which wraps the AudioTrack. It receives a buffer in which the sound is contained.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Error while instantiating AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (440 Hz) for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        // First fill about 300 ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000
        // For debugging, only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // Afterwards fill about 200 ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440)
    // this is probably wrong due to the sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
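As an aside, the poster's suspicion in the comment is justified twice over: the inner (short) cast truncates Math.sin(...) to 0 for almost every sample, and the phase should advance by 2*PI*f/sampleRate per sample. A hedged correction (the 44100 Hz rate is assumed, not taken from the snippet):

    // One sample of a pure sine at simulatorFrequency Hz.
    private short simulator(int time, short amplitude) {
        final double sampleRate = 44100.0; // assumed; use the rate given to AudioTrack
        double phase = 2.0 * Math.PI * simulatorFrequency * time / sampleRate;
        return (short) (amplitude * Math.sin(phase)); // cast only after scaling
    }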
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine; it does its job;
the buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's play state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding short-buffer input: the size passed to write() should be the number of elements in the buffer itself (buffer.length), not the size in bytes.
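Applied to the addToQueue() method above, the fix would presumably be:

    // write(short[], int, int) takes a size in shorts, not bytes; also note that
    // Short.SIZE is 16 (bits), so buffer.length * Short.SIZE was doubly wrong.
    audioTrack.write(buffer, 0, buffer.length);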

FSK Modulation and Playing sine Tone in Android

I want to do some FSK modulation over the audio port. The problem is that my sine wave isn't very good; it is disturbed in places. I used the original code from http://marblemice.blogspot.com/2010/04/generate-and-play-tone-in-android.html with the further modifications from Playing an arbitrary tone with Android and https://market.android.com/details?id=re.serialout&feature=search_result .
So where is the failure? What am I doing wrong?
private static int bitRate = 300;
private static int sampleRate = 48000;
private static int freq1 = 600;

public static void loopOnes() {
    playque.add(UARTHigh());
    athread.interrupt();
}

private static byte[] UARTHigh() {
    int numSamples = sampleRate / bitRate;
    double sample[] = new double[numSamples];
    byte[] buffer = new byte[numSamples * 2];
    for (int i = 0; i < numSamples; ++i) {
        sample[i] = Math.sin(2 * Math.PI * i * freq1 / sampleRate);
    }
    int idx = 0;
    for (final double dVal : sample) {
        // scale to maximum amplitude
        final short val = (short) ((dVal * 32767));
        // in 16-bit PCM, the first byte is the low-order byte
        buffer[idx++] = (byte) (val & 0x00ff);
        buffer[idx++] = (byte) ((val & 0xff00) >>> 8);
    }
    return buffer;
}

private static void playSound() {
    active = true;
    while (active) {
        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            while (playque.isEmpty() == false) {
                if (atrk != null) {
                    if (generatedSnd != null) {
                        // let the previous sample finish playing first;
                        // the SystemClock.sleep(xx) value could be tuned
                        while (atrk.getPlaybackHeadPosition() < (generatedSnd.length))
                            SystemClock.sleep(50); // let the existing sample finish first; this could be set to a smarter number using the information above
                    }
                    atrk.release();
                }
                UpdateParameters(); // might as well do it at every iteration, it's cheap
                generatedSnd = playque.poll();
                length = generatedSnd.length;
                if (minbufsize < length)
                    minbufsize = length;
                atrk = new AudioTrack(AudioManager.STREAM_MUSIC,
                        sampleRate, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minbufsize,
                        AudioTrack.MODE_STATIC);
                atrk.setStereoVolume(1, 1);
                atrk.write(generatedSnd, 0, length);
                atrk.play();
            }
            // play queue is empty => send a stop bit!
            // set loop points
            int setLoopError = atrk.setLoopPoints(0, length, -1);
            atrk.play();
        }
    }
}
So the answer is to change from MODE_STATIC to MODE_STREAM and not to use loop points. In a new low-priority thread, a loop writes the buffers.
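A minimal sketch of that approach (identifiers such as playque and active are taken from the code above; the rest is illustrative, not the poster's exact code):

    // MODE_STREAM playback: create the track once, then keep writing buffers
    // from a low-priority thread. write() blocks, which paces the loop.
    final int minBuf = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
    final AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBuf, AudioTrack.MODE_STREAM);
    track.play();

    Thread writer = new Thread(new Runnable() {
        public void run() {
            while (active) {
                byte[] chunk = playque.poll(); // next generated buffer, if any
                if (chunk != null) {
                    track.write(chunk, 0, chunk.length);
                }
            }
        }
    });
    writer.setPriority(Thread.MIN_PRIORITY);
    writer.start();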

Android: AudioRecord Class Problem: Callback is never called

My Android Java application needs to record audio data into RAM and process it.
This is why I use the class "AudioRecord" and not "MediaRecorder" (which records only to a file).
Until now, I used a busy loop polling with "read()" for the audio data. This has been working so far, but it pegs the CPU too much.
Between two polls, I put the thread to sleep to avoid 100% CPU usage. However, this is not really a clean solution, since the sleep time is not guaranteed and you must subtract a safety margin in order not to lose audio snippets. This is not CPU-optimal; I need as many free CPU cycles as possible for a parallel thread.
Now I have implemented the recording using the "OnRecordPositionUpdateListener". This looks very promising and like the right way to do it according to the SDK docs. Everything seems to work (opening the audio device, read()ing the data, etc.), but the listener is never called.
Does anybody know why?
Info:
I am working with a real device, not the emulator. Recording using a busy loop basically works (though it is not satisfying); only the callback listener is never called.
Here is a snippet from my source code:
public class myApplication extends Activity {
    /* audio recording */
    private static final int AUDIO_SAMPLE_FREQ = 16000;
    private static final int AUDIO_BUFFER_BYTESIZE = AUDIO_SAMPLE_FREQ * 2 * 3; // = 3000 ms
    private static final int AUDIO_BUFFER_SAMPLEREAD_SIZE = AUDIO_SAMPLE_FREQ / 10 * 2; // = 200 ms
    private short[] mAudioBuffer = null; // audio buffer
    private int mSamplesRead; // how many samples were just read
    private AudioRecord mAudioRecorder; // audio recorder
    ...
    private OnRecordPositionUpdateListener mRecordListener = new OnRecordPositionUpdateListener() {
        public void onPeriodicNotification(AudioRecord recorder) {
            mSamplesRead = recorder.read(mAudioBuffer, 0, AUDIO_BUFFER_SAMPLEREAD_SIZE);
            if (mSamplesRead > 0) {
                // do something here...
            }
        }

        public void onMarkerReached(AudioRecord recorder) {
            Error("What? Hu!? Where am I?");
        }
    };
    ...
    public void onCreate(Bundle savedInstanceState) {
        try {
            mAudioRecorder = new AudioRecord(
                    android.media.MediaRecorder.AudioSource.MIC,
                    AUDIO_SAMPLE_FREQ,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    AUDIO_BUFFER_BYTESIZE);
        } catch (Exception e) {
            Error("Unable to init audio recording!");
        }
        mAudioBuffer = new short[AUDIO_BUFFER_SAMPLEREAD_SIZE];
        mAudioRecorder.setPositionNotificationPeriod(AUDIO_BUFFER_SAMPLEREAD_SIZE);
        mAudioRecorder.setRecordPositionUpdateListener(mRecordListener);
        mAudioRecorder.startRecording();
        /* test if I can read anything at all... (and yes, this here works!) */
        mSamplesRead = mAudioRecorder.read(mAudioBuffer, 0, AUDIO_BUFFER_SAMPLEREAD_SIZE);
    }
}
I believe the problem is that you still need to do the read loop. If you set up callbacks, they will fire when you've read the number of frames that you specify for the callbacks, but you still need to do the reads. I've tried this, and the callbacks get called just fine. Setting up a marker causes a callback when that number of frames has been read since the start of recording. In other words, you could set the marker far into the future, after many of your reads, and it will fire then. You can set the period to some bigger number of frames, and that callback will fire every time that number of frames has been read. I think they did it this way so you can do low-level processing of the raw data in a tight loop, and every so often your callback can do summary-level processing. You could use the marker to make it easier to decide when to stop recording (instead of counting in the read loop).
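A sketch of that last idea, using a marker instead of counting reads (the stop flag and the ten-second figure are illustrative):

    // Fire onMarkerReached once, after ten seconds' worth of frames have been
    // read; the read loop keeps running until the flag flips.
    recorder.setNotificationMarkerPosition(sampleRate * 10); // marker is in frames
    recorder.setRecordPositionUpdateListener(new AudioRecord.OnRecordPositionUpdateListener() {
        @Override
        public void onMarkerReached(AudioRecord r) {
            stopRequested = true; // checked by the thread that calls read()
        }

        @Override
        public void onPeriodicNotification(AudioRecord r) {
        }
    });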
Here is my code used to find the average noise. Notice that it is based on listener notifications, so it should save device battery. It is definitely based on the examples above; those examples saved me much time, thanks.
private AudioRecord recorder;
private boolean recorderStarted;
private Thread recordingThread;
private int bufferSize = 800;
private short[][] buffers = new short[256][bufferSize];
private int[] averages = new int[256];
private int lastBuffer = 0;

protected void startListenToMicrophone() {
    if (!recorderStarted) {
        recordingThread = new Thread() {
            @Override
            public void run() {
                int minBufferSize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT);
                recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minBufferSize * 10);
                recorder.setPositionNotificationPeriod(bufferSize);
                recorder.setRecordPositionUpdateListener(new OnRecordPositionUpdateListener() {
                    @Override
                    public void onPeriodicNotification(AudioRecord recorder) {
                        short[] buffer = buffers[++lastBuffer % buffers.length];
                        recorder.read(buffer, 0, bufferSize);
                        long sum = 0;
                        for (int i = 0; i < bufferSize; ++i) {
                            sum += Math.abs(buffer[i]);
                        }
                        averages[lastBuffer % buffers.length] = (int) (sum / bufferSize);
                        lastBuffer = lastBuffer % buffers.length;
                    }

                    @Override
                    public void onMarkerReached(AudioRecord recorder) {
                    }
                });
                recorder.startRecording();
                short[] buffer = buffers[lastBuffer % buffers.length];
                recorder.read(buffer, 0, bufferSize); // initial read; the periodic callbacks start after this
                while (true) {
                    if (isInterrupted()) {
                        recorder.stop();
                        recorder.release();
                        break;
                    }
                }
            }
        };
        recordingThread.start();
        recorderStarted = true;
    }
}

private void stopListenToMicrophone() {
    if (recorderStarted) {
        if (recordingThread != null && recordingThread.isAlive() && !recordingThread.isInterrupted()) {
            recordingThread.interrupt();
        }
        recorderStarted = false;
    }
}
Now I implemented the recording using the
"OnRecordPositionUpdateListener". This looks very promising and the
right way to do it according to the SDK docs. Everything seems to work
(opening the audio device, read()ing the data etc.) but the listener is
never called.
Does anybody know why?
I found that the OnRecordPositionUpdateListener is ignored until you do your first .read().
In other words, I found that if I set up everything per the docs, the listener never got called. However, if I first called a .read() just after doing my initial .start(), then the listener would get called, provided I did a .read() every time the listener was called.
In other words, it almost seems like the listener event is only good for one firing per .read() or some such.
I also found that if I requested fewer than buffSize/2 samples to be read, the listener would not be called. So it seems that the listener is only called after a .read() of at least half the buffer size. To keep using the listener callback, one must call read() each time the listener is run. (In other words, put the call to read() in the listener code.)
However, the listener seems to be called at a time when the data isn't ready yet, causing blocking.
Also, if your notification period or marker position is greater than half of your buffer size, the callbacks will never get called, it seems.
UPDATE:
As I continued to dig deeper, I found that the callback seems to ONLY be called when a .read() finishes...!
I don't know if that's a bug or a feature. My initial thought would be that I want a callback when it's time to read. But maybe the Android developers had the idea the other way around: you'd just put a while(1){xxxx.read(...)} in a thread by itself, and to save you from having to keep track of each time read() finished, the callback can essentially tell you when a read has finished.
Oh, maybe it just dawned on me. I think others have pointed this out before me here, but it didn't sink in: the position or period callback must operate on bytes that have been read already...
I guess maybe I'm stuck with using threads.
In this style, one would have a thread continuously calling read() as soon as it returns, since read() is blocking and patiently waits for enough data to return.
Then, independently, the callback would call your specified function every x number of samples.
Those who are recording audio in an IntentService might also experience this callback problem. I have been working on this issue lately, and I came up with a simple solution where you call your recording method in a separate thread. The onPeriodicNotification method should then be called while recording without any problems.
Something like this:
public class TalkService extends IntentService {
    ...
    @Override
    protected void onHandleIntent(Intent intent) {
        Context context = getApplicationContext();
        tRecord = new Thread(new recordAudio());
        tRecord.start();
        ...
        while (tRecord.isAlive()) {
            if (getIsDone()) {
                if (aRecorder.getRecordingState() == AudioRecord.RECORDSTATE_STOPPED) {
                    socketConnection(context, host, port);
                }
            }
        }
    }
    ...
    class recordAudio implements Runnable {
        public void run() {
            try {
                OutputStream osFile = new FileOutputStream(file);
                BufferedOutputStream bosFile = new BufferedOutputStream(osFile);
                DataOutputStream dosFile = new DataOutputStream(bosFile);
                aRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        sampleRate, channelInMode, encodingMode, bufferSize);
                data = new short[bufferSize];
                aRecorder.setPositionNotificationPeriod(sampleRate);
                aRecorder.setRecordPositionUpdateListener(new AudioRecord.OnRecordPositionUpdateListener() {
                    int count = 1;

                    @Override
                    public void onPeriodicNotification(AudioRecord recorder) {
                        Log.e(WiFiDirect.TAG, "Period notf: " + count++);
                        if (getRecording() == false) {
                            aRecorder.stop();
                            aRecorder.release();
                            setIsDone(true);
                            Log.d(WiFiDirect.TAG, "Recorder stopped and released prematurely");
                        }
                    }

                    @Override
                    public void onMarkerReached(AudioRecord recorder) {
                        // TODO Auto-generated method stub
                    }
                });
                aRecorder.startRecording();
                Log.d(WiFiDirect.TAG, "start Recording");
                aRecorder.read(data, 0, bufferSize);
                for (int i = 0; i < data.length; i++) {
                    dosFile.writeShort(data[i]);
                }
                if (aRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                    aRecorder.stop();
                    aRecorder.release();
                    setIsDone(true);
                    Log.d(WiFiDirect.TAG, "Recorder stopped and released");
                }
            } catch (Exception e) {
                // TODO: handle exception
            }
        }
    }
