Android real-time silence detection

I am trying to take real-time audio input from the user, and every 300 milliseconds I take the average of the samples. I am using 16-bit PCM samples at a 44100 Hz sample rate.
I read the data from the AudioRecord into a short array, and I print the average value of all the samples every 300 milliseconds.
But the problem is that the average value appears to be random: it shows large values even during silence, which it should not.
I want to know whether I am using the short array correctly.
public void startRecording() {
    recorder = new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION, SAMPLE_RATE,
            channels, AUDIO_FORMAT, Buffersize);
    recorder.startRecording();
    isRecording = true;
    NoiseSuppressor.create(recorder.getAudioSessionId());
    AcousticEchoCanceler.create(recorder.getAudioSessionId());
    AutomaticGainControl.create(recorder.getAudioSessionId());
    recordingThread = new Thread(new Runnable() {
        public void run() {
            writeAudioData();
        }
    });
    recordingThread.start();
}

private void writeAudioData() {
    short data[] = new short[Buffersize / 2];
    while (isRecording) {
        samplesread = 0;
        value = 0;
        long ftime = System.currentTimeMillis() + 300;
        while (ftime > System.currentTimeMillis()) { // loop for taking the average over 300 ms
            if (isRecording) {
                samplesread += recorder.read(data, 0, Buffersize / 2); // reads the data from the recorder and stores it in the short array
                int i = 0;
                while (i < Buffersize / 2) {
                    //value += (long) (data[i]);
                    value += (long) (data[i] & 0xFFFF); // considering the PCM samples are signed, I tried this
                    i++;
                }
            }
        }
        show = value;
        div = samplesread;
        handler.postDelayed(new Runnable() {
            @Override
            public void run() {
                TextView tv = (TextView) findViewById(R.id.textView);
                tv.setText("" + (show / div) + " " + div);
                TextView tv2 = (TextView) findViewById(R.id.textView2);
                tv2.setText(show + " " + System.currentTimeMillis());
            }
        }, 0);
    }
}

The array values consist of signed numbers, so taking the average did not give the desired result. It works when I take RMS values instead of averaging.
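For reference, a minimal sketch of the RMS approach over one block of samples (plain Java, no Android dependencies; the method name rmsLevel is my own):

```java
// Computes the RMS level of a block of 16-bit PCM samples.
// Squaring makes every sample contribute positively, so silence
// yields a small value and speech a large one, unlike a plain
// average where positive and negative samples cancel out.
static double rmsLevel(short[] samples, int count) {
    long sumSquares = 0;
    for (int i = 0; i < count; i++) {
        sumSquares += (long) samples[i] * samples[i];
    }
    return Math.sqrt((double) sumSquares / count);
}
```

A block like {100, -100, 100, -100} averages to 0 even though it is clearly not silent; its RMS is 100, which is why thresholding on RMS behaves as expected.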

Related

Increase volume output of recorded audio

I am trying to make a call recording app in Android. I am using the loudspeaker to record both uplink and downlink audio. The only problem I am facing is that the volume is too low. I've already increased the device volume to the maximum using AudioManager, and it can't go beyond that.
I first used MediaRecorder, but since it has limited functions and produces compressed audio, I tried AudioRecord. Still, I haven't figured out how to increase the audio volume. I've checked projects on GitHub too, but to no avail. I've searched Stack Overflow for the last two weeks but couldn't find anything at all.
I am quite sure it's possible, since many other apps do it; for instance, Automatic Call Recorder does.
I understand that I have to do something with the audio buffer, but I am not quite sure what needs to be done. Can you guide me on that?
Update:
I am sorry I forgot to mention that I am already applying gain. My code is almost identical to RehearsalAssistant's (in fact, I derived it from there). The gain doesn't help beyond about 10 dB, and that doesn't increase the volume very much. What I want is to be able to listen to the audio without putting my ear on the speaker, which is what my code lacks.
I've asked a similar question about how volume/loudness works on the Sound Design SE here. It mentions that gain and loudness are related, but gain doesn't set the actual loudness level. I am not sure how it all works, but I am determined to get loud output.
You obviously have the AudioRecord part up and running, so I'll skip the decisions about sampleRate and inputSource. The main point is that you need to manipulate each sample of your recorded data in the recording loop to increase the volume, like so:
int minRecBufBytes = AudioRecord.getMinBufferSize( sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT );
// ...
audioRecord = new AudioRecord( inputSource, sampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minRecBufBytes );

// Setup the recording buffer, size, and pointer (in this case quadruple buffering)
int recBufferByteSize = minRecBufBytes * 2;
byte[] recBuffer = new byte[recBufferByteSize];
int frameByteSize = minRecBufBytes / 2;
int sampleBytes = frameByteSize;
int recBufferBytePtr = 0;

audioRecord.startRecording();

// Do the following in the loop you prefer, e.g.
while ( continueRecording ) {
    int reallySampledBytes = audioRecord.read( recBuffer, recBufferBytePtr, sampleBytes );
    int i = 0;
    while ( i < reallySampledBytes ) {
        float sample = (float)( recBuffer[recBufferBytePtr+i  ] & 0xFF
                              | recBuffer[recBufferBytePtr+i+1] << 8 );

        // THIS is the point where the work is done:
        // Increase level by about 6dB:
        sample *= 2;
        // Or increase level by 20dB:
        // sample *= 10;
        // Or if you prefer any dB value, then calculate the gain factor outside the loop
        // float gainFactor = (float)Math.pow( 10., dB / 20. ); // dB to gain factor
        // sample *= gainFactor;

        // Avoid 16-bit-integer overflow when writing back the manipulated data:
        if ( sample >= 32767f ) {
            recBuffer[recBufferBytePtr+i  ] = (byte)0xFF;
            recBuffer[recBufferBytePtr+i+1] = 0x7F;
        } else if ( sample <= -32768f ) {
            recBuffer[recBufferBytePtr+i  ] = 0x00;
            recBuffer[recBufferBytePtr+i+1] = (byte)0x80;
        } else {
            int s = (int)( 0.5f + sample ); // Here, dithering would be more appropriate
            recBuffer[recBufferBytePtr+i  ] = (byte)(s & 0xFF);
            recBuffer[recBufferBytePtr+i+1] = (byte)(s >> 8 & 0xFF);
        }
        i += 2;
    }
    // Do other stuff like saving the part of buffer to a file
    // if ( reallySampledBytes > 0 ) { ... save recBuffer+recBufferBytePtr, length: reallySampledBytes
    // Then move the recording pointer to the next position in the recording buffer
    recBufferBytePtr += reallySampledBytes;
    // Wrap around at the end of the recording buffer, e.g. like so:
    if ( recBufferBytePtr >= recBufferByteSize ) {
        recBufferBytePtr = 0;
        sampleBytes = frameByteSize;
    } else {
        sampleBytes = recBufferByteSize - recBufferBytePtr;
        if ( sampleBytes > frameByteSize )
            sampleBytes = frameByteSize;
    }
}
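The dB-to-gain conversion that is commented out inside the loop above can be isolated into a small helper (a sketch of my own; gainFactor is not part of the Android API):

```java
// Converts a gain in decibels to a linear factor: gain = 10^(dB/20).
// +6 dB is roughly a factor of 2; +20 dB is exactly a factor of 10.
static float gainFactor(double dB) {
    return (float) Math.pow(10.0, dB / 20.0);
}
```

Computing the factor once outside the per-sample loop avoids a Math.pow call per sample.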
Thanks to Hartmut and beworker for the solution. Hartmut's code worked at around 12-14 dB. I also merged in code from the sonic library to increase the volume, but that added too much noise and distortion, so I kept the volume at 1.5-2.0 and tried increasing the gain instead. I got a decent sound level that doesn't sound too loud on the phone but is loud enough when listened to on a PC. It looks like that's as far as I can go.
I am posting my final code for increasing the loudness. Be aware that increasing mVolume adds a lot of noise; try increasing the gain instead.
private AudioRecord.OnRecordPositionUpdateListener updateListener = new AudioRecord.OnRecordPositionUpdateListener() {
    @Override
    public void onPeriodicNotification(AudioRecord recorder) {
        aRecorder.read(bBuffer, bBuffer.capacity()); // Fill buffer
        if (getState() != State.RECORDING)
            return;
        try {
            if (bSamples == 16) {
                shBuffer.rewind();
                int bLength = shBuffer.capacity(); // Faster than accessing buffer.capacity each time
                for (int i = 0; i < bLength; i++) { // 16-bit sample size
                    short curSample = (short) (shBuffer.get(i) * gain);
                    if (curSample > cAmplitude) { // Check amplitude
                        cAmplitude = curSample;
                    }
                    if (mVolume != 1.0f) {
                        // Adjust output volume.
                        int fixedPointVolume = (int) (mVolume * 4096.0f);
                        int value = (curSample * fixedPointVolume) >> 12;
                        if (value > 32767) {
                            value = 32767;
                        } else if (value < -32767) {
                            value = -32767;
                        }
                        curSample = (short) value;
                        /*scaleSamples(outputBuffer, originalNumOutputSamples, numOutputSamples - originalNumOutputSamples,
                                mVolume, nChannels);*/
                    }
                    shBuffer.put(curSample);
                }
            } else { // 8-bit sample size
                int bLength = bBuffer.capacity(); // Faster than accessing buffer.capacity each time
                bBuffer.rewind();
                for (int i = 0; i < bLength; i++) {
                    byte curSample = (byte) (bBuffer.get(i) * gain);
                    if (curSample > cAmplitude) { // Check amplitude
                        cAmplitude = curSample;
                    }
                    bBuffer.put(curSample);
                }
            }
            bBuffer.rewind();
            fChannel.write(bBuffer); // Write buffer to file
            payloadSize += bBuffer.capacity();
        } catch (IOException e) {
            e.printStackTrace();
            Log.e(NoobAudioRecorder.class.getName(), "Error occurred in updateListener, recording is aborted");
            stop();
        }
    }

    @Override
    public void onMarkerReached(AudioRecord recorder) {
        // NOT USED
    }
};
Simply use the MPEG_4 format.
To increase the call recording volume use AudioManager as follows:
int deviceCallVol;
AudioManager audioManager;
Start Recording:
audioManager = (AudioManager)context.getSystemService(Context.AUDIO_SERVICE);
//get the current volume set
deviceCallVol = audioManager.getStreamVolume(AudioManager.STREAM_VOICE_CALL);
//set volume to maximum
audioManager.setStreamVolume(AudioManager.STREAM_VOICE_CALL, audioManager.getStreamMaxVolume(AudioManager.STREAM_VOICE_CALL), 0);
recorder.setAudioSource(MediaRecorder.AudioSource.VOICE_CALL);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setAudioEncodingBitRate(32);
recorder.setAudioSamplingRate(44100);
Stop Recording:
//revert volume to initial state
audioManager.setStreamVolume(AudioManager.STREAM_VOICE_CALL, deviceCallVol, 0);
In my app I use the open-source sonic library. Its main purpose is to speed up or slow down speech, but besides that it also lets you increase loudness. I apply it to playback, but it should work similarly for recording: just pass your samples through it before compressing them. It has a Java interface too. Hope this helps.

Android AudioRecord calculate duration of PCM buffer

I am sorry if this is a trivial question, but I am new to Android and have spent a few days searching without finding an answer or information that satisfies me.
I want to record an audio clip of approximately 3 seconds in length every 30 seconds using an Android phone. Every record is sent to my PC (using the TCP/IP protocol) for further processing.
Here is the code on the Android side (I refer to the code of @TechEnd in this question: Android AudioRecord example):
private final int AUD_RECODER_SAMPLERATE = 44100; // 44.1 kHz
private final int AUD_RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private final int AUD_RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
private final int AUD_RECORDER_BUFFER_NUM_ELEMENTS = 131072; // ~~ 1.486 second ???
private final int AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT = 2;
private AudioRecord audioRecorder = null;
private boolean isAudioRecording = false;
private Runnable runnable = null;
private Handler handler = null;
private final int AUD_RECORDER_RECORDING_PERIOD = 30000; // one fire every 30 seconds
private byte[] bData = new byte[AUD_RECORDER_BUFFER_NUM_ELEMENTS * AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT];

public void start() {
    audioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, AUD_RECODER_SAMPLERATE, AUD_RECORDER_CHANNELS, AUD_RECORDER_AUDIO_ENCODING, AUD_RECORDER_BUFFER_NUM_ELEMENTS * AUD_RECORDER_BUFFER_BYTES_PER_ELEMENT);
    audioRecorder.startRecording();
    isAudioRecording = true;
    handler = new Handler();
    runnable = new Runnable() {
        @Override
        public void run() {
            if (isAudioRecording) {
                int nElementRead = audioRecorder.read(bData, 0, bData.length);
                net_send(bData, nElementRead);
            }
            handler.postDelayed(this, AUD_RECORDER_RECORDING_PERIOD);
        }
    };
    handler.postDelayed(runnable, AUD_RECORDER_RECORDING_PERIOD);
}

public void stop() {
    isAudioRecording = false;
    if (audioRecorder != null) {
        audioRecorder.stop();
        audioRecorder.release();
        audioRecorder = null;
    }
    handler.removeCallbacks(runnable);
}

public void net_send(byte[] data, int nbytes) {
    try {
        dataOutputStream.writeInt(nbytes);
        dataOutputStream.write(data, 0, nbytes);
    } catch (IOException e) {
        e.printStackTrace();
    }
}
And on the PC side (a server written in C), after receiving a record (I checked, and they are all 262144 bytes), I first write the byte array to a binary file (with extension .raw), open it with Free Audio Editor (http://www.free-audio-editor.com/), and obtain a result with a duration of 1.486 seconds:
https://www.dropbox.com/s/xzml51jzvagl6dy/aud1.PNG?dl=0
Then I convert every two consecutive bytes into a 2-byte integer using this function:
short bytes2short( const char num_buf[2] )
{
    return ( ( num_buf[1] & 0xFF ) << 8 ) |
           ( num_buf[0] & 0xFF );
}
and write it to a file (length 131072 bytes) and plot the normalized data with Excel; a similar graph is obtained.
As I calculate it, the number of bytes recorded in one second is 44100 (samples/sec) * 1 (sec) * 2 (bytes/sample/channel) * 1 (channel) = 88200 bytes.
So with my buffer of length 131072*2 bytes, the corresponding duration should be 262144/88200 = 2.97 seconds. But the result I obtain is just half of that. I tried three different devices running Android 2.3.3, 2.3.4 and 4.3 and obtained the same result, so the problem must be on my side.
Could anyone tell me where the problem is, in my calculation or in my code? Is my understanding correct?
Any comments or suggestion would be appreciated.
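The arithmetic in the question can be checked with a small helper (plain Java; pcmDurationSeconds is a name I made up for illustration):

```java
// Duration of a raw PCM buffer in seconds:
// bytes / (sampleRate * channels * bytesPerSample)
static double pcmDurationSeconds(int numBytes, int sampleRate,
                                 int channels, int bytesPerSample) {
    return (double) numBytes / ((double) sampleRate * channels * bytesPerSample);
}
```

For 262144 bytes of mono 16-bit audio at 44100 Hz this gives about 2.97 s, so the calculation in the question is correct; the halved duration must come from somewhere else.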

Playing music with AudioTrack buffer by buffer on Eclipse - no sound

I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the overall purpose is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried about everything, and there is still no sound coming out of my speakers (while Android's integrated media player has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which wraps the AudioTrack. It receives a buffer in which the sound is contained.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Erreur lors de l'instantiation de AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (a 440 Hz tone) for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        // First fill about 300ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000
        // For debugging only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // Afterwards fill like 200ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440)
    // this is probably wrong due to sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine, it does its job;
buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's play state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding short-buffer input: the size passed to write() should be the number of elements in the buffer itself (buffer.length), not the size in bytes.

How to play back AudioRecord with some delay

I'm implementing an app that repeats everything I tell it.
What I need is to play the sound I'm recording into a buffer with just one second of delay, so that I hear myself one second delayed.
This is the run method of my Recorder class:
public void run()
{
    AudioRecord recorder = null;
    int ix = 0;
    buffers = new byte[256][160];
    try
    {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(AudioSource.MIC, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
        recorder.startRecording();
        Timer t = new Timer();
        SeekBar barra = (SeekBar) findViewById(R.id.barraDelay);
        t.schedule(r = new Reproductor(), barra.getProgress());
        while (!stopped)
        {
            byte[] buffer = buffers[ix++ % buffers.length];
            N = recorder.read(buffer, 0, buffer.length);
        }
    }
    catch (Throwable x)
    {
    }
    finally
    {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}
And this is the run method of my player:
public void run() {
    reproducir = true;
    AudioTrack track = null;
    int jx = 0;
    try
    {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
        track.play();
        /*
         * Loops until something outside of this thread stops it.
         * Reads the data from the recorder and writes it to the audio track for playback.
         */
        while (reproducir)
        {
            byte[] buffer = buffers[jx++ % buffers.length];
            track.write(buffer, 0, buffer.length);
        }
    }
    catch (Throwable x)
    {
    }
    /*
     * Frees the thread's resources after the loop completes so that it can be run again
     */
    finally
    {
        track.stop();
        track.release();
        track = null;
    }
}
Reproductor is an inner class extending TimerTask and implementing the "run" method.
Many thanks!
At the very least you should change the following line of your player
int N = AudioRecord.getMinBufferSize(44100,AudioFormat.CHANNEL_IN_STEREO,AudioFormat.ENCODING_PCM_16BIT);
to
int N = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
because the API requires it (albeit the constant values are identical).
But that is only a marginal point. The main point is that you did not really present an approach to your problem, only two generic methods.
The core of a working solution is a ring buffer with a size of 1 s, with AudioTrack reading each block of it just before new data is written over that block via AudioRecord, both at the same sample rate.
I would suggest doing that inside a single thread.
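The ring-buffer bookkeeping can be sketched independently of the Android calls (a toy DelayLine class of my own, not part of any API): each incoming block from AudioRecord.read() overwrites the oldest samples, and exactly those oldest samples are handed to AudioTrack.write() first, giving a delay of one buffer length.

```java
// A delay line backed by a ring buffer. The reader stays exactly
// ring.length samples behind the writer.
class DelayLine {
    private final short[] ring;
    private int writePos = 0;

    DelayLine(int delaySamples) {
        ring = new short[delaySamples]; // zero-filled: the first delaySamples of output are silence
    }

    // Stores the newest block and returns the block recorded one delay earlier.
    short[] process(short[] in) {
        short[] out = new short[in.length];
        for (int i = 0; i < in.length; i++) {
            out[i] = ring[writePos];  // read the oldest sample first
            ring[writePos] = in[i];   // then overwrite it with the new one
            writePos = (writePos + 1) % ring.length;
        }
        return out;
    }
}
```

A delay of one second at 44100 Hz mono means a ring of 44100 samples; the first second of playback is the silence the buffer was initialized with.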

Android: AudioRecord Class Problem: Callback is never called

My Android Java application needs to record audio data into RAM and process it.
This is why I use the class AudioRecord and not MediaRecorder (which records only to a file).
Until now, I used a busy loop polling with read() for the audio data. This has been working so far, but it pegs the CPU too much. Between two polls, I put the thread to sleep to avoid 100% CPU usage. However, this is not really a clean solution, since the sleep time is not guaranteed and you must subtract a safety margin in order not to lose audio snippets. This is not CPU-optimal; I need as many free CPU cycles as possible for a parallel thread.
Now I have implemented the recording using the OnRecordPositionUpdateListener. This looks very promising and is the right way to do it according to the SDK docs. Everything seems to work (opening the audio device, read()ing the data, etc.), but the listener is never called.
Does anybody know why?
Info: I am working with a real device, not the emulator. The recording using a busy loop basically works (however unsatisfying); only the callback listener is never called.
Here is a snippet from my source code:
public class myApplication extends Activity {
    /* audio recording */
    private static final int AUDIO_SAMPLE_FREQ = 16000;
    private static final int AUDIO_BUFFER_BYTESIZE = AUDIO_SAMPLE_FREQ * 2 * 3; // = 3000ms
    private static final int AUDIO_BUFFER_SAMPLEREAD_SIZE = AUDIO_SAMPLE_FREQ / 10 * 2; // = 200ms
    private short[] mAudioBuffer = null; // audio buffer
    private int mSamplesRead; // how many samples are recently read
    private AudioRecord mAudioRecorder; // Audio Recorder
    ...
    private OnRecordPositionUpdateListener mRecordListener = new OnRecordPositionUpdateListener() {
        public void onPeriodicNotification(AudioRecord recorder) {
            mSamplesRead = recorder.read(mAudioBuffer, 0, AUDIO_BUFFER_SAMPLEREAD_SIZE);
            if (mSamplesRead > 0) {
                // do something here...
            }
        }

        public void onMarkerReached(AudioRecord recorder) {
            Error("What? Hu!? Where am I?");
        }
    };
    ...
    public void onCreate(Bundle savedInstanceState) {
        try {
            mAudioRecorder = new AudioRecord(
                    android.media.MediaRecorder.AudioSource.MIC,
                    AUDIO_SAMPLE_FREQ,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    AUDIO_BUFFER_BYTESIZE);
        } catch (Exception e) {
            Error("Unable to init audio recording!");
        }
        mAudioBuffer = new short[AUDIO_BUFFER_SAMPLEREAD_SIZE];
        mAudioRecorder.setPositionNotificationPeriod(AUDIO_BUFFER_SAMPLEREAD_SIZE);
        mAudioRecorder.setRecordPositionUpdateListener(mRecordListener);
        mAudioRecorder.startRecording();
        /* test if I can read anything at all... (and yes, this here works!) */
        mSamplesRead = mAudioRecorder.read(mAudioBuffer, 0, AUDIO_BUFFER_SAMPLEREAD_SIZE);
    }
}
I believe the problem is that you still need to do the read loop. If you setup callbacks, they will fire when you've read the number of frames that you specify for the callbacks. But you still need to do the reads. I've tried this and the callbacks get called just fine. Setting up a marker causes a callback when that number of frames has been read since the start of recording. In other words, you could set the marker far into the future, after many of your reads, and it will fire then. You can set the period to some bigger number of frames and that callback will fire every time that number of frames has been read. I think they do this so you can do low-level processing of the raw data in a tight loop, then every so often your callback can do summary-level processing. You could use the marker to make it easier to decide when to stop recording (instead of counting in the read loop).
Here is my code used to find the average noise level. Notice that it is based on listener notifications, so it saves device battery. It is definitely based on the examples above; those examples saved me a lot of time, thanks.
private AudioRecord recorder;
private boolean recorderStarted;
private Thread recordingThread;
private int bufferSize = 800;
private short[][] buffers = new short[256][bufferSize];
private int[] averages = new int[256];
private int lastBuffer = 0;

protected void startListenToMicrophone() {
    if (!recorderStarted) {
        recordingThread = new Thread() {
            @Override
            public void run() {
                int minBufferSize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT);
                recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minBufferSize * 10);
                recorder.setPositionNotificationPeriod(bufferSize);
                recorder.setRecordPositionUpdateListener(new OnRecordPositionUpdateListener() {
                    @Override
                    public void onPeriodicNotification(AudioRecord recorder) {
                        short[] buffer = buffers[++lastBuffer % buffers.length];
                        recorder.read(buffer, 0, bufferSize);
                        long sum = 0;
                        for (int i = 0; i < bufferSize; ++i) {
                            sum += Math.abs(buffer[i]);
                        }
                        averages[lastBuffer % buffers.length] = (int) (sum / bufferSize);
                        lastBuffer = lastBuffer % buffers.length;
                    }

                    @Override
                    public void onMarkerReached(AudioRecord recorder) {
                    }
                });
                recorder.startRecording();
                short[] buffer = buffers[lastBuffer % buffers.length];
                recorder.read(buffer, 0, bufferSize);
                while (true) {
                    if (isInterrupted()) {
                        recorder.stop();
                        recorder.release();
                        break;
                    }
                }
            }
        };
        recordingThread.start();
        recorderStarted = true;
    }
}

private void stopListenToMicrophone() {
    if (recorderStarted) {
        if (recordingThread != null && recordingThread.isAlive() && !recordingThread.isInterrupted()) {
            recordingThread.interrupt();
        }
        recorderStarted = false;
    }
}
Now I implemented the recording using the "OnRecordPositionUpdateListener". This looks very promising and the right way to do it according to the SDK docs. Everything seems to work (opening the audio device, read()ing the data etc.), but the listener is never called.
Does anybody know why?
I found that the OnRecordPositionUpdateListener is ignored until you do your first read().
In other words, if I set up everything per the docs, my listener never got called. However, if I first called read() just after my initial start(), then the listener would get called, provided I did a read() every time the listener was called. In other words, it almost seems as if the listener event is only good for once per read().
I also found that if I requested fewer than bufferSize/2 samples to be read, the listener would not be called. So it seems the listener is only called AFTER a read() of at least half the buffer size. To keep using the listener callback, one must call read() each time the listener is run (in other words, put the call to read() in the listener code).
However, the listener seems to be called at a time when the data isn't ready yet, causing blocking.
Also, if your notification period or time is greater than half of your bufferSize, it seems the callbacks will never fire.
UPDATE:
As I continued to dig deeper, I found that the callback seems to be called ONLY when a read() finishes!
I don't know whether that's a bug or a feature. My initial thought was that I would want a callback when it's time to read. But maybe the Android developers had it the other way around: you just put a while(1) { xxxx.read(...) } in a thread by itself, and to save you from having to keep track of each time read() finishes, the callback essentially tells you when a read has finished.
Oh, maybe it just dawned on me, and I think others have pointed this out before me here, but it didn't sink in: the position or period callback must operate on bytes that have already been read.
I guess I'm stuck with using threads. In this style, one thread continuously calls read() as soon as it returns, since read() blocks and patiently waits for enough data. Then, independently, the callback calls your specified function every x samples.
Those who are recording audio via an IntentService might also experience this callback problem. I have been working on this issue lately, and I came up with a simple solution: call your recording method in a separate thread. This should call the onPeriodicNotification method while recording without any problems.
Something like this:
public class TalkService extends IntentService {
    ...
    @Override
    protected void onHandleIntent(Intent intent) {
        Context context = getApplicationContext();
        tRecord = new Thread(new recordAudio());
        tRecord.start();
        ...
        while (tRecord.isAlive()) {
            if (getIsDone()) {
                if (aRecorder.getRecordingState() == AudioRecord.RECORDSTATE_STOPPED) {
                    socketConnection(context, host, port);
                }
            }
        }
    }
    ...
    class recordAudio implements Runnable {
        public void run() {
            try {
                OutputStream osFile = new FileOutputStream(file);
                BufferedOutputStream bosFile = new BufferedOutputStream(osFile);
                DataOutputStream dosFile = new DataOutputStream(bosFile);
                aRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        sampleRate, channelInMode, encodingMode, bufferSize);
                data = new short[bufferSize];
                aRecorder.setPositionNotificationPeriod(sampleRate);
                aRecorder.setRecordPositionUpdateListener(new AudioRecord.OnRecordPositionUpdateListener() {
                    int count = 1;

                    @Override
                    public void onPeriodicNotification(AudioRecord recorder) {
                        Log.e(WiFiDirect.TAG, "Period notf: " + count++);
                        if (getRecording() == false) {
                            aRecorder.stop();
                            aRecorder.release();
                            setIsDone(true);
                            Log.d(WiFiDirect.TAG, "Recorder stopped and released prematurely");
                        }
                    }

                    @Override
                    public void onMarkerReached(AudioRecord recorder) {
                        // NOT USED
                    }
                });
                aRecorder.startRecording();
                Log.d(WiFiDirect.TAG, "start Recording");
                aRecorder.read(data, 0, bufferSize);
                for (int i = 0; i < data.length; i++) {
                    dosFile.writeShort(data[i]);
                }
                if (aRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                    aRecorder.stop();
                    aRecorder.release();
                    setIsDone(true);
                    Log.d(WiFiDirect.TAG, "Recorder stopped and released");
                }
            } catch (Exception e) {
                // TODO: handle exception
            }
        }
    }
}
