For a university project my professor wants me to write an Android application; it would be my first one. I have some Java experience, but I am new to Android programming, so please be gentle with me.
First I created an Activity with only two buttons: one starts an AsyncTask and one stops it. By "stopping" I mean I just set the boolean "isRecording" to false; everything else is handled in the AsyncTask, whose source code is attached below.
It runs quite okay, but after a while I find buffer overflow messages in LogCat, and after that it crashes with an uncaught exception. I have figured out why it's crashing, so the uncaught exception is not the subject of this question.
03-07 11:34:02.474: INFO/buffer 247:(558): 40
03-07 11:34:02.484: WARN/AudioFlinger(33): RecordThread: buffer overflow
03-07 11:34:02.484: INFO/MutantAudioRecorder:doInBackground()(558): isRecoding
03-07 11:34:02.484: INFO/MutantAudioRecorder:doInBackground()(558): isRecoding
03-07 11:34:02.494: WARN/AudioFlinger(33): RecordThread: buffer overflow
03-07 11:34:02.494: INFO/buffer 248:(558): -50
I write out the buffer as you can see, but I think I made a mistake in configuring the AudioRecord. Can anybody tell me why I get the buffer overflow?
And the next question: how can I handle the buffer? I have the values inside it and want to show them as a graphical spectrogram on the screen. Does anyone have experience with this and can give me a hint? How can I go on?
Thanks in advance for your help.
Source code of the AsyncTask:
package nomihodai.audio;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.os.AsyncTask;
import android.util.Log;
public class MutantAudioRecorder extends AsyncTask<Void, Void, Void> {
private boolean isRecording = false;
public AudioRecord audioRecord = null;
public int mSamplesRead;
public int buffersizebytes;
public int buflen;
public int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
public int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
public static short[] buffer;
public static final int SAMPLESPERSEC = 8000;
@Override
protected Void doInBackground(Void... params) {
while(isRecording) {
audioRecord.startRecording();
mSamplesRead = audioRecord.read(buffer, 0, buffersizebytes);
if(!readerT.isAlive())
readerT.start();
Log.i("MutantAudioRecorder:doInBackground()", "isRecoding");
}
readerT.stop();
return null;
}
Thread readerT = new Thread() {
public void run() {
for(int i = 0; i < 256; i++){
Log.i("buffer " + i + ": ", Short.toString(buffer[i]));
}
}
};
@Override
public void onPostExecute(Void unused) {
Log.i("MutantAudioRecorder:onPostExecute()", "try to release the audio hardware");
audioRecord.release();
Log.i("MutantAudioRecorder:onPostExecute()", "released...");
}
public void setRecording(boolean rec) {
this.isRecording = rec;
Log.i("MutantAudioRecorder:setRecording()", "isRecoding set to " + rec);
}
@Override
protected void onPreExecute() {
buffersizebytes = AudioRecord.getMinBufferSize(SAMPLESPERSEC, channelConfiguration, audioEncoding);
buffer = new short[buffersizebytes];
buflen = buffersizebytes/2;
Log.i("MutantAudioRecorder:onPreExecute()", "buffersizebytes: " + buffersizebytes
+ ", buffer: " + buffer.length
+ ", buflen: " + buflen);
audioRecord = new AudioRecord(android.media.MediaRecorder.AudioSource.MIC,
SAMPLESPERSEC,
channelConfiguration,
audioEncoding,
buffersizebytes);
if(audioRecord != null)
Log.i("MutantAudioRecorder:onPreExecute()", "audiorecord object created");
else
Log.i("MutantAudioRecorder:onPreExecute()", "audiorecord NOT created");
}
}
Presumably some live analysis process is working on the recorded audio bytes?
Since the buffer size for recording is limited, once your analysis process is slower than the rate of recording, the data in the buffer gets stuck, but the recorded bytes keep coming in, so the buffer overflows.
Try using separate threads for recording and for processing the recorded bytes; there is an open-source sample of this approach: http://musicg.googlecode.com/files/musicg_android_demo.zip
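If it helps, here is a minimal sketch of that two-thread approach for the question's setup (8 kHz mono PCM-16); the class and field names are made up, and the analysis step is left as a stub:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class RecordingLoop {
    private static final int SAMPLE_RATE = 8000;
    private final BlockingQueue<short[]> queue = new ArrayBlockingQueue<short[]>(32);
    private volatile boolean running = true;

    public void start() {
        final int minSizeBytes = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        final AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minSizeBytes * 4);

        new Thread(new Runnable() { // producer: drains the hardware buffer quickly
            public void run() {
                rec.startRecording(); // start once, outside the read loop
                short[] buf = new short[minSizeBytes / 2]; // size in shorts, not bytes
                while (running) {
                    int n = rec.read(buf, 0, buf.length);
                    if (n > 0) {
                        short[] copy = new short[n];
                        System.arraycopy(buf, 0, copy, 0, n);
                        queue.offer(copy); // drops the chunk if the consumer lags
                    }
                }
                rec.stop();
                rec.release();
            }
        }).start();

        new Thread(new Runnable() { // consumer: analysis never blocks the recorder
            public void run() {
                try {
                    while (running) {
                        short[] chunk = queue.poll(250, TimeUnit.MILLISECONDS);
                        if (chunk == null) continue;
                        // ... analyze / draw the chunk here ...
                    }
                } catch (InterruptedException ignored) { }
            }
        }).start();
    }

    public void stop() { running = false; }
}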
As we discussed in the chat room, decoding the audio data and displaying it on the screen should be straightforward. You mentioned that the audio buffer has 8000 samples per second, each sample is 16 bit, and it's mono audio.
Displaying this should be straightforward. Treat each sample as a vertical offset in your view. You need to scale the range -32k to +32k to the vertical height of your view. Starting at the left edge of the view, draw one sample per column. When you reach the right edge, wrap around again (erasing the previous line as necessary).
This will end up drawing each sample as a single pixel, which may not look very nice. You can also draw a line between adjacent samples. You can play around with line widths, colors and so on to get the best effect.
One last note: you'll be drawing 8000 samples per second, plus more work to blank out the previous ones. You may need to take some shortcuts, such as skipping samples, to make sure the frame rate keeps up with the audio.
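To make that concrete, here is a rough sketch of such a view; treat it as an assumption of how you might wire it up (WaveformView and setSamples() are made-up names), drawing connected lines between adjacent samples as described:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class WaveformView extends View {
    private final Paint paint = new Paint();
    private short[] samples = new short[0];

    public WaveformView(Context context) {
        super(context);
        paint.setColor(Color.GREEN);
        paint.setStrokeWidth(1f);
    }

    public void setSamples(short[] newSamples) {
        samples = newSamples;
        postInvalidate(); // safe to call from the recording thread
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawColor(Color.BLACK); // blank out the previous samples
        int w = getWidth(), h = getHeight();
        if (w == 0 || samples.length < 2) return;
        float mid = h / 2f;
        float xStep = (float) w / samples.length; // one buffer spans the width
        float prevX = 0, prevY = mid;
        for (int i = 1; i < samples.length; i++) {
            float x = i * xStep;
            float y = mid - (samples[i] / 32768f) * mid; // scale -32k..+32k to the height
            canvas.drawLine(prevX, prevY, x, y, paint);
            prevX = x;
            prevY = y;
        }
    }
}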
My Android OS is Android M, on a Nexus 6.
I implemented an AndroidSpeakerWriter as follows:
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
public class AndroidSpeakerWriter {
private final static String TAG= "AndroidSpeakerWriter";
private AudioTrack audioTrack;
short[] buffer;
public AndroidSpeakerWriter() {
buffer = new short[1024];
}
public void init(int sampleRateInHZ){
int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHZ,
AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRateInHZ,
AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize,
AudioTrack.MODE_STREAM); // 0-static 1-stream
}
public void fillBuffer(short[] samples) {
if (buffer.length<samples.length) {
buffer = new short[samples.length];
}
System.arraycopy(samples, 0, buffer, 0, samples.length);
}
public void writeSamples(short[] samples) {
fillBuffer(samples);
audioTrack.write(buffer, 0, samples.length);
}
public void stop() {
audioTrack.stop();
}
public void play() {
audioTrack.play();
}
}
Then I just send samples when I click a button:
public void play(final short[] signal) {
if (signal == null){
Log.d(TAG, "play: a null signal");
return;
}
Thread t = new Thread(new Runnable() {
@Override
public void run() {
android.os.Process
.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
androidSpeakerWriter.play();
androidSpeakerWriter.writeSamples(signal);
androidSpeakerWriter.stop();
}
});
t.start();
}
The problem is that the device does not beep every time I click the button.
Sometimes it works, sometimes it doesn't.
There is no such problem when I run this on an old Galaxy Nexus phone with Android 4.3. Has anybody encountered a similar problem? Thanks in advance for any help.
One thing to note is that my beep is currently pretty short (256 samples), not even close to minBufferSize.
According to the (rather vague) documentation, the bufferSizeInBytes passed to the AudioTrack constructor for static mode should be the length of the audio clip you want to play.
So does static mode still have a minimum size constraint on the buffer? And why can a Galaxy Nexus play a 256-sample clip in static mode while a Nexus 6 cannot?
I use AudioManager to get the native buffer size and sampling rate:
Galaxy Nexus: 144/44100, Nexus 6: 192/48000
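For reference, these native values can be queried like this (API 17 and up; a sketch assuming a Context is at hand; the values come back as strings):

AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
String framesPerBuffer = am.getProperty(AudioManager.PROPERTY_OUTPUT_FRAMES_PER_BUFFER); // e.g. "192"
String sampleRate = am.getProperty(AudioManager.PROPERTY_OUTPUT_SAMPLE_RATE);            // e.g. "48000"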
I found these related resources:
AudioRecord and AudioTrack latency
Does AudioTrack buffer need to be full always in streaming mode?
https://github.com/igorski/MWEngine/wiki/Understanding-Android-audio-towards-achieving-low-latency-response
I believe it is caused by improper synchronization between threads. Your androidSpeakerWriter instance runs continuously in different threads calling play(), writeSamples(), and stop() respectively; each click of the button triggers the creation of a new thread with the same androidSpeakerWriter instance.
So while thread A is executing androidSpeakerWriter.play(), thread B might be executing androidSpeakerWriter.writeSamples(), which can overwrite the audio data currently being played.
Try
synchronized(androidSpeakerWriter) {
androidSpeakerWriter.play();
androidSpeakerWriter.writeSamples(signal);
androidSpeakerWriter.stop();
}
MODE_STREAM is used when you must play audio data that is too long to fit into memory. If you need to play a short sound such as a beep, you can use MODE_STATIC when creating the AudioTrack, then change your playback code to the following:
synchronized(androidSpeakerWriter) {
androidSpeakerWriter.writeSamples(signal);
androidSpeakerWriter.play();
}
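For completeness, creating the track in static mode could look roughly like this (a sketch based on the question's signal array and sampleRateInHZ; in static mode the bufferSizeInBytes is the size of the clip itself, and the whole clip is written before play()):

int sizeInBytes = signal.length * 2; // 16-bit samples, 2 bytes each
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRateInHZ,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        sizeInBytes, AudioTrack.MODE_STATIC);
track.write(signal, 0, signal.length); // load the clip first
track.play();                          // then start playback
// To play the same clip again: track.stop(); track.reloadStaticData(); track.play();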
I have implemented a loop buffer (or circular buffer) storing 250 frames of raw video data in total (frame resolution 1280x720). As the buffer I am using the ByteBuffer class. The buffer runs in a separate thread using a Looper; every new frame is passed via a message to the thread's Handler object. When the limit is reached, the position is set to 0 and the whole buffer is overwritten from the beginning. That way, the buffer always contains the last 250 video frames.
As the amount of required heap space is huge (around 320 MB), I am using the tag android:largeHeap="true" in the manifest.
Now to the problem. The loop runs well and consumes slightly less than the allowed heap space (which is acceptable for me). But at some point in time, I want to store the whole buffer to a raw binary file while respecting the current position of the buffer.
Let me explain that with a small graph:
The loop buffer looks like this:
|========== HEAD ==========|===============TAIL============|
0 -------------------------buffer.position()-----------------------buffer.limit()
At the time of saving, I want to first store the tail to the file (because it contains the beginning of the video) and afterwards the head up to the current buffer.position(). I cannot allocate any more byte arrays for extracting the data from the ByteBuffer (the heap space is full), so I have to write the ByteBuffer to the file directly.
At the moment, a ByteBuffer only allows being written to a file in its entirety (the write() method). Does anybody know what the solution could be? Or is there an even better approach for my task?
My code is below:
public class FrameRecorderThread extends Thread {
public int MAX_NUMBER_FRAMES_QUEUE = 25 * 10; // 25 fps * 10 seconds
public Handler frameHandler;
private ByteBuffer byteBuffer;
byte[] image = new byte[1382400]; // bytes for one image
@Override
public void run() {
Looper.prepare();
byteBuffer = ByteBuffer.allocateDirect(MAX_NUMBER_FRAMES_QUEUE * 1382400); // A lot of memory is allocated
frameHandler = new Handler() {
@Override
public void handleMessage(Message msg) {
// Store message content (byte[]) to queue
if(msg.what == 0) { // STORE FRAME TO BUFFER
if(byteBuffer.position() < MAX_NUMBER_FRAMES_QUEUE * 1382400) {
byteBuffer.put((byte[])msg.obj);
}
else {
byteBuffer.position(0); // Start overwriting from the beginning
byteBuffer.put((byte[])msg.obj); // and keep the frame that triggered the wrap
}
}
}
else if(msg.what == 1) { // SAVE IMAGES
String fileName = "VIDEO_BUF_1.raw";
File directory = new File(Environment.getExternalStorageDirectory()
+ "/FrameRecorder/");
directory.mkdirs();
try {
FileOutputStream outStream = new FileOutputStream(Environment.getExternalStorageDirectory()
+ "/FrameRecorder/" + fileName);
// This is the current position of the split between head and tail
int position = byteBuffer.position();
try {
// This stores the whole buffer in a file but does
// not respect the order (tail before head)
outStream.getChannel().write(byteBuffer);
} catch (IOException e) {
e.printStackTrace();
}
} catch (FileNotFoundException e) {
Log.e("FMT", "File not found. (" + e.getLocalizedMessage() + ")");
}
}
else if(msg.what == 2) { // STOP LOOPER
Looper looper = Looper.myLooper();
if(looper != null) {
looper.quit();
byteBuffer = null;
System.gc();
}
}
}
};
Looper.loop();
}
}
Thank you very much in advance!
Just create a subsection of the buffer and write that to the file.
Or set the limit, write it, and then set it back.
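In ByteBuffer terms that could look like the following sketch (untested): duplicate() creates views sharing the same backing memory, so no extra heap is needed and the original buffer's position and limit stay untouched; byteBuffer and outStream are the ones from the question:

FileChannel channel = outStream.getChannel();
int position = byteBuffer.position();

ByteBuffer tail = byteBuffer.duplicate(); // shares the backing memory
tail.position(position);
tail.limit(byteBuffer.limit());
while (tail.hasRemaining()) // a channel write may be partial
    channel.write(tail);    // oldest data first

ByteBuffer head = byteBuffer.duplicate();
head.position(0);
head.limit(position);
while (head.hasRemaining())
    channel.write(head);    // newest data second
channel.close();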
OK, in the meantime I have investigated a little further and found a solution.
Instead of a ByteBuffer object I am now using a simple byte[] array. At the beginning I allocate all the heap space required for the frames. At saving time, I can then write the head and the tail of the buffer using the current position. This works and is easier than expected. :)
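For illustration, a simplified sketch of this approach (the names are made up; FileOutputStream.write(byte[], offset, length) writes any slice of the array directly, so no extra allocation is needed):

import java.io.FileOutputStream;
import java.io.IOException;

class FrameRing {
    private final byte[] ring;
    private int writePos = 0; // the split between head and tail

    FrameRing(int frames, int bytesPerFrame) {
        ring = new byte[frames * bytesPerFrame];
    }

    void put(byte[] frame) {
        // assumes a fixed frame size that evenly divides ring.length
        System.arraycopy(frame, 0, ring, writePos, frame.length);
        writePos = (writePos + frame.length) % ring.length;
    }

    void save(FileOutputStream out) throws IOException {
        out.write(ring, writePos, ring.length - writePos); // tail: oldest frames
        out.write(ring, 0, writePos);                      // head: newest frames
        out.flush();
    }
}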
I'm working on an Android app and I would like to play some short sounds (~2 s). I tried SoundPool, but it doesn't really suit me since it can't check whether a sound is already playing. So I decided to use AudioTrack.
It works quite well, BUT most of the time, when it begins to play a sound, there is a "click" noise.
I checked my audio files and they are clean.
I use AudioTrack in stream mode. I read that static mode is better for short sounds, but after a lot of searching I still don't understand how to make it work.
I also read that the clicking noise can be caused by the header of the WAV file, so maybe the noise would disappear if I skipped this header with the setPlaybackHeadPosition(int positionInFrames) method (which is supposed to work only in static mode).
Here is my code (the problem is the clicking noise at the beginning):
int minBufferSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT, minBufferSize, AudioTrack.MODE_STREAM);
audioTrack.play();
int i = 0;
int bufferSize = 2048; //don't really know which value to put
audioTrack.setPlaybackRate(88200);
byte [] buffer = new byte[bufferSize];
//there we open the wav file >
InputStream inputStream = getResources().openRawResource(R.raw.abordage);
try {
while((i = inputStream.read(buffer)) != -1)
audioTrack.write(buffer, 0, i);
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
try {
inputStream.close();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
Does anyone have a solution to avoid that noise? I tried this, which works sometimes but not every time. Could someone show me how to implement AudioTrack in MODE_STATIC?
Thank you
I found that Scott Stensland's reasoning fit my issue (thanks!).
I eliminated the pop by running a dead-simple linear fade-in filter over the beginning of the sample array. The filter makes sample values start from 0 and slowly increase in amplitude to their original value. By always starting at a value of 0 at the zero-crossover point, the pop never occurs.
A similar fade-out filter is applied at the end of the sample array. The filter duration can easily be adjusted.
import android.util.Log;
public class FadeInFadeOutFilter
{
private static final String TAG = FadeInFadeOutFilter.class.getSimpleName();
private final int filterDurationInSamples;
public FadeInFadeOutFilter ( int filterDurationInSamples )
{
this.filterDurationInSamples = filterDurationInSamples;
}
public void filter ( short[] audioShortArray )
{
filter(audioShortArray, audioShortArray.length);
}
public void filter ( short[] audioShortArray, int audioShortArraySize )
{
if ( audioShortArraySize/2 <= filterDurationInSamples ) {
Log.w(TAG, "filtering audioShortArray with fewer samples than filterDurationInSamples; untested, pops or even crashes may occur. audioShortArraySize="+audioShortArraySize+", filterDurationInSamples="+filterDurationInSamples);
}
final int I = Math.min(filterDurationInSamples, audioShortArraySize/2);
// Perform fade-in and fade-out simultaneously in one loop.
final int fadeOutOffset = audioShortArraySize - filterDurationInSamples;
for ( int i = 0 ; i < I ; i++ ) {
// Fade-in beginning.
final double fadeInAmplification = (double)i/I; // Linear ramp-up 0..1.
audioShortArray[i] = (short)(fadeInAmplification * audioShortArray[i]);
// Fade-out end.
final double fadeOutAmplification = 1 - fadeInAmplification; // Linear ramp-down 1..0.
final int j = i + fadeOutOffset;
audioShortArray[j] = (short)(fadeOutAmplification * audioShortArray[j]);
}
}
}
In my case, it was the WAV header.
And...
byte[] buf44 = new byte[44];
int read = inputStream.read(buf44, 0, 44); // consume the 44-byte WAV header before streaming the PCM data
...solved it.
A common cause of audio "pop" is the rendering process not starting/stopping the sound at the zero-crossover point (assuming a min/max of -1 to +1, the crossover would be 0). Transducers like speakers or earbuds are at rest (no sound input) at this zero level. If an audio rendering process fails to start/stop from/to this zero, the transducer is being asked to do the impossible: instantaneously go from its resting state to some non-zero position in its min/max movement range (or vice versa if you get a "pop" at the end).
Finally, after a lot of experimentation, I made it work without the click noise. Here is my code (unfortunately, I can't read the size of the inputStream, since the getChannel().size() method only works with the FileInputStream type):
try{
long totalAudioLen = 0;
InputStream inputStream = getResources().openRawResource(R.raw.abordage); // open the file
totalAudioLen = inputStream.available();
byte[] rawBytes = new byte[(int)totalAudioLen];
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
44100,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT,
(int)totalAudioLen,
AudioTrack.MODE_STATIC);
int offset = 0;
int numRead = 0;
track.setPlaybackHeadPosition(100); // IMPORTANT to skip the click
while (offset < rawBytes.length
&& (numRead=inputStream.read(rawBytes, offset, rawBytes.length-offset)) >= 0) {
offset += numRead;
} //don't really know why it works, it reads the file
track.write(rawBytes, 0, (int)totalAudioLen); //write it in the buffer?
track.play(); // launch the play
track.setPlaybackRate(88200);
inputStream.close();
}
catch (FileNotFoundException e) {
Log.e(TAG, "Error loading audio to bytes", e);
} catch (IOException e) {
Log.e(TAG, "Error loading audio to bytes", e);
} catch (IllegalArgumentException e) {
Log.e(TAG, "Error loading audio to bytes", e);
}
So the solution to skipping the clicking noise is to use MODE_STATIC and the setPlaybackHeadPosition method to skip the beginning of the audio file (which is probably the header, or I don't know what).
I hope this piece of code helps someone; I spent too much time trying to find a static-mode code sample without finding a way to load a raw resource.
Edit: After testing this solution on various devices, it appears that they have the clicking noise anyway.
For "setPlaybackHeadPosition" to work, you have to play and pause first. It doesn't work if your track is stopped or not started. Trust me. This is dumb. But it works:
track.play();
track.pause();
track.setPlaybackHeadPosition(100);
// then continue with track.write, track.play, etc.
I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the overall goal is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried about everything, and there is still no sound coming out of my speakers (while Android's integrated MediaPlayer has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which wraps the AudioTrack. It receives the buffer in which the sound is contained:
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
if(audioTrack == null)
throw new ProjectException("Erreur lors de l'instantiation de AudioTrack");
audioTrack.setStereoVolume((float)1.0, (float)1.0);
}
@Override
public void addToQueue(short[] buffer) {
audioTrack.write(buffer, 0, buffer.length*Short.SIZE);
if(!isPlaying ) {
audioTrack.play();
isPlaying = true;
}
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (440 Hz) for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
if(begin) {
//First fill about 300ms
begin = false;
short[][] buffer = new short[channels][numFramesBegin];
//numFramesBegin is for example 10000
//For debugging only buffer[0] is useful
fillSimulatedBuffer(buffer, framesRead);
framesRead += numFramesBegin;
audioPlayer.addToQueue(buffer[0]);
}
else {
try {
short[][] buffer = new short[channels][numFrames];
//Afterwards fill like 200ms
fillSimulatedBuffer(buffer, framesRead);
framesRead += numFrames;
audioPlayer.addToQueue(buffer[0]);
} catch (Exception e) {
e.printStackTrace();
}
}
}
private short simulator(int time, short amplitude) {
//a pure A (frequency=440)
//this is probably wrong due to sampling rate, but 44 and 4400 won't work either
return (short)(amplitude*((short)(Math.sin((double)(simulatorFrequency*time)))));
}
private void fillSimulatedBuffer(short[][] buffer, int offset) {
for(int i = 0; i < buffer[0].length; i++)
buffer[0][i] = simulator(offset + i, amplitude);
}
A TimerTask class that calls model.onTimeChange() every 200 ms:
public class ReadMusic extends TimerTask {
private final Model model;
public ReadMusic(Model model) {
this.model = model;
}
@Override
public void run() {
System.out.println("Task run");
model.onTimeChange();
}
}
What debugging showed me:
the TimerTask works fine, it does its job;
buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's playing state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding write() with a short[] buffer: the size to pass is the number of shorts in the buffer (buffer.length), not the size in bytes.
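So the corrected addToQueue() from the question would presumably be (note that Short.SIZE is 16, the number of bits per short, so the original call asked write() to read far past the end of the array):

@Override
public void addToQueue(short[] buffer) {
    // for a short[], the size argument counts shorts, not bytes
    audioTrack.write(buffer, 0, buffer.length);
    if(!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}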
My Android Java application needs to record audio data into RAM and process it.
This is why I use the class AudioRecord and not MediaRecorder (which records only to a file).
Until now, I used a busy loop polling with read() for the audio data. This has been working so far, but it pegs the CPU too much.
Between two polls, I put the thread to sleep to avoid 100% CPU usage. However, this is not really a clean solution, since the sleep time is not guaranteed and you must subtract a safety margin in order not to lose audio snippets. This is not CPU-optimal. I need as many free CPU cycles as possible for a thread running in parallel.
Now I have implemented the recording using the OnRecordPositionUpdateListener. This looks very promising, and according to the SDK docs it is the right way to do it. Everything seems to work (opening the audio device, read()ing the data, etc.), but the listener is never called.
Does anybody know why?
Info: I am working with a real device, not the emulator. Recording using a busy loop basically works (though not satisfyingly); only the callback listener is never called.
Here is a snippet of my source code:
public class myApplication extends Activity {
/* audio recording */
private static final int AUDIO_SAMPLE_FREQ = 16000;
private static final int AUDIO_BUFFER_BYTESIZE = AUDIO_SAMPLE_FREQ * 2 * 3; // = 3000ms
private static final int AUDIO_BUFFER_SAMPLEREAD_SIZE = AUDIO_SAMPLE_FREQ / 10 * 2; // = 200ms
private short[] mAudioBuffer = null; // audio buffer
private int mSamplesRead; // how many samples are recently read
private AudioRecord mAudioRecorder; // Audio Recorder
...
private OnRecordPositionUpdateListener mRecordListener = new OnRecordPositionUpdateListener() {
public void onPeriodicNotification(AudioRecord recorder) {
mSamplesRead = recorder.read(mAudioBuffer, 0, AUDIO_BUFFER_SAMPLEREAD_SIZE);
if (mSamplesRead > 0) {
// do something here...
}
}
public void onMarkerReached(AudioRecord recorder) {
Error("What? Hu!? Where am I?");
}
};
...
public void onCreate(Bundle savedInstanceState) {
try {
mAudioRecorder = new AudioRecord(
android.media.MediaRecorder.AudioSource.MIC,
AUDIO_SAMPLE_FREQ,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT,
AUDIO_BUFFER_BYTESIZE);
} catch (Exception e) {
Error("Unable to init audio recording!");
}
mAudioBuffer = new short[AUDIO_BUFFER_SAMPLEREAD_SIZE];
mAudioRecorder.setPositionNotificationPeriod(AUDIO_BUFFER_SAMPLEREAD_SIZE);
mAudioRecorder.setRecordPositionUpdateListener(mRecordListener);
mAudioRecorder.startRecording();
/* test if I can read anything at all... (and yes, this here works!) */
mSamplesRead = mAudioRecorder.read(mAudioBuffer, 0, AUDIO_BUFFER_SAMPLEREAD_SIZE);
}
}
I believe the problem is that you still need to do the read loop. If you set up callbacks, they will fire when you've read the number of frames you specified for the callbacks, but you still need to do the reads. I've tried this and the callbacks get called just fine.
Setting up a marker causes a callback when that number of frames has been read since the start of recording. In other words, you could set the marker far into the future, after many of your reads, and it will fire then. You can set the period to some bigger number of frames, and that callback will fire every time that number of frames has been read.
I think they did it this way so you can do low-level processing of the raw data in a tight loop, and every so often your callback can do summary-level processing. You could use the marker to make it easier to decide when to stop recording (instead of counting in the read loop).
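To illustrate the shape of that (a sketch only; recorder, mRecordListener and keepRecording are stand-ins for the question's fields):

recorder.setPositionNotificationPeriod(1600);       // e.g. every 100 ms at 16 kHz
recorder.setNotificationMarkerPosition(16000 * 60); // fires once, a minute in
recorder.setRecordPositionUpdateListener(mRecordListener);
recorder.startRecording();
short[] buf = new short[1600];
while (keepRecording) {
    recorder.read(buf, 0, buf.length); // the reads themselves drive the callbacks
    // low-level processing of buf in this tight loop;
    // onPeriodicNotification() then does the summary-level work
}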
Here is my code for measuring average noise. Notice that it is based on listener notifications, so it will save device battery. It is definitely based on the examples above; those examples saved me a lot of time, thanks.
private AudioRecord recorder;
private boolean recorderStarted;
private Thread recordingThread;
private int bufferSize = 800;
private short[][] buffers = new short[256][bufferSize];
private int[] averages = new int[256];
private int lastBuffer = 0;
protected void startListenToMicrophone() {
if (!recorderStarted) {
recordingThread = new Thread() {
@Override
public void run() {
int minBufferSize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT);
recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT, minBufferSize * 10);
recorder.setPositionNotificationPeriod(bufferSize);
recorder.setRecordPositionUpdateListener(new OnRecordPositionUpdateListener() {
@Override
public void onPeriodicNotification(AudioRecord recorder) {
short[] buffer = buffers[++lastBuffer % buffers.length];
recorder.read(buffer, 0, bufferSize);
long sum = 0;
for (int i = 0; i < bufferSize; ++i) {
sum += Math.abs(buffer[i]);
}
averages[lastBuffer % buffers.length] = (int) (sum / bufferSize);
lastBuffer = lastBuffer % buffers.length;
}
@Override
public void onMarkerReached(AudioRecord recorder) {
}
});
recorder.startRecording();
short[] buffer = buffers[lastBuffer % buffers.length];
recorder.read(buffer, 0, bufferSize);
while (true) { // spin until stopListenToMicrophone() interrupts this thread
if (isInterrupted()) {
recorder.stop();
recorder.release();
break;
}
}
}
};
recordingThread.start();
recorderStarted = true;
}
}
private void stopListenToMicrophone() {
if (recorderStarted) {
if (recordingThread != null && recordingThread.isAlive() && !recordingThread.isInterrupted()) {
recordingThread.interrupt();
}
recorderStarted = false;
}
}
Now I implemented the recording using the "OnRecordPositionUpdateListener". This looks very promising and the right way to do it according to the SDK docs. Everything seems to work (opening the audio device, read()ing the data, etc.), but the listener is never called.
Does anybody know why?
I found that the OnRecordPositionUpdateListener is ignored until you do your first .read().
In other words, if I set up everything per the docs, my listener never got called. However, if I first called .read() just after doing my initial .start(), then the listener would get called, provided I did a .read() every time the listener was called.
In other words, it almost seems like the listener event is only good for one firing per .read() or some such.
I also found that if I requested fewer than bufferSize/2 samples to be read, the listener would not be called. So it seems the listener is only called AFTER a .read() of at least half the buffer size. To keep using the listener callback, one must call read() each time the listener runs (in other words, put the call to read() in the listener code).
However, the listener seems to be called at a time when the data isn't ready yet, causing blocking.
Also, if your notification period or time is greater than half of your bufferSize, it seems it will never get called.
UPDATE:
As I continue to dig deeper, I have found that the callback seems to ONLY be called when a .read() finishes!
I don't know if that's a bug or a feature. My initial thought would be that I want a callback when it's time to read, but maybe the Android developers had the idea the other way around: you'd just put a while(1){xxxx.read(...)} in a thread by itself, and to save you from having to keep track of each time read() finishes, the callback essentially tells you when a read has finished.
Oh, maybe it just dawned on me, and I think others have pointed this out before me here, but it didn't sink in: the position or period callback must operate on bytes that have already been read...
I guess I'm stuck with using threads.
In this style, one would have a thread continuously calling read() as soon as it returns, since read() blocks and patiently waits for enough data to return.
Then, independently, the callback would call your specified function every X number of samples.
Those who are recording audio via an IntentService might also experience this callback problem. I have been working on this issue lately, and I came up with a simple solution: call your recording method in a separate thread. This should invoke the onPeriodicNotification method while recording without any problems.
Something like this:
public class TalkService extends IntentService {
...
@Override
protected void onHandleIntent(Intent intent) {
Context context = getApplicationContext();
tRecord = new Thread(new recordAudio());
tRecord.start();
...
while (tRecord.isAlive()) {
if (getIsDone()) {
if (aRecorder.getRecordingState() == AudioRecord.RECORDSTATE_STOPPED) {
socketConnection(context, host, port);
}
}
} }
...
class recordAudio implements Runnable {
public void run() {
try {
OutputStream osFile = new FileOutputStream(file);
BufferedOutputStream bosFile = new BufferedOutputStream(osFile);
DataOutputStream dosFile = new DataOutputStream(bosFile);
aRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
sampleRate, channelInMode, encodingMode, bufferSize);
data = new short[bufferSize];
aRecorder.setPositionNotificationPeriod(sampleRate);
aRecorder
.setRecordPositionUpdateListener(new AudioRecord.OnRecordPositionUpdateListener() {
int count = 1;
@Override
public void onPeriodicNotification(
AudioRecord recorder) {
Log.e(WiFiDirect.TAG, "Period notf: " + count++);
if (getRecording() == false) {
aRecorder.stop();
aRecorder.release();
setIsDone(true);
Log.d(WiFiDirect.TAG,
"Recorder stopped and released prematurely");
}
}
@Override
public void onMarkerReached(AudioRecord recorder) {
// TODO Auto-generated method stub
}
});
aRecorder.startRecording();
Log.d(WiFiDirect.TAG, "start Recording");
aRecorder.read(data, 0, bufferSize);
for (int i = 0; i < data.length; i++) {
dosFile.writeShort(data[i]);
}
if (aRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
aRecorder.stop();
aRecorder.release();
setIsDone(true);
Log.d(WiFiDirect.TAG, "Recorder stopped and released");
}
} catch (Exception e) {
// TODO: handle exception
}
}
}