My Android Java application needs to record audio data into RAM and process it.
This is why I use the AudioRecord class and not MediaRecorder (which records only to a file).
Until now, I used a busy loop polling with read() for the audio data. This has worked so far, but it pegs the CPU too much.
Between two polls, I put the thread to sleep to avoid 100% CPU usage.
However, this is not really a clean solution, since the sleep time is not guaranteed and you must subtract a safety margin in order not to lose audio snippets. This is not CPU-optimal; I need as many free CPU cycles as possible for a parallel running thread.
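A minimal sketch of that polling approach (the recording flag and process() handler are illustrative, not from my actual code):

short[] buf = new short[AUDIO_BUFFER_SAMPLEREAD_SIZE];
while (recording) {
    int n = recorder.read(buf, 0, buf.length); // poll for fresh samples
    if (n > 0) {
        process(buf, n); // hypothetical handler for the audio data
    }
    try {
        // Sleep somewhat less than one buffer's duration (200 ms here),
        // keeping a safety margin so no snippets are lost.
        Thread.sleep(150);
    } catch (InterruptedException e) {
        break;
    }
}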
Now I have implemented the recording using OnRecordPositionUpdateListener.
This looks very promising and like the right way to do it according to the SDK docs.
Everything seems to work (opening the audio device, read()ing the data, etc.),
but the listener is never called.
Does anybody know why?
Info:
I am working with a real device, not the emulator. Recording using the busy loop basically works (although it is not satisfying); only the callback listener is never called.
Here is a snippet from my source code:
public class myApplication extends Activity {

    /* audio recording */
    private static final int AUDIO_SAMPLE_FREQ = 16000;
    private static final int AUDIO_BUFFER_BYTESIZE =
            AUDIO_SAMPLE_FREQ * 2 * 3; // 16-bit mono = 3000 ms
    private static final int AUDIO_BUFFER_SAMPLEREAD_SIZE =
            AUDIO_SAMPLE_FREQ / 10 * 2; // = 200 ms
    private short[] mAudioBuffer = null; // audio buffer
    private int mSamplesRead; // how many samples were read most recently
    private AudioRecord mAudioRecorder; // audio recorder

    ...

    private OnRecordPositionUpdateListener mRecordListener =
            new OnRecordPositionUpdateListener() {
        public void onPeriodicNotification(AudioRecord recorder) {
            mSamplesRead = recorder.read(mAudioBuffer, 0, AUDIO_BUFFER_SAMPLEREAD_SIZE);
            if (mSamplesRead > 0) {
                // do something here...
            }
        }

        public void onMarkerReached(AudioRecord recorder) {
            Error("What? Huh!? Where am I?");
        }
    };

    ...

    public void onCreate(Bundle savedInstanceState) {
        try {
            mAudioRecorder = new AudioRecord(
                    android.media.MediaRecorder.AudioSource.MIC,
                    AUDIO_SAMPLE_FREQ,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    AUDIO_BUFFER_BYTESIZE);
        } catch (Exception e) {
            Error("Unable to init audio recording!");
        }
        mAudioBuffer = new short[AUDIO_BUFFER_SAMPLEREAD_SIZE];
        mAudioRecorder.setPositionNotificationPeriod(AUDIO_BUFFER_SAMPLEREAD_SIZE);
        mAudioRecorder.setRecordPositionUpdateListener(mRecordListener);
        mAudioRecorder.startRecording();
        /* test if I can read anything at all... (and yes, this here works!) */
        mSamplesRead = mAudioRecorder.read(mAudioBuffer, 0, AUDIO_BUFFER_SAMPLEREAD_SIZE);
    }
}
I believe the problem is that you still need to do the read loop. If you set up callbacks, they will fire when you've read the number of frames that you specify for the callbacks. But you still need to do the reads; I've tried this and the callbacks get called just fine.

Setting up a marker causes a callback when that number of frames has been read since the start of recording. In other words, you could set the marker far into the future, after many of your reads, and it will fire then. You can set the period to some bigger number of frames and that callback will fire every time that number of frames has been read.

I think they do this so you can do low-level processing of the raw data in a tight loop, and every so often your callback can do summary-level processing. You could use the marker to make it easier to decide when to stop recording (instead of counting in the read loop).
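A minimal sketch of that pattern (the recorder setup, buf, period, and stopped flag are assumed, not taken from the code above):

recorder.setPositionNotificationPeriod(period); // frames between callbacks
recorder.setRecordPositionUpdateListener(listener); // summary-level work goes in the listener
recorder.startRecording();
while (!stopped) {
    // The blocking read() both delivers raw data for the tight loop and
    // advances the position that drives the periodic callbacks.
    int n = recorder.read(buf, 0, buf.length);
    if (n > 0) {
        // low-level processing of the raw samples here
    }
}
recorder.stop();
recorder.release();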
Here is my code used to find the average noise level. Notice that it is based on listener notifications, so it will save device battery. It is definitely based on the examples above; those examples saved me much time, thanks.
private AudioRecord recorder;
private boolean recorderStarted;
private Thread recordingThread;
private int bufferSize = 800;
private short[][] buffers = new short[256][bufferSize];
private int[] averages = new int[256];
private int lastBuffer = 0;

protected void startListenToMicrophone() {
    if (!recorderStarted) {
        recordingThread = new Thread() {
            @Override
            public void run() {
                int minBufferSize = AudioRecord.getMinBufferSize(8000,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT);
                recorder = new AudioRecord(AudioSource.MIC, 8000,
                        AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minBufferSize * 10);
                recorder.setPositionNotificationPeriod(bufferSize);
                recorder.setRecordPositionUpdateListener(new OnRecordPositionUpdateListener() {
                    @Override
                    public void onPeriodicNotification(AudioRecord recorder) {
                        short[] buffer = buffers[++lastBuffer % buffers.length];
                        recorder.read(buffer, 0, bufferSize);
                        long sum = 0;
                        for (int i = 0; i < bufferSize; ++i) {
                            sum += Math.abs(buffer[i]);
                        }
                        averages[lastBuffer % buffers.length] = (int) (sum / bufferSize);
                        lastBuffer = lastBuffer % buffers.length;
                    }

                    @Override
                    public void onMarkerReached(AudioRecord recorder) {
                    }
                });
                recorder.startRecording();
                // Prime the recorder with an initial read; the periodic
                // callbacks only start firing once reading has begun.
                short[] buffer = buffers[lastBuffer % buffers.length];
                recorder.read(buffer, 0, bufferSize);
                while (true) {
                    if (isInterrupted()) {
                        recorder.stop();
                        recorder.release();
                        break;
                    }
                    try {
                        Thread.sleep(100); // idle instead of busy-waiting
                    } catch (InterruptedException e) {
                        recorder.stop();
                        recorder.release();
                        break;
                    }
                }
            }
        };
        recordingThread.start();
        recorderStarted = true;
    }
}

private void stopListenToMicrophone() {
    if (recorderStarted) {
        if (recordingThread != null && recordingThread.isAlive()
                && !recordingThread.isInterrupted()) {
            recordingThread.interrupt();
        }
        recorderStarted = false;
    }
}
Now I implemented the recording using the OnRecordPositionUpdateListener. This looks very promising and the right way to do it according to the SDK docs. Everything seems to work (opening the audio device, read()ing the data, etc.) but the listener is never called.

Does anybody know why?
I found that the OnRecordPositionUpdateListener is ignored until you do your first .read().

In other words, I found that if I set up everything per the docs, the listener never got called. However, if I first called a .read() just after doing my initial .start(), then the listener would get called, provided I did a .read() every time the listener was called. In other words, it almost seems like the listener event is only good for once per .read() or some such.

I also found that if I requested fewer than bufferSize/2 samples to be read, the listener would not be called. So it seems that the listener is only called after a .read() of at least half the buffer size. To keep using the listener callback, one must call read() each time the listener runs (in other words, put the call to read() in the listener code). However, the listener seems to be called at a time when the data isn't ready yet, causing blocking.

Also, if your notification period or time is greater than half of your buffer size, it will never get called, it seems.
UPDATE:

As I continued to dig deeper, I found that the callback seems to ONLY be called when a .read() finishes!

I don't know if that's a bug or a feature. My initial thought would be that I want a callback when it's time to read, but maybe the Android developers had the idea the other way around: you'd just put a while(1){xxxx.read(...)} in a thread by itself, and to save you from having to keep track of each time read() finished, the callback essentially tells you when a read has finished.

Oh, maybe it just dawned on me, and I think others have pointed this out before me here, but it didn't sink in: the position or period callback must operate on bytes that have already been read...

I guess maybe I'm stuck with using threads. In this style, one would have a thread continuously calling read() as soon as it returns, since read() is blocking and patiently waits for enough data to return. Then, independently, the callback would call your specified function every x number of samples.
Those who are recording audio via an IntentService might also experience this callback problem. I have been working on this issue lately and came up with a simple solution: call your recording method in a separate thread. This should invoke the onPeriodicNotification method while recording without any problems.
Something like this:
public class TalkService extends IntentService {
    ...
    @Override
    protected void onHandleIntent(Intent intent) {
        Context context = getApplicationContext();
        tRecord = new Thread(new recordAudio());
        tRecord.start();
        ...
        while (tRecord.isAlive()) {
            if (getIsDone()) {
                if (aRecorder.getRecordingState() == AudioRecord.RECORDSTATE_STOPPED) {
                    socketConnection(context, host, port);
                }
            }
        }
    }
    ...
    class recordAudio implements Runnable {
        public void run() {
            try {
                OutputStream osFile = new FileOutputStream(file);
                BufferedOutputStream bosFile = new BufferedOutputStream(osFile);
                DataOutputStream dosFile = new DataOutputStream(bosFile);
                aRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                        sampleRate, channelInMode, encodingMode, bufferSize);
                data = new short[bufferSize];
                aRecorder.setPositionNotificationPeriod(sampleRate);
                aRecorder.setRecordPositionUpdateListener(
                        new AudioRecord.OnRecordPositionUpdateListener() {
                    int count = 1;

                    @Override
                    public void onPeriodicNotification(AudioRecord recorder) {
                        Log.e(WiFiDirect.TAG, "Period notf: " + count++);
                        if (getRecording() == false) {
                            aRecorder.stop();
                            aRecorder.release();
                            setIsDone(true);
                            Log.d(WiFiDirect.TAG,
                                    "Recorder stopped and released prematurely");
                        }
                    }

                    @Override
                    public void onMarkerReached(AudioRecord recorder) {
                        // not used
                    }
                });
                aRecorder.startRecording();
                Log.d(WiFiDirect.TAG, "start Recording");
                aRecorder.read(data, 0, bufferSize);
                for (int i = 0; i < data.length; i++) {
                    dosFile.writeShort(data[i]);
                }
                if (aRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
                    aRecorder.stop();
                    aRecorder.release();
                    setIsDone(true);
                    Log.d(WiFiDirect.TAG, "Recorder stopped and released");
                }
            } catch (Exception e) {
                // TODO: handle exception
            }
        }
    }
}
My Android OS is Android M, on a Nexus 6.
I implemented an AndroidSpeakerWriter as follows:
public class AndroidSpeakerWriter {
    private final static String TAG = "AndroidSpeakerWriter";
    private AudioTrack audioTrack;
    short[] buffer;

    public AndroidSpeakerWriter() {
        buffer = new short[1024];
    }

    public void init(int sampleRateInHZ) {
        int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHZ,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
        audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRateInHZ,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize,
                AudioTrack.MODE_STREAM); // 0-static 1-stream
    }

    public void fillBuffer(short[] samples) {
        if (buffer.length < samples.length) {
            buffer = new short[samples.length];
        }
        System.arraycopy(samples, 0, buffer, 0, samples.length);
    }

    public void writeSamples(short[] samples) {
        fillBuffer(samples);
        audioTrack.write(buffer, 0, samples.length);
    }

    public void stop() {
        audioTrack.stop();
    }

    public void play() {
        audioTrack.play();
    }
}
Then I just send the samples when I click a button:
public void play(final short[] signal) {
    if (signal == null) {
        Log.d(TAG, "play: a null signal");
        return;
    }
    Thread t = new Thread(new Runnable() {
        @Override
        public void run() {
            android.os.Process
                    .setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
            androidSpeakerWriter.play();
            androidSpeakerWriter.writeSamples(signal);
            androidSpeakerWriter.stop();
        }
    });
    t.start();
}
The problem is that the device does not beep every time I click the button.
Sometimes it works, sometimes it doesn't.
There is no such problem when I run this on an old Galaxy Nexus phone with Android 4.3. Has anybody encountered a similar problem? Thanks in advance for any help.
One thing is that my beep is currently pretty short (256 samples), not even close to minBufferSize.
According to the (vague) documentation, the bufferSizeInBytes in the AudioTrack constructor for static mode should be the length of the audio you want to play.
So is there still a minimum size constraint on the buffer even in static mode? Why can a Galaxy Nexus play a 256-sample clip in static mode while a Nexus 6 cannot?
I use AudioManager to get the native buffer size / sampling rate:
Galaxy Nexus: 144 frames / 44100 Hz; Nexus 6: 192 frames / 48000 Hz.
I found these related resources:
AudioRecord and AudioTrack latency
Does AudioTrack buffer need to be full always in streaming mode?
https://github.com/igorski/MWEngine/wiki/Understanding-Android-audio-towards-achieving-low-latency-response
I believe it is caused by improper synchronization between threads. Your androidSpeakerWriter instance runs continuously in a different thread, calling play(), writeSamples(), and stop() in turn. Each click of the button triggers the creation of a new thread using the same androidSpeakerWriter instance.
So while thread A is executing androidSpeakerWriter.play(), thread B might be executing androidSpeakerWriter.writeSamples(), which might overwrite the audio data currently being played.
Try
synchronized (androidSpeakerWriter) {
    androidSpeakerWriter.play();
    androidSpeakerWriter.writeSamples(signal);
    androidSpeakerWriter.stop();
}
MODE_STREAM is used if you must play long audio data that will not fit into memory. If you need to play a short sound such as a beep, you can use MODE_STATIC when creating the AudioTrack, then change your playback code to the following:
synchronized (androidSpeakerWriter) {
    androidSpeakerWriter.writeSamples(signal);
    androidSpeakerWriter.play();
}
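For reference, a minimal sketch of creating such a static-mode track for a short 16-bit mono clip (signal and sampleRate are assumed; this is not from the code above):

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        signal.length * 2, // buffer size in bytes must cover the whole clip
        AudioTrack.MODE_STATIC);
track.write(signal, 0, signal.length); // load the clip first
track.play(); // then start playback
// To replay the same clip later:
// track.stop(); track.reloadStaticData(); track.play();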
I'm working on an Android app and I would like to play some short sounds (~2 s). I tried SoundPool, but it doesn't really suit me since it can't check whether a sound is already playing. So I decided to use AudioTrack.
It works quite well, BUT most of the time there is a "click" sound when it begins to play. I checked my audio files and they are clean.
I use AudioTrack in stream mode. I saw that static mode is better for short sounds, but after many searches I still don't understand how to make it work.
I also read that the clicking noise can be caused by the header of the WAV file, so maybe the noise would disappear if I skipped this header with the setPlaybackHeadPosition(int positionInFrames) function (which is supposed to work only in static mode).
Here is my code (so the problem is the clicking noise at the beginning):
int minBufferSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBufferSize, AudioTrack.MODE_STREAM);
audioTrack.play();
int i = 0;
int bufferSize = 2048; // don't really know which value to put
audioTrack.setPlaybackRate(88200);
byte[] buffer = new byte[bufferSize];
// there we open the wav file >
InputStream inputStream = getResources().openRawResource(R.raw.abordage);
try {
    while ((i = inputStream.read(buffer)) != -1) {
        audioTrack.write(buffer, 0, i);
    }
} catch (IOException e) {
    e.printStackTrace();
}
try {
    inputStream.close();
} catch (IOException e) {
    e.printStackTrace();
}
Does anyone have a solution to avoid that noise? I tried this, which works sometimes but not every time. Could someone show me how to implement AudioTrack in MODE_STATIC?
Thank you.
I found that Scott Stensland's reasoning fit my issue (thanks!).
I eliminated the pop by running a dead-simple linear fade-in filter over the beginning of the sample array. The filter makes sample values start from 0 and slowly increase in amplitude to their original value. By always starting at a value of 0 at the zero-crossover point, the pop never occurs.
A similar fade-out filter was applied at the end of the sample array. The filter duration can easily be adjusted.
import android.util.Log;

public class FadeInFadeOutFilter {
    private static final String TAG = FadeInFadeOutFilter.class.getSimpleName();
    private final int filterDurationInSamples;

    public FadeInFadeOutFilter(int filterDurationInSamples) {
        this.filterDurationInSamples = filterDurationInSamples;
    }

    public void filter(short[] audioShortArray) {
        filter(audioShortArray, audioShortArray.length);
    }

    public void filter(short[] audioShortArray, int audioShortArraySize) {
        if (audioShortArraySize / 2 <= filterDurationInSamples) {
            Log.w(TAG, "filtering audioShortArray with fewer samples than"
                    + " filterDurationInSamples; untested, pops or even crashes may occur."
                    + " audioShortArraySize=" + audioShortArraySize
                    + ", filterDurationInSamples=" + filterDurationInSamples);
        }
        final int I = Math.min(filterDurationInSamples, audioShortArraySize / 2);
        // Perform fade-in and fade-out simultaneously in one loop.
        final int fadeOutOffset = audioShortArraySize - filterDurationInSamples;
        for (int i = 0; i < I; i++) {
            // Fade-in at the beginning.
            final double fadeInAmplification = (double) i / I; // linear ramp-up 0..1
            audioShortArray[i] = (short) (fadeInAmplification * audioShortArray[i]);
            // Fade-out at the end.
            final double fadeOutAmplification = 1 - fadeInAmplification; // linear ramp-down 1..0
            final int j = i + fadeOutOffset;
            audioShortArray[j] = (short) (fadeOutAmplification * audioShortArray[j]);
        }
    }
}
In my case, it was the WAV header. Skipping the first 44 bytes (the canonical WAV header, which is not audio data) before streaming the rest:
byte[] buf44 = new byte[44];
int read = inputStream.read(buf44, 0, 44); // discard the 44-byte WAV header
...solved it.
A common cause of an audio "pop" is the rendering process not starting/stopping the sound at the zero-crossover point (assuming a min/max range of -1 to +1, the crossover is 0). Transducers like speakers or earbuds are at rest (no sound input) at this zero level. If an audio rendering process fails to start/stop from/to this zero, the transducer is being asked to do the impossible: instantaneously go from its resting state to some non-zero position in its min/max movement range (or vice versa if you get a "pop" at the end).
Finally, after a lot of experimentation, I made it work without the click noise. Here is my code (unfortunately, I can't read the size of the inputStream since the getChannel().size() method only works with the FileInputStream type):
try {
    long totalAudioLen = 0;
    InputStream inputStream = getResources().openRawResource(R.raw.abordage); // open the file
    totalAudioLen = inputStream.available();
    byte[] rawBytes = new byte[(int) totalAudioLen];
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
            44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_16BIT,
            (int) totalAudioLen,
            AudioTrack.MODE_STATIC);
    int offset = 0;
    int numRead = 0;
    track.setPlaybackHeadPosition(100); // IMPORTANT to skip the click
    while (offset < rawBytes.length
            && (numRead = inputStream.read(rawBytes, offset, rawBytes.length - offset)) >= 0) {
        offset += numRead; // reads the whole resource into rawBytes
    }
    track.write(rawBytes, 0, (int) totalAudioLen); // fill the static buffer
    track.play(); // launch the playback
    track.setPlaybackRate(88200);
    inputStream.close();
} catch (FileNotFoundException e) {
    Log.e(TAG, "Error loading audio to bytes", e);
} catch (IOException e) {
    Log.e(TAG, "Error loading audio to bytes", e);
} catch (IllegalArgumentException e) {
    Log.e(TAG, "Error loading audio to bytes", e);
}
So the solution to skip the clicking noise is to use MODE_STATIC and the setPlaybackHeadPosition function to skip the beginning of the audio file (which is probably the header, or I don't know what).
I hope this piece of code will help someone; I spent too much time trying to find a static-mode code sample without finding a way to load a raw resource.
Edit: After testing this solution on various devices, it appears that they have the clicking noise anyway.
For "setPlaybackHeadPosition" to work, you have to play and pause first. It doesn't work if your track is stopped or not started. Trust me. This is dumb. But it works:
track.play();
track.pause();
track.setPlaybackHeadPosition(100);
// then continue with track.write, track.play, etc.
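Combined with the static-mode snippet earlier, the sequence would look like this (a sketch; rawBytes and the 100-frame offset come from that snippet, the rest is assumed):

track.write(rawBytes, 0, rawBytes.length); // fill the static buffer
track.play();                              // start...
track.pause();                             // ...and immediately pause
track.setPlaybackHeadPosition(100);        // now the call is honored
track.play();                              // resume from the new position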
I'm trying to record from the MIC directly into a short array.
The goal is not to write a file with the audio track, just to save it within a short array.
I've tried several methods and the best I've found is recording with AudioRecord and playing it with AudioTrack. I found a good class here:
Android: Need to record mic input
This class does all I need; I just have to modify it to achieve my desired result, but... I don't get it quite right, I'm missing something...
Here is my modification (not working at all):
private class Audio extends Thread {
    private boolean stopped = false;

    /**
     * Give the thread high priority so that it's not canceled unexpectedly, and start it.
     */
    private Audio() {
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        start();
    }

    @Override
    public void run() {
        Log.i("Audio", "Running Audio Thread");
        AudioRecord recorder = null;
        AudioTrack track = null;
        //short[][] buffers = new short[256][160];
        int ix = 0;

        /*
         * Initialize buffer to hold continuously recorded audio data, start recording, and start
         * playback.
         */
        try {
            int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT);
            recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, N * 10);
            short[] buff = new short[N];
            recorder.startRecording();

            /*
             * Loops until something outside of this thread stops it.
             * Reads the data from the recorder and writes it to the audio track for playback.
             */
            while (!stopped) {
                //Log.i("Map", "Writing new data to buffer");
                //short[] buffer = buffer[ix++ % buffer.length];
                N = recorder.read(buff, 0, buff.length);
            }
            recorder.stop();
            recorder.release();
            track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10,
                    AudioTrack.MODE_STREAM);
            track.play();
            for (int i = 0; i < buff.length; i++) {
                track.write(buff, i, buff.length);
            }
        } catch (Exception x) {
            //Log.e("Audio", x.getMessage());
            x.printStackTrace();
        } finally {
            track.stop();
            track.release();
        }
    }

    /**
     * Called from outside of the thread in order to stop the recording/playback loop.
     */
    private void close() {
        stopped = true;
    }
}
What I need is to record the sound into the short-array buffer and, when the user pushes a button, play it... But right now, I'm trying to make it so that recording stops and the sound starts playing when the user pushes the button...
Can anyone help me?
Thanks.
You need to restructure the code to do what you want it to do. If I understand correctly, you want to read sound until 'stopped' is set to true, then play the data.
Just so you understand, that is potentially a lot of buffered data, depending on how long the recording time is. You could write it to a file or store a series of buffers in some abstract data type.
Just to get something working, create a Vector of short[] and allocate a new short[] buffer in your while(!stopped) loop, then stuff it into the vector.
After the while loop stops, you can iterate through the vector and write the buffers to the AudioTrack; see the sketch below.
As you now understand, the blip you were hearing is just the last 20 ms or so of audio, since your buffer only kept that last little bit.
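A minimal sketch of that restructuring, reusing the 8 kHz mono setup from the question (stopped, recorder, and N come from the original code; the java.util.Vector storage is the suggestion above):

Vector<short[]> clips = new Vector<short[]>();
while (!stopped) {
    short[] chunk = new short[N]; // fresh buffer per read, so nothing is overwritten
    int n = recorder.read(chunk, 0, chunk.length);
    if (n > 0) {
        clips.add(chunk);
    }
}
recorder.stop();
recorder.release();

// Playback: write every stored chunk to the track in order.
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        N * 10, AudioTrack.MODE_STREAM);
track.play();
for (short[] chunk : clips) {
    track.write(chunk, 0, chunk.length);
}
track.stop();
track.release();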
I'm having some issues using the AudioRecord class. I want to store the recorded data in a buffer, but I'm not sure what the proper way to achieve that is. I went through a great number of examples, but most of them were complicated and represented many different approaches. I'm looking for a simple one, or a simple explanation.
Here are the audio settings for my project:
int audioSource = AudioSource.MIC;
int sampleRateInHz = 8000;
int channelConfig = AudioFormat.CHANNEL_IN_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int bufferSizeInBytes = AudioRecord.getMinBufferSize(sampleRateInHz, channelConfig, audioFormat);
short[] buffer = new short[bufferSizeInBytes];
AudioRecord audioRecorder = new AudioRecord(audioSource,
sampleRateInHz,
channelConfig,
audioFormat,
bufferSizeInBytes);
I'm trying to create a Recording function:
public void Recording() {
audioRecorder.startRecording();
...
audioRecorder.stop();
audioRecorder.release();
}
I know that I'm supposed to use the .read(short[] audioData, int offsetInShorts, int sizeInShorts) function, and here my problems start. I'm not sure how the audioData buffer works; I assume the function puts the recorded samples into audioData. What happens when it is completely filled with data? Does it start rewriting from the earliest position? If so, I believe I have to copy all the collected samples somewhere else. That raises another question: how can I check whether the buffer passed to .read(...) is already full? Do I need to measure time and then copy the buffer contents, or is there another way? Also, do I need to create a thread for the whole recording operation?
Sorry for asking so many questions in one topic :)
Answers to your questions:
recorder.read(...) does not necessarily read any data at all. You should probably rewrite that loop to pause for a short while (e.g., 50 ms) between calls to read(). It should also not queue the buffer until the buffer has data. Also, since the buffer may not be full, you probably need to use a data structure that maintains a count of the number of bytes; a ByteBuffer comes to mind as a good candidate. You can stuff bytes into it in the read loop, and when it gets full enough, queue it for transmission and start another one.
Of course you need to create a thread to loop in, as shown in the code below.
Here's a modified version of the recording loop that does proper error checking. It uses a Queue<ByteBuffer> instead of a Queue<byte[]>:
private void startRecording() {
    recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        @Override
        public void run() {
            byte[] bbarray = new byte[BufferElements];
            while (isRecording) {
                int result = recorder.read(bbarray, 0, BufferElements);
                System.out.println("READ DATA");
                if (result > 0) {
                    // Copy only the bytes actually read into a fresh
                    // buffer before queueing it.
                    ByteBuffer bData = ByteBuffer.allocate(result);
                    bData.put(bbarray, 0, result);
                    bData.flip();
                    qArray.add(bData);
                    // -- your stuff --
                } else if (result == AudioRecord.ERROR_INVALID_OPERATION) {
                    Log.e("Recording", "Invalid operation error");
                    break;
                } else if (result == AudioRecord.ERROR_BAD_VALUE) {
                    Log.e("Recording", "Bad value error");
                    break;
                } else if (result == AudioRecord.ERROR) {
                    Log.e("Recording", "Unknown error");
                    break;
                }
                try {
                    Thread.sleep(10); // brief pause between reads
                } catch (InterruptedException e) {
                    break;
                }
            }
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
Of course, somewhere you'll need to call recorder.startRecording() or you won't get any data.
For a working sample, look at this example.
I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the general purpose is to play a sound with an AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried about everything, and there is still no sound coming out of my speakers (while Android's integrated MediaPlayer has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which wraps the AudioTrack. It receives a buffer in which the sound is contained.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Error while instantiating AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A Model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (440 Hz) for debugging.
The buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        // First fill about 300 ms.
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000.
        // For debugging, only buffer[0] is useful.
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // Afterwards fill about 200 ms.
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440)
    // this is probably wrong due to the sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
- The TimerTask works fine; it does its job.
- The buffer values seem coherent, and the buffer size is bigger than minBufferSize.
- The AudioTrack's play state is "playing".
- No exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding AudioTrack and short-buffer input: the specified write size should be the size of the buffer itself in shorts (buffer.length) and not the size in bytes.
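Concretely, the fix to the addToQueue method shown earlier is (same variables as that snippet):

audioTrack.write(buffer, 0, buffer.length); // size in shorts, not buffer.length * Short.SIZE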