I use AudioRecord to record audio on an Android device. To create an instance of AudioRecord, you must specify a fixed buffer size for storing the audio data, and then pull data from this buffer continuously. If the buffer is too small or data is fetched too slowly, the buffer will overflow!
To avoid this, I want to set the buffer size as large as reasonably possible and fetch data in time.
AudioRecord provides the getMinBufferSize(int, int, int) method to get the minimum buffer size the audio hardware can support. But that is the minimum size, not necessarily the proper size.
My question is: how do I calculate and set the proper buffer size?
Here is how I use the AudioRecord:
audioRecord = new AudioRecord(audio_source, sampleRate, audio_channel, audio_encoding, buffer_size);
audioRecord.startRecording();
new Thread(new Runnable() {
    @Override
    public void run() {
        while (audioRecord.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            // fetch data from the AudioRecord's internal buffer into readBuffer
            readSize = audioRecord.read(readBuffer, 0, useReadBufferSize);
            // hand readBuffer off for processing as quickly as possible; avoid blocking this thread
        }
    }
}).start();
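In practice a common heuristic (my assumption here, not something the AudioRecord documentation prescribes) is to treat getMinBufferSize() as a floor and scale it up for headroom:
int minSize = AudioRecord.getMinBufferSize(sampleRate, audio_channel, audio_encoding);
// 4x is an arbitrary headroom factor; roughly one second of 16-bit mono audio is another common target
int buffer_size = Math.max(minSize * 4, sampleRate * 2);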
I am developing an Android app that plays a live Speex audio stream, so I used the jspeex library.
The audio stream is 11 kHz, 16-bit.
On the Android side I have done the following:
SpeexDecoder decoder = new SpeexDecoder();
decoder.init(1, 11025, 1, true); // mode 1 = wideband, 11025 Hz, mono, enhanced decoding on
decoder.processData(subdata, 0, subdata.length);
byte[] decoded_data = new byte[decoder.getProcessedDataByteSize()];
int result = decoder.getProcessedData(decoded_data, 0);
When this decoded data is played by an AudioTrack, part of the audio is clipped.
Also, when the decoder is set to narrowband mode (first parameter set to 0), the sound quality is worse.
I wonder whether there is a parameter-configuration mistake in my code.
Any help or advice is appreciated. Thanks in advance.
Sampling rate and buffer size should be set in an optimized way for the specific device. For example, you can use AudioRecord.getMinBufferSize() to obtain the best size for your buffer:
int sampleRate = 11025; // also try other standard sample rates
int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, // CHANNEL_CONFIGURATION_MONO is the deprecated equivalent
        AudioFormat.ENCODING_PCM_16BIT);
If your AudioTrack has a buffer that is too small or too large, you will experience audio glitches. I suggest you take a look here and play around with these values (sampleRate and bufferSize).
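For the playback side the same pattern applies with AudioTrack.getMinBufferSize(). A minimal sketch (the AudioTrack setup below is an assumption, since the question does not show the player code):
int outSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack player = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        outSize, AudioTrack.MODE_STREAM);
player.play();
player.write(decoded_data, 0, decoded_data.length); // feed each decoded block as it arrives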
I'd like to capture the outgoing audio from a game and record it into an audio file as it's played. Is this possible within the framework in OpenSL? Like by connecting the OutputMix to an AudioRecorder, or something?
You could register a callback on the buffer queue and obtain the output buffer before/after it is enqueued for output. You could keep a wavBuffer (a short array the length of the buffer size) that is written into each time a new buffer is enqueued; the contents of this buffer are then written to a file.
outBuffer = p->outputBuffer[p->currentOutputBuffer]; // obtain the float output buffer
for ( int i = 0; i < bufferSize; ++i )
    wavBuffer[ i ] = ( short )( outBuffer[ i ] * 32767 ); // scale float [-1, 1] to 16-bit short
// now append the contents of wavBuffer to the file
The basic OpenSL setup for the queue callback is explained in some detail on this page.
And a very basic means of creating a WAV file in C++ can be found here. Note that you must have a pretty definitive idea of the actual size of the total WAV file, as it is part of the header.
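Although that example is in C++, the header layout itself is easy to sketch in Java too. This is a minimal 44-byte header for 16-bit PCM mono (a sketch under the assumption, noted above, that the total data length is known up front):
import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

static void writeWavHeader(OutputStream out, int sampleRate, int numDataBytes) throws IOException {
    int channels = 1;
    ByteBuffer header = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    header.put("RIFF".getBytes("US-ASCII"));
    header.putInt(36 + numDataBytes);              // chunk size: total file size minus 8
    header.put("WAVE".getBytes("US-ASCII"));
    header.put("fmt ".getBytes("US-ASCII"));
    header.putInt(16);                             // fmt sub-chunk size for PCM
    header.putShort((short) 1);                    // audio format 1 = PCM
    header.putShort((short) channels);
    header.putInt(sampleRate);
    header.putInt(sampleRate * channels * 2);      // byte rate (2 bytes per 16-bit sample)
    header.putShort((short) (channels * 2));       // block align
    header.putShort((short) 16);                   // bits per sample
    header.put("data".getBytes("US-ASCII"));
    header.putInt(numDataBytes);                   // size of the PCM data that follows
    out.write(header.array());
}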
I capture sound in real time into a buffer and then process that data, but sometimes I get a "buffer overflow" warning, which causes problems in the processing.
I created the AudioRecord like this:
bufferSize = ???;
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        RECORDER_SAMPLERATE, RECORDER_CHANNELS,
        RECORDER_AUDIO_ENCODING, bufferSize);
but there is no getMaximumBufferSize method or anything like that (only getMinBufferSize, and with that I get buffer overflows). And I think that setting my own buffer size blindly is not a good solution.
According to the API:
Upon creation, an AudioRecord object initializes its associated audio buffer that it will fill with the new audio data. The size of this buffer, specified during the construction, determines how long an AudioRecord can record before "over-running" data that has not been read yet. Data should be read from the audio hardware in chunks of sizes inferior to the total recording buffer size.
Your buffer should be large enough for the amount of buffered data you want to support, i.e. bitrate × time, and you need to make sure that you read from the AudioRecord consistently so that it does not fill the buffer, and that the size of the chunks you read is smaller than the buffer size in the AudioRecord. I like to read/write in 8k chunks but have seen other values too.
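As a concrete sketch of that bitrate × time rule (the one-second target and the variable names are mine, not from the API):
int bytesPerSample = 2;                     // PCM 16-bit
int channels = 1;                           // mono
double bufferSeconds = 1.0;                 // how much unread backlog you want to tolerate
int wanted = (int) (RECORDER_SAMPLERATE * bytesPerSample * channels * bufferSeconds);
int minSize = AudioRecord.getMinBufferSize(RECORDER_SAMPLERATE,
        RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING);
bufferSize = Math.max(wanted, minSize);     // never go below the hardware minimum
Then read from the AudioRecord in chunks well below bufferSize, e.g. the 8k mentioned above.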
I am reading the Android documentation on MediaCodec and other online tutorials/examples. As I understand it, the way to use MediaCodec is like this (decoder example in pseudocode):
//-------- prepare audio decoder, format, buffers, and files --------
MediaExtractor extractor;
MediaCodec codec;
ByteBuffer[] codecInputBuffers;
ByteBuffer[] codecOutputBuffers;
extractor = new MediaExtractor();
extractor.setDataSource();
MediaFormat format = extractor.getTrackFormat(0);
//---------------- start decoding ----------------
codec = MediaCodec.createDecoderByType(mime);
codec.configure(format, null /* surface */, null /* crypto */, 0 /* flags */);
codec.start();
codecInputBuffers = codec.getInputBuffers();
codecOutputBuffers = codec.getOutputBuffers();
extractor.selectTrack(0);
//---------------- decoder loop ----------------
while (MP3_file_not_EOS) {
    //-------- take an input buffer from the codec --------
    codec.dequeueInputBuffer();
    //---- fill the input buffer with data from the MP3 file ----
    extractor.readSampleData();
    extractor.advance(); // move the extractor to the next access unit
    //-------- give the input buffer back so the codec can consume it --------
    codec.queueInputBuffer();
    //-------- take an output buffer from the codec --------
    codec.dequeueOutputBuffer();
    //-- copy PCM samples from the output buffer into another buffer --
    short[] PCMoutBuffer = copy_of(outputBuffer);
    //-------- release the output buffer so the codec can refill it --------
    codec.releaseOutputBuffer();
    //-------- write PCMoutBuffer to a file, or play it --------
}
//---------------- stop decoding ----------------
codec.stop();
codec.release();
Is this the right way to use the MediaCodec? If not, please enlighten me with the right approach. If this is the right way, how do I measure the performance of the MediaCodec? Is it the time difference between when codec.dequeueOutputBuffer() returns and when codec.queueInputBuffer() returns? I'd like an accuracy/precision of microseconds. Your ideas and thoughts are appreciated.
(merging comments and expanding slightly)
You can't simply time how long a single buffer submission takes, because the codec might want to queue up more than one buffer before doing anything. You will need to measure it in aggregate, timing the duration of the entire file decode with System.nanoTime(). If you turn the copy_of operation into a no-op and just discard the decoded data, you'll keep the output side (writing the decoded data to disk) out of the calculation.
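A sketch of that aggregate measurement (the loop condition matches the question's pseudocode; nothing codec-specific is added here):
long startNs = System.nanoTime();
while (MP3_file_not_EOS) {
    // same dequeue/queue loop as in the question, but with copy_of()
    // replaced by a no-op that simply discards the decoded bytes
}
long totalUs = (System.nanoTime() - startNs) / 1000; // whole-file decode time in microseconds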
Excluding the I/O from the input side is more difficult. As noted in the MediaCodec docs, the encoded input/output "is not a stream of bytes, it's a stream of access units". So you'd have to populate any necessary codec-specific-data keys in MediaFormat, and then identify individual frames of input so you can properly feed the codec.
An easier but less accurate approach would be to conduct a separate pass in which you time how long it takes to read the input data, and then subtract that from the total time. In your sample code, you would keep the operations on extractor (like readSampleData), but do nothing with codec (maybe dequeue one buffer and just re-use it every time). That way you only measure the MediaExtractor overhead. The trick here is to run it twice, immediately before the full test, and ignore the results from the first -- the first pass "warms up" the disk cache.
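That input-only pass might look like this (a sketch; remember to run it twice and keep only the second timing, per the cache warm-up note above):
long startNs = System.nanoTime();
ByteBuffer scratch = ByteBuffer.allocate(64 * 1024); // reused for every sample
while (extractor.readSampleData(scratch, 0) >= 0) {
    extractor.advance(); // step through the file without feeding the codec
}
long extractorUs = (System.nanoTime() - startNs) / 1000;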
If you're interested in performance differences between devices, it may be the case that the difference in input I/O time, especially from a "warm" cache, is similar enough and small enough that you can just disregard it and not go through all the extra gymnastics.
I have saved recorded raw PCM audio into a file, rxrawpcm.pcm. After that I tried to play the PCM file but was unable to: I don't hear the recorded voice, only a buzzing sound.
Configuration
AudioRecord and AudioTrack configuration:
Stream Type: STREAM_VOICE_CALL
Sample Rate: 8000
Audio Format: PCM_16BIT
Mode: MODE_STREAM
Channel Config: CHANNEL_CONFIGURATION_MONO
Recording
byte[] buffer = new byte[1600];
int read = audioRecord.read(buffer, 0, buffer.length);
if (recordAudio) {
    if (out != null) {
        out.write(buffer, 0, read); // write only the bytes actually read
    }
}
Player Side
FileInputStream fis = new FileInputStream(rxFile);
byte[] buffer = new byte[1600];
int read;
while ((read = fis.read(buffer)) != -1) {
    audioPlayer.write(buffer, 0, read); // play back only the bytes actually read
}
Your buffer size may be too small. You are supposed to use the getMinBufferSize method to determine the smallest buffer size that doesn't result in buffer overflows. The top-voted answer to the question Android AudioRecord class - process live mic audio quickly, set up callback function demonstrates how to properly set up audio recording with an appropriate buffer size.
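A minimal sketch of that setup, sized with getMinBufferSize() and writing only the bytes actually read (the variable names here are assumptions):
int bufferSize = AudioRecord.getMinBufferSize(8000,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
byte[] buffer = new byte[bufferSize];
recorder.startRecording();
while (recordAudio) {
    int read = recorder.read(buffer, 0, buffer.length);
    if (read > 0 && out != null) {
        out.write(buffer, 0, read); // persist only the bytes that were actually captured
    }
}
recorder.stop();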