I made an audio recorder using MediaRecorder, saving the file as an m4a, like this:
recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setAudioEncodingBitRate(128000);
recorder.setAudioSamplingRate(44100);
recorder.setOutputFile(AveActivity.REC_DIR + "/" + song);
Very simple, works great.
Now I want to add gain (i.e., make the volume considerably greater), since the audio is too low: I want my recorder to record birds and wildlife, and being wild means the subject is almost always far away....
So I migrated my code to AudioRecord, based on this thread. The problem is that this gives me raw PCM audio, which is a pain to convert to WAV (I did that too: first saving the PCM, then converting it to WAV). And yet the WAV files are six times bigger than the m4a files.
First question: Is there any way to apply gain before saving the file when using MediaRecorder?
Second question: Is there an easy way to encode the PCM audio directly to m4a "on the fly", without saving the PCM and re-encoding it? I mean, I get the PCM using a read call like this:
recorder.startRecording();
recordingThread = new Thread(this::writeAudioDataToFile, "AudioRecorder Thread");
recordingThread.start();
...
private void writeAudioDataToFile() {
    ....
    while (recorder != null) {
        int numRead = recorder.read(sData, 0, bufferSize);
        // **Here is the gain! Hardcoded for now...**
        int gain = 8;
        if (numRead > 0) {
            for (int i = 0; i < numRead; ++i) {
                // scale each sample, clamping to the 16-bit range to avoid wrap-around
                sData[i] = (short) Math.max(Math.min(sData[i] * gain, Short.MAX_VALUE), Short.MIN_VALUE);
            }
            try {
                // write only what was actually read: 2 bytes per 16-bit sample
                os.write(short2byte(sData), 0, 2 * numRead);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    ....
}
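For the second question, here is a minimal sketch of what on-the-fly encoding could look like: feed each PCM chunk into a MediaCodec AAC encoder and write its output to an .m4a file through MediaMuxer, with no intermediate PCM file. This assumes API 21+, 16-bit mono PCM, and that each chunk fits the codec's input buffer; the class name PcmToM4aEncoder and its methods are illustrative, not an established API. End-of-stream signaling and error handling are omitted for brevity.

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.io.IOException;
import java.nio.ByteBuffer;

public class PcmToM4aEncoder {
    private final MediaCodec codec;
    private final MediaMuxer muxer;
    private final int sampleRate;
    private int trackIndex = -1;
    private long presentationTimeUs = 0;

    public PcmToM4aEncoder(String outputPath, int sampleRate) throws IOException {
        this.sampleRate = sampleRate;
        MediaFormat format = MediaFormat.createAudioFormat(
                MediaFormat.MIMETYPE_AUDIO_AAC, sampleRate, 1); // mono
        format.setInteger(MediaFormat.KEY_AAC_PROFILE,
                MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 128000);
        codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
        codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        codec.start();
        muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
    }

    // Call from the recording loop with each PCM chunk read from AudioRecord.
    public void encode(byte[] pcm, int length) {
        int inIndex = codec.dequeueInputBuffer(10_000);
        if (inIndex >= 0) {
            ByteBuffer in = codec.getInputBuffer(inIndex);
            in.clear();
            in.put(pcm, 0, length);
            codec.queueInputBuffer(inIndex, 0, length, presentationTimeUs, 0);
            presentationTimeUs += 1_000_000L * (length / 2) / sampleRate; // 2 bytes per sample
        }
        drain();
    }

    private void drain() {
        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            int outIndex = codec.dequeueOutputBuffer(info, 0);
            if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                // The muxer can only start once the encoder reports its output format.
                trackIndex = muxer.addTrack(codec.getOutputFormat());
                muxer.start();
            } else if (outIndex >= 0) {
                if ((info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
                    info.size = 0; // config data travels in the track format, not as a sample
                }
                ByteBuffer out = codec.getOutputBuffer(outIndex);
                if (info.size > 0 && trackIndex >= 0) {
                    muxer.writeSampleData(trackIndex, out, info);
                }
                codec.releaseOutputBuffer(outIndex, false);
            } else {
                break; // INFO_TRY_AGAIN_LATER: nothing ready yet
            }
        }
    }

    public void release() {
        codec.stop();
        codec.release();
        muxer.stop();
        muxer.release();
    }
}

The recording loop above would then call something like encoder.encode(short2byte(sData), 2 * numRead) instead of os.write(...).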
Basically, my program records user input via the microphone and stores it as a .pcm file in the sdcard/ directory. It will be overwritten if one already exists. The file is later sent for playback and analysis (mainly FFT and RMS computation).
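For context, the RMS part of that analysis is just the square root of the mean of the squared samples; a minimal sketch over 16-bit PCM (the method name and types here are illustrative, not from the question's code):

double rms(short[] samples, int n) {
    long sumSquares = 0;
    for (int i = 0; i < n; i++) {
        sumSquares += (long) samples[i] * samples[i]; // promote to long to avoid overflow
    }
    return Math.sqrt((double) sumSquares / n);
}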
I have added another function that allows the program to record system audio, so users' mp3 files can be analyzed as well. It streams the system audio and stores it as a .pcm file for later playback and analysis.
It is all functioning well. However, there is a slight issue: when the program streams audio, it also captures input from the mic, so there is noise in the playback. I do not want this, as it will affect the analysis readings. I googled for a solution and found that I can actually mute the mic. So now I want to mute the mic while the mp3 file is being streamed.
The code I have found is,
audioManager.setMicrophoneMute(true);
I tried to implement it, but it just crashes my application. I have tried to find a solution for the past few days but cannot seem to get anywhere.
Here is my code snippet for the part where I want to stream system audio, muting the microphone before it starts streaming.
// create a new AudioRecord object to record the audio data of an mp3 file
int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
audioRecord = new AudioRecord(AudioManager.STREAM_MUSIC,
        frequency, channelConfiguration,
        audioEncoding, bufferSize);

// a short array to store raw pcm data
short[] buffer = new short[bufferSize];
Log.i("decoder", "The audio record created fine ready to record");

try {
    audioManager.setMicrophoneMute(true);
} catch (Exception e) {
    e.printStackTrace();
}

audioRecord.startRecording();
isDecoding = true;
When the setMicrophoneMute(true) line is surrounded with try-catch, the program only crashes when I try to send the recording for playback. The errors are as follows:
"AudioFlinger could not create track, status: -12"
"Error initializing AudioTrack"
"[android.media.AudioTrack] Error code -20 when initializing AudioTrack."
When it is not surrounded with try-catch, the program crashes the moment I click the start-streaming button, with:
"Decoding failed" (an error log from catching a throwable).
How can I mute the microphone input while streaming the system audio? Let me know if I can provide more code. Thank you!
EDIT:
I have implemented my microphone mute successfully, and isMicrophoneMute() even returns true; however, the mic is not actually muted, as it still records from the microphone. It is a false positive.
Based on the suggested answer, I have already created a class for audio focus as below:
public class AudioFocus {
    private final Context c;

    private final AudioManager.OnAudioFocusChangeListener changeListener =
            new AudioManager.OnAudioFocusChangeListener() {
                public void onAudioFocusChange(int focusChange) {
                    // nothing to do
                }
            };

    AudioFocus(Context context) {
        c = context;
    }

    public void grabFocus() {
        final AudioManager am = (AudioManager) c.getSystemService(Context.AUDIO_SERVICE);
        final int result = am.requestAudioFocus(changeListener,
                AudioManager.STREAM_MUSIC,
                AudioManager.AUDIOFOCUS_GAIN);
        Log.d("audiofocus", "Grab audio focus: " + result);
    }

    public void releaseFocus() {
        final AudioManager am = (AudioManager) c.getSystemService(Context.AUDIO_SERVICE);
        final int result = am.abandonAudioFocus(changeListener);
        Log.d("audiofocus", "Abandon audio focus: " + result);
    }
}
I then call the method from my Decoder class to request audio focus:
int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);

audioFocus.grabFocus();

audioRecord = new AudioRecord(AudioManager.STREAM_MUSIC,
        frequency, channelConfiguration,
        audioEncoding, bufferSize);

// a short array to store raw pcm data
short[] buffer = new short[bufferSize];
Log.i("decoder", "The audio record created fine ready to record");

audioRecord.startRecording();
isDecoding = true;
Log.i("decoder", "Start recording fine");
And then release the focus when stop decoding is pressed:
// stops recording
public void stopDecoding() {
    isDecoding = false;
    Log.i("decoder", "Out of recording");
    audioRecord.stop();
    try {
        dos.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
    mp.stop();
    mp.release();
    audioFocus.releaseFocus();
}
However, this makes my application crash. Where did I go wrong?
The following snippet requests permanent audio focus on the music audio stream. You should request audio focus immediately before you begin playback, such as when the user presses play. I think this would be the way to go, rather than muting the input microphone. Check out the developer audio focus docs for more information.
AudioManager am = (AudioManager) mContext.getSystemService(Context.AUDIO_SERVICE);
...
// Request audio focus for playback
int result = am.requestAudioFocus(afChangeListener,
        // Use the music stream.
        AudioManager.STREAM_MUSIC,
        // Request permanent focus.
        AudioManager.AUDIOFOCUS_GAIN);

if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    // RemoteControlReceiver stands for a ComponentName pointing at your media button receiver
    am.registerMediaButtonEventReceiver(RemoteControlReceiver);
    // Start playback.
}
I am trying to build an open-source video system on Android, since we have no access to the data in a closed system. In this system, we can modify the raw data captured by the camera.
I used MediaCodec and MediaMuxer to do the video encoding and muxing, and that works. But I have no idea about the audio part. I used onPreviewFrame to get each video frame and do the encoding/muxing work frame by frame. But how do I record the audio at the same time? I mean, capture the audio by frame, encode it, and send the data to the MediaMuxer.
I've done some research. It seems that we use AudioRecord to get the raw audio data. But AudioRecord records continuously; I don't think it can work frame by frame.
Can anyone give me a hint? Thank you!
Create the AudioRecord like this:
private AudioRecord getRecorderInstance() {
    AudioRecord ar = null;
    try {
        // Get an AudioRecord
        int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        ar = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
    } catch (Exception e) {
        // mic unavailable or invalid parameters
    }
    return ar; // Returns null if the mic is unavailable
}
Prepare and send the data for the encoding and muxing later, like this, in a separate thread:
public class MicrophoneInput implements Runnable {
    @Override
    public void run() {
        // Buffer for 50 ms of data: 800 bytes = 400 16-bit samples at 8 kHz.
        byte[] buffer = new byte[8000 / 10];
        try {
            while (recording) {
                audioRecorder.read(buffer, 0, buffer.length);
                // process buffer, i.e. send to encoder;
                // don't forget to set correct timestamps synchronized with video
            }
        } catch (Throwable x) {
            //
        }
    }
}
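On the timestamp comment above: a simple approach is to derive each chunk's presentation time from the running sample count rather than the wall clock, so the audio timestamps stay consistent with the amount of PCM actually delivered. A minimal sketch, assuming 8 kHz mono 16-bit PCM (the field and method names are illustrative):

private long totalSamples = 0;

private long presentationTimeUs(int bytesRead) {
    long ts = totalSamples * 1_000_000L / 8000; // start time of this chunk, in microseconds at 8 kHz
    totalSamples += bytesRead / 2;              // 2 bytes per 16-bit sample
    return ts;
}

The returned value can then be passed as the presentationTimeUs argument of MediaCodec.queueInputBuffer() for the corresponding chunk.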
I am trying to record and process audio data based on differences between what gets recorded in the left and right channels. For this I am using the AudioRecord class, with MIC as the input source and STEREO mode.
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
AudioFormat.CHANNEL_IN_STEREO,
AudioFormat.ENCODING_PCM_16BIT, bufferSize);
My issue is that I get exactly the same data in both channels (alternate samples are separated to get the individual channel inputs). I am not sure why this is happening. Please help.
Using this configuration:
private int audioSource = MediaRecorder.AudioSource.MIC;
private static int sampleRateInHz = 48000;
private static int channelConfig = AudioFormat.CHANNEL_IN_STEREO;
private static int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
The data returned in stereo mode is interleaved. In byte terms, the layout is as follows:
leftChannel data: [0,1],[4,5]...
rightChannel data: [2,3],[6,7]...
So you need to separate the data. Reading into a short[] (one short per 16-bit sample), the samples simply alternate left, right:
readSize = audioRecord.read(audioShortData, 0, audioShortData.length);
for (int i = 0; i < readSize / 2; i++) {
    leftChannelAudioData[i]  = audioShortData[2 * i];     // even samples: left channel
    rightChannelAudioData[i] = audioShortData[2 * i + 1]; // odd samples: right channel
}
Hope this is helpful.
Here is a working example of capturing audio in stereo (tested with a Samsung Galaxy S3, Android 4.4.2 SlimKat):
private void startRecording() {
    String filename = Environment.getExternalStorageDirectory().getPath() + "/SoundRecords/" + System.currentTimeMillis() + ".aac";
    File record = new File(filename);

    recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.CAMCORDER);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    recorder.setAudioEncodingBitRate(128000);
    recorder.setAudioSamplingRate(96000);
    recorder.setAudioChannels(2);
    recorder.setOutputFile(filename);
    t_filename.setText(record.getName());

    try {
        recorder.prepare();
        recorder.start();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
If your phone supports stereo capturing, then this should work :)
You cannot obtain a stereo input in this way on your device.
Although the Nexus 4 has two microphones, they are not intended for stereo recording, but instead are likely for background noise cancellation.
See https://groups.google.com/forum/#!topic/android-platform/SptXI964eEI where various low-level modifications of the audio system are discussed in an attempt to accomplish stereo recording.
Hello, I want to use MediaRecorder to record voice, and I want to save it in AMR format.
this.mediaRecorder = new MediaRecorder();
this.mediaRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
this.mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.RAW_AMR);
this.mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
this.mediaRecorder.setAudioChannels(1);
this.mediaRecorder.setAudioSamplingRate(8000);
this.mediaRecorder.setAudioEncodingBitRate(16);
this.mediaRecorder.setOutputFile(this.file.getAbsolutePath());
I used this.mediaRecorder.setAudioEncodingBitRate(16), and some devices are OK.
With mediaRecorder.setAudioEncodingBitRate(12500), some devices are OK.
But when I delete the setAudioEncodingBitRate call entirely, other devices are OK.
So my question is: how do I get the default AudioEncodingBitRate? Which parameter do I need to use?
You set the AudioEncodingBitRate too low. I made the same mistake :-)
This seems to work:
MediaRecorder recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
if (Build.VERSION.SDK_INT >= 10) {
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    recorder.setAudioSamplingRate(44100);
    recorder.setAudioEncodingBitRate(96000);
} else {
    // older version of Android, use crappy sounding voice codec
    recorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
    recorder.setAudioSamplingRate(8000);
    recorder.setAudioEncodingBitRate(12200);
}
recorder.setOutputFile(file.getAbsolutePath());

try {
    recorder.prepare();
} catch (IOException e) {
    throw new RuntimeException(e);
}
The idea comes from here. Plus: read the docs. The documentation for setAudioSamplingRate says the following:
The sampling rate really depends on the format for the audio recording, as well as the capabilities of the platform. For instance, the sampling rate supported by AAC audio coding standard ranges from 8 to 96 kHz, the sampling rate supported by AMRNB is 8kHz, and the sampling rate supported by AMRWB is 16kHz.
I am using the configuration below and it gives amazingly clear recording output.
localFileName = getFileName() + ".m4a"; // MediaRecorder cannot write WAV; this is AAC in an MPEG-4 container
localFile = new File(localdir, localFileName);

mRecorder = new MediaRecorder();
mRecorder.setAudioSource(MediaRecorder.AudioSource.DEFAULT);
// not AudioFormat.ENCODING_PCM_16BIT, which only appears to work because both constants equal 2
mRecorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
mRecorder.setAudioChannels(1);
mRecorder.setAudioEncodingBitRate(128000);
mRecorder.setAudioSamplingRate(44100);
mRecorder.setOutputFile(localFile.getPath());
However, if you are recording while simultaneously playing audio, it has some issues on Samsung devices (but again, only when you are playing audio and recording together at the same time).
I find that the encoding bitrate should be calculated from the sample rate.
There is a good write-up of how these values relate at https://micropyramid.com/blog/understanding-audio-quality-bit-rate-sample-rate/
I use 8:1 compression for high-quality recordings. I prefer 48 kHz sampling, but the same logic works at the 8000 Hz sample rate requested in this post: 8000 Hz × 16 bits × 1 channel = 128000 bps uncompressed, so 8:1 compression gives an encoding bitrate of 16000 bps.
final int BITS_PER_SAMPLE = 16;   // 16-bit data
final int NUMBER_CHANNELS = 1;    // Mono
final int COMPRESSION_AMOUNT = 8; // Compress the audio at 8:1

public MediaRecorder setupRecorder(String filename, int selectedAudioSource, int sampleRate) {
    final int uncompressedBitRate = sampleRate * BITS_PER_SAMPLE * NUMBER_CHANNELS;
    final int encodedBitRate = uncompressedBitRate / COMPRESSION_AMOUNT;

    mediaRecorder = new MediaRecorder();
    try {
        mediaRecorder.setAudioSource(selectedAudioSource);
        mediaRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        // was MediaRecorder.OutputFormat.AMR_NB, which is the wrong constant class
        mediaRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        mediaRecorder.setAudioSamplingRate(sampleRate);
        mediaRecorder.setAudioEncodingBitRate(encodedBitRate);
        mediaRecorder.setOutputFile(filename);
    } catch (Exception e) {
        // TODO
    }
    return mediaRecorder;
}

MediaRecorder mediaRecorder = setupRecorder(this.file.getAbsolutePath(),
        MediaRecorder.AudioSource.MIC,
        8000);
// constructor
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

// thread run() method
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
try {
    // ring of reusable buffers for audio data
    short[][] buffers = new short[256][160];
    int ix = 0;
    while (!stopped) {
        // if not paused, upload audio
        if (uploadAudio) {
            short[] buffer = buffers[ix++ % buffers.length];
            // read audio data from the recorder
            N = recorder.read(buffer, 0, buffer.length);
            // byte array big enough to hold the audio data
            byte[] bytes2 = new byte[buffer.length * 2];
            // convert audio data from short[] to byte[]
            ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
            // encode audio data as u-law; see the linked encoder code
            // (I'm using its read, maxAbsPcm and encode methods)
            read(bytes2, 0, bytes2.length);
            // send audio data
            //os.write(bytes2, 0, bytes2.length);
        }
    }
    os.close();
} catch (Throwable x) {
    Log.w("AudioWorker", "Error reading voice AudioWorker", x);
} finally {
    recorder.stop();
    recorder.release();
}
So this works OK. The audio is sent to the server in the proper format and played at the opposite end. However, the audio often skips. Example: saying 1, 2, 3, 4 will play back with the 4 cut off.
I believe it to be a performance issue: I have timed some of these methods, and when they take effectively no time everything works, but they quite often take a couple of seconds, with the byte conversion and encoding taking the most.
Any idea how I can optimize this code to get better performance? Or maybe a way to deal with the lag (possibly by building a cache)?
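One direction worth sketching: decouple the audio read from the conversion/encoding/upload with a producer/consumer queue, so the recording thread never blocks on slow work. A minimal sketch, assuming the stopped and recorder fields from the code above (the queue size and chunk handling are illustrative, not a definitive fix):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

BlockingQueue<short[]> queue = new LinkedBlockingQueue<>(32);

// Recording thread: does nothing but read() and enqueue.
Thread reader = new Thread(() -> {
    while (!stopped) {
        short[] buffer = new short[160];
        recorder.read(buffer, 0, buffer.length);
        queue.offer(buffer); // drop the chunk rather than block the audio path if the queue is full
    }
});

// Worker thread: byte conversion, u-law encoding and upload happen off the audio path.
Thread sender = new Thread(() -> {
    try {
        while (!stopped || !queue.isEmpty()) {
            short[] buffer = queue.poll(100, TimeUnit.MILLISECONDS);
            if (buffer == null) continue;
            byte[] bytes = new byte[buffer.length * 2];
            java.nio.ByteBuffer.wrap(bytes).order(java.nio.ByteOrder.LITTLE_ENDIAN)
                    .asShortBuffer().put(buffer);
            // u-law encode and os.write(bytes, ...) here
        }
    } catch (InterruptedException ignored) {
    }
});

reader.start();
sender.start();

This way an occasional slow encode or network write only grows the queue briefly instead of stalling the read loop, which is what cuts off the end of the audio.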