I am new to Android. I want to record some audio, so I created an Android app to do it. It works, but the audio it records is too loud and easily produces clipping distortion.
For example, I start my app and say 'hello' to my Android phone, which gives me the recording file 'recording1.pcm'; then I start another recording app from Google Play, say 'hello' at the same volume, and get the file 'recording2.pcm'.
When I open both files in Audition, the waveform of 'recording1.pcm' (recorded by my app) is clipped, while the waveform of 'recording2.pcm' (recorded by the other app) is not. (I am sorry I cannot embed images.)
The core recording code is as follows:
private class RecordAudio extends AsyncTask<Void, Integer, Void> {
    @Override
    protected Void doInBackground(Void... params) {
        isRecording = true;
        try {
            DataOutputStream dos = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(recordingFile)));
            int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
            AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
                    frequency, channelConfiguration, audioEncoding, bufferSize);
            short[] buffer = new short[bufferSize * 99];
            audioRecord.startRecording();
            int r = 0;
            while (isRecording) {
                // read() fills at most bufferSize shorts and returns the count actually read
                int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
                for (int i = 0; i < bufferReadResult; i++) {
                    dos.writeShort(buffer[i]);
                }
                publishProgress(r);
                r++;
            }
            audioRecord.stop();
            dos.close();
        } catch (Throwable t) {
            Log.e("AudioRecord", "Recording failed", t);
        }
        return null;
    }
}
Other parameters in my app:
frequency = 16000;
channelConfiguration = AudioFormat.CHANNEL_IN_STEREO;
audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
I have tried the following:
I changed dos.writeShort(buffer[i]); to dos.writeShort(buffer[i]/2);. The amplitude of the waveform became half of what it was, but the clipping distortion was still there, so I think the data is already clipped before it reaches the buffer.
I looked through the AudioRecord API, but found no API for microphone volume.
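One thing that may be worth checking (an assumption on my part, and device-dependent): since API 16 the platform's automatic gain control, which is what usually boosts the input, can sometimes be queried and disabled per recording session:

import android.media.audiofx.AutomaticGainControl;

// Sketch: try to disable the platform AGC for this AudioRecord's session.
// Whether AGC is present, and whether disabling it prevents the clipping,
// depends on the device and on the chosen audio source.
if (AutomaticGainControl.isAvailable()) {
    AutomaticGainControl agc = AutomaticGainControl.create(audioRecord.getAudioSessionId());
    if (agc != null) {
        agc.setEnabled(false);
    }
}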
Thanks for your help!
Related
I made an audio recorder using MediaRecorder, saving the file as an m4a, like this:
recorder = new MediaRecorder();
recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
recorder.setAudioEncodingBitRate(128000);
recorder.setAudioSamplingRate(44100);
recorder.setOutputFile(AveActivity.REC_DIR + "/" + song);
Very simple, works great.
Now I want to add gain (i.e., make the recorded volume considerably greater), since the audio is too low: I want my recorder to record birds and wildlife, and wild animals are almost always far away...
So, I migrated my code to use AudioRecord based on this thread. The problem is that this gives me raw PCM audio, which is a pain to convert to WAV (I did that too: first saving the PCM, then converting it to WAV). And yet, the WAV files are 6 times bigger than the m4a files.
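For reference, the PCM-to-WAV conversion itself amounts to writing a fixed 44-byte RIFF header in front of the raw samples. A sketch for 16-bit PCM; sampleRate and channels are whatever the recorder was configured with:

import java.io.IOException;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

// Writes the canonical 44-byte RIFF/WAVE header for 16-bit PCM data.
static void writeWavHeader(OutputStream out, int sampleRate, int channels, int pcmDataLen)
        throws IOException {
    int byteRate = sampleRate * channels * 2;                     // 2 bytes per 16-bit sample
    ByteBuffer h = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
    h.put("RIFF".getBytes(StandardCharsets.US_ASCII)).putInt(36 + pcmDataLen)
     .put("WAVE".getBytes(StandardCharsets.US_ASCII))
     .put("fmt ".getBytes(StandardCharsets.US_ASCII)).putInt(16)  // PCM fmt chunk size
     .putShort((short) 1)                                         // format 1 = linear PCM
     .putShort((short) channels)
     .putInt(sampleRate)
     .putInt(byteRate)
     .putShort((short) (channels * 2))                            // block align
     .putShort((short) 16)                                        // bits per sample
     .put("data".getBytes(StandardCharsets.US_ASCII)).putInt(pcmDataLen);
    out.write(h.array());
}

The size difference is expected, by the way: WAV is uncompressed (44.1 kHz mono 16-bit is about 706 kbps), while AAC here is 128 kbps.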
First question: Is there any way to apply gain before saving the file when using MediaRecorder?
Second question: Is there an easy way to encode the PCM audio directly to m4a "on the fly", without saving PCM and re-encoding? I mean, I get the PCM using a read command like this:
recorder.startRecording();
recordingThread = new Thread(this::writeAudioDataToFile, "AudioRecorder Thread");
recordingThread.start();
...
private void writeAudioDataToFile() {
    ....
    while (recorder != null) {
        int numRead = recorder.read(sData, 0, bufferSize);
        // Here is the gain! Hardcoded for now...
        int gain = 8;
        if (numRead > 0)
            for (int i = 0; i < numRead; ++i)
                sData[i] = (short) Math.max(Math.min(sData[i] * gain, Short.MAX_VALUE), Short.MIN_VALUE);
        try {
            os.write(short2byte(sData), 0, 2 * numRead); // write only the samples actually read
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    ....
}
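On the second question, for reference: one approach is to feed the gain-adjusted PCM straight into MediaCodec's AAC encoder and write its output through MediaMuxer, with no intermediate PCM file. A rough sketch (standard android.media APIs, API 21+; outPath, pcmBytes, numBytes and presentationTimeUs are placeholders for values from the surrounding recording loop, and end-of-stream and error handling are omitted):

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;
import java.nio.ByteBuffer;

// One-time setup: an AAC encoder and an MPEG-4 (m4a) muxer.
MediaFormat format = MediaFormat.createAudioFormat(MediaFormat.MIMETYPE_AUDIO_AAC, 44100, 1);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_BIT_RATE, 128000);
MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_AUDIO_AAC);
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
codec.start();
MediaMuxer muxer = new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
int track = -1;

// Per read() in the recording loop: feed PCM in, drain AAC out.
int inIx = codec.dequeueInputBuffer(10_000);
if (inIx >= 0) {
    ByteBuffer in = codec.getInputBuffer(inIx);
    in.put(pcmBytes, 0, numBytes);               // the gain-adjusted bytes from short2byte()
    codec.queueInputBuffer(inIx, 0, numBytes, presentationTimeUs, 0);
}
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
int outIx = codec.dequeueOutputBuffer(info, 0);
while (outIx >= 0 || outIx == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
    if (outIx == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
        track = muxer.addTrack(codec.getOutputFormat()); // must happen before writeSampleData
        muxer.start();
    } else {
        ByteBuffer out = codec.getOutputBuffer(outIx);
        if (info.size > 0 && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0)
            muxer.writeSampleData(track, out, info);
        codec.releaseOutputBuffer(outIx, false);
    }
    outIx = codec.dequeueOutputBuffer(info, 0);
}

When recording stops, you would queue one last input buffer with BUFFER_FLAG_END_OF_STREAM, drain the remaining output, then stop() and release() both the codec and the muxer.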
I am trying to record sound from the mic while a tone plays in the background. To make it clear: I want to play a tone in the background, and when I make some noise into the microphone, that noise should be mixed with the background tone that is already playing.
The final output should be a mix of the played tone and the signal from the microphone. How can I achieve this?
I was referring to the post 'Android : recording audio using audiorecord class play as fast forwarded' on Stack Overflow to record data from the microphone, but I need to record the background tone as well as the microphone input.
public class StartRecording {
    private int samplePerSec = 8000;

    public void Start() {
        stopRecording.setEnabled(true);
        bufferSize = AudioRecord.getMinBufferSize(samplePerSec, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        audioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, this.samplePerSec,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize * 10);
        audioRecorder.startRecording();
        isRecording = true;
        while (isRecording && audioRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            short[] recordedData = new short[bufferSize];
            audioRecorder.read(recordedData, 0, recordedData.length); // reading from the AudioRecord
            byte[] bData = shortTobyte(recordedData);
        }
    }

    private byte[] shortTobyte(short[] recordedData) {
        int tempBuff = recordedData.length;
        byte[] bytes = new byte[tempBuff * 2]; // two bytes per 16-bit sample
        for (int i = 0; i < tempBuff; i++) {
            bytes[i * 2] = (byte) (recordedData[i] & 0x00FF);
            bytes[(i * 2) + 1] = (byte) (recordedData[i] >> 8);
            recordedData[i] = 0;
        }
        return bytes;
    }
}
Thanks in advance...
You have to use AudioTrack and AudioRecord simultaneously.
Every buffer coming from the AudioRecord must then be mixed into your tone (there are algorithms on Google for mixing two audio signals) and written to the AudioTrack.
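A sketch of the mixing step (the names are placeholders: mic is a buffer read from the AudioRecord, tone holds the same number of samples of your tone, track is a started AudioTrack): for 16-bit PCM it is an addition with clamping to avoid wrap-around.

// Mix one microphone buffer into one tone buffer and play the result.
int n = mic.length;
short[] mixed = new short[n];
for (int i = 0; i < n; i++) {
    int sum = mic[i] + tone[i];                       // simple additive mix
    if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE; // clamp instead of overflowing
    if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
    mixed[i] = (short) sum;
}
track.write(mixed, 0, n);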
You will have latency and some problems with echo if you don't use a headset.
I want to write a program to check whether the internal microphone of an Android phone is on, off, or in use by some other application.
If this is possible, how can I do it?
I read related questions on Stack Overflow but did not find a solution.
Here's what I'm using to check if the microphone is busy (based on Odaym's answer and my own tests):
(Updated for Android 6.0 Marshmallow compatibility, as suggested in the comments.)
public static boolean checkIfMicrophoneIsBusy(Context ctx) {
    AudioRecord audio = null;
    boolean ready = true;
    try {
        int baseSampleRate = 44100;
        int channel = AudioFormat.CHANNEL_IN_MONO;
        int format = AudioFormat.ENCODING_PCM_16BIT;
        int buffSize = AudioRecord.getMinBufferSize(baseSampleRate, channel, format);
        audio = new AudioRecord(MediaRecorder.AudioSource.MIC, baseSampleRate, channel, format, buffSize);
        audio.startRecording();
        short[] buffer = new short[buffSize];
        int audioStatus = audio.read(buffer, 0, buffSize);
        if (audioStatus == AudioRecord.ERROR_INVALID_OPERATION
                || audioStatus == AudioRecord.STATE_UNINITIALIZED /* for Android 6.0 */)
            ready = false;
    } catch (Exception e) {
        ready = false;
    } finally {
        if (audio != null) {
            audio.release();
        }
    }
    return ready;
}
If you are using an AudioRecord object to record audio, like:
AudioRecord audio = new AudioRecord(MediaRecorder.AudioSource.MIC,
Constants.SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT,Constants.BUFFER_SIZE_BYTES);
audio.startRecording();
Then right after audio.startRecording(), you're going to have to provide a buffer for reading the audio data into, and begin reading. You do that with:
int audioStatus = audio.read(bufferObject, 0, bufferSize);
The Android documentation for read() mentions the return value ERROR_INVALID_OPERATION (constant value: -3). This is only returned when the mic is busy, so you can check for it in your code and show a message that the audio source is busy with another app.
As far as I know, there is no way to query the microphone's state (busy, available, ...). Sorry.
// constructor
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

// thread run() method
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
try {
    while (!stopped) {
        // if not paused, upload audio
        if (uploadAudio) {
            // note: this reallocates all 256 buffers on every pass through the loop
            short[][] buffers = new short[256][160];
            int ix = 0;
            // pick a buffer for the audio data
            short[] buffer = buffers[ix++ % buffers.length];
            // read audio data from the recorder
            N = recorder.read(buffer, 0, buffer.length);
            // create a byte array big enough to hold the audio data
            byte[] bytes2 = new byte[buffer.length * 2];
            // convert the audio data from short[] to byte[]
            ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
            // encode the audio data as u-law
            read(bytes2, 0, bytes2.length);
            // send the audio data
            //os.write(bytes2, 0, bytes2.length);
        }
    }
    os.close();
} catch (Throwable x) {
    Log.w("AudioWorker", "Error reading voice AudioWorker", x);
} finally {
    recorder.stop();
    recorder.release();
}

See here for the u-law encoder code. I'm using its read, maxAbsPcm and encode methods.
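For reference, since the encoder itself is external: the standard G.711 u-law compression of a single 16-bit sample looks like the sketch below. This is the textbook algorithm, not necessarily the linked implementation.

// Standard G.711 u-law compression of one 16-bit PCM sample.
private static byte linearToUlaw(short pcm) {
    final int BIAS = 0x84;   // 132, biases the magnitude so segment boundaries line up
    final int CLIP = 32635;  // largest magnitude representable after biasing
    int s = pcm;
    int sign = (s >> 8) & 0x80;            // keep the sign in bit 7
    if (sign != 0) s = -s;                 // work with the magnitude
    if (s > CLIP) s = CLIP;
    s += BIAS;
    int exponent = 7;                      // find the segment (position of the top set bit)
    for (int mask = 0x4000; (s & mask) == 0 && exponent > 0; mask >>= 1)
        exponent--;
    int mantissa = (s >> (exponent + 3)) & 0x0F;
    return (byte) ~(sign | (exponent << 4) | mantissa); // u-law bytes are stored inverted
}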
So this works OK. The audio is sent to the server in the proper format and plays back at the other end. However, the audio often skips. Example: saying '1, 2, 3, 4' will play back with the 4 cut off.
I believe it is a performance issue, because I have timed some of these methods: when they take (close to) zero seconds everything works, but they quite often take a couple of seconds, with the byte conversion and the encoding taking the most.
Any idea how I can optimize this code to get better performance? Or maybe a way to deal with the lag (possibly by building a cache)?
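One common pattern for this kind of lag (a sketch of the general technique, not necessarily the right fix here): keep the capture loop as lean as possible and hand the buffers to a worker thread through a queue, so the conversion, encoding and network write never stall the recorder.

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Shared queue between the capture thread and the worker thread.
final BlockingQueue<short[]> queue = new ArrayBlockingQueue<>(64);

// Capture thread: only read PCM and enqueue it.
while (!stopped) {
    short[] buf = new short[160];
    recorder.read(buf, 0, buf.length);
    queue.offer(buf);          // if the queue is full, drop rather than stall capture
}

// Worker thread: convert, u-law encode and upload, off the capture thread.
while (!stopped || !queue.isEmpty()) {
    short[] buf = queue.poll();
    if (buf == null) continue; // nothing queued yet
    byte[] bytes = new byte[buf.length * 2];
    ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buf);
    // u-law encode and os.write(...) here
}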
I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the overall goal is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried about everything, and there is still no sound coming out of my speakers (while Android's integrated media player has no problem playing sounds in the emulator).
Source code:
An audio player class, which wraps the AudioTrack. It receives a buffer in which the sound is contained.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Error while instantiating AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulated 440 Hz tone, for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        // first fill about 300 ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000
        // for debugging only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // afterwards fill about 200 ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440)
    // this is probably wrong due to the sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
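As an aside, a phase-correct version of the simulator would scale the sample index by the sample rate before applying 2*pi*f. A sketch, assuming sampleRate holds the rate passed to AudioPlayer:

private short simulator(int sampleIndex, short amplitude) {
    double t = (double) sampleIndex / sampleRate;  // time of this sample in seconds
    return (short) (amplitude * Math.sin(2.0 * Math.PI * simulatorFrequency * t));
}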
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine, it does its job;
buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's play state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding short[] input: the size passed to write() should be the length of the buffer itself (buffer.length, a count of shorts), not the size in bytes.
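So the call in the addToQueue method above becomes:

audioTrack.write(buffer, 0, buffer.length);  // a count of shorts, not bytes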