What I want is to be able to get the current noise level in decibels (dB) at the click of a Button. I have been playing around with the sensors and can get them to work easily, but this has me stumped. I've tried a few code samples, but none of them worked or helped me understand the problem.
How can this be achieved?
EDIT:
I use the following code:
private Thread recordingThread;
private int bufferSize = 800;
private short[][] buffers = new short[256][bufferSize];
private int[] averages = new int[256];
private int lastBuffer = 0;
AudioRecord recorder;
boolean recorderStarted = false;
protected void startListenToMicrophone()
{
if (!recorderStarted)
{
recordingThread = new Thread()
{
@Override
public void run()
{
int minBufferSize = AudioRecord.getMinBufferSize(8000,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT);
recorder = new AudioRecord(AudioSource.MIC, 8000,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT, minBufferSize * 10);
recorder.setPositionNotificationPeriod(bufferSize);
recorder.setRecordPositionUpdateListener(new OnRecordPositionUpdateListener()
{
@Override
public void onPeriodicNotification(AudioRecord recorder)
{
short[] buffer = buffers[++lastBuffer
% buffers.length];
recorder.read(buffer, 0, bufferSize);
long sum = 0;
for (int i = 0; i < bufferSize; ++i)
{
sum += Math.abs(buffer[i]);
}
averages[lastBuffer % buffers.length] = (int) (sum / bufferSize);
lastBuffer = lastBuffer % buffers.length;
Log.i("dB", ""+averages);
tv4.setText("" + averages[1]);
}
@Override
public void onMarkerReached(AudioRecord recorder)
{
}
});
recorder.startRecording();
short[] buffer = buffers[lastBuffer % buffers.length];
recorder.read(buffer, 0, bufferSize);
// Busy-wait until the thread is interrupted; this spins the CPU (a short sleep would help)
while (true)
{
if (isInterrupted())
{
recorder.stop();
recorder.release();
break;
}
}
}
};
recordingThread.start();
recorderStarted = true;
}
}
private void stopListenToMicrophone()
{
if (recorderStarted)
{
if (recordingThread != null && recordingThread.isAlive()
&& !recordingThread.isInterrupted())
{
recordingThread.interrupt();
}
recorderStarted = false;
}
}
}
I have two buttons in my app. The first calls startListenToMicrophone and the second calls stopListenToMicrophone. I don't understand how this works; I got the code from here.
The TextView shows a weird, very big value. What I need is the sound level in decibels.
Just a passing thought, and I may be very wrong, but amplitude in dB = 20 * log10(S1/S2).
I couldn't find this calculation anywhere in your code. What you need to do is get S1, which will be the current recorded level, and S2, which needs to be the maximum possible value you can record; then calculate the dB value.
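As a minimal sketch of that formula (my own addition, not part of the question's code): for 16-bit PCM the maximum recordable sample value S2 is 32767, and the per-buffer average computed above can serve as S1:
// Converts a mean absolute amplitude of 16-bit PCM samples to dB relative
// to full scale (0 dB = the loudest value a 16-bit sample can hold).
private static double toDecibels(int meanAbsAmplitude) {
    if (meanAbsAmplitude <= 0) {
        return Double.NEGATIVE_INFINITY; // treat silence as -infinity
    }
    return 20.0 * Math.log10(meanAbsAmplitude / 32767.0);
}
The result is a negative dBFS-style number that approaches 0 dB at full scale; an absolute dB SPL reading would additionally require calibrating the device's microphone against a known reference level.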
Related
Within my Android app I want to be able to record spoken audio, online or offline, and then, when I choose to, stream chunks of the recorded audio to Google for speech-to-text transcribing, all in the background so as not to affect the current Activity. New voice recording and streaming/transcribing could be going on at the same time.
What classes should I look into to accomplish the above?
Thanks
The plain AudioRecord API will do:
recorder = new AudioRecord(
AudioSource.VOICE_RECOGNITION, sampleRate,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT, bufferSize * 2);
Then, inside the thread:
private final class RecognizerThread extends Thread {
private int remainingSamples;
private int timeoutSamples;
private final static int NO_TIMEOUT = -1;
public RecognizerThread(int timeout) {
if (timeout != NO_TIMEOUT)
this.timeoutSamples = timeout * sampleRate / 1000;
else
this.timeoutSamples = NO_TIMEOUT;
this.remainingSamples = this.timeoutSamples;
}
public RecognizerThread() {
this(NO_TIMEOUT);
}
@Override
public void run() {
recorder.startRecording();
if (recorder.getRecordingState() == AudioRecord.RECORDSTATE_STOPPED) {
recorder.stop();
IOException ioe = new IOException(
"Failed to start recording. Microphone might be already in use.");
mainHandler.post(new OnErrorEvent(ioe));
return;
}
Log.d(TAG, "Starting decoding");
short[] buffer = new short[bufferSize];
while (!interrupted()
&& ((timeoutSamples == NO_TIMEOUT) || (remainingSamples > 0))) {
int nread = recorder.read(buffer, 0, buffer.length);
if (nread < 0) {
throw new RuntimeException("error reading audio buffer");
} else {
boolean isFinal = recognizer.AcceptWaveform(buffer, nread);
if (isFinal) {
mainHandler.post(new ResultEvent(recognizer.Result(), true));
} else {
mainHandler.post(new ResultEvent(recognizer.PartialResult(), false));
}
}
if (timeoutSamples != NO_TIMEOUT) {
remainingSamples = remainingSamples - nread;
}
}
recorder.stop();
// Remove all pending notifications.
mainHandler.removeCallbacksAndMessages(null);
// If we met timeout signal that speech ended
if (timeoutSamples != NO_TIMEOUT && remainingSamples <= 0) {
mainHandler.post(new TimeoutEvent());
}
}
}
Full example here.
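For completeness, a hedged usage sketch: the fields the class relies on (recorder, sampleRate, bufferSize, mainHandler, recognizer) are set up in the linked full example, and the 10-second timeout is just an illustration:
// The constructor argument is a timeout in milliseconds; internally it is
// converted to a sample budget (timeout * sampleRate / 1000).
RecognizerThread recognizerThread = new RecognizerThread(10000);
recognizerThread.start();
// Later, to cancel early: the thread stops the recorder and clears
// pending handler messages on its way out.
recognizerThread.interrupt();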
I am new to Android and I am trying to build an app to record audio and do an FFT to get the frequency spectrum.
The buffer size of the complete audio is 155 * 2048,
i.e. 155 * AudioRecord.getMinBufferSize(44100, mono_channel, PCM_16bit).
Each chunk from the recorder is 2048 shorts; I convert short to double and pass it to the FFT library. The library returns the real and imaginary parts, which I use to construct the frequency spectrum. Then I append each chunk to an array.
Now here is the problem:
In app 1 there are no UI elements or Fragments, just a basic Button attached to a listener that executes an AsyncTask for reading chunks from AudioRecord and doing an FFT on them chunk by chunk (each chunk = 2048 shorts). This process (recording and FFT) for 155 chunks at a sample rate of 44100 should take about 7 seconds (2048 * 155 / 44100), but the task took around 9 seconds, a lag of 2 seconds (which is acceptable).
In app 2 there are 7 Fragments with login and signup screens, where each Fragment is separate and linked to the main Activity. The same code here does the task (recording and FFT) for 155 * 2048 samples in 40-45 seconds, which means the lag is up to 33-37 seconds. This lag is too much for my purpose. What could be the cause of so much lag in app 2, and how can I reduce it?
The FFT library code and Complex type code:
FFT.java, Complex.java
My application code:
private boolean is_recording = false;
private AudioRecord recorder = null;
int minimum_buffer_size = AudioRecord.getMinBufferSize(SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
int bufferSize = 155 * AudioRecord.getMinBufferSize(SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
private static final int SAMPLE_RATE = 44100;
private Thread recordingThread = null;
short[] audioBuffer = new short[bufferSize];
MainTask recordTask;
double finalData[];
Complex[] fftArray;
boolean recieved = false;
int data_trigger_point = 10;
int trigger_count = 0;
double previous_level_1 ;
double previous_level_2 ;
double previous_level_3 ;
int no_of_chunks_to_be_send = 30;
int count = 0;
short[] sendingBuffer = new short[minimum_buffer_size * no_of_chunks_to_be_send];
public static final int RequestPermissionCode = 1;
mButton = (ImageButton) view.findViewById(R.id.submit);
mButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if (is_recording) {
mButton.setBackgroundResource(R.drawable.play);
stopRecodringWithoutTone();
}
else {
mButton.setBackgroundResource(R.drawable.wait);
is_recording = true;
recordTask = new MainTask();
recordTask.execute();
}
}
});
public class MainTask extends AsyncTask<Void, int[], Void> {
@Override
protected Void doInBackground(Void... arg0) {
try {
recorder = new AudioRecord(
MediaRecorder.AudioSource.DEFAULT,
SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT,
minimum_buffer_size);
recorder.startRecording();
short[] buffer_recording = new short[minimum_buffer_size];
int recieve_counter = 0;
while (is_recording) {
if (count < bufferSize) {
int bufferReadResult = recorder.read(buffer_recording, 0, minimum_buffer_size);
System.arraycopy(buffer_recording, 0, audioBuffer, count, buffer_recording.length);
count += bufferReadResult;
System.out.println(count);
finalData = convert_to_double(buffer_recording);
int [] magnitudes = processFFT(finalData);
}
else {
stopRecording();
}
}
}
catch (Throwable t) {
t.printStackTrace();
Log.e("V1", "Recording Failed");
}
return null;
}
@Override
protected void onProgressUpdate(int[]... magnitudes) {
}
}
private int[] processFFT(double [] data){
Complex[] fftTempArray = new Complex[finalData.length];
for (int i=0; i<finalData.length; i++)
{
fftTempArray[i] = new Complex(finalData[i], 0);
}
fftArray = FFT.fft(fftTempArray);
int [] magnitude = new int[fftArray.length/2];
for (int i=0; i< fftArray.length/2; i++) {
magnitude[i] = (int) fftArray[i].abs();
}
return magnitude;
}
private double[] convert_to_double(short data[]) {
double[] transformed = new double[data.length];
for (int j=0;j<data.length;j++) {
transformed[j] = (double)data[j];
}
return transformed;
}
private void stopRecording() {
if (null != recorder) {
recorder.stop();
postAudio(audioBuffer);
recorder.release();
is_recording = false;
recorder = null;
recordingThread = null;
count = 0;
recieved = false;
}
}
I am not sure why there is a lag, but you can circumvent the problem: run two AsyncTasks, where task 1 records the data and stores it in an array, and task 2 takes chunks from this array and does the FFT, as sketched below.
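A minimal sketch of that split, assuming a queue-based hand-off (the wiring is my own illustration, not the answerer's code; recorder, is_recording, processFFT, and convert_to_double come from the question, and java.util.Arrays plus java.util.concurrent.* are needed):
// One thread only reads audio, the other only does FFTs, so a slow
// transform can never stall recorder.read() and drop samples.
final BlockingQueue<short[]> chunks = new LinkedBlockingQueue<>();
Runnable producer = () -> {
    short[] buf = new short[2048];
    while (is_recording) {
        int n = recorder.read(buf, 0, buf.length);
        if (n > 0) chunks.offer(Arrays.copyOf(buf, n));
    }
};
Runnable consumer = () -> {
    try {
        while (is_recording || !chunks.isEmpty()) {
            short[] chunk = chunks.poll(100, TimeUnit.MILLISECONDS);
            if (chunk != null) processFFT(convert_to_double(chunk));
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
};
new Thread(producer).start();
new Thread(consumer).start();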
AsyncTask runs at a lower priority to make sure the UI thread remains responsive; thus, the more UI work there is, the more the AsyncTask is delayed.
You're facing the delay because of how Android schedules BACKGROUND-priority threads: it places them in a Linux cgroup that has to live with roughly 10% of CPU time altogether.
If you go with THREAD_PRIORITY_BACKGROUND + THREAD_PRIORITY_MORE_FAVORABLE,
your thread is lifted out of that 10% limitation.
So your code will look like this:
protected final Void doInBackground(Void... arg0) {
Process.setThreadPriority(THREAD_PRIORITY_BACKGROUND + THREAD_PRIORITY_MORE_FAVORABLE);
...//your code here
}
If that doesn't work on the next call of doInBackground, it's because Android resets the priority by default between executions. In that case, try Process.THREAD_PRIORITY_FOREGROUND.
I'm developing an Android app (compileSdkVersion 23) that records audio using AudioRecord; the reason for using it is to get the frequency via an FFT in real time.
Beyond that, I need to save the recorded sound so I can check it (for this part, tracking the frequency is unnecessary).
How do I save the recorded sound to a file when using AudioRecord on Android?
And am I using AudioRecord correctly?
Here is the code:
public class MainActivity extends Activity {
int frequency = 8000;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
AudioRecord audioRecord;
RecordAudio recordTask;
int blockSize;// = 256;
boolean started = false;
boolean CANCELLED_FLAG = false;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
blockSize = 256;
final Button btRec = (Button) findViewById(R.id.btRec);
btRec.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (started == true) {
//started = false;
CANCELLED_FLAG = true;
//recordTask.cancel(true);
try{
audioRecord.stop();
}
catch(IllegalStateException e){
Log.e("Stop failed", e.toString());
}
btRec.setText("Start");
// canvasDisplaySpectrum.drawColor(Color.BLACK);
}
else {
started = true;
CANCELLED_FLAG = false;
btRec.setText("Stop");
recordTask = new RecordAudio();
recordTask.execute();
}
}
});
}
private class RecordAudio extends AsyncTask<Void, double[], Boolean> {
@Override
protected Boolean doInBackground(Void... params) {
int bufferSize = AudioRecord.getMinBufferSize(frequency,
channelConfiguration, audioEncoding);
audioRecord = new AudioRecord(
MediaRecorder.AudioSource.DEFAULT, frequency,
channelConfiguration, audioEncoding, bufferSize);
int bufferReadResult;
short[] buffer = new short[blockSize];
double[] toTransform = new double[blockSize];
try {
audioRecord.startRecording();
} catch (IllegalStateException e) {
Log.e("Recording failed", e.toString());
}
while (started) {
if (isCancelled() || (CANCELLED_FLAG == true)) {
started = false;
//publishProgress(cancelledResult);
Log.d("doInBackground", "Cancelling the RecordTask");
break;
} else {
bufferReadResult = audioRecord.read(buffer, 0, blockSize);
for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
}
//transformer.ft(toTransform);
//publishProgress(toTransform);
}
}
return true;
}
}
}
You have to download your file and save it in the cache; then, for any request, check whether the cached file is available, use it if so, and otherwise request a new file.
For complete help, look into one of my answers: Download and cache media files
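Since the question itself is about persisting AudioRecord output, here is a minimal sketch of the usual approach: write the raw 16-bit PCM shorts behind a standard 44-byte WAV header. All names are my own, and mono 16-bit audio is assumed to match the question's recorder settings:
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
public final class WavWriter {
    // Wraps raw 16-bit mono PCM in a minimal WAV header so ordinary players accept it.
    public static void write(File out, short[] pcm, int sampleRate) throws IOException {
        int dataBytes = pcm.length * 2;
        ByteBuffer bb = ByteBuffer.allocate(44 + dataBytes).order(ByteOrder.LITTLE_ENDIAN);
        bb.put("RIFF".getBytes(StandardCharsets.US_ASCII)).putInt(36 + dataBytes);
        bb.put("WAVE".getBytes(StandardCharsets.US_ASCII));
        bb.put("fmt ".getBytes(StandardCharsets.US_ASCII)).putInt(16);
        bb.putShort((short) 1);    // audio format: PCM
        bb.putShort((short) 1);    // channels: mono
        bb.putInt(sampleRate);
        bb.putInt(sampleRate * 2); // byte rate = sampleRate * channels * 16/8
        bb.putShort((short) 2);    // block align
        bb.putShort((short) 16);   // bits per sample
        bb.put("data".getBytes(StandardCharsets.US_ASCII)).putInt(dataBytes);
        for (short s : pcm) bb.putShort(s);
        try (FileOutputStream fos = new FileOutputStream(out)) {
            fos.write(bb.array());
        }
    }
}
Called with the accumulated buffer after audioRecord.stop(), this yields a file any audio player can open; for long recordings you would stream to the file instead of buffering everything, then patch the two size fields afterwards.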
Edit: I've edited the code to show my fruitless (and maybe completely stupid) attempt to solve the problem myself. With this code I only get an awful rattle-like sound.
I'm rather new to Android app development, and now my uncle has asked me to develop an app for him which records audio and plays it back simultaneously. As if this weren't enough, he also wants me to add a frequency filter. That's actually beyond my skills, but I told him I would try anyway.
I am able to record audio and play it with the AudioRecord and AudioTrack classes, respectively, but I have big problems with the frequency filter. I've used Google and searched this forum, of course, and found some promising code snippets, but nothing really worked.
This is the (working) code I have so far:
public class MainActivity extends ActionBarActivity {
float freq_min;
float freq_max;
boolean isRecording = false;
int SAMPLERATE = 8000;
int AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
Thread recordingThread = null;
AudioRecord recorder;
Button cmdPlay;
EditText txtMinFrequency, txtMaxFrequency;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
cmdPlay = (Button)findViewById(R.id.bPlay);
cmdPlay.setOnClickListener(onClickListener);
txtMinFrequency = (EditText)findViewById(R.id.frequency_min);
txtMaxFrequency = (EditText)findViewById(R.id.frequency_max);
}
private OnClickListener onClickListener = new OnClickListener() {
@Override
public void onClick(View v) {
if (!isRecording) {
freq_min = Float.parseFloat(txtMinFrequency.getText().toString());
freq_max = Float.parseFloat(txtMaxFrequency.getText().toString());
isRecording = true;
cmdPlay.setText("stop");
startRecording();
}
else {
isRecording = false;
cmdPlay.setText("play");
stopRecording();
}
}
};
public void startRecording() {
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLERATE,
AudioFormat.CHANNEL_IN_MONO, AUDIO_FORMAT, 1024);
recorder.startRecording();
recordingThread = new Thread(new Runnable(){
public void run() {
recordAndWriteAudioData();
}
});
recordingThread.start();
}
public void stopRecording() {
isRecording = false;
recorder.stop();
recorder.release();
recorder = null;
recordingThread = null;
}
private void recordAndWriteAudioData() {
byte audioData[] = new byte[1024];
AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLERATE, AudioFormat.CHANNEL_OUT_MONO,
AudioFormat.ENCODING_PCM_16BIT, 1024, AudioTrack.MODE_STREAM);
at.play();
while (isRecording) {
recorder.read(audioData, 0, 1024);
// Converting from byte array to float array and dividing by 32768 to get values between -1 and 1
float[] audioDataF = shortToFloat(byteToShort(audioData));
for (int i = 0; i < audioDataF.length; i++) {
audioDataF[i] /= 32768.0;
}
// Fast Fourier Transform
FloatFFT_1D fft = new FloatFFT_1D(512);
fft.realForward(audioDataF);
// filter frequencies (note: a 512-point real FFT packs only 256 complex bins
// into the 512-element array, so indices 2*fftBin run out of bounds for fftBin >= 256)
for (int fftBin = 0; fftBin < 512; fftBin++) {
float frequency = (float)fftBin * (float)SAMPLERATE / (float)512;
if(frequency < freq_min || frequency > freq_max){
int real = 2 * fftBin;
int imaginary = 2 * fftBin + 1;
audioDataF[real] = 0;
audioDataF[imaginary] = 0;
}
}
//inverse FFT
fft.realInverse(audioDataF, false);
// multiplying the floats by 32768
for (int i = 0; i < audioDataF.length; i++) {
audioDataF[i] *= 32768.0;
}
// converting float array back to short array
audioData = shortToByte(floatToShort(audioDataF));
at.write(audioData, 0, 1024);
}
at.stop();
at.release();
}
public static short[] byteToShort (byte[] byteArray){
short[] shortOut = new short[byteArray.length / 2];
ByteBuffer byteBuffer = ByteBuffer.wrap(byteArray);
for (int i = 0; i < shortOut.length; i++) {
shortOut[i] = byteBuffer.getShort();
}
return shortOut;
}
public static float[] shortToFloat (short[] shortArray){
float[] floatOut = new float[shortArray.length];
for (int i = 0; i < shortArray.length; i++) {
floatOut[i] = shortArray[i];
}
return floatOut;
}
public static short[] floatToShort (float[] floatArray){
short[] shortOut = new short[floatArray.length];
for (int i = 0; i < floatArray.length; i++) {
shortOut[i] = (short) floatArray[i];
}
return shortOut;
}
public static byte[] shortToByte (short[] shortArray){
byte[] byteOut = new byte[shortArray.length * 2];
ByteBuffer.wrap(byteOut).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(shortArray);
return byteOut;
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
}
On the page Filter AudioRecord Frequencies I found code which uses an FFT to filter frequencies.
I hope this code is correct, because, to be honest, I wouldn't know at all how to alter it if it wasn't. But the actual problem is that the audio buffer is a byte array, while the FFT needs a float array with values between -1 and 1 (and after the inverse FFT I have to convert the float array back to a byte array).
I simply can't find code anywhere to do this, so any help would be highly appreciated!
The byteToShort conversion is incorrect. While the data and most Android devices are little-endian, ByteBuffer by default uses big-endian order, so we need to force little-endian before converting to short:
public static short[] byteToShort (byte[] byteArray){
short[] shortOut = new short[byteArray.length / 2];
ByteBuffer byteBuffer = ByteBuffer.wrap(byteArray);
byteBuffer.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shortOut);
return shortOut;
}
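Note that the question's shortToByte already specifies ByteOrder.LITTLE_ENDIAN, so once byteToShort matches, the round trip from recorder bytes through the FFT and back to AudioTrack stays byte-order consistent.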
I'm trying to get Android's AudioTrack to play a square wave with the following code:
public Synthesizer() {
bufferSize = android.media.AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
player = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize,
AudioTrack.MODE_STREAM);
}
public void writeSamples(byte[] samples) {
fillBuffer(samples);
player.write(buffer, 0, samples.length);
}
private void fillBuffer(byte[] samples) {
if (buffer.length < samples.length)
buffer = new byte[samples.length];
for (int i = 0; i < samples.length; i++)
buffer[i] = samples[i];
}
Even samples will be positive, the others negative, to get a square wave:
btnPlay.setOnClickListener(new OnClickListener() {
@Override
public void onClick(View arg0) {
int frequency = 44100;
byte[] sample = new byte[frequency];
for (int i = 0; i < frequency; i++) {
if (i % 2 == 0) {
sample[i] = Byte.MAX_VALUE;
}
else {
sample[i] = Byte.MIN_VALUE;
}
}
squareSynth.writeSamples(sample);
}
});
Unfortunately I get no sound at all; I've checked my volume but couldn't even get static or some crackle, which I find very strange. Does anyone know how to fix it?
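Since the thread ends without an answer, two hedged observations: the code never calls player.play(), so a MODE_STREAM track stays silent, and flipping every single byte of a 16-bit PCM stream both scrambles the sample format and parks the tone at the Nyquist frequency. A minimal sketch of an audible square wave (the 440 Hz choice and the names are my own assumptions):
// One second of a 440 Hz square wave on a 16-bit mono AudioTrack.
int sampleRate = 44100;
int minSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        Math.max(minSize, sampleRate * 2), AudioTrack.MODE_STREAM);
short[] samples = new short[sampleRate];  // one second of audio
int halfPeriod = sampleRate / (2 * 440);  // samples per half cycle of 440 Hz
for (int i = 0; i < samples.length; i++) {
    samples[i] = ((i / halfPeriod) % 2 == 0) ? Short.MAX_VALUE : Short.MIN_VALUE;
}
track.play();  // without play(), write() only buffers data and nothing is heard
track.write(samples, 0, samples.length);
track.stop();
track.release();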