How to test a sound level RMS algorithm - Android

My app calculates the noise level and the peak frequency of the input sound.
I use an FFT on a short[] buffer. This is the setup code (bufferSize = 1024, sampleRate = 44100):
int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
        channelConfiguration, audioEncoding);
AudioRecord audioRecord = new AudioRecord(
        MediaRecorder.AudioSource.DEFAULT, sampleRate,
        channelConfiguration, audioEncoding, bufferSize);
and this is the conversion and RMS code:
short[] buffer = new short[blockSize];
try {
    audioRecord.startRecording();
} catch (IllegalStateException e) {
    Log.e("Recording failed", e.toString());
}
while (started) {
    int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
    /*
     * Noise level meter begins here
     */
    // Compute the RMS value. (Note that this does not remove DC.)
    double rms = 0;
    for (int i = 0; i < buffer.length; i++) {
        rms += buffer[i] * buffer[i];
    }
    rms = Math.sqrt(rms / buffer.length);
    mAlpha = 0.9;
    mGain = 0.0044;
    // Compute a smoothed version for less flickering of the display.
    mRmsSmoothed = mRmsSmoothed * mAlpha + (1 - mAlpha) * rms;
    double rmsdB = 20.0 * Math.log10(mGain * mRmsSmoothed);
Now I want to know whether this algorithm works correctly, or whether I'm missing something.
And if it is correct and I have the sound level in dB displayed on the phone, how can I test it?
Any help is appreciated. Thanks in advance :)

The code looks correct, but you should probably handle the case where the buffer initially contains zeros, which would make Math.log10 return negative infinity. For example, change:
double rmsdB = 20.0 * Math.log10(mGain * mRmsSmoothed);
to:
double rmsdB = mGain * mRmsSmoothed > 0.0 ?
        20.0 * Math.log10(mGain * mRmsSmoothed) :
        -999.99; // some suitably large negative value for the case where you have no input signal
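The comment in the question also notes that the RMS computation does not remove DC. If the microphone path can introduce a DC offset, a minimal sketch of subtracting the mean first (variable names follow the question's snippet):
// Remove any DC offset by subtracting the buffer mean before squaring.
double mean = 0;
for (int i = 0; i < buffer.length; i++) {
    mean += buffer[i];
}
mean /= buffer.length;

double rms = 0;
for (int i = 0; i < buffer.length; i++) {
    double centered = buffer[i] - mean;
    rms += centered * centered;
}
rms = Math.sqrt(rms / buffer.length);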

Related

Android: confused about how to get amplitude of a frequency generated and played by AudioTrack

In my app I generate a list of tones with different frequencies, for example 1000 Hz, 2000 Hz, 4000 Hz ... for the left and right channels:
private AudioTrack generateTone(Ear ear) {
    // fill out the array
    int numSamples = SAMPLE_RATE * getDurationInSeconds();
    double[] sample = new double[numSamples];
    byte[] generatedSnd = new byte[2 * numSamples];
    for (int i = 0; i < numSamples; ++i) {
        sample[i] = Math.sin(2 * Math.PI * i / (SAMPLE_RATE / getLatestFreqInHz()));
    }
    // convert to 16 bit pcm sound array
    // assumes the sample buffer is normalised.
    int idx = 0;
    for (final double dVal : sample) {
        // scale to maximum amplitude
        final short val = (short) (dVal * 32767);
        // in 16 bit wav PCM, first byte is the low order byte
        generatedSnd[idx++] = (byte) (val & 0x00ff);
        generatedSnd[idx++] = (byte) ((val & 0xff00) >>> 8);
    }
    AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
            SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, 2 * numSamples, // buffer size in bytes
            AudioTrack.MODE_STATIC);
    audioTrack.write(generatedSnd, 0, generatedSnd.length);
    if (ear == Ear.LEFT) {
        audioTrack.setStereoVolume(1.0f, 0.0f);
    } else if (ear == Ear.RIGHT) {
        audioTrack.setStereoVolume(0.0f, 1.0f);
    }
    return audioTrack;
}
I use this code to control volume:
audioManager = (AudioManager) context.getApplicationContext().getSystemService(Context.AUDIO_SERVICE);
MAX_VOLUME = audioManager.getStreamMaxVolume(STREAM_MUSIC);
MIN_VOLUME = audioManager.getStreamMinVolume(STREAM_MUSIC);
APP_DEFAULT_VOLUME = (MAX_VOLUME + MIN_VOLUME) / 2;

@Override
public void setToAppDefaultVolume() {
    Timber.tag(TAG).d("Resetting to app default sound volume.");
    audioManager.setStreamVolume(STREAM_MUSIC, APP_DEFAULT_VOLUME, 0);
}

@Override
public void increaseVolume() {
    if (!isReachedMaxVolume()) {
        Timber.tag(TAG).d("Increasing device sound volume from %d to %d", currentVolumeLevel(), currentVolumeLevel() + STEP);
        audioManager.setStreamVolume(STREAM_MUSIC, currentVolumeLevel() + STEP, 0);
    } else {
        Timber.tag(TAG).d("Reached the maximum device volume.");
    }
}

@Override
public void decreaseVolume() {
    if (!isReachedMinVolume()) {
        Timber.tag(TAG).d("Decreasing device sound volume from %d to %d", currentVolumeLevel(), currentVolumeLevel() - STEP);
        audioManager.setStreamVolume(STREAM_MUSIC, currentVolumeLevel() - STEP, 0);
    } else {
        Timber.tag(TAG).d("Reached the preferred minimum volume");
    }
}
I start playing each tone (frequency) at APP_DEFAULT_VOLUME and gradually increase the volume. When the user confirms that he/she heard a specific tone at a specific volume, I want to calculate its amplitude and log it for later, so I can review the user's hearing ...
But I have no clue how to do so!
All the solutions I found were about reading data from the microphone and calculating the amplitude to visualize it on screen ...
My scenario is much simpler: the device volume is recorded, the frequency is fixed and recorded, and the audio is generated dynamically rather than played from a file.
Can anyone help me with this scenario?
Thanks in advance.

Android AudioTrack setLoop invalid value

I generate a PCM tone and want to loop the sound.
I followed the documentation, but the log in Eclipse keeps telling me:
08-05 15:46:26.675: E/AudioTrack(27686): setLoop invalid value: loopStart 0, loopEnd 44100, loopCount -1, framecount 11025, user 11025
Here is my code:
void genTone() {
    // fill out the array
    for (int i = 1; i < numSamples - 1; i = i + 2) {
        sample[i] = Math.sin(2 * Math.PI * i / (sampleRate / -300));
    }
    // convert to 16 bit pcm sound array
    // assumes the sample buffer is normalised.
    int idx = 0;
    for (double dVal : sample) {
        short val = (short) (dVal * 32767);
        generatedSnd[idx++] = (byte) (val & 0x00ff);
        generatedSnd[idx++] = (byte) ((val & 0xff00) >>> 8);
    }
    // write it to the audio track.
    audioTrack.write(generatedSnd, 0, numSamples);
    audioTrack.setLoopPoints(0, numSamples, -1);
    // from 0.0 ~ 1.0
    audioTrack.setStereoVolume((float) 0.5, (float) 1); // change amplitude
}

public void buttonPlay(View v) {
    audioTrack.reloadStaticData();
    audioTrack.play();
}
please help ~~
From the documentation: "endInFrames loop end marker expressed in frames"
The log print indicates that your track contains 11025 frames, which is less than the 44100 that you're trying to specify as the end marker (for 16-bit stereo PCM audio, the frame size would be 4 bytes).
Another thing worth noting is that "the track must be stopped or paused for the position to be changed".
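As a minimal sketch of the fix, assuming 16-bit stereo PCM (4 bytes per frame, which matches the frame count in the log) and that numSamples here is the number of bytes written:
// setLoopPoints expects frames, not bytes or samples
int bytesPerFrame = 2 /* bytes per 16-bit sample */ * 2 /* channels */;
int endFrame = numSamples / bytesPerFrame; // 44100 bytes -> 11025 frames
audioTrack.pause(); // the track must be stopped or paused first
audioTrack.setLoopPoints(0, endFrame, -1);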

How to generate and play a 20Hz square wave with AudioTrack?

I'm trying to generate and play a square wave with AudioTrack (Android). I've read lots of tutorials but am still confused.
int sampleRate = 44100;
int channelConfig = AudioFormat.CHANNEL_IN_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
AudioTrack audioTrack;
int buffer = AudioTrack.getMinBufferSize(sampleRate, channelConfig, audioFormat);
audioTrack.write(short[] audioData, int offsetInShorts, int sizeInShorts);
In this code, what confuses me is how to fill the short array "audioData" ...
Can anyone help me? Thanks in advance!
You should use pulse-code modulation. The linked article has an example of encoding a sine wave; a square wave is even simpler. Remember that the maximum amplitude is encoded by the maximum value of short (32767), and that the "effective" frequency depends on your sampling rate.
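For example, a minimal sketch of filling one second of a 20 Hz square wave at a 44100 Hz sample rate (the buffer and variable names are assumptions, not from the question):
int sampleRate = 44100;
double freq = 20.0;
int samplesPerPeriod = (int) (sampleRate / freq); // 2205 samples per cycle
short[] audioData = new short[sampleRate]; // one second of mono audio
for (int i = 0; i < audioData.length; i++) {
    // first half of each period at +max, second half at -max
    audioData[i] = (i % samplesPerPeriod) < samplesPerPeriod / 2
            ? (short) 32767 : (short) -32768;
}
// then: audioTrack.write(audioData, 0, audioData.length);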
This method generates square, sine and sawtooth waveforms:
// Process audio
protected void processAudio()
{
    short buffer[];
    int rate =
        AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
    int minSize =
        AudioTrack.getMinBufferSize(rate, AudioFormat.CHANNEL_OUT_MONO,
                                    AudioFormat.ENCODING_PCM_16BIT);

    // Find a suitable buffer size
    int sizes[] = {1024, 2048, 4096, 8192, 16384, 32768};
    int size = 0;
    for (int s : sizes)
    {
        if (s > minSize)
        {
            size = s;
            break;
        }
    }

    final double K = 2.0 * Math.PI / rate;

    // Create the audio track
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, rate,
                                AudioFormat.CHANNEL_OUT_MONO,
                                AudioFormat.ENCODING_PCM_16BIT,
                                size, AudioTrack.MODE_STREAM);
    // Check audiotrack
    if (audioTrack == null)
        return;

    // Check state
    int state = audioTrack.getState();
    if (state != AudioTrack.STATE_INITIALIZED)
    {
        audioTrack.release();
        return;
    }

    audioTrack.play();

    // Create the buffer
    buffer = new short[size];

    // Initialise the generator variables
    double f = frequency;
    double l = 0.0;
    double q = 0.0;

    while (thread != null)
    {
        // Fill the current buffer
        for (int i = 0; i < buffer.length; i++)
        {
            f += (frequency - f) / 4096.0;
            l += ((mute ? 0.0 : level) * 16384.0 - l) / 4096.0;
            q += (q < Math.PI) ? f * K : (f * K) - (2.0 * Math.PI);

            switch (waveform)
            {
            case SINE:
                buffer[i] = (short) Math.round(Math.sin(q) * l);
                break;

            case SQUARE:
                buffer[i] = (short) ((q > 0.0) ? l : -l);
                break;

            case SAWTOOTH:
                buffer[i] = (short) Math.round((q / Math.PI) * l);
                break;
            }
        }

        audioTrack.write(buffer, 0, buffer.length);
    }

    audioTrack.stop();
    audioTrack.release();
}
Credit goes to billthefarmer.
Complete Source code:
https://github.com/billthefarmer/sig-gen

Android - Mixing multiple static waveforms into a single AudioTrack

I am making a class that takes an array of frequency values (e.g. 440 Hz, 880 Hz, 1760 Hz) and plays how they would sound combined into a single AudioTrack. I am not a sound programmer, so this is difficult for me to write myself, though I believe it is a relatively easy problem for an experienced sound programmer. Here is some of the code in the play method:
public void play() {
    // Get array of frequencies with their relative strengths
    double[][] soundData = getData();

    // TODO
    // Perform a calculation to fill an array with the mixed sound - then play it in an infinite loop
    // Need an AudioTrack that will play calculated loop

    // Track sample info
    int numOfSamples = DURATION * SAMPLE_RATE;
    double sample[] = new double[numOfSamples];
    byte sound[] = new byte[2 * numOfSamples];

    // fill out the array
    for (int i = 0; i < numOfSamples; ++i) {
        sample[i] = Math.sin(2 * Math.PI * i / (SAMPLE_RATE / 440));
    }
    int i = 0;
    for (double dVal : sample) {
        // scale to maximum amplitude
        final short val = (short) (dVal * 32767);
        // in 16 bit wav PCM, first byte is the low order byte
        sound[i++] = (byte) (val & 0x00ff);
        sound[i++] = (byte) ((val & 0xff00) >>> 8);
    }

    // Obtain a minimum buffer size
    int minBuffer = AudioTrack.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);

    if (minBuffer > 0) {
        // Create an AudioTrack (buffer size is in bytes for MODE_STATIC)
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT, 2 * numOfSamples, AudioTrack.MODE_STATIC);

        // Write audio data to track
        track.write(sound, 0, sound.length);

        // Begin playing track
        track.play();
    }

    // Once everything has successfully begun, indicate such.
    isPlaying = true;
}
Right now, this code simply plays a concert A (440 Hz); that was to test whether the code works. Now I need to take a bunch of frequencies, perform some kind of calculation, and write the sample data.
OK, so the answer did turn out to be a simple summation loop. Here it is; just replace the original for loop with this one:
// fill out the array
for (int i = 0; i < numOfSamples; ++i) {
    double valueSum = 0;
    for (int j = 0; j < soundData.length; j++) {
        valueSum += Math.sin(2 * Math.PI * i / (SAMPLE_RATE / soundData[j][0]));
    }
    sample[i] = valueSum / soundData.length;
}
What this does is take all the frequencies, add them together into the variable valueSum, and then divide by the length of the frequency array soundData, which is a simple average. This produces a nice mixture of sine waves for an arbitrarily long array of frequencies.
I haven't tested performance, but I do have this running in a thread, otherwise it could crash the UI. So, hope this helps - I am marking this as the answer.
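Since getData() also returns a relative strength per frequency, a weighted variant might look like this (a sketch only; it assumes soundData[j][1] holds a positive strength value, which the question does not confirm):
// weighted mix: scale each sine by its relative strength, then normalise
double weightSum = 0;
for (int j = 0; j < soundData.length; j++) {
    weightSum += soundData[j][1];
}
for (int i = 0; i < numOfSamples; ++i) {
    double valueSum = 0;
    for (int j = 0; j < soundData.length; j++) {
        valueSum += soundData[j][1] * Math.sin(2 * Math.PI * i / (SAMPLE_RATE / soundData[j][0]));
    }
    sample[i] = valueSum / weightSum; // keeps the result in [-1, 1]
}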
If you intend to mix multiple waveforms into one, you can prevent clipping in several ways.
Assume sample[i] is a float representing the sum of all sounds.
HARD CLIPPING:
if (sample[i] > 1.0f)
{
    sample[i] = 1.0f;
}
if (sample[i] < -1.0f)
{
    sample[i] = -1.0f;
}
HEADROOM (y = 1.1x - 0.2x^3 for the curve, min and max capped slightly under 1.0f):
if (sample[i] <= -1.25f)
{
    sample[i] = -0.987654f;
}
else if (sample[i] >= 1.25f)
{
    sample[i] = 0.987654f;
}
else
{
    sample[i] = 1.1f * sample[i] - 0.2f * sample[i] * sample[i] * sample[i];
}
For a 3rd-order polynomial waveshaper on its own (less smooth at the extremes), replace the whole block above with just:
sample[i] = 1.1f * sample[i] - 0.2f * sample[i] * sample[i] * sample[i];
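After clipping, the float samples still need converting back to 16-bit PCM before they are written to the AudioTrack. A minimal sketch, assuming the samples are normalised to [-1, 1]:
// convert a clipped float sample back to a 16-bit PCM value
short pcmVal = (short) Math.round(Math.max(-1f, Math.min(1f, sample[i])) * 32767f);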

Android AudioRecord filter range of frequency

I am using the Android platform. From the reference question below I came to know that the AudioRecord class, which returns raw data, lets me filter a range of audio frequencies depending on my need, but for that I will need an algorithm. Can somebody please help me find an algorithm to filter the range between 14,400 bph and 16,200 bph?
I tried "JTransforms", but I don't know whether I can achieve this with it or not. Currently I am using "jfftpack" to display visual effects, which works very well, but I can't achieve audio filtering with it.
Reference here
Help appreciated. Thanks in advance.
Following is my code. As I mentioned above, I am using the "jfftpack" library for the display; you may see references to this library in the code, please don't get confused by that.
private class RecordAudio extends AsyncTask<Void, double[], Void> {
    @Override
    protected Void doInBackground(Void... params) {
        try {
            final AudioRecord audioRecord = findAudioRecord();
            if (audioRecord == null) {
                return null;
            }
            final short[] buffer = new short[blockSize];
            final double[] toTransform = new double[blockSize];
            audioRecord.startRecording();
            while (started) {
                final int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
                for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                    toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
                }
                transformer.ft(toTransform);
                publishProgress(toTransform);
            }
            audioRecord.stop();
            audioRecord.release();
        } catch (Throwable t) {
            Log.e("AudioRecord", "Recording Failed");
        }
        return null;
    }

    /**
     * @param toTransform
     */
    @Override
    protected void onProgressUpdate(double[]... toTransform) {
        canvas.drawColor(Color.BLACK);
        for (int i = 0; i < toTransform[0].length; i++) {
            int x = i;
            int downy = (int) (100 - (toTransform[0][i] * 10));
            int upy = 100;
            canvas.drawLine(x, downy, x, upy, paint);
        }
        imageView.invalidate();
    }
}
There are a lot of tiny details in this process that can potentially hang you up. This code isn't tested and I don't do audio filtering very often, so you should be extremely suspicious of it. This is the basic process you would follow to filter audio:
1. Get the audio buffer
2. Possibly convert the audio buffer (byte to float)
3. (Optional) Apply a windowing function, e.g. Hanning
4. Take the FFT
5. Filter the frequencies
6. Take the inverse FFT
I'm assuming you have some basic knowledge of Android and audio recording, so I will cover steps 4-6 here.
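For steps 2 and 3, which aren't covered below, a minimal sketch (the short[] buffer and float[] audioBuffer names are assumptions):
// Step 2: convert 16-bit samples to floats in [-1, 1]
float[] audioBuffer = new float[buffer.length];
for (int i = 0; i < buffer.length; i++) {
    audioBuffer[i] = buffer[i] / 32768f;
}
// Step 3 (optional): apply a Hann window to reduce spectral leakage
for (int i = 0; i < audioBuffer.length; i++) {
    audioBuffer[i] *= 0.5f * (1f - (float) Math.cos(2.0 * Math.PI * i / (audioBuffer.length - 1)));
}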
//it is assumed that a float array audioBuffer exists with even length equal to
//the capture size of your audio buffer

//The size of the FFT will be the size of your audioBuffer / 2
int FFT_SIZE = bufferSize / 2;
FloatFFT_1D mFFT = new FloatFFT_1D(FFT_SIZE); //this is a jTransforms type

//Take the FFT
mFFT.realForward(audioBuffer);

//The first 1/2 of audioBuffer now contains bins that represent the frequency
//of your wave, in a way. To get the actual frequency from the bin:
//frequency_of_bin = bin_index * sample_rate / FFT_SIZE

//assuming the length of audioBuffer is even, the real and imaginary parts will be
//stored as follows
//audioBuffer[2*k] = Re[k], 0<=k<n/2
//audioBuffer[2*k+1] = Im[k], 0<k<n/2

//Define the frequencies of interest
float freqMin = 14400;
float freqMax = 16200;

//Loop through the fft bins and filter frequencies
for (int fftBin = 0; fftBin < FFT_SIZE; fftBin++) {
    //Calculate the frequency of this bin assuming a sampling rate of 44,100 Hz
    float frequency = (float) fftBin * 44100F / (float) FFT_SIZE;

    //Now filter the audio, I'm assuming you wanted to keep the
    //frequencies of interest rather than discard them.
    if (frequency < freqMin || frequency > freqMax) {
        //Calculate the index where the real and imaginary parts are stored
        int real = 2 * fftBin;
        int imaginary = 2 * fftBin + 1;

        //zero out this frequency
        audioBuffer[real] = 0;
        audioBuffer[imaginary] = 0;
    }
}

//Take the inverse FFT to convert the signal from the frequency domain back to the time domain
mFFT.realInverse(audioBuffer, false);
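If you need the filtered signal back as 16-bit samples, a minimal sketch replacing the realInverse call above (in JTransforms, passing scale = true applies the 1/n scaling so the amplitudes return to the input range, whereas false leaves the result unscaled):
//Use scaling so the inverse FFT returns data in the original [-1, 1] range
mFFT.realInverse(audioBuffer, true);

//Convert the filtered floats back to 16-bit PCM samples
short[] filtered = new short[audioBuffer.length];
for (int i = 0; i < audioBuffer.length; i++) {
    float s = Math.max(-1f, Math.min(1f, audioBuffer[i]));
    filtered[i] = (short) (s * 32767);
}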
