Android PCM drawing

I'm trying to draw audio amplitude against time. To achieve this I'm using the AudioRecord class, which gives me a raw audio array.
new Thread(new Runnable() {
    @Override
    public void run() {
        while (mIsRecording) {
            int readSize = mRecorder.read(mBuffer, 0, mBuffer.length);
            for (int i = 0; i < readSize; i++) {
                long time = mChronometer.getTimeElapsed();
                ampArray.add(mBuffer[i]);
                timeArray.add(time);
            }
        }
    }
}).start();
}
The parameters I use for AudioRecord are:
public static final int SAMPLE_RATE = 8000;

private void initRecorder() {
    int bufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT);
    mBuffer = new short[bufferSize];
    mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, bufferSize);
}
The result I get is this one:
(screenshot of my current plot omitted)
What I'm looking for:
(screenshot omitted)
Am I missing something here?
Thanks in advance.
EDIT: Drawing method:
When the recording is stopped, I send all the values saved in the amplitude array and in the time array to the LineGraphSeries of the GraphView API:
series = new LineGraphSeries<DataPoint>(generateData(ampArray, timeArray));
graph.addSeries(series);
generateData method:
double x = 0; int i = 0; short y = 0;

private DataPoint[] generateData(ArrayList<Short> ampArray, ArrayList<Double> timeArray) {
    DataPoint[] values = new DataPoint[ampArray.size()];
    for (int i = 0; i < ampArray.size(); i++) {
        x = timeArray.get(i);
        y = ampArray.get(i);
        DataPoint v = new DataPoint(x, y);
        values[i] = v;
    }
    return values;
}

I'm going to take an educated guess here and suggest that it has something to do with these two lines:
long time = mChronometer.getTimeElapsed();
timeArray.add(time);
It looks to me like you are trying to plot samples that occurred in one time regime while processing them in batch against the current CPU clock, which would explain your results: you might process a big block of samples - which you can do much faster than they occurred in the first place - and they would all get a similar time-axis value.
The proper approach is to reconstruct the time axis for the samples themselves. Assume the first sample you process is time 0. If your sample rate is 48000 then each sample is 1/48000 of a second. The approach would be something like this:
int sampleNumber = 0;
while (mIsRecording) {
    int readSize = mRecorder.read(mBuffer, 0, mBuffer.length);
    for (int i = 0; i < readSize; i++) {
        ampArray.add(mBuffer[i]);
        // cast to double to avoid integer division
        double time = (double) sampleNumber / SAMPLE_RATE;
        timeArray.add(time);
        sampleNumber++;
    }
}
Note, I changed timeArray from long to double as it is now in seconds rather than milliseconds. If you prefer milliseconds, multiply time by 1000 and cast to a long.
Also, you don't need to create an array for the time axis as you can determine the time of any sample based upon its absolute index in the ampArray.
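To illustrate that, here is a sketch of generateData with the time array dropped entirely (this assumes the SAMPLE_RATE constant and GraphView's DataPoint type from the question):

private DataPoint[] generateData(ArrayList<Short> ampArray) {
    DataPoint[] values = new DataPoint[ampArray.size()];
    for (int i = 0; i < ampArray.size(); i++) {
        // the i-th sample was captured i / SAMPLE_RATE seconds after the first one
        double time = (double) i / SAMPLE_RATE;
        values[i] = new DataPoint(time, ampArray.get(i));
    }
    return values;
}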

Related

Play short generated sound

I want to play a generated sound that is shorter than 1 second. However, the minBufferSize of the AudioTrack always seems to be 1 second or longer. On some devices I can set the bufferSize smaller than the value returned by AudioTrack.getMinBufferSize, but this is not possible on all devices. I'd like to know whether it's possible to generate a shorter sound for the AudioTrack. I'm currently using this code (it contains some smoothing, because I'm constantly getting new frequencies):
int buffSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO,
        AudioFormat.ENCODING_PCM_16BIT, buffSize,
        AudioTrack.MODE_STREAM);
short[] samples = new short[buffSize];
int amp = 10000;
double twopi = 8. * Math.atan(1.);
double phase = 0.0;

audioTrack.play();

double currentFrequency = getFrequency();
double smoothing = 300;
double deltaTime = buffSize / 500;

while (playing && PreferenceManager.getDefaultSharedPreferences(
        MainActivity.this).getBoolean("effect_switch", true)) {
    double newFrequency = getFrequency();
    for (int i = 0; i < buffSize; i++) {
        currentFrequency += deltaTime * (newFrequency - currentFrequency) / smoothing;
        samples[i] = (short) (amp * Math.sin(phase));
        phase += twopi * currentFrequency / sampleRate;
    }
    audioTrack.write(samples, 0, buffSize);
}
audioTrack.stop();
audioTrack.release();
In fact, I want the sounds to be updated more frequently, which is the reason for me needing shorter samples.
I think I have a solution for you. Since my min buffer seems to be much smaller than 1 sec, I simulated your problem by loading a buffer with 5 sec of data but only playing 0.5 sec of it, immediately followed by another frequency. For that tone I also created 5 sec of data but only played 0.5 sec, and repeated this for several tones. It all works for me.
Also, since I jammed this into a current project I'm working on, it's difficult for me to just cut and paste my code. While I've tested my solution, what I've posted here is not tested exactly as written. Some of it is cut and paste, some pseudocode.
The key feature is using the OnPlaybackPositionUpdateListener.
private AudioTrack.OnPlaybackPositionUpdateListener audioTrackListener = new AudioTrack.OnPlaybackPositionUpdateListener() {
    @Override
    public void onMarkerReached(AudioTrack audioTrack) {
        int marker = audioTrack.getNotificationMarkerPosition();
        // I just used 8 tones of 0.5 sec each to determine when to stop but you could make
        // the condition based on a button click or whatever is best for you
        if (marker < MAX_FRAME_POSITION) {
            audioTrack.pause();
            newSamples();
            audioTrack.play();
        } else {
            audioTrack.stop();
        }
        audioTrack.setNotificationMarkerPosition(marker + FRAME_MARKER);
        Log.d(TAG, "MarkerReached");
    }

    @Override
    public void onPeriodicNotification(AudioTrack audioTrack) {
        int position = audioTrack.getPlaybackHeadPosition();
        if (position < MAX_FRAME_POSITION) {
            audioTrack.pause();
            newSamples();
            audioTrack.play();
        } else {
            audioTrack.stop();
        }
        Log.d(TAG, "PeriodNotification");
    }
};
Then
audioTrack.setPlaybackPositionUpdateListener(audioTrackListener);
I used the marker (which has to be re-initialized repeatedly) for my tests...
audioTrack.setNotificationMarkerPosition(MARKER_FRAMES);
but you should be able to use the periodic notification too.
audioTrack.setPositionNotificationPeriod(PERIODIC_FRAMES);
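Since some of the above is pseudocode, here is a rough sketch of how the pieces might be wired together (untested; MARKER_FRAMES, PERIODIC_FRAMES, and newSamples() are the placeholders from this answer):

audioTrack.setPlaybackPositionUpdateListener(audioTrackListener);
audioTrack.setNotificationMarkerPosition(MARKER_FRAMES);      // marker approach (re-armed in onMarkerReached)
// audioTrack.setPositionNotificationPeriod(PERIODIC_FRAMES); // or the periodic approach
newSamples();        // write the first buffer (the blocking write runs on its own thread)
audioTrack.play();   // the listener callbacks then keep the buffer fed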
And the newSamples() method called from the listener
public void newSamples() {
    /*
     * generate buffer, I'm doing similar to you, without the smoothing
     */

    // AudioTrack write is a blocking operation so I've moved it off to its own thread.
    // Could also be done with an AsyncTask.
    Thread thread = new Thread(writeSamples);
    thread.start();
}

private Runnable writeSamples = new Runnable() {
    @Override
    public void run() {
        audioTrack.write(samples, 0, buffSize);
    }
};

How to compute decibel (dB) of Amplitude from Media Player?

I have code to compute the real-time dB amplitude of an AudioRecord stream, and it works well. After recording, I save the audio to a wav file. Now I want to play back that file and recompute the dB amplitude, but I cannot get a result similar to the one I got while recording. Could you help me fix it? Below is my code to compute the dB amplitude when recording and during playback.
1. Compute dB amplitude when recording
bufferSize = AudioRecord.getMinBufferSize(16000, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
record = new AudioRecord(MediaRecorder.AudioSource.VOICE_COMMUNICATION, SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);
audioBuffer = new short[bufferSize];
readSize = record.read(audioBuffer, 0, audioBuffer.length);

double amplitude = 0;
double sum = 0;
for (int i = 0; i < readSize; i++) {
    sum += audioBuffer[i] * audioBuffer[i];
}
amplitude = sum / readSize;
dbAmp = 20.0 * Math.log10(amplitude / 32767.0);
2. Assume the output file is output.wav. I used MediaPlayer to play it back and compute the amplitude:
String filePath = Environment.getExternalStorageDirectory().getPath() +"/" +"output.wav";
mPlayer = new MediaPlayer();
mPlayer.setDataSource(filePath);
mPlayer.prepare();
mPlayer.start();
mVisualizerView.link(mPlayer);
Here mVisualizerView is a view built around the Visualizer class; it has a link function like this:
public void link(MediaPlayer player) {
    // Create the Visualizer object and attach it to our media player.
    mVisualizer = new Visualizer(player.getAudioSessionId());
    mVisualizer.setScalingMode(Visualizer.SCALING_MODE_NORMALIZED);
    mVisualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);

    // Pass through Visualizer data to VisualizerView
    Visualizer.OnDataCaptureListener captureListener = new Visualizer.OnDataCaptureListener() {
        @Override
        public void onWaveFormDataCapture(Visualizer visualizer, byte[] bytes,
                int samplingRate) {
            updateVisualizer(bytes);
        }

        @Override
        public void onFftDataCapture(Visualizer visualizer, byte[] bytes,
                int samplingRate) {
            updateVisualizerFFT(bytes);
        }
    };
    mVisualizer.setDataCaptureListener(captureListener,
            Visualizer.getMaxCaptureRate() / 2, true, true);

    player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
        @Override
        public void onCompletion(MediaPlayer mediaPlayer) {
            mVisualizer.setEnabled(false);
        }
    });
}
My task is to recompute dbAmp from the bytes in the updateVisualizer or updateVisualizerFFT functions:
public void updateVisualizer(byte[] bytes) {
    dbAmp = computedbAmp(bytes);
    mBytes = bytes;
    invalidate();
}

public void updateVisualizerFFT(byte[] bytes) {
    dbAmp = computedbAmp(bytes);
    mFFTBytes = bytes;
    invalidate();
}

public double computedbAmp(byte[] audioData) {
    //System.out.println("::::: audioData :::::" + audioData);
    double amplitude = 0;
    for (int i = 0; i < audioData.length / 2; i++) {
        double y = (audioData[i * 2] | audioData[i * 2 + 1] << 8) / 32768.0;
        // depending on your endianness:
        // double y = (audioData[i * 2] << 8 | audioData[i * 2 + 1]) / 32768.0
        amplitude += Math.abs(y);
    }
    amplitude = amplitude / audioData.length / 2;
    return amplitude;
}
I have tried several ways to compute the dB amplitude from the bytes, but none of them are correct. Could you help me fix it, or suggest a way to compute it? Thanks.
What I expect is something like the Sensor Box for Android app.
As mentioned in the comments, you are not using the same computation for both. Also, I don't think either method is correct.
From your code in the first example it looks like you are trying to compute the RMS, which is sqrt(sumOfSquares / N), and then convert to dB.
The second example computes sumOfAbs / N and never converts to dB.
Another very minor issue is that in one case you divide by 32767 and in the other by 32768. Both should be 32768.
For part one do something like this:
double sum = 0;
for (int i = 0; i < readSize; i++) {
    double y = audioBuffer[i] / 32768.0;
    sum += y * y;
}
double rms = Math.sqrt(sum / readSize);
dbAmp = 20.0 * Math.log10(rms);
And for part 2:
double sum = 0;
int sampleCount = audioData.length / 2;
for (int i = 0; i < sampleCount; i++) {
    // mask the low byte so it isn't sign-extended before being combined
    double y = ((audioData[i * 2] & 0xFF) | audioData[i * 2 + 1] << 8) / 32768.0;
    sum += y * y;
}
double rms = Math.sqrt(sum / sampleCount);
dbAmp = 20.0 * Math.log10(rms);
Notice the two are almost exactly identical, with the exception of cracking open the byte array. That should be a clue to factor this computation out into a single function, so you won't run into this kind of problem in the future.
Edit:
One more thing I forgot to mention. There is a bit of open debate on this matter, but depending on your application you might want your dBFS result to be sine-calibrated. What I mean is that if you were to run the computation on a single full-scale sine wave as I've written it, you would get an RMS value of 0.7071 (1/sqrt(2)), or -3 dBFS. If you want a full-scale sine to hit exactly 0 dBFS, you need to multiply the RMS value by sqrt(2).
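Taking the factoring suggestion and the calibration note together, a shared helper might look like this (a sketch; it assumes the caller has already unpacked its short[] or byte[] buffer into doubles normalized to the -1.0 to 1.0 range, and sineCalibrated is an illustrative flag):

// Shared RMS-to-dBFS computation for both the recording and playback paths.
static double rmsToDbfs(double[] normalizedSamples, boolean sineCalibrated) {
    double sum = 0;
    for (double y : normalizedSamples) {
        sum += y * y;
    }
    double rms = Math.sqrt(sum / normalizedSamples.length);
    if (sineCalibrated) {
        rms *= Math.sqrt(2.0); // a full-scale sine now reads exactly 0 dBFS
    }
    return 20.0 * Math.log10(rms);
}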
As the question said, the first case worked well. Hence, I assumed the first case was correct and used it as a reference to edit the second case. Following jaket's comment, we can modify the second case as:
double sum = 0;
int sampleCount = audioData.length / 2;
for (int i = 0; i < sampleCount; i++) {
    double y = ((audioData[i * 2] & 0xFF) | audioData[i * 2 + 1] << 8);
    sum += y * y;
}
double rms = sum / sampleCount;
double dbAmp = 20.0 * Math.log10(rms / 32768.0);
return dbAmp;
I think it will give the same result as the first case. Hope it helps.

Android sound wave - not smooth and duration issue

I have a problem generating a smooth sine wave.
I did this a few years ago in C++ and everything worked perfectly. Now I am trying to do it using AudioTrack and I do not know what is wrong.
This is my test case:
I want to produce a five-second sine wave which is smooth (no cracks etc.). For one second I generate 44100 samples and divide them across a couple of buffers of size 8192 (probably this is the reason for the cracks, but how can I fix it without using a bigger buffer?).
Unfortunately, using my code the sound is not smooth, and instead of 5 seconds it lasts about 1 second. I would be grateful for any help.
Please let me know if this piece of code is not enough.
class Constants:

//<---
public final static int SAMPLING = 44100;
public final static int DEFAULT_GEN_DURATION = 1000;
public final static int DEFAULT_NUM_SAMPLES = DEFAULT_GEN_DURATION * SAMPLING / 1000; //44100 per second
public final static int DEFAULT_BUFFER_SIZE = 8192;
//--->

//preparing buffers to play;
Buffer buffer = new Buffer();
short[] buffer_values = new short[Constants.DEFAULT_BUFFER_SIZE];
float[] samples = new float[Constants.DEFAULT_BUFFER_SIZE];
float d = (float) ((Constants.FREQUENCIES[index] * 2 * Math.PI) / Constants.SAMPLING);
int numSamples = Constants.DEFAULT_NUM_SAMPLES; //44100 per second - for test
float x = 0;
int index_in_buffer = 0;
for (int i = 0; i < numSamples; i++) {
    if (index_in_buffer >= Constants.DEFAULT_BUFFER_SIZE - 1) {
        buffer.setBufferShort(buffer_values);
        buffer.setBufferSizeShort(index_in_buffer);
        queue_with_data_AL.add(buffer); //add buffer to queue
        buffer_values = new short[Constants.DEFAULT_BUFFER_SIZE];
        samples = new float[Constants.DEFAULT_BUFFER_SIZE];
        index_in_buffer = 0;
    }
    samples[index_in_buffer] = (float) Math.sin(x);
    buffer_values[index_in_buffer] = (short) (samples[index_in_buffer] * Short.MAX_VALUE);
    x += d;
    index_in_buffer++;
}
buffer.setBufferShort(buffer_values);
buffer.setBufferSizeShort(index_in_buffer + 1);
queue_with_data_AL.add(buffer);
index_in_buffer = 0;
}
//class AudioPlayer
public AudioPlayer(int sampleRate) { //44100
    int minSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    audiotrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minSize, AudioTrack.MODE_STREAM);
}

public void play(byte[] audioData, int sizeOfBuffer) {
    audiotrack.write(audioData, 0, sizeOfBuffer);
}

public void start() {
    if (state == Constants.STOP_STATE) {
        state = Constants.START_STATE;
        int startLength = 0;
        while (state == Constants.START_STATE) {
            Buffer buffer = getBufferFromQueueAL(); //getting buffer from prepared list
            if (buffer != null) {
                short[] data = buffer.getBufferShort();
                int size_of_data = buffer.getBufferSizeShort();
                if (data != null) {
                    int len = audiotrack.write(data, 0, size_of_data);
                    if (startLength == 0) {
                        audiotrack.play();
                    }
                    startLength += len;
                } else {
                    break;
                }
            } else {
                MessagesLog.e(TAG, "get null data");
                break;
            }
        }
        if (audiotrack != null) {
            audiotrack.pause();
            audiotrack.flush();
            audiotrack.stop();
        }
    }
}
You are playing only 1 second because 44100 samples at a sample rate of 44100 Hz is exactly 1 second of sound.
You have to generate 5 times as many samples if you want to play 5 seconds of sound (e.g. multiply DEFAULT_NUM_SAMPLES by 5 in your code).
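For example, with the constants from the question (PLAY_SECONDS is just an illustrative name):

int PLAY_SECONDS = 5;
int numSamples = PLAY_SECONDS * Constants.DEFAULT_NUM_SAMPLES; // 5 * 44100 = 220500 samples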
I found the solution myself. After adding a Buffer to queue_with_data_AL I forgot to create a new instance of the Buffer object, so the queue contained several entries pointing at the same instance, and hence the sine wave was not continuous.
Thanks to anyone who was trying to solve my problem. Unfortunately it was my own programming mistake.
Best regards.
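For clarity, the fix amounts to one extra line in the generation loop from the question (a sketch using the Buffer class and queue defined there):

if (index_in_buffer >= Constants.DEFAULT_BUFFER_SIZE - 1) {
    buffer.setBufferShort(buffer_values);
    buffer.setBufferSizeShort(index_in_buffer);
    queue_with_data_AL.add(buffer);
    buffer = new Buffer(); // the missing line: each queued entry needs its own instance
    buffer_values = new short[Constants.DEFAULT_BUFFER_SIZE];
    samples = new float[Constants.DEFAULT_BUFFER_SIZE];
    index_in_buffer = 0;
}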

Continuous synthesis of static waveform in Android using AudioTrack class

Below is the code for my play() method, which simply generates an arbitrary set of frequencies and blends them into one tone.
The problem is that it only plays for a split second - I need it to play continuously. I would appreciate suggestions on how to generate the sound continuously using the AudioTrack class in Android. I believe it has something to do with the MODE_STREAM constant, but I can't quite work out how.
Here is the link to AudioTrack class documentation:
http://developer.android.com/reference/android/media/AudioTrack.html
EDIT: I forgot to mention one important aspect: it can't loop. Due to the mixing of sometimes 50+ frequencies, it would sound choppy because there is no least common denominator for all the frequency peaks - or it's too far down the waveform to store as one sound.
/**
 * play - begins playing the sound
 */
public void play() {
    // Get array of frequencies with their relative strengths
    double[][] soundData = getData();
    // Track samples array
    final double[] samples = new double[1024];
    // Calculate the average sum in the array and write it to sample
    for (int i = 0; i < samples.length; ++i) {
        double valueSum = 0;
        for (int j = 0; j < soundData.length; j++) {
            valueSum += Math.sin(2 * Math.PI * i / (SAMPLE_RATE / soundData[j][0]));
        }
        samples[i] = valueSum / soundData.length;
    }
    // Obtain a minimum buffer size
    int minBuffer = AudioTrack.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    if (minBuffer > 0) {
        // Create an AudioTrack
        mTrack = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuffer, AudioTrack.MODE_STREAM);
        // Begin playing track
        mTrack.play();
        // Fill the buffer
        if (mBuffer.length < samples.length) {
            mBuffer = new short[samples.length];
        }
        for (int k = 0; k < samples.length; k++) {
            mBuffer[k] = (short) (samples[k] * Short.MAX_VALUE);
        }
        // Write audio data to track for real-time audio synthesis
        mTrack.write(mBuffer, 0, samples.length);
    }
    // Once everything has successfully begun, indicate such.
    isPlaying = true;
}
It looks like the code is almost there. It just needs a loop to keep generating samples, putting them in the buffer, and writing them to the AudioTrack. Right now just one buffer-full gets written before the method exits, which is why it stops so quickly.
void getSamples(double[] samples) {
    // Get array of frequencies with their relative strengths
    double[][] soundData = getData();
    // Calculate the average sum in the array and write it to sample
    for (int i = 0; i < samples.length; ++i) {
        double valueSum = 0;
        for (int j = 0; j < soundData.length; j++) {
            valueSum += Math.sin(2 * Math.PI * i / (SAMPLE_RATE / soundData[j][0]));
        }
        samples[i] = valueSum / soundData.length;
    }
}

public void endPlay() {
    done = true;
}

/**
 * play - begins playing the sound
 */
public void play() {
    // Obtain a minimum buffer size
    int minBuffer = AudioTrack.getMinBufferSize(SAMPLE_RATE, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    if (minBuffer > 0) {
        // Create an AudioTrack
        mTrack = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLE_RATE, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuffer, AudioTrack.MODE_STREAM);
        // Begin playing track
        mTrack.play();
        // Track samples array
        final double[] samples = new double[1024];
        while (!done) {
            // Fill the buffer
            if (mBuffer.length < samples.length) {
                mBuffer = new short[samples.length];
            }
            getSamples(samples);
            for (int k = 0; k < samples.length; k++) {
                mBuffer[k] = (short) (samples[k] * Short.MAX_VALUE);
            }
            // Write audio data to track for real-time audio synthesis
            mTrack.write(mBuffer, 0, samples.length);
            // Once everything has successfully begun, indicate such.
            isPlaying = true;
        }
    }
    // Once everything is done, indicate such.
    isPlaying = false;
}
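One caveat: getSamples() restarts its sine arguments at i = 0 for every buffer, so unless every frequency completes a whole number of cycles in 1024 samples there will be a phase jump at each buffer boundary. A sketch of one way to carry phase across calls (phases is a hypothetical per-frequency state array added for illustration):

// Hypothetical per-oscillator phase state, sized to match getData()'s rows.
private double[] phases;

void getSamples(double[] samples) {
    double[][] soundData = getData();
    if (phases == null) {
        phases = new double[soundData.length];
    }
    for (int i = 0; i < samples.length; ++i) {
        double valueSum = 0;
        for (int j = 0; j < soundData.length; j++) {
            valueSum += Math.sin(phases[j]);
            // advance each oscillator by its per-sample phase increment: 2*pi*f/SAMPLE_RATE
            phases[j] += 2 * Math.PI * soundData[j][0] / SAMPLE_RATE;
        }
        samples[i] = valueSum / soundData.length;
    }
}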

Android AudioRecord filter range of frequency

I am using the Android platform. From the following reference question I learned that, using the AudioRecord class, which returns raw data, I can filter a range of audio frequencies depending on my needs, but for that I will need an algorithm. Can somebody please help me find an algorithm to filter the range between 14,400 bph and 16,200 bph?
I tried "JTransform", but I don't know whether I can achieve this with JTransform or not. Currently I am using "jfftpack" to display visual effects, which works very well, but I can't achieve audio filtering with it.
Reference here
Help appreciated. Thanks in advance.
Following is my code. As I mentioned above, I am using the "jfftpack" library for the display; you may find the library reference in the code. Please don't get confused by that.
private class RecordAudio extends AsyncTask<Void, double[], Void> {
    @Override
    protected Void doInBackground(Void... params) {
        try {
            final AudioRecord audioRecord = findAudioRecord();
            if (audioRecord == null) {
                return null;
            }
            final short[] buffer = new short[blockSize];
            final double[] toTransform = new double[blockSize];
            audioRecord.startRecording();
            while (started) {
                final int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
                for (int i = 0; i < blockSize && i < bufferReadResult; i++) {
                    toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
                }
                transformer.ft(toTransform);
                publishProgress(toTransform);
            }
            audioRecord.stop();
            audioRecord.release();
        } catch (Throwable t) {
            Log.e("AudioRecord", "Recording Failed");
        }
        return null;
    }

    /**
     * @param toTransform
     */
    protected void onProgressUpdate(double[]... toTransform) {
        canvas.drawColor(Color.BLACK);
        for (int i = 0; i < toTransform[0].length; i++) {
            int x = i;
            int downy = (int) (100 - (toTransform[0][i] * 10));
            int upy = 100;
            canvas.drawLine(x, downy, x, upy, paint);
        }
        imageView.invalidate();
    }
}
There are a lot of tiny details in this process that can potentially hang you up here. This code isn't tested and I don't do audio filtering very often, so you should be extremely suspicious of it. This is the basic process you would take for filtering audio:
1. Get the audio buffer
2. Possibly convert the audio buffer (byte to float)
3. (optional) Apply a windowing function, i.e. Hanning
4. Take the FFT
5. Filter the frequencies
6. Take the inverse FFT
I'm assuming you have some basic knowledge of Android and audio recording, so I will cover steps 4-6 here; a sketch of steps 2-3 follows the code below.
//it is assumed that a float array audioBuffer exists with even length = to
//the capture size of your audio buffer

//The size of the FFT will be the size of your audioBuffer / 2
int FFT_SIZE = bufferSize / 2;
FloatFFT_1D mFFT = new FloatFFT_1D(FFT_SIZE); //this is a jTransforms type

//Take the FFT
mFFT.realForward(audioBuffer);

//The first 1/2 of audioBuffer now contains bins that represent the frequency
//of your wave, in a way. To get the actual frequency from the bin:
//frequency_of_bin = bin_index * sample_rate / FFT_SIZE

//assuming the length of audioBuffer is even, the real and imaginary parts will be
//stored as follows
//audioBuffer[2*k] = Re[k], 0<=k<n/2
//audioBuffer[2*k+1] = Im[k], 0<k<n/2

//Define the frequencies of interest
float freqMin = 14400;
float freqMax = 16200;

//Loop through the fft bins and filter frequencies
for (int fftBin = 0; fftBin < FFT_SIZE; fftBin++) {
    //Calculate the frequency of this bin assuming a sampling rate of 44,100 Hz
    float frequency = (float) fftBin * 44100F / (float) FFT_SIZE;
    //Now filter the audio, I'm assuming you wanted to keep the
    //frequencies of interest rather than discard them.
    if (frequency < freqMin || frequency > freqMax) {
        //Calculate the index where the real and imaginary parts are stored
        int real = 2 * fftBin;
        int imaginary = 2 * fftBin + 1;
        //zero out this frequency
        audioBuffer[real] = 0;
        audioBuffer[imaginary] = 0;
    }
}

//Take the inverse FFT to convert signal from frequency to time domain
mFFT.realInverse(audioBuffer, false);
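For completeness, steps 2 and 3 from the list above might look roughly like this. This is a sketch under assumptions: the short[] buffer comes from the question's recording loop, a Hann window is used, and the helper name toFloatWithHann is made up for illustration:

// Step 2: convert 16-bit PCM shorts to normalized floats (-1.0 .. 1.0) for the FFT.
// Step 3 (optional): apply a Hann window to reduce spectral leakage at the buffer edges.
static float[] toFloatWithHann(short[] pcm, int count) {
    float[] out = new float[count];
    for (int i = 0; i < count; i++) {
        double hann = 0.5 * (1.0 - Math.cos(2.0 * Math.PI * i / (count - 1)));
        out[i] = (float) ((pcm[i] / 32768.0) * hann);
    }
    return out;
}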
