I have been searching everywhere for a reliable method to calculate the FFT of an audio byte stream received from a native function in the Android SDK (through the Eclipse IDE). I have come across the libgdx FFT and JTransforms. JTransforms can be found here: JTransforms.
I downloaded them all and added the .jar files to a libs folder I created in the project's root directory. I then linked the project to the new .jar files through Project Properties > Java Build Path > Libraries.
My source Java file, attempting to use JTransforms, looks like this:
package com.spectrum;
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.widget.LinearLayout;
import android.widget.TextView;
import android.view.View;
import com.badlogic.gdx.audio.analysis.*;
import edu.emory.mathcs.jtransforms.fft.*;
import edu.emory.mathcs.utils.*;
public class spectrum extends Activity {
static byte [] sin = null;
static int f = 2000;
static int fs = 44100;
double buf;
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//initialize(new SpectrumDesktop(), false);
sin = playsin(f,fs);
buf = bytearray2double(sin);
public DoubleFFT_1D(512); //<Undefined>
public void complexForward(sin) //<Undefined>
playsound(sin);
}
public static double bytearray2double(byte[] b) {
ByteBuffer buf = ByteBuffer.wrap(b);
return buf.getDouble();
}
private void playsound(byte[] sin2){
int intSize = android.media.AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO,
AudioFormat.ENCODING_PCM_16BIT);
AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO,
AudioFormat.ENCODING_PCM_16BIT, intSize, AudioTrack.MODE_STREAM);
at.play();
at.write(sin2, 0, sin2.length );
at.stop();
at.release();
}
@Override
protected void onDestroy() {
super.onDestroy();
// TODO Auto-generated method stub
if (mMediaPlayer != null) {
mMediaPlayer.release();
mMediaPlayer = null;
}
}
public native byte[] playsin(int f,int fs);
/** Load jni .so on initialisation */
static {
System.loadLibrary("SPL");
}
}
In this example I am only using the JTransforms packages; however, I have been getting the same compile error for the libgdx packages. The compiler says that DoubleFFT_1D and complexForward are undefined.
So there is something I am missing, perhaps not linking my libraries correctly; I am not sure.
Any help would be greatly appreciated. Am I meant to declare an instance of DoubleFFT_1D and complexForward before onCreate or something?
I know this is a noob question, but I am new to object oriented languages and learning java on the go. Thanks :)
You first need to create a Fourier transform object:
DoubleFFT_1D fft = new DoubleFFT_1D(n);
where n is the size of the transform you want. The input array may need to be twice as long as you expect, since complexForward stores the real and imaginary parts side by side in the same array. Then you can call the transform methods on fft, e.g.
fft.complexForward(data);
Strangely the result is saved into the input array.
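Putting that together, a minimal sketch, assuming JTransforms is on the build path as described in the question and that samples is a double[] of length n holding your audio data (both names are illustrative):
int n = 512;                                    // number of complex points to transform
double[] fftData = new double[2 * n];           // interleaved: [re0, im0, re1, im1, ...]
for (int i = 0; i < n; i++) {
    fftData[2 * i] = samples[i];                // real part taken from the audio samples
    fftData[2 * i + 1] = 0.0;                   // imaginary part starts at zero
}
DoubleFFT_1D fft = new DoubleFFT_1D(n);         // n = number of complex points
fft.complexForward(fftData);                    // result overwrites fftData in place
The magnitude of bin k is then Math.hypot(fftData[2 * k], fftData[2 * k + 1]).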
1. In Project Properties -> Java Build Path -> Order and Export, check all of your added dependencies so they are included with the project class files.
2. Select Android Tools > Fix Project Properties.
Then run your app.
Related
I want to play music tracks with the AudioTrack class in Android, using stereo with 16-bit PCM encoding. Here's my code for MusicListFragment.java:
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import android.widget.ListView;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
public class MusicListFragment extends Fragment implements AdapterView.OnItemClickListener {
private AudioTrack mAudioTrack;
public MusicListFragment() {
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
// Inflate the layout for this fragment
View view = inflater.inflate(R.layout.fragment_music_list, container, false);
ListView musicListView = (ListView) view.findViewById(R.id.music_list_view);
musicListView.setAdapter(new MusicListAdapter(getActivity()));
int minBufferSize = AudioTrack.getMinBufferSize(
22000,
AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 22000
, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize,
AudioTrack.MODE_STREAM);
musicListView.setOnItemClickListener(this);
return view;
}
@Override
public void onItemClick(AdapterView<?> adapterView, View view, int i, long l) {
Music music = (Music) adapterView.getItemAtPosition(i);
InputStream inputStream = null;
byte[] bytes = new byte[512];
mAudioTrack.play();
try {
File file = new File(music.getPath());
inputStream = new FileInputStream(file);
int k;
while ((k = inputStream.read(bytes)) != -1) {
mAudioTrack.write(bytes, 0, k);
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
The adapter works fine, since I tested it using the MediaPlayer class. (I can provide my adapter and other classes too, if you want, but I doubt they are the issue.) My list view shows the title, artist and album of each song and also stores each song's path.
I could play songs easily with MediaPlayer, but I'm having problems using AudioTrack. The code makes the device play static, like an old TV with no signal! :)
As you can see in the adapter's click listener, I am:
1. getting the music item that was selected,
2. reading the music file into an InputStream,
3. and finally writing the bytes to the AudioTrack instance. (I've also tried putting the mAudioTrack.play() line after the try/catch statement, with no luck.) What am I doing wrong here?
Playing the binary contents of a compressed audio file to the AudioTrack, perhaps? This won't work unless your music files are stored in raw, uncompressed format. AudioTracks use PCM format. Even the header on a .wav file would sound like static, until you reached the raw samples.
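For example, a minimal sketch that plays a canonical 16-bit PCM .wav by skipping its 44-byte header before writing to the AudioTrack (real files can carry extra chunks, so a proper parser should walk the RIFF chunks instead of assuming a fixed offset; compressed formats such as .mp3 need a decoder first):
InputStream in = new FileInputStream(new File(music.getPath()));
long skipped = in.skip(44);                 // jump over the RIFF/fmt/data headers
byte[] pcm = new byte[4096];
int read;
mAudioTrack.play();
while ((read = in.read(pcm)) != -1) {
    mAudioTrack.write(pcm, 0, read);        // write only the raw PCM samples
}
in.close();
The AudioTrack's sample rate and channel configuration must also match the file's, or playback will be sped up, slowed down, or garbled.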
Thanks to @greeble31's answer, I understand my issue now. I searched for how to decode .mp3 and .wav files to PCM format and found some useful answers here and here, in case anyone needs them.
I want to record some audio using AudioRecord. In order to initialize the AudioRecord object you must provide several arguments (e.g. rate, channel, encoding), and since different combinations of arguments are supported by different hardware devices, I went on and checked working apps like
Ringdroid:
Audio recording done in Ringdroid
and Rehearsal Assistant:
Audio recording in Rehearsal Assistant
As mentioned in the documentation of the AudioRecord class, the configuration that will always work is rate = 44100 and channel = CHANNEL_IN_MONO.
I am using the same arguments when initializing my AudioRecord object, but I still get a runtime error saying that my object is uninitialized. Since Ringdroid works fine on my device (Nexus 5), I have used the same configuration when creating my AudioRecord object.
package com.example.android.visualizeaudio;
import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.view.Menu;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;
public class MainActivity extends Activity {
int mSampleRate = 44100;
Button startButton;
boolean started = false;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
startButton = (Button) this.findViewById(R.id.start_button);
}
private void RecordAudio() {
int minBufferSize = AudioRecord.getMinBufferSize(
mSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
// make sure minBufferSize can contain at least 1 second of audio (16 bits sample).
if (minBufferSize < mSampleRate * 2) {
minBufferSize = mSampleRate * 2;
}
AudioRecord audioRecord = new AudioRecord(
MediaRecorder.AudioSource.MIC,
mSampleRate,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT,
minBufferSize
);
audioRecord.startRecording();
//Do some stuff here with the recorded data
audioRecord.stop();
audioRecord.release();
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.menu_main, menu);
return true;
}
public void startRec(View view) {
if (started) {
started = false;
startButton.setText("Start");
} else {
started = true;
startButton.setText("Stop");
Toast.makeText(this, "Recording started", Toast.LENGTH_LONG).show();
RecordAudio();
}
}
}
I am attaching the object inspection during debugging in case it provides more insight
Thank you
Switching to SDK target version 22 did the trick. With target SDK 23 I had these errors. I don't know why, but it seems that the resource I am trying to access is in use by the OS.
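For what it's worth, a likely explanation when targeting SDK 23 is the runtime permission model introduced in Android 6.0: RECORD_AUDIO is a dangerous permission and must be granted at runtime, not just declared in the manifest, or the AudioRecord never reaches STATE_INITIALIZED. A minimal sketch inside the Activity (the request code 1 is arbitrary; imports android.Manifest and android.content.pm.PackageManager assumed):
// Sketch: request RECORD_AUDIO at runtime on API 23+ before creating the AudioRecord.
if (checkSelfPermission(Manifest.permission.RECORD_AUDIO)
        != PackageManager.PERMISSION_GRANTED) {
    requestPermissions(new String[]{Manifest.permission.RECORD_AUDIO}, 1);
} else {
    RecordAudio();
}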
I'm trying to write an app that will listen for sound over a phone/tablet's microphone. I think capturing sound is not too hard; I've found some code here.
I was wondering how I would go about measuring a volume level from the captured audio. Ideally I'd like to convert the sound level into decibels, but any arbitrary scale would do just fine.
In the end I just used the Java SDK to do this, and then when I converted the app to iOS I rewrote it in Objective-C.
On Android you can import the following libraries:
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
And then make some calls like this to see the amplitude of the audio recording:
int minSize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord ar = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, minSize);
short[] buffer = new short[minSize];
ar.startRecording();
while (true) {
    ar.read(buffer, 0, minSize);                 // fill the buffer with raw 16-bit samples
    int average_buffer = 0;
    for (int i = 0; i < minSize; i++) {
        average_buffer += Math.abs(buffer[i]);   // accumulate the absolute amplitude
    }
    Log.i("NOISE LEVEL", Integer.toString(average_buffer));
}
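Since the question also asks about decibels, one common approach is to compute the RMS of each buffer and express it relative to full scale (0 dBFS at the maximum 16-bit amplitude). A minimal sketch, reusing the buffer and minSize variables from above:
// Sketch: convert a buffer of 16-bit samples to dBFS (dB relative to full scale).
double sumSquares = 0;
for (int i = 0; i < minSize; i++) {
    sumSquares += (double) buffer[i] * buffer[i];
}
double rms = Math.sqrt(sumSquares / minSize);
double db = 20.0 * Math.log10(Math.max(rms, 1) / 32768.0);   // clamp to avoid log(0)
Log.i("NOISE LEVEL", String.format("%.1f dBFS", db));
Note this is relative to digital full scale, not calibrated sound pressure level; mapping to true dB SPL would require calibrating against a known reference level.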
I use this code to record and play back audio in real time, using the AudioTrack and AudioRecord classes:
package com.example.audiotrack;
import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.util.Log;
public class MainActivity extends Activity {
private int freq = 8000;
private AudioRecord audioRecord = null;
private Thread Rthread = null;
private AudioManager audioManager = null;
private AudioTrack audioTrack = null;
byte[] buffer = new byte[freq];
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
final int bufferSize = AudioRecord.getMinBufferSize(freq,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT);
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, freq,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
MediaRecorder.AudioEncoder.AMR_NB, bufferSize);
audioTrack = new AudioTrack(AudioManager.ROUTE_HEADSET, freq,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
MediaRecorder.AudioEncoder.AMR_NB, bufferSize,
AudioTrack.MODE_STREAM);
audioTrack.setPlaybackRate(freq);
final byte[] buffer = new byte[bufferSize];
audioRecord.startRecording();
Log.i("info", "Audio Recording started");
audioTrack.play();
Log.i("info", "Audio Playing started");
Rthread = new Thread(new Runnable() {
public void run() {
while (true) {
try {
audioRecord.read(buffer, 0, bufferSize);
audioTrack.write(buffer, 0, buffer.length);
} catch (Throwable t) {
Log.e("Error", "Read write failed");
t.printStackTrace();
}
}
}
});
Rthread.start();
}
}
My problems:
1. The quality of the audio is bad.
2. When I try different frequencies, the app crashes.
Audio quality can be bad because you are using the AMR codec to compress the audio data. AMR uses compression based on an acoustic model of speech, so any sound other than human speech will come out in poor quality.
Instead of
MediaRecorder.AudioEncoder.AMR_NB
try
AudioFormat.ENCODING_PCM_16BIT
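Applied to the constructors in the question, a corrected sketch might look like this. Note the original also passes AudioManager.ROUTE_HEADSET where the AudioTrack constructor expects a stream type, so STREAM_MUSIC is used here; the deprecated CHANNEL_CONFIGURATION_MONO constant is kept only for consistency with the question (CHANNEL_IN_MONO / CHANNEL_OUT_MONO are the current equivalents):
// Sketch: 16-bit PCM for both capture and playback, and a proper stream type.
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, freq,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, freq,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, bufferSize,
        AudioTrack.MODE_STREAM);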
AudioRecord is a low-level tool, so you must take care of parameter compatibility on your own. As said in the documentation, many frequencies are not guaranteed to work.
So it is a good idea to go through all the combinations and check which of them are available before trying to record or play.
A nice solution was mentioned a few times on Stack Overflow, e.g. here:
Frequency detection on Android - AudioRecord
Check the public AudioRecord findAudioRecord() method.
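A sketch of that probing approach (the candidate rates are illustrative; the point is simply to try combinations until one reaches STATE_INITIALIZED):
// Sketch: probe common sample rates until an AudioRecord initializes successfully.
private static final int[] CANDIDATE_RATES = {8000, 11025, 22050, 44100};

public AudioRecord findAudioRecord() {
    for (int rate : CANDIDATE_RATES) {
        int bufferSize = AudioRecord.getMinBufferSize(rate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (bufferSize > 0) {
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    rate, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            if (recorder.getState() == AudioRecord.STATE_INITIALIZED) {
                return recorder;                    // first working combination
            }
            recorder.release();
        }
    }
    return null;                                    // nothing worked on this device
}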
For a university project my professor wants me to write an Android application; it would be my first one. I have some Java experience but I am new to Android programming, so please be gentle with me.
First I create an Activity with only two buttons, one for starting an AsyncTask and one for stopping it (by which I mean I just set the boolean "isRecording" to false); everything else is handled in the AsyncTask, which is attached as source code.
The thing runs quite okay, but after a while I find some buffer overflow messages in LogCat, and after that it crashes with an uncaught exception. I figured out why it's crashing, and the uncaught exception isn't the point of this question.
03-07 11:34:02.474: INFO/buffer 247:(558): 40
03-07 11:34:02.484: WARN/AudioFlinger(33): RecordThread: buffer overflow
03-07 11:34:02.484: INFO/MutantAudioRecorder:doInBackground()(558): isRecoding
03-07 11:34:02.484: INFO/MutantAudioRecorder:doInBackground()(558): isRecoding
03-07 11:34:02.494: WARN/AudioFlinger(33): RecordThread: buffer overflow
03-07 11:34:02.494: INFO/buffer 248:(558): -50
I write out the buffer as you can see, but somehow I think I made a mistake configuring the AudioRecord; can anybody tell me why I get the buffer overflow?
And the next question would be: how can I handle the buffer? I mean, I have the values inside it and want to show them as a graphical spectrogram on the screen. Does anyone have experience with this and can give me a hint? How can I go on?
Thanks in advance for your help.
Source code of the AsyncTask:
package nomihodai.audio;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.os.AsyncTask;
import android.util.Log;
public class MutantAudioRecorder extends AsyncTask<Void, Void, Void> {
private boolean isRecording = false;
public AudioRecord audioRecord = null;
public int mSamplesRead;
public int buffersizebytes;
public int buflen;
public int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
public int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
public static short[] buffer;
public static final int SAMPLESPERSEC = 8000;
@Override
protected Void doInBackground(Void... params) {
while(isRecording) {
audioRecord.startRecording();
mSamplesRead = audioRecord.read(buffer, 0, buffersizebytes);
if(!readerT.isAlive())
readerT.start();
Log.i("MutantAudioRecorder:doInBackground()", "isRecoding");
}
readerT.stop();
return null;
}
Thread readerT = new Thread() {
public void run() {
for(int i = 0; i < 256; i++){
Log.i("buffer " + i + ": ", Short.toString(buffer[i]));
}
}
};
@Override
public void onPostExecute(Void unused) {
Log.i("MutantAudioRecorder:onPostExecute()", "try to release the audio hardware");
audioRecord.release();
Log.i("MutantAudioRecorder:onPostExecute()", "released...");
}
public void setRecording(boolean rec) {
this.isRecording = rec;
Log.i("MutantAudioRecorder:setRecording()", "isRecoding set to " + rec);
}
@Override
protected void onPreExecute() {
buffersizebytes = AudioRecord.getMinBufferSize(SAMPLESPERSEC, channelConfiguration, audioEncoding);
buffer = new short[buffersizebytes];
buflen = buffersizebytes/2;
Log.i("MutantAudioRecorder:onPreExecute()", "buffersizebytes: " + buffersizebytes
+ ", buffer: " + buffer.length
+ ", buflen: " + buflen);
audioRecord = new AudioRecord(android.media.MediaRecorder.AudioSource.MIC,
SAMPLESPERSEC,
channelConfiguration,
audioEncoding,
buffersizebytes);
if(audioRecord != null)
Log.i("MutantAudioRecorder:onPreExecute()", "audiorecord object created");
else
Log.i("MutantAudioRecorder:onPreExecute()", "audiorecord NOT created");
}
}
Are you running some live analysis process on the recorded audio bytes?
Since the buffer size for recording is limited, once your "analysis process" is slower than the recording rate, the data in the buffer gets stuck, but new recorded bytes keep arriving, and thus the buffer overflows.
Try using one thread for recording and another for processing the recorded bytes; there's open-source sample code for this approach: http://musicg.googlecode.com/files/musicg_android_demo.zip
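A sketch of that split, reusing the audioRecord, isRecording and buflen fields from the question's code (the queue type and buffer handling are illustrative; imports java.util.concurrent.BlockingQueue and LinkedBlockingQueue assumed):
// Sketch: keep the recording loop tight and hand buffers to a separate consumer thread.
final BlockingQueue<short[]> queue = new LinkedBlockingQueue<short[]>();

Thread recorder = new Thread(new Runnable() {
    public void run() {
        audioRecord.startRecording();
        while (isRecording) {
            short[] chunk = new short[buflen];
            int read = audioRecord.read(chunk, 0, chunk.length);  // blocking read
            if (read > 0) queue.offer(chunk);                     // never block this loop
        }
        audioRecord.stop();
    }
});

Thread analyzer = new Thread(new Runnable() {
    public void run() {
        while (isRecording || !queue.isEmpty()) {
            short[] chunk = queue.poll();        // analysis may lag without losing audio
            if (chunk != null) {
                // ... run FFT / drawing on chunk here ...
            }
        }
    }
});
recorder.start();
analyzer.start();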
As we discussed in the chat room, decoding the audio data and displaying it on the screen should be straightforward. You mentioned that the audio buffer has 8000 samples per second, each sample is 16 bit, and it's mono audio.
Displaying this should be straightforward. Treat each sample as a vertical offset in your view. You need to scale the range -32k to +32k to the vertical height of your view. Starting at the left edge of the view, draw one sample per column. When you reach the right edge, wrap around again (erasing the previous line as necessary).
This will end up drawing each sample as a single pixel, which may not look very nice. You can also draw a line between adjacent samples. You can play around with line widths, colors and so on to get the best effect.
One last note: You'll be drawing 8000 times per second, plus more to blank out the previous samples. You may need to take some shortcuts to make sure the framerate can keep up with the audio. You may need to skip samples.
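A sketch of that drawing approach in a custom View (the class and method names are illustrative, not from the original post):
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.view.View;

// Sketch: map each 16-bit sample to a vertical position and connect adjacent samples.
public class WaveformView extends View {
    private short[] samples = new short[0];
    private final Paint paint = new Paint();

    public WaveformView(Context context) { super(context); }

    public void setSamples(short[] s) {
        samples = s;
        invalidate();                            // trigger a redraw with the new buffer
    }

    @Override
    protected void onDraw(Canvas canvas) {
        int w = getWidth(), h = getHeight();
        float prevX = 0, prevY = h / 2f;
        for (int x = 1; x < w && x < samples.length; x++) {
            // scale -32768..32767 onto the view height, centered vertically
            float y = h / 2f - (samples[x] / 32768f) * (h / 2f);
            canvas.drawLine(prevX, prevY, x, y, paint);
            prevX = x;
            prevY = y;
        }
    }
}
To keep the framerate up, you could hand the view only every Nth sample, or one averaged value per column, rather than all 8000 samples per second.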