I found the following code, which generates 'noise'. I want to be able to generate a tone instead. As far as I understand, there is some kind of formula involving sin to generate the tone.
This line generates the noise: rnd.nextBytes(noiseData);
I tried to assign specific values manually to all the array elements, but then there is no sound. I found code that generates a tone, but it doesn't stream it. When I tried to pass its data to my code, the tone played for a few seconds and then the app crashed.
Any suggestions on how I can generate a tone from this? Thanks.
public class Internal extends Activity
{
    @Override
    protected void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }

    public void onPlayClicked(View v)
    {
        start();
    }

    public void onStopClicked(View v)
    {
        stop();
    }

    // volatile, so the generator thread reliably sees the flag set by the UI thread
    volatile boolean m_stop = false;
    AudioTrack m_audioTrack;
    Thread m_noiseThread;

    Runnable m_noiseGenerator = new Runnable()
    {
        public void run()
        {
            Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
            /* 8000 bytes per second, 1000 bytes = 125 ms */
            byte[] noiseData = new byte[1000];
            Random rnd = new Random();
            while (!m_stop)
            {
                rnd.nextBytes(noiseData);
                m_audioTrack.write(noiseData, 0, noiseData.length);
            }
        }
    };

    void start()
    {
        m_stop = false;
        /* 8000 bytes per second */
        m_audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 8000, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_8BIT, 8000 /* 1 second buffer */,
                AudioTrack.MODE_STREAM);
        m_audioTrack.play();
        m_noiseThread = new Thread(m_noiseGenerator);
        m_noiseThread.start();
    }

    void stop()
    {
        m_stop = true;
        m_audioTrack.stop();
    }
}
This is the code that generates a tone, but when I feed its output to my write buffer, it plays for a second and then the app crashes, even though I changed 'AudioFormat.ENCODING_PCM_8BIT' to 'AudioFormat.ENCODING_PCM_16BIT':
private final int duration = 1; // seconds
private final int sampleRate = 8000;
private final int numSamples = duration * sampleRate;
private final double sample[] = new double[numSamples];
private final double freqOfTone = 440; // hz
private final byte generatedSnd[] = new byte[2 * numSamples];
void genTone(){
    // fill out the array
    for (int i = 0; i < numSamples; ++i) {
        sample[i] = Math.sin(2 * Math.PI * i / (sampleRate/freqOfTone));
    }

    // convert to 16 bit pcm sound array
    // assumes the sample buffer is normalised.
    int idx = 0;
    for (final double dVal : sample) {
        // scale to maximum amplitude
        final short val = (short) ((dVal * 32767));
        // in 16 bit wav PCM, first byte is the low order byte
        generatedSnd[idx++] = (byte) (val & 0x00ff);
        generatedSnd[idx++] = (byte) ((val & 0xff00) >>> 8);
    }
}
Be sure that your noiseData buffer size is greater than or equal to the minimum buffer size AudioTrack reports via getMinBufferSize:
http://developer.android.com/reference/android/media/AudioTrack.html#getMinBufferSize(int, int, int)
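For instance, here is a minimal sketch of how the two snippets could be combined: a Runnable, in the style of m_noiseGenerator, that keeps writing 16-bit sine samples to the track. It assumes the AudioTrack is created with AudioFormat.ENCODING_PCM_16BIT and a buffer of at least getMinBufferSize(8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT); the phase bookkeeping across buffers is my own addition, not from either snippet:

Runnable m_toneGenerator = new Runnable()
{
    public void run()
    {
        final int sampleRate = 8000;
        final double freqOfTone = 440; // Hz
        final double phaseStep = 2 * Math.PI * freqOfTone / sampleRate;
        short[] toneData = new short[1000];
        double phase = 0;
        while (!m_stop)
        {
            // Fill the buffer with 16-bit sine samples, carrying the phase
            // over between buffers so the tone stays continuous.
            for (int i = 0; i < toneData.length; i++)
            {
                toneData[i] = (short) (Math.sin(phase) * 32767);
                phase += phaseStep;
            }
            m_audioTrack.write(toneData, 0, toneData.length);
        }
    }
};

Writing shorts uses the write(short[], int, int) overload, which matches the 16-bit encoding, so no manual byte packing is needed.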
Related
I am trying to receive streaming audio in my app.
Below is my code for receiving the audio stream:
public class ClientListen implements Runnable {
private Context context;
public ClientListen(Context context) {
this.context = context;
}
@Override
public void run() {
boolean run = true;
try {
DatagramSocket udpSocket = new DatagramSocket(8765);
InetAddress serverAddr = null;
try {
serverAddr = InetAddress.getByName("127.0.0.1");
} catch (UnknownHostException e) {
e.printStackTrace();
}
while (run) {
try {
byte[] message = new byte[8000];
DatagramPacket packet = new DatagramPacket(message,message.length);
Log.i("UDP client: ", "about to wait to receive");
udpSocket.setSoTimeout(10000);
udpSocket.receive(packet);
String text = new String(packet.getData(), 0, packet.getLength());
Log.d("Received text", text);
} catch (IOException e) {
Log.e(" UDP clien", "error: ", e);
run = false;
udpSocket.close();
}
}
} catch (SocketException e) {
Log.e("Socket Open:", "Error:", e);
} catch (IOException e) {
e.printStackTrace();
}
}
}
In the 'Received text' log I can see the data coming in as:
D/Received text: �������n�����������q�9�$�0�/�G�{�������s�����JiH&������d�����Z���������d�����E������C�+
��l��y�����������v���9����������u��f�j�������$�����K���������F��~R�2�����T��������������L�����!��G��8������s�;�"�,�R�����(��{�����*_��Z�������5������������\������x���j~������������/��=�����%�������
How can I store this data in a .wav file?
What you see is the string representation of a single UDP packet, printed right after the blocking receive() returned.
It is a very small fraction of the sound you want to convert to a wave file.
Soon the while loop will continue and you will receive another packet, and many more.
You need to collect all the packets in a buffer, and then, when you decide the stream is complete, convert them to a wave file.
Remember that WAV is not just the sound bytes you get from UDP; you also need to prepend a 44-byte header for the file to be recognized by players.
Also, if the UDP stream uses another encoding format such as G.711, you must decode those bytes to PCM; otherwise you will hear heavy noise in the
wave file or the stream you play.
The buffer size must be accurate. If it is too big (many empty bytes at the end of the array) you will hear a helicopter-like sound. If you know exactly what the size of each packet is, you can just write it to AudioTrack in order to play the stream, or accumulate packets and convert them to a wave file when you see fit. If you are not sure about the size, you can use this answer to get a buffer and then write the buffer to AudioTrack:
Android AudioRecord to Server over UDP Playback Issues.
They use javax.sound because it is a very old answer, but you just need to use AudioTrack instead in order to stream. That is out of scope here, so I will just present the AudioTrack streaming replacement for the javax SourceDataLine:
final int SAMPLE_RATE = 8000; // Hertz
final int STREAM_TYPE = AudioManager.STREAM_NOTIFICATION;
int channelConfig = AudioFormat.CHANNEL_OUT_MONO;
int encodingFormat = AudioFormat.ENCODING_PCM_16BIT;
AudioTrack track = new AudioTrack(STREAM_TYPE, SAMPLE_RATE, channelConfig,
encodingFormat, BUF_SIZE, AudioTrack.MODE_STREAM);
track.play();
//.. then after receive UDP packets and the buffer is full:
if(track != null && packet != null){
track.write(audioStreamBuffer, 0, audioStreamBuffer.length);
}
You must not do this in the UI thread (I assume you know that).
In the code I will show you, I am getting UDP packets of audio logs from a PTT radio. The audio is encoded in G.711 u-law. Each packet is exactly 172 bytes: the first 12 bytes are an RTP header that I need to strip (offset past) in order to eliminate small noises; the remaining 160 bytes are 20 ms of sound.
I must decode the G.711 u-law bytes into a PCM short array, and then turn that short array into a wave file. I write the file once I see that no packet has arrived for more than one second (so I know the speech ended, and the next blocking receive() returning means a new speech started, so I can take the old speech and make a wave file out of it). You can decide on a different buffering strategy depending on what you are doing.
It works fine. After the decoding, the sound of the wave file is very good. If your UDP stream already carries PCM, you don't need to decode G.711 - just skip that part.
Finally, I want to mention that I saw many old answers with code using javax.sound.sampled, which seems great because it can easily convert an audio file or stream to wave format with AudioFileFormat,
and also convert G.711 to PCM with AudioFormat manipulations. Unfortunately, javax.sound.sampled is not part of Java on Android. We must rely on android.media.AudioTrack instead (and AudioRecord if we want to get sound from the mic), but AudioTrack plays only PCM and does not support G.711 - so when streaming G.711 through AudioTrack the noise is terrible. We must decode it in our own code before writing it to the track. We also cannot convert to a wave file using AudioInputStream; I tried to do this easily with a javax.sound.sampled jar I added to my app, but Android kept giving me errors such as "format not supported" for wave, and mixer errors when trying to stream. So I understood that current Android cannot work with javax.sound.sampled, and I went looking for low-level decoding of G711 and low-level creation of a wave file from the byte array buffer received from the UDP packets.
A. In the manifest, add:
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE"/>
<uses-permission android:name="android.permission.INTERNET"/>
B. In the worker thread:
@Override
public void run(){
Log.i(TAG, "ClientListen thread started. Thread id: " + Thread.currentThread().getId());
try{
udpSocket = new DatagramSocket(port);
}catch(SocketException e){
e.printStackTrace();
}
byte[] messageBuf = new byte[BUF_SIZE];
Log.i(TAG, "waiting to receive packet in port: " + port);
if(udpSocket != null){
// here you can create new AudioTrack and play.track
byte pttSession[] = null;
while (running){
packet = new DatagramPacket(messageBuf, 0, messageBuf.length);
Log.d(TAG, "inside while running loop");
try{
Log.d(TAG, "receive block: waiting for user to press on
speaker(listening now inside udpSocket for DatagramPacket..)");
//get inside receive block until packet will arrive through this socket
long timeBeforeBlock = System.currentTimeMillis();
udpSocket.receive(packet);
Log.d(TAG, "client received a packet, receive block stopped)");
//this is for sending msg handler to the UI tread (you may skip this)
sendState("getting UDP packets...");
/* if previous block release happened more than one second ago - so this
packet release is for a new speech. so let’s copy the previous speech
to a wave file and empty the speech */
if(System.currentTimeMillis() - timeBeforeBlock > 1000 && pttSession != null){
convertBytesToFile(pttSession);
pttSession = null;
}
/* let’s take the packet that was released and start new speech or add it to the ongoing speech. */
byte[] slice = Arrays.copyOfRange(packet.getData(), 12, packet.getLength());
if(null == pttSession){
pttSession = slice;
}else{
pttSession = concat(pttSession, slice);
Log.d(TAG, "pttSession:" + Arrays.toString(pttSession));
}
}catch(IOException e){
Log.e(TAG, "UDP client IOException - error: ", e);
running = false;
}
}
// let’s take the latest speech and make a last wave file out of it.
if(pttSession != null){
convertBytesToFile(pttSession);
pttSession = null;
}
// if running == false then stop listen.
udpSocket.close();
handler.sendEmptyMessage(MainActivity.UdpClientHandler.UPDATE_END);
}else{
sendState("cannot bind datagram socket to the specified port:" + port);
}
}
private void convertBytesToFile(byte[] byteArray){
//decode the bytes from G711U to PCM (outcome is a short array)
G711UCodec decoder = new G711UCodec();
int size = byteArray.length;
short[] shortArray = new short[size];
decoder.decode(shortArray, byteArray, size, 0);
String newFileName = "speech_" + System.currentTimeMillis() + ".wav";
//convert the short array to wav (prepend the 44-byte header) and save it as a .wav file
Wave wave = new Wave(SAMPLE_RATE, (short) 1, shortArray, 0, shortArray.length - 1);
if(wave.writeToFile(Environment.getExternalStoragePublicDirectory
(Environment.DIRECTORY_DOWNLOADS),newFileName)){
Log.d(TAG, "wave.writeToFile successful!");
sendState("create file: "+ newFileName);
}else{
Log.w(TAG, "wave.writeToFile failed");
}
}
C. The G.711 u-law encoding/decoding class:
taken from: https://github.com/thinktube-kobe/airtube/blob/master/JavaLibrary/src/com/thinktube/audio/G711UCodec.java
/**
* G.711 codec. This class provides u-law conversion.
*/
public class G711UCodec {
// s00000001wxyz...s000wxyz
// s0000001wxyza...s001wxyz
// s000001wxyzab...s010wxyz
// s00001wxyzabc...s011wxyz
// s0001wxyzabcd...s100wxyz
// s001wxyzabcde...s101wxyz
// s01wxyzabcdef...s110wxyz
// s1wxyzabcdefg...s111wxyz
private static byte[] table13to8 = new byte[8192];
private static short[] table8to16 = new short[256];
static {
// b13 --> b8
for (int p = 1, q = 0; p <= 0x80; p <<= 1, q += 0x10) {
for (int i = 0, j = (p << 4) - 0x10; i < 16; i++, j += p) {
int v = (i + q) ^ 0x7F;
byte value1 = (byte) v;
byte value2 = (byte) (v + 128);
for (int m = j, e = j + p; m < e; m++) {
table13to8[m] = value1;
table13to8[8191 - m] = value2;
}
}
}
// b8 --> b16
for (int q = 0; q <= 7; q++) {
for (int i = 0, m = (q << 4); i < 16; i++, m++) {
int v = (((i + 0x10) << q) - 0x10) << 3;
table8to16[m ^ 0x7F] = (short) v;
table8to16[(m ^ 0x7F) + 128] = (short) (65536 - v);
}
}
}
public int decode(short[] b16, byte[] b8, int count, int offset) {
for (int i = 0, j = offset; i < count; i++, j++) {
b16[i] = table8to16[b8[j] & 0xFF];
}
return count;
}
public int encode(short[] b16, int count, byte[] b8, int offset) {
for (int i = 0, j = offset; i < count; i++, j++) {
b8[j] = table13to8[(b16[i] >> 4) & 0x1FFF];
}
return count;
}
public int getSampleCount(int frameSize) {
return frameSize;
}
}
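As a quick usage illustration (a sketch: the udpPayload array here is a stand-in for the 160 bytes left after stripping the 12-byte RTP header):

// Sketch: decode one G.711 u-law payload into 16-bit PCM samples.
byte[] udpPayload = new byte[160]; // hypothetical: filled from the received UDP packet
G711UCodec codec = new G711UCodec();
short[] pcm = new short[udpPayload.length]; // one PCM short per u-law byte
codec.decode(pcm, udpPayload, udpPayload.length, 0); // pcm is now ready for AudioTrack or Wave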
D. Converting to wave file:
Taken from here:
https://github.com/google/oboe/issues/320
import java.io.File;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
public class Wave
{
private final int LONGINT = 4;
private final int SMALLINT = 2;
private final int INTEGER = 4;
private final int ID_STRING_SIZE = 4;
private final int WAV_RIFF_SIZE = LONGINT+ID_STRING_SIZE;
private final int WAV_FMT_SIZE = (4*SMALLINT)+(INTEGER*2)+LONGINT+ID_STRING_SIZE;
private final int WAV_DATA_SIZE = ID_STRING_SIZE+LONGINT;
private final int WAV_HDR_SIZE = WAV_RIFF_SIZE+ID_STRING_SIZE+WAV_FMT_SIZE+WAV_DATA_SIZE;
private final short PCM = 1;
private final int SAMPLE_SIZE = 2;
int cursor, nSamples;
byte[] output;
public Wave(int sampleRate, short nChannels, short[] data, int start, int end)
{
nSamples=end-start+1;
cursor=0;
output=new byte[nSamples*SMALLINT+WAV_HDR_SIZE];
buildHeader(sampleRate,nChannels);
writeData(data,start,end);
}
/*
by Udi for using byteArray directly
*/
public Wave(int sampleRate, short nChannels, byte[] data, int start, int end)
{
int size = data.length;
short[] shortArray = new short[size];
for (int index = 0; index < size; index++){
shortArray[index] = (short) data[index];
}
nSamples=end-start+1;
cursor=0;
output=new byte[nSamples*SMALLINT+WAV_HDR_SIZE];
buildHeader(sampleRate,nChannels);
writeData(shortArray,start,end);
}
// ------------------------------------------------------------
private void buildHeader(int sampleRate, short nChannels)
{
write("RIFF");
write(output.length - 8); // RIFF chunk size excludes the "RIFF" id and this size field itself
write("WAVE");
writeFormat(sampleRate, nChannels);
}
// ------------------------------------------------------------
public void writeFormat(int sampleRate, short nChannels)
{
write("fmt ");
write(WAV_FMT_SIZE-WAV_DATA_SIZE);
write(PCM);
write(nChannels);
write(sampleRate);
write(nChannels * sampleRate * SAMPLE_SIZE);
write((short)(nChannels * SAMPLE_SIZE));
write((short)16);
}
// ------------------------------------------------------------
public void writeData(short[] data, int start, int end)
{
write("data");
write(nSamples*SMALLINT);
for(int i=start; i<=end; write(data[i++]));
}
// ------------------------------------------------------------
private void write(byte b)
{
output[cursor++]=b;
}
// ------------------------------------------------------------
private void write(String id)
{
if(id.length()!=ID_STRING_SIZE){
}
else {
for(int i=0; i<ID_STRING_SIZE; ++i) write((byte)id.charAt(i));
}
}
// ------------------------------------------------------------
private void write(int i)
{
write((byte) (i&0xFF)); i>>=8;
write((byte) (i&0xFF)); i>>=8;
write((byte) (i&0xFF)); i>>=8;
write((byte) (i&0xFF));
}
// ------------------------------------------------------------
private void write(short i)
{
write((byte) (i&0xFF)); i>>=8;
write((byte) (i&0xFF));
}
// ------------------------------------------------------------
public boolean writeToFile(File fileParent , String filename)
{
boolean ok=false;
try {
File path=new File(fileParent, filename);
FileOutputStream outFile = new FileOutputStream(path);
outFile.write(output);
outFile.close();
ok=true;
} catch (FileNotFoundException e) {
e.printStackTrace();
ok=false;
} catch (IOException e) {
ok=false;
e.printStackTrace();
}
return ok;
}
/**
 * by Udi for test: write file with temp name so if you write many packets each packet will be written to a new file instead of deleting
 * the previous file. (this is mainly for debug)
 * @param fileParent
 * @param filename
 * @return
 */
public boolean writeToTmpFile(File fileParent , String filename)
{
boolean ok=false;
try {
File outputFile = File.createTempFile(filename, ".wav",fileParent);
FileOutputStream fileoutputstream = new FileOutputStream(outputFile);
fileoutputstream.write(output);
fileoutputstream.close();
ok=true;
} catch (FileNotFoundException e) {
e.printStackTrace();
ok=false;
} catch (IOException e) {
ok=false;
e.printStackTrace();
}
return ok;
}
}
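As noted above, if your UDP stream already carries 16-bit PCM, you skip the G.711 decoding. In that case, keep in mind that the byte[] constructor above casts each single byte to a short, so for little-endian PCM16 you would pair the bytes up yourself first. A sketch of that (pcmBytes and the file name are my own placeholders; assumes java.nio.ByteBuffer, java.nio.ByteOrder and android.os.Environment are imported):

// Sketch: build a Wave from little-endian 16-bit PCM bytes.
byte[] pcmBytes = new byte[320]; // hypothetical: 160 PCM16 samples from the UDP stream
short[] samples = new short[pcmBytes.length / 2];
ByteBuffer.wrap(pcmBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);
Wave wave = new Wave(8000, (short) 1, samples, 0, samples.length - 1);
wave.writeToFile(Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS), "pcm_speech.wav");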
I am new to Android and I am trying to build an app that records audio and does an FFT to get the frequency spectrum.
The buffer size of complete audio is 155 * 2048
i.e. 155* AudioRecord.getMinBufferSize(44100, mono_channel, PCM_16bit)
Each chunk from the recorder is 2048 shorts. I convert the shorts to doubles and pass them to the FFT library. The library returns the real and imaginary parts, which I use to construct the frequency spectrum. Then I append each chunk to an array.
Now here is the problem:
In app 1 there are no UI elements or fragments, just a single button attached to a listener that executes an AsyncTask for reading chunks from AudioRecord and doing an FFT on them chunk by chunk (each chunk = 2048 shorts). This process (recording and FFT) for 155 chunks at a 44100 Hz sample rate should take about 7 seconds (2048 * 155 / 44100), but the task takes around 9 seconds, which is a lag of 2 seconds (acceptable).
In app 2 there are 7 fragments with login and signup screens, where each fragment is separate and linked to the main activity. The same code here takes 40-45 seconds to do the task (recording and FFT) for 155 * 2048 samples, which means the lag is up to 33-37 seconds. This lag is too much for my purpose. What could be the cause of so much lag in app 2, and how can I reduce it?
Following is the FFT Library Code and Complex Type Code
FFT.java , Complex.java
My application Code
private boolean is_recording = false;
private AudioRecord recorder = null;
int minimum_buffer_size = AudioRecord.getMinBufferSize(SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
int bufferSize = 155 * AudioRecord.getMinBufferSize(SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
private static final int SAMPLE_RATE = 44100;
private Thread recordingThread = null;
short[] audioBuffer = new short[bufferSize];
MainTask recordTask;
double finalData[];
Complex[] fftArray;
boolean recieved = false;
int data_trigger_point = 10;
int trigger_count = 0;
double previous_level_1 ;
double previous_level_2 ;
double previous_level_3 ;
int no_of_chunks_to_be_send = 30;
int count = 0;
short[] sendingBuffer = new short[minimum_buffer_size * no_of_chunks_to_be_send];
public static final int RequestPermissionCode = 1;
mButton = (ImageButton) view.findViewById(R.id.submit);
mButton.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if (is_recording) {
mButton.setBackgroundResource(R.drawable.play);
stopRecodringWithoutTone();
}
else {
mButton.setBackgroundResource(R.drawable.wait);
is_recording = true;
recordTask = new MainTask();
recordTask.execute();
}
}
});
public class MainTask extends AsyncTask<Void, int[], Void> {
@Override
protected Void doInBackground(Void... arg0) {
try {
recorder = new AudioRecord(
MediaRecorder.AudioSource.DEFAULT,
SAMPLE_RATE,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT,
minimum_buffer_size);
recorder.startRecording();
short[] buffer_recording = new short[minimum_buffer_size];
int recieve_counter = 0;
while (is_recording) {
if (count < bufferSize) {
int bufferReadResult = recorder.read(buffer_recording, 0, minimum_buffer_size);
System.arraycopy(buffer_recording, 0, audioBuffer, count, buffer_recording.length);
count += bufferReadResult;
System.out.println(count);
finalData = convert_to_double(buffer_recording);
int [] magnitudes = processFFT(finalData);
}
else {
stopRecording();
}
}
}
catch (Throwable t) {
t.printStackTrace();
Log.e("V1", "Recording Failed");
}
return null;
}
@Override
protected void onProgressUpdate(int[]... magnitudes) {
}
}
private int[] processFFT(double [] data){
Complex[] fftTempArray = new Complex[finalData.length];
for (int i=0; i<finalData.length; i++)
{
fftTempArray[i] = new Complex(finalData[i], 0);
}
fftArray = FFT.fft(fftTempArray);
int [] magnitude = new int[fftArray.length/2];
for (int i=0; i< fftArray.length/2; i++) {
magnitude[i] = (int) fftArray[i].abs();
}
return magnitude;
}
private double[] convert_to_double(short data[]) {
double[] transformed = new double[data.length];
for (int j=0;j<data.length;j++) {
transformed[j] = (double)data[j];
}
return transformed;
}
private void stopRecording() {
if (null != recorder) {
recorder.stop();
postAudio(audioBuffer);
recorder.release();
is_recording = false;
recorder = null;
recordingThread = null;
count = 0;
recieved = false;
}
}
I am not sure why there is a lag; however, you can work around the problem: run two async tasks. Task 1 records the data and stores it in a buffer; the second async task takes chunks from this buffer and does the FFT, as in the sketch below.
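A minimal sketch of that split, using a BlockingQueue as the shared buffer (the queue-based hand-off and all names here are my suggestion; is_recording, recorder, processFFT and convert_to_double refer to your existing code; assumes java.util.Arrays and the java.util.concurrent BlockingQueue/LinkedBlockingQueue/TimeUnit imports):

// Sketch: task 1 records chunks into a queue, task 2 drains it and runs the FFT.
final BlockingQueue<short[]> chunkQueue = new LinkedBlockingQueue<>();

// Recording task: only reads from the AudioRecord, never does FFT work.
new AsyncTask<Void, Void, Void>() {
    @Override
    protected Void doInBackground(Void... args) {
        short[] chunk = new short[2048];
        while (is_recording) {
            int read = recorder.read(chunk, 0, chunk.length);
            if (read > 0) {
                chunkQueue.offer(Arrays.copyOf(chunk, read));
            }
        }
        return null;
    }
}.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);

// FFT task: processes chunks as they become available.
new AsyncTask<Void, Void, Void>() {
    @Override
    protected Void doInBackground(Void... args) {
        try {
            while (is_recording || !chunkQueue.isEmpty()) {
                short[] chunk = chunkQueue.poll(100, TimeUnit.MILLISECONDS);
                if (chunk != null) {
                    processFFT(convert_to_double(chunk));
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return null;
    }
}.executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);

Note that executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR) is used because plain execute() runs AsyncTasks serially on a single background thread. This way a slow FFT no longer stalls the next recorder.read(), so samples are not dropped while the spectrum is being computed.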
AsyncTask runs at a lower priority to make sure the UI thread remains responsive. Thus, the more UI work there is, the more an AsyncTask gets delayed.
You're facing the delay because Android schedules BACKGROUND-priority threads into a Linux cgroup that is limited to roughly 10% of CPU time altogether.
If you go with THREAD_PRIORITY_BACKGROUND + THREAD_PRIORITY_MORE_FAVORABLE,
your thread will be lifted out of the 10% limitation.
So your code will look like this:
protected final Void doInBackground(Void... arg0) {
Process.setThreadPriority(THREAD_PRIORITY_BACKGROUND + THREAD_PRIORITY_MORE_FAVORABLE);
...//your code here
}
If that stops working on a later call of doInBackground, it is because Android resets the priority by default. In that case, try using Process.THREAD_PRIORITY_FOREGROUND.
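For instance, a minimal sketch of re-applying it on every call (just the standard android.os.Process API, nothing app-specific):

@Override
protected Void doInBackground(Void... arg0) {
    // Re-apply on each invocation, since Android may have reset the thread priority.
    Process.setThreadPriority(Process.THREAD_PRIORITY_FOREGROUND);
    ...//your code here
    return null;
}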
I created an app to record voice and determine the frequency of the sound:
package com.example.recordsound;
import edu.emory.mathcs.jtransforms.fft.DoubleFFT_1D;
import ca.uol.aig.fftpack.RealDoubleFFT;
public class MainActivity extends Activity implements OnClickListener{
int audioSource = MediaRecorder.AudioSource.MIC; // Audio source is the device MIC
int channelConfig = AudioFormat.CHANNEL_IN_MONO; // Recording in mono
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT; // Records in 16bit
private DoubleFFT_1D fft; // The fft double array
private RealDoubleFFT transformer;
int blockSize = 256; // deal with this many samples at a time
int sampleRate = 8000; // Sample rate in Hz
public double frequency = 0.0; // the frequency given
RecordAudio recordTask; // Creates a Record Audio command
TextView tv; // Creates a text view for the frequency
boolean started = false;
Button startStopButton;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
tv = (TextView)findViewById(R.id.textView1);
startStopButton= (Button)findViewById(R.id.button1);
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
private class RecordAudio extends AsyncTask<Void, Double, Void>{
@Override
protected Void doInBackground(Void... params){
/*Calculates the fft and frequency of the input*/
//try{
int bufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioEncoding); // Gets the minimum buffer needed
AudioRecord audioRecord = new AudioRecord(audioSource, sampleRate, channelConfig, audioEncoding, bufferSize); // The RAW PCM sample recording
short[] buffer = new short[blockSize]; // Save the raw PCM samples as short bytes
// double[] audioDataDoubles = new double[(blockSize*2)]; // Same values as above, as doubles
// -----------------------------------------------
double[] re = new double[blockSize];
double[] im = new double[blockSize];
double[] magnitude = new double[blockSize];
// ----------------------------------------------------
double[] toTransform = new double[blockSize];
tv.setText("Hello");
// fft = new DoubleFFT_1D(blockSize);
try{
audioRecord.startRecording(); //Start
}catch(Throwable t){
Log.e("AudioRecord", "Recording Failed");
}
while(started){
/* Reads the data from the microphone. it takes in data
* to the size of the window "blockSize". The data is then
* given in to audioRecord. The int returned is the number
* of bytes that were read*/
int bufferReadResult = audioRecord.read(buffer, 0, blockSize);
// Read in the data from the mic to the array
for(int i = 0; i < blockSize && i < bufferReadResult; i++) {
/* dividing the short by 32768.0 gives us the
* result in a range -1.0 to 1.0.
* Data for the compextForward is given back
* as two numbers in sequence. Therefore audioDataDoubles
* needs to be twice as large*/
// audioDataDoubles[2*i] = (double) buffer[i]/32768.0; // signed 16 bit
//audioDataDoubles[(2*i)+1] = 0.0;
toTransform[i] = (double) buffer[i] / 32768.0; // signed 16 bit
}
//audiodataDoubles now holds data to work with
// fft.complexForward(audioDataDoubles);
transformer.ft(toTransform);
//------------------------------------------------------------------------------------------
// Calculate the Real and imaginary and Magnitude.
for(int i = 0; i < blockSize; i++){
// real is stored in first part of array
re[i] = toTransform[i*2];
// imaginary is stored in the sequential part
im[i] = toTransform[(i*2)+1];
// magnitude is calculated by the square root of (imaginary^2 + real^2)
magnitude[i] = Math.sqrt((re[i] * re[i]) + (im[i]*im[i]));
}
double peak = -1.0;
// Get the largest magnitude peak
for(int i = 0; i < blockSize; i++){
if(peak < magnitude[i])
peak = magnitude[i];
}
// calculated the frequency
frequency = (sampleRate * peak)/blockSize;
//----------------------------------------------------------------------------------------------
/* calls onProgressUpdate
* publishes the frequency
*/
publishProgress(frequency);
try{
audioRecord.stop();
}
catch(IllegalStateException e){
Log.e("Stop failed", e.toString());
}
}
// }
return null;
}
protected void onProgressUpdate(Double... frequencies){
//print the frequency
String info = Double.toString(frequencies[0]);
tv.setText(info);
}
}
@Override
public void onClick(View v) {
// TODO Auto-generated method stub
if(started){
started = false;
startStopButton.setText("Start");
recordTask.cancel(true);
} else {
started = true;
startStopButton.setText("Stop");
recordTask = new RecordAudio();
recordTask.execute();
}
}
}
How can I detect the location of a sound with a phone?
I want to display an arrow and rotate it toward the direction of the sound.
Thank you...
Is a sound locator possible with an Android phone,
or do I need to build a robot or an external sensor?
Thanks
Edit: I've edited the code to show my fruitless (and maybe completely stupid) attempt to solve the problem myself. With this code I only get an awful rattle-like sound.
I’m rather new to Android app development and now my uncle asked me to develop an app for him, which records audio and simultaneously plays it. As if this wasn't enough, he also wants me to add a frequency filter. Actually, that’s beyond my skills, but I told him I would try, anyway.
I am able to record audio and play it with the AudioRecord and AudioTrack classes, respectively, but I have big problems with the frequency filter. I've used Google and searched this forum, of course, and found some promising code snippets, but nothing really worked.
This is the (working) code I have so far:
public class MainActivity extends ActionBarActivity {
float freq_min;
float freq_max;
boolean isRecording = false;
int SAMPLERATE = 8000;
int AUDIO_FORMAT = AudioFormat.ENCODING_PCM_16BIT;
Thread recordingThread = null;
AudioRecord recorder;
Button cmdPlay;
EditText txtMinFrequency, txtMaxFrequency;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
cmdPlay = (Button)findViewById(R.id.bPlay);
cmdPlay.setOnClickListener(onClickListener);
txtMinFrequency = (EditText)findViewById(R.id.frequency_min);
txtMaxFrequency = (EditText)findViewById(R.id.frequency_max);
}
private OnClickListener onClickListener = new OnClickListener() {
@Override
public void onClick(View v) {
if (!isRecording) {
freq_min = Float.parseFloat(txtMinFrequency.getText().toString());
freq_max = Float.parseFloat(txtMaxFrequency.getText().toString());
isRecording = true;
cmdPlay.setText("stop");
startRecording();
}
else {
isRecording = false;
cmdPlay.setText("play");
stopRecording();
}
}
};
public void startRecording() {
recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLERATE,
AudioFormat.CHANNEL_IN_MONO, AUDIO_FORMAT, 1024);
recorder.startRecording();
recordingThread = new Thread(new Runnable(){
public void run() {
recordAndWriteAudioData();
}
});
recordingThread.start();
}
public void stopRecording() {
isRecording = false;
recorder.stop();
recorder.release();
recorder = null;
recordingThread = null;
}
private void recordAndWriteAudioData() {
byte audioData[] = new byte[1024];
AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, SAMPLERATE, AudioFormat.CHANNEL_OUT_MONO,
AudioFormat.ENCODING_PCM_16BIT, 1024, AudioTrack.MODE_STREAM);
at.play();
while (isRecording) {
recorder.read(audioData, 0, 1024);
// Converting from byte array to float array and dividing floats by 32768 to get values between -1 and 1
float[] audioDataF = shortToFloat(byteToShort(audioData));
for (int i = 0; i < audioDataF.length; i++) {
audioDataF[i] /= 32768.0;
}
// Fast Fourier Transform
FloatFFT_1D fft = new FloatFFT_1D(512);
fft.realForward(audioDataF);
// fiter frequencies
for(int fftBin = 0; fftBin < 512; fftBin++){
float frequency = (float)fftBin * (float)SAMPLERATE / (float)512;
if(frequency < freq_min || frequency > freq_max){
int real = 2 * fftBin;
int imaginary = 2 * fftBin + 1;
audioDataF[real] = 0;
audioDataF[imaginary] = 0;
}
}
//inverse FFT
fft.realInverse(audioDataF, false);
// multiplying the floats by 32768
for (int i = 0; i < audioDataF.length; i++) {
audioDataF[i] *= 32768.0;
}
// converting float array back to short array
audioData = shortToByte(floatToShort(audioDataF));
at.write(audioData, 0, 1024);
}
at.stop();
at.release();
}
public static short[] byteToShort (byte[] byteArray){
short[] shortOut = new short[byteArray.length / 2];
ByteBuffer byteBuffer = ByteBuffer.wrap(byteArray);
for (int i = 0; i < shortOut.length; i++) {
shortOut[i] = byteBuffer.getShort();
}
return shortOut;
}
public static float[] shortToFloat (short[] shortArray){
float[] floatOut = new float[shortArray.length];
for (int i = 0; i < shortArray.length; i++) {
floatOut[i] = shortArray[i];
}
return floatOut;
}
public static short[] floatToShort (float[] floatArray){
short[] shortOut = new short[floatArray.length];
for (int i = 0; i < floatArray.length; i++) {
shortOut[i] = (short) floatArray[i];
}
return shortOut;
}
public static byte[] shortToByte (short[] shortArray){
byte[] byteOut = new byte[shortArray.length * 2];
ByteBuffer.wrap(byteOut).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(shortArray);
return byteOut;
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
getMenuInflater().inflate(R.menu.main, menu);
return true;
}
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
int id = item.getItemId();
if (id == R.id.action_settings) {
return true;
}
return super.onOptionsItemSelected(item);
}
}
On the page Filter AudioRecord Frequencies, I found code that uses FFT to filter frequencies.
I hope this code is correct because, to be honest, I wouldn't know how to alter it if it wasn't. But the actual problem is that the audio buffer is a byte array, while I need a float array with values between 0 and 1 for the FFT (and after the inverse FFT I have to convert the float array back to a byte array).
I simply can’t find code anywhere to do this, so any help would be highly appreciated!
Your byteToShort conversion is incorrect. While the audio data (and most Android devices) are little-endian, ByteBuffer uses big-endian order by default, so we need to force little-endian before converting to shorts:
public static short[] byteToShort (byte[] byteArray){
short[] shortOut = new short[byteArray.length / 2];
ByteBuffer byteBuffer = ByteBuffer.wrap(byteArray);
byteBuffer.order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shortOut);
return shortOut;
}
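With this fix, byteToShort becomes the exact inverse of your shortToByte (which already writes little-endian), so a quick round-trip check should now hold (a sketch with made-up sample values):

// Sketch: sanity-check that byteToShort and shortToByte now agree.
short[] original = { 0, 1000, -1000, Short.MAX_VALUE, Short.MIN_VALUE };
short[] roundTrip = byteToShort(shortToByte(original));
// roundTrip now equals original, element for element.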
What I want is to be able to get the current noise level in decibels (dB) at the click of a button. I have been playing around with the sensors and can get them to work easily, but this has me stumped. I've tried a few pieces of code, but none worked or helped me understand this.
How can this be achieved?
EDIT:
I use the following code:
private Thread recordingThread;
private int bufferSize = 800;
private short[][] buffers = new short[256][bufferSize];
private int[] averages = new int[256];
private int lastBuffer = 0;
AudioRecord recorder;
boolean recorderStarted = false;
protected void startListenToMicrophone()
{
if (!recorderStarted)
{
recordingThread = new Thread()
{
@Override
public void run()
{
int minBufferSize = AudioRecord.getMinBufferSize(8000,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT);
recorder = new AudioRecord(AudioSource.MIC, 8000,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT, minBufferSize * 10);
recorder.setPositionNotificationPeriod(bufferSize);
recorder.setRecordPositionUpdateListener(new OnRecordPositionUpdateListener()
{
@Override
public void onPeriodicNotification(AudioRecord recorder)
{
short[] buffer = buffers[++lastBuffer
% buffers.length];
recorder.read(buffer, 0, bufferSize);
long sum = 0;
for (int i = 0; i < bufferSize; ++i)
{
sum += Math.abs(buffer[i]);
}
averages[lastBuffer % buffers.length] = (int) (sum / bufferSize);
lastBuffer = lastBuffer % buffers.length;
Log.i("dB", ""+averages);
tv4.setText("" + averages[1]);
}
@Override
public void onMarkerReached(AudioRecord recorder)
{
}
});
recorder.startRecording();
short[] buffer = buffers[lastBuffer % buffers.length];
recorder.read(buffer, 0, bufferSize);
while (true)
{
if (isInterrupted())
{
recorder.stop();
recorder.release();
break;
}
}
}
};
recordingThread.start();
recorderStarted = true;
}
}
private void stopListenToMicrophone()
{
if (recorderStarted)
{
if (recordingThread != null && recordingThread.isAlive()
&& !recordingThread.isInterrupted())
{
recordingThread.interrupt();
}
recorderStarted = false;
}
}
}
I have two buttons in my app. The first one calls startListenToMicrophone and the second calls stopListenToMicrophone. I don't understand how this works; I got the code from here.
The TextView shows a weird, very large value. What I need is the sound level in decibels.
Just a passing thought, and I may be very wrong, but amplitude in dB = 20 x log10(S1/S2).
I couldn't find this calculation anywhere in your code. What you need to do is get S1, which will be the current recorded level, and S2, which needs to be the maximum possible value you can record; then calculate the dB value.
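For illustration, a minimal sketch of that formula applied to one buffer of 16-bit samples (the RMS choice for S1 and the method name are my own; S2 is the largest possible short value, so the result is in dB relative to full scale and will be 0 or negative):

// Sketch: level of a 16-bit PCM buffer in dB relative to full scale (dBFS).
static double computeDbfs(short[] buffer, int length) {
    double sumOfSquares = 0;
    for (int i = 0; i < length; i++) {
        sumOfSquares += (double) buffer[i] * buffer[i];
    }
    double rms = Math.sqrt(sumOfSquares / length); // S1: current recorded level
    return 20.0 * Math.log10(rms / 32767.0);       // 20 * log10(S1 / S2)
}

You could call something like this from onPeriodicNotification with the buffer you just read, instead of averaging the absolute values.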