I want to record some audio using AudioRecord. To initialize the AudioRecord object you must provide several arguments (e.g. rate, channel, encoding), and since different hardware devices support different combinations of arguments, I went and checked working apps such as
Ringdroid:
Audio recording done in Ringdroid
and Rehearsal Assistant:
Audio recording in Rehearsal Assistant
As mentioned in the documentation of the AudioRecord class, the configuration that will always work is rate = 44100 and channel = CHANNEL_IN_MONO.
I am using the same arguments when initializing my AudioRecord object, but I still get a runtime error saying that the object is uninitialized. Since Ringdroid works fine on my device (Nexus 5), I used the same configuration when creating my AudioRecord object.
package com.example.android.visualizeaudio;
import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.view.Menu;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;
public class MainActivity extends Activity {
int mSampleRate = 44100;
Button startButton;
boolean started = false;
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
startButton = (Button) this.findViewById(R.id.start_button);
}
private void RecordAudio() {
int minBufferSize = AudioRecord.getMinBufferSize(
mSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
// make sure minBufferSize can contain at least 1 second of audio (16 bits sample).
if (minBufferSize < mSampleRate * 2) {
minBufferSize = mSampleRate * 2;
}
AudioRecord audioRecord = new AudioRecord(
MediaRecorder.AudioSource.MIC,
mSampleRate,
AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT,
minBufferSize
);
audioRecord.startRecording();
//Do some stuff here with the recorded data
audioRecord.stop();
audioRecord.release();
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
getMenuInflater().inflate(R.menu.menu_main, menu);
return true;
}
public void startRec(View view) {
if (started) {
started = false;
startButton.setText("Start");
} else {
started = true;
startButton.setText("Stop");
Toast.makeText(this, "Recording started", Toast.LENGTH_LONG).show();
RecordAudio();
}
}
}
I am attaching the object inspection during debugging in case it provides more insight
Thank you
Switching to target SDK version 22 did the trick. With target SDK 23 I had these errors. I don't know why, but it seems the resource I am trying to access is held by the OS.
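A likely explanation: when targeting API 23, RECORD_AUDIO becomes a runtime permission that must be granted while the app is running; without that grant, the AudioRecord ends up in STATE_UNINITIALIZED. A minimal sketch of the runtime check, assuming the permission is already declared in the manifest (ensureAudioPermission is a hypothetical helper, to be called before RecordAudio()):
// Minimal sketch: on API 23+ the RECORD_AUDIO permission must also be granted
// at runtime; otherwise AudioRecord is created in STATE_UNINITIALIZED.
private static final int REQUEST_RECORD_AUDIO = 1; // arbitrary request code
private void ensureAudioPermission() {
    if (checkSelfPermission(android.Manifest.permission.RECORD_AUDIO)
            != android.content.pm.PackageManager.PERMISSION_GRANTED) {
        requestPermissions(new String[]{android.Manifest.permission.RECORD_AUDIO},
                REQUEST_RECORD_AUDIO);
    }
}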
I want to play music tracks with the AudioTrack class in Android, using stereo PCM 16-bit audio. Here's my code for MusicListFragment.java:
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.support.v4.app.Fragment;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.AdapterView;
import android.widget.ListView;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStream;
public class MusicListFragment extends Fragment implements AdapterView.OnItemClickListener {
private AudioTrack mAudioTrack;
public MusicListFragment() {
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
// Inflate the layout for this fragment
View view = inflater.inflate(R.layout.fragment_music_list, container, false);
ListView musicListView = (ListView) view.findViewById(R.id.music_list_view);
musicListView.setAdapter(new MusicListAdapter(getActivity()));
int minBufferSize = AudioTrack.getMinBufferSize(
22000,
AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 22000
, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, minBufferSize,
AudioTrack.MODE_STREAM);
musicListView.setOnItemClickListener(this);
return view;
}
@Override
public void onItemClick(AdapterView<?> adapterView, View view, int i, long l) {
Music music = (Music) adapterView.getItemAtPosition(i);
InputStream inputStream = null;
byte[] bytes = new byte[512];
mAudioTrack.play();
try {
File file = new File(music.getPath());
inputStream = new FileInputStream(file);
int k;
while ((k = inputStream.read(bytes)) != -1) {
mAudioTrack.write(bytes, 0, k);
}
} catch (FileNotFoundException e) {
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
}
}
The adapter works fine, since I tested it using the MediaPlayer class. (I can provide my adapter and other classes too, if you want, but I doubt they are the issue.) My list view shows the title, artist and album of each track and also stores each track's path.
I could play the tracks easily with MediaPlayer, but I'm having problems with AudioTrack. The code makes the device play static, like an old TV with no signal! :)
As you can see in the adapter's click listener, I'm
1. getting the selected track,
2. reading the music file into an InputStream,
3. and finally writing the bytes to the AudioTrack instance. (I've also tried putting the mAudioTrack.play() call after the try/catch statement, with no luck.) What am I doing wrong here?
Playing the binary contents of a compressed audio file to the AudioTrack, perhaps? This won't work unless your music files are stored in raw, uncompressed format; AudioTrack expects PCM data. Even the header of a .wav file would sound like static, until you reached the raw samples.
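If your files really are uncompressed 16-bit PCM .wav files, a minimal sketch along these lines should work, assuming a canonical 44-byte header and that the file's sample rate and channel count match the AudioTrack configuration (playWav is a hypothetical helper; call it off the main thread):
private void playWav(File wavFile) {
    int sampleRate = 44100; // assumed; a real implementation should read this from the WAV header
    int minBufferSize = AudioTrack.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize, AudioTrack.MODE_STREAM);
    track.play();
    try (FileInputStream in = new FileInputStream(wavFile)) {
        in.skip(44); // skip the canonical 44-byte WAV header
        byte[] buffer = new byte[minBufferSize];
        int read;
        while ((read = in.read(buffer)) != -1) {
            track.write(buffer, 0, read); // raw PCM goes straight to the track
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        track.stop();
        track.release();
    }
}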
Thanks to @greeble31's answer, I understand my issue now. I searched for how to decode .mp3 and .wav files to PCM and found some useful answers here and here, in case anyone needs them.
I need to change the input and output streams of a voice call, for example to change a man's voice to a woman's voice, or a human voice to a cartoon voice, on demand.
If you have any ideas or Android source code, please share them.
You can use the Google Glass waveform sample project as a reference; it is available here. Here is the relevant extract:
package com.google.android.glass.sample.waveform;
import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder.AudioSource;
import android.os.Bundle;
import android.widget.TextView;
/**
* Receives audio input from the microphone and displays a visualization of that data as a waveform
* on the screen.
*/
public class WaveformActivity extends Activity {
// The sampling rate for the audio recorder.
private static final int SAMPLING_RATE = 44100;
private WaveformView mWaveformView;
private TextView mDecibelView;
private RecordingThread mRecordingThread;
private int mBufferSize;
private short[] mAudioBuffer;
private String mDecibelFormat;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.layout_waveform);
mWaveformView = (WaveformView) findViewById(R.id.waveform_view);
mDecibelView = (TextView) findViewById(R.id.decibel_view);
// Compute the minimum required audio buffer size and allocate the buffer.
mBufferSize = AudioRecord.getMinBufferSize(SAMPLING_RATE, AudioFormat.CHANNEL_IN_MONO,
AudioFormat.ENCODING_PCM_16BIT);
mAudioBuffer = new short[mBufferSize / 2];
mDecibelFormat = getResources().getString(R.string.decibel_format);
}
@Override
protected void onResume() {
super.onResume();
mRecordingThread = new RecordingThread();
mRecordingThread.start();
}
@Override
protected void onPause() {
super.onPause();
if (mRecordingThread != null) {
mRecordingThread.stopRunning();
mRecordingThread = null;
}
}
/**
* A background thread that receives audio from the microphone and sends it to the waveform
* visualizing view.
*/
private class RecordingThread extends Thread {
private boolean mShouldContinue = true;
@Override
public void run() {
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_AUDIO);
AudioRecord record = new AudioRecord(AudioSource.MIC, SAMPLING_RATE,
AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mBufferSize);
record.startRecording();
while (shouldContinue()) {
record.read(mAudioBuffer, 0, mBufferSize / 2);
mWaveformView.updateAudioData(mAudioBuffer);
updateDecibelLevel();
}
record.stop();
record.release();
}
/**
* Gets a value indicating whether the thread should continue running.
*
* @return true if the thread should continue running or false if it should stop
*/
private synchronized boolean shouldContinue() {
return mShouldContinue;
}
/** Notifies the thread that it should stop running at the next opportunity. */
public synchronized void stopRunning() {
mShouldContinue = false;
}
/**
* Computes the decibel level of the current sound buffer and updates the appropriate text
* view.
*/
private void updateDecibelLevel() {
// Compute the root-mean-squared of the sound buffer and then apply the formula for
// computing the decibel level, 20 * log_10(rms). This is an uncalibrated calculation
// that assumes no noise in the samples; with 16-bit recording, it can range from
// -90 dB to 0 dB.
double sum = 0;
for (short rawSample : mAudioBuffer) {
double sample = rawSample / 32768.0;
sum += sample * sample;
}
double rms = Math.sqrt(sum / mAudioBuffer.length);
final double db = 20 * Math.log10(rms);
// Update the text view on the main thread.
mDecibelView.post(new Runnable() {
@Override
public void run() {
mDecibelView.setText(String.format(mDecibelFormat, db));
}
});
}
}
}
I'm trying to write an app that will listen for sound through a phone/tablet's microphone. I think capturing sound is not too hard; I've found some code here.
I was wondering how I would go about measuring the volume level. Ideally I'd like to convert the sound level into decibels, but any arbitrary scale would do just fine.
In the end I just used the Java SDK to do this, and then when I converted the app to iOS I rewrote it in Objective-C.
On Android you can import the following libraries:
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.util.Log;
And then make some calls like this to read the amplitude of the audio being recorded:
int minSize = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord ar = new AudioRecord(MediaRecorder.AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minSize);
short[] buffer = new short[minSize];
ar.startRecording();
while (true) {
    int read = ar.read(buffer, 0, minSize);
    long amplitudeSum = 0;
    for (int i = 0; i < read; i++) {
        amplitudeSum += Math.abs(buffer[i]); // absolute value of each 16-bit sample
    }
    Log.i("NOISE LEVEL", Long.toString(amplitudeSum));
}
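If you want an (uncalibrated) decibel value instead of a raw amplitude sum, one option is to take the RMS of the normalized samples and apply 20 * log10(rms), the same formula used in the Glass sample above. A sketch, reusing the buffer and read variables from the loop:
double sum = 0;
for (int i = 0; i < read; i++) {
    double sample = buffer[i] / 32768.0; // normalize a 16-bit sample to [-1, 1)
    sum += sample * sample;
}
double rms = Math.sqrt(sum / read);
double db = 20 * Math.log10(rms); // roughly -90 dB (silence) to 0 dB (full scale)
Log.i("NOISE LEVEL", "dB: " + db);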
I use this code to record and play back audio in real time using the AudioTrack and AudioRecord classes:
package com.example.audiotrack;
import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.util.Log;
public class MainActivity extends Activity {
private int freq = 8000;
private AudioRecord audioRecord = null;
private Thread Rthread = null;
private AudioManager audioManager = null;
private AudioTrack audioTrack = null;
byte[] buffer = new byte[freq];
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.main);
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
final int bufferSize = AudioRecord.getMinBufferSize(freq,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT);
audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, freq,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
MediaRecorder.AudioEncoder.AMR_NB, bufferSize);
audioTrack = new AudioTrack(AudioManager.ROUTE_HEADSET, freq,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
MediaRecorder.AudioEncoder.AMR_NB, bufferSize,
AudioTrack.MODE_STREAM);
audioTrack.setPlaybackRate(freq);
final byte[] buffer = new byte[bufferSize];
audioRecord.startRecording();
Log.i("info", "Audio Recording started");
audioTrack.play();
Log.i("info", "Audio Playing started");
Rthread = new Thread(new Runnable() {
public void run() {
while (true) {
try {
audioRecord.read(buffer, 0, bufferSize);
audioTrack.write(buffer, 0, buffer.length);
} catch (Throwable t) {
Log.e("Error", "Read write failed");
t.printStackTrace();
}
}
}
});
Rthread.start();
}
}
My problems:
1. The audio quality is bad.
2. When I try different frequencies, the app crashes.
Audio quality can be bad because you are using the AMR codec to compress the audio data. AMR compression is based on an acoustic model of human speech, so any sounds other than speech will come out in poor quality.
Instead of
MediaRecorder.AudioEncoder.AMR_NB
try
AudioFormat.ENCODING_PCM_16BIT
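For example, the initialization could look roughly like this (a sketch only, not the poster's exact code; it also replaces AudioManager.ROUTE_HEADSET, which is not a valid stream type, with AudioManager.STREAM_MUSIC, and uses the non-deprecated CHANNEL_IN_MONO / CHANNEL_OUT_MONO constants):
// Record and play 16-bit PCM mono at the same rate.
int bufferSize = AudioRecord.getMinBufferSize(freq,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, freq,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, freq,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize,
        AudioTrack.MODE_STREAM);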
AudioRecord is a low-level tool, so you must take care of parameter compatibility on your own. As the documentation says, many frequencies are not guaranteed to work.
So it is a good idea to go through all the combinations and check which of them are available before trying to record or play; see the sketch below.
A nice solution has been mentioned a few times on Stack Overflow, e.g. here:
Frequency detection on Android - AudioRecord
Check the public AudioRecord findAudioRecord() method.
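Along those lines, a sketch of the probing approach (the sample rates and configurations below are only illustrative):
private static final int[] CANDIDATE_RATES = {8000, 11025, 16000, 22050, 44100};
public AudioRecord findAudioRecord() {
    for (int rate : CANDIDATE_RATES) {
        for (int format : new int[]{AudioFormat.ENCODING_PCM_16BIT, AudioFormat.ENCODING_PCM_8BIT}) {
            for (int channelConfig : new int[]{AudioFormat.CHANNEL_IN_MONO, AudioFormat.CHANNEL_IN_STEREO}) {
                int bufferSize = AudioRecord.getMinBufferSize(rate, channelConfig, format);
                if (bufferSize > 0) {
                    // The combination is reported as valid; verify it actually initializes.
                    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT,
                            rate, channelConfig, format, bufferSize);
                    if (recorder.getState() == AudioRecord.STATE_INITIALIZED) {
                        return recorder;
                    }
                    recorder.release();
                }
            }
        }
    }
    return null; // no supported configuration found on this device
}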
I have been searching everywhere for a reliable method to calculate the FFT of an audio byte stream received by a native function in the Android SDK (through the Eclipse IDE). I have come across the libgdx FFT and JTransforms. JTransforms can be found here:
JTransform
I have downloaded them all and added the .jar files to a libs folder created in the project's root directory. I then linked the project to the new .jar files through Project Properties > Java Build Path > Libraries.
My source file, attempting to use JTransforms, looks like this:
package com.spectrum;
import android.app.Activity;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.media.MediaPlayer;
import android.os.Bundle;
import android.widget.LinearLayout;
import android.widget.TextView;
import android.view.View;
import com.badlogic.gdx.audio.analysis.*;
import edu.emory.mathcs.jtransforms.fft.*;
import edu.emory.mathcs.utils.*;
import java.nio.ByteBuffer;
public class spectrum extends Activity {
static byte [] sin = null;
static int f = 2000;
static int fs = 44100;
double buf;
/** Called when the activity is first created. */
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
//initialize(new SpectrumDesktop(), false);
sin = playsin(f,fs);
buf = bytearray2double(sin);
public DoubleFFT_1D(512); //<Undefined>
public void complexForward(sin) //<Undefined>
playsound(sin);
}
public static double bytearray2double(byte[] b) {
ByteBuffer buf = ByteBuffer.wrap(b);
return buf.getDouble();
}
private void playsound(byte[] sin2){
int intSize = android.media.AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO,
AudioFormat.ENCODING_PCM_16BIT);
AudioTrack at = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO,
AudioFormat.ENCODING_PCM_16BIT, intSize, AudioTrack.MODE_STREAM);
at.play();
at.write(sin2, 0, sin2.length );
at.stop();
at.release();
}
@Override
protected void onDestroy() {
super.onDestroy();
// TODO Auto-generated method stub
if (mMediaPlayer != null) {
mMediaPlayer.release();
mMediaPlayer = null;
}
}
public native byte[] playsin(int f,int fs);
/** Load jni .so on initialisation */
static {
System.loadLibrary("SPL");
}
}
In this example I am only using the JTransforms packages; however, I have been getting the same compile error with the libgdx packages. The compiler says that DoubleFFT_1D and complexForward are undefined.
So there is something I am missing, perhaps not linking my libraries correctly; I am not sure.
Any help would be greatly appreciated. Am I meant to declare an instance of DoubleFFT_1D before onCreate or something?
I know this is a noob question, but I am new to object-oriented languages and learning Java on the go. Thanks :)
You first need to create a Fourier transform object:
DoubleFFT_1D fft = new DoubleFFT_1D(n);
where n is the number of complex points you want to transform. Note that the input array has to be twice that size, since complexForward expects the real and imaginary parts interleaved side by side in the same array.
Then you can call the transform methods on fft, e.g.
fft.complexForward(data);
where data is a double[] of length 2 * n. Note that the transform is performed in place: the result overwrites the input array.
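Putting it together, a minimal sketch assuming n = 512 complex points and an assumed samples double[] holding the audio data:
int n = 512;
DoubleFFT_1D fft = new DoubleFFT_1D(n);
// complexForward expects real and imaginary parts interleaved,
// so the array holds 2 * n doubles and is transformed in place.
double[] data = new double[2 * n];
for (int i = 0; i < n; i++) {
    data[2 * i] = samples[i]; // real part ('samples' is assumed to hold the audio data)
    data[2 * i + 1] = 0.0;    // imaginary part
}
fft.complexForward(data); // the result overwrites 'data'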
1. In Project Properties > Java Build Path > Order and Export, check all your added dependencies so they are included with the project class files.
2. Select Android Tools > Fix Project Properties.
Then run your app.