I use this code to record audio and play it back in real time using the AudioRecord and AudioTrack classes.
package com.example.audiotrack;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.util.Log;

public class MainActivity extends Activity {
    private int freq = 8000;
    private AudioRecord audioRecord = null;
    private Thread Rthread = null;
    private AudioManager audioManager = null;
    private AudioTrack audioTrack = null;
    byte[] buffer = new byte[freq];

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        final int bufferSize = AudioRecord.getMinBufferSize(freq,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, freq,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                MediaRecorder.AudioEncoder.AMR_NB, bufferSize);
        audioTrack = new AudioTrack(AudioManager.ROUTE_HEADSET, freq,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                MediaRecorder.AudioEncoder.AMR_NB, bufferSize,
                AudioTrack.MODE_STREAM);
        audioTrack.setPlaybackRate(freq);
        final byte[] buffer = new byte[bufferSize];
        audioRecord.startRecording();
        Log.i("info", "Audio Recording started");
        audioTrack.play();
        Log.i("info", "Audio Playing started");
        Rthread = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        audioRecord.read(buffer, 0, bufferSize);
                        audioTrack.write(buffer, 0, buffer.length);
                    } catch (Throwable t) {
                        Log.e("Error", "Read write failed");
                        t.printStackTrace();
                    }
                }
            }
        });
        Rthread.start();
    }
}
My problems:
1. The audio quality is bad.
2. When I try different frequencies, the app crashes.
Audio quality can be bad because you are using the AMR codec to compress the audio data. AMR compression is based on an acoustic model of human speech, so any sound other than speech will come out in poor quality.
Instead of
MediaRecorder.AudioEncoder.AMR_NB
try
AudioFormat.ENCODING_PCM_16BIT
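Applied to the constructors above, a minimal sketch of the corrected setup might look like this (note the original also passes AudioManager.ROUTE_HEADSET as the AudioTrack stream type and an encoder constant as the format; AudioManager.STREAM_MUSIC and raw PCM are what these constructors actually expect, and CHANNEL_IN_/CHANNEL_OUT_MONO replace the deprecated CHANNEL_CONFIGURATION_MONO):

    final int bufferSize = AudioRecord.getMinBufferSize(freq,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    // record raw 16-bit PCM rather than passing an encoder constant
    audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, freq,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
    // play the same raw PCM back on the music stream
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, freq,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize,
            AudioTrack.MODE_STREAM);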
AudioRecord is a low-level tool, so you must take care of parameter compatibility on your own. As the documentation says, many frequencies are not guaranteed to work.
So it is a good idea to go through all the combinations and check which of them are accessible before trying to record or play.
A nice solution has been mentioned a few times on Stack Overflow, e.g. here:
Frequency detection on Android - AudioRecord
Check the public AudioRecord findAudioRecord() method.
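That method simply brute-forces candidate parameter sets and returns the first AudioRecord that actually initializes; a sketch along those lines (the rate and format lists here are assumptions to adjust for your use case):

    private static final int[] SAMPLE_RATES = new int[] { 8000, 11025, 22050, 44100 };

    public AudioRecord findAudioRecord() {
        for (int rate : SAMPLE_RATES) {
            for (int format : new int[] { AudioFormat.ENCODING_PCM_8BIT, AudioFormat.ENCODING_PCM_16BIT }) {
                for (int channel : new int[] { AudioFormat.CHANNEL_IN_MONO, AudioFormat.CHANNEL_IN_STEREO }) {
                    try {
                        int bufferSize = AudioRecord.getMinBufferSize(rate, channel, format);
                        if (bufferSize != AudioRecord.ERROR_BAD_VALUE) {
                            // the framework accepted the combination; now try to actually open it
                            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT,
                                    rate, channel, format, bufferSize);
                            if (recorder.getState() == AudioRecord.STATE_INITIALIZED)
                                return recorder;
                            recorder.release();
                        }
                    } catch (Exception e) {
                        Log.e("findAudioRecord", rate + " Hz is not supported", e);
                    }
                }
            }
        }
        return null;
    }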
Related
The goal is to organize a voice call between two devices. The problem is in the receiving part: I get a very high level of noise, so it is impossible to understand the speech. Here is my code:
The sending part:
public void startRecording() {
    // private static final int RECORDER_SAMPLERATE = 44100;
    // private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_STEREO;
    // private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
    // bufferSize = AudioRecord.getMinBufferSize(8000,
    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
    //         AudioFormat.ENCODING_PCM_16BIT);
    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING, bufferSize);
    int i = recorder.getState();
    if (i == 1)
        recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        @Override
        public void run() {
            byte data[] = new byte[bufferSize];
            bluetoothCall.sendMessage(data);
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
The receiving part (probably the problem is in this part):
private final Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case MESSAGE_WRITE:
                // ...
            case MESSAGE_READ:
                try {
                    // private int sampleRate = 44100;
                    // int bufferSize = AudioRecord.getMinBufferSize(8000,
                    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    //         AudioFormat.ENCODING_PCM_16BIT);
                    byte[] readBuf = (byte[]) msg.obj;
                    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                            bufferSize, AudioTrack.PERFORMANCE_MODE_LOW_LATENCY);
                    mAudioTrack.play();
                    mAudioTrack.write(readBuf, 0, readBuf.length);
                    mAudioTrack.release();
                } catch (Exception e) {
                }
                break;
        }
    }
};
VoIP quality is typically influenced by several factors:
- latency (the end-to-end time taken for a packet)
- jitter (the variance in latency)
- packet loss
Most issues in VoIP implementations are around latency and jitter, but from your description of the noise it sounds more like you might be losing data or having it corrupted somehow.
Either way, unless you are doing this for learning or academic purposes, it may be easier to use a VoIP library which will have solved these issues for you - there is quite a lot of complexity in both the signalling and the voice communication of VoIP calls.
Android has a built-in SIP library now:
https://developer.android.com/guide/topics/connectivity/sip.html
This does require a SIP server of some sort, even if you build it into your client, which may not be what you want.
You can also build your own solution around RTP, the voice data transfer part, but this will require much more work for discovering IP addresses etc.:
https://developer.android.com/reference/android/net/rtp/package-summary.html
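As an illustration of the RTP route, here is a minimal sketch using the android.net.rtp classes (the local and remote addresses and the port are placeholder assumptions; real code still needs signalling to exchange them, plus error handling):

    // Classes from android.net.rtp: AudioGroup, AudioStream, AudioCodec, RtpStream.
    // Requires the INTERNET, RECORD_AUDIO and MODIFY_AUDIO_SETTINGS permissions.
    AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
    audioManager.setMode(AudioManager.MODE_IN_COMMUNICATION);

    AudioGroup audioGroup = new AudioGroup();
    audioGroup.setMode(AudioGroup.MODE_NORMAL);

    // Bind an RTP audio stream to a local address and pick a codec
    AudioStream audioStream = new AudioStream(InetAddress.getByName("192.168.0.10")); // placeholder local IP
    audioStream.setCodec(AudioCodec.PCMU);
    audioStream.setMode(RtpStream.MODE_NORMAL);

    // Point the stream at the peer and join the group to start the audio flow
    audioStream.associate(InetAddress.getByName("192.168.0.20"), 5004); // placeholder remote IP/port
    audioStream.join(audioGroup);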
You can often use SIP clients without a server, but you need to work out the IP address and, more trickily, the port (https://stackoverflow.com/a/44449337/334402).
If you do want to use SIP, there are open-source SIP servers available - e.g.:
https://www.opensips.org/About/About
In my app, I use an AudioRecord to detect when an audio signal is received. I have the app working on a single Android device, but I get errors when testing on other devices. Namely, I get the error
start() status -38
Here is my code:
protected AudioTrack mAudioTrack;
protected AudioRecord mRecorder;

protected Runnable mRecordFeed = new Runnable() {
    @Override
    public void run() {
        while (mRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            short[] data = new short[mBufferSize / 2]; // the buffer size is in bytes
            // gets the audio output from the microphone into the short sample array
            mRecorder.read(data, 0, mBufferSize / 2);
            mDecoder.appendSignal(data);
        }
    }
};

protected void setupAudioRecorder() {
    Log.d(TAG, "set up audio recorder");
    // make sure that the settings of the recorder match the settings of the decoder
    // most devices can't record anything but 44100 samples in 16bit PCM format...
    mBufferSize = AudioRecord.getMinBufferSize(FSKConfig.SAMPLE_RATE_44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    // scale up the buffer... reading larger amounts of data
    // minimizes the chance of missing data because of thread priority
    mBufferSize *= 10;
    // again, make sure the recorder settings match the decoder settings
    mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, FSKConfig.SAMPLE_RATE_44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mBufferSize);
    if (mRecorder.getState() == AudioRecord.STATE_INITIALIZED) {
        mRecorder.startRecording();
        // start a thread to read the audio data
        Thread thread = new Thread(mRecordFeed);
        thread.setPriority(Thread.MAX_PRIORITY);
        thread.start();
    } else {
        Log.i(TAG, "Please check the recorder settings, something is wrong!");
    }
}
What does this status -38 mean, and how can I resolve it? I can't seem to find any documentation anywhere.
I am trying to build a mic application where sound from the mic is directly played through the speaker. The problem is that there is a delay in the sound heard. The code is given below. Is there a way to avoid this delay? I have heard that it can be avoided by adding native code in C/C++ and then calling it from Java. Is this possible? If so, how?
public class MainActivity extends AppCompatActivity {
    boolean isRecording;
    AudioManager am;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
        Record record = new Record();
        record.run();
    }

    public class Record extends Thread {
        static final int bufferSize = 200000;
        final short[] buffer = new short[bufferSize];
        short[] readBuffer = new short[bufferSize];

        public void run() {
            isRecording = true;
            android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
            int buffersize = AudioRecord.getMinBufferSize(11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize);
            AudioTrack atrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize, AudioTrack.MODE_STREAM);
            am.setRouting(AudioManager.MODE_NORMAL, AudioManager.ROUTE_EARPIECE, AudioManager.ROUTE_ALL);
            atrack.setPlaybackRate(11025);
            byte[] buffer = new byte[buffersize];
            arec.startRecording();
            atrack.play();
            while (isRecording) {
                arec.read(buffer, 0, buffersize);
                atrack.write(buffer, 0, buffer.length);
            }
            arec.stop();
            atrack.stop();
            isRecording = false;
        }
    }
}
Use this class to set up native audio on Android: https://github.com/superpoweredSDK/Low-Latency-Android-Audio-iOS-Audio-Engine/tree/master/Superpowered/AndroidIO
You can find example projects there as well.
Well, you can try this library called Superpowered, which claims to offer low-latency audio, instead of writing your own native code.
Hope this works for you. The source is also available on GitHub.
I want to record some audio using AudioRecord. In order to initialize the AudioRecord object you must provide several arguments (e.g. rate, channel, encoding), and since different combinations of arguments are supported by different hardware devices, I went and checked functional apps like
Ringdroid:
Audio recording done in Ringdroid
and Rehearsal Assistant:
Audio recording in Rehearsal Assistant
As mentioned in the documentation of the AudioRecord class, the configuration that will always work is rate = 44100 and channel = CHANNEL_IN_MONO.
I am using the same arguments when initializing my AudioRecord object, but I still get a runtime error saying that my object is uninitialized. Since Ringdroid works fine on my device (Nexus 5), I have used the same configuration when creating my AudioRecord object.
package com.example.android.visualizeaudio;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.view.Menu;
import android.view.View;
import android.widget.Button;
import android.widget.Toast;

public class MainActivity extends Activity {
    int mSampleRate = 44100;
    Button startButton;
    boolean started = false;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        startButton = (Button) this.findViewById(R.id.start_button);
    }

    private void RecordAudio() {
        int minBufferSize = AudioRecord.getMinBufferSize(
                mSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        // make sure minBufferSize can contain at least 1 second of audio (16 bit samples).
        if (minBufferSize < mSampleRate * 2) {
            minBufferSize = mSampleRate * 2;
        }
        AudioRecord audioRecord = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                mSampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize
        );
        audioRecord.startRecording();
        // Do some stuff here with the recorded data
        audioRecord.stop();
        audioRecord.release();
    }

    @Override
    public boolean onCreateOptionsMenu(Menu menu) {
        getMenuInflater().inflate(R.menu.menu_main, menu);
        return true;
    }

    public void startRec(View view) {
        if (started) {
            started = false;
            startButton.setText("Start");
        } else {
            started = true;
            startButton.setText("Stop");
            Toast.makeText(this, "Recording started", Toast.LENGTH_LONG);
            RecordAudio();
        }
    }
}
I am attaching the object inspection during debugging in case it provides more insight
Thank you
Switching to SDK target version 22 did the trick. With target SDK 23 I had these errors. I don't know why, but it seems that the resource I am trying to access is used by the OS.
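For what it's worth, the likely cause is that with targetSdkVersion 23 (Android 6.0) the RECORD_AUDIO permission must also be granted at runtime; declaring it in the manifest is no longer enough. A minimal sketch of such a request inside the Activity (the request code value is an arbitrary assumption):

    // import android.content.pm.PackageManager;
    private static final int REQUEST_RECORD_AUDIO = 1; // arbitrary request code

    private void ensureMicPermission() {
        // checkSelfPermission/requestPermissions exist on Activity from API 23
        if (checkSelfPermission(android.Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            requestPermissions(new String[] { android.Manifest.permission.RECORD_AUDIO },
                    REQUEST_RECORD_AUDIO);
        }
    }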
I'm implementing an app which will repeat everything I'm telling it.
What I need is to play back the sound I'm recording into a buffer with just one second of delay, so that I would be listening to myself, but delayed by 1 second.
This is the run method of my Recorder class:
public void run()
{
    AudioRecord recorder = null;
    int ix = 0;
    buffers = new byte[256][160];
    try
    {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(AudioSource.MIC, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
        recorder.startRecording();
        Timer t = new Timer();
        SeekBar barra = (SeekBar) findViewById(R.id.barraDelay);
        t.schedule(r = new Reproductor(), barra.getProgress());
        while (!stopped)
        {
            byte[] buffer = buffers[ix++ % buffers.length];
            N = recorder.read(buffer, 0, buffer.length);
        }
    }
    catch (Throwable x)
    {
    }
    finally
    {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}
And this is the run method of my player:
public void run() {
    reproducir = true;
    AudioTrack track = null;
    int jx = 0;
    try
    {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
        track.play();
        /*
         * Loops until something outside of this thread stops it.
         * Reads the data from the recorder and writes it to the audio track for playback.
         */
        while (reproducir)
        {
            byte[] buffer = buffers[jx++ % buffers.length];
            track.write(buffer, 0, buffer.length);
        }
    }
    catch (Throwable x)
    {
    }
    /*
     * Frees the thread's resources after the loop completes so that it can be run again
     */
    finally
    {
        track.stop();
        track.release();
        track = null;
    }
}
Reproductor is an inner class extending TimerTask and implementing the "run" method.
Many thanks!
At least you should change the following line of your player
int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
to
int N = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
because the API requires that (albeit the constant values are identical).
But this is only a marginal point. The main point is that you did not really present an approach to your problem, only two generic methods.
The core of a working solution is to use a ring buffer with a size of one second, with AudioTrack reading a block of it just ahead of AudioRecord writing new data to the same block, both at the same sample rate.
I would suggest doing that inside a single thread, as sketched below.
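A minimal sketch of that idea, assuming 44100 Hz 16-bit stereo and a stop flag like the `stopped` field in the question (error handling omitted):

    int sampleRate = 44100;
    byte[] ring = new byte[sampleRate * 2 * 2]; // exactly 1 s of 16-bit stereo PCM
    int blockSize = AudioRecord.getMinBufferSize(sampleRate,
            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);

    AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, blockSize * 10);
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, blockSize * 10,
            AudioTrack.MODE_STREAM);

    recorder.startRecording();
    track.play();
    int pos = 0;
    while (!stopped) {
        int n = Math.min(blockSize, ring.length - pos);
        // play the samples recorded one lap (one second) ago...
        track.write(ring, pos, n);
        // ...then overwrite that block with fresh input at the same position
        recorder.read(ring, pos, n);
        pos = (pos + n) % ring.length;
    }
    recorder.stop();
    recorder.release();
    track.stop();
    track.release();

The first pass through the buffer plays silence (the zero-initialized array), which is exactly the one-second head start that produces the delay.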