I am trying to build a mic application where sound from the mic is played directly through the speaker. The problem is that there is a delay in the sound heard. The code is given below. Is there a way to avoid this delay? I have heard that we can avoid it by adding native code in C/C++ and calling it from Java. Is that possible? If so, how?
public class MainActivity extends AppCompatActivity {
    boolean isRecording;
    AudioManager am;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        am = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
        Record record = new Record();
        record.run();
    }

    public class Record extends Thread {
        static final int bufferSize = 200000;
        final short[] buffer = new short[bufferSize];
        short[] readBuffer = new short[bufferSize];

        public void run() {
            isRecording = true;
            android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
            int buffersize = AudioRecord.getMinBufferSize(11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize);
            AudioTrack atrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, 11025, AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, buffersize, AudioTrack.MODE_STREAM);
            am.setRouting(AudioManager.MODE_NORMAL, AudioManager.ROUTE_EARPIECE, AudioManager.ROUTE_ALL);
            atrack.setPlaybackRate(11025);
            byte[] buffer = new byte[buffersize];
            arec.startRecording();
            atrack.play();
            while (isRecording) {
                arec.read(buffer, 0, buffersize);
                atrack.write(buffer, 0, buffer.length);
            }
            arec.stop();
            atrack.stop();
            isRecording = false;
        }
    }
}
Use this class to set up native audio on Android: https://github.com/superpoweredSDK/Low-Latency-Android-Audio-iOS-Audio-Engine/tree/master/Superpowered/AndroidIO
You can find example projects there as well.
Instead of writing your own native code, you can try a library called Superpowered, which claims to offer low-latency audio. Hope this works for you. The source is also available on GitHub.
The goal is to organize a voice call between two devices. The problem is in the receiving part: I get a very high level of noise, so it is impossible to understand the speech. Here is my code:
The sending part:
public void startRecording() {
    // private static final int RECORDER_SAMPLERATE = 44100;
    // private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_STEREO;
    // private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
    // bufferSize = AudioRecord.getMinBufferSize(8000,
    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
    //         AudioFormat.ENCODING_PCM_16BIT);
    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING, bufferSize);
    int i = recorder.getState();
    if (i == 1)
        recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        @Override
        public void run() {
            byte data[] = new byte[bufferSize];
            bluetoothCall.sendMessage(data);
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
The receiving part (probably the problem is in this part):
private final Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case MESSAGE_WRITE:
                // ...
            case MESSAGE_READ:
                try {
                    // private int sampleRate = 44100;
                    // int bufferSize = AudioRecord.getMinBufferSize(8000,
                    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    //         AudioFormat.ENCODING_PCM_16BIT);
                    byte[] readBuf = (byte[]) msg.obj;
                    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, bufferSize, AudioTrack.PERFORMANCE_MODE_LOW_LATENCY);
                    mAudioTrack.play();
                    mAudioTrack.write(readBuf, 0, readBuf.length);
                    mAudioTrack.release();
                } catch (Exception e) {
                }
                break;
        }
    }
};
VoIP quality is typically influenced by several factors:
latency (end to end time taken for a packet)
jitter (variance in latency)
packet loss
Most issues in VoIP implementations are usually around latency and jitter, but from your description of noise it sounds more like you might be losing data or having it corrupted somehow.
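The jitter figure mentioned above can be measured directly from packet arrival times. A minimal sketch, loosely following the interarrival-jitter estimator described in RFC 3550 (the class name and millisecond timestamps here are illustrative, not from the original post):

```java
public class JitterEstimator {
    private double jitter = 0.0;
    private long prevTransit = Long.MIN_VALUE;

    // Feed one packet's send and receive timestamps (ms);
    // returns the running jitter estimate.
    public double addPacket(long sendTimeMs, long recvTimeMs) {
        long transit = recvTimeMs - sendTimeMs;
        if (prevTransit != Long.MIN_VALUE) {
            long d = Math.abs(transit - prevTransit);
            // RFC 3550 smoothing: J += (|D| - J) / 16
            jitter += (d - jitter) / 16.0;
        }
        prevTransit = transit;
        return jitter;
    }

    public static void main(String[] args) {
        JitterEstimator est = new JitterEstimator();
        est.addPacket(0, 50);                      // first packet, no estimate yet
        System.out.println(est.addPacket(20, 70)); // constant transit -> 0.0
        System.out.println(est.addPacket(40, 120)); // late packet -> 1.875
    }
}
```

A rising estimate from code like this is a sign the receiver needs a jitter buffer; a stable estimate with noise on playback points at data loss or corruption instead, as discussed above.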
Either way, unless you are doing this for learning or academic purposes, it may be easier to use a VoIP library which will have solved these issues for you - there is quite a lot of complexity in both the signalling and the voice communication for VoIP calls.
Android has a built in SIP library now:
https://developer.android.com/guide/topics/connectivity/sip.html
This does require a SIP server of some sort, even if you build it into your client, which may not be what you want.
You can also build your own solution around RTP, the voice data transfer part, but this will require much more work for discovering IP addresses etc.:
https://developer.android.com/reference/android/net/rtp/package-summary.html
You can use SIP clients without a server often but you need to work out the IP address and, more trickily, the port (https://stackoverflow.com/a/44449337/334402).
If you do want to use SIP, there are open-source SIP servers available - e.g.:
https://www.opensips.org/About/About
In my app, I use an AudioRecorder to detect when an audio signal is received. I have the app working on a single Android device but am getting errors testing on other devices. Namely, I get the error
start() status -38
Here is my code:
protected AudioTrack mAudioTrack;
protected AudioRecord mRecorder;

protected Runnable mRecordFeed = new Runnable() {
    @Override
    public void run() {
        while (mRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            short[] data = new short[mBufferSize / 2]; // the buffer size is in bytes
            // gets the audio output from microphone to short array samples
            mRecorder.read(data, 0, mBufferSize / 2);
            mDecoder.appendSignal(data);
        }
    }
};
protected void setupAudioRecorder() {
    Log.d(TAG, "set up audio recorder");
    // make sure that the settings of the recorder match the settings of the decoder
    // most devices can't record anything but 44100 samples in 16bit PCM format...
    mBufferSize = AudioRecord.getMinBufferSize(FSKConfig.SAMPLE_RATE_44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    // scale up the buffer... reading larger amounts of data
    // minimizes the chance of missing data because of thread priority
    mBufferSize *= 10;
    // again, make sure the recorder settings match the decoder settings
    mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, FSKConfig.SAMPLE_RATE_44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mBufferSize);
    if (mRecorder.getState() == AudioRecord.STATE_INITIALIZED) {
        mRecorder.startRecording();
        // start a thread to read the audio data
        Thread thread = new Thread(mRecordFeed);
        thread.setPriority(Thread.MAX_PRIORITY);
        thread.start();
    } else {
        Log.i(TAG, "Please check the recorder settings, something is wrong!");
    }
}
What does this status -38 mean, and how can I resolve it? I can't seem to find any documentation anywhere.
I was trying to record sound from the mic. The sound is sampled against a tone running in the background.
To make it clear: I want to play a tone in the background, and when I make some noise into the microphone, that noise should be mixed with the background tone that is already playing.
The final output should be a mix of the tone being played and the signal from the microphone. How can I achieve this?
I was referring to the post Android : recording audio using audiorecord class play as fast forwarded on Stack Overflow to record data from the microphone. But I need to record the background tone as well as the microphone input.
public class StartRecording {
    private int samplePerSec = 8000;

    public void Start() {
        stopRecording.setEnabled(true);
        bufferSize = AudioRecord.getMinBufferSize(samplePerSec, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        audioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, this.samplePerSec, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize * 10);
        audioRecorder.startRecording();
        isRecording = true;
        while (isRecording && audioRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            short recordedData[] = new short[bufferSize];
            audioRecorder.read(recordedData, 0, recordedData.length); // reading from the AudioRecord
            byte[] bData = shortTobyte(recordedData);
        }
    }

    private byte[] shortTobyte(short[] recordedData) {
        int tempBuff = recordedData.length;
        byte[] bytes = new byte[tempBuff * 2]; // two bytes per 16-bit sample
        for (int i = 0; i < tempBuff; i++) {
            bytes[i * 2] = (byte) (recordedData[i] & 0x00FF);
            bytes[(i * 2) + 1] = (byte) (recordedData[i] >> 8);
            recordedData[i] = 0;
        }
        return bytes;
    }
}
Thanks in advance...
You have to use AudioTrack and AudioRecord simultaneously.
Then every buffer from the AudioRecord must be mixed with your tone (there are algorithms on Google for mixing two audio signals) and written to the AudioTrack.
You will have latency, and some problems with echo if you don't use a headset.
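A minimal sketch of that mixing step, assuming both signals are 16-bit PCM short arrays at the same sample rate (the class and method names are illustrative): add the samples and clamp to the signed 16-bit range so loud passages don't wrap around.

```java
public class PcmMixer {
    // Mix two 16-bit PCM buffers sample by sample, clamping to avoid wrap-around.
    public static short[] mix(short[] mic, short[] tone) {
        int n = Math.min(mic.length, tone.length);
        short[] out = new short[n];
        for (int i = 0; i < n; i++) {
            int sum = mic[i] + tone[i];
            // clamp to the signed 16-bit range
            if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE;
            if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE;
            out[i] = (short) sum;
        }
        return out;
    }

    public static void main(String[] args) {
        short[] mic  = {1000, -2000, 30000};
        short[] tone = {500,  -500,  10000};
        short[] mixed = mix(mic, tone);
        System.out.println(mixed[0]); // 1500
        System.out.println(mixed[1]); // -2500
        System.out.println(mixed[2]); // 32767 (clamped)
    }
}
```

The mixed buffer would then go to AudioTrack.write() exactly as an unmixed buffer would; averaging instead of clamping is a common alternative if clipping becomes audible.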
I use this code to record and play back recorded audio in real time using the AudioTrack and AudioRecord classes.
package com.example.audiotrack;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder;
import android.os.Bundle;
import android.util.Log;

public class MainActivity extends Activity {
    private int freq = 8000;
    private AudioRecord audioRecord = null;
    private Thread Rthread = null;
    private AudioManager audioManager = null;
    private AudioTrack audioTrack = null;
    byte[] buffer = new byte[freq];

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
        final int bufferSize = AudioRecord.getMinBufferSize(freq,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, freq,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                MediaRecorder.AudioEncoder.AMR_NB, bufferSize);
        audioTrack = new AudioTrack(AudioManager.ROUTE_HEADSET, freq,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                MediaRecorder.AudioEncoder.AMR_NB, bufferSize,
                AudioTrack.MODE_STREAM);
        audioTrack.setPlaybackRate(freq);
        final byte[] buffer = new byte[bufferSize];
        audioRecord.startRecording();
        Log.i("info", "Audio Recording started");
        audioTrack.play();
        Log.i("info", "Audio Playing started");
        Rthread = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    try {
                        audioRecord.read(buffer, 0, bufferSize);
                        audioTrack.write(buffer, 0, buffer.length);
                    } catch (Throwable t) {
                        Log.e("Error", "Read write failed");
                        t.printStackTrace();
                    }
                }
            }
        });
        Rthread.start();
    }
}
My problems:
1. The quality of the audio is bad.
2. When I try different frequencies, the app crashes.
Audio quality can be bad because you are using the AMR codec to compress the audio data. AMR uses compression based on an acoustic model of human speech, so any sounds other than speech will come out in poor quality.
Instead of
MediaRecorder.AudioEncoder.AMR_NB
try
AudioFormat.ENCODING_PCM_16BIT
AudioRecord is a low-level tool, so you must take care of parameter compatibility on your own. As stated in the documentation, many frequencies are not guaranteed to work.
So it is a good idea to go through all the combinations and check which of them are accessible before trying to record or play.
A nice solution has been mentioned a few times on Stack Overflow, e.g. here:
Frequency detection on Android - AudioRecord
Check the public AudioRecord findAudioRecord() method.
I'm implementing an app which will repeat everything I'm telling it.
What I need is to play the sound I'm recording into a buffer with a one-second delay,
so that I would be listening to myself, but delayed by one second.
This is the run method of my Recorder class:
public void run() {
    AudioRecord recorder = null;
    int ix = 0;
    buffers = new byte[256][160];
    try {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(AudioSource.MIC, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
        recorder.startRecording();
        Timer t = new Timer();
        SeekBar barra = (SeekBar) findViewById(R.id.barraDelay);
        t.schedule(r = new Reproductor(), barra.getProgress());
        while (!stopped) {
            byte[] buffer = buffers[ix++ % buffers.length];
            N = recorder.read(buffer, 0, buffer.length);
        }
    } catch (Throwable x) {
    } finally {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}
And this is the run one of my player:
public void run() {
    reproducir = true;
    AudioTrack track = null;
    int jx = 0;
    try {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
        track.play();
        /*
         * Loops until something outside of this thread stops it.
         * Reads the data from the recorder and writes it to the audio track for playback.
         */
        while (reproducir) {
            byte[] buffer = buffers[jx++ % buffers.length];
            track.write(buffer, 0, buffer.length);
        }
    } catch (Throwable x) {
    } finally {
        /*
         * Frees the thread's resources after the loop completes so that it can be run again.
         */
        track.stop();
        track.release();
        track = null;
    }
}
Reproductor is an inner class extending TimerTask and implementing the "run" method.
Many thanks!
At the very least, you should change the following line of your player:
int N = AudioRecord.getMinBufferSize(44100,AudioFormat.CHANNEL_IN_STEREO,AudioFormat.ENCODING_PCM_16BIT);
to
int N = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
because the API requires that (albeit the constant values are identical).
But this is only a marginal point. The main point is that you did not really present an approach to your problem, only two generic methods.
The core of a working solution is a ring buffer sized to hold one second of audio, with AudioTrack reading each block just ahead of where AudioRecord writes new data into the same buffer, both at the same sample rate.
I would suggest doing that inside a single thread.
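A minimal sketch of that ring-buffer idea, with the AudioRecord/AudioTrack calls replaced by plain array operations so the delay logic stands alone (the class name and sizes are illustrative). The read position stays exactly one buffer length behind the write position, so every sample comes back out one delay later:

```java
public class DelayLine {
    private final short[] ring;
    private int writePos = 0;

    // delaySamples = sampleRate * channels for a one-second delay, etc.
    public DelayLine(int delaySamples) {
        ring = new short[delaySamples];
    }

    // Push one block of recorded samples; get back the block from one delay ago.
    // In the real app, `in` would come from AudioRecord.read() and the returned
    // block would go straight to AudioTrack.write(), in the same loop iteration.
    public short[] process(short[] in) {
        short[] out = new short[in.length];
        for (int i = 0; i < in.length; i++) {
            out[i] = ring[writePos];   // read the sample written delaySamples ago
            ring[writePos] = in[i];    // overwrite it with the fresh sample
            writePos = (writePos + 1) % ring.length;
        }
        return out;
    }

    public static void main(String[] args) {
        DelayLine dl = new DelayLine(4); // 4-sample delay for demonstration
        short[] a = dl.process(new short[]{1, 2, 3, 4}); // silence: buffer was empty
        short[] b = dl.process(new short[]{5, 6, 7, 8}); // first block comes back out
        System.out.println(a[0] + " " + b[0] + " " + b[3]); // 0 1 4
    }
}
```

Because read and write happen in the same loop, one thread suffices and the two-thread synchronization problem in the original code disappears; the SeekBar value would simply set delaySamples when the DelayLine is constructed.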