I'm streaming mic audio between two devices. Everything is working, but I have a bad echo.
Here is what I'm doing.
The reading thread:
int sampleFreq = 22050;
int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int minBuffer = 2 * AudioTrack.getMinBufferSize(sampleFreq, channelConfig, audioFormat);

AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleFreq,
        channelConfig,
        audioFormat,
        minBuffer,
        AudioTrack.MODE_STREAM);
atrack.play();

byte[] buffer = new byte[minBuffer];
int bytes;
while (true) {
    try {
        // Read from the InputStream
        bytes = mmInStream.read(buffer);
        atrack.write(buffer, 0, buffer.length);
        atrack.flush();
    } catch (IOException e) {
        Log.e(TAG, "disconnected", e);
        break;
    }
}
Here is the recording thread:
int sampleRate = 22050;
int channelMode = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int buffersize = 2 * AudioRecord.getMinBufferSize(sampleRate, channelMode, audioFormat);

AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelMode,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);

final byte[] buffer = new byte[buffersize];
arec.startRecording();
while (true) {
    arec.read(buffer, 0, buffersize);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                mOutputStream.write(buffer);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
}
Am I doing something wrong?
You need echo cancellation logic. Here is what I did on my ARMv5 (WM8650) processor (Android 2.2) to remove the echo.
1. I wrapped Speex with JNI and called its echo-processing routines before sending PCM frames to the encoder. No echo was cancelled, no matter what Speex settings I tried.
2. Because Speex is very sensitive to the delay between playback and echo frames, I implemented a queue and queued all packets sent to AudioTrack (a sketch of this queue appears after this list). The size of the queue should be roughly equal to the size of the internal AudioTrack buffer. This way, packets were handed to echo_playback at roughly the time when AudioTrack sent them to the sound card from its internal buffer. The delay was removed with this approach, but the echo was still not cancelled.
3. I wrapped the WebRtc echo cancellation part with JNI and called its methods before sending packets to the encoder. The echo was still present, but the library was obviously trying to cancel it.
4. I applied the buffering technique described in point 2 and it finally started to work. The delay needs to be adjusted for each device, though. Note also that WebRtc has a mobile and a full version of echo cancellation. The full version substantially slows the processor and should probably be run only on ARMv7. The mobile version works, but with lower quality.
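For what it's worth, here is a minimal Java sketch of the queue from point 2. The echoPlayback method stands in for the JNI call into the canceller (e.g. speex_echo_playback or WebRtc's far-end routine); the method name, the frame-based sizing, and the class itself are my assumptions, not code from the original project.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FarEndQueue {
    // Hypothetical JNI binding, implemented in the native wrapper.
    private native void echoPlayback(short[] farEndFrame);

    // Capacity should roughly match the AudioTrack internal buffer,
    // measured in frames; tune this per device.
    private final BlockingQueue<short[]> queue;

    public FarEndQueue(int framesInAudioTrackBuffer) {
        queue = new ArrayBlockingQueue<short[]>(framesInAudioTrackBuffer);
    }

    // Call once for every frame written to AudioTrack.
    public void onFrameWritten(short[] frame) throws InterruptedException {
        if (!queue.offer(frame)) {
            // Queue is full, so the oldest frame is approximately the one the
            // sound card is playing right now; hand it to the echo canceller.
            echoPlayback(queue.take());
            queue.put(frame);
        }
    }
}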
I hope this will help someone.
Could be this:
bytes = mmInStream.read(buffer);
atrack.write(buffer, 0, buffer.length);
If the buffer remained full from the previous call and the new read does not fill it (so bytes < buffer.length), you re-play the old tail of the buffer.
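A sketch of the corrected loop, writing only the bytes actually read (mmInStream, atrack, and buffer are from the question's code):

int bytes;
while (true) {
    try {
        bytes = mmInStream.read(buffer);
        if (bytes < 0) break;            // stream closed
        atrack.write(buffer, 0, bytes);  // write only what was read
        // the per-iteration atrack.flush() can be dropped; per the docs
        // it is a no-op while the track is playing
    } catch (IOException e) {
        Log.e(TAG, "disconnected", e);
        break;
    }
}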
The goal is to organize a voice call between two devices. The problem is in the receiving part: I get a very high level of noise, so it is impossible to understand the speech. Here is my code:
The sending part:
public void startRecording() {
    // private static final int RECORDER_SAMPLERATE = 44100;
    // private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_STEREO;
    // private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
    // bufferSize = AudioRecord.getMinBufferSize(8000,
    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
    //         AudioFormat.ENCODING_PCM_16BIT);

    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING, bufferSize);

    int i = recorder.getState();
    if (i == 1) // AudioRecord.STATE_INITIALIZED
        recorder.startRecording();
    isRecording = true;

    recordingThread = new Thread(new Runnable() {
        @Override
        public void run() {
            byte data[] = new byte[bufferSize];
            bluetoothCall.sendMessage(data);
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
The receiving part (probably the problem is in this part):
private final Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case MESSAGE_WRITE:
                // ...
            case MESSAGE_READ:
                try {
                    // private int sampleRate = 44100;
                    // int bufferSize = AudioRecord.getMinBufferSize(8000,
                    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    //         AudioFormat.ENCODING_PCM_16BIT);
                    byte[] readBuf = (byte[]) msg.obj;
                    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                            bufferSize, AudioTrack.PERFORMANCE_MODE_LOW_LATENCY);
                    mAudioTrack.play();
                    mAudioTrack.write(readBuf, 0, readBuf.length);
                    mAudioTrack.release();
                } catch (Exception e) {
                }
                break;
        }
    }
};
VoIP quality is typically influenced by several factors:
latency (end to end time taken for a packet)
jitter (variance in latency)
packet loss
Most issues in VoIP implementations are usually around latency and jitter, but from your description of noise it sounds more like you are losing data or having it corrupted somehow.
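One thing worth checking in your receiving code: a new AudioTrack is created and released for every message, and readBuf.length may be larger than the number of bytes actually read. Here is a sketch of a sturdier receive path; it assumes the sender records 8000 Hz mono and that the read byte count arrives in msg.arg1 (both are assumptions, adjust to your protocol):

// Created once, not per message; mono output to match a mono recording.
private final int sampleRate = 8000; // must match the sender
private final int bufferSize = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
private final AudioTrack mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRate, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        bufferSize, AudioTrack.MODE_STREAM);

private void onAudioMessage(Message msg) {
    byte[] readBuf = (byte[]) msg.obj;
    int bytesRead = msg.arg1; // assumption: the sender puts the byte count here
    if (mAudioTrack.getPlayState() != AudioTrack.PLAYSTATE_PLAYING) {
        mAudioTrack.play();
    }
    mAudioTrack.write(readBuf, 0, bytesRead); // write only the received bytes
}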
Either way, unless you are doing this for learning or academic purposes, it may be easier to use a VoIP library which will have solved these issues for you - there is quite a lot of complexity in both the signalling and the voice communication for VoIP calls.
Android has a built in SIP library now:
https://developer.android.com/guide/topics/connectivity/sip.html
This does require a SIP server of some sort, even if you build it into your client, which may not be what you want.
You can also build your own solution around RTP, the voice data transfer part, but this will require much more work for discovering IP addresses etc.:
https://developer.android.com/reference/android/net/rtp/package-summary.html
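For reference, the android.net.rtp classes linked above can wire up an RTP audio path with very little code. A minimal sketch; the addresses and port are placeholders, you still need your own signalling to exchange the real ones, plus the INTERNET and RECORD_AUDIO permissions:

import android.net.rtp.AudioCodec;
import android.net.rtp.AudioGroup;
import android.net.rtp.AudioStream;
import android.net.rtp.RtpStream;
import java.net.InetAddress;

void startRtpAudio() throws Exception {
    InetAddress local = InetAddress.getByName("192.168.0.2");  // placeholder
    InetAddress remote = InetAddress.getByName("192.168.0.3"); // placeholder

    AudioStream stream = new AudioStream(local); // binds an RTP socket locally
    stream.setCodec(AudioCodec.PCMU);            // G.711 u-law
    stream.setMode(RtpStream.MODE_NORMAL);       // send and receive
    stream.associate(remote, 5004);              // remote RTP endpoint

    AudioGroup group = new AudioGroup();
    group.setMode(AudioGroup.MODE_NORMAL);       // MODE_ECHO_SUPPRESSION also exists
    stream.join(group);                          // starts capture and playback
}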
You can often use SIP clients without a server, but you need to work out the IP address and, more trickily, the port (https://stackoverflow.com/a/44449337/334402).
If you do want to use SIP, there are open-source SIP servers available - e.g.:
https://www.opensips.org/About/About
In my app, I use an AudioRecord to detect when an audio signal is received. I have the app working on a single Android device but am getting errors when testing on other devices. Namely, I get the error
start() status -38
Here is my code:
protected AudioTrack mAudioTrack;
protected AudioRecord mRecorder;

protected Runnable mRecordFeed = new Runnable() {
    @Override
    public void run() {
        while (mRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            short[] data = new short[mBufferSize / 2]; // the buffer size is in bytes
            // gets the audio output from microphone to short array samples
            mRecorder.read(data, 0, mBufferSize / 2);
            mDecoder.appendSignal(data);
        }
    }
};

protected void setupAudioRecorder() {
    Log.d(TAG, "set up audio recorder");
    // make sure that the settings of the recorder match the settings of the decoder
    // most devices can't record anything but 44100 samples in 16-bit PCM format...
    mBufferSize = AudioRecord.getMinBufferSize(FSKConfig.SAMPLE_RATE_44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);

    // scale up the buffer... reading larger amounts of data
    // minimizes the chance of missing data because of thread priority
    mBufferSize *= 10;

    // again, make sure the recorder settings match the decoder settings
    mRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, FSKConfig.SAMPLE_RATE_44100, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, mBufferSize);

    if (mRecorder.getState() == AudioRecord.STATE_INITIALIZED) {
        mRecorder.startRecording();
        // start a thread to read the audio data
        Thread thread = new Thread(mRecordFeed);
        thread.setPriority(Thread.MAX_PRIORITY);
        thread.start();
    } else {
        Log.i(TAG, "Please check the recorder settings, something is wrong!");
    }
}
What does this status -38 mean, and how can I resolve it? I can't seem to find any documentation anywhere.
I'm currently developing an application to transmit audio. I have two services running, one to receive it, one to send it. The important stuff of the sender looks like this:
final DatagramSocket dSocket = new DatagramSocket();
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
Log.d(TAG, "Thread starting...");

int buffersize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);
byte[] buffer = new byte[buffersize];

Log.d(TAG, "Starting to record, buffersize=" + buffersize);
arec.startRecording();
while (isRunning && !isInterrupted()) {
    try {
        Log.d(TAG, "Recording..");
        arec.read(buffer, 0, buffersize);
        DatagramPacket dPacket = new DatagramPacket(buffer, buffersize);
        for (Peer cur : mPeers) {
            if (cur.isSelf) continue;
            dPacket.setAddress(InetAddress.getByName(cur.IP_ADDRESS));
            dPacket.setPort(Config.UDP_PORT);
            dSocket.send(dPacket);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This code works and submits audio packets.
The receiver service looks like this:
// DatagramSocket dSocket = new DatagramSocket();
DatagramChannel dChannel = DatagramChannel.open();
DatagramSocket dSocket = dChannel.socket();
dSocket.setReuseAddress(true);
dSocket.setSoTimeout(2000);
dSocket.bind(new InetSocketAddress(Config.UDP_PORT));
Log.d(TAG, "DatagramSocket open.");
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

int buffersize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
AudioTrack aTrack = new AudioTrack(AudioManager.STREAM_VOICE_CALL, 11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, buffersize,
        AudioTrack.MODE_STREAM);
DatagramPacket dPacket = new DatagramPacket(new byte[buffersize], buffersize);
Log.d(TAG, "Packet with buffersize=" + buffersize);

aTrack.play();
Log.d(TAG, "Playing track..");
byte[] buffer = new byte[buffersize];
while (isRunning && !isInterrupted()) {
    try {
        dSocket.receive(dPacket);
        buffer = dPacket.getData();
        aTrack.setPlaybackRate(11025);
        aTrack.write(buffer, 0, buffer.length);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
aTrack.stop();
This also works, but after sending for more than a couple of seconds there is a huge delay; the packets still arrive, but slowly, and the audio playback simply "lags". What can I do to improve the quality? This is a direct peer-to-peer connection, no servers involved. Should I increase the buffer size? The current buffer size is the minimum buffer size I get from Android, which is 1024 on my devices (two Galaxy Nexus). BTW, the services each start another thread, which has its priority set to "URGENT" (which I believe is the highest available). For my purposes, the mPeers list only has one peer, so the for loop is not really the bottleneck, I'd guess.
Have you checked what happens when you remove the network-related part of the sender loop? I.e., does the read() call from the microphone return immediately? Also, you describe that packets arrive slowly, but have you checked whether there is a large delay between when they are sent as well?
The reason I am asking is that the phenomenon you describe could be caused by the send socket blocking because its buffer is full. If the socket is blocking, the send() call will take a long time to complete. Unless you have very high-bandwidth traffic or a very slow CPU, this should not happen with UDP sockets (they are typically fire-and-forget), but it is worth checking.
In order to avoid blocking, create a non-blocking socket. I am not too familiar with Java networking, but it seems like a DatagramChannel is needed to do this; a sketch follows.
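A minimal sketch of a non-blocking UDP send with DatagramChannel (the address, port, and buffer contents are placeholders):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

DatagramChannel channel = DatagramChannel.open();
channel.configureBlocking(false); // send() now never blocks
InetSocketAddress peer = new InetSocketAddress("192.168.0.3", 50000); // placeholder

byte[] buffer = new byte[1024]; // filled from AudioRecord.read() as in the question
int sent = channel.send(ByteBuffer.wrap(buffer), peer);
if (sent == 0) {
    // The datagram was dropped locally because the OS buffer was full;
    // decide whether to retry or simply skip this audio frame.
}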
Okay, so the delay is gone. What I've done is simply increase the UDP packet's buffer size. The minimum buffer size I received from Android was (on my devices) 1024 bytes. Now I do something like this:
int maxBufferSize = 4096; // my value. see what's working best for you.
int minBufferSize = AudioRecord.getMinBufferSize(11025,
AudioFormat.CHANNEL_CONFIGURATION_MONO,
AudioFormat.ENCODING_PCM_16BIT);
int actualBufferSize = Math.max(minBufferSize, maxBufferSize);
With a 4 KB buffer, the audio transmission is really good and there is absolutely no delay.
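To make the wiring explicit, this is how the computed size plugs into the code from the question (only the sizes change):

AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, actualBufferSize);
byte[] buffer = new byte[actualBufferSize];
DatagramPacket dPacket = new DatagramPacket(buffer, actualBufferSize);
// ...and size the receiver's packet and AudioTrack buffer the same way.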
//constructor
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

/////////////
//thread run() method
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
try {
    while (!stopped) {
        //if not paused, upload audio
        if (uploadAudio == true) {
            short[][] buffers = new short[256][160];
            int ix = 0;
            //allocate buffer for audio data
            short[] buffer = buffers[ix++ % buffers.length];
            //read audio data from the recorder
            N = recorder.read(buffer, 0, buffer.length);
            //create a byte array big enough to hold the audio data
            byte[] bytes2 = new byte[buffer.length * 2];
            //convert audio data from short[] to byte[]
            ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
            //encode audio data as ulaw; see here for the ulaw encoder code
            //(I'm using its read, maxAbsPcm and encode methods)
            read(bytes2, 0, bytes2.length);
            //send audio data
            //os.write(bytes2, 0, bytes2.length);
        }
    }
    os.close();
}
catch (Throwable x) {
    Log.w("AudioWorker", "Error reading voice AudioWorker", x);
}
finally {
    recorder.stop();
    recorder.release();
}
///////////
So this works OK. The audio is sent in the proper format to the server and played at the opposite end. However, the audio often skips. Example: saying 1, 2, 3, 4 will play back with the 4 cut off.
I believe it to be a performance issue, because I have timed some of these methods: when they take close to zero seconds everything works, but they quite often take a couple of seconds, with the byte conversion and encoding taking the most.
Any idea how I can optimize this code to get better performance? Or maybe a way to deal with lag (possibly build a cache)?
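One straightforward change to try, offered as a sketch rather than a tested fix: hoist the allocations out of the loop so each pass reuses the same buffers instead of allocating a 256x160 short array and a fresh byte array every iteration.

// Allocate once, before the while loop (names match the question's code).
short[] buffer = new short[160];                 // one 20 ms frame at 8 kHz
byte[] bytes2 = new byte[buffer.length * 2];
ByteBuffer wrapper = ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN);

while (!stopped) {
    if (uploadAudio) {
        int n = recorder.read(buffer, 0, buffer.length); // error handling omitted
        wrapper.clear();                    // rewind instead of re-wrapping
        wrapper.asShortBuffer().put(buffer, 0, n);
        read(bytes2, 0, n * 2);             // ulaw encode, as in the question
        // os.write(bytes2, 0, n * 2);      // send
    }
}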
I am trying to record data from my mobile phone's audio interface using the AudioRecord class. Following is my code:
public void Initialize() {
    buffersizebytes = AudioRecord.getMinBufferSize(SAMPPERSEC, channelConfiguration, audioEncoding); // 4096 on ion
    buffer = new short[buffersizebytes];
    buflen = buffersizebytes / 2;
    audioRecord = new AudioRecord(android.media.MediaRecorder.AudioSource.MIC,
            SAMPPERSEC, channelConfiguration, audioEncoding, buffersizebytes);
    acquire();
    for (int i = 0; i < 4096; i++) buffer[i] = 1;
}

public void acquire() {
    try {
        audioRecord.startRecording();
        mSamplesRead = audioRecord.read(buffer, 0, buffersizebytes);
        audioRecord.stop();
    } catch (Throwable t) {
        // Log.e("AudioRecord", "Recording Failed");
    }
}
I want to put my acquired data into a buffer of 4096 bytes, but my program only fills 1024 of them. Also, the first 432 bytes are zeros, even though I am sending data continuously. What could be the issue?
getMinBufferSize, as the name implies, gives you the minimum buffer size. You can set anything bigger, including 4096.
As for the first samples after initialization: my phone gives two gigantic peaks that last for about 0.5 seconds, so I guess it is caused by the recorder starting up. Try skipping a few samples (say, 500) before processing real data.
Furthermore, since you are reading into a short[] and getMinBufferSize returns a size in bytes, the buffer should hold buffersizebytes / 2 shorts.
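Putting both points together, a minimal sketch (SAMPPERSEC, channelConfiguration, audioEncoding, and audioRecord are the question's fields; the 500-sample skip is a guess to tune per device):

int bufferSizeBytes = AudioRecord.getMinBufferSize(SAMPPERSEC, channelConfiguration, audioEncoding);
short[] buffer = new short[bufferSizeBytes / 2]; // read() counts shorts, not bytes

audioRecord.startRecording();
int skipped = 0;
while (skipped < 500) { // discard the start-up transient (error handling omitted)
    skipped += audioRecord.read(buffer, 0, Math.min(buffer.length, 500 - skipped));
}
int samplesRead = audioRecord.read(buffer, 0, buffer.length); // real data from here on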