I'm currently developing an application to transmit audio. I have two services running: one to receive it, one to send it. The important part of the sender looks like this:
final DatagramSocket dSocket = new DatagramSocket();
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
Log.d(TAG, "Thread starting...");
int buffersize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
AudioRecord arec = new AudioRecord(
        MediaRecorder.AudioSource.MIC, 11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);
byte[] buffer = new byte[buffersize];
Log.d(TAG, "Starting to record, buffersize=" + buffersize);
arec.startRecording();
while (isRunning && !isInterrupted()) {
    try {
        Log.d(TAG, "Recording..");
        arec.read(buffer, 0, buffersize);
        DatagramPacket dPacket = new DatagramPacket(buffer, buffersize);
        for (Peer cur : mPeers) {
            if (cur.isSelf) continue;
            dPacket.setAddress(InetAddress.getByName(cur.IP_ADDRESS));
            dPacket.setPort(Config.UDP_PORT);
            dSocket.send(dPacket);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This code works and submits audio packets.
The receiver service looks like this:
// DatagramSocket dSocket = new DatagramSocket();
DatagramChannel dChannel = DatagramChannel.open();
DatagramSocket dSocket = dChannel.socket();
dSocket.setReuseAddress(true);
dSocket.setSoTimeout(2000);
dSocket.bind(new InetSocketAddress(Config.UDP_PORT));
Log.d(TAG, "DatagramSocket open.");
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
int buffersize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
AudioTrack aTrack = new AudioTrack(
        AudioManager.STREAM_VOICE_CALL, 11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, buffersize,
        AudioTrack.MODE_STREAM);
DatagramPacket dPacket = new DatagramPacket(new byte[buffersize], buffersize);
Log.d(TAG, "Packet with buffersize=" + buffersize);
aTrack.play();
Log.d(TAG, "Playing track..");
byte[] buffer = new byte[buffersize];
while (isRunning && !isInterrupted()) {
    try {
        dSocket.receive(dPacket);
        buffer = dPacket.getData();
        aTrack.setPlaybackRate(11025);
        aTrack.write(buffer, 0, buffer.length);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
aTrack.stop();
This also works, but after sending for more than a couple of seconds there is a huge delay; the packets still arrive, but slowly, and the audio playback simply "lags". What can I do to improve the quality? This is a direct peer-to-peer connection, no servers involved. Should I increase the buffer size? The current buffer size is the minimum buffer size I get from Android, which is 1024 on my devices (two Galaxy Nexus). BTW, the services each start another thread, whose priority is set to "URGENT" (which I believe is the highest available). For my purposes the mPeers list only has one peer, so the for loop is not really delaying this, I'd guess.
Have you checked what happens when you remove the network-related part of the sender loop? I.e., does the read() call from the microphone return immediately? Also, you describe that packets arrive slowly, but have you checked whether there is a large delay between when they are sent as well?
The reason I am asking is that the phenomenon you describe could be caused by the send socket blocking because its buffer is full. If the socket is blocking, the send() call will take a long time to complete. Unless you have very high-bandwidth traffic or a very slow CPU, this should not happen with UDP sockets (they are typically fire-and-forget), but it is worth checking.
In order to avoid blocking, create a non-blocking socket. I am not too familiar with Java networking, but it seems like a DatagramChannel is needed to do this.
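I am not sure this is the cause here, but for reference, a minimal sketch of what a non-blocking sender could look like with java.nio (peerAddress is a placeholder for the peer's InetAddress; buffersize and Config.UDP_PORT as in your code):

DatagramChannel channel = DatagramChannel.open();
channel.configureBlocking(false);                      // send() now never blocks the audio loop
ByteBuffer packet = ByteBuffer.allocate(buffersize);

// inside the capture loop, after arec.read(buffer, 0, buffersize):
packet.clear();
packet.put(buffer, 0, buffersize);
packet.flip();
int sent = channel.send(packet, new InetSocketAddress(peerAddress, Config.UDP_PORT));
if (sent == 0) {
    // the socket's send buffer was full; drop this frame instead of stalling the audio loop
}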
Okay, so the delay is gone. What I've done is simply increase the UDP packet's buffer size. The minimum buffer size I got from Android was (on my devices) 1024 bytes. Now I do something like this:
int maxBufferSize = 4096; // my value. see what's working best for you.
int minBufferSize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
int actualBufferSize = Math.max(minBufferSize, maxBufferSize);
With a 4 KB buffer, the audio transmission is really good and there is absolutely no delay.
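For completeness, the larger size simply replaces the minimum size everywhere it was used before; roughly like this (a sketch, same variable names as in my snippets above):

AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC, 11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, actualBufferSize);
byte[] buffer = new byte[actualBufferSize];
arec.startRecording();

// per loop iteration: one bigger read, one bigger packet
int read = arec.read(buffer, 0, actualBufferSize);
DatagramPacket dPacket = new DatagramPacket(buffer, read);   // send only what was actually read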
The goal is to organize a voice call between two devices. The problem is in the receiving part: I get a very high level of noise, so it is impossible to understand the speech. Here is my code:
The sending part:
public void startRecording() {
    // private static final int RECORDER_SAMPLERATE = 44100;
    // private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_STEREO;
    // private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
    // bufferSize = AudioRecord.getMinBufferSize(8000,
    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
    //         AudioFormat.ENCODING_PCM_16BIT);

    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING, bufferSize);

    int i = recorder.getState();
    if (i == 1)
        recorder.startRecording();
    isRecording = true;

    recordingThread = new Thread(new Runnable() {
        @Override
        public void run() {
            byte data[] = new byte[bufferSize];
            bluetoothCall.sendMessage(data);
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
The receiving part (the problem is probably in this part):
private final Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case MESSAGE_WRITE:
                // ...
            case MESSAGE_READ:
                try {
                    // private int sampleRate = 44100;
                    // int bufferSize = AudioRecord.getMinBufferSize(8000,
                    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    //         AudioFormat.ENCODING_PCM_16BIT);
                    byte[] readBuf = (byte[]) msg.obj;
                    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                            bufferSize, AudioTrack.PERFORMANCE_MODE_LOW_LATENCY);
                    mAudioTrack.play();
                    mAudioTrack.write(readBuf, 0, readBuf.length);
                    mAudioTrack.release();
                } catch (Exception e) {
                }
                break;
        }
    }
};
VoIP quality is typically influenced by several factors:
latency (the end-to-end time taken by a packet)
jitter (variance in latency)
packet loss
Most issues in VoIP implementations are usually around latency and jitter, but from your description of noise it sounds more like you might be losing data or having it corrupted somehow.
Either way, unless you are doing this for learning or academic purposes, it may be easier to use a VoIP library which will have solved these issues for you; there is quite a lot of complexity in both the signalling and the voice communication for VoIP calls.
Android has a built in SIP library now:
https://developer.android.com/guide/topics/connectivity/sip.html
This does require a SIP server of some sort, even if you build it into your client, which may not be what you want.
You can also build your own solution around RTP, the voice data transfer part, but this will require much more work for discovering IP addresses etc.:
https://developer.android.com/reference/android/net/rtp/package-summary.html
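As a rough illustration of the android.net.rtp route (a sketch only; localAddress, remoteAddress and remotePort are placeholders you would have to exchange over your own signalling, and permission/exception handling is omitted):

AudioGroup group = new AudioGroup();
group.setMode(AudioGroup.MODE_NORMAL);                 // there is also MODE_ECHO_SUPPRESSION

AudioStream stream = new AudioStream(localAddress);    // binds a local RTP port, see stream.getLocalPort()
stream.setCodec(AudioCodec.PCMU);
stream.setMode(RtpStream.MODE_NORMAL);
stream.associate(remoteAddress, remotePort);           // the peer's address/port from your signalling
stream.join(group);                                    // audio starts flowing both ways

// later, to tear down:
stream.join(null);
stream.release();
group.clear();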
You can often use SIP clients without a server, but you need to work out the IP address and, more trickily, the port (https://stackoverflow.com/a/44449337/334402).
If you do want to use SIP, there are open-source SIP servers available, e.g.:
https://www.opensips.org/About/About
// constructor
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
/////////////

// thread run() method
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
try {
    while (!stopped) {
        try {
            // if not paused, upload audio
            if (uploadAudio == true) {
                short[][] buffers = new short[256][160];
                int ix = 0;
                // allocate a buffer for the audio data
                short[] buffer = buffers[ix++ % buffers.length];
                // read audio data from the recorder
                N = recorder.read(buffer, 0, buffer.length);
                // create a byte array big enough to hold the audio data
                byte[] bytes2 = new byte[buffer.length * 2];
                // convert the audio data from short[] to byte[]
                ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
                // encode the audio data as u-law
                read(bytes2, 0, bytes2.length);

See here for the u-law encoder code. I'm using the read, maxAbsPcm and encode methods.

                // send audio data
                //os.write(bytes2, 0, bytes2.length);
            }
        } finally {
        }
    }
    os.close();
} catch (Throwable x) {
    Log.w("AudioWorker", "Error reading voice AudioWorker", x);
} finally {
    recorder.stop();
    recorder.release();
}
///////////
So this works OK. The audio is sent in the proper format to the server and played back at the other end. However, the audio often skips. Example: saying "1, 2, 3, 4" will play back with the 4 cut off.
I believe it is a performance issue, because I have timed some of these methods: when they take close to zero time everything works, but they quite often take a couple of seconds, with the byte conversion and the encoding taking the longest.
Any idea how I can optimize this code to get better performance? Or maybe a way to deal with the lag (possibly build a cache)?
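What I have in mind with a cache is roughly a producer/consumer split, so the read from the recorder is never held up by the conversion and encoding (an untested sketch, using java.util.concurrent.LinkedBlockingQueue):

final BlockingQueue<short[]> pending = new LinkedBlockingQueue<short[]>();

// capture thread, inside the existing while (!stopped) loop: only read and enqueue
short[] frame = new short[160];
recorder.read(frame, 0, frame.length);
pending.offer(frame);

// worker thread: convert, encode and send at its own pace
try {
    while (!stopped) {
        short[] f = pending.take();                 // waits for the next frame
        byte[] bytes2 = new byte[f.length * 2];
        ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(f);
        // u-law encode bytes2 and os.write(...) here
    }
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}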
I am trying to record data from my mobile phone's audio interface using the AudioRecord class. Following is my code:
public void Initialize() {
    buffersizebytes = AudioRecord.getMinBufferSize(SAMPPERSEC,
            channelConfiguration, audioEncoding); // 4096 on ion
    buffer = new short[buffersizebytes];
    buflen = buffersizebytes / 2;
    audioRecord = new AudioRecord(
            android.media.MediaRecorder.AudioSource.MIC, SAMPPERSEC,
            channelConfiguration, audioEncoding, buffersizebytes);
    acquire();
    for (int i = 0; i < 4096; i++) buffer[i] = 1;
}

public void acquire() {
    try {
        audioRecord.startRecording();
        mSamplesRead = audioRecord.read(buffer, 0, buffersizebytes);
        audioRecord.stop();
    } catch (Throwable t) {
        // Log.e("AudioRecord", "Recording Failed");
    }
}
I want to put my acquired data into a buffer of 4096 bytes, but my program only puts data into 1024 of them. Also, the first 432 bytes are zeros, even though I am sending data continuously. What could be the issue?
getMinBufferSize, as the name implies, gives you the minimum buffer size. You can set anything bigger, including 4096.
As for the first samples after initialization: my phone gives two gigantic peaks that last for about 0.5 seconds, so I guess it is caused by the recorder starting up. Try skipping a few samples (say, 500) before processing real data.
Furthermore, the size of buffer should be buffersizebytes / 2, since each 16-bit sample occupies two bytes.
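Putting those two points together, a reading loop could look roughly like this (a sketch; keepRecording is just a placeholder flag, and the AudioRecord setup is as in your Initialize()):

int bufferSizeBytes = Math.max(4096,
        AudioRecord.getMinBufferSize(SAMPPERSEC, channelConfiguration, audioEncoding));
short[] buffer = new short[bufferSizeBytes / 2];         // 16-bit PCM: two bytes per sample

audioRecord.startRecording();
int skipped = 0;
while (keepRecording) {
    int samplesRead = audioRecord.read(buffer, 0, buffer.length);
    if (skipped < 500) {                                  // discard the start-up transient
        skipped += samplesRead;
        continue;
    }
    // process samplesRead samples from buffer here
}
audioRecord.stop();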
I'm streaming the mic audio between two devices. Everything is working, but I have a bad echo.
Here is what I'm doing.
The reading thread:
int sampleFreq = 22050;
int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int minBuffer = 2 * AudioTrack.getMinBufferSize(sampleFreq, channelConfig, audioFormat);

AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleFreq,
        channelConfig,
        audioFormat,
        minBuffer,
        AudioTrack.MODE_STREAM);
atrack.play();

byte[] buffer = new byte[minBuffer];
while (true) {
    try {
        // Read from the InputStream
        bytes = mmInStream.read(buffer);
        atrack.write(buffer, 0, buffer.length);
        atrack.flush();
    } catch (IOException e) {
        Log.e(TAG, "disconnected", e);
        break;
    }
}
And here is the recording thread:
int sampleRate = 22050;
int channelMode = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int buffersize = 2 * AudioTrack.getMinBufferSize(sampleRate, channelMode, audioFormat);

AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelMode,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);
buffer = new byte[buffersize];
arec.startRecording();

while (true) {
    arec.read(buffer, 0, buffersize);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                mOutputStream.write(buffer);
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }).start();
}
Am I doing something wrong?
You need echo-cancellation logic. Here is what I did on my ARMv5 (WM8650) processor (Android 2.2) to remove the echo.

1. I wrapped Speex with JNI and called its echo-processing routines before sending PCM frames to the encoder. No echo was cancelled, no matter what Speex settings I tried.

2. Because Speex is very sensitive to the delay between playback and echo frames, I implemented a queue and queued all packets sent to AudioTrack. The size of the queue should be roughly equal to the size of the internal AudioTrack buffer. This way, packets were sent to echo_playback roughly at the time when AudioTrack sends packets to the sound card from its internal buffer. The delay was removed with this approach, but the echo was still not cancelled.

3. I wrapped the WebRtc echo-cancellation part with JNI and called its methods before sending packets to the encoder. The echo was still present, but the library was obviously trying to cancel it.

4. I applied the buffering technique described in point 2 and it finally started to work. The delay needs to be adjusted for each device though. Note also that WebRtc has a mobile and a full version of echo cancellation. The full version substantially slows down the processor and should probably be run on ARMv7 only. The mobile version works, but with lower quality.
I hope this will help someone.
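To make point 2 a bit more concrete, the playback-side queue was conceptually something like the following (a simplified sketch; echoCanceller.processFarEnd() stands in for whichever Speex/WebRtc JNI call you wrapped yourself, and the delay has to be tuned per device):

Queue<byte[]> farEndDelay = new ArrayDeque<byte[]>();          // java.util.ArrayDeque
int delayFrames = audioTrackBufferBytes / frameSizeBytes;      // roughly AudioTrack's internal buffering

void playFrame(byte[] frame) {
    aTrack.write(frame, 0, frame.length);                      // frame enters AudioTrack's buffer now
    farEndDelay.add(frame);
    if (farEndDelay.size() > delayFrames) {
        // this older frame is approximately what the speaker is playing right now,
        // so it is the far-end reference the canceller needs
        echoCanceller.processFarEnd(farEndDelay.poll());       // hypothetical JNI wrapper call
    }
}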
Could be this:
bytes = mmInStream.read(buffer);
atrack.write(buffer, 0, buffer.length);
If the buffer remains full from a previous call and the new read does not fill it completely (so bytes < buffer.length), you replay the old part of the track.
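A minimal fix is to write only the bytes that were actually read, roughly:

bytes = mmInStream.read(buffer);
if (bytes > 0) {
    atrack.write(buffer, 0, bytes);    // not buffer.length
}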
I'm working with Android, trying to make my AudioTrack application play a Windows .wav file (Tada.wav). Frankly, it shouldn't be this hard, but I'm hearing a lot of strange stuff. The file is saved on my phone's mini SD card and reading the contents doesn't seem to be a problem, but when I play the file (with parameters I'm only PRETTY SURE are right), I get a few seconds of white noise before the sound seems to resolve itself into something that just may be right.
I have successfully recorded and played my own voice back on the phone -- I created a .pcm file according to the directions in this example:
http://emeadev.blogspot.com/2009/09/raw-audio-manipulation-in-android.html
(without the backwards masking)...
Anybody got some suggestions, or awareness of an example on the web, for playing a .wav file on Android?
Thanks,
R.
I stumbled on the answer (frankly, by trying &^#! I didn't think would work), in case anybody's interested... In my original code (which is derived from the example in the link in the original post), the data is read from the file like so:
InputStream is = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(is, 8000);
DataInputStream dis = new DataInputStream(bis);   // Create a DataInputStream to read the audio data from the saved file

int i = 0;                                        // Read the file into the "music" array
while (dis.available() > 0)
{
    music[i] = dis.readShort();                   // This assignment does not reverse the order
    i++;
}
dis.close();                                      // Close the input stream
In this version, music[] is an array of SHORTs. So the readShort() method would seem to make sense here, since the data is 16-bit PCM... however, on Android that seems to be the problem. I changed that code to the following:
music = new byte[(int) file.length()];            // size & length of the file
InputStream is = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(is, 8000);
DataInputStream dis = new DataInputStream(bis);   // Create a DataInputStream to read the audio data from the saved file

int i = 0;                                        // Read the file into the "music" array
while (dis.available() > 0)
{
    music[i] = dis.readByte();                    // This assignment does not reverse the order
    i++;
}
dis.close();                                      // Close the input stream
In this version, music[] is an array of BYTES. I'm still telling the AudioTrack that it's 16-bit PCM data, and my Android doesn't seem to have a problem with writing an array of bytes into an AudioTrack thus configured... Anyway, it finally sounds right, so if anyone else wants to play Windows sounds on their Android, for some reason, that's the solution. Ah, Endianness......
R.
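For what it's worth, if you do want to keep music[] as shorts, the underlying issue is that DataInputStream.readShort() is big-endian while WAV PCM data is little-endian. A little-endian read would look roughly like this (a sketch; it still includes the 44-byte header discussed in the answers below):

byte[] raw = new byte[(int) file.length()];
DataInputStream dis = new DataInputStream(new BufferedInputStream(new FileInputStream(file)));
dis.readFully(raw);
dis.close();

short[] music = new short[raw.length / 2];
ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(music);
// music[] now holds the 16-bit samples in the correct byte order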
I found a lot of long answers to this question. My final solution, which given all the cutting and pasting is hardly mine, comes down to:
public boolean play() {
    int i = 0;
    byte[] music = null;
    InputStream is = mContext.getResources().openRawResource(R.raw.noise);

    at = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize, AudioTrack.MODE_STREAM);
    try {
        music = new byte[512];
        at.play();
        while ((i = is.read(music)) != -1)
            at.write(music, 0, i);
    } catch (IOException e) {
        e.printStackTrace();
    }
    at.stop();
    at.release();
    return STOPPED;
}
STOPPED is just a "true" sent back as a signal to reset the pause/play button.
And in the class initializer:
public Mp3Track(Context context) {
    mContext = context;
    minBufferSize = AudioTrack.getMinBufferSize(44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
}
Context is just "this" from the calling activity.
You can use a FileInputStream for files on the SD card, etc. My files are in res/raw.
Are you skipping the first 44 bytes of the file before you dump the rest of the file's data into the buffer? The first 44 bytes are the WAVE header and they would sound like random noise if you tried to play them.
Also, are you sure you are creating the AudioTrack with the same properties as the WAVE you are trying to play (sample rate, bit depth, number of channels, etc.)? Windows does a good job of showing you this information in the file's Properties page.
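A minimal way to skip the header when streaming the file into an AudioTrack configured to match it (assuming the canonical 44-byte header; a sketch, not a full parser):

InputStream is = new BufferedInputStream(new FileInputStream(file));
long toSkip = 44;                        // canonical PCM WAV header length
while (toSkip > 0) {
    toSkip -= is.skip(toSkip);           // skip() may skip fewer bytes than requested
}

byte[] chunk = new byte[512];
int n;
at.play();
while ((n = is.read(chunk)) != -1) {
    at.write(chunk, 0, n);               // only PCM samples reach the AudioTrack now
}
is.close();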
As Aaron C said, you have to skip the initial 44 bytes, or (as I prefer) read the first 44 bytes, which are the WAVE header. That way you know how many channels, how many bits per sample, the length, etc. the WAVE contains.
Here you can find a good implementation of a WAVE header parser/writer.
Please don't perpetuate terrible parsing code. WAV parsing is trivial to implement:
http://soundfile.sapp.org/doc/WaveFormat/
and you will thank yourself by being able to parse things such as the sampling rate, bit depth, and number of channels.
Also, x86 and ARM (at least by default) are both little-endian, so native-endian WAV files should be fine without any byte shuffling.
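For example, the fields you usually need sit at fixed offsets in the canonical 44-byte header described on that page; a rough sketch of pulling them out (real files can contain extra chunks, so a robust parser should walk the chunk list instead of assuming these offsets):

byte[] header = new byte[44];
DataInputStream in = new DataInputStream(new FileInputStream(file));
in.readFully(header);

ByteBuffer h = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
short channels      = h.getShort(22);    // 1 = mono, 2 = stereo
int   sampleRate    = h.getInt(24);      // e.g. 44100
short bitsPerSample = h.getShort(34);    // e.g. 16
int   dataSize      = h.getInt(40);      // number of PCM bytes that follow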
Just confirm that you have AudioTrack.MODE_STREAM and not AudioTrack.MODE_STATIC in the AudioTrack constructor:
AudioTrack at = new AudioTrack(
        AudioManager.STREAM_MUSIC,
        sampleRate,
        AudioFormat.CHANNEL_OUT_STEREO,   // output channel mask (CHANNEL_IN_STEREO is an input mask)
        AudioFormat.ENCODING_PCM_16BIT,
        // buffer length in bytes
        outputBufferSize,
        AudioTrack.MODE_STREAM
);
Sample wav file:
http://www.mauvecloud.net/sounds/pcm1644m.wav
Sample Code:
public class AudioTrackPlayer {
    Context mContext;
    int minBufferSize;
    AudioTrack at;
    boolean STOPPED;

    public AudioTrackPlayer(Context context) {
        Log.d("------", "init");
        mContext = context;
        minBufferSize = AudioTrack.getMinBufferSize(44100,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
    }

    public boolean play() {
        Log.d("------", "play");
        int i = 0;
        byte[] music = null;
        InputStream is = mContext.getResources().openRawResource(R.raw.pcm1644m);
        at = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize, AudioTrack.MODE_STREAM);
        try {
            music = new byte[512];
            at.play();
            while ((i = is.read(music)) != -1)
                at.write(music, 0, i);
        } catch (IOException e) {
            e.printStackTrace();
        }
        at.stop();
        at.release();
        return STOPPED;
    }
}