How to play a byte stream? (voice call between devices) - Android

The goal is to organize a voice call between two devices. The problem is in the receiving part: I get a very high level of noise, so it is impossible to understand the speech. Here is my code:
The sending part:
public void startRecording() {
    // Field declarations, shown inline for context:
    // private static final int RECORDER_SAMPLERATE = 44100;
    // private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_STEREO;
    // private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
    // bufferSize = AudioRecord.getMinBufferSize(8000,
    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
    //         AudioFormat.ENCODING_PCM_16BIT);
    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLERATE, RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING, bufferSize);
    if (recorder.getState() == AudioRecord.STATE_INITIALIZED) // STATE_INITIALIZED == 1
        recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        @Override
        public void run() {
            // Note: as posted, this thread never calls recorder.read(data, 0, bufferSize),
            // so the byte array that gets sent contains only zeros.
            byte[] data = new byte[bufferSize];
            bluetoothCall.sendMessage(data);
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}
The receiving part (probably the problem is in this part):
private final Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
            case MESSAGE_WRITE:
                // ...
            case MESSAGE_READ:
                try {
                    // Field declarations, shown inline for context:
                    // private int sampleRate = 44100;
                    // int bufferSize = AudioRecord.getMinBufferSize(8000,
                    //         AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    //         AudioFormat.ENCODING_PCM_16BIT);
                    byte[] readBuf = (byte[]) msg.obj;
                    // Note: CHANNEL_IN_STEREO is an *input* channel mask, and
                    // PERFORMANCE_MODE_LOW_LATENCY is being passed where the mode
                    // argument (e.g. AudioTrack.MODE_STREAM) belongs.
                    mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                            AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                            bufferSize, AudioTrack.PERFORMANCE_MODE_LOW_LATENCY);
                    mAudioTrack.play();
                    mAudioTrack.write(readBuf, 0, readBuf.length);
                    mAudioTrack.release();
                } catch (Exception e) {
                }
                break;
        }
    }
};

VoIP quality is typically influenced by several factors:
latency (end-to-end time taken for a packet)
jitter (variance in latency)
packet loss
Most issues in VoIP implementations are usually around latency and jitter, but from your description of noise it sounds more like you are losing data or having it corrupted somehow.
Either way, unless you are doing this for learning or academic purposes, it may be easier to use a VoIP library which will have solved these issues for you - there is quite a lot of complexity in both the signalling and the voice communication for VoIP calls.
Android has a built-in SIP library now:
https://developer.android.com/guide/topics/connectivity/sip.html
This does require a SIP server of some sort, even if you build it into your client, which may not be what you want.
You can also build your own solution around RTP, the voice data transfer part, but this will require much more work for discovering IP addresses etc.:
https://developer.android.com/reference/android/net/rtp/package-summary.html
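For a rough idea of the shape of the RTP approach, here is a minimal sketch using the android.net.rtp classes (available since API 12). It assumes the two devices have already exchanged addresses by some other channel; localIp, remoteIp and remotePort are placeholders you must supply, and error handling is omitted:
AudioManager audio = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
audio.setMode(AudioManager.MODE_IN_COMMUNICATION);

AudioGroup group = new AudioGroup();
group.setMode(AudioGroup.MODE_NORMAL);

AudioStream stream = new AudioStream(InetAddress.getByName(localIp));
stream.setCodec(AudioCodec.PCMU);      // G.711 u-law, a common voice codec
stream.setMode(RtpStream.MODE_NORMAL); // send and receive
stream.associate(InetAddress.getByName(remoteIp), remotePort);
stream.join(group);                    // starts capturing, sending, receiving and playing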
You can often use SIP clients without a server, but you need to work out the IP address and, more trickily, the port (https://stackoverflow.com/a/44449337/334402).
If you do want to use SIP there are open-source SIP servers available - e.g.:
https://www.opensips.org/About/About

Related

How To Record Sound in Android with Better Quality and Reduce Noise

I'm trying to build a music analytics app for the Android platform.
The app uses MediaRecorder.AudioSource.MIC
to record music from the mic and then encodes it as PCM 16-bit at 11025 Hz, but the recorded audio samples are of very low quality. Is there any way to make it better and reduce the noise?
mRecordInstance = new AudioRecord(MediaRecorder.AudioSource.MIC, FREQUENCY, CHANNEL, ENCODING, minBufferSize);
mRecordInstance.startRecording();
do {
    samplesIn += mRecordInstance.read(audioData, samplesIn, bufferSize - samplesIn);
    if (mRecordInstance.getRecordingState() == AudioRecord.RECORDSTATE_STOPPED)
        break;
} while (samplesIn < bufferSize);
Thanks in Advance
The solution above didn't work for me.
So, I searched around and found this article.
Long story short, I used MediaRecorder.AudioSource.VOICE_RECOGNITION instead of AudioSource.MIC, which gave me really good results; background noise was reduced very much.
The great thing about this solution is that it can be used with both the AudioRecord and MediaRecorder classes.
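With AudioRecord, the change is a single constructor argument; a minimal sketch (sampleRate and bufferSize are whatever values you already use):
AudioRecord recorder = new AudioRecord(
        MediaRecorder.AudioSource.VOICE_RECOGNITION, // instead of AudioSource.MIC
        sampleRate,
        AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT,
        bufferSize);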
The best combination of sample rate and buffer size is very device-dependent, so your results will vary depending on the hardware. I use this utility to figure out what the best combination is for devices running Android 4.2 and above:
public static DeviceValues getDeviceValues(Context context) {
    try {
        AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
        try {
            Method getProperty = AudioManager.class.getMethod("getProperty", String.class);
            Field bufferSizeField = AudioManager.class.getField("PROPERTY_OUTPUT_FRAMES_PER_BUFFER");
            Field sampleRateField = AudioManager.class.getField("PROPERTY_OUTPUT_SAMPLE_RATE");
            int bufferSize = Integer.valueOf((String) getProperty.invoke(am, (String) bufferSizeField.get(am)));
            int sampleRate = Integer.valueOf((String) getProperty.invoke(am, (String) sampleRateField.get(am)));
            return new DeviceValues(sampleRate, bufferSize);
        } catch (NoSuchMethodException e) {
            return selectBestValue(getValidSampleRates(context));
        }
    } catch (Exception e) {
        return new DeviceValues(DEFAULT_SAMPLE_RATE, DEFAULT_BUFFER_SIZE);
    }
}
This uses reflection to check whether the getProperty method is available, because this method was introduced in API level 17. If you are developing for a specific device type, you might want to experiment with various buffer sizes and sample rates. The defaults that I use as a fallback are:
private static final int DEFAULT_SAMPLE_RATE = 22050;
private static final int DEFAULT_BUFFER_SIZE = 1024;
Additionally, I check the various sample rates by seeing whether getMinBufferSize returns a reasonable value for use:
private static List<DeviceValues> getValidSampleRates(Context context) {
    List<DeviceValues> available = new ArrayList<DeviceValues>();
    for (int rate : new int[] {8000, 11025, 16000, 22050, 32000, 44100, 48000, 96000}) { // add the rates you wish to check against
        int bufferSize = AudioRecord.getMinBufferSize(rate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        if (bufferSize > 0 && bufferSize < 2048) {
            available.add(new DeviceValues(rate, bufferSize * 2));
        }
    }
    return available;
}
This relies on getMinBufferSize returning a non-positive error code when a sample rate is not available on the device. You should experiment with these values for your particular use case.
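The snippets above reference DeviceValues and selectBestValue without showing them; a minimal plausible version (my assumption, not the original author's code) could simply prefer the highest usable sample rate:
private static class DeviceValues {
    final int sampleRate;
    final int bufferSize;

    DeviceValues(int sampleRate, int bufferSize) {
        this.sampleRate = sampleRate;
        this.bufferSize = bufferSize;
    }
}

private static DeviceValues selectBestValue(List<DeviceValues> available) {
    if (available.isEmpty()) {
        return new DeviceValues(DEFAULT_SAMPLE_RATE, DEFAULT_BUFFER_SIZE);
    }
    DeviceValues best = available.get(0);
    for (DeviceValues dv : available) {
        if (dv.sampleRate > best.sampleRate) {
            best = dv; // keep the highest sample rate that passed the check
        }
    }
    return best;
}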
Though it is an old question, the following solution may be helpful.
We can use MediaRecorder to record audio with ease.
private void startRecording() {
    MediaRecorder recorder = new MediaRecorder();
    recorder.setAudioSource(MediaRecorder.AudioSource.MIC);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setAudioEncoder(MediaRecorder.AudioEncoder.AAC);
    recorder.setAudioEncodingBitRate(96000);
    recorder.setAudioSamplingRate(44100);
    recorder.setOutputFile(".../audioName.m4a");
    try {
        recorder.prepare();
    } catch (IOException e) {
        Log.e(LOG_TAG, "prepare() failed");
    }
    recorder.start();
}
Note:
MediaRecorder.AudioEncoder.AAC is used because MediaRecorder.AudioEncoder.AMR_NB encoding is no longer supported on iOS.
The audio encoding bit rate should be either 96000 or 128000, as required for clarity of sound.
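For completeness (this is not part of the original answer), the matching teardown is the usual stop/release pair:
private void stopRecording(MediaRecorder recorder) {
    recorder.stop();    // finalize the output file
    recorder.release(); // free the native recorder resources
}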

Delay in audio submission via UDP packets

I'm currently developing an application to transmit audio. I have two services running, one to receive it, one to send it. The important stuff of the sender looks like this:
final DatagramSocket dSocket = new DatagramSocket();
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
Log.d(TAG, "Thread starting...");
int buffersize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
AudioRecord arec = new AudioRecord(
        MediaRecorder.AudioSource.MIC, 11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);
byte[] buffer = new byte[buffersize];
Log.d(TAG, "Starting to record, buffersize=" + buffersize);
arec.startRecording();
while (isRunning && !isInterrupted()) {
    try {
        Log.d(TAG, "Recording..");
        arec.read(buffer, 0, buffersize);
        DatagramPacket dPacket = new DatagramPacket(buffer, buffersize);
        for (Peer cur : mPeers) {
            if (cur.isSelf) continue;
            dPacket.setAddress(InetAddress.getByName(cur.IP_ADDRESS));
            dPacket.setPort(Config.UDP_PORT);
            dSocket.send(dPacket);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
This code works and submits audio packets.
The receiver service looks like this:
// DatagramSocket dSocket = new DatagramSocket();
DatagramChannel dChannel = DatagramChannel.open();
DatagramSocket dSocket = dChannel.socket();
dSocket.setReuseAddress(true);
dSocket.setSoTimeout(2000);
dSocket.bind(new InetSocketAddress(Config.UDP_PORT));
Log.d(TAG, "DatagramSocket open.");
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
int buffersize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
AudioTrack aTrack = new AudioTrack(
        AudioManager.STREAM_VOICE_CALL, 11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, buffersize,
        AudioTrack.MODE_STREAM);
DatagramPacket dPacket = new DatagramPacket(new byte[buffersize], buffersize);
Log.d(TAG, "Packet with buffersize=" + buffersize);
aTrack.play();
Log.d(TAG, "Playing track..");
byte[] buffer = new byte[buffersize];
while (isRunning && !isInterrupted()) {
    try {
        dSocket.receive(dPacket);
        buffer = dPacket.getData();
        aTrack.setPlaybackRate(11025);
        aTrack.write(buffer, 0, buffer.length);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
aTrack.stop();
This also works, but after sending for more than a couple of seconds there is a huge delay; the packets do still arrive, but slowly, and the audio playback simply "lags". What can I do to improve the quality? This is a direct peer-to-peer connection, no servers involved. Should I increase the buffer size? The current buffer size is the minimum buffer size I get from Android, which is 1024 on my devices (two Galaxy Nexus). BTW, the services each start another thread whose priority is set to "URGENT" (which I believe is the highest available). For my purposes, the mPeers list only has one peer, so the for loop is not really delaying things, I'd guess.
Have you checked what happens when you remove the network-related part of the sender loop? I.e., does the read() call from the microphone return immediately? Also, you describe that packets arrive slowly, but have you checked whether there is a large delay between when they are sent as well?
The reason I am asking is that the phenomenon you describe could be caused by the send socket blocking because its buffer is full. If the socket is blocking, the send() call will take a long time to complete. Unless you have very high-bandwidth traffic or a very slow CPU, this should not happen with UDP sockets (they are typically fire and forget), but it is worth checking.
In order to avoid blocking, create a non-blocking socket. I am not too familiar with Java networking, but it seems like a DatagramChannel is needed to do this.
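A minimal sketch of that idea (reusing buffer, buffersize and Config.UDP_PORT from the question; peerAddress is a placeholder): with configureBlocking(false), a send() against a full OS buffer simply sends nothing instead of stalling the audio loop.
DatagramChannel channel = DatagramChannel.open();
channel.configureBlocking(false); // send() will never block the audio loop

InetSocketAddress target = new InetSocketAddress(peerAddress, Config.UDP_PORT);
ByteBuffer packet = ByteBuffer.wrap(buffer, 0, buffersize);
int sent = channel.send(packet, target); // 0 bytes sent means dropped, not blocked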
Okay, so the delay is gone. What I've done is simply increase the UDP packet's buffer size. The minimum buffer size I received from Android was (on my devices) 1024 bytes. Now I do something like this:
int maxBufferSize = 4096; // my value; see what works best for you
int minBufferSize = AudioRecord.getMinBufferSize(11025,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
int actualBufferSize = Math.max(minBufferSize, maxBufferSize);
With a 4 KB buffer, the audio transmission is really good and there is absolutely no delay.
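The larger size then presumably has to be used consistently for the record buffer and the datagram on both ends, e.g.:
byte[] buffer = new byte[actualBufferSize];
DatagramPacket dPacket = new DatagramPacket(buffer, actualBufferSize);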

Playing music with AudioTrack buffer by buffer on Eclipse - no sound

I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the general purpose is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried just about everything, and there is still no sound coming out of my speakers (while Android's integrated media player has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which wraps the AudioTrack. It receives a buffer in which the sound is contained.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Error instantiating AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (440 Hz), for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        // First fill about 300 ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000
        // For debugging only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // Afterwards fill like 200 ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440)
    // this is probably wrong due to sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine, it does its job;
buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's play state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding AudioTrack and short[] input: the count passed to write() should be the length of the buffer itself (buffer.length), i.e. the number of shorts, and not the size in bytes.
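Applied to the addToQueue() method above, the fix is to drop the * Short.SIZE factor:
@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length); // count in shorts, not bytes
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}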

How to play back AudioRecord with some delay

I'm implementing an app which will repeat everything I'm telling it.
What I need is to play the sound I'm recording into a buffer with just one second of delay,
so that I would be listening to myself, but delayed by one second.
This is the run method of my Recorder class:
public void run() {
    AudioRecord recorder = null;
    int ix = 0;
    buffers = new byte[256][160];
    try {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(AudioSource.MIC, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
        recorder.startRecording();
        Timer t = new Timer();
        SeekBar barra = (SeekBar) findViewById(R.id.barraDelay);
        t.schedule(r = new Reproductor(), barra.getProgress());
        while (!stopped) {
            byte[] buffer = buffers[ix++ % buffers.length];
            N = recorder.read(buffer, 0, buffer.length);
        }
    } catch (Throwable x) {
    } finally {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}
And this is the run method of my player:
public void run() {
    reproducir = true;
    AudioTrack track = null;
    int jx = 0;
    try {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
        track.play();
        /*
         * Loops until something outside of this thread stops it.
         * Reads the data from the recorder and writes it to the audio track for playback.
         */
        while (reproducir) {
            byte[] buffer = buffers[jx++ % buffers.length];
            track.write(buffer, 0, buffer.length);
        }
    } catch (Throwable x) {
    }
    /*
     * Frees the thread's resources after the loop completes so that it can be run again
     */
    finally {
        track.stop();
        track.release();
        track = null;
    }
}
Reproductor is an inner class extending TimerTask and implementing the "run" method.
Many thanks!
At the least, you should change the following line of your player
int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
to
int N = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
because the API requires that (albeit the constant values are identical).
But this is only a marginal point. The main point is that you did not really present an approach to your problem, but only two generic methods.
The core of a working solution is to use a ring buffer with a size of one second, with AudioTrack reading a block of it just ahead of where AudioRecord writes new data into the same block, both at the same sample rate.
I would suggest doing that inside a single thread, as sketched below.
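A rough single-thread sketch of that idea (assuming recorder and track are set up at 44100 Hz, 16-bit stereo, as in the question; the block-size arithmetic is illustrative):
int blockSize = AudioTrack.getMinBufferSize(44100,
        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
// One second of 16-bit stereo at 44100 Hz is 44100 * 2 * 2 bytes.
int numBlocks = Math.max(1, 44100 * 2 * 2 / blockSize);
byte[][] ring = new byte[numBlocks][blockSize];
int ix = 0;
boolean primed = false; // true once the ring holds a full second of audio

while (!stopped) {
    if (primed) {
        track.write(ring[ix], 0, blockSize); // play the block recorded ~1 s ago
    }
    recorder.read(ring[ix], 0, blockSize);   // then overwrite it with fresh audio
    ix = (ix + 1) % ring.length;
    if (ix == 0) primed = true;
}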

AudioRecord and AudioTrack echo

I'm streaming the mic audio between two devices. Everything is working, but I have a bad echo.
Here is what I'm doing:
Reading thread
int sampleFreq = 22050;
int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int minBuffer = 2 * AudioTrack.getMinBufferSize(sampleFreq, channelConfig, audioFormat);
AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleFreq,
        channelConfig,
        audioFormat,
        minBuffer,
        AudioTrack.MODE_STREAM);
atrack.play();
byte[] buffer = new byte[minBuffer];
while (true) {
    try {
        // Read from the InputStream
        bytes = mmInStream.read(buffer);
        atrack.write(buffer, 0, buffer.length);
        atrack.flush();
    } catch (IOException e) {
        Log.e(TAG, "disconnected", e);
        break;
    }
}
Here is the recording thread:
int sampleRate = 22050;
int channelMode = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int buffersize = 2 * AudioTrack.getMinBufferSize(sampleRate, channelMode, audioFormat);
AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelMode,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);
buffer = new byte[buffersize];
arec.startRecording();
while (true) {
    arec.read(buffer, 0, buffersize);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                mOutputStream.write(buffer);
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }).start();
}
Am I doing something wrong?
You need echo cancellation logic. Here is what I did on my ARMv5 (WM8650) processor (Android 2.2) to remove the echo.
1. I wrapped Speex with JNI and called its echo processing routines before sending PCM frames to the encoder. No echo was cancelled, no matter what Speex settings I tried.
2. Because Speex is very sensitive to the delay between playback and echo frames, I implemented a queue and queued all packets sent to AudioTrack. The size of the queue should be roughly equal to the size of the internal AudioTrack buffer. This way, packets were sent to echo_playback roughly at the time when AudioTrack sent packets to the sound card from its internal buffer. The delay was removed with this approach, but the echo was still not cancelled.
3. I wrapped the WebRtc echo cancellation part with JNI and called its methods before sending packets to the encoder. The echo was still present, but the library obviously was trying to cancel it.
4. I applied the buffer technique described in point 2, and it finally started to work. The delay needs to be adjusted for each device, though. Note also that WebRtc has mobile and full versions of echo cancellation. The full version substantially slows down the processor and should probably be run on ARMv7 only. The mobile version works, but with lower quality.
I hope this will help someone.
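To illustrate the queuing idea from point 2, here is a rough sketch; EchoCanceller stands in for a hypothetical JNI wrapper around the Speex/WebRtc calls, not a real API:
class DelayMatchedEchoFeeder {
    private final ArrayDeque<byte[]> pending = new ArrayDeque<byte[]>();
    private final int queueDepth; // ~ AudioTrack internal buffer / block size
    private final AudioTrack track;
    private final EchoCanceller canceller; // hypothetical JNI wrapper

    DelayMatchedEchoFeeder(AudioTrack track, EchoCanceller canceller,
                           int trackBufferSize, int blockSize) {
        this.track = track;
        this.canceller = canceller;
        this.queueDepth = Math.max(1, trackBufferSize / blockSize);
    }

    void play(byte[] block) {
        track.write(block, 0, block.length);
        pending.addLast(block.clone());
        if (pending.size() > queueDepth) {
            // Feed the far-end reference roughly when the hardware plays it.
            canceller.playback(pending.removeFirst());
        }
    }
}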
Could be this:
bytes = mmInStream.read(buffer);
atrack.write(buffer, 0, buffer.length);
If the buffer remains full from a previous call and the new read does not fill it (so bytes < buffer.length), you replay the old part of the track.
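The straightforward fix is to write only the bytes actually read:
bytes = mmInStream.read(buffer);
if (bytes > 0) {
    atrack.write(buffer, 0, bytes); // not buffer.length
}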
