Android - WebSocketClient byte[] into audio

I have an Android application that is receiving byte[] data through a WebSocketClient. This is the callback where the data arrives:
@Override
public void onBinaryReceived(byte[] data) {
    //System.out.println("onBinaryReceived");
    Log.i("Binary: ", data.toString());
}
The log only shows the array references:
[B@91e31b9
[B@317b9fe
Those bytes are 3LAS MP3 voice data that is being streamed from an embedded system (an Intel Edison). I'd like to decode those bytes into audio in my Android application. I've done some research and figured I should be using AudioTrack for this. Sadly, I'm pretty new to Android, so I have no idea where to start or how to code it. If I understand the protocol correctly, I should continuously concatenate those chunks into one stream that is fed to the AudioTrack class.
First things first: should I be using AudioTrack to turn those bytes into audio? Should I use the Android MediaPlayer to have the audio played in my application?
Honestly, I'm pretty lost, so any help or advice would be greatly appreciated. Don't hesitate to ask for more information.
EDIT:
I finally went with AudioTrack and tried using it like this:
This is the initialization of my WebSocket and my AudioTrack.
public void createWebSocketClient() {
    final URI uri;
    int bufferSize = AudioTrack.getMinBufferSize(8000, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
    final AudioTrack mAudioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 8000, AudioFormat.CHANNEL_OUT_MONO,
            AudioFormat.ENCODING_PCM_16BIT, bufferSize, AudioTrack.MODE_STREAM);
    mAudioTrack.play();
    try {
        uri = new URI("ws://10.0.0.1:9601/");
        Log.i("Uri:", "Good");
    } catch (URISyntaxException e) {
        e.printStackTrace();
        Log.i("Uri:", "Bad");
        return;
    }
    webSocketClient = new WebSocketClient(uri) {
        @Override
        public void onBinaryReceived(byte[] data) {
            //System.out.println("onBinaryReceived");
            Log.i("Binary: ", String.valueOf(data));
            mAudioTrack.write(data, 0, data.length);
        }
        // remaining WebSocketClient callbacks omitted
    };
}
With this I'm hearing something, but it's not voice, more like a continuous high-pitched noise. I'm supposed to be receiving MP3 and I'm using AudioFormat.ENCODING_PCM_16BIT, so I guess the problem is there. The trouble is that the documentation has no ENCODING_MP3_* constant.
I have since come to the conclusion that I have to decode the MP3 into PCM. I'm trying to do so with JLayer, but I'm having a lot of trouble. For some reason, using JLayer causes an exception in my WebSocket and disconnects it:
11-15 15:18:27.506 11099-11712/com.example.jneb.myapplication I/System.out: Exceptionlength=433; index=433
Here is the code I'm using to decode the bytes into PCM inside the WebSocket's onBinaryReceived callback:
@Override
public void onBinaryReceived(byte[] data) {
    System.out.println("onBinaryReceived");
    // Should be decoding shit here //
    try {
        ByteArrayInputStream bis = new ByteArrayInputStream(data);
        Bitstream bits = new Bitstream(bis);
        Header frameHeader = bits.readFrame();
        SampleBuffer buff = (SampleBuffer) decoder.decodeFrame(frameHeader, bits);
        Log.i("LengthBuff: ", String.valueOf(buff.getBuffer().length));
        mAudioTrack.write(buff.getBuffer(), 0, buff.getBuffer().length);
        bits.closeFrame();
    } catch (BitstreamException e) {
        Log.i("Exception", " : Shit");
        e.printStackTrace();
    } catch (DecoderException e) {
        Log.i("Exception", " : Shit shit shit");
        e.printStackTrace();
    }
}
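One likely cause of the exception and the disconnect is that each WebSocket message is decoded with a brand-new Bitstream, so any message that does not begin and end exactly on MP3 frame boundaries fails, and the leftover bytes are lost. A common workaround is to push every incoming chunk into one long-lived stream and let a single decoder thread pull complete frames from it. The following is only a sketch of that idea; the class, the pipe size and field names such as mp3Pipe are my assumptions, not the original code:
import java.io.IOException;
import java.io.PipedInputStream;
import java.io.PipedOutputStream;

import javazoom.jl.decoder.Bitstream;
import javazoom.jl.decoder.Decoder;
import javazoom.jl.decoder.Header;
import javazoom.jl.decoder.SampleBuffer;

import android.media.AudioTrack;
import android.util.Log;

public class StreamingMp3Player {
    private final PipedOutputStream mp3Pipe = new PipedOutputStream();
    private final PipedInputStream mp3Source;
    private final Decoder decoder = new Decoder();
    private final AudioTrack mAudioTrack; // created elsewhere: MODE_STREAM, 16-bit PCM

    public StreamingMp3Player(AudioTrack track) throws IOException {
        mAudioTrack = track;
        mp3Source = new PipedInputStream(mp3Pipe, 64 * 1024); // 64 KB of slack

        new Thread(new Runnable() {
            @Override
            public void run() {
                Bitstream bits = new Bitstream(mp3Source);
                try {
                    while (true) {
                        Header frame = bits.readFrame();   // blocks until a full frame has arrived
                        if (frame == null) break;          // stream closed
                        SampleBuffer pcm = (SampleBuffer) decoder.decodeFrame(frame, bits);
                        mAudioTrack.write(pcm.getBuffer(), 0, pcm.getBufferLength());
                        bits.closeFrame();
                    }
                } catch (Exception e) {
                    Log.e("StreamingMp3Player", "decode loop stopped", e);
                }
            }
        }).start();
    }

    // Called from onBinaryReceived(byte[] data); may block if the decoder falls behind
    public void feed(byte[] data) throws IOException {
        mp3Pipe.write(data, 0, data.length);
    }
}
Note also that the AudioTrack has to be created with the stream's actual sample rate and channel count (see Header.frequency()), not necessarily 8000 Hz mono, otherwise even correctly decoded PCM will sound wrong.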

Related

How to capture and encode audio in a video system (Android)

I am trying to build an open-source video system on Android, since we have no access to the data in a closed system. In this system, we can modify the raw data captured by the camera.
I used MediaCodec and MediaMuxer to do the video encoding and muxing, and that works. But I have no idea about the audio part. I used onFramePreview to get each frame and do the encoding/muxing work frame by frame. But how do I do the audio recording at the same time (I mean capturing the audio frame by frame, encoding it and sending the data to the MediaMuxer)?
I've done some research. It seems that AudioRecord is used to get the raw audio data, but AudioRecord records continuously, so I'm not sure it can work here.
Can anyone give me a hint? Thank you!
Create the AudioRecord like this:
private AudioRecord getRecorderInstance() {
    AudioRecord ar = null;
    try {
        // Get an AudioRecord
        int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        ar = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
    } catch (Exception e) {
        // Ignored: ar stays null
    }
    return ar; // Returns null if the mic is unavailable
}
Prepare and send the data for encoding and muxing later, in a separate thread, like this:
public class MicrophoneInput implements Runnable {
    @Override
    public void run() {
        // 800-byte buffer: 400 16-bit samples, i.e. 50 ms of audio at 8 kHz
        byte[] buffer = new byte[8000 / 10];
        try {
            while (recording) {
                audioRecorder.read(buffer, 0, buffer.length);
                // process buffer, i.e. send it to the encoder
                // don't forget to set correct timestamps synchronized with the video
            }
        } catch (Throwable x) {
            //
        }
    }
}
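To connect that loop to the muxer, each PCM chunk has to be pushed through an audio MediaCodec and the encoded output handed to the MediaMuxer together with a presentation timestamp derived from the number of samples read so far. The following is only a sketch of that plumbing (class and field names such as AudioEncoder and mMuxer are mine, and end-of-stream handling is left out); the muxer may only be started once both the video track and this audio track have been added:
import java.nio.ByteBuffer;

import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaMuxer;

public class AudioEncoder {
    private static final int SAMPLE_RATE = 8000;   // must match the AudioRecord above
    private final MediaCodec codec;
    private final MediaMuxer mMuxer;               // shared with the video path
    private int mAudioTrackIndex = -1;
    private long totalSamples = 0;

    public AudioEncoder(MediaMuxer muxer) throws Exception {
        mMuxer = muxer;
        MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", SAMPLE_RATE, 1);
        format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
        codec = MediaCodec.createEncoderByType("audio/mp4a-latm");
        codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        codec.start();
    }

    // Call this with every buffer returned by audioRecorder.read(...)
    public void encode(byte[] pcm, int length) {
        int inIndex = codec.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer in = codec.getInputBuffers()[inIndex];
            in.clear();
            in.put(pcm, 0, length);
            long ptsUs = totalSamples * 1000000L / SAMPLE_RATE; // timestamp from the sample count
            totalSamples += length / 2;                         // 16-bit mono: 2 bytes per sample
            codec.queueInputBuffer(inIndex, 0, length, ptsUs, 0);
        }

        MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
        while (true) {
            int outIndex = codec.dequeueOutputBuffer(info, 0);
            if (outIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED) {
                mAudioTrackIndex = mMuxer.addTrack(codec.getOutputFormat());
                // mMuxer.start() goes elsewhere, once the video track has been added too
            } else if (outIndex >= 0) {
                if (mAudioTrackIndex >= 0 && info.size > 0
                        && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
                    ByteBuffer out = codec.getOutputBuffers()[outIndex];
                    out.position(info.offset);
                    out.limit(info.offset + info.size);
                    mMuxer.writeSampleData(mAudioTrackIndex, out, info);
                }
                codec.releaseOutputBuffer(outIndex, false);
            } else {
                break; // nothing to drain right now
            }
        }
    }
}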

Android sound recorder bad performance

//constructor
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
/////////////
//thread run() method
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
try {
    while (!stopped) {
        // if not paused, upload audio
        if (uploadAudio == true) {
            short[][] buffers = new short[256][160];
            int ix = 0;
            // pick a buffer for the audio data
            short[] buffer = buffers[ix++ % buffers.length];
            // read audio data from the recorder
            N = recorder.read(buffer, 0, buffer.length);
            // create a byte array big enough to hold the audio data
            byte[] bytes2 = new byte[buffer.length * 2];
            // convert the audio data from short[] to byte[]
            ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
            // encode the audio data as ulaw (see here for the ulaw encoder code;
            // I'm using its read, maxAbsPcm and encode methods)
            read(bytes2, 0, bytes2.length);
            // send the audio data
            //os.write(bytes2, 0, bytes2.length);
        }
    }
    os.close();
} catch (Throwable x) {
    Log.w("AudioWorker", "Error reading voice AudioWorker", x);
} finally {
    recorder.stop();
    recorder.release();
}
///////////
So this works OK. The audio is sent in the proper format to the server and played back at the other end. However, the audio often skips. Example: saying 1, 2, 3, 4 will play back with the 4 cut off.
I believe it to be a performance issue, because I have timed some of these methods and when they take (close to) zero seconds everything works, but they quite often take a couple of seconds, with the byte conversion and the encoding taking the most time.
Any idea how I can optimize this code to get better performance? Or maybe a way to deal with the lag (possibly by building a cache)?
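Given that the conversion and encoding occasionally take seconds, one common way to stop the recorder from skipping is to split the work over two threads: the recording thread does nothing but read() and enqueue, while a worker thread does the short-to-byte conversion, the ulaw encoding and the network write. The following is only a sketch under that assumption; encodeAndUpload() is a placeholder for the existing encode/os.write code, not something from the original post:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

import android.media.AudioRecord;

public class AudioUploader {
    private final AudioRecord recorder;                 // the recorder created as above
    private final BlockingQueue<short[]> pending = new LinkedBlockingQueue<short[]>(64);
    private volatile boolean stopped = false;

    public AudioUploader(AudioRecord recorder) {
        this.recorder = recorder;
    }

    public void start() {
        // Recording thread: nothing but read() and offer(), so it never falls behind
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (!stopped) {
                    short[] chunk = new short[160];
                    int n = recorder.read(chunk, 0, chunk.length);
                    if (n > 0 && !pending.offer(chunk)) {
                        // queue full: drop the chunk rather than block the recorder
                    }
                }
            }
        }).start();

        // Worker thread: conversion, ulaw encoding and upload happen off the audio path
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    while (!stopped || !pending.isEmpty()) {
                        short[] chunk = pending.poll(250, TimeUnit.MILLISECONDS);
                        if (chunk != null) {
                            encodeAndUpload(chunk);
                        }
                    }
                } catch (InterruptedException ignored) {
                }
            }
        }).start();
    }

    public void stop() {
        stopped = true;
    }

    private void encodeAndUpload(short[] chunk) {
        // placeholder for the existing short->byte conversion, ulaw encoding and os.write
    }
}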

Playing music with AudioTrack buffer by buffer on Eclipse - no sound

I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the general purpose is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried about everything, and there still is no sound coming out of my speakers (while Android's built-in MediaPlayer has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which wraps the AudioTrack. It receives a buffer containing the sound.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Erreur lors de l'instantiation de AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (a 440 Hz tone) for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        // First fill about 300 ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000
        // For debugging only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // Afterwards fill about 200 ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440)
    // this is probably wrong due to the sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
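For what it's worth, the tone also sounds off because simulator() feeds simulatorFrequency*time straight into Math.sin without the usual 2π/sampleRate scaling. A corrected version might look like this (a sketch assuming time is a running sample index and sampleRate is the rate the AudioTrack was created with):
private short simulator(int time, short amplitude) {
    // time is the sample index; one full cycle every sampleRate/simulatorFrequency samples
    double phase = 2.0 * Math.PI * simulatorFrequency * time / sampleRate;
    return (short) (amplitude * Math.sin(phase));
}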
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine, it does its job;
the buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's play state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding AudioTrack and short-buffer input: the buffer size passed to write() should be the number of shorts in the buffer (buffer.length), not the size in bytes.
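In other words, with the short[] overload the third argument of write() is a count of shorts, so the call in addToQueue() above becomes:
// write() with a short[] takes the size in shorts, not in bytes
audioTrack.write(buffer, 0, buffer.length);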

AudioRecord and AudioTrack echo

I'm streaming the mic audio between two devices; everything is working, but I have a bad echo.
Here is what I'm doing.
Reading thread
int sampleFreq = 22050;
int channelConfig = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int minBuffer = 2 * AudioTrack.getMinBufferSize(sampleFreq, channelConfig, audioFormat);

AudioTrack atrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleFreq,
        channelConfig,
        audioFormat,
        minBuffer,
        AudioTrack.MODE_STREAM);
atrack.play();

byte[] buffer = new byte[minBuffer];
while (true) {
    try {
        // Read from the InputStream
        bytes = mmInStream.read(buffer);
        atrack.write(buffer, 0, buffer.length);
        atrack.flush();
    } catch (IOException e) {
        Log.e(TAG, "disconnected", e);
        break;
    }
}
Here is the recording thread:
int sampleRate = 22050;
int channelMode = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
int buffersize = 2 * AudioTrack.getMinBufferSize(sampleRate, channelMode, audioFormat);

AudioRecord arec = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, channelMode,
        AudioFormat.ENCODING_PCM_16BIT, buffersize);
buffer = new byte[buffersize];
arec.startRecording();

while (true) {
    arec.read(buffer, 0, buffersize);
    new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                mOutputStream.write(buffer);
            } catch (IOException e) {
                // TODO Auto-generated catch block
                e.printStackTrace();
            }
        }
    }).start();
}
Am I doing something wrong?
You need echo cancellation logic. Here is what I did on my ARMv5 (WM8650) processor (Android 2.2) to remove the echo.
1. I wrapped Speex with JNI and called its echo-processing routines before sending PCM frames to the encoder. No echo was cancelled, no matter what Speex settings I tried.
2. Because Speex is very sensitive to the delay between playback and echo frames, I implemented a queue and queued all packets sent to AudioTrack. The size of the queue should be roughly equal to the size of the internal AudioTrack buffer. This way, packets were sent to echo_playback roughly at the time AudioTrack sent packets from its internal buffer to the sound card. The delay was removed with this approach, but the echo was still not cancelled.
3. I wrapped the WebRtc echo cancellation part with JNI and called its methods before sending packets to the encoder. The echo was still present, but the library was obviously trying to cancel it.
4. I applied the buffering technique described in point 2, and it finally started to work. The delay needs to be adjusted for each device though. Note also that WebRtc has a mobile and a full version of its echo cancellation. The full version substantially slows down the processor and should probably only be run on ARMv7. The mobile version works, but with lower quality.
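A very rough sketch of the queue from point 2, not working code: speexEchoPlayback() stands in for the JNI call into the Speex/WebRtc canceller, audioTrack is assumed to exist as a field, and the queue depth has to be tuned per device so that it roughly matches the AudioTrack's internal buffer:
import java.util.ArrayDeque;
import java.util.Queue;

// Far-end frames are written to the AudioTrack immediately, but only handed
// to the echo canceller once they are (roughly) being played by the hardware.
private final Queue<short[]> farEndQueue = new ArrayDeque<short[]>();
private static final int QUEUE_DEPTH_FRAMES = 6; // ~ AudioTrack internal buffer / frame size

void playFarEnd(short[] frame) {
    audioTrack.write(frame, 0, frame.length);
    farEndQueue.add(frame);
    if (farEndQueue.size() > QUEUE_DEPTH_FRAMES) {
        speexEchoPlayback(farEndQueue.remove());   // JNI wrapper (assumed, not a real API)
    }
}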
I hope this will help someone.
Could be this:
bytes = mmInStream.read(buffer);
atrack.write(buffer, 0, buffer.length);
If the buffer remains full from the previous call and the new read does not fill it (so bytes < buffer.length), you replay the old part of the track.
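That is, writing only the bytes actually returned by read() avoids replaying the stale tail of the buffer:
bytes = mmInStream.read(buffer);
if (bytes > 0) {
    atrack.write(buffer, 0, bytes);
}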

real time audio recording and sending

I'm trying to capture audio from a microphone, and no matter what buffer size (bufferSizeInBytes) I set at construction time of AudioRecord, when I do an AudioRecord.read I always get 8192 bytes of audio data (128 ms). I would like AudioRecord.read to be able to read 40 ms of data (2560 bytes). The only working sample code that I can find is Sipdroid (RtpStreamSender.java), which I haven't tried.
Here's the relevant piece of my code:
public void run() {
    running = true;
    android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
    try {
        frameSize = AudioRecord.getMinBufferSize(samplingRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
        Log.i(TAG, "trying to capture " + String.format("%d", frameSize) + " bytes");
        record = new AudioRecord(MediaRecorder.AudioSource.MIC, samplingRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT, frameSize);
        record.startRecording();
        byte[] buffer = new byte[frameSize];
        while (running) {
            record.read(buffer, 0, frameSize);
            Log.i(TAG, "Captured " + String.format("%d", frameSize) + " bytes of audio");
        }
        record.stop();
        record.release();
    } catch (Throwable t) {
        Log.e(TAG, "Failed to capture audio");
    }
}
My questions/comments are:
Is this a limitation of the AudioRecord class and/or of the particular device?
I don't see a method to query the supported sampling rates or buffer-size steps (bufferSizeInBytes) for a given device. It would be nice if there were one.
Can we use AudioRecord.OnRecordPositionUpdateListener, and how?
In the above code, encoding PCM in 8 bit does not work. Why?
If anyone has tried the Sipdroid code, please help me.
Thanks in advance.
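On the read size: bufferSizeInBytes only sets the size of AudioRecord's internal buffer; the number of bytes handed back per call is governed by the length passed to read(), so 40 ms chunks can be requested directly. A sketch, assuming 16-bit mono PCM at samplingRate (device behaviour can still vary):
// 40 ms of 16-bit mono PCM
int chunkBytes = samplingRate * 2 * 40 / 1000;

// Internal buffer: at least the minimum, and several chunks deep for safety
int internalBuffer = Math.max(
        4 * chunkBytes,
        AudioRecord.getMinBufferSize(samplingRate,
                AudioFormat.CHANNEL_CONFIGURATION_MONO,
                AudioFormat.ENCODING_PCM_16BIT));

AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, samplingRate,
        AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
        internalBuffer);
record.startRecording();

byte[] chunk = new byte[chunkBytes];
while (running) {
    int read = record.read(chunk, 0, chunk.length);  // blocks until roughly 40 ms is available
    // hand "read" bytes to the sender here
}
record.stop();
record.release();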
