PCM stream/microphone stutter - Android

I am currently developing an Android application that has to record the microphone input as a PCM stream.
Whenever I record something, I experience strange stutter and I can't find a solution to it.
Here's my code:
In my MainActivity I have an AsyncTask for the microphone input:
ArrayList<byte[]> mBufferList;

@Override
protected String doInBackground(String... params) {
    Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
    mMicrophone = new Microphone();
    mMicrophone.init();
    byte[] buffer;
    while (mRecord) {
        try {
            mMicrophone.record();
            buffer = mMicrophone.getBuffer();
            mBufferList.add(buffer);
        } catch (IOException e) {
            // ignored
        }
    }
    return null;
}
In my Microphone class I initialize the AudioRecord:
public void init() {
    Log.d("DEBUG", "Microphone: Recording started");
    mBufferSize = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_IN_STEREO,
            AudioFormat.ENCODING_PCM_16BIT);
    mRecorder = new AudioRecord(AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_16BIT, mBufferSize);
    mRecorder.startRecording();
    mBuffer = new short[mBufferSize];
}
The record method:
public void record() throws IOException {
    mRecorder.read(mBuffer, 0, mBufferSize);
}
short[] to byte[]:
public byte[] shortToBytes(short[] sData) {
    int shortArrsize = sData.length;
    byte[] bytes = new byte[shortArrsize * 2];
    for (int i = 0; i < shortArrsize; i++) {
        bytes[i * 2] = (byte) (sData[i] & 0x00FF);
        bytes[(i * 2) + 1] = (byte) (sData[i] >> 8);
        sData[i] = 0;
    }
    return bytes;
}
Method to retrieve the buffer:
public byte[] getBuffer() {
    byte[] buffer = shortToBytes(mBuffer);
    return buffer;
}
I have uploaded a wav file which demonstrates the stutter effect; I'm saying 'One':
Wav-File
I already tried changing the sample rates, buffer sizes, et cetera, but to no avail.
Any help is much appreciated!
Please note: this error is not caused by the way I replay the PCM stream, since I have tested it on an Android device and even sent the raw data to a server to convert the file to a wav there.

After hours and hours of desperately searching for a solution, I have finally found the error.
I accidentally created my short buffer in the Microphone class like this:
mBuffer = new short[mBufferSize];
But getMinBufferSize() returns a size in bytes, and a short holds two bytes, so I of course have to use mBufferSize / 2:
mBuffer = new short[mBufferSize / 2];
I will keep my question online in case anyone is interested in the code and/or has a similar problem.
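As a side note for anyone reusing this code, here is a sketch (my own addition, with a hypothetical helper name) that also honors the return value of read(), which reports how many shorts were actually delivered; converting only those avoids trailing garbage in the byte buffer:

public byte[] recordAndConvert() {
    // assumes mBuffer was allocated as new short[mBufferSize / 2] per the fix above
    int readCount = mRecorder.read(mBuffer, 0, mBuffer.length); // shorts actually delivered
    if (readCount < 0) return new byte[0]; // read() returns a negative error code on failure
    byte[] out = new byte[readCount * 2];
    for (int i = 0; i < readCount; i++) {
        out[i * 2] = (byte) (mBuffer[i] & 0xFF);            // low byte first (little-endian)
        out[i * 2 + 1] = (byte) ((mBuffer[i] >> 8) & 0xFF); // high byte second
    }
    return out;
}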

Related

Cannot play PCM stream with Encoding ENCODING_PCM_FLOAT

I know that there are a lot of questions about this topic, but I'm having an issue with ENCODING_PCM_FLOAT.
I have a PCM stream with the following properties:
Encoding: 32-bit float
Byte order: little-endian
Channels: 2 (stereo)
Sample rate: 48000 Hz
And I want to feed it to AudioTrack. I use the following APIs:
private final int SAMPLE_RATE = 48000;
private final int CHANNEL_COUNT = AudioFormat.CHANNEL_OUT_STEREO;
private final int ENCODING = AudioFormat.ENCODING_PCM_FLOAT;
...
int bufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE, CHANNEL_COUNT, ENCODING);
AudioAttributes audioAttributes = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .build();
AudioFormat audioFormat = new AudioFormat.Builder()
        .setEncoding(ENCODING)
        .setSampleRate(SAMPLE_RATE)
        .build();
audioTrack = new AudioTrack(audioAttributes, audioFormat, bufferSize,
        AudioTrack.MODE_STREAM, AudioManager.AUDIO_SESSION_ID_GENERATE);
audioTrack.play();
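One thing worth checking (an observation of mine, not part of the original question): the AudioFormat.Builder above never receives the channel mask, even though CHANNEL_COUNT is defined. Stating it explicitly would look like this:

// Sketch: pass the channel mask to the builder so the track is explicitly stereo.
AudioFormat audioFormat = new AudioFormat.Builder()
        .setEncoding(ENCODING)
        .setSampleRate(SAMPLE_RATE)
        .setChannelMask(CHANNEL_COUNT) // AudioFormat.CHANNEL_OUT_STEREO
        .build();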
And then I start listening from audio stream:
private void startListening() {
    while (true) {
        try {
            byte[] buffer = new byte[8192];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            mSocket.receive(packet);
            FloatBuffer floatBuffer = ByteBuffer.wrap(packet.getData()).asFloatBuffer();
            float[] audioFloats = new float[floatBuffer.capacity()];
            floatBuffer.get(audioFloats);
            for (int i = 0; i < audioFloats.length; i++) {
                audioFloats[i] = audioFloats[i] / 0x8000000;
            }
            audioTrack.write(audioFloats, 0, audioFloats.length, AudioTrack.WRITE_NON_BLOCKING);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
But I don't hear any sound at all. I can play the byte array as PCM_16 (without converting it to a float array), but it contains a lot of noise, so I think the stream input is not the problem.
If you have any idea, please let me know.
Thanks for reading!
for (int i = 0; i < audioFloats.length; i++) {
    audioFloats[i] = audioFloats[i] / 0x8000000;
}
I was an idiot: the code block above is not needed. ENCODING_PCM_FLOAT expects samples that are already in the nominal range [-1.0, 1.0], so dividing them by 0x8000000 scales them down to near-silence. After removing the loop, the audio plays normally.
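One more detail worth noting for anyone copying this code (my own addition, not from the original answer): ByteBuffer.wrap() defaults to big-endian, while the stream above is declared little-endian, and the wrap also ignores the actual datagram length. A stricter version of the conversion would be:

// Sketch: honor the declared byte order and the real packet length.
FloatBuffer floatBuffer = ByteBuffer
        .wrap(packet.getData(), 0, packet.getLength())
        .order(ByteOrder.LITTLE_ENDIAN)
        .asFloatBuffer();
float[] audioFloats = new float[floatBuffer.remaining()];
floatBuffer.get(audioFloats);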

Mix audio from speaker and microphone in android

I was trying to record sound from the mic. The sound is sampled against a tone running in the background.
To make it clear: I want to play a tone in the background, and when I make some noise into the microphone, that noise should be mixed with the background tone that is already playing.
The final output should be a mix of the tone being played and the signal from the microphone. How can I achieve this?
I was referring to the post Android : recording audio using audiorecord class play as fast forwarded on Stack Overflow to record data from the microphone, but I need to record the background tone as well as the microphone input.
public class StartRecording {
    private int samplePerSec = 8000;

    public void Start() {
        stopRecording.setEnabled(true);
        bufferSize = AudioRecord.getMinBufferSize(samplePerSec,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        audioRecorder = new AudioRecord(MediaRecorder.AudioSource.MIC, this.samplePerSec,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize * 10);
        audioRecorder.startRecording();
        isRecording = true;
        while (isRecording && audioRecorder.getRecordingState() == AudioRecord.RECORDSTATE_RECORDING) {
            short recordedData[] = new short[bufferSize];
            audioRecorder.read(recordedData, 0, recordedData.length); // reading from the AudioRecord
            byte[] bData = shortTobyte(recordedData);
        }
    }

    private byte[] shortTobyte(short[] recordedData) {
        int tempBuff = recordedData.length;
        byte[] bytes = new byte[tempBuff * 2]; // two bytes per short
        for (int i = 0; i < tempBuff; i++) {
            bytes[i * 2] = (byte) (recordedData[i] & 0x00FF);
            bytes[(i * 2) + 1] = (byte) (recordedData[i] >> 8);
            recordedData[i] = 0;
        }
        return bytes;
    }
}
Thanks in advance...
You have to use AudioTrack and AudioRecord simultaneously.
Every buffer coming from the AudioRecord must then be mixed with your tone (there are algorithms online for mixing two audio signals) and written to the AudioTrack.
You will get latency, and some problems with echo if you don't use a headset.
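A minimal mixing sketch, assuming both inputs are 16-bit mono PCM buffers of equal length at the same sample rate (the helper name mix is mine): sum the samples in int and clamp to the 16-bit range to avoid wrap-around distortion.

private short[] mix(short[] mic, short[] tone) {
    short[] out = new short[Math.min(mic.length, tone.length)];
    for (int i = 0; i < out.length; i++) {
        int sum = mic[i] + tone[i];                       // widen to int so the sum cannot overflow
        if (sum > Short.MAX_VALUE) sum = Short.MAX_VALUE; // clamp positive clipping
        if (sum < Short.MIN_VALUE) sum = Short.MIN_VALUE; // clamp negative clipping
        out[i] = (short) sum;
    }
    return out; // write this to the AudioTrack
}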

Playing music with AudioTrack buffer by buffer on Eclipse - no sound

I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the general purpose is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried just about everything and there is still no sound coming out of my speakers (while Android's integrated MediaPlayer has no problem playing sounds via the emulator).
Source code:
An audio player class, which wraps the AudioTrack. It will receive a buffer in which the sound is contained.
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Error while instantiating AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulated tone (440 Hz) for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size should be 6615 and then 4410. That is, again, only for debugging.
public void onTimeChange() {
    if (begin) {
        // first fill about 300 ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000
        // for debugging, only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // afterwards fill about 200 ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440); the sin() result must stay a double until it is
    // multiplied by the amplitude, otherwise the cast truncates it to 0/±1
    // (the phase should also advance by 2*pi*frequency/sampleRate per sample,
    // but 44 and 4400 won't work either)
    return (short) (amplitude * Math.sin((double) (simulatorFrequency * time)));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
A TimerTask class that calls model.onTimeChange() every 200 ms.
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine, it does its job;
the buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's play state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding short[] input: for write(short[], int, int), the length argument is the number of shorts to write (buffer.length), not the size in bytes.
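Concretely, the corrected write call from addToQueue() above would be:

// the length is a count in shorts for the short[] overload, not bytes
audioTrack.write(buffer, 0, buffer.length);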

How to play back AudioRecord with some delay

I'm implementing an app which will repeat everything I'm telling it.
What I need is to play back the sound I'm recording into a buffer with just one second of delay, so that I'm listening to myself delayed by one second.
This is the run method of my Recorder class:
public void run() {
    AudioRecord recorder = null;
    int ix = 0;
    buffers = new byte[256][160];
    try {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        recorder = new AudioRecord(AudioSource.MIC, 44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
        recorder.startRecording();
        Timer t = new Timer();
        SeekBar barra = (SeekBar) findViewById(R.id.barraDelay);
        t.schedule(r = new Reproductor(), barra.getProgress());
        while (!stopped) {
            byte[] buffer = buffers[ix++ % buffers.length];
            N = recorder.read(buffer, 0, buffer.length);
        }
    } catch (Throwable x) {
    } finally {
        recorder.stop();
        recorder.release();
        recorder = null;
    }
}
And this is the run method of my player:
public void run() {
    reproducir = true;
    AudioTrack track = null;
    int jx = 0;
    try {
        int N = AudioRecord.getMinBufferSize(44100, AudioFormat.CHANNEL_IN_STEREO, AudioFormat.ENCODING_PCM_16BIT);
        track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT, N * 10, AudioTrack.MODE_STREAM);
        track.play();
        /*
         * Loops until something outside of this thread stops it.
         * Reads the data from the recorder and writes it to the audio track for playback.
         */
        while (reproducir) {
            byte[] buffer = buffers[jx++ % buffers.length];
            track.write(buffer, 0, buffer.length);
        }
    } catch (Throwable x) {
    } finally {
        /*
         * Frees the thread's resources after the loop completes so that it can be run again.
         */
        track.stop();
        track.release();
        track = null;
    }
}
Reproductor is an inner class extending TimerTask and implementing the "run" method.
Many thanks!
At least you should change the following line of your player
int N = AudioRecord.getMinBufferSize(44100,AudioFormat.CHANNEL_IN_STEREO,AudioFormat.ENCODING_PCM_16BIT);
to
int N = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
because the API requires that (albeit the constant values are identical).
But this is only a marginal point. The main point is that you did not really present an approach to your problem, but only two generic methods.
The core of a working solution is a ring buffer holding one second of audio, with the AudioTrack playing each block just before new data is written to the same block via the AudioRecord, both at the same sample rate.
I would suggest doing that inside a single thread, as sketched below.
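A minimal single-thread sketch of that idea, under these assumptions: 44100 Hz stereo 16-bit as in the question, a recorder and track already configured as above, and a stopped flag like the one in the question; the block and ring sizes are illustrative.

int blockShorts = N / 2;                       // N is the min buffer size in bytes
int shortsPerSecond = 44100 * 2;               // stereo: two shorts per frame
int numBlocks = Math.max(1, shortsPerSecond / blockShorts); // ~1 s of audio
short[][] ring = new short[numBlocks][blockShorts];         // starts as silence
int idx = 0;
while (!stopped) {
    // play the oldest block (captured about one second ago) ...
    track.write(ring[idx], 0, blockShorts);
    // ... then immediately overwrite it with fresh microphone data
    recorder.read(ring[idx], 0, blockShorts);
    idx = (idx + 1) % numBlocks;
}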

FSK Modulation and Playing sine Tone in Android

I want to do some FSK modulation over the audio port, but my sine wave isn't very good: it is disturbed at regular intervals. I used the code originally from http://marblemice.blogspot.com/2010/04/generate-and-play-tone-in-android.html with the further modifications from Playing an arbitrary tone with Android and https://market.android.com/details?id=re.serialout&feature=search_result .
So where is the failure? What am I doing wrong?
private static int bitRate = 300;
private static int sampleRate = 48000;
private static int freq1 = 600;

public static void loopOnes() {
    playque.add(UARTHigh());
    athread.interrupt();
}

private static byte[] UARTHigh() {
    int numSamples = sampleRate / bitRate;
    double sample[] = new double[numSamples];
    byte[] buffer = new byte[numSamples * 2];
    for (int i = 0; i < numSamples; ++i) {
        sample[i] = Math.sin(2 * Math.PI * i * freq1 / sampleRate);
    }
    int idx = 0;
    for (final double dVal : sample) {
        // scale to maximum amplitude
        final short val = (short) (dVal * 32767);
        // in 16-bit PCM, the first byte is the low-order byte
        buffer[idx++] = (byte) (val & 0x00ff);
        buffer[idx++] = (byte) ((val & 0xff00) >>> 8);
    }
    return buffer;
}
private static void playSound() {
    active = true;
    while (active) {
        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            while (playque.isEmpty() == false) {
                if (atrk != null) {
                    if (generatedSnd != null) {
                        // let the previous sample finish playing first;
                        // the sleep interval could be tuned to a smarter number
                        while (atrk.getPlaybackHeadPosition() < generatedSnd.length)
                            SystemClock.sleep(50);
                    }
                    atrk.release();
                }
                UpdateParameters(); // might as well do it at every iteration, it's cheap
                generatedSnd = playque.poll();
                length = generatedSnd.length;
                if (minbufsize < length)
                    minbufsize = length;
                atrk = new AudioTrack(AudioManager.STREAM_MUSIC,
                        sampleRate, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minbufsize,
                        AudioTrack.MODE_STATIC);
                atrk.setStereoVolume(1, 1);
                atrk.write(generatedSnd, 0, length);
                atrk.play();
            }
            // the queue is empty => send the stop bit!
            // set the loop points
            int setLoopError = atrk.setLoopPoints(0, length, -1);
            atrk.play();
        }
    }
}
So the answer is to change from MODE_STATIC to MODE_STREAM and not to use loop points. A busy loop in a new low-priority thread writes the buffers.
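A minimal sketch of that streaming approach, assuming the sampleRate, playque and UARTHigh() pieces defined above. With MODE_STREAM, write() blocks until the audio driver has room, so no loop points or playback-head polling are needed:

AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT),
        AudioTrack.MODE_STREAM);
track.play();
while (!playque.isEmpty()) {
    byte[] chunk = playque.poll();
    track.write(chunk, 0, chunk.length); // blocking write keeps the tone gapless
}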
