AudioRecord produces choppy audio - android

I am using AudioRecord to capture audio packets and stream them to a voice recognition server.
On my Galaxy Note 4 (Android M) it works perfectly fine.
However, when I use other devices (Nexus 7/Android L and HTC combo/Android ICS) the resulting audio is choppy, with glitchy noises every half second that spoil the speech recognition process at the server.
I know this is a complicated topic; does somebody know how to deal with these audio capture irregularities in Android?
This is my code setup:
private static final int AUDIO_SOURCE = MediaRecorder.AudioSource.VOICE_RECOGNITION;
private static final int SAMPLING_RATE = 16000; //44100,
private static final int CHANNEL_CONFIG = AudioFormat.CHANNEL_IN_MONO;
private static final int BIT_RATE = AudioFormat.ENCODING_PCM_16BIT; // actually the PCM encoding, not a bit rate
bufferSize = AudioRecord.getMinBufferSize(SAMPLING_RATE,
        CHANNEL_CONFIG,
        BIT_RATE);
audioBuffer = new byte[bufferSize];
AudioRecord recorder = new AudioRecord(AUDIO_SOURCE,
SAMPLING_RATE,
CHANNEL_CONFIG,
BIT_RATE,
bufferSize);
recorder.startRecording();
while (isRecording) {
short[] buffer = new short[bufferSize];
int shorts_recorded = recorder.read(buffer, 0, buffer.length);
byte[] audioBytes = new byte[bufferSize*2]; //bufferSize*2
ByteBuffer.wrap(audioBytes).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);//runningBuffer.get(0));
// debug log of converted vs. captured byte counts:
// Log.d(TAG, Integer.toString(audioBytes.length) + "," + audioBuffer.length);
handler.onAudioDataCapture(audioBytes); //expose audio data to upload callback interface
proceed();
}
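For reference, here is a sketch of the same read-and-convert loop with the sizes expressed consistently (getMinBufferSize() returns a size in bytes, so the short[] read buffer holds bufferSize / 2 samples); this only illustrates the conversion step and is not a confirmed fix for the choppiness:
// Sketch: capture loop with all sizes derived from bufferSize (which is in bytes).
short[] samples = new short[bufferSize / 2];   // bufferSize bytes == bufferSize / 2 shorts
byte[] audioBytes = new byte[bufferSize];      // converted little-endian PCM bytes
while (isRecording) {
    int shortsRead = recorder.read(samples, 0, samples.length);
    ByteBuffer.wrap(audioBytes)
              .order(ByteOrder.LITTLE_ENDIAN)
              .asShortBuffer()
              .put(samples, 0, shortsRead);    // only copy what was actually read
    handler.onAudioDataCapture(audioBytes);    // same upload callback as above
    proceed();
}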

Related

Audio routing between Android and PC produces white noise

I am trying to send audio between Windows and Android. I was able to do this successfully Windows-to-Windows, but when I stream audio from Android it produces only white noise. I think it is an issue with the AudioFormat on the Android and Windows sides, because when I changed the sample size to 8 bits I briefly heard the voice in one side of my headphones, but then it went away too.
On Android Side
int BUFFER_MS = 15; // do not buffer more than BUFFER_MS milliseconds
int bufferSize = 48000 * 2 * BUFFER_MS / 1000;
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 48000, 2,
AudioFormat.ENCODING_PCM_16BIT, bufferSize, AudioTrack.MODE_STREAM);
byte[] buffer = new byte[bufferSize];
int bytesRead;
audioTrack.play();
while (socket.isConnected()) {
bytesRead = inputStream.read(buffer, 0, buffer.length);
audioTrack.write(buffer,0,bytesRead);
}
On Windows Side
AudioFormat format = getAudioFormat();
DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
// checks if system supports the data line
if (!AudioSystem.isLineSupported(info)) {
throw new LineUnavailableException(
"The system does not support the specified format.");
}
TargetDataLine audioLine = AudioSystem.getTargetDataLine(format);
audioLine.open(format);
audioLine.start();
byte[] buffer = new byte[BUFFER_SIZE];
int bytesRead;
while (socket.isConnected()) {
bytesRead = audioLine.read(buffer, 0, buffer.length);
outputStream.write(buffer,0,bytesRead);
}
and getAudioFormat function is
AudioFormat getAudioFormat() {
float sampleRate = 48000;
int sampleSizeInBits = 16;
int channels = 2;
boolean signed = true;
boolean bigEndian = true;
return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed,
bigEndian);
}
I'm only hearing white noise; if someone can help, please do.
Okay, so I found out the problem: I just had to set bigEndian to false.
It's the byte order difference. I don't understand why it's different between Android and PC, but that does the trick.
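For reference, the corrected getAudioFormat() on the Windows side only differs in the bigEndian flag, so the desktop capture matches the little-endian 16-bit PCM the Android AudioTrack expects:
AudioFormat getAudioFormat() {
    float sampleRate = 48000;
    int sampleSizeInBits = 16;
    int channels = 2;
    boolean signed = true;
    boolean bigEndian = false; // little-endian, to match the Android side
    return new AudioFormat(sampleRate, sampleSizeInBits, channels, signed,
            bigEndian);
}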

audiotrack: playing noise for a raw pcm 16bit wav file

I am trying to play a WAV file (230 MB, 20 minutes) whose properties are as follows:
ffmpeg -i 1.wav
Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s
I am learning how to use AudioTrack.
I found two solutions to play this audio using AudioTrack.
Solution 1: the following plays the audio
int frequency = 44100;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_STEREO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
int bufferSize = AudioTrack.getMinBufferSize(frequency, channelConfiguration,audioEncoding);
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, frequency,
channelConfiguration, audioEncoding, bufferSize,
AudioTrack.MODE_STREAM);
int count = 0;
byte[] data = new byte[bufferSize];
try{
FileInputStream fileInputStream = new FileInputStream(listMusicFiles.get(0).listmusicfiles_fullfilepath);
DataInputStream dataInputStream = new DataInputStream(fileInputStream);
audioTrack.play();
while((count = dataInputStream.read(data, 0, bufferSize)) > -1){
audioTrack.write(data, 0, count);
}
audioTrack.stop();
audioTrack.release();
dataInputStream.close();
fileInputStream.close();
}
catch (FileNotFoundException e){
e.printStackTrace();
} catch (IOException e) {
e.printStackTrace();
}
Solution 2: the following plays noise
int frequency = 44100;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_STEREO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
int bufferSize = AudioTrack.getMinBufferSize(frequency, channelConfiguration,audioEncoding);
short[] audiodata = new short[bufferSize];
try {
DataInputStream dis = new DataInputStream(
new BufferedInputStream(new FileInputStream(
listMusicFiles.get(0).listmusicfiles_fullfilepath)));
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, frequency,
channelConfiguration, audioEncoding, bufferSize,
AudioTrack.MODE_STREAM);
audioTrack.play();
while (dis.available() > 0) {
int i = 0;
while (dis.available() > 0 && i < audiodata.length) {
audiodata[i] = dis.readShort();
i++;
}
audioTrack.write(audiodata, 0, audiodata.length);
}
dis.close();
} catch (Throwable t) {
Log.e("AudioTrack", "Playback Failed");
}
I am new to short samples and byte samples. I tried to understand the difference, but it is not so easy.
I can see that the first solution uses byte samples and the second solution uses short samples.
So why is the second solution not working?
The size of a short is two bytes. You might have a look at this documentation as well.
The audio track has a recommended buffer size and sample rate, which can be found in the way suggested in this answer. Please have a look.
However, it is important to play the audio using the recommended sample rate, which is 44100 Hz in your case, and the recommended buffer size that you get from the following code segment.
AudioTrack.getMinBufferSize(frequency, channelConfiguration, audioEncoding)
In your implementation with the short array, the buffer is effectively twice as large (each short holds two bytes), and that is what creates the noise during playback. I would suggest halving the buffer size in the short-based implementation:
short[] audiodata = new short[(int) bufferSize / 2];
Hope you have understood the problem.
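Applying that change to the loop from the second solution would look roughly like this (also writing only the i samples actually read, rather than the whole array each pass):
short[] audiodata = new short[bufferSize / 2]; // bufferSize is in bytes; each short holds two bytes
audioTrack.play();
while (dis.available() > 0) {
    int i = 0;
    while (dis.available() > 0 && i < audiodata.length) {
        audiodata[i] = dis.readShort();
        i++;
    }
    audioTrack.write(audiodata, 0, i); // write only the samples just read
}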

Getting Audio issue in Youtube Live streaming in android application

I am developing an Android live-streaming app. I am able to stream live video to the YouTube channel, but the problem is that the live stream has no audio.
My code is like below:
private static final int frequency= 44100;
public void recordThread() {
Log.d(MainActivity.APP_NAME, "recordThread");
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
int channelConfiguration = AudioFormat.CHANNEL_IN_STEREO;
int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
Log.i(MainActivity.APP_NAME, "AudioRecord buffer size: " + bufferSize);
// 16 bit PCM stereo recording was chosen as example.
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, frequency, channelConfiguration,
audioEncoding, bufferSize);
recorder.startRecording();
// Make bufferSize be in samples instead of bytes.
bufferSize /= 2;
short[] buffer = new short[bufferSize];
while (!cancel) {
int bufferReadResult = recorder.read(buffer, 0, bufferSize);
// Utils.Debug("bufferReadResult: " + bufferReadResult);
if (bufferReadResult > 0) {
frameCallback.handleFrame(buffer, bufferReadResult);
} else if (bufferReadResult < 0) {
Log.w(MainActivity.APP_NAME, "Error calling recorder.read: " + bufferReadResult);
}
}
recorder.stop();
Log.d(MainActivity.APP_NAME, "exit recordThread");
}
Please suggest a solution to get out of this issue.
Search on GitHub. There you will find a sample project (yasea) for audio encoding. Combine those two projects and you will get the solution.
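To make the "audio encoding" part a bit more concrete: the raw PCM frames coming out of recordThread() have to be compressed (typically to AAC) and muxed into the outgoing stream. The snippet below is only a rough sketch of such an encoder setup with assumed parameters (44100 Hz stereo, 128 kbps), not code taken from yasea:
// Rough sketch (assumed parameters): an AAC encoder that the PCM buffers
// from recorder.read(...) would be queued into before muxing into the stream.
MediaFormat format = MediaFormat.createAudioFormat("audio/mp4a-latm", 44100, 2);
format.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
format.setInteger(MediaFormat.KEY_BIT_RATE, 128000);
try {
    MediaCodec encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
    encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.start();
    // feed the short[] buffers from recorder.read(...) into the encoder's input
    // buffers, then drain its output buffers into the stream's audio track
} catch (IOException e) {
    Log.e(MainActivity.APP_NAME, "Could not create AAC encoder", e);
}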

Whistle detection with devices having ICS Android?

I am using the musicg library for whistle detection in my app.
So far, the library works great when I test it on devices running Froyo, Gingerbread or even Jelly Bean, but when testing on ICS it does not detect the whistles properly.
For whistle detection, the library has a class named WhistleApi.java with boundary values for frequency, intensity, standard deviation, etc.:
protected void init(){
// settings for detecting a whistle
minFrequency = 600.0f;
maxFrequency = Double.MAX_VALUE;
minIntensity = 100.0f;
maxIntensity = 100000.0f;
minStandardDeviation = 0.1f;
maxStandardDeviation = 1.0f;
highPass = 100;
lowPass = 10000;
minNumZeroCross = 50;
maxNumZeroCross = 200;
numRobust = 10;
}
So far, by analyzing the logs, I know that the whistle fails the check in isPassedStandardDeviation(double[][] spectrogramData) of the DetectionApi.java class.
The AudioRecord is being initialized like this:
private int channelConfiguration = AudioFormat.CHANNEL_IN_MONO;
private int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
private int sampleRate = 44100;
private int frameByteSize = 2048;
sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC); // overrides the 44100 default above
int recBufSize = AudioRecord.getMinBufferSize(sampleRate, channelConfiguration, audioEncoding); // need to be larger than size of a frame
AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate, channelConfiguration, audioEncoding, recBufSize);
I have tried different workarounds by changing the boundary values in the WhistleApi.java class, with no results. Can anyone guide me as to what I am overlooking, or where I can find the mistake in the library?
Thanks in advance :)
I faced a similar problem. For me, the whistle demo detected whistles on a Micromax Funbook Pro tablet with ICS but did not detect them on a Micromax A110 phone with ICS.
In WhistleApi I changed minNumZeroCross from 50 to 20 and it resolved the problem.
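For reference, that is the only change to the init() shown in the question:
protected void init() {
    // settings for detecting a whistle (minNumZeroCross relaxed from 50 to 20)
    minFrequency = 600.0f;
    maxFrequency = Double.MAX_VALUE;
    minIntensity = 100.0f;
    maxIntensity = 100000.0f;
    minStandardDeviation = 0.1f;
    maxStandardDeviation = 1.0f;
    highPass = 100;
    lowPass = 10000;
    minNumZeroCross = 20; // was 50
    maxNumZeroCross = 200;
    numRobust = 10;
}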

AudioRecord: buffer overflow?

I'm getting a buffer overflow while recording with my app. The recording is performed in a Service. I could not figure out why I'm getting this message from AudioFlinger.
Below I instantiate the AudioRecord object and set its callbacks.
bufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
aRecorder = new AudioRecord(audioSource, sampleRate, channelConfig, audioFormat, bufferSize);
aRecorder.setRecordPositionUpdateListener(updateListener);
bytesPerSample = bitsPerSample / 8;
int bytesPerFrame = nChannels * bytesPerSample;
framePeriod = bufferSize / bytesPerFrame; // number of frames that fit in a buffer of bufferSize bytes
int result = aRecorder.setPositionNotificationPeriod(framePeriod);
buffer = new byte[bufferSize];
The audioRecord callback:
private AudioRecord.OnRecordPositionUpdateListener updateListener = new AudioRecord.OnRecordPositionUpdateListener(){
public void onPeriodicNotification(AudioRecord recorder){
int result = aRecorder.read(buffer, 0, buffer.length);
}
public void onMarkerReached(AudioRecord recorder)
{}
};
I suspect the problem is related to aRecorder.setPositionNotificationPeriod(framePeriod); maybe the period is too big for this bufferSize and a faster (smaller) period will solve the issue.
Could someone tells me how to get rid of the buffer overflow?
To fix that issue, change the buffer size of AudioRecord to 2 times the minimum buffer size.
You can use AudioRecord.getMinBufferSize() static method. This will give you the minimum buffer size to use for your current format.
The syntax of getMinBufferSize() method is:
public static int getMinBufferSize (
int sampleRateInHz, int channelConfig, int audioFormat)
Anything less than this number will result in failure while creating the AudioRecord object.
You should avoid reducing the buffer size below this, so as not to overwhelm the audio subsystem with demands for data.
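Applied to the constructor call from the question, that suggestion looks roughly like this:
int minBufferSize = AudioRecord.getMinBufferSize(sampleRate, channelConfig, audioFormat);
bufferSize = 2 * minBufferSize; // twice the minimum, as suggested above
aRecorder = new AudioRecord(audioSource, sampleRate, channelConfig, audioFormat, bufferSize);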
Remember to put the overridden methods (@Override) on the AudioRecord callback as follows:
private AudioRecord.OnRecordPositionUpdateListener updateListener = new AudioRecord.OnRecordPositionUpdateListener(){
@Override
public void onPeriodicNotification(AudioRecord recorder){
int result = aRecorder.read(buffer, 0, buffer.length);
}
@Override
public void onMarkerReached(AudioRecord recorder)
{}
};
I recommend reading this post: Android audio recording, part 2.
One more thing you could try is to use one thread for recording and another to process the recorded bytes, thus avoiding too much load on the main UI thread.
The open source sample code for this approach: musicg_android_demo
Check this post for more - android-audiorecord-class-process-live-mic-audio-quickly-set-up-callback-func
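A minimal sketch of that threading idea, reusing the aRecorder and buffer fields from the question (isRecording and processAudio() are assumed names for your own stop flag and consumer):
// Read on a dedicated thread so the capture buffer is drained promptly,
// independently of the main UI thread.
Thread recordingThread = new Thread(new Runnable() {
    @Override
    public void run() {
        aRecorder.startRecording();
        while (isRecording) {                       // assumed volatile stop flag
            int read = aRecorder.read(buffer, 0, buffer.length);
            if (read > 0) {
                processAudio(buffer, read);         // assumed consumer (file, socket, ...)
            }
        }
        aRecorder.stop();
    }
}, "AudioRecordThread");
recordingThread.start();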
That's because:
framePeriod = bufferSize / bytesPerFrame;
You need to multiply, not divide, your buffer size.
Try with:
framePeriod = bufferSize * bytesPerFrame;
And if you need a sample: here is a complete audio capture class.
Hope it helps.
