I know there are a lot of questions about this topic, but I'm having an issue with ENCODING_PCM_FLOAT.
I have a PCM stream with the following properties:
Encoding: 32-bit float
Byte order: little endian
Channels: 2 (stereo)
Sample rate: 48000 Hz
I want to feed it to an AudioTrack, using the following APIs:
private final int SAMPLE_RATE = 48000;
private final int CHANNEL_MASK = AudioFormat.CHANNEL_OUT_STEREO;
private final int ENCODING = AudioFormat.ENCODING_PCM_FLOAT;
...
int bufferSize = AudioTrack.getMinBufferSize(SAMPLE_RATE, CHANNEL_MASK, ENCODING);
AudioAttributes audioAttributes = new AudioAttributes.Builder()
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .build();
AudioFormat audioFormat = new AudioFormat.Builder()
        .setEncoding(ENCODING)
        .setSampleRate(SAMPLE_RATE)
        .setChannelMask(CHANNEL_MASK) // make the stereo layout explicit
        .build();
audioTrack = new AudioTrack(audioAttributes, audioFormat, bufferSize,
        AudioTrack.MODE_STREAM, AudioManager.AUDIO_SESSION_ID_GENERATE);
audioTrack.play();
Then I start listening to the audio stream:
private void startListening() {
    while (true) {
        try {
            byte[] buffer = new byte[8192];
            DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
            mSocket.receive(packet);
            FloatBuffer floatBuffer = ByteBuffer.wrap(packet.getData()).asFloatBuffer();
            float[] audioFloats = new float[floatBuffer.capacity()];
            floatBuffer.get(audioFloats);
            for (int i = 0; i < audioFloats.length; i++) {
                audioFloats[i] = audioFloats[i] / 0x8000000;
            }
            audioTrack.write(audioFloats, 0, audioFloats.length, AudioTrack.WRITE_NON_BLOCKING);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
But I don't hear any sound at all. I can play the byte array as PCM_16 (without converting it to a float array), although it contains a lot of noise, so I don't think the stream input is the problem.
If you have any idea, please let me know.
Thanks for reading!
for (int i = 0; i < audioFloats.length; i++) {
    audioFloats[i] = audioFloats[i] / 0x8000000;
}
I was an idiot: the code block above is not needed. After removing it, the audio plays normally. ENCODING_PCM_FLOAT expects samples that are already normalized to the [-1.0, 1.0] float range, so there is nothing to rescale.
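For reference, a minimal sketch of the corrected receive-and-write loop (same names as above; the .order() call is an assumption based on the stream being declared little endian, and the floats are assumed to arrive already normalized to [-1.0, 1.0]):

byte[] buffer = new byte[8192];
DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
mSocket.receive(packet);

// Interpret only the received bytes as 32-bit little-endian float samples.
FloatBuffer floatBuffer = ByteBuffer.wrap(packet.getData(), 0, packet.getLength())
        .order(ByteOrder.LITTLE_ENDIAN)
        .asFloatBuffer();
float[] audioFloats = new float[floatBuffer.remaining()];
floatBuffer.get(audioFloats);

// No rescaling: ENCODING_PCM_FLOAT consumes [-1.0, 1.0] floats as-is.
audioTrack.write(audioFloats, 0, audioFloats.length, AudioTrack.WRITE_NON_BLOCKING);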
I am new to Android. I want to record some audio, so I created an Android app. It works.
But the audio it records is too loud and easily produces clipping distortion.
For example, first I start my app and say 'hello' to my Android phone, and I get the recording file 'recording1.pcm'; then I start another recording app from Google Play, say 'hello' to my Android phone at the same volume, and I get the recording file 'recording2.pcm'.
Then I open those two files in Audition: the waveform of 'recording1.pcm' (recorded by my app) is clipped, while the waveform of 'recording2.pcm' (recorded by the other app) is not. (I am sorry I cannot embed images.)
The core recording code is as follows:
private class RecordAudio extends AsyncTask<Void, Integer, Void> {
    @Override
    protected Void doInBackground(Void... params) {
        isRecording = true;
        try {
            DataOutputStream dos = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(recordingFile)));
            int bufferSize = AudioRecord.getMinBufferSize(frequency, channelConfiguration, audioEncoding);
            AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER,
                    frequency, channelConfiguration, audioEncoding, bufferSize);
            short[] buffer = new short[bufferSize * 99];
            audioRecord.startRecording();
            int r = 0;
            while (isRecording) {
                int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
                for (int i = 0; i < bufferReadResult; i++) {
                    dos.writeShort(buffer[i]);
                }
                publishProgress(new Integer(r));
                r++;
            }
            audioRecord.stop();
            dos.close();
        } catch (Throwable t) {
            Log.e("AudioRecord", "Recording Failed");
        }
        return null;
    }
}
Other parameters in my app:
frequency = 16000;
channelConfiguration = AudioFormat.CHANNEL_IN_STEREO;
audioEncoding = AudioFormat.ENCODING_PCM_16BIT;
I have tried the following:
I changed dos.writeShort(buffer[i]); to dos.writeShort(buffer[i] / 2);. The amplitude of the waveform was then halved, but the clipping distortion remained. I think the data had already been clipped before it reached the buffer (see the sketch after this list).
I looked through the AudioRecord API, but found no API for the microphone input volume.
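To illustrate that reasoning with a toy example (illustration only, not the app's code): if a sample already saturated at the 16-bit ceiling before AudioRecord handed it over, attenuating it afterwards only yields a quieter wave with the same flat top.

// The true peak was higher, but 16-bit capture saturates at 32767.
short recorded = Short.MAX_VALUE;        // flat-topped sample from the driver
short halved = (short) (recorded / 2);   // 16383: quieter, but still flat-topped
// The clipped shape (and thus the distortion) is unchanged; only the scale is.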
Thanks for your help!
I'm working on a project using stereo recording on Android phones (a Note 3). I need to split the data from the two channels (left, right). Any idea how to do that?
Currently I use AudioRecord to record the sound from the internal microphones, and I can record and save the sound to .raw and .wav files.
Some of the code follows.
private int audioSource = MediaRecorder.AudioSource.MIC;
private static int sampleRateInHz = 44100;
private static int channelConfig = AudioFormat.CHANNEL_IN_STEREO;
private static int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
bufferSizeInBytes = AudioRecord.getMinBufferSize(sampleRateInHz,
channelConfig, audioFormat);
audioRecord = new AudioRecord(audioSource, sampleRateInHz,
channelConfig, audioFormat, bufferSizeInBytes);
// some other code....
// get the data from audioRecord
readsize = audioRecord.read(audiodata, 0, bufferSizeInBytes);
Finally, I got the answer. I used stereo recording on the Android phone, and the audioFormat is PCM_16BIT:
private int audioSource = MediaRecorder.AudioSource.MIC;
private static int sampleRateInHz = 48000;
private static int channelConfig = AudioFormat.CHANNEL_IN_STEREO;
private static int audioFormat = AudioFormat.ENCODING_PCM_16BIT;
which means the data is stored in the buffer as follows, byte-wise (each four-byte frame holds the left sample first, then the right):
left channel data: [0,1], [4,5] ...
right channel data: [2,3], [6,7] ...
So here is the code for splitting the data of the stereo recording:
readSize = audioRecord.read(audioData, 0, bufferSizeInBytes);
// Each four-byte frame holds a 16-bit left sample followed by a 16-bit right sample.
for (int i = 0; i < readSize / 2; i = i + 2) {
    leftChannelAudioData[i] = audioData[2 * i];
    leftChannelAudioData[i + 1] = audioData[2 * i + 1];
    rightChannelAudioData[i] = audioData[2 * i + 2];
    rightChannelAudioData[i + 1] = audioData[2 * i + 3];
}
Then you can write the data to files:
leftChannelFos = new FileOutputStream(rawLeftChannelDataFile);
rightChannelFos = new FileOutputStream(rawRightChannelDataFile);
leftChannelBos = new BufferedOutputStream(leftChannelFos);
rightChannelBos = new BufferedOutputStream(rightChannelFos);
leftChannelDos = new DataOutputStream(leftChannelBos);
rightChannelDos = new DataOutputStream(rightChannelBos);
leftChannelDos.write(leftChannelAudioData);
rightChannelDos.write(rightChannelAudioData);
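If you read into a short[] buffer instead, each frame is simply one left sample followed by one right sample; a minimal sketch under that assumption (the variable names are illustrative):

short[] frames = new short[bufferSizeInBytes / 2];
int read = audioRecord.read(frames, 0, frames.length);

short[] left = new short[read / 2];
short[] right = new short[read / 2];
for (int i = 0; i < read / 2; i++) {
    left[i] = frames[2 * i];      // even samples: left channel
    right[i] = frames[2 * i + 1]; // odd samples: right channel
}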
Happy coding!
I am currently developing an Android application that has to record the microphone input as a PCM stream.
Whenever I record something, I experience strange stuttering and I can't find a solution to it.
Here's my code:
In my MainActivity I have an AsyncTask for the microphone input:
ArrayList<byte[]> mBufferList;

@Override
protected String doInBackground(String... params) {
    Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
    mMicrophone = new Microphone();
    mMicrophone.init();
    byte[] buffer;
    while (mRecord) {
        try {
            mMicrophone.record();
            buffer = mMicrophone.getBuffer();
            mBufferList.add(buffer);
        } catch (IOException e) {
            // ignore read errors and keep recording
        }
    }
    return null;
}
In my Microphone class I initialize the AudioRecord:
public void init() {
    Log.d("DEBUG", "Microphone: Recording started");
    mBufferSize = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_IN_STEREO,
            AudioFormat.ENCODING_PCM_16BIT);
    mRecorder = new AudioRecord(AudioSource.MIC, 44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO,
            AudioFormat.ENCODING_PCM_16BIT, mBufferSize);
    mRecorder.startRecording();
    mBuffer = new short[mBufferSize];
}
The record method:
public void record() throws IOException {
    mRecorder.read(mBuffer, 0, mBufferSize);
}
short[] to byte[] conversion:
public byte[] shortToBytes(short[] sData) {
    int shortArrsize = sData.length;
    byte[] bytes = new byte[shortArrsize * 2];
    for (int i = 0; i < shortArrsize; i++) {
        bytes[i * 2] = (byte) (sData[i] & 0x00FF);   // low-order byte first (little endian)
        bytes[(i * 2) + 1] = (byte) (sData[i] >> 8); // then the high-order byte
        sData[i] = 0;                                // clear the source buffer as we go
    }
    return bytes;
}
Method to retrieve the buffer:
public byte[] getBuffer() {
    byte[] buffer = shortToBytes(mBuffer);
    return buffer;
}
I have uploaded a wav-file which demonstrates the stutter effect. I'm saying 'One':
Wav-File
I already tried changing the sample rates, buffer sizes, et cetera, but to no avail.
Any help is greatly appreciated!
Please note: this error is not caused by the way I replay the PCM stream, since I have tested it on an Android device and even sent the raw data to a server to convert it to a WAV file there.
After hours and hours of desperately searching for a solution I have finally found the error.
I accidentally created my short buffer in the Microphone class like this:
mBuffer = new short[mBufferSize];
The buffer size is in bytes, though, so I of course have to use mBufferSize / 2:
mBuffer = new short[mBufferSize/2];
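In general, AudioRecord.getMinBufferSize() returns a size in bytes, so for 16-bit PCM the matching short[] needs half as many elements; a minimal sketch:

int minBufferSizeInBytes = AudioRecord.getMinBufferSize(44100,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
// 16-bit PCM: 2 bytes per sample, so halve the byte count for a short[].
short[] samples = new short[minBufferSizeInBytes / 2];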
I will keep my question online in case anyone is interested in the code and/or has a similar problem.
I'm programming for Android 2.1. Could you help me with the following problem?
I have three files, and the general purpose is to play a sound with AudioTrack, buffer by buffer. I'm getting pretty desperate here because I have tried about everything and there is still no sound coming out of my speakers (while Android's integrated MediaPlayer has no problem playing sounds via the emulator).
Source code:
An AudioPlayer class, which wraps the AudioTrack. It receives a buffer in which the sound is contained:
public AudioPlayer(int sampleRate, int channelConfiguration, int audioFormat) throws ProjectException {
    minBufferSize = AudioTrack.getMinBufferSize(sampleRate, channelConfiguration, audioFormat);
    audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfiguration,
            audioFormat, minBufferSize, AudioTrack.MODE_STREAM);
    if (audioTrack == null)
        throw new ProjectException("Error while instantiating AudioTrack");
    audioTrack.setStereoVolume((float) 1.0, (float) 1.0);
}

@Override
public void addToQueue(short[] buffer) {
    audioTrack.write(buffer, 0, buffer.length * Short.SIZE);
    if (!isPlaying) {
        audioTrack.play();
        isPlaying = true;
    }
}
A model class, which I use to fill the buffer. Normally it would load sound from a file, but here it just uses a simulator (440 Hz), for debugging.
Buffer sizes are chosen very loosely; normally the first buffer size would be 6615 and subsequent ones 4410. Again, that's only for debugging.
public void onTimeChange() {
    if (begin) {
        // First fill about 300 ms
        begin = false;
        short[][] buffer = new short[channels][numFramesBegin];
        // numFramesBegin is for example 10000
        // For debugging, only buffer[0] is useful
        fillSimulatedBuffer(buffer, framesRead);
        framesRead += numFramesBegin;
        audioPlayer.addToQueue(buffer[0]);
    } else {
        try {
            short[][] buffer = new short[channels][numFrames];
            // Afterwards fill about 200 ms
            fillSimulatedBuffer(buffer, framesRead);
            framesRead += numFrames;
            audioPlayer.addToQueue(buffer[0]);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
private short simulator(int time, short amplitude) {
    // a pure A (frequency = 440)
    // this is probably wrong due to the sampling rate, but 44 and 4400 won't work either
    return (short) (amplitude * ((short) (Math.sin((double) (simulatorFrequency * time)))));
}

private void fillSimulatedBuffer(short[][] buffer, int offset) {
    for (int i = 0; i < buffer[0].length; i++)
        buffer[0][i] = simulator(offset + i, amplitude);
}
A TimerTask class that calls model.onTimeChange() every 200 ms:
public class ReadMusic extends TimerTask {
    private final Model model;

    public ReadMusic(Model model) {
        this.model = model;
    }

    @Override
    public void run() {
        System.out.println("Task run");
        model.onTimeChange();
    }
}
What debugging showed me:
the TimerTask works fine; it does its job;
buffer values seem coherent, and the buffer size is bigger than minBufferSize;
the AudioTrack's playback state is "playing";
no exceptions are caught in the model functions.
Any ideas would be greatly appreciated!
OK, I found the problem.
There is an error in the current AudioTrack documentation regarding short-buffer input: the size passed to write() should be the length of the buffer itself (buffer.length), not its size in bytes.
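Applied to the addToQueue() method above, the corrected call is:

// AudioTrack.write(short[], int, int) takes the count in shorts, not bytes
audioTrack.write(buffer, 0, buffer.length);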
I want to do some FSK modulation over the audio port, but my sine wave isn't very good: it is disturbed in places. I used code originally from http://marblemice.blogspot.com/2010/04/generate-and-play-tone-in-android.html with further modifications from Playing an arbitrary tone with Android and https://market.android.com/details?id=re.serialout&feature=search_result .
So where is the failure? What am I doing wrong?
private static int bitRate = 300;
private static int sampleRate = 48000;
private static int freq1 = 600;

public static void loopOnes() {
    playque.add(UARTHigh());
    athread.interrupt();
}

private static byte[] UARTHigh() {
    int numSamples = sampleRate / bitRate;
    double[] sample = new double[numSamples];
    byte[] buffer = new byte[numSamples * 2];
    for (int i = 0; i < numSamples; ++i) {
        sample[i] = Math.sin(2 * Math.PI * i * freq1 / sampleRate);
    }
    int idx = 0;
    for (final double dVal : sample) {
        // scale to maximum amplitude
        final short val = (short) (dVal * 32767);
        // in 16-bit WAV PCM, the first byte is the low-order byte
        buffer[idx++] = (byte) (val & 0x00ff);
        buffer[idx++] = (byte) ((val & 0xff00) >>> 8);
    }
    return buffer;
}
private static void playSound() {
    active = true;
    while (active) {
        try {
            Thread.sleep(Long.MAX_VALUE);
        } catch (InterruptedException e) {
            while (playque.isEmpty() == false) {
                if (atrk != null) {
                    if (generatedSnd != null) {
                        // let the previous sample finish playing first;
                        // the SystemClock.sleep(50) below could be tuned to a smarter number
                        while (atrk.getPlaybackHeadPosition() < generatedSnd.length)
                            SystemClock.sleep(50);
                    }
                    atrk.release();
                }
                UpdateParameters(); // might as well do it at every iteration, it's cheap
                generatedSnd = playque.poll();
                length = generatedSnd.length;
                if (minbufsize < length)
                    minbufsize = length;
                atrk = new AudioTrack(AudioManager.STREAM_MUSIC,
                        sampleRate, AudioFormat.CHANNEL_CONFIGURATION_MONO,
                        AudioFormat.ENCODING_PCM_16BIT, minbufsize,
                        AudioTrack.MODE_STATIC);
                atrk.setStereoVolume(1, 1);
                atrk.write(generatedSnd, 0, length);
                atrk.play();
            }
            // the play queue is empty => send the stop bit:
            // set loop points and keep playing
            int setLoopError = atrk.setLoopPoints(0, length, -1);
            atrk.play();
        }
    }
}
So the answer is to change from MODE_STATIC to MODE_STREAM and not use loop points. A busy loop in a new low-priority thread writes the tracks.
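A minimal sketch of that streaming approach (names like playque, active, and sampleRate are carried over from above; the thread setup is illustrative, not the poster's actual code):

final AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_MONO,
                AudioFormat.ENCODING_PCM_16BIT),
        AudioTrack.MODE_STREAM);
track.play();

Thread writer = new Thread(new Runnable() {
    @Override
    public void run() {
        while (active) {
            byte[] chunk = playque.poll(); // next pre-generated bit waveform
            if (chunk != null) {
                // blocks until the data is queued, keeping the stream gapless
                track.write(chunk, 0, chunk.length);
            }
        }
    }
});
writer.setPriority(Thread.MIN_PRIORITY);
writer.start();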