I'm working with Android, trying to make my AudioTrack application play a Windows .wav file (Tada.wav). Frankly, it shouldn't be this hard, but I'm hearing a lot of strange stuff. The file is saved on my phone's mini SD card and reading the contents doesn't seem to be a problem, but when I play the file (with parameters I'm only PRETTY SURE are right), I get a few seconds of white noise before the sound seems to resolve itself into something that just may be right.
I have successfully recorded and played my own voice back on the phone -- I created a .pcm file according to the directions in this example:
http://emeadev.blogspot.com/2009/09/raw-audio-manipulation-in-android.html
(without the backwards masking)...
Anybody got some suggestions or awareness of an example on the web for playing a .wav file on an Android??
Thanks,
R.
I stumbled on the answer (frankly, by trying &^#! I didn't think would work), in case anybody's interested... In my original code (which is derived from the example in the link in the original post), the data is read from the file like so:
InputStream is = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(is, 8000);
DataInputStream dis = new DataInputStream(bis); // Create a DataInputStream to read the audio data from the saved file

int i = 0; // Read the file into the "music" array
while (dis.available() > 0)
{
    music[i] = dis.readShort(); // readShort() is big-endian, so this swaps the bytes of each little-endian sample
    i++;
}
dis.close(); // Close the input stream
In this version, music[] is an array of SHORTs. So the readShort() method would seem to make sense here, since the data is 16-bit PCM. However, on the Android that turns out to be the problem. I changed that code to the following:
music = new byte[(int) file.length()]; // size & length of the file
InputStream is = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(is, 8000);
DataInputStream dis = new DataInputStream(bis); // Create a DataInputStream to read the audio data from the saved file

int i = 0; // Read the file into the "music" array
while (dis.available() > 0)
{
    music[i] = dis.readByte(); // Bytes are read in file order, so the little-endian sample layout is preserved
    i++;
}
dis.close(); // Close the input stream
In this version, music[] is an array of BYTES. I'm still telling the AudioTrack that it's 16-bit PCM data, and my Android doesn't seem to have a problem with writing an array of bytes into an AudioTrack thus configured... Anyway, it finally sounds right, so if anyone else wants to play Windows sounds on their Android, for some reason, that's the solution. Ah, Endianness......
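For what it's worth, if you'd rather keep music[] as an array of SHORTs, java.nio can do the byte-order fix explicitly. A sketch I haven't actually run on a device (same file variable as above; uses java.nio.ByteBuffer and ByteOrder):

// Untested sketch: read the whole file, then reinterpret the bytes
// as little-endian shorts instead of using big-endian readShort()
byte[] raw = new byte[(int) file.length()];
DataInputStream dis = new DataInputStream(
        new BufferedInputStream(new FileInputStream(file), 8000));
dis.readFully(raw);
dis.close();

short[] music = new short[raw.length / 2];
ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(music);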
R.
I found a lot of long answers to this question. My final solution, which given all the cutting and pasting is hardly mine, comes down to:
public boolean play() {
    int i = 0;
    byte[] music = null;
    InputStream is = mContext.getResources().openRawResource(R.raw.noise);

    at = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize, AudioTrack.MODE_STREAM);

    try {
        music = new byte[512];
        at.play();

        while ((i = is.read(music)) != -1)
            at.write(music, 0, i);

        is.close();
    } catch (IOException e) {
        e.printStackTrace();
    }

    at.stop();
    at.release();
    return STOPPED;
}
STOPPED is just a "true" sent back as a signal to reset the pause/play button.
And in the class initializer:
public Mp3Track(Context context) {
    mContext = context;
    minBufferSize = AudioTrack.getMinBufferSize(44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
}
Context is just "this" from the calling activity.
You can use a FileInputStream for files on the sdcard, etc. My files are in res/raw.
Are you skipping the first 44 bytes of the file before you dump the rest of the file's data into the buffer? The first 44 bytes are the WAVE header and they would sound like random noise if you tried to play them.
Also, are you sure you are creating the AudioTrack with the same properties as the WAVE you are trying to play (sample rate, bit rate, number of channels, etc.)? Windows actually does a good job of giving you this information in the File Properties page.
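If the file has a canonical 44-byte header with no extra chunks (an assumption; some files carry LIST or fact chunks), skipping it can be as simple as this sketch:

// Sketch: skip the 44-byte WAVE header before streaming the samples.
// InputStream.skip() may skip fewer bytes than asked, so check the result.
InputStream is = new FileInputStream(file);
long skipped = is.skip(44);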
As Aaron C said, you have to skip the initial 44 bytes, or (as I prefer) read the first 44 bytes, which are the WAVE header. That way you know how many channels, bits per sample, length, etc. the WAVE contains.
Here you can find a good implementation of a WAVE header parser/writer.
Please don't perpetuate terrible parsing code. WAV parsing is trivial to implement (http://soundfile.sapp.org/doc/WaveFormat/), and you will thank yourself by being able to parse things such as the sampling rate, bit depth, and number of channels.
Also, x86 and ARM (at least by default) are both little-endian, so native-endian WAV files should be fine without any byte shuffling.
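For instance, assuming a canonical 44-byte PCM header laid out as in that link (real files can have extra chunks before "data", which this sketch ignores):

// Sketch: pull the format fields out of a canonical 44-byte WAV header.
byte[] header = new byte[44];
DataInputStream in = new DataInputStream(new FileInputStream(file));
in.readFully(header);
in.close();

ByteBuffer bb = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
short numChannels   = bb.getShort(22); // offset 22: channel count
int   sampleRate    = bb.getInt(24);   // offset 24: sample rate in Hz
short bitsPerSample = bb.getShort(34); // offset 34: bits per sample
int   dataSize      = bb.getInt(40);   // offset 40: size of the data chunk in bytes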
Just confirm that you have AudioTrack.MODE_STREAM and not AudioTrack.MODE_STATIC in the AudioTrack constructor:
AudioTrack at = new AudioTrack(
    AudioManager.STREAM_MUSIC,
    sampleRate,
    AudioFormat.CHANNEL_OUT_STEREO, // CHANNEL_IN_* constants are for AudioRecord, not AudioTrack
    AudioFormat.ENCODING_PCM_16BIT,
    // buffer length in bytes
    outputBufferSize,
    AudioTrack.MODE_STREAM
);
Sample wav file:
http://www.mauvecloud.net/sounds/pcm1644m.wav
Sample Code:
public class AudioTrackPlayer {
    Context mContext;
    int minBufferSize;
    AudioTrack at;
    boolean STOPPED;

    public AudioTrackPlayer(Context context) {
        Log.d("------", "init");
        mContext = context;
        minBufferSize = AudioTrack.getMinBufferSize(44100,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
    }

    public boolean play() {
        Log.d("------", "play");
        int i = 0;
        byte[] music = null;
        InputStream is = mContext.getResources().openRawResource(R.raw.pcm1644m);

        at = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize, AudioTrack.MODE_STREAM);

        try {
            music = new byte[512];
            at.play();
            while ((i = is.read(music)) != -1)
                at.write(music, 0, i);
            is.close();
        } catch (IOException e) {
            e.printStackTrace();
        }

        at.stop();
        at.release();
        return STOPPED;
    }
}
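A hypothetical call site from an Activity (note that play() blocks until the whole stream has been written, so in practice you would call it off the UI thread):

AudioTrackPlayer player = new AudioTrackPlayer(this);
player.play();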
I am currently working on an app which runs in the background and mutes the microphone during incoming and outgoing calls. But I am unable to mute the microphone during any other type of audio recording.
It would be of great help if I could be provided with any kind of solution.
Thanks in advance
I assume you're talking about an Android app, but you should be more specific in your questions.
Regardless, this is a post about muting the microphone which should answer your question.
How does setMicrophoneMute() work?
AudioManager::setMicrophoneMute only applies to voice calls (and VoIP). It's possible that it will affect recordings as well on some products, but there's no guarantee that it will, so you can't rely on it.
It should still mute the voice call uplink so that the other party can't hear what you're saying even if there's a recording ongoing. If it doesn't, I would consider that a bug in the implementation of the device you're testing on. However, what you say will end up in the recording that you make locally (unless you're using the VOICE_DOWNLINK AudioSource).
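For reference, a minimal sketch of flipping that flag (assumes you hold a Context):

// setMicrophoneMute() is an instance method on AudioManager
AudioManager am = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
am.setMicrophoneMute(true);  // mute the voice-call uplink
// ... later ...
am.setMicrophoneMute(false); // restore it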
If you don't want to record audio while video recording, you can mute the stream:
audioManager.setStreamMute(AudioManager.STREAM_MUSIC, true);
It also works with AudioManager.STREAM_SYSTEM on some devices.
There is no direct method to mute on AudioRecord, so we need a trick to mute the recording.
What I did is: I downloaded a silence wav file, converted it to bytes, and wrote those bytes into the output byte array instead.
When the user clicks the Mute button, isMuteClick = true, and when they unmute, it becomes false.
while (isStreaming)
{
    if (!isMuteClick) {
        // read() is a blocking call (it can be made non-blocking; see the docs)
        int bytesRead = recorder.read(readBuffer, 0, readBuffer.length);
        bytesReadTotal += bytesRead;
        mainBuffer.write(readBuffer, 0, bytesRead);
    } else {
        // keep reading from the mic so the stream stays in sync,
        // but write the decoded silence bytes to the output instead
        int bytesRead = recorder.read(readBuffer, 0, readBuffer.length);
        bytesReadTotal += bytesRead;
        mainBuffer.write(WavToByteArray(R.raw.silence), 0, bytesRead);
    }
}
And here is the code for converting the silence wav file to a byte array:
private byte[] WavToByteArray(int resourceId) {
    byte[] filteredByteArray = new byte[1024];
    try {
        InputStream inputStream = this.getResources().openRawResource(resourceId);
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

        byte[] soundBytes = new byte[1024];
        int i;
        while ((i = inputStream.read(soundBytes, 0, soundBytes.length)) > 0) {
            outputStream.write(soundBytes, 0, i);
        }
        inputStream.close();
        outputStream.close();

        // remove the 44-byte .wav header so only raw PCM remains
        byte[] audioBytes = outputStream.toByteArray();
        filteredByteArray = Arrays.copyOfRange(audioBytes, 44, audioBytes.length);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return filteredByteArray;
}
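A simpler variant of the same trick, if you don't want to ship a silence asset: keep reading from the recorder so the stream stays in sync, and just zero the buffer while muted (untested sketch):

int bytesRead = recorder.read(readBuffer, 0, readBuffer.length);
if (isMuteClick) {
    Arrays.fill(readBuffer, (byte) 0); // write silence without decoding a .wav
}
mainBuffer.write(readBuffer, 0, bytesRead);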
//constructor
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
/////////////
//thread run() method
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
try
{
    while (!stopped)
    {
        //if not paused, upload audio
        if (uploadAudio == true)
        {
            short[][] buffers = new short[256][160];
            int ix = 0;
            //allocate a buffer for the audio data
            short[] buffer = buffers[ix++ % buffers.length];
            //read audio data from the recorder
            N = recorder.read(buffer, 0, buffer.length);
            //create a byte array big enough to hold the audio data
            byte[] bytes2 = new byte[buffer.length * 2];
            //convert the audio data from short[] to byte[] (little endian)
            ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
            //encode the audio data as ulaw
            read(bytes2, 0, bytes2.length);

See here for the ulaw encoder code. I'm using the read, maxAbsPcm and encode methods.

            //send the audio data
            //os.write(bytes2, 0, bytes2.length);
        }
    }
    os.close();
}
catch (Throwable x)
{
    Log.w("AudioWorker", "Error reading voice AudioWorker", x);
}
finally
{
    recorder.stop();
    recorder.release();
}
///////////
So this works OK. The audio is sent in the proper format to the server and played at the opposite end. However, the audio often skips. Example: saying 1, 2, 3, 4 will play back with the 4 cut off.
I believe it to be a performance issue, because I have timed some of these methods: when they take 0 seconds everything works, but they quite often take a couple of seconds, with the byte conversion and the encoding taking the most.
Any idea how I can optimize this code to get better performance? Or maybe a way to deal with the lag (possibly by building a cache)?
Anyone know of any useful links for learning audio DSP for Android?
Or a sound library?
I'm trying to make a basic mixer for playing wav files, but realised I don't know enough about DSP, and I can't find anything at all for Android.
I have a wav file loaded into a byte array and an AudioTrack on a short loop.
How can I feed the data in?
I expect this post will be ignored, but it's worth a try.
File file = new File(filePath);
final byte[] byteData = new byte[(int) file.length()];

FileInputStream is = new FileInputStream(filePath);
BufferedInputStream bis = new BufferedInputStream(is);
DataInputStream dis = new DataInputStream(bis);

int i = 0;
while (dis.available() > 0) {
    byteData[i] = dis.readByte();
    i++;
}
dis.close();

final int minSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT);
track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_STEREO,
        AudioFormat.ENCODING_PCM_16BIT, minSize, AudioTrack.MODE_STREAM);
track.play();

bRun = true;
new Thread(new Runnable() {
    public void run() {
        // note: this writes only the first minSize bytes of byteData
        track.write(byteData, 0, minSize);
    }
}).start();
I'll give this a shot just because I was in your position a few months ago...
If you already have the wav file's audio samples in a byte array, you simply need to pass the samples to the audio track object (look up the write() methods).
To mix audio together you simply add the samples from each track: add the first sample of track 1 to the first sample of track 2, the second sample of track 1 to the second sample of track 2, and so on. The end result would ideally be a third array containing the summed samples, which you pass to the write() method of your audio track instance.
You must be mindful of clipping here. If your data type is short, then the maximum value allowed is 32767. A simple way to ensure that your added samples do not exceed this limit is to perform the addition in a variable whose data type is larger than a short (e.g. int), clamp the result to the short range, and cast it back to a short.
for (int i = 0; i < mixedAudio.length; i++) {
    int result = track1[i] + track2[i];
    if (result > 32767) {        // Short.MAX_VALUE
        result = 32767;
    }
    else if (result < -32768) {  // Short.MIN_VALUE
        result = -32768;
    }
    mixedAudio[i] = (short) result;
}
Notice how the snippet above also tests for the minimum range of a short.
Apologies for the lack of formatting here, I'm on my mobile phone on a train :-)
Good luck.
Background
I am creating a VoIP app. I know that there are plenty out already, but I have my reasons. Due to commercial implications I cannot just fork SipDroid, although it is a quality app. This app is aimed at API level 10, Gingerbread 2.3.3.
Problem
I have created a simple Activity which creates an AudioRecord instance, and then begins a loop:
int timestamp = 0;
int seqNr = 12;

while (true) {
    byte[] buffer = new byte[bufferSize];
    int num = recorder.read(buffer, 0, bufferSize);
    try {
        byte[] pcm = new byte[bufferSize];
        //
        // presumably here I convert the byte[] from PCM into G711??
        //
        RTPStream.Write(pcm, seqNr, timestamp);
        timestamp += num;
        seqNr++;
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Question
How do I turn the PCM 44kHz 16-bit mono byte[]s into G711 u-law/a-law byte[]s?
AudioGroup is available internally; it is what the native SipAudioCall uses. There is a way to use internal APIs, and the class became public in API 12, so you should use it.
Try using AudioStream instead. Set the codec via setCodec(AudioCodec) and acquire audio via AudioGroup.
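If you would rather do the conversion yourself, G.711 u-law compression of a 16-bit sample is a short function. A sketch of the standard algorithm (note that G.711 assumes 8kHz audio, so a 44kHz capture would also need downsampling first):

// Sketch: standard G.711 u-law compression of one 16-bit PCM sample
private static final int BIAS = 0x84;  // standard u-law bias
private static final int CLIP = 32635; // clamp magnitude before biasing

static byte linearToMuLaw(short pcm) {
    int sign = (pcm >> 8) & 0x80;             // keep the sign bit
    int magnitude = (sign != 0) ? -pcm : pcm; // work with the magnitude
    if (magnitude > CLIP) magnitude = CLIP;
    magnitude += BIAS;

    // find the segment: position of the highest set bit above bit 7
    int exponent = 7;
    for (int mask = 0x4000; (magnitude & mask) == 0 && exponent > 0; mask >>= 1) {
        exponent--;
    }
    int mantissa = (magnitude >> (exponent + 3)) & 0x0F;
    return (byte) ~(sign | (exponent << 4) | mantissa); // u-law bytes are stored inverted
}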
I am trying to read data from the MIC, process it, and store it in a file. But I am not getting any data from the MIC; the buffer is all zeroes.
int MIN_BUF = AudioRecord.getMinBufferSize(8000,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);

AudioRecord recorder = new AudioRecord(
        MediaRecorder.AudioSource.MIC, 8000,
        AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, MIN_BUF);

byte[] pcm_in = new byte[320];
recorder.startRecording();

while (record)
{
    int bytes_read = recorder.read(pcm_in, 0, pcm_in.length);
    switch (bytes_read)
    {
        case AudioRecord.ERROR_INVALID_OPERATION:
        case AudioRecord.ERROR_BAD_VALUE:
            Log.i("Microphone", "Error in reading the data");
            break;
        default:
            print(pcm_in);
            break;
    }
}
recorder.stop();
recorder.release();
But in print(pcm_in), when I printed byte by byte, I got all zeroes. There are some posts on Stack Overflow with similar issues, but my issue didn't get fixed by those.
Please help me fix this.
Thanks & Regards,
SSuman185
print(pcm_in) will show you the actual data. You need to copy the pcm_in data into pcm in a loop until you stop recording.
That is, your variable record is a boolean, right? You will make it false in another method, so until you make it false, recorder.read(pcm_in, 0, pcm_in.length) will get the data from your mic and put it into pcm_in (so you need to be sure that pcm is large enough to hold it). bytes_read will be the number of bytes read in this operation, so you can copy the pcm_in bytes into pcm in a loop that covers the whole pcm_in data.
for example:
bytes_read = recorder.read(pcm_in, 0, pcm_in.length);
for (int i = 0; i < bytes_read; i++) {
    pcm[i] = pcm_in[i];
}
But this is a weird usage. I think your pcm should be as large as the file you need to load into it, and make sure you are appending pcm_in to it, not overwriting it. I think this is what you want.
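Something like this sketch, using a ByteArrayOutputStream so you don't have to size pcm up front (hypothetical, adapted from the code above):

ByteArrayOutputStream pcmOut = new ByteArrayOutputStream();
while (record) {
    int bytes_read = recorder.read(pcm_in, 0, pcm_in.length);
    if (bytes_read > 0) {
        pcmOut.write(pcm_in, 0, bytes_read); // append each chunk, never overwrite
    }
}
byte[] pcm = pcmOut.toByteArray(); // the full recording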