I am currently working on an app that runs in the background and mutes the microphone during incoming and outgoing calls, but I am unable to mute the microphone when any type of audio recording is in progress.
Any kind of solution would be a great help.
Thanks in advance
I assume you're talking about an Android app, but you should be more specific in your questions.
Regardless, this post about muting the microphone should answer your question:
How does setMicrophoneMute() work?
AudioManager::setMicrophoneMute only applies to voice calls (and VoIP). It's possible that it will affect recordings as well on some products, but there's no guarantee that it will, so you can't rely on it.
It should still mute the voice call uplink so that the other party can't hear what you're saying even if there's a recording ongoing. If it doesn't I would consider that a bug in the implementation of the device you're testing this on. However, what you say will end up in the recording that you do locally (unless you're using the VOICE_DOWNLINK AudioSource).
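For reference, a minimal sketch of muting the call uplink, assuming you have a Context available and MODIFY_AUDIO_SETTINGS declared in the manifest:
// Minimal sketch: mute the voice-call uplink from a background component.
// Assumes MODIFY_AUDIO_SETTINGS is declared in the manifest and `context`
// is any valid Context (e.g. a Service).
AudioManager audioManager = (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);
audioManager.setMicrophoneMute(true);   // mute during the call
// ...
audioManager.setMicrophoneMute(false);  // restore when the call ends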
If you don't want to record audio while video recording, you can set:
AudioManager.setStreamMute(AudioManager.STREAM_MUSIC, true);
On some devices, muting AudioManager.STREAM_SYSTEM also works.
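A minimal sketch of toggling this around a recording session, assuming audioManager was obtained via getSystemService() (note that setStreamMute() was deprecated in API 23 in favour of adjustStreamVolume() with ADJUST_MUTE):
// Sketch: silence music/system output around a recording session.
// Assumes `audioManager` was obtained via getSystemService(Context.AUDIO_SERVICE).
void setPlaybackSilenced(AudioManager audioManager, boolean silenced) {
    if (android.os.Build.VERSION.SDK_INT >= 23) {
        // setStreamMute() is deprecated from API 23 onwards.
        audioManager.adjustStreamVolume(AudioManager.STREAM_MUSIC,
                silenced ? AudioManager.ADJUST_MUTE : AudioManager.ADJUST_UNMUTE, 0);
    } else {
        audioManager.setStreamMute(AudioManager.STREAM_MUSIC, silenced);
    }
}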
There is no direct way to mute an AudioRecord, so we need a small trick to mute the recording.
What I did was download a silent WAV file, convert it to a byte array, and write those bytes into the output instead of the microphone data.
When the user taps the Mute button, isMuteClick is set to true; when they unmute, it is set back to false.
while (isStreaming) {
    if (!isMuteClick) {
        // read() is a blocking call; it fills readBuffer with live microphone data.
        int bytesRead = recorder.read(readBuffer, 0, readBuffer.length);
        bytesReadTotal += bytesRead;
        mainBuffer.write(readBuffer, 0, bytesRead);
    } else {
        // While muted, keep draining the recorder so it does not stall,
        // but write bytes from the silent WAV into the output instead.
        byte[] silence = WavToByteArray(R.raw.silence);
        int bytesRead = recorder.read(readBuffer, 0, readBuffer.length);
        bytesReadTotal += bytesRead;
        mainBuffer.write(silence, 0, Math.min(bytesRead, silence.length));
    }
}
And here is the code for converting the silence WAV file to a byte array:
private byte[] WavToByteArray(int resourceId) {
    byte[] filteredByteArray = new byte[1024];
    try {
        InputStream inputStream = this.getResources().openRawResource(resourceId);
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        byte[] soundBytes = new byte[1024];
        int i;
        while ((i = inputStream.read(soundBytes, 0, soundBytes.length)) > 0) {
            outputStream.write(soundBytes, 0, i);
        }
        inputStream.close();
        outputStream.close();
        // Remove the 44-byte .wav header so only raw PCM remains.
        byte[] audioBytes = outputStream.toByteArray();
        filteredByteArray = Arrays.copyOfRange(audioBytes, 44, audioBytes.length);
    } catch (Exception e) {
        e.printStackTrace();
    }
    return filteredByteArray;
}
Related
I'm working on an Android app and I would like to play some short sounds (~2 s). I tried SoundPool, but it doesn't really suit me since it can't check whether a sound is already playing, so I decided to use AudioTrack.
It works quite well, BUT most of the time there is a "click" sound when it begins to play a sound.
I checked my audio files and they are clean.
I use AudioTrack in stream mode. I have read that static mode is better for short sounds, but after many searches I still don't understand how to make it work.
I also read that the clicking noise can be caused by the header of the WAV file, so maybe the sound would disappear if I skipped this header with the setPlaybackHeadPosition(int positionInFrames) method (which is supposed to work only in static mode).
Here is my code (so the problem is the clicking noise at the beginning):
int minBufferSize = AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, 44100, AudioFormat.CHANNEL_CONFIGURATION_MONO,
        AudioFormat.ENCODING_PCM_16BIT, minBufferSize, AudioTrack.MODE_STREAM);
audioTrack.play();

int i = 0;
int bufferSize = 2048; // don't really know which value to put
audioTrack.setPlaybackRate(88200);
byte[] buffer = new byte[bufferSize];

// there we open the wav file
InputStream inputStream = getResources().openRawResource(R.raw.abordage);
try {
    while ((i = inputStream.read(buffer)) != -1)
        audioTrack.write(buffer, 0, i);
} catch (IOException e) {
    e.printStackTrace();
}
try {
    inputStream.close();
} catch (IOException e) {
    e.printStackTrace();
}
Does anyone have a solution to avoid that noise? I tried this, which works sometimes but not every time. Could someone show me how to implement AudioTrack in MODE_STATIC?
Thank you
I found that Scott Stensland's reasoning fit my issue (thanks!).
I eliminated the pop by running a dead simple linear fade-in filter over the beginning of the sample array. The filter makes sample values start from 0 and slowly increase in amplitude to their original value. By always starting at a value of 0 at the zero cross over point the pop never occurs.
A similar fade-out filter was applied at the end of the sample array. The filter duration can easily be adjusted.
import android.util.Log;

public class FadeInFadeOutFilter
{
    private static final String TAG = FadeInFadeOutFilter.class.getSimpleName();

    private final int filterDurationInSamples;

    public FadeInFadeOutFilter ( int filterDurationInSamples )
    {
        this.filterDurationInSamples = filterDurationInSamples;
    }

    public void filter ( short[] audioShortArray )
    {
        filter(audioShortArray, audioShortArray.length);
    }

    public void filter ( short[] audioShortArray, int audioShortArraySize )
    {
        if ( audioShortArraySize/2 <= filterDurationInSamples ) {
            Log.w(TAG, "filtering audioShortArray with fewer samples than filterDurationInSamples; untested, pops or even crashes may occur. audioShortArraySize="+audioShortArraySize+", filterDurationInSamples="+filterDurationInSamples);
        }
        final int I = Math.min(filterDurationInSamples, audioShortArraySize/2);

        // Perform fade-in and fade-out simultaneously in one loop.
        final int fadeOutOffset = audioShortArraySize - filterDurationInSamples;
        for ( int i = 0 ; i < I ; i++ ) {
            // Fade-in at the beginning.
            final double fadeInAmplification = (double)i/I; // Linear ramp-up 0..1.
            audioShortArray[i] = (short)(fadeInAmplification * audioShortArray[i]);

            // Fade-out at the end.
            final double fadeOutAmplification = 1 - fadeInAmplification; // Linear ramp-down 1..0.
            final int j = i + fadeOutOffset;
            audioShortArray[j] = (short)(fadeOutAmplification * audioShortArray[j]);
        }
    }
}
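A possible usage sketch, assuming pcmSamples holds your decoded 16-bit mono PCM at 44.1 kHz and audioTrack is an already configured AudioTrack:
// Sketch: fade roughly the first and last 20 ms of the clip before playback.
// `pcmSamples` (short[]) and `audioTrack` are assumptions from the caller's code.
FadeInFadeOutFilter fadeFilter = new FadeInFadeOutFilter(882); // ~20 ms at 44100 Hz
fadeFilter.filter(pcmSamples);
audioTrack.write(pcmSamples, 0, pcmSamples.length);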
In my case, it was the WAV header.
And...
byte[] buf44 = new byte[44];
int read = inputStream.read(buf44, 0, 44);
...solved it (it reads and discards the 44-byte header before the rest of the data is streamed).
A common cause of audio "pop" is the rendering process not starting/stopping the sound at the zero crossover point (assuming a min/max of -1 to +1, the crossover would be 0). Transducers like speakers or ear-buds are at rest (no sound input) at this zero level. If an audio rendering process fails to start/stop from/to this zero, the transducer is being asked to do the impossible: instantaneously go from its resting state to some non-zero position in its min/max movement range (or vice versa if you get a "pop" at the end).
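To illustrate, a hypothetical helper that finds the first zero crossing, so playback can be started from a sample value at (or near) zero:
// Hypothetical helper: index of the first zero crossing in a 16-bit PCM buffer,
// so playback can be started from a sample value at (or near) zero.
static int firstZeroCrossing(short[] samples) {
    for (int i = 1; i < samples.length; i++) {
        if ((samples[i - 1] < 0 && samples[i] >= 0)
                || (samples[i - 1] > 0 && samples[i] <= 0)) {
            return i;
        }
    }
    return 0; // no crossing found; start from the beginning
}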
Finally, after a lot of experimentation, I made it work without the click noise. Here is my code (unfortunately, I can't read the size of the InputStream, since the getChannel().size() method only works with FileInputStream):
try {
    long totalAudioLen = 0;
    InputStream inputStream = getResources().openRawResource(R.raw.abordage); // open the file
    totalAudioLen = inputStream.available();
    byte[] rawBytes = new byte[(int) totalAudioLen];

    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC,
                                      44100,
                                      AudioFormat.CHANNEL_CONFIGURATION_MONO,
                                      AudioFormat.ENCODING_PCM_16BIT,
                                      (int) totalAudioLen,
                                      AudioTrack.MODE_STATIC);

    int offset = 0;
    int numRead = 0;

    track.setPlaybackHeadPosition(100); // IMPORTANT to skip the click

    // Read the whole resource into rawBytes.
    while (offset < rawBytes.length
            && (numRead = inputStream.read(rawBytes, offset, rawBytes.length - offset)) >= 0) {
        offset += numRead;
    }

    track.write(rawBytes, 0, (int) totalAudioLen); // fill the AudioTrack's static buffer
    track.play();                                  // launch playback
    track.setPlaybackRate(88200);
    inputStream.close();
} catch (FileNotFoundException e) {
    Log.e(TAG, "Error loading audio to bytes", e);
} catch (IOException e) {
    Log.e(TAG, "Error loading audio to bytes", e);
} catch (IllegalArgumentException e) {
    Log.e(TAG, "Error loading audio to bytes", e);
}
So the solution to skip the clicking noise is to use MODE_STATIC and the setPlaybackHeadPosition method to skip the beginning of the audio file (which is probably the header, or something else I can't identify).
I hope this piece of code will help someone; I spent too much time trying to find a static-mode code sample without finding a way to load a raw resource.
Edit: After testing this solution on various devices, it appears that they have the clicking noise anyway.
For "setPlaybackHeadPosition" to work, you have to play and pause first. It doesn't work if your track is stopped or not started. Trust me. This is dumb. But it works:
track.play();
track.pause();
track.setPlaybackHeadPosition(100);
// then continue with track.write, track.play, etc.
I want to write a program to check whether the internal microphone of an Android phone is on, off, or in use by some other application.
If this is possible, how can I do it?
I read related questions on Stack Overflow but did not find a solution.
Here's what I'm using to check if the microphone is busy (based on Odaym's answer and my own tests):
(Updated with Android 6.0 Marshmallow compatibility, as suggested in comments)
public static boolean checkIfMicrophoneIsBusy(Context ctx) {
    AudioRecord audio = null;
    boolean ready = true;
    try {
        int baseSampleRate = 44100;
        int channel = AudioFormat.CHANNEL_IN_MONO;
        int format = AudioFormat.ENCODING_PCM_16BIT;
        int buffSize = AudioRecord.getMinBufferSize(baseSampleRate, channel, format);

        audio = new AudioRecord(MediaRecorder.AudioSource.MIC, baseSampleRate, channel, format, buffSize);
        audio.startRecording();

        short[] buffer = new short[buffSize];
        int audioStatus = audio.read(buffer, 0, buffSize);

        if (audioStatus == AudioRecord.ERROR_INVALID_OPERATION
                || audioStatus == AudioRecord.STATE_UNINITIALIZED /* For Android 6.0 */) {
            ready = false;
        }
    } catch (Exception e) {
        ready = false;
    } finally {
        try {
            audio.release();
        } catch (Exception e) {
        }
    }
    return ready;
}
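A possible usage sketch (startMyRecording() is a hypothetical method of your own):
// Hypothetical usage inside an Activity: the method returns true when the
// microphone could be opened, i.e. when it is NOT busy.
if (checkIfMicrophoneIsBusy(this)) {
    startMyRecording(); // hypothetical method that sets up your own AudioRecord
} else {
    Toast.makeText(this, "Microphone is in use by another app", Toast.LENGTH_SHORT).show();
}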
If you are using an AudioRecord object to record audio, like:
AudioRecord audio = new AudioRecord(MediaRecorder.AudioSource.MIC,
        Constants.SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, Constants.BUFFER_SIZE_BYTES);
audio.startRecording();
Then right after audio.startRecording(), you're going to have to provide a buffer for reading the audio data into, and begin reading. You do that with:
int audioStatus = audio.read(bufferObject, 0, bufferSize);
The Android documentation for read() mentions the return value ERROR_INVALID_OPERATION (constant value: -3). This is only returned when the mic is busy, so you can check for it in your code and show a message that the audio source is busy with another app.
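For example, a minimal sketch of that check (assuming a TAG constant for logging):
int audioStatus = audio.read(buffer, 0, bufferSize);
if (audioStatus == AudioRecord.ERROR_INVALID_OPERATION) {
    // The audio source is busy with another app.
    Log.w(TAG, "Microphone is in use by another application");
}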
As far as I know, there is no way to know the microphone's state (Busy, Available,..). Sorry
// constructor
android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);

// thread run() method
int N = AudioRecord.getMinBufferSize(8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N * 10);
recorder.startRecording();
try {
    // pool of reusable buffers for audio data
    short[][] buffers = new short[256][160];
    int ix = 0;
    while (!stopped) {
        // if not paused, upload audio
        if (uploadAudio) {
            // take the next buffer from the pool
            short[] buffer = buffers[ix++ % buffers.length];
            // read audio data from the recorder
            N = recorder.read(buffer, 0, buffer.length);
            // create a byte array big enough to hold the audio data
            byte[] bytes2 = new byte[buffer.length * 2];
            // convert the audio data from short[] to byte[] (little-endian)
            ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);
            // encode the audio data as ulaw -- see the linked ulaw encoder code;
            // I'm using its read, maxAbsPcm and encode methods
            read(bytes2, 0, bytes2.length);
            // send the audio data
            os.write(bytes2, 0, bytes2.length);
        }
    }
    os.close();
} catch (Throwable x) {
    Log.w("AudioWorker", "Error reading voice AudioWorker", x);
} finally {
    recorder.stop();
    recorder.release();
}
So this works OK. The audio is sent in the proper format to the server and played at the opposite end. However, the audio often skips. Example: saying 1, 2, 3, 4 will play back with the 4 cut off.
I believe it to be a performance issue, because I have timed some of these methods: when they take close to zero seconds everything works, but they quite often take a couple of seconds, with the byte conversion and encoding taking the most time.
Any idea how I can optimize this code to get better performance? Or maybe a way to deal with the lag (possibly by building a cache)?
Background
I am creating a VoIP app. I know that there are plenty of them out already, but I have my reasons. Due to commercial implications I cannot just fork SipDroid, although it is a quality app. This app targets API level 10 (Gingerbread 2.3.3).
Problem
I have created a simple Activity which creates an AudioRecord instance, and then begins a loop:
int timestamp = 0;
int seqNr = 12;

while (true) {
    byte[] buffer = new byte[bufferSize];
    int num = recorder.read(buffer, 0, bufferSize);
    try {
        byte[] pcm = new byte[bufferSize];
        //
        // presumably here I convert the byte[] from PCM into G711??
        //
        RTPStream.Write(pcm, seqNr, timestamp);
        timestamp += num;
        seqNr++;
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Question
How do I turn the PCM 44 kHz 16-bit mono byte[]s into G.711 u-law/A-law byte[]s?
AudioGroup is available internally; that is what the native SipAudioCall uses. There is a way to use internal APIs, but since the class becomes public in API 12, you should use it.
Try using AudioStream instead. Set the codec via setCodec(AudioCodec) and acquire audio via AudioGroup.
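For reference, a minimal sketch of that android.net.rtp route (API 12+); localAddress, remoteAddress and remotePort are assumptions describing your own address and the far end of the RTP session:
// Minimal sketch using android.net.rtp (available from API 12); checked
// exceptions and error handling are omitted. localAddress, remoteAddress and
// remotePort are assumptions supplied by the caller.
AudioStream audioStream = new AudioStream(localAddress);
audioStream.setCodec(AudioCodec.PCMU);            // G.711 u-law (use PCMA for A-law)
audioStream.setMode(RtpStream.MODE_NORMAL);
audioStream.associate(remoteAddress, remotePort); // where to send/receive RTP

AudioGroup audioGroup = new AudioGroup();
audioGroup.setMode(AudioGroup.MODE_NORMAL);
audioStream.join(audioGroup); // capture, encoding and playback are handled internally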
I'm working with Android, trying to make my AudioTrack application play a Windows .wav file (Tada.wav). Frankly, it shouldn't be this hard, but I'm hearing a lot of strange stuff. The file is saved on my phone's mini SD card and reading the contents doesn't seem to be a problem, but when I play the file (with parameters I'm only PRETTY SURE are right), I get a few seconds of white noise before the sound seems to resolve itself into something that just may be right.
I have successfully recorded and played my own voice back on the phone -- I created a .pcm file according to the directions in this example:
http://emeadev.blogspot.com/2009/09/raw-audio-manipulation-in-android.html
(without the backwards masking)...
Anybody got some suggestions or awareness of an example on the web for playing a .wav file on an Android??
Thanks,
R.
I stumbled on the answer (frankly, by trying &^#! I didn't think would work), in case anybody's interested... In my original code (which is derived from the example in the link in the original post), the data is read from the file like so:
InputStream is = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(is, 8000);
DataInputStream dis = new DataInputStream(bis); // Create a DataInputStream to read the audio data from the saved file

// Read the file into the "music" array.
int i = 0;
while (dis.available() > 0) {
    music[i] = dis.readShort(); // This assignment does not reverse the order
    i++;
}
dis.close(); // Close the input stream
In this version, music[] is an array of SHORTs. So the readShort() method would seem to make sense here, since the data is 16-bit PCM... However, on Android that seems to be the problem. I changed the code to the following:
music = new byte[(int) file.length()]; // size & length of the file
InputStream is = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(is, 8000);
DataInputStream dis = new DataInputStream(bis); // Create a DataInputStream to read the audio data from the saved file

// Read the file into the "music" array.
int i = 0;
while (dis.available() > 0) {
    music[i] = dis.readByte(); // This assignment does not reverse the order
    i++;
}
dis.close(); // Close the input stream
In this version, music[] is an array of BYTES. I'm still telling the AudioTrack that it's 16-bit PCM data, and my Android doesn't seem to have a problem with writing an array of bytes into an AudioTrack thus configured... Anyway, it finally sounds right, so if anyone else wants to play Windows sounds on their Android, for some reason, that's the solution. Ah, Endianness......
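The underlying issue is byte order: DataInputStream.readShort() reads big-endian values, while WAV PCM samples are little-endian, so every 16-bit sample came back byte-swapped. If you do want a short[] rather than a byte[], a sketch like this (wrapping the raw bytes in a little-endian ByteBuffer) avoids the problem:
// Sketch: read 16-bit little-endian PCM into a short[] without byte swapping.
// Assumes `file` points at the .wav and the 44-byte header is handled separately.
byte[] raw = new byte[(int) file.length()];
DataInputStream dis = new DataInputStream(new BufferedInputStream(new FileInputStream(file)));
dis.readFully(raw);
dis.close();

short[] samples = new short[raw.length / 2];
ByteBuffer.wrap(raw).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(samples);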
R.
I found a lot of long answers to this question. My final solution, which given all the cutting and pasting is hardly mine, comes down to:
public boolean play() {
    int i = 0;
    byte[] music = null;
    InputStream is = mContext.getResources().openRawResource(R.raw.noise);
    at = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
            minBufferSize, AudioTrack.MODE_STREAM);
    try {
        music = new byte[512];
        at.play();
        while ((i = is.read(music)) != -1)
            at.write(music, 0, i);
    } catch (IOException e) {
        e.printStackTrace();
    }
    at.stop();
    at.release();
    return STOPPED;
}
STOPPED is just a "true" sent back as a signal to reset the pause/play button.
And in the class initializer:
public Mp3Track(Context context) {
    mContext = context;
    minBufferSize = AudioTrack.getMinBufferSize(44100,
            AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
}
Context is just "this" from the calling activity.
You can also use a FileInputStream for files on the SD card, etc.; my files are in res/raw.
Are you skipping the first 44 bytes of the file before you dump the rest of the file's data into the buffer? The first 44 bytes are the WAVE header, and they would sound like random noise if you tried to play them.
Also, are you sure you are creating the AudioTrack with the same properties as the WAVE you are trying to play (sample rate, bit rate, number of channels, etc.)? Windows actually does a good job of giving this information to you in the File Properties page.
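To skip those first 44 bytes, a minimal sketch (assuming `is` is an InputStream positioned at the start of the file):
// Skip the 44-byte canonical WAVE header before streaming the PCM data.
long toSkip = 44;
while (toSkip > 0) {
    toSkip -= is.skip(toSkip); // skip() may skip fewer bytes than requested
}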
As Aaron C said, you have to skip the initial 44 bytes, or (as I prefer) read the first 44 bytes, which are the WAVE header. That way you know how many channels, bits per sample, length, etc. the WAVE contains.
Here you can find a good implementation of a WAVE header parser/writer.
Please don't perpetuate terrible parsing code. WAV parsing is trivial to implement:
http://soundfile.sapp.org/doc/WaveFormat/
and you will thank yourself by being able to parse things such as the sampling rate, bit depth, and number of channels.
Also, x86 and ARM (at least by default) are both little-endian, so native-endian WAV files should be fine without any byte shuffling.
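A minimal sketch of such a header read, assuming the canonical 44-byte PCM layout from the link above (real-world files can carry extra chunks, so robust code should locate fields by chunk ID):
// Minimal sketch: read the canonical 44-byte PCM WAVE header.
byte[] header = new byte[44];
DataInputStream dis = new DataInputStream(new FileInputStream(file));
dis.readFully(header);
dis.close();

ByteBuffer bb = ByteBuffer.wrap(header).order(ByteOrder.LITTLE_ENDIAN);
int numChannels   = bb.getShort(22); // 1 = mono, 2 = stereo
int sampleRate    = bb.getInt(24);   // e.g. 44100
int bitsPerSample = bb.getShort(34); // e.g. 16
int dataSize      = bb.getInt(40);   // size of the PCM data that follows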
Just confirm that you have AudioTrack.MODE_STREAM and not AudioTrack.MODE_STATIC in the AudioTrack constructor:
AudioTrack at = new AudioTrack(
AudioManager.STREAM_MUSIC,
sampleRate,
AudioFormat.CHANNEL_IN_STEREO,
AudioFormat.ENCODING_PCM_16BIT,
// buffer length in bytes
outputBufferSize,
AudioTrack.MODE_STREAM
);
Sample wav file:
http://www.mauvecloud.net/sounds/pcm1644m.wav
Sample Code:
public class AudioTrackPlayer {
    Context mContext;
    int minBufferSize;
    AudioTrack at;
    boolean STOPPED;

    public AudioTrackPlayer(Context context) {
        Log.d("------", "init");
        mContext = context;
        minBufferSize = AudioTrack.getMinBufferSize(44100,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT);
    }

    public boolean play() {
        Log.d("------", "play");
        int i = 0;
        byte[] music = null;
        InputStream is = mContext.getResources().openRawResource(R.raw.pcm1644m);
        at = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                AudioFormat.CHANNEL_CONFIGURATION_MONO, AudioFormat.ENCODING_PCM_16BIT,
                minBufferSize, AudioTrack.MODE_STREAM);
        try {
            music = new byte[512];
            at.play();
            while ((i = is.read(music)) != -1)
                at.write(music, 0, i);
        } catch (IOException e) {
            e.printStackTrace();
        }
        at.stop();
        at.release();
        return STOPPED;
    }
}