Many smartphones now have more than one microphone: one for voice input and another for reducing environmental noise.
I am wondering how I could access the two microphones' signals independently, or turn one of the microphones off?
Any thoughts or comments are welcome. Thanks a lot.
I'm not familiar with the Galaxy S3 specifically, but the following is true for the majority of devices I've worked with:
There's no reliable way of selecting whether to use the primary or secondary mic for a mono recording. The MIC AudioSource usually means the primary mic when doing a mono recording, and the CAMCORDER source might mean the secondary mic. But there's no guarantee for this, because it depends e.g. on the physical placement of the mics.
Recording in mono effectively turns the other mic off, and whatever noise reduction is done uses only the signal from one mic (so there's little to no reduction of dynamic noise).
One exception to this might be if you record during a voice call, where both mics have already been enabled in order to do uplink noise suppression. Another exception might be if you use the VOICE_RECOGNITION AudioSource, because some vendors apply noise suppression on the signal using one or more extra mics in this use-case.
Recording in stereo records from both mics on a 2-mic device. The left channel contains the input from one mic and the right channel contains the input from the other mic, but there's no guarantee as to which physical mic the channels correspond to (again, depends on the placement).
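Once you have such a stereo recording, the two mic signals can be separated by deinterleaving the 16-bit PCM frames. A minimal sketch in plain Java (the StereoSplitter class and its names are my own, not an Android API), with the caveat above that which physical mic ends up in which channel is device-dependent:

```java
// Deinterleave a stereo 16-bit PCM buffer (as read from AudioRecord)
// into two per-mic channels. Standard interleaved layout: frame i is
// [left, right], i.e. even indices = channel 1, odd indices = channel 2.
// Which physical mic maps to which channel depends on the device.
public class StereoSplitter {
    public static short[][] split(short[] interleaved) {
        int frames = interleaved.length / 2;
        short[] left = new short[frames];
        short[] right = new short[frames];
        for (int i = 0; i < frames; i++) {
            left[i] = interleaved[2 * i];      // one mic
            right[i] = interleaved[2 * i + 1]; // the other mic
        }
        return new short[][] { left, right };
    }
}
```

Each returned array can then be analyzed or written out as an independent mono signal.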
I'm using the EZAudio FFT to analyze audio as the iPhone "hears" it. I am listening for high-pitched sounds embedded in music (17 kHz+). When the iPhone hears the sounds with no music, it records the data perfectly and picks up the pitch fine. However, when music is playing, the sounds are no longer heard, or only about 1 in 8 is heard. Again, I am using EZAudio to analyze the sound. I have an Android phone with a similar app on it (it displays a graph of the frequencies of incoming audio), and the Android phone can hear these sounds.
Why would the Android phone hear these high-pitched sounds but not the iPhone? Is it because of a flaw in EZAudio or is it due to a higher quality microphone?
The most likely answer is Automatic Gain Control (AGC). This is enabled by default on the microphone, and is useful for telephony or voice recording.
At 17 kHz you're probably already at a frequency at which the microphone is not particularly sensitive; however, in the absence of audio at other frequencies, the AGC will have increased the gain of the microphone. As soon as other frequencies are present, the gain reduces again, and the 17 kHz signal is lost in the noise.
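A toy model (an assumption for illustration, not the device's actual AGC algorithm) shows the effect: if the gain stage targets a fixed output level, then the louder the overall input, the more any fixed-level tone is attenuated.

```java
// Toy AGC model: gain is chosen so the output RMS hits a fixed target.
// A 17 kHz tone recorded alone gets a large gain; once music raises the
// broadband input RMS, the gain drops and the tone sinks toward the
// noise floor. Purely illustrative, not the real device algorithm.
public class ToyAgc {
    public static double gainFor(double inputRms, double targetRms) {
        return targetRms / Math.max(inputRms, 1e-9); // guard divide-by-zero
    }
}
```

With a quiet input (tone only) the gain is much larger than with a loud input (tone plus music), which is exactly the symptom described above.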
Looking at the EZAudioFFT source code, it doesn't appear to set up the AVAudioSession to use measurement mode (which disables the AGC and the HPF on the microphone).
You can achieve this with:
NSError *pError = nil;
[[AVAudioSession sharedInstance] setMode:AVAudioSessionModeMeasurement error:&pError];
I have an audio recording app on the Android Market which records in PCM/WAV format.
My app also offers custom gain control ([-20dB, +20dB]), so I alter the original audio data with user selected gain value.
It works pretty well when using the device's built-in mic, but I have a user who uses an external mic plugged into his device, and the output is too loud and full of distortions (because of the loudness of his external mic). Even when he sets the gain to -20dB, the output is loud and contains distortions.
I thought I should add AGC into the app for cases like this.
Now my question:
Does this AGC only apply when using the DEVICE BUILT-IN mic, or does it also apply when using an external mic plugged into the handheld?
It's quite likely that the real problem is that his microphone is overdriving the input jack - if that is the case, software can't fix the problem as what the A/D converter sees is already hopelessly distorted.
Your client may need to add an attenuator (resistive voltage divider) to the input signal.
Also, if the input signal is asymmetric it may be necessary to couple through a series capacitor to block any DC component.
Doing a recording with no gain, and examining the resulting waveform in an audio editor like audacity would probably be informative.
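The attenuator suggestion above can be quantified: for a resistive divider, Vout/Vin = R2 / (R1 + R2), so the attenuation in dB is 20·log10(R2 / (R1 + R2)). A sketch (the resistor values in the example are hypothetical, chosen only to illustrate the math):

```java
// Attenuation of a simple resistive voltage divider.
// Vout/Vin = R2 / (R1 + R2); attenuation in dB = 20 * log10(ratio).
// Example values are hypothetical; real parts must also respect the
// mic input's impedance and bias requirements.
public class Divider {
    public static double attenuationDb(double r1Ohms, double r2Ohms) {
        double ratio = r2Ohms / (r1Ohms + r2Ohms);
        return 20.0 * Math.log10(ratio);
    }
}
```

For instance, a 9 kΩ / 1 kΩ divider passes one tenth of the input voltage, i.e. about -20 dB, which is the kind of headroom an overdriven input jack would need.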
(Normally I would not post something this speculative as an answer, but I was specifically asked to convert it to one from its original offering as a comment.)
I'm trying to record audio signals from the 2 built-in microphones (bottom, top) at the same time. I can pick up the bottom microphone's signal using
MediaRecorder.AudioSource.MIC
and the top microphone's signal using
MediaRecorder.AudioSource.CAMCORDER
I can record separately but I want to record at the same time from 2 microphones.
Does anyone know how to record simultaneously?
I tried the & and | operators, but I only get a single-channel signal.
I am using a Galaxy S2 device.
I will appreciate any response :)
Thanks in advance.
There is a misconception that in devices with 2 microphones, both the microphones will be used when recording in the stereo mode.
In my 3 years' experience of testing on tens of devices, I have found that this was never the case.
The primary mic alone is used both in mono and stereo recording in the wide range of Android devices that I have worked with - from low-cost mass models to flagships.
One reason for this is that the primary mic is of a better quality (more sensitive, less noisy, etc.) and costlier than the secondary mic.
You can achieve this by doing a stereo recording using the AudioRecord (http://developer.android.com/reference/android/media/AudioRecord.html) class. Have a look at How to access the second mic android such as Galaxy 3.
Specifying the audio format as stereo and the audio source as the camcorder automatically selects two microphones, one for each channel, on a (compatible) two microphone device.
For example:
int bufferSize = AudioRecord.getMinBufferSize(sampleRate, android.media.AudioFormat.CHANNEL_IN_STEREO, android.media.AudioFormat.ENCODING_PCM_16BIT);
audioRecorder = new AudioRecord(MediaRecorder.AudioSource.CAMCORDER, sampleRate, android.media.AudioFormat.CHANNEL_IN_STEREO, android.media.AudioFormat.ENCODING_PCM_16BIT, bufferSize);
will initialise a new AudioRecord instance that can record from two device microphones in stereo, in 16-bit PCM format (CHANNEL_IN_STEREO replaces the deprecated CHANNEL_CONFIGURATION_STEREO, and bufferSize is obtained from AudioRecord.getMinBufferSize).
For more help on recording using AudioRecord (to record .wav), have a look at: http://i-liger.com/article/android-wav-audio-recording.
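As a rough sketch of the PCM-to-.wav step that the linked article covers: raw 16-bit PCM only needs a standard 44-byte RIFF header prepended to become a playable WAV file. This is not a full recorder, just the header layout:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Minimal canonical RIFF/WAV header for raw 16-bit little-endian PCM.
// A sketch of the wrapping step the linked article performs.
public class WavHeader {
    public static byte[] build(int sampleRate, int channels, int pcmBytes) {
        int byteRate = sampleRate * channels * 2; // 2 bytes per 16-bit sample
        ByteBuffer b = ByteBuffer.allocate(44).order(ByteOrder.LITTLE_ENDIAN);
        b.put("RIFF".getBytes()).putInt(36 + pcmBytes).put("WAVE".getBytes());
        b.put("fmt ".getBytes()).putInt(16).putShort((short) 1)      // PCM
         .putShort((short) channels).putInt(sampleRate).putInt(byteRate)
         .putShort((short) (channels * 2)).putShort((short) 16);     // align, bits
        b.put("data".getBytes()).putInt(pcmBytes);
        return b.array();
    }
}
```

Writing this header followed by the bytes read from AudioRecord yields a .wav file; for a stereo capture, channels would be 2.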
I'm developing an audio app (iOS/Android), and I can't find information anywhere on the following:
How can an app read or set the microphone gain?
Are audio Digital Signal Processing tools available for the headphone jack?
Also, I realize that some manufacturers develop specific accessibility features (like a mono sound mode, or sound balance (left/right) on Samsung devices), but they don't provide any API to check or control them. When I turn mono mode on/off on a Samsung GS3, I see in the logs:
I/AudioHardwareTinyALSA( 1904): setParameters(toMono=0)
I/audio_wfd_hw( 1904): adev_set_parameters() toMono=0
So I guess this feature is provided by a Samsung-specific hardware driver.
Is there perhaps some way to get a pointer to AudioHardwareTinyALSA and set mono on or off?
Thanks.
There's no API in Android for controlling the input volume (you can mute/unmute the mic during voice calls / VoIP, but that's about the level of control that you've got).
The mic gains are typically set by the OEMs as part of their acoustic tuning process, in order to optimize the performance for each use-case (speech recognition, camera recording, handset call, etc) for that particular product.
Mono/stereo recording should simply be decided by whether the app requests 1 or 2 channels for the recording. At least that's the way it has worked on every product I've worked on, as far as I can recall.
I'm currently starting to write an Android app that measures the reverberation time of closed rooms.
I had to choose AudioRecord instead of MediaRecorder because it gives me the chance to get the raw data.
You may know that there are many different constants to choose from for the AudioFormat (e.g. CHANNEL_IN_MONO, CHANNEL_IN_STEREO, CHANNEL_IN_PRESSURE), and you may know that Android smartphones embed more than one microphone (usually 2, in order to do noise cancellation and the like).
Here comes the question: which constant do I have to choose to be sure that only ONE microphone is giving me the raw data?
If you do a mono recording the device should only be recording from one microphone. I'm not sure what you mean by "raw" data. There will always be some acoustic compensation processing done (e.g. automatic gain control, equalization, etc), and this is not something that you can turn off.
One thing that also will affect the recording is which AudioSource you choose. If you select CAMCORDER on a phone with 2 or more microphones you'll typically get the back microphone with far-field tuning if you do a mono recording. If you select MIC/DEFAULT you should get the primary mic, but it may be tuned for either near-field recording or far-field recording depending on the vendor (I suspect that you'd want far-field tuning if you're trying to measure room reverberation).