Android microphone constantly gives 32639 or -32640 on newer devices - android

I've implemented code similar to this. I have a noise alert go off in the Log, but it always gives 32639 or -32640 regardless of what kind of noise is going on outside.
short[] buffer = new short[minSize];
boolean thresholdMet = false;
int threshold = sliderThreshold.getProgress();
// "ar" is an initialized AudioRecord; minSize comes from AudioRecord.getMinBufferSize()
ar.read(buffer, 0, minSize);

// Iterate through each chunk of amplitude data and
// check whether the amplitude exceeds the threshold
for (short s : buffer) {
    if (Math.abs(s) > threshold) {
        thresholdMet = true;
        Log.w("NoiseThreshold", String.valueOf(s));
        break;
    }
}
I've tested it on three phones (none of which are rooted):
Samsung Galaxy S3 (API 19)
HTC One M9 (API 23)
Samsung Galaxy S7 (API 24)
It works on the S3, but not the others. I've tried using Sensor Sense on the HTC and it doesn't work for the mic sensor. However, it used to work, and now seems to detect one sample every five seconds or so in the graph view.
Oddly enough, the microphone still works fine for phone calls and video recording on the malfunctioning phones.

You said it works on the S3 (API 19) but not on the devices running API >= 23, so it's possible you're running into the runtime-permissions model introduced in API 23.
The new behaviour for "legacy" apps that still use the install-time permission model is to return dummy data when the runtime permission has not been granted.
Check out this answer:
Request permission for microphone on Android M
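In practice that means checking and requesting RECORD_AUDIO at runtime before reading from the AudioRecord. A minimal sketch using the support-library helpers (the request code and startCapture() are placeholders of mine):

```java
private static final int REQ_RECORD_AUDIO = 1; // arbitrary request code

private void startCaptureWithPermission() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        // Not granted yet: ask, and start capturing in onRequestPermissionsResult()
        ActivityCompat.requestPermissions(
                this, new String[]{Manifest.permission.RECORD_AUDIO}, REQ_RECORD_AUDIO);
    } else {
        startCapture(); // already granted: safe to read real samples
    }
}
```

Without the grant, read() fills the buffer with the dummy data described above.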

Related

LTE signal ASU level always 97

I am capturing the RSSI values of LTE signals using the below code:
cInfoList = telephony_manager.getAllCellInfo();
for (CellInfo info : cInfoList) {
    if (info instanceof CellInfoLte) {
        CellSignalStrengthLte signalstrength_lte = ((CellInfoLte) info).getCellSignalStrength();
        displayAsu.setText(signalstrength_lte.getAsuLevel() + "");
        displayDbm.setText(signalstrength_lte.getDbm() + "");
    }
}
(Note: I've simplified my code; the actual for-loop doesn't overwrite the text fields.)
On one phone (LG G4) I am getting meaningful values: Asu_level=32, dbm=-108.
But on another phone (Samsung Galaxy S6) I am getting invalid values: Asu_level=97, dbm=1022.
In the Samsung phone's Settings -> About Phone -> Status -> Signal Strength I see -107 dBm, 33 asu (which makes sense).
LG G4: Android 5.1, API 22 and
Samsung Galaxy S6: Android 5.0.2, API 21
Why does the same code show different behaviors (Asu levels) on different phones?
The S6 appears to put corrupted signal-level values in its CellInfoLte objects (and unset levels in its CellInfoCdma objects). https://github.com/Tombarr/Signal-Strength-Detector is an app that uses reflection to dump a plethora of signal-level related data. On my S6, I can see that SignalStrength (the parameter to the PhoneStateListener.onSignalStrengthsChanged callback) includes sane-looking mCdmaDbm/mLteRsrp values.
It's obviously less convenient, and presumably more overhead, to register a TelephonyManager listener, but it looks like that's what it takes on this device :-/
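As a sanity check on those numbers: for LTE, the ASU that Android reports is defined as RSRP + 140 dBm (valid ASU range 0-97, with 99 meaning unknown), so the S6's constant 97 is just the clamped ceiling. A plain-Java illustration (the helper names are my own, not an Android API):

```java
class LteAsu {
    // LTE mapping used by Android: ASU = RSRP(dBm) + 140.
    // Valid range is 0..97; 99 means "unknown". A constant 97 is the
    // clamped ceiling and is a red flag, as seen on the S6 above.
    static int asuToDbm(int asu) {
        return asu - 140;
    }

    static boolean isPlausibleLteAsu(int asu) {
        return asu >= 0 && asu < 97;
    }
}
```

With this mapping, the LG G4's asu=32 gives -108 dBm and the Samsung status screen's 33 asu gives -107 dBm, matching the values above, while the S6's 97 fails the plausibility check.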

ConsumerIrManager.transmit broken in Lollipop?

I upgraded my Samsung Galaxy S4 from latest KitKat to Lollipop (5.0.1) yesterday and my IR remote control app that I have used for months stopped working.
Since I was running a late KitKat build, my transmit() call sent the number of carrier pulses, computed with the code below. It worked very nicely.
private void irSend(int freqHz, int[] pulseTrainInMicroS) {
    int[] pulseCounts = new int[pulseTrainInMicroS.length];
    for (int i = 0; i < pulseTrainInMicroS.length; i++) {
        // widen to long before multiplying so the product can't overflow int
        long iValue = (long) pulseTrainInMicroS[i] * freqHz / 1000000;
        pulseCounts[i] = (int) iValue;
    }
    m_IRService.transmit(freqHz, pulseCounts);
}
When it stopped working yesterday, I began looking at it closely.
I noticed that the transmitted waveform bears no relationship to the requested pulse train; even the trivial code below doesn't work correctly:
private void TestSend() {
    int[] pulseCounts = {100, 100, 100};
    m_IRService.transmit(38000, pulseCounts);
}
The resulting waveforms had many problems and so are entirely useless:
the waveforms were entirely wrong;
the frequency was wrong and the pulse spacing was not regular;
they were not repeatable.
Looking at the demodulated waveform:
if my 100, 100, 100 were rendered correctly, I should have seen two pulses 2.6 ms long (before 4.4.3(?), 100 µs). Instead I received (see attached) "[demodulated] not repeatable 1.BMP" and "[demodulated] not repeatable 2.BMP". Note that the waveform isn't two pulses... in fact, it's not even repeatable.
As for the captures below, the signal goes low when IR is detected.
We should have seen two pulses going low for 2.6 ms, with 2.6 ms between them (see the green line below).
I had also tried shorter pulses, using 50, 50, 50, and observed that the first pulse isn't correct either (see below).
Looking at the modulated waveform:
the frequency was not correct; instead, it was about 18 kHz and irregular.
I'm quite experienced with this and have formal education in electronics.
It seems to me there's a bug in ConsumerIrManager.transmit()...
Curiously, the "WatchOn" application that comes with the phone still works.
Thank you for any insights you can give.
Test equipment:
Tektronix TDS-2014B, 100 MHz, used in peak-detect mode.
As @IvanTellez says, a change was made in Android in respect to this functionality. Strangely, when I had it outputting simple IR signals (for troubleshooting purposes), the function behaved as shown above (erratically, wrong carrier frequency, etc.). When I eventually returned to normal types of IR signals, it worked correctly.
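For anyone else hitting this: the likely culprit is that around Android 4.4.3 the pattern argument of ConsumerIrManager.transmit() changed meaning from carrier-cycle counts to durations in microseconds, so a pre-4.4.3 conversion like the one in the question sends garbage on later releases. A plain-Java sketch of the two conversions (the helper names are my own):

```java
class IrPattern {
    // Durations in microseconds -> carrier-cycle counts (what pre-4.4.3 transmit() expected).
    static int[] microsToCounts(int freqHz, int[] micros) {
        int[] counts = new int[micros.length];
        for (int i = 0; i < micros.length; i++) {
            // widen to long before multiplying to avoid int overflow
            counts[i] = (int) ((long) micros[i] * freqHz / 1_000_000L);
        }
        return counts;
    }

    // Carrier-cycle counts -> microseconds (what 4.4.3+ transmit() expects).
    static int[] countsToMicros(int freqHz, int[] counts) {
        int[] micros = new int[counts.length];
        for (int i = 0; i < counts.length; i++) {
            micros[i] = (int) ((long) counts[i] * 1_000_000L / freqHz);
        }
        return micros;
    }
}
```

At 38 kHz, a count of 100 corresponds to about 2631 µs, which matches the ~2.6 ms pulses described above.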

equalizer.getNumberOfPresets() return 0 on certains devices

I'm trying to implement presets on an Android equalizer; to do so I'm using getNumberOfPresets():
mEqualizer = new Equalizer(0, mMediaPlayer.getAudioSessionId());
mEqualizer.setEnabled(true);
short presetNumber = mEqualizer.getNumberOfPresets();
On my Nexus 4 (4.2.2) I'm getting presetNumber=10, but on another device running Android 4.0.4 I get presetNumber=0. With this last value I am not able to use:
mEqualizer.usePreset(short);
How can I force the equalizer to use presets?
thx
All the audio effects are hardware-dependent and not guaranteed on all devices.
Because of this, you should always check if the device supports the AudioEffect.
You can query available effects using AudioEffect.queryEffects();
http://developer.android.com/reference/android/media/audiofx/AudioEffect.html#queryEffects()
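When no presets are reported you can't force them, but you can fall back to shaping the curve yourself with the band-level API. A sketch of that fallback (my own, assuming an initialized mEqualizer as in the question):

```java
short presets = mEqualizer.getNumberOfPresets();
if (presets > 0) {
    mEqualizer.usePreset((short) 0); // this device ships vendor presets
} else {
    // No presets on this device: set each band level manually instead.
    short bands = mEqualizer.getNumberOfBands();
    short[] range = mEqualizer.getBandLevelRange(); // {min, max} in millibels
    short flat = (short) ((range[0] + range[1]) / 2); // midpoint ~ flat response
    for (short b = 0; b < bands; b++) {
        mEqualizer.setBandLevel(b, flat);
    }
}
```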

How to get CamcorderProfile.videoBitRate for an Android device?

My app uses HLS to stream video from a server, but when I request the HLS stream I need to pass the maximum video bitrate the device can handle. The Android API guides say that "a device's available video recording profiles can be used as a proxy for media playback capabilities," but when I retrieve the videoBitRate for the device's back-facing camera it always comes back as 12Mb/s, regardless of the device (Galaxy Nexus, Galaxy Tab Plus 7", Galaxy Tab 8.9), despite the fact that they have three different GPUs (PowerVR SGX540, Mali-400 MP, Tegra 250 T20). Here's my code; am I doing something wrong?
CamcorderProfile camcorderProfile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
targetVideoBitRate = camcorderProfile.videoBitRate;
If I try this on the Galaxy Tab Plus:
boolean hasProfile = CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_HIGH);
it returns true, despite the fact that QUALITY_HIGH is for 1080p recording and the specs say the device can only record at 720p.
It looks like I've found the answer to my own question.
I didn't read the documentation closely enough: QUALITY_HIGH is not equivalent to 1080p; it is simply a way of specifying the highest-quality profile the device supports. Therefore, by definition, CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_HIGH) is always true. I should have written something like this:
public enum mVideoQuality {
    FullHD, HD, SD
}

mVideoQuality mMaxVideoQuality;
int mTargetVideoBitRate;

private void initVideoQuality() {
    if (CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_1080P)) {
        mMaxVideoQuality = mVideoQuality.FullHD;
    } else if (CamcorderProfile.hasProfile(CamcorderProfile.QUALITY_720P)) {
        mMaxVideoQuality = mVideoQuality.HD;
    } else {
        mMaxVideoQuality = mVideoQuality.SD;
    }
    CamcorderProfile cProfile = CamcorderProfile.get(CamcorderProfile.QUALITY_HIGH);
    mTargetVideoBitRate = cProfile.videoBitRate;
}
Most of my devices still report support for 1080p encoding, which I'm skeptical of; however, I ran this code on a Sony Xperia Tipo (my low-end test device) and it reported a max encode quality of 480p with a videoBitRate of 720 kb/s.
As I said, I'm not sure every device can be trusted, but I have seen a range of video bitrates from 720 kb/s to 17 Mb/s and profile qualities from 480p to 1080p. Hopefully other people will find this information useful.
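Tying this back to the original HLS question, the reported bitrate can then bound which variant you request. A plain-Java sketch (my own helper, not an Android API):

```java
import java.util.Arrays;

class HlsVariantPicker {
    // Pick the highest variant bitrate that does not exceed the device's
    // reported CamcorderProfile.videoBitRate; fall back to the lowest variant.
    static int pickBitrate(int[] variantBitrates, int maxDeviceBitrate) {
        int[] sorted = variantBitrates.clone();
        Arrays.sort(sorted);
        int chosen = sorted[0]; // safe default: lowest variant
        for (int b : sorted) {
            if (b <= maxDeviceBitrate) {
                chosen = b; // keep climbing while we stay under the cap
            }
        }
        return chosen;
    }
}
```

For example, a device reporting 720 kb/s would get the 720 kb/s variant, while one reporting 12 Mb/s would get the highest variant on offer.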

How avoid automatic gain control with AudioRecord?

How can I do audio recordings using android.media.AudioRecord without any smartphone-manufacturer-dependent fancy signal processing like automatic gain control (AGC) and/or equalization, noise suppression, echo cancellation, ... just the pure microphone signal?
Background
MediaRecorder.AudioSource provides nine constants,
DEFAULT and MIC initially being there,
VOICE_UPLINK, VOICE_DOWNLINK, and VOICE_CALL added in API level 4,
CAMCORDER and VOICE_RECOGNITION added in API 7,
VOICE_COMMUNICATION added in API 11,
REMOTE_SUBMIX added in API 19 but not available to third-party applications.
But none of them does a clean job across all smartphones; it seems I have to find out for myself which device applies which combination of signal-processing blocks for each MediaRecorder.AudioSource constant.
Would be nice to have a tenth constant like PURE_MIC added in API level 20.
But as long as this is not available, what can I do instead?
Short answer is "Nothing".
The AudioSources correspond to various logical audio input devices depending on the accessories that you have connected to the phone and the current use-case, which in turn corresponds to physical devices (primary built-in mic, secondary mic, wired headset mic, etc) with different tunings.
Each such combination of physical device and tuning is trimmed by the OEM to meet both external requirements (e.g. CTS, operator requirements, etc) and internal acoustic requirements set by the OEM itself. This process may cause the introduction of various filters - such as AGC, noise suppression, equalization, etc - into the audio input path at the hardware codec or multimedia DSP level.
While a PURE_MIC source might be useful for some applications, it's not something that's available today.
On many devices you can control things like microphone gain, and possibly even the filter chain, by using amixer to write to the hardware codec's ALSA controls. However, this would obviously be a very platform-specific approach, and I also suspect that you have to be running as either the root or audio user to be allowed to do this.
Some devices add an AGC effect to the sound input path by default. Therefore, you need to obtain a reference to the corresponding AudioEffect object and disable it.
First, obtain the AutomaticGainControl object linked to the AudioRecord's audio session, and then just set it disabled:
if (AutomaticGainControl.isAvailable()) {
    AutomaticGainControl agc = AutomaticGainControl.create(
        myAudioRecord.getAudioSessionId()
    );
    if (agc != null) { // create() can return null if the effect can't be attached
        agc.setEnabled(false);
    }
}
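NoiseSuppressor and AcousticEchoCanceler (also in android.media.audiofx, API 16+) follow the same isAvailable()/create() pattern, so the other effects the question lists can be switched off the same way (sketch of mine, reusing the myAudioRecord from above):

```java
int session = myAudioRecord.getAudioSessionId();

if (NoiseSuppressor.isAvailable()) {
    NoiseSuppressor ns = NoiseSuppressor.create(session);
    if (ns != null) ns.setEnabled(false); // create() may return null
}
if (AcousticEchoCanceler.isAvailable()) {
    AcousticEchoCanceler aec = AcousticEchoCanceler.create(session);
    if (aec != null) aec.setEnabled(false);
}
```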
Note: Most of the audio sources (including DEFAULT) apply processing to the audio signal. To record raw audio, select UNPROCESSED. Some devices do not support unprocessed input; call AudioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED) first to verify it's available. If it is not, try using VOICE_RECOGNITION instead, which does not employ AGC or noise suppression. You can use UNPROCESSED as an audio source even when the property is not supported, but there is no guarantee the signal will actually be unprocessed in that case.
Android documentation link: https://developer.android.com/guide/topics/media/mediarecorder.html#example
AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
if (audioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED) != null) {
    mRecorder.setAudioSource(MediaRecorder.AudioSource.UNPROCESSED);
} else {
    mRecorder.setAudioSource(MediaRecorder.AudioSource.VOICE_RECOGNITION);
}
MIC should be fine, and for the rest you need to know if they are supported.
I've made a class for this:
enum class AudioSource(val audioSourceValue: Int, val minApi: Int) {
    VOICE_CALL(MediaRecorder.AudioSource.VOICE_CALL, 4),
    DEFAULT(MediaRecorder.AudioSource.DEFAULT, 1),
    MIC(MediaRecorder.AudioSource.MIC, 1),
    VOICE_COMMUNICATION(MediaRecorder.AudioSource.VOICE_COMMUNICATION, 11),
    CAMCORDER(MediaRecorder.AudioSource.CAMCORDER, 7),
    VOICE_RECOGNITION(MediaRecorder.AudioSource.VOICE_RECOGNITION, 7),
    VOICE_UPLINK(MediaRecorder.AudioSource.VOICE_UPLINK, 4),
    VOICE_DOWNLINK(MediaRecorder.AudioSource.VOICE_DOWNLINK, 4),

    @TargetApi(Build.VERSION_CODES.KITKAT)
    REMOTE_SUBMIX(MediaRecorder.AudioSource.REMOTE_SUBMIX, 19),

    @TargetApi(Build.VERSION_CODES.N)
    UNPROCESSED(MediaRecorder.AudioSource.UNPROCESSED, 24);

    fun isSupported(context: Context): Boolean =
        when {
            Build.VERSION.SDK_INT < minApi -> false
            this != UNPROCESSED -> true
            else -> {
                val audioManager: AudioManager = context.getSystemService(Context.AUDIO_SERVICE) as AudioManager
                Build.VERSION.SDK_INT >= Build.VERSION_CODES.N &&
                        "true" == audioManager.getProperty(AudioManager.PROPERTY_SUPPORT_AUDIO_SOURCE_UNPROCESSED)
            }
        }

    companion object {
        fun getAllSupportedValues(context: Context): ArrayList<AudioSource> {
            val values = AudioSource.values()
            val result = ArrayList<AudioSource>(values.size)
            for (value in values)
                if (value.isSupported(context))
                    result.add(value)
            return result
        }
    }
}
