Android microphone frequency response / pitch detection below 100 Hz

I'm processing audio on a Samsung Galaxy mini phone and also on a Nexus 7 tablet.
I've been using the AudioRecord class, and so far I have been able to correctly analyze audio from about 200 Hz up to ~20000 Hz.
I'm detecting pitch through auto-correlation; I based my code on this: http://tarsos.0110.be/artikels/lees/YIN_Pitch_Tracker_in_JAVA
I'm using a 44100 Hz sampling rate, and I have also tried 8000 Hz.
I have not been able to detect pitch at lower frequencies; I can barely detect 100 Hz by pointing the microphone at a speaker.
Does anyone know the input frequency response of these devices, or whether they are limited physically or in software?
I would like to be able to detect correctly from at least 50 Hz, because I'm building a voice detector and I've been struggling with these low frequencies when trying to detect male voices.
Thank you for all.
-Jessica

I can't tell you what the low-frequency limit of these microphones is.
Out of curiosity I did some tests with YIN here...
Using a window of 2048 samples and an overlap of 1024, I can find frequencies above 40 Hz in recorded files sampled at 44100 Hz, which proves that the algorithm itself can find low frequencies.
You can test your phone with a pure 50 Hz sinusoid and see whether your code can track it (see the sketch below).
"The fundamentals of human voices are roughly in the range of 80 Hz to 1100 Hz"
My guess is that the microphones in smartphones are just not that good :-(
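As a starting point for that test, here is a minimal sketch (plain normalized autocorrelation, not the linked YIN code) that generates a 50 Hz sinusoid at 44100 Hz and estimates its pitch; the window length and lag range are illustrative choices:

// Sketch: verify that a long enough window can track 50 Hz.
// Generates a pure 50 Hz sine and finds the best-matching lag.
public class LowFreqPitchTest {
    public static void main(String[] args) {
        int sampleRate = 44100;
        int window = 4096;                 // ~93 ms, covers several periods of 50 Hz
        double freq = 50.0;
        float[] buf = new float[window];
        for (int i = 0; i < window; i++) {
            buf[i] = (float) Math.sin(2 * Math.PI * freq * i / sampleRate);
        }

        // Normalized autocorrelation over lags corresponding to 40..1000 Hz.
        int minLag = sampleRate / 1000;
        int maxLag = sampleRate / 40;
        int bestLag = minLag;
        double bestCorr = Double.NEGATIVE_INFINITY;
        for (int lag = minLag; lag <= maxLag; lag++) {
            double corr = 0;
            for (int i = 0; i + lag < window; i++) {
                corr += buf[i] * buf[i + lag];
            }
            corr /= (window - lag);        // normalize so short overlaps are not favored
            if (corr > bestCorr) {
                bestCorr = corr;
                bestLag = lag;
            }
        }
        System.out.println("Detected ~" + (double) sampleRate / bestLag + " Hz");
    }
}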

Related

What are the usual maximum output frequencies on Android audio out?

I wonder if sound pulses of 40kHz or higher can be generated on Android through audio out. Does anyone know if Android devices are capable of generating ultrasound? Do the audio processors support ultrasound?
Note that I'm talking about audio out (left/right) only and not through the speakers.
The end result depends on the hardware and will vary from phone to phone. I've found measurements of headphone jack performance on the Samsung Galaxy Note 5, and they show that it can output up to 80 kHz at almost the same level as the "hearable" audio spectrum:
http://forums.androidcentral.com/samsung-galaxy-note-5/572012-note-5-headphone-jack-audio-performance-measurements.html
Not sure what you mean by "audio processors" -- DSP?
UPDATE:
And for contrast, here are some measurements of the Nexus 7 2013 in 24/96 mode, using both its headphone jack and an external USB DAC:
http://archimago.blogspot.com/2014/04/measurements-nexus-7-to-audioengine-d3.html
If we look at this particular graph in 24/96 mode:
The light blue line (measurements done via the headphone jack) drops abruptly at 20 kHz, which means the onboard DSP deliberately cuts off any frequencies above it.
So that's exactly what I'm talking about--it's very much device specific and ideally you should test each device you are targeting.

Using AudioRecord's data outside Android

I'm trying to stream audio recorded on Android to a microcontroller for playback. The audio is recorded using the AudioRecord class and is then sent over UDP. On the receiving side, the microcontroller receives the data and plays it using PWM. There are a couple of problems though:
I don't know exactly what format the AudioRecord class uses. I'm using ENCODING_PCM_16BIT but don't even know whether it's bipolar or not, and how to convert it to unipolar if it is.
Due to limited bandwidth, I can't send more than 8 bits per sample. Since 8-bit PCM isn't supported on my phone, I've used the 16-bit version, but for the conversion I've just used the upper 8 bits. I'm not sure that's right.
Since I've used an odd crystal oscillator for my circuit, the audio has to be sampled at 7.2 kHz. My phone supports 8 kHz sampling, so I just use that and send 90% of the recorded data (using a for loop with a float as the loop variable).
I've hooked up a 2 W speaker to the OC2 pin on my ATmega32 using a 220 Ohm resistor and a 100 nF capacitor to act as a filter. (Schematic) But again, I'm not sure it's the correct way to do it.
All of this put together produces nothing but noise as output. The only thing that changes when I "make some noise" near the mic is the volume and the pattern of the output noise. The pattern doesn't make any sense though, and is the same for human voice or music.
This is the piece of code I wrote to convert the data before sending it over UDP:
// Read one buffer from AudioRecord, then skip roughly one in ten input samples
// (8 kHz -> 7.2 kHz effective rate) and keep only the upper 8 bits of each 16-bit sample.
float divider = 8 / 7.2f;
int index = 0;
recorder.read(record_buffer, 0, buffer_size);
for (float i = 0; i < buffer_size; i += divider) {
    send_buffer[index++] = (byte) (record_buffer[(int) i] >> 8);
}
I don't know where to go from here. any suggestion is appreciated.
Update:
I took RussSchultz's advice, sent a sine wave over UDP, and hooked up the output to my cheap oscilloscope. This is what I get:
No Data : http://i.stack.imgur.com/1XYE6.png
No Data Close-up: http://i.stack.imgur.com/ip0ip.png
Sine : http://i.stack.imgur.com/rhtn0.png
Sine Close-up: http://i.stack.imgur.com/12JxZ.png
There are gaps when I start sending the sine wave, which could be the result of a buffer overflow on the hardware. Since the gaps follow a pattern, it can't be UDP data loss.
So, after working on this for a month, I got it to work.
I don't exactly know what format the AudioRecord class uses. I'm using ENCODING_PCM_16BIT but don't even know whether it's bipolar or not, and how to convert it to unipolar if it is.
Due to limited bandwidth, I can't send more than 8 bits per sample. Since 8-bit PCM isn't supported on my phone, I've used the 16-bit version, but for the conversion I've just used the upper 8 bits. I'm not sure that's right.
It was bipolar. I had to convert it to 8 bits by adding half the dynamic range to each sample and then taking the upper 8 bits.
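A minimal sketch of that conversion (the method name is just illustrative): shift the signed 16-bit sample into the unsigned range by adding half the dynamic range, then keep the upper 8 bits.

// Map signed 16-bit PCM (-32768..32767) to unsigned 8-bit (0..255) for PWM playback.
byte toUnsigned8Bit(short sample) {
    int unipolar = sample + 32768;   // bipolar -> unipolar, now 0..65535
    return (byte) (unipolar >> 8);   // keep the upper 8 bits
}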
Since I've used an odd crystal oscillator for my circuit, the audio has to be sampled at 7.2 kHz. My phone supports 8 kHz sampling, so I just use that and send 90% of the recorded data (using a for loop with a float as the loop variable).
Even though I have a slight frequency shift, it's still acceptable.
I've hooked up a 2 W speaker to the OC2 pin on my ATmega32 using a 220 Ohm resistor and a 100 nF capacitor to act as a filter. (Schematic) But again, I'm not sure it's the correct way to do it.
I changed the filter to an exact 3.6 kHz low-pass RC filter (using one of the many online calculators). The speaker should not be connected directly, because it draws more current than a uC can provide; you will still get an output, but the quality is not good at all. What you should do is drive the speaker with a Darlington pair or (as I did) a simple op-amp circuit.
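For reference, the cutoff of such an RC low-pass follows from fc = 1 / (2 * pi * R * C); the component values below are only an example, not necessarily the ones used here:

// Quick check of an RC low-pass cutoff: fc = 1 / (2 * pi * R * C).
// Example values only; pick a standard R/C pair that lands near 3.6 kHz.
double r = 2200.0;                         // ohms
double c = 20e-9;                          // farads (20 nF)
double fc = 1.0 / (2 * Math.PI * r * c);   // ~3617 Hz, close to the 3.6 kHz target
System.out.println(fc);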

What is the maximum sound recording capacity of mobile hardware?

I am developing an Android app for recording sound. In my app I will display the SPL (Sound Pressure Level) in dB. In my research, I came across claims that mobile hardware can only record sounds up to <= 110 dB. The reasoning is that mobiles are designed for recording the human voice, which falls in the range of around 60 dB. So if I need to record sounds louder than 110 dB, how will the mobile hardware respond? Do I need to rely on external devices rather than the mobile? Please provide your comments.
Thanks & regards,
Siva.
Your question is in fact about the dynamic range of the audio input of a mobile phone - any value you record must be capable of being represented in the scale used to measure it.
There is an associated question of what the largest sound pressure level a particular phone can record is, but this is ultimately limited by the dynamic range and the design of the transducer used. Any absolute measure is relative to a calibration point - which in digital audio systems is dB FSD (i.e. the ratio of a sample to full scale), yielding negative values.
The dynamic range in dB of an ideal PCM system is limited by quantisation noise and is related directly to the bit depth (Q) of the sample:
SQNR = 20 * log10(2^Q) ≈ 6.02 * Q dB
State-of-the-art ADCs used in pro-audio equipment typically have a 24-bit sample depth, giving an SQNR of 144 dB. It's worth noting that in silicon ADCs and DACs, the thermal noise floor of the analogue section of the converter means the real dynamic range is smaller than this, and the LSB might as well be random.
AFAIK, Android uses 16-bit PCM, which has an SQNR of 96 dB. This is the same performance as the CD Audio standard. An SNR of 110 dB wouldn't be bad for pro-audio equipment.
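As a quick check of the formula above (just illustrating the arithmetic for 16-bit and 24-bit samples):

// SQNR = 20 * log10(2^Q) ≈ 6.02 * Q dB for an ideal PCM system.
for (int q : new int[] {16, 24}) {
    double sqnr = 20 * Math.log10(Math.pow(2, q));
    System.out.printf("%d-bit: %.1f dB%n", q, sqnr);   // 16-bit: ~96.3 dB, 24-bit: ~144.5 dB
}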
In practice, audio quality is rarely a headline feature of phones and most get nowhere near this. Most users use crappy headphones or the on-board speaker of their phone for voice calls and won't notice the difference. It's an obvious corner to cut from both a cost and power budget point of view for a phone manufacturer.
Additionally, good digital audio design is a black art. Factors such as decoupling of digital signals from ground and the physical proximity of analogue components come into play. You find that in tear-downs of Apple kit they often place the codec right next to the headphone jack and away from the main board of the system. Again, other cost-conscious manufacturers don't do this, and it degrades the dynamic range of the system.
In order to get meaningful measurements from the audio input you will need to disable both automatic gain control (AGC) and probably the HPF (used to remove DC bias, and often set with Fc > 100 Hz for voice calls).
If your intention is to record absolute SPL, you will need to calibrate the audio system of the device to a set point. There is no standardisation of this between manufacturers (or even between devices from any given manufacturer). Unless you fancy doing this for every device on the market (of which there are a lot), you'll never provide universally accurate measurements.

How to measure sound volume in dB scale on Android

We are working on a cross-platform project that requires sampling the sound volume on smartphones and analysing the result with as high accuracy as possible. The iPhone developer used built-in iOS functionality that returns the sound power/volume in dB scale, calculated by the OS itself; as far as I know there is no equivalent functionality in Android.
For now, I am working on Android with the MediaRecorder class provided by the OS, and I use getMaxAmplitude to measure the sound power/volume. I have seen a lot of answers on the net about how to convert amplitude to dB scale, and the answer that sounded most reasonable was using the formula:
20 * Math.log10(amplitude / MAX_AMPLITUDE)
But then I must know the MAX_AMPLITUDE that can be returned by getMaxAmplitude, and the thing is that it differs between devices. For example, I tested getMaxAmplitude on an HTC Desire and on a Samsung Galaxy S3:
on the HTC it was reaching 32767 (which I saw in some answers is the documented maximum), and on the S3 it was not going beyond 16383 (half of the HTC).
Q1:
Is this (the approach discussed above) the correct approach? I read that the correct way to measure sound power/volume is by calculating the RMS and then converting it to dB. Is this how it's done on the iPhone?
Q2:
No matter whether I use the RMS or just the amplitude from getMaxAmplitude, it seems I still need to know the highest amplitude I can get from the recording hardware. Is there a way to know that, or a way to somehow work around it?
90dBspl is an rms value in the acoustic domain.
The digital level of 2500 rms in a 16-bit system is approximately -22 dBFS rms (actually -22.35), where 0 dBFS rms is a full-scale square wave. A full-scale sinusoid in such a system is 0 dBFS peak and -3 dBFS rms (reaching from -32768 to +32767).
A square wave of +/-2500 can be calculated as:
20 * log10(2500 / 32767) = -22.35 dBFS rms
Please note that the peaks of sinusoids are always 3 dB higher than the rms level. The only signal that has the same rms and peak level is the square wave.
Now, Android has a requirement of 30dB linearity around 90dBspl, but this linearity shall be +12dB above 90dBspl and -18dB below the same point. Outside this range there can be compression in different ways, depending on which phone model you test.
The guaranteed highest linear level inside an Android phone is -22 dBFS + 12 dB = -10 dBFS rms. Above this level it is uncertain. The most common scenario is that the last 7 dB of peak headroom are still linear, leading to an acoustic maximum level of 90 dBspl + (22 - 3) dB = 109 dBspl rms for a sinusoid without clipping (or 112 dBspl peak).
In some phones you will find a peak limiter that reduces the gain above 102 dBspl rms. The outcome of this is that you can still record up to the saturation level of the microphone. This saturation level varies, but it is common to have around 2% distortion at 120 dBspl. Above this level the microphone component starts to saturate and clip.
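To tie these numbers together, here is a minimal sketch (the method name and the fixed calibration offset are assumptions, taken from the -22 dBFS rms ≈ 90 dBspl point above; the true offset varies per device and must be calibrated) that converts a 16-bit PCM buffer to dBFS rms and then to an approximate SPL:

// Sketch: 16-bit PCM buffer -> dBFS rms -> approximate dBspl,
// assuming the -22 dBFS rms ~= 90 dBspl calibration point described above.
double approxDbSpl(short[] pcm) {
    double sumSquares = 0;
    for (short s : pcm) {
        sumSquares += (double) s * s;
    }
    double rms = Math.sqrt(sumSquares / pcm.length);
    double dbFs = 20 * Math.log10(rms / 32768.0);   // 0 dBFS rms = full-scale square wave
    return dbFs + 90.0 + 22.0;                      // assumed per-device calibration offset
}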
Looking at the other end of the scale:
Small phone microphones are in general noisy. The latest microphones can have a noise floor 63 dB below 0 dBPa (94 dBspl), but most microphones are between 58 and 60 dB below 0 dBPa.
How can this be calculated to dBFS rms ?
0 dBPa rms is 94 dBspl rms. From the statement above, we know that a 90 dBspl rms acoustic level will be recorded at a digital level of -22 dBFS rms in Android phones. A noise floor 63 dB below 0 dBPa (94 dBspl) is therefore at -22 dBFS rms + 4 dB - 63 dB = -81 dBFS rms. The absolute maximum dynamic range of a 16-bit system can be approximated as 96 dB (or 93 dB, depending on how you see it), so the noise level is at least 12 dB above the quantization noise in the digital file.
This is a very important finding for video recording mode. Unfortunately, many video applications on Android tend to apply too much microphone gain when recording. This leads to clipping when recording loud music concerts and similar situations. We also know that the microphone itself is good up to at least 120 dB, so it would be a good idea for any audio system engineer to make a video recording mode that actually uses the whole dynamic range of the microphone. This means that the gain should be set at least 8 dB lower. It is always possible to raise the rms level of a video recording afterwards if the sound is too soft, but if it is clipped, the recording is damaged forever.
So, my message to you programmers is to implement a video recording mode where the acoustic level of 90dB spl rms is recorded at -30dBFSrms or slightly below that. Any maximization can be done afterwards. In this way we could record rock concerts with much better sound. Doing automatic gain control does not help the sound quality. The dynamic range is often too big to be controlled automatically. You get a lot of pumping in the sound. It is better to implement two different video recording modes: Concert mode and speech mode. In speech mode (optimized for a talking person at 1m distance) the recording gain could be even higher than -22dBFSrms for 90dBspl. I would say -12dBFS rms for 90dBspl would be a suitable recording level. (speech at 1m distance has an rms level of approximately 57dB spl and peaks 20-30dB higher).
Björn Gröhn
Audio system engineer at Sony mobile Lund, Sweden

How To Get Electric Power From the Headphone Jack?

My question is whether anyone knows how to create an Android app that can send an electric charge through the device's headphone jack, like in this video: iPocket_LED. The video shows an iPhone app that controls an LED plugged into the headphone jack.
I want to know how to access the device in order to send an electric signal.
Sorry about my English, it is not my first language; I hope someone understands me.
Many consumer devices which accept an external microphone will provide "plug-in power". This is a small voltage typically from 1 to 5 volts across two of the contacts in the microphone connection.
Apple and (most) Android devices are no exception. Most use a 4-conductor TRRS connection with the following pin-out:
TIP = left headphone out
RING = right headphone out
RING = ground
SLEEVE = mic in + plug-in power
The plug-in power on smartphones is usually around 2 V, supplied as +2 V on the microphone (sleeve) conductor. The phone will only supply it if it detects that a microphone is in place, which it does by testing the resistance from mic to ground to see if it's consistent with a microphone's impedance - something like 200 to 5000 ohms - and I hear iPhones can be very fussy about this and need something very close to 1600 ohms.
This means the maximum current you could draw from this while still looking like a microphone would be pretty small - around 1.25 milliamps (2 V across 1600 ohms). There are some low-powered microcontrollers or other devices you may be able to power with this.
Note that plug-in power is a similar concept to the "phantom power" used in pro audio gear, but it's a different and incompatible standard. Plug-in power is what lets the tiny electret microphones in smartphone headsets work without needing their own small battery.
As for how to actually exert control over your attached device from an app, that's getting into much more complicated electronics. Presumably it is possible if you use the left and/or right headphone out lines to send signals to the device.
You'll need to play some audio. A small amount of current flows any time audio plays; that's what moves the tiny speakers in your headphones. The voltage varies with the level of the audio. It is also AC, so the frequency of the sound (pitch) sets the frequency of the AC cycle.
It is going to be difficult to integrate with a device using this approach, especially because of the AC signal. You can determine the appropriate pitch and level to produce the voltage you want, but most "devices" are probably going to want a +3.3 V or +5 V DC signal. You'll probably need to do an AC-to-DC conversion to make that work.
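If you do go the audio-out route, a tone can be generated with AudioTrack roughly like this (a sketch only, using the older AudioTrack constructor; the output is AC and still needs external rectification and smoothing as described above):

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

// Sketch: loop a continuous sine tone on the headphone output.
void playTone(double freqHz, int sampleRate) {
    short[] buffer = new short[sampleRate];            // one second of mono 16-bit PCM
    for (int i = 0; i < buffer.length; i++) {
        buffer[i] = (short) (Short.MAX_VALUE * Math.sin(2 * Math.PI * freqHz * i / sampleRate));
    }
    AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
            AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
            buffer.length * 2, AudioTrack.MODE_STATIC);
    track.write(buffer, 0, buffer.length);
    track.setLoopPoints(0, buffer.length, -1);          // loop indefinitely
    track.play();
}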
I believe there is a way to integrate with an Android device via the USB interface. That would probably be far better and easier. You could get yourself an Arduino kit with a built-in USB shield/controller and build your device on top of that.
See External USB devices to Android phones?
Yes, using both at the same time is possible, as this is how phones are designed to work. In fact, depending on which specific device you have, overriding the volume limit will also give you a bit more power.
The best bet as far as the lowest possible loss would be active rectification: at the null point have it switch over to +2 V, and the rest of the time whichever output is at the highest peak gets rectified. It's simple enough to do with two dual MOSFETs, and this should get you enough power to at least initialize a phone, though probably not charge it.
