I wasn't sure what SE site to ask this in, and Google isn't helping.
I generated the audio whose waveform is shown below in Matlab, with completely different signals in the left and right channels. But I've found that no matter how I try to listen to it (on my Dell XPS 15 Windows 10 laptop as well as on my Pixel 3 phone, with wired and wireless headphones), both signals are audible in both channels. The levels differ between the two sides, so there is some panning (it isn't mono), but it is not the hard panning I designed.
If I just feed the channels simple sine waves with different frequencies, it pans just fine. But this more complex signal won't pan.
What's up with this? Is it impossible to hard-pan some signals now due to some driver-level BS that mixes the channels? Or can I force it to pan as intended?
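For what it's worth, the simple sine-wave test mentioned above is roughly equivalent to the sketch below (Android Java with AudioTrack rather than my actual MATLAB code; the 440/880 Hz tones and the 2 s duration are arbitrary):

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class PanTest {
        // Plays ~2 s of stereo audio: 440 Hz hard-left, 880 Hz hard-right.
        // If the output is still not hard-panned, the mixing happens somewhere
        // downstream of the app (audio effects, driver, or headset).
        public static void playHardPannedTones() {
            final int sampleRate = 44100;
            final int numFrames = sampleRate * 2;
            short[] pcm = new short[numFrames * 2];   // interleaved: L, R, L, R, ...
            for (int i = 0; i < numFrames; i++) {
                double t = (double) i / sampleRate;
                pcm[2 * i]     = (short) (0.8 * Short.MAX_VALUE * Math.sin(2 * Math.PI * 440 * t)); // left only
                pcm[2 * i + 1] = (short) (0.8 * Short.MAX_VALUE * Math.sin(2 * Math.PI * 880 * t)); // right only
            }
            AudioTrack track = new AudioTrack(
                    AudioManager.STREAM_MUSIC, sampleRate,
                    AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                    pcm.length * 2, AudioTrack.MODE_STATIC);
            track.write(pcm, 0, pcm.length);
            track.play();
        }
    }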
Related
I wonder if sound pulses of 40kHz or higher can be generated on Android through audio out. Does anyone know if Android devices are capable of generating ultrasound? Do the audio processors support ultrasound?
Note that I'm talking about audio out (left/right) only and not through the speakers.
The end result depends on the hardware and will vary from phone to phone. I've found measurements of headphone jack performance on a Samsung Galaxy Note 5, and they show that it can output up to 80 kHz at almost the same level as the audible part of the spectrum:
http://forums.androidcentral.com/samsung-galaxy-note-5/572012-note-5-headphone-jack-audio-performance-measurements.html
Not sure what you mean by "audio processors" -- DSP?
UPDATE:
For contrast, here are some measurements of a Nexus 7 (2013) in 24/96 mode, using both its headphone jack and an external USB DAC:
http://archimago.blogspot.com/2014/04/measurements-nexus-7-to-audioengine-d3.html
If we look at one particular graph from that post (24/96 mode): the light blue line (measurements taken via the headphone jack) drops off abruptly at 20 kHz, which means the onboard DSP deliberately cuts any frequencies above that.
So that's exactly what I'm talking about: it's very much device-specific, and ideally you should test each device you're targeting.
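If you want a quick sanity check on a particular phone, one rough approach is to play a tone above 20 kHz at a 96 kHz output rate and measure what actually comes out of the jack with a scope or an external ADC. A minimal sketch, assuming the device accepts a 96 kHz AudioTrack at all (many will silently resample to 48 kHz and filter the tone away):

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class UltrasoundTest {
        // Plays ~2 s of a 40 kHz tone at a 96 kHz sample rate (Nyquist = 48 kHz).
        // Whether anything above 20 kHz survives to the jack depends entirely on
        // the device's DAC/DSP chain -- measure the output to find out.
        public static void play40kHzTone() {
            final int sampleRate = 96000;
            final int numFrames = sampleRate * 2;
            short[] pcm = new short[numFrames];
            for (int i = 0; i < numFrames; i++) {
                double t = (double) i / sampleRate;
                pcm[i] = (short) (0.5 * Short.MAX_VALUE * Math.sin(2 * Math.PI * 40000 * t));
            }
            AudioTrack track = new AudioTrack(
                    AudioManager.STREAM_MUSIC, sampleRate,
                    AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                    pcm.length * 2, AudioTrack.MODE_STATIC);
            track.write(pcm, 0, pcm.length);
            track.play();
        }
    }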
I am developing an Android app for recording sound. In the app I will display the SPL (Sound Pressure Level) in dB. From my research it seems that mobile hardware can only record sounds up to about 110 dB, the reason being that phones are designed for recording human voice, which falls around the 60 dB range. So if I need to record sounds louder than 110 dB, how will the phone's hardware respond? Do I need to rely on external devices rather than the phone itself? Please provide your comments.
Thanks & regards,
Siva.
Your question is in fact about the dynamic range of the audio input of a mobile phone - any value you record must be capable of being represented in the scale used to measure it.
There is an associated question of the largest sound pressure level a particular phone can record, but this is ultimately limited by the dynamic range and the design of the transducer used. Any absolute measurement is relative to a calibration point, which in digital audio systems is dB FSD (i.e. the ratio of the sample value to full scale), yielding negative values.
The dynamic range in dB of an ideal PCM system is limited by quantisation noise and is related directly to the bit depth (Q) of the sample:
SQNR = 20 * log10(2^Q) ≈ 6.02 * Q dB
State-of-the-art ADCs used in pro-audio equipment typically have 24-bit sample depth, giving an SQNR of 144 dB. It's worth noting that in silicon ADCs and DACs the thermal noise floor of the analogue section limits the real dynamic range to less than this, so the LSB might as well be random.
AFAIK, Android uses 16-bit PCM, which has an SQNR of 96 dB - the same performance as the CD Audio standard. An SNR of 110 dB wouldn't be bad for pro-audio equipment.
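For reference, the arithmetic behind those two figures (a trivial check, not production code):

    public class Sqnr {
        // Ideal PCM quantisation SNR as a function of bit depth Q:
        // SQNR = 20 * log10(2^Q) ≈ 6.02 * Q dB
        static double sqnrDb(int bits) {
            return 20.0 * Math.log10(Math.pow(2, bits));
        }

        public static void main(String[] args) {
            System.out.printf("16-bit: %.1f dB%n", sqnrDb(16)); // ~96.3 dB
            System.out.printf("24-bit: %.1f dB%n", sqnrDb(24)); // ~144.5 dB
        }
    }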
In practice, audio quality is rarely a headline feature of phones and most get nowhere near this. Most users use crappy headphones or the on-board speaker of their phone for voice calls and won't notice the difference. It's an obvious corner to cut from both a cost and power budget point of view for a phone manufacturer.
Additionally, good digital audio design is a black art. Factors such as decoupling of digital signals from ground and the physical proximity of analogue components come into play. In tear-downs of Apple kit you'll find they often place the codec right next to the headphone jack, away from the main board of the system. Other, more cost-conscious manufacturers don't do this, and it degrades the dynamic range of the system.
In order to get meaningful measurements from the audio input you will need to disable both automatic gain control (AGC) and probably the high-pass filter (HPF), which is used to remove DC bias and is often set with Fc > 100 Hz for voice calls.
If your intention is to record absolute SPL, you will need to calibrate the audio system of the device to a set-point. There is no standardisation of this between manufacturers (or even between devices from a given manufacturer). Unless you fancy doing this for every device on the market (of which there are a lot), you'll never provide universally accurate measurements.
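To give a feel for what the measurement side of such an app looks like, here is a minimal sketch assuming 16-bit mono AudioRecord input. The VOICE_RECOGNITION source tends to avoid AGC on devices that honour it (this is not guaranteed), and CALIBRATION_OFFSET_DB is a hypothetical per-device constant you would have to determine yourself against a reference SPL meter:

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class SplMeter {
        // Hypothetical per-device constant mapping dBFS to dB SPL; must be found
        // by calibrating against a reference SPL meter.
        static final double CALIBRATION_OFFSET_DB = 90.0;

        // Requires the RECORD_AUDIO permission.
        public static double readSplOnce() {
            final int sampleRate = 44100;
            int bufBytes = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            // VOICE_RECOGNITION tends to bypass AGC on devices that honour it.
            AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION,
                    sampleRate, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, bufBytes);
            short[] buf = new short[bufBytes / 2];   // 16-bit samples
            rec.startRecording();
            int n = rec.read(buf, 0, buf.length);
            rec.stop();
            rec.release();

            // RMS level relative to full scale (dBFS), shifted by the calibration offset.
            double sumSq = 0;
            for (int i = 0; i < n; i++) {
                double s = buf[i] / 32768.0;
                sumSq += s * s;
            }
            double rms = Math.sqrt(sumSq / Math.max(n, 1));
            double dbFs = 20.0 * Math.log10(Math.max(rms, 1e-9));
            return dbFs + CALIBRATION_OFFSET_DB;
        }
    }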
I'm processing audio on a Samsung Galaxy Mini phone and also on a Nexus 7 tablet.
I've been using the AudioRecord class, and so far I have been able to correctly analyze frequencies from 200 to ~20000 Hz.
I'm detecting pitch through auto-correlation, based on this code: http://tarsos.0110.be/artikels/lees/YIN_Pitch_Tracker_in_JAVA
I am using a 44100 Hz sampling frequency, and I have also tried 8000 Hz.
I have not been able to detect pitch at lower frequencies; I can hardly detect 100 Hz by pointing the microphone at a speaker.
Does anyone know the input frequency response of these devices, or whether they are limited physically or in software?
I would like to be able to detect correctly from at least 50 Hz, because I'm trying to build a voice detector and I'm struggling with these low frequencies when it comes to detecting male voices.
Thank you all.
-Jessica
I can't tell you what the low-frequency limit of these microphones is.
Out of curiosity I did some tests with YIN here...
I'm using a window of 2048 with an overlap of 1024, and I can find frequencies above 40 Hz in recorded files sampled at 44100 Hz, which proves to me that the algorithm can find low frequencies.
You can test with your phone using a pure 50 Hz sinusoid and see if your code can track it.
"The fundamentals of human voices are roughly in the range of 80 Hz to 1100 Hz"
My guess is that smartphone microphones are just not that good at the low end :-(
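One way to separate the algorithm from the microphone is to run the detector on a synthetic 50 Hz sine, with no microphone involved at all. This is not the YIN code from the link, just a plain autocorrelation peak pick, but it shows that 50 Hz is recoverable from 44100 Hz samples given a long enough window:

    public class LowFreqPitchTest {
        public static void main(String[] args) {
            final int sampleRate = 44100;
            final double f0 = 50.0;   // test frequency
            final int window = 4096;  // ~93 ms: more than 4 periods of 50 Hz
            double[] x = new double[window];
            for (int i = 0; i < window; i++) {
                x[i] = Math.sin(2 * Math.PI * f0 * i / sampleRate);
            }

            // Plain normalized autocorrelation, searching 40-400 Hz.
            int minLag = sampleRate / 400;
            int maxLag = sampleRate / 40;
            int bestLag = minLag;
            double best = Double.NEGATIVE_INFINITY;
            for (int lag = minLag; lag <= maxLag; lag++) {
                double r = 0;
                for (int i = 0; i + lag < window; i++) {
                    r += x[i] * x[i + lag];
                }
                r /= (window - lag);  // normalize so longer lags aren't penalized
                if (r > best) { best = r; bestLag = lag; }
            }
            // Prints an estimate close to 50 Hz.
            System.out.printf("estimated pitch: %.1f Hz%n", (double) sampleRate / bestLag);
        }
    }

If a synthetic test like this passes but real recordings of a 50 Hz tone don't, the microphone's low-frequency roll-off (or a high-pass filter in the phone's input path) is the more likely culprit.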
If someone wants to write an Android application that interacts with a physical device, specifically a card reader plugged into the phone's audio jack (e.g. like Square does), how is this done?
Are there APIs to interact with the reader and get the card's data?
When a company creates a reader (the physical device), does it provide the relevant APIs?
Are the physical details abstracted away from the application programmer?
I have found the AudioRecord class, which can record magnetic stripe data from the audio jack, but I can't figure out how to capture the actual card swipe event or how to extract meaningful data from the raw data.
Can anyone help me with this?
Any input is highly welcome!
The way this usually works is by encoding the data signal sent out by the device (here, the card reader) in such a way that it can be decoded on the other end. Sound is a wave: different amplitudes correspond to different loudness, and different frequencies correspond to different pitches. Imagine a sine wave that alternates between a high and a low frequency that are different enough to be easily distinguishable. The device sending out binary data (0's and 1's) can translate that data into an audio signal that varies by frequency (an alternative is varying the amplitude). The receiver, in this case the mobile device, decodes the signal back into 0's and 1's. This is called "frequency-shift keying" (check out more here: http://en.wikipedia.org/wiki/Frequency-shift_keying).
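A toy illustration of the idea in plain Java is below. The two tone frequencies (1200/2200 Hz), the bit rate, and the framing are arbitrary choices for this demo; a real reader's encoding will differ and usually isn't documented:

    public class FskDemo {
        static final int SAMPLE_RATE = 44100;
        static final double FREQ_ZERO = 1200;   // tone for a 0 bit (arbitrary choice)
        static final double FREQ_ONE  = 2200;   // tone for a 1 bit (arbitrary choice)
        static final int SAMPLES_PER_BIT = 441; // 100 bits/s, just for the demo

        // Modulate: one tone burst per bit.
        static double[] modulate(int[] bits) {
            double[] out = new double[bits.length * SAMPLES_PER_BIT];
            for (int b = 0; b < bits.length; b++) {
                double f = (bits[b] == 0) ? FREQ_ZERO : FREQ_ONE;
                for (int i = 0; i < SAMPLES_PER_BIT; i++) {
                    out[b * SAMPLES_PER_BIT + i] = Math.sin(2 * Math.PI * f * i / SAMPLE_RATE);
                }
            }
            return out;
        }

        // Demodulate: compare the energy near each tone frequency for every bit period.
        static int[] demodulate(double[] signal) {
            int nBits = signal.length / SAMPLES_PER_BIT;
            int[] bits = new int[nBits];
            for (int b = 0; b < nBits; b++) {
                double e0 = toneEnergy(signal, b * SAMPLES_PER_BIT, FREQ_ZERO);
                double e1 = toneEnergy(signal, b * SAMPLES_PER_BIT, FREQ_ONE);
                bits[b] = (e1 > e0) ? 1 : 0;
            }
            return bits;
        }

        // Correlate one bit period against sine/cosine at the given frequency.
        static double toneEnergy(double[] x, int start, double freq) {
            double re = 0, im = 0;
            for (int i = 0; i < SAMPLES_PER_BIT; i++) {
                double phase = 2 * Math.PI * freq * i / SAMPLE_RATE;
                re += x[start + i] * Math.cos(phase);
                im += x[start + i] * Math.sin(phase);
            }
            return re * re + im * im;
        }

        public static void main(String[] args) {
            int[] bits = {1, 0, 1, 1, 0, 0, 1, 0};
            int[] decoded = demodulate(modulate(bits));
            System.out.println(java.util.Arrays.toString(decoded)); // should match the input
        }
    }

On the phone side you would feed the demodulator with samples read from AudioRecord instead of the synthetic buffer.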
The simplest way to implement this is to try and find an open library that already does it. The device sending the data will also need to contain some kind of microcontroller that can perform the initial modulation. If you come across any good libraries, let me know, because I'm currently looking.
To answer your question, companies do not generally provide APIs etc to perform this.
This may seem like a lot of extra work to convert a digital signal into an audio signal and back, and you'd be right. However, every mobile device has essentially the same headphone jack, whereas the USB port on an Android phone is drastically different from an iPhone's Lightning connector, or the connector on previous iPhones. Sending the data wirelessly over a network or Bluetooth is also an option, but those have their disadvantages as well.
Note that the mobile device must have a headphone jack that supports a microphone; otherwise it cannot receive input, only output sound. Most smartphones can do this.
Radios work on this principle (FM = Frequency modulation, AM = amplitude modulation).
Old dial-up modems used FSK, which is why you heard those weird noises each time they connected.
Hope that helps!
My question is whether anyone knows how to create an Android app that can send an electric signal through the device's headphone jack, like in this video: iPocket_LED. The video shows an iPhone app that controls an LED plugged into the headphone jack.
I want to know how to access the device to send an electric signal.
Sorry about my English, it's not my first language; I hope someone understands me.
Many consumer devices which accept an external microphone will provide "plug-in power". This is a small voltage typically from 1 to 5 volts across two of the contacts in the microphone connection.
Apple and (most) Android devices are no exception. Most use a 4-conductor TRRS connection with the following pin-out:
TIP = left headphone out
RING = right headphone out
RING = ground
SLEEVE = mic in + plug-in power
The plug-in power is usually around 2 V on smartphones and is supplied as +2 V on the microphone (sleeve) conductor. The phone will only supply it if it detects that a microphone is in place, which it does by testing the resistance from Mic to Ground to see if it's consistent with a microphone's impedance - something like 200 to 5000 ohms. I hear iPhones can be very fussy about this and want something very close to 1600 ohms.
This means the maximum current you could draw from this and still look like a microphone is pretty small - around 1.25 milliamps (about 2.5 mW). There are some low-powered microcontrollers or other devices you may be able to power with this.
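The arithmetic behind that estimate, taking the ~2 V bias and ~1600 ohm load above at face value:

    public class PlugInPowerBudget {
        public static void main(String[] args) {
            // Figures from above: ~2 V bias, ~1600 ohm load to still look like a mic.
            double biasVolts = 2.0;
            double micOhms = 1600.0;

            double currentAmps = biasVolts / micOhms;    // Ohm's law: I = V / R
            double powerWatts = biasVolts * currentAmps; // P = V * I

            System.out.printf("current: %.2f mA%n", currentAmps * 1000); // 1.25 mA
            System.out.printf("power:   %.1f mW%n", powerWatts * 1000);  // 2.5 mW
        }
    }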
Note that plug-in power may be a similar concept to "phantom power" as used in pro audio gear but it's a different and incompatible standard. "plug-in power" is what causes the tiny electret microphones in smartphone headsets to work without needing their own small battery.
As for how to actually exert control over your attached device from an app, that's getting into much more complicated electronics. Presumably it is possible if you use the left and/or right headphone out lines to send signals to the device.
You'll need to play some audio. A small amount of current flows whenever audio plays; that's what moves the tiny speakers in your headphones. The voltage varies with the level of the audio, and it is AC, so the frequency of the sound (pitch) sets the frequency of the AC cycle.
It is going to be difficult to integrate with a device using this approach, especially because of the AC output. You can choose the level and pitch of the tone to shape the signal you send, but most "devices" are going to want a +3.3 V or +5 V DC supply, so you'll probably need an AC-to-DC conversion (rectification) to make that work.
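On the app side, the "play some audio" step amounts to streaming a continuous tone at full level on the output channel your circuit taps; everything after that (rectification, smoothing, regulation) happens in external hardware. A rough sketch, with an arbitrary 10 kHz tone on the left channel only:

    import android.media.AudioFormat;
    import android.media.AudioManager;
    import android.media.AudioTrack;

    public class TonePowerSource {
        private volatile boolean running = true;

        // Streams a continuous full-scale 10 kHz tone on the left channel.
        // An external circuit on the headphone line would rectify and smooth it.
        public void stream() {
            final int sampleRate = 44100;
            final int bufFrames = 4096;
            int minBuf = AudioTrack.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
            AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                    AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                    Math.max(minBuf, bufFrames * 4), AudioTrack.MODE_STREAM);
            track.play();
            short[] buf = new short[bufFrames * 2];   // interleaved L, R
            long frame = 0;
            while (running) {
                for (int i = 0; i < bufFrames; i++, frame++) {
                    double s = Math.sin(2 * Math.PI * 10000 * frame / sampleRate);
                    buf[2 * i]     = (short) (Short.MAX_VALUE * s); // left: full scale
                    buf[2 * i + 1] = 0;                             // right: unused here
                }
                track.write(buf, 0, buf.length);                    // blocks until consumed
            }
            track.stop();
            track.release();
        }

        public void stop() { running = false; }
    }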
I believe there is a means to integrate with an Android device via the USB interface. That would probably be far better and easier. You could get yourself an Arduino kit with a built-in USB shield/controller, and build your device on top of that.
See External USB devices to Android phones?
Yes, using both at the same time is possible, as this is how phones are designed to work. In fact, depending on which specific device you have, overriding the volume limit will also give you a bit more power.
Your best bet for the lowest possible loss would be active rectification: at the null point, switch over to the +2 V bias, and the rest of the time rectify whichever channel has the higher peak. It's simple enough to do with two dual MOSFETs, and this should get you enough power to at least initialize a phone, though probably not charge it.