If someone wants to write an Android application that interacts with a physical device, specifically a card reader plugged into the phone's audio jack
(e.g. the way Square Inc does it), how is this done?
Is there an API to interact with the reader and get the card's data?
When a company creates a reader (physical device), does it provide relevant APIs?
Are the physical details abstracted away from the application programmer?
I have found the AudioRecord class, which can record the magnetic stripe data coming in through the audio jack,
but I can't figure out how to capture the actual card swipe event and
how to extract meaningful data from the raw samples.
Can anyone help me with this?
Any input is highly welcome!
The way this usually works is by encoding the data signal sent out by the device (the card reader, in this case) in such a way that it can be decoded on the other end. Sound is a wave: different amplitudes correspond to different loudness, and different frequencies correspond to different pitches. Imagine you have a sine wave that varies between a high and a low frequency that are sufficiently different from each other to be easily distinguishable. The device sending out binary data (0s and 1s) can translate this data into an audio signal that varies by frequency (an alternative is varying amplitude). The receiver, in this case the mobile device, decodes the signal back into 0s and 1s. This is called "frequency-shift keying" (check out more here: http://en.wikipedia.org/wiki/Frequency-shift_keying).
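To make that concrete, here's a rough sketch of what the receiving side could look like with AudioRecord, using zero-crossing counting to tell the two tones apart. This is not taken from any particular reader's spec - the frequencies, bit rate, and buffer sizes are invented for illustration only:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Minimal FSK receive sketch. All signalling parameters here are invented for
// illustration; a real reader defines its own frequencies and bit rate.
public class FskReceiver {
    static final int SAMPLE_RATE = 44100;
    static final int BAUD = 300;                       // assumed bit rate
    static final int SAMPLES_PER_BIT = SAMPLE_RATE / BAUD;
    static final double FREQ_MARK = 2400.0;            // assumed '1' tone
    static final double FREQ_SPACE = 1200.0;           // assumed '0' tone

    public void listen() {
        int minBuf = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, SAMPLES_PER_BIT * 4));
        rec.startRecording();

        short[] window = new short[SAMPLES_PER_BIT];
        try {
            while (!Thread.currentThread().isInterrupted()) {
                int read = rec.read(window, 0, window.length);
                if (read <= 0) continue;

                // Count zero crossings in this bit window to estimate the dominant frequency.
                int crossings = 0;
                for (int i = 1; i < read; i++) {
                    if ((window[i - 1] >= 0) != (window[i] >= 0)) crossings++;
                }
                double freq = crossings * SAMPLE_RATE / (2.0 * read);

                // Decide which of the two tones this window is closer to.
                boolean bit = Math.abs(freq - FREQ_MARK) < Math.abs(freq - FREQ_SPACE);
                onBit(bit);
            }
        } finally {
            rec.stop();
            rec.release();
        }
    }

    // Hypothetical hook: framing, start/stop bits and card-track decoding would go here.
    protected void onBit(boolean bit) { /* accumulate bits into bytes / track data */ }
}
```

Detecting the actual swipe is usually just an amplitude/energy threshold on the incoming signal before you start decoding bits; everything beyond onBit() (framing, track format) depends entirely on the reader.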
The simplest way to implement this is to try and find an open library that already does it. The device sending the data will also need to contain some kind of microcontroller that can perform the initial modulation. If you come across any good libraries, let me know, because I'm currently looking.
To answer your question: companies do not generally provide APIs to do this.
This may seem like a lot of extra work to convert a digital signal into an audio signal and back, and you're right. However, every mobile device has essentially the same headphone jack, whereas the USB port on an Android is drastically different from an iPhone's Lightning connector, or the connector on previous iPhones. Sending the data wirelessly over a network or Bluetooth is also an option, but those have their disadvantages as well.
Note that the mobile device must have a headphone jack that supports a microphone input (a four-conductor TRRS jack); otherwise it can only output sound, not receive it. Most smartphones support this.
Radios work on this principle (FM = Frequency modulation, AM = amplitude modulation).
Old dial-up modems used FSK, which is why you heard those weird noises each time one connected.
Hope that helps!
I'm trying to see how GSM affects voice data on a phone call. Here is what I'm trying to do: one person will be talking on a phone, and I will record his voice from the phone's mic while he speaks; on the other phone I will capture the data coming over GSM and compare the two. I want to write an Android application to get that data. Is that possible on Android, or can you suggest another way to achieve this?
Some background (you may know this already)...
When you make a GSM call, the analogue signal in the phone microphone corresponding to your speech is converted into a series of digital values and then encoded with a voice codec. This is basically a clever algorithm to capture as much of the speech as possible, in as little data as possible.
The idea is to maintain very good speech quality while saving on the amount of bandwidth needed for a call. Techniques used include not transmitting quiet periods (when you are not speaking) and various compression and predictive encoding algorithms. There have been, and still are, a number of codecs in use in GSM, but the latest and generally preferred codec is called AMR-Narrowband.
Nearly all GSM deployments encrypt speech between the phone and the base station - while there are publicised weaknesses in the various encryption algorithms, I am assuming (hoping...) that decrypting is not what you are looking for.
Your question - 'I want to see that if there will be data loss or corruption when voice reaches over gsm'
Firstly, it is worth noting that speech is relatively tolerant of small amounts of data loss and corruption, at least compared to data. It is quite common to have bursts of packet loss in VoIP networks, and it may cause a temporary degradation in voice quality. Secondly, packet loss in a VoIP network includes delayed packets, which can be confusing - if a packet arrives too late to be included in the 'sound' being played through the receiver's speaker, then it is effectively lost from the VoIP point of view, even though other measures may show that it simply arrived late.
To actually measure the loss between the GSM phone and the base station you would need access to the data received at the base station, which you will not usually have unless you are the operator.
If you do an end-to-end test, from one GSM phone to another, your speech path will traverse other network nodes as well, so you will not know whether any loss or corruption is happening over the GSM air interface or in one or more of the other nodes.
You would also need to be aware of handover from one cell to another and from 2G to 3G (GSM to UMTS) which may affect your tests (even stationary phones can handover in certain circumstances).
If your interest is purely academic, then the easiest thing might be to create your own GSM base station and test on this - there exist several open-source GSM 'network in a box' projects which should allow you to do this. I have not used it myself, but this one looks the most actively supported at this time - check out the mailing list under the community tab for a good place to follow up your investigations:
http://openbts.org
I am developing an Android app for recording sound. In my app I will display the SPL (Sound Pressure Level) in dB. In my research I came across the claim that mobile hardware can only record sounds up to about 110 dB. The reason given is that mobiles are designed for recording the human voice, which falls within a range of around 60 dB. So if I need to record sounds louder than 110 dB, how will the mobile hardware respond? Do I need to rely on external devices rather than the mobile itself? Please provide your comments.
Thanks & regards,
Siva.
Your question is in fact about the dynamic range of the audio input of a mobile phone - any value you record must be capable of being represented in the scale used to measure it.
There is an associated question of the largest sound pressure level a particular phone can record, but this is ultimately limited by the dynamic range and the design of the transducer used. Any absolute measure is relative to a calibration point - which in digital audio systems is dBFS (the ratio of a sample to the full-scale maximum), yielding negative values.
The dynamic range in dB of an ideal PCM system is limited by quantisation noise and is related directly to the bit depth (Q) of the sample:
SQNR = 20 * log10(2^Q) ≈ 6.02 * Q dB
State-of-the-art ADCs used in pro-audio equipment typically have 24-bit sample depth, giving an SQNR of 144dB. It's worth noting that in silicon ADCs and DACs, the thermal noise floor of the analogue section of the converter sits above the quantisation noise floor, so the LSB might as well be random.
AFAIK, Android is using 16-bit PCM, which has a SQNR of 96dB. This is the same performance as the CD Audio standard. A SNR of 110dB wouldn't be bad for pro-audio equipment.
In practice, audio quality is rarely a headline feature of phones and most get nowhere near this. Most users use crappy headphones or the on-board speaker of their phone for voice calls and won't notice the difference. It's an obvious corner to cut from both a cost and power budget point of view for a phone manufacturer.
Additionally, good digital audio design is a black art. Factors such as decoupling of digital signals from ground and the physical proximity of analogue components come into play. You'll find in tear-downs of Apple kit that they often place the codec right next to the headphone jack, away from the main board of the system. Other, more cost-conscious manufacturers don't do this, and it degrades the dynamic range of the system.
In order to get meaningful measurements from the audio input you will need to disable both automatic gain control (AGC) and probably the HPF (a high-pass filter used to remove DC bias, often set with Fc > 100Hz for voice calls).
If your intention is to record absolute SPL, you will need to calibrate the audio system of the device to a set-point. There is no standardisation of this between manufacturers (or even devices from any given manufacturer). Unless you fancy doing this for the devices on the market (of which there are a lot), you'll never provide universally accurate measurements.
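To tie the AGC and calibration points together, here is a hedged sketch of per-buffer level measurement. The VOICE_RECOGNITION source is commonly reported to apply less processing than the default MIC source (behaviour still varies by device), and the 90 dB offset is a placeholder you would have to replace with a per-device calibration against a reference source:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class SplMeter {
    static final int SAMPLE_RATE = 44100;
    // Placeholder: the dB SPL at the mic that corresponds to 0 dBFS.
    // Must be measured per device against a calibrated reference.
    static final double CALIBRATION_OFFSET_DB = 90.0;

    public void run() {
        int bufBytes = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.VOICE_RECOGNITION,
                SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, bufBytes);
        short[] buf = new short[bufBytes / 2];
        rec.startRecording();
        try {
            while (!Thread.currentThread().isInterrupted()) {
                int n = rec.read(buf, 0, buf.length);
                if (n <= 0) continue;
                double sumSq = 0;
                for (int i = 0; i < n; i++) {
                    double s = buf[i] / 32768.0;   // normalise 16-bit PCM to [-1, 1)
                    sumSq += s * s;
                }
                double rms = Math.sqrt(sumSq / n);
                double dbFs = 20 * Math.log10(Math.max(rms, 1e-9)); // negative; 0 dBFS = full scale
                double dbSpl = dbFs + CALIBRATION_OFFSET_DB;        // only meaningful after calibration
                onLevel(dbSpl);
            }
        } finally {
            rec.stop();
            rec.release();
        }
    }

    protected void onLevel(double dbSpl) { /* update UI, log, etc. */ }
}
```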
Desperately need help!
The problem is as follows: there is a bunch of medical diagnostic devices packed in a box. They are all fed from the same battery, and their data is supposed to be visualized on a Nexus tablet, also enclosed in the box. Only one device at a time is connected to a tablet. Connection is via USB port, processing off-line only. Data is streaming in real-time (some devices may have recording capability, some don't) and needs to be visualized in real-time also.
The devices are "dumb" and there are no SDKs. Seemingly, the devices were never intended to be connected to any external visualizer or any other device. All we have to work with is the raw stream of data - the output of a device is not even a file but a stream of 256 vectors. This stream needs to be captured, written to a predefined buffer or series of buffers (how do you determine the size of such a buffer so it is generic enough to satisfy every device in the box?), and then translated into some format that the Android tablet can visualize.
Is my understanding of the required architecture correct? What language should this software be written in? Can it be done in something truly cross-platform like Python? Does any open-source functionality for capturing such a stream exist (if so, please kindly recommend it)? Is it possible to make the software generic enough that changing the device/tablet/OS could be accommodated without excruciating pain?
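No answer is attached to this question here, but as a sketch of the "capture into a buffer, then hand off to the visualizer" part, a simple ring buffer between the reader thread and the drawing thread is usually enough. The capacity is an arbitrary example; you would size it from the fastest device's data rate times the longest stall you want to tolerate:

```java
// Minimal single-producer / single-consumer ring buffer sketch for raw device bytes.
// Capacity is an example; size it from (max bytes per second) * (max stall seconds).
public class StreamRingBuffer {
    private final byte[] data;
    private int head = 0;   // next write position
    private int tail = 0;   // next read position
    private int count = 0;  // bytes currently stored

    public StreamRingBuffer(int capacity) {
        data = new byte[capacity];
    }

    // Called from the USB/serial reader thread.
    public synchronized void write(byte[] src, int len) {
        for (int i = 0; i < len; i++) {
            data[head] = src[i];
            head = (head + 1) % data.length;
            if (count == data.length) {
                tail = (tail + 1) % data.length;  // overwrite oldest data when full
            } else {
                count++;
            }
        }
        notifyAll();
    }

    // Called from the visualisation thread; blocks until at least one byte is available.
    public synchronized int read(byte[] dst) throws InterruptedException {
        while (count == 0) wait();
        int n = Math.min(dst.length, count);
        for (int i = 0; i < n; i++) {
            dst[i] = data[tail];
            tail = (tail + 1) % data.length;
        }
        count -= n;
        return n;
    }
}
```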
I am developing an Android application in which I have to record audio from an Android phone and find the power of a specific frequency using the Goertzel algorithm. Using that power value, I am making some decisions regarding secret messages sent through the audio.
In order for this application to be usable on a variety of Android phones, I need to make sure that the microphone gains of all the phones are the same. But unfortunately, different phones have different gains
(Sony phones have very small gains, and Samsung and HTC phones have very large gains). Is there a way to set a common gain for the microphones through Android?
One other option is to record the audio to its full length and normalize it. However, I cannot do this because I am doing the processing in real time, i.e. I have to process each and every frame received from the AudioRecord object and calculate the power value; the app cannot wait until the end of the audio to decide the message.
So, if possible, please suggest some methods I can try to overcome this.
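For reference, the per-frame power at a single frequency that the question describes can be computed with a Goertzel filter along these lines. The frame and frequency values are examples only, and this does not by itself solve the gain problem; one common mitigation (sketched in the second method) is to normalise the tone power by the frame's total energy so the result is relative rather than absolute:

```java
// Goertzel power at one target frequency for a single frame of 16-bit PCM samples.
// Frame size, sample rate and target frequency are example parameters.
public class Goertzel {
    public static double power(short[] frame, int n, int sampleRate, double targetHz) {
        double k = 2.0 * Math.cos(2.0 * Math.PI * targetHz / sampleRate);
        double s0, s1 = 0, s2 = 0;
        for (int i = 0; i < n; i++) {
            s0 = frame[i] + k * s1 - s2;
            s2 = s1;
            s1 = s0;
        }
        // Squared magnitude of the DFT bin at targetHz.
        return s1 * s1 + s2 * s2 - k * s1 * s2;
    }

    // One way to reduce sensitivity to microphone gain: express the tone power
    // relative to the total energy of the frame instead of as an absolute value.
    public static double relativePower(short[] frame, int n, int sampleRate, double targetHz) {
        double total = 0;
        for (int i = 0; i < n; i++) total += (double) frame[i] * frame[i];
        if (total == 0) return 0;
        return power(frame, n, sampleRate, targetHz) / total;
    }
}
```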
I'm looking into making a pH tester for my Android phone. I've found a pH electrode that will send a millivolt signal which I can then convert into a pH reading (59.2 mV per pH unit at 25°C). The question I have is whether it would be possible to connect the electrode to the headphone jack and directly read the millivolt value, or whether I would need to convert the analog signal to digital first and plug it in via USB. I'm not a big electronics guy, but I'm doing this project on the side and hoping to learn from it.
I was thinking perhaps getting the mV reading from the headphone jack would be possible with the GetMaxAmplitude function, like in this thread: Range of values for GetMaxAmplitude. Although, from what I understand, the lowest reading possible with this function is 0, and there are negative mV values that can be read when testing for pH.
Any help is greatly appreciated, thanks!
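For the conversion itself - assuming a standard electrode that reads 0 mV at pH 7 and the ideal 59.2 mV-per-pH slope at 25°C quoted above - the arithmetic is just:

```java
public class PhConversion {
    // Assumes a standard electrode: 0 mV at pH 7, ideal 59.2 mV-per-pH slope at 25 °C.
    public static double pHFromMillivolts(double mV) {
        return 7.0 - (mV / 59.2);
    }
}
```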
This would really be better asked on the Electrical Engineering site. But the best way is to use a Bluetooth-to-serial converter ($5 off eBay) and a PIC microcontroller with a USART and A/D converter ($1); you can program the PIC quite easily in C with the MPLAB IDE and HI-TECH C compiler. The tools you'll need are a PIC programmer ($20) and something with a serial port if you want to configure the Bluetooth-serial converter, like a desktop PC or a USB-serial converter. You might need an op-amp circuit to amplify the signal so it's readable by the PIC. You'd then use code from Google's BluetoothChat example to get your phone connecting to your Bluetooth system and receiving data from it.
Using the microphone input would be tricky, for one thing because it will be filtered to accept only AC. One way to get around that would be to modulate an oscillator's output so its amplitude is proportional to the DC signal you're measuring; you could then measure the magnitude by analysing the data from the microphone.
Interfacing over USB is more difficult than it sounds; it would be harder to build something that interfaces that way and measures millivolts than to use Bluetooth, because the PIC you use for analog-to-digital sampling would have to act as a USB host (or rely on USB OTG on the phone), which is far more complicated than being a simple USB peripheral.
I think you would have the most consistent operation across a range of android devices if you built a circuit which uses the voltage from the sensor to control the frequency of an audio oscillator, and measures the frequency with software on the phone.
It's not impossible that a direct connection and reading the amplitude would work, but there are two problems. First, the signal path may not be good all the way down to DC - there may be a minimum frequency it can pass, making it unsuitable for measuring constant voltages. Second, the gain of the input channel may not be consistent from device to device, or even over time, temperature, etc. There are possible workarounds, such as circuits which alternately send the voltage upright and inverted (effectively modulating it) to overcome the minimum-frequency limitation, or which alternate the actual reading with a reference voltage to help model the input gain.
But I'd probably recommend either the frequency modulation approach, or using a $20 embedded bluetooth module and going wireless. Either way, the sensor system is going to need its own small battery pack.
You can extract some power from the headphone jack by telling Android to play some sound (and, I suppose, rectifying the output and storing it in a capacitor) - I've seen a bunch of jack-powered gadgets do this. I wonder if the two ideas could go together? What if you sent some audio out through the headphone jack, through the sensor, then back into the mic? The pH reading should affect the received sound in some measurable way, I'd expect.
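Going back to the frequency-oscillator suggestion above, here is a hedged sketch of the phone-side half: estimate the incoming tone's frequency from zero crossings of the mic input and map it back to millivolts. The 1000 Hz / 1 Hz-per-mV mapping is a placeholder - a real circuit defines its own frequency-to-voltage relationship - and the resulting millivolt value would then go through a pH conversion like the one shown earlier:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class ToneFrequencyReader {
    static final int SAMPLE_RATE = 44100;

    // Placeholder mapping: assumes the external oscillator outputs 1000 Hz at 0 mV
    // and shifts by 1 Hz per mV. A real circuit defines its own relationship.
    static double millivoltsFromFrequency(double hz) {
        return hz - 1000.0;
    }

    public double readFrequencyOnce() {
        int bufBytes = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord rec = new AudioRecord(MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufBytes);
        short[] buf = new short[SAMPLE_RATE / 2];   // analyse half a second of audio
        rec.startRecording();
        int n = rec.read(buf, 0, buf.length);
        rec.stop();
        rec.release();
        if (n <= 1) return 0;

        // Count zero crossings; each full cycle of the tone produces two of them.
        int crossings = 0;
        for (int i = 1; i < n; i++) {
            if ((buf[i - 1] >= 0) != (buf[i] >= 0)) crossings++;
        }
        return crossings * SAMPLE_RATE / (2.0 * n);
    }
}
```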