Is there any API (provided by Android) to measure the frequency of sound? I searched for this but didn't find any appropriate API.
For example, I saw a screen lock on some devices that opens whenever the user blows in a particular direction, which gave me the idea for this.
My application measures the volume of captured audio as a function of the samples' absolute amplitude.
I've noticed an unexpected behavior in android.media.AudioRecord in the Android SDK. Assume the following flow:
1. Application is launched
2. Audio volume is being measured
3. Phone call is answered/dialed
4. Audio volume is being measured
The noise around the microphone is produced by a TV at a constant volume setting. Values measured at point 2 are in the range [55-65], while values measured at point 4 are in the range [15-25].
I understand that there must be some volume adjustment going on when a phone call occurs. Is it possible to monitor those adjustments, or to get rid of them?
I've tried AutomaticGainControl, but it is not supported on my Nexus 5, and I don't want to rely on it since the target devices might not support it either.
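For reference, the runtime check is simple; a minimal sketch, assuming an already-initialized AudioRecord named recorder:

```java
import android.media.AudioRecord;
import android.media.audiofx.AutomaticGainControl;

// Try to disable the platform AGC on the record session, if the device supports it.
// 'recorder' is an already-initialized AudioRecord instance (setup not shown).
void disableAgcIfPossible(AudioRecord recorder) {
    if (AutomaticGainControl.isAvailable()) {
        AutomaticGainControl agc =
                AutomaticGainControl.create(recorder.getAudioSessionId());
        if (agc != null) {
            agc.setEnabled(false); // returns SUCCESS or an error code
        }
    }
    // When isAvailable() returns false (as on my Nexus 5), there is no
    // public API to control this gain adjustment.
}
```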
Update
This volume adjustment happens not only after a phone call. I've just noticed the same behavior while the phone was simply lying on the table measuring volume.
With regard to resources that your app shares with other apps (in this case, the microphone): you should always treat them carefully and assume their configuration may change during a phone call, or when switching to another app that uses the microphone.
Instead of trying to monitor changes in gain, which may require root or other elevated privileges, I would take another approach and do the following:
1. Save the mic sensitivity level to a configuration file (shared_prefs) in your recording activity when the user is recording. Do the same whenever the user changes it while using your app (a minimal sketch follows this list).
2. In the onResume() method of your recording activity, load the mic sensitivity level and set it before continuing to record (How to adjust microphone sensitivity while recording audio in android - Solved).
3. If a phone call throws your app into the background and takes control of the mic, then once the call is finished your app will return to the foreground and restore the mic sensitivity level for further recording.
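Since Android exposes no direct microphone-gain setter, "sensitivity level" here is best read as a software gain factor applied to the raw PCM samples. A minimal sketch of persisting and applying it (class and key names are illustrative):

```java
import android.content.Context;
import android.content.SharedPreferences;

// Persist and restore a user-chosen software gain factor, then apply it to
// the PCM buffers as they arrive. Names are illustrative, not a standard API.
class MicGainStore {
    private static final String PREFS = "recorder_prefs";
    private static final String KEY_GAIN = "mic_gain";

    static void save(Context ctx, float gain) {
        SharedPreferences p = ctx.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
        p.edit().putFloat(KEY_GAIN, gain).apply();
    }

    static float load(Context ctx) {
        return ctx.getSharedPreferences(PREFS, Context.MODE_PRIVATE)
                  .getFloat(KEY_GAIN, 1.0f); // default: no amplification
    }

    // Apply the gain to 16-bit PCM samples, clamping to avoid wrap-around.
    static void applyGain(short[] pcm, int count, float gain) {
        for (int i = 0; i < count; i++) {
            int v = (int) (pcm[i] * gain);
            pcm[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, v));
        }
    }
}
```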
Hope this helps.
For the Dutch movie "App" (http://www.imdb.com/title/tt2536436), a second screen app was developed. The app gives extra details about some scenes and shows some movie fragments from other angles, and it appears to be synchronized with the audio of the movie.
For a school project, we have to develop a similar app, so we want to achieve the same result. Does anybody know of a way to synchronize app content with an external audio source? We know we have to filter out environmental audio, but we have no idea where to start. It seems that MPEG2-TS carries some kind of timecode via SMPTE, but we don't know how to "listen" for this timecode in our Android app.
Does anybody have any idea? Any external libraries to be used?
Here's an article briefly explaining some Automated Content Recognition (ACR) techniques:
Second screen apps use the microphone on your phone or tablet to listen to the TV and identify the channel, show or ad you are watching, and the precise location within it, based on one of two techniques:
Watermarking
Audio watermarking requires a series of inaudible “watermarks” to be encoded into the broadcast TV signal, normally using a hardware encoder in the playout suite or OB truck. Watermarks can be repeated regularly throughout the broadcast, providing a timecode, or used to trigger specific events such as questions or voting windows. The second screen app uses the device’s microphone to listen for each watermark, and decode the “payload” which uniquely identifies the channel and timecode or event.
Fingerprinting
Audio fingerprinting does not require the broadcast content to be modified. Instead, the TV content is analyzed before it is broadcast (or sometimes while it is being broadcast) and broken up into a sequence of “fingerprints” which are as unique as their name suggests. The second screen app uses an API to record a short segment of audio, and generates its own fingerprint, which is then compared to the “target” set of fingerprints, and if a match is found, the channel, show and timecode are identified.
Thank you for the response. We solved this with watermarking: we added high-frequency audio watermarks, which the smartphone detected using FFT analysis of the captured audio. Once we detected which TV show was being watched, we fetched the show's information over the internet.
I am developing an Android application in which I have to record audio from an Android phone and find the power of a specific frequency using the Goertzel algorithm. Using that power value, I make decisions about secret messages sent through the audio.
For this application to work on a variety of Android phones, I need to make sure that the microphone gains of all the phones are the same. Unfortunately, different phones have different gains
(Sony phones have very small gains, while Samsung and HTC phones have very large gains). Is there a way to set a common microphone gain through Android?
One other option is to record the audio to its full length and normalize it. However, I cannot do this because the processing happens in real time: I have to process each frame received from the AudioRecord object and calculate the power value; the app cannot wait until the end of the audio to decide the message.
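For context, the per-frame power computation is the standard Goertzel recurrence; a minimal sketch (frame and sample-rate names are illustrative):

```java
// Goertzel power at a single target frequency, computed per frame.
// 'frame' is one buffer read from AudioRecord; sampleRate is whatever
// the app records at (e.g. 44100 Hz).
static double goertzelPower(short[] frame, int n, double targetHz, double sampleRate) {
    int k = (int) Math.round(n * targetHz / sampleRate);
    double omega = 2.0 * Math.PI * k / n;
    double coeff = 2.0 * Math.cos(omega);
    double s1 = 0.0, s2 = 0.0;
    for (int i = 0; i < n; i++) {
        double s = frame[i] + coeff * s1 - s2;
        s2 = s1;
        s1 = s;
    }
    // Squared magnitude of the target frequency bin.
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}
```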
So, if possible, please suggest some methods I could try to overcome this.
I am writing an application that behaves similarly to the existing voice recognition, but sends the sound data to a proprietary web service that performs the speech recognition. I am using the standard MediaRecorder (AMR-NB encoded), which seems perfect for speech recognition. The only data it provides is the amplitude, via the getMaxAmplitude() method.
I am trying to detect when the person starts to talk, so that when the person stops talking for about 2 seconds I can send the sound data to the web service. Right now I am using a threshold on the amplitude: if it goes over a value (e.g. 1500), I assume the person is speaking. My concern is that amplitude levels may vary by device (e.g. Nexus One vs. Droid), so I am looking for a more standard approach that can be derived from the amplitude values.
P.S.
I looked at graphing-amplitude, but it doesn't provide a way to do this with just the amplitude.
Well, this might not be of much help, but how about starting by measuring the ambient noise captured by the device's microphone, and applying the threshold dynamically based on that? That way you would make it adaptable to different devices' microphones, and also to the environment the user is in at a given time.
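A minimal sketch of that idea, fed with getMaxAmplitude() readings taken every ~100 ms (the constants are illustrative starting points, not tuned values):

```java
// Adaptive speech-start detection. The noise floor is estimated with an
// exponential moving average while no speech is detected; speech is flagged
// when the amplitude clearly exceeds that floor.
class AdaptiveThreshold {
    private double noiseFloor = 0;              // running estimate of ambient amplitude
    private static final double ALPHA = 0.05;   // smoothing: higher = adapts faster
    private static final double MARGIN = 3.0;   // how far above the floor counts as speech
    private static final int MIN_LEVEL = 500;   // absolute lower bound against a near-zero floor

    boolean isSpeech(int amplitude) {
        boolean speaking = amplitude > noiseFloor * MARGIN && amplitude > MIN_LEVEL;
        if (!speaking) {
            // Only adapt the floor during silence, or speech would raise it.
            noiseFloor = (1 - ALPHA) * noiseFloor + ALPHA * amplitude;
        }
        return speaking;
    }
}
```

The floor then tracks whatever room the user happens to be in, instead of a fixed value like 1500.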
1500 is too low a number. Measuring the change in amplitude will work better.
However, it will still result in misdetections.
I fear the only way to solve this problem is to figure out how to recognize a simple word or tone rather than simply detecting noise.
There are now multiple VAD (voice activity detection) libraries designed for Android. One of these is:
https://github.com/gkonovalov/android-vad
Most smartphones come with a proximity sensor, and Android has an API for using these sensors. This would be adequate for the job you described: when the user moves the phone near their ear, your app can start recording. It should be easy enough.
Sensor class for Android
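A minimal sketch of the listener (startRecording()/stopRecording() are placeholders for your own logic):

```java
import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Start recording when the phone is held near the ear, using the proximity sensor.
class ProximityRecorder implements SensorEventListener {
    private final SensorManager sensorManager;
    private final Sensor proximity;

    ProximityRecorder(Context ctx) {
        sensorManager = (SensorManager) ctx.getSystemService(Context.SENSOR_SERVICE);
        proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
    }

    void start() {
        sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Most proximity sensors report either 0 (near) or their maximum range (far).
        if (event.values[0] < proximity.getMaximumRange()) {
            startRecording();   // phone is near the ear
        } else {
            stopRecording();
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    private void startRecording() { /* app-specific */ }
    private void stopRecording() { /* app-specific */ }
}
```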
I'm trying to build a gadget that detects pistol shots using Android. It's part of a training aid for pistol shooters that shows how the shots are distributed in time, and I use an HTC Tattoo for testing.
I use MediaRecorder and its getMaxAmplitude() method to get the highest amplitude during the last 1/100 s, but it does not work as expected: speech gives me getMaxAmplitude values in the range from 0 to about 25000, while pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8 kHz, there should be some samples with a considerably higher level.
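For reference, my polling loop is roughly this (a minimal sketch; the MediaRecorder setup is not shown):

```java
import android.media.MediaRecorder;
import android.os.Handler;

// Poll MediaRecorder.getMaxAmplitude() roughly every 10 ms. It returns the
// maximum absolute sample amplitude (0..32767) seen since the previous call.
class AmplitudePoller {
    private final MediaRecorder recorder; // assumed prepared and started elsewhere
    private final Handler handler = new Handler();

    AmplitudePoller(MediaRecorder recorder) {
        this.recorder = recorder;
    }

    void start() {
        handler.post(new Runnable() {
            @Override
            public void run() {
                int max = recorder.getMaxAmplitude();
                onAmplitude(max);              // app-specific: detect shots, log, ...
                handler.postDelayed(this, 10); // ~1/100 s
            }
        });
    }

    void onAmplitude(int max) { /* app-specific */ }
}
```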
Does anyone know how these things work? Are there filters applied before the max amplitude is registered? If so, are they in hardware or software?
Thanks,
/George
It seems there's an AGC (Automatic Gain Control) filter in place. You should also be able to identify the shot by its frequency characteristics. I would expect it to show up across most of the audible spectrum, but get a spectrum analyzer (there are a few on the app market, like SpectralView) and try identifying the event by its frequency "signature" and amplitude. If you clap your hands, what do you get for max amplitude? You could also try covering the phone with something that muffles the sound, like a few layers of cloth.
It seems the AGC is in MediaRecorder. When I use AudioRecord, I can detect shots using the amplitude, even though it sometimes reacts to sounds other than shots. This is not a problem, since the shooter usually doesn't make any other noise while shooting.
But I will do some FFT too, to get it perfect :-)
Sounds like you figured out your AGC problem. One further suggestion: I'm not sure the FFT is the right tool for the job. You might get better detection and lower CPU use with a sliding power estimator.
e.g.
signal => square => moving average => peak detection
All of the above can be implemented very efficiently using fixed-point math, which fits mobile Android platforms well.
You can find more info by searching for "Parseval's theorem" and "CIC filter" (cascaded integrator-comb).
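A minimal fixed-point sketch of that pipeline (window length, smoothing shift, and trigger ratio are illustrative, not tuned values):

```java
// Sliding power estimator: square each sample, keep a moving average over a
// short window with a running sum (the integrator/comb of a 1-stage CIC),
// and flag a shot when short-term power jumps well above the long-term level.
class PowerDetector {
    private static final int WINDOW = 64;      // ~8 ms at 8 kHz
    private final long[] ring = new long[WINDOW];
    private int pos = 0;
    private long windowSum = 0;                // running sum of squared samples
    private long slowAvg = 0;                  // long-term power, used as the baseline

    // Feed one 16-bit PCM sample; returns true when a peak is detected.
    boolean process(short sample) {
        long sq = (long) sample * sample;      // square
        windowSum += sq - ring[pos];           // O(1) moving sum: add new, drop oldest
        ring[pos] = sq;
        pos = (pos + 1) % WINDOW;

        long fastPower = windowSum / WINDOW;   // short-term average power
        slowAvg += (fastPower - slowAvg) >> 8; // cheap fixed-point IIR baseline

        // Peak when short-term power is far above the baseline.
        return slowAvg > 0 && fastPower > 16 * slowAvg;
    }
}
```

The moving sum is the O(1) equivalent of averaging the last WINDOW squared samples, which is why this stays cheap even at higher sample rates.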
Sorry for the late response; I didn't see this question until I started searching for a different problem...
I have started an application to do what I think you're attempting. It's an audio-based lap timer (a button to start/stop recording, and loud audio noises for lap marking). It's not finished, but it might provide you with a decent base to get started.
Right now, it allows you to monitor the signal volume coming from the mic and set the ambient noise amount. It's also under the new BSD license, so feel free to check out the code here: http://code.google.com/p/audio-timer/. It's set up to use the 1.5 API to include as many devices as possible.
It's not finished, in that it has two main issues:
The audio capture doesn't currently work on emulated devices because of the unsupported frequency requested.
The timer functionality doesn't work yet; I was focusing on getting the audio capture working first.
I'm looking into the frequency support, but Android doesn't seem to provide a way to find out which sample rates are supported without trial and error per device.
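The trial and error can at least be done cheaply, though: AudioRecord.getMinBufferSize() returns a negative error code for unsupported parameter combinations, so candidate rates can be tested without actually opening the microphone. A minimal sketch:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;

// Probe which sample rates the device accepts. getMinBufferSize() returns a
// negative error code for unsupported parameter combinations.
static int findSupportedSampleRate() {
    int[] candidates = {44100, 22050, 16000, 11025, 8000};
    for (int rate : candidates) {
        int bufSize = AudioRecord.getMinBufferSize(
                rate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);
        if (bufSize > 0) {
            return rate; // first rate the device claims to support
        }
    }
    return -1; // none of the candidates are supported
}
```

A fully defensive version would also construct the AudioRecord and check getState() == STATE_INITIALIZED, since some devices accept parameters here but still fail to initialize.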
I also have some extra code on my local dev machine to create a layout for the ListView items to display "lap" information, but I got sidetracked by the frequency problem. Since the display and audio capture are pretty much done, using the system time to fill in the display values for timing information should be relatively straightforward, and then it shouldn't be too difficult to add the ability to export the data table to a CSV on the SD card.
Let me know if you want to join this project, or if you have any questions.