Is it possible to make the camera flash blink in time with the beats of the song set as the ringtone on an Android phone? If yes, please give me a hint.
Yes and no. I once wrote an FFT for the device's acceleration sensor data, to find out at what step frequency a user is running.
You can do FFT on an Android phone. I used the Apache Commons Math library for it.
The easiest starting point is a WAV or FLAC audio file, because you can get at the raw samples without dealing with lossy compression and can run the FFT on them directly.
In my running app it takes about 1-2 seconds(!) to calculate one FFT over 512 input values, pegging the CPU at 100% and draining the battery correspondingly. So I do the FFT only once a minute.
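For reference, a minimal sketch of such a per-block FFT with Commons Math (assuming version 3.x; the class and method names below come from that library, everything else is illustrative):

```java
import org.apache.commons.math3.complex.Complex;
import org.apache.commons.math3.transform.DftNormalization;
import org.apache.commons.math3.transform.FastFourierTransformer;
import org.apache.commons.math3.transform.TransformType;

public class BeatSpectrum {
    // samples: e.g. 512 PCM values scaled to [-1, 1]; the length must be a power of two
    public static double[] magnitudes(double[] samples) {
        FastFourierTransformer fft = new FastFourierTransformer(DftNormalization.STANDARD);
        Complex[] spectrum = fft.transform(samples, TransformType.FORWARD);
        // Only the first half of the bins is meaningful for a real-valued input
        double[] mags = new double[spectrum.length / 2];
        for (int i = 0; i < mags.length; i++) {
            mags[i] = spectrum[i].abs();
        }
        return mags;
    }
}
```

Feeding it 512 ringtone samples at a time and watching the low-frequency bins for energy spikes would be one way to approximate the beat.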
So I assume the user would see this when the telephone rings:
No blinking light, just sound, for the first few seconds (while you do the FFT)
A light blinking at the frequency of the beats, but not in sync with them (you're showing the beats of a past time period)
But I must admit that I really like the idea. Let me know when your app is out :-)
Related
I'm really struggling with FFT, and now I need to communicate from the headphone jack of my Android device to an Arduino. There is currently a library for the Arduino (discussed in the blog post Real-time spectrum analyzer powered by Arduino) and one for Android too!
How should I start? How should I build audio signals which ultimately can be turned into FFTs and the Arduino can analyse the same using the library and I can actuate anything?
You are asking a very fuzzy question: "How should I build audio signals which ultimately can be turned into FFTs and the Arduino can analyse the same using the library and I can actuate anything?". I am going to help you think through the problem - asking yourself the right questions is essential to get any answers.
Presumably, your audio signals are "coming from somewhere" - i.e. they are sound. This means that you need to convert them into a stream of numbers first.
Problem #1: converting the audio signal into a stream of numbers
This breaks down into three separate sub problems:
Getting the signal to the right amplitude
Choosing the sampling rate needed
Digitizing and storing the data for later processing
Items (1) and (3) are related, since you need to know how you are going to digitize the signal before you can choose the right amplitude. For example, if you have a microphone as your sound input source, you will need to amplify the signal (and maybe add some automatic gain control) before feeding it into an ADC (analog to digital converter) that has a 5 V input range, since the microphone may have an output in the mV range. Without more information about the hardware you are using, there's not a lot to add here. It sounds from your tag that you are trying to do that inside an Android device - in which case I wonder how you intend to move the digital signal to the Arduino (over USB?).
The second point, "choosing the sampling rate", is actually very important. A sound signal contains many different frequencies - think of them as keys on the piano. In order to detect a high frequency, you need to sample the signal "faster than it is changing". There is a formal result called Nyquist's theorem which states that you have to sample at (at least) twice the highest frequency that is present in your signal. Note - it's not just "that you are interested in", but "that is present". If you sample a high-frequency signal with a low-frequency sample clock, it will appear "aliased" - it will show up in your output as something completely different. So before you digitize a signal you have to decide what the frequencies of interest are, and remove all higher frequencies with a filter. Let's say you are interested in frequencies up to 500 Hz (about one octave above middle C on a piano). To give your filter a chance to work, you might choose to cut off all frequencies above 1 kHz (filters "roll off" - i.e. they increase in strength over a range of frequencies), and would sample at 2 kHz. This means you get 2000 samples per second, and you need to figure out where to put them on your Arduino (memory fills up quickly on the little board).
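If the capture happens on the Android side (as your tag suggests), a minimal sketch of grabbing raw samples at a rate that satisfies Nyquist for this example could look like the following. 8000 Hz is used because it is one of the rates most devices actually support (2 kHz generally is not), which puts the Nyquist limit at 4 kHz, comfortably above the 1 kHz cutoff; the RECORD_AUDIO permission is required, and everything else here is illustrative:

```java
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

class MicCapture {
    // Mono 16-bit PCM at 8 kHz: the Nyquist limit is then 4 kHz, well above the
    // 1 kHz cutoff from the example above.
    static short[] captureOneBlock() {
        int sampleRate = 8000;
        int minBuf = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                sampleRate, AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);
        short[] buffer = new short[1024];
        recorder.startRecording();
        recorder.read(buffer, 0, buffer.length); // blocking read of raw samples
        recorder.stop();
        recorder.release();
        return buffer; // hand these to the filtering / FFT stage
    }
}
```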
Problem #2: analyzing the signal
Assuming that you have somehow captured a digital signal, your next task is analyzing it. The FFT is basically some clever math that tells you, for a given sound sample, "what keys on the piano were hit, and how hard". It breaks the sound signal into a series of frequency "bins" and determines how much energy is in each bin (it also computes the phase, but let's keep it simple). So if the input of an FFT algorithm is a sound sample, the output is an array of values telling you what frequencies were present in the signal. This is approximate, since it will find the "nearest bin". Sticking with the same analogy - if you were hitting a piano that's out of tune, the algorithm won't return "out of tune", but rather "a bit of C, and a bit of C sharp", since it cannot actually measure anything in between. The accuracy of an FFT is determined by the sampling frequency (which gives you the upper limit on the frequency you can detect) and the sample length: the longer you "listen" to the sample, the more subtle the differences you can "hear". So you have another trade-off to consider: if your audio signal changes rapidly, you have to sample for a short time (to capture the quick changes); but if you need an accurate frequency, you have to sample for a long time. For example, if you are writing a Morse decoder, your sampling has to be short compared to a pause between "dits" and "dashes" - or they will slur together. Figuring out that a Morse tone is present is pretty easy though, since there will be a single tone (one bin in the FFT) that is much larger than the others.
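To make the bin arithmetic concrete: each bin is sampleRate / fftSize wide, so a hypothetical helper to report the dominant frequency could look like this (names are made up):

```java
// Hypothetical helper: report the frequency of the largest magnitude bin.
// Each bin is sampleRate / fftSize wide, e.g. 2000 Hz / 512 ≈ 3.9 Hz per bin.
static double dominantFrequency(double[] magnitudes, int sampleRate, int fftSize) {
    int peak = 0;
    for (int i = 1; i < magnitudes.length; i++) {
        if (magnitudes[i] > magnitudes[peak]) peak = i;
    }
    return peak * (double) sampleRate / fftSize;
}
```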
Exactly how you implement these things depends on your application. The third step, "doing something with it", requires you to decide what is a meaningful signal. Again, if you are making a Morse decoder, you would perhaps turn an LED ON when a single tone is present (one or two bins in the FFT have much bigger value than the mean of the others), and OFF when it is not (all noise - lots of bins with approximately the same size). But without a LOT more information from you, there's not much more one can say to help you.
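As a sketch of that last decision rule (one dominant bin versus roughly uniform noise), something like the following would do; the threshold is a placeholder you would have to tune:

```java
// Hypothetical decision rule: "tone present" when the biggest bin towers over
// the average bin, "just noise" when all bins are roughly the same size.
static boolean tonePresent(double[] magnitudes, double ratioThreshold) {
    double max = 0, sum = 0;
    for (double m : magnitudes) {
        sum += m;
        if (m > max) max = m;
    }
    double mean = sum / magnitudes.length;
    return mean > 0 && max / mean > ratioThreshold; // a threshold of ~5-10 as a starting guess
}
```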
You might learn a lot from reading the following articles:
http://www.arduinoos.com/2010/10/sound-capture/
http://www.arduinoos.com/2010/10/fast-fourier-transform-fft/
http://interface.khm.de/index.php/lab/experiments/frequency-measurement-library/
The idea is that Phone A sends a sound signal and a Bluetooth signal at the same time, and Phone B calculates the delay between the two signals.
In practice I am getting inconsistent results with delays from 90ms-160ms.
I tried optimizing both ends as much as possible.
On the output end:
Tone is generated once
Bluetooth and audio output each have their own thread
The Bluetooth signal is only sent after AudioTrack.write; AudioTrack is in streaming mode, so playback should start before the write has even completed.
On the receiving end:
Again two separate threads
System time is recorded before each AudioRecord.read
Sampling specs:
44.1 kHz
Reading the entire buffer
Running the FFT on 100 samples at a time
Taking into account how many samples have been transformed since the initial read()
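In code, the bookkeeping from those last two items boils down to something like this (all names here are placeholders):

```java
// Spelled-out timestamp arithmetic for a 44.1 kHz stream; names are placeholders.
static double estimateDelayMs(long readStartedMs,      // system time taken before read()
                              int toneSampleOffset,    // samples transformed before the tone was found
                              long bluetoothArrivalMs) { // timestamp from the Bluetooth thread
    double toneArrivalMs = readStartedMs + 1000.0 * toneSampleOffset / 44100.0;
    return toneArrivalMs - bluetoothArrivalMs;
}
```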
Your method relies on basically zero latency throughout the whole pipeline, which is realistically impossible. You just can't synchronize it with that degree of accuracy. If you could get the delays down to 5-6ms, it might be possible, but you'll beat your head into your keyboard before that happens. Even then, it could only possibly be accurate to 1.5 meters or so.
Consider the lower end of the delays you're receiving. In 90ms, sound can travel slightly over 30m. That's the very end of the marketed bluetooth range, without even considering that you'll likely be in non-ideal transmitting conditions.
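That figure is just speed of sound times delay (taking roughly 343 m/s for room-temperature air):

```java
static double metersFromDelay(double delaySeconds) {
    final double SPEED_OF_SOUND = 343.0;  // m/s in air at roughly 20 °C
    return SPEED_OF_SOUND * delaySeconds; // 0.090 s -> ~30.9 m
}
```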
Here's a thread discussing low latency audio in Android. TL;DR is that it sucks, but is getting better. With the latest APIs and recent devices, you may be able to get it down to 30ms or so, assuming you run some hand-tuned audio functions. No simple AudioTrack here. Even then, that's still a good 10-meter circular error probability.
Edit:
A better approach, assuming you can synchronize the devices' clocks, would be to embed a timestamp into the audio signal, using simple AM/FM modulation or a pulse train. Then you could decode it at the other end and know when it was sent. You still have to deal with the latency problem, but it simplifies the whole thing nicely. There's no need for Bluetooth at all, since it isn't a reliable clock anyway; it has latency problems of its own.
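As a very rough sketch of the pulse-train idea (on-off keying the low 16 bits of a timestamp as 10 ms tone bursts; every parameter here is illustrative and untuned):

```java
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;

class PulseTrainSender {
    // On-off keying: each of the 16 low bits of the timestamp becomes 10 ms of
    // 4 kHz tone (1) or 10 ms of silence (0).
    static void sendTimestamp(int timestamp) {
        int sampleRate = 44100;
        int bitSamples = sampleRate / 100; // 10 ms per bit
        short[] pcm = new short[bitSamples * 16];
        for (int bit = 0; bit < 16; bit++) {
            boolean on = ((timestamp >> (15 - bit)) & 1) == 1;
            for (int i = 0; i < bitSamples; i++) {
                double t = (double) i / sampleRate;
                pcm[bit * bitSamples + i] = on
                        ? (short) (Math.sin(2 * Math.PI * 4000 * t) * Short.MAX_VALUE * 0.8)
                        : (short) 0;
            }
        }
        AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
                pcm.length * 2, AudioTrack.MODE_STATIC);
        track.write(pcm, 0, pcm.length);
        track.play();
    }
}
```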
This gives you a pretty good approach
http://netscale.cse.nd.edu/twiki/pub/Main/Projects/Analyze_the_frequency_and_strength_of_sound_in_Android.pdf
You have to create a 1 kHz sound at some known amplitude (measured in dB) and measure the amplitude of the sound that arrives at the other device. From the attenuation you might be able to estimate the distance.
As I remember it: a0 = 20*log10(4*pi*distance/lambda), where a0 is the attenuation and lambda is known (you can compute it from the 1 kHz).
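Inverting that formula gives the distance estimate; a sketch (treat it as a rough free-space approximation only):

```java
// distance = lambda * 10^(a0/20) / (4*pi), with lambda = 343 m/s / 1000 Hz ≈ 0.343 m.
static double distanceFromAttenuation(double attenuationDb) {
    double lambda = 343.0 / 1000.0; // wavelength of a 1 kHz tone in air
    return lambda * Math.pow(10, attenuationDb / 20.0) / (4 * Math.PI);
}
```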
But in such a sensitive environment, the noise might spoil the whole thing. It's just an idea of how I would do it if I were you.
I have been scratching my head for the past week trying to do this effect on text: http://www.youtube.com/watch?v=gB2PL33DMFs&feature=related
Would be great if someone can give me some tips or guidance or tutorial on how to do this.
Thanks for reading and answering =D
If all you want is to display a movie with video and sound, a MediaPlayer can do that easily.
So I assume that you're actually talking about synchronizing some sort of animated display with a sound file being played separately. We did this using a MediaPlayer and polling getCurrentPosition from within an animation loop. This more or less works, but there are serious problems that need to be overcome. (All this deals with playing mp3 files; we didn't try any other audio formats).
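For illustration, the polling loop can be as simple as the sketch below (names are placeholders; the caveats in the next paragraphs still apply):

```java
import android.media.MediaPlayer;
import android.os.Handler;

class LyricSync {
    interface PositionListener { void onPosition(int positionMs); }

    // player must already be prepared and started; listener drives your animation.
    static void startPolling(final MediaPlayer player, final PositionListener listener) {
        final Handler handler = new Handler();
        handler.post(new Runnable() {
            @Override public void run() {
                listener.onPosition(player.getCurrentPosition());
                handler.postDelayed(this, 16); // roughly once per frame
            }
        });
    }
}
```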
First, your mp3 must be recorded at 44,100 Hz sampling rate. Otherwise the value returned by getCurrentPosition is way off. (We think it's scaled by the ratio of the actual sampling rate to 44,100, but we didn't verify this hypothesis.) A bit rate of 128,000 seems to work best.
Second, and more serious, is that the values returned by getCurrentPosition seem to drift away over time from the sound coming out of the device. After about 45 seconds, this starts to be quite noticeable. What's worse is that this drift is significantly different (but always present) in different OS levels, and perhaps from device to device. (We tested this in 2.1 and 2.2 on both emulators and real devices, and 3.0 on an emulator.) We suspected some sort of buffering problem, but couldn't really diagnose it. Our work-around was to break up longer mp3 files into short segments and chain their playback. Lots of bookkeeping aggravation. This is still under test, but so far it seems to have worked.
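A minimal sketch of that segment-chaining idea, using an OnCompletionListener (the resource ids and structure are illustrative, not our actual code; expect a small gap at each boundary with this naive approach):

```java
import android.content.Context;
import android.media.MediaPlayer;

class SegmentChainer {
    // Plays the given raw-resource segments back to back.
    static void playSegments(final Context context, final int[] segmentIds, final int index) {
        if (index >= segmentIds.length) return;
        MediaPlayer player = MediaPlayer.create(context, segmentIds[index]);
        player.setOnCompletionListener(new MediaPlayer.OnCompletionListener() {
            @Override public void onCompletion(MediaPlayer mp) {
                mp.release();
                playSegments(context, segmentIds, index + 1); // chain to the next segment
            }
        });
        player.start();
    }
}
```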
Ted Hopp: time drifting on MP3 files is likely caused by those MP3 files being VBR. I've been developing karaoke apps for a while, and pretty much every toolkit - from Qt Phonon to ffmpeg - had problems reporting the correct audio position for variable bitrate MP3 files. I assume this is because they all try to calculate the current audio position from the number of decoded frames, which makes it unreliable for VBR MP3s. I described it in a user-friendly way in the Karaoke Lyrics Editor FAQ.
Unfortunately, one solution I found was to recode the MP3s to CBR. Another was to ditch the reported position completely and rely only on the system clock. That actually produced a better result for VBR MP3s, but still not as good as recoding them to CBR.
Hello, I was wondering: using the Android ToneGenerator class, would it be possible to create a tone on one device and listen for that same tone on another device? If this is possible, I have a few other questions.
Taking background noise into consideration, is it possible to listen for only this specific tone?
Would this process be resource intensive?
Could I use a tone that would be inaudible to the human ear, or close to it?
Lastly, could I use a tone that could only be heard within a couple of feet of the sending device?
Thanks very much for yer time guys and girls :)
Edit:
Thanks for adding the audio-processing tag, sabastian. Much better description.
It would be CPU intensive, yes.
The way to do it is quite simple: you need a permanent recorder which feeds the received data into an FFT (fast Fourier transform). The FFT basically does one thing: it splits the audio into a frequency/power spectrum. With that result (which lets you separate a tone from broadband background noise) you can check things like "was there a tone at 1000 Hz playing for at least 2 seconds" - and act accordingly.
There is a reasonable speed FFT implementation here: http://www.badlogicgames.com/wordpress/?p=449
The FFT can also be used (actually, IS used) for detecting dual-tone dialing (DTMF) - two frequencies at the same time are much more robust than just one, as the error rate drops significantly and you can use shorter tone durations for sending and detecting.
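As an illustration of the "1000 Hz for at least 2 seconds" check (a second frequency could be tested the same way for a DTMF-style pair); the thresholds and names here are made up:

```java
// Illustrative only: "has a ~1 kHz tone been present for at least requiredMs?"
class ToneWatcher {
    private long toneSinceMs = -1;

    boolean toneHeldFor(double[] magnitudes, int sampleRate, int fftSize,
                        double ratioThreshold, long nowMs, long requiredMs) {
        int bin = Math.round(1000f * fftSize / sampleRate); // bin closest to 1 kHz
        if (bin >= magnitudes.length) return false;
        double sum = 0;
        for (double m : magnitudes) sum += m;
        double mean = sum / magnitudes.length;
        boolean present = mean > 0 && magnitudes[bin] / mean > ratioThreshold;
        if (!present) { toneSinceMs = -1; return false; }
        if (toneSinceMs < 0) toneSinceMs = nowMs;
        return nowMs - toneSinceMs >= requiredMs;
    }
}
```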
"Inaudible" won't be possible, as (a) the speaker can not produce such sounds (b) you are limited in sampling rate - so also limited in both producing and recording such high frequencies.
"couple of feet" will be naturally imposed (not very loud speaker, not very good microphone).
Have a look at this other question: "Android: Need to record mic input". I think you can modify that for your task; once you have the raw sound bytes you can apply filtering or an FFT.
Hope it helps
I'm trying to build a gadget that detects pistol shots using Android. It's a part of a training aid for pistol shooters that tells how the shots are distributed in time and I use a HTC Tattoo for testing.
I use the MediaRecorder and its getMaxAmplitude method to get the highest amplitude during the last 1/100 s, but it does not work as expected; speech gives me values from getMaxAmplitude in the range from 0 to about 25000, while the pistol shots (or shouting!) only reach about 15000. With a sampling frequency of 8 kHz there should be some samples with a considerably higher level.
Does anyone know how these things work? Are there filters that are applied before registering the max amplitude? If so, are they in hardware or software?
Thanks,
/George
It seems there's an AGC (Automatic Gain Control) filter in place. You should also be able to identify the shot by its frequency characteristics. I would expect it to show up across most of the audible spectrum, but get a spectrum analyzer (there are a few on the app market, like SpectralView) and try identifying the event by its frequency "signature" and amplitude. If you clap your hands, what do you get for max amplitude? You could also try covering the phone with something to muffle the sound, like a few layers of cloth.
It seems like the AGC is in the MediaRecorder. When I use AudioRecord I can detect shots using the amplitude, even though it sometimes reacts to sounds other than shots. This is not a problem, since the shooter usually doesn't make any other noise while shooting.
But I will do some FFT too to get it perfect :-)
Sounds like you figured out your AGC problem. One further suggestion: I'm not sure the FFT is the right tool for the job. You might get better detection and lower CPU use with a sliding power estimator.
e.g.
signal => square => moving average => peak detection
All of the above can be implemented very efficiently using fixed point math, which fits well with mobile android platforms.
You can find more info by searching for "Parseval's Theorem" and "CIC filter" (cascaded integrator comb)
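A sketch of that chain in plain Java (the fixed-point version is left out for clarity; window size and threshold are made up and need tuning against real recordings):

```java
// signal => square => moving average => threshold/peak detection
class PowerDetector {
    private final double[] window; // ring buffer of squared samples
    private int pos;
    private double sum;

    PowerDetector(int windowSize) { window = new double[windowSize]; }

    // Feed one 16-bit PCM sample at a time; returns true while average power is high.
    boolean onSample(short sample, double threshold) {
        double squared = (double) sample * sample; // instantaneous power
        sum += squared - window[pos];              // update running sum (moving average)
        window[pos] = squared;
        pos = (pos + 1) % window.length;
        return (sum / window.length) > threshold;  // crude spike detection
    }
}
```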
Sorry for the late response; I didn't see this question until I started searching for a different problem...
I have started an application to do what I think you're attempting. It's an audio-based lap timer (a button to start/stop recording, and loud audio noises for lap marking). It's not finished, but it might provide you with a decent base to get started.
Right now, it allows you to monitor the signal volume coming from the mic, and set the ambient noise amount. It's also using the new BSD license, so feel free to check out the code here: http://code.google.com/p/audio-timer/. It's set up to use the 1.5 API to include as many devices as possible.
It's not finished, in that it has two main issues:
The audio capture doesn't currently work for emulated devices because of the unsupported frequency requested
The timer functionality doesn't work yet - I was focusing on getting the audio capture working first.
I'm looking into the frequency support, but Android doesn't seem to have a way to find out which frequencies are supported without trial and error per-device.
I also have on my local dev machine some extra code to create a layout for the listview items to display "lap" information. Got sidetracked by the frequency problem though. But since the display and audio capture are pretty much done, using the system time to fill in the display values for timing information should be relatively straightforward, and then it shouldn't be too difficult to add the ability to export the data table to a CSV on the SD card.
Let me know if you want to join this project, or if you have any questions.