I am using some code I found to record mic audio (Android AudioRecord example). Instead of writing the data to a PCM file, I simply write it to a ByteArrayOutputStream variable.
When I finish recording, I have a byte array. How should I analyze that byte array using the FFT? Simply put, I am trying to analyze an FSK sound wave.
I think I understand your question now: you want to know how to decode an FSK signal on Android, and you want to use the FFT to do it?
You can achieve this without the FFT.
Android Demo App
That is the source code of an Android app that implements FSK modulation/demodulation.
The FSKModule.java class implements a decoder that uses 3150 Hz / 1575 Hz as the mark/space frequencies. The important methods are findStartBit, parseBits, processSound and countPeaks. The implementation simply counts the number of peaks within a time slice. From the number of peaks you can infer the corresponding frequency and decode the signal. This is easier and faster than using the FFT.
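To illustrate the idea only (this is not the actual FSKModule code), here is a rough sketch of peak counting on 16-bit mono PCM. The sample rate, bit duration, and amplitude threshold are assumed values for the sketch; only the 3150 Hz / 1575 Hz mark/space pair comes from the app above.

```java
// Rough sketch of FSK demodulation by peak counting (not the real FSKModule code).
// Assumes 16-bit mono PCM at 44100 Hz; bit duration and threshold are made-up values.
public class PeakCountingSketch {

    static final int SAMPLE_RATE = 44100;
    static final int BIT_SAMPLES = SAMPLE_RATE / 315;   // assumed samples per bit

    // Count positive peaks (local maxima above a noise threshold) in one bit slice.
    static int countPeaks(short[] samples, int offset, int length, int threshold) {
        int peaks = 0;
        for (int i = offset + 1; i < offset + length - 1; i++) {
            if (samples[i] > threshold
                    && samples[i] >= samples[i - 1]
                    && samples[i] > samples[i + 1]) {
                peaks++;
            }
        }
        return peaks;
    }

    // Decide mark (3150 Hz) or space (1575 Hz) from the peak count of one bit slice.
    // The mark tone produces roughly twice as many peaks per slice as the space tone,
    // so a single midpoint threshold separates them.
    static boolean isMark(short[] samples, int offset) {
        int peaks = countPeaks(samples, offset, BIT_SAMPLES, 2000);
        int peaksAtMark = 3150 * BIT_SAMPLES / SAMPLE_RATE;
        int peaksAtSpace = 1575 * BIT_SAMPLES / SAMPLE_RATE;
        return peaks > (peaksAtMark + peaksAtSpace) / 2;
    }
}
```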
Why not use the FFT?
There is a nice blog post that describes when (not) to use the FFT (e.g. "Why EQ Is Done in the Time Domain" and other interesting things).
Related
I need to record 10 seconds of audio and then perform a convolution with another signal. I need the audio at a sampling rate of 512 Hz. Since my phone doesn't support that rate (I doubt any phone does), I need to record at a higher rate and then downsample to 512 Hz. For recording I use AudioRecord, and the only rate that is guaranteed to work is 44100 Hz. Every library or code sample I have found performs the downsampling by reading from and writing to a file. I need this to be very fast, running a couple of times per second (at least twice). Is there any way to downsample raw PCM data held in a byte array quickly?
Any reason you are aiming for a sample rate of 512 Hz? Seems a bit low to be useful!
I am not sure what language you are using, but I use libsoxr from C++. A quick Google search turns up libresample4j for Java. Both of these will let you do the resampling in real time, without having to save a file first.
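If you end up rolling something minimal instead of a library, a naive in-memory sketch might look like the following. Note that this does plain linear interpolation with no anti-aliasing low-pass filter, which a real resampler such as libsoxr or libresample4j would apply first, so treat it as a starting point only.

```java
// Naive in-memory downsampler sketch: 16-bit little-endian mono PCM in a byte array,
// resampled by linear interpolation (e.g. 44100 Hz -> 512 Hz).
// A proper resampler low-pass filters first to avoid aliasing; this sketch skips that.
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class DownsampleSketch {

    public static short[] downsample(byte[] pcm, int srcRate, int dstRate) {
        // Convert the raw little-endian bytes to 16-bit samples.
        ByteBuffer bb = ByteBuffer.wrap(pcm).order(ByteOrder.LITTLE_ENDIAN);
        short[] src = new short[pcm.length / 2];
        for (int i = 0; i < src.length; i++) {
            src[i] = bb.getShort();
        }

        double ratio = (double) srcRate / dstRate;        // e.g. 44100 / 512
        short[] dst = new short[(int) (src.length / ratio)];
        for (int i = 0; i < dst.length; i++) {
            double pos = i * ratio;
            int idx = (int) pos;
            double frac = pos - idx;
            int next = Math.min(idx + 1, src.length - 1);
            dst[i] = (short) ((1.0 - frac) * src[idx] + frac * src[next]);
        }
        return dst;
    }
}
```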
I'm trying to get PCM data with the AudioRecord class. The audio source is the headset input, which is connected to a device that sends a waveform to my app (I hope you can understand what I mean). The picture comparing the devices' waveforms is at http://i.stack.imgur.com/5iZnY.png.
As you can see in the picture, with wave 1 and wave 2 I can get the right result, because I can work out the points of one cycle. But on a Sony XL36h the wave I receive is not close to the real wave, even though the device actually sends a signal close to wave 1.
My question is: what causes this phenomenon, and how can I get a wave close to wave 1? I think Sony may be "optimizing" the low-level audio path; if so, should I use the NDK to avoid that?
Should I use the NDK to avoid that?
No, you will get the same results with the NDK.
AudioRecord already provides access to the raw PCM data. The difference between the devices occurs because they use different audio modules. The modules have different hardware features (low-pass filters, sensitivity) and you cannot disable them through software. The reason is that these features reduce noise.
I want to develop an Android app that takes in the audio signal and generates the time- and frequency-domain values.
I have the audio in a buffer obtained from the Android mic. I want to use these buffer values to generate a frequency-domain graph. Is there some code that would help me compute the FFT of an audio signal?
I have looked at Moonblink's Audalyzer code, but there are components missing from it; I could find lots of missing modules. I want better logic that takes in audio and performs some calculations on it.
I found these for you using DuckDuckGo:
Android audio FFT to retrieve specific frequency magnitude using audiorecord
http://www.digiphd.com/android-java-reconstruction-fast-fourier-transform-real-signal-libgdx-fft/
These should help.
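As a starting point, computing a magnitude spectrum from a block of 16-bit PCM samples might look roughly like this. It assumes the JTransforms library (mentioned in another answer here); the package name depends on the JTransforms version you pull in.

```java
// Sketch: magnitude spectrum of a block of 16-bit PCM samples using JTransforms.
// Assumes the org.jtransforms.fft package (newer releases); older releases use
// edu.emory.mathcs.jtransforms.fft instead.
import org.jtransforms.fft.DoubleFFT_1D;

public class SpectrumSketch {

    // Returns magnitudes for bins 0 .. n/2-1; bin k corresponds to
    // frequency k * sampleRate / n.
    public static double[] magnitudes(short[] samples) {
        int n = samples.length;                  // ideally a power of two
        double[] fft = new double[n];
        for (int i = 0; i < n; i++) {
            fft[i] = samples[i] / 32768.0;       // normalize to [-1, 1)
        }

        new DoubleFFT_1D(n).realForward(fft);    // in-place, packed real FFT

        double[] mag = new double[n / 2];
        mag[0] = Math.abs(fft[0]);               // DC component
        for (int k = 1; k < n / 2; k++) {
            double re = fft[2 * k];
            double im = fft[2 * k + 1];
            mag[k] = Math.sqrt(re * re + im * im);
        }
        return mag;
    }
}
```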
I want to develop a bass effect for a media player. Can anybody help me with this?
The first step is to isolate the low-frequency components. You will need an FFT to do this... there's one linked from this older S/O question.
After you've got the frequency representation of your audio data, alter the magnitudes of the low-frequency components as you see fit, run the inverse transform to convert it back to audio samples, and pass it to the audio hardware for playback.
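A rough sketch of that pipeline, again using JTransforms (not part of the original answer), could look like this. The 250 Hz cutoff and 2.0 gain are arbitrary illustration values.

```java
// Sketch of the FFT -> boost low bins -> inverse FFT pipeline described above.
// Uses JTransforms (org.jtransforms.fft in newer releases). The 250 Hz cutoff
// and 2.0 gain are arbitrary values for illustration.
import org.jtransforms.fft.DoubleFFT_1D;

public class BassBoostSketch {

    // block: one block of samples normalized to [-1, 1), processed in place.
    public static void boostBass(double[] block, int sampleRate) {
        int n = block.length;
        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.realForward(block);                    // in-place packed real FFT

        int cutoffBin = 250 * n / sampleRate;      // bins below ~250 Hz
        double gain = 2.0;
        for (int k = 1; k < cutoffBin && k < n / 2; k++) {
            block[2 * k] *= gain;                  // real part of bin k
            block[2 * k + 1] *= gain;              // imaginary part of bin k
        }

        fft.realInverse(block, true);              // back to time domain, scaled by 1/n
    }
}
```

In practice you would process the audio block by block (with overlap to avoid clicks at block boundaries) rather than transforming the whole track at once.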
Is there a way to filter audio on Android?
I am interested in getting only the audio at a fixed frequency.
If you get your audio via the AudioRecord class, then you'll have raw audio data that you could filter using whatever audio filter algorithm you have. Or are you looking for an audio filtering library?
As Dave says, the AudioRecord class is the easiest way. Often I will extend an AsyncTask that uses AudioRecord to read the audio buffer and push the buffer back to the UI thread for visualization (if needed).
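For reference, a minimal AudioRecord read loop looks roughly like this. The sample rate and buffer handling are typical choices, and error handling plus the RECORD_AUDIO permission check are omitted for brevity.

```java
// Minimal AudioRecord read-loop sketch: 44100 Hz mono, 16-bit PCM.
// Requires the RECORD_AUDIO permission; error handling is omitted.
import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

public class RecorderSketch {

    public void record() {
        int sampleRate = 44100;
        int bufferSize = AudioRecord.getMinBufferSize(
                sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT);

        AudioRecord recorder = new AudioRecord(
                MediaRecorder.AudioSource.MIC,
                sampleRate,
                AudioFormat.CHANNEL_IN_MONO,
                AudioFormat.ENCODING_PCM_16BIT,
                bufferSize);

        short[] buffer = new short[bufferSize / 2];
        recorder.startRecording();
        for (int i = 0; i < 100; i++) {            // read a fixed number of blocks for the sketch
            int read = recorder.read(buffer, 0, buffer.length);
            // hand the first `read` samples to your filtering / visualization code here
        }
        recorder.stop();
        recorder.release();
    }
}
```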
Regarding the filtering of the audio, Android does not provide any built-in way of doing this. The way you would filter out certain frequencies is by taking the Fourier Transform of the sampled audio and then pulling out the frequencies of interest.
If you want more details on filtering, you should probably post a new SO question or read up on the Fast Fourier Transform (FFT). You might also look into JTransforms, which is a great open-source FFT library that runs wonderfully on Android.
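One way to sketch the "pull out the frequencies of interest" step with JTransforms is to zero every bin outside a band around the target frequency and transform back. This is only an illustration of the idea; the target frequency and band half-width passed in would be your own choices (e.g. keepBand(block, 44100, 1000, 50)).

```java
// Sketch: keep only bins near a target frequency, zero the rest, and transform
// back to the time domain. Uses JTransforms (org.jtransforms.fft in newer releases).
import org.jtransforms.fft.DoubleFFT_1D;

public class BandExtractSketch {

    // block: samples normalized to [-1, 1), processed in place.
    public static void keepBand(double[] block, int sampleRate,
                                double targetHz, double halfWidthHz) {
        int n = block.length;
        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.realForward(block);                    // in-place packed real FFT

        int lo = (int) ((targetHz - halfWidthHz) * n / sampleRate);
        int hi = (int) ((targetHz + halfWidthHz) * n / sampleRate);

        block[0] = 0;                              // drop DC
        block[1] = 0;                              // drop Nyquist (packed here for even n)
        for (int k = 1; k < n / 2; k++) {
            if (k < lo || k > hi) {
                block[2 * k] = 0;                  // real part
                block[2 * k + 1] = 0;              // imaginary part
            }
        }
        fft.realInverse(block, true);              // back to the time domain
    }
}
```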
Please do NOT use the FFT for this. That's amateur hour. (The FFT is a great tool with many great use cases, but this is not one of them.) You most likely want to solve this problem using time-domain filters. Here's a post to get you started writing your own filters in the time domain, as easy as can be. However, a lot of people stumble on that since it's a bit specialized, so if you want to use something built into Android, start with the Equalizer audio effect.
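As a concrete time-domain starting point (not taken from the post linked above), a standard RBJ "Audio EQ Cookbook" biquad band-pass will isolate a narrow band around a chosen frequency; the center frequency and Q you pass in are up to you, e.g. new BiquadBandPass(44100, 1000, 5).

```java
// Time-domain band-pass sketch: a standard RBJ "Audio EQ Cookbook" biquad.
// Center frequency and Q are whatever the caller chooses; samples are doubles in [-1, 1).
public class BiquadBandPass {

    private final double b0, b1, b2, a1, a2;       // normalized coefficients
    private double x1, x2, y1, y2;                 // filter state (previous inputs/outputs)

    public BiquadBandPass(double sampleRate, double centerHz, double q) {
        double w0 = 2 * Math.PI * centerHz / sampleRate;
        double alpha = Math.sin(w0) / (2 * q);
        double a0 = 1 + alpha;
        b0 = alpha / a0;
        b1 = 0;
        b2 = -alpha / a0;
        a1 = -2 * Math.cos(w0) / a0;
        a2 = (1 - alpha) / a0;
    }

    // Filter one block of samples in place, carrying state across blocks.
    public void process(double[] samples) {
        for (int i = 0; i < samples.length; i++) {
            double x0 = samples[i];
            double y0 = b0 * x0 + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
            x2 = x1; x1 = x0;
            y2 = y1; y1 = y0;
            samples[i] = y0;
        }
    }
}
```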