I am trying to extract the SMPTE timecode (Wikipedia) from an audio input stream in Android.
As mentioned here (https://stackoverflow.com/a/2099226), the first step is to scan the input stream's byte sequence for 0011111111111101 to synchronize. But how do I do this with the AudioRecord class?
That answer isn't really correct. The audio signal you are getting is a modulated carrier wave; the raw data you get through the mike or audio-in won't correspond to SMPTE timecode directly. Extracting the SMPTE bits from it is a multi-step decoding process, and it is not at all simple.
The first step is to convert your audio signal from biphase mark code. I haven't implemented a SMPTE reader myself, but you know the clock rate from the SMPTE standard, so the first thing I would do is filter carefully to get rid of background noise, since it sounds like you are taking the audio in from the mike. A gentle high-pass to remove any DC offset and a gentle low-pass for HF noise should do. (You could instead use a single broad band-pass.)
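For instance, a first-order high-pass plus low-pass pair is only a few lines. Here is a minimal sketch in Java; the cutoff frequencies are assumptions you would tune to your LTC frame rate:

    // Sketch: one-pole filters to clean up the mike signal before decoding.
    // hpCut removes DC/rumble, lpCut removes HF noise; both are illustrative.
    static float[] cleanUp(float[] in, float sampleRate) {
        float hpCut = 40f, lpCut = 5000f;
        float dt = 1f / sampleRate;
        float aHp = 1f / (1f + 2f * (float) Math.PI * hpCut * dt);
        float aLp = (2f * (float) Math.PI * lpCut * dt)
                / (1f + 2f * (float) Math.PI * lpCut * dt);
        float[] out = new float[in.length];
        float prevIn = 0f, hp = 0f, lp = 0f;
        for (int i = 0; i < in.length; i++) {
            hp = aHp * (hp + in[i] - prevIn);   // high-pass: removes DC offset
            prevIn = in[i];
            lp += aLp * (hp - lp);              // low-pass: removes HF noise
            out[i] = lp;
        }
        return out;
    }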
Then, you need to find the start of each clock cycle. You could do something fancy like an autocorrelation or PLL algorithm, but I suspect that knowing the approximate clock rate from the SMPTE standard and being able to adjust a few percent up and down is good enough - maybe better. So just look for repeating transitions according to the spec. Doing something fancy will help if you suspect your timecode is highly warped, which might be the case if you have a really old tape deck or you want to sync at very high or low speeds, but LTC isn't really designed for that; it's more VITC's domain.
Once you've identified the clock, you need to determine, for each bit period, where the transitions fall. In biphase mark code every bit period begins with a transition; an additional transition in the middle of the period indicates a 1 bit, and no mid-period transition indicates a 0. That's how BMC transmits both clock and data in a single stream, and it lets you turn the transitions into a stream of your actual SMPTE bits.
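A minimal sketch of that idea in Java, assuming the samples have already been filtered and that bitPeriod (samples per bit) was estimated beforehand from the expected clock rate:

    // Sketch: measure the spacing between zero crossings. A roughly full-period
    // gap is a 0 bit; two half-period gaps in a row make a 1 bit.
    static String decodeBmc(float[] samples, float bitPeriod) {
        StringBuilder bits = new StringBuilder();
        int lastCrossing = 0;
        boolean halfPending = false;
        for (int i = 1; i < samples.length; i++) {
            if ((samples[i - 1] < 0) != (samples[i] < 0)) { // sign change = transition
                int interval = i - lastCrossing;
                lastCrossing = i;
                if (interval > 0.75f * bitPeriod) {         // ~full period -> 0
                    bits.append('0');
                    halfPending = false;
                } else if (halfPending) {                   // second half period -> 1
                    bits.append('1');
                    halfPending = false;
                } else {
                    halfPending = true;
                }
            }
        }
        return bits.toString();
    }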
Now you've decoded the BMC into a stream of SMPTE bits. The next step is to look for the sync word. Looking at the spec on Wikipedia and from what I remember of SMPTE, it is not enough to find a single sync word, which may occur by coincidence elsewhere in the 80-bit frame; instead, you must find several in a row at the right interval. Then you can read your data into 80-bit SMPTE frames, and, as you read, you must continue to verify the sync words. If you don't see one where you expect it, restart the search from scratch.
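A sketch of that search over the decoded bit string, only trusting a sync word when another one follows exactly 80 bits later:

    // Sketch: find the LTC sync word and verify it repeats at the frame length.
    static final String SYNC = "0011111111111101";
    static int findFrameStart(String bits) {
        int pos = bits.indexOf(SYNC);
        while (pos >= 0) {
            int next = pos + 80;                       // one 80-bit frame later
            if (bits.regionMatches(next, SYNC, 0, 16)) {
                return pos;                            // two syncs in a row: trust it
            }
            pos = bits.indexOf(SYNC, pos + 1);         // coincidence: keep looking
        }
        return -1;
    }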
Finally, once you've decoded, you'll have to come up with some way to "flywheel", because you will almost certainly not read all the data correctly all the time (there are no checksums!). That is the nature of the beast.
I am using noise meter to read noise in decibels. When I run the app it records almost 120 readings per second. I don't want that many readings. Is there any way to specify that I want only one or two readings per second? Thanks in advance. (noise_meter package.)
I am using code from GitHub which is already written using noise_meter (github repo, noise_meter example).
I tried to calculate the number of samples using the sample rate, which is 40100 in the package, but I can't understand it.
As you can see in the source code, audio streamer uses a fixed audio sample rate of 41000 and a fixed-size buffer, and includes this comment: "Uses a buffer array of size 512. Whenever buffer is full, the content is sent to Flutter." So small audio blocks will arrive at the consumer frequently (as you might expect from a streamer). It doesn't seem possible to adjust this.
The noise meter package simply takes each block of audio and calculates the noise level, so readings arrive at exactly the same rate as the audio blocks from the underlying package.
Given the simplicity of the noise meter calculation, you could replace it with your own code sitting directly on top of audio streamer. You just need to collect multiple blocks of audio together before performing the simple decibel calculation.
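For illustration, the aggregation could look something like this (sketched in Java, though the packages themselves are Dart; the sample rate and the RMS-based decibel formula here are assumptions to adapt to your setup):

    // Sketch: buffer incoming audio blocks and emit one reading per second.
    static final int SAMPLE_RATE = 44100;      // assumed rate of the stream
    static double sumOfSquares = 0;
    static int samplesSeen = 0;
    static void onAudioBlock(double[] block) {
        for (double s : block) sumOfSquares += s * s;
        samplesSeen += block.length;
        if (samplesSeen >= SAMPLE_RATE) {      // one second collected
            double rms = Math.sqrt(sumOfSquares / samplesSeen);
            System.out.println("level: " + 20 * Math.log10(rms) + " dB");
            sumOfSquares = 0;
            samplesSeen = 0;
        }
    }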
Alternatively, you could simply discard N out of every N+1 readings.
I'm making an application that records audio directly from the cell phone's microphone. I save that recording and need to compare it with audio already stored on the device.
The recordings are of engine "noise"; the idea is that the app tells you which of the saved cases the new recording resembles.
That is, I have two cases, a good engine and a damaged engine; when I finish recording, the app should say "this audio belongs to a damaged engine".
Reading around, I find that this can be done with artificial intelligence, which is really complex. I have also read that you can "decompose" the audio into a vector of numbers, or make comparisons by FFT; however, I can't find much information about it, so I'd really appreciate your help.
The saved file type is .wav.
Comparing audio signals is a nontrivial task.
Audio is just a sequence of values (numbers), where the index represents time and the value represents the amplitude (loudness) of the sound.
If you compare two pieces of audio data like arrays, element by element, iterating through the index, you would need luck to get anything reasonable. Instead you need some transformation of the array that gives aggregated information about the sequence of numbers as a whole (for example, the spectrum of the signal).
There are mathematical tools for this task, for example the well-known Fourier transform you mentioned, and the statistical tool of correlation (which measures the similarity of sequences of numbers).
The correlation method is relatively simple: you just iterate over the two arrays of data and calculate the correlation between them. But you pay for the simplicity with demands on the initial quality (or preparation/normalization) of the signals - they should have similar duration and be aligned. The resulting correlation value shows how much the two sequences differ: 0 means absolutely different and 1 means almost the same.
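A minimal sketch of that calculation (the normalized correlation coefficient), assuming the two signals are already time-aligned and of equal length - that is the "preparation" mentioned above:

    // Sketch: Pearson correlation of two equal-length signals.
    // Returns ~1 for near-identical shapes and ~0 for unrelated ones.
    static double correlation(double[] a, double[] b) {
        double meanA = 0, meanB = 0;
        for (int i = 0; i < a.length; i++) { meanA += a[i]; meanB += b[i]; }
        meanA /= a.length;
        meanB /= b.length;
        double num = 0, denA = 0, denB = 0;
        for (int i = 0; i < a.length; i++) {
            num  += (a[i] - meanA) * (b[i] - meanB);
            denA += (a[i] - meanA) * (a[i] - meanA);
            denB += (b[i] - meanB) * (b[i] - meanB);
        }
        return num / Math.sqrt(denA * denB);
    }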
Implementing the Fourier transform (FFT) is not a problem either; you can take a well-described algorithm and implement it yourself in any language without third-party libraries. It does the job very well.
The FT gives you the spectrum of the signal, i.e. another set of values: amplitudes per frequency (roughly, with frequency as the array index instead of time, as in the raw input signal). Now you can compare two such spectra almost like two arrays, iterating through the index (frequency), and then decide on their similarity: calculate the deltas and check whether they fall within some acceptance interval (or use more rigorous statistical measures, e.g. a correlation coefficient).
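For example, a crude per-bin comparison might look like this; the spectra are assumed to be magnitude arrays already computed by an FFT and normalized, and the tolerance is an arbitrary assumption:

    // Sketch: accept two spectra as similar if the mean per-bin delta is small.
    static boolean similarSpectra(double[] specA, double[] specB, double tolerance) {
        double delta = 0;
        for (int k = 0; k < specA.length; k++) {
            delta += Math.abs(specA[k] - specB[k]);
        }
        return delta / specA.length < tolerance;
    }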
As for noisy signals, the noise is usually subtracted from the data set first (but for that you need to know what kind of noise it is).
This all belongs to the signal processing field, and if you're working on such a project you will need to learn more about the area.
Bonus: a book for example
I'd like to analyze a piece of a recorded sound sample and find its properties, like pitch and so on.
I have tried to analyze the recorded bytes of the buffer with no success.
How can it be done?
You will have to look into the FFT (fast Fourier transform).
Then do something like this (sketched here with the JTransforms library, but any FFT implementation will do):

    // Copy your 1024 signal samples into buf, then run an in-place real FFT.
    double[] buf = new double[1024];
    new DoubleFFT_1D(1024).realForward(buf);
    // For every bin compute the power re^2 + im^2 (sqrt gives the magnitude);
    // to find the strongest frequency, scan the first 512 bins for the maximum.
    int maxBin = 0; double maxPower = 0;
    for (int k = 1; k < 512; k++) {
        double re = buf[2 * k], im = buf[2 * k + 1];
        double power = re * re + im * im;
        if (power > maxPower) { maxPower = power; maxBin = k; }
    }
    // The dominant frequency is approximately maxBin * sampleRate / 1024.0.
Also check out the original post here, because this question is probably a duplicate.
Use a fast Fourier transform; libraries are available for most languages. But raw bytes are no good on their own - they could be MP3-encoded or WAV/PCM - so you need to decode first, then analyze.
I really struggle with FFT, and now I need to communicate from the headphone jack of my Android to an Arduino. There is currently a library for the Arduino (discussed in the blog post Real-time spectrum analyzer powered by Arduino) and one for Android too!
How should I start? How do I build audio signals that can be turned into FFTs which the Arduino can analyze using the library, so that I can actuate things?
You are asking a very fuzzy question: "How do I build audio signals that can be turned into FFTs which the Arduino can analyze, so that I can actuate things?" I am going to help you think through the problem - asking yourself the right questions is essential to getting any answers.
Presumably, your audio signals are "coming from somewhere" - i.e. they are sound. This means that you need to convert them into a stream of numbers first.
Problem #1: converting the audio signal into a stream of numbers
This breaks down into three separate subproblems:
1. Getting the signal to the right amplitude
2. Choosing the sampling rate needed
3. Digitizing and storing the data for later processing
Items (1) and (3) are related, since you need to know how you are going to digitize the signal before you can choose the right amplitude. For example, if you have a microphone as your sound input source, you will need to amplify the signal (and maybe add some automatic gain control) before feeding it into an ADC (analog to digital converter) that has a 5 V input range, since the microphone may have an output in the mV range. Without more information about the hardware you are using, there's not a lot to add here. It sounds from your tag that you are trying to do that inside an Android device - in which case I wonder how you intend to move the digital signal to the Arduino (over USB?).
The second point, "choosing the sampling rate", is actually very important. A sound signal contains many different frequencies - think of them as keys on a piano. In order to detect a high frequency, you need to sample the signal faster than it is changing. There is a formal result called Nyquist's theorem which states that you have to sample at 2x the highest frequency that is present in your signal. Note - it's not just the highest frequency "that you are interested in", but the highest "that is present". If you sample a high-frequency signal with a low-frequency sample clock, it will appear "aliased" - it will show up in your output as something completely different.

So before you digitize a signal, you have to decide what the frequencies of interest are and remove all higher frequencies with a filter. Let's say you are interested in frequencies up to 500 Hz (about one octave above middle C on a piano). To give your filter a chance to work, you might choose to cut off all frequencies above 1 kHz (filters "roll off", i.e. they increase in strength over a range of frequencies), and would sample at 2 kHz. This means you get 2000 samples per second, and you need to figure out where to put them on your Arduino (memory fills up quickly on the little board).
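To see aliasing concretely, the apparent frequency of an undersampled tone folds back below half the sample rate; a toy calculation (illustration only, not production code):

    // Toy illustration: a 1500 Hz tone sampled at 2000 Hz shows up at 500 Hz.
    static double aliasedFrequency(double f, double fs) {
        double folded = f % fs;                 // wrap into [0, fs)
        return folded <= fs / 2 ? folded : fs - folded;
    }
    // aliasedFrequency(1500, 2000) == 500.0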
Problem #2: analyzing the signal
Assuming that you have somehow captured a digital signal, your next task is analyzing it. The FFT is basically some clever math that tells you, for a given sound sample, "what keys on the piano were hit, and how hard". It breaks the sound signal into a series of frequency "bins" and determines how much energy is in each bin (it also computes the phase, but let's keep it simple). So if the input of an FFT algorithm is a sound sample, the output is an array of values telling you what frequencies were present in the signal. This is approximate, since it will find the "nearest bin". Sticking with the same analogy: if you were hitting a piano that's out of tune, the algorithm won't return "out of tune", but rather "a bit of C, and a bit of C sharp", since it cannot actually measure anything in between.

The accuracy of an FFT is determined by the sampling frequency (which gives you the upper limit on the frequency you can detect) and the sample length: the longer you "listen" to the sample, the more subtle the differences you can "hear". So you have another trade-off to consider: if your audio signal changes rapidly, you have to sample for a short time (to capture the quick changes); but if you need an accurate frequency, you have to sample for a long time. For example, if you are writing a Morse decoder, your sampling has to be short compared to a pause between "dits" and "dahs", or they will slur together. Figuring out that a Morse tone is present is pretty easy, though, since there will be a single tone (one bin in the FFT) that is much larger than the others.
Exactly how you implement these things depends on your application. The third step, "doing something with it", requires you to decide what is a meaningful signal. Again, if you are making a Morse decoder, you would perhaps turn an LED ON when a single tone is present (one or two bins in the FFT have much bigger value than the mean of the others), and OFF when it is not (all noise - lots of bins with approximately the same size). But without a LOT more information from you, there's not much more one can say to help you.
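A sketch of that "single tone present" test; the 4x threshold is an arbitrary assumption you would tune against your noise floor:

    // Sketch: a tone is "present" if one FFT bin towers over the average.
    static boolean tonePresent(double[] magnitudes) {
        double max = 0, sum = 0;
        for (double m : magnitudes) { sum += m; if (m > max) max = m; }
        return max > 4 * (sum / magnitudes.length);
    }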
You might learn a lot from reading the following articles:
http://www.arduinoos.com/2010/10/sound-capture/
http://www.arduinoos.com/2010/10/fast-fourier-transform-fft/
http://interface.khm.de/index.php/lab/experiments/frequency-measurement-library/
I have searched for this online, but am still a bit confused (as I'm sure others will be if they think of something like this). I'd like to preface by saying that this is not for homework and/or profit.
I wanted to create an app that could listen to your microwave as you prepare popcorn. It would work by sounding an alarm when there's a certain time interval between pops (say 5-6 seconds). Again, this is simply a project to keep me occupied - not for a class.
Either way, I'm having trouble trying to figure out how to analyze the audio intake in real-time. That is, I need a way to log the time when a "pop" occurs. So that you guys don't think I didn't do any research into the matter, I've checked out this SO question and have extensively searched the AudioRecord function list.
I'm thinking that I will probably have to do something with one of the versions of read() and then compare the recorded audio every 2 seconds or so to the recorded audio of a "pop" (i.e. if 70% or more of the byte[] audioData array is the same as that of a popping sound, then log the time). Can anyone with Android audio input experience let me know if I'm at least on the right track? This is not a question of me wanting you to code anything for me, but a question as to whether I'm on the correct track, and, if not, which direction I should head instead.
I think I have an easier way.
You could use MediaRecorder's getMaxAmplitude method.
Anytime your recorder detects a big jump in amplitude, you have detected a corn pop!
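A minimal polling sketch, assuming recorder is a MediaRecorder that has already been prepared and started; the threshold and polling interval are guesses to calibrate against your microwave:

    // Sketch: poll getMaxAmplitude a few times a second and log pop times.
    // getMaxAmplitude() returns the largest amplitude since the previous call.
    final Handler handler = new Handler();
    final Runnable poll = new Runnable() {
        @Override public void run() {
            if (recorder.getMaxAmplitude() > 7000) {   // 7000 is a guess
                Log.d("Popcorn", "pop at " + System.currentTimeMillis());
            }
            handler.postDelayed(this, 250);            // poll every 250 ms
        }
    };
    handler.post(poll);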
Check out this code (ignore the playback part): Playing back sound coming from microphone in real-time
Basically the idea is that you will have to take the value of each 16-bit sample (which corresponds to the value of the wave at that time). Using the sampling rate, you can calculate the time between peaks in volume. I think that might accomplish what you want.
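A sketch of that idea, scanning a buffer of 16-bit samples for loud peaks and converting the gap between them to seconds; the sample rate, threshold, and 100 ms hold-off are assumptions:

    // Sketch: report the time between loud peaks in a buffer read()
    // from AudioRecord into a short[] array.
    static final int SAMPLE_RATE = 44100;
    static final int THRESHOLD = 10000;        // tune against your recordings
    static void findPeakGaps(short[] audioData) {
        int lastPeak = -1;
        for (int i = 0; i < audioData.length; i++) {
            if (Math.abs(audioData[i]) > THRESHOLD) {
                if (lastPeak >= 0) {
                    double gap = (i - lastPeak) / (double) SAMPLE_RATE;
                    System.out.println("gap between pops: " + gap + " s");
                }
                lastPeak = i;
                i += SAMPLE_RATE / 10;         // skip 100 ms so a pop counts once
            }
        }
    }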
This may be a bit of overkill, but there is a framework from the MIT Media Lab called funf: http://code.google.com/p/funf-open-sensing-framework/
They have already created classes for audio input and some analysis (FFT and the like); saving to files and uploading are also implemented as far as I've seen, and they handle most of the sensors available on the phone.
You can also draw inspiration from the code they wrote, which I think is pretty good.