Low-pass Android PCM audio data

I'm working on a guitar tuner app that records audio, takes an FFT of it, and finds the peak magnitude to get the fundamental frequency. So far the code works and gives back an accurate frequency when played pure tones, especially at 500 Hz and above; however, with the low frequencies of a guitar and its loud harmonics, the results are messy.
I believe I need to introduce a window function, as well as a low-pass filter, to refine my results and help the app detect the right peak rather than a harmonic, but I'm not sure.
I have implemented a window function, although I'm not sure it's affecting the final results, and I'm totally stuck on how to implement a low-pass filter.
byte data[] = new byte[bufferSize]; // the audio data read in
...
double[] window = new double[bufferSize]; // window array

// Hann window (note the parentheses around (bufferSize - 1))
for (int i = 0; i < bufferSize; ++i) {
    window[i] = (1 - Math.cos(i * 2 * Math.PI / (bufferSize - 1))) / 2;
}

DoubleFFT_1D fft1d = new DoubleFFT_1D(bufferSize);
double[] fftBuffer = new double[bufferSize * 2];

// apply the window while filling the (double) FFT buffer; multiplying
// the byte samples and casting back to byte would throw the precision away
for (int i = 0; i < bufferSize; ++i) {
    fftBuffer[2 * i] = data[i] * window[i];
    fftBuffer[2 * i + 1] = 0;
}
fft1d.complexForward(fftBuffer);
// create/populate the power spectrum
double[] magnitude = new double[bufferSize / 2];
for (int i = 0; i < bufferSize / 2; ++i) {
    double real = fftBuffer[2 * i];
    double imaginary = fftBuffer[2 * i + 1];
    magnitude[i] = Math.sqrt(real * real + imaginary * imaginary);
    Log.i("mag", String.valueOf(magnitude[i]) + " " + i);
}

// find the peak magnitude
maxVal = 0;
for (int i = 0; i < bufferSize / 2; ++i) {
    if (magnitude[i] > maxVal) {
        maxVal = (int) magnitude[i];
        binNo = i;
    }
}

// results (bin spacing is sampleRate / bufferSize; 8000 here is Fs/2 for a 16 kHz rate)
freq = 8000 * binNo / (bufferSize / 2);
Log.i("freq", "Bin " + String.valueOf(binNo));
Log.i("freq", String.valueOf(freq) + "Hz");
So yeah, I'm not entirely sure the window function is doing much (the power spectrum contains harmonic peaks regardless), and I'm not sure where to begin with a low-pass filter.

A window function can improve your results a bit.
The purpose of the window is to taper the amplitude at the ends of the analysis frame, in order to avoid the appearance of spurious high frequencies. This is necessary because the Fourier transform assumes the signal is infinite, so the windowed frame is effectively repeated endlessly in both directions, causing a discontinuity at the borders.
If you apply a window, this problem is minimized, but it still occurs to some degree.
If you are working with guitar, build a low-pass filter with its cutoff just above the highest tuned frequency you expect, and apply the low-pass before your window function.
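For example, a minimal single-pole IIR low-pass sketch (this isn't from the original answer; it assumes the PCM has already been converted to a double[] called samples, and the 16 kHz rate and 1 kHz cutoff are placeholder values to tune):

double sampleRate = 16000.0; // placeholder: use your actual recording rate
double cutoffHz = 1000.0;    // placeholder: just above the highest expected fundamental
double rc = 1.0 / (2.0 * Math.PI * cutoffHz);
double dt = 1.0 / sampleRate;
double alpha = dt / (rc + dt);

double[] filtered = new double[samples.length];
double prev = 0.0;
for (int i = 0; i < samples.length; i++) {
    // y[n] = y[n-1] + alpha * (x[n] - y[n-1])
    prev += alpha * (samples[i] - prev);
    filtered[i] = prev;
}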
You also need to consider the frequency response of the microphone. I don't believe it is easy for these mobile microphones to capture the low frequencies of a tuned guitar; we are talking about 82.4 Hz.
Finding the peak of the FFT is not a good way to build a tuner!

The FFT can be thought of as a series of band-pass filters, with the magnitude of each bin being the power averaged over the window. An LPF upstream of an FFT isn't going to get you very much (you can just discard the higher-order FFT bins instead), unless you have a requirement for a particularly steep response.
The approach of implementing a guitar tuner with an FFT is problematic (although, having implemented a successful tuner this way, I can say the problems aren't insurmountable).
Finding the peak bin is a naive approach and won't give you precise results. Ever. Each bin is a band-pass filter, so you make the assumption that the measured result is the bin centre frequency. Here's what's wrong:
The frequencies of semitones in equal temperament form a geometric progression (the ratio is ~1.06), yet the spacing of FFT bins is linear. If we assume Fs is 44.1 kHz and a 1024-point FFT is used, the bin spacing is 44100/1024 ≈ 43 Hz. As E2 (the bottom string of a guitar) is ~82 Hz at A440, it's clear that a tuner using this approach will be largely useless. Even trading an extremely large window size for real-time response (and a lot of processing), it's still not very accurate. You're totally screwed trying to tune an electric bass (bottom string: E1, ~41 Hz).
What happens with frequencies that straddle bins? As it happens, the frequencies of C in all octaves are not far from a power of 2. B, a note a guitar tuner does need to perform well on, is close too. For these notes, the fundamental's energy is split almost evenly between two bins, so it is likely no longer the largest peak.
Is the fundamental even the peak frequency? (Hint: it's often not).
Bins overlap. Just how much depends on the window function used.
If you want to persist with FFT solutions, you probably want to use the STFT (short-time Fourier transform). There's a good description of how to do it at DSPDimension.com. The one piece of information missing from that page is that frequency is defined as the rate of change of phase:
F = dPhi/dt
Consequently, it is possible to estimate F knowing the phase difference between two consecutive windows of results.
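A hedged sketch of that refinement in Java (not from the original answer; phase1 and phase2 are assumed to come from Math.atan2(imag, real) on the same bin of two FFTs taken hopSize samples apart):

static double refineFrequency(double phase1, double phase2, int bin,
                              int fftSize, int hopSize, double fs) {
    double binFreq = fs * (double) bin / fftSize;                       // bin centre frequency
    double expected = 2.0 * Math.PI * hopSize * bin / (double) fftSize; // expected phase advance per hop
    double delta = (phase2 - phase1) - expected;                        // measured deviation
    delta -= 2.0 * Math.PI * Math.round(delta / (2.0 * Math.PI));       // wrap to [-pi, pi]
    return binFreq + delta * fs / (2.0 * Math.PI * hopSize);            // F = dPhi/dt, in Hz
}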
Note that windowing is itself a form of sampling, so sampling theory and the Nyquist rate apply to the frequency resolution achievable with it. You need at least 2048-point FFTs for a guitar tuner.
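To put numbers on that: with Fs = 44.1 kHz and a 2048-point FFT, the bin spacing is 44100/2048 ≈ 21.5 Hz, while E2 (82.4 Hz) and the next semitone up, F2 (87.3 Hz), are only ~4.9 Hz apart. Raw peak-picking cannot separate them, which is why the phase-based refinement above is needed.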

FFT peak-magnitude frequency detection usually won't work for determining guitar pitch, since the peak frequency is often not the frequency of the note's pitch. Try using a pitch detection or estimation algorithm instead. See "More accurate estimating guitar pitch frequency based on FFT(already) result" for some alternatives.

Related

How to find distance of displacement using accelerometer sensor in android smartphone?

I have an Android smartphone containing an accelerometer, a compass sensor, and a gyroscope. I want to calculate the distance of displacement using these sensors.
I already tried the basic method, i.e.:
final velocity = initial velocity + (acceleration * time taken)
distance = time taken * speed
But I am unable to get the correct displacement. Every time I try the same displacement I get different results.
The equation you may be looking for is:
Velocity = (Gravity*Acceleration)/(2*PI*freq)
A correct use of (metric) units for this equation would be:
Gravity = mm/s^2 = 9806.65
Acceleration = average acceleration over 1 second
Frequency = Hz (of the acceleration waveform over 1 second)
For example, if you gathered data from all 3 axes of the accelerometer, you would do the following to get an acceleration waveform (in raw values) for 3D space:
inputArray[i] = Math.sqrt(X*X + Y*Y + Z*Z);
Once the data is collected, use only the number of samples that would have been collected in one second (if there is a 1 ms delay between values, use 1000 values).
Add the values together and divide by the number of samples to get your average (you may need to make all values positive first if the accelerometer data contains negative values); you could use this loop to do that before finding the average:
for (int i = 0; i < 1000; i++) {
    inputArray[i] = Math.abs(inputArray[i]); // same as subtracting 2x from each negative value
}
Once you have the average acceleration, plug it into the equation above.
static double PI = 3.1415926535897932384626433832795;
static double gravity = 9806.65; // mm/s^2

double Accel2mms(double accel, double freq) {
    return (gravity * accel) / (2 * PI * freq);
}
An example could be an average acceleration of 3 g over 1 second in a swing:
NOTE: this calculation is based on a sinusoidal waveform, so the frequency is representative of the physical movement of the accelerometer, not the frequency of the sampling rate.
Accel2mms(3, 1);
3 g over 1 second with a frequency of 1 (one swing in one direction) = 4682.33 mm/s, i.e. about 4.7 m/s.
Hope this is something like what you're looking for.
Bear in mind this calculation is based on a sinusoidal waveform being adapted to a single movement (frequency 1), so it may not be very accurate; but in theory it should work.
As @rIHaNJiTHiN mentioned in the comments, there is no reliable way to get displacement from 2nd- and 3rd-order sensors (sensors that measure derivatives of displacement, like velocity and acceleration).
GPS is the only way to measure absolute displacement, though its precision and accuracy are not very high over short distances and short time periods (and in certain places with a bad signal).

Constant and probably inaccurate Frame Rate

I am using the following code to calculate the frame rate in Unity3D 4.0. It's applied to an NGUI label.
void Update () {
    timeleft -= Time.deltaTime;
    accum += Time.timeScale / Time.deltaTime;
    ++frames;

    // Interval ended - update GUI text and start new interval
    if (timeleft <= 0.0) {
        // display two fractional digits (f2 format)
        float fps = accum / frames;
        string format = System.String.Format("{0:F2} FPS", fps);
        FPSLabel.text = format;
        timeleft = updateInterval;
        accum = 0.0F;
        frames = 0;
    }
}
It was working previously, or at least seemed to be. Then I had some problems with physics, so I changed the fixed timestep to 0.005 and the max timestep to 0.017. Yeah, I know that's too low, but my game works fine with it.
Now the problem is that the above FPS code returns 58.82 all the time. I've checked on separate (Android) devices; it just doesn't budge. I thought it might be correct, but in the profiler I can clearly see ups and downs. So obviously something is fishy.
Am I doing something wrong? I copied the code from somewhere (probably the script wiki). Is there any other way to get the correct FPS?
Taking cues from this question, I've tried all the methods in the first answer. Even the following code returns a constant 58.82 FPS. It happens on the Android device only; in the editor I can see the FPS vary.
float fps = 1.0f/Time.deltaTime;
So I checked the value of Time.deltaTime, and it's a constant 0.017 on the device. How can this be possible?
It seems to me that the FPS counter is correct, and the FPS of 58.82 is caused by the changes in your physics time settings. The physics engine probably cannot finish its computation in the available timestep (0.005, which is very low), which means it will keep computing until it reaches the maximum timestep, in your case 0.017. That means every frame takes 0.017 s plus whatever overhead comes from rendering and scripts, and 1 / 0.017 ≈ 58.82.
Maybe you can fix any problems you have with the physics in other ways, without lowering the fixed timestep so much.

How to calculate sound frequency in android?

I want to develop an app to calculate sound frequency in Android. The device will take sound from the microphone (i.e., outside sound), and the app shows a colored background screen; when the sound frequency changes, I have to change the background color of the screen.
So my question is: how can I get the sound frequency? Is there any Android API available for this?
Please help me out with this problem.
Your problem was solved here (EDIT: archived here). You can also analyze the frequency by using an FFT.
EDIT: FFTBasedSpectrumAnalyzer (example code, the link from the comment)
Thanks for the reply. I have done this using the sample at
http://som-itsolutions.blogspot.in/2012/01/fft-based-simple-spectrum-analyzer.html
Just modify the code to calculate the sound frequency using the method below:
// sampleRate = 44100
public static int calculate(int sampleRate, short[] audioData) {
    int numSamples = audioData.length;
    int numCrossing = 0;
    for (int p = 0; p < numSamples - 1; p++) {
        if ((audioData[p] > 0 && audioData[p + 1] <= 0) ||
            (audioData[p] < 0 && audioData[p + 1] >= 0)) {
            numCrossing++;
        }
    }
    float numSecondsRecorded = (float) numSamples / (float) sampleRate;
    float numCycles = numCrossing / 2f; // two zero crossings per cycle; avoid integer division
    float frequency = numCycles / numSecondsRecorded;
    return (int) frequency;
}
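To give an idea of how the method above might be fed from the microphone (not part of the original answer; a hedged sketch, with error handling omitted and buffer sizes chosen for illustration):

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaRecorder;

// Assumes the RECORD_AUDIO permission has been granted.
int sampleRate = 44100;
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
        sampleRate, AudioFormat.CHANNEL_IN_MONO,
        AudioFormat.ENCODING_PCM_16BIT, Math.max(minBuf, sampleRate));
short[] audioData = new short[sampleRate]; // roughly one second of audio
recorder.startRecording();
recorder.read(audioData, 0, audioData.length);
recorder.stop();
recorder.release();
int frequency = calculate(sampleRate, audioData);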
The other answers show how to display a spectrogram. I think the question is how to detect a change in fundamental frequency. This is asked so often on Stack Exchange that I wrote a blog entry (with code!) about it:
http://blog.bjornroche.com/2012/07/frequency-detection-using-fft-aka-pitch.html
Admittedly, the code is in C, but I think you'll find it easy to port.
In short, you must:
low-pass the input signal so that higher-frequency overtones are not mistaken for the fundamental frequency (this may not appear to be an issue in your application, since you are just looking for a change in pitch, but I recommend doing it anyway for reasons too complex to go into here).
window the signal, using a proper windowing function. To get the most responsive output, you should overlap the windows, which I don't do in my sample code.
Perform an FFT on the data in each window, and calculate the frequency using the index of the maximum absolute peak value.
Keep in mind that for your application, where you probably want to detect changes in pitch accurately and quickly, the FFT method I describe may not be sufficient. You have two options:
There are techniques for increasing the specificity of the pitch tracking using phase information, not just the absolute peak.
Use a time-domain method based on autocorrelation. Yin is an excellent choice. (google for "yin pitch tracking")
Here is a link to the code mentioned. There's also some other useful code there.
https://github.com/gast-lib/gast-lib/blob/master/library/src/root/gast/audio/processing/ZeroCrossing.java
Here's the deal with ZeroCrossing: it is inaccurate for determining frequency precisely from audio recorded on an Android device. That said, it is still useful for giving your app a general sense that the sound it is hearing is a constant singing tone rather than just noise.
The code here seems to work quite well for determining frequency (if you can translate it from C# to Java):
http://code.google.com/p/yaalp/

Detecting periodic data from the phone's accelerometer

I am developing an Android app and I need to detect the user's context (at minimum, whether they are walking or driving).
I am using the accelerometer, combining all axes to get the magnitude of the acceleration vector. It works well in the sense that I can see periodic values while walking, but I need to detect these periods programmatically.
Is there any kind of math function to detect a period in a set of values? I've heard the Fourier transform is usable for that, but I really don't know how to implement it. It looks pretty complicated :)
Please help.
The simplest way to detect periodicity in data is autocorrelation, which is also fairly simple to implement. To get the autocorrelation at lag i, you multiply each data point by the data point shifted by i and sum the products. Here is a sketch in Java:
double[] autocorrel = new double[data.length];
for (int i = 0; i < data.length; i++) {
    autocorrel[i] = 0;
    for (int j = 0; j < data.length; j++) {
        // circular shift: wrap (j + i) around the end of the array
        autocorrel[i] += data[j] * data[(j + i) % data.length];
    }
}
This will give you an array of values. The strongest periodicity is at the lag (index) with the highest value, apart from the trivial peak at lag 0. This way you can extract any periodic parts (there is usually more than one).
Also, I would suggest you do not try to implement your own FFT in an application. Although the algorithm is very good for learning, there is much one can do wrong which is hard to test, and your implementation would likely be much slower than those already available. If it is possible on your system, I would suggest using FFTW, which is practically impossible to beat in any respect when it comes to FFT implementations.
EDIT:
Explanation of why this works even on values which do not repeat exactly:
The usual and fully correct way to calculate the autocorrelation is to subtract the mean from your data. Say you have [1, 2, 1.2, 1.8]. Then you would subtract 1.5 from each sample, leaving you with [-.5, .5, -.3, .3]. Now if you multiply this with itself at an offset of zero, negatives will be multiplied by negatives and positives by positives, yielding (-.5)^2 + (.5)^2 + (-.3)^2 + (.3)^2 = .68. At an offset of one, negatives will be multiplied with positives, yielding (-.5)(.5) + (.5)(-.3) + (-.3)(.3) + (.3)(-.5) = -.64. At an offset of two, negatives are again multiplied by negatives and positives by positives. At an offset of three, something similar to the offset-of-one case happens again. As you can see, you get positive values at the offsets of 0 and 2 (the periods) and negative values at 1 and 3.
Now, to detect only the period, it is not necessary to subtract the mean. If you just leave the samples as-is, the squared mean is added at every lag. Since the same value is added to each calculated coefficient, the comparison yields the same results as if you had first subtracted the mean. At worst, your datatype might overflow (if you use some kind of integral type), or you might get round-off errors when the values get too big (if you use float; usually this is not a problem). If this happens, subtract the mean first and see whether your results improve.
The strongest drawback of autocorrelation versus some kind of fast Fourier transform is speed: autocorrelation takes O(n^2), whereas an FFT only takes O(n log n). So if you need to calculate the period of very long sequences very often, autocorrelation might not work in your case.
If you want to know how the Fourier transform works, and what all this stuff about real part, imaginary part, magnitude and phase means (have a look at the code posted by Manu, for example), I suggest you have a look at this book.
EDIT2:
In most cases, data is neither fully periodic nor fully chaotic and aperiodic. Usually your data will be composed of several periodic components of varying strength. A period is a time difference by which you can shift your data to make it similar to itself. The autocorrelation calculates how similar the data is to itself when shifted by a certain amount, and thus gives you the strength of all possible periods. This means there is no "index of the repeating value": when the data is perfectly periodic, all indexes will repeat. The index with the strongest value gives you the shift at which the data is most similar to itself, so it is a time offset, not an index into your data. To understand this, it helps to see how a time series can be thought of as the sum of perfectly periodic (sinusoidal) basis functions.
If you need to detect periods in very long time series, it is usually best to slide a window over your data and just check the period of this smaller frame. However, be aware that the window itself will add additional periods to your data.
More in the link I posted in the last edit.
There is also a way to compute the autocorrelation of your data using an FFT, which reduces the complexity from O(n^2) to O(n log n). The basic idea: take your periodic sample data, transform it using an FFT, compute the power spectrum by multiplying each FFT coefficient by its complex conjugate, then take the inverse FFT of the power spectrum. You can find pre-existing code to compute the power spectrum without much difficulty; for example, look at the Moonblink Android library, which contains a Java translation of FFTPACK (a good FFT library) and also has some DSP classes for computing power spectra. An autocorrelation method I have used with success is the McLeod Pitch Method (MPM), whose Java source code is available here. I have edited a method in the class McLeodPitchMethod which allows it to compute the pitch using the FFT-optimized autocorrelation algorithm:
private void normalizedSquareDifference(final double[] data) {
    int n = data.length;

    // zero-pad the data so we get a number of autocorrelation function (acf)
    // coefficients equal to the window size
    double[] fft = new double[2 * n];
    for (int k = 0; k < n; k++) {
        fft[k] = data[k];
    }
    transformer.ft(fft);

    // the output of the fft is 2n, symmetric complex;
    // multiply the first n outputs by their complex conjugates
    // to compute the power spectrum
    double[] acf = new double[n];
    acf[0] = fft[0] * fft[0] / (2 * n);
    for (int k = 1; k <= n - 1; k++) {
        acf[k] = (fft[2*k-1] * fft[2*k-1] + fft[2*k] * fft[2*k]) / (2 * n);
    }

    // inverse transform
    transformerEven.bt(acf);
    // the output of the ifft is symmetric real;
    // the first n coefficients are the positive-lag acf coefficients
    // now acf contains the acf coefficients

    double[] divisorM = new double[n];
    for (int tau = 0; tau < n; tau++) {
        // subtract the first and last squared values from the previous divisor
        // to get the new one
        double m = tau == 0 ? 2 * acf[0]
                : divisorM[tau - 1] - data[n - tau] * data[n - tau] - data[tau - 1] * data[tau - 1];
        divisorM[tau] = m;
        nsdf[tau] = 2 * acf[tau] / m;
    }
}
Where transformer is a private instance of the FFTTransformer class from the java FFTPACK translation, and transformerEven is a private instance of the FFTTransformer_Even class.
A call to McLeodPitchMethod.getPitch() with your data will give a very efficient estimate of the frequency.
Here is an example of calculating the Fourier transform on Android using the FFT class from libgdx:
package com.spec.example;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;
import com.badlogic.gdx.audio.analysis.FFT;

public class spectrogram extends Activity {

    float[] array = {1, 6, 1, 4, 5, 0, 8, 7, 8, 6, 1, 0, 5, 6, 1, 8,
                     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
    float[] array_hat, res = new float[array.length / 2];
    float[] fft_cpx, tmpr, tmpi;
    float[] mod_spec = new float[array.length / 2];
    float[] real_mod = new float[array.length];
    float[] imag_mod = new float[array.length];
    double[] real = new double[array.length];
    double[] imag = new double[array.length];
    double[] mag = new double[array.length];
    double[] phase = new double[array.length];
    int n;
    float tmp_val;
    String strings;

    FFT fft = new FFT(32, 8000); // 32-point FFT, 8 kHz sample rate

    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);

        fft.forward(array);
        fft_cpx = fft.getSpectrum();
        tmpi = fft.getImaginaryPart();
        tmpr = fft.getRealPart();
        for (int i = 0; i < array.length; i++) {
            real[i] = (double) tmpr[i];
            imag[i] = (double) tmpi[i];
            mag[i] = Math.sqrt((real[i] * real[i]) + (imag[i] * imag[i]));
            phase[i] = Math.atan2(imag[i], real[i]);

            /**** Reconstruction ****/
            real_mod[i] = (float) (mag[i] * Math.cos(phase[i]));
            imag_mod[i] = (float) (mag[i] * Math.sin(phase[i]));
        }
        fft.inverse(real_mod, imag_mod, res);
    }
}
More info here: http://www.digiphd.com/android-java-reconstruction-fast-fourier-transform-real-signal-libgdx-fft/

How to detect walking with Android accelerometer

I'm writing an application and my aim is to detect when a user is walking.
I'm using a Kalman filter like this:
float kFilteringFactor = 0.6f;

gravity[0] = (accelerometer_values[0] * kFilteringFactor) + (gravity[0] * (1.0f - kFilteringFactor));
gravity[1] = (accelerometer_values[1] * kFilteringFactor) + (gravity[1] * (1.0f - kFilteringFactor));
gravity[2] = (accelerometer_values[2] * kFilteringFactor) + (gravity[2] * (1.0f - kFilteringFactor));

linear_acceleration[0] = accelerometer_values[0] - gravity[0];
linear_acceleration[1] = accelerometer_values[1] - gravity[1];
linear_acceleration[2] = accelerometer_values[2] - gravity[2];

// the square root is already non-negative, so Math.abs is redundant
float magnitude = (float) Math.sqrt(
        linear_acceleration[0] * linear_acceleration[0]
      + linear_acceleration[1] * linear_acceleration[1]
      + linear_acceleration[2] * linear_acceleration[2]);

if (magnitude > 0.2)
    // walking
The gravity[] array is initialized with zeros.
I can detect whether a user is walking by looking at the magnitude of the acceleration vector, but my problem is that when a user is not walking and just moves the phone, it looks like walking.
Am I using the right filter?
Is it right to watch only the magnitude of the vector, or do I have to look at the individual components?
Google provides an API for this called DetectedActivity, which can be obtained using the ActivityRecognitionApi. The docs can be accessed here and here.
DetectedActivity has the method public int getType() to get the current activity of the user, and public int getConfidence(), which returns a value from 0 to 100. The higher the value returned by getConfidence(), the more certain the API is that the user is performing the returned activity.
Here is a summary of the constants returned by getType() (a usage sketch follows the list):
int IN_VEHICLE - The device is in a vehicle, such as a car.
int ON_BICYCLE - The device is on a bicycle.
int ON_FOOT - The device is on a user who is walking or running.
int RUNNING - The device is on a user who is running.
int STILL - The device is still (not moving).
int TILTING - The device angle relative to gravity changed significantly.
int UNKNOWN - Unable to detect the current activity.
int WALKING - The device is on a user who is walking.
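A rough sketch of how this might be consumed (not from the original answer; it assumes a GoogleApiClient has been connected with ActivityRecognition.API and requestActivityUpdates() was called with a PendingIntent pointing at this service):

import android.app.IntentService;
import android.content.Intent;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

public class ActivityRecognitionService extends IntentService {

    public ActivityRecognitionService() {
        super("ActivityRecognitionService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        if (ActivityRecognitionResult.hasResult(intent)) {
            ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
            DetectedActivity activity = result.getMostProbableActivity();
            // react only when the API is reasonably confident (the 75 threshold is a guess)
            if (activity.getType() == DetectedActivity.WALKING
                    && activity.getConfidence() > 75) {
                // the user is probably walking
            }
        }
    }
}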
My first intuition would be to run an FFT analysis on the sensor history, and see what frequencies have high magnitudes when walking.
It's essentially seeing what walking "sounds like": you treat the accelerometer sensor inputs like a microphone and see which frequencies are loud when walking (in other words, at what frequency the biggest acceleration is happening).
I'd guess you'd be looking for a high magnitude at some low frequency (like footstep rate) or maybe something else. It would be interesting to see the data.
My guess is you run the FFT and look for the magnitude at some frequency to be greater than some threshold, or the difference between magnitudes of two of the frequencies is more than some amount. Again, the actual data would determine how you attempt to detect it.
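A hedged sketch of that idea (not from the original answer), using JTransforms' DoubleFFT_1D as in the first question above. The 50 Hz sampling rate, the 256-sample window, the 1-3 Hz "walking band", the accelMagnitude buffer and the threshold are all assumptions to tune against real data:

double fs = 50.0;  // assumed accelerometer sampling rate
int n = 256;       // assumed window length
double[] fftBuffer = new double[2 * n];
for (int i = 0; i < n; i++) {
    fftBuffer[2 * i] = accelMagnitude[i]; // sqrt(x*x + y*y + z*z) per sample (hypothetical buffer)
    fftBuffer[2 * i + 1] = 0;
}
new DoubleFFT_1D(n).complexForward(fftBuffer);

// sum the power in the bins covering roughly 1-3 Hz (typical step rates);
// bin i corresponds to frequency i * fs / n
int loBin = (int) Math.floor(1.0 * n / fs);
int hiBin = (int) Math.ceil(3.0 * n / fs);
double walkingEnergy = 0;
for (int i = loBin; i <= hiBin; i++) {
    double re = fftBuffer[2 * i];
    double im = fftBuffer[2 * i + 1];
    walkingEnergy += re * re + im * im;
}
boolean probablyWalking = walkingEnergy > WALKING_THRESHOLD; // threshold to tune empirically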
For walking detection I use the derivative applied to the smoothed signal from the accelerometer. When the derivative exceeds a threshold value, I take it as a step. I guess it's not best practice, though, and it only works when the phone is in a pants pocket.
The following code was used in this app https://play.google.com/store/apps/details?id=com.tartakynov.robotnoise
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
        return;
    }
    final float z = smooth(event.values[2]); // scalar kalman filter
    if (Math.abs(z - mLastZ) > LEG_THRSHOLD_AMPLITUDE) {
        mInactivityCount = 0;
        int currentActivity = (z > mLastZ) ? LEG_MOVEMENT_FORWARD : LEG_MOVEMENT_BACKWARD;
        if (currentActivity != mLastActivity) {
            mLastActivity = currentActivity;
            notifyListeners(currentActivity);
        }
    } else {
        if (mInactivityCount > LEG_THRSHOLD_INACTIVITY) {
            if (mLastActivity != LEG_MOVEMENT_NONE) {
                mLastActivity = LEG_MOVEMENT_NONE;
                notifyListeners(LEG_MOVEMENT_NONE);
            }
        } else {
            mInactivityCount++;
        }
    }
    mLastZ = z;
}
EDIT: I don't think it's accurate enough, since when walking normally the average acceleration is near 0. The most you can do by measuring acceleration is detect when someone starts or stops walking (but, as you said, it's difficult to distinguish that from the device being moved by someone standing in one place).
So what I wrote earlier probably wouldn't work anyway:
You can "predict" whether the user is moving by discarding the cases when the user is not moving (obvious). The first two options that come to my mind are:
Check whether the phone is "hidden", using the proximity sensor and, optionally, the light sensor. This method is less accurate but easier.
Monitor the continuity of the movement: if the phone is moving for more than, say, 10 seconds and the movement is not negligible, then consider the user to be walking. I know it's not perfect either, but it's difficult without using any kind of positioning; by the way, why don't you just use LocationManager?
Try detecting the up-and-down oscillations and the fore-and-aft oscillations, plus the frequency of each, and make sure they stay aligned within bounds on average. That way you detect walking, and specifically that person's gait style, which should remain relatively constant for several steps at once to qualify as moving.
As long as the last 3 oscillations line up within reason, conclude that walking is occurring, provided this also holds:
Measure horizontal acceleration and update a velocity value with it. Velocity will drift with time, so keep a moving average of velocity smoothed over the duration of a step. As long as it doesn't drift by more than, say, half of walking speed per 3 oscillations, it's walking, but only if it initially rose to walking speed within a short time, i.e. half a second or 2 oscillations perhaps.
All of that should just about cover it.
Of course, a little AI would help make things simpler, or just as complex but amazingly accurate, if you fed all of these as inputs to a neural network, i.e. as preprocessing.
