Detecting periodic data from the phone's accelerometer - android

I am developing an Android app and I need to detect the user's context (at minimum, whether they are walking or driving).
I am using the accelerometer and summing all axes to get the acceleration vector. It works fairly well, in the sense that I can see periodic values while walking, but I need to detect these periods programmatically.
Is there a mathematical function for detecting the period in a set of values? I have heard the Fourier transform can be used for that, but I really don't know how to implement it. It looks pretty complicated :)
Please help.

The simplest way to detect periodicity in data is autocorrelation, and it is also fairly simple to implement. To get the autocorrelation at lag i, you multiply each data point by the data point shifted by i and sum the products. Here is some pseudocode:
for i = 0 to length( data ) do
    autocorrel[ i ] = 0
    for j = 0 to length( data ) do
        autocorrel[ i ] += data( j ) * data( ( j + i ) mod length( data ) )
    done
done
This will give you an array of values. The strongest periodicity is at the index (lag) with the highest value. This way you can extract any periodic components (there is usually more than one).
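As a concrete illustration, here is a minimal Java sketch of the same idea, including the peak search over lags. The method name dominantPeriod and the minLag parameter (used to skip the trivial peak at lag 0) are illustrative assumptions, not part of any Android API; subtracting the mean first, as discussed in the edit below, is optional for pure period detection.
// A minimal Java sketch of the pseudocode above, assuming `data` already holds
// the accelerometer magnitude samples. dominantPeriod and minLag are illustrative names.
static int dominantPeriod(double[] data, int minLag) {
    int n = data.length;
    double[] autocorrel = new double[n];
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            autocorrel[i] += data[j] * data[(j + i) % n];  // circular shift, as in the pseudocode
        }
    }
    // Lag 0 is always the maximum, so search from minLag upwards.
    int bestLag = minLag;
    for (int i = minLag; i < n; i++) {
        if (autocorrel[i] > autocorrel[bestLag]) {
            bestLag = i;
        }
    }
    return bestLag;  // period in samples; divide by the sampling rate to get seconds
}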
Also, I would suggest that you do not try to implement your own FFT in an application. Although the algorithm is very instructive to implement for learning purposes, there is a lot one can get wrong that is hard to test, and your implementation will likely be much slower than those that are already available. If it is possible on your system, I would suggest you use FFTW, which is practically impossible to beat when it comes to FFT implementations.
EDIT:
An explanation of why this works even on values which do not repeat exactly:
The usual and fully correct way to calculate the autocorrelation is to subtract the mean from your data. Say you have [1, 2, 1.2, 1.8]. Then you would subtract 1.5 from each sample, leaving you with [-.5, .5, -.3, .3]. Now if you multiply this with itself at an offset of zero, negatives are multiplied by negatives and positives by positives, yielding (-.5)^2 + (.5)^2 + (-.3)^2 + (.3)^2 = .68. At an offset of one, negatives are multiplied with positives, yielding (-.5)*(.5) + (.5)*(-.3) + (-.3)*(.3) + (.3)*(-.5) = -.64. At an offset of two, negatives are again multiplied by negatives and positives by positives. At an offset of three, something similar to the offset-of-one case happens again. As you can see, you get positive values at offsets of 0 and 2 (the periods) and negative values at 1 and 3.
Now, to only detect the period it is not necessary to subtract the mean. If you just leave the samples as-is, the squared mean is added to every coefficient. Since the same value is added to each calculated coefficient, the comparison yields the same result as if you had first subtracted the mean. At worst, your datatype might overflow (if you use some integral type), or you might get round-off errors when the values get too big (if you use float; usually this is not a problem). If this happens, subtract the mean first and see whether your results improve.
The strongest drawback of autocorrelation versus some kind of fast Fourier transform is speed: autocorrelation takes O(n^2), whereas an FFT only takes O(n log(n)). If you need to calculate the period of very long sequences very often, autocorrelation might not work in your case.
If you want to know how the Fourier transform works, and what all this stuff about real part, imaginary part, magnitude and phase means (have a look at the code posted by Manu, for example), I suggest you have a look at this book.
EDIT2:
In most cases data is neither fully periodic nor fully chaotic and aperiodic. Usually your data is composed of several periodic components of varying strength. A period is a time difference by which you can shift your data to make it similar to itself. The autocorrelation calculates how similar the data is to itself when shifted by a certain amount, and thus gives you the strength of all possible periods. This means there is no single "index of the repeating value": when the data is perfectly periodic, all indexes repeat. The index with the strongest value gives you the shift at which the data is most similar to itself, so it is a time offset, not an index into your data. To understand this, it helps to think of a time series as a sum of perfectly periodic functions (sinusoidal basis functions).
If you need to detect periods in very long time series, it is usually best to slide a window over your data and check the period of this smaller frame. Be aware, however, that the window itself introduces additional periods into your data.
More in the link I posted in the last edit.

There is also a way to compute the autocorrelation of your data using an FFT, which reduces the complexity from O(n^2) to O(n log n). The basic idea: take your periodic sample data, transform it with an FFT, compute the power spectrum by multiplying each FFT coefficient by its complex conjugate, then take the inverse FFT of the power spectrum. You can find pre-existing code to compute the power spectrum without much difficulty; for example, look at the Moonblink Android library, which contains a Java translation of FFTPACK (a good FFT library) as well as some DSP classes for computing power spectra. An autocorrelation method I have used with success is the McLeod Pitch Method (MPM), the Java source code for which is available here. I have edited a method in the class McLeodPitchMethod so that it computes the pitch using the FFT-optimized autocorrelation algorithm:
private void normalizedSquareDifference(final double[] data) {
    int n = data.length;
    // zero-pad the data so we get a number of autocorrelation function (acf)
    // coefficients equal to the window size
    double[] fft = new double[2 * n];
    for (int k = 0; k < n; k++) {
        fft[k] = data[k];
    }
    transformer.ft(fft);
    // the output of the fft is 2n values, symmetric complex;
    // multiply the first n outputs by their complex conjugates
    // to compute the power spectrum
    double[] acf = new double[n];
    acf[0] = fft[0] * fft[0] / (2 * n);
    for (int k = 1; k <= n - 1; k++) {
        acf[k] = (fft[2 * k - 1] * fft[2 * k - 1] + fft[2 * k] * fft[2 * k]) / (2 * n);
    }
    // inverse transform
    transformerEven.bt(acf);
    // the output of the ifft is symmetric real;
    // the first n coefficients are the positive-lag acf coefficients,
    // so acf now contains the autocorrelation coefficients
    double[] divisorM = new double[n];
    for (int tau = 0; tau < n; tau++) {
        // subtract the first and last squared values from the previous divisor to get the new one
        double m = tau == 0 ? 2 * acf[0]
                            : divisorM[tau - 1] - data[n - tau] * data[n - tau] - data[tau - 1] * data[tau - 1];
        divisorM[tau] = m;
        nsdf[tau] = 2 * acf[tau] / m;
    }
}
Where transformer is a private instance of the FFTTransformer class from the java FFTPACK translation, and transformerEven is a private instance of the FFTTransformer_Even class.
A call to McLeodPitchMethod.getPitch() with your data will give a very efficient estimate of the frequency.

Here is an example of calculating the Fourier transform on Android using the FFT class from libgdx:
package com.spec.example;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;
import com.badlogic.gdx.audio.analysis.FFT;

public class spectrogram extends Activity {
    /** Called when the activity is first created. */
    float[] array = {1, 6, 1, 4, 5, 0, 8, 7, 8, 6, 1, 0, 5, 6, 1, 8,
                     0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
    float[] array_hat, res = new float[array.length / 2];
    float[] fft_cpx, tmpr, tmpi;
    float[] mod_spec = new float[array.length / 2];
    float[] real_mod = new float[array.length];
    float[] imag_mod = new float[array.length];
    double[] real = new double[array.length];
    double[] imag = new double[array.length];
    double[] mag = new double[array.length];
    double[] phase = new double[array.length];
    int n;
    float tmp_val;
    String strings;
    FFT fft = new FFT(32, 8000);

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        fft.forward(array);
        fft_cpx = fft.getSpectrum();
        tmpi = fft.getImaginaryPart();
        tmpr = fft.getRealPart();
        for (int i = 0; i < array.length; i++) {
            real[i] = (double) tmpr[i];
            imag[i] = (double) tmpi[i];
            mag[i] = Math.sqrt((real[i] * real[i]) + (imag[i] * imag[i]));
            phase[i] = Math.atan2(imag[i], real[i]);
            /**** Reconstruction ****/
            real_mod[i] = (float) (mag[i] * Math.cos(phase[i]));
            imag_mod[i] = (float) (mag[i] * Math.sin(phase[i]));
        }
        fft.inverse(real_mod, imag_mod, res);
    }
}
More info here: http://www.digiphd.com/android-java-reconstruction-fast-fourier-transform-real-signal-libgdx-fft/

Related

How to find distance of displacement using accelerometer sensor in android smartphone?

I have an Android smartphone containing an accelerometer, a compass and a gyroscope. I want to calculate the distance of displacement using these sensors.
I already tried the basic method, i.e.,
final velocity = initial velocity + (acceleration * time taken)
distance = time taken * speed
But I am unable to get the correct displacement. Every time I try the same displacement, I get different results.
The equation you may be looking for is:
Velocity = (Gravity*Acceleration)/(2*PI*freq)
A correct use of (metric) units for this equation would be:
Gravity = 9806.65 mm/s²
Acceleration = average acceleration over 1 second
Frequency = Hz (of the acceleration waveform over 1 second)
For example, if you gathered data from all 3 axes of the accelerometer, you would do the following to get an acceleration waveform (in raw values) for 3D space:
inputArray[i] = sqrt(X*X + Y*Y + Z*Z);
Once the data is collected, only use the number of samples that would have been collected in one second (if there is a 1 ms delay between values, use 1000 values).
Add the values together and divide by the number of samples to get your average (you may need to make all values positive first, since the accelerometer data can contain negative values); you could use the following loop to do this before finding the average:
// rectify negative samples before averaging
for(i = 0; i < 1000; i++){
    if(inputArray[i] < 0){
        inputArray[i] = inputArray[i] - (inputArray[i]*2); // equivalent to taking the absolute value
    }
}
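For completeness, here is a small sketch of the averaging step the text describes. The array name inputArray follows the snippet above, and the sample count of 1000 assumes the 1 ms sampling interval mentioned earlier; both are assumptions for illustration.
// average the rectified acceleration samples collected over one second
double sum = 0;
int samples = 1000;                  // assumes ~1 ms between samples, as described above
for (int i = 0; i < samples; i++) {
    sum += inputArray[i];
}
double averageAccel = sum / samples; // pass this into Accel2mms(averageAccel, freq) below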
Once you have the acceleration average output you need to perform the equation above.
static double PI = 3.1415926535897932384626433832795;
static double gravity = 9806.65;

double Accel2mms(double accel, double freq){
    double result = 0;
    result = (gravity * accel) / (2 * PI * freq);
    return result;
}
An example could be that the average acceleration is 3 gs over 1 second in a swing:
NOTE: This calculation is based on a sinusoidal waveform, so the frequency would be representative of the physical movement of the accelerometer not the frequency of the sampling rate
Accel2mms(3, 1);
3 g over 1 second with a frequency of 1 (one swing in one direction) = 4682.330468 mm/s, or about 4.7 meters per second.
Hope this is something like what you're looking for.
Bear in mind this calculation is based on a sinusoidal waveform but is being adapted to calculate based on a single movement (frequency 1) so it may not be very accurate. But in theory should work.
As @rIHaNJiTHiN mentioned in the comments, there is no reliable way to get displacement from 2nd and 3rd order sensors (sensors that measure derivatives of displacement, like velocity and acceleration).
GPS is the only way to measure absolute displacement, though its precision and accuracy are not very high at short distances, over short time periods, and in certain places with a bad signal.

Low pass android PCM audio data

I'm working on a guitar tuner app that records audio, takes an FFT of the audio, and finds the peak magnitude to find the fundamental frequency. So far the results show my code works and gives back an accurate frequency when pure tones are played, especially at 500+ Hz; however, with the low frequencies of a guitar and the loud harmonics, the results are kind of messy.
I believe I need to introduce a window function, as well as a low-pass filter, to refine my results and help my app detect the right peak rather than a harmonic, but I'm not too sure.
I have implemented a window function, although I'm not sure it's affecting the final results, and I'm totally stuck on how to implement a low-pass filter.
byte data[] = new byte[bufferSize]; //the audio data read in
...
double[] window = new double[bufferSize]; //window array

//my window function, not sure if correct
for(int i = 0; i < bufferSize-1; ++i){
    window[i] = ((1 - Math.cos(i*2*Math.PI/bufferSize-1))/2);
    data[i] = (byte) (data[i] * window[i]);
}

DoubleFFT_1D fft1d = new DoubleFFT_1D(bufferSize);
double[] fftBuffer = new double[bufferSize*2];
for(int i = 0; i < bufferSize-1; ++i){
    fftBuffer[2*i] = data[i];
    fftBuffer[2*i+1] = 0;
}
fft1d.complexForward(fftBuffer);
//create/populate power spectrum
double[] magnitude = new double[bufferSize/2];
maxVal = 0;
for(int i = 0; i < (bufferSize/2)-1; ++i) {
    double real = fftBuffer[2*i];
    double imaginary = fftBuffer[2*i + 1];
    magnitude[i] = Math.sqrt( real*real + imaginary*imaginary );
    Log.i("mag", String.valueOf(magnitude[i]) + " " + i);
}

//find peak magnitude
for(int i = 0; i < (bufferSize/2)-1; ++i) {
    if(magnitude[i] > maxVal){
        maxVal = (int) magnitude[i];
        binNo = i;
    }
}

//results
freq = 8000 * binNo/(bufferSize/2);
Log.i("freq", "Bin " + String.valueOf(binNo));
Log.i("freq", String.valueOf(freq) + "Hz");
So yeah, I'm not entirely sure the window function is doing much; the power spectrum contains harmonic peaks regardless, and I'm not sure where to begin with a low-pass filter.
The window function can improve your results a bit.
The purpose of the window is to taper the amplitude at the ends of the frame in order to avoid spurious high frequencies. This is necessary because the Fourier transform assumes the signal is periodic: the windowed frame is effectively repeated endlessly in both directions, which causes a discontinuity at the borders.
Applying a window minimizes this problem, though it still occurs to some degree.
If you are working with guitar, build a low-pass filter for the highest tuned frequency you expect, and apply the low-pass filter before applying your window function.
You also need to consider the frequency response of the microphone: I believe it is not easy for these mobile microphones to capture the low frequencies of a tuned guitar; we are talking about 82.4 Hz.
Finding the peak of the FFT is not a good way to build a tuner!
The FFT can be thought of as a bank of band-pass filters, with the magnitude of each bin being the power averaged over the window. A low-pass filter upstream of an FFT isn't going to get you very much - you can just discard the higher-order FFT bins instead - unless you have a requirement for a particularly steep response.
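As a rough illustration of discarding high bins instead of filtering, the peak search from the question could simply be limited to bins below a chosen cutoff. This is a minimal sketch under assumptions: the 1000 Hz cutoff is arbitrary, and sampleRate, bufferSize and magnitude are taken from the question's setup.
// limit the peak search to bins below a cutoff instead of low-pass filtering upstream
int sampleRate = 8000;              // assumed, as in the question
double cutoffHz = 1000.0;           // illustrative cutoff, well above the guitar's fundamentals
int maxBin = (int) (cutoffHz * bufferSize / sampleRate); // bin spacing is sampleRate / bufferSize

int binNo = 0;
double maxVal = 0;
for (int i = 1; i < Math.min(maxBin, magnitude.length); i++) {
    if (magnitude[i] > maxVal) {
        maxVal = magnitude[i];
        binNo = i;
    }
}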
The approach of implementing a guitar tuner with an FFT is problematic (although, having implemented a successful tuner this way, I can say the problems aren't insurmountable).
Finding the peak bin is a naive approach and won't give you precise results. Ever. Each bin is a band-pass filter, so you are assuming the measured result is the bin's centre frequency. Here's what's wrong with that:
The frequencies of semitones in equal temperament form a geometric progression (the ratio is ~1.06), yet the spacing of FFT bins is linear. If we assume Fs is 44.1 kHz and a 1024-point FFT is used, the bin spacing is about 43 Hz. As E2 (the bottom string of a guitar) is ~82 Hz at A440, it's clear that a tuner using this approach will be largely useless. Even trading an extremely large window size for real-time response (and a lot of processing), it's still not very accurate. You're totally screwed trying to tune an electric bass (bottom string: E1, ~41 Hz).
What happens with frequencies that straddle bins? As it happens, the frequencies of C in all octaves are not far from a power of 2, and B - a note a guitar tuner does need to perform well on - is close too. For these notes the fundamental's energy is split almost evenly between two bins, so it's likely no longer the largest peak.
Is the fundamental even the peak frequency? (Hint: it's often not).
Bins overlap. Just how much depends on the window function used.
If you want to persist with FFT solutions, you probably want to use the STFT. There's a good description of how to do it at DSPDimension.com. The one piece of information missing from that page is that frequency is defined as the rate of change of phase:
F = dPhi/dt
Consequently, it is possible to estimate F knowing the phase difference between two consecutive windows of results.
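To make the phase-difference idea concrete, here is a minimal sketch of the usual phase-vocoder calculation, assuming two consecutive FFT frames whose starts are hopSize samples apart; the function name and parameters are illustrative, not from any library mentioned above.
// Estimate the "true" frequency near bin k from the phase difference between two
// consecutive STFT frames. phase1/phase2 are Math.atan2(imag, real) for the same bin
// in consecutive frames; hopSize is the number of samples between frame starts.
static double binFrequency(int k, double phase1, double phase2,
                           int fftSize, int hopSize, double sampleRate) {
    double expected = 2 * Math.PI * k * hopSize / fftSize; // phase advance of an exact bin-centre tone
    double deviation = (phase2 - phase1) - expected;
    // wrap the deviation into (-PI, PI]
    deviation -= 2 * Math.PI * Math.round(deviation / (2 * Math.PI));
    // convert the phase deviation into a fractional bin offset, then into Hz
    double fractionalBin = k + deviation * fftSize / (2 * Math.PI * hopSize);
    return fractionalBin * sampleRate / fftSize;
}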
Note that windowing is sampling, so sampling theory and the Nyquist rate applies to frequency resolution achievable with it. You need at least 2048-point FFTs for a guitar tuner.
FFT peak magnitude frequency detection usually won't work for determining guitar pitch, since the peak frequency is often not the frequency of the note pitch. Try using a pitch detection or estimation algorithm instead. See More accurate estimating guitar pitch frequency based on FFT(already) result for some alternatives.

Approximate indoor positioning using the integration of the linear acceleration

I am trying to calculate the approximate position of an Android phone in a room. I tried different methods such as location (which is terrible indoors) and gyroscope+compass. I only need to know the approximate position after walking for 5-10 seconds, so I think integrating the linear acceleration could be enough. I know the error is terrible because it propagates, but maybe it will work in my setup. I only need the approximate position to point a camera at the Android phone.
I coded the double integration but I am doing something wrong. If the phone is static on a table, the position (x, y, z) keeps increasing. What is the problem?
static final float NS2S = 1.0f / 1000000000.0f;
float[] last_values = null;
float[] velocity = null;
float[] position = null;
float[] acceleration = null;
long last_timestamp = 0;

SensorManager mSensorManager;
Sensor mAccelerometer;

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION)
        return;
    if (last_values != null) {
        float dt = (event.timestamp - last_timestamp) * NS2S;

        // subtract the estimated bias from each axis
        acceleration[0] = (float) event.values[0] - (float) 0.0188;
        acceleration[1] = (float) event.values[1] - (float) 0.00217;
        acceleration[2] = (float) event.values[2] + (float) 0.01857;

        for (int index = 0; index < 3; ++index) {
            // trapezoidal integration: acceleration -> velocity -> position
            velocity[index] += (acceleration[index] + last_values[index]) / 2 * dt;
            position[index] += velocity[index] * dt;
        }
    } else {
        last_values = new float[3];
        acceleration = new float[3];
        velocity = new float[3];
        position = new float[3];
        velocity[0] = velocity[1] = velocity[2] = 0f;
        position[0] = position[1] = position[2] = 0f;
    }
    System.arraycopy(acceleration, 0, last_values, 0, 3);
    last_timestamp = event.timestamp;
}
These are the positions I get when the phone is on the table (no motion). The (x,y,z) values are increasing but the phone is still.
And these are the positions after calculating the moving average for each axis and subtracting it from each measurement. The phone is also still.
How to improve the code or another method to get the approximate position inside a room?
There are unavoidable measurement errors in the accelerometer. These are caused by tiny vibrations in the table, imperfections in manufacturing, etc. Accumulating these errors over time results in a random walk. This is why positioning systems can only use accelerometers as a positioning aid through some filter; they still need some form of absolute reference such as GPS (which doesn't work well indoors).
There is a great deal of current research for indoor positioning systems. Some areas of research into systems that can take advantage of existing infrastructure are WiFi and LED lighting positioning. There is no obvious solution yet, but I'm sure we'll need a dedicated solution for accurate, reliable indoor positioning.
You said the position always keeps increasing. Do you mean the x, y, and z components only ever become positive, even after resetting several times? Or do you mean the position keeps drifting from zero?
If you output the raw acceleration measurements while the phone is still, you should see the measurement errors. Put a bunch of these measurements in an Excel spreadsheet and calculate the mean and the standard deviation. The mean should be zero for all axes; if not, there is a bias that you can remove in your code with a simple averaging filter (calculate a running average and subtract it from each result). The standard deviation shows you how far you can expect to drift in each axis after N time steps, as standard_deviation * sqrt(N). This should help you mathematically determine the expected accuracy as a function of time (or N time steps).
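As a small illustration of doing that check in code rather than a spreadsheet, here is a hedged sketch that estimates the per-axis bias and noise from samples captured while the phone is still; the method name, stillSamples array and the interpretation of the drift estimate are assumptions for illustration only.
// Estimate bias (mean) and noise (standard deviation) of one accelerometer axis from
// samples recorded while the phone lies still, then project the random-walk growth
// after n time steps as stdDev * sqrt(n).
static void analyzeStillAxis(float[] stillSamples, int n) {
    double sum = 0;
    for (float v : stillSamples) sum += v;
    double mean = sum / stillSamples.length;          // bias to subtract in onSensorChanged()

    double sqSum = 0;
    for (float v : stillSamples) sqSum += (v - mean) * (v - mean);
    double stdDev = Math.sqrt(sqSum / stillSamples.length);

    double expectedDrift = stdDev * Math.sqrt(n);     // expected accumulated error after n steps
    System.out.printf("bias=%.5f stdDev=%.5f expected drift after %d steps=%.5f%n",
            mean, stdDev, n, expectedDrift);
}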
Brian is right, there are already deployed indoor positioning systems that work with infrastructure that you can easily find in (almost) any room.
One of the solutions that has proven to be most reliable is WiFi fingerprinting. I recommend you take a look at indoo.rs - www.indoo.rs - they are pioneers in the industry and have a pretty developed system already.
This may not be the most elegant or reliable solution, but in my case it serves the purpose.
Note: in my case, I grab a location before the user can even enter the activity that needs indoor positioning, and I am only concerned with a rough estimate of how much they have moved around.
I have a sensor manager that is creating a rotation matrix based on the device orientation (using Sensor.TYPE_ROTATION_VECTOR). That obviously doesn't give me movement forward, backward, or side to side, but only the device orientation. With that orientation I have a good idea of the user's bearing in degrees (which way they are facing), and using the step detector sensor (TYPE_STEP_DETECTOR) available in KitKat 4.4, I make the assumption that a step is 1 meter in the direction the user is facing.
Again, I know this is not foolproof or very accurate, but depending on your purpose it might be a simple enough solution.
Every time a step is detected I basically call this function:
public void computeNewLocationByStep() {
    Location newLocal = new Location("");

    double vAngle = getBearingInDegrees(); // returns my user's bearing
    double vDistance = 1 / g.kEarthRadiusInMeters; // kEarthRadiusInMeters = 6353000;
    vAngle = Math.toRadians(vAngle);

    double vLat1 = Math.toRadians(_location.getLatitude());
    double vLng1 = Math.toRadians(_location.getLongitude());

    double vNewLat = Math.asin(Math.sin(vLat1) * Math.cos(vDistance) +
            Math.cos(vLat1) * Math.sin(vDistance) * Math.cos(vAngle));
    double vNewLng = vLng1 + Math.atan2(Math.sin(vAngle) * Math.sin(vDistance) * Math.cos(vLat1),
            Math.cos(vDistance) - Math.sin(vLat1) * Math.sin(vNewLat));

    newLocal.setLatitude(Math.toDegrees(vNewLat));
    newLocal.setLongitude(Math.toDegrees(vNewLng));
    stepCount = 0;

    _location = newLocal;
}

Does having variations of gestures in gesture library improve recognition?

I'm working on implementing gesture recognition in my app, using the Gestures Builder to create a library of gestures. I'm wondering if having multiple variations of a gesture will help or hinder recognition (or performance). For example, I want to recognize a circular gesture. I'm going to have at least two variations - one for a clockwise circle, and one for counterclockwise, with the same semantic meaning so that the user doesn't have to think about it. However, I'm wondering if it would be desirable to save several gestures for each direction, for example, of various radii, or with different shapes that are "close enough" - like egg shapes, ellipses, etc., including different angular rotations of each. Anybody have experience with this?
OK, after some experimentation and reading of the Android source, I've learned a little... First, it appears that I don't necessarily have to worry about creating different gestures in my gesture library to cover different angular rotations or directions (clockwise/counterclockwise) of my circular gesture. By default, a GestureStore uses a sequence type of SEQUENCE_SENSITIVE (meaning that the starting and ending points matter) and an orientation style of ORIENTATION_SENSITIVE (meaning that the rotational angle matters). However, these defaults can be overridden with setOrientationStyle(ORIENTATION_INVARIANT) and setSequenceType(SEQUENCE_INVARIANT).
Furthermore, to quote from the comments in the source... "when SEQUENCE_SENSITIVE is used, only single stroke gestures are currently allowed" and "ORIENTATION_SENSITIVE and ORIENTATION_INVARIANT are only for SEQUENCE_SENSITIVE gestures".
Interestingly, ORIENTATION_SENSITIVE appears to mean more than just "orientation matters". Its value is 2, and the comments associated with it and some related (undocumented) constants imply that you can request different levels of sensitivity.
// at most 2 directions can be recognized
public static final int ORIENTATION_SENSITIVE = 2;
// at most 4 directions can be recognized
static final int ORIENTATION_SENSITIVE_4 = 4;
// at most 8 directions can be recognized
static final int ORIENTATION_SENSITIVE_8 = 8;
During a call to GestureLibrary.recognize(), the orientation type value (1, 2, 4, or 8) is passed through to GestureUtils.minimumCosineDistance() as the parameter numOrientations, whereupon some calculations are performed that are above my pay grade (see below). If someone can explain this, I'm interested. I get that it is calculating the angular difference between two gestures, but I don't understand the way it uses the numOrientations parameter. My expectation is that if I specify a value of 2, it finds the minimum distance between gesture A and two variations of gesture B: one being "normal B", and the other being B spun around 180 degrees. Thus, I would expect a value of 8 to consider 8 variations of B, spaced 45 degrees apart. However, even though I don't fully understand the math below, it doesn't look to me like a numOrientations value of 4 or 8 is used directly in any calculations, although values greater than 2 do result in a distinct code path. Maybe that's why those other values are undocumented.
/**
 * Calculates the "minimum" cosine distance between two instances.
 *
 * @param vector1
 * @param vector2
 * @param numOrientations the maximum number of orientations allowed
 * @return the distance between the two instances (between 0 and Math.PI)
 */
static float minimumCosineDistance(float[] vector1, float[] vector2, int numOrientations) {
    final int len = vector1.length;
    float a = 0;
    float b = 0;
    for (int i = 0; i < len; i += 2) {
        a += vector1[i] * vector2[i] + vector1[i + 1] * vector2[i + 1];
        b += vector1[i] * vector2[i + 1] - vector1[i + 1] * vector2[i];
    }
    if (a != 0) {
        final float tan = b / a;
        final double angle = Math.atan(tan);
        if (numOrientations > 2 && Math.abs(angle) >= Math.PI / numOrientations) {
            return (float) Math.acos(a);
        } else {
            final double cosine = Math.cos(angle);
            final double sine = cosine * tan;
            return (float) Math.acos(a * cosine + b * sine);
        }
    } else {
        return (float) Math.PI / 2;
    }
}
Based on my reading, I theorized that the simplest and best approach would be to have one stored circular gesture, setting the sequence type and orientation to invariant. That way, anything circular should match pretty well, regardless of direction or orientation. So I tried that, and it did return high scores (in the range of about 25 to 70) for pretty much anything remotely resembling a circle. However, it also returned scores of 20 or so for gestures that were not even close to circular (horizontal lines, V shapes, etc.), so I didn't feel good about the separation between what should and shouldn't match. What seems to work best is to have two stored gestures, one in each direction, and to use SEQUENCE_SENSITIVE in conjunction with ORIENTATION_INVARIANT. That gives me scores of 2.5 or higher for anything vaguely circular, but scores below 1 (or no matches at all) for gestures that are not circular.
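To tie that together, here is a minimal hedged sketch of the configuration described above (two stored directions, SEQUENCE_SENSITIVE plus ORIENTATION_INVARIANT). The raw resource name, the score threshold of 2.5 and the listener wiring are assumptions for illustration; GestureLibraries, GestureLibrary, GestureStore and Prediction are the standard android.gesture classes.
// assumes a gesture file saved with Gestures Builder in res/raw/gestures (illustrative name)
GestureLibrary library = GestureLibraries.fromRawResource(context, R.raw.gestures);
library.setSequenceType(GestureStore.SEQUENCE_SENSITIVE);        // start/end points matter
library.setOrientationStyle(GestureStore.ORIENTATION_INVARIANT); // rotation does not matter
library.load();

// later, in a GestureOverlayView.OnGesturePerformedListener:
public void onGesturePerformed(GestureOverlayView overlay, Gesture gesture) {
    ArrayList<Prediction> predictions = library.recognize(gesture);
    if (!predictions.isEmpty() && predictions.get(0).score > 2.5) { // threshold found by experiment
        // treat it as a circle, whichever direction it was drawn in
    }
}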

How to detect walking with Android accelerometer

I'm writing an application and my aim is to detect when a user is walking.
I'm using a Kalman filter like this:
float kFilteringFactor=0.6f;
gravity[0] = (accelerometer_values[0] * kFilteringFactor) + (gravity[0] * (1.0f - kFilteringFactor));
gravity[1] = (accelerometer_values[1] * kFilteringFactor) + (gravity[1] * (1.0f - kFilteringFactor));
gravity[2] = (accelerometer_values[2] * kFilteringFactor) + (gravity[2] * (1.0f - kFilteringFactor));
linear_acceleration[0] = (accelerometer_values[0] - gravity[0]);
linear_acceleration[1] = (accelerometer_values[1] - gravity[1]);
linear_acceleration[2] = (accelerometer_values[2] - gravity[2]);
float magnitude = 0.0f;
magnitude = (float)Math.sqrt(linear_acceleration[0]*linear_acceleration[0]+linear_acceleration[1]*linear_acceleration[1]+linear_acceleration[2]*linear_acceleration[2]);
magnitude = Math.abs(magnitude);
if(magnitude>0.2)
//walking
The array gravity[] is initialized with 0s.
I can detect whether a user is walking or not (by looking at the magnitude of the acceleration vector), but my problem is that when the user is not walking and merely moves the phone, it looks as if they are walking.
Am I using the right filter?
Is it right to look only at the magnitude of the vector, or do I have to look at the individual components?
Google provides an API for this called DetectedActivity that can be obtained using the ActivityRecognitionApi. Those docs can be accessed here and here.
DetectedActivity has the method public int getType() to get the current activity of the user and also public int getConfidence() which returns a value from 0 to 100. The higher the value returned by getConfidence(), the more certain the API is that the user is performing the returned activity.
Here is a summary of the constants that can be returned by getType():
int IN_VEHICLE The device is in a vehicle, such as a car.
int ON_BICYCLE The device is on a bicycle.
int ON_FOOT The device is on a user who is walking or running.
int RUNNING The device is on a user who is running.
int STILL The device is still (not moving).
int TILTING The device angle relative to gravity changed significantly.
int UNKNOWN Unable to detect the current activity.
int WALKING The device is on a user who is walking.
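As a rough sketch of how an app typically consumes these values: ActivityRecognitionResult and DetectedActivity are the Google Play services classes the answer refers to, while the intent-based delivery shown here and the confidence threshold of 75 are illustrative assumptions.
// inside the IntentService / BroadcastReceiver that receives activity updates
if (ActivityRecognitionResult.hasResult(intent)) {
    ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
    DetectedActivity activity = result.getMostProbableActivity();
    if (activity.getType() == DetectedActivity.WALKING && activity.getConfidence() > 75) {
        // reasonably confident the user is walking
    } else if (activity.getType() == DetectedActivity.IN_VEHICLE) {
        // probably driving
    }
}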
My first intuition would be to run an FFT analysis on the sensor history, and see what frequencies have high magnitudes when walking.
It's essentially seeing what walking "sounds like", treating the accelerometer sensor inputs like a microphone and seeing the frequencies that are loud when walking (in other words, at what frequency is the biggest acceleration happening).
I'd guess you'd be looking for a high magnitude at some low frequency (like footstep rate) or maybe something else. It would be interesting to see the data.
My guess is you run the FFT and look for the magnitude at some frequency to be greater than some threshold, or the difference between magnitudes of two of the frequencies is more than some amount. Again, the actual data would determine how you attempt to detect it.
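In that spirit, here is a hedged sketch of such a check using the DoubleFFT_1D class mentioned earlier in this page (JTransforms). The 1-3 Hz step band, the 0.2 dominance threshold and the method name are guesses for illustration, not calibrated values; the real data would determine them.
import org.jtransforms.fft.DoubleFFT_1D; // older JTransforms: edu.emory.mathcs.jtransforms.fft

// Look for a dominant low-frequency peak (roughly 1-3 Hz, a typical step rate) in the
// history of accelerometer magnitudes. `history` holds |a| samples taken at `sampleRateHz`.
static boolean looksLikeWalking(double[] history, double sampleRateHz) {
    int n = history.length;
    double[] buf = history.clone();
    new DoubleFFT_1D(n).realForward(buf);         // in-place real FFT (JTransforms packing)

    int lo = (int) Math.max(1, Math.floor(1.0 * n / sampleRateHz));          // ~1 Hz bin
    int hi = (int) Math.min(n / 2 - 1, Math.ceil(3.0 * n / sampleRateHz));   // ~3 Hz bin

    double bandPeak = 0, total = 1e-9;
    for (int k = 1; k < n / 2; k++) {
        double mag = Math.hypot(buf[2 * k], buf[2 * k + 1]);
        total += mag;
        if (k >= lo && k <= hi) bandPeak = Math.max(bandPeak, mag);
    }
    // "walking" if the step-rate peak clearly dominates the spectrum (threshold is a guess)
    return bandPeak / total > 0.2;
}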
For walking detection I use the derivative applied to the smoothed signal from the accelerometer. When the derivative is greater than a threshold value, I can assume it was a step. But I guess that it's not best practice; furthermore, it only works when the phone is placed in a trouser pocket.
The following code was used in this app https://play.google.com/store/apps/details?id=com.tartakynov.robotnoise
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER){
        return;
    }
    final float z = smooth(event.values[2]); // scalar kalman filter
    if (Math.abs(z - mLastZ) > LEG_THRSHOLD_AMPLITUDE)
    {
        mInactivityCount = 0;
        int currentActivity = (z > mLastZ) ? LEG_MOVEMENT_FORWARD : LEG_MOVEMENT_BACKWARD;
        if (currentActivity != mLastActivity){
            mLastActivity = currentActivity;
            notifyListeners(currentActivity);
        }
    } else {
        if (mInactivityCount > LEG_THRSHOLD_INACTIVITY) {
            if (mLastActivity != LEG_MOVEMENT_NONE){
                mLastActivity = LEG_MOVEMENT_NONE;
                notifyListeners(LEG_MOVEMENT_NONE);
            }
        } else {
            mInactivityCount++;
        }
    }
    mLastZ = z;
}
EDIT: I don't think this is accurate enough, since when walking normally the average acceleration is near 0. The most you could do by measuring acceleration is detect when someone starts or stops walking (but, as you said, it's difficult to distinguish that from the device being moved by someone standing in one place).
So... what I wrote earlier probably wouldn't work anyway:
You can "predict" whether the user is moving by discarding the cases when the user is not moving (obvious). The first two options that come to my mind are:
Check whether the phone is "hidden", using the proximity sensor and (optionally) the light sensor. This method is less accurate but easier.
Check the continuity of the movement: if the phone has been moving for more than, say, 10 seconds and the movement is not negligible, then consider the user to be walking. I know that's not perfect either, but it's difficult without using any kind of positioning. By the way, why don't you just use LocationManager?
Try detecting the up-and-down oscillations, the fore-and-aft oscillations, and the frequency of each, and make sure they stay aligned within bounds on average. That way you detect walking, and specifically that person's gait style, which should remain relatively constant for several steps in a row to qualify as walking.
As long as the last 3 oscillations line up within reason, conclude that walking is occurring, provided the following also holds:
Measure horizontal acceleration and update a velocity value with it. The velocity will drift over time, so keep a moving average of velocity smoothed over the duration of a step; as long as it doesn't drift by more than, say, half of walking speed per 3 oscillations, it's walking - but only if it initially rose to walking speed within a short time, i.e. half a second or perhaps 2 oscillations.
All of that should just about cover it.
Of course, a little AI would help make things simpler, or just as complex but amazingly accurate, if you treated all of these as inputs to a neural network, i.e. as preprocessing.
