I have come to the conclusion that it is impossible to determine velocity from the accelerometer in an Android device. Assuming my initial velocity is zero, the integration scheme employed to get the velocity would be
v_ib_b = old_v_ib_b + f_ib_b * dt,
where v_ib_b is the velocity in the inertial body frame resolved along the body axes and f_ib_b is the specific force measured by the accelerometer.
If I want to compute the velocity of the smartphone in the local navigation frame (ENU), the following formula can be used (a simplified Bortz equation, neglecting Coriolis/sculling and transport rate):
v_eb_n = old_v_eb_n + (C_b_n * (f_ib_b + 0.5 * cross(omega_ib_b, f_ib_b)) - g) * dt,
where C_b_n is the rotation matrix from body to ENU and omega_ib_b is the angular rate measured by the gyroscope.
I attempted to use Sensor.TYPE_ROTATION_VECTOR, MadgwickAHRS, and my own sensor fusion to obtain the orientation and hence the rotation matrix to the ENU frame. No matter which one was used, this is the output of integrating the velocity over a few minutes while riding a bike (the smartphone was mounted on the handlebar, facing the sky):
and this is the raw data integrated:
I assume the accelerometer performs poorly and is so noisy that the measurements cancel out over time, hence the velocity cannot be obtained. Any ideas?
The relatively small changes in acceleration will be mixed in with the noise. Is something blocking the use of GPS data? Another direction would be combining the gyroscope data with the accelerometer data for better noise reduction, creating a full IMU.
How can I get a real-time exercise count and angle using ML Kit? I checked https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html for push-up and squat exercise counting.
I am getting the angle with the following method:
import com.google.mlkit.vision.pose.PoseLandmark
import kotlin.math.atan2

fun getAngle(firstPoint: PoseLandmark, midPoint: PoseLandmark, lastPoint: PoseLandmark): Double {
    var result = Math.toDegrees(
        (atan2(lastPoint.position.y - midPoint.position.y,
               lastPoint.position.x - midPoint.position.x)
         - atan2(firstPoint.position.y - midPoint.position.y,
                 firstPoint.position.x - midPoint.position.x)).toDouble())
    result = Math.abs(result) // Angle should never be negative
    if (result > 180) {
        result = 360.0 - result // Always return the representation between 0 and 180 degrees
    }
    return result
}
I have added some logic on my side, but I would still like to know if there is a proper way to do this. What I am doing is checking the angle every time.
I want to display a count and feedback based on the exercise the user is doing.
I made a simple demo of squat counting: https://www.youtube.com/watch?v=XKrZV864rEQ
I just made three simple logical judgments:
The height of the elbows must be above the shoulders; otherwise, prompt "please hold your hands behind your head".
Standing straight is judged by the angle between the thigh and calf; the effect is currently not good.
Compare the distance between the legs and the shoulders; the stance should be a certain proportion of the shoulder width, otherwise prompt "Please spread your feet to shoulder width". Both standing and squatting are judged by taking the person's leg length / 5 as the minimum movement unit, measured as the minimum distance from the last coordinate, because the distance between the person and the camera affects the coordinate scale.
My English is poor; most sentences were translated by Google Translate.
Here are several things you could try:
(1) You need to ask your users to face the camera in a certain way; e.g., sideways might be the easiest for detecting squats and frontal would be the hardest. You could try something in between. Also, how high the camera is (on the ground, at head level, etc.) could affect the angle.
(2) Then you can calculate and track the angle between body and thigh and the angle between thigh and calf to determine whether a squat is done (see the sketch after this list).
(3) About feedback, you may set some expected angles, and if the user's angle is smaller than that, you could say "squat deeper"...
(4) To get the expected angles, you would need to find some sample images and run the detector on it to get them.
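As a rough Java sketch of (2): count a rep when the knee (thigh-calf) angle crosses a "down" threshold and then an "up" threshold, computed per frame with a method like getAngle() above; both thresholds are illustrative and need tuning per camera setup:

static final double DOWN_ANGLE = 90.0;  // knee bent at least this far => bottom of squat (illustrative)
static final double UP_ANGLE = 160.0;   // knee nearly straight => standing (illustrative)
boolean isDown = false;
int repCount = 0;

void onFrame(double kneeAngle) {
    if (!isDown && kneeAngle < DOWN_ANGLE) {
        isDown = true;              // reached the bottom of the squat
    } else if (isDown && kneeAngle > UP_ANGLE) {
        isDown = false;             // stood back up: count one full rep
        repCount++;
    }
}

Using two separate thresholds gives hysteresis, so jitter around a single angle value doesn't produce spurious counts.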
I am currently implementing a speedometer by receiving orientation data from my phone. I am using:
SensorManager.getRotationMatrix(R, I, gravity, geomagnetic);
float[] orientation = new float[3];
SensorManager.getOrientation(R, orientation);
float azimuth = orientation[0];
double azimuthD = Math.toDegrees(azimuth);
if (azimuthD < 0) azimuthD += 360;
With this I am able to receive the rotation data from my phone, such as the azimuth.
Anyway, this works fine while the device is placed on a table or similar. But when it rotates around a point (in my case the device is fixed to a wheel rotating at a certain speed), the values are far from accurate. I believe that, since I am using the gravity and geomagnetic sensors, there could be a conflict with the forces that influence these sensors while rotating. As the wheel turns, the orientation changes relative to a fixed point, but the local device rotation stays the same.
How can I access the orientation of the device while it's turning without running into a lot of noisy data?
I read a bit about the Sensor.TYPE_ROTATION_VECTOR property, but couldn't quite figure out how it works. I also read about the possibility of remapping the coordinate system, but how is that supposed to help, since my phone is never exactly vertical to the floor, but rather tilted at an angle of 5°-10°?
I would appreciate any help.
Cheers,
viehlieb
I guess I found my answer.
The solution was to throw away all the code I posted above and use the gyroscope, obviously.
The gyroscope values measure the angular velocity of the device's rotation, in the device's own coordinate system. In my case the relevant value was the rotation around the z-axis.
Values are in radians per second; dividing by 2π gives revolutions per second, which multiplied by the wheel's circumference gives m/s. So the trick was in the onSensorChanged method:
if (sensorEvent.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
    gyroscope = sensorEvent.values;
    double rotZ = gyroscope[2];             // angular velocity around z, rad/s
    double degrees = Math.toDegrees(rotZ);  // deg/s
    // deg/s / 360 = revolutions per second; * circumference (2.23 m) = m/s; * 3.6 = km/h
    float speed = (float) (degrees / 360.0 * 2.23 * 3.6);
}
If you'd like smoother values, you could store them in an array and calculate the average. Remember to clear the array every 20th call (or so) to onSensorChanged. With SENSOR_DELAY_GAME registered, there is sufficient data over which to build the average.
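A minimal sketch of that averaging, assuming the speed value computed above; a ring buffer sidesteps the explicit clearing (all names are illustrative):

static final int WINDOW = 20;       // roughly every 20th onSensorChanged call
float[] speeds = new float[WINDOW];
int index = 0;

float addAndAverage(float speed) {
    speeds[index] = speed;
    index = (index + 1) % WINDOW;   // overwrite the oldest sample instead of clearing
    float sum = 0f;
    for (float s : speeds) sum += s;
    return sum / WINDOW;
}

Note that the average only becomes meaningful once the buffer has filled for the first time.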
I have an Android smartphone containing an accelerometer, a compass sensor, and a gyroscope. I want to calculate the displacement using these sensors.
I already tried the basic method, i.e.,
final velocity = initial velocity + (acceleration * time taken)
distance = time taken * speed
But I am unable to get the correct displacement. Every time I try the same displacement, I get different results.
The equation you may be looking for is:
Velocity = (Gravity*Acceleration)/(2*PI*freq)
A correct use of units for this equation (metric) would be
Gravity = mm/s squared = 9806.65
Acceleration = average acceleration over 1 second
Frequency = Hz (of the acceleration waveform over 1 second)
For example, if you gathered data from all 3 axes of the accelerometer, you would do the following to get an acceleration waveform (in raw values) for 3D space:
inputArray[i] = sqrt(X*X + Y*Y + Z*Z);
Once the data is collected, only use the number of samples that would have been collected in that period (if there is a 1 ms delay between values, only use 1000 values).
Add the values together and divide by the number of samples to get your average. You may need to make all values positive first if the accelerometer data contains negative values; you could use this loop to do that before finding the average (the averaging itself is sketched after the loop).
for (i = 0; i < 1000; i++) {
    if (inputArray[i] < 0) {
        inputArray[i] = inputArray[i] - (inputArray[i] * 2); // equivalent to negating the value
    }
}
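A minimal sketch of the averaging step itself, assuming 1000 samples as above:

double sum = 0;
for (i = 0; i < 1000; i++) {
    sum += inputArray[i];           // values already made positive by the loop above
}
double averageAccel = sum / 1000.0; // average acceleration over 1 second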
Once you have the acceleration average output you need to perform the equation above.
static double PI = 3.1415926535897932384626433832795;
static double gravity = 9806.65; // mm/s squared

double Accel2mms(double accel, double freq) {
    return (gravity * accel) / (2 * PI * freq);
}
An example could be that the average acceleration is 3 gs over 1 second in a swing:
NOTE: This calculation is based on a sinusoidal waveform, so the frequency would be representative of the physical movement of the accelerometer not the frequency of the sampling rate
Accel2mms(3, 1);
3 gs over 1 second with a frequency of 1 (1 swing in one direction) = 4682.330468 mm/s, or about 4.7 m/s.
Hope this is something like what you're looking for.
Bear in mind this calculation is based on a sinusoidal waveform but is being adapted to a single movement (frequency 1), so it may not be very accurate. But in theory it should work.
As @rIHaNJiTHiN mentioned in the comments, there is no reliable way to get displacement from 2nd- and 3rd-order sensors (sensors that measure derivatives of displacement, like velocity and acceleration).
GPS is the only way to measure absolute displacement, though its precision and accuracy are not very high over short distances and short time periods (and in certain places with a bad signal).
Is there a way to get the velocity without GPS in Android? I don't need very accurate values.
Well, sort of, but you will need to do a lot of processing.
You can make frequent accelerometer readings and integrate the values once to get velocity. This won't get you an accurate starting velocity, but after a while it will probably work (unless you start when the phone is driving along in a car). See also this post.
Now, some pseudo code:
We start at t = 0 and measure acceleration along all three axes.
a = get_acceleration()
vx = vx + (a.x - gravity.x) * dt;
vy = vy + (a.y - gravity.y) * dt;
vz = vz + (a.z - gravity.z) * dt;
After doing this for a few seconds (sampling frequently, e.g. at 50 Hz, with dt the time between samples), the accumulated sum approximates the velocity. You will also need to work out which way up your device is, and therefore how much of the acceleration you are reading is due to gravity, and compensate.
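For illustration, a hedged sketch of this on Android using TYPE_LINEAR_ACCELERATION (which has gravity already removed) and the event timestamp for dt; drift still accumulates quickly, so treat it as a starting point only:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

class VelocityIntegrator implements SensorEventListener {
    float vx, vy, vz;        // integrated velocity in m/s, device coordinates
    long lastTimestamp = 0;  // nanoseconds

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;
        if (lastTimestamp != 0) {
            float dt = (event.timestamp - lastTimestamp) * 1e-9f; // ns -> s
            vx += event.values[0] * dt;
            vy += event.values[1] * dt;
            vz += event.values[2] * dt;
        }
        lastTimestamp = event.timestamp;
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}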
I'm writing an application and my aim is to detect when a user is walking.
I'm using a low-pass filter like this:
float kFilteringFactor = 0.6f;

// Low-pass filter: isolate gravity
gravity[0] = (accelerometer_values[0] * kFilteringFactor) + (gravity[0] * (1.0f - kFilteringFactor));
gravity[1] = (accelerometer_values[1] * kFilteringFactor) + (gravity[1] * (1.0f - kFilteringFactor));
gravity[2] = (accelerometer_values[2] * kFilteringFactor) + (gravity[2] * (1.0f - kFilteringFactor));

// Subtract gravity to get linear acceleration
linear_acceleration[0] = accelerometer_values[0] - gravity[0];
linear_acceleration[1] = accelerometer_values[1] - gravity[1];
linear_acceleration[2] = accelerometer_values[2] - gravity[2];

float magnitude = (float) Math.sqrt(
        linear_acceleration[0] * linear_acceleration[0]
        + linear_acceleration[1] * linear_acceleration[1]
        + linear_acceleration[2] * linear_acceleration[2]); // sqrt is already non-negative

if (magnitude > 0.2)
    // walking
The array gravity[] is initialized with 0s.
I can detect when a user is walking or not (looking at the magnitude of the acceleration vector), but my problem is that when a user is not walking but moves the phone, it seems as if he is walking.
Am I using the right filter?
Is it right to watch only the magnitude of the vector, or do I have to look at the individual values?
Google provides an API for this called DetectedActivity that can be obtained using the ActivityRecognitionApi. Those docs can be accessed here and here.
DetectedActivity has the method public int getType() to get the current activity of the user and also public int getConfidence() which returns a value from 0 to 100. The higher the value returned by getConfidence(), the more certain the API is that the user is performing the returned activity.
Here is a summary of the constants returned by getType() (a usage sketch follows the list):
int IN_VEHICLE The device is in a vehicle, such as a car.
int ON_BICYCLE The device is on a bicycle.
int ON_FOOT The device is on a user who is walking or running.
int RUNNING The device is on a user who is running.
int STILL The device is still (not moving).
int TILTING The device angle relative to gravity changed significantly.
int UNKNOWN Unable to detect the current activity.
int WALKING The device is on a user who is walking.
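For illustration, a minimal sketch of reading the result inside a BroadcastReceiver once activity updates have been requested (registration boilerplate omitted; the confidence threshold of 75 is an arbitrary choice):

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

public class ActivityReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (ActivityRecognitionResult.hasResult(intent)) {
            ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
            DetectedActivity activity = result.getMostProbableActivity();
            if (activity.getType() == DetectedActivity.WALKING
                    && activity.getConfidence() > 75) {
                // Treat the user as walking
            }
        }
    }
}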
My first intuition would be to run an FFT analysis on the sensor history, and see what frequencies have high magnitudes when walking.
It's essentially seeing what walking "sounds like": treating the accelerometer sensor inputs like a microphone and seeing which frequencies are loud when walking (in other words, at what frequency the biggest acceleration is happening).
I'd guess you'd be looking for a high magnitude at some low frequency (like footstep rate) or maybe something else. It would be interesting to see the data.
My guess is you run the FFT and look for the magnitude at some frequency to be greater than some threshold, or for the difference between the magnitudes of two frequencies to be more than some amount. Again, the actual data would determine how you attempt to detect it.
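As a sketch of that idea: instead of a full FFT you could check the magnitude at a single candidate frequency (e.g. a typical step rate around 2 Hz) using the Goertzel algorithm; the sample rate, target frequency, and threshold below are all illustrative:

// Goertzel: magnitude of one frequency bin in a buffer of accelerometer magnitudes
static double goertzelMagnitude(float[] samples, double sampleRateHz, double targetHz) {
    double omega = 2.0 * Math.PI * targetHz / sampleRateHz;
    double coeff = 2.0 * Math.cos(omega);
    double s1 = 0, s2 = 0;
    for (float x : samples) {
        double s0 = x + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    // Squared magnitude of the bin at targetHz
    double power = s1 * s1 + s2 * s2 - coeff * s1 * s2;
    return Math.sqrt(Math.max(power, 0.0));
}

// Example: walking if the ~2 Hz component is loud enough (threshold is illustrative)
boolean looksLikeWalking(float[] buffer) {
    return goertzelMagnitude(buffer, 50.0, 2.0) > 10.0;
}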
For walking detection I use the derivative applied to the smoothed accelerometer signal. When the derivative is greater than a threshold value, I can assume it was a step. But I guess it's not best practice; furthermore, it only works when the phone is placed in a pants pocket.
The following code was used in this app https://play.google.com/store/apps/details?id=com.tartakynov.robotnoise
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
        return;
    }
    final float z = smooth(event.values[2]); // scalar Kalman filter
    if (Math.abs(z - mLastZ) > LEG_THRSHOLD_AMPLITUDE) {
        mInactivityCount = 0;
        int currentActivity = (z > mLastZ) ? LEG_MOVEMENT_FORWARD : LEG_MOVEMENT_BACKWARD;
        if (currentActivity != mLastActivity) {
            mLastActivity = currentActivity;
            notifyListeners(currentActivity);
        }
    } else {
        if (mInactivityCount > LEG_THRSHOLD_INACTIVITY) {
            if (mLastActivity != LEG_MOVEMENT_NONE) {
                mLastActivity = LEG_MOVEMENT_NONE;
                notifyListeners(LEG_MOVEMENT_NONE);
            }
        } else {
            mInactivityCount++;
        }
    }
    mLastZ = z;
}
EDIT: I don't think it's accurate enough, since when walking normally the average acceleration is near 0. The most you can do by measuring acceleration is detect when someone starts walking or stops (but as you said, it's difficult to filter that out from the device being moved by someone standing in one place).
So... what I wrote earlier probably wouldn't work anyway:
You can "predict" whether the user is moving by discarding cases when the user is clearly not moving. The first two options that come to my mind are:
Check whether the phone is "hidden", using the proximity and light sensors (optional). This method is less accurate but easier.
Control the continuity of the movement: if the phone has been moving for more than, say, 10 seconds and the movement is not negligible, then you can consider that the user is walking. I know it's not perfect either, but it's difficult without using any kind of positioning. By the way, why don't you just use LocationManager?
Try detecting the up-and-down oscillations and the fore-and-aft oscillations, plus the frequency of each, and make sure they stay aligned within bounds on average. That way you detect walking, and specifically that person's gait style, which should remain relatively constant for several steps at a time to qualify as moving.
As long as the last 3 oscillations line up within reason, conclude that walking is occurring, as long as this is also true:
You measure horizontal acceleration and update a velocity value with it. The velocity will drift over time, so keep a moving average of velocity smoothed over the duration of a step; as long as it doesn't drift by more than, say, half of walking speed per 3 oscillations, it's walking, but only if it initially rose to walking speed within a short time, i.e. half a second or perhaps 2 oscillations.
All of that should just about cover it.
Of course, a little AI would help make things simpler, or just as complex but amazingly accurate, if you used all of these as inputs to a neural network, i.e. as preprocessing.