Android accelerometer calibration?

TL;DR
How come the accelerometer values I get from Sensor.TYPE_ACCELEROMETER are slightly offset? I don't mean by gravity, but by some small error that varies from axis to axis and phone to phone.
Can I calibrate the accelerometer? Or is there a standard way of compensating for these errors?
I'm developing an app that needs acceleration measurements that are as precise as possible (mainly vertical acceleration, i.e. in the same direction as gravity).
I've been doing A LOT of testing, and it turns out that the raw values I get from Sensor.TYPE_ACCELEROMETER are off. If I let the phone rest on a perfectly horizontal surface with the screen up, the accelerometer shows a Z-value of 9.0, where it should be about 9.81. Likewise, if I put the phone in portrait or landscape mode, the X- and Y-accelerometer values show about 9.6 instead of 9.81.
This of course affects my vertical acceleration, as I'm using SensorManager.getRotationMatrixFromVector() to calculate the vertical acceleration, resulting in a vertical acceleration that is off by a different amount depending on the rotation of the device.
Now, before anyone jumps the gun and mentions that I should try using Sensor.TYPE_LINEAR_ACCELERATION instead, I must point out that I'm actually doing that as well, in parallel with TYPE_ACCELEROMETER. Using the gravity sensor I then calculate the vertical acceleration (as described in this answer). The funny thing is that I get EXACTLY the same result as the method that uses the raw accelerometer, SensorManager.getRotationMatrixFromVector() and matrix multiplication (and finally subtracting gravity).
The only way I'm able to get almost exactly zero vertical acceleration for a stationary phone in any rotation is to take the raw accelerometer values, add an offset (from earlier observations, i.e. X+0.21, Y+0.21 and Z+0.81) and then perform the rotation matrix math to get the world coordinate system accelerations. Note that it's not just the calculated vertical acceleration that is wrong - the raw values from Sensor.TYPE_ACCELEROMETER are themselves off, which I would think excludes other error sources like the gyroscope, etc.?
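For illustration, a minimal sketch of that offset-then-rotate approach (the offsets are the ones observed above; the TYPE_ROTATION_VECTOR listener wiring and field names are assumptions, not part of the original code):

// Hypothetical per-device offsets, observed with the phone at rest (see above).
private static final float OFFSET_X = 0.21f;
private static final float OFFSET_Y = 0.21f;
private static final float OFFSET_Z = 0.81f;

private final float[] rotationMatrix = new float[9];
private float[] lastRotationVector;

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
        lastRotationVector = event.values.clone();
    } else if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER
            && lastRotationVector != null) {
        // Apply the empirically observed offsets to the raw readings.
        float ax = event.values[0] + OFFSET_X;
        float ay = event.values[1] + OFFSET_Y;
        float az = event.values[2] + OFFSET_Z;
        // Rotate device coordinates into world coordinates (row 3 of R gives world Z).
        SensorManager.getRotationMatrixFromVector(rotationMatrix, lastRotationVector);
        float worldZ = rotationMatrix[6] * ax + rotationMatrix[7] * ay + rotationMatrix[8] * az;
        // Subtract gravity to get vertical acceleration in the world frame.
        float verticalAcceleration = worldZ - SensorManager.GRAVITY_EARTH;
    }
}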
I have tested this on two different phones (Samsung Galaxy S5 and Sony Xperia Z3 Compact), and both show these accelerometer deviations - but of course not the same values on both phones.
How come the values of Sensor.TYPE_ACCELEROMETER are off, and is there a better way of "calibrating" the accelerometer than simply observing how much the values deviate from gravity and adding the difference before using them?

You should calibrate the gains, offsets, and angles of the 3 accelerometer axes.
Unfortunately it's not possible to cover the whole topic in depth here.
I'll give a short introduction describing the basic concept, and then post a link to the code of a simple Clinometer that implements the calibration.
The calibration routine can be done with 7 measurements (take the mean value of a good number of samples for each; see the sketch after the list below) in different orthogonal positions of your choice, in order to capture all the ±0 and ±g readings of your accelerometers. For example:
STEP 1 = Lay flat
STEP 2 = Rotate 180°
STEP 3 = Lay on the left side
STEP 4 = Rotate 180°
STEP 5 = Lay vertical
STEP 6 = Rotate 180° upside-down
STEP 7 = Lay face down
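A minimal sketch of how mean[axis][step] might be collected (SAMPLES, currentStep and the listener wiring are assumptions; the Clinometer app linked below does this properly):

// Average SAMPLES accelerometer readings into mean[axis][step] for the current step.
private static final int SAMPLES = 500;  // assumed number of samples per step
private final float[][] mean = new float[3][7];
private int sampleCount = 0;
private int currentStep = 0;             // 0..6, advanced by the calibration UI

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
    if (sampleCount < SAMPLES) {
        for (int axis = 0; axis < 3; axis++) {
            mean[axis][currentStep] += event.values[axis] / SAMPLES;
        }
        sampleCount++;
    }
}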
Then you can use the 7 measurements mean[][] to calculate offsets and gains:
// Offsets: for each axis, the two opposite ±g readings average to the zero offset.
calibrationOffset[0] = (mean[0][2] + mean[0][3]) / 2;
calibrationOffset[1] = (mean[1][4] + mean[1][5]) / 2;
calibrationOffset[2] = (mean[2][0] + mean[2][6]) / 2;
// Gains: the difference between the +g and -g readings spans exactly 2g.
calibrationGain[0] = (mean[0][2] - mean[0][3]) / (STANDARD_GRAVITY * 2);
calibrationGain[1] = (mean[1][4] - mean[1][5]) / (STANDARD_GRAVITY * 2);
calibrationGain[2] = (mean[2][0] - mean[2][6]) / (STANDARD_GRAVITY * 2);
using the values of mean[axis][step], where STANDARD_GRAVITY = 9.81.
Then apply the gain and offset corrections to the measurements:
for (int i = 0; i < 7; i++) {
    mean[0][i] = (mean[0][i] - calibrationOffset[0]) / calibrationGain[0];
    mean[1][i] = (mean[1][i] - calibrationOffset[1]) / calibrationGain[1];
    mean[2][i] = (mean[2][i] - calibrationOffset[2]) / calibrationGain[2];
}
and finally calculate the correction angles:
for (int i = 0; i < 7; i++) {
    double norm = Math.sqrt(mean[0][i] * mean[0][i]
            + mean[1][i] * mean[1][i] + mean[2][i] * mean[2][i]);
    angle[0][i] = (float) Math.toDegrees(Math.asin(mean[0][i] / norm));
    angle[1][i] = (float) Math.toDegrees(Math.asin(mean[1][i] / norm));
    angle[2][i] = (float) Math.toDegrees(Math.asin(mean[2][i] / norm));
}
calibrationAngle[2] = (angle[0][0] + angle[0][1])/2; // angle 0 = X axis
calibrationAngle[1] = -(angle[1][0] + angle[1][1])/2; // angle 1 = Y axis
calibrationAngle[0] = -(angle[1][3] - angle[1][2])/2; // angle 2 = Z axis
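Once the offsets and gains above are known, applying them to live readings is just the inverse of the sensor model; a minimal sketch (assuming float[] arrays and the field names above):

// Correct one raw accelerometer reading (X, Y, Z in m/s^2) in place.
public void applyCalibration(float[] values) {
    for (int axis = 0; axis < 3; axis++) {
        values[axis] = (values[axis] - calibrationOffset[axis]) / calibrationGain[axis];
    }
}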
You can find a simple but complete implementation of a 3-axis calibration in this opensource Clinometer app: https://github.com/BasicAirData/Clinometer.
There is also an APK and a Google Play Store link if you want to try it.
You can find the calibration routine in CalibrationActivity.java;
The calibration parameters are applied in ClinometerActivity.java.
Furthermore, you can find a very good technical article that covers 3-axis calibration in depth here: https://www.digikey.it/it/articles/using-an-accelerometer-for-inclination-sensing.

Related

MATLAB: when integrating from acceleration to velocity to position I am getting very high y values

I am getting raw acceleration data from an accelerometer and am trying to double integrate it in order to get the position.
The Android phone used to record the data is set on a flat surface for 3 seconds to diminish drift. I take the mean of the acceleration over the resting period to zero out the beginning. This worked out fine, but when we integrate to velocity and position (using cumtrapz) we get unrealistically high y values (m/s for velocity and meters for position).
The raw data comes from waving the phone at a certain tempo.
Does anyone have ideas on why the position gets such high values?
Below are the graphs showing what I described as well as my code.
Edit: Even when the phone is not rotated, the values are unrealistic and not indicative of how the phone moved. In the attached pictures, the phone was moved in the shape of a box on a flat surface, with no rotation involved.
%VarName2 = accelerometer values in X direction
%VarName3 = accelerometer values in Y direction
%VarName4 = accelerometer values in Z direction
%elapsedArray = time values for each sample of accelerometer data
%limit = number of samples in the initial 3-second resting period
ddx = VarName2 - mean(VarName2(1:limit));
ddx = ddx(1:length(ddx)-200);
elapsedArray = elapsedArray(1:length(elapsedArray)-200);
ddy = VarName3 - mean(VarName3(1:limit));
ddy = ddy(1:length(ddy)-200);
ddz = VarName4 - mean(VarName4(1:limit));
ddz = ddz(1:length(ddz)-200);
velX = cumtrapz(ddx .* elapsedArray);
velY = cumtrapz(ddy .* elapsedArray);
velZ = cumtrapz(ddz .* elapsedArray);
dx = velX - mean(velX(1:limit));
dy = velY - mean(velY(1:limit));
dz = velZ - mean(velZ(1:limit));
posX = cumtrapz(dx .* elapsedArray);
posY = cumtrapz(dy .* elapsedArray);
posZ = cumtrapz(dz .* elapsedArray);
x = posX - mean(posX(1:limit));
y = posY - mean(posY(1:limit));
z = posZ - mean(posZ(1:limit));
figure;
plot(ddx);
title('Acceleration in X')
xlabel('Time (sec)')
ylabel('Acceleration (m/s^2)');
figure;
plot(dx);
title('Velocity in X')
xlabel('Time (sec)')
ylabel('Velocity (m/s)');
figure;
plot(x);
title('Position X')
xlabel('Time (sec)')
ylabel('Position (meters)');
figure;
plot(y);
title('Position Y')
xlabel('Time (sec)')
ylabel('Position (meters)');
figure;
plot(z);
title('Position Z')
xlabel('Time (sec)')
ylabel('Position (meters)');
[Figure: Acceleration in X direction]
[Figure: Velocity and Position in X direction]
What you are seeing is the result of time drift. Assume the accelerometer readings you measure carry a very small error, dErr, at every time point. Once you integrate these values to get velocity, the error at each time point is multiplied by a factor of t. Integrating a second time to get position multiplies the original error by a factor of t^2, so the error propagates as dErr(t)*t^2.
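To see how quickly this blows up, here is a small standalone sketch (plain Java; the 0.01 m/s^2 bias and 100 Hz rate are arbitrary assumptions) that double-integrates a constant bias - after 60 seconds the position error is already about 18 m, matching the 0.5*bias*t^2 growth:

public class DriftDemo {
    public static void main(String[] args) {
        double bias = 0.01;  // constant accelerometer error in m/s^2 (assumed)
        double dt = 0.01;    // 100 Hz sample period
        double velocity = 0.0, position = 0.0;
        for (int i = 0; i < 6000; i++) {   // 60 seconds of samples
            velocity += bias * dt;         // first integration: error grows ~ t
            position += velocity * dt;     // second integration: error grows ~ t^2
        }
        System.out.printf("position error after 60 s: %.1f m%n", position);
    }
}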
In order to get a good estimate of position, you can try to incorporate prior information about position, but you will likely have to use a combination of accelerometer and gyroscope data. You might also want to look into Kalman filters.
Here is a Google Tech Talk explaining this issue:
https://youtu.be/C7JQ7Rpwn2k?t=23m33s

How to get the Euler angles from the rotation vector (Sensor.TYPE_ROTATION_VECTOR)

I rotated my Android device around the x axis (from -180 degrees to 180 degrees); see the image below.
I assumed only the rotation vector's x value would change. Y and z might show some noise, but there shouldn't be much variation in their values.
However, this is what I receive instead; see
https://docs.google.com/spreadsheets/d/1ZLoSKI8XNjI1v4exaXxsuMtzP0qWTP5Uu4C3YTwnsKo/edit?usp=sharing
I suspect my sensor has some problem.
Any idea? Thank you very much.
Jimmy
Your sensor is fine. The rotation vector entries cannot simply be related to the rotation angle around a particular axis. The SensorEvent structure consists of timestamp, sensor, accuracy and values. Depending on the sensor, the float[] of values varies in size from 1 to 5. The rotation vector's values are based on a unit quaternion, all together forming a vector that represents the orientation of the world frame relative to your smartphone-fixed frame.
They are unitless and positive counter-clockwise.
The orientation of the phone is represented by the rotation necessary to align the East-North-Up coordinates with the phone's coordinates. That is, applying the rotation to the world frame (X,Y,Z) would align them with the phone coordinates (x,y,z).
If the vector were expressed as a rotation matrix, one could write v_body = R_rot_vec * v_world, i.e. pushing a world-frame vector into the smartphone-fixed description.
Furthermore, about the vector: the three elements of the rotation vector are equal to the last three components of a unit quaternion <cos(θ/2), x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)>.
Q: So what to do with it? Depending on your Euler-angle convention (of the 24 possible sequences, 12 are valid) you can calculate the corresponding angles u := [ψ,θ,φ] from the quaternion, e.g. by applying the 123 sequence (the quat2eulerxyz() helper below implements exactly this).
If you already have the rotation matrix entries, you can get the Euler angles from those instead, e.g. via the 321 sequence.
In either case q1-q3 are always values[0-2] (don't get confused by u_ijk, as the Diebel reference uses different conventions compared to the standard). But wait - your linked table only has 3 values, which is similar to what I get. This is one SensorEvent of mine; the last three numbers are printed from values[]:
timestamp sensortype accuracy values[0] values[1] values[2]
23191581386897 11 -75 -0.0036907701 -0.014922042 0.9932963
4 quaternion components - 3 given values = 1 unknown. The first component q0 is redundant information (the documentation also says it should be available under values[3], depending on your API level), so we can use the norm (= length) to calculate q0 from the other three: set ||q|| = 1 and solve for q0. Now all of q0-q3 are known.
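In code, recovering q0 is one line; a minimal sketch (on newer API levels SensorManager.getQuaternionFromVector() does this for you):

// rv = the 3 rotation-vector components, values[0..2] = (q1, q2, q3).
// Returns the full unit quaternion (q0, q1, q2, q3).
public static float[] toQuaternion(float[] rv) {
    float sumSq = rv[0] * rv[0] + rv[1] * rv[1] + rv[2] * rv[2];
    // ||q|| = 1  =>  q0 = sqrt(1 - q1^2 - q2^2 - q3^2); clamp against rounding noise.
    float q0 = (float) Math.sqrt(Math.max(0.0f, 1.0f - sumSq));
    return new float[] { q0, rv[0], rv[1], rv[2] };
}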
Furthermore, my Android 4.4.2 device does not have the fourth estimated heading accuracy (in radians) inside values[4], so I evaluate event.accuracy:
for (SensorEvent e : currentEvent) {
    if (e != null) {
        String toMsg = "";
        for (int i = 0; i < e.values.length; i++) {
            toMsg += " " + String.valueOf(e.values[i]);
        }
        iBinder.msgString(String.valueOf(e.timestamp) + " "
                + String.valueOf(e.sensor.getType()) + " "
                + String.valueOf(e.accuracy) + toMsg, 0);
    }
}
Put those equations into code and you will get things sorted.
Here is a short conversion helper that converts quaternions using either the XYZ or the ZYX sequence. It can be run from the shell; the code is on GitHub (BSD-licensed).
The relevant part for XYZ:
/* quaternion to euler in XYZ (seq: 123) */
double* quat2eulerxyz(double* q) {
    /* euler angles */
    double psi   = atan2( -2.*(q[2]*q[3] - q[0]*q[1]), q[0]*q[0] - q[1]*q[1] - q[2]*q[2] + q[3]*q[3] );
    double theta = asin( 2.*(q[1]*q[3] + q[0]*q[2]) );
    double phi   = atan2( 2.*(-q[1]*q[2] + q[0]*q[3]), q[0]*q[0] + q[1]*q[1] - q[2]*q[2] - q[3]*q[3] );
    /* save variables by pushing them back into the array, then return */
    q[1] = psi;
    q[2] = theta;
    q[3] = phi;
    return q;
}
Here are some examples of going from quaternions to Euler angles:
Q: What does the sequence ijk stand for? Take two coordinate frames A and B superposed on each other (all axes aligned) and start rotating frame B: first around the i-axis by angle psi, then around the j-axis by theta, and finally around the k-axis by phi. It could also be α, β, γ for i, j, k. I don't use the numbers as they are confusing (Diebel vs. other papers).
R(psi,theta,phi) = R_z(phi) R_y(theta) R_x(psi)
The trick is that the elementary rotations are applied from right to left, although we read the sequence from left to right.
Those are the three elementary rotations you go through to get from A to B: v_B = R(psi,theta,phi) v_A
Q: So how do the Euler angles/quaternions get from [0°,0°,0°] to e.g. [0°,90°,0°]? First align both frames from the pictures, i.e. the known device frame B with the "invisible" world frame A. You are done superposing when the angles all reach [0°,0°,0°]. Just figure out where east, north and up are where you are sitting right now, and point the device's frame B in those directions. Now when you rotate 90° counter-clockwise around the y-axis you will have the desired [0°,90°,0°] when converting the quaternion.
Julian
Kinematics source: Diebel (Stanford), with solid info on the mechanics background (careful: Diebel denotes XYZ as u_321 (1,2,3) and ZYX as u_123 (3,2,1)).

Approximate indoor positioning using the integration of the linear acceleration

I am trying to calculate the approximate position of an Android phone in a room. I tried different methods, such as location (which is terrible indoors) and gyroscope + compass. I only need to know the approximate position after walking for 5-10 seconds, so I think integrating the linear acceleration twice could be enough. I know the error is terrible because of error propagation, but maybe it will work in my setup. I only need the approximate position in order to point a camera at the Android phone.
I coded the double integration, but I am doing something wrong. If the phone is static on a table, the position (x,y,z) keeps increasing. What is the problem?
static final float NS2S = 1.0f / 1000000000.0f;
float[] last_values = null;
float[] velocity = null;
float[] position = null;
float[] acceleration = null;
long last_timestamp = 0;
SensorManager mSensorManager;
Sensor mAccelerometer;

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION)
        return;
    if (last_values != null) {
        float dt = (event.timestamp - last_timestamp) * NS2S;
        // Subtract the empirically observed bias on each axis.
        acceleration[0] = event.values[0] - 0.0188f;
        acceleration[1] = event.values[1] - 0.00217f;
        acceleration[2] = event.values[2] + 0.01857f;
        for (int index = 0; index < 3; ++index) {
            // Trapezoidal rule: acceleration -> velocity -> position.
            velocity[index] += (acceleration[index] + last_values[index]) / 2 * dt;
            position[index] += velocity[index] * dt;
        }
    } else {
        // First event: allocate the state (Java zero-initializes the arrays).
        last_values = new float[3];
        acceleration = new float[3];
        velocity = new float[3];
        position = new float[3];
    }
    System.arraycopy(acceleration, 0, last_values, 0, 3);
    last_timestamp = event.timestamp;
}
These are the positions I get when the phone is on the table (no motion). The (x,y,z) values keep increasing even though the phone is still.
And these are the positions after calculating the moving average for each axis and subtracting it from each measurement. The phone is still stationary.
How can I improve the code, or is there another method to get the approximate position inside a room?
There are unavoidable measurement errors in the accelerometer. These are caused by tiny vibrations in the table, imperfections in the manufacturing, etc. Accumulating these errors over time results in a random walk. This is why positioning systems can only use accelerometers as a positioning aid through some filter; they still need an absolute position reference such as GPS (which doesn't work well indoors).
There is a great deal of current research into indoor positioning systems. Some research areas that can take advantage of existing infrastructure are WiFi and LED-lighting positioning. There is no obvious solution yet, but I'm sure a dedicated solution will be needed for accurate, reliable indoor positioning.
You said the position always keeps increasing. Do you mean the x, y, and z components only ever become positive, even after resetting several times? Or do you mean the position keeps drifting from zero?
If you output the raw acceleration measurements when the phone is still you should see the measurement errors. Put a bunch of these measurements in an Excel spreadsheet and calculate the mean and the standard deviation. The mean should be zero for all axes; if not, there is a bias that you can remove in your code with a simple averaging filter (calculate a running average and subtract it from each result). The standard deviation shows how far you can expect to drift in each axis after N time steps, as standard_deviation * sqrt(N). This should help you mathematically determine the expected accuracy as a function of time (or N time steps).
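A minimal sketch of that analysis in Java instead of Excel (method and parameter names are assumptions; samples holds one axis of stationary accelerometer readings):

// Estimate the bias and the expected random-walk drift for one axis.
public static void analyzeAxis(float[] samples, int n) {
    double mean = 0.0;
    for (float s : samples) mean += s;
    mean /= samples.length;

    double variance = 0.0;
    for (float s : samples) variance += (s - mean) * (s - mean);
    double stdDev = Math.sqrt(variance / (samples.length - 1));

    System.out.printf("bias = %.5f m/s^2, std dev = %.5f m/s^2%n", mean, stdDev);
    System.out.printf("expected drift after %d steps: %.5f * sqrt(%d) = %.5f%n",
            n, stdDev, n, stdDev * Math.sqrt(n));
}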
Brian is right, there are already deployed indoor positioning systems that work with infrastructure that you can easily find in (almost) any room.
One of the solutions that has proven to be most reliable is WiFi fingerprinting. I recommend you take a look at indoo.rs - www.indoo.rs - they are pioneers in the industry and have a pretty developed system already.
This may not be the most elegant or reliable solution, but in my case it serves the purpose.
Note: in my case, I am grabbing a location before the user can even enter the activity that needs indoor positioning, and I am only concerned with a rough estimate of how much they have moved around.
I have a sensor manager that creates a rotation matrix based on the device orientation (using Sensor.TYPE_ROTATION_VECTOR). That obviously doesn't give me movement forward, backward, or side to side, but only the device orientation. With that orientation I have a good idea of the user's bearing in degrees (which way they are facing), and using the step detector (Sensor.TYPE_STEP_DETECTOR) available since KitKat 4.4, I make the assumption that a step is 1 meter in the direction the user is facing.
Again, I know this is not foolproof or very accurate, but depending on your purpose it might be a simple enough solution.
Every time a step is detected, I basically call this function:
public void computeNewLocationByStep() {
    Location newLocal = new Location("");
    double vAngle = getBearingInDegrees(); // returns my user's bearing
    // 1.0 (not 1) avoids integer division; kEarthRadiusInMeters = 6353000
    double vDistance = 1.0 / g.kEarthRadiusInMeters;
    vAngle = Math.toRadians(vAngle);
    double vLat1 = Math.toRadians(_location.getLatitude());
    double vLng1 = Math.toRadians(_location.getLongitude());
    double vNewLat = Math.asin(Math.sin(vLat1) * Math.cos(vDistance)
            + Math.cos(vLat1) * Math.sin(vDistance) * Math.cos(vAngle));
    double vNewLng = vLng1 + Math.atan2(Math.sin(vAngle) * Math.sin(vDistance) * Math.cos(vLat1),
            Math.cos(vDistance) - Math.sin(vLat1) * Math.sin(vNewLat));
    newLocal.setLatitude(Math.toDegrees(vNewLat));
    newLocal.setLongitude(Math.toDegrees(vNewLng));
    stepCount = 0;
    _location = newLocal;
}

Unit of measurement in a game physics engine

In my game I get the acceleration from the accelerometer.
In my calculations I have to apply a coefficient to convert the unit of measurement into pixel units.
I apply the coefficient found in an Android sample app:
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
mXDpi = metrics.xdpi;
mYDpi = metrics.ydpi;
// 1 inch = 0.0254 m, so dpi / 0.0254 gives pixels per meter
mMetersToPixelsX = mXDpi / 0.0254f;
mMetersToPixelsY = mYDpi / 0.0254f;
to my acceleration, getting pixels/s^2. This way I can use pixels everywhere in my code instead of thinking in meters.
Is this right?
It's going to depend on what sort of physics you want to impose (this assumes you want Newtonian mechanics). If you want to track the motion of the device, then you need to integrate the acceleration to get velocity, and then integrate the velocity to get position. Or, I suppose, you could skip the intermediate step and translate from acceleration to change in position using 0.5*acceleration*t^2 (and then multiply that result by an appropriate scaling factor that you will probably need to determine by experiment); note that this second method may not properly handle constant motion. For each independent dimension, velocity and position would be cumulative sums with these recurrence relations:
velocity[t] = velocity[t-1] + acceleration[t] * (t - (t-1))
position[t] = position[t-1] + velocity[t] * (t - (t-1))
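Putting it together, a minimal sketch of the scaling plus per-frame integration in Java (accelX, dt, velocityX and positionX are assumed fields; mXDpi comes from the snippet above):

// Convert accelerometer readings (m/s^2) to pixels/s^2, then integrate per frame.
float metersToPixelsX = mXDpi / 0.0254f;    // pixels per meter (1 inch = 0.0254 m)
float axPixels = accelX * metersToPixelsX;  // acceleration in pixels/s^2

velocityX += axPixels * dt;                 // pixels/s (dt = frame time in seconds)
positionX += velocityX * dt;                // pixels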

Complementary filter (Gyro + accel) with Android

Recently I have done some research on using both the accelerometer and the gyroscope to track a smartphone without the help of GPS (see this post):
Indoor Positioning System based on Gyroscope and Accelerometer
For that purpose I will need my orientation (angles: pitch, roll, etc.), so here is what I have done so far:
public void onSensorChanged(SensorEvent arg0) {
    if (arg0.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        accel[0] = arg0.values[0];
        accel[1] = arg0.values[1];
        accel[2] = arg0.values[2];
        pitch = Math.toDegrees(Math.atan2(accel[1],
                Math.sqrt(Math.pow(accel[2], 2) + Math.pow(accel[0], 2))));
        tv2.setText("Pitch: " + pitch + "\n" + "Roll: " + roll);
    } else if (arg0.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
        if (timestamp != 0) {
            final float dT = (arg0.timestamp - timestamp) * NS2S;
            angle[0] += arg0.values[0] * dT;
            filtered_angle[0] = (0.98f) * (filtered_angle[0] + arg0.values[0] * dT)
                    + (0.02f) * (pitch);
        }
        timestamp = arg0.timestamp;
    }
}
Here I'm trying to get the angle (just for testing) from my accelerometer (pitch) and from integrating the gyroscope X rate over time, fusing them with a complementary filter:
filtered_angle[0] = (0.98f) * (filtered_angle[0] + gyro_x * dT) + (0.02f)* (pitch)
with dT being roughly 0.009 seconds.
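(For reference, a minimal self-contained variant of that filter, with the gyro rate converted to degrees so both terms share units; the 0.98 weight follows the code above, while the field names and the separate pitch field are assumptions - keeping the latest accelerometer pitch in a field also sidesteps the shared-callback timing question raised below.)

private static final float NS2S = 1.0f / 1000000000.0f;
private static final float ALPHA = 0.98f;  // complementary-filter weight

private float accelPitch;     // latest pitch from the accelerometer, in degrees
private float filteredPitch;  // fused estimate, in degrees
private long gyroTimestamp;

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        float ax = event.values[0], ay = event.values[1], az = event.values[2];
        // Pitch from the gravity direction, in degrees.
        accelPitch = (float) Math.toDegrees(Math.atan2(ay, Math.sqrt(ax * ax + az * az)));
    } else if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
        if (gyroTimestamp != 0) {
            float dT = (event.timestamp - gyroTimestamp) * NS2S;
            // Convert the gyro rate (rad/s) to degrees/s before integrating.
            float gyroDeltaDeg = (float) Math.toDegrees(event.values[0]) * dT;
            // Gyro tracks fast motion; the accelerometer corrects long-term drift.
            filteredPitch = ALPHA * (filteredPitch + gyroDeltaDeg)
                    + (1 - ALPHA) * accelPitch;
        }
        gyroTimestamp = event.timestamp;
    }
}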
But I don't know why my angles are not really accurate... When the device is lying flat on the table (screen facing up):
Pitch (angle from accel) = 1.5 (average)
Integrated gyro = 0 and growing (normal, it's drifting)
Filtered gyro angle = 1.2
and when I lift the phone by 90° (so the screen faces the wall in front of me):
Pitch (angle from accel) = 86 (maximum)
Integrated gyro = way off, but that's normal (it drifts)
Filtered gyro angle = 83 (maximum)
So the angles never reach 90°? Even if I try to lift the phone a bit more...
Why don't they reach 90°? Are my calculations wrong, or is the quality of the sensor poor?
Another thing I'm wondering: with Android I don't "read out" the sensor values; I'm notified when they change. The problem is that, as you can see in the code, the accel and gyro share the same callback... so when I compute the filtered angle I use an accelerometer pitch that was measured about 0.009 seconds earlier, no? Could that be the source of my problem?
Thank you!
I can only repeat myself.
You get position by integrating the linear acceleration twice, but the error is horrible. It is useless in practice; in other words, you are trying to solve the impossible.
What you actually can do is track just the orientation.
Roll, pitch and yaw are evil; do not use them. Check the video I already recommended, at 38:25.
Here is an excellent tutorial on how to track orientation with gyros and accelerometers.
Similar questions that you might find helpful:
track small movements of iphone with no GPS
What is the real world accuracy of phone accelerometers when used for positioning?
how to calculate phone's movement in the vertical direction from rest?
iOS: Movement Precision in 3D Space
How to use Accelerometer to measure distance for Android Application Development
Distance moved by Accelerometer
How can I find distance traveled with a gyroscope and accelerometer?
I wrote a tutorial on the use of the complementary filter for orientation tracking with gyroscope and accelerometer: http://www.pieter-jan.com/node/11 - maybe it can help you.
I tested your code and found that the scale factor is probably not consistent.
Converting the pitch back to radians gives a better result.
In my test, the filtered result is ~90 degrees.
pitch = (float) Math.toDegrees(Math.atan2(accel[1],
        Math.sqrt(Math.pow(accel[2], 2) + Math.pow(accel[0], 2))));
// Convert the pitch back to radians so it matches the integrated gyro rate (rad):
pitch = pitch * (float) Math.PI / 180.0f;
filtered_angle = weight * (filtered_angle + event.values[0] * dT)
        + (1.0f - weight) * pitch;
I tried it, and this will give you an angle of 90:
filtered_angle = (filtered_angle / 83) * 90;
