I'm trying to rotate a model (terrain) situated at (X,Y,Z) = (0,0,0) in my OpenGL world. +X is east, -Z is north, +Y is altitude. The view looks along -Y because the device Euler angles are (0°,0°,0°) while the device is lying flat on the table pointing north.
The Problem:
I need to switch the axes of my device so that the angle measured around device -y is applied around device z.
Currently, rotation around the device x-axis works (it results in a rotation around the world X-axis).
But rotation around device y leads to a rotation around world Y instead of world -Z, and rotation around device z leads to a rotation around world Z instead of world Y.
By now I'm running out of ideas for how to solve this. Could anybody give me a hint?
(OpenGL)                          (Device)
^ +Y-axis                         ^ +z-axis
*                                 *
*                                 *
*                                 *      ^ +y-axis (North)
*                                 *     *
*                                 *    *
*                                 *   *
************> +X-axis             ************> +x-axis
*
*
v +Z-axis (Minus North)
What I've tried so far:
Using Euler angles from SensorManager.getOrientation and swapping the angles works fine, though I run into gimbal lock close to 90° pitch. So I'm looking for another solution (the SensorManager rotation matrix or quaternions).
SensorManager.remapCoordinateSystem in almost every possible configuration -> doesn't help
Swapping columns/rows of my rotation matrix from SensorManager.getRotationMatrix -> doesn't help
Well, I've found a pretty easy solution using quaternions.
The axes are being switched at the end.
See also:
Based on this solution
SensorEvent.values (--> Sensor.TYPE_ROTATION_VECTOR)
SensorManager.getQuaternionFromVector
Now I'm still wondering how to enable pitch angles above 90 deg (in portrait mode). Any ideas?
Activity:
private SensorManager sensorManager;
private Sensor rotationSensor;
public void onCreate(Bundle savedInstanceState) {
...
rotationSensor = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
...
}
// Register/unregister SensorManager Listener at onResume()/onPause() !
public void onSensorChanged(SensorEvent event) {
if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
// Set rotation vector
MyRenderer.setOrientationVector(event.values);
}
}
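For completeness, the registration mentioned in the comment above could look roughly like this (a minimal sketch, assuming the Activity itself implements SensorEventListener; SENSOR_DELAY_GAME is just an example rate):
@Override
protected void onResume() {
    super.onResume();
    // Receive rotation vector updates only while the Activity is visible
    sensorManager.registerListener(this, rotationSensor, SensorManager.SENSOR_DELAY_GAME);
}

@Override
protected void onPause() {
    super.onPause();
    // Stop listening to save battery when the Activity goes to the background
    sensorManager.unregisterListener(this);
}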
MyRenderer:
float[] orientationVector = new float[3];
public void setOrientationVector (float[] vector) {
// Rotation vector to quaternion
float[] quat = new float[4];
SensorManager.getQuaternionFromVector(quat, vector);
// Switch quaternion from [w,x,y,z] to [x,y,z,w]
float[] switchedQuat = new float[] {quat[1], quat[2], quat[3], quat[0]};
// Quaternion to rotation matrix
float[] rotationMatrix = new float[16];
SensorManager.getRotationMatrixFromVector(rotationMatrix, switchedQuat);
// Rotation matrix to orientation vector
SensorManager.getOrientation(rotationMatrix, orientationVector);
}
public void onDrawFrame(GL10 unused) {
...
// Rotate model matrix (note the axes being switched!)
Matrix.rotateM(modelMatrix, 0,
(float) (orientationVector[1] * 180/Math.PI), 1, 0, 0);
Matrix.rotateM(modelMatrix, 0,
(float) (orientationVector[0] * 180/Math.PI), 0, 1, 0);
Matrix.rotateM(modelMatrix, 0,
(float) (orientationVector[2] * 180/Math.PI), 0, 0, 1);
...
}
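Regarding the open question about pitch angles above 90°: one way around the gimbal lock of getOrientation is to skip the Euler-angle step entirely and multiply the model matrix by the (axis-switched) rotation matrix itself. A minimal sketch, assuming the rotationMatrix built in setOrientationVector is kept as a field instead of being reduced to orientationVector:
// In onDrawFrame: apply the full rotation matrix directly,
// avoiding the gimbal lock of the Euler decomposition
float[] rotatedModelMatrix = new float[16];
Matrix.multiplyMM(rotatedModelMatrix, 0, modelMatrix, 0, rotationMatrix, 0);
// ... continue rendering with rotatedModelMatrix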
Related
I have an object which moves on a terrain, and a third-person camera follows it. After I move it for some distance in different directions, it begins to shake or vibrate even when it is not moving, and the camera rotates around it. This is the movement code of the object:
double& delta = engine.getDeltaTime();
GLfloat velocity = delta * movementSpeed;
glm::vec3 t(glm::vec3(0, 0, 1) * (velocity * 3.0f));
//translate the object's matrix before rendering
matrix = glm::translate(matrix, t);
//get the forward vector of the matrix
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f = glm::normalize(f);
f = f * (velocity * 3.0f);
f = -f;
camera.translate(f);
and the camera rotation is
void Camera::rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c, double& delta, GLboolean constrainpitch) {
xoffset *= (delta * this->rotSpeed);
yoffset *= (delta * this->rotSpeed);
pitch += yoffset;
yaw += xoffset;
if (constrainpitch) {
if (pitch >= maxPitch) {
pitch = maxPitch;
yoffset = 0;
}
if (pitch <= minPitch) {
pitch = minPitch;
yoffset = 0;
}
}
glm::quat Qx(glm::angleAxis(glm::radians(yoffset), glm::vec3(1.0f, 0.0f, 0.0f)));
glm::quat Qy(glm::angleAxis(glm::radians(xoffset), glm::vec3(0.0f, 1.0f, 0.0f)));
glm::mat4 rotX = glm::mat4_cast(Qx);
glm::mat4 rotY = glm::mat4_cast(Qy);
view = glm::translate(view, c);
view = rotX * view;
view = view * rotY;
view = glm::translate(view, -c);
}
float is sometimes not enough.
I use double-precision matrices on the CPU side to avoid such problems. But as you are on Android, that might not be possible. For the GPU, use floats again, as there are no 64-bit interpolators yet.
Big numbers are usually the problem
If your world is big, then you are feeding big numbers into the equations, which multiplies any errors, and only at the final stage is everything translated relative to the camera position. The errors stay multiplied while the numbers get clamped, so the error-to-data ratio becomes large.
To reduce this problem, convert all vertices to a coordinate system with its origin at or near your camera before rendering. You can ignore rotations and just offset the positions (a sketch follows at the end of this answer).
This way the larger errors occur only far away from the camera, where perspective makes them invisible anyway... For more info see:
ray and ellipsoid intersection accuracy improvement
Use a cumulative transform matrix instead of Euler angles
For more info see Understanding 4x4 homogenous transform matrices and all the links at the bottom of that answer.
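A minimal sketch of the camera-relative idea from the first point above (plain Java; objectPosition and cameraPosition are hypothetical world-space float[3] arrays):
// Express the object's position relative to the camera so only small numbers enter the matrix math
float relX = objectPosition[0] - cameraPosition[0];
float relY = objectPosition[1] - cameraPosition[1];
float relZ = objectPosition[2] - cameraPosition[2];
// Build the model matrix from the relative position (android.opengl.Matrix shown here;
// the same idea applies with glm::translate on the CPU side)
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, relX, relY, relZ);
// The view matrix then holds only the camera rotation (its translation is zero),
// because the camera itself now sits at the origin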
This sounds like a numerical effect to me. Even small offsets coming from your game object will influence the rotation of the following camera; these small movements/rotations then look like a vibrating object/camera.
So what you can do is:
Check whether the movement is above a threshold value before calculating a new rotation for your camera
When you are above this threshold: do a linear interpolation between the old and the new rotation using the lerp algorithm for the quaternion (see this Unity answer to get a better idea of what your code can look like: Unity lerp discussion); a sketch follows below
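A minimal sketch of that idea in plain Java, working on float[4] quaternions in [x, y, z, w] order (a normalized lerp stands in for a full slerp here, which is adequate for the small angle differences being smoothed; the threshold is a dot-product value close to 1, e.g. 0.9999f):
static float[] smoothRotation(float[] oldQ, float[] newQ, float t, float threshold) {
    // The dot product measures how close the two orientations are (1 = identical)
    float dot = oldQ[0]*newQ[0] + oldQ[1]*newQ[1] + oldQ[2]*newQ[2] + oldQ[3]*newQ[3];
    // Movement below the threshold: keep the old rotation to suppress jitter
    if (Math.abs(dot) > threshold) {
        return oldQ;
    }
    // q and -q represent the same rotation; flip one so we interpolate the short way around
    float sign = dot < 0 ? -1f : 1f;
    float[] out = new float[4];
    float len = 0f;
    for (int i = 0; i < 4; i++) {
        out[i] = (1f - t) * oldQ[i] + t * sign * newQ[i];
        len += out[i] * out[i];
    }
    // Renormalize so the result is a unit quaternion again
    len = (float) Math.sqrt(len);
    for (int i = 0; i < 4; i++) {
        out[i] /= len;
    }
    return out;
}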
I'm working on an AR application on Android with the Epson Moverio BT-200.
I have a quaternion whose values change with my sensor fusion algorithm.
In my application I'm trying to move a 2D item by changing its margin-left and margin-top values when I move my head.
I'd like to know how I can extract, from the quaternion values, only the "horizontal" and "vertical" movements.
I could extract the pitch and roll values from the quaternion, but I read that there are several problems with Euler angles. Could I do this working only with quaternions?
This is my current code. I solved the problem by using quaternions for the algorithm, and at the end I extract the Euler angles from the rotation matrix.
This is the algorithm that takes the values from the sensors:
private static final float NS2S = 1.0f / 1000000000.0f;
private final Quaternion deltaQuaternion = new Quaternion();
private Quaternion quaternionGyroscope = new Quaternion();
private Quaternion quaternionRotationVector = new Quaternion();
private long timestamp;
private static final double EPSILON = 0.1f;
private double gyroscopeRotationVelocity = 0;
private boolean positionInitialised = false;
private int panicCounter;
private static final float DIRECT_INTERPOLATION_WEIGHT = 0.005f;
private static final float OUTLIER_THRESHOLD = 0.85f;
private static final float OUTLIER_PANIC_THRESHOLD = 0.65f;
private static final int PANIC_THRESHOLD = 60;
@Override
public void onSensorChanged(SensorEvent event) {
if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
// Process rotation vector (just save it)
float[] q = new float[4];
// Calculate angle. Starting with API_18, Android will provide this value as event.values[3], but if not, we have to calculate it manually.
SensorManager.getQuaternionFromVector(q, event.values);
// Store in quaternion
quaternionRotationVector.setXYZW(q[1], q[2], q[3], -q[0]);
if (!positionInitialised) {
// Override
quaternionGyroscope.set(quaternionRotationVector);
positionInitialised = true;
}
} else if (event.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
// Process Gyroscope and perform fusion
// This timestep's delta rotation to be multiplied by the current rotation
// after computing it from the gyro sample data.
if (timestamp != 0) {
final float dT = (event.timestamp - timestamp) * NS2S;
// Axis of the rotation sample, not normalized yet.
float axisX = event.values[0];
float axisY = event.values[1];
float axisZ = event.values[2];
// Calculate the angular speed of the sample
gyroscopeRotationVelocity = Math.sqrt(axisX * axisX + axisY * axisY + axisZ * axisZ);
// Normalize the rotation vector if it's big enough to get the axis
if (gyroscopeRotationVelocity > EPSILON) {
axisX /= gyroscopeRotationVelocity;
axisY /= gyroscopeRotationVelocity;
axisZ /= gyroscopeRotationVelocity;
}
// Integrate around this axis with the angular speed by the timestep
// in order to get a delta rotation from this sample over the timestep
// We will convert this axis-angle representation of the delta rotation
// into a quaternion before turning it into the rotation matrix.
double thetaOverTwo = gyroscopeRotationVelocity * dT / 2.0f;
double sinThetaOverTwo = Math.sin(thetaOverTwo);
double cosThetaOverTwo = Math.cos(thetaOverTwo);
deltaQuaternion.setX((float) (sinThetaOverTwo * axisX));
deltaQuaternion.setY((float) (sinThetaOverTwo * axisY));
deltaQuaternion.setZ((float) (sinThetaOverTwo * axisZ));
deltaQuaternion.setW(-(float) cosThetaOverTwo);
// Move current gyro orientation
deltaQuaternion.multiplyByQuat(quaternionGyroscope, quaternionGyroscope);
// Calculate dot-product to calculate whether the two orientation sensors have diverged
// (if the dot-product is closer to 0 than to 1), because it should be close to 1 if both are the same.
float dotProd = quaternionGyroscope.dotProduct(quaternionRotationVector);
// If they have diverged, rely on gyroscope only (this happens on some devices when the rotation vector "jumps").
if (Math.abs(dotProd) < OUTLIER_THRESHOLD) {
// Increase panic counter
if (Math.abs(dotProd) < OUTLIER_PANIC_THRESHOLD) {
panicCounter++;
}
// Directly use Gyro
setOrientationQuaternionAndMatrix(quaternionGyroscope);
} else {
// Both are nearly saying the same. Perform normal fusion.
// Interpolate with a fixed weight between the two absolute quaternions obtained from gyro and rotation vector sensors
// The weight should be quite low, so the rotation vector corrects the gyro only slowly, and the output stays responsive.
Quaternion interpolate = new Quaternion();
quaternionGyroscope.slerp(quaternionRotationVector, interpolate, DIRECT_INTERPOLATION_WEIGHT);
// Use the interpolated value between gyro and rotationVector
setOrientationQuaternionAndMatrix(interpolate);
// Override current gyroscope-orientation
quaternionGyroscope.copyVec4(interpolate);
// Reset the panic counter because both sensors are saying the same again
panicCounter = 0;
}
if (panicCounter > PANIC_THRESHOLD) {
Log.d("Rotation Vector",
"Panic counter is bigger than threshold; this indicates a Gyroscope failure. Panic reset is imminent.");
if (gyroscopeRotationVelocity < 3) {
Log.d("Rotation Vector",
"Performing Panic-reset. Resetting orientation to rotation-vector value.");
// Manually set position to whatever rotation vector says.
setOrientationQuaternionAndMatrix(quaternionRotationVector);
// Override current gyroscope-orientation with corrected value
quaternionGyroscope.copyVec4(quaternionRotationVector);
panicCounter = 0;
} else {
Log.d("Rotation Vector",
String.format(
"Panic reset delayed due to ongoing motion (user is still shaking the device). Gyroscope Velocity: %.2f > 3",
gyroscopeRotationVelocity));
}
}
}
timestamp = event.timestamp;
}
}
private void setOrientationQuaternionAndMatrix(Quaternion quaternion) {
Quaternion correctedQuat = quaternion.clone();
// We inverted w in the deltaQuaternion, because currentOrientationQuaternion required it.
// Before converting it back to matrix representation, we need to revert this process
correctedQuat.w(-correctedQuat.w());
synchronized (syncToken) {
// Use gyro only
currentOrientationQuaternion.copyVec4(quaternion);
// Set the rotation matrix as well to have both representations
SensorManager.getRotationMatrixFromVector(currentOrientationRotationMatrix.matrix, correctedQuat.ToArray());
}
}
And this is how I get the Euler angle rotation values:
/**
* @return Returns the current rotation of the device in Euler angles
*/
public EulerAngles getEulerAngles() {
float[] angles = new float[3];
float[] remappedOrientationMatrix = new float[16];
SensorManager.remapCoordinateSystem(currentOrientationRotationMatrix.getMatrix(), SensorManager.AXIS_X,
SensorManager.AXIS_Z, remappedOrientationMatrix);
SensorManager.getOrientation(remappedOrientationMatrix, angles);
return new EulerAngles(angles[0], angles[1], angles[2]);
}
I solved my problem with this solution. Now it won't be difficult to move my 2D object with these sensor values; a rough sketch follows below. Sorry for the length of my answer, but I hope it can be useful for someone :)
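For example, a rough sketch of how those angles could drive the 2D item (the field-of-view values, the EulerAngles accessors, and the screen-size variables below are hypothetical and need tuning for the BT-200 display):
EulerAngles angles = getEulerAngles();
// Assumed horizontal/vertical field of view of the see-through display, in radians (hypothetical values)
float fovX = (float) Math.toRadians(23.0);
float fovY = (float) Math.toRadians(13.0);
// Map azimuth (horizontal head movement) and pitch (vertical head movement) to pixel offsets
int marginLeft = (int) (angles.getAzimuth() / fovX * screenWidthPx);
int marginTop = (int) (-angles.getPitch() / fovY * screenHeightPx);
// Apply marginLeft/marginTop to the item's layout parameters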
The official development documentation suggests the following way of obtaining the quaternion from the 3D rotation rate vector (wx, wy, wz).
// Create a constant to convert nanoseconds to seconds.
private static final float NS2S = 1.0f / 1000000000.0f;
private final float[] deltaRotationVector = new float[4];
private float timestamp;
public void onSensorChanged(SensorEvent event) {
// This timestep's delta rotation to be multiplied by the current rotation
// after computing it from the gyro sample data.
if (timestamp != 0) {
final float dT = (event.timestamp - timestamp) * NS2S;
// Axis of the rotation sample, not normalized yet.
float axisX = event.values[0];
float axisY = event.values[1];
float axisZ = event.values[2];
// Calculate the angular speed of the sample
float omegaMagnitude = (float) Math.sqrt(axisX*axisX + axisY*axisY + axisZ*axisZ);
// Normalize the rotation vector if it's big enough to get the axis
// (that is, EPSILON should represent your maximum allowable margin of error)
if (omegaMagnitude > EPSILON) {
axisX /= omegaMagnitude;
axisY /= omegaMagnitude;
axisZ /= omegaMagnitude;
}
// Integrate around this axis with the angular speed by the timestep
// in order to get a delta rotation from this sample over the timestep
// We will convert this axis-angle representation of the delta rotation
// into a quaternion before turning it into the rotation matrix.
float thetaOverTwo = omegaMagnitude * dT / 2.0f;
float sinThetaOverTwo = (float) Math.sin(thetaOverTwo);
float cosThetaOverTwo = (float) Math.cos(thetaOverTwo);
deltaRotationVector[0] = sinThetaOverTwo * axisX;
deltaRotationVector[1] = sinThetaOverTwo * axisY;
deltaRotationVector[2] = sinThetaOverTwo * axisZ;
deltaRotationVector[3] = cosThetaOverTwo;
}
timestamp = event.timestamp;
float[] deltaRotationMatrix = new float[9];
SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
// User code should concatenate the delta rotation we computed with the current rotation
// in order to get the updated rotation.
// rotationCurrent = rotationCurrent * deltaRotationMatrix;
}
}
My question is:
It is quite different from the acceleration case, where computing the resultant acceleration using the accelerations ALONG the 3 axes makes sense.
I am really confused about why the resultant rotation rate can also be computed from the sub-rotation rates AROUND the 3 axes. It does not make sense to me.
Why would this method - finding the composite rotation rate magnitude - even work?
Since your title does not really match your questions, I'm trying to answer as much as I can.
Gyroscopes don't give an absolute orientation (like the ROTATION_VECTOR does) but only rotational velocities around the axes they are built to measure. This is due to the design and construction of a gyroscope: imagine a spinning rotor that, due to the laws of physics, does not want to change its rotation. Now you can rotate the frame around it and measure these rotations.
Now if you want to obtain something like the 'current rotational state' from the gyroscope, you have to start with an initial rotation, call it q0, and constantly add to it the tiny rotational differences that the gyroscope measures around its axes: q1 = q0 + gyro0, q2 = q1 + gyro1, ...
In other words: the gyroscope gives you the amount it has rotated around its three construction axes since the last sample, so you are not composing absolute values but small deltas.
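This is also why taking the magnitude of the three gyro values works: (wx, wy, wz) are not three separate rotations but the components of a single angular velocity vector, so its length is the speed of one rotation about one axis. Worked out for one timestep dT (this is exactly what the official snippet above computes):
|w| = sqrt(wx^2 + wy^2 + wz^2)                       // angular speed
n = (wx, wy, wz) / |w|                               // rotation axis (unit vector)
theta = |w| * dT                                     // angle rotated during the timestep
delta quaternion = ( n * sin(theta/2), cos(theta/2) )  // the axis-angle delta as a quaternion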
Now this is very general and leaves a couple of questions unanswered:
Where do I get an initial position from? Answer: Have a look at the Rotation Vector Sensor - you can use the Quaternion obtained from there as an initialisation
How to 'sum' q and gyro?
Depending on the current representation of a rotation: If you use a rotation matrix, a simple matrix multiplication should do the job, as suggested in the comments (note that this matrix-multiplication implementation is not efficient!):
/**
* Performs naive n^3 matrix multiplication and returns C = A * B
*
* @param A Matrix in the array form (e.g. 3x3 => 9 values)
* @param B Matrix in the array form (e.g. 3x3 => 9 values)
* @return A * B
*/
public float[] naivMatrixMultiply(float[] B, float[] A) {
int mA, nA, mB, nB;
mA = nA = (int) Math.sqrt(A.length);
mB = nB = (int) Math.sqrt(B.length);
if (nA != mB)
throw new RuntimeException("Illegal matrix dimensions.");
float[] C = new float[mA * nB];
for (int i = 0; i < mA; i++)
for (int j = 0; j < nB; j++)
for (int k = 0; k < nA; k++)
C[i + nA * j] += (A[i + nA * k] * B[k + nB * j]);
return C;
}
To use this method, imagine that mRotationMatrix holds the current state; these two lines do the job:
SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
mRotationMatrix = naivMatrixMultiply(mRotationMatrix, deltaRotationMatrix);
// Apply rotation matrix in OpenGL
gl.glMultMatrixf(mRotationMatrix, 0);
If you choose to use quaternions, imagine again that mQuaternion contains the current state:
// Perform Quaternion multiplication
mQuaternion.multiplyByQuat(deltaRotationVector);
// Apply Quaternion in OpenGL
gl.glRotatef((float) (2.0f * Math.acos(mQuaternion.getW()) * 180.0f / Math.PI),mQuaternion.getX(),mQuaternion.getY(), mQuaternion.getZ());
Quaternion multiplication is described here - equation (23). Make sure you apply the multiplication in the right order, since it is not commutative!
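For reference, a minimal sketch of that multiplication on plain float[4] arrays in [x, y, z, w] order (the Quaternion class used above has its own multiplyByQuat; this is only meant to make the non-commutative formula concrete):
// Returns the Hamilton product a * b; as a rotation it applies b first, then a
static float[] multiplyQuat(float[] a, float[] b) {
    float ax = a[0], ay = a[1], az = a[2], aw = a[3];
    float bx = b[0], by = b[1], bz = b[2], bw = b[3];
    return new float[] {
        aw * bx + ax * bw + ay * bz - az * by, // x
        aw * by - ax * bz + ay * bw + az * bx, // y
        aw * bz + ax * by - ay * bx + az * bw, // z
        aw * bw - ax * bx - ay * by - az * bz  // w
    };
}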
If you simply want to know the rotation of your device (I assume this is what you ultimately want), I strongly recommend the ROTATION_VECTOR sensor. Gyroscopes, on the other hand, are quite precise for measuring rotational velocity and have a very good dynamic response, but they suffer from drift and don't give you an absolute orientation (relative to magnetic north or to gravity).
UPDATE: If you want to see a full example, you can download the source-code for a simple demo-app from https://bitbucket.org/apacha/sensor-fusion-demo.
Makes sense to me. Acceleration sensors typically work by having some measurable quantity change when force is applied to the axis being measured. E.g. if gravity is pulling down on the sensor measuring that axis, it conducts electricity better. So now you can tell how hard gravity, or acceleration in some direction, is pulling. Easy.
Meanwhile, gyros are things that spin (OK, or bounce back and forth in a straight line like a tweaked diving board). The gyro is spinning; now you spin, and the gyro is going to look like it is spinning faster or slower depending on the direction you spun. Or if you try to move it, it will resist and try to keep going the way it is going. So you just get a rotation change out of measuring it. Then you have to figure out the overall rotation from those changes by integrating them over time.
Typically none of these things is a single sensor either. They are often 3 different sensors, all arranged perpendicular to each other and each measuring a different axis. Sometimes all the sensors are on the same chip, but they are still different things on the chip, measured separately.
I have implemented listeners for both the Rotation Vector and the Orientation sensor; though I know the latter is deprecated, I wanted to test both.
I know the Rotation Vector is a fusion sensor and is recommended, but according to it, NORTH (the value[0] returned by getOrientation(rotationMatrix, value) giving bearing 0) doesn't match NORTH from the Orientation sensor.
I've also compared against different apps from the Play Store; the Orientation sensor values seem to be closer to them.
Moreover, many times my azimuth value[0] from ROTATION_VECTOR followed by getOrientation just shoots up and keeps oscillating between -180 and 180.
P.S. "getRotationMatrix(float[] R, float[] I, float[] gravity, float[] geomagnetic)" also gives the same result as the Rotation Vector.
public final void onSensorChanged(SensorEvent event)
{
float rotationMatrix[];
switch(event.sensor.getType())
{
.
.
.
case Sensor.TYPE_ROTATION_VECTOR:
rotationMatrix=new float[16];
mSensorManager.getRotationMatrixFromVector(rotationMatrix,event.values);
determineOrientation(rotationMatrix);
break;
case Sensor.TYPE_ORIENTATION:
sensorZValue.setText(""+event.values[0]); //rotation about geographical z axis
sensorXValue.setText(""+event.values[1]); //rotation about geographical x axis
sensorYValue.setText(""+event.values[2]); //rotation about geographical y axis
}//switch case ends
}
private void determineOrientation(float[] rotationMatrix)
{
float[] orientationValues = new float[3];
SensorManager.getOrientation(rotationMatrix, orientationValues);
double azimuth = Math.toDegrees(orientationValues[0]);
double pitch = Math.toDegrees(orientationValues[1]);
double roll = Math.toDegrees(orientationValues[2]);
sensorZValue.setText(String.valueOf(azimuth)); //rotation about geographical z axis
sensorXValue.setText(String.valueOf(pitch)); //rotation about geographical x axis
sensorYValue.setText(String.valueOf(roll)); //rotation about geographical y axis
}
I want to determine the angle between the phone's Y axis and the vector pointing north, so that was my initial implementation.
Please suggest.
I think this will help...
Android Compass that can Compensate for Tilt and Pitch
This calculates North using more reliable sources.
Hope this helps.
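For reference, the core of such a tilt-compensated compass looks roughly like this (a sketch, not the linked article's exact code; mGravity and mGeomagnetic are assumed to hold low-pass-filtered values from the accelerometer and magnetometer):
float[] r = new float[9];
float[] i = new float[9];
// Combines gravity and the magnetic field into a rotation matrix, which already accounts for tilt and pitch
if (SensorManager.getRotationMatrix(r, i, mGravity, mGeomagnetic)) {
    float[] orientation = new float[3];
    SensorManager.getOrientation(r, orientation);
    // orientation[0] is the azimuth in radians relative to magnetic north
    float azimuthDeg = (float) Math.toDegrees(orientation[0]);
}
Low-pass filtering the raw accelerometer and magnetometer values before calling getRotationMatrix is what keeps the bearing from jumping around.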
An image always says more than a ton of text, so here's what I'm trying to do:
In the center of the circle is the user's phone position (origin). The app displays a custom camera view and also shows an OpenGL scene (depending on where you are looking). The OpenGL scene is composed only of a simple cube, and when the user is looking in the right direction the cube is rendered.
I'm pretty new to OpenGL, and I managed to display the cube in front of the camera, but it's static: I can't move around the 360° view.
I get the orientation of the device from the sensors:
int type = event.sensor.getType();
float[] data;
if (type == Sensor.TYPE_ACCELEROMETER) {
data = mGData;
} else if (type == Sensor.TYPE_MAGNETIC_FIELD) {
data = mMData;
} else {
// we should not be here.
return;
}
for (int i=0 ; i<3 ; i++)
data[i] = event.values[i];
SensorManager.getRotationMatrix(mR, mI, mGData, mMData);
SensorManager.getOrientation(mR, mOrientation);
As far as I understand, the 3 simultaneous orthogonal rotation angles are stored in mOrientation. But what then? I wanted to do something like GLU.lookAt(0, 0, 0, X, Y, Z, ?, ?, ?) in the onDrawFrame method, but it didn't work. I want to do something like what that guy said he couldn't do (see the last paragraph here: https://stackoverflow.com/a/9114246/1304830).
Here's the code I used in onDrawFrame:
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL10.GL_MODELVIEW);
// Look in a direction (with the sensors)
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0.0f, 0.0f, 0.0f, ?, ?, ?, ?, ?, ?); // Where I need help
gl.glPushMatrix();
// Place the cube in the scene
gl.glTranslatef(0, 0, -5);
mCube.draw(gl);
Thanks for your help
This will do what that person was trying to do, but the problem with this approach is that the up vector always points up along the y-axis. So if you roll the phone, the camera isn't going to roll with it.
// Get the yaw, pitch and roll from the sensor.
// SensorManager.getOrientation returns radians, which is what Math.sin/Math.cos expect,
// so no conversion to degrees is needed here.
float yaw = orientation[0];
float pitch = orientation[1];
float roll = orientation[2];
// Convert yaw and pitch to a look-at direction vector
float x = (float) (Math.cos(yaw) * Math.cos(pitch));
float y = (float) (Math.sin(yaw) * Math.cos(pitch));
float z = (float) Math.sin(pitch);
GLU.gluLookAt(gl, 0.0f, 0.0f, 0.0f, x, y, z, 0.0f, 1.0f, 0.0f);
Using the three glRotates is a better option IMO, unless you want to lock the roll for some reason; a rough sketch follows below.
Note: I'm not sure which direction Android calls up in relation to the phone's screen, so I may have got yaw, pitch and roll mixed up.
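For completeness, a rough sketch of the 'three glRotates' option mentioned above (assuming orientation holds the radian values from SensorManager.getOrientation; the axis order and signs are guesses that may need adjusting, as the note about yaw, pitch and roll says):
gl.glLoadIdentity();
// Apply roll, pitch and yaw as separate rotations so the camera also rolls with the phone;
// glRotatef expects degrees, while getOrientation returns radians
gl.glRotatef((float) Math.toDegrees(orientation[2]), 0.0f, 0.0f, 1.0f); // roll
gl.glRotatef((float) Math.toDegrees(orientation[1]), 1.0f, 0.0f, 0.0f); // pitch
gl.glRotatef((float) Math.toDegrees(orientation[0]), 0.0f, 1.0f, 0.0f); // yaw (azimuth)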