So this is my second question today; I might be pushing my luck.
In short, I'm making a 3D first-person game where you can move about and look around.
In my onDrawFrame I am using:
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
To move back and forth, sidestep left, etc. I use something like this (forward code listed):
float v[] = { mRenderer.lookX - mRenderer.eyeX, mRenderer.lookY - mRenderer.eyeY, mRenderer.lookZ - mRenderer.eyeZ };
mRenderer.eyeX += v[0] * SPEED_MOVE;
mRenderer.eyeZ += v[2] * SPEED_MOVE;
mRenderer.lookX += v[0] * SPEED_MOVE;
mRenderer.lookZ += v[2] * SPEED_MOVE;
This works.
Now I want to look around, so I tried to port my iPhone OpenGL 1.0 code. This is left/right:
float v[] = { mRenderer.lookX - mRenderer.eyeX, mRenderer.lookY - mRenderer.eyeY, mRenderer.lookZ - mRenderer.eyeZ };
if (x > mPreviousX)
{
    mRenderer.lookX += ((Math.cos(SPEED_TURN / 2) * v[0]) - (Math.sin(SPEED_TURN / 2) * v[2]));
    mRenderer.lookZ += ((Math.sin(SPEED_TURN / 2) * v[0]) + (Math.cos(SPEED_TURN / 2) * v[2]));
}
else
{
    mRenderer.lookX -= (Math.cos(SPEED_TURN / 2) * v[0] - Math.sin(SPEED_TURN / 2) * v[2]);
    mRenderer.lookZ -= (Math.sin(SPEED_TURN / 2) * v[0] + Math.cos(SPEED_TURN / 2) * v[2]);
}
This works for about 35 degrees and then goes mental.
Any ideas?
First of all, I would suggest not tracking the look vector but rather a forward vector; then in the lookAt method use eye + forward to generate the look vector. This way you can lose the update on the look point completely when moving, and you don't need to compute the v vector (mRenderer.eyeX += forward.x * SPEED_MOVE; ...).
To make things simpler, I suggest you normalize the forward and up vectors whenever you change them (and I will assume you did so in the following methods).
Now as for rotation, there are two ways. Either use the right and up vectors to nudge the forward (and up) vectors, which is great for small turns (I'd say up to about 10 degrees, and it is capped at 90 degrees), or compute the current angle, add whatever angle you want, and recreate the vectors.
The first rotation method is quite simple:
vector forward = forward
vector up = up
vector right = cross(forward, up) //this one might be the other way around as up, forward :)
//going left or right:
forward = normalized(forward + right*rotationSpeedX)
//going up or down:
forward = normalized(forward + up*rotationSpeedY)
vector right = cross(forward, up) //this one might be the other way around
vector up = normalized(cross(forward, right)) //this one might be the other way around
//tilt left or right:
up = normalized(up + right*rotationZ)
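If it helps, here is a minimal Java sketch of that first method. The float[3] fields forward and up and the small vector helpers are my own illustration (not part of any Android API), and the cross-product order may need flipping for your handedness, as noted in the pseudocode.
float[] forward = { 0f, 0f, -1f };   // current view direction, kept normalized
float[] up = { 0f, 1f, 0f };         // current up vector, kept normalized

static float[] cross(float[] a, float[] b) {
    return new float[] {
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0] };
}

static float[] normalized(float[] v) {
    float len = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return new float[] { v[0] / len, v[1] / len, v[2] / len };
}

static float[] addScaled(float[] a, float[] b, float s) {
    return new float[] { a[0] + b[0] * s, a[1] + b[1] * s, a[2] + b[2] * s };
}

// Turn left/right: nudge forward along right, then rebuild the basis so it stays orthogonal.
void turnHorizontally(float rotationSpeedX) {
    float[] right = cross(forward, up);
    forward = normalized(addScaled(forward, right, rotationSpeedX));
    right = cross(forward, up);
    up = normalized(cross(forward, right));
}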
The second method needs a bit of trigonometry:
Normally, to compute an angle you could just call atan(forward.z / forward.x) and add some if statements, since the produced result only covers a 180-degree range (you will be able to find answers on the web for getting a rotation angle from a vector). The same goes for the up vector to get the vertical rotation. Once you have the angles, you can simply add a few degrees to them and recreate the vectors with sin and cos. There is a catch, though: if you rotate the camera in such a way that forward faces straight up (0,1,0), you need to get the first rotation from the up vector and the second from the forward vector. You can avoid all that if you cap the maximum vertical angle to something like ±85 degrees (many games actually do that). The second thing is that if you use this approach, your environment must support ±infinity, or atan(forward.z / forward.x) will break when forward.x == 0.
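As a small sketch of that angle-based method for the horizontal rotation (reusing the forward field from the sketch above), Math.atan2 sidesteps the forward.x == 0 problem without needing ±infinity:
// Rotate forward by deltaDegrees around the Y axis; the vertical component is untouched.
void turnByAngle(float deltaDegrees) {
    double yaw = Math.atan2(forward[2], forward[0]);    // current horizontal angle
    double lenXZ = Math.hypot(forward[0], forward[2]);  // horizontal length to preserve
    yaw += Math.toRadians(deltaDegrees);
    forward[0] = (float) (lenXZ * Math.cos(yaw));
    forward[2] = (float) (lenXZ * Math.sin(yaw));
}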
And one addition about the first approach: since I see you are moving around a 2D plane, the forward vector you use with the movement speed should be normalized(forward.x, 0, forward.z). It is important to normalize it, or you will move slower the more the camera tilts up or down.
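A sketch of that flattened movement, reusing the forward field above and the eye/SPEED_MOVE names from your question (illustrative only):
// Flatten forward onto the XZ plane and renormalize before stepping,
// so tilting the camera up or down does not change the walking speed.
void moveForward() {
    float lenXZ = (float) Math.hypot(forward[0], forward[2]);
    mRenderer.eyeX += (forward[0] / lenXZ) * SPEED_MOVE;
    mRenderer.eyeZ += (forward[2] / lenXZ) * SPEED_MOVE;
}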
A second thing: when you rotate left/right, you might want to force the up vector to (0,1,0), normalize the right vector, and finally recreate the up vector from forward and right. Again, you should then cap the vertical rotation (up.z should stay larger than some small value like .01).
It turned out my rotation code was wrong:
if (x > mPreviousX)
{
    mRenderer.lookX = (float) (mRenderer.eyeX + ((Math.cos(SPEED_TURN / 2) * v[0]) - (Math.sin(SPEED_TURN / 2) * v[2])));
    mRenderer.lookZ = (float) (mRenderer.eyeZ + ((Math.sin(SPEED_TURN / 2) * v[0]) + (Math.cos(SPEED_TURN / 2) * v[2])));
}
else
{
    mRenderer.lookX = (float) (mRenderer.eyeX + ((Math.cos(-SPEED_TURN / 2) * v[0]) - (Math.sin(-SPEED_TURN / 2) * v[2])));
    mRenderer.lookZ = (float) (mRenderer.eyeZ + ((Math.sin(-SPEED_TURN / 2) * v[0]) + (Math.cos(-SPEED_TURN / 2) * v[2])));
}
TL;DR
How come the accelerometer values I get from Sensor.TYPE_ACCELEROMETER are slightly offset? I don't mean by gravity, but by some small error that varies from axis to axis and phone to phone.
Can I calibrate the accelerometer? Or is there a standard way of compensating for these errors?
I'm developing an app that has a need for as precise acceleration measurements as possible (mainly vertical acceleration, i.e. same direction as gravity).
I've been doing A LOT of testing, and it turns out that the raw values I get from Sensor.TYPE_ACCELEROMETER are off. If I let the phone rest on a perfectly horizontal surface with the screen up, the accelerometer shows a Z-value of 9.0, where it should be about 9.81. Likewise, if I put the phone in portrait or landscape mode, the X- and Y-accelerometer values show about 9.6 instead of 9.81.
This of course affects my vertical acceleration, as I'm using SensorManager.getRotationMatrixFromVector() to calculate it, resulting in a vertical acceleration that is off by a different amount depending on the rotation of the device.
Now, before anyone jumps the gun and mentions that I should try using Sensor.TYPE_LINEAR_ACCELERATION instead, I must point out that I'm actually doing that as well, parallel to the TYPE_ACCELERATION. By using the gravity sensor I then calculate the vertical acceleration (as described in this answer). The funny thing is that I get EXACTLY the same result as the method that uses the raw accelerometer, SensorManager.getRotationMatrixFromVector() and matrix multiplication (and finally subtracting gravity).
The only way I'm able to get almost exactly zero vertical acceleration for a stationary phone in any rotation is to take the raw accelerometer values, add an offset (from earlier observations, i.e. X+0.21, Y+0.21 and Z+0.81) and then perform the rotation matrix stuff to get the world coordinate system accelerations. Note that it's not just the calculated vertical acceleration that is wrong - it's the raw values from Sensor.TYPE_ACCELEROMETER themselves, which I would think rules out other error sources like the gyroscope, etc.
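For context, this is roughly what that last approach looks like (a simplified sketch, not my exact code; accelX/Y/Z and rotationVectorValues are illustrative names for the raw sensor readings):
// Rotate the offset-corrected accelerometer reading into world coordinates
// using the rotation vector, then subtract gravity from the vertical (Z) component.
float[] rotationMatrix = new float[16];
SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVectorValues);

float[] deviceAccel = { accelX + 0.21f, accelY + 0.21f, accelZ + 0.81f, 0f }; // offsets observed above
float[] worldAccel = new float[4];
android.opengl.Matrix.multiplyMV(worldAccel, 0, rotationMatrix, 0, deviceAccel, 0);

float verticalAccel = worldAccel[2] - SensorManager.GRAVITY_EARTH;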
I have tested this on two different phones (Samsung Galaxy S5 and Sony Xperia Z3 compact), and both have these accelerometer value deviances - but of course not the same values on both phones.
How come the values of Sensor.TYPE_ACCELEROMETER are off, and is there a better way of "calibrating" the accelerometer than simply observing how much they deviate from gravity and adding the difference to the values before using them?
You should calibrate the gains, offsets, and angles of the 3 accelerometer axes.
Unfortunately it's not possible to cover the whole topic in depth here.
I'll write a small introduction describing the basic concept, and then I'll post a link to the code of a simple Clinometer that implements the calibration.
The calibration routine can be done with 7 measurements (take the mean value of a good number of samples for each) in different orthogonal positions of your choice, in order to capture all the ±0 and ±g values of your accelerometers. For example:
STEP 1 = Lay flat
STEP 2 = Rotate 180°
STEP 3 = Lay on the left side
STEP 4 = Rotate 180°
STEP 5 = Lay vertical
STEP 6 = Rotate 180° upside-down
STEP 7 = Lay face down
Then you can use the 7 measurements mean[][] to calculate offsets and gains:
calibrationOffset[0] = (mean[0][2] + mean[0][3]) / 2;
calibrationOffset[1] = (mean[1][4] + mean[1][5]) / 2;
calibrationOffset[2] = (mean[2][0] + mean[2][6]) / 2;
calibrationGain[0] = (mean[0][2] - mean[0][3]) / (STANDARD_GRAVITY * 2);
calibrationGain[1] = (mean[1][4] - mean[1][5]) / (STANDARD_GRAVITY * 2);
calibrationGain[2] = (mean[2][0] - mean[2][6]) / (STANDARD_GRAVITY * 2);
using the values of mean[axis][step], where STANDARD_GRAVITY = 9.81.
Then apply the gain and offset corrections to the measurements:
for (int i = 0; i < 7; i++) {
    mean[0][i] = (mean[0][i] - calibrationOffset[0]) / calibrationGain[0];
    mean[1][i] = (mean[1][i] - calibrationOffset[1]) / calibrationGain[1];
    mean[2][i] = (mean[2][i] - calibrationOffset[2]) / calibrationGain[2];
}
Finally, calculate the correction angles:
for (int i = 0; i < 7; i++) {
    angle[0][i] = (float) (Math.toDegrees(Math.asin(mean[0][i]
            / Math.sqrt(mean[0][i] * mean[0][i] + mean[1][i] * mean[1][i] + mean[2][i] * mean[2][i]))));
    angle[1][i] = (float) (Math.toDegrees(Math.asin(mean[1][i]
            / Math.sqrt(mean[0][i] * mean[0][i] + mean[1][i] * mean[1][i] + mean[2][i] * mean[2][i]))));
    angle[2][i] = (float) (Math.toDegrees(Math.asin(mean[2][i]
            / Math.sqrt(mean[0][i] * mean[0][i] + mean[1][i] * mean[1][i] + mean[2][i] * mean[2][i]))));
}
calibrationAngle[2] = (angle[0][0] + angle[0][1])/2; // angle 0 = X axis
calibrationAngle[1] = -(angle[1][0] + angle[1][1])/2; // angle 1 = Y axis
calibrationAngle[0] = -(angle[1][3] - angle[1][2])/2; // angle 2 = Z axis
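To give an idea of how the calibration is used at runtime, here is a minimal sketch that applies the offsets and gains to live readings (field names are illustrative; the angle correction is omitted for brevity, and the real implementation is in the Clinometer sources linked below):
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        // Corrected readings in m/s^2
        float ax = (event.values[0] - calibrationOffset[0]) / calibrationGain[0];
        float ay = (event.values[1] - calibrationOffset[1]) / calibrationGain[1];
        float az = (event.values[2] - calibrationOffset[2]) / calibrationGain[2];
    }
}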
You can find a simple but complete implementation of a 3-axis calibration in this open-source Clinometer app: https://github.com/BasicAirData/Clinometer.
There is also the APK and the Google Play Store link if you want to try it.
You can find the calibration routine in CalibrationActivity.java;
the calibration parameters are applied in ClinometerActivity.java.
Furthermore, you can find a very good technical article that covers 3-axis calibration in depth here: https://www.digikey.it/it/articles/using-an-accelerometer-for-inclination-sensing.
Currently I'm writing an augmented reality app, and I have some problems getting the objects onto my screen. It's very frustrating for me that I'm not able to transform GPS points to the corresponding screen points on my Android device. I've read many articles and many other posts on Stack Overflow (I've already asked similar questions) but I still need your help.
I did the perspective projection that is explained on Wikipedia.
What do I have to do with the result of the perspective projection to get the resulting screen point?
The Wikipedia article also confused me when I read it some time ago. Here is my attempt to explain it differently:
The Situation
Let's simplify the situation. We have:
The point to project, D(x,y,z) - what you call relativePositionX|Y|Z
An image plane of size w * h
A half-angle of view α
... and we want:
The coordinates of B in the image plane (let's call them X and Y)
A schema for the X-screen-coordinates:
E is the position of our "eye" in this configuration, which I chose as origin to simplify.
The focal length f can be estimated knowing that:
tan(α) = (w/2) / f (1)
A bit of Geometry
You can see on the picture that the triangles ECD and EBM are similar, so using the Side-Splitter Theorem, we get:
MB / CD = EM / EC <=> X / x = f / z (2)
With both (1) and (2), we now have:
X = (x / z) * ( (w / 2) / tan(α) )
If we go back to the notation used in the Wikipedia article, our equation is equivalent to:
b_x = (d_x / d_z) * r_z
You can notice we are missing the multiplication by s_x / r_x. This is because in our case, the "display size" and the "recording surface" are the same, so s_x / r_x = 1.
Note: Same reasoning for Y.
Practical Use
Some remarks:
Usually, α = 45deg is used, which means tan(α) = 1. That's why this term doesn't appear in many implementations.
If you want to preserve the ratio of the elements you display, keep f constant for both X and Y, i.e. instead of calculating:
X = (x / z) * ( (w / 2) / tan(α) ) and Y = (y / z) * ( (h / 2) / tan(α) )
... do:
X = (x / z) * ( (min(w,h) / 2) / tan(α) ) and Y = (y / z) * ( (min(w,h) / 2) / tan(α) )
Note: when I said that "the 'display size' and the 'recording surface' are the same", that wasn't quite true, and the min operation is here to compensate for this approximation, adapting the square surface r to the potentially rectangular surface s.
Note 2: instead of using min(w,h) / 2, Appunta uses screenRatio = (getWidth()+getHeight())/2, as you noticed. Both solutions preserve the element ratio. The focal length, and thus the angle of view, will simply be a bit different depending on the screen's own ratio. You can actually use any function you want to define f.
As you may have noticed on the picture above, the screen coordinates are here defined between [-w/2 ; w/2] for X and [-h/2 ; h/2] for Y, but you probably want [0 ; w] and [0 ; h] instead. X += w/2 and Y += h/2 - Problem solved.
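Putting it all together, a small Java sketch of the whole projection (the method name and signature are mine; alpha is the half-angle of view in radians, and the point must be in front of the camera, z > 0):
// Project a camera-space point (x, y, z) to pixel coordinates on a w x h screen,
// using the ratio-preserving focal length derived above.
static float[] projectToScreen(float x, float y, float z, int w, int h, double alpha) {
    float f = (float) ((Math.min(w, h) / 2.0) / Math.tan(alpha)); // focal length
    float screenX = (x / z) * f + w / 2f;
    float screenY = (y / z) * f + h / 2f;
    return new float[] { screenX, screenY };
}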
Conclusion
I hope this answers your questions. I'll stick around if it needs edits.
Bye!
< Self-promotion alert > I actually wrote an article about 3D projection and rendering some time ago. The implementation is in JavaScript, but it should be quite easy to translate.
Something seems funny about the way that Android's frustumM works. If I check the OpenGL red book, the matrix generated looks like this:
[glFrustum projection matrix figure] (source: glprogramming.com)
Songho.ca seems to agree with this:
[perspective projection matrix figure] (source: songho.ca)
However, one component is multiplied by 2 in Android's frustumM, and not in the other example matrices. Everything seems to functionally match up except the first row, third column. Why is that being multiplied by two? Here are the lines of code from android.opengl.Matrix's frustumM method that generate the first three elements of the third column:
final float A = 2.0f * ((right + left) * r_width);
final float B = (top + bottom) * r_height;
final float C = (far + near) * r_depth;
With r_width, r_height, r_depth defined as:
final float r_width = 1.0f / (right - left);
final float r_height = 1.0f / (top - bottom);
final float r_depth = 1.0f / (near - far);
The line starting with "final float A" appears to be mistakenly multiplying by 2.
Is this a mistake in Android's code, or am I just missing something? I know that the term cancels out if the frustum is symmetrical. When running the code with an asymmetrical frustum, the generated matrices actually are different and so are the resulting vectors when the same vector is multiplied with those differing matrices.
It's a bug with Android. Please see http://code.google.com/p/android/issues/detail?id=35646
((I'd prefer just to comment but I'm not allowed.))
Thank you guys for the insight. I just had to add
mMyMatrix[8] /= 2f;
after
Matrix.frustumM(mMyMatrix, ...)
to solve my aspect ratio problems :)
Yes, if you call the function with a (-ratio, ratio, -1, 1, 1, 10) parameter set it does not cause the problem, but if you call it with right != -1 * left, it makes things different.
I found this issue when I checked the source code. Sigh.
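For anyone who still sees the doubled term on their platform version, a small workaround sketch (the helper name is mine; index 8 is the first element of the third column in column-major order, i.e. the A term above):
// Call Android's frustumM, then undo the extra factor of 2 on element 8.
static void frustumFixedM(float[] m, int offset, float left, float right,
                          float bottom, float top, float near, float far) {
    android.opengl.Matrix.frustumM(m, offset, left, right, bottom, top, near, far);
    m[offset + 8] /= 2f;
}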
This is a very basic issue, but I just can't find a complete answer anywhere.
Consider an object moving along the z axis with a given SPEED (e.g. -0.2 OpenGL units).
Now I rotate the object around its local axes with rotationX, Y and Z angles.
Question: what is the next position of my object?
I am using the following equations (which I know are wrong, but I just can't make them right)
positionX += -SPEED * Math.sin(rotationY * Utils.DEG)* Math.cos(rotationX * Utils.DEG);
positionY += SPEED * Math.sin(rotationX * Utils.DEG);
positionZ += -SPEED * Math.cos(rotationX * Utils.DEG)* Math.cos(rotationY * Utils.DEG);
Where is my mistake?
I would store a vector that represents the orientation of the object.
On rotation, rotate the orientation vector.
When moving,
positionX += SPEED * orientation.X
positionY += SPEED * orientation.Y
etc.
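A minimal sketch of that idea using android.opengl.Matrix (field names are illustrative; note this rotates the vector about world axes, so for true local-axis rotation you would accumulate a full orientation matrix or quaternion instead):
float[] orientation = { 0f, 0f, -1f, 0f };   // unit direction vector, w = 0

void rotate(float angleDegrees, float axisX, float axisY, float axisZ) {
    float[] rotation = new float[16];
    float[] rotated = new float[4];
    android.opengl.Matrix.setRotateM(rotation, 0, angleDegrees, axisX, axisY, axisZ);
    android.opengl.Matrix.multiplyMV(rotated, 0, rotation, 0, orientation, 0);
    orientation = rotated;
}

void move(float speed) {
    positionX += speed * orientation[0];
    positionY += speed * orientation[1];
    positionZ += speed * orientation[2];
}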
I'm working on implementing gesture recognition in my app, using the Gestures Builder to create a library of gestures. I'm wondering if having multiple variations of a gesture will help or hinder recognition (or performance). For example, I want to recognize a circular gesture. I'm going to have at least two variations - one for a clockwise circle, and one for counterclockwise, with the same semantic meaning so that the user doesn't have to think about it. However, I'm wondering if it would be desirable to save several gestures for each direction, for example, of various radii, or with different shapes that are "close enough" - like egg shapes, ellipses, etc., including different angular rotations of each. Anybody have experience with this?
OK, after some experimentation and reading of the Android source, I've learned a little... First, it appears that I don't necessarily have to worry about creating different gestures in my gesture library to cover different angular rotations or directions (clockwise/counterclockwise) of my circular gesture. By default, a GestureStore uses a sequence type of SEQUENCE_SENSITIVE (meaning that the starting and ending points matter), and an orientation style of ORIENTATION_SENSITIVE (meaning that the rotational angle matters). However, these defaults can be overridden with setOrientationStyle(ORIENTATION_INVARIANT) and setSequenceType(SEQUENCE_INVARIANT).
Furthermore, to quote from the comments in the source... "when SEQUENCE_SENSITIVE is used, only single stroke gestures are currently allowed" and "ORIENTATION_SENSITIVE and ORIENTATION_INVARIANT are only for SEQUENCE_SENSITIVE gestures".
Interestingly, ORIENTATION_SENSITIVE appears to mean more than just "orientation matters". Its value is 2, and the comments associated with it and some related (undocumented) constants imply that you can request different levels of sensitivity.
// at most 2 directions can be recognized
public static final int ORIENTATION_SENSITIVE = 2;
// at most 4 directions can be recognized
static final int ORIENTATION_SENSITIVE_4 = 4;
// at most 8 directions can be recognized
static final int ORIENTATION_SENSITIVE_8 = 8;
During a call to GestureLibrary.recognize(), the orientation type value (1, 2, 4, or 8) is passed through to GestureUtils.minimumCosineDistance() as the parameter numOrientations, whereupon some calculations are performed that are above my pay grade (see below). If someone can explain this, I'm interested. I get that it is calculating the angular difference between two gestures, but I don't understand the way it's using the numOrientations parameter. My expectation is that if I specify a value of 2, it finds the minimum distance between gesture A and two variations of gesture B -- one variation being "normal B", and the other being B spun around 180 degrees. Thus, I would expect a value of 8 would consider 8 variations of B, spaced 45 degrees apart. However, even though I don't fully understand the math below, it doesn't look to me like a numOrientations value of 4 or 8 is used directly in any calculations, although values greater than 2 do result in a distinct code path. Maybe that's why those other values are undocumented.
/**
 * Calculates the "minimum" cosine distance between two instances.
 *
 * @param vector1
 * @param vector2
 * @param numOrientations the maximum number of orientation allowed
 * @return the distance between the two instances (between 0 and Math.PI)
 */
static float minimumCosineDistance(float[] vector1, float[] vector2, int numOrientations) {
    final int len = vector1.length;
    float a = 0;
    float b = 0;
    for (int i = 0; i < len; i += 2) {
        a += vector1[i] * vector2[i] + vector1[i + 1] * vector2[i + 1];
        b += vector1[i] * vector2[i + 1] - vector1[i + 1] * vector2[i];
    }
    if (a != 0) {
        final float tan = b / a;
        final double angle = Math.atan(tan);
        if (numOrientations > 2 && Math.abs(angle) >= Math.PI / numOrientations) {
            return (float) Math.acos(a);
        } else {
            final double cosine = Math.cos(angle);
            final double sine = cosine * tan;
            return (float) Math.acos(a * cosine + b * sine);
        }
    } else {
        return (float) Math.PI / 2;
    }
}
Based on my reading, I theorized that the simplest and best approach would be to have one stored circular gesture, setting the sequence type and orientation to invariant. That way, anything circular should match pretty well, regardless of direction or orientation. So I tried that, and it did return high scores (in the range of about 25 to 70) for pretty much anything remotely resembling a circle. However, it also returned scores of 20 or so for gestures that were not even close to circular (horizontal lines, V shapes, etc.). So, I didn't feel good about the separation between what should be matches and what should not. What seems to be working best is to have two stored gestures, one in each direction, and using SEQUENCE_SENSITIVE in conjunction with ORIENTATION_INVARIANT. That's giving me scores of 2.5 or higher for anything vaguely circular, but scores below 1 (or no matches at all) for gestures that are not circular.
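For reference, wiring that configuration up looks roughly like this (a sketch; the res/raw/gestures resource and the gesture names are hypothetical, while the 2.5 threshold comes from the scores described above):
// Load the library, force SEQUENCE_SENSITIVE + ORIENTATION_INVARIANT,
// and accept either stored circle direction above a score threshold.
GestureLibrary library = GestureLibraries.fromRawResource(context, R.raw.gestures);
library.setSequenceType(GestureStore.SEQUENCE_SENSITIVE);
library.setOrientationStyle(GestureStore.ORIENTATION_INVARIANT);
library.load();

// Inside OnGesturePerformedListener.onGesturePerformed(overlay, gesture):
ArrayList<Prediction> predictions = library.recognize(gesture);
if (!predictions.isEmpty() && predictions.get(0).score > 2.5) {
    String name = predictions.get(0).name;   // e.g. "circle_cw" or "circle_ccw"
    // both directions map to the same semantic action
}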