How does Matrix.setLookAtM work? I've been searching all over and can't find an explanation. I understand that the first three coordinates define the location of the camera in world space, and I take it that "center of view" means the x, y, z coordinate I'm looking at in world space. That being the case, what does the "up vector" mean/do?
If there is a previous question or tutorial that I've overlooked, I would be happy to accept that.
The up vector is what the camera considers "up": if you were looking forward and held your hand straight up, that direction is your "up" vector. Just set it to (0, 1, 0). I'm not an Android developer, but I'm guessing it's similar to gluLookAt().
What the function is really doing is setting up a view matrix for you. It needs the eye position to establish where the camera will be. After that, it will subtract eye position from center and normalize it to get a forward vector. Then it will cross the forward vector with the up vector to get a right vector. After normalizing all three, it can construct a matrix from those x, y, z vectors giving you a basic model view matrix.
It just does the math for you.
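If it helps to see the internals, here is a minimal sketch of roughly what a look-at function computes, as plain Java. The helper names (buildLookAt, cross, normalize) and the column-major float[16] layout are my own assumptions for illustration; setLookAtM does all of this for you.

// Rough sketch of a look-at construction (not Android's actual source).
static void buildLookAt(float[] m, float ex, float ey, float ez,   // eye position
                        float cx, float cy, float cz,              // center of view
                        float ux, float uy, float uz) {            // up hint
    float[] f = normalize(new float[] { cx - ex, cy - ey, cz - ez }); // forward = center - eye
    float[] s = normalize(cross(f, new float[] { ux, uy, uz }));      // right = forward x up
    float[] u = cross(s, f);                                          // recomputed "true" up
    // Rotation rows are right, up, -forward; the last column undoes the eye position.
    m[0] = s[0];  m[4] = s[1];  m[8]  = s[2];  m[12] = -(s[0]*ex + s[1]*ey + s[2]*ez);
    m[1] = u[0];  m[5] = u[1];  m[9]  = u[2];  m[13] = -(u[0]*ex + u[1]*ey + u[2]*ez);
    m[2] = -f[0]; m[6] = -f[1]; m[10] = -f[2]; m[14] =  (f[0]*ex + f[1]*ey + f[2]*ez);
    m[3] = 0; m[7] = 0; m[11] = 0; m[15] = 1;
}

static float[] cross(float[] a, float[] b) {
    return new float[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}

static float[] normalize(float[] v) {
    float len = (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return new float[] { v[0]/len, v[1]/len, v[2]/len };
}

Changing the up vector simply rolls the camera about its viewing direction; (0, 1, 0) keeps the view level for a typical Y-up world.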
I have two 4x4 rotation matrices, M and N. M describes my object's current attitude in space, and N is the desired attitude. I would now like to rotate M towards N, so the object slowly rotates towards the desired orientation over the following iterations. Any idea how to approach this?
If these matrices are well-formed, which should be the case for true "rotation matrices", you can do this by interpolating their basis vectors in a polar representation.
That is, convert the top-left 3x3 part of each matrix into three vectors defined by angles and a length. Then do a linear interpolation on the angles and lengths for that top-left 3x3 part, while the rest of the matrix gets a direct Cartesian interpolation. From the interpolated angles and lengths you can then convert back to Cartesian coordinates.
Naturally there is still some work to do internally, like choosing which way to rotate (use the closest direction) and checking for edge cases where one basis vector would rotate in a different direction than the others...
I managed to do this successfully in a 2D system, which is a bit easier, but it should be no different in 3D.
Note that a plain Cartesian interpolation works reasonably well as long as the angles are relatively small (under roughly 10 degrees, at a guess), which is most likely not your case at all.
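For what it's worth, here is a rough 3D sketch of that idea in plain Java (not my original 2D code): spherically interpolate the three basis columns, linearly interpolate the translation, then re-orthonormalize so the result stays a proper rotation. All the names are made up, and the matrices are assumed to be column-major float[16] as android.opengl.Matrix uses.

// Blend two column-major 4x4 matrices m and n by fraction t (0..1).
static float[] blendRotation(float[] m, float[] n, float t) {
    float[] out = new float[16];
    for (int c = 0; c < 3; c++) {                        // three rotation basis columns
        float[] v = slerp(col(m, c), col(n, c), t);
        out[c * 4] = v[0]; out[c * 4 + 1] = v[1]; out[c * 4 + 2] = v[2];
    }
    for (int i = 0; i < 3; i++)                          // translation: plain linear blend
        out[12 + i] = m[12 + i] + t * (n[12 + i] - m[12 + i]);
    out[15] = 1f;
    orthonormalize(out);                                 // repair drift so it stays a rotation
    return out;
}

static float[] col(float[] m, int c) {
    return new float[] { m[c * 4], m[c * 4 + 1], m[c * 4 + 2] };
}

// Interpolate the angle between two unit vectors (the "polar" part).
// Antiparallel columns would need special handling.
static float[] slerp(float[] a, float[] b, float t) {
    float d = Math.max(-1f, Math.min(1f, a[0]*b[0] + a[1]*b[1] + a[2]*b[2]));
    float theta = (float) Math.acos(d);
    if (theta < 1e-5f) return a;                         // already (nearly) aligned
    float s = (float) Math.sin(theta);
    float wa = (float) Math.sin((1 - t) * theta) / s;
    float wb = (float) Math.sin(t * theta) / s;
    return new float[] { wa*a[0] + wb*b[0], wa*a[1] + wb*b[1], wa*a[2] + wb*b[2] };
}

static void orthonormalize(float[] m) {
    float[] x = normalize(col(m, 0));
    float[] z = normalize(cross(x, col(m, 1)));          // z = x cross y
    float[] y = cross(z, x);
    m[0] = x[0]; m[1] = x[1]; m[2]  = x[2];
    m[4] = y[0]; m[5] = y[1]; m[6]  = y[2];
    m[8] = z[0]; m[9] = z[1]; m[10] = z[2];
}

static float[] cross(float[] a, float[] b) {
    return new float[] { a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0] };
}

static float[] normalize(float[] v) {
    float len = (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return new float[] { v[0]/len, v[1]/len, v[2]/len };
}

Calling blendRotation(current, desired, t) with a small t each frame nudges the object towards N.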
Sorry for my bad English. I have the following problem:
Let's say the camera of my mobile device is showing this picture.
In the picture you can see 4 different positions. Every position is known to me (longitude, latitude).
Now I want to know where in the picture a specific position is. For example, I want to mark a rectangle 20 meters in front of me and 5 meters to the left of me. I only know the latitude/longitude of this point, but I don't know where I have to place it inside the picture (x, y). For example, POS3 is at (0, 400) in my view, POS4 is at (600, 400), and so on.
Where do I have to put the new point, which is 20 meters in front of me and 5 meters to the left? (So my input is (LatXY, LonXY) and my result should be (x, y) on the screen.)
I also have the height of the camera and its rotation angles about the x, y and z axes.
Can I use simple mathematical operations to solve this problem?
Thank you very much!
The answer you want will depend on the accuracy of the result you need. As danaid pointed out, nonlinearity in the image sensor and other factors, such as atmospheric distortion, may induce errors, but these would be difficult problems to solve across different cameras on different devices. So let's start by getting a reasonable approximation which can be tweaked as more accuracy is needed.
First, you may be able to ignore the directional information from the device, if you choose. If you have the five locations (POS1 through POS4 and the camera) in a consistent set of coordinates, you have all you need. In fact, you don't even need all those points.
A note on consistent coordinates: at this scale, once you convert the lat and long to meters, using cos(lat) as the scaling factor for longitude, you should be able to treat everything from a "flat earth" perspective. You then just need to remember that the camera's x-y plane is roughly the global x-z plane.
Conceptual Background
The diagram below lays out the projection of the points onto the image plane. The dz used for perspective can be derived directly using the proportion of the distance in view between far points and near points, vs. their physical distance. In the simple case where the line POS1 to POS2 is parallel to the line POS3 to POS4, the perspective factor is just the ratio of the scaling of the two lines:
Scale (POS1, POS2) = pixel distance (pos1, pos2) / Physical distance (POS1, POS2)
Scale (POS3, POS4) = pixel distance (pos3, pos4) / Physical distance (POS3, POS4)
Perspective factor = Scale (POS3, POS4) / Scale (POS1, POS2)
So the perspective factor to apply to a vertex of your rect would be the proportion of the distance to the vertex between the lines. Simplifying:
Factor(rect) ~= [(Rect.z - (POS3, POS4).z) / ((POS1, POS2).z - (POS3, POS4).z)] * Perspective factor.
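To make those ratios concrete, a tiny worked example with made-up numbers (purely illustrative):

// Suppose the (POS1, POS2) line spans 200 px for 10 m, the (POS3, POS4) line spans
// 600 px for 10 m, and the two lines sit at z = 20 m and z = 5 m from the camera.
double scale12 = 200.0 / 10.0;                            // 20 px per meter
double scale34 = 600.0 / 10.0;                            // 60 px per meter
double perspectiveFactor = scale34 / scale12;             // 3.0
// A rect vertex at z = 12 m:
double factorRect = ((12.0 - 5.0) / (20.0 - 5.0)) * perspectiveFactor;  // about 1.4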
Answer
A perspective transformation is linear with respect to the distance from the focal point in the direction of view. The diagram below is drawn with the X axis parallel to the image plane, and the Y axis pointing in the direction of view. In this coordinate system, for any point P and an image plane any distance from the origin, the projected point p has an X coordinate p.x which is proportional to P.x/P.y. These values can be linearly interpolated.
In the diagram, tp is the desired projection of the target point. To get tp.x, interpolate between, for example, pos1.x and pos3.x, adjusting for the distance as follows:
tp.x = pos1.x + (pos3.x - pos1.x) * ((TP.x/TP.y) - (POS1.x/POS1.y)) / ((POS3.x/POS3.y) - (POS1.x/POS1.y))
The advantage of this approach is that it does not require any prior knowledge of the angle viewed by each pixel, and it will be relatively robust against reasonable errors in the location and orientation of the camera.
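As a sketch in code (the names and conventions here are mine: POS1, POS3 and TP are camera-space positions with y along the viewing direction, and pos1x/pos3x are the known pixel x coordinates):

// Interpolate the projected x of the target point from two reference points.
static float projectX(float pos1x, float pos3x,   // pixel x of POS1 and POS3
                      float[] POS1, float[] POS3, // camera-space {x, y}
                      float[] TP) {               // camera-space target {x, y}
    float a1 = POS1[0] / POS1[1];                 // x/y is what projects linearly
    float a3 = POS3[0] / POS3[1];
    float at = TP[0] / TP[1];
    return pos1x + (pos3x - pos1x) * (at - a1) / (a3 - a1);
}

The same interpolation with the pixel y coordinates gives tp.y.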
Further refinement
Using more data means being able to compensate for more errors. With multiple points in view, the camera location and orientation can be calibrated using the Tienstra method. A concise proof of this approach (using barycentric coordinates) can be found here.
Since the transformations required are all linear in homogeneous coordinates, you could apply barycentric coordinates to interpolate based on any three or more points, given their X,Y,Z,W coordinates in homogeneous 3-space and their (x,y) coordinates in image space. The closer the points are to the destination point, the less significant the nonlinearities are likely to be, so in your example you would use POS1 and POS3, since the rect is on the left, and POS2 or POS4 depending on the relative distance.
(Barycentric coordinates are likely most familiar as the method used to interpolate colors on a triangle (fragment) in 3D graphics.)
Edit: Barycentric coordinates still require the W homogeneous coordinate factor, which is another way of expressing the perspective correction for the distance from the focal point. See this article on GameDev for more details.
Two related SO questions: perspective correction of texture coordinates in 3d and Barycentric coordinates texture mapping.
I see a couple of problems.
The only real mistake is that you're scaling your projection up by _canvasWidth/2 etc. instead of translating that far from the principal point. Add those values to the projected result; multiplying is like "zooming" that far into the projection.
Second, dealing in a global Cartesian coordinate space is a bad idea. With the formulae you're using, the difference between (60.1234, 20.122) and (60.1235, 20.122) (i.e. a small latitude difference) causes changes of similar magnitude in all 3 axes, which doesn't feel right.
It's more straightforward to take the same approach as computer graphics: set your camera as the origin of your "camera space", and convert between world objects and camera space by getting the haversine distance (or similar) between your camera location and the location of the object. See here: http://www.movable-type.co.uk/scripts/latlong.html
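For example, a small sketch of one way to do that conversion (an equirectangular approximation, fine over tens of meters; use the haversine formulas from the link for longer distances; the names are mine):

// Offsets of an object from the camera, in meters east and north.
static final double EARTH_RADIUS_M = 6371000.0;

static double[] toLocalMeters(double camLat, double camLon, double objLat, double objLon) {
    double dLat = Math.toRadians(objLat - camLat);
    double dLon = Math.toRadians(objLon - camLon);
    double east  = dLon * Math.cos(Math.toRadians(camLat)) * EARTH_RADIUS_M;
    double north = dLat * EARTH_RADIUS_M;
    return new double[] { east, north };   // rotate by the compass bearing to get camera space
}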
Third, your perspective projection calculations are for an ideal pinhole camera, which you probably do not have. It will only be a small correction, but to be accurate you need to figure out how to additionally apply the projection that corresponds to the intrinsic camera parameters of your camera. There are two ways to accomplish this: you can do it as a post-multiplication to the scheme you already have, or you can change from multiplying by a 3x3 matrix to using a full 4x4 camera matrix (http://en.wikipedia.org/wiki/Camera_matrix) with the parameters in there.
Using this approach, the perspective projection is symmetric about the origin; if you don't check for z depth you'll project points behind you onto your screen as if they were the same z distance in front of you.
Lastly, I'm not sure about the Android APIs, but make sure you're getting a true north bearing and not a magnetic north bearing. Some platforms return either depending on an argument or configuration. (And check that your degrees are converted to radians if that's what the APIs want, etc. Silly things, but I've lost hours debugging less :) ).
If you know the points in the camera frame and the real world coordinates, some simple linear algebra will suffice. A package like OpenCV will have this type of functionality, or alternatively you can create the projection matrices yourself:
http://en.wikipedia.org/wiki/3D_projection
Once you have a set of corresponding points, it is as simple as filling in a few vectors and solving the resulting system of equations; this gives you a projection matrix. With the projection matrix in hand (and assuming the 4 points are planar), you can multiply any 3D coordinate to find the corresponding 2D image-plane coordinate.
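As an illustration of that last step, here is a minimal sketch that applies an already-computed 3x4 projection matrix P (row-major, my own convention) to a 3D point:

// Project a 3D world point to 2D image coordinates with a 3x4 projection matrix.
static double[] projectPoint(double[][] P, double X, double Y, double Z) {
    double[] world = { X, Y, Z, 1.0 };                  // homogeneous world point
    double[] p = new double[3];
    for (int r = 0; r < 3; r++)
        for (int c = 0; c < 4; c++)
            p[r] += P[r][c] * world[c];
    return new double[] { p[0] / p[2], p[1] / p[2] };   // perspective divide by w
}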
In OpenGL ES 1, I have a Rubik's cube that consists of 27 smaller cubes. I want a rotation that brings a particular small cube exactly in front of the viewpoint. So I need two vectors: one is the vector from the origin of the object to that particular cube, and the other is the vector from the origin to the viewpoint. The cross product of the two then gives me the axis of rotation, and the dot product gives me the angle.
But I can't convert (0, 0, 1), which is the vector from the origin to the viewpoint in world coordinates, to object coordinates.
How can I do that? How can I convert world coordinates to object coordinates?
It's easier to rotate the camera around than it is to rotate the object in front of a stationary camera.
You can do what you asked for by placing the camera at the origin (center) of the Rubik's cube, giving it the direction opposite to the small cube, and then translating backwards along z.
I know it doesn't answer the question in the title, but I think it's a simpler solution. (As for your question, I keep world and object coordinates the same, and set the object scale as needed when rendering.)
I created a sphere using OpenGL ES 2.0 in Android. In a perspective projection environment, I animate the sphere from [-1.5, -2, -2] to [-1.5, 2, -2]. The problem is that the sphere looks like an ellipse when it reaches the frustum boundary. It only looks like a circle when it is at [0, 0, -2]; the further it moves away from [0, 0], the more it looks like an ellipse.
Is this the standard behavior? I thought a sphere should look like a circle from all angles of view. Could you please help?
You should lessen your field of view; what you show is normal and is a side effect of the slightly artificial nature of a 3d projection — a 3d projection assumes the viewer is sitting a fixed distance from the screen and that their eyes are positioned along z directly from the centre of the screen looking exactly forwards. Check out the related problems described here for a description of the same effect with a real camera.
Quite often the implicit default field of view is ninety degrees. But when you hold a phone in your hand it occupies much less than ninety degrees of your vision.
If you're using glFrustum then try specifying smaller values for left, right, top and bottom. As a quick fix, just throw a glScalef by, say, 2.0 onto your projection stack (or your ES 2 equivalent) after computing your projection matrix.
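For instance, here is a sketch of choosing the frustum bounds from an explicit vertical field of view (the same math gluPerspective uses), written against android.opengl.Matrix; the variable names and the 45-degree value are just examples:

float fovYDegrees = 45f;                         // try something well under 90
int width = 1080, height = 1920;                 // your surface size
float aspect = width / (float) height;
float near = 1f, far = 100f;
float top = near * (float) Math.tan(Math.toRadians(fovYDegrees / 2.0));
float bottom = -top;
float right = top * aspect;
float left = -right;
float[] projectionMatrix = new float[16];
Matrix.frustumM(projectionMatrix, 0, left, right, bottom, top, near, far);  // ES 2 projection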
I am having trouble rotating my 3D objects in OpenGL. I start each draw frame by loading the identity (glLoadIdentity()) and then I push and pop on the stack according to what I need (for the camera, etc.). I then want 3D objects to be able to roll, pitch and yaw and have them displayed correctly.
Here is the catch... I want to be able to do incremental rotations as if I were flying an airplane. So every time the up button is pushed, the object rotates around its own x axis. But then if the object is pitched down and chooses to yaw, the rotation should be around the object's up vector and not the Y axis.
I've tried doing the following:
glRotatef(pitchTotal, 1,0,0);
glRotatef(yawTotal, 0,1,0);
glRotatef(rollTotal, 0,0,1);
and those don't seem to work. (Keeping in mind that the vectors are being computed correctly.) I've also tried...
glRotatef(pitchTotal, 1,0,0);
glRotatef(yawTotal, 0,1,0);
glRotatef(rollTotal, 0,0,1);
and I still get weird rotations.
Long story short... What is the proper way to rotate a 3D object in OpenGL using the object's look, right and up vectors?
You need to do the yaw rotation (around Y) before you do the pitch one. Otherwise, the pitch will be off.
E.g. say you have a 45-degree downward pitch and a 180-degree yaw. By doing the pitch first and then rotating the yaw around the airplane's Y vector, the airplane would end up pointing up and backwards despite the pitch being downwards. By doing the yaw first, the plane points backwards, and then the pitch around the plane's X vector makes it point downwards correctly.
The same logic applies for roll, which needs to be applied last.
So your code should be :
glRotatef(yawTotal, 0,1,0);
glRotatef(pitchTotal, 1,0,0);
glRotatef(rollTotal, 0,0,1);
Cumulative rotations will suffer from gimbal lock. Look at it this way: suppose you are in an aeroplane, flying level. You apply a yaw of 90 degrees anticlockwise. You then apply a roll of 90 degrees clockwise. You then apply a yaw of 90 degrees clockwise.
Your plane is now pointing straight downward — the total effect is a pitch of 90 degrees clockwise. But if you just tried to add up the different rotations then you'd end up with a roll of 90 degrees, and no pitch whatsoever because you at no point applied pitch to the plane.
Trying to store and update rotation as three separate angles doesn't work.
Commonly cited solutions are to use a quaternion or to store the object orientation directly as a matrix. The matrix solution is easier to build because you can prototype it with OpenGL's built-in matrix stacks. Most people also seem to find matrices easier to understand than quaternions.
So, assuming you want to go matrix, your prototype might do something like (please forgive my lack of decent Java knowledge; I'm going to write C essentially):
GLfloat myOrientation[16];
// to draw the object:
glMultMatrixf(myOrientation);
/* drawing here */
// to apply roll, assuming the modelview stack is active:
glPushMatrix(); // backup what's already on the stack
glLoadIdentity(); // start with the identity
glRotatef(angle, 0, 0, 1);
glMultMatrixf(myOrientation); // premultiply the current orientation by the roll
// update our record of orientation
glGetFloatv(GL_MODELVIEW_MATRIX, myOrientation);
glPopMatrix();
You possibly don't want to use the OpenGL stack in shipping code because it's not really built for this sort of use and so performance may be iffy. But you can prototype and profile rather than making an assumption. You also need to consider floating point precision problems — really you should be applying a step that ensures myOrientation is still orthonormal after it has been adjusted.
It's probably easiest to check Google for that, but briefly speaking you'll use the dot product to remove erroneous crosstalk from two of the axes onto the third, then remove the crosstalk from the first of those two axes onto the second, then renormalise all three.
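Something like this, roughly: a plain Gram-Schmidt pass over the three basis columns of a column-major float[16]; the helper names are mine:

// Re-orthonormalize the rotation part of a column-major 4x4 orientation matrix.
static void reorthonormalize(float[] m) {
    float[] x = { m[0], m[1], m[2] };
    float[] y = { m[4], m[5], m[6] };
    float[] z = { m[8], m[9], m[10] };
    normalizeInPlace(x);
    subtractProjection(y, x);            // remove x crosstalk from y
    normalizeInPlace(y);
    subtractProjection(z, x);            // remove x and y crosstalk from z
    subtractProjection(z, y);
    normalizeInPlace(z);
    m[0] = x[0]; m[1] = x[1]; m[2]  = x[2];
    m[4] = y[0]; m[5] = y[1]; m[6]  = y[2];
    m[8] = z[0]; m[9] = z[1]; m[10] = z[2];
}

static void subtractProjection(float[] v, float[] onto) {
    float d = v[0]*onto[0] + v[1]*onto[1] + v[2]*onto[2];   // dot product = crosstalk amount
    v[0] -= d * onto[0]; v[1] -= d * onto[1]; v[2] -= d * onto[2];
}

static void normalizeInPlace(float[] v) {
    float len = (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    v[0] /= len; v[1] /= len; v[2] /= len;
}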
Thanks for the responses. The first response pointed me in the right direction, and the second helped a little too, but ultimately it boiled down to a combination of both. Initially, your 3D object should have a member variable which is a float array of size 16 (indices 0-15). You then have to initialize it to the identity matrix. The member methods of your 3D object, like yawObject(float amount), then know that you are yawing the object from the object's point of view rather than the world's, which is what allows the incremental rotation. Inside the yawObject method (or the pitch/roll equivalents) you call Matrix.rotateM(myfloatarray, 0, angle, 0, 1, 0). That stores the new rotation matrix (as described in the first response). Then, when you are about to draw your object, you multiply the model matrix by the myfloatarray matrix using gl.glMultMatrixf.
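In case it helps anyone else, a rough sketch of what that looks like (the class and field names are my own; orientation is the float[16] member, using android.opengl.Matrix and a GL10 handle):

float[] orientation = new float[16];

void init() {
    Matrix.setIdentityM(orientation, 0);
}

void yawObject(float degrees) {
    // rotateM post-multiplies, so the rotation happens about the object's own Y axis
    Matrix.rotateM(orientation, 0, degrees, 0f, 1f, 0f);
}

void pitchObject(float degrees) {
    Matrix.rotateM(orientation, 0, degrees, 1f, 0f, 0f);
}

void rollObject(float degrees) {
    Matrix.rotateM(orientation, 0, degrees, 0f, 0f, 1f);
}

void draw(GL10 gl) {
    gl.glPushMatrix();
    gl.glMultMatrixf(orientation, 0);   // apply the stored orientation to the model matrix
    // ... draw the mesh here ...
    gl.glPopMatrix();
}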
Good luck and let me know if you need more information than that.