I'm currently trying to implement an AR browser based on indoor maps, but I'm facing several problems. Let's take a look at the figure:
In this figure, I've already changed the coordinates to OpenGL's right-handed coordinate system.
In our real-world scenario,
given the angle FOV/2 and the camera height h, I can get the nearest visible point P(0,0,-n).
Given the angle B and the camera height h, I can get a point Q(0,0,-m) between the nearest visible point and the farthest visible point.
Here comes a problem: when I finish setting up my vertices (including P and Q) and use the method Matrix.setLookAtM like
Matrix.setLookAtM(modelMatrix, 0, 0f, h, 0f, 0f, -2000f, 0f, 0f, 1f, 0f);
the aspect ratio is incorrect.
If the camera height h is set to 0.92 and FOV is set to 68 degrees, n should be 1.43. But in OpenGL the coordinate of the nearest point is not (0,0,-1.43f). So I'm wondering how to fix this problem: how do I map real-world coordinates to OpenGL's coordinate system?
When rendering, each mesh of the scene is usually transformed by the model matrix, the view matrix and the projection matrix.
Model matrix:
The model matrix defines the location, orientation and the relative size of a mesh in the scene. The model matrix transforms the vertex positions of the mesh to the world space.
View matrix:
The view matrix describes the direction and position from which the scene is looked at. The view matrix transforms from the world space to the view (eye) space. In the coordinate system on the viewport, the X-axis points to the right, the Y-axis up and the Z-axis out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
The view matrix can be set up by Matrix.setLookAtM
Projection matrix:
The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. The projection matrix transforms from view space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
At perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
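As a minimal illustration of that divide (a sketch of my own, assuming a homogeneous clip-space coordinate stored as {x, y, z, w}):
// Sketch: perspective divide from clip space to normalized device coordinates
static float[] clipToNdc(float[] clip) {
    return new float[] {
        clip[0] / clip[3],   // NDC x, in [-1, 1] for visible points
        clip[1] / clip[3],   // NDC y
        clip[2] / clip[3]    // NDC z
    };
}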
The perspective projection matrix can be set up by Matrix.perspectiveM
You can set up a separate view matrix and a separate projection matrix and finally multiply them. The aspect ratio and the field of view are parameters to Matrix.perspectiveM:
float[] viewM = new float[16];
Matrix.setLookAtM(viewM, 0, 0, 0, 0f, 0f, -2000f, 0f, 0f, 1.0f, 0.0f);
float[] prjM = new float[16];
Matrix.perspectiveM(prjM, 0, fovy, aspect, zNear, zFar);
float[] viewPrjM = new float[16];
Matrix.multiplyMM(viewPrjM, 0, prjM, 0, viewM, 0);
Thanks to @Rabbid76's support, I finally figured it out myself.
Figure 1: Real-life scenario
Figure 2: OpenGL scenario
In real life, if we are facing north, our coordinate system would be like:
x points to the east
y points to the north
z points to the sky
So given a camera held by a user, assuming its height is 1.5 meters and its field of view is 68 degrees, we can infer that the nearest visible point is located at P(0, 2.223, 0). We can set the angle B to 89 degrees, so that segment QP will be the visible ground on the smartphone screen.
How can we map real-life coordinates to the OpenGL coordinate system? I found that we must go through several steps:
Assign the camera position to be the origin (e.g. C in Figure 2).
Because OpenGL's normalized device coordinates always range from (-1,-1) to (1,1), we must assign the distance from C to C' to be 1, so that C' is (0, -1, 0).
Finally, we calculate the ratio between the camera height in real life and the segment CC' in OpenGL, and apply it to the other coordinates.
By doing the steps above, we can magically map real-world coordinates to the OpenGL coordinate system; a small sketch follows.
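A minimal sketch of that mapping, assuming the example values above (h = 1.5 m, FOV = 68 degrees; all names are mine, not from any API):
// Sketch of the real-world -> OpenGL mapping described above
float h = 1.5f;                                // real-world camera height in metres
double halfFov = Math.toRadians(68.0 / 2.0);   // FOV/2
float n = (float) (h / Math.tan(halfFov));     // distance C' -> P, ~2.223 m

// Steps 1 and 2: camera at the origin, camera height scaled to 1 GL unit,
// so C = (0, 0, 0) and C' = (0, -1, 0).
float ratio = 1.0f / h;                        // real metres -> GL units

// Step 3: apply the same ratio to every other real-world coordinate.
// P, which is n metres in front of C', lands n * ratio GL units ahead of C',
// i.e. at roughly (0, -1, -1.48) with the camera looking down -z.
float pForward = n * ratio;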
I am trying to make an app using Canvas and a SurfaceView, and I am worried that in the future I will have many problems with it, because I am not sure whether the canvas is proportional on every device. Currently I can only use my emulator, since my phone's USB cable doesn't work (I know... I have to get a new one...).
Anyway, I would like to know whether the canvas transforms my coordinates and makes everything proportional. What I mean by that is: if I have something at point (10, 10) on a device whose screen is 100 x 100 (just an example for easy calculation), would it be at point (1, 1) on a 10 x 10 device?
This is really bothering me...
Thanks!
No, this wouldn't be the case. If you have a coordinate (10,10), it would be the same on all devices. I'd suggest you scale your drawings.
To scale your drawings you simply define a bitmap (that will stay the same) you'd like to draw to (when screen sizes change, that bitmap will be stretched).
Define a constant bitmap:
Bitmap gameScreen = Bitmap.createBitmap(getGameScreenWidth(),
        getGameScreenHeight(), Bitmap.Config.RGB_565);
Get the scale for both x and y:
width = game.getWindowManager().getDefaultDisplay().getWidth();
height = game.getWindowManager().getDefaultDisplay().getHeight();
scaleXFromVirtualToReal = (float) width / this.gameScreenWidth;
scaleYFromVirtualToReal = (float) height / this.gameScreenHeight;
Define a canvas object based on the bitmap you defined earlier on (allowing you to draw to it, e.g. canvasGameScreen.drawRect() [...]):
Canvas canvasGameScreen = new Canvas(gameScreen);
In your rendering Thread you'll have to have a Canvas called frameBuffer, which will render the virtual framebuffer:
frameBuffer.drawBitmap(this.gameScreen, null,
        new Rect(0, 0, width, height), null);
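A small follow-up of my own (not part of the answer above): touch events arrive in real screen coordinates, so to use them in the virtual space you would divide by the same scale factors, e.g.:
// Sketch: map a real touch position back into the virtual game space
// (touchEventX/touchEventY are assumed to come from a MotionEvent)
float virtualX = touchEventX / scaleXFromVirtualToReal;
float virtualY = touchEventY / scaleYFromVirtualToReal;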
No, the unit on the screen (whether you are using Canvas or OpenGL) is a pixel. You can get the size of your canvas using Canvas.getWidth() and Canvas.getHeight() if you need relative coordinates, but the Canvas drawing methods are also in pixels, so I guess you will need to convert coordinates only in OpenGL and not while using Canvas.
I used this function in my Android program:
public void drawBitmap (Bitmap bitmap, float left, float top, Paint paint)
However, I want to draw my bitmap not at position 0 x 0, but at position 10 x 10 (in PIXELS). The drawBitmap function, however, only accepts float numbers...
How can I achieve this??
Thank you in advance!
Have you tried drawBitmap(bitmap, 10.f, 10.f, ... )? Considering the transformation matrix of the canvas is set to the identity matrix, that is.
The reason those parameters are floats is probably that the Canvas does not operate in an integer space (pixels), but in a user-specified space defined by a transformation matrix. If you were to set a custom transformation matrix to scale by 2, then using 0.5, 0.5 would end up mapping to pixel 1, 1. This means you could also set a custom transformation to translate by 10, 10 and then simply draw the bitmap without specifying a destination.
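For illustration, a minimal sketch of that translate-then-draw approach (plain android.graphics.Canvas calls; canvas and bitmap are assumed to exist):
// Sketch: draw the bitmap at pixel (10, 10) via a canvas translation
canvas.save();
canvas.translate(10f, 10f);                // shift the origin by 10 x 10 pixels
canvas.drawBitmap(bitmap, 0f, 0f, null);   // draw at the (translated) origin
canvas.restore();                          // undo the translation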
I am working on an Android Application in which a 3d scene is displayed and the user should be able to select an area by clicking/tapping the screen. The scene is pretty much a planar (game) board on which different objects are placed.
Now, the problem is how do I get the clicked area on the board from the actual screen-space coordinates?
I was planning on using gluUnProject(), as I have access to (almost) all the necessary parameters. Unfortunately I am missing the winZ param, and cannot get the current depth as the touch event is occurring in a different thread than the GL-thread.
My new plan is to still use gluUnProject, but with a winZ of 0, and then project the resulting point onto the board (the board stretches from 0,0,0 to 10,0,10 in model space). However, I can't seem to figure out how to do this.
I would be very happy if anyone could help me out with the maths needed to do this (matrices were never my strongest side), or perhaps find a better solution.
To clarify; here is an image of what I want to do:
The red rectangle represent the device screen, the green x is the touch event and the black square is the board (grey subdivisions represent a square of one unit). I need to figure out where on the board the touch has happened (in this case it is in square 1,1).
As you are basically working in 2D already (I presume you mean your 3D board stretches from 0,0,0 to 10,10,0 (x,y,z)), you could translate and interpolate/extrapolate the 2D/3D space coordinates from your screen space coordinates without gluUnProject(). You will need your screen resolution, and to pick the resolution of the 3D space grid you wish to convert to.
If both the screen and 3D space origins are aligned (0,0 screen space is at 0,0,0 3D space), and your screen dimensions are 320x240, using your existing 10x10 3D grid, then 320/10 = 32 and 240/10 = 24, so the screen space size of a single 1x1 area is 32x24. If the user presses at 162, 40, then the press falls within (5, 1, 0) in the 3D space (162/32 >= 5 but < 6; 40/24 >= 1 but < 2).
If you need greater resolution than this, you can select a higher 3D space grid resolution (i.e. using 20 instead of 10). You don't need to update the GL matrix to use this factor, though it may make things simpler in some ways; I'm sure from a modeling perspective you would have additional work to do. Just be aware that for a factor like 20, 1,3 would be at (.5, 1.5, 0).
If your screen and 3D space origins are not already aligned, you will need to translate the screen space coordinate prior to this. If 0,0 screen space is 10,10,0, you will need to take your screen resolution and subtract the current point from it, making 0,0 into 320, 240 in this example; our example point of 162, 40 would become 158 (320 - 162 == 158), 200 (240 - 40 == 200).
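A minimal sketch of that calculation (my own illustration; the method name and parameters are assumptions):
// Sketch: map a screen touch to a board grid cell, per the scheme above
static int[] screenToCell(int touchX, int touchY,
                          int screenW, int screenH, int gridSize) {
    int cellW = screenW / gridSize;   // e.g. 320 / 10 = 32 pixels per cell
    int cellH = screenH / gridSize;   // e.g. 240 / 10 = 24 pixels per cell
    return new int[] { touchX / cellW, touchY / cellH };   // (162, 40) -> (5, 1)
}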
If you'd like an overview of the projection matrix and how that all works, which could help you understand where to put the screen space dimensions in the unproject matrix, read this chapter of the OpenGL red book. http://www.glprogramming.com/red/chapter03.html
Hope this helps, and good luck!
So, I managed to solve this by doing the following:
float[] clipPoint = new float[4];
int[] viewport = new int[]{0, 0, width, height};
//screenX/screenY are the screen coordinates; y must be flipped:
screenY = viewport[3] - screenY;
//Calculate a z-value appropriate for the far clip
//(clip[0] and clip[1] presumably hold the near and far clip distances):
float dist = 1.0f;
float z = (1.0f/clip[0] - 1.0f/dist)/(1.0f/clip[0] - 1.0f/clip[1]);
//Use gluUnProject to create a 3d point in the far clip plane:
GLU.gluUnProject(screenX, screenY, z, vMatrix, 0, pMatrix, 0, viewport, 0, clipPoint, 0);
//Get a point representing the 'camera':
float eyeX = lookat[0] + eyeOffset[0];
float eyeY = lookat[1] + eyeOffset[1];
float eyeZ = lookat[2] + eyeOffset[2];
//Calculate where the line between clipPoint and the eye/camera intersects the y = 0 plane:
float dX = eyeX - clipPoint[0];
float dY = eyeY - clipPoint[1];
float dZ = eyeZ - clipPoint[2];
float resX = clipPoint[0] - (dX/dY)*clipPoint[1];
float resZ = clipPoint[2] - (dZ/dY)*clipPoint[1];
//resX and resZ are the wanted result.
I'm writing my first 2D app for Android using OpenGL. I'm writing it on my Desire, so my screen coords should be 0,0 to 799,479 in landscape mode. I'm trying to get OpenGL to use this range in world coordinates.
The app, such as it is, is working fine so far, but I've had to tweak numbers to get stuff to appear on the screen and I'm frustrated by my inability to understand the relationship between the projection matrix, and the rendering of textures in this regard.
Setting the projection matrix:
gl.glViewport(0, 0, width, height);
float ratio = (float) width / height;
float size = .01f * (float) Math.tan(Math.toRadians(45.0) / 2);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-size, size, -size / ratio, size / ratio, 0.01f, 100.0f);
// GLU.gluOrtho2D(gl, 0,width, 0, height);
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
Also, I'm not sure what is 'best': using glFrustumf or GLU.gluOrtho2D. The latter has simpler parameters (just the dimensions of the viewport), but I've not got anywhere with that. (Some sites have height and 0 the other way around, but that makes no difference.) Shouldn't this be the natural choice for 2D usage of OpenGL? Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D - please disregard the third dimension everywhere, in the interests of speed"?
Drawing my textures:
I'm drawing stuff using 2 textured triangles. The relevant parts of my init (let me know if I need to edit my question with more detail) are:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatex(nXpos, nYpos, nZoomin);
gl.glRotatef(nRotZ, 0, 0, 1);
gl.glScalef((float)nScaleup,(float)nScaleup, 0.0f);
...
...
gl.glVertexPointer(2, GL10.GL_FIXED, 0, mVertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
mVertexBuffer is an IntBuffer and contains:
int vertices[] =
{
-1, -1,
1, -1,
-1, 1,
1, 1
};
I don't intend, ultimately, to have to pass in nZoomin - I've done it this way because it was how I found the 'magic numbers' needed to actually see anything! Currently I need to use -1000 there, with smaller numbers resulting in smaller images. Am I right in thinking there must be some way of having a value of zero for nZoomin when the projection matrix is set correctly?
My textures are currently 128x128 (but may end up being different sizes, perhaps always square though). I have no way of knowing when they're being displayed at actual size currently. I'd like to be able to pass in a value of, say, 128 for nScaleup to have it plotted at actual size. Is this related to the projection matrix, or do I have two separate issues?
If you're working in 2D, you don't need glFrustum, just use glOrtho. Something like this:
gl.glOrthof(0, 800, 0, 480, -1, 1);
That'll put the origin at the bottom left. If you want it at the top left, use:
gl.glOrthof(0, 800, 480, 0, -1, 1);
For 480 and 800, you should obviously substitute the actual size of your view, so your app will be portable to different screen sizes and configurations.
I'm passing -1 and 1 for the z range, but these don't really matter, because the orthographic projection puts (x, y, z) at the same place on the screen, no matter the value of z (near and far must not be equal, though). This is the only way to tell OpenGL to ignore the z coordinate; there is no specific "2D" mode, your matrices are still 4x4, and 2-dimensional vertices will receive a z coordinate of 0.
Note that your coordinates do not range from 0 to 799, but really from 0 to 800. The reason is that OpenGL interprets coordinates as lying between pixels, not on them. Think of it like a ruler of 30 cm: there are 30 intervals of a centimetre on it, and the ticks are numbered 0-30.
The vertex buffer you're using doesn't work, because you're using GL_FIXED format. That means 16 bits before the decimal point, and 16 bits after it, so to specify a 2x2 square around the origin, you need to multiply each value by 0x10000:
int vertices[] =
{
-0x10000, -0x10000,
0x10000, -0x10000,
-0x10000, 0x10000,
0x10000, 0x10000
};
This is probably the reason why you need to scale it so much. If you use this array, without the scaling, you should get a 2x2 pixel square. Turning this into a 1x1 square, so the size can be controlled directly by the scale factor, is left as an exercise to the reader ;)
Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D
I think the problem is that you're using a projection matrix for perspective projection.
Instead you should use parallel projection.
To get this matrix you can use the glOrtho() function.
gl.glMatrixMode(GL10.GL_PROJECTION);
...
gl.glOrthof(0, width, 0, height, 0, 128);
Now the z-value no longer has any influence on an object's size.
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
It's right that in a 2D world you don't really care about z-values. But you have to decide which of your objects you want to draw first.
There are two ways to decide that:
Deactivate GL_DEPTH_TEST and everything is drawn in the order you choose
Activate GL_DEPTH_TEST and let OpenGL decide
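For the first option, a minimal sketch (GL10 calls; the vertex setup between the two draws is elided):
// Sketch: with GL_DEPTH_TEST disabled, draw order decides what is on top
gl.glDisable(GL10.GL_DEPTH_TEST);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);   // background, drawn first
// ... bind the next object's vertex/texture data here ...
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);   // foreground, drawn on top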