3D simulation: setting up frustum, viewport, camera, etc. - Android

Learning OpenGL ES 2.0, using java (for Android).
Currently, I'm fooling around with the following to set up ViewPort, ViewMatrix, and Frustum and to do translation:
GLES20.glViewport(0, 0, width, height); // max, full screen
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
Matrix.translateM(mModelMatrix, 0, x, y, z);
Here's what I want to do:
In short, I want to display objects as realistically as possible in terms of their positions and shapes when they are projected onto the device screen. (At this stage, I'm not concerned with texture, lighting, etc.)
Questions:
Suppose that I want to display a cube (each edge being 4 inches long) as if it's floating 20 inches behind the display screen of a 10" tablet that I'm holding directly in front of my eyes, 16 inches away. The line of sight is on (along) the z-axis running through the center of the display screen of the tablet perpendicularly, and the center of the cube is on the z-axis.
What are the correct values of the parameters I should pass to the above two functions to set up ViewMatrix and Frustum to simulate the above situation?
And what edge length should the cube have when defined in model space, centered at (0, 0, 0), if NO SCALING will be used?
And finally, what would be the value of z I should pass to the above translate function, so that the cube appears to be 20 inches behind the display screen?
Do I need to set up something else?

Let's go through this step by step. First, it makes sense to use inches as the world-space unit; this way, you don't have to convert between units.
Let's start with the projection. If you only want objects behind the tablet to be visible, then you can just set znear to 16. zfar can be chosen arbitrarily (depending on the scene).
Next, we need the vertical field of view. If the tablet's screen is h inches high (this can be calculated from the aspect ratio and the diagonal length; see the sketch below), the fovy can be calculated as follows:
float fovy = (float) (2 * Math.atan(h / 2 / 16)); // screen is 16 inches away; result is in radians
// perspectiveM expects degrees, so convert:
Matrix.perspectiveM(mProjectionMatrix, 0, fovy * 180.0f / (float) Math.PI, aspect, znear, zfar);
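For reference, here is the screen-height calculation mentioned above. Since diag² = w² + h² and w = aspect · h, it follows that h = diag / sqrt(1 + aspect²); diagonalInches is an illustrative name:
// Screen height in inches from diagonal size and aspect ratio (width / height):
float h = (float) (diagonalInches / Math.sqrt(1 + aspect * aspect));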
That was already the harder part.
Let's go on to the view matrix. The view matrix is needed if your camera is not aligned with the world coordinate system. How to set it up depends on how you want to define the world coordinate system. If you want the eye to be the origin, you don't need a view matrix at all. We could also make the display the origin, like so:
//looking from 16 inches in front of the tablet to the origin
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 16, 0, 0, 0, 0, 1, 0);
Positioning the cube is equally easy. If you want it to have an edge length of 4 inches, then make a cube with edge length 4 in model space (no scaling needed). If you want its center to be positioned 20 inches behind the screen, translate it by that amount (assuming the view matrix above):
Matrix.translateM(mModelMatrix, 0, 0, 0, -20);
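Putting it all together, here is a minimal sketch of the whole setup. The screen height of 5.3 inches is an assumption (a 10-inch diagonal at 16:10); substitute your device's actual dimensions:
float h = 5.3f;                         // assumed screen height in inches
float aspect = (float) width / height;
float znear = 16f, zfar = 100f;         // zfar chosen arbitrarily
float fovy = (float) (2 * Math.atan(h / 2 / 16));
Matrix.perspectiveM(mProjectionMatrix, 0, fovy * 180.0f / (float) Math.PI, aspect, znear, zfar);
// Display is the world origin; eye is 16 inches in front of it:
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 16, 0, 0, 0, 0, 1, 0);
// Cube (modeled with edge length 4) centered 20 inches behind the screen:
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0, 0, -20);
// Combine into the MVP matrix passed to the vertex shader:
float[] mv = new float[16];
float[] mvp = new float[16];
Matrix.multiplyMM(mv, 0, mViewMatrix, 0, mModelMatrix, 0);
Matrix.multiplyMM(mvp, 0, mProjectionMatrix, 0, mv, 0);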

Related

Pixel perfect glTranslatef

How can I make it so that I can plug pixel coordinates into gl.glTranslatef()?
Currently, I do this to get the texture appear at the bottom of the screen:
gl.glPushMatrix();
gl.glTranslatef(1.45f, -2.76f, 0);
gl.glScalef(1 / scaleX, 1 / scaleY, 0);
Square sq = getTexture(resourceId);
sq.draw(gl);
gl.glPopMatrix();
Without the "1.45f, -2.76f, 0" translation, the texture appears at the centre of the screen. How can I position my textures using pixel coordinates? Most of my textures are 32x32, and a few are 16x16.
Previously I used ((GL11Ext) gl).glDrawTexfOES() to render my textures, but I was unable to apply any transformations to them; for example, I couldn't rotate them.
I don't know how you set up your projection, but what you should do is use glOrtho and glViewport to set up your scene. Given a window of size (width, height):
// init opengl at some previous point
gl.glViewport(0, 0, width, height);
// choose bottom-left corner, e.g. (0,0)
// use your own values for near and far planes
float left = 0, bottom = 0, near = 0.1f, far = 1000.0f;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(left, left + width, bottom, bottom + height, near, far);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
With that, you get pixel coordinates (bottom-left corner at (left, bottom)) for your application.
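To make that concrete, here is a hypothetical draw call under this projection; x and y are illustrative pixel coordinates, and it assumes the Square geometry spans one unit (0..1) on each axis, so the scale equals the sprite's pixel size:
// Draw a 32x32 sprite with its bottom-left corner at pixel (x, y):
gl.glPushMatrix();
gl.glTranslatef(x, y, -1.0f);     // any z between -near and -far is visible
gl.glScalef(32.0f, 32.0f, 1.0f);
sq.draw(gl);
gl.glPopMatrix();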

Applying orthographic projection or frustum affecting normalised coordinates?

I know that normalised coordinates should run from -1 (left) to +1 (right) and from -1 (bottom) to +1 (top).
But after applying this:
From my onSurfaceChanged method:
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
//The above line can be replaced with:
//Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
//but I get the same results with either frustumM or orthoM
And this in my onDrawFrame method:
Matrix.setLookAtM(mVMatrix, 0, 0, 0, 3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
(I then pass mMVPMatrix into my custom class's draw method where it is rotated and translated).
But my coordinates seem to change. The x coordinates are altered somewhat: -1 and +1 are no longer at the edges of the screen (on the device I'm using at the moment, the outer edges become -1.7 and +1.7).
The y coordinates remain unchanged.
I would appreciate it if someone could point out where I'm going wrong. I need it to be -1 through +1, like it should be.
Thanks
To my eyes it appears correct. If your screen is not square, are you sure you want your x axis stretched so it behaves like a square screen? If you do that and tell OpenGL to draw a square, it will appear as a rectangle on the screen. When your screen width is greater than its height, the x-axis edges must extend past the y-axis edges. That's why you pass your ratio to the projection: so OpenGL knows how to draw things with the correct proportions.
What is happening when you draw a regular square on the screen? Is it appearing as a square or a rectangle?
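That said, if you truly want x to run from -1 to +1 at the screen edges, you could drop the ratio from the projection. A sketch; note that squares will then render stretched on a non-square screen:
Matrix.orthoM(mProjMatrix, 0, -1, 1, -1, 1, 3, 7);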

OpenGL co-ordinate mapping to device co-ordinate

I have gone through many tutorials and implemented some small apps in OpenGL. Still, I am confused about how the OpenGL coordinate system maps to the Android view coordinate system.
I ran into the problem while trying to display a texture full screen. By trial and error I was able to show the texture full screen, but I have several doubts that keep me from making faster progress:
The OpenGL coordinate system has its origin at the bottom-left, whereas the device has its origin at the top-left. How are things mapped correctly to the device?
In OpenGL we specify vertices in the range -1 to 1. How does this range map to the device, where coordinates range from 0 to width and height?
Can vertices be mapped exactly to device coordinates, e.g. a vertex at (0, 100) mapping to device coordinate (0, 100)?
While trying to show the texture full screen, I changed the code according to some blogs and it worked. Here are the changes:
glOrtho(0, width, height, 0, -1, 1); // was: glOrtho(0, width, 0, height, -1, 1);
and
vertices[] = {
    0, 0,
    width, 0,
    width, height,
    0, height
};
// was:
// { -1, -1,
//    1, -1,
//   -1,  1,
//    1,  1 }
Please help me understand the coordinate mapping.
When you set glOrtho to the width and the height, OpenGL stretches that range to fit the device you are using. Say your width = 320 and height = 480: when you call glOrtho(0, width, height, 0, -1, 1), OpenGL stretches that to fit your screen. So the coordinates can be whatever you want them to be; you set them via the width and height you pass to glOrtho().
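A minimal sketch of that idea in the GL10 API, assuming width and height come from onSurfaceChanged:
// One OpenGL unit == one device pixel, origin at the top-left like Android views:
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0, width, height, 0, -1, 1);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
// Now a vertex at (0, 100) lands at device coordinate (0, 100).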

Reverse projecting screenspace coordinate to modelspace coordinates

I am working on an Android Application in which a 3d scene is displayed and the user should be able to select an area by clicking/tapping the screen. The scene is pretty much a planar (game) board on which different objects are placed.
Now, the problem is how do I get the clicked area on the board from the actual screen-space coordinates?
I was planning on using gluUnProject(), as I have access to (almost) all the necessary parameters. Unfortunately I am missing the winZ param, and cannot get the current depth as the touch event is occurring in a different thread than the GL-thread.
My new plan is to still use gluUnProject, but with a winZ of 0, and then project the resulting point onto the board (the board stretches from 0,0,0 to 10,0,10 in model space). However, I can't seem to figure out how to do this.
I would be very happy if anyone could help me out with the maths needed to do this (matrices were never my strongest side), or perhaps find a better solution.
To clarify, picture the following: a red rectangle represents the device screen, a green x is the touch event, and a black square is the board (grey subdivisions represent squares of one unit). I need to figure out where on the board the touch happened (in this case it is in square 1,1).
As you are basically working in 2D already (I presume you mean your 3D board stretches from 0,0,0 to 10,10,0 in x,y,z), you could translate and interpolate/extrapolate the 2D/3D space coordinates from your screen-space coordinates without gluUnProject(). You will need your screen resolution, and you need to pick the resolution of the 3D-space grid you wish to convert to.
If both the screen and 3D-space origins are aligned (0,0 in screen space is at 0,0,0 in 3D space) and your screen dimensions are 320x240, then with your existing 10x10 3D grid, 320/10 = 32 and 240/10 = 24, so the screen-space size of a single 1x1 area is 32x24. If the user presses at (162, 40), they are pressing within (5, 1, 0), since 162/32 >= 5 but < 6, and 40/24 >= 1 but < 2.
If you need greater resolution than this, you can select a higher 3D-grid resolution (i.e. using 20 instead of 10). You don't need to update the GL matrix to use this factor. Though it may make it simpler in some ways, from a modeling perspective you would have additional work to do. Just be aware that with a factor like 20, (1, 3) would be at (0.5, 1.5, 0).
If your screen and 3D-space origins are not already aligned, you will need to translate the screen-space coordinate first. If screen-space 0,0 corresponds to 10,10,0, take your screen resolution and subtract the current point from it, making 0,0 into 320,240 in this example; our example point of 162, 40 would become 158, 200 (320 - 162 == 158, 240 - 40 == 200).
If you'd like an overview of the projection matrix and how it all works, which could help you understand where the screen-space dimensions go in the unproject matrix, read this chapter of the OpenGL Red Book: http://www.glprogramming.com/red/chapter03.html
Hope this helps, and good luck!
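A minimal sketch of that mapping, assuming aligned origins, a 320x240 screen, and the 10x10 grid (variable names are illustrative):
// Convert a screen touch to a board cell by simple division:
int cellX = (int) (touchX / (screenWidth / 10f));   // 162 / 32 -> 5
int cellY = (int) (touchY / (screenHeight / 10f));  //  40 / 24 -> 1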
So, I managed to solve this by doing the following:
//clip[0] and clip[1] hold the near and far clip distances; lookat and eyeOffset describe the camera
float[] clipPoint = new float[4];
int[] viewport = new int[]{0, 0, width, height};
//screenY/screenX are the screen coordinates; y must be flipped:
screenY = viewport[3] - screenY;
//Calculate the window-space z value for a point at eye distance dist
//(standard perspective depth mapping):
float dist = 1.0f;
float z = (1.0f/clip[0] - 1.0f/dist)/(1.0f/clip[0] - 1.0f/clip[1]);
//Use gluUnProject to turn the screen point into a 3d point in world space:
GLU.gluUnProject(screenX, screenY, z, vMatrix, 0, pMatrix, 0, viewport, 0, clipPoint, 0);
//Get a point representing the 'camera':
float eyeX = lookat[0] + eyeOffset[0];
float eyeY = lookat[1] + eyeOffset[1];
float eyeZ = lookat[2] + eyeOffset[2];
//Intersect the line between clipPoint and the eye/camera with the y-plane (y = 0):
float dX = eyeX - clipPoint[0];
float dY = eyeY - clipPoint[1];
float dZ = eyeZ - clipPoint[2];
float resX = clipPoint[0] - (dX/dY)*clipPoint[1];
float resZ = clipPoint[2] - (dZ/dY)*clipPoint[1];
//resX and resZ are the wanted board coordinates.

Help me configure OpenGL for 2D

I'm writing my first 2D app for Android using OpenGL. I'm writing it on my Desire, so my screen coords should be 0,0 to 799,479 in landscape mode. I'm trying to get OpenGL to use this range in world coordinates.
The app, such as it is, works fine so far, but I've had to tweak numbers to get stuff to appear on the screen, and I'm frustrated by my inability to understand the relationship between the projection matrix and the rendering of textures in this regard.
Setting the projection matrix:
gl.glViewport(0, 0, width, height);
float ratio = (float) width / height;
float size = .01f * (float) Math.tan(Math.toRadians(45.0) / 2);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-size, size, -size / ratio, size / ratio, 0.01f, 100.0f);
// GLU.gluOrtho2D(gl, 0,width, 0, height);
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
Also, I'm not sure what is 'best': using glFrustumf or GLU.gluOrtho2D. The latter has simpler parameters (just the dimensions of the viewport), but I've not got anywhere with that. (Some sites have height and 0 the other way around, but that makes no difference.) Shouldn't this be the natural choice for 2D usage of OpenGL? Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D - please disregard the third dimension everywhere, in the interests of speed"?
Drawing my textures:
I'm drawing stuff using 2 textured triangles. The relevant parts of my init (let me know if I need to edit my question with more detail) are:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatex(nXpos, nYpos, nZoomin);
gl.glRotatef(nRotZ, 0, 0, 1);
gl.glScalef((float)nScaleup,(float)nScaleup, 0.0f);
...
...
gl.glVertexPointer(2, GL10.GL_FIXED, 0, mVertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
mVertexBuffer is an IntBuffer and contains:
int vertices[] =
{
-1, -1,
1, -1,
-1, 1,
1, 1
};
I don't intend, ultimately, to pass in nZoomin - I've done it this way because it was how I found the 'magic numbers' needed to actually see anything! Currently I need to use -1000 there, with smaller numbers resulting in smaller images. Am I right in thinking there must be some way of having a value of zero for nZoomin when the projection matrix is set up correctly?
My textures are currently 128x128 (they may end up being different sizes, though perhaps always square). I currently have no way of knowing when they're being displayed at actual size. I'd like to be able to pass in a value of, say, 128 for nScaleup to have the texture plotted at actual size. Is this related to the projection matrix, or do I have two separate issues?
If you're working in 2D, you don't need glFrustum, just use glOrtho. Something like this:
gl.glOrthof(0, 800, 0, 480, -1, 1);
That'll put the origin at the bottom left. If you want it at the top left, use:
gl.glOrthof(0, 800, 480, 0, -1, 1);
For 480 and 800, you should obviously substitute the actual size of your view, so your app will be portable to different screen sizes and configurations.
I'm passing -1 and 1 for the z range, but these don't really matter, because the orthographic projection puts (x, y, z) at the same place on the screen no matter the value of z (near and far must not be equal, though). This is the only way to tell OpenGL to ignore the z coordinate; there is no specific "2D" mode. Your matrices are still 4x4, and 2-dimensional vertices receive a z coordinate of 0.
Note that your coordinates do not range from 0 to 799, but really from 0 to 800. The reason is that OpenGL interprets coordinates as lying between pixels, not on them. Think of it like a ruler of 30 cm: there are 30 intervals of a centimetre on it, and the ticks are numbered 0-30.
The vertex buffer you're using doesn't work as intended, because you're using the GL_FIXED format. That means 16 bits before the decimal point and 16 bits after it, so to specify a 2x2 square around the origin, you need to multiply each value by 0x10000:
int vertices[] =
{
-0x10000, -0x10000,
0x10000, -0x10000,
-0x10000, 0x10000,
0x10000, 0x10000
};
This is probably the reason why you need to scale it so much. If you use this array without the scaling, you should get a 2x2 pixel square. Turning this into a 1x1 square, so the size can be controlled directly by the scale factor, is left as an exercise to the reader ;) (one possible solution is sketched below).
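For completeness, a sketch of that exercise: 0.5 in 16.16 fixed point is 0x8000, so a unit square centered on the origin would be:
int vertices[] =
{
    -0x8000, -0x8000,
     0x8000, -0x8000,
    -0x8000,  0x8000,
     0x8000,  0x8000
};
// With the pixel-based glOrthof above, gl.glScalef(128, 128, 1) then draws
// a 128x128 texture at actual size.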
Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D"?
I think the problem is that you're using a perspective projection matrix. Instead, you should use a parallel (orthographic) projection, which you can get with the glOrtho() function.
gl.glMatrixMode(GL10.GL_PROJECTION);
...
gl.glOrthof(0, width, 0, height, 0, 128);
Now the z-value no longer has any influence on an object's size.
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
It's true that in a 2D world you don't really care about z-values. But you have to decide
which of your objects should be drawn first.
There are two ways to decide that (both sketched below):
Deactivate GL_DEPTH_TEST, and everything is drawn in the order you choose
Activate GL_DEPTH_TEST, and let OpenGL decide based on the z-values
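A minimal sketch of both options in the GL10 API:
// Option 1: painter's algorithm - disable depth testing and rely on draw order
gl.glDisable(GL10.GL_DEPTH_TEST);
// ...draw the background first, then the objects that should appear on top

// Option 2: enable depth testing and let the depth buffer sort by z
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// ...draw in any order; fragments closer to the camera win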
