OpenGL coordinate mapping to device coordinates - Android

I have gone through many tutorials and implemented some small apps in OpenGL. Still, I am confused about how the OpenGL coordinate system maps to the Android view coordinate system.
I ran into this while trying to display a texture full screen. By trial and error I got the texture to fill the screen, but I still have several doubts that keep me from moving forward quickly.
The OpenGL coordinate system has its origin at the bottom-left, whereas the device origin is at the top-left. How are things mapped correctly to the device?
In OpenGL we specify vertices in the range -1 to 1. How does this range map to the device, where coordinates range from 0 to width and height?
Can vertices be mapped exactly to device coordinates? For example, does the vertex (0, 100) map to the device coordinate (0, 100)?
While trying to show the texture full screen, I changed the code according to some blogs and it worked. Here are the changes:
glOrtho(0, width, height, 0, -1, 1); // changed from glOrtho(0, width, 0, height, -1, 1);
and
vertices[] = {
    0, 0,
    width, 0,
    width, height,
    0, height
};
// changed from
{
    -1, -1,
     1, -1,
    -1,  1,
     1,  1
};
Please help me understand the coordinate mapping.

When you set glOrtho to the width and the height, OpenGL stretches that range to fit the device you are using. Say your width = 320 and height = 480: when you call glOrtho(0, width, height, 0, -1, 1), OpenGL stretches that coordinate range to fit your screen, so the coordinates can be whatever you want them to be, determined by the values you pass to glOrtho().
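To make that stretching concrete, here is a plain-Java sketch (class and method names are made up for illustration) of the fixed viewport transform OpenGL applies after projection: NDC x and y in [-1, 1] are mapped linearly onto the glViewport rectangle.

```java
public class ViewportMapping {
    // OpenGL's viewport transform: NDC in [-1, 1] -> window pixels.
    // xw = vpX + (ndcX + 1) / 2 * vpWidth (and likewise for y).
    static float ndcToWindowX(float ndcX, int vpX, int vpWidth) {
        return vpX + (ndcX + 1f) * 0.5f * vpWidth;
    }

    static float ndcToWindowY(float ndcY, int vpY, int vpHeight) {
        // NDC y = -1 is the BOTTOM of the viewport; Android views count
        // y from the top, which is why the two systems look flipped.
        return vpY + (ndcY + 1f) * 0.5f * vpHeight;
    }

    public static void main(String[] args) {
        // 320x480 viewport: NDC (-1,-1) lands on window (0,0) bottom-left,
        // and NDC (1,1) on (320,480) top-right.
        System.out.println(ndcToWindowX(-1f, 0, 320)); // 0.0
        System.out.println(ndcToWindowY(-1f, 0, 480)); // 0.0
        System.out.println(ndcToWindowX(1f, 0, 320));  // 320.0
    }
}
```

Whatever range glOrtho establishes is first squashed into this fixed [-1, 1] NDC cube, and then this same viewport mapping spreads it over the screen, which is why any glOrtho range "fits".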

Related

Projection and Translation in OpenGL ES 2

I have a question regarding transformations in OpenGL ES 2. I'm currently drawing a rectangle using a triangle fan as depicted in the image below. The origin is located at its center, while its width and height are 0.6 and 2 respectively. I assume that these sizes relate to model space. However, in order to maintain the ratio of height to width on a tablet or phone, one has to apply a projection that accounts for the proportions of the device (again, width and height). This is why I call orthoM(projectionMatrix, 0, -aspectRatio, aspectRatio, -1f, 1f, -1f, 1f); where the aspectRatio is given by float aspectRatio = (float) width / (float) height. This finally leads to the rectangle shown in the image below. Now, I would like to move the rectangle along the x-axis to the border of the screen. However, I was not able to come up with the correct calculation to do so; either I moved it too little or too much. So what would the calculation look like? Furthermore, I'm a little bit confused about the sizes given in model space. What are the max and min values that can be achieved there?
Thanks a lot!
The vertex positions of the rectangle are in world space. One way to do this is to take the screen coordinates you want to move to and transform them into world space.
For example:
If the screen is 300 x 200 and you are at the center, that is (0, 0) in world space (or (150, 100) in screen space), and you want to translate to x = 300:
The transformation is: convert screen_position to normalized device coordinates, then multiply by inverseOf(projection matrix * view matrix) and divide by the w component.
Here it is explained for the mouse, which is ultimately the same; you already know the z because it is the one you used for your rectangle (if it lies in the x,y plane): OpenGL Math - Projecting Screen space to World space coords.
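A minimal plain-Java sketch of the first step (screen to NDC); the class and method names are made up for illustration. The remaining step, multiplying by inverseOf(projection * view) and dividing by w, would use Matrix.invertM and Matrix.multiplyMV on Android.

```java
public class ScreenToNdc {
    // Screen coordinates have the origin at the top-left with y going down;
    // NDC has the origin in the center with y going up, hence the y flip.
    static float[] toNdc(float screenX, float screenY, int width, int height) {
        float ndcX = 2f * screenX / width - 1f;
        float ndcY = 1f - 2f * screenY / height;
        return new float[] { ndcX, ndcY };
    }

    public static void main(String[] args) {
        // The 300 x 200 screen from the example: its center (150, 100)
        // maps to NDC (0, 0); the right edge x = 300 maps to NDC x = 1.
        float[] center = toNdc(150, 100, 300, 200);
        System.out.println(center[0] + ", " + center[1]); // 0.0, 0.0
        System.out.println(toNdc(300, 100, 300, 200)[0]); // 1.0
    }
}
```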

Determining what I'm looking at with Google Cardboard [Android]

So here's the problem overview: I render a number of models using OpenGL ES in GvrView using the GvrView.StereoRenderer and I want to determine which exact model I'm looking at.
My idea is to reproject the screen coordinates back to the model space (discarding Z) and check if the point (lets call it Point) is in the range:
(ModelsMinX < Point.x < ModelsMaxX) and (ModelsMinY < Point.y < ModelsMaxY).
I was trying to use GLU.gluUnproject to get the initial coordinates. This function requires the current viewport and that's where the problems begin:
GvrView.StereoRenderer has a method .onDrawEye, which is called whenever there's something specific to one eye that should be set up before rendering (i.e. the view and projection matrices should be acquired from the Eye instance). An Eye also has a method .getViewport, which is supposed to return the viewport for the current eye; however, the returned result is completely unclear to me. More specifically, I'm developing on a Nexus 6 (1440x2560 pixels) and .getViewport returns:
x = 0, y = 0, width = 1280, height = 1440 // for the first eye
x = 1280, y = 0, width = 1280, height = 1440 // for the second eye
Now this is interesting. Somehow I assumed two things about the current viewport:
width = 1440, height = 1280 (we are in the landscape mode after all);
the viewport size for each eye will be half the size of the whole viewport.
Hence, calling .gluUnproject on the middle point of the viewport:
GLU.gluUnProject(viewport.width / 2, viewport.height / 2, 0, mEyeViewMatrix, 0, mEyeProjectionMatrix, 0, new int[] {viewport.x, viewport.y, viewport.width, viewport.height}, 0, center, 0);
does not yield the expected results; in fact, it gives me all 0s. I found this question (Determining exact eye view size), but that asker gets even stranger viewport values and the question has no answer.
So the question is - how do I get from the 'eye'-space coordinates to the model? And what are those coordinates, anyway?
Here's the project's github for reference: https://github.com/bowlingforsoap/CardboardDataVisualizationJava.
Some other approaches I'm aware of:
In the treasurehunt demo they use an opposite way of doing things - they go from a model coordinate (0, 0, 0, 1) into the head view space using the HeadTransform to get the headView matrix (look for method .isLookingAtObject in https://github.com/googlevr/gvr-android-sdk/blob/master/samples/sdk-treasurehunt/src/main/java/com/google/vr/sdk/samples/treasurehunt/TreasureHuntActivity.java).
Using raycasting. I'm not sure this is going to help my cause, because after I find the observed object I would like to launch a 'floating' activity containing information about it (I certainly don't want to render that data through shaders).
Whew! Boy, that's a lot of text. But yeah, it seems like a generic problem, yet I haven't found an easy/elegant/working solution. I would appreciate any feedback.

3d simulation: setting up frustum, viewport, camera, etc

Learning OpenGL ES 2.0, using java (for Android).
Currently, I'm fooling around with the following to set up ViewPort, ViewMatrix, and Frustum and to do translation:
GLES20.glViewport(0, 0, width, height) // max, full screen
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
Matrix.translateM(mModelMatrix, 0, x, y, z);
Here's what I want to do:
In short, I want to display objects as realistically as possible, in terms of their positions and shapes when projected onto the device screen. (At this stage, I'm not concerned about texture, lighting, etc.)
Questions:
Suppose that I want to display a cube (each edge being 4 inches long) as if it's floating 20 inches behind the display screen of a 10" tablet that I'm holding directly in front of my eyes, 16 inches away. The line of sight is on (along) the z-axis running through the center of the display screen of the tablet perpendicularly, and the center of the cube is on the z-axis.
What are the correct values of the parameters I should pass to the above two functions to set up ViewMatrix and Frustum to simulate the above situation?
And what would be the value (length) of the edges of the cube to be defined in the model space, centered at (0, 0, 0) if NO SCALING will be used?
And finally, what would be the value of z I should pass to the above translate function, so that the cube appears to be 20 inches behind the display screen?
Do I need to set up something else?
Let's go through this step by step. Firstly, it makes sense to use inch as the world space unit. This way, you don't have to convert between units.
Let's start with the projection. If you only want objects behind the tablet to be visible, then you can just set znear to 16. zfar can be chosen arbitrarily (depending on the scene).
Next, we need the vertical field of view. If the tablet's screen is h inches high (this can be calculated from the aspect ratio and diagonal length; if you need this calculation, leave a comment), then fovy can be calculated as follows:
float fovy = 2 * atan(h / 2 / 16); //screen is 16 inches away
//needs to be converted to degrees
Matrix.perspectiveM(mProjectionMatrix, 0, fovy * 180.0f / PI, aspect, znear, zfar);
That was already the harder part.
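As a sanity check on the formula, here is a plain-Java version of the fovy calculation (the class name and the round test numbers are just for illustration; a real 10" tablet screen is closer to 5-6 inches high):

```java
public class FovCalc {
    // fovy = 2 * atan((h/2) / d), converted to degrees for perspectiveM.
    // h = physical screen height, d = eye-to-screen distance, both in inches.
    static float fovyDegrees(float h, float d) {
        return (float) Math.toDegrees(2.0 * Math.atan((h / 2.0) / d));
    }

    public static void main(String[] args) {
        // Round-number check: h = 16, d = 16 gives 2*atan(0.5), about 53.13 degrees.
        System.out.println(fovyDegrees(16f, 16f));
    }
}
```

The smaller the screen relative to the viewing distance, the narrower the field of view, which is exactly what makes the screen behave like a window into the scene.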
Let's go on to the view matrix. The view matrix is used if your camera is not aligned with the world coordinate system. Now it depends on how you want to set up the world coordinate system. If you want the eye to be the origin, you don't need a view matrix at all. We could also specify the display as the origin like so:
//looking from 16 inches in front of the tablet to the origin
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 16, 0, 0, 0, 0, 1, 0);
Positioning the cube is equally easy. If you want it to have an edge length of 4 inches, then make a cube with edge length 4. If you want its center to be positioned 20 inches behind the screen, translate it by this amount (assuming the view matrix above):
Matrix.translateM(mModelMatrix, 0, 0, 0, -20);

How can I mirror a texture bitmap in OpenGL Android?

When I'm applying a texture to a shape, I keep seeing it mirrored. GLU.gluLookAt is set to be 5 units up, so it's GLU.gluLookAt(gl, 0, 0, 5, 0, 0, 0, 0, 2, 0);. If it were 5 units down, the x axis would be reversed, and that would be an even bigger problem.
Can you please tell me how to mirror the bitmap that is being loaded as a texture? I want to keep the position of the axes and the shapes being drawn as they are; I just want to automatically mirror the bitmap.
Can you please tell me how to do that? Perhaps give me a code sequence that mirrors the bitmap on the x axis?
You should be able to reverse the texture mapping coordinates you're using. For horizontal mirror, reverse the u values. For vertical, reverse v.
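For example, a plain-Java sketch of flipping the u values of a quad's texture coordinates (names are illustrative); it assumes the usual flat array of (u, v) pairs with values in [0, 1]:

```java
public class MirrorTexCoords {
    // Flips every u (even index) so the texture is mirrored horizontally.
    // Assumes a flat array of (u, v) pairs with u, v in [0, 1].
    static float[] mirrorU(float[] uv) {
        float[] out = uv.clone();
        for (int i = 0; i < out.length; i += 2) {
            out[i] = 1f - out[i];
        }
        return out;
    }

    public static void main(String[] args) {
        float[] quad = { 0f, 0f,  1f, 0f,  0f, 1f,  1f, 1f };
        float[] mirrored = mirrorU(quad);
        // u values 0 and 1 swap; v values are untouched.
        System.out.println(mirrored[0] + " " + mirrored[2]); // 1.0 0.0
    }
}
```

The resulting array would be loaded into the texture coordinate buffer in place of the original one; the vertex positions stay exactly as they were.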

Help me configure OpenGL for 2D

I'm writing my first 2D app for Android using OpenGL. I'm writing it on my Desire, so my screen coords should be 0,0 to 799,479 in landscape mode. I'm trying to get OpenGL to use this range in world coordinates.
The app, such as it is, is working fine so far, but I've had to tweak numbers to get stuff to appear on the screen and I'm frustrated by my inability to understand the relationship between the projection matrix, and the rendering of textures in this regard.
Setting the projection matrix:
gl.glViewport(0, 0, width, height);
float ratio = (float) width / height;
float size = .01f * (float) Math.tan(Math.toRadians(45.0) / 2);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-size, size, -size / ratio, size / ratio, 0.01f, 100.0f);
// GLU.gluOrtho2D(gl, 0,width, 0, height);
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
Also, I'm not sure what is 'best' - using glFrustumf or GLU.gluOrtho2D. The latter has simpler parameters - just the dimensions of the viewport - but I've not gotten anywhere with that. (Some sites have height and 0 the other way around, but that makes no difference.) But shouldn't this be the natural choice for 2D usage of OpenGL? Do I have to set something somewhere to tell OpenGL "I'm doing this in 2D - please disregard the third dimension everywhere, in the interests of speed"?
Drawing my textures:
I'm drawing stuff using 2 textured triangles. The relevant parts of my init (let me know if I need to edit my question with more detail) are:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatex(nXpos, nYpos, nZoomin);
gl.glRotatef(nRotZ, 0, 0, 1);
gl.glScalef((float)nScaleup,(float)nScaleup, 0.0f);
...
...
gl.glVertexPointer(2, GL10.GL_FIXED, 0, mVertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
mVertexBuffer is an IntBuffer and contains:
int vertices[] =
{
-1, -1,
1, -1,
-1, 1,
1, 1
};
I don't intend, ultimately, to have to pass in nZoomin - I've done it this way because it was how I found the 'magic numbers' needed to actually see anything! Currently I need to use -1000 there, with smaller numbers resulting in smaller images. Am I right in thinking there must be some way of having a value of zero for nZoomin when the projection matrix is set correctly?
My textures are currently 128x128 (but may end up being different sizes, perhaps always square though). I have no way of knowing when they're being displayed at actual size currently. I'd like to be able to pass in a value of, say, 128 for nScaleup to have it plotted at actual size. Is this related to the projection matrix, or do I have two separate issues?
If you're working in 2D, you don't need glFrustum; just use glOrtho. Something like this:
gl.glOrthof(0, 800, 0, 480, -1, 1);
That'll put the origin at the bottom left. If you want it at the top left, use:
gl.glOrthof(0, 800, 480, 0, -1, 1);
For 480 and 800, you should obviously substitute the actual size of your view, so your app will be portable to different screen sizes and configurations.
I'm passing -1 and 1 for the z range, but these don't really matter, because the orthographic projection puts (x, y, z) at the same place on the screen, no matter the value of z (near and far must not be equal, though). This is the only way to tell OpenGL to ignore the z coordinate; there is no specific "2D" mode, your matrices are still 4x4, and 2-dimensional vertices receive a z coordinate of 0.
Note that your coordinates do not range from 0 to 799, but really from 0 to 800. The reason is that OpenGL interprets coordinates as lying between pixels, not on them. Think of it like a ruler of 30 cm: there are 30 intervals of a centimetre on it, and the ticks are numbered 0-30.
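To see both effects at once (the top-left origin and the 0-to-800 range), here is a plain-Java sketch of what this glOrthof call does to the x and y of a point; the class and method names are made up:

```java
public class OrthoMapping {
    // What glOrthof(l, r, b, t, near, far) does to x and y:
    // a linear map from the given range into NDC, [-1, 1].
    static float[] toNdc(float x, float y,
                         float l, float r, float b, float t) {
        float ndcX = 2f * (x - l) / (r - l) - 1f;
        float ndcY = 2f * (y - b) / (t - b) - 1f;
        return new float[] { ndcX, ndcY };
    }

    public static void main(String[] args) {
        // glOrthof(0, 800, 480, 0, -1, 1): world (0, 0) is the TOP-left,
        // NDC (-1, 1), and world (800, 480) is the bottom-right, NDC (1, -1).
        float[] topLeft = toNdc(0, 0, 0, 800, 480, 0);
        System.out.println(topLeft[0] + ", " + topLeft[1]); // -1.0, 1.0
    }
}
```

Swapping bottom and top in the call simply negates the slope of the y mapping, which is all the "origin at the top left" variant does.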
The vertex buffer you're using doesn't work, because you're using the GL_FIXED format. That means 16 bits before the binary point and 16 bits after it, so to specify a 2x2 square around the origin, you need to multiply each value by 0x10000:
int vertices[] =
{
-0x10000, -0x10000,
0x10000, -0x10000,
-0x10000, 0x10000,
0x10000, 0x10000
};
This is probably the reason why you need to scale it so much. If you use this array, without the scaling, you should get a 2x2 pixel square. Turning this into a 1x1 square, so the size can be controlled directly by the scale factor, is left as an exercise to the reader ;)
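A tiny helper for that conversion (plain-Java sketch, the name is illustrative), which many GL_FIXED code bases use instead of writing 0x10000 literals by hand:

```java
public class Fixed {
    // GL_FIXED is 16.16 fixed point: the float value times 2^16.
    static int toFixed(float f) {
        return (int) (f * 0x10000);
    }

    public static void main(String[] args) {
        System.out.println(Integer.toHexString(toFixed(1f)));   // 10000
        System.out.println(toFixed(-1f) == -0x10000);           // true
        // The 1x1 square mentioned above would use toFixed(0.5f) = 0x8000.
        System.out.println(Integer.toHexString(toFixed(0.5f))); // 8000
    }
}
```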
Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D
I think the problem is that you're using a projection matrix for perspective projection.
Instead you should use parallel projection.
To get this matrix you can use the glOrtho() function.
gl.glMatrixMode(GL10.GL_PROJECTION);
...
gl.glOrthof(0, width, 0, height, 0, 128);
Now the z-value no longer has any influence on an object's size.
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
It's right that in a 2D world you don't really care about z-values. But you still have to decide which of your objects are drawn first.
There are two ways to decide that:
Deactivate GL_DEPTH_TEST and everything is drawn in the order you choose
Activate GL_DEPTH_TEST and let OpenGL decide
