I'm writing my first 2D app for Android using OpenGL. I'm writing it on my Desire, so my screen coords should be 0,0 to 799,479 in landscape mode. I'm trying to get OpenGL to use this range in world coordinates.
The app, such as it is, is working fine so far, but I've had to tweak numbers to get stuff to appear on the screen, and I'm frustrated by my inability to understand the relationship between the projection matrix and the rendering of textures in this regard.
Setting the projection matrix:
gl.glViewport(0, 0, width, height);
float ratio = (float) width / height;
float size = .01f * (float) Math.tan(Math.toRadians(45.0) / 2);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-size, size, -size / ratio, size / ratio, 0.01f, 100.0f);
// GLU.gluOrtho2D(gl, 0,width, 0, height);
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
Also, I'm not sure what is 'best': using glFrustumf or GLU.gluOrtho2D. The latter has simpler parameters - just the dimensions of the viewport - but I've not got anywhere with it. (Some sites have height and 0 the other way around, but that makes no difference.) Shouldn't this be the natural choice for 2D usage of OpenGL? Do I have to set something somewhere to tell OpenGL "I'm doing this in 2D - please disregard the third dimension everywhere, in the interests of speed"?
Drawing my textures:
I'm drawing stuff using 2 textured triangles. The relevant parts of my init (let me know if I need to edit my question with more detail) are:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatex(nXpos, nYpos, nZoomin);
gl.glRotatef(nRotZ, 0, 0, 1);
gl.glScalef((float)nScaleup,(float)nScaleup, 0.0f);
...
...
gl.glVertexPointer(2, GL10.GL_FIXED, 0, mVertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
mVertexBuffer is an IntBuffer and contains:
int vertices[] =
{
    -1, -1,
     1, -1,
    -1,  1,
     1,  1
};
I don't intend, ultimately, to have to pass in nZoomin - I've done it this way because it was how I found the 'magic numbers' needed to actually see anything! Currently I need to use -1000 there, with smaller numbers resulting in smaller images. Am I right in thinking there must be some way of having a value of zero for nZoomin when the projection matrix is set correctly?
My textures are currently 128x128 (but may end up being different sizes, perhaps always square though). I have no way of knowing when they're being displayed at actual size currently. I'd like to be able to pass in a value of, say, 128 for nScaleup to have it plotted at actual size. Is this related to the projection matrix, or do I have two separate issues?
If you're working in 2D, you don't need glFrustum, just use glOrtho. Something like this:
gl.glOrthof(0, 800, 0, 480, -1, 1);
That'll put the origin at the bottom left. If you want it at the top left, use:
gl.glOrthof(0, 800, 480, 0, -1, 1);
For 480 and 800, you should obviously substitute the actual size of your view, so your app will be portable to different screen sizes and configurations.
I'm passing -1 and 1 for the z range, but these don't really matter, because an orthographic projection puts (x, y, z) at the same place on the screen, no matter the value of z. (near and far must not be equal, though.) This is the only way to tell OpenGL to ignore the z coordinate; there is no specific "2D" mode. Your matrices are still 4x4, and 2-dimensional vertices will receive a z coordinate of 0.
Note that your coordinates do not range from 0 to 799, but really from 0 to 800. The reason is that OpenGL interprets coordinates as lying between pixels, not on them. Think of it like a ruler of 30 cm: there are 30 intervals of a centimetre on it, and the ticks are numbered 0-30.
The vertex buffer you're using doesn't work, because you're using the GL_FIXED format. That means 16 bits before the decimal point and 16 bits after it, so to specify a 2x2 square around the origin, you need to multiply each value by 0x10000:
int vertices[] =
{
    -0x10000, -0x10000,
     0x10000, -0x10000,
    -0x10000,  0x10000,
     0x10000,  0x10000
};
This is probably the reason why you need to scale it so much. If you use this array, without the scaling, you should get a 2x2 pixel square. Turning this into a 1x1 square, so the size can be controlled directly by the scale factor, is left as an exercise to the reader ;)
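For what it's worth, a sketch of that exercise: halving each coordinate to 0x8000 (which is 0.5 in 16.16 fixed point) gives a 1x1 square, so with the pixel-based ortho projection above, a scale factor of 128 should draw a 128x128 pixel quad:

// 1x1 square centred on the origin (0x8000 == 0.5 in GL_FIXED)
int vertices[] =
{
    -0x8000, -0x8000,
     0x8000, -0x8000,
    -0x8000,  0x8000,
     0x8000,  0x8000
};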
Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D
I think the problem is that you're using a projection matrix for perspective projection.
Instead you should use a parallel (orthographic) projection.
To get this matrix you can use the glOrthof() function.
gl.glMatrixMode(GL10.GL_PROJECTION);
...
gl.glOrthof(0, width, 0, height, 0, 128);
Now the z-value has no influence on an object's size anymore.
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
It's right that in a 2D world you don't really care about z-values. But you still have to decide
which of your objects gets drawn first.
There are two ways to decide that (see the sketch below):
Deactivate GL_DEPTH_TEST, and everything is drawn in the order you choose
Activate GL_DEPTH_TEST, and let OpenGL decide based on the z-values
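For a typical 2D game, the first option is the usual choice; a minimal sketch:

gl.glDisable(GL10.GL_DEPTH_TEST); // draw order decides what ends up on top
// draw the background first, then the sprites over it, in your own order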
I am drawing an image-based texture using OpenGL in Android and trying to rotate it about its center.
But the result is not as expected: it appears skewed.
The first screen grab is the texture drawn without rotation, and the second one is drawn with a 10-degree rotation.
Code snippet is as below:
mViewWidth = viewWidth;   // viewport width
mViewHeight = viewHeight; // viewport height
float ratio = (float) viewWidth / viewHeight;
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
.....
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 5, 0f, 0f, 0f, 0.0f, 1.0f, 0.0f);
Matrix.setRotateM(mRotationMatrix, 0, 10, 0, 0, 1.0f);
Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, temp, 0, mRotationMatrix, 0);
GLES20.glUniformMatrix4fv(mRotationMatrixHandle , 1, false, mRotationMatrix, 0);
And in shader:
....
" gl_Position = uMVPMatrix*a_position;\n"
....
The black area in the first screen grab is the area of GLSurfaceView and the grey area is the area where I am trying to draw the image.
The image is already at the origin, and I think there is no need to translate it before rotating.
The basic problem is that you're scaling your geometry to adjust for the screen aspect ratio before you apply the rotation.
It might not be obvious that you're actually scaling the geometry. But by calculating the coordinates you use for drawing to adjust for the aspect ratio, you are effectively applying a non-uniform scaling transformation to the geometry. And if you then rotate the result, it will get distorted.
What you need to do is apply the rotation before you scale. This will require some reorganization of your current code. Since you apply the scaling before you pass the coordinates to OpenGL, and then do the rotation in the shader, you can't easily change the order. You either have to:
Apply both transformations, in the proper order, to the input coordinates before you pass them to OpenGL, and remove the rotation from the shader code.
Apply both transformations, in the proper order, in the shader code. To do this, you would not modify the input coordinates to adjust to the aspect ratio, and pass a scaling factor into the shader instead.
For the first option, applying a 2D rotation in your own code is easy enough, and it looks like you only have 4 vertices, so there is no efficiency concern. Still, the second option is certainly more elegant. So instead of scaling the coordinates in your client code, pass a scaling factor as a uniform into the shader. Then, in the GLSL code, apply the rotation first, and scale the resulting coordinates.
Another option is that you build the complete transformation matrix (again based on applying the individual transformations in the correct order), and pass that matrix into the shader.
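A sketch of that last option, reusing the matrices from the question (mScaleMatrix, mModelMatrix and mMVPMatrixHandle are assumed names, and sx/sy stand for whatever scale you currently bake into the vertex coordinates):

float[] mScaleMatrix = new float[16];
float[] mModelMatrix = new float[16];

// the aspect-ratio adjustment as a matrix instead of pre-scaled vertices
Matrix.setIdentityM(mScaleMatrix, 0);
Matrix.scaleM(mScaleMatrix, 0, sx, sy, 1.0f);

// rotation about the z axis
Matrix.setRotateM(mRotationMatrix, 0, 10, 0, 0, 1.0f);

// model = scale * rotation: the rotation is applied to the vertices first
Matrix.multiplyMM(mModelMatrix, 0, mScaleMatrix, 0, mRotationMatrix, 0);

// MVP = projection * view * model, passed as the single uMVPMatrix uniform
Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, temp, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);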
Learning OpenGL ES 2.0, using java (for Android).
Currently, I'm fooling around with the following to set up ViewPort, ViewMatrix, and Frustum and to do translation:
GLES20.glViewport(0, 0, width, height); // max, full screen
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY,
lookZ, upX, upY, upZ);
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
Matrix.translateM(mModelMatrix, 0, x, y, z);
Here's what I want to do:
In short, I want to display objects as realistically as possible, in terms of their positions and shapes when they are projected onto the device screen. (At this stage, I'm not concerned about texture, lighting, etc.)
Questions:
Suppose that I want to display a cube (each edge being 4 inches long) as if it's floating 20 inches behind the display screen of a 10" tablet that I'm holding directly in front of my eyes, 16 inches away. The line of sight is on (along) the z-axis running through the center of the display screen of the tablet perpendicularly, and the center of the cube is on the z-axis.
What are the correct values of the parameters I should pass to the above two functions to set up ViewMatrix and Frustum to simulate the above situation?
And what would be the value (length) of the edges of the cube to be defined in the model space, centered at (0, 0, 0) if NO SCALING will be used?
And finally, what would be the value of z I should pass to the above translate function, so that the cube appears to be 20 inches behind the display screen?
Do I need to set up something else?
Let's go through this step by step. Firstly, it makes sense to use inches as the world-space unit. This way, you don't have to convert between units.
Let's start with the projection. If you only want objects behind the tablet to be visible, then you can just set znear to 16. zfar can be chosen arbitrarily (depending on the scene).
Next, we need the vertical field of view. If the tablet's screen is h inches high (this can be calculated from the aspect ratio and the diagonal length; a sketch of that calculation follows below), the fovy can be calculated as follows:
float fovy = (float) (2 * Math.atan(h / 2 / 16)); // screen is 16 inches away
// Math.atan returns radians; perspectiveM expects degrees
Matrix.perspectiveM(mProjectionMatrix, 0, fovy * 180.0f / (float) Math.PI, aspect, znear, zfar);
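In case the height calculation is needed: with the diagonal d in inches and the aspect ratio a = width / height, the Pythagorean theorem gives h = d / sqrt(1 + a*a). A sketch (diagonalInches and aspect are assumed inputs):

// d^2 = w^2 + h^2 and w = a * h  =>  h = d / sqrt(1 + a^2)
float h = (float) (diagonalInches / Math.sqrt(1 + aspect * aspect));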
That was already the harder part.
Let's go on to the view matrix. The view matrix is used if your camera is not aligned with the world coordinate system. Now it depends on how you want to set up the world coordinate system. If you want the eye to be the origin, you don't need a view matrix at all. We could also specify the display as the origin like so:
//looking from 16 inches in front of the tablet to the origin
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 16, 0, 0, 0, 0, 1, 0);
Positioning the cube is equally easy. If you want it to have an edge length of 4 inches, then make a cube with edge length 4. If you want its center to be positioned 20 inches behind the screen, translate it by this amount (assuming the view matrix above):
Matrix.translateM(mModelMatrix, 0, 0, 0, -20);
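Putting the pieces together under the assumptions above (eye 16 inches in front of the screen, znear at the screen, cube modelled with edge length 4), a sketch using the question's variable names:

float znear = 16.0f; // only objects behind the tablet are visible
float zfar = 100.0f; // arbitrary, as long as it covers the scene
float fovy = (float) (2 * Math.atan(h / 2 / 16)); // radians

Matrix.perspectiveM(mProjectionMatrix, 0, fovy * 180.0f / (float) Math.PI, aspect, znear, zfar);
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 16, 0, 0, 0, 0, 1, 0); // display is the origin
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0, 0, -20); // cube centre 20 inches behind the screen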
I have gone through so many tutorials and also implemented some small apps in OpenGL. Still, I am confused about the mapping of the OpenGL coordinate system to the Android view coordinate system.
I faced the problem while trying to display a texture full screen. Somehow, by trial and error, I was able to show the texture full screen, but I have so many doubts that I could not proceed fast.
In OpenGL the coordinate system has its origin at the bottom left, whereas on the device the origin is at the top left. How are things mapped correctly to the device?
In OpenGL we specify vertices in the range -1 to 1. How does this range map to the device, where coordinates range from 0 to width and height?
Can vertices be mapped exactly the same way as the device coordinates? For example, can a vertex at 0,100 map to device coordinates 0,100?
While trying to show the texture full screen, I changed the code according to some blogs and it worked. Here are the changes:
glOrtho(0, width, height, 0, -1, 1); from glOrtho(0, width, 0, height, -1, 1);
and vertices[] = {
    0, 0,
    width, 0,
    width, height,
    0, height
};
from {
    -1, -1,
     1, -1,
    -1,  1,
     1,  1
};
Please help me understand the coordinate mapping.
When you set glOrtho to the width and the height, OpenGL is going to stretch that coordinate range to fit the device you are using. Say your width = 320 and height = 480: when you call glOrthof(0, width, height, 0, -1, 1), OpenGL stretches that range to fit your screen, so the coordinates can be whatever you want them to be - you choose them by setting the width and height you pass to glOrthof().
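A minimal sketch of that setup in onSurfaceChanged (GL10, top-left origin so y grows downwards like Android view coordinates):

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    // one world unit == one pixel, origin at the top left
    gl.glOrthof(0, width, height, 0, -1, 1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}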
When I'm applying a texture to a shape, I keep seeing it mirrored. GLU.gluLookAt is set to be 5 units up, so it's GLU.gluLookAt(gl, 0, 0, 5, 0, 0, 0, 0, 2, 0);. If it were 5 units down, the x axis would be reversed, and that would be an even bigger problem.
Can you please tell me how to mirror the bitmap that is being load to be a texture? I want to maintain the position of the axis and the shapes being drawn the way they are, I just want to automatically mirror the bitmap.
Can you please tell me how to do that? Perhaps give me a code sequence that mirrors the bitmap on the x axis?
You should be able to reverse the texture mapping coordinates you're using. For horizontal mirror, reverse the u values. For vertical, reverse v.
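A minimal sketch, assuming a quad with the usual texture coordinates; swapping the u values mirrors horizontally (swap the v values instead for a vertical mirror):

// original order: u runs 0 -> 1 from left to right
float texCoords[] = {
    1f, 0f, // was 0f, 0f
    0f, 0f, // was 1f, 0f
    1f, 1f, // was 0f, 1f
    0f, 1f  // was 1f, 1f
};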
This is my first post here, so I apologize for any blunders.
I'm developing a simple action game using OpenGL ES 2.0 and Android 2.3. The game framework I'm currently working on is based on two-dimensional sprites which exist in a three-dimensional world. Of course my world entities possess information such as position within the imaginary world, a rotation value in the form of a float[] matrix, an OpenGL texture handle, as well as an Android Bitmap handle (I'm not sure if the latter is necessary, as I'm doing the rasterisation with the OpenGL machinery, but for the time being it is just there, for my convenience). This is briefly the background; now to the problematic issue.
Presently I'm stuck with pixel-based collision detection, as I'm not sure which object (the OpenGL texture, or the Android Bitmap) I need to sample. I mean, I've already tried to sample the Android Bitmap, but it didn't work for me at all - many run-time crashes related to reading outside of the bitmap. Of course, to be able to read the pixels from the bitmap, I've used the Bitmap.createBitmap method to obtain a properly rotated sprite. Here's the code snippet:
android.graphics.Matrix m = new android.graphics.Matrix();
if(o1.angle != 0.0f) {
    m.setRotate(o1.angle);
    b1 = Bitmap.createBitmap(b1, 0, 0, b1.getWidth(), b1.getHeight(), m, false);
}
Another issue, which might add to the problem, or even be the main problem, is that my rectangle of intersection (the rectangle indicating the two-dimensional space shared by both objects) is built up from parts of two bounding boxes which were computed using the android.opengl.Matrix.multiplyMV functionality (code below). Could it be that the android.graphics and android.opengl matrix computations aren't equivalent?
Matrix.rotateM(mtxRotate, 0, -angle, 0, 0, 1);
// original bitmap size, equal to the sprite size in its model space,
// as well as in world space
float[] rect = new float[] {
    origRect.left,  origRect.top,    0.0f, 1.0f,
    origRect.right, origRect.top,    0.0f, 1.0f,
    origRect.left,  origRect.bottom, 0.0f, 1.0f,
    origRect.right, origRect.bottom, 0.0f, 1.0f
};
android.opengl.Matrix.multiplyMV(rect, 0, mtxRotate, 0, rect, 0);
android.opengl.Matrix.multiplyMV(rect, 4, mtxRotate, 0, rect, 4);
android.opengl.Matrix.multiplyMV(rect, 8, mtxRotate, 0, rect, 8);
android.opengl.Matrix.multiplyMV(rect, 12, mtxRotate, 0, rect, 12);
// computation of the object's bounding box (necessary as the object was
// just rotated and its axis-aligned bounding rectangle no longer matches it)
float left = rect[0];
float top = rect[1];
float right = rect[0];
float bottom = rect[1];
for(int i = 4; i < 16; i += 4) {
    left = Math.min(left, rect[i]);
    top = Math.max(top, rect[i+1]);
    right = Math.max(right, rect[i]);
    bottom = Math.min(bottom, rect[i+1]);
}
Cheers,
First, note that there is a bug in your code: you cannot use Matrix.multiplyMV() with the source and destination vector being the same. The function will correctly calculate the x coordinate, which it then overwrites in the source vector; however, it needs the original x to calculate the y, z and w coordinates, which are in turn flawed. Also note that it would be easier for you to use bounding spheres for the first collision detection step, as they do not require such complicated matrix transformation code.
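A minimal fix for that bug, sketched against the snippet above, is to write the transformed vectors into a separate destination array:

float[] out = new float[16];
for(int i = 0; i < 16; i += 4) {
    // source and destination must be different arrays
    android.opengl.Matrix.multiplyMV(out, i, mtxRotate, 0, rect, i);
}
// then compute the bounding box from 'out' instead of 'rect'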
Then, the collision detection. You should not read from bitmaps or textures. What you should do instead is build a silhouette for your object (that is pretty easy; a silhouette is just a list of positions). After that you need to build convex objects that fill the (non-convex) silhouette. This can be achieved by e.g. the ear clipping algorithm. It may not be the fastest, but it is very easy to implement and only has to be done once. Once you have the convex objects, you can transform their coordinates using a matrix and detect collisions with your world (there are many nice articles on ray-triangle intersections you can use), and you get the same precision as if you were using pixel-based collision detection.
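And for the bounding-sphere first pass suggested above, a minimal sketch (the centre and radius fields on o1/o2 are assumed, not from the original code):

// two circles overlap iff the distance between their centres is at most
// the sum of their radii; compare squared values to avoid the square root
float dx = o2.x - o1.x;
float dy = o2.y - o1.y;
float r = o1.radius + o2.radius;
boolean mayCollide = dx * dx + dy * dy <= r * r;
// run the precise (per-triangle) test only when mayCollide is true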
I hope it helps ...