Android OpenGL ES view question. In OpenGL, the default position of the camera and view is at (0, 0). How do you set up the view and camera so that it behaves basically like computer screen coordinates, with (0, 0) at the top? I've called gl.glOrthof(-screenWidth/2, screenWidth/2, -screenHeight/2, screenHeight/2), but I think this is wrong. I also need to set the camera to view the entire field. I'm not sure how to use gl.glFrustumf to accomplish this.
To use your vertex coordinates as screen coordinates, just use glOrtho(0, width, height, 0, -1, 1) on the projection matrix and keep your modelview matrix at identity (which is the default). Note that I flipped bottom and top, since in GL (0, 0) is at the lower left and you want it at the top (keep in mind that this also flips every object and therefore the triangle winding order). You also forgot to set the near and far planes (anything with a z outside that interval won't be displayed), but if you draw all your objects at z = 0 (which is the default when drawing only 2D vertices), all should be fine.
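For reference, a minimal sketch of that setup on Android with the ES 1.x (GL10) API the question uses; width and height are the values handed to onSurfaceChanged:

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    // (0,0) at the top left, (width,height) at the bottom right,
    // near/far at -1/1 so geometry drawn at z = 0 is visible
    gl.glOrthof(0, width, height, 0, -1, 1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}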
glFrustum is just an alternative to glOrtho. Where glOrtho constructs an orthographic (parallel) projection, glFrustum constructs a perspective projection. So you don't need glFrustum here.
I am using OpenGL to draw an image. When I try to move the image, it moves by too much. So if I say the following:
gl.glTranslatef(0, 1, -5.0f);
squirrel.draw(gl);
If I put 1 as a parameter, the image ends up halfway across the screen. How do I make it so I can say things like:
gl.glTranslatef(screen_width - image_width , 0);
Is there an alternative method for drawing images in OpenGL?
I previously used Canvas to draw images and had no problem positioning them on the screen. However, with OpenGL I'm experiencing issues.
All you need to remember is that the default coordinate space in OpenGL ranges from (-1, -1) at the bottom left to (1, 1) at the top right, so you need to provide normalized values to OpenGL. To move a point along the x direction from one end of the screen (-1.0) to the other (1.0), left to right, you have to translate by 2.0 using glTranslatef(2.0f, 0, 0). That point then sits on the border, so you will have to adjust depending on the actual size of your object and its location.
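As an illustration (the helper name is made up), converting a pixel offset into that normalized range, assuming the default projection and a known screen width in pixels:

// hypothetical helper: pixel offset -> normalized units (full screen width == 2.0)
float toNormalizedX(float pixels, float screenWidthPx) {
    return 2.0f * pixels / screenWidthPx;
}
// e.g. gl.glTranslatef(toNormalizedX(screenWidthPx - imageWidthPx, screenWidthPx), 0, 0);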
I'm using OpenGL ES 2.0 on Android and I initialise my display like so:
float ratio = (float) width / height;
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7); //Using Orthographic as developing 2d
What I'm having trouble understanding is this:
Let's say my app is a 'fixed screen' game (like Pac-Man, i.e. no scrolling, just the whole game visible on the screen).
Now at the moment, if I draw a quad from -1 to +1 on both x and y, I get a square centred on the screen that fills its full height but not its width.
Obviously, this is because I am setting -ratio, ratio as seen above. So this is correct.
But am I supposed to use this as my 'whole' screen? With rather massive letterboxing on the left and right?
I want a rectangular display that is the whole height of the physical display (and as much of the width as possible), but this would mean drawing at less than -1 and more than +1. Is this a problem?
I realise clipping might be the option if this were a scrolling game, but for this particular scenario I want the whole 'game board' on the screen and static (and to use as much of the available screen real estate as possible without 'stretching', which would elongate my sprites).
As I like to work with 0,0 as the top of the screen, basically what I do is pass my draw method something like so:
quad1.drawQuad (10,0);
When the drawQuad method gets this, it basically takes the range from left to right as exposed by OpenGL and divides it by the screen width (so, in my case, -1.7 through +1.7, i.e. 3.4/2560 = 0.001328125). Then, if I specify 10 as my X (as above), it works out something like:
-1.7 + (10*0.001328125) = -1.68671875
It then plots the quad at -1.68671875.
Doing this I am able to work with normal co-ords (and I just subtract rather than add for y axis so I can have 0 at the top).
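To make that concrete, here is a small sketch of the conversion being described (the names and the -1.7/+1.7 bounds are just the example values from above):

// hypothetical sketch: map pixel coordinates (origin top left) to GL coordinates
static float[] pixelsToGl(float pixelX, float pixelY,
                          float screenWidthPx, float screenHeightPx,
                          float glLeft, float glRight) {
    float unitsPerPixelX = (glRight - glLeft) / screenWidthPx; // e.g. 3.4 / 2560
    float unitsPerPixelY = 2.0f / screenHeightPx;              // top..bottom spans 2 units
    float glX = glLeft + pixelX * unitsPerPixelX;
    float glY = 1.0f - pixelY * unitsPerPixelY;                // pixel y = 0 maps to the top (+1)
    return new float[] { glX, glY };
}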
Is this a good way to do things?
Because with this method, at the moment, if I specify a 100x100 square, it isn't a square, it's a rectangle. On the plus side, though, I can fill the whole physical screen by scaling the quad by width x height.
You are drawing a 1x1 quad, so that is why you see a 1x1 quad. Try translating the quad 0.25 to the right or left and you will see that you can draw in that space too.
In graphics, you create an object, like a quad, in your case you made it 1x1. Then you position it wherever you want. If you do not position it, then it will be at the origin, which is what you see.
If you draw a wider shape, you will also see you can draw outside this area on the screen.
By the way, with your ortho matrix function you aren't just specifying the screen aspect ratio, you are also specifying the size of the coordinate units you have to work with. This is why a 1x1 quad fills the height of the screen: your upper and lower boundaries are set to 1 and -1. Your ratio is a little more than one, since your width is longer than your height, so your left and right boundaries are essentially something like -1.5 and 1.5 (whatever your ratio happens to be).
But you can also do something like this:
Matrix.orthoM(mProjMatrix, 0, -width/2, width/2, -height/2, height/2, 3, 7);
Here, your ratio is the same, but you are sending your ortho projection screen coordinates. (Disclaimer: I don't use the same math library you do, but this appears to be a conventional ortho matrix function based on the arguments you are passing to it.)
So let's say you have a 1000x500 pixel resolution. In OpenGL your origin of (0, 0) is in the middle. So now your left edge is at (-500, y), your right edge at (500, y), and your top at (x, 250). If you draw your 1x1 quad, it will be very tiny, but if you draw a 250x250 square, it will look like your 1x1 quad did in the previous ortho projection.
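A rough sketch of that pixel-unit setup with android.opengl.Matrix (assuming ES 2.0 as in the question; the quad is assumed to be a unit quad and the shader plumbing is omitted):

import android.opengl.Matrix;

// sketch: build a pixel-unit MVP that places a unit quad as a 250x250 px square
static float[] buildPixelMvp(int width, int height) {
    float[] proj = new float[16];
    // origin in the middle of the screen, units are pixels
    Matrix.orthoM(proj, 0, -width / 2f, width / 2f, -height / 2f, height / 2f, 3, 7);

    float[] model = new float[16];
    Matrix.setIdentityM(model, 0);
    // eye-space z = -5 lies between -near (-3) and -far (-7), so it is visible
    Matrix.translateM(model, 0, 100f, 50f, -5f); // 100 px right, 50 px up of centre
    Matrix.scaleM(model, 0, 250f, 250f, 1f);     // scale the unit quad to 250x250 px

    float[] mvp = new float[16];
    Matrix.multiplyMM(mvp, 0, proj, 0, model, 0);
    return mvp; // pass this to the vertex shader as the combined matrix
}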
So you can specify the coordinates you want, the ratio, the unit size, etc. for how you want to work. Personally, I don't like specifying coordinates as fractions between -1 and 1; I like to think about them in the same sense as the screen pixels.
But whether or not you choose to do this, hopefully you understand what you are actually passing to these matrix functions.
One of the best ways to learn is to draw an object to the screen and just play around with the different numbers you send to your modelview and projection matrices, so you can see what they are actually doing.
I am currently creating an Android game and implemented collision detection a while back. I simply draw a Rect around each sprite using its position, width and height, and check whether it intersects other Rects. However, my sprites now rotate depending on their trajectory, and I cannot find how to rotate the Rect so the bound stays correct. Any suggestions?
Thanks
Andy
Rect objects are usually axis-aligned, and so they only need 4 values: top, left, bottom, right.
If you want to rotate your rectangle, you'll need to convert it to eight values representing the co-ordinate of each vertex.
You can easily calculate the centre value by averaging all the x- and y-values.
Then it's just basic maths. Here's something from StackOverflow:
Rotating a point about another point (2D)
Your eight values, or four corners are (assuming counter-clockwise from the top right):
v0 : (right, top)
v1 : (left, top)
v2 : (left, bottom)
v3 : (right, bottom)
Create your own rectangle object to cope with this, and compute intersections etc.
Note that I've talked about how to rotate the rectangle's vertices. If you still want a bounding box, this is normally still considered to be axis-aligned, so you could take the max and min of the rotated vertices and construct a new (larger) rectangle. That might not be what you want though.
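As a sketch (plain Java, the method name is made up), rotating the four corners about the centre and then taking the axis-aligned bounds of the result could look like this:

import android.graphics.Rect;
import android.graphics.RectF;

// rotate the corners of an axis-aligned Rect about its centre (angle in radians)
// and return the axis-aligned bounding box of the rotated corners
static RectF rotatedBounds(Rect r, float angle) {
    float cx = r.exactCenterX(), cy = r.exactCenterY();
    float[] xs = { r.left, r.right, r.right, r.left };
    float[] ys = { r.top,  r.top,   r.bottom, r.bottom };
    float cos = (float) Math.cos(angle), sin = (float) Math.sin(angle);

    float minX = Float.MAX_VALUE, minY = Float.MAX_VALUE;
    float maxX = -Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
    for (int i = 0; i < 4; i++) {
        // standard 2D rotation of (x, y) about (cx, cy)
        float dx = xs[i] - cx, dy = ys[i] - cy;
        float rx = cx + dx * cos - dy * sin;
        float ry = cy + dx * sin + dy * cos;
        minX = Math.min(minX, rx); maxX = Math.max(maxX, rx);
        minY = Math.min(minY, ry); maxY = Math.max(maxY, ry);
    }
    return new RectF(minX, minY, maxX, maxY);
}

RectF.intersects on the enlarged boxes is then cheap, at the cost of false positives near the corners; for exact tests you would intersect the rotated corner polygons instead.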
My question may be a bit unclear, but I have extended the View class and generated a number of shapes on the canvas around (0,0). I want to put this point in the middle, so I have to tell the View that it has to draw horizontally, for example, from -640 to 640 on the x-axis and vertically, for example, from -360 to 360 on the y-axis.
Is there a way to tell the View to draw this range without changing the coordinates of the drawn shapes? I just want to tell the View which coordinate range it should display.
I want to be able to change dynamically which area is drawn.
I'm not 100% sure what you are trying to achieve, but if you want to move and scale your shapes, you can use the Canvas translate and scale methods to move the canvas underneath your shapes. Remember that it is the canvas you transform, not the shape, so the transformations have to be done in reverse. You should also use the Canvas save and restore methods to restore the canvas position between transformations.
If you instead want to limit any drawing to an area, you can use the canvas clip-methods, for example:
canvas.clipRect(-640, -360, 640, 360);
This would cause any drawing outside that rectangle to be discarded.
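For instance, a minimal onDraw sketch inside your View subclass, combining the translate and the clip (drawMyShapes stands in for your existing drawing code):

@Override
protected void onDraw(Canvas canvas) {
    super.onDraw(canvas);
    canvas.save();
    // move the origin to the centre so shapes drawn around (0,0)
    // appear in the middle of the view
    canvas.translate(getWidth() / 2f, getHeight() / 2f);
    // optionally restrict drawing to the -640..640 / -360..360 area
    canvas.clipRect(-640, -360, 640, 360);
    drawMyShapes(canvas); // hypothetical: your existing shape drawing
    canvas.restore();
}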
I have an OpenGL ES 1.x Android 1.5 app that shows a square with a perspective projection, in the center of the screen.
I need to move the camera (NOT THE SQUARE) when the user moves a finger on the screen. For example, if the user moves the finger to the right, the camera must move to the left, so that it looks as if the user is moving the square.
I need to do it without translating the square. The square must always stay at the OpenGL position (0, 0, -1).
I DON'T WANT to rotate the camera around the square; what I want is to move the camera from side to side. Code examples are welcome, since my OpenGL skills are very limited and I can't find good examples for this on Google.
I know that I must use this function: public static void gluLookAt(GL10 gl, float eyeX, float eyeY, float eyeZ, float centerX, float centerY, float centerZ, float upX, float upY, float upZ), but I don't understand where and how to get the values for the parameters. Because of this, I would appreciate code examples.
For example: I have a cube at position (0, 0, -1) and I want my camera to point at it. I tried this: GLU.gluLookAt(gl, 0, 0, 2, 0, 0, 0, 0, 0, 1); but the cube is not on the screen, and I just don't understand what I'm doing wrong.
First of all, you have to understand that in OpenGL there are no distinct model and view matrices; there is only a combined modelview matrix. So OpenGL doesn't care (or even know) whether you translate the camera (what is a camera, anyway?) or the object, which means your requirement not to move the square is entirely artificial. It may still be a valid requirement, and the distinction between model and view transformations is often very practical; just don't think that translating the square is in any way different from translating the camera from OpenGL's point of view.
Likewise, you don't necessarily need to use gluLookAt. Like glOrtho, glFrustum or gluPerspective, this function just modifies the currently selected matrix (usually the modelview matrix), no different from the glTranslate, glRotate or glScale functions. gluLookAt comes in handy when you want to position a classical camera, but its functionality can also be achieved by calls to glTranslate and glRotate, and sometimes (depending on your requirements) that is even easier than artificially mapping your view parameters onto gluLookAt parameters.
Now to your problem, which is indeed solvable quite easily without gluLookAt: what you want to do is move the camera in a direction parallel to the screen plane, which is equivalent to moving the camera in the x-y plane in view space (or camera space, if you like). And that, in turn, is equivalent to moving the scene in the opposite direction in the x-y plane in view space.
So all that needs to be done is
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x, y, 0.0f);
//camera setup...
Here (x, y) is the movement vector determined from the touch events, appropriately scaled (try dividing the touch coordinates you get by the screen dimensions, for example). After this glTranslate come whatever other camera or scene transformations you already have (be it gluLookAt or just some glTranslate/glRotate/glScale calls). Just make sure the glTranslatef(x, y, ...) is the first transformation applied to the modelview matrix after setting it to identity, since we want to move in view space.
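For the touch part, a rough sketch inside a GLSurfaceView subclass; dividing the pixel deltas by the view size is just one reasonable scaling, tune it to taste:

// accumulated pan in view-space units, read by the renderer each frame
private float panX = 0f, panY = 0f;
private float lastX, lastY;

@Override
public boolean onTouchEvent(MotionEvent e) {
    switch (e.getAction()) {
        case MotionEvent.ACTION_DOWN:
            lastX = e.getX();
            lastY = e.getY();
            return true;
        case MotionEvent.ACTION_MOVE:
            // dragging right moves the scene with the finger (camera effectively moves left);
            // screen y grows downwards while GL y grows upwards, hence the minus
            panX += (e.getX() - lastX) / getWidth();
            panY -= (e.getY() - lastY) / getHeight();
            lastX = e.getX();
            lastY = e.getY();
            return true;
    }
    return super.onTouchEvent(e);
}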
So you don't even need gluLookAt. From your other questions I know your code already looks something like
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x, y, z);
glRotatef(...);
...
So everything you need to do is plug the x and y values determined from the touch movement into the first glTranslate call (or add them to already existing x and y values), since multiple translations are perfectly commutative.
For more insight into OpenGL's transformation pipeline (which is definitely needed before progressing further), you may also look at the answers to this question.
EDIT: If you do want to use gluLookAt (whether instead of or in addition to the translation described above), here are a few words about how it works. It defines a camera using three 3D vectors (each passed in as 3 consecutive values). First comes the camera's position (in your case (0, 0, 2)), then the point the camera looks at (in your case (0, 0, 0), though (0, 0, 1) or (0, 0, -42) would give the same camera, since only the direction matters). Last comes an up-vector defining the approximate up-direction of the camera (which gluLookAt further orthogonalizes to build a proper orthogonal camera frame).
But since the up-vector in your case is the z-axis, which is just the negated viewing direction, this results in a singular matrix. You probably want the y-axis as the up-direction, which would mean a call to
gluLookAt(0,0,2, 0,0,0, 0,1,0);
which is in turn equivalent to a simple
glTranslate(0, 0, -2);
since you use the negative z-axis as viewing direction, which is also OpenGL's default.
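Putting it together for the ES 1.x code in the question, the modelview setup might then look like this sketch (panX/panY being the scaled touch deltas from above):

gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
// pan parallel to the screen plane (the scene moves opposite to the camera)
gl.glTranslatef(panX, panY, 0f);
// equivalent of gluLookAt(0,0,2, 0,0,0, 0,1,0): back the camera up 2 units
gl.glTranslatef(0f, 0f, -2f);
// ... draw the square at (0, 0, -1) ...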