This may sound really obvious, but how can I find the points along the edge of the screen so that I can tell when a moving object has hit it?
Thank you.
Given that you are using an orthographic projection, you know exactly where your edges are from the values you passed into the glOrtho() function.
For example, consider this line:
glOrtho(0.0f, width, 0.0f, height, 0.0f, 1.0f);
Here you decide how much of the screen a single unit of 1.0 covers in your program; see the glOrtho reference documentation for more details.
However, in this scenario your left edge is 0.0 and your bottom edge is 0.0. Your top edge is defined by height and your right edge is defined by width.
This way you don't have to rely on a specific screen size, since your world is always exactly as big as you define it. You don't have to pass the physical width and height of the display into glOrtho at all; you can instead pass whatever values describe how big your world should be.
If you do want the orthographic region to match the physical screen in pixels, you can get the display size like this:
Display display = getWindowManager().getDefaultDisplay();
int width = display.getWidth();
int height = display.getHeight();
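With the glOrtho(0, width, 0, height, ...) bounds above, checking whether a moving object has hit an edge is then just a comparison against 0, width and height. A minimal sketch, assuming a square object described by hypothetical x, y and size values in the same units:

boolean hitsScreenEdge(float x, float y, float size, float width, float height) {
    return x <= 0f                // left edge
        || x + size >= width      // right edge
        || y <= 0f                // bottom edge (glOrtho bottom is 0)
        || y + size >= height;    // top edge
}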
I wonder if there is an easy way to flip the y-coordinates when using perspective projection? The threads about this issue seem to be focused on orthographic projection. I am porting my Canvas-based game to OpenGL ES 2.0 and have relatively complex collision detection, and a lot of the logic assumes the y-axis starts at 0 at the top of the screen and ends at the bottom, for instance at 2560.
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    game_width = width;
    game_height = height;
    GLES20.glViewport(0, 0, width, height);

    // The height is fixed at [-1, 1] while the width varies with the aspect ratio.
    final float ratio = (float) width / height;
    final float left = -ratio;
    final float right = ratio;
    final float bottom = -1.0f;
    final float top = 1.0f;
    final float near = 1f;
    final float far = 40.0f;
    Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
There is very little difference between using an orthographic and a frustum matrix here, so the simplest answer is to just swap the bottom and top parameters, or set them to whatever you need.
But to look into frustum a bit more:
What this method does is create a matrix that scales objects depending on their distance from the near plane. It is designed so that an object at near is scaled by 1.0: for instance, if you draw a rectangle whose x and y coordinates are left, right, top and bottom and whose z is near, using no other matrix but the frustum, the result is exactly a full-screen rectangle.
Objects closer than near will not be drawn, and those further away will be scaled down depending on all parameters except far. The far parameter affects nothing except where your objects stop being drawn, so in most cases a very large far value makes no visible difference, with one important exception: a large far value reduces the precision of the depth test. So when using a depth buffer, keep this value as small as possible while still large enough to contain all your objects.
In most cases the frustum is defined from a field-of-view angle. You pick constant near, far and fov values and compute the border parameters from them, for example right = tan(fov/2)*near and top = right*(viewHeight/viewWidth) for a horizontal fov. These are just examples though, as there are many ways to define it.
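A small sketch of that fov-based setup, assuming a horizontal fov; fovDegrees, viewWidth, viewHeight, near and far stand in for whatever your application uses:

float fovRadians = (float) Math.toRadians(fovDegrees);
float right  = (float) Math.tan(fovRadians / 2.0) * near;
float left   = -right;
float top    = right * ((float) viewHeight / (float) viewWidth);
float bottom = -top;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);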
In your case there is no reason not to define these values however you please, for example left = 0.0, right = width, bottom = height and top = 0.0 to get the top-left origin you are used to. But you still need to define near and far values, which must be positive, and if your objects then sit at a distance of 0.0 they will all be clipped.
To avoid this it is best to use a lookAt procedure, which generates another matrix defining the "camera" position in your scene. By simply putting it at z = -near you should see the objects exactly as you would with an orthographic projection. The problem now is that if you want to "zoom in" by moving the camera closer to the objects, those objects will again not be drawn.
To achieve something like that you need to define some maximum scale, for instance maxZoom = 10.0, and divide all of the border parameters (top, left...) by that value. You then also apply this scale to the z value in your lookAt matrix so the scene appears unzoomed by default.
So in general, to flip the coordinates you can modify the border values or play with the lookAt matrix. There are other ways as well, but these are pretty standard. I hope this clears up a few things for you.
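Putting those two ideas together, a rough sketch for the onSurfaceChanged above: swap top and bottom to flip the y-axis, then back the camera off by the near distance so geometry at z = 0 is not clipped. mViewMatrix and mMVPMatrix are assumed fields analogous to mProjectionMatrix.

final float ratio = (float) width / height;
// bottom = 1, top = -1 flips the y-axis so y grows downwards
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, 1.0f, -1.0f, 1f, 40.0f);
// the camera sits one unit (the near distance) in front of the z = 0 plane
Matrix.setLookAtM(mViewMatrix, 0,
        0f, 0f, 1f,   // eye
        0f, 0f, 0f,   // center
        0f, 1f, 0f);  // up
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0);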
I am trying to render a Sprite onto my phone screen. My world has a size of 100x100 units and I would like to split it into 10 equal rectangles (10 x 100 units each).
Each of them will be viewed full screen, and I want the camera to scroll from one to the next according to the character's movement (as the character reaches the halfway point of the rectangle's width).
The problem is that the camera zooms in too far on the Sprite area, and the rendered Sprite doesn't respect the aspect ratio of the PNG file.
Should I use a ShapeRenderer rectangle the same size as the phone screen, fill it with parts of the Sprite, and then somehow scale that rectangle to preserve the aspect ratio of the PNG file?
Please advise me as to what is best.
If you do not specify units then OrthographicCamera has an accessible zoom field, but it is always best to specify exactly what you want.
If you want to have 10 "things" next to each other fitting on the camera, I would just specify that:
int thingsWidth = 1;     // 1 could stand for one meter
int amountOfThings = 10;
// give your texture/image/sprite the width of thingsWidth

@Override
public void resize(int width, int height)
{
    float camWidth = thingsWidth * amountOfThings;
    // You probably want to keep the aspect ratio of the window,
    // so derive the camera height from the camera width.
    float camHeight = camWidth * ((float) height / (float) width);
    camera.viewportWidth = camWidth;
    camera.viewportHeight = camHeight;
    camera.update();
}
This is basically how the camera works with a regular ScreenViewport, since we did not specify a specific viewport.
I'm not sure exactly what you want to achieve, but a Scene2D Table could work in your favor too. You just set table.setFillParent(true); then add 10 of your images to the table using something like table.add(someActor).expand().fill(). Now all your actors expand to fill the vertical space and share the horizontal space, and it does not matter how you set up your camera since the table takes care of the layout.
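A rough sketch of that Table idea, assuming a Stage and ten already-created Image actors (the variable names here are illustrative, not from your code):

Table table = new Table();
table.setFillParent(true);                // the table covers the whole stage
for (Image image : images) {
    table.add(image).expand().fill();     // cells fill vertically and share the width
}
stage.addActor(table);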
I'm using OpenGL ES 2.0 on Android and I initialise my display like so:
float ratio = (float) width / height;
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7); //Using Orthographic as developing 2d
What I'm having trouble understanding is this:
Let's say my app is a 'fixed screen' game (like Pac-Man ie, no scrolling, just the whole game visible on the screen).
Now, at the moment, if I draw a quad from -1 to +1 on both x and y, it fills the full height of the screen but leaves unused space on the left and right.
Obviously, this is because I am setting -ratio, ratio as seen above. So this is correct.
But am I supposed to use this as my 'whole' screen? With rather massive letterboxing on the left and right?
I want a rectangular display that is the whole height of the physical display (and as much of the width as possible), but this would mean drawing at less than -1 and more than +1. Is this a problem?
I realise clipping might be an option if this were a scrolling game, but for this particular scenario I want the whole 'game board' on the screen and static (and to use as much of the available screen real estate as possible without 'stretching', which would elongate my sprites).
As I like to work with 0,0 as the top of the screen, basically what I do is pass my draw method something like so:
quad1.drawQuad (10,0);
When the drawQuad method gets this, it takes the left-to-right range as expressed by OpenGL and divides it by the screen width in pixels (in my case -1.7 through +1.7, so 3.4 / 2560 = 0.001328125). So if I specify 10 as my X (as above), it calculates something like:
-1.7 + (10*0.001328125) = -1.68671875
It then plots the quad at -1.68671875.
Doing this I am able to work with normal co-ords (and I just subtract rather than add for y axis so I can have 0 at the top).
Is this a good way to do things?
Because with this method, at the moment, if I specify a 100,100 square, it isn't a square, it's a rectangle. However, on the plus side, I can fill the whole physical screen by scaling the quad by width x height.
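For reference, the mapping described here boils down to something like the following sketch; ratio and the pixel sizes are the numbers from this question, and the method names are purely illustrative:

// converts pixel-style coordinates (origin at the top-left) into OpenGL coordinates
float toGlX(float xPixels) { return -ratio + xPixels * (2f * ratio / screenWidthPixels); }
float toGlY(float yPixels) { return  1f - yPixels * (2f / screenHeightPixels); }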
You are drawing a 1x1 quad, so that is why you see a 1x1 quad. Try translating the quad 0.25 to the right or left and you will see that you can draw in that space too.
In graphics, you create an object, like a quad, in your case you made it 1x1. Then you position it wherever you want. If you do not position it, then it will be at the origin, which is what you see.
If you draw a wider shape, you will also see you can draw outside this area on the screen.
By the way, with your ortho matrix function, you aren't just specifying the screen aspect ratio, you are also specifying the coordinate unit size you have to work with. This is why your quad fills the height of the screen: your upper and lower boundaries are set to 1 and -1. Your ratio is a little more than one, since your width is longer than your height, so your left and right boundaries are essentially something like -1.5 and 1.5 (whatever your ratio happens to be).
But you can also do something like this:
Matrix.orthoM(mProjMatrix, 0, -width / 2f, width / 2f, -height / 2f, height / 2f, 3, 7);
Here, your ratio is the same, but you are sending screen coordinates to your ortho projection. (Disclaimer: I don't use the same math library you do, but this appears to be a conventional ortho matrix function based on the arguments you are passing to it.)
So let's say you have a 1000x500 pixel resolution. In OpenGL your origin of (0,0) is in the middle. So now your left edge is at (-500, y), your right edge at (500, y) and your top at (x, 250). So if you draw your 1x1 quad it will be very tiny, but if you draw a 250x250 square, it will look like your 1x1 quad did in your previous ortho projection.
So you can specify the coordinates you want, the ratio, the unit size, etc. for how you want to work. Personally, I don't like specifying coordinates as fractions between 0 and 1, I like to think about them in the same sense as the screen pixels.
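Since you mentioned you like to work with 0,0 at the top of the screen, a rough sketch of a pixel-based setup with a top-left origin (reusing your mProjMatrix and onSurfaceChanged, and keeping your near/far of 3 and 7) might look like:

@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // left = 0, right = width, bottom = height, top = 0 puts the origin at the
    // top-left and makes one unit equal one pixel, so a 100x100 quad is square.
    Matrix.orthoM(mProjMatrix, 0, 0f, width, height, 0f, 3f, 7f);
}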
But whether or not you choose to do this, hopefully you understand what you are actually passing to these matrix functions.
One of the best ways to learn is draw an object to the screen and just play around with different numbers you send to your modelview and projection matrices so you can see what it is they are actually doing.
I'm writing a little cross platform game engine for iOS, Android and BADA. I have a question about setting the perspective to be consistent regardless of screen resolution and ratio.
I have the following set up for my flipped normalized orthographic projection which works fine:
glViewport(0, 0, mWidth, mHeight);
glMatrixMode(GL_PROJECTION);
glOrthof(-1.0, //LEFT
1.0, //RIGHT
-1.0 * mHeight / mWidth, //BOTTOM
1.0 * mHeight / mWidth, //TOP
-2.0, //NEAR
100.0); //FAR
On my iPhones this is fine and I get the desired position of world objects, but on some Android devices and on iPads the positions shift even though the correct aspect ratio is retained.
All of the meshes have the right proportions, but their positions obviously change, so that something aligned to the bottom of the screen on the iPhone will be drawn partly off the screen on the iPad.
So the question:
Is this correct and I need to place objects relative to the width and height of the viewport?
Or
Is there a way of setting up the orthographic projection so that, regardless of screen ratio, the positioning remains constant without distorting world objects?
From what I know and the math I have done, I suspect the second isn't an option, because the projection is defined based on the ratio.
The iPad and iPhone screens have different proportions. The iPhone is 3:2 and the iPad is 4:3. Android phones have too wide a range of proportions to list, and I wouldn't like to comment on Bada. So unless you're going to show your image in a letterbox or pillarbox, or stretch it so that the aspect ratio changes from device to device, the amount of your internal world that's visible is going to change between devices.
At the moment you've fixed the left and right parameters while calculating top and bottom from the screen proportions. So you'll get exactly the same amount of your world across the screen on every device, but the amount visible vertically will change.
If your game involves a camera moving in 3d then there's really not much you can do about it. But since you're talking about things being aligned to sides of the screen I guess the camera moves in 2d?
As a general rule, if the camera moves along the vertical then you probably want to keep what you have. Level layouts that are exactly the width of the screen will be the width of everybody's screen. Devices with taller, narrower screens will be able to see further ahead or behind, but there you go.
If the camera moves along the horizontal then you probably want to switch to supplying fixed values for top and bottom, and calculating left and right as per the aspect ratio. So I guess that'd be:
glOrthof(-1.0 * mWidth / mHeight, //LEFT
1.0 * mWidth / mHeight, //RIGHT
-1.0 , //BOTTOM
1.0 , //TOP
-2.0, //NEAR
100.0); //FAR
In terms of being able to close this all off inside a library, you'll probably just need to receive a flag as to whether logical viewport width should be fixed and height adapted to the screen or height fixed and width adapted.
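A hedged sketch of that flag, written here against the Android GL10 binding (the same pattern applies on the other platforms; the method name and parameters are illustrative):

void setProjection(GL10 gl, boolean fixWidth, float mWidth, float mHeight) {
    float aspect = mHeight / mWidth;
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    if (fixWidth) {
        // horizontal extent fixed at [-1, 1], vertical extent follows the screen
        gl.glOrthof(-1.0f, 1.0f, -aspect, aspect, -2.0f, 100.0f);
    } else {
        // vertical extent fixed at [-1, 1], horizontal extent follows the screen
        gl.glOrthof(-1.0f / aspect, 1.0f / aspect, -1.0f, 1.0f, -2.0f, 100.0f);
    }
}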
I am trying to learn OpenGL on Android. In the gl.glTranslatef(x, y, z) call, I am shifting my texture by some units in the positive x direction, but I am unable to find out how many pixels 1 unit of x corresponds to.
Here is what I am doing:
I call gl.glViewport(0, 0, width, height); // this sets my rendering rectangle with (0,0) as the lower-left corner, extended to the given width and height.
Then
I call gl.glFrustumf(-5, 5, -7, 7, 3, 7); // I am a little confused about how this call uses the dimensions I set in gl.glViewport.
How will the -5 to 5 units from left to right in the above call translate to pixels on the Android screen?
I mean, if width = 320 and height = 533 pixels, what number of pixels will be occupied on the screen as a result of the gl.glFrustumf call?
I have been experimenting with the gl.glTranslatef call: specifying an x shift of 5.0 does not move the bitmap to the right or left edge of the screen, and when I increase it to 6, part of it is still visible on the screen.
Thanks
Siddhesh
In short, I am searching for the maximum number of units (in terms of x) that corresponds to the extreme edges of my Android phone's screen.
glViewport tells it what rectangle (in pixels) your OpenGL output should be displayed in.
glFrustum tells it what coordinates in your "world" units should be mapped to that viewport.
An important point: your glFrustum call includes not only a height and width, but also a depth. Since you are specifying a Frustum, not a cube, that means anything with a Z coordinate anywhere but the very front of your frustum will be scaled down appropriately for its distance from the viewer.
As such, when you do a glTranslatef, the distance by which a particular object moves (in terms of pixels) depends on its distance from the viewer: the further away it is, the fewer pixels a given sideways or up/down shift translates to.
Depending on what else you're doing, one easy way to deal with this might be to use glOrtho instead of glFrustum. glOrtho gives orthographic mode, which means no perspective scaling is done, so a given X or Y distance will translate to the same number of pixels, regardless of distance from the viewer.
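To put numbers on that for the values in the question (glViewport(0, 0, 320, 533) and glFrustumf(-5, 5, -7, 7, 3, 7)), here is a small sketch of the arithmetic; the helper name is purely illustrative:

static float pixelsPerUnitX(int viewportWidth, float left, float right,
                            float near, float z) {
    // At depth z the frustum spans (right - left) * z / near world units across,
    // so one x unit covers viewportWidth * near / ((right - left) * z) pixels.
    return viewportWidth * near / ((right - left) * z);
}
// pixelsPerUnitX(320, -5f, 5f, 3f, 3f) -> 32 pixels per unit at the near plane
// pixelsPerUnitX(320, -5f, 5f, 3f, 7f) -> about 13.7 pixels per unit at the far plane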