I am trying to emulate point sprites in Android using OpenGL, specifically the characteristic of the size staying the same if the view is zoomed.
I do not want to use point sprites themselves, as they pop out of the frustum when the "point" reaches the edge, regardless of size. I do not want to go down the route of an orthographic projection either.
Say I have a billboarded square with a size of 1: when the user zooms in, I would need to decrease the size of the square so it looks the same size, and if the user zooms out, I increase it. I have the projection and model matrices to hand if these are required, as well as the FOV. My head just goes blank every time I sit down and think about it! Any ideas on the necessary algorithm?
Ok, by changing the field of view to zoom into the environment, I divide the quad size by (max_fov / current_fov). It works for me.
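A rough sketch of that rule in Java, using android.opengl.Matrix (the names baseSize, maxFovDeg and currentFovDeg are just placeholders for whatever you track in your own code):

    import android.opengl.Matrix;

    /** Minimal sketch: keep a billboarded quad the same size on screen while
     *  zooming by changing the field of view. Names are placeholders. */
    final class BillboardScale {

        /** baseSize is the quad size at maxFovDeg; currentFovDeg is the zoomed FOV. */
        static float scaledSize(float baseSize, float maxFovDeg, float currentFovDeg) {
            // Zooming in lowers the FOV, so maxFovDeg / currentFovDeg > 1
            // and the quad is shrunk accordingly (the rule from the answer above).
            return baseSize / (maxFovDeg / currentFovDeg);
        }

        /** Apply the size as a scale in the quad's model matrix. */
        static void applyTo(float[] modelMatrix, float size) {
            Matrix.setIdentityM(modelMatrix, 0);
            Matrix.scaleM(modelMatrix, 0, size, size, 1f);
        }
    }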
I am using OpenGL ES 2.0 (GLES20) with Android and I would like to know how to do the following:
I think it's easier to explain with a picture...
How can I rectify this stretching? Note: I am working in 2D.
I've heard this problem is solved using something called a projection matrix. I have also read a Stack Overflow question saying that the Android documentation for setting up a projection matrix is not good. I have tried it personally and couldn't get it to work.
This question is extremely poorly put. In the left image you have a rectangular view with a [0,1]x[0,1] coordinate system and a correctly drawn triangle, while on the right you have the same view with the view coordinate system and a stretched triangle... Taking these two things into consideration, your triangle coordinates are already stretched to begin with (or there is an extra model-view matrix). If they weren't stretched, the triangle would be drawn correctly in the right image.
It is a very common issue that your scene is stretched when dealing with different view aspect ratios. In general, to solve this you are looking for something like glOrtho, which lets you define your coordinate system. The input parameters for this method are left, right, top and bottom, and it is easiest to simply use screen coordinates (like those presented in the right image). Another approach is to normalise this input to either [0,1]x[0,height/width] or [0,width/height]x[0,1]. These two methods represent "fit" and "fill", and which is which depends on whether the view's width is smaller or larger than its height (portrait, landscape).
When using a correct orthographic matrix your square will always be a square, without using any additional matrices or multiplying the vertex arrays... In your case it seems you have already multiplied your vertices, so I suggest you remove that, all of it. If you cannot, and those vertices will continue to be scaled incorrectly, I suggest you use a model-view matrix to rescale them.
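For what it's worth, a minimal sketch of the "normalised" variant on Android with android.opengl.Matrix (the method and matrix names are my own, not from the question):

    import android.opengl.Matrix;

    // Called from onSurfaceChanged(GL10 gl, int width, int height).
    // Builds an orthographic projection whose shorter axis is [0,1] and whose
    // longer axis is stretched by the aspect ratio, so a unit square stays square.
    void setOrtho(float[] projectionMatrix, int width, int height) {
        if (width >= height) {                       // landscape: x runs 0..width/height
            float aspect = (float) width / height;
            Matrix.orthoM(projectionMatrix, 0, 0f, aspect, 0f, 1f, -1f, 1f);
        } else {                                     // portrait: y runs 0..height/width
            float invAspect = (float) height / width;
            Matrix.orthoM(projectionMatrix, 0, 0f, 1f, 0f, invAspect, -1f, 1f);
        }
    }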
Drawing 2D graphics requires only two coordinates, and by default the Z coordinate is 0. Is it possible to use that Z coordinate to adjust graphics sizes? Let's say for larger screens I set Z to be 0, but when the screen is small (ldpi) I set Z to be, say, -5 units, and the whole graphics fit into the screen. Is it good practice? Is it even possible to do it like that?
To adjust your graphics to the screen size (and rotation) you should adjust the OpenGL viewport size.
Not sure what exactly you plan to do with the Z coordinate, but it doesn't look like a good approach to me.
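In GLSurfaceView.Renderer terms that just means the standard onSurfaceChanged boilerplate, something like:

    import android.opengl.GLES20;
    import javax.microedition.khronos.opengles.GL10;

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        // Match the OpenGL viewport to the new surface size/rotation.
        GLES20.glViewport(0, 0, width, height);
    }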
Looks like you plan to use the Z coordinate to zoom in or out so that the scene fits correctly into the screen. It is a valid point; you can easily do that by "hacking" the projection matrix that way. The only drawback I truly see is that you need to send one more coordinate per vertex down your pipeline. It would be much easier to just set a global scaling factor that is either stored in the model-view-projection matrix or passed to the vertex shader.
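A sketch of that last suggestion using android.opengl.Matrix and GLES20 (the parameter names and the uniform handle are made up for the example, not part of any particular codebase):

    import android.opengl.GLES20;
    import android.opengl.Matrix;

    // One global scale factor instead of a per-vertex Z hack.
    // viewProjection is your usual projection (ortho or perspective) matrix,
    // uMvpLocation is the uniform handle from glGetUniformLocation -- both assumed.
    void drawScaled(float[] viewProjection, int uMvpLocation, float globalScale) {
        float[] model = new float[16];
        float[] mvp = new float[16];

        Matrix.setIdentityM(model, 0);
        Matrix.scaleM(model, 0, globalScale, globalScale, 1f);
        Matrix.multiplyMM(mvp, 0, viewProjection, 0, model, 0);

        // Every vertex that goes through this matrix is scaled "for free".
        GLES20.glUniformMatrix4fv(uMvpLocation, 1, false, mvp, 0);
        // ... issue the draw calls as usual ...
    }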
My guess is (and I mean no disrespect) that you know little about 2D rendering and came up with this idea. It's actually not that bad, it's a good first approach, but this area has well-established techniques. You should stick to the standard way of dealing with it unless you really know what you are doing.
The standard way is to use projection matrices (or cameras, at a higher level of abstraction). When using projections you define your "world coordinates". The projection maps your world to the GL viewport (usually the whole screen), so no matter the device screen size, you always show the same portion of the world. Note that you'll still have to deal with stretching.
I don't know if I'm really answering your question; this is not really what you asked, but what I think you wanted to ask. You shouldn't bother with Z components if you use an orthographic projection (which is typical for 2D).
So you'd want to add a "fake" depth to your 2D app?
With an orthographic projection (used in most of the 2D rendering world), it would be completely useless.
With a perspective projection, it would surely lead to many subpixel glitches when texture minification occurs, or to blurring in case of magnification.
You could resize your sprites or, better, create a set of baked sprites of different sizes.
I am using textured quads to render a grid of tiles from a sprite sheet. Unfortunately when rendered, there are small gaps between the individual tiles:
Changing the texture parameters to scale the texture using GL_NEAREST rather than GL_LINEAR fixes this, but results in artifacts within the textured quad itself. Is there some way to prevent GL_LINEAR from interpolating using pixels outside of the specified UV coordinates? Any other suggestions for how to fix this?
For reference, here's the sprite sheet I am using:
Looks like a precision problem with your texture maps. Are you using floats (32-bit) or something smaller? And how do you calculate the coordinates?
Also, leaving a 1-pixel border between textures sometimes helps (you sometimes get a rounding error no matter what).
Myself, I use http://www.texturepacker.com/ (not affiliated in any way); you get the texture map and UV coordinates from it, you can specify padding around the textures, and it can also extrude the last color around your texture, so even if you get weird rounding problems you can always get a perfect seam.
I would check your precision and calculations first, though.
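One common workaround (not from the answer above, just a standard technique) is to inset each tile's UVs by half a texel so GL_LINEAR never samples the neighbouring tile; the names here are placeholders:

    // Compute UVs for one tile in a sprite sheet, inset by half a texel so
    // bilinear filtering never bleeds into the neighbouring tile.
    // sheetWidth/sheetHeight: sprite sheet size in pixels.
    // tileX/tileY/tileW/tileH: the tile's pixel rectangle in the sheet.
    float[] tileUvs(int sheetWidth, int sheetHeight,
                    int tileX, int tileY, int tileW, int tileH) {
        float halfTexelU = 0.5f / sheetWidth;
        float halfTexelV = 0.5f / sheetHeight;

        float u0 = (float) tileX / sheetWidth + halfTexelU;
        float v0 = (float) tileY / sheetHeight + halfTexelV;
        float u1 = (float) (tileX + tileW) / sheetWidth - halfTexelU;
        float v1 = (float) (tileY + tileH) / sheetHeight - halfTexelV;

        // Order: bottom-left, bottom-right, top-left, top-right (triangle strip).
        return new float[] { u0, v1, u1, v1, u0, v0, u1, v0 };
    }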
I'm having difficulty understanding the OpenGL perspective view. I've read tons of information, but it hasn't helped me achieve what I'm after: making sure my 3D scene fills the entire screen on every Android device.
To test this, I will be drawing a quad in 3D space which should touch every corner, filling up the whole device's screen. I could then use this quad, or actually its coordinates, to specify a bounding box at a certain Z distance in which to place my geometry, making sure it fills the screen. When the screen resizes, or I run the app at another screen resolution, I would recalculate this bounding box and the geometry. I'm not talking about static geometry; for instance, say I want to fill the screen with balls: it doesn't matter how big or how many balls there are, the only important thing is that the screen is filled and there are no redundant balls outside the visible frustum.
As far as I understand, when specifying the viewport you actually bind pixel values to the frustum's boundaries. I know that you can set up an orthographic view in such a way that your window pixels match 3D geometry positions, but I'm not sure how this works in a perspective view.
Here I'm assuming the viewport width and height to be mapped to nearZ, so when a ball is at Z=1f it has its original size.
When moving the ball into the screen, i.e. in the direction of farZ, the ball would be scaled down in order for the perspective to work. So a ball at Z=51f, for instance, would appear smaller on my screen and I would need more balls to fill up the entire screen.
Now, in order to do so, I'm looking for the purple boundaries.
Actually, I need these boundaries to fill the entire screen for different frustum sizes (width x height), while the frustum angle and the Z distance for the balls are always the same.
I guess I could use trig to calculate these purple boundaries (see the blue triangle note).
Am I correctly marking the frustum angle, it being the vertical angle of the frustum?
Could someone elaborate on the green 1f and -1f values, as I seem to have read something about them? Seems like some scalar that is used to resize the geometry within the frustum?
I actually want to be able to programmatically position geometry against the viewport borders in 3D space at any resolution/window size for any arbitrary Android device.
Note: I have a Flash background, which uses a stage (visual area) with a known width x height at any resolution, which makes it easy to position/scale assets using either absolute or percentage measurements. I guess I'm trying to figure out how this system applies to OpenGL's perspective view.
I guess this post using gluUnProject answers the question.
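If unprojecting feels heavyweight, the same bounds can also be computed directly with the trigonometry hinted at above; a sketch, assuming a symmetric frustum built from a vertical FOV (all names are placeholders):

    // Visible world-space extents of a symmetric perspective frustum at a
    // given distance in front of the camera.
    // fovYDegrees: the vertical frustum angle; aspect = viewport width / height.
    float[] visibleSizeAt(float fovYDegrees, float aspect, float distance) {
        float halfFovRad = (float) Math.toRadians(fovYDegrees) / 2f;
        float visibleHeight = 2f * distance * (float) Math.tan(halfFovRad);
        float visibleWidth = visibleHeight * aspect;
        return new float[] { visibleWidth, visibleHeight };
    }

    // Example: with fovY = 45, aspect = 16/9 and the balls 51 units deep,
    // visibleSizeAt(45f, 16f / 9f, 51f) gives the width/height the balls
    // must cover so they exactly fill the screen at that depth.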
How do I draw, say, a rectangle on the screen so that it is proportional to the current device's screen?
e.g. a rectangle, centered on the viewport, one pixel smaller than the screen on each border.
I can live with orthographic, but I would like perspective (basically everything at Z=something should be proportional to the screen, with the upper parts of the elements distorted by perspective).
I can calculate everything on my own if I know the relation... but I don't have a starting point.
I could experiment and get to a relation myself... I even resorted to that while coding for the Wii, but that's a really bad decision on Android with all the screen ratios/sizes out there...
It seems that at z=1 the quad from (-1,-1) to (1,1) exactly fits the screen.
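That observation matches a frustum whose near plane spans -1..1 in both axes (which stretches on non-square screens). A sketch of a version that keeps proportions and still puts known coordinates on the screen edges, assuming Matrix.frustumM with near = 1 (the aspect handling and the far value are my assumptions):

    import android.opengl.Matrix;

    // Perspective projection where, at the near plane (distance 1 in front of
    // the camera), y runs from -1 to 1 and x from -aspect to aspect, so known
    // coordinates land exactly on the screen edges without stretching.
    void setPerspective(float[] projectionMatrix, int widthPx, int heightPx) {
        float aspect = (float) widthPx / heightPx;
        Matrix.frustumM(projectionMatrix, 0, -aspect, aspect, -1f, 1f, 1f, 100f);
    }

    // At that distance one pixel is 2f / heightPx world units vertically and
    // 2f * aspect / widthPx horizontally, so a rectangle one pixel smaller than
    // the screen on each border spans
    //   x: -aspect + 2f * aspect / widthPx ... aspect - 2f * aspect / widthPx
    //   y: -1f + 2f / heightPx            ... 1f - 2f / heightPx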