In my previous question I had some problems with texturing in OpenGL ES 2.0. With some help that problem is solved now, but two related questions have appeared:
How do I know whether the UV coordinates and vertex coordinates fit together? I thought there was a simple correspondence between vertex and UV, for example vertex (0,0) to UV (0,0), vertex (width,0) to UV (1,0), and vertex (0,height) to UV (0,1). But since the texture appears flipped in my example, that assumption might be wrong?
How do I know what to pass as the second parameter of glVertexAttribPointer? Why do I have to set 2 in this case?
Bullet point 1 sounds like it is related to the texture origin in GL.
(0,0) maps to the lower-left corner of a texture image.
If you have assumed behaviour from another graphics API (one where (0,0) is the top-left corner), that would explain why your texture appears flipped.
Bullet point 2, assuming that attribute holds your texture coordinates, is there because they are 2D coordinates.
If you are talking about calling glVertexAttribPointer (...) with your texture coordinates, the 2 is there because they are 2-dimensional. Texture coordinates can be 1-, 2-, 3- or 4-dimensional, as unusual as that may sound at first. The most common use case is 2D, but 3D and 4D texture coordinates have specialized applications of their own.
In fact, if you call glVertexAttribPointer (..., 2, ...), the vertex attribute gets expanded automatically like this by GLSL:
vec2 (x,y) ==> vec4 (x,y,0.0,1.0)
So in effect, all texture coordinates (or any vertex attribute) are technically 4-dimensional. The function call above only supplies enough data for x and y, and GL fills-in the rest. Go ahead and think of these as 2D coordinates, because the rest of the vector is an identity.
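For example, a minimal C sketch of what such a call could look like for interleaved position + 2D texture-coordinate data (the struct layout, function name and attribute location are made up for illustration):
#include <GLES2/gl2.h>
#include <stddef.h>

typedef struct {
    GLfloat position[3];   /* x, y, z */
    GLfloat texCoord[2];   /* u, v -- two components, hence size = 2 */
} Vertex;

void setTexCoordPointer(GLuint texCoordLoc)   /* location from glGetAttribLocation */
{
    /* assumes the interleaved VBO is currently bound to GL_ARRAY_BUFFER */
    glVertexAttribPointer(texCoordLoc,
                          2,                /* 2 components: u and v            */
                          GL_FLOAT,         /* each component is a float        */
                          GL_FALSE,         /* no normalization                 */
                          sizeof(Vertex),   /* stride between consecutive verts */
                          (const void *)offsetof(Vertex, texCoord));
    glEnableVertexAttribArray(texCoordLoc);
    /* in the shader GL fills in the missing components: z = 0.0, w = 1.0 */
}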
I am currently working on a project using OpenGL-ES 3.0 for Android. In my project, I have drawn a 3d human head, whose centroid lies at the origin. I also have a cylinder, with the center of one of its faces lying on the origin. The cylinder extends out longer than the length of the head. In other words, I have a cylinder running through the head.
Right now, I am just using the default depth test (GL_LESS) to NOT draw the section of the cylinder that lies inside of the head. What would be ideal for my project is if I could somehow only draw the circle where the cylinder intersects with the head. I tried changing my depth test to (GL_EQUAL) but it did not do the trick.
How can I do this? Keep in mind that the head object is very complex, with a large amount of points and triangles.
The most practical solution that comes to mind is to discard the fragments outside the cylinder in the fragment shader.
One approach is to pass the original coordinates from the vertex shader into the fragment shader. Say you currently have a typical vertex shader that applies an MVP transformation:
uniform mat4 MvpMat;
in vec4 InPos;
...
gl_Position = MvpMat * InPos;
You can extend this to:
uniform mat4 MvpMat;
in vec4 InPos;
out vec4 OrigPos;
...
gl_Position = MvpMat * InPos;
OrigPos = InPos;
Then in the fragment shader (for a cylinder with the given radius along the z-axis):
uniform float CylRad;
in vec4 OrigPos;
...
// compare squared distance from the z-axis against the squared radius
if (dot(OrigPos.xy, OrigPos.xy) > CylRad * CylRad) {
    discard;   // fragment lies outside the cylinder, so don't draw it
}
There are countless variations of how exactly you can handle this. For example, instead of passing the original coordinates into the fragment shader, you could transform the cylinder geometry and then perform the test using the transformed vertex/fragment coordinates. You'll have to figure out what works the cleanest for your exact use case, but this should illustrate the basic idea.
Discarding fragments can have a negative performance impact. But unless you're already pushing the performance envelope, this approach might work just fine. And I doubt that there's a solution that will not have a performance cost.
It would be nice to operate on the vertex level. Full OpenGL has clip planes, which could potentially be used creatively for this case, but the clip plane feature is not in any version of OpenGL ES. Without this, there is really no way of discarding vertices, at least that I can think of. So discarding fragments might be the most reasonable option.
So I'm trying to figure out how to draw a single textured quad many times. My issue is that these quads are created and deleted constantly, and every one of them has a unique position and rotation. I'm not sure a VBO is the best solution, as I've heard modifying buffers is extremely slow on Android, and it seems I would need to create a new one each frame since different quads might disappear at random (collide with an enemy). If I simply issue a draw call for each quad, I get 20 fps at around 100 quads, which is unusable. Any advice?
Edit: I'm trying to create a bullet hell game, but figuring out how to draw 500+ things is hurting my head.
I think you're after a particle system. A similar question is here: Drawing many textured particles quickly in OpenGL ES 1.1.
Using point sprites is quite cheap, but you have to do extra work in the fragment shader, and I'm not sure whether GLES2 supports gl_PointSize if you need differently sized particles (see: gl_PointSize Corresponding to World Space Size).
My go-to particle system stores positions in a double-buffered texture, then draws everything with a single draw call and a static array of quads. This is related, but I'll describe it a bit more here...
Create a texture (floating point if you can, but this may limit the supported devices). Each pixel holds the particle position and maybe rotation information.
If you need to animate the particles, you want to change the values in the texture each frame. To make it fast, get the GPU to do it in a shader: using an FBO, draw a fullscreen polygon and update the values in the fragment shader. The problem is that you can't (or shouldn't) read and write the same texture. The common approach is to double buffer the texture: create a second one to render to while you read from the first, then ping-pong between them.
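Very roughly, the ping-pong setup could look like this in C (all of the names here -- posTex, posFbo, TEX_W/TEX_H, updateProgram, drawFullscreenQuad -- are placeholders, and float textures / float render targets need extensions such as OES_texture_float on ES 2.0):
#include <GLES2/gl2.h>

#define TEX_W 128
#define TEX_H 128

extern GLuint updateProgram;          /* shader that writes the new positions */
extern void drawFullscreenQuad(void); /* draws a -1..1 quad                   */

static GLuint posTex[2], posFbo[2];
static int readIdx = 0, writeIdx = 1;

void createPositionBuffers(void)
{
    glGenTextures(2, posTex);
    glGenFramebuffers(2, posFbo);
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, posTex[i]);
        /* GL_FLOAT here relies on OES_texture_float being available */
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TEX_W, TEX_H, 0,
                     GL_RGBA, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        glBindFramebuffer(GL_FRAMEBUFFER, posFbo[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, posTex[i], 0);
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void updateParticles(void)
{
    /* render into one texture while sampling the other */
    glBindFramebuffer(GL_FRAMEBUFFER, posFbo[writeIdx]);
    glViewport(0, 0, TEX_W, TEX_H);
    glUseProgram(updateProgram);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, posTex[readIdx]);
    drawFullscreenQuad();
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    /* swap: next frame reads what was just written */
    readIdx ^= 1;
    writeIdx ^= 1;
}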
Create a VBO for drawing the triangles. The positions are all the same, filling a -1 to 1 quad, but make the texture coordinates of each quad address the correct pixel in the position texture above.
Draw the VBO with your positions texture bound. In the vertex shader, read the particle position using the vertex's texture coordinate, scale the -1 to 1 vertex positions to the right size, then apply the position and any rotation. Use the original -1 to 1 position as the texture coordinate to pass to the fragment shader for any regular colour textures (see the vertex shader sketch below).
If you ever have a GLSL version with gl_VertexID, I quite like generating these coordinates in the vertex shader, which saves storing unnecessarily trivial data just to draw simple objects. This for example.
To spawn particles, use glTexSubImage2D and write a block of particles into the position texture. You may need a few textures if you start storing more particle attributes.
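To tie the steps above together, the vertex shader could look roughly like this (ES 2.0 style GLSL; every name here is a placeholder, and texture lookups in the vertex shader require the device to report at least one vertex texture unit):
uniform sampler2D uPositionTex;  // the double-buffered position texture
uniform mat4 uViewProj;
uniform float uParticleSize;

attribute vec2 aCorner;          // -1 to 1 quad corner (same for every particle)
attribute vec2 aParticleUV;      // which texel of uPositionTex holds this particle

varying vec2 vTexCoord;

void main()
{
    // fetch this particle's position (and whatever else you packed in)
    vec4 pdata = texture2D(uPositionTex, aParticleUV);

    // scale the unit quad and move it to the particle's position
    vec3 worldPos = pdata.xyz + vec3(aCorner * uParticleSize, 0.0);
    gl_Position = uViewProj * vec4(worldPos, 1.0);

    // reuse the -1 to 1 corner as a 0 to 1 coordinate for the sprite texture
    vTexCoord = aCorner * 0.5 + 0.5;
}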
In OpenGL or OpenGL ES you can use indices to share vertices. This works fine if you are only using vertex coords and texture coords that don't change, but when using normals, the normal at a vertex may change depending on the face. Does this mean that you are essentially forced to scrap vertex sharing in OpenGL? This article http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/
seems to imply that this is the case, but I wanted a second opinion. I'm using .obj models, so should I just forget about trying to share verts? That seems like it would increase the size of my models, though, since as I iterate and recreate the array I am repeating tons of verts and their tex/normal attributes.
The link you posted explains the situation well. I had the same question in mind a couple of months ago; I remember reading that tutorial.
If you need two different normals at the same position, you have to duplicate that vertex, once per normal. For example, if your mesh is a cube with flat faces, each corner vertex has to be duplicated for every face it belongs to.
Otherwise, indexing a single vertex and calculating an averaged normal smooths the normal transitions across your mesh. For example, if your mesh is a terrain or a detailed player model, you can use this technique to save space and get a better-looking result.
If you're wondering how to calculate the average normal, I used the averaging algorithm from this question and the result is fast and looks good.
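For reference, a rough C sketch of that kind of averaging (struct and function names are just for illustration): accumulate each triangle's un-normalized face normal into its three vertices, then normalize at the end.
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

void computeAverageNormals(const Vec3 *positions, int vertexCount,
                           const unsigned int *indices, int indexCount,
                           Vec3 *normals /* out, one per vertex */)
{
    for (int i = 0; i < vertexCount; ++i) {
        normals[i].x = normals[i].y = normals[i].z = 0.0f;
    }
    for (int i = 0; i < indexCount; i += 3) {
        unsigned int i0 = indices[i], i1 = indices[i + 1], i2 = indices[i + 2];
        /* un-normalized face normal; its length weights the average by triangle area */
        Vec3 n = cross(sub(positions[i1], positions[i0]),
                       sub(positions[i2], positions[i0]));
        normals[i0].x += n.x; normals[i0].y += n.y; normals[i0].z += n.z;
        normals[i1].x += n.x; normals[i1].y += n.y; normals[i1].z += n.z;
        normals[i2].x += n.x; normals[i2].y += n.y; normals[i2].z += n.z;
    }
    for (int i = 0; i < vertexCount; ++i) {
        float len = sqrtf(normals[i].x * normals[i].x +
                          normals[i].y * normals[i].y +
                          normals[i].z * normals[i].z);
        if (len > 0.0f) {
            normals[i].x /= len; normals[i].y /= len; normals[i].z /= len;
        }
    }
}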
If the normals represent flat faces, then you can annotate the varying with the "flat" qualifier. This means only the value from the provoking vertex is used. With a good model exporter you can get relatively good vertex sharing with this.
It is not available in GLES2, but it is part of GLES3.
Example: imagine two triangles, expressed as a tri-strip:
V0 - Norm0
V1 - Norm1
V2 - Norm2
V3 - Norm3
Your two triangles will be V0/1/2 and V1/2/3. If you mark the varying variable for the normal as "flat" then, with the default last-vertex convention, the first triangle will use Norm2 and the second triangle will use Norm3 (i.e. only the last vertex in the triangle - known as the provoking vertex - needs to have the correct normal). This means that you can safely reuse vertices in other triangles, even if their normal is "wrong", provided you make sure that they aren't the provoking vertex for that triangle.
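A minimal GLSL ES 3.00 sketch of the qualifier (variable names are mine, not from any particular engine); note it has to be declared flat in both stages:
// vertex shader
#version 300 es
uniform mat4 uMvp;
in vec3 aPosition;
in vec3 aNormal;
flat out vec3 vNormal;   // "flat": no interpolation, the provoking vertex's value is used
void main() {
    vNormal = aNormal;
    gl_Position = uMvp * vec4(aPosition, 1.0);
}

// fragment shader
#version 300 es
precision mediump float;
flat in vec3 vNormal;    // must be declared flat here as well
out vec4 fragColor;
void main() {
    // visualize the per-face normal; any flat-shaded lighting works the same way
    fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}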
Until now, I worked with gluProject, a perspective projection, and a zoomable square centered on the screen with its lower-left vertex at (-1,-1,0). I zoom the square by adjusting its Z coordinate.
For example, I zoomed the square to Z=-5, and I call gluProject with the OpenGL object coordinates (-1,-1,0) to get the window X,Y pixel position of that vertex of the square. It works fine.
But now I have changed my architecture: I'm no longer using Z to zoom, I'm scaling to zoom. I have the square at Z=-1.0f, and initially it is scaled to (0.01f,0.01f,0.0f), so it is a small square.
Which X,Y,Z values do I have to pass to gluProject? I'm passing (-1,-1,0), and gluProject is giving me erroneous x,y outPutCoords values: (-101.774124, -226.27419).
Again and again and again: gluProject does exactly the same thing as the OpenGL transformation pipeline (if called with OpenGL's matrices and viewport, of course). So whatever vertices you send to OpenGL, those are the vertices you have to put into gluProject.
If you render the polygon using the vertex (-1,-1,0), then you have to call gluProject with this vertex. Every other transformation (be it translation, scaling, rotation, or whatever) comes from the transformation matrices. But if you indeed render the polygon using the vertex (0.01, 0.01, 0), then you have to put this into gluProject.
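For illustration only, a sketch with the classic C GLU API and the fixed-function matrix stack (the Android GLU wrapper takes float arrays that you fill in yourself, but the principle is identical): query the matrices and viewport you actually render with, then feed in the same object-space vertex.
#include <GL/glu.h>

void projectSquareCorner(void)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    GLdouble winX, winY, winZ;

    /* the matrices and viewport that are in effect when the square is drawn */
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    /* pass the vertex you actually render, e.g. (-1,-1,0);
       the (0.01, 0.01) scale is already inside the modelview matrix */
    gluProject(-1.0, -1.0, 0.0, model, proj, viewport, &winX, &winY, &winZ);
    /* winX, winY now hold the window-space pixel position of that corner */
}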
Make sure you completely understand the OpenGL transformation pipeline (the answers to this question may help) and the workings of gluProject before continuing to use it and posting questions for every little input that you think gives wrong results.
I'm trying to learn OpenGL, and it's a bit daunting. To get started I am trying to use it to create some effects on a 2D image. Basically, I want to take an image (say 1000px by 1000px) and divide it into a grid of equally sized squares (say a 10 by 10 grid) and then manipulate the squares individually (like turn one square black, flip another over, make another "fall" off the screen, etc). I've followed some basic online instructions (http://blog.jayway.com/2010/12/30/opengl-es-tutorial-for-android-%E2%80%93-part-vi-textures/) on how to map a texture to a simple square, but I'm having problems with mapping the texture to a more complex arrangement of multiple squares.
1) Given a 2x2 (and larger size) grid of squares, how can I map a single texture/image over the entire grid? That is, what are the texture coordinates that OpenGL expects (and in what order) to make this work? I can't seem to wrap my head around how to figure out the order of the "UV" coordinates on a larger polygon structure.
2) Since I will ultimately be transforming, rotating, etc. each individual square of the grid, would it be better to create each square of the grid and individually divide the texture/bitmap and apply each piece of the image to each square separately? If so, do you have any recommendations on how to efficiently divide the bitmap into pieces?
Any and all help, links, suggestions, etc. will be greatly appreciated. I'm doing this in an Android app with assumed support for OpenGL ES 2, but I assume most of the OpenGL discussion/concepts are platform agnostic. I don't want to include some large framework or toolkit to do this if possible since I want a lot of speed and minimum size.
Starting with the more important reply, #2, you really don't want to do more texture switches than you absolutely need. They're an insane performance loss.
Back to #1, the way you do this is actually quite straightforward. Your texture resides in a unit square of coordinates (0,0) to (1,1). These are called texture-space coordinates, and the axes are called U and V respectively. Each goes from 0 to 1, and together they cover your whole image.
Now when you create your objects, vertex by vertex (through a vertex buffer or in immediate mode), you can send a second set of coordinates, the UV texture coordinates, for each vertex. You use these to "slice up" your image into parts.
The easiest way for your specific application is to take the vertex coordinate, divide it by the side length of the image, and multiply it by the side length of the small square you're building. This will obviously not work once you start rotating your squares, but maybe it will help you visualize the process better.
Note how this is completely platform independent; you can use the same reasoning for DirectX applications or whatever!
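As a sketch of that idea (the grid size and corner order here are assumptions, not something from the question), the UVs of one grid cell could be computed like this:
/* compute the four UV corners of cell (col, row) in a gridCols x gridRows atlas */
void cellTexCoords(int col, int row, int gridCols, int gridRows,
                   float uv[8] /* out: 4 corners, 2 floats each */)
{
    float u0 = (float)col / gridCols;
    float v0 = (float)row / gridRows;
    float u1 = (float)(col + 1) / gridCols;
    float v1 = (float)(row + 1) / gridRows;

    /* bottom-left, bottom-right, top-right, top-left */
    uv[0] = u0; uv[1] = v0;
    uv[2] = u1; uv[3] = v0;
    uv[4] = u1; uv[5] = v1;
    uv[6] = u0; uv[7] = v1;
}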
1) I think the keyword you are searching for is "texture atlas".
Some hints, though you may find better explanations on the Internet as I'm still learning too (I use OpenGL 2.1 and GLSL 1.2, so YMMV):
The vertex shader has something like this:
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
attribute vec3 position;
attribute vec2 texcoord;
varying vec2 vertTexCoord;
void main()
{
    // pass the texture coordinate through to the fragment shader
    vertTexCoord = texcoord;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
In your fragment shader you do something like this:
uniform sampler2D texture;
varying vec2 vertTexCoord;
void main()
{
    // sample the atlas at the interpolated texture coordinate
    gl_FragColor = texture2D(texture, vertTexCoord);
}
and then, for example, if you want a 2x2 grid of squares (texture divided into 4 parts) you would have vertices like this (assuming the squares are 1 unit wide and tall; uv = texcoord, vertex = position):
[image: square1 — vertex and uv coordinates of the first square]
and the next square to the right would look like this:
[image: square2 — vertex and uv coordinates of the second square]
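(In case the images don't come through: here is roughly what that data could look like, assuming square1 covers the bottom-left quarter of the texture; the exact values depend on how you orient your grid. Only x,y positions are shown, z would just be 0.)
/* square1: 1x1 quad at the origin, bottom-left quarter of the texture */
static const float square1Positions[] = { 0.0f,0.0f,  1.0f,0.0f,  1.0f,1.0f,  0.0f,1.0f };
static const float square1TexCoords[] = { 0.0f,0.0f,  0.5f,0.0f,  0.5f,0.5f,  0.0f,0.5f };

/* square2: the quad one unit to the right, bottom-right quarter of the texture */
static const float square2Positions[] = { 1.0f,0.0f,  2.0f,0.0f,  2.0f,1.0f,  1.0f,1.0f };
static const float square2TexCoords[] = { 0.5f,0.0f,  1.0f,0.0f,  1.0f,0.5f,  0.5f,0.5f };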
It's kinda important to remember that while vertex coordinates can go above 1 and below 0, texture coordinates for an atlas should stay between 0 and 1 (values outside that range get wrapped or clamped depending on the texture's wrap mode).
2) I wouldn't divide the bitmap, since using a texture atlas is already quite fast, and I doubt you would see a significant speed gain (if any).
I hope I could help! (Sorry, I couldn't embed the images directly.)