how to apply texture to grid of squares using OpenGL - android

I'm trying to learn OpenGL, and it's a bit daunting. To get started I am trying to use it to create some effects on a 2D image. Basically, I want to take an image (say 1000px by 1000px) and divide it into a grid of equally sized squares (say a 10 by 10 grid) and then manipulate the squares individually (like turn one square black, flip another over, make another "fall" off the screen, etc). I've followed some basic online instructions (http://blog.jayway.com/2010/12/30/opengl-es-tutorial-for-android-%E2%80%93-part-vi-textures/) on how to map a texture to a simple square, but I'm having problems with mapping the texture to a more complex arrangement of multiple squares.
1) Given a 2x2 (and larger size) grid of squares, how can I map a single texture/image over the entire grid? That is, what are the texture coordinates that OpenGL expects (and in what order) to make this work? I can't seem to wrap my head around how to figure out the order of the "UV" coordinates on a larger polygon structure.
2) Since I will ultimately be transforming, rotating, etc. each individual square of the grid, would it be better to create each square of the grid individually, divide the texture/bitmap, and apply each piece of the image to each square separately? If so, do you have any recommendations on how to efficiently divide the bitmap into pieces?
Any and all help, links, suggestions, etc. will be greatly appreciated. I'm doing this in an Android app with assumed support for OpenGL ES 2, but I assume most of the OpenGL discussion/concepts are platform agnostic. I don't want to include some large framework or toolkit to do this if possible since I want a lot of speed and minimum size.

Starting with the more important question, #2: you really don't want to do more texture switches than you absolutely need. They're a serious performance hit.
Back to #1, the way you do this is actually quite straightforward. Your texture resides in a unit square of coordinates (0,0) to (1,1). These are called texture-space coordinates, and the axes are called U and V respectively. Each goes from 0 to 1, and together they cover your whole image.
Now when you create your objects, vertex by vertex (through a vertex buffer or in immediate mode), you can send a second set of coordinates, the UV texture coordinates, for each vertex. You use these to "slice up" your image into parts.
The easiest way to do your specific application is to take each vertex coordinate and divide it by the total size of the grid, so that positions across the whole grid land in the 0 to 1 texture range. This will obviously stop working once you start rotating your squares, but maybe it will help you visualize the process better.
Note how this is completely platform independent; you can use the same reasoning for DirectX applications or anything else!
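To make this concrete, here is a minimal Java sketch (the array layout and names are my own, not from any particular framework) that generates positions and matching UVs for an n x n grid of 1-unit squares, mapping one texture across the whole grid:

// Sketch: 4 corners per square, 2 floats each for position and UV.
int n = 10; // a 10x10 grid
float[] positions = new float[n * n * 4 * 2];
float[] uvs = new float[n * n * 4 * 2];
int i = 0;
for (int row = 0; row < n; row++) {
    for (int col = 0; col < n; col++) {
        int[][] corners = {{0, 0}, {1, 0}, {1, 1}, {0, 1}}; // counter-clockwise
        for (int[] c : corners) {
            float x = col + c[0];
            float y = row + c[1];
            positions[i] = x;
            positions[i + 1] = y;
            uvs[i] = x / n;     // divide by the grid size so the whole
            uvs[i + 1] = y / n; // grid spans the 0..1 UV range
            i += 2;
        }
    }
}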

1) I think the keyword you're searching for is "texture atlas".
Some hints follow, but you may find better explanations on the Internet, as I'm still learning too. (I use OpenGL 2.1 and GLSL 1.2, so YMMV.)
The vertex shader has something like this:
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

attribute vec3 position;
attribute vec2 texcoord;

varying vec2 vertTexCoord;

void main()
{
    // pass the per-vertex texture coordinate on to the fragment shader
    vertTexCoord = texcoord;
    // note: plain 1.0, not 1.0f -- the "f" suffix is invalid in GLSL ES 1.00
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
In your fragment shader you do something like this:

uniform sampler2D texture;
varying vec2 vertTexCoord;

void main()
{
    // sample the atlas at the interpolated UV
    gl_FragColor = texture2D(texture, vertTexCoord);
}
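On the host side (Android GLES20, as a sketch; textureId and program are whatever you created during setup), you would bind the atlas and point the sampler at texture unit 0:

// Inside your renderer, after glUseProgram(program):
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
int samplerLoc = GLES20.glGetUniformLocation(program, "texture");
GLES20.glUniform1i(samplerLoc, 0); // the sampler reads from texture unit 0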
And then, for example, if you want a 2x2 grid of squares (the texture divided into 4 parts), you would have vertices like this (assuming the squares are 1 unit wide and tall):
(uv = texcoord, vertex = position)
square 1 (bottom left):
vertex (0, 0)   uv (0.0, 0.0)
vertex (1, 0)   uv (0.5, 0.0)
vertex (1, 1)   uv (0.5, 0.5)
vertex (0, 1)   uv (0.0, 0.5)
and the next square to the right would look like this:
square 2 (bottom right, one unit to the right):
vertex (1, 0)   uv (0.5, 0.0)
vertex (2, 0)   uv (1.0, 0.0)
vertex (2, 1)   uv (1.0, 0.5)
vertex (1, 1)   uv (0.5, 0.5)
It's important to remember that while vertex coordinates can go above 1 and below 0, texture coordinates for an atlas should stay between 0 and 1; values outside that range don't read outside the image but are remapped by the texture's wrap mode (repeat, clamp, etc.).
2) I wouldn't divide the bitmap, since using a texture atlas is already quite fast, and I doubt you would see a significant speed gain (if any).
I hope this helps! :)

Related

Libgdx - setting Sprite color

Is it possible to use .setColor(x,x,x,1) on just the border of a circle? Otherwise I have to use 2 sprites, and I already have 500 sprites referenced. I don't want to use 1000.
To do it in one pass with a custom shader, encode your source sprite so that the red channel acts as a mask: R = 1 where the border color should apply and R = 0 where the sprite should stay white, with the alpha channel defining the overall shape.
Now you can blend between white and your border color using the R channel as the interpolation factor. You can copy the vertex shader from the SpriteBatch source code and modify the fragment shader main function to look like this:
vec4 texColor = texture2D(u_texture, v_texCoords);
gl_FragColor = vec4(mix(vec3(1.0), v_color.rgb, texColor.rrr), v_color.a * texColor.a);
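Wiring that up in libGDX might look roughly like the sketch below. The vertex shader only mirrors the shape of SpriteBatch's default shader; check it against the SpriteBatch source of your libGDX version before relying on it.

// imports assumed: com.badlogic.gdx.graphics.glutils.ShaderProgram
String vert =
      "attribute vec4 a_position;\n"
    + "attribute vec4 a_color;\n"
    + "attribute vec2 a_texCoord0;\n"
    + "uniform mat4 u_projTrans;\n"
    + "varying vec4 v_color;\n"
    + "varying vec2 v_texCoords;\n"
    + "void main() {\n"
    + "    v_color = a_color;\n"
    + "    v_texCoords = a_texCoord0;\n"
    + "    gl_Position = u_projTrans * a_position;\n"
    + "}\n";
String frag =
      "#ifdef GL_ES\n"
    + "precision mediump float;\n"
    + "#endif\n"
    + "varying vec4 v_color;\n"
    + "varying vec2 v_texCoords;\n"
    + "uniform sampler2D u_texture;\n"
    + "void main() {\n"
    + "    vec4 texColor = texture2D(u_texture, v_texCoords);\n"
    + "    gl_FragColor = vec4(mix(vec3(1.0), v_color.rgb, texColor.rrr),\n"
    + "                        v_color.a * texColor.a);\n"
    + "}\n";
ShaderProgram shader = new ShaderProgram(vert, frag);
if (!shader.isCompiled()) throw new IllegalStateException(shader.getLog());
batch.setShader(shader); // draw the bordered circles with sprite.setColor(borderColor)
// ... batch.begin(); sprite.draw(batch); batch.end(); ...
batch.setShader(null);   // restore the default shader afterwards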
Simply use two different textures, one with a filled-in circle and one with only a stroke. Set the texture of whichever sprite needs a stroked or a filled circle using the setTexture method of the libGDX Sprite class.
This is still efficient, since each texture only needs to be loaded once; setting the texture of a sprite just stores a reference, not a copy of the whole texture.
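A minimal sketch of that approach in libGDX (the file names are placeholders):

// Both textures are loaded once and shared by all sprites.
Texture filled = new Texture(Gdx.files.internal("circle_filled.png"));
Texture stroked = new Texture(Gdx.files.internal("circle_stroke.png"));
// Each sprite just references whichever texture it needs.
sprite.setTexture(needsStrokeOnly ? stroked : filled);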
I would go with the TextureRegion idea above, and when you've finished your game, IF you're having performance issues, THEN and only then would I really worry about this. I'm somewhat scared of shaders, though.
Most people never finish their games because they get caught up in the details way before they really need to.

OpenGLES 3.0, Android: How to draw intersection of two objects

I am currently working on a project using OpenGL-ES 3.0 for Android. In my project, I have drawn a 3d human head, whose centroid lies at the origin. I also have a cylinder, with the center of one of its faces lying on the origin. The cylinder extends out longer than the length of the head. In other words, I have a cylinder running through the head.
Right now, I am just using the default depth test (GL_LESS) to NOT draw the section of the cylinder that lies inside of the head. What would be ideal for my project is if I could somehow only draw the circle where the cylinder intersects with the head. I tried changing my depth test to (GL_EQUAL) but it did not do the trick.
How can I do this? Keep in mind that the head object is very complex, with a large amount of points and triangles.
The most practical solution that comes to mind is to discard the fragments outside the cylinder in the fragment shader.
One approach is to pass the original coordinates from the vertex shader into the fragment shader. Say you currently have a typical vertex shader that applies an MVP transformation:
uniform mat4 MvpMat;
in vec4 InPos;
...
gl_Position = MvpMat * InPos;
You can extend this to:
uniform mat4 MvpMat;
in vec4 InPos;
out vec4 OrigPos;
...
gl_Position = MvpMat * InPos;
OrigPos = InPos;
Then in the fragment shader (for a cylinder with the given radius along the z-axis):
uniform float CylRad;
in vec4 OrigPos;
...
if (dot(OrigPos.xy, OrigPos.xy) > CylRad * CylRad) {
    discard;
}
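On the application side you would supply that uniform before drawing. A minimal sketch for Android GLES30 (program and radius are your own values):

GLES30.glUseProgram(program);
int cylRadLoc = GLES30.glGetUniformLocation(program, "CylRad");
GLES30.glUniform1f(cylRadLoc, radius);
// draw the head as usual; fragments outside the cylinder get discarded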
There are countless variations of how exactly you can handle this. For example, instead of passing the original coordinates into the fragment shader, you could transform the cylinder geometry and then perform the test using the transformed vertex/fragment coordinates. You'll have to figure out what is cleanest for your exact use, but this should illustrate the basic idea.
Discarding fragments can have a negative performance impact. But unless you're already pushing the performance envelope, this approach might work just fine. And I doubt that there's a solution that will not have a performance cost.
It would be nice to operate on the vertex level. Full OpenGL has clip planes, which could potentially be used creatively for this case, but the clip plane feature is not in any version of OpenGL ES. Without this, there is really no way of discarding vertices, at least that I can think of. So discarding fragments might be the most reasonable option.

In opengl es fragment shader, how to shift texture by some pixel values?

I am currently using multiple textures for a rendering job. All textures are passed to the same fragment shader for processing. One of the textures needs to be shifted by some pixel values. How can I achieve this in the fragment shader code?
I have tried texture2D(tex, texCoord.xy + vec2(shiftx, 0.0)), where shiftx is a float value less than 1. This does not work properly: ideally, the area the texture shifts away from should be blank, but the shader repeats the last pixel's color to fill the area. If this area could be cleared, that would be one solution, but are there any others?
Thanks!
No. The primitive defines the drawing area, so if you are using multiple textures in a single shader that is not possible (if only one texture were used, you could simply offset the primitive instead). What you need to do is check the coordinate, and where the values are out of bounds (less than zero or greater than one) apply separate logic, i.e. don't sample the texture there.
Posting some source code would make it easier to suggest a proper solution.
Just a note: it usually makes sense to do the coordinate translation in the vertex shader. You then need another varying for the shifted coordinates, e.g. varying lowp vec2 offsetedTexCoord.
There's more than one way to go about this. The easiest is to check whether one of the wrap modes will do what you need. This one fills with a solid border color instead of repeating the last pixel, which you said was causing trouble (note that GL_CLAMP_TO_BORDER is not part of core OpenGL ES 2.0/3.0; it requires ES 3.2 or an extension, so on ES you may need the discard approach below):
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
If a wrap doesn't give the desired effect, you already know where the cutoff for the shift is. So discard the fragment or set it to any color you want.
if (texCoord.x < shiftx) discard;
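Putting both ideas together, a complete ES 2.0 fragment shader for the shift might look like the sketch below (kept as a Java string constant; u_shift, u_texture and v_texCoord are assumed names, and u_shift would be pixelShift / textureWidth):

static final String SHIFT_FRAGMENT_SHADER =
        "precision mediump float;\n"
      + "uniform sampler2D u_texture;\n"
      + "uniform float u_shift;    // shift in UV units: pixels / textureWidth\n"
      + "varying vec2 v_texCoord;\n"
      + "void main() {\n"
      + "    vec2 uv = v_texCoord + vec2(u_shift, 0.0);\n"
      + "    if (uv.x < 0.0 || uv.x > 1.0) {\n"
      + "        gl_FragColor = vec4(0.0); // blank where the texture shifted away\n"
      + "    } else {\n"
      + "        gl_FragColor = texture2D(u_texture, uv);\n"
      + "    }\n"
      + "}\n";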

Uv-Coordinates: Theoretical questions

In my previous question I had some problems regarding texturing in OpenGL ES 2.0. With some help that problem is now solved, but some related questions have appeared:
How do I know whether the UV coordinates and vertex coordinates fit together? I thought there was a bijection between vertex and UV coordinates, for example vertex (0,0) to UV (0,0), vertex (width,0) to UV (1,0), and vertex (0,height) to UV (0,1). But since the texture appears flipped in my example, that assumption might be wrong?
How do I determine the second parameter of the glVertexAttribPointer method? Why do I have to pass 2 in this case?
Bullet point 1 sounds like it is related to the texture origin in GL.
(0,0) maps to the lower-left corner of a texture image.
If you have assumed something similar to another graphics API (maybe one where (0,0) is top-left), that will not help your understanding.
Bullet point 2, assuming it refers to your texture coordinates, is about their dimensionality: they are 2D coords.
If you are talking about calling glVertexAttribPointer (...) with your texture coordinates, that is because they are 2-dimensional. Texture coordinates can be 1-, 2-, 3-, or 4-dimensional, as unusual as that may sound at first. The most common use case is 2D, but 3D and 4D texture coordinates have specialized applications of their own.
In fact, if you call glVertexAttribPointer (..., 2, ...), the vertex attribute gets expanded automatically like this by GLSL:
vec2 (x,y) ==> vec4 (x,y,0.0,1.0)
So in effect, all texture coordinates (or any vertex attribute) are technically 4-dimensional. The function call above only supplies enough data for x and y, and GL fills in the rest (0.0 for z, 1.0 for w). Go ahead and think of these as 2D coordinates, because the rest of the vector is just the identity.
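For completeness, a sketch of the call for 2D texture coordinates on Android (GLES20; "texcoord" and program are assumed names from your own code):

// imports assumed: java.nio.ByteBuffer, java.nio.ByteOrder,
// java.nio.FloatBuffer, android.opengl.GLES20
float[] texCoords = {0f, 0f, 1f, 0f, 1f, 1f, 0f, 1f}; // one vec2 per vertex
FloatBuffer buf = ByteBuffer.allocateDirect(texCoords.length * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer();
buf.put(texCoords).position(0);
int loc = GLES20.glGetAttribLocation(program, "texcoord");
GLES20.glEnableVertexAttribArray(loc);
// size = 2: two components per vertex; GL expands each to (x, y, 0.0, 1.0)
GLES20.glVertexAttribPointer(loc, 2, GLES20.GL_FLOAT, false, 0, buf);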

Shared vertex indices with normals in opengl

In OpenGL or OpenGL ES you can use indices to share vertices. This works fine if you are only using vertex coords and texture coords that don't change, but when using normals, the normal at a vertex may change depending on the face. Does this mean that you are essentially forced to scrap vertex sharing in OpenGL? This article http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/
seems to imply that this is the case, but I wanted a second opinion. I'm using .obj models, so should I just forget about trying to share verts? It seems like this would increase the size of my model, though, since as I iterate and recreate the array I end up repeating tons of verts and their tex/normal attributes.
The link you posted explains the situation well. I had the same question in mind a couple of months ago; I remember reading that tutorial.
If you need two different normals at a vertex, you have to duplicate that vertex and index each copy separately. For example, on a cube every corner is shared by three faces with three different normals, so those corner vertices must be duplicated.
Otherwise, indexing one vertex and calculating an average normal effectively smooths the normal transitions across your mesh. For example, if your mesh is a terrain or a detailed player model, you can use this technique, which saves space and gives a better-looking result.
If you're asking how to calculate the average normal, I used the average-normal algorithm from this question, and the result is fast and good.
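For the .obj case, the usual compromise is to share a vertex only when both its position index and its normal index match a pair you've already emitted. A rough Java sketch, assuming you've already parsed positions, normals, and per-corner faceRefs arrays (all names are mine):

// Builds GL-ready interleaved vertices (pos x,y,z + normal x,y,z) and an
// index list from .obj-style (positionIndex, normalIndex) references.
// imports assumed: java.util.*
Map<Long, Integer> seen = new HashMap<>();
List<float[]> outVerts = new ArrayList<>();
List<Integer> outIndices = new ArrayList<>();
for (int[] ref : faceRefs) { // each ref = {posIdx, normIdx}, one per corner
    long key = ((long) ref[0] << 32) | (ref[1] & 0xffffffffL);
    Integer idx = seen.get(key);
    if (idx == null) { // new (position, normal) pair: emit a duplicated vertex
        float[] p = positions[ref[0]];
        float[] nrm = normals[ref[1]];
        outVerts.add(new float[] {p[0], p[1], p[2], nrm[0], nrm[1], nrm[2]});
        idx = outVerts.size() - 1;
        seen.put(key, idx);
    }
    outIndices.add(idx); // otherwise the existing vertex is shared
}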
If the normals represent flat faces, then you can annotate the normal varying with the "flat" interpolation qualifier. This means only the value from the provoking vertex is used. With a good model exporter you can get relatively good vertex sharing with this.
It is not available in GLES2, but it is part of GLES3.
Example: imagine two triangles, expressed as a tri-strip:
V0 - Norm0
V1 - Norm1
V2 - Norm2
V3 - Norm3
Your two triangles will be V0/1/2 and V1/2/3. If you mark the normal varying as "flat", each triangle takes its normal from a single vertex, known as the provoking vertex. By default this is the last vertex of the triangle (in OpenGL ES it always is; desktop OpenGL can switch to the first vertex with glProvokingVertex), so the first triangle will use Norm2 and the second will use Norm3. This means that you can safely reuse vertices in other triangles, even if their normal is "wrong", provided you make sure the reused vertex isn't the provoking vertex for that triangle.
