OpenGL ES 3.0, Android: How to draw the intersection of two objects

I am currently working on a project using OpenGL ES 3.0 for Android. In my project, I have drawn a 3D human head whose centroid lies at the origin. I also have a cylinder, with the center of one of its end faces at the origin. The cylinder extends out longer than the length of the head. In other words, I have a cylinder running through the head.
Right now, I am just using the default depth test (GL_LESS) to NOT draw the section of the cylinder that lies inside of the head. What would be ideal for my project is if I could somehow only draw the circle where the cylinder intersects with the head. I tried changing my depth test to (GL_EQUAL) but it did not do the trick.
How can I do this? Keep in mind that the head object is very complex, with a large number of points and triangles.

The most practical solution that comes to mind is to discard the fragments outside the cylinder in the fragment shader.
One approach is to pass the original (object-space) coordinates from the vertex shader into the fragment shader. Say you currently have a typical vertex shader that applies an MVP transformation:
uniform mat4 MvpMat;
in vec4 InPos;
...
gl_Position = MvpMat * InPos;
You can extend this to:
uniform mat4 MvpMat;
in vec4 InPos;
out vec4 OrigPos;
...
gl_Position = MvpMat * InPos;
OrigPos = InPos;
Then in the fragment shader (for a cylinder with the given radius along the z-axis):
uniform float CylRad;
in vec4 OrigPos;
...
if (dot(OrigPos.xy, OrigPos.xy) > CylRad * CylRad) {
    discard;
}
There are countless variations of how exactly you can handle this. For example, instead of passing the original coordinates into the fragment shader, you could transform the cylinder geometry, and then perform the test using the transformed vertex/fragment coordinates. You'll have to figure out what looks the cleanest based on your exact use case, but this should illustrate the basic idea.
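As a hedged sketch of that kind of variation (not the answer's exact code; the uniform and varying names here are my own), you could pass a world-space position from the vertex shader and test the distance to a cylinder axis that has been transformed into world space:
uniform vec3 CylAxisPoint;   // a point on the cylinder axis, in world space (assumed uniform)
uniform vec3 CylAxisDir;     // normalized axis direction, in world space (assumed uniform)
uniform float CylRad;
in vec3 WorldPos;            // world-space position interpolated from the vertex shader
...
// Distance of the fragment from the cylinder axis.
vec3 toFrag = WorldPos - CylAxisPoint;
vec3 perp = toFrag - dot(toFrag, CylAxisDir) * CylAxisDir;
if (dot(perp, perp) > CylRad * CylRad) {
    discard;
}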
Discarding fragments can have a negative performance impact. But unless you're already pushing the performance envelope, this approach might work just fine. And I doubt that there's a solution that will not have a performance cost.
It would be nice to operate on the vertex level. Full OpenGL has clip planes, which could potentially be used creatively for this case, but the clip plane feature is not in any version of OpenGL ES. Without this, there is really no way of discarding vertices, at least that I can think of. So discarding fragments might be the most reasonable option.

Related

How to start an OpenGL conversion of RGB to HSV on a bitmap in Android

I want to do image processing on a bitmap I have loaded. I am afraid it will take too long to do anything on the CPU in normal Android code, and I have read that letting the GPU do the work may speed things up substantially. I feel that if I could convert the image from RGB to HSV in OpenGL, I could figure out how to do everything else I need.
I want to do the work in the fragment shader (since that is about pixel color), and since I don't want to transform the image at all, I feel the vertex shader is useless for my needs. Do I still need the vertex shader? I read something about a pass-through vertex shader, and that some implementations don't need both. My main points of confusion definitely lie in how OpenGL works. I have gathered how to create a texture and a framebuffer object. I don't understand GLUtils.texSubImage2D: what is the difference between that and texImage2D? In any case, per this site (Site 1) and this code (Site 2), they loaded the bitmap using texSubImage2D.
I get how to load a vertex shader and a fragment shader, but I don't understand what happens after glUseProgram(). Does it run to completion? Do you continually supply it with information to process? How do I stop it?
I think you pass in a texture using a uniform sampler2D -- I mean, I think this is how to use the texture in the fragment shader; I guess it gets placed into the fragment shader automatically. How all this works is a mystery. I plan on using glReadPixels to get my bitmap back from the texture. When would I call this? The callback functions discussed in the Android docs only deal with a SurfaceView, as far as I know.
I plan on using the information from this site (Site 3), minus their vertex shader, to do the conversion. I just need to put it all together.
Sorry if this is not enough to go on. BTW I don't want to use any libraries.
I have tried reading about OpenGL, and specifically OpenGL ES 2.0. They all talk about either using a SurfaceView (which I don't want) or working on many triangles. I do get that I am using OpenGL for a purpose it was not exactly made for.
This is some code from Site 3.
So I think tex would be the texture passed via texSubImage2D.
hue: no clue.
texture2D() gets the pixel, I guess, but where vTextureCoord comes from is a mystery. If I want, say, the 3x3 matrix around the pixel, do I just subtract and add to the vTextureCoord values and keep calling texture2D()? And how do I set a pixel to a certain color? What is gl_FragColor?
precision mediump float;      // a default float precision is required in an OpenGL ES 2.0 fragment shader
uniform sampler2D tex;
uniform vec3 hue;
varying vec2 vTextureCoord;   // texture coordinate (see the discussion below)
// Add the two methods here (rgb2hsv and hsv2rgb are at Site 3)
void main() {
    vec4 textureColor = texture2D(tex, vTextureCoord);
    vec3 fragRGB = textureColor.rgb;
    vec3 fragHSV = rgb2hsv(fragRGB).xyz;
    fragHSV.x += hue.x;
    fragHSV.yz *= hue.yz;
    fragHSV.xyz = mod(fragHSV.xyz, 1.0);
    fragRGB = hsv2rgb(fragHSV);
    gl_FragColor = vec4(fragRGB, textureColor.w);
}
Do I still need the vertex shader? I read something about a pass-through vertex shader, and that some implementations don't need both.
You have to use both as per the specification, otherwise your program will fail to link:
The following lists some of the conditions that will cause a link error.
A vertex shader and a fragment shader are not both present in the program object.
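For a full-screen image-processing pass, a minimal pass-through vertex shader is enough. A sketch, assuming the quad is already specified in clip space and using attribute names of my own choosing, with the varying name matching the fragment shader quoted above:
attribute vec4 aPosition;    // quad corner, already in clip space (assumed attribute name)
attribute vec2 aTexCoord;    // texture coordinate for that corner (assumed attribute name)
varying vec2 vTextureCoord;  // picked up by the fragment shader

void main() {
    vTextureCoord = aTexCoord;
    gl_Position = aPosition;
}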
GLUtils.texSubImage2D: what is the difference between that and texImage2D?
glTexSubImage2D uploads a region of data into an already existing texture. If you are using it, you have to allocate the texture's storage with some other operation first (e.g. glTexImage2D or glCopyTexImage2D).
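A hedged sketch of that pattern in Android Java (assuming a current GL context; texId and bitmap are placeholders, and the format constants assume an ARGB_8888 bitmap):
// Allocate storage for the texture level first (no data yet, hence null)...
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
        bitmap.getWidth(), bitmap.getHeight(), 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
// ...then upload the bitmap into that existing storage.
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bitmap);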
I don't understand what happens after glUseProgram(). Does it run to completion? Do you continually supply it with information to process? How do I stop it?
It doesn't run anything. The vertex and fragment shaders are executed when you render something with the glDraw* commands. glUseProgram only selects the program that subsequent rendering operations will use.
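A hedged sketch of the typical flow (program, uniform location and vertex setup are placeholders):
GLES20.glUseProgram(program);        // select which shaders the following calls refer to
GLES20.glUniform1i(texLoc, 0);       // set uniforms for that program
// Nothing has executed yet; the draw call below is what actually runs
// the vertex shader per vertex and the fragment shader per covered pixel.
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);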
I think you pass in a texture using a uniform sampler2D -- I mean, I think this is how to use the texture in the fragment shader; I guess it gets placed into the fragment shader automatically. How all this works is a mystery. I plan on using glReadPixels to get my bitmap back from the texture. When would I call this? The callback functions discussed in the Android docs only deal with a SurfaceView, as far as I know.
Samplers are references to texture image units. So basically yes, you pass the corresponding texture image unit number as a uniform. glReadPixels returns the data from the current framebuffer. You would need to call it after all the needed rendering operations are done. Be aware, though, that glReadPixels stalls the pipeline, slowing your program down. You could also use it with Pixel Buffer Objects (available in OpenGL ES 3.0+) or, maybe, use EGLImageKHR.
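A hedged sketch of both steps in Android Java (texId, program, width and height are placeholders; uses android.opengl.GLES20 and java.nio buffers):
// Bind the texture to texture image unit 0 and point the sampler2D uniform at that unit.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "tex"), 0);

// ... draw ...

// Read the result back from the currently bound framebuffer.
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);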
I have tried reading about OpenGL, and specifically OpenGL ES 2.0. They all talk about either using a SurfaceView (which I don't want) or working on many triangles. I do get that I am using OpenGL for a purpose it was not exactly made for.
SurfaceView is just a system-level thing you can render into. You don't have to use it; you can instead set up an off-screen rendering surface with EGL. You'll have to work with triangles (unless you can use OpenGL ES 3.1 and compute shaders), though you would only need two of them. Image processing is a valid application for OpenGL; it's actually what many image processing libs/programs use.
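A hedged sketch of a minimal off-screen EGL setup using android.opengl.EGL14 (error checking omitted; the config attributes are assumptions and width/height are placeholders):
EGLDisplay display = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
int[] version = new int[2];
EGL14.eglInitialize(display, version, 0, version, 1);

// Ask for an ES 2.0 capable config that supports pbuffer (off-screen) surfaces.
int[] configAttribs = {
        EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT,
        EGL14.EGL_SURFACE_TYPE, EGL14.EGL_PBUFFER_BIT,
        EGL14.EGL_NONE };
EGLConfig[] configs = new EGLConfig[1];
int[] numConfigs = new int[1];
EGL14.eglChooseConfig(display, configAttribs, 0, configs, 0, 1, numConfigs, 0);

int[] contextAttribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
EGLContext context = EGL14.eglCreateContext(display, configs[0], EGL14.EGL_NO_CONTEXT, contextAttribs, 0);

int[] surfaceAttribs = { EGL14.EGL_WIDTH, width, EGL14.EGL_HEIGHT, height, EGL14.EGL_NONE };
EGLSurface surface = EGL14.eglCreatePbufferSurface(display, configs[0], surfaceAttribs, 0);
EGL14.eglMakeCurrent(display, surface, surface, context);  // GL calls now target this off-screen surface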
texture2D() gets the pixel, I guess, but where vTextureCoord comes from is a mystery.
You would need to pass it as a varying from the vertex shader.
And how do I set a pixel to a certain color? What is gl_FragColor?
gl_FragColor is the output color of the fragment this fragment shader instance is currently working on.
If I want, say, the 3x3 matrix around the pixel, do I just subtract and add to the vTextureCoord values and keep calling texture2D()?
Yes, but be aware of filtering, since nearby texels can affect the result of the texture lookup functions. Newer versions of OpenGL ES let you use the texelFetch function instead.
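A hedged GLSL ES 3.0 sketch of reading the 3x3 neighbourhood with texelFetch (the averaging at the end is just a stand-in for whatever per-pixel operation you actually want):
#version 300 es
precision mediump float;
uniform sampler2D tex;
in vec2 vTextureCoord;
out vec4 fragColor;

void main() {
    ivec2 size = textureSize(tex, 0);
    ivec2 center = ivec2(vTextureCoord * vec2(size));
    vec4 sum = vec4(0.0);
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            // Clamp so the window stays inside the texture at the borders.
            ivec2 p = clamp(center + ivec2(dx, dy), ivec2(0), size - 1);
            sum += texelFetch(tex, p, 0);   // unfiltered: exactly one texel, no neighbours mixed in
        }
    }
    fragColor = sum / 9.0;   // e.g. a simple box blur over the 3x3 window
}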

UV-Coordinates: Theoretical questions

In my previous question I had some problems regarding texturing in OpenGL ES 2.0. With some help that problem is solved now, but some related questions have appeared:
How do I know whether UV coordinates and vertex coordinates fit together? I thought there was a bijection between vertices and UVs, for example (0,0) of the vertices maps to (0,0) in UV, (width,0) maps to (1,0), and (0,height) maps to (0,1). But since the texture appears flipped in my example, that assumption might be wrong?
How do I know the second parameter of the glVertexAttribPointer method? Why do I have to set it to 2 in this case?
Bullet point 1 sounds like it is related to the texture origin in GL.
(0,0) maps to the lower-left corner of a texture image.
If you have assumed something similar to another graphics API (maybe one where (0,0) is top-left), that will not help your understanding.
Bullet point 2, assuming it is for texture coordinates, defines the 2D coords.
If you are talking about calling glVertexAttribPointer (...) with your texture coordinates, that is because they are 2-dimensional. Texture coordinates can be 1-, 2-, 3- or 4-dimensional, as unusual as that may sound at first. The most common use-case is 2D, but 3D and 4D texture coordinates have specialized applications of their own.
In fact, if you call glVertexAttribPointer (..., 2, ...), the vertex attribute gets expanded automatically like this by GLSL:
vec2 (x,y) ==> vec4 (x,y,0.0,1.0)
So in effect, all texture coordinates (or any vertex attribute) are technically 4-dimensional. The function call above only supplies enough data for x and y, and GL fills in the rest. Go ahead and think of these as 2D coordinates, because the rest of the vector is an identity.
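For illustration, a hedged Android Java sketch of supplying 2D texture coordinates this way (texCoordLoc and the coordinate values are placeholders):
// Two floats (U, V) per vertex, tightly packed in a direct buffer.
FloatBuffer texCoords = ByteBuffer.allocateDirect(8 * 4)
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()
        .put(new float[] { 0f, 0f,  1f, 0f,  0f, 1f,  1f, 1f });
texCoords.position(0);

GLES20.glEnableVertexAttribArray(texCoordLoc);
// size = 2: only U and V are supplied; GL expands each attribute to (u, v, 0, 1).
GLES20.glVertexAttribPointer(texCoordLoc, 2, GLES20.GL_FLOAT, false, 0, texCoords);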

Shared vertex indices with normals in OpenGL

In OpenGL or OpenGL ES you can use indices to share vertices. This works fine if you are only using vertex coords and texture coords that don't change, but when using normals, the normal at a vertex may change depending on the face. Does this mean that you are essentially forced to scrap vertex sharing in OpenGL? This article http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/
seems to imply that this is the case, but I wanted a second opinion. I'm using .obj models, so should I just forget about trying to share verts? This seems like it would increase the size of my model, though, as I iterate and recreate the array, since I am repeating tons of verts and their tex/normal attributes.
The link you posted explains the situation well. I had the same question in my mind a couple of months ago; I remember reading that tutorial.
If you need two different normals at the same position, you should add that vertex twice and index each copy separately. For example, if your mesh is a cube, you should add its vertices twice.
Otherwise, indexing one vertex and calculating an averaged normal smooths the normal transitions across your mesh. For example, if your mesh is a terrain or a detailed player model, you can use this technique, which saves space and gives a better-looking result.
If you are asking how to calculate an averaged normal, I used the averaging algorithm from this question, and the result is fast and good.
If the normals are for flat faces, then you can annotate the varying used in the fragment shader with the "flat" qualifier. This means only the value from the provoking vertex is used. With a good model exporter you can get relatively good vertex sharing with this.
I'm not sure about availability in GLES 2, but it is part of GLES 3.
Example: imagine two triangles, expressed as a tri-strip:
V0 - Norm0
V1 - Norm1
V2 - Norm2
V3 - Norm3
Your two triangles will be V0/V1/V2 and V1/V2/V3. If you mark the varying variable for the normal as "flat", then only the value from each triangle's provoking vertex is used; by default (and always in OpenGL ES) that is the last vertex of the triangle, so the first triangle will use Norm2 and the second triangle will use Norm3. This means that you can safely reuse a vertex in other triangles, even if its normal is "wrong" for them, provided you make sure that it isn't the provoking vertex of those triangles.
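A hedged GLSL ES 3.0 sketch of the qualifier (the variable names are my own; the matching vertex shader output must also be declared flat):
#version 300 es
precision mediump float;
flat in vec3 vNormal;   // "flat": no interpolation, every fragment gets the provoking vertex's value
out vec4 fragColor;

void main() {
    // Visualize the per-face normal; declared as "flat out vec3 vNormal;" in the vertex shader.
    fragColor = vec4(vNormal * 0.5 + 0.5, 1.0);
}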

OpenGL ES 2.0: Lighting in the vertex shader or fragment shader

I have seen many different tutorials on lighting in OpenGL ES 2.0.
Some use the vertex shader to do all the lighting and transforms and then just pass the final colour through the fragment shader.
Others pass the position and other variables from the vertex shader and then do all the lighting in the fragment shader.
From my experience, I always thought lighting should be done in the fragment shader. Can anyone tell me why you would do one over the other?
Traditional, fixed-pipeline OpenGL did lighting at the vertices and merely interpolated the result per fragment, so it tended to show visible seaming along edges.
That was considered an acceptable compromise however, because lighting was too expensive to do per-pixel. Hardware is better now but lighting is still more expensive to do per pixel. So I guess there's a potential argument there. Also I guess if you were trying to emulate the old fixed pipeline you might deliberately do lighting inaccurately.
However I'm struggling to think of any particularly sophisticated algorithm that would be amenable. Is it possible that the examples you've seen are just doing things like figuring out the tangent and cotangent vectors per vertex, or some other similar expensive step, then interpolating those per pixel and doing the absolute final calculations in there?
Lighting calculations can be fairly expensive. Since there are a lot more fragments than vertices when rendering a typical model, it's generally more efficient to do the lighting calculations in the vertex shader and interpolate the results across the fragments. Beyond the pure number of shader executions, performing typical lighting calculations in the fragment shader can also require more operations, because interpolated normals need to be re-normalized, which requires relatively expensive sqrt operations.
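For illustration, a hedged sketch of a per-fragment diffuse term, showing the re-normalization of the interpolated normal (the varying names are my own):
precision mediump float;
varying vec3 vNormal;     // interpolated across the triangle, so its length drifts below 1.0
varying vec3 vLightDir;

void main() {
    // The normalize() calls are the per-fragment cost mentioned above.
    float diffuse = max(dot(normalize(vNormal), normalize(vLightDir)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
}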
The downside of per-vertex lighting is that it works poorly if the lighting values change quickly across a surface. This makes perfect sense, because the values are interpolated linearly across triangles. If the desired value does not change approximately linearly across the triangle, this will introduce artifacts.
The prototypical example is specular highlights. If you define a shiny material with relatively sharp/small specular highlights, you can easily see the brightness of the highlight changing while the object is animated. The highlight also seems to "wander" around on the object. For example, if you rotate a sphere with a specular highlight around its center, the highlight should stay exactly the same. But with per-vertex lighting, the brightness of the highlight will increase and decrease, and it will wobble slightly.
There are two main ways to avoid these effects, or at least reduce them to a level where they don't look disturbing anymore:
Use per-fragment lighting.
Use a finer tessellation for the geometry.
Which solution is better needs to be decided case by case. Of course using a finer tessellation adds overhead on the geometry processing side, while using per-fragment lighting adds overhead in the fragment shader.
Per-vertex lighting becomes even more problematic when you want to apply effects like bump mapping, where the lighting values change very quickly across the surface. In those cases, there's almost no way around using per-fragment lighting.
I have seen advice suggesting that GPUs were so fast now that per-vertex lighting should never be used anymore. I think that's a gross simplification. Even if you can get the desired performance with per-fragment lighting, most computers/devices these days are battery powered. To be power efficient, making your rendering as efficient as possible is as important as it ever was. And I believe that there are still use cases where per-vertex lighting is the most efficient approach.

How to apply a texture to a grid of squares using OpenGL

I'm trying to learn OpenGL, and it's a bit daunting. To get started I am trying to use it to create some effects on a 2D image. Basically, I want to take an image (say 1000px by 1000px) and divide it into a grid of equally sized squares (say a 10 by 10 grid) and then manipulate the squares individually (like turn one square black, flip another over, make another "fall" off the screen, etc). I've followed some basic online instructions (http://blog.jayway.com/2010/12/30/opengl-es-tutorial-for-android-%E2%80%93-part-vi-textures/) on how to map a texture to a simple square, but I'm having problems with mapping the texture to a more complex arrangement of multiple squares.
1) Given a 2x2 (and larger size) grid of squares, how can I map a single texture/image over the entire grid? That is, what are the texture coordinates that OpenGL expects (and in what order) to make this work? I can't seem to wrap my head around how to figure out the order of the "UV" coordinates on a larger polygon structure.
2) Since I will ultimately be transforming, rotating, etc. each individual square of the grid, would it be better to create each square of the grid and individually divide the texture/bitmap and apply each piece of the image to each square separately? If so, do you have any recommendations on how to do efficiently divide the bitmap into pieces?
Any and all help, links, suggestions, etc. will be greatly appreciated. I'm doing this in an Android app with assumed support for OpenGL ES 2, but I assume most of the OpenGL discussion/concepts are platform agnostic. I don't want to include some large framework or toolkit to do this if possible since I want a lot of speed and minimum size.
Starting with the more important reply, #2, you really don't want to do more texture switches than you absolutely need. They're an insane performance loss.
Back to #1, the way you do this is actually quite straightforward. Your texture resides in a unit square of coordinates (0,0) to (1,1). These are called texture-space coordinates, and the axes are called U and V respectively. Each goes between 0 and 1, and together they cover your whole image.
Now when you create your objects, vertex by vertex (through a vertex buffer or in immediate mode), you can send a second set of coordinates, the UV texture coordinates, for each vertex. You use these to "slice up" your image into parts.
The easiest way for your specific application is to take the vertex coordinate, divide it by the length of the image and multiply it by the length of the small square you're building. This will obviously not work once you start rotating your squares, but maybe it will help you visualize the process better.
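As a hedged illustration of that mapping in Java (the grid layout, cell indices and method name are my own assumptions, not from the question):
// Texture coordinates of one grid cell's quad, for a gridCols x gridRows grid
// covering the whole image; (col, row) index the cell, starting at 0.
static float[] cellTexCoords(int col, int row, int gridCols, int gridRows) {
    float u0 = (float) col / gridCols;
    float v0 = (float) row / gridRows;
    float u1 = (float) (col + 1) / gridCols;
    float v1 = (float) (row + 1) / gridRows;
    // Order matches a triangle strip: bottom-left, bottom-right, top-left, top-right.
    return new float[] { u0, v0,  u1, v0,  u0, v1,  u1, v1 };
}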
Note how this is completely platform independent, you can use the same reasoning for DirectX applications or whatever!
1) I think the keyword you're searching for is "texture atlas".
Some hints, but you may find better explanations on the Internet, as I'm still learning too (I use OpenGL 2.1 and GLSL 1.2, so YMMV):
The vertex shader has something like this:
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
attribute vec3 position;
attribute vec2 texcoord;
varying vec2 vertTexCoord;
void main()
{
    vertTexCoord = texcoord;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(position, 1.0);
}
In your fragment shader you do something like this:
uniform sampler2D texture;
varying vec2 vertTexCoord;
void main()
{
    gl_FragColor = texture2D(texture, vertTexCoord);
}
And then, for example, if you want a 2x2 grid of squares (the texture divided into 4 parts), you would have vertices like this (assuming the squares are 1 unit wide and tall):
(uv = texcoord, vertex = position)
square1
and the next square to the right would look like this:
square2
It's kinda important to remember that while vertex coordinates can go over 1 and below 0, texture coordinates cannot (they are always between 0 and 1).
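Since the square1/square2 images could not be embedded, here is a hedged reconstruction of what they most likely showed (the exact vertex order is an assumption):
square1 (bottom-left quarter of the texture):
vertex (0,0) -> uv (0.0, 0.0)
vertex (1,0) -> uv (0.5, 0.0)
vertex (0,1) -> uv (0.0, 0.5)
vertex (1,1) -> uv (0.5, 0.5)
square2 (the next square to the right):
vertex (1,0) -> uv (0.5, 0.0)
vertex (2,0) -> uv (1.0, 0.0)
vertex (1,1) -> uv (0.5, 0.5)
vertex (2,1) -> uv (1.0, 0.5)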
2) I wouldn't divide the bitmap, since using a texture atlas is already quite fast, and I doubt you would see a significant speed gain (if at all).
I hope I could help! (Also, sorry, but I couldn't embed the images directly.)
