I am trying to do some animation. I have initial vertex and texture buffer values, and also final vertex and texture values. Now I want to apply some kind of transformation/animation between those two sets of values. How can I achieve this?
What kind of interpolation? There are many types of interpolation.
One kind of interpolation is a cross-fade effect. You can use the accumulation buffer to fade between two images: on each frame, use alpha blending to draw a semi-transparent copy of your final image into the accumulation buffer, and then blit the result to the screen buffer.
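If instead you want to morph the geometry itself, and you are on a programmable pipeline (e.g. OpenGL ES 2.0), one common approach is to upload both vertex sets as attributes and blend them in the vertex shader. Below is a minimal, untested sketch; the names aStartPosition, aEndPosition, uMix and uMVPMatrix are placeholders, and it assumes both meshes have the same vertex count and ordering:

// Vertex shader: blends between the start and end positions.
// uMix runs from 0.0 to 1.0 over the course of the animation.
private static final String VERTEX_SHADER =
        "uniform mat4 uMVPMatrix;" +
        "uniform float uMix;" +
        "attribute vec4 aStartPosition;" +
        "attribute vec4 aEndPosition;" +
        "void main() {" +
        "  gl_Position = uMVPMatrix * mix(aStartPosition, aEndPosition, uMix);" +
        "}";

// Each frame, advance the blend factor before drawing:
int mixHandle = GLES20.glGetUniformLocation(program, "uMix");
GLES20.glUniform1f(mixHandle, t); // t in [0, 1], driven by elapsed time

Texture coordinates can be interpolated the same way with a second pair of attributes.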
I am working with OpenGL ES 3.0 on Android.
To simplify my issue, I am going to describe a scenario very similar to mine. Let's say that I have a sphere centered at the origin. Let's also say that I have another sphere centered at the origin, with a larger radius. Finally, let's say that I have a cylinder, and the center of the top face of the cylinder is lying on the origin. The cylinder intersects the two spheres. A picture of the scenario is below:
This is my initial setup:
I would like to only draw the section in between the two spheres, as shown below:
However, in my application, the smaller of the two spheres isn't visible (though it exists). It is completely transparent. Thus, the final end product I would like would look something like this:
Now, one more piece of information: as I mentioned earlier, this is a simplification of my actual scenario. Instead of spheres, I have far more complex objects (not simple primitive shapes). Thus, approaching this from a mathematical perspective (such as only drawing the portion of the cylinder that is farther from the origin than the smaller sphere's radius and closer than the larger sphere's radius) is not going to work. I need to approach this from a rendering perspective (and given my limited knowledge of OpenGL, I can only think of depth testing and blending as viable options).
You can probably do this using a stencil buffer.
I haven't compiled this code and it will need modifying, but this is the general idea:
glDisable( GL_STENCIL_TEST );
<Render rest of scene (everything other than the spheres and cylinder)>
// Render the larger sphere into the stencil buffer, setting stencil bits to 1
glEnable( GL_STENCIL_TEST );
glClear( GL_STENCIL_BUFFER_BIT );
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE ); // Don't render into the color buffer
glDepthMask( GL_FALSE ); // Don't render into the depth buffer
glStencilMask( 0xff ); // Enable writing to stencil buffer
glStencilFunc( GL_ALWAYS, 1, 0xff ); // Write 1s into stencil buffer
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE ); // Replace the stencil value for every fragment that passes the depth test (the stencil test always passes because of GL_ALWAYS)
<Render big sphere>
// Render the smaller sphere into the stencil buffer, setting stencil bits to 0 (it carves out the big sphere)
glStencilFunc( GL_ALWAYS, 0, 0xff ); // Write 0s into stencil buffer
<Render small sphere>
// Render the cylinder into the color buffer, only where the stencil bits are 1
glStencilMask( 0 ); // Don't need to write to the stencil buffer any more
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE ); // Render into the color buffer again
glDepthMask( GL_TRUE ); // Re-enable depth writes, which were disabled above
glStencilFunc( GL_EQUAL, 1, 0xff ); // Only render where there are 1s in the stencil buffer
<Render cylinder>
glDisable( GL_STENCIL_TEST );
// Now render the translucent big sphere using alpha blending
glEnable( GL_BLEND ); // The translucent pass needs blending enabled
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
<Render big sphere>
glDisable( GL_BLEND );
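One Android-specific caveat: the default GLSurfaceView EGL configuration has no stencil bits, so the stencil test will silently do nothing unless you request them before setting the renderer (the variable name below is a placeholder):

// Request RGBA8888 with a 16-bit depth buffer and 8 stencil bits
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 8);

The GL calls in the outline map directly onto the GLES30 class, e.g. GLES30.glStencilFunc(GLES30.GL_ALWAYS, 1, 0xff).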
What you are describing is Constructive Solid Geometry (CSG), but with the added complexity of using meshes as one of the primitive types.
Even with only mathematically simple primitives, it is very hard to implement CSG purely in the OpenGL pipeline, because you would need to represent the scene graph in a way that the shaders can understand and efficiently parse. Once you add in meshes, it's basically impossible, because the vertex and fragment shaders won't have easy access to the mesh geometry.
You might be able to approximate it by executing a draw call for every item in the CSG graph and cleverly manipulating the stencil and depth buffers, but you would probably still end up with lots of edge cases that don't render properly.
I'm trying to learn some OpenGL ES for Android, entirely for fun. Graphics programming is very new to me, so sorry if this is a stupid question.
I've read a few examples and tutorials, and can create shapes, move the camera with touch events etc. Following Google's tutorial, I modified it to make some 3d shapes. I'm happy with all of this so far.
Currently my shapes have the same colour on all sides, and it isn't clear to me how to colour/texture them separately with the code I've got at the minute (basically I just built on the Google tutorials by experimenting).
All the other examples I look at use glDrawArrays rather than glDrawElements. Defining a shape this way seems a lot clumsier, but then you have a unique set of vertices which makes colouring (and normals) seem more obvious.
The draw method for my cube is below so you can see where I'm at. Do I have to do this differently to progress further? If so, doesn't that make glDrawElements a lot less useful?
public void draw(float[] mvpMatrix) {
    // Add program to OpenGL environment
    GLES20.glUseProgram(mProgram);

    // get handle to vertex shader's vPosition member
    mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");

    // Enable a handle to the vertices
    GLES20.glEnableVertexAttribArray(mPositionHandle);

    // Prepare the cube coordinate data
    GLES20.glVertexAttribPointer(
            mPositionHandle, COORDS_PER_VERTEX,
            GLES20.GL_FLOAT, false,
            vertexStride, vertexBuffer);

    // get handle to fragment shader's vColor member
    mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");

    // Set color for drawing the Cube
    GLES20.glUniform4fv(mColorHandle, 1, color, 0);

    // get handle to shape's transformation matrix
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");

    // Apply the projection and view transformation
    GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);

    // Draw the cube; the indices describe triangles
    GLES20.glDrawElements(
            GLES20.GL_TRIANGLES, drawOrder.length,
            GLES20.GL_UNSIGNED_SHORT, drawListBuffer);

    // Disable the vertex attribute array when done
    GLES20.glDisableVertexAttribArray(mPositionHandle);
}
To color a certain part of a shape you need to define the color per vertex. In your case that means you need another attribute next to vPosition which represents the color. You will need to add this new attribute in the vertex shader, plus a varying vector to pass the color through to the fragment shader (no transformation is needed in the vertex shader). Also, do not forget to enable the color attribute array and set the pointer...
Try doing so to begin with and specify a different color for each vertex to see what happens.
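A minimal sketch of what that looks like. The attribute name aColor and the buffer colorBuffer are placeholders, and the vColor uniform from the original code becomes a varying fed by the new attribute (4 floats per vertex assumed):

private static final String VERTEX_SHADER =
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 vPosition;" +
        "attribute vec4 aColor;" +   // new per-vertex color attribute
        "varying vec4 vColor;" +     // passed through to the fragment shader
        "void main() {" +
        "  vColor = aColor;" +       // no transformation, just hand it on
        "  gl_Position = uMVPMatrix * vPosition;" +
        "}";

private static final String FRAGMENT_SHADER =
        "precision mediump float;" +
        "varying vec4 vColor;" +     // interpolated across each triangle
        "void main() {" +
        "  gl_FragColor = vColor;" +
        "}";

// In draw(), mirror the vPosition setup for the new attribute:
mColorHandle = GLES20.glGetAttribLocation(mProgram, "aColor");
GLES20.glEnableVertexAttribArray(mColorHandle);
GLES20.glVertexAttribPointer(mColorHandle, 4, GLES20.GL_FLOAT, false, 0, colorBuffer);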
If you do this correctly, you will see that each face of the cube now has a color gradient, and only the corners of the cube have exactly the colors you specified. This is due to interpolation, and there is not much you can do about it except build the vertex buffer differently:
So this is where the difference between drawing arrays and drawing elements comes in. By using elements you have an index buffer that shrinks the actual vertex buffer by reusing the same vertices. But in the case of a cube with colored faces, those vertices cannot be shared any more, because they are no longer the same: if you want a cube with 6 different colors, one per face, then each corner actually carries 3 different colors, one for each face that meets there. A vertex consisting of a position and a color is only "the same" vertex if both the position and the color match...
So what must be done here is to create not 8 but 3*8 = 24 vertices, plus the matching indices, to draw such a cube. Once you do, the index buffer buys you very little (24 unique vertices versus the 36 vertex entries plain glDrawArrays would need), so you gain almost nothing by using elements (you still can if you wish) and it is easier to simply draw arrays.
The same situation arises on a cube for normals or texture coordinates: again you need to make 8*3 vertices.
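For illustration, here is one hedged way to lay out such data in Java. Corners are duplicated per face, and every vertex of a face carries that face's color; the array name is a placeholder:

// Front face: 4 unique vertices, each tagged with the front color (red).
// The same corner positions appear again in the top and right faces,
// but paired with different colors, so they count as distinct vertices.
float[] frontFace = {
    // x,    y,    z,    r,  g,  b,  a
    -1f, -1f,  1f,   1f, 0f, 0f, 1f,
     1f, -1f,  1f,   1f, 0f, 0f, 1f,
     1f,  1f,  1f,   1f, 0f, 0f, 1f,
    -1f,  1f,  1f,   1f, 0f, 0f, 1f,
};
// ... build the other five faces the same way (different color each),
// concatenate into one buffer, and use a stride of 7 floats per vertex.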
Note though that a cube is just one of those unfriendly shapes for which you need to do this. If you drew a sphere with a large number of triangles instead, you would not need to inflate the vertex count to get nice texturing or lighting (using normals). In fact, tripling the sphere's vertex count and applying the same procedure as on the cube would make the shape look worse.
I suggest you play around a bit with these kinds of shapes, normals, colors, and textures; you will learn the most from your own experience.
I'm making a game in OpenGL ES 2.0 and I want to check whether two sprites intersect, but I don't need to check the intersection between two rectangles. I have two sprites with textures; some parts of the texture are transparent, some are not. I need to check the intersection between sprites only on the non-transparent parts.
Example: http://i.stack.imgur.com/ywGN5.png
The easiest way to determine intersection between two sprites is the bounding-box method.
Object 1 Bounding Box:
vec3 min1 = {Xmin, Ymin, Zmin}
vec3 max1 = {Xmax, Ymax, Zmax}
Object 2 Bounding Box:
vec3 min2 = {Xmin, Ymin, Zmin}
vec3 max2 = {Xmax, Ymax, Zmax}
You must precompute the bounding box by traversing through the vertex buffer array for your sprites.
http://en.wikibooks.org/wiki/OpenGL_Programming/Bounding_box
Then during each render frame check if the bounding boxes overlap (compute on CPU).
a) First convert the Mins & Maxs to world space.
min1WorldSpace = modelViewMatrix * min1
b) Then check their overlap.
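A hedged sketch of the overlap test itself (plain Java, helper name hypothetical); for 2D sprites you can skip the Z axis:

// Returns true if two world-space axis-aligned boxes overlap on every axis.
static boolean boxesOverlap(float[] min1, float[] max1,
                            float[] min2, float[] max2) {
    for (int axis = 0; axis < 3; axis++) {
        if (max1[axis] < min2[axis] || max2[axis] < min1[axis]) {
            return false; // separated on this axis, so no intersection
        }
    }
    return true;
}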
I need to check the intersection between sprites only on the non-transparent parts.
Checking this test case may be complicated, depending on your scene. You may have to segment your transparent sprites into separate sprites and compute their bounding boxes individually.
In your example it looks like the transparent object is encapsulated inside an opaque object, so it's easy: just compute two bounding boxes.
I don't think there's a very elegant way of doing this with ES 2.0. ES 2.0 is a very minimal version of OpenGL, and you're starting to push the boundaries of what it can do. In ES 3.0, for example, you could use occlusion queries, which would be very helpful in solving this nicely and efficiently.
What can be done in ES 2.0 is to draw the sprites in such a way that only pixels in the intersection of the two end up producing color. This can be achieved either with a stencil buffer or with blending (see details below). But then you need to find out whether any pixels were rendered at all, and there's no good mechanism in ES 2.0 that I can think of for this. I believe you're pretty much stuck with reading back the result using glReadPixels() and checking for non-black pixels on the CPU.
One idea I had to avoid reading back the whole image was to repeatedly downsample it until it reaches a size of 1x1. It would originally render to a texture, and then in each step, sample the current texture with linear sampling, rendering to a texture of half the size. I believe this would work, but I'm not sure if it would be more efficient than just reading back the whole image.
I won't provide full code for the proposed solution, but the outline looks like this. This is using blending for drawing only the pixels in the intersection.
Set up an FBO with an RGBA texture attached as a color buffer. The size does not necessarily have to be the same as your screen resolution. It just needs to be big enough to give you enough precision for your intersection.
Clear FBO with black clear color.
Render first sprite with only alpha output, and no blending.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glDisable(GL_BLEND);
// draw sprite 1
This leaves the alpha values of sprite 1 in the alpha of the framebuffer.
Render the second sprite with destination alpha blending. The transparent pixels will need to have black in their RGB components for this to work correctly. If that's not already the case, change the fragment shader to create pre-multiplied colors (multiply rgb of the output by a).
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
glEnable(GL_BLEND);
// draw sprite 2
This renders sprite 2 with color output only where the alpha of sprite 1 was non-zero.
Read back the result using glReadPixels(). The region being read needs to cover at least the bounding box of the two sprites.
Add up all the RGB values of the pixels that were read.
There was overlap between the two sprites if the resulting color is not black.
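A rough sketch of the readback step in Java; width/height describe the region being read and are placeholders (in practice you would read just the overlap of the two bounding boxes):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Read back the FBO contents and look for any non-black pixel.
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);

boolean overlap = false;
for (int i = 0; i < width * height * 4; i += 4) {
    // Any non-zero RGB component means both sprites covered this pixel.
    if (pixels.get(i) != 0 || pixels.get(i + 1) != 0 || pixels.get(i + 2) != 0) {
        overlap = true;
        break;
    }
}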
I'm trying to display a geometrically complex, semi-transparent (e.g. alpha = 0.5) object (terrain). When I render this object, its hidden front faces are also drawn (like a hill that actually lies behind another one).
I would like to see other objects behind my "terrain" object, but I don't want to see the hidden faces of the terrain itself (the second hill). So effectively I want to set the transparency for the object as a whole, not for individual faces.
Q: How could I achieve to hide the "hidden" front-faces of a semi-transparent object?
I'm setting the transparency in the vertex shader by multiplying the color vector with the desired transparency:
fColor = vec4(vColor, 1.0);
fColor *= 0.5;
// fColor goes to fragment shader
GL_DEPTH_TEST is enabled with GL_LEQUAL as the depth function.
GL_BLEND is enabled with GL_ONE, GL_ONE_MINUS_SRC_ALPHA as the blending functions.
I tried disabling depth writes with GLES20.glDepthMask(false); before drawing, but this doesn't make any difference.
Probably I haven't got the right idea about the depth buffer settings or the blending functions.
Well, I think I got it now:
Actually I can do without blending altogether. With the depth test switched on, only the foreground fragments are visible (the front hill of my terrain). With the multiplication in the vertex shader, the fragment shader will draw these visible fragments with the desired transparency (the terrain as a whole becomes semi-transparent).
So, depth test on, blending off, color multiplication in vertex shader.
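For reference, a sketch of the corresponding state setup in Android Java, mirroring the settings described above (the 0.5 factor lives in the vertex shader):

// Depth test on, so hidden terrain faces lose against the front ones
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LEQUAL);
// Blending off; the color multiplication happens in the vertex shader
GLES20.glDisable(GLES20.GL_BLEND);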
I read the dimensions of a cube from a file (x, y, z) and create an OpenGL vertex array of equally spaced points. I am able to display the points as a 3D point cube of dimensions (x, y, z). However, I want to display small cubes instead of points, so that the output looks like a grid of cubes of dimension X*Y*Z instead of 3D points. How can I achieve this in Android OpenGL ES 1.0 in Java?
Thanks.
You should create an index buffer which lists the order of vertices (from your vertex buffer) to visit, and then draw them as triangles (or ideally a triangle strip).
For instance, if you have four vertices, top left, top right, bottom left, bottom right, in that order, your index buffer will be something like [0, 1, 3, 2] to wind the vertices clockwise.
Since the vertices are equally spaced and axis-aligned, it should not be too hard to write a loop which will generate the appropriate index buffer for you.
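A hedged sketch of such a loop (plain Java; points, the half-size h, and the winding are assumptions, and plain triangles are used instead of strips for simplicity). Note that 16-bit indices limit you to 8192 cubes (65536 / 8 vertices) per batch:

// Corner offsets of a small cube around a grid point (h = half edge length).
float h = 0.05f;
float[] corners = {
    -h, -h,  h,   h, -h,  h,   h,  h,  h,  -h,  h,  h,   // front corners 0-3
    -h, -h, -h,   h, -h, -h,   h,  h, -h,  -h,  h, -h    // back corners 4-7
};
// 12 triangles (36 indices) covering all 6 faces of one cube.
short[] cubeIndices = {
    0, 1, 2,  2, 3, 0,   5, 4, 7,  7, 6, 5,   // front, back
    4, 0, 3,  3, 7, 4,   1, 5, 6,  6, 2, 1,   // left, right
    3, 2, 6,  6, 7, 3,   4, 5, 1,  1, 0, 4    // top, bottom
};

int numPoints = points.length / 3;            // points holds x,y,z per grid point
float[] verts = new float[numPoints * 8 * 3];
short[] inds = new short[numPoints * 36];
for (int p = 0; p < numPoints; p++) {
    // Emit the 8 corners of this point's cube...
    for (int c = 0; c < 8; c++) {
        verts[p * 24 + c * 3]     = points[p * 3]     + corners[c * 3];
        verts[p * 24 + c * 3 + 1] = points[p * 3 + 1] + corners[c * 3 + 1];
        verts[p * 24 + c * 3 + 2] = points[p * 3 + 2] + corners[c * 3 + 2];
    }
    // ...and offset the unit-cube indices by this cube's base vertex.
    for (int k = 0; k < 36; k++) {
        inds[p * 36 + k] = (short) (p * 8 + cubeIndices[k]);
    }
}
// Wrap verts/inds in a FloatBuffer/ShortBuffer and draw with glDrawElements.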