I'm making a game in OpenGL ES 2.0 and I want to check whether two sprites intersect, but a rectangle-vs-rectangle test is not enough. I have two textured sprites; some parts of each texture are transparent and some are not. I need to detect intersection only on the non-transparent parts.
Example: http://i.stack.imgur.com/ywGN5.png
The easiest way to determine intersection between two sprites is the bounding-box method.
Object 1 Bounding Box:
vec3 min1 = {Xmin, Ymin, Zmin}
vec3 max1 = {Xmax, Ymax, Zmax}
Object 2 Bounding Box:
vec3 min2 = {Xmin, Ymin, Zmin}
vec3 max2 = {Xmax, Ymax, Zmax}
You must precompute the bounding box by traversing the vertex buffer array for your sprites.
http://en.wikibooks.org/wiki/OpenGL_Programming/Bounding_box
Then during each render frame check if the bounding boxes overlap (compute on CPU).
a) First convert the Mins & Maxs to world space.
min1WorldSpace = modelViewMatrix * min1
b) Then check their overlap.
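Sketched in plain C, the overlap test for step (b) might look like this (the `vec3` struct here is a stand-in for whatever vector type your engine uses):

```c
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

/* Two axis-aligned boxes overlap iff their intervals overlap on every axis. */
bool aabb_overlap(vec3 min1, vec3 max1, vec3 min2, vec3 max2) {
    return min1.x <= max2.x && max1.x >= min2.x &&
           min1.y <= max2.y && max1.y >= min2.y &&
           min1.z <= max2.z && max1.z >= min2.z;
}
```

For 2D sprites you can simply drop the Z comparison.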
I need to check intersection between sprites only on the non-transparent part.
Checking this case may be complicated depending on your scene. You may have to split the transparent part into a separate sprite and compute its bounding box.
In your example it looks like the transparent object is enclosed inside an opaque one, so it's easy: just compute two bounding boxes.
I don't think there's a very elegant way of doing this with ES 2.0. ES 2.0 is a very minimal version of OpenGL, and you're starting to push the boundaries of what it can do. For example in ES 3.0, you could use queries, which would be very helpful in solving this nicely and efficiently.
What can be done in ES 2.0 is draw the sprites in a way so that only pixels in the intersection of the two end up producing color. This can be achieved with either using a stencil buffer, or with blending (see details below). But then you need to find out if any pixels were rendered, and there's no good mechanism in ES 2.0 that I can think of to do this. I believe you're pretty much stuck with reading back the result, using glReadPixels(), and then checking for non-black pixels on the CPU.
One idea I had to avoid reading back the whole image was to repeatedly downsample it until it reaches a size of 1x1. It would originally render to a texture, and then in each step, sample the current texture with linear sampling, rendering to a texture of half the size. I believe this would work, but I'm not sure if it would be more efficient than just reading back the whole image.
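To illustrate the reduction logic only (the real thing would run on the GPU by rendering into successively smaller textures with linear filtering), here is a hypothetical CPU version in plain C, assuming a square power-of-two single-channel image:

```c
#include <stdlib.h>

/* Repeatedly average 2x2 blocks of a size*size single-channel image
 * until one value remains. A non-zero result means at least one pixel
 * in the original image was non-zero. `size` must be a power of two. */
float downsample_to_one(const float *img, int size) {
    float *cur = malloc(sizeof(float) * size * size);
    for (int i = 0; i < size * size; i++) cur[i] = img[i];
    while (size > 1) {
        int half = size / 2;
        /* In-place halving is safe: each write index is never read again. */
        for (int y = 0; y < half; y++)
            for (int x = 0; x < half; x++)
                cur[y * half + x] = 0.25f * (cur[(2*y)   * size + 2*x] +
                                             cur[(2*y)   * size + 2*x + 1] +
                                             cur[(2*y+1) * size + 2*x] +
                                             cur[(2*y+1) * size + 2*x + 1]);
        size = half;
    }
    float result = cur[0];
    free(cur);
    return result;
}
```

Whether the GPU version of this actually beats a single `glReadPixels()` of the full image would need profiling on the target device.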
I won't provide full code for the proposed solution, but the outline looks like this. This is using blending for drawing only the pixels in the intersection.
Set up an FBO with an RGBA texture attached as a color buffer. The size does not necessarily have to be the same as your screen resolution. It just needs to be big enough to give you enough precision for your intersection.
Clear FBO with black clear color.
Render first sprite with only alpha output, and no blending.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);
glDisable(GL_BLEND);
// draw sprite 1
This leaves the alpha values of sprite 1 in the alpha of the framebuffer.
Render the second sprite with destination-alpha blending. The transparent pixels will need to have black in their RGB components for this to work correctly. If that's not already the case, change the fragment shader to produce pre-multiplied colors (multiply the rgb of the output by a).
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glBlendFunc(GL_DST_ALPHA, GL_ZERO);
glEnable(GL_BLEND);
// draw sprite 2
This renders sprite 2 with color output only where the alpha of sprite 1 was non-zero.
Read back the result using glReadPixels(). The region being read needs to cover at least the bounding box of the two sprites.
Add up all the RGB values of the pixels that were read.
There was overlap between the two sprites if the resulting color is not black.
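The read-back check can be sketched in plain C; `pixels` is assumed to be the RGBA buffer returned by `glReadPixels()` with format `GL_RGBA` and type `GL_UNSIGNED_BYTE`:

```c
#include <stdbool.h>
#include <stddef.h>

/* Returns true if any RGB component in an RGBA8 buffer is non-zero,
 * i.e. the sprites overlapped somewhere in the read-back region. */
bool any_overlap(const unsigned char *pixels, size_t pixel_count) {
    for (size_t i = 0; i < pixel_count; i++) {
        const unsigned char *p = pixels + i * 4;
        if (p[0] | p[1] | p[2])   /* ignore p[3], the alpha channel */
            return true;
    }
    return false;
}
```

Early-out on the first hit avoids scanning the whole buffer in the common overlapping case.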
I have a set of small images. If I draw these images individually on the canvas, the draw quality is significantly lower than when I draw them onto a screen-sized bitmap and draw that bitmap on the canvas. In particular, the lines get distorted. See the image below (right side).
As the code below shows, the canvas also supports zooming (scaling). The issue occurs at small scale factors.
The question is how to improve the draw quality of multiple small images to match that of the single large image.
This is the code that draws multiple bitmaps on the canvas:
canvas.scale(game.mScaleFactor, game.mScaleFactor);
canvas.translate(game.mPosX, game.mPosY);
for (int i = 0; i < game.clusters.size(); i++) {
    Cluster cluster = game.clusters.get(i);
    canvas.drawBitmap(cluster.Picture, cluster.left, cluster.top, canvasPaint);
}
This is the code for the single bitmap; game.board is a screen-sized image onto which all the small bitmaps have been drawn:
canvas.scale(game.mScaleFactor, game.mScaleFactor);
canvas.translate(game.mPosX, game.mPosY);
canvas.drawBitmap(game.board, matrix, canvasPaint);
The paint brush has the following properties set. All bitmaps are Bitmap.Config.ARGB_8888.
canvasPaint.setAntiAlias(true);
canvasPaint.setFilterBitmap(true);
canvasPaint.setDither(true);
I can think of a couple, depending on how you are drawing the borders of the puzzle pieces.
The problem you are having is that when the single image is scaled, the lines are filtered together with the rest of the image and the result looks smooth (the blending is correct). When the puzzle is drawn per piece, the filtering reads adjacent pixels of the puzzle piece and blends them with the transparent area just outside the piece.
Approach 1
The first approach (and an easy one to implement) is to render to an FBO (render-to-texture) at the logical size of the game and then scale the whole texture to the canvas with a fullscreen quad. This gives you the same result as the single image, because the pixel filtering now involves neighboring pieces.
Approach 2
Use bleeding to solve the issue. When you cut out your puzzle piece, include the overlapping section of the adjacent pieces, and instead of setting the discarded pixels to zero, set only their alpha to zero. This causes your blending function to pick up the same values as if the piece were still part of a single image. Likewise, double the lines for the border, but set the outer border's alpha to zero.
Approach 3
This last one is the most complicated, but will be smooth (AF) for any scaling.
Turn the alpha channel of your puzzle piece into a signed distance field (SDF) and render it using a specialized shader that smooths the output at any distance. An SDF also lets you draw the outline with the shader at render time, and that outline will be smooth.
In fact, the SDF can be a separate texture that you load into a second texture stage. Bind the source image as texture unit 0 and the SDF puzzle-piece cutout(s) as texture unit 1, use the SDF shader to determine the alpha from the SDF and the color from tex0, then mix in the outline as calculated from the SDF.
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
https://github.com/Chlumsky/msdfgen
http://catlikecoding.com/sdf-toolkit/docs/texture-generator/
An SDF is generated from a boolean map. Your puzzle-piece cutouts will need to start as monochrome cutouts and then be turned into SDFs (offline) using one of the tools listed above. Valve and LibGDX provide example SDF shaders, as do those tools.
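The core of the SDF rendering trick is a smoothstep around the 0.5 iso-level of the distance texture. Here is a plain-C sketch of what the fragment shader computes per pixel (the 0.5 threshold is the usual convention from Valve's paper; treat the exact constants as assumptions):

```c
/* Map a sampled SDF value (0..1, with 0.5 = shape edge) to coverage alpha.
 * `smoothing` controls the width of the anti-aliased edge. */
float sdf_alpha(float distance, float smoothing) {
    float t = (distance - (0.5f - smoothing)) / (2.0f * smoothing);
    if (t < 0.0f) t = 0.0f;          /* clamp, as GLSL smoothstep does */
    if (t > 1.0f) t = 1.0f;
    return t * t * (3.0f - 2.0f * t); /* smoothstep interpolation */
}
```

In GLSL this whole function collapses to `smoothstep(0.5 - w, 0.5 + w, d)`, and the outline can be produced the same way with a second, offset threshold.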
I am working with OpenGL-ES 3.0 in Android.
To simplify my issue, I am going to describe a scenario very similar to mine. Let's say that I have a sphere centered at the origin. Let's also say that I have another sphere centered at the origin, with a larger radius. Finally, let's say that I have a cylinder, and the center of the top face of the cylinder is lying on the origin. The cylinder intersects the two spheres. A picture of the scenario is below:
This is my initial setup:
I would like to only draw the section in between the two spheres, as shown below:
However, in my application, the smaller of the two spheres isn't visible (though it exists). It is completely transparent. Thus, the final end product I would like would look something like this:
Now, one more piece of information: as I mentioned earlier, this is a simplification of my actual scenario. Instead of spheres, I have far more complex objects (not simple primitive shapes). Thus, approaching this from a mathematical perspective (such as only drawing the portion of the cylinder that lies between the smaller sphere's radius and the larger sphere's radius) is not going to work. I need to approach this from a programming perspective, but given my limited knowledge of OpenGL, I can only think of depth testing and blending as viable options.
You can probably do this using a stencil buffer.
I haven't compiled this code and it will need modifying, but this is the general idea:
glDisable( GL_STENCIL_TEST );
<Render rest of scene (everything other than the spheres and cylinder)>
// Render the larger sphere into the stencil buffer, setting stencil bits to 1
glEnable( GL_STENCIL_TEST );
glClear( GL_STENCIL_BUFFER_BIT );
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE ); // Don't render into the color buffer
glDepthMask( GL_FALSE ); // Don't render into the depth buffer
glStencilMask( 0xff ); // Enable writing to stencil buffer
glStencilFunc( GL_ALWAYS, 1, 0xff ); // Write 1s into stencil buffer
glStencilOp( GL_KEEP, GL_KEEP, GL_REPLACE ); // Replace the stencil value for every fragment that passes the depth test (the stencil test always passes because of GL_ALWAYS)
<Render big sphere>
// Render the smaller sphere into the stencil buffer, setting stencil bits to 0 (it carves out the big sphere)
glStencilFunc( GL_ALWAYS, 0, 0xff ); // Write 0s into stencil buffer
<Render small sphere>
// Render the cylinder into the color buffer, only where the stencil bits are 1
glStencilMask( 0 ); // Don't need to write to stencil buffer
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE ); // Render into the color buffer
glStencilFunc( GL_EQUAL, 1, 0xff ); // Only render where there are 1s in the stencil buffer
<Render cylinder>
glDisable( GL_STENCIL_TEST );
// Now render the translucent big sphere using alpha blending
<Render big sphere>
What you are describing is Constructive Solid Geometry, but with the added complexity of using meshes as one of the primitive types.
Even with only mathematically simple primitives, it is very hard to implement CSG purely in the OpenGL pipeline, because you would need a way to represent the scene graph that the shaders can understand and parse efficiently. Once you add in meshes, it's basically impossible, because the vertex and fragment shaders won't have easy access to the mesh geometry.
You might be able to approximate it by executing a draw call for every item in the CSG graph with clever manipulation of the stencil and depth buffers, but you would probably still end up with lots of edge cases that don't render properly.
I'm trying to display a geometrically complex, semi-transparent (e.g. alpha = 0.5) object (terrain). When I render this object, the hidden front faces are also drawn (like a hill that actually lies behind another one).
I would like to see other objects behind my "terrain" object, but I don't want to see the hidden faces of the terrain itself (the second hill). In other words, I want to set the transparency for the object as a whole, not for individual faces.
Q: How could I achieve to hide the "hidden" front-faces of a semi-transparent object?
I'm setting the transparency in the vertex shader by multiplying the color vector with the desired transparency:
fColor = vec4(vColor, 1.0);
fColor *= 0.5;
// fColor goes to fragment shader
GL_DEPTH_TEST is activated with GL_LEQUAL as depth function.
GL_BLEND is activated with GL_ONE, GL_ONE_MINUS_SRC_ALPHA as blending functions.
I tried to deactivate the depth buffer by GLES20.glDepthMask(false); before drawing, but this doesn't make any difference.
Probably I don't get the idea for the right depth buffer settings or the blending functions.
Well, I think I got it now:
Actually, I can do without blending entirely. With the depth test switched on, only the foreground fragments are visible (the front hill of my terrain). With the multiplication in the vertex shader, the fragment shader draws these visible fragments with the desired transparency (the terrain as a whole becomes semi-transparent).
So, depth test on, blending off, color multiplication in vertex shader.
I am working on a game for Android and I was wondering why whenever I draw images with transparency there seems to always be some black added to the transparent parts. This happens all over and makes some of my effects look strange.
Here is an example. The two circles are just white images with a blur, but you can see that where one overlaps the other there is a shadow. If I overlap two of the circles in, say, Inkscape, I get pure white where they overlap.
I am using
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
for my blending.
Any idea why this happens and how I can avoid it?
Edit: the only thing I can think of is that the two images have the same z, so maybe they are blending only with the background instead of with each other?
Edit:
I changed
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
to
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_DST_ALPHA);
Here is the result I was looking for.
The only thing now is that the transparent images I had that contain a transparent black are ignored, which makes sense, because I think the destination alpha is 1. Why would GL_ONE_MINUS_SRC_ALPHA add that gray?
I figured out how to fix it.
I changed
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
to
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
It turns out that Android pre-multiplies the alpha of PNG textures when you load them (making white turn gray).
I added
vColor.r = vColor.r * vColor.a;
vColor.g = vColor.g * vColor.a;
vColor.b = vColor.b * vColor.a;
to my vertex shader to do the multiplying for my other colors.
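The reason the fix works can be seen by writing out the blend arithmetic for one color channel. With `(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)` applied to content the loader has already premultiplied, alpha gets applied twice; `(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)` uses the premultiplied color as-is. A plain-C sketch:

```c
/* One color channel. `src_premul` already contains src_rgb * src_a,
 * because the loader premultiplied the texture. */
float blend_premultiplied(float src_premul, float src_a, float dst) {
    /* glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) */
    return src_premul * 1.0f + dst * (1.0f - src_a);
}

float blend_straight(float src_premul, float src_a, float dst) {
    /* glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) on
     * already-premultiplied content: alpha is applied a second
     * time, darkening the result. */
    return src_premul * src_a + dst * (1.0f - src_a);
}
```

For a white pixel with alpha 0.5 the loader stores 0.5. Over a white background, the premultiplied blend gives 0.5 + 0.5 = 1.0 (white), while the straight blend gives 0.25 + 0.5 = 0.75, the gray halo seen above.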
Are you sure your content is correct? If you don't want the circles to produce any black, the color values in the image should be completely white, and the alpha channel alone should define the shape of the circle. Right now it looks like the image has a white circle with both the alpha channel and the color value fading to zero, which leads to a halo.
Are you using linear sampling? My thought is that if you have a very small image (less than, say, 30-40 pixels in dimension), you could be getting interpolated alpha values between the inside of the circle (alpha = 1) and the outside of the circle (alpha = 0). These intermediate alpha values would produce that kind of blur effect.
Try the following code when you have the circle texture bound:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Maybe you can host the actual texture you're using so that we can inspect it? Also, please post your drawing code if this doesn't help.
You can also divide by alpha. The problem is that when you export your textures, image-processing software may pre-multiply your alpha channel. I ended up doing an alpha division in my fragment shader. The following code is HLSL, but you can easily convert it to GLSL:
float4 main(float4 tc: TEXCOORD0): COLOR0
{
float4 ts = tex2D(Tex0,tc);
//Divide out this pre-multiplied alpha
float3 outColor = ts.rgb / max(ts.a, 1e-5); // guard against division by zero where alpha is 0
return float4(outColor, ts.a);
}
Note that this operation is lossy, very lossy, and even though it will most likely suffice in cases like this, it's not a general solution. In your case you can ignore the color entirely and output the original alpha together with white in your fragment shader, e.g. return float4(1, 1, 1, ts.a); (convert to GLSL).
I am trying to do some animation. I have initial vertex and texture buffer values, and also final vertex and texture values. Now I want to apply some kind of transformation and animate between those two states. How can I achieve this?
What kind of interpolation? There are many types of interpolation.
One option is a cross-fade effect. You can use the accumulation buffer to fade between two images: on each frame, use alpha blending to draw a semi-transparent copy of your final image into the accumulation buffer, and then blit it to the screen buffer.
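Another option is to linearly interpolate the vertex (and texture-coordinate) buffers on the CPU each frame and upload the result. A minimal sketch in plain C (the buffer layout is an assumption; any flat float array works):

```c
/* Linearly interpolate between two equally-sized float buffers.
 * t = 0 gives the initial shape, t = 1 the final shape. */
void lerp_buffers(const float *start, const float *end,
                  float *out, int count, float t) {
    for (int i = 0; i < count; i++)
        out[i] = start[i] + (end[i] - start[i]) * t;
}
```

Advancing `t` from 0 to 1 over successive frames morphs the geometry smoothly from the initial values to the final ones; an easing curve applied to `t` changes the feel of the animation.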