LibGDX DecalBatch particles alpha - Android

I have a DecalBatch with some decals in it, let's say 50. I have a GroupStrategy and a custom shader.
My problem is that whatever I change in that shader affects all of the particles. So when I change the alpha in the shader, it changes on every particle decal.
How can I change it per decal with that shader?
Thanks

Shader uniforms and constants affect everything in the batch.
If you want to keep doing this with shader uniforms, you could flush the batch and then submit more decals each time you change the value of the parameter, but you would need to keep them sorted for transparent decals to look correct. You could do this by creating a GroupStrategy that sorts all the decals and then assigns them groups in ascending order from far to near, starting a new group each time the affected parameter is different.
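As a rough illustration of the flush-per-value route, here is a minimal sketch, assuming your custom shader exposes a u_alpha uniform (that name, and the split into "faded" and "solid" decal sets, are made up for this example). Uniform values persist on the ShaderProgram between flushes, so you can set the uniform, add the decals that should use it, flush, and repeat; you still have to make sure the split respects back-to-front order for blending to look right.

    import com.badlogic.gdx.graphics.g3d.decals.Decal;
    import com.badlogic.gdx.graphics.g3d.decals.DecalBatch;
    import com.badlogic.gdx.graphics.glutils.ShaderProgram;
    import com.badlogic.gdx.utils.Array;

    public class FlushPerValueExample {
        /** Draws two sets of decals with different values of a hypothetical u_alpha uniform. */
        public static void render(DecalBatch batch, ShaderProgram shader,
                                   Array<Decal> fadedDecals, Array<Decal> solidDecals) {
            // Set the uniform, submit the decals that should use it, then flush.
            // (shader.bind() is shader.begin() in older libGDX versions.)
            shader.bind();
            shader.setUniformf("u_alpha", 0.3f);
            for (Decal d : fadedDecals) batch.add(d);
            batch.flush();

            // Change the uniform and repeat for the next set of decals.
            shader.bind();
            shader.setUniformf("u_alpha", 1.0f);
            for (Decal d : solidDecals) batch.add(d);
            batch.flush();
        }
    }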
The above has the potential to cause a lot of batch flushing, which could hurt performance. An alternative is to use the existing vertex attributes to encode your data per decal. However, the only one that's really available is the vertex color, since you need the texture coordinate and position attributes as they are. So you can only put data into the color of the decal, and only if you aren't already using the color for tinting.
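If you go the vertex-color route, a sketch of what that can look like: pack the per-decal value into the decal's color alpha and read it back from the color varying in your fragment shader. The shader below is only an approximation of the usual libGDX decal shader layout (v_color, v_texCoords, u_texture), and it assumes your vertex shader forwards a_color into v_color the way the default one does.

    import com.badlogic.gdx.graphics.g3d.decals.Decal;

    public class PerDecalAlpha {
        // Fragment shader sketch: the per-decal value arrives in v_color.a instead of a uniform.
        // Double-check the varying/uniform names against your own vertex shader.
        public static final String FRAGMENT = ""
            + "#ifdef GL_ES\n"
            + "precision mediump float;\n"
            + "#endif\n"
            + "varying vec4 v_color;\n"
            + "varying vec2 v_texCoords;\n"
            + "uniform sampler2D u_texture;\n"
            + "void main() {\n"
            + "    vec4 tex = texture2D(u_texture, v_texCoords);\n"
            + "    gl_FragColor = vec4(tex.rgb, tex.a * v_color.a);\n" // v_color.a = per-decal alpha
            + "}\n";

        /** Stores the per-decal alpha in the decal's vertex color. */
        public static void setParticleAlpha(Decal decal, float alpha) {
            decal.setColor(1f, 1f, 1f, alpha);
        }
    }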
A third possibility is to use a library that allows more customization than DecalBatch, such as FlexBatch. FlexBatch can be used sort of like a DecalBatch, but you can define whatever vertex attributes you need.

Related

Shared vertex indices with normals in OpenGL

In OpenGL or OpenGL ES you can use indices to share vertices. This works fine if you are only using vertex coords and texture coords that don't change, but when using normals, the normal at a vertex may change depending on the face. Does this mean that you are essentially forced to scrap vertex sharing in OpenGL? This article http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-9-vbo-indexing/
seems to imply that this is the case, but I wanted a second opinion. I'm using .obj models, so should I just forget about trying to share verts? This seems like it would increase the size of my model, though, as I iterate and recreate the array, since I am repeating tons of verts and their tex/normal attributes.
The link you posted explains the situation well. I had the same question a couple of months ago, and I remember reading that tutorial.
If you need two genuinely different normals at a vertex, you should add that vertex twice to your vertex data and index each copy separately. For example, if your mesh is a cube, you have to duplicate its vertices.
Otherwise, indexing a single vertex and calculating an averaged normal effectively smooths the normal transitions across your mesh. For example, if your mesh is a terrain or a detailed player model, you can use this technique, which saves space and gives a better-looking result.
If you are wondering how to calculate the averaged normal, I used the averaging algorithm from this question and the result is fast and looks good.
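For reference, a generic sketch of that averaging idea (not the exact code from the question linked above): accumulate each triangle's face normal into its three vertices, then normalize the sums. Positions are packed xyz per vertex and indices holds three vertex indices per triangle.

    public final class NormalUtils {
        /** Computes smoothed per-vertex normals by averaging the face normals of all triangles sharing each vertex. */
        public static float[] averageNormals(float[] positions, int[] indices) {
            float[] normals = new float[positions.length];
            for (int i = 0; i < indices.length; i += 3) {
                int a = indices[i] * 3, b = indices[i + 1] * 3, c = indices[i + 2] * 3;
                // Two edge vectors of the triangle.
                float e1x = positions[b] - positions[a], e1y = positions[b + 1] - positions[a + 1], e1z = positions[b + 2] - positions[a + 2];
                float e2x = positions[c] - positions[a], e2y = positions[c + 1] - positions[a + 1], e2z = positions[c + 2] - positions[a + 2];
                // Face normal = cross(e1, e2); its length is proportional to the triangle's area,
                // which conveniently weights larger faces more heavily in the average.
                float nx = e1y * e2z - e1z * e2y;
                float ny = e1z * e2x - e1x * e2z;
                float nz = e1x * e2y - e1y * e2x;
                for (int v : new int[] {a, b, c}) {
                    normals[v] += nx;
                    normals[v + 1] += ny;
                    normals[v + 2] += nz;
                }
            }
            // Normalize the accumulated sums.
            for (int v = 0; v < normals.length; v += 3) {
                float len = (float) Math.sqrt(normals[v] * normals[v] + normals[v + 1] * normals[v + 1] + normals[v + 2] * normals[v + 2]);
                if (len > 1e-6f) {
                    normals[v] /= len;
                    normals[v + 1] /= len;
                    normals[v + 2] /= len;
                }
            }
            return normals;
        }
    }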
If the normals are flat per face, then you can annotate the varying in the fragment shader with the "flat" qualifier. This means only the value from the provoking vertex is used. With a good model exporter you can get relatively good vertex sharing this way.
I'm not sure about availability in GLES 2.0, but it is part of GLES 3.0.
Example: imagine two triangles, expressed as a tri-strip:
V0 - Norm0
V1 - Norm1
V2 - Norm2
V3 - Norm3
Your two triangles will be V0/1/2 and V1/2/3. If you mark the varying variable for the normal as "flat", then each triangle only uses the normal from its provoking vertex; with OpenGL ES's last-vertex convention that is Norm2 for the first triangle and Norm3 for the second (i.e. only the provoking vertex of each triangle needs to have the correct normal). This means that you can safely reuse vertices in other triangles, even if their normal is "wrong", provided that you make sure they aren't the provoking vertex for that triangle.
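To make that concrete, here is a sketch of a GLES 3.0 shader pair using the "flat" qualifier, written as Java string constants the way you would feed them to glShaderSource on Android; the attribute and uniform names are just placeholders.

    public final class FlatNormalShaders {
        // GLSL ES 3.00 vertex shader: forwards the per-vertex normal as a "flat" output.
        public static final String VERTEX = ""
            + "#version 300 es\n"
            + "uniform mat4 u_mvpMatrix;\n"           // hypothetical uniform name
            + "in vec3 a_position;\n"
            + "in vec3 a_normal;\n"
            + "flat out vec3 v_normal;\n"             // no interpolation across the triangle
            + "void main() {\n"
            + "    v_normal = a_normal;\n"
            + "    gl_Position = u_mvpMatrix * vec4(a_position, 1.0);\n"
            + "}\n";

        // Matching fragment shader: v_normal holds the provoking vertex's normal only.
        public static final String FRAGMENT = ""
            + "#version 300 es\n"
            + "precision mediump float;\n"
            + "flat in vec3 v_normal;\n"
            + "out vec4 fragColor;\n"
            + "void main() {\n"
            + "    float light = max(dot(normalize(v_normal), normalize(vec3(0.3, 1.0, 0.5))), 0.0);\n"
            + "    fragColor = vec4(vec3(light), 1.0);\n"
            + "}\n";
    }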

OpenGL ES: Understanding Vertex Buffer Objects

I am working on an Android project a bit like Minecraft. I am finding this a great way to learn about OpenGL Performance.
I have moved over to a vertex buffer object which has given me huge performance gains but now I am seeing the down sides.
Am I right in thinking I need a vertex buffer object per:
Different mesh
Different texture
Different colour
Am I also right in thinking that every time the player adds a cube I need to add that on to the end of the VBO and every time the user removes a cube I need to regenerate the VBO?
I can't see how you could map an object with properties to its place in the VBO.
Does anyone know if Minecraft-type games use VBOs?
Yes. Just as with a malloc()'d block, if you want to expand the VBO because you need more memory, you have to create a new one. If you want to show fewer cubes, I guess you could play with index buffers (IBOs) instead, but at some point you still have to rearrange the VBO.
I'm not really sure what you mean by object properties, but if they affect how the cubes are drawn, then I think you'll need a separate VBO for each property/cube-type/shader combination and draw them in groups.
If you want to store other kinds of properties, you shouldn't store them in the VBO that you pass to OpenGL.
I have no idea what Minecraft uses, but my best advice is to store the cubes the player is unlikely to reach in VBOs and the cubes that are likely to change in an easy-to-modify container. (I don't know whether it would help or not.)
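One way to make the "rearranging" less painful is to over-allocate the VBO once and append or overwrite cube data with glBufferSubData, only recreating the buffer when it runs out of room. A rough Android GLES20 sketch follows; the class, its capacity handling and the swap-with-last removal trick are just one possible design, not what Minecraft does.

    import android.opengl.GLES20;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    public class GrowableCubeVbo {
        private static final int BYTES_PER_FLOAT = 4;
        private final int vboId;
        private final int capacityFloats;   // total floats the VBO can hold
        private int usedFloats;             // floats currently occupied

        public GrowableCubeVbo(int capacityFloats) {
            this.capacityFloats = capacityFloats;
            int[] ids = new int[1];
            GLES20.glGenBuffers(1, ids, 0);
            vboId = ids[0];
            // Allocate the full capacity up front; GL_DYNAMIC_DRAW hints at frequent updates.
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
            GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, capacityFloats * BYTES_PER_FLOAT,
                    null, GLES20.GL_DYNAMIC_DRAW);
        }

        /** Appends one cube's vertex data; returns false if the VBO is full (recreate a bigger one then). */
        public boolean appendCube(float[] cubeVertices) {
            if (usedFloats + cubeVertices.length > capacityFloats) return false;
            FloatBuffer data = ByteBuffer.allocateDirect(cubeVertices.length * BYTES_PER_FLOAT)
                    .order(ByteOrder.nativeOrder()).asFloatBuffer();
            data.put(cubeVertices).position(0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
            GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, usedFloats * BYTES_PER_FLOAT,
                    cubeVertices.length * BYTES_PER_FLOAT, data);
            usedFloats += cubeVertices.length;
            return true;
        }

        /** Removes a cube by overwriting its slot with the last cube's data, so no full rebuild is needed. */
        public void removeCube(int cubeIndex, int floatsPerCube, float[] lastCubeVertices) {
            FloatBuffer data = ByteBuffer.allocateDirect(lastCubeVertices.length * BYTES_PER_FLOAT)
                    .order(ByteOrder.nativeOrder()).asFloatBuffer();
            data.put(lastCubeVertices).position(0);
            GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vboId);
            GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, cubeIndex * floatsPerCube * BYTES_PER_FLOAT,
                    lastCubeVertices.length * BYTES_PER_FLOAT, data);
            usedFloats -= floatsPerCube;
        }
    }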

Batching Multiple Rectangles in OpenGL ES

I currently am experiencing very slow performance by iterating through quad triangle strips and drawing each one separately, so I would like to batch all of my rectangles into one single draw call.
Looking around, it seems the best way to do this is to simply incur the overhead of duplicating vertices and use GL_TRIANGLES instead of GL_TRIANGLE_STRIP, drawing two separate triangles for each rectangle.
The problem is that each rectangle can have a different color, and I need to programmatically change the color of any of the rectangles. So simply using one GL_TRIANGLES call does not do the trick. Instead, it looks like I'll need to somehow index color data with my vertex data, associating a color with each rectangle. How would I go about this?
Thank you!
You can use vertex coloring.
Vertices can each have multiple channels of data, including position, color, (multiple) texture, normal, and more.
I recommend interleaving your vertex data so that each vertex's position and color sit one after the other, directly. You can also set up a separate array of just colors and do it that way (just make sure you line up the colors with the positions correctly).
(Those tutorials are iPhone-oriented but the OpenGL ES code should work fine on Android)
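Here is a minimal sketch of that interleaved layout for GLES 2.0 on Android: every rectangle becomes two triangles, each vertex carries its position followed by its rectangle's color, and one glDrawArrays call draws the whole batch. The class and the attribute-location parameters are illustrative, not from any particular tutorial.

    import android.opengl.GLES20;
    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    public class ColoredRectBatch {
        // Per vertex: x, y (position) + r, g, b, a (color), interleaved.
        private static final int FLOATS_PER_VERTEX = 6;
        private static final int STRIDE_BYTES = FLOATS_PER_VERTEX * 4;

        private final float[] vertices;
        private int vertexCount;

        public ColoredRectBatch(int maxRects) {
            vertices = new float[maxRects * 6 * FLOATS_PER_VERTEX]; // 6 vertices (2 triangles) per rect
        }

        /** Adds one rectangle as two triangles, tagging every vertex with the rectangle's color. */
        public void addRect(float x, float y, float w, float h, float r, float g, float b, float a) {
            float[][] corners = {
                {x, y}, {x + w, y}, {x + w, y + h},   // triangle 1
                {x, y}, {x + w, y + h}, {x, y + h}    // triangle 2
            };
            for (float[] c : corners) {
                int i = vertexCount * FLOATS_PER_VERTEX;
                vertices[i] = c[0]; vertices[i + 1] = c[1];
                vertices[i + 2] = r; vertices[i + 3] = g; vertices[i + 4] = b; vertices[i + 5] = a;
                vertexCount++;
            }
        }

        /** Issues a single draw call for everything added so far; positionLoc/colorLoc are your shader's attribute locations. */
        public void draw(int positionLoc, int colorLoc) {
            if (vertexCount == 0) return;
            FloatBuffer buf = ByteBuffer.allocateDirect(vertexCount * STRIDE_BYTES)
                    .order(ByteOrder.nativeOrder()).asFloatBuffer();
            buf.put(vertices, 0, vertexCount * FLOATS_PER_VERTEX).position(0);

            GLES20.glEnableVertexAttribArray(positionLoc);
            GLES20.glVertexAttribPointer(positionLoc, 2, GLES20.GL_FLOAT, false, STRIDE_BYTES, buf);
            buf.position(2); // color starts 2 floats into each vertex
            GLES20.glEnableVertexAttribArray(colorLoc);
            GLES20.glVertexAttribPointer(colorLoc, 4, GLES20.GL_FLOAT, false, STRIDE_BYTES, buf);

            GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
            vertexCount = 0;
        }
    }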

glBlendEquation(GL_MIN) replacement in OpenGL ES 2.0

I've searched around and it seems that glBlendEquation has some issues on Android; GL_MAX/GL_MIN is not even listed in the OpenGL ES specs.
I need to find a workaround for the GL_MIN blend equation mode. Is that possible? I want to write to a color buffer only if the alpha there is greater than the alpha of the pixel I'm trying to write. Is it possible to do this without any extension or using more than one texture?
The dirty solution I was trying to avoid is this: use framebuffer objects (FBOs) and shaders to emulate the color buffer and the blending mode.
Modify the existing shaders to blend the scene with FBO_1, into FBO_2.
Render FBO_2 to the screen.
On the next drawing call, swap FBO_1 with FBO_2, since FBO_2 now holds the current color buffer.
An "unintrusive" but more inefficient alternative is to use 3 FBOs and a shader, and make an additional pass:
Render scene to FBO_1 //without any modification to existing shaders
Blend FBO_1 with FBO_2 into FBO_3 //with new shader.
Render FBO_3 to the screen.
On the next drawing call, swap FBO_2 with FBO_3. The only advantage of this alternative is that I don't have to modify the existing drawing logic.
I really don't like any of these ideas. I'll gladly accept better answers!
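For what it's worth, the blending step in either variant comes down to a fragment shader that samples the "destination" FBO texture and keeps whichever color wins the comparison. A sketch of such a shader as a Java string; u_destination, u_resolution and v_srcColor are assumed names, and the keep-the-smaller-alpha rule follows the behaviour described in the question rather than a strict component-wise GL_MIN.

    public final class MinBlendShader {
        // GLES 2.0 fragment shader sketch: "dst" is the previous color buffer, emulated in an FBO.
        // It keeps whichever color has the smaller alpha, as the question describes.
        public static final String FRAGMENT = ""
            + "#ifdef GL_ES\n"
            + "precision mediump float;\n"
            + "#endif\n"
            + "uniform sampler2D u_destination;\n"    // the FBO standing in for the color buffer
            + "uniform vec2 u_resolution;\n"           // viewport size in pixels
            + "varying vec4 v_srcColor;\n"             // the color this draw wants to write
            + "void main() {\n"
            + "    vec4 dst = texture2D(u_destination, gl_FragCoord.xy / u_resolution);\n"
            + "    gl_FragColor = (dst.a > v_srcColor.a) ? v_srcColor : dst;\n"
            + "}\n";
    }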

Animating color values in OpenGL ES 1.0 on Android

I am working on an application in which I will do some drawing with OpenGL. In this application I will draw a 3D object with a different color at each vertex. This can be done by using a color pointer.
Now, my problem is that I would like these colors to animate over time.
As the color values are given using a buffer, I would have to either recreate the buffer every frame with new colors, or replace the values in the buffer somehow (which is probably quite error prone). I also thought about the possibility of using two buffers and switching between them (drawing with one buffer, and changing the other, then switching).
And in any case, I would have to upload the buffer to video memory every frame...
So, my question is this: how do I, as efficiently as possible, animate the different colors of an object in GL10?
Note: it would of course be easy to do this using shaders in GLES 2.0, but I would prefer to just use GL10 (or 11) for this project.
Instead of using vertex colors, maybe you could come up with a clever way to use a texture, and animate it using the texture matrix? That way you would never have to update your vertex buffers.
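A sketch of that texture-matrix idea in GL10: put your color ramp in a small texture, give the object fixed texture coordinates, and slide them each frame by translating the texture matrix. The scroll speed and the offset handling here are arbitrary examples of something you animate over time.

    import javax.microedition.khronos.opengles.GL10;

    public class TextureMatrixColorAnim {
        private float offset;

        /** Call once per frame before drawing; shifts texture coordinates along a color-ramp texture. */
        public void animate(GL10 gl, float deltaTime) {
            offset = (offset + 0.25f * deltaTime) % 1f;   // scroll speed is arbitrary

            gl.glMatrixMode(GL10.GL_TEXTURE);
            gl.glLoadIdentity();
            gl.glTranslatef(offset, 0f, 0f);              // slide the U coordinates along the ramp
            gl.glMatrixMode(GL10.GL_MODELVIEW);           // back to the usual matrix mode

            // ... bind the ramp texture and draw the object as usual ...
        }
    }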
