glScissor: what is faster/better? - Android

I'm having a small dilemma.
I'm working with Android (not that it's relevant) and I noticed something on some phones but not others: the scissor prevents the glClear(GL_COLOR_BUFFER_BIT) call from working as I expected.
Therefore I'm wondering if it is better to do:
gl.glDisable(GL10.GL_SCISSOR_TEST);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
gl.glEnable(GL10.GL_SCISSOR_TEST);
or
gl.glScissor(0, 0, 800, 480);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
Basically, is it better to change the scissor test zone before the clear, or is it better to disable/enable the test?

This is what the specification of glScissor requires: a glClear is considered a drawing command in OpenGL, so it is affected by the scissor test. The behavior you see is perfectly normal (D3D works similarly); it's the other phones, where it funnily seems to work for you, that are buggy!
As for which solution to choose, both are valid, so it's a matter of taste, I would say. I prefer the first one because it's easier to figure out what is happening.
Now, if on your OpenGL implementation the second solution turns out to be faster than the first one, I would pick the second. Benchmark!
Under the glHood:
Let me share what I know about desktop GPUs and speculate a bit (don't take it too seriously!). A glClear command actually results in a draw of a full-screen quad, since drawing triangles is the "fast path" on GPUs. This is probably more efficient than DMAs or fixed hardware, since it's parallel (each shader core clears its portion of the screen: color, depth and stencil), and doing it this way avoids the cost of dedicated hardware.
About glScissor, I heard it's implemented via stencil buffering (the same mechanism as the usual OpenGL stencil buffer), so only fragments that fall into the scissor zone can participate in the depth test, fragment shading, blending, etc. (this is done for the same reason as glClear: to avoid dedicated hardware). It could also be implemented as a fragment discard plus dynamic branching on modern GPUs.
Now you can see why it works that way: only the fragments of the full-screen quad that lie within the scissor zone can "shade" the color buffer and clear it!
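The effect can be modeled without any GL at all. Here is a toy sketch (purely illustrative, not real OpenGL) that treats the clear as a full-screen draw whose fragments must pass the scissor test:

```java
// Toy model (not real GL): a "clear" behaves like a full-screen draw, so
// with the scissor test enabled, only pixels inside the scissor rectangle
// actually get written.
public class ScissoredClear {
    static int[][] clear(int[][] fb, int sx, int sy, int sw, int sh, int value) {
        for (int y = 0; y < fb.length; y++) {
            for (int x = 0; x < fb[y].length; x++) {
                // Scissor test: reject fragments outside the rectangle.
                if (x >= sx && x < sx + sw && y >= sy && y < sy + sh) {
                    fb[y][x] = value;
                }
            }
        }
        return fb;
    }
}
```

With the rectangle widened to the full surface (or the test disabled), every pixel passes and the whole buffer is cleared, which is exactly what both options in the question achieve.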

Related

Is there anything I can do about the overhead from running a shader multiple times

I'm trying to implement deferred rendering on an Android phone using OpenGL ES 3.0. I've gotten this to work OK, but only very slowly, which rather defeats the whole point. What really slows things up is the multiple calls to the shaders. Here, briefly, is what my code does:
Geometry Pass:
Render scene - output position, normal and colour to off-screen buffers.
For each light:
a) Stencil Pass:
Render a sphere at the current light position, sized according to the light's intensity. Mark these pixels as influenced by the current light. No actual output.
b) Light Pass:
Render a sphere again, this time using the data from the geometry pass to apply lighting equations to the pixels marked in the previous step. Add this to the off-screen buffer.
Blit to screen
It's this restarting of the shaders for each light that causes the bottleneck. For example, with 25 lights the above steps run at about 5 fps. If instead I do: Geometry Pass, then the Stencil Pass for all 25 lights, then the Light Pass for all 25 lights, it runs at around 30 fps. So, does anybody know how I can avoid having to re-initialize the shaders? Or, in fact, just explain what's taking up the time? Would it help, or even be possible (and I'm sorry if this sounds daft), to keep the shader 'open' and overwrite the previous data, rather than doing whatever it is that takes so much time restarting the shader? Or should I give this up as a method for handling multiple lights, on a mobile device anyway?
Well, I solved the problem of having to swap shaders for each light by using an integer texture as a stencil map, where a certain bit is set to represent each light (so, limited to 32 lights). This means step 2a (above) can be looped, then a single change of shader, then looping step 2b. However (ahahaha!), it turns out that this didn't really speed things up, as it's not swapping shaders that's the problem after all, but changing the write destination, that is, multiple calls to glDrawBuffers. I had two such calls in the stencil-creation loop: one to draw nowhere when drawing a sphere to calculate which pixels are influenced, and one to draw to the integer texture used as the stencil map. I finally realized that, as I use blending (each write with a colour where a single bit is on), it doesn't matter if I write at the pixel-calculation stage, so long as it's with all zeros. Getting rid of the unnecessary calls to glDrawBuffers takes the FPS from single figures to the high twenties.
In summary, this method of deferred rendering is certainly faster than forward rendering but limited to 32 lights.
I'd like to say that my code was written just to see if this was a viable method, and many small optimizations could be made. Incidentally, as I was limited to 4 draw buffers, I had to scrap the position map and instead recover position from gl_FragCoord.xyz. I don't have proper benchmarking tools, so I'd be interested to hear from anyone who can tell me what difference this makes, speed-wise.
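The bit-per-light bookkeeping described above boils down to simple bit arithmetic. A sketch with made-up helper names (the real work happens in the shader via the integer texture, so this is only the logical model):

```java
// Sketch: one bit of a 32-bit integer per light, as in the integer stencil
// map described above (hence the 32-light limit). Helper names are
// illustrative, not from the original code.
public class LightMask {
    // Step 2a: mark a pixel as influenced by light 'lightIndex'.
    static int markLight(int mask, int lightIndex) {
        return mask | (1 << lightIndex);
    }

    // Step 2b: was this pixel marked for light 'lightIndex'?
    static boolean isLit(int mask, int lightIndex) {
        return (mask & (1 << lightIndex)) != 0;
    }
}
```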

Android OpenGL rendering bug without glClear?

I'm developing a drawing application where the user can select from a range of brushes and paint on the screen. I'm using textures as brushes, and I'm drawing the vertices as points with GL_POINT_SPRITE_OES enabled, as shown below.
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnable(GL11.GL_POINT_SPRITE_OES);
gl.glTexEnvf(GL11.GL_POINT_SPRITE_OES, GL11.GL_COORD_REPLACE_OES, GL10.GL_TRUE);
The application worked just as desired, but I needed to optimize its runtime, as the framerate dropped under 30 when dealing with a lot of vertices. Since the application's domain allows it, it seemed a good idea to leave out the glClear and skip redrawing the already-existing lines, as that is really unnecessary. However, this resulted in a very strange bug I haven't been able to fix since. When OpenGL is not rendering (I have set the render mode to WHEN_DIRTY), only about 1/3 of all the vertices are visible on the screen. Requesting a redraw by calling requestRender() makes these vertices disappear and others show up. There are three states I can tell apart, each showing approximately 1/3 of all the vertices.
I have uploaded screenshots (http://postimg.org/image/d63tje56l/, http://postimg.org/image/npeds634f/) to make it a bit easier for you to understand. They show the state where I have drawn three lines with different colors (SO didn't allow me to link all 3 images, but I hope you can imagine the third: it has the segments missing from the 1st and the 2nd). It can clearly be seen that if I could merge the screens into a single one, I would get the desired result.
I'm only guessing at the cause, since I'm not an OpenGL expert. My best guess is that OpenGL uses triple buffering and only a single buffer is shown at any given time, while the other vertices sit on the back buffers. I have tried forcing all buffers to be rendered, as well as forcing all vertices to appear on all buffers, but I couldn't manage either.
Could you help me solve this?
I believe your guess is exactly right. The way OpenGL is commonly used, you're expected to draw a complete frame, including an initial clear, every time you're asked to redraw. If you don't do that, behavior is generally undefined. In your case, it certainly looks like triple buffering is used, and your drawing is distributed over 3 separate surfaces.
This model does not work very well for incremental drawing, where drawing a full frame is very expensive. There are a few options you can consider.
Optimize your drawing
This is not directly a solution, but always something worth thinking about. If you can find a way to make your rendering much more efficient, there might be no need to render incrementally. You're not showing your rendering code, so it's possible that you simply have too many points to get a good framerate.
But in any case, make sure that you use OpenGL efficiently. For example, store your points in VBOs, and update only the parts that change with glBufferSubData().
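As an example of the partial-update arithmetic, assuming a tightly packed x,y float layout (the constants and class name here are assumptions for illustration, not from the question):

```java
// Sketch: byte range for updating only newly appended points via
// glBufferSubData, assuming each point is two tightly packed floats.
public class VboRange {
    static final int FLOATS_PER_POINT = 2; // x, y
    static final int BYTES_PER_FLOAT = 4;

    static int byteOffset(int firstChangedPoint) {
        return firstChangedPoint * FLOATS_PER_POINT * BYTES_PER_FLOAT;
    }

    static int byteLength(int changedPointCount) {
        return changedPointCount * FLOATS_PER_POINT * BYTES_PER_FLOAT;
    }
}
```

These would be the offset and size arguments to glBufferSubData(), so only the tail of the current stroke is re-uploaded each frame instead of the whole buffer.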
Draw to FBO, then blit
This is the most generic and practical solution. Instead of drawing directly to the primary framebuffer, use a Frame Buffer Object (FBO) to render to a texture. You do all of your drawing to this FBO, and copy it to the primary framebuffer when it's time to redraw.
For copying from FBO to the primary framebuffer, you will need a simple pair of vertex/fragment shaders in ES 2.0. In ES 3.0 and later, you can use glBlitFramebuffer().
Pros:
Works on any device, using only standard ES 2.0 features.
Easy to implement.
Cons:
Requires a copy of the framebuffer on every redraw.
Single Buffering
EGL, which is the underlying API to connect OpenGL to the window system in Android, does have attributes to create single buffered surfaces. While single buffered rendering is rarely advisable, your use case is one of the few where it could still be considered.
While the API definition exists, the documentation specifies support as optional:
Client APIs may not be able to respect the requested rendering buffer. To determine the actual buffer being rendered to by a context, call eglQueryContext.
I have never tried this myself, so I have no idea how widespread support is, or if it's supported on Android at all. The following sketches how it could be implemented if you want to try it out:
If you derive from GLSurfaceView for your OpenGL rendering, you need to provide your own EGLWindowSurfaceFactory, which would look something like this:
class SingleBufferFactory implements GLSurfaceView.EGLWindowSurfaceFactory {
    public EGLSurface createWindowSurface(EGL10 egl, EGLDisplay display,
                                          EGLConfig config, Object nativeWindow) {
        int[] attribs = {EGL10.EGL_RENDER_BUFFER, EGL10.EGL_SINGLE_BUFFER,
                         EGL10.EGL_NONE};
        return egl.eglCreateWindowSurface(display, config, nativeWindow, attribs);
    }

    public void destroySurface(EGL10 egl, EGLDisplay display, EGLSurface surface) {
        egl.eglDestroySurface(display, surface);
    }
}
Then in your GLSurfaceView subclass constructor, before calling setRenderer():
setEGLWindowSurfaceFactory(new SingleBufferFactory());
Pros:
Can draw directly to primary framebuffer, no need for copies.
Cons:
May not be supported on some or all devices.
Single buffered rendering may be inefficient.
Use EGL_BUFFER_PRESERVE
The EGL API allows you to specify a surface attribute that requests the buffer content to be preserved on eglSwapBuffers(). This is not available in the EGL10 interface, though. You'll have to use the EGL14 interface, which requires at least API level 17.
To set this, use:
EGL14.eglSurfaceAttrib(EGL14.eglGetCurrentDisplay(),
                       EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW),
                       EGL14.EGL_SWAP_BEHAVIOR, EGL14.EGL_BUFFER_PRESERVED);
You should be able to place this in the onSurfaceCreated() method of your GLSurfaceView.Renderer implementation.
This is supported on some devices, but not on others. You can query if it's supported by querying the EGL_SURFACE_TYPE attribute of the config, and check it against the EGL_SWAP_BEHAVIOR_PRESERVED_BIT bit. Or you can make this part of your config selection.
Pros:
Can draw directly to primary framebuffer, no need for copies.
Can still use double/triple buffered rendering.
Cons:
Only supported on subset of devices.
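The capability check mentioned above is just a bit test on the value queried for EGL_SURFACE_TYPE. A sketch (0x0400 is the EGL 1.4 value of EGL_SWAP_BEHAVIOR_PRESERVED_BIT; the class name is made up):

```java
// Sketch: check whether a config's EGL_SURFACE_TYPE advertises preserved
// swap behavior. 'surfaceType' is the value obtained via
// eglGetConfigAttrib(display, config, EGL_SURFACE_TYPE, ...).
public class PreserveCheck {
    static final int EGL_SWAP_BEHAVIOR_PRESERVED_BIT = 0x0400; // EGL 1.4

    static boolean supportsBufferPreserve(int surfaceType) {
        return (surfaceType & EGL_SWAP_BEHAVIOR_PRESERVED_BIT) != 0;
    }
}
```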
Conclusion
I would probably check for EGL_BUFFER_PRESERVE support on the specific device, and use it if it is supported. Otherwise, go for the FBO-and-blit approach.

OpenGL/Android big sprite quads bad performance

I am developing a game on Android using OpenGL and am having a little performance problem.
Let's say, for example, I want to draw a background partially filled with grass "bushes". Bushes have different x, y, z positions, different sizes, and so on (each bush is a 2D sprite), and they potentially partially hide each other (I use a perspective camera). I am having a big performance problem when those sprites are big (i.e. the quad sizes, not the texture size/resolution):
If I use a classical front-to-back draw (to avoid overdraw), I run into problems because of (I think) alpha testing. Even though the bushes have only opaque and fully transparent pixels (no partial transparency), and even with the proper alpha-test comparison (GL_EQUAL 1), performance is bad because a lot of pixels have to be alpha-tested (if I understand right).
If I use a back-to-front draw with alpha testing disabled, I lose a lot of performance too (this time because of overdraw), even when disabling depth-buffer writes (not sure that does anything when the depth test is disabled, by the way).
I get good performance using front-to-back without alpha testing, but of course the sprite cutout is completely gone, which is really, really bad.
All the bushes share the same texture, and I use 16-bit colors, mipmapping, geometry batching, face culling, no shaders, and so on: everything I can think of to improve performance (which is not bad in other cases), except texture compression. I even filter out the sprites that are off-screen. I have also tried some "violent optimizations" for testing purposes, such as making the textures fully opaque, lowering the texture resolution a lot, disabling blending, etc., but nothing helped dramatically except removing the alpha test.
I was wondering if I am forgetting something here that would help with performance. Back-to-front creates overdraw; front-to-back is slow because of alpha testing (and I do not want my bushes to be "square" images, so I cannot disable alpha testing). If I create smaller sprites, performance is far better (even with a lot more sprites), but that is only a workaround.
To summarize, how can you display overlapping big quads needing cutout, without losing performance?
PS: I am testing on a Nexus One.
PS2: Some optimization guides suggest not drawing plain quads but geometry that "fits" the texture more closely; that seems a really tedious process, though, and I don't think it would help me much.
Drawing front-to-back is normally a benefit because of early-z: the hardware can do the depth test right after rasterization, before doing the texture fetch or shading. With front-to-back sorting, most fragments fail the depth test, and you save a lot of texture bandwidth, shading throughput, and zbuffer-write bandwidth.
But alpha test breaks that. If a fragment passes the depth test, it might still be killed by alpha test, so zwrite can't happen until after texturing/shading. Most hardware that can do early-z still has to do the depth test at the same point in the pipeline as it does zwrite, so with alpha test you end up doing ztest + zwrite after texturing and shading. As a result, front-to-back sorting only saves you zwrite bandwidth, nothing else.
I think you have two options, if you really want large sprites that overlap significantly:
(a) Only use two or three distinct Z values for your sprites. Draw them back-to-front with blending (and alpha test, if it helps). With no overlap within a layer, you can pre-render each layer, either in the original assets or once at runtime, and then just shift it left and right.
(b) If your sprites have large opaque regions surrounded by a semi-transparent border, you can draw the opaque regions in a first pass with no alpha test, then draw borders as a separate pass. This will cut down on the number of alpha-tested fragments.

Is discard bad for program performance in OpenGL?

I was reading this article, and the author writes:
Here's how to write high-performance applications on every platform in two easy steps:
[...]
Follow best practices. In the case of Android and OpenGL, this includes things like "batch draw calls", "don't use discard in fragment shaders", and so on.
I have never before heard that discard would have a bad impact on performance or such, and have been using it to avoid blending when a detailed alpha hasn't been necessary.
Could someone please explain why and when using discard might be considered a bad practise, and how discard + depthtest compares with alpha + blend?
Edit: After having received an answer on this question I did some testing by rendering a background gradient with a textured quad on top of that.
Using GL_DEPTH_TEST and a fragment shader ending with the line "if (gl_FragColor.a < 0.5) { discard; }" gave about 32 fps.
Removing the if/discard statement from the fragment shader increased the rendering speed to about 44 fps.
Using GL_BLEND with the blend function (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) instead of GL_DEPTH_TEST also resulted in around 44 fps.
It's hardware-dependent. For PowerVR hardware, and other GPUs that use tile-based rendering, using discard means that the TBR can no longer assume that every fragment drawn will become a pixel. This assumption is important because it allows the TBR to evaluate all the depths first, then only evaluate the fragment shaders for the top-most fragments. A sort of deferred rendering approach, except in hardware.
Note that you would get the same issue from turning on alpha test.
"discard" is bad for every mainstream graphics-acceleration technique: IMR, TBR, TBDR. This is because the visibility of a fragment (and hence its depth) is only determinable after fragment processing, not during Early-Z or PowerVR's HSR (hidden surface removal), etc. The further down the graphics pipeline something gets before removal, the greater its effect on performance tends to be; in this case, more processing of fragments plus disruption of the depth processing of other polygons = bad effect.
If you must use discard, make sure that only the triangles that need it are rendered with a shader containing it, and, to minimise its effect on overall rendering performance, render your objects in this order: opaque, discard, blended.
Incidentally, only PowerVR hardware determines visibility in the deferred step (hence it's the only GPU termed as "TBDR"). Other solutions may be tile-based (TBR), but are still using Early Z techniques dependent on submission order like an IMR does.
TBRs and TBDRs do blending on-chip (faster and less power-hungry than going to main memory), so blending should be favoured for transparency. The usual procedure to render blended polygons correctly is to disable depth writes (but not tests) and render the triangles in back-to-front depth order (unless the blend operation is order-independent). Often an approximate sort is good enough. Geometry should be arranged so that large areas of completely transparent fragments are avoided. More than one fragment still gets processed per pixel this way, but hardware depth optimisation isn't interrupted as it is by discarded fragments.
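The recommended submission order (opaque, then discard, then blended) can be sketched as a simple sort key on draw batches. The Batch type and its fields here are made up for illustration; a real renderer would sort whatever structure it already batches by:

```java
// Sketch: sorting draw batches into the order recommended above
// (opaque first, then alpha-tested/discard, then blended last).
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class PassOrder {
    enum Kind { OPAQUE, DISCARD, BLENDED } // enum order = draw order

    static class Batch {
        final String name;
        final Kind kind;
        Batch(String name, Kind kind) { this.name = name; this.kind = kind; }
    }

    static List<Batch> sortForDrawing(List<Batch> batches) {
        List<Batch> sorted = new ArrayList<>(batches);
        sorted.sort(Comparator.comparingInt(b -> b.kind.ordinal()));
        return sorted;
    }
}
```

Within the BLENDED group you would additionally sort back-to-front by depth, as described above.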
Also, just having an "if" statement in your fragment shader can cause a big slowdown on some hardware. (Specifically, GPUs that are heavily pipelined, or that do single instruction/multiple data, will have big performance penalties from branch statements.) So your test results might be a combination of the "if" statement and the effects that others mentioned.
(For what it's worth, testing on my Galaxy Nexus showed a huge speedup when I switched to depth-sorting my semitransparent objects and rendering them back to front, instead of rendering in random order and discarding fragments in the shader.)
Object A is in front of Object B. Object A has a shader using 'discard'. As such, I can't do 'Early-Z' properly because I need to know which sections of Object B will be visible through Object A. This means that Object A has to pass all the way through the processing pipeline until almost the last moment (until fragment processing is performed) before I can determine if Object B is actually visible or not.
This is bad for HSR and Early-Z, as potentially occluded objects have to sit and wait for the depth information to be updated before they can be processed. As stated above, it's bad for everyone. Or, to put it in a slightly more friendly way: "Friends don't let friends use discard."
In your test, the if statement costs you at the per-fragment level:
if (gl_FragColor.a < 0.5) { discard; }
would be processed once for every pixel being rendered (pretty sure that's per pixel and not per texel).
If your if statement tested a uniform or a constant instead, you'd most likely get a different result, since constants are resolved once at compile time and uniforms are processed once per update.

Single Bone (Matrix Palette) Animation vs. simple rotation/translation of parts

First of all, I'm using OpenGL ES 1.1 and not 2.0, mostly because I've spent quite a bit of time learning 1.0/1.1 and am pretty comfortable with it.
I took time to learn the use of matrix palettes for animation and after switching gears on a project I've come to a question.
Originally I was using 2 and 3 bone animation because I had the need of weights for certain vertex groups. Now... in the new project I'm working on I will be animating more mechanical things, so the need for more than 1 bone or weighting is unnecessary. I'd like to still use a matrix palette with verts weighted 100% to single bones... but I wonder if that will cause a performance hit. Instead, I could break a mesh into smaller pieces and do simple translation and rotation between element draw calls. I am concerned, of course, with performance.
TL;DR version: try both ways and see which one performs better.
Really, using palettes for 1-bone animation is something you can do without too much hassle, and depending on the number of different bones and the driver overhead on your target devices, it might perform better.
It's worth noting that weights can be ignored in a 1-bone model, and the resulting per-vertex code should typically be comparable to a single transform, modulo the indirection to the palette.
That, of course, hinges on the GL implementation to optimize the weighting away. On the other hand, the higher the number of bones, the more draw calls you would have to generate without palettes, and the more you tax the CPU/driver code.
So at a broad level, I'd say that the palette is somewhat more work per vertex, but significantly less per bone. Where the tipping point lies depends on the platform, as both of those costs can vary significantly.
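The per-vertex claim above can be made concrete: with all weights at 1.0, palette skinning of a vertex reduces to one matrix transform selected by the vertex's bone index, v' = palette[boneIndex] * v. A sketch (hypothetical helper, not from the question), using column-major 4x4 matrices as OpenGL does:

```java
// Sketch: one-bone "skinning" is just an indexed lookup into the palette
// followed by a single point transform (column-major 4x4, OpenGL layout).
public class OneBoneSkin {
    // Transform the point (x, y, z, 1) by matrix m.
    static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14],
        };
    }

    // One-bone case: pick the bone's matrix from the palette, weight is 1.0.
    static float[] skin(float[][] palette, int boneIndex,
                        float x, float y, float z) {
        return transform(palette[boneIndex], x, y, z);
    }
}
```

So the only extra cost over a plain modelview transform is the palette indirection, which is what the tipping-point argument above hinges on.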
