I wrote a web application that draws an interactive scene, including shadow mapping, using JavaScript and WebGL. I'd like to make sure that this site works on as many Android devices as reasonably possible. Shadow mapping works fine on desktop computers, both by using depth textures and by abusing a color texture to store the depth. But I haven't managed to make the site render the scene with shadow mapping on Android devices without major artifacts.
The problems are:
According to webglstats.com, most Android devices do not support the WEBGL_depth_texture extension, so directly using the light source depth buffer as the shadow map does not work.
The workaround is to encode the depth of each fragment as an RGBA value. While this works fine on desktop computers, the same code causes major artifacts on Android. My guess is that this is a precision issue: either the precision of the WebGL-computed depth values is too low, and/or the mediump floats in WebGL fragment shaders (according to the shader compiler error message in Chrome on my 2012 Nexus 7, highp is not supported in fragment shaders) are actually only half-floats and thus have too low a precision for splitting the depth value into RGBA values.
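For concreteness, the kind of packing I mean is the usual bit-shift scheme, along these lines (a sketch rather than my exact code):

// Pack a [0,1) depth value into the four 8-bit channels of a color texture,
// and recover it again when sampling the shadow map.
vec4 packDepth(float depth) {
    const vec4 bitShift = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
    const vec4 bitMask  = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
    vec4 comp = fract(depth * bitShift);
    comp -= comp.xxyz * bitMask;   // carry correction: each channel keeps only its own 8 bits
    return comp;
}

float unpackDepth(vec4 rgba) {
    const vec4 bitShift = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
    return dot(rgba, bitShift);
}

If the fragment shader really only has mediump (half-float) precision, the large multipliers here are exactly where I would expect the packing to fall apart.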
Are there working examples of WebGL shadow mapping that run fine on the majority of Android devices? Or is this just not reasonably possible? (Emulating a higher float accuracy through something like floating point expansions seems prohibitively expensive in a fragment shader).
Related
I'm working with the OpenGL ES 3.0 API on Android. I'm writing functionality for a bitmap text renderer, and it works flawlessly on two of my devices, both Samsung, but on my Kindle Fire HD 10 (7th generation) tablet it's all messed up: it samples from the wrong portions of the texture atlas, displays the wrong glyphs, sometimes won't display the entire message, and sometimes, when I switch messages via mapped buffers, it briefly displays the whole message at once before starting its animation. I suspect this is related to the degenerate triangle strips I'm using, so I ask: is support for them not ubiquitous across all Android devices supporting OpenGL ES 3.0? I've had trouble with the Kindle Fire before in shader-related areas. It won't work at all if I don't specify a precision, both for float and for sampler2DArray; at least, that's what I've found so far.
A degenerate triangle "should just work"; there is no feature here in the API for the hardware not to support. They are just triangles which happen to have zero area because of two coincident vertices.
They have historically been really common in any content using triangle strips, so to be honest I'd be surprised if they were broken.
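For reference, that's all there is to them; a sketch of stitching two quads into one strip by repeating the shared vertices (the index values are just illustrative):

// Quad A uses vertices 0-3, quad B uses vertices 4-7. Repeating indices 3 and 4
// produces zero-area (degenerate) triangles that bridge the quads without
// drawing anything visible.
short[] stripIndices = {
    0, 1, 2, 3,   // quad A
    3, 4,         // degenerate bridge
    4, 5, 6, 7    // quad B
};

If glyphs disappear or the wrong ones show up, I would look first at how the indices and mapped buffer updates are built, not at the degenerate triangles themselves.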
It won't work at all if I don't specify a precision, both for float and for sampler2DArray; at least, that's what I've found so far.
That's not a bug; that's what the specification requires you to do. See "4.7.4. Default Precision Qualifiers" in the OpenGL ES 3.2 shader language specification.
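In practice that means an ES 3.0 fragment shader needs declarations like these near the top (a minimal sketch; the uniform and varying names are made up):

#version 300 es
// float and sampler2DArray have no default precision in the fragment
// language, so both must be declared; int, sampler2D and samplerCube
// do have defaults.
precision mediump float;
precision mediump sampler2DArray;

uniform sampler2DArray uGlyphAtlas;
in vec2 vTexCoord;
flat in float vLayer;
out vec4 fragColor;

void main() {
    fragColor = texture(uGlyphAtlas, vec3(vTexCoord, vLayer));
}

A compiler that accepts the shader without those declarations (as the Samsung devices apparently do) is being lenient, not correct.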
I have created an Android app for drawing lines, circles, etc. using GLSurfaceView in OpenGL ES 2.0, like an AutoCAD app.
The app works well on the Google Nexus 7, in the sense that if we draw a line and then a circle, the line doesn't get erased in the surface view. But on the Samsung Galaxy Note II it is entirely different.
The line drawn before a circle gets erased, i.e. each time we draw a new line or circle, the previous one gets erased, so I can only draw one shape at a time. What I need on the Samsung Galaxy Note II is the same output I get on the Google Nexus 7, i.e. I want to be able to draw more than one shape in the GLSurfaceView at a time.
Note:
Both the Google Nexus 7 and the Samsung Galaxy Note II run Android Jelly Bean 4.2, but the two devices have different GPUs: the Nexus 7 uses a ULP GeForce and the Galaxy Note II a Mali-400 MP.
Could this be an issue in the rendering output of the SurfaceView?
Should we take the GPU into account while coding?
Can anyone tell me why the output differs between these devices?
Should we take the GPU into account while coding? No way, the OpenGL API is a layer between your application and the hardware.
This is largely correct for desktop graphics, where all GPUs are immediate-mode renderers; however, this is NOT the case in mobile graphics.
The Mali GPUs use tile-based immediate-mode rendering.
For this type of rendering, the framebuffer is divided into tiles of size 16 by 16 pixels. The Polygon List Builder (PLB) organizes input data from the application into polygon lists. There is a polygon list for each tile. When a primitive covers part of a tile, an entry, called a polygon list command, is added to the polygon list for the tile.
The pixel processor takes the polygon list for one tile and computes values for all pixels in that tile before starting work on the next tile. Because this tile-based approach uses a fast, on-chip tile buffer, the GPU only writes the tile buffer contents to the framebuffer in main memory at the end of each tile. Non-tiled-based, immediate-mode renderers generally require many more framebuffer accesses. The tile-based method therefore consumes less memory bandwidth, and supports operations such as depth testing, blending and anti-aliasing efficiently.
Another difference is the treatment of rendered buffers. Immediate-mode renderers will "save" the content of your buffer, effectively allowing you to draw only the differences in the rendered scene on top of what previously existed. This IS available on Mali; however, it is not enabled by default, as it can cause undesired effects if used incorrectly.
There is a Mali GLES2 SDK example of how to use "EGL Preserve" correctly, available in the GLES2 SDK here.
The reason the GeForce ULP-based Nexus 7 works as intended is that, as an immediate-mode renderer, it defaults to preserving the buffers, whereas Mali does not.
From the Khronos EGL specification:
EGL_SWAP_BEHAVIOR
Specifies the effect on the color buffer of posting a surface with eglSwapBuffers. A value of EGL_BUFFER_PRESERVED indicates that color buffer contents are unaffected, while EGL_BUFFER_DESTROYED indicates that color buffer contents may be destroyed or changed by the operation.
The initial value of EGL_SWAP_BEHAVIOR is chosen by the implementation.
The default value of EGL_SWAP_BEHAVIOR on the Mali platform is EGL_BUFFER_DESTROYED. This is due to the performance hit of having to fetch the previous buffer from memory before rendering the new frame and store it again at the end, as well as the bandwidth this consumes (which is also very bad for battery life on mobile devices). I am unable to comment with certainty on the default behaviour of the Tegra SoCs; however, it is apparent to me that their default is EGL_BUFFER_PRESERVED.
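If preserved contents are genuinely required, the application can ask for them explicitly. A rough sketch on Android (API 17+, android.opengl.EGL14; the chosen EGLConfig must include EGL_SWAP_BEHAVIOR_PRESERVED_BIT in its EGL_SURFACE_TYPE for this to succeed):

// Call on the GL thread once the surface is current.
EGLDisplay display = EGL14.eglGetCurrentDisplay();
EGLSurface surface = EGL14.eglGetCurrentSurface(EGL14.EGL_DRAW);
boolean ok = EGL14.eglSurfaceAttrib(display, surface,
        EGL14.EGL_SWAP_BEHAVIOR, EGL14.EGL_BUFFER_PRESERVED);
// If ok is false, or eglQuerySurface still reports EGL_BUFFER_DESTROYED,
// redraw the whole scene every frame instead of relying on old contents.

Bear in mind that enabling this brings back exactly the bandwidth cost described above.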
To clarify Mali's position with regard to the Khronos GLES specifications: Mali is fully compliant.
I have not seen your code, but you are probably doing something wrong.
Maybe you swap the buffer, erase it, etc. where you must not.
Should we take the GPU into account while coding?
No way. The OpenGL API is a layer between your application and the hardware.
We are a team of developers working on terrain visualization software over a virtual 3D globe. The project is aimed at mobile devices running Android, mainly tablets and mobile phones. We have tested this on several devices and, while mobile phones seem to run the application fine (we haven't detected any issues on any of them), some tablets seem to have problems when drawing the textures on the screen.
For clarity, I'm attaching a video that shows the problem, since it's a little difficult to explain in words. The example shows a sphere divided into 200 sectors, each with a different texture.
Texture problem video
As you can see, sometimes it looks like it is trying to draw two different textures in the same sector at the same time.
We have tested this in these devices:
Samsung Galaxy S SLC (ok)
HTC Desire (ok)
Nook Ebook Reader (ok)
Samsung Galaxy Tab 10.1 (doesn't work properly)
Sony Tablet S (doesn't work either)
Samsung Galaxy Tab 7.0 (ok)
I'm posting the critical code that may be involved. First, the fragment shader used to draw the textures:
varying mediump vec2 TextureCoordOut;
uniform sampler2D Sampler;
....
gl_FragColor = texture2D (Sampler, TextureCoordOut);
Next, I'm posting the key OpenGL calls that are executed, since the code is spread across several big functions:
GLES20.glGenTextures(num, idTextures, 1); //declare 200 textures
...
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, idTextures[texture]); //texture binding
...
GLES20.glVertexAttribPointer(Attributes.Position, size, GLES20.GL_FLOAT, false, stride, fb);
...
GLES20.glVertexAttribPointer(Attributes.TextureCoord, size, GLES20.GL_FLOAT, false, stride, fb);
...
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, first, count);
I'm sorry not to be able to provide more details, but after several weeks of debugging we have no clue at all what could be causing this. I'm turning to you hoping for any leads, since we are completely lost right now. Thanks in advance.
I think I might know what causes your difficulties. The only devices where it doesn't work for you are Tegra 2 devices. I recently started working with a Tegra 2 device and noticed certain differences: compared to other GPUs, Nvidia seems to have introduced certain kinds of rounding errors that make some things behave differently. I had to go to extreme lengths and use workarounds to get my complex shaders working the way I want.
What I can see in your video looks a bit like the z-buffer not having enough resolution, which causes some kind of z-fighting: http://en.wikipedia.org/wiki/Z-fighting. I am not sure, because I can't really tell from your code/description whether you are drawing triangles that are very close together, which can cause this kind of behaviour in the z-buffer.
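One cheap way to test the depth-resolution theory, assuming you are using a plain GLSurfaceView (this is not taken from your code), is to explicitly request a deeper depth buffer and see whether the artifact changes:

// Must be called before setRenderer(); if the device has no matching config,
// surface creation will fail and you can fall back to a 16-bit depth buffer.
glSurfaceView.setEGLContextClientVersion(2);
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 24, 0);  // RGBA8 + 24-bit depth, no stencil
glSurfaceView.setRenderer(renderer);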
But maybe you can try scaling the vertices (x, y, z) somewhat up or down and see what happens to the flickering. If it changes, it probably has something to do with that; if not, the problem might have another cause, but then a little more code would be nice.
It would also be interesting to try to narrow it down by not painting the whole sphere but just one or two triangles where the problem occurs and see if it still happens.
PS: A longer video would have been better, short clips suck on vimeo.
I have an OpenGL Live wallpaper that works fine on all phones except those with the PowerVR SGX series. This includes almost all Samsung phones and the Motorola Droid series. The wallpaper is nothing but a black screen on the PowerVR GPU phones. I have been racking my brain for a week trying to figure this problem out but have had no luck.
One difference between the GPUs is their texture compression support. Among the things I have tried in that regard: I changed my texture image to a 256x256 square, changed it from 8-bit to 16-bit RGBA, and even tried an indexed format.
I have a list of all the extensions that are available with the PowerVR and the ones that are available with the Adreno. It seems that there are quite a few differences in available extensions but I do not know what functions go with what extensions (though I can somewhat guess). Here is a list of the functions that I use:
glLightfv
glMaterialfv
glDepthFunc
glEnableClientState
glViewport
glMatrixMode
glLoadIdentity
gluPerspective
glClearColor
glClear
glTranslatef
glRotatef
glVertexPointer
glTexCoordPointer
glColor4f
glNormal3f
glDrawArrays
glTexParameterx
I am using Robert Green's GLWallpaperService and have tried the solution at "Trying to draw textured triangles on device fails, but the emulator works. Why?". Does anybody have any idea why the PowerVR chips are giving me such a hard time and what I could do about it?
Removing EGL10.EGL_RED_SIZE, EGL10.EGL_GREEN_SIZE and EGL10.EGL_BLUE_SIZE from the eglChooseConfig attribute list, but leaving EGL10.EGL_DEPTH_SIZE and EGL10.EGL_NONE, worked. I assume that the PowerVR chip processes RGB in a way that makes defining them explicitly a problem.
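Roughly what the change looks like (the size values below are illustrative, not my exact ones; egl, display, configs and numConfigs are whatever your config chooser already has in scope):

// Before: explicitly requesting color channel sizes failed on PowerVR SGX devices.
int[] configSpecBefore = {
    EGL10.EGL_RED_SIZE,   5,
    EGL10.EGL_GREEN_SIZE, 6,
    EGL10.EGL_BLUE_SIZE,  5,
    EGL10.EGL_DEPTH_SIZE, 16,
    EGL10.EGL_NONE
};

// After: only ask for a depth buffer and let EGL pick the color format.
int[] configSpecAfter = {
    EGL10.EGL_DEPTH_SIZE, 16,
    EGL10.EGL_NONE
};
egl.eglChooseConfig(display, configSpecAfter, configs, configs.length, numConfigs);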
This probably won't help you, but I noticed:
One difference between the GPUs is their texture compression support. Among the things I have tried in that regard: I changed my texture image to a 256x256 square, changed it from 8-bit to 16-bit RGBA, and even tried an indexed format.
To my knowledge, no current hardware supports indexed textures. Also, to use texture compression, you need to target a compressed texture format that is specifically supported by the device (which usually entails running a compressor on the host/development platform). SGX supports PVRTC and ETC, but whether those are enabled depends on the platform.
From my own experience with this GPU, it will offer EGL configurations that, once applied, will not work (i.e. the GLES context will not be created). The workaround is to look at the GLSurfaceView code, roll your own config chooser, and try each offered configuration to see whether it works for creating a context.
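A rough sketch of that idea, assuming a custom GLSurfaceView.EGLConfigChooser (the class and variable names are made up for illustration):

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;
import android.opengl.GLSurfaceView;

class ProbingConfigChooser implements GLSurfaceView.EGLConfigChooser {
    private static final int EGL_CONTEXT_CLIENT_VERSION = 0x3098;

    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] count = new int[1];
        egl.eglGetConfigs(display, null, 0, count);
        EGLConfig[] configs = new EGLConfig[count[0]];
        egl.eglGetConfigs(display, configs, count[0], count);

        int[] ctxAttribs = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL10.EGL_NONE };
        for (EGLConfig config : configs) {
            // Probe each offered config: keep the first one that actually yields a context.
            EGLContext ctx = egl.eglCreateContext(display, config,
                    EGL10.EGL_NO_CONTEXT, ctxAttribs);
            if (ctx != null && ctx != EGL10.EGL_NO_CONTEXT) {
                egl.eglDestroyContext(display, ctx);
                return config;
            }
        }
        throw new IllegalArgumentException("No usable EGLConfig found");
    }
}

Install it with setEGLConfigChooser(new ProbingConfigChooser()) before setRenderer(); a real chooser would also filter the list by the channel and depth sizes you actually need.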
I'm trying to develop an app that uses OpenGL on Android, and ideally make it run on any phone as old as the original Droid (or at least any phone with OpenGL ES 2.0 support). Currently, I'm using a 2048x2048 ETC1-compressed texture. It works fine on the Droid X I'm testing on, but I don't currently have an original Droid to test with, and I can't find much data on this topic either. I know the G1 didn't do well with textures bigger than 512x512, and the Droid seems to do fine with images as large as 1024x1024, but what about 2048x2048? (Again, with ETC1 compression, so it's about 2 MB.) Also, because ETC1 doesn't support alpha, I would like to load up another ETC1 texture to provide an alpha channel. Is this a pipe dream?
Basically, I would like to know how much space I have for texture data on Android phones no older than the original Droid, at least without the whole thing slowing down drastically.
You can query GL_MAX_TEXTURE_SIZE to get the maximum texture size, or you can look up your phone on http://glbenchmark.com (you can find the Droid info here). The G1 did not support GLES 2.0, AFAIK.
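Querying it needs a current GL context (e.g. inside onSurfaceCreated) and looks roughly like this:

// Largest width/height in texels that the GPU accepts for a 2D texture.
int[] maxTextureSize = new int[1];
GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, maxTextureSize, 0);
Log.d("GL", "GL_MAX_TEXTURE_SIZE = " + maxTextureSize[0]);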
Loading up several textures should definitely work, especially when they are not more than 2 MB each. But you will of course be restricted by the size of the available memory.
Also, to have optimal performance you should mipmap your textures. Since you are using ETC, this must be done offline.
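For example, one way to upload an offline-generated ETC1 mip chain on Android, assuming one PKM file per level produced with something like etc1tool (the file names, textureId, maxMipLevel and assets are made up for illustration; IOException handling omitted):

// ETC1Util parses the PKM header of each level and falls back to an
// uncompressed upload if the device does not support ETC1.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
        GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
        GLES20.GL_LINEAR);
for (int level = 0; level <= maxMipLevel; level++) {
    InputStream in = assets.open("terrain_mip" + level + ".pkm");
    ETC1Util.loadTexture(GLES20.GL_TEXTURE_2D, level, 0,
            GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, in);
    in.close();
}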
For a guide to how to use ETC1 with alpha, see here.
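The usual approach (I'm not sure which exact variant the linked guide uses) is to keep the alpha channel in a second ETC1 texture and combine the two in the fragment shader, along these lines:

precision mediump float;

varying vec2 vTexCoord;
uniform sampler2D uColorTex;  // ETC1 texture holding the RGB data
uniform sampler2D uAlphaTex;  // ETC1 texture holding alpha stored as greyscale

void main() {
    vec3 rgb    = texture2D(uColorTex, vTexCoord).rgb;
    float alpha = texture2D(uAlphaTex, vTexCoord).r;
    gl_FragColor = vec4(rgb, alpha);
}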