I'm working with the OpenGL ES 3.0 API on Android. I'm writing a bitmap text renderer, and it works flawlessly on two of my devices, both Samsung. On my Kindle Fire HD 10 (7th generation) tablet, however, it's all messed up: it samples from the wrong portions of the texture atlas, displays the wrong glyphs, sometimes fails to display the entire message, and sometimes, when I switch messages via mapped buffers, it briefly shows the whole message at once before the animation starts. I suspect it's related to the degenerate triangle strips I'm using, so my question is: is support for them not ubiquitous across all Android devices that support OpenGL ES 3.0? I've had trouble with the Kindle Fire before in shader-related areas: it won't work at all if I don't specify a precision, both for floats and for sampler2DArray, as far as I've discovered thus far.
Degenerate triangles "should just work"; there is no feature here in the API for the hardware not to support. They are just triangles that happen to have zero area because two of their vertices coincide.
They have historically been very common in any content using triangle strips, so to be honest I'd be surprised if they were broken.
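For reference, the usual way those degenerate triangles come about when batching glyph quads into a single strip is to repeat the last index of one quad and the first index of the next. A minimal Java sketch, assuming 4 vertices per glyph and a GL_TRIANGLE_STRIP index buffer (the helper name and layout are just illustrative):

// Hypothetical helper: stitches quadCount glyph quads (4 vertices each) into one
// GL_TRIANGLE_STRIP index buffer by inserting two degenerate indices between quads.
// Assumes quadCount >= 1.
static short[] buildStripIndices(int quadCount) {
    short[] indices = new short[quadCount * 4 + (quadCount - 1) * 2];
    int i = 0;
    for (int q = 0; q < quadCount; q++) {
        short base = (short) (q * 4);
        if (q > 0) {
            indices[i++] = (short) (base - 1); // repeat last index of previous quad
            indices[i++] = base;               // repeat first index of next quad
        }
        indices[i++] = base;                   // the quad itself as a 4-vertex strip:
        indices[i++] = (short) (base + 1);     // v0, v1, v2, v3
        indices[i++] = (short) (base + 2);
        indices[i++] = (short) (base + 3);
    }
    return indices;
}

The repeated indices generate zero-area triangles, which produce no fragments on any conformant implementation; the two extra indices also keep the winding parity of the following quad intact.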
It won't work at all if I don't specify a precision, both for floats and for sampler2DArray.
That's not a bug; that's what the specification requires you to do. See "4.7.4. Default Precision Qualifiers" in the OpenGL ES 3.2 shading language specification.
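Concretely, an ES 3.0 fragment shader needs something like the following near the top. This is a minimal sketch written as an Android Java source string (the uniform and varying names are just illustrative):

// Minimal ES 3.0 fragment shader source showing the required precision
// declarations: float has no default precision in fragment shaders, and
// opaque types such as sampler2DArray never have one.
static final String FRAGMENT_SHADER_SRC =
        "#version 300 es\n"
        + "precision mediump float;\n"
        + "precision mediump sampler2DArray;\n"
        + "uniform sampler2DArray uAtlas;\n"
        + "in vec2 vTexCoord;\n"
        + "in float vLayer;\n"
        + "out vec4 fragColor;\n"
        + "void main() {\n"
        + "    fragColor = texture(uAtlas, vec3(vTexCoord, vLayer));\n"
        + "}\n";

A driver that compiles this without the two precision statements is being lenient; the Kindle Fire is the one behaving according to the specification here.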
Related
I wrote a web application that draws an interactive scene, including shadow mapping, using JavaScript and WebGL. I'd like to make sure that this site works on as many Android devices as reasonably possible. Shadow mapping works fine on desktop computers, both by using depth textures and by abusing a color texture to store the depth. But I haven't managed to make the site render the scene with shadow mapping on Android devices without major artifacts.
The problems are:
According to webglstats.com, most Android devices do not support the WEBGL_depth_texture extension, so directly using the light source depth buffer as the shadow map does not work.
The workaround is to encode the depth of each fragment as an RGBA value. While this works fine on desktop computers, the same code causes major artifacts on Android. My guess is that this is a precision issue: either the precision of the WebGL-computed depth values is too low, and/or the floats in WebGL fragment shaders with mediump precision (according to the shader compiler error message in Chrome on my Nexus 7 2012, highp is not supported in fragment shaders) are actually only half-floats and thus have too low a precision for splitting the depth value into RGBA values.
Are there working examples of WebGL shadow mapping that run fine on the majority of Android devices? Or is this just not reasonably possible? (Emulating a higher float accuracy through something like floating point expansions seems prohibitively expensive in a fragment shader).
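For completeness, the RGBA workaround mentioned above is just a base-256 encoding of the depth across the four 8-bit channels. The following Java sketch shows the same arithmetic host-side purely to illustrate the scheme (a packing fragment shader does the equivalent with fract() and a dot product); with mediump, i.e. half-float, shader arithmetic the intermediate multiplications lose exactly the low-order digits this encoding relies on, which is consistent with the precision guess above:

// Illustrative base-256 packing of a depth value in [0, 1) into four 8-bit
// channels, mirroring what a packing fragment shader writes into gl_FragColor.
static int[] packDepth(double depth) {
    int[] rgba = new int[4];
    double d = depth;
    for (int i = 0; i < 4; i++) {
        d *= 256.0;
        int digit = (int) Math.floor(d);   // next base-256 digit
        rgba[i] = Math.min(digit, 255);
        d -= digit;                        // remainder goes into the next channel
    }
    return rgba;
}

// Reverses the packing: sum of digit_i / 256^(i+1).
static double unpackDepth(int[] rgba) {
    double depth = 0.0;
    double scale = 1.0 / 256.0;
    for (int i = 0; i < 4; i++) {
        depth += rgba[i] * scale;
        scale /= 256.0;
    }
    return depth;
}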
I have created an Android app for drawing lines, circles, etc. using GLSurfaceView in OpenGL ES 2.0, like an AutoCAD app.
The app works well on the Google Nexus 7, in the sense that if we draw a line and then a circle, the line doesn't get erased in the surface view. With the Samsung Galaxy Note II, it is entirely different.
The line drawn before the circle gets erased; i.e., each time we draw a new line or circle, the previous one is erased, so I can only draw one shape at a time. What I need on the Samsung Galaxy Note II is the same output I get on the Google Nexus 7, i.e. I want to draw more than one shape in the GLSurfaceView at a time.
Note :
Both the Google Nexus 7 and the Samsung Galaxy Note II run Android Jelly Bean 4.2, but the two devices have different GPUs: the Nexus 7 has a ULP GeForce and the Galaxy Note II a Mali-400 MP.
Could this be an issue in how the SurfaceView output is rendered?
Should we take the GPU into account while coding?
Can anyone tell me why the output differs between devices?
Should we take the GPU into account while coding?
No way, the OpenGL API is a layer between your application and the hardware.
This is largely correct for desktop graphics, where all GPUs are immediate-mode renderers; however, this is NOT the case in mobile graphics.
The Mali GPUs use tile-based immediate-mode rendering.
For this type of rendering, the framebuffer is divided into tiles of size 16 by 16 pixels. The Polygon List Builder (PLB) organizes input data from the application into polygon lists. There is a polygon list for each tile. When a primitive covers part of a tile, an entry, called a polygon list command, is added to the polygon list for the tile.
The pixel processor takes the polygon list for one tile and computes values for all pixels in that tile before starting work on the next tile. Because this tile-based approach uses a fast, on-chip tile buffer, the GPU only writes the tile buffer contents to the framebuffer in main memory at the end of each tile. Non-tile-based, immediate-mode renderers generally require many more framebuffer accesses. The tile-based method therefore consumes less memory bandwidth, and supports operations such as depth testing, blending and anti-aliasing efficiently.
Another difference is the treatment of rendered buffers. Immediate-mode renderers will "save" the content of your buffer, effectively allowing you to only draw differences in the rendered scene on top of what previously existed. This IS available on Mali; however, it is not enabled by default, as it can cause undesired effects if used incorrectly.
There is an example of how to use "EGL Preserve" correctly in the Mali GLES2 SDK.
The reason the GeForce ULP-based Nexus 7 works as intended is that, as an immediate-mode renderer, it defaults to preserving the buffers, whereas Mali does not.
From the Khronos EGL specification:
EGL_SWAP_BEHAVIOR
Specifies the effect on the color buffer of posting a surface with eglSwapBuffers. A value of EGL_BUFFER_PRESERVED indicates that color buffer contents are unaffected, while EGL_BUFFER_DESTROYED indicates that color buffer contents may be destroyed or changed by the operation.
The initial value of EGL_SWAP_BEHAVIOR is chosen by the implementation.
The default value for EGL_SWAP_BEHAVIOR on the Mali platform is EGL_BUFFER_DESTROYED. This is due to the performance hit of having to fetch the previous buffer from memory before rendering the new frame and store it again at the end, as well as the bandwidth this consumes (which is also very bad for battery life on mobile devices). I cannot comment with certainty on the default behavior of the Tegra SoCs; however, it appears that their default is EGL_BUFFER_PRESERVED.
To clarify Mali's position with regards to the Khronos GLES specifications - Mali is fully compliant.
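If preserved swaps are genuinely needed, the behavior can be requested at run time. A minimal sketch using the android.opengl.EGL14 bindings (API 17+), assuming you already hold the display and window surface and that the chosen config included EGL_SWAP_BEHAVIOR_PRESERVED_BIT in its EGL_SURFACE_TYPE:

import android.opengl.EGL14;
import android.opengl.EGLDisplay;
import android.opengl.EGLSurface;

// Asks the driver to keep the color buffer contents across eglSwapBuffers.
// Returns false if the surface/config cannot preserve the buffer.
static boolean requestPreservedSwap(EGLDisplay display, EGLSurface surface) {
    return EGL14.eglSurfaceAttrib(display, surface,
            EGL14.EGL_SWAP_BEHAVIOR, EGL14.EGL_BUFFER_PRESERVED);
}

Bear in mind the bandwidth cost described above: on a tile-based GPU this forces the previous frame to be reloaded into the tile buffer every frame.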
I have not seen your code, but you are probably doing something wrong.
Maybe you swap the buffer, erase it, etc. where you must not.
Should we take the GPU into account while coding?
No way. The OpenGL API is a layer between your application and the hardware.
I found a 3D graphics framework for Android called Rajawali and I am learning how to use it. I followed the most basic tutorial, which renders a sphere object with a 1024x512 JPG image as the texture. It worked fine on the Galaxy Nexus, but it didn't work on the Galaxy Player GB70.
When I say it didn't work, I mean that the object appears but the texture is not rendered. Eventually, I changed some parameters that I use for the Rajawali framework when creating textures and got it to work. Here is what I found out.
The cause was where GL_TEXTURE_MIN_FILTER was being set. Among the following four values,
GLES20.GL_LINEAR_MIPMAP_LINEAR
GLES20.GL_NEAREST_MIPMAP_NEAREST
GLES20.GL_LINEAR
GLES20.GL_NEAREST
the texture is only rendered when GL_TEXTURE_MIN_FILTER is not set to a filter that uses mipmaps. So it works when GL_TEXTURE_MIN_FILTER is set to one of the last two.
Now here is what I don't understand and am curious about. When I shrink the image I'm using as the texture to 512x512, the GL_TEXTURE_MIN_FILTER setting no longer matters: all four min filter settings work.
So my question is: is there a requirement on the dimensions of the image when using a min filter for the texture? For example, am I required to use a square image? Could other things, such as the wrap mode or the mag filter configuration, be a problem?
Or does it seem like an OpenGL implementation bug on the device?
Good morning, this is a typical example of the non-power-of-two texture problem.
Textures generally need power-of-two dimensions for a multitude of reasons; this is a very common mistake, and everybody has fallen into this pitfall at some point :) me too.
The fact that non-power-of-two textures work smoothly on some devices/GPUs depends purely on the OpenGL driver implementation: some GPUs clearly support them, others don't. I strongly suggest you go for power-of-two textures in order to guarantee correct behavior on all devices.
Last but not least, using non-power-of-two textures can lead to catastrophic GPU memory utilization, since most drivers that accept them need to pad the texture in memory up to the next higher power of two. For instance, a 520x520 texture could end up mapped in memory as 1024x1024.
This is something you don't want, because in the real world "size matters", especially on mobile devices.
You can find quite a good explanation in the OpenGL ES 2.0 "Gold Book" (the OpenGL ES 2.0 Programming Guide):
In OpenGL ES 2.0, textures can have non-power-of-two (npot) dimensions. In other words, the width and height do not need to be a power of two. However, OpenGL ES 2.0 does have a restriction on the wrap modes that can be used if the texture dimensions are not power of two. That is, for npot textures, the wrap mode can only be GL_CLAMP_TO_EDGE and the minification filter can only be GL_NEAREST or GL_LINEAR (in other words, not mipmapped). The extension GL_OES_texture_npot relaxes these restrictions and allows wrap modes of GL_REPEAT and GL_MIRRORED_REPEAT and also allows npot textures to be mipmapped with the full set of minification filters.
I suggest you get hold of this book, since it covers this topic quite thoroughly.
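To tie that back to the question: when a texture cannot be mipmapped (or you simply do not upload mipmaps), the safe parameter combination the quoted passage describes looks like this in GLES20, assuming the texture is already bound to GL_TEXTURE_2D (a sketch, not Rajawali-specific code):

import android.opengl.GLES20;

// Wrap and filter settings that are legal even without GL_OES_texture_npot:
// clamp-to-edge wrapping and non-mipmapped filters.
static void setNpotSafeParameters() {
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
}

If you do want the mipmapped minification filters, the mipmap chain must actually exist, for example by calling GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D) after uploading the base level; a mipmap filter on a texture with no mipmaps makes the texture incomplete, which typically samples as black.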
We are a team of developers working on terrain visualization software over a virtual 3D globe. The project is aimed at mobile devices running Android, mainly tablets and mobile phones. We have tested this on several devices and, while mobile phones seem to run the application fine (we haven't detected any issues on any of them), some tablets seem to have problems when drawing the textures on screen.
For clarity, I'm attaching a video that displays the problem, since it's a little difficult to explain in words. The example shows a sphere divided into 200 sectors, each one with a different texture.
Texture problem video
As you can see, sometimes it looks like it is trying to draw two different textures in the same sector at the same time.
We have tested this in these devices:
Samsung Galaxy S SLC (ok)
HTC Desire (ok)
Nook Ebook Reader (ok)
Samsung Galaxy Tab 10.1 (doesn't work properly)
Sony Tablet S (doesn't work either)
Samsung Galaxy Tab 7.0 (ok)
I'm posting the critical code that may be involved. First, the fragment shader used to draw the textures:
varying mediump vec2 TextureCoordOut;
uniform sampler2D Sampler;
....
gl_FragColor = texture2D (Sampler, TextureCoordOut);
Next, I'm posting the key OpenGL calls that are executed, since the code is spread across several big functions:
GLES20.glGenTextures(num, idTextures, 1); //declare 200 textures
...
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, idTextures[texture]); //texture binding
...
GLES20.glVertexAttribPointer(Attributes.Position, size, GLES20.GL_FLOAT, false, stride, fb);
...
GLES20.glVertexAttribPointer(Attributes.TextureCoord, size, GLES20.GL_FLOAT, false, stride, fb);
...
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, first, count);
I'm sorry I can't provide more details, but after several weeks of debugging we have no clue at all about what could be causing this. I'm turning to you hoping for any leads, since we are completely lost right now. Thanks in advance.
I think I might know what causes your difficulties. The only devices where it doesn't work for you are Tegra 2 devices. I recently started working with a Tegra 2 device and noticed certain differences. Compared to other devices, Nvidia seems to have introduced certain kinds of rounding errors, so some things can behave differently than on other GPUs. I had to go to extreme lengths and use workarounds to get things working the way I want with my complex shaders.
What I can see in your video looks as if the z-buffer might not have enough resolution, which causes some kind of z-fighting: http://en.wikipedia.org/wiki/Z-fighting. I am not sure, because I can't really tell from your code/description whether you are drawing triangles that are very close together, which can cause this kind of behaviour in the z-buffer.
But maybe you can try scaling the vertices (x, y, z) up or down somewhat and see what happens to the flickering. If it changes, it probably has something to do with that; if not, the problem might have another cause, but then a little more code would be nice.
It would also be interesting to try to narrow it down by not painting the whole sphere but just one or two triangles where the problem occurs and see if it still happens.
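If the depth precision theory holds, another cheap experiment besides scaling the vertices is to pull the near plane out as far as the scene allows, since depth buffer precision is dominated by the near distance. A small sketch using android.opengl.Matrix (all values here are placeholders, not taken from your project):

import android.opengl.Matrix;

// Hypothetical helper: builds a projection matrix with a generous near plane.
// Raising near (and lowering far) increases depth precision and is a quick way
// to test whether z-fighting is really the cause of the flickering.
static float[] buildProjection(int viewportWidth, int viewportHeight) {
    float[] projection = new float[16];
    float aspect = (float) viewportWidth / viewportHeight;
    float near = 1.0f;    // as large as the scene allows (placeholder)
    float far = 1000.0f;  // as small as the scene allows (placeholder)
    Matrix.perspectiveM(projection, 0, 45.0f, aspect, near, far);
    return projection;
}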
PS: A longer video would have been better, short clips suck on vimeo.
I have an OpenGL Live wallpaper that works fine on all phones except those with the PowerVR SGX series. This includes almost all Samsung phones and the Motorola Droid series. The wallpaper is nothing but a black screen on the PowerVR GPU phones. I have been racking my brain for a week trying to figure this problem out but have had no luck.
One difference between the GPUs is their texture compression support. Some of the things I have tried in that regard: I changed my texture image to a 256x256 square, changed it from 8-bit to 16-bit RGBA, and even tried an indexed format.
I have a list of all the extensions that are available with the PowerVR and the ones that are available with the Adreno. It seems that there are quite a few differences in available extensions but I do not know what functions go with what extensions (though I can somewhat guess). Here is a list of the functions that I use:
glLightfv
glMaterialfv
glDepthFunc
glEnableClientState
glViewport
glMatrixMode
glLoadIdentity
gluPerspective
glClearColor
glClear
glTranslatef
glRotatef
glVertexPointer
glTexCoordPointer
glColor4f
glNormal3f
glDrawArrays
glTexParameterx
I am using Robert Green's GlWallPaperService and have tried the solution at "Trying to draw textured triangles on device fails, but the emulator works. Why?". Does anybody have any idea why the PowerVR chips are giving me such a hard time and what I could do about it?
Removing EGL10.EGL_RED_SIZE, EGL10.EGL_GREEN_SIZE, and EGL10.EGL_BLUE_SIZE but leaving EGL10.EGL_DEPTH_SIZE and EGL10.EGL_NONE in the eglChooseConfig attribute list worked. I assume that the PowerVR chip handles the RGB channel sizes in a way that makes specifying them explicitly a problem.
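For anyone hitting the same wall, the attribute list being described would look roughly like this with the EGL10 API (a sketch; the depth size value of 16 is a placeholder, and only the depth is constrained so the driver is free to pick the color format):

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;

// Loose config request: constrain only the depth size and leave the color
// channel sizes to the driver, which is what avoided the black screen here.
static EGLConfig chooseLooseConfig(EGL10 egl, EGLDisplay display) {
    int[] attribs = { EGL10.EGL_DEPTH_SIZE, 16, EGL10.EGL_NONE };
    EGLConfig[] configs = new EGLConfig[1];
    int[] numConfigs = new int[1];
    egl.eglChooseConfig(display, attribs, configs, 1, numConfigs);
    return numConfigs[0] > 0 ? configs[0] : null;
}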
This probably won't help you, but I noticed:
One difference between the GPUs is their texture compression support. Some of the things I have tried in that regard: I changed my texture image to a 256x256 square, changed it from 8-bit to 16-bit RGBA, and even tried an indexed format.
To my knowledge, no current hardware supports indexed textures. Also, to use texture compression, you need to target a compressed texture format that is specifically supported by the device (which usually entails running a compressor on the host/development platform). SGX supports PVRTC and ETC, but whether those are exposed depends on the platform.
From my own experience with this GPU, it will offer EGL configurations that, once applied, will not work (i.e. the GLES context will not be created). The workaround is to look at the GLSurfaceView code, roll your own, and try each offered configuration to see whether it works for creating a context.
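A sketch of that approach using GLSurfaceView's hook (the class name is just illustrative; Robert Green's wallpaper service mirrors GLSurfaceView's EGL handling, so a similar chooser can be plugged in there):

import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;

// Illustrative chooser: walk every config the driver offers and keep the first
// one for which a GLES 1.x context can actually be created.
class FirstWorkingConfigChooser implements GLSurfaceView.EGLConfigChooser {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] count = new int[1];
        egl.eglGetConfigs(display, null, 0, count);            // how many configs are offered?
        EGLConfig[] configs = new EGLConfig[count[0]];
        egl.eglGetConfigs(display, configs, count[0], count);  // fetch all of them
        for (EGLConfig config : configs) {
            EGLContext ctx = egl.eglCreateContext(
                    display, config, EGL10.EGL_NO_CONTEXT, null);
            if (ctx != EGL10.EGL_NO_CONTEXT) {
                egl.eglDestroyContext(display, ctx);            // probe succeeded, release it
                return config;
            }
        }
        throw new IllegalStateException("No usable EGLConfig found");
    }
}

With a plain GLSurfaceView this would be installed via setEGLConfigChooser(new FirstWorkingConfigChooser()) before setRenderer().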