I have an OpenGL live wallpaper that works fine on all phones except those with a PowerVR SGX GPU, which includes almost all Samsung phones and the Motorola Droid series. On the PowerVR phones the wallpaper is nothing but a black screen. I have been racking my brain for a week trying to figure this problem out, but have had no luck.
One difference between the GPUs is their texture compression support. In that regard, I have changed my texture image to a 256x256 square, changed it from 8-bit to 16-bit RGBA, and even tried an indexed format.
I have a list of all the extensions available on the PowerVR and on the Adreno. There are quite a few differences in the available extensions, but I do not know which functions go with which extensions (though I can somewhat guess). Here is a list of the functions I use:
glLightfv
glMaterialfv
glDepthFunc
glEnableClientState
glViewport
glMatrixMode
glLoadIdentity
gluPerspective
glClearColor
glClear
glTranslatef
glRotatef
glVertexPointer
glTexCoordPointer
glColor4f
glNormal3f
glDrawArrays
glTexParameterx
I am using Robert Green's GLWallpaperService and have tried the solution from "Trying to draw textured triangles on device fails, but the emulator works. Why?". Does anybody have any idea why the PowerVR chips are giving me such a hard time, and what I could do about it?
Removing EGL10.EGL_RED_SIZE, EGL10.EGL_GREEN_SIZE, and EGL10.EGL_BLUE_SIZE from the attribute list passed to eglChooseConfig, while leaving EGL10.EGL_DEPTH_SIZE and the EGL10.EGL_NONE terminator, worked. I assume the PowerVR chip handles RGB in a way that makes specifying those sizes a problem.
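For reference, here is a minimal sketch of the attribute list described above (the depth size of 16 is illustrative, not something the original post specified):

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;

// The attribute list with the RGB size entries removed: only a depth size is
// requested, and the driver is left free to pick its own color layout.
static final int[] CONFIG_SPEC = {
    EGL10.EGL_DEPTH_SIZE, 16, // depth request kept (16 is illustrative)
    EGL10.EGL_NONE            // terminator; no EGL_RED/GREEN/BLUE_SIZE entries
};

static EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
    EGLConfig[] configs = new EGLConfig[1];
    int[] num = new int[1];
    egl.eglChooseConfig(display, CONFIG_SPEC, configs, 1, num);
    return num[0] > 0 ? configs[0] : null; // null if nothing matched
}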
This probably won't help you, but I noticed:
One difference between the GPUs is their texture compression support. In that regard, I have changed my texture image to a 256x256 square, changed it from 8-bit to 16-bit RGBA, and even tried an indexed format.
To my knowledge, no current hardware supports indexed textures. Also, to use texture compression you need to target a compressed texture format that the device specifically supports (which usually means running a compressor on the host/development platform). SGX supports PVRTC and ETC, but whether those are enabled depends on the platform.
From my own experience with this GPU, it will offer GLES configurations that do not actually work once applied (i.e. the GLES context cannot be created). The workaround is to look at the GLSurfaceView code, roll your own config chooser, and try each offered configuration to see whether it works for creating a context.
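A minimal sketch of that approach, modeled loosely on GLSurfaceView's config chooser (the spec argument is whatever attribute list you would normally pass):

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLContext;
import javax.microedition.khronos.egl.EGLDisplay;

// Ask for every matching config, then probe each one by actually creating
// (and immediately destroying) a context, keeping the first that succeeds.
static EGLConfig firstWorkingConfig(EGL10 egl, EGLDisplay display, int[] spec) {
    int[] num = new int[1];
    egl.eglChooseConfig(display, spec, null, 0, num);         // count matches first
    EGLConfig[] configs = new EGLConfig[num[0]];
    egl.eglChooseConfig(display, spec, configs, num[0], num); // fetch them all
    for (EGLConfig config : configs) {
        EGLContext ctx = egl.eglCreateContext(display, config, EGL10.EGL_NO_CONTEXT, null);
        if (ctx != null && ctx != EGL10.EGL_NO_CONTEXT) {
            egl.eglDestroyContext(display, ctx);              // probe only
            return config;
        }
    }
    return null; // no offered config could create a context
}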
I'm working with the OpenGL ES 3.0 API on Android, writing a bitmap text renderer. It works flawlessly on two of my devices, both Samsung, but on my Fire HD 10 (7th generation) tablet it's all messed up: it samples from the wrong portions of the texture atlas, displays wrong glyphs, sometimes won't display the entire message, and sometimes, when I switch messages via mapped buffers, it briefly displays the entire message at once before starting its animation. I suspect it's related to the degenerate triangle strips I'm using, so I ask: is support for them not ubiquitous across all Android devices supporting OpenGL ES 3.0? I've had trouble with the Kindle Fire before in shader-related areas: it won't work at all if I don't specify a precision, both for floats and for sampler2DArray, as far as I've discovered so far.
A degenerate triangle "should just work"; there is no API feature here for the hardware to lack. They are just triangles that happen to have zero area because two of their vertices coincide.
They have historically been really common in any content using triangle strips (see the sketch below), so to be honest I'd be surprised if they were broken.
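As an illustration of what is meant (the 2D coordinates here are made up), this is how two quads are typically stitched into one strip with degenerate triangles:

// Two quads stitched into a single triangle strip. Repeating D (last vertex
// of quad 1) and E (first vertex of quad 2) emits zero-area triangles that
// the GPU discards, so both quads render in one draw call.
float[] strip = {
    0f, 0f,  0f, 1f,  1f, 0f,  1f, 1f,  // quad 1: A B C D
    1f, 1f,  2f, 0f,                    // degenerate bridge: D again, then E
    2f, 0f,  2f, 1f,  3f, 0f,  3f, 1f,  // quad 2: E F G H
};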
It won't work at all if I don't specify a precision, both for floats and for sampler2DArray, as far as I've discovered so far.
That's not a bug; that's what the specification requires you to do. See "4.7.4. Default Precision Qualifiers" in the OpenGL ES 3.2 Shading Language specification.
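For reference, a minimal fragment shader (written as a Java string constant; the identifier names are illustrative) that satisfies those rules looks like this:

// ES 3.0 fragment shader with the required default precision statements:
// the fragment stage has no default precision for float, and sampler2DArray
// has no default precision in any stage.
static final String FRAGMENT_SHADER =
        "#version 300 es\n" +
        "precision mediump float;\n" +          // required: no float default in fragment shaders
        "precision mediump sampler2DArray;\n" + // required: no default for sampler2DArray
        "in vec3 vTexCoord;\n" +
        "uniform sampler2DArray uAtlas;\n" +
        "out vec4 fragColor;\n" +
        "void main() {\n" +
        "    fragColor = texture(uAtlas, vTexCoord);\n" +
        "}\n";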
I have a problem with an older device running an Android version I still like to support (2.3.5), where the textures only sometimes work.
I have 5 textures loaded into memory from the start of the game (they do not change and are never reloaded). Everything shows up fine in the tutorial, but in the game it does not. The rendering process and object loading are exactly the same, and they work perfectly on my newer device (a Nexus 4) in all game modes and the tutorial.
I load 4 textures of 1024x1024 and 1 texture of 512x512. The textures that render incorrectly are the last ones loaded and bound, so it could be a memory issue, but how can I find that out? The OpenGL error function does not report any error during gameplay, even though the textures are not shown correctly.
Both devices support OpenGL ES 2.0.
The third and fourth textures do work in the tutorial part of the game, so the device is able to load at least the first four textures, which suggests the number of textures is not the problem.
The old device supports 1024x1024 textures according to the specs.
Changing all the textures to 512x512 shows the same issue. If memory were the problem, this should have worked, because four 512x512 textures fit in one 1024x1024 texture, which already works perfectly on all devices (one 1024x1024 plus one 512x512 occupies the same memory as five 512x512 textures).
It works perfectly on the Nexus 4 so a coding error is unlikely.
OpenGL does not report any error via glGetError in the loading, setup, or rendering calls, so all the OpenGL calls themselves should be fine.
The loading of my texture pool, the creation and loading of the object pool, and the render function (including the shader) use exactly the same code in all modes, so that cannot explain the difference. I have debugged all the objects rendered with OpenGL to check whether any data is corrupt or incorrect, but it is all correct when I pass it to the render pipeline. The rendering code passes the integer '3' to the shader as the texture id, and the texture loaded at '3' is the one I need, but somewhere OpenGL decides to use texture '1' instead at those moments. In the tutorial I pass the same data, and there OpenGL uses the '3' texture id as passed and intended...
Posting code is an issue due to the complexity of the engine, which handles all the loading and rendering of the graphics part of the game. Posting all the code of my game/engine seems like overkill, so if some part is needed to solve this issue I will post it.
I am basically out of ideas for solving this issue. Does anyone have any idea or suggestion as to what I can try, or maybe a solution?
Fixed this (and apparently also my other posted problem): it was a bug in the GPU's driver software, where the texture unit id was only taken once. More info: http://androidblog.reindustries.com/hack-bad-gpu-fix-not-using-correct-texture-opengl/
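I can't vouch for the exact hack in that link, but a hedged sketch of the general idea would be to redundantly re-specify the texture unit binding and the sampler uniform before every draw, instead of trusting the driver to keep that state (the uniform name and vertex count here are illustrative):

import android.opengl.GLES20;

// Hedged workaround sketch, not necessarily the exact fix from the linked
// post: re-set the active unit, the binding, and the sampler uniform
// immediately before each draw call rather than relying on cached state.
void drawWithTexture(int program, int textureId, int unit) {
    int samplerLoc = GLES20.glGetUniformLocation(program, "u_Texture"); // hypothetical name
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + unit);     // select the unit...
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId); // ...bind the texture to it
    GLES20.glUniform1i(samplerLoc, unit);                  // ...and re-point the sampler at it
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);        // vertex count is illustrative
}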
I found a 3D graphics framework for Android called Rajawali and I am learning how to use it. I followed the most basic tutorial, which renders a sphere object with a 1024x512 JPG image as the texture. It worked fine on a Galaxy Nexus, but it didn't work on a Galaxy Player GB70.
When I say it didn't work, I mean that the object appears but the texture is not rendered. Eventually I changed some of the parameters Rajawali uses when creating textures and got it to work. Here is what I found out.
The cause was the value GL_TEXTURE_MIN_FILTER was being set to. Among the following four values,
GLES20.GL_LINEAR_MIPMAP_LINEAR
GLES20.GL_NEAREST_MIPMAP_NEAREST
GLES20.GL_LINEAR
GLES20.GL_NEAREST
the texture is only rendered when GL_TEXTURE_MIN_FILTER is not set to a mipmap filter; that is, it works when GL_TEXTURE_MIN_FILTER is set to either of the last two.
Now here is what I don't understand and am curious about. When I shrink the texture image to 512x512, the GL_TEXTURE_MIN_FILTER setting no longer matters: all four settings work.
So my question is: is there a requirement on the dimensions of the image when using a min filter for the texture, such as being required to use a square image? Could other things, such as the wrap mode or the configuration of the mag filter, be a problem?
Or does this look like an OpenGL implementation bug on the device?
Good morning, this is a typical example of the non-power-of-two texture problem.
Textures need to have power-of-two dimensions for a multitude of reasons. This is a very common mistake, and everybody falls into this pitfall at some point, me included.
The fact that non-power-of-two textures work smoothly on some devices/GPUs depends entirely on the OpenGL driver implementation: some GPUs support them, some don't. I strongly suggest you stick to power-of-two textures so you can guarantee correct behavior on all devices.
Last but not least, using non-power-of-two textures can lead to catastrophic GPU memory utilization, since most drivers that accept them need to rescale the texture in memory to the next higher power of two. For instance, a 520x520 texture could lead to an actual memory mapping of 1024x1024.
That is something you don't want, because in the real world "size matters", especially on mobile devices.
You can find quite a good explanation in the OpenGL ES 2.0 Programming Guide (the OpenGL ES "Gold Book"):
In OpenGL ES 2.0, textures can have non-power-of-two (npot) dimensions. In other words, the width and height do not need to be a power of two. However, OpenGL ES 2.0 does have a restriction on the wrap modes that can be used if the texture dimensions are not power of two. That is, for npot textures, the wrap mode can only be GL_CLAMP_TO_EDGE and the minification filter can only be GL_NEAREST or GL_LINEAR (in other words, not mipmapped). The extension GL_OES_texture_npot relaxes these restrictions and allows wrap modes of GL_REPEAT and GL_MIRRORED_REPEAT and also allows npot textures to be mipmapped with the full set of minification filters.
I suggest you pick up this book, since it gives quite decent coverage of this topic.
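As a minimal sketch of what the quoted restriction means in code (assuming textureId is an already-generated texture and a GL context is current):

import android.opengl.GLES20;

// npot-safe parameters per the quote above: CLAMP_TO_EDGE wrapping and a
// non-mipmapped minification filter, unless GL_OES_texture_npot is present.
static void setNpotSafeParameters(int textureId) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
}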
I am learning to develop games on Android using libgdx, a framework for programming on Android with OpenGL ES and on desktop with Java using LWJGL. The device I am testing on (an HTC Hero) reports a maximum texture size of 1024 and a maximum stack depth of 2. However, when I create textures at this maximum size they will not load, and a white square is displayed where the texture should be. The textures are this size because they are packed sprite sheets, and it is preferable to keep them at this size. As for the stack depth, the device will also show a white square if more than 1 texture is used simultaneously, so it seems the maximum values reported by OpenGL ES are one step above what the device can actually handle. Can anybody help me out? Thanks
libgdx has constraints on texture sizes; in particular, texture dimensions must be powers of two. Since you are working with sprites, please look at how sprites are handled in the libgdx tests at http://libgdx.googlecode.com/svn/trunk/tests/gdx-tests/src/com/badlogic/gdx/tests/. I hope that helps.
I find this question to be of value despite its age.
On current devices, going 1024x1024 is pretty much safe. But if it doesn't work on one of yours, go lower; performance-wise, 1 draw call versus 4 won't matter much. You can also query the limit the driver reports, as sketched below.
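Something like this (plain Android GL bindings; a current GL context is assumed) reads the reported limit, although as the question shows it is worth treating skeptically:

import android.opengl.GLES10;

// Query the driver's advertised maximum texture size at runtime.
static int queryMaxTextureSize() {
    int[] maxSize = new int[1];
    GLES10.glGetIntegerv(GLES10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
    return maxSize[0]; // an upper bound, not a guarantee, as this question shows
}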
In my live wallpaper I'm drawing 3 textured quads that cover the whole screen. On a Nexus One I get 40fps. I'm looking for ways to improve performance.
The quads are blended on top of each other, and the textures are loaded from RGB_8888 bitmaps. The textures are 1024x1024.
I've got
glDisable(GL_DITHER);
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
glDisable(GL_LIGHTING);
glDisable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Things I've tried, all resulting in the same 40fps:
Reduce texture size to 512x512 and 256x256
Use draw_texture extension
Disable blending
Change texture filtering from GL_LINEAR to GL_NEAREST
Use VBOs (desperate try, since there are just 3 quads...)
Run the drawing code in standalone activity (in case being a live wallpaper somehow affects performance)
If I draw 2 layers instead of 3, fps rises to 45 or so, and drawing just 1 layer sees fps rise to 55. I guess I'm limited by fill rate, since toggling the potentially costly features on and off results in the same fps, and the only thing that improves fps is simply drawing less...
I'm mulling over the idea of texture compression, but supporting the different compression formats doesn't seem like fun. ETC1 has no alpha channel, which I need, and I'm not sure whether PVRTC and ATITC can even be used from Java with OpenGL ES 1.0 or 1.1.
I'd be glad to hear ideas on what else to try.
I can give a pointer to the current version of the wallpaper, and screenshots, if that's of use.
You probably already thought of this, but just in case you didn't:
calling glClear at the start of your frame probably isn't necessary
you could do the first pass with blending disabled
Also, did you try doing it in 1 pass with a multi-texturing approach?
edit: And another thing: writing to the z-buffer is not needed, so either use a context without z-buffer, or disable depth writing with glDepthMask(GL_FALSE).
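Putting those suggestions together, here is a rough sketch (ES 1.x fixed pipeline, matching the question; drawQuad is a hypothetical helper that draws one full-screen layer):

import android.opengl.GLES10;

// Sketch of the suggestions above: skip glClear (the bottom layer covers the
// whole screen anyway), draw it with blending off, and never write depth.
void drawFrame() {
    GLES10.glDepthMask(false);               // z-buffer writes are not needed
    GLES10.glDisable(GLES10.GL_BLEND);       // first pass: no blending
    drawQuad(0);                             // hypothetical helper: bottom layer
    GLES10.glEnable(GLES10.GL_BLEND);
    GLES10.glBlendFunc(GLES10.GL_ONE, GLES10.GL_ONE_MINUS_SRC_ALPHA);
    drawQuad(1);                             // remaining layers blended on top
    drawQuad(2);
}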
glCompressedTexImage2D is available from Java (and the NDK); however, the available compression formats depend on the GPU. See the <supports-gl-texture> element in the AndroidManifest.xml documentation.
PowerVR SGX (Nexus S, Galaxy S/Tab, DROID) - PVRTC
Adreno (Nexus One, EVO) - ATITC
Tegra 2 (Xoom, Atrix) - S3TC
If you use these compression formats and want to support a range of Android devices, you must prepare multiple compressed versions of each texture, but using the GPU's native compressed texture format should improve rendering performance.
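As a hedged sketch of picking the right variant at runtime (the file names are made up, and a current ES 1.x context is assumed), you can inspect the extension string instead of, or in addition to, declaring formats in the manifest:

import android.opengl.GLES10;

// Choose a compressed texture asset based on the extensions the GPU reports.
static String pickTextureVariant() {
    String extensions = GLES10.glGetString(GLES10.GL_EXTENSIONS);
    if (extensions.contains("GL_IMG_texture_compression_pvrtc")) {
        return "atlas.pvr";   // PowerVR SGX
    } else if (extensions.contains("GL_AMD_compressed_ATC_texture")
            || extensions.contains("GL_ATI_texture_compression_atitc")) {
        return "atlas.atc";   // Adreno
    } else if (extensions.contains("GL_EXT_texture_compression_s3tc")) {
        return "atlas.dds";   // Tegra 2
    }
    return "atlas.png";       // uncompressed fallback
}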
The Android OpenGL framework min3d is able to draw these objects or scenes at a full 60fps.
The framework is open source and is available for download and use at: http://code.google.com/p/min3d/
I would recommend comparing your code against it to see what you have done wrong or differently, in order to improve your performance.