Common reasons for OpenGL textures not being rendered on Android

I just heard from a user who says that my (Android OpenGL ES 2.0) app (a game) won't run on his HTC One X+ handset. All he gets is the music and the banner ad at the top of the screen, and nothing else.
Unfortunately I don't have an HTC One X+ to test this on.
Some notes:
Although my textures are not power of 2, I'm only using GLES20.GL_CLAMP_TO_EDGE
From what I've read, the HTC One X+ has a maximum texture size of 2048 x 2048, and it takes its resources from the XHDPI folder (annoyingly). Even so, I have only 1 texture that exceeds that size; all other objects displayed on my app's opening page use textures much smaller than this maximum, so something should be displayed.
I'm not using texture compression of any kind
My app runs quite happily on the roughly 15 other devices that I, and others, have tested it on - just the One X+ (so far) is giving problems.
Can anyone point out some common issues with OpenGL ES 2.0 that could be causing these textures not to be rendered? Are there any quirks with certain Android versions or devices?
I haven't yet posted any code simply because the app works on most devices, and I'm not sure which parts of the code would be helpful, but if any code is required, please just ask.
Edit - including texture loading code
public static int LoadTexture(GLSurfaceView view, Bitmap imgTex){
    //Array for texture
    int textures[] = new int[1];
    try {
        //texture name
        GLES20.glGenTextures(1, textures, 0);
        //Bind textures
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        //Set parameters for texture
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        //Apply texture
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, imgTex, 0);
        //clamp texture
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    } catch (Exception e){
    }
    //Increase texture count
    textureCount++;
    //Return texture
    return textures[0];
}

Have you checked the sampler properties?
The settings look correct, but why not also specify the GL_TEXTURE_WRAP_S setting? In ES 2.0, a non-power-of-two texture is only complete when both wrap modes are GL_CLAMP_TO_EDGE and the minification filter does not use mipmaps; the default wrap mode is GL_REPEAT, so leaving WRAP_S unset makes an NPOT texture incomplete on strict drivers, and it will sample as opaque black. Also use GLES20.glTexParameteri for integer values.
What type of bitmap are you using?
Try forcing the internal format to GL_RGBA or GL_RGB (note that GL_BGR is not a valid format in OpenGL ES).
Do you properly unbind or bind to a different texture?
Other texture settings may be causing havoc in other parts of the code.
Do you specify the correct texture unit in your shader?
Print out the shader's sampler uniform location so you know it is correct, and make sure to bind the texture to that unit explicitly during rendering.
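One concrete pitfall behind this advice: glUniform1i on a sampler takes the bare unit index, while glActiveTexture takes the GL_TEXTUREn enum. The enums are consecutive integers starting at 0x84C0 (values from the GLES2 headers), so passing the enum to the uniform silently selects a nonsense unit. A small illustration of the relationship (TextureUnits is a made-up name for this sketch):

```java
public class TextureUnits {
    // Value of GL_TEXTURE0 from the OpenGL ES 2.0 headers.
    static final int GL_TEXTURE0 = 0x84C0;

    // glActiveTexture expects the enum; glUniform1i expects the bare index.
    static int enumForUnit(int unitIndex) {
        return GL_TEXTURE0 + unitIndex;
    }

    public static void main(String[] args) {
        // Unit 2 corresponds to GL_TEXTURE2 (0x84C2) ...
        System.out.println(Integer.toHexString(enumForUnit(2))); // 84c2
        // ... but the sampler uniform must be set to 2, not 0x84C2.
        System.out.println(enumForUnit(0) == 0x84C0);            // true
    }
}
```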

Related

Why do POT textures render slower than non-POT ones?

I use the following function to load textures:
public static int loadTexture(Bitmap bmp)
{
    final int[] textureHandle = new int[1];
    GLES20.glGenTextures(1, textureHandle, 0);
    if (textureHandle[0] != 0)
    {
        // Bind to the texture in OpenGL
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
        // Set filtering
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        //GLES20.glGenerateMipmap(textureHandle[0]);
        //adapt texture to POT
        int adaptedWidth = (int) Math.pow(2, Math.ceil(Math.log(bmp.getWidth()) / Math.log(2d)));
        int adaptedHeight = (int) Math.pow(2, Math.ceil(Math.log(bmp.getHeight()) / Math.log(2d)));
        Log.d("texture", adaptedWidth + "," + adaptedHeight);
        Bitmap tmp = Bitmap.createScaledBitmap(bmp, adaptedWidth, adaptedHeight, false);
        Log.d("asize", tmp.getWidth() + "," + tmp.getHeight());
        // Load the bitmap into the bound texture.
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, tmp, 0);
        //GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
        tmp.recycle();
        // Recycle the bitmap, since its data has been loaded into OpenGL.
        //bmp.recycle();
    }
    if (textureHandle[0] == 0)
    {
        throw new RuntimeException("Error loading texture.");
    }
    return textureHandle[0];
}
I get 14-17 fps with this code. However, if I load my bitmap (which is non-POT) directly, without adapting it to POT, the FPS jumps to 28-30. I thought POT textures were supposed to be faster than non-POT. Is there an explanation for this?
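As an aside, the Math.pow/Math.ceil/Math.log round-up in the loader above can misfire at exact powers of two if the floating-point logarithm lands slightly above an integer. An exact integer alternative (a sketch; PotUtil and nextPot are illustrative names, not from the code above):

```java
public class PotUtil {
    // Round n up to the next power of two (returns n itself if already one).
    static int nextPot(int n) {
        if (n <= 1) return 1;
        int highest = Integer.highestOneBit(n); // largest power of two <= n
        return (highest == n) ? n : highest << 1;
    }

    public static void main(String[] args) {
        System.out.println(nextPot(612));  // 1024
        System.out.println(nextPot(1280)); // 2048
        System.out.println(nextPot(1024)); // 1024 (already POT, unchanged)
    }
}
```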
UPD:Rendering code:
@Override
public void onDrawFrame(GL10 gl) {
    //curScale=modelMatrix[SCALE_X];
    TimeMeasurer.reset();
    long curTS = SystemClock.uptimeMillis();
    long frameRenderTime = curTS - ts;
    //Log.d("renderer","FPS:"+1000.0/frameRenderTime);
    Log.d("renderer", "frame render time:" + frameRenderTime);
    ts = curTS;
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
    if (piecesMesh != null) {
        Matrix.setIdentityM(MVPMatrix, 0);
        Matrix.multiplyMM(MVPMatrix, 0, projMatrix, 0, modelMatrix, 0);
        drawPassivePieces();
        drawActivePieces();
        if (helper != null) {
            drawHelper();
        }
    }
    TimeMeasurer.measure("onDrawFrame execution time:");
}
private void drawPassivePieces() {
    //shadows
    shadowProgram.useProgram();
    shadowProgram.setUniforms(MVPMatrix, textureMaskId);
    shadowMesh.bindPieceData(shadowProgram, false);
    shadowMesh.drawPieces(false);
    shadowMesh.disableAttributes(shadowProgram);
    //pieces
    piecesProgram.useProgram();
    piecesProgram.setUniforms(MVPMatrix, textureImageId, textureMaskId);
    piecesMesh.bindPieceData(piecesProgram, false);
    piecesMesh.drawPieces(false);
    piecesMesh.disableAttributes(piecesProgram);
}
private void drawActivePieces() {
    //shadows
    shadowProgram.useProgram();
    shadowProgram.setUniforms(MVPMatrix, textureMaskId);
    shadowMesh.bindPieceData(shadowProgram, true);
    shadowMesh.drawPieces(true);
    shadowMesh.disableAttributes(shadowProgram);
    //pieces
    piecesProgram.useProgram();
    piecesProgram.setUniforms(MVPMatrix, textureImageId, textureMaskId);
    piecesMesh.bindPieceData(piecesProgram, true);
    piecesMesh.drawPieces(true);
    piecesMesh.disableAttributes(piecesProgram);
}
public void drawHelper() {
    helperProgram.useProgram();
    helper.bindData(helperProgram);
    helper.draw();
    helper.disableAttributes(helperProgram);
}
Without a detailed performance analysis, it's not really possible to do more than speculate.
One likely cause is that your rendering is limited by memory bandwidth of texture sampling. If you make the texture larger, the total amount of memory accessed is larger, causing the slowdown.
Or, very related to the above, your cache hit rate for texture sampling drops if the sampled texels are spread out farther in memory, which happens when you upscale the texture. Lower cache hit rate means slower performance.
You really shouldn't artificially make the texture larger than necessary, unless it's needed due to the limited NPOT support in ES 2.0, and the hardware you use does not advertise the OES_texture_npot extension. I doubt that anybody has made hardware that prefers POT textures in a long time.
There are big advantages to using POT textures. In OpenGL ES 2.0 they allow you to use mipmaps and useful texture addressing modes like repeat. You can also utilize memory more efficiently, because lots of implementations allocate memory as if your texture were POT anyway.
However, in this case where you just take a non-POT texture and scale it up, I'd expect performance to be slightly worse as a result. You're missing out on the big potential win because you're not using mipmaps. By using a larger texture you're just asking more of the GPU's texture cache because the useful parts of the image are now more spread out in memory than they were previously.
Another way to look at it is that in the absence of mipmapping, big textures are going to perform worse than little textures, and your rescaling process is just making your texture bigger.
I'm surprised the difference is so noticeable though - are you sure that your rescaling codepath isn't doing anything unexpected like resizing too large, or picking a different texture format or filter mode?
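Rough arithmetic supports the bandwidth explanation above. Assuming RGBA8888 texels (4 bytes each) and a hypothetical 1300x700 source that the adaptation code rounds up to 2048x1024, the POT copy holds about 2.3x as much texel data for the GPU to pull through its cache (the sizes here are illustrative, not taken from the question):

```java
public class TextureMemory {
    // Bytes occupied by a width x height RGBA8888 (4 bytes/texel) texture.
    static long bytesRgba8888(int width, int height) {
        return (long) width * height * 4;
    }

    public static void main(String[] args) {
        long original = bytesRgba8888(1300, 700);  // hypothetical NPOT source
        long scaled   = bytesRgba8888(2048, 1024); // rounded up to POT
        System.out.println(original);              // 3640000
        System.out.println(scaled);                // 8388608
        System.out.printf("%.1fx more texel data%n", (double) scaled / original);
    }
}
```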

Antialiasing and texture filtering in native Android application using OpenGL ES

I am mapping a 360-degree photospheric image onto a sphere, with the camera at the centre of the sphere. When we build the application using the native Android SDK and OpenGL ES 2.0, we see jagged edges (visible on the arms of the sofa, the edges of the floor, etc.) when I view the scene through a Cardboard device.
On the other hand, the same image (4096x2048 resolution) is rendered perfectly in a Unity3D application for Android.
The magnification and minification filters for the texture that we are using are GL_LINEAR.
Source code used for setting the filtering parameters for the texture and generating the texture and mipmaps:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mSphereTextureIds[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
Code for Multisample Antialiasing ( 4X MSAA)
cardboardView.setEGLConfigChooser(new MyConfigChooser());
cardboardView.setRenderer(renderer);
I have also turned on 4x multisample anti-aliasing, but even after doing this there are jaggies inside the texture. I used highp precision for both float and int in both of my shaders.
class MyConfigChooser implements CardboardView.EGLConfigChooser {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int attribs[] = {
            EGL10.EGL_LEVEL, 0,
            EGL10.EGL_RENDERABLE_TYPE, 4, // EGL_OPENGL_ES2_BIT
            EGL10.EGL_COLOR_BUFFER_TYPE, EGL10.EGL_RGB_BUFFER,
            EGL10.EGL_RED_SIZE, 8,
            EGL10.EGL_GREEN_SIZE, 8,
            EGL10.EGL_BLUE_SIZE, 8,
            EGL10.EGL_DEPTH_SIZE, 16,
            EGL10.EGL_SAMPLE_BUFFERS, 1,
            EGL10.EGL_SAMPLES, 2, // NOTE: this requests 2x MSAA; use 4 here for 4x.
            EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] configCounts = new int[1];
        egl.eglChooseConfig(display, attribs, configs, 1, configCounts);
        if (configCounts[0] == 0) {
            // Failed! Error handling.
            Log.d("test", "MSAA Failed............");
            return null;
        } else {
            return configs[0];
        }
    }
}
Is there some post processing that Unity does on the textures for filtering during magnification that gives it a better look without aliasing effect ?
Can we do anisotropic filtering to smoothen the texture on magnification without blurring on Android ? If yes can we get a clue on how to do that ?
Is 4X the maximum MSAA that is currently supported on Android ? (I am using Nexus 5).
Thanks in advance
Apurv Nigam
Can we do anisotropic filtering to smoothen the texture on magnification without blurring on Android?
Not in general - it's "really expensive", so not much mobile hardware supports it.
If yes can we get a clue on how to do that?
There is no support for it in "official" OpenGL ES; check the vendor extensions to see if one exists. For desktop GL it is supported via the GL_EXT_texture_filter_anisotropic extension - I've not seen a mobile GPU supporting it.
Is 4X the maximum MSAA that is currently supported on Android? (I am using Nexus 5.)
Depends on the vendor - some devices support 8x (Mali-T760), some older devices don't support it at all (Tegra 1/2/3) - so YMMV.
I found the answer to my problem. I figured out that the image was being scaled down by the Universal Image Loader library I was using, so the bitmap given to the texImage2D call was already scaled down, causing a pixelated texture. I therefore decoded my image with BitmapFactory using the isScaled=false option.

OpenGL Texture Size Limits. Providing alternate resources for specific Android devices

Within my Android App, which is an OpenGL ES 2.0 game, I've included 4 sets of graphic resources like so
ldpi
mdpi
hdpi
xhdpi
Now, within my xhdpi folder, the largest asset I have is 2560 x 1838, as I'm targeting large screens here (for example the Google Nexus 10 tablet, which gets its resources from the XHDPI folder). Up until now, everything has worked on the devices on which I've tested (Google Nexus 10, Samsung Galaxy S3, S4 & S5, Nexus 4 & 5, etc.).
I've recently heard from a user who has encountered problems running this on an HTC One X+ handset (I believe there are other reasons for this besides the one I'm talking about here so I have a separate question open for the other issues).
The thing is, according to this, the maximum texture size for this phone is 2048x2048, but then according to this, the phone gets its resources from the XHDPI folder.
So it won't display the textures from this atlas (it's an atlas of backgrounds containing 6 separate backgrounds of 1280*612).
Now I have two options that I am aware of to fix this:
Reduce the size of the backgrounds. Ruled out, as this would compromise quality on larger-screen devices (like the Nexus 10).
Split into 2 atlases. I'd rather not do this, as I'd like to keep all the backgrounds in 1 atlas to optimise loading speed (and just to keep things tidy).
Are there any other options? Am I able to provide, within a subfolder of xhdpi, another folder that fits within the One X+'s limits? And if so, how do I get the device to grab resources from there?
I'm doing this blind as I don't have a One X+ on which to test this, but from what I've read, I believe having assets larger than 2048 x 2048 is going to cause problems on this device.
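For a rough sense of what option 1 would cost, assuming the 2048x2048 limit is accurate: the whole atlas must shrink by one uniform factor to fit, which shrinks every background in it by the same amount. An illustrative calculation (AtlasFit is a made-up helper, not app code):

```java
public class AtlasFit {
    // Largest uniform scale (capped at 1.0) that fits a w x h image
    // inside a square maxSize x maxSize limit.
    static double fitScale(int w, int h, int maxSize) {
        return Math.min(1.0, Math.min((double) maxSize / w, (double) maxSize / h));
    }

    public static void main(String[] args) {
        double s = fitScale(2560, 1838, 2048);
        System.out.println(s); // 0.8
        // The atlas lands at 2048x1470 ...
        System.out.println((int) (2560 * s) + "x" + (int) (1838 * s));
        // ... and each 1280x612 background at roughly 1024x489,
        // which is the quality loss the questioner wants to avoid.
        System.out.println((int) (1280 * s) + "x" + (int) (612 * s));
    }
}
```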
This is my code for applying textures:
public static int LoadTexture(GLSurfaceView view, Bitmap imgTex){
    //Array for texture
    int textures[] = new int[1];
    try {
        //texture name
        GLES20.glGenTextures(1, textures, 0);
        //Bind textures
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        //Set parameters for texture
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        //Apply texture
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, imgTex, 0);
        //clamp texture
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    } catch (Exception e){
    }
    //Increase texture count
    textureCount++;
    //Return texture
    return textures[0];
}
If you don't know whether a texture load will work, you can do all the same operations, except use GL_PROXY_TEXTURE_2D (or 1D, 3D, etc.) instead of GL_TEXTURE_2D, to check whether a load with the given texture size and parameters would succeed. OpenGL evaluates the request without actually loading any data, and sets all the proxy's texture state to 0 if it wouldn't work or there is another problem with a texture parameter. If the check fails (in your case due to the image being too big), have your program load a smaller, scaled texture.
EDIT: I use iOS and my gl.h doesn't include GL_PROXY_TEXTURE_*, and I can't find any reference to it in the OpenGL ES 3 specification, so proxy textures may not be available in OpenGL ES at all.
Alternatively, query your maximum texture size (per dimension, e.g. width, height or depth) and load a suitably sized image using:
glGetIntegerv( GL_MAX_TEXTURE_SIZE, &size );
Afterward, check glGetError() to ensure the upload worked:
GLuint name;
int width = 10000; // really bad width
int height = 512;
GLint size;
glGenTextures( 1, &name );
glBindTexture( GL_TEXTURE_2D, name );
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0);
GLenum error = glGetError();
if( error ) {
    printf("Failed!");
}
else {
    printf("Succeeded!");
}

Render to texture: Maximum number?

I have a class in an Android app that holds a byte array as the data source for a texture. I use a framebuffer to render some stuff onto that texture and then render the texture on the screen. This works perfectly.
However, I can do this with 151 textures only. Instance #152 generates this error:
:0: PVRSRVAllocDeviceMem: Error 1 returned
:0: ComputeFrameBufferCompleteness: Can't create render surface.
Here is the code snippet (Constructor):
// Texture image bytes
imgBuf = ByteBuffer.allocateDirect(TEXEL_X * TEXEL_Y * 3);
imgBuf.position(0);
// Fill the texture with an arbitrary color, so we see something
byte col = (byte) (System.nanoTime() % 255);
for (int ii = 0; ii < imgBuf.capacity(); ii += 3) {
    imgBuf.put(col);
    imgBuf.put((byte) (col * 3 % 255));
    imgBuf.put((byte) (col * 7 % 255));
}
imgBuf.rewind();
// Generate the texture
GLES20.glGenTextures(1, textureID, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureID[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// Associate a two-dimensional texture image with the byte buffer
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, TEXEL_X, TEXEL_Y, 0,
        GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, imgBuf);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
// Get framebuffer for later rendering to this texture
GLES20.glGenFramebuffers(1, frameBufID, 0);
And here is the problem (render to texture).
If I leave out this part, displaying hundreds of such textures works well, but then I cannot render anything onto the texture :( If I keep it, it works fine with 151 textures only.
// Bind frame buffer and specify texture as color attachment
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER,frameBufID[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,GLES20.GL_COLOR_ATTACHMENT0,GLES20.GL_TEXTURE_2D,textureID[0],0);
// Check status
int status=GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
Log.i(TAG,texNum+":"+status);
// Render some stuff on the texture
// ......
// (It does not matter. The status check fails even without rendering anything here)
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,GLES20.GL_COLOR_ATTACHMENT0,GLES20.GL_TEXTURE_2D,0,0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER,0);
I hope somebody can shed light upon this.
Thanks,
Ru
You are most probably using a device with a PowerVR SGX540 GPU; they have this problem on Android but not on iOS, so it seems like a driver issue, despite what their support says on their forum:
http://forum.imgtec.com/discussion/3026/glcheckframebufferstatus-returns-gl-framebuffer-unsupported-when-creating-too-many-fbo
If you really need to render to more than 151 textures, then you have to read out the pixels with glReadPixels(), delete the texture and FBO, and create a new texture later by providing the data to glTexImage2D.

Loading Textures fast into OpenGL 2.0

I am building a simple live wallpaper for Android. I am uploading the required texture into OpenGL ES 2.0 using the code below. I have loaded all my images into a single file of size 2048x2048. The code below takes about 900 to 1200 ms to load the texture. Is this a normal time, or am I doing something wrong that makes it slow?
I also clear the list of textures in OpenGL every time onSurfaceCreated is called in my renderer. Is this the right thing to do, or is there a way to simply check whether the previously loaded texture is already in memory and, if so, avoid clearing and reloading? Please let me know your comments on this. Thank you.
Also, on a screen orientation change, onSurfaceCreated is called, so the texture upload happens again. This is not a good idea. What is the workaround?
public int addTexture(Bitmap texture) {
    int bitmapFormat = texture.getConfig() == Config.ARGB_8888 ? GLES20.GL_RGBA : GLES20.GL_RGB;
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    int textureId = textures[0];
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmapFormat, texture, 0);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
    return textureId;
}
A few ways you can improve performance:
Do not load the texture every time onSurfaceChanged is called. Initialize your textureId to -1 (in the constructor/onSurfaceCreated of your renderer) and check at the beginning of onSurfaceChanged whether you already have a valid id. When you call glGenTextures, you will get a positive number.
Do you need the mipmaps? That might be the key cost of this method. Try it without the GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D) line.
2048x2048 is huge, especially for textures. Do you really need that much detail? Maybe 1024x1024 is enough.
Avoid RGB_888; use RGB_565 instead: you'll get almost the same visual quality for half the size.
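To put rough numbers on the last two suggestions (illustrative arithmetic only, assuming an uncompressed upload): a 2048x2048 texture at 4 bytes per texel is 16 MB, halving the bytes per texel halves that, and halving the side length quarters it again - roughly what the upload time should track.

```java
public class UploadSize {
    // Bytes occupied by a square side x side texture at the given texel size.
    static long textureBytes(int side, int bytesPerTexel) {
        return (long) side * side * bytesPerTexel;
    }

    public static void main(String[] args) {
        System.out.println(textureBytes(2048, 4)); // ARGB_8888: 16777216 (16 MB)
        System.out.println(textureBytes(2048, 2)); // RGB_565:    8388608 (8 MB)
        System.out.println(textureBytes(1024, 2)); // both tips:  2097152 (2 MB)
    }
}
```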
