I use the following function to load textures:
public static int loadTexture(Bitmap bmp)
{
    final int[] textureHandle = new int[1];
    GLES20.glGenTextures(1, textureHandle, 0);
    if (textureHandle[0] != 0)
    {
        // Bind to the texture in OpenGL
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
        // Set filtering
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        //GLES20.glGenerateMipmap(textureHandle[0]);
        // Adapt texture to POT
        int adaptedWidth = (int) Math.pow(2, Math.ceil(Math.log(bmp.getWidth()) / Math.log(2d)));
        int adaptedHeight = (int) Math.pow(2, Math.ceil(Math.log(bmp.getHeight()) / Math.log(2d)));
        Log.d("texture", adaptedWidth + "," + adaptedHeight);
        Bitmap tmp = Bitmap.createScaledBitmap(bmp, adaptedWidth, adaptedHeight, false);
        Log.d("asize", tmp.getWidth() + "," + tmp.getHeight());
        // Load the bitmap into the bound texture.
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, tmp, 0);
        //GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
        tmp.recycle();
        // Recycle the bitmap, since its data has been loaded into OpenGL.
        //bmp.recycle();
    }
    if (textureHandle[0] == 0)
    {
        throw new RuntimeException("Error loading texture.");
    }
    return textureHandle[0];
}
I get 14-17 fps with this code. However, if I load my bitmap (which is non-POT) directly, without adaptation to POT, the FPS jumps to 28-30. I thought POT textures were supposed to be faster than non-POT ones. Is there an explanation for this?
UPD: Rendering code:
@Override
public void onDrawFrame(GL10 gl) {
    //curScale=modelMatrix[SCALE_X];
    TimeMeasurer.reset();
    long curTS = SystemClock.uptimeMillis();
    long frameRenderTime = curTS - ts;
    //Log.d("renderer","FPS:"+1000.0/frameRenderTime);
    Log.d("renderer", "frame render time:" + frameRenderTime);
    ts = curTS;
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
    if (piecesMesh != null) {
        Matrix.setIdentityM(MVPMatrix, 0);
        Matrix.multiplyMM(MVPMatrix, 0, projMatrix, 0, modelMatrix, 0);
        drawPassivePieces();
        drawActivePieces();
        if (helper != null) {
            drawHelper();
        }
    }
    TimeMeasurer.measure("onDrawFrame execution time:");
}
private void drawPassivePieces() {
    // shadows
    shadowProgram.useProgram();
    shadowProgram.setUniforms(MVPMatrix, textureMaskId);
    shadowMesh.bindPieceData(shadowProgram, false);
    shadowMesh.drawPieces(false);
    shadowMesh.disableAttributes(shadowProgram);
    // pieces
    piecesProgram.useProgram();
    piecesProgram.setUniforms(MVPMatrix, textureImageId, textureMaskId);
    piecesMesh.bindPieceData(piecesProgram, false);
    piecesMesh.drawPieces(false);
    piecesMesh.disableAttributes(piecesProgram);
}

private void drawActivePieces() {
    // shadows
    shadowProgram.useProgram();
    shadowProgram.setUniforms(MVPMatrix, textureMaskId);
    shadowMesh.bindPieceData(shadowProgram, true);
    shadowMesh.drawPieces(true);
    shadowMesh.disableAttributes(shadowProgram);
    // pieces
    piecesProgram.useProgram();
    piecesProgram.setUniforms(MVPMatrix, textureImageId, textureMaskId);
    piecesMesh.bindPieceData(piecesProgram, true);
    piecesMesh.drawPieces(true);
    piecesMesh.disableAttributes(piecesProgram);
}

public void drawHelper() {
    helperProgram.useProgram();
    helper.bindData(helperProgram);
    helper.draw();
    helper.disableAttributes(helperProgram);
}
Without a detailed performance analysis, it's not really possible to do more than speculate.
One likely cause is that your rendering is limited by memory bandwidth of texture sampling. If you make the texture larger, the total amount of memory accessed is larger, causing the slowdown.
Or, very related to the above, your cache hit rate for texture sampling drops if the sampled texels are spread out farther in memory, which happens when you upscale the texture. Lower cache hit rate means slower performance.
You really shouldn't artificially make the texture larger than necessary, unless it's needed due to the limited NPOT support in ES 2.0, and the hardware you use does not advertise the OES_texture_npot extension. I doubt that anybody has made hardware that prefers POT textures in a long time.
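Checking for that extension at runtime is straightforward; a small sketch (assumes a current GL context on the calling thread):

```java
// Query the extension string of the current context and look for NPOT support.
// Without this extension, NPOT textures in ES 2.0 are limited: no mipmaps,
// and wrap modes must be CLAMP_TO_EDGE.
String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
boolean hasNpot = extensions != null && extensions.contains("GL_OES_texture_npot");
Log.d("caps", "NPOT support: " + hasNpot);
```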
There are big advantages to using POT textures. In OpenGL ES 2.0 they allow you to use mipmaps and useful texture addressing modes like repeat. You can also utilize memory more efficiently, because many implementations allocate memory as if your texture were POT anyway.
However, in this case where you just take a non-POT texture and scale it up, I'd expect performance to be slightly worse as a result. You're missing out on the big potential win because you're not using mipmaps. By using a larger texture you're just asking more of the GPU's texture cache because the useful parts of the image are now more spread out in memory than they were previously.
Another way to look at it is that in the absence of mipmapping, big textures are going to perform worse than little textures, and your rescaling process is just making your texture bigger.
I'm surprised the difference is so noticeable though - are you sure that your rescaling codepath isn't doing anything unexpected like resizing too large, or picking a different texture format or filter mode?
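Incidentally, if POT rounding is ever needed, the pow/log arithmetic in the question can be replaced with exact integer math; a sketch (the class and method names are mine):

```java
public class NextPot {
    // Round n up to the next power of two (assumes n >= 1).
    static int nextPowerOfTwo(int n) {
        int highest = Integer.highestOneBit(n);
        return (highest == n) ? n : highest << 1;
    }

    public static void main(String[] args) {
        System.out.println(nextPowerOfTwo(600));  // 1024
        System.out.println(nextPowerOfTwo(512));  // 512
    }
}
```

This avoids any floating-point rounding surprises near exact powers of two.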
Related
I just heard from a user who says that my (Android OpenGL ES 2.0) app (a game) won't run on his HTC 1X+ handset. All he gets is the music and the banner ad at the top of the screen and nothing else.
Unfortunately I don't have an HTC 1X+ to test this on.
Some notes:
Although my textures are not power of 2, I'm only using GLES20.GL_CLAMP_TO_EDGE
From what I've read, the HTC 1X+ has a max texture size of 2048 x 2048 and it gets its resources from the XHDPI folder (annoyingly). Even so, I have only 1 texture that exceeds that size; all other objects displayed on my app's opening page use textures much smaller than this maximum, so something should be displayed.
I'm not using texture compression of any kind
My app runs quite happily on the 15 (approx.) other devices I, and others, have tested it on - just the 1X+ (so far) is giving problems.
Can anyone point out some common issues with OpenGL ES 2.0 that could be causing these textures not to be rendered? Are there any quirks with certain Android versions or devices?
I haven't yet posted any code simply because the app works on most devices, and I'm not sure which parts of the code would be helpful, but if any code is required, please just ask.
Edit - including texture loading code
public static int LoadTexture(GLSurfaceView view, Bitmap imgTex) {
    // Array for texture
    int textures[] = new int[1];
    try {
        // Texture name
        GLES20.glGenTextures(1, textures, 0);
        // Bind texture
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        // Set parameters for texture
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        // Apply texture
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, imgTex, 0);
        // Clamp texture
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    } catch (Exception e) {
    }
    // Increase texture count
    textureCount++;
    // Return texture
    return textures[0];
}
Have you checked the sampler properties?
The settings look correct, but why not also specify the GL_TEXTURE_WRAP_S setting? Use GLES20.glTexParameteri for integer values.
What type of bitmap are you using?
Try to force the internal format to GL_RGBA or GL_RGB.
Do you properly unbind, or bind to a different texture?
Other texture settings may be causing havoc in other parts of the code.
Do you specify the correct texture unit in your shader?
Print out the shader's sampler uniform location so you know it is correct, and make sure to bind the texture to it explicitly during rendering.
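To make the last two points concrete, here is a minimal fragment; the uniform name "u_Texture" and the program/textureId variables are placeholders of mine, not names from the question:

```java
// Look up the sampler uniform and log it; -1 means the name is wrong
// or the uniform was optimized out of the shader.
int samplerLoc = GLES20.glGetUniformLocation(program, "u_Texture");
Log.d("texture", "sampler location: " + samplerLoc);

// During rendering: bind the texture to unit 0 and point the sampler at it.
GLES20.glUseProgram(program);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLES20.glUniform1i(samplerLoc, 0);  // 0 corresponds to GL_TEXTURE0
```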
I have a class in an Android app that holds a byte array as the data source for a texture. I use a framebuffer to render some stuff onto that texture and then render the texture on the screen. This works perfectly.
However, I can do this with 151 textures only. Instance #152 generates this error:
:0: PVRSRVAllocDeviceMem: Error 1 returned
:0: ComputeFrameBufferCompleteness: Can't create render surface.
Here is the code snippet (Constructor):
// Texture image bytes
imgBuf = ByteBuffer.allocateDirect(TEXEL_X * TEXEL_Y * 3);
imgBuf.position(0);
// Fill the texture with an arbitrary color, so we see something
byte col = (byte) (System.nanoTime() % 255);
for (int ii = 0; ii < imgBuf.capacity(); ii += 3) {
    imgBuf.put(col);
    imgBuf.put((byte) (col * 3 % 255));
    imgBuf.put((byte) (col * 7 % 255));
}
imgBuf.rewind();
// Generate the texture
GLES20.glGenTextures(1, textureID, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureID[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// Associate a two-dimensional texture image with the byte buffer
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, TEXEL_X, TEXEL_Y, 0,
        GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, imgBuf);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
// Get framebuffer for later rendering to this texture
GLES20.glGenFramebuffers(1, frameBufID, 0);
And here is the problem (render to texture):
If I leave out this part, displaying hundreds of such textures works well, but then I cannot render anything onto the texture :( If I keep it, it works fine with 151 textures.
// Bind frame buffer and specify texture as color attachment
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBufID[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, textureID[0], 0);
// Check status
int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
Log.i(TAG, texNum + ":" + status);
// Render some stuff on the texture
// ......
// (It does not matter. The status check fails even without rendering anything here)
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, 0, 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
I hope somebody can shed light upon this.
Thanks,
Ru
You are most probably using a device with a PowerVR SGX540 GPU; they have this problem on Android but not on iOS, so it looks like a driver issue, despite what their support says on their forum:
http://forum.imgtec.com/discussion/3026/glcheckframebufferstatus-returns-gl-framebuffer-unsupported-when-creating-too-many-fbo
If you really need more than 151 rendered-to textures, you have to read the pixels back with glReadPixels(), delete the texture and FBO, and create a new texture by providing the data to glTexImage2D.
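A rough sketch of that readback-and-recreate workaround, assuming an RGBA texture of size width x height and the textureID/frameBufID arrays from the question (this is my reconstruction of the described steps, not code tested on the affected device):

```java
// Read the rendered pixels back while the FBO is bound...
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBufID[0]);
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);

// ...delete the FBO and texture to free the render surface...
GLES20.glDeleteFramebuffers(1, frameBufID, 0);
GLES20.glDeleteTextures(1, textureID, 0);

// ...and recreate a plain (no longer renderable-to) texture from the data.
GLES20.glGenTextures(1, textureID, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureID[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
        0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
```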
I am having problems with texture corruption on the Android emulator (it runs fine on most Android devices).
The picture above is a reference rendering produced by an emulator running Android 4.1 Jelly Bean; everything looks as it should.
The second picture is captured in an emulator running Android 1.6. Note the corruption of some of the disabled toolbar buttons (they are rendered with color 1f,1f,1f,0.5f).
The third picture is captured in the same emulator. The difference is that now the score is rendered in the upper-right corner. The score is a bitmap font whose texture is an alpha mask. Everything rendered after the score loses its texture. Note that the previous screenshot also contained a bitmap font rendered the same way (but using a different texture).
A similar problem was present on one of the Samsung devices (I don't remember the model). When the floor texture was rendered, everything rendered after that lost texture. The problem did not manifest itself when I either a) did not bind the texture b) did bind the texture, but drew no triangles using it c) recreated the png asset from scratch.
OpenGL settings:
gl.glDisable(GL10.GL_LIGHTING);
gl.glDisable(GL10.GL_CULL_FACE);
gl.glDisable(GL10.GL_DEPTH_TEST);
gl.glDisable(GL10.GL_DITHER);
gl.glDepthMask(false);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBlendFunc(GL10.GL_ONE,GL10.GL_ONE_MINUS_SRC_ALPHA);
gl.glShadeModel(GL10.GL_FLAT);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
How textures are loaded:
public void doGLLoading(Engine renderer) {
    GL10 gl = renderer.getGl();
    int[] ids = new int[1];
    gl.glGenTextures(1, ids, 0);
    id = ids[0];
    gl.glBindTexture(GL10.GL_TEXTURE_2D, id);
    Log.d("SpriteDraw", String.format("Texture %s has format %s", getPath(), bitmap.getConfig().toString()));
    buildMipmap(gl, bitmap);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, minFilter);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, magFilter);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, textureWrapS);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, textureWrapT);
}
private void buildMipmap(GL10 gl, Bitmap bitmap) {
    int level = 0;
    int height = bitmap.getHeight();
    int width = bitmap.getWidth();
    while (height >= 1 || width >= 1) {
        // First of all, generate the texture from our bitmap and set it to
        // the according level
        //TextureUtils.texImage2D(gl, GL10.GL_TEXTURE_2D, level, -1, bitmap, -1, 0);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, level, bitmap, 0);
        if (height == 1 || width == 1) {
            break;
        }
        // Increase the mipmap level
        level++;
        height /= 2;
        width /= 2;
        Bitmap bitmap2 = Bitmap.createScaledBitmap(bitmap, width, height, true);
        // Clean up
        bitmap.recycle();
        bitmap = bitmap2;
    }
}
Notes: the font is rendered using gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA); and GL10.glDrawArrays. The corruption affects not only the 1.6 emulator but also the 2.x series of Android, although it is not as prominent (the alpha masks are still rendered incorrectly). All assets are correctly loaded as power-of-two bitmaps.
I suggest using 32-bit textures, not grayscale and not "alpha masks" (what's that?).
Check the size of your texture; it should not exceed the maximum size (glGetIntegerv(GL_MAX_TEXTURE_SIZE)). Also make sure your textures are a power of two. Yes, you mentioned before that they are, but if they are in scalable assets (drawable_x_dpi folders), they will be scaled by Android. To avoid scaling, put them in the "raw" folder.
Just for a test, try disabling all filtering, including mipmaps: set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST.
I am building a simple live wallpaper for Android. I upload the required texture into OpenGL ES 2.0 using the code below. I have loaded all my images into a single file of size 2048x2048. The code below takes about 900 to 1200 ms to load the texture. Is this a normal time, or am I doing something wrong that makes it slow?
I also clear the list of textures in OpenGL every time onSurfaceCreated is called in my renderer. Is this the right thing to do, or is there a way to simply check whether the previously loaded texture is already in memory and, if so, avoid clearing and reloading? Please let me know your comments on this. Thank you.
Also, on a screen orientation change, onSurfaceCreated is called, so the texture upload happens again. This is not a good idea. What is the workaround?
public int addTexture(Bitmap texture) {
    int bitmapFormat = texture.getConfig() == Config.ARGB_8888 ? GLES20.GL_RGBA : GLES20.GL_RGB;
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    int textureId = textures[0];
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmapFormat, texture, 0);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
    return textureId;
}
A few ways you can improve performance:
Do not load the texture every time onSurfaceCreated is called. Initialize your textureId to -1 (in the constructor of your renderer) and check at the beginning whether you already have a valid id. When you call glGenTextures, you will get a positive number.
Do you need the mipmaps? That might be the key point of your method here. Try without the line GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
2048x2048 is huge, especially for textures. Do you really need that much detail? Maybe 1024x1024 is enough.
Avoid ARGB_8888; use RGB_565 instead: you'll get almost the same visual quality for half the size.
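The first suggestion can be sketched as plain logic; here the GL upload is stubbed out so the caching behavior is visible (in the real renderer the stub would be the glGenTextures/texImage2D calls, and note that if the EGL context is actually lost, the old texture name becomes invalid and must be re-uploaded regardless):

```java
public class TextureCache {
    private int textureId = -1;  // -1 means "not uploaded yet"
    private int uploadCount = 0; // counts actual uploads, for illustration

    // Stand-in for the real glGenTextures + texImage2D upload.
    private int uploadTexture() {
        uploadCount++;
        return uploadCount;      // GL texture names are positive integers
    }

    // Call this from onSurfaceCreated instead of unconditionally reloading.
    public int getTexture() {
        if (textureId == -1) {
            textureId = uploadTexture();
        }
        return textureId;
    }

    public int getUploadCount() {
        return uploadCount;
    }
}
```

Repeated calls to getTexture() then reuse the first upload instead of paying the 900-1200 ms cost each time.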
Why do my textures seemingly take up so much space?
My app, which uses OpenGL heavily, produces the following heap stats* during operation:
Used heap dump 1.8 MB
Number of objects 49,447
Number of classes 2,257
Number of class loaders 4
Number of GC roots 8,551
Format hprof
JVM version
Time 1:08:15 AM GMT+02:00
Date Oct 2, 2011
Identifier size 32-bit
But when I use the task manager on my phone to look at the RAM use of my application, it says my app uses 44.42 MB. Is there any relationship between heap-size use and RAM use? I think much of that ~44 MB must be my OpenGL textures, but I can't figure out why they take up so much space, because on disk all the files together take up only 24 MB (and they are not all loaded at the same time). And I'm even making many of them smaller by resizing the bitmap prior to texture loading. I also dynamically create some textures, but I destroy those textures after use.
I am using OpenGL ES 1.0 and Android 2.2; typical code that I use to load a texture looks like this:
static int set_gl_texture(Bitmap bitmap) {
    bitmap = Bitmap.createScaledBitmap(bitmap, 256, 256, true);
    // Generate one texture pointer
    mGL.glGenTextures(1, mTextures, 0);
    // A bound texture is an active texture
    mGL.glBindTexture(GL10.GL_TEXTURE_2D, mTextures[0]);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, bitmap, 0);
    // Create linear filtered texture; this is where the scaling algorithms are
    mGL.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
    mGL.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
    mGL.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
    mGL.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
    bitmap.recycle();
    Log.v("GLSurfaceView", "Loading Texture Finished, Error Codes:" + mGL.glGetError());
    return mTextures[0];
}
Code which I use to load bitmaps looks like the following:
public int load_texture(int res_id) {
    if (mBitmapOpts == null) {
        mBitmapOpts = new BitmapFactory.Options();
        mBitmapOpts.inScaled = false;
    }
    mBtoLoad = BitmapFactory.decodeResource(MyApplicationObject.getContext().getResources(), res_id, mBitmapOpts);
    assert mBtoLoad != null;
    return GraphicsOperations.set_gl_texture(mBtoLoad);
}
*hprof file analyzed using MAT; the same data is reported by Eclipse DDMS
PNGs are compressed images. In order for OpenGL to use them, the PNGs must be decompressed, and this decompression increases the in-memory size.
You may want to decrease the size of some of the textures somehow. Maybe instead of using 512x512 images, use 256x256 or 128x128. Some textures that you use may not need to be so large since they are going onto a mobile device with a limited screen size.
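The arithmetic behind this is simple: an uncompressed texture costs width × height × bytes-per-pixel in memory, independent of the PNG's size on disk (and mipmaps add roughly another third on top). A quick illustration (class and method names are mine):

```java
public class TextureMemory {
    // Uncompressed size in bytes of a width x height texture
    // with the given bytes per pixel (4 = RGBA8888, 2 = RGB565).
    static long textureBytes(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel;
    }

    public static void main(String[] args) {
        // A single 2048x2048 RGBA texture: 16 MB in RAM,
        // no matter how small the PNG was on disk.
        System.out.println(textureBytes(2048, 2048, 4) / (1024 * 1024) + " MB");
        // The same image as RGB565 halves that.
        System.out.println(textureBytes(2048, 2048, 2) / (1024 * 1024) + " MB");
        // Halving each dimension quarters the cost: 256x256 RGBA is 256 KB.
        System.out.println(textureBytes(256, 256, 4) / 1024 + " KB");
    }
}
```

A handful of full-size RGBA atlases therefore accounts for tens of megabytes very quickly, which matches the gap between the 24 MB on disk and the ~44 MB resident size.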