I'm facing a big problem. I'm using a Transformer TF101 tablet with Android 4.0.3 on it.
My app uses a custom OpenGL ES 2.0 surface. I'm rendering multiple planes with textures. These textures change approximately 20 times per second and are updated by passing byte arrays. However, in certain cases the screen begins to flicker and no longer renders the new textures. Additional UI elements remain responsive and work as intended. It seems the OpenGL context ignores all commands and becomes unresponsive.
When this happens, a few lines show up in my logcat:
08-20 10:31:15.390: D/NvOsDebugPrintf(2898): NvRmChannelSubmit: NvError_IoctlFailed with error code 1
followed by
08-20 10:31:15.390: D/NvOsDebugPrintf(2898): NvRmChannelSubmit failed (err = 13, SyncPointValue = 879005, returning = 0)
and a few of these:
08-20 10:31:15.390: D/NvOsDebugPrintf(2898): NvRmChannelSubmit failed (err = 196623, SyncPointValue = 0)
Here is how I create my textured plane:
m_nTextureStorage[0] = 0;
GLES20.glGenTextures( 1, m_nTextureStorage, 0);
GLES20.glActiveTexture( GLES20.GL_TEXTURE0 );
GLES20.glBindTexture( GLES20.GL_TEXTURE_2D, m_nTextureStorage[ 0 ] );
// Set filtering
GLES20.glTexParameterf( GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST );
GLES20.glTexParameterf( GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST );
GLES20.glTexParameteri( GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE );
GLES20.glTexParameteri( GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE );
And here is how I draw it:
GLES20.glEnable(GLES20.GL_TEXTURE_2D);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glActiveTexture( GLES20.GL_TEXTURE0 );
//GLES20.glUniformMatrix4fv(m_HMVPMatrixUniform, 1, false, mvpMatrix, 0);
GLES20.glUseProgram( m_nProgramHandle );
ByteBuffer oDataBuf = ByteBuffer.wrap( m_sTexture );
m_HTextureUniform = GLES20.glGetUniformLocation( m_nProgramHandle, "uTexture" );
m_HTextureCoordinate = GLES20.glGetAttribLocation( m_nProgramHandle, "TexCoordinate" );
GLES20.glUniform1iv( m_HTextureUniform, 2, m_nTextureStorage, 0 );
// get handle to the vertex shader's vPosition member
m_nPositionHandle = GLES20.glGetAttribLocation( m_nProgramHandle, "vPosition" );
// Prepare the triangle data
GLES20.glTexImage2D( GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, 640, 480,
0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, oDataBuf );
// Prepare the triangle data
GLES20.glVertexAttribPointer( m_nPositionHandle, 3, GLES20.GL_FLOAT, false, 12, m_oVertexBuffer );
GLES20.glEnableVertexAttribArray( m_nPositionHandle );
GLES20.glVertexAttribPointer( m_HTextureCoordinate, 2, GLES20.GL_FLOAT, false, 12, m_oTextureBuffer);
GLES20.glEnableVertexAttribArray( m_HTextureCoordinate );
m_nMVPMatrixHandle = GLES20.glGetUniformLocation( m_nProgramHandle, "uMVPMatrix");
// Apply the projection and view transformation
GLES20.glUniformMatrix4fv( m_nMVPMatrixHandle, 1, false, mvpMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
The OpenGL renderer simply calls the draw function of my TexturedPlane, passing the mvpMatrix. I'm not deleting any textures, since I've read that the Android system takes care of that automatically.
I think it has something to do with the GPU running out of memory, but I'm not sure, since I haven't found anything related to the posted error messages.
Thanks in advance!
UPDATE:
The render mode was set to RENDERMODE_WHEN_DIRTY. After changing it to RENDERMODE_CONTINUOUSLY, the problem disappears. Weird. Since this is just a workaround and not a solution, I'm still asking for help ;)
Leaving the render mode at continuous is not an option, since it consumes too much processor time, and it makes no sense anyway: rendering is only necessary when new textures are generated.
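For reference, a sketch of how the dirty render mode is driven (glSurfaceView stands in for whatever view hosts the renderer, and newTextureBytes for the incoming byte array): with RENDERMODE_WHEN_DIRTY, a frame is only drawn when requestRender() is called.
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
// whenever a new texture byte array arrives (about 20 times per second):
m_sTexture = newTextureBytes;  // hand the new pixels to the renderer
glSurfaceView.requestRender(); // schedule exactly one redraw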
Finally found the solution.
When I call glFlush() after every rendering cycle, it works fine. I now render every plane on a different texture unit, and it works perfectly so far. Thanks for the help.
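A minimal sketch of that workaround, assuming a GLSurfaceView.Renderer drives the planes (drawTexturedPlanes is a hypothetical helper standing in for the draw code above):
@Override
public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    drawTexturedPlanes(mvpMatrix); // hypothetical: draws each TexturedPlane
    GLES20.glFlush();              // submit all queued commands before the frame ends
}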
When you call:
GLES20.glUniform1iv( m_HTextureUniform, 2, m_nTextureStorage, 0 );
That call is supposed to bind a particular texture unit to the sampler uniform, but you're passing a texture name/object/handle (which is not a texture unit). Maybe it's just coincidence, and you only ever pass in 0 (possibly the texture name in m_nTextureStorage), which happens to be the correct/desired value?
Or maybe this results in the user-space driver crashing.
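For reference, a minimal sketch of the usual pattern, using the names from the question's code: bind the texture object to a texture unit, then pass the unit index (not the texture name) to the sampler uniform.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);                       // select texture unit 0
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, m_nTextureStorage[0]); // bind the texture object to it
GLES20.glUniform1i(m_HTextureUniform, 0);                         // the sampler reads from unit 0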
Related
I have created two spheres, and I am currently trying to texture them. However, the texture coordinates seem to be a bit off.
The texture (just supposed to be a window or some kind of break):
The result of the texture being applied to both spheres (you can ignore the black lines not around the square, that's just the camera; yes, it's an AR app):
I exported the sphere using OBJ2OPENGL, a Perl script I found that converts .obj files to a list of vertices and texture coordinates. The .obj file was converted with 2160 vertices.
When I render the sphere:
GLES20.glVertexAttribPointer(vertexHandle, 3, GLES20.GL_FLOAT,
false, 0, ufo_sphere.getVertices());
GLES20.glVertexAttribPointer(normalHandle, 3, GLES20.GL_FLOAT,
false, 0, ufo_sphere.getNormals());
GLES20.glVertexAttribPointer(textureCoordHandle, 3,
GLES20.GL_FLOAT, false, 0, ufo_sphere.getTexCoords());
GLES20.glEnableVertexAttribArray(vertexHandle);
GLES20.glEnableVertexAttribArray(normalHandle);
GLES20.glEnableVertexAttribArray(textureCoordHandle);
// activate texture 0, bind it, and pass to shader
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D,
mTextures.get(textureIndex).mTextureID[0]);
GLES20.glUniform1i(texSampler2DHandle, 0);
// pass the model view matrix to the shader
GLES20.glUniformMatrix4fv(mvpMatrixHandle, 1, false,
modelViewProjection, 0);
//GLES20.glDrawElements(GLES20.GL_TRIANGLES, sphere.getNumObjectVertex(), GLES20.GL_UNSIGNED_SHORT, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, sphere.getNumObjectVertex()); // I have tried TRIANGLES and TRIANGLE_FAN as well
// disable the enabled arrays
GLES20.glDisableVertexAttribArray(vertexHandle);
GLES20.glDisableVertexAttribArray(normalHandle);
GLES20.glDisableVertexAttribArray(textureCoordHandle);
I can see at least one thing which seems to be wrong...
GLES20.glVertexAttribPointer(textureCoordHandle, 3,
GLES20.GL_FLOAT, false, 0, ufo_sphere.getTexCoords());
Texture coordinates typically have a size of 2 (u, v). So you could try this instead:
GLES20.glVertexAttribPointer(textureCoordHandle, 2,
GLES20.GL_FLOAT, false, 0, ufo_sphere.getTexCoords());
Also, in the link you provided they use GL_TRIANGLES as the draw mode instead of GL_TRIANGLE_STRIP, so you should switch back to that.
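The corresponding draw call would then be (a sketch, assuming sphere.getNumObjectVertex() returns the exporter's vertex count):
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, sphere.getNumObjectVertex());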
My Android app passes an OpenGL texture2D to my OpenCL kernel; however, when I use clCreateFromGLTexture2D() with mipmap level 1, I get CL_INVALID_MIPLEVEL.
I am creating my OpenGL texture and generating mipmaps like this:
GLES20.glGenTextures ( 2, targetTex, 0 );
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, targetTex[0]);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,GLES20.GL_TEXTURE_MIN_FILTER,GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,GLES20.GL_TEXTURE_MAG_FILTER,GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, image_width, image_height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 1, GLES20.GL_RGBA, image_width, image_height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
The texture is then rendered to by binding it with a FBO:
targetFramebuffer = IntBuffer.allocate(1);
GLES20.glGenFramebuffers(1, targetFramebuffer);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, targetFramebuffer.get(0));
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, targetTex[0], 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
I then create a cl memory object like so:
mem_images[0] = clCreateFromGLTexture2D(m_clContext, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 1, in_tex, &err);
The memory object is created successfully when the miplevel is set to 0, and all the kernels work fine. It's only when the miplevel is set to 1 that the above function call gives me the CL_INVALID_MIPLEVEL error.
OpenCL specification docs (http://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/clCreateFromGLTexture2D.html) say that a possible reason could be that the OpenGL implementation does not support creating from non-zero mipmap levels. However, I don't know whether this is definitely the case, and I am not sure how to find out.
EDIT:
After Andon's reply, I changed my OpenGL texture generation to the following:
GLES20.glGenTextures ( 2, targetTex, 0 );
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, targetTex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,GLES20.GL_TEXTURE_MIN_FILTER,GLES20.GL_LINEAR_MIPMAP_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,GLES20.GL_TEXTURE_MAG_FILTER,GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, image_width, image_height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
and after drawing into the FBO which is bound with the texture, this is how I generate mipmaps:
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, targetTex[0]);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
GLES20.glFinish();
I still get the same error when trying to create a CL memory object from the GL texture with a mipmap level not equal to 0.
There is something very unusual about the dimensions of your second-level mipmap in this code. The dimensions of mipmap LOD N+1 should be half of LOD N in each dimension. You can have LODs with the same resolution in GL, but those LODs will not be mipmaps. To make your 2D texture mipmap complete, you need floor(log2(max(width, height))) LODs in addition to the base image (level 0), and each should be 1/4 the resolution (1/2 in each dimension) of the last. I have to agree with CL on this one: LOD 1 here is not really a mipmap.
Moreover, you are calling glGenerateMipmap (...) on a newly created texture that has no data store. That is a meaningless thing to do: glGenerateMipmap (...) builds your mip chain for you starting with the base LOD image, and at the time you called it, there was no base LOD.
Your call to GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D); really ought to be moved to a point after you draw into your FBO. Done properly, you do not have to manually allocate storage for lower-detail mipmap LODs; this function takes care of that for you. Thus, you should actually remove the code that allocates LOD 1 in the first place.
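As a concrete check, here is a small sketch (assuming a hypothetical power-of-two 512x512 base image) of the LOD sizes a complete mip chain needs:
int w = 512, h = 512;  // hypothetical base LOD dimensions
int levels = 32 - Integer.numberOfLeadingZeros(Math.max(w, h)); // floor(log2(max)) + 1 = 10
for (int lod = 0; lod < levels; lod++) {
    // prints 512 x 512, 256 x 256, ..., 1 x 1; LOD 1 must be half the base size
    System.out.println("LOD " + lod + ": " + Math.max(w >> lod, 1) + " x " + Math.max(h >> lod, 1));
}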
Since July, I have been developing an Android application to edit video files (.avi, .flv, etc.). I use FFMPEG and OpenGL ES 2.0 to implement this application.
Because a filter effect like "Blur" requires too many calculations to execute on the CPU, I decided to use OpenGL ES 2.0 to apply filter effects to video frames using the GPU and shaders.
What I am trying to do is use a shader to apply a filter effect to a frame of video and then get the pixels, which are stored in the framebuffer.
So I have to use glReadPixels, the only OpenGL ES 2.0 method that can be used to get pixels from the framebuffer. But according to many GPU development guides, using glReadPixels is not recommended, and they warn about its potential risks. Also, the performance of glReadPixels differs depending on GPU version and vendor. I could not firmly decide to use glReadPixels, and I tried to find another method for getting the pixels that result from the GPU calculation.
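For context, this is the straightforward readback path being weighed (a minimal Java/GLES20 sketch, assuming an RGBA color buffer and that width/height match the framebuffer's size):
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels); // stalls until the GPU finishes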
After a few days, I found a hacky method for getting the pixel data by using an Android GraphicBuffer.
Here is the link.
From this link, I applied Karthik's method to my code.
The only difference is:
//render method I made.
void renderFrame(){
/* some codes to init */
glBindFramebuffer(GL_FRAMEBUFFER, iFBO);
/* Set the viewport according to the FBO's texture. */
glViewport(0, 0, mTexWidth , mTexHeight);
/* Clear screen on FBO. */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Different code compared to Karthik's.
contents->setTexture();
contents->draw(mPositionVarIndex, mTextrueCoIndex);
contents->releaseData();
/* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
glBindFramebuffer(GL_FRAMEBUFFER,0);
LOGI("Read Graphic Buffer");
// Just in case the buffer was not created yet
void* vaddr;
// Lock the buffer and retrieve a pointer where we are going to write the data
buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
if (vaddr == NULL) {
LOGE("lock error");
buffer->unlock();
return;
}
/* some codes that use the pixels from GraphicBuffer...*/
}
void setTexture(){
glGenTextures(1, mTexture);
glBindTexture(GL_TEXTURE_2D, mTexture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
void releaseData(){
glDeleteTextures(1, mTexture);
glDeleteBuffers(1, mVbo);
}
void draw(int positionIndex, int textureIndex){
mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
glEnableVertexAttribArray(positionIndex);
// checkGlError("glEnableVertexAttribArray");
glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
// checkGlError("glVertexAttribPointer");
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
glEnableVertexAttribArray(textureIndex);
glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mTexture[0]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones: a Samsung Galaxy S2, whose renderer is a Mali-400MP, and an LG Optimus G Pro, whose renderer is an Adreno(TM) 320. The Galaxy S2 works well with the above code and Karthik's method. But on the LG smartphone, there are some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link:
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
Since the only errors I could see were related to Genlock, I cautiously guessed at some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched for and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp, and the related headers to find the cause of those errors, but failed.
My questions are:
Is this the right approach to getting a filter effect from GPU calculation? If not, how can I get a filter effect like "Blur", which requires so many calculations?
Does Karthik's method not work on Qualcomm GPUs? I want to know why those errors occur only on Qualcomm's Adreno GPU.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
Unrelated, but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there is now an example of applying filters with the GPU in Grafika. You can see a screen-recorded demo here.
I am programming an Android 2D game using OpenGL ES 2.0. After I draw my sprites to the backbuffer, I draw lights to an FBO and try to blend it onto the backbuffer again.
When I draw the FBO to the framebuffer, even when it is transparent and without any color, the framerate drops from 60 to 30 on a Samsung Galaxy W (it has an Adreno 205 GPU). I searched everywhere and tried everything; even if I draw a single sprite on the scene and blend a transparent FBO texture over the screen, the framerate drops. I tried other games with lighting effects on that phone and they run fine, and almost every game is fine on that phone; I believe they use the framebuffer as well.
On the Galaxy S2 (Mali-400 GPU) it runs fine. I am quite new to OpenGL, so I believe I am making a mistake somewhere. I'll share my code:
// Create a framebuffer and renderbuffer
GLES20.glGenFramebuffers(1, fb, offset);
GLES20.glGenRenderbuffers(1, depthRb, offset);
// Create a texture to hold the frame buffer
GLES20.glGenTextures(1, renderTex, offset);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[offset]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
screenWidth, screenHeight, 0,
GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_LINEAR);
//bind renderbuffer
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, depthRb[offset]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_DEPTH_COMPONENT16,
screenWidth, screenHeight);
// bind the framebuffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[offset]);
// specify texture as color attachment
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
GLES20.GL_TEXTURE_2D, renderTex[offset], 0);
// specify depth_renderbufer as depth attachment
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
GLES20.GL_RENDERBUFFER, depthRb[0]);
// Check FBO status.
int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
if ( status == GLES20.GL_FRAMEBUFFER_COMPLETE )
{
Log.d("GLGame framebuffer creation", "Framebuffer complete");
}
// set default framebuffer
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
I do this once, on surface creation. I'm not sure whether that is correct. I keep the texture and framebuffer IDs so I can switch to them when I need to.
My drawing code:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
ShaderProgram program = glgame.getProgram();
//put vertices in the floatbuffer
mTriangleVertices.put(vertices, 0, len);
mTriangleVertices.flip();
GLES20.glVertexAttribPointer(program.POSITION_LOCATION, 2, GLES20.GL_FLOAT, false,
TRIANGLE_VERTICES_DATA_STRIDE_BYTES, mTriangleVertices);
//preparing parameter for texture position
mTriangleVertices.position(TRIANGLE_VERTICES_DATA_UV_OFFSET);
GLES20.glEnableVertexAttribArray(program.POSITION_LOCATION);
//preparing parameter for texture coords
GLES20.glVertexAttribPointer(program.TEXTURECOORD_LOCATION, 2, GLES20.GL_FLOAT,
false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES,
mTriangleVertices);
//set projection matrix
GLES20.glEnableVertexAttribArray(program.TEXTURECOORD_LOCATION);
GLES20.glUniformMatrix4fv(program.MATRIX_LOCATION, 1, false, matrix, 0);
//draw triangle with indices to form a rectangle
GLES20.glDrawElements(GLES20.GL_TRIANGLES, numSprites * 6, GLES20.GL_UNSIGNED_SHORT,
indicesbuf);
//clear buffers
mTriangleVertices.clear();
mVertexColors.clear();
Everything is rendered on screen correctly, but the performance is ruined just when I draw the FBO texture.
Thank you very much for your help. I worked very hard on this and didn't find a solution.
According to Qualcomm docs, you need to call glClear after every glBindFramebuffer. This is a problem related to the tiled architecture: when you switch framebuffers, data has to be copied from fast memory to normal memory to save the current framebuffer, and from slow memory to fast memory to load the contents of the newly bound one. If you clear right after binding, no data needs to be copied from slow memory to fast memory, and you save that time. Ideally you would redesign your render pipeline to avoid reading data back and forth between slow and fast memory at all, but doing a glClear after each bind should help. You can also use the Adreno Profiler to get additional information about problematic calls, though I doubt it will help with the Adreno 200: I am trying to get two buffers for blur and I am ending up with 10 fps. A glBindFramebuffer call can take up to 20 ms if the target is not cleared; if it is, it should take about 2 ms.
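A sketch of that pattern applied to the question's code (assuming both the FBO and the window surface are fully redrawn each frame):
// render the lights into the FBO
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT); // clear right after binding
// ... draw lights ...
// back to the window surface
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT); // and clear again
// ... draw sprites, then blend the FBO texture over them ...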
I have an Android app which uses a bit of OpenGL, running on a Galaxy Nexus with ICS. After turning on hardware acceleration and using my OpenGL activity, some of those textures are stolen by the system and now appear in my ListViews and other UI elements. It's as if the GL texture names obtained via GLES20.glGenTextures are not actually fresh, but rather overwrite ones used by the window renderer.
In any case, shouldn't there be some sort of firewall or sandbox between the OS screen-drawing system and my app?
Turning off hardwareAcceleration entirely displays fine, but the UI is choppy (and buttery smooth on 2.3 and lower either way). Turning it off/on activity by activity doesn't help either.
Screenshot (before/after)
Normally a repeating bitmap drawable, now an image (from the camera in this case) that I loaded into OpenGL in a different activity - http://a.yfrog.com/img532/9245/t81k.png
Gen/Load texture
int[] image = new int[1];
GLES20.glGenTextures(1, image, 0);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,GLES20.GL_LINEAR);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
Basic draw
GLES20.glViewport(0, 0, width, height);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
GLES20.glUseProgram(shader.program);
GLES20.glUniform1i(GLES20.glGetUniformLocation(shader.program, "imageTexture"), 0);
GLES20.glVertexAttribPointer(Attributes.VERTEX, 2, GLES20.GL_FLOAT, true, 0, squareVertices);
GLES20.glEnableVertexAttribArray(Attributes.VERTEX);
GLES20.glVertexAttribPointer(Attributes.TEXTUREPOSITON, 2, GLES20.GL_FLOAT, true, 0, textureVertices);
GLES20.glEnableVertexAttribArray(Attributes.TEXTUREPOSITON);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, image[0]);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
It turns out I was calling GL commands on the UI thread at certain times. This interfered with the app's hardware-accelerated display, as suggested by Romain Guy in this post.
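A sketch of the usual fix (glSurfaceView and bmp stand in for the question's objects): route every GL call through the render thread, for example with GLSurfaceView.queueEvent.
glSurfaceView.queueEvent(new Runnable() {
    @Override
    public void run() {
        // runs on the GL thread, where the EGL context is current
        GLES20.glGenTextures(1, image, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, image[0]);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
    }
});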