OpenGL Depth Buffer issue on Android

I am developing a 3D rendering engine for Android and have run into issues with the depth buffer. I am drawing some cubes: one big one and two small ones that fall on top of the bigger one. While rendering I can see that something with the depth buffer is obviously wrong, as seen in this screenshot:
This screenshot was taken on an HTC Hero (running Android 2.3.4) with OpenGL ES 1.1. The whole application is (still) targeted at OpenGL ES 1.1, and it looks the same on the emulator.
These are the calls in my onSurfaceCreated method in the renderer:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    Log.d(TAG, "onsurfacecreated method called");

    int[] depthbits = new int[1];
    gl.glGetIntegerv(GL_DEPTH_BITS, depthbits, 0);
    Log.d(TAG, "Depth Bits: " + depthbits[0]);

    gl.glDisable(GL_DITHER);
    gl.glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);

    gl.glClearColor(1, 1, 1, 1);
    gl.glClearDepthf(1f);
    gl.glEnable(GL_CULL_FACE);
    gl.glShadeModel(GL_SMOOTH);
    gl.glEnable(GL_DEPTH_TEST);

    gl.glMatrixMode(GL_PROJECTION);
    gl.glLoadMatrixf(
            GLUtil.matrix4fToFloat16(mFrustum.getProjectionMatrix()), 0);

    setLights(gl);
}
The GL call for the depth bits returns 16 on the device and 0 on the emulator. It would have made sense if it only failed on the emulator, since there is obviously no depth buffer present there. (I've tried setting the EGLConfigChooser to true so it would create a config with as close to a 16-bit depth buffer as possible, but that didn't help on the emulator; it wasn't necessary on the device.)
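For reference, the depth buffer can be requested on the GLSurfaceView either with the simple boolean chooser mentioned above or with explicit channel sizes. A minimal sketch (view and renderer are placeholders; the call must come before setRenderer()):
GLSurfaceView view = new GLSurfaceView(this);
// Either ask for any EGL config that includes a depth buffer:
//     view.setEGLConfigChooser(true);
// or request explicit sizes: RGBA 8888 with a 16-bit depth buffer and no stencil.
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
view.setRenderer(renderer);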
In my onDrawFrame method I make the following OpenGL Calls:
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl.glClearDepthf(1);
And then for each of the cubes:
gl.glEnableClientState(GL_VERTEX_ARRAY);
gl.glFrontFace(GL_CW);
gl.glVertexPointer(3, GL_FIXED, 0, mVertexBuffer);
// gl.glColorPointer(4, GL_FIXED, 0, mColorBuffer);
gl.glEnableClientState(GL_NORMAL_ARRAY);
gl.glNormalPointer(GL_FIXED, 0, mNormalBuffer);
// gl.glEnable(GL_TEXTURE_2D);
// gl.glTexCoordPointer(2, GL_FLOAT, 0, mTexCoordsBuffer);
gl.glDrawElements(GL_TRIANGLES, mIndexBuffer.capacity(),
GL_UNSIGNED_SHORT, mIndexBuffer);
gl.glDisableClientState(GL_NORMAL_ARRAY);
gl.glDisableClientState(GL_VERTEX_ARRAY);
What am I missing? If more code is needed just ask.
Thanks for any advice!

I got it to work correctly now. The problem was not OpenGL; it was (as Banthar mentioned) a problem with the projection matrix. I am managing the projection matrix myself, and the calculation of the final matrix was somehow corrupted (or at least not what OpenGL expects). I can't remember where I got the algorithm for my calculation, but once I changed it to the way OpenGL computes the projection matrix (or directly called glFrustumf(...)) it worked fine.
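For reference, a minimal sketch of letting OpenGL ES 1.1 build the projection matrix itself via glFrustumf (the field of view, clip planes, surfaceWidth and surfaceHeight are illustrative placeholders, not the values from my engine):
gl.glMatrixMode(GL_PROJECTION);
gl.glLoadIdentity();
// Symmetric frustum derived from a vertical field of view and the surface aspect ratio.
float near = 1f, far = 100f, fovY = 45f;
float aspect = (float) surfaceWidth / surfaceHeight;
float top = near * (float) Math.tan(Math.toRadians(fovY / 2.0));
float right = top * aspect;
gl.glFrustumf(-right, right, -top, top, near, far);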

Try enabling:
glDepthFunc(GL_LEQUAL);
glDepthMask(true);

Related

Additive blending without glClear

I want to do additive blending on a camera preview's SurfaceTexture bound to my OpenGL context.
I get a weirdly rendered texture when I enable blending (20 noisy squares are rendered). If I call GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); the preview is correct, but I lose my additive blending because I have cleared the buffer!
With glClear call
Without glClear call
I have no clue what the problem is; I am a newbie to OpenGL ES. Any suggestions?
I'm putting only the relevant code here; ask for any other part of the code if necessary.
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
    // compile shaders, link program, etc. omitted for brevity

    // enable additive blending
    GLES20.glEnable(GLES20.GL_BLEND);
    GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);
    GLES20.glBlendEquation(GLES30.GL_MAX);
}
public void onDrawFrame(GL10 unused) {
    // Do not want to do this, but if I do, the preview is normal and I lose my blending
    // GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    surfaceTexture.updateTexImage();

    GLES20.glUseProgram(_onscreenShader);
    int th = GLES20.glGetUniformLocation(_onscreenShader, "sTexture");

    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, _rawVideoTexture);
    GLES20.glUniform1i(th, 0);

    GLES20.glVertexAttribPointer(_onscreenPositionAttribute, 2, GLES20.GL_FLOAT, false, 4 * 2, pVertex);
    GLES20.glVertexAttribPointer(_onscreenUVAttribute, 2, GLES20.GL_FLOAT, false, 4 * 2, pTexCoord);
    GLES20.glEnableVertexAttribArray(_onscreenPositionAttribute);
    GLES20.glEnableVertexAttribArray(_onscreenUVAttribute);

    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
Most Android devices have a PowerVR, Mali, Adreno or Vivante GPU core, which are all deferred, tile-based renderers. This architecture requires the glClear operation at the start of each frame to tell the OpenGL ES driver when to clear its internal triangle binning queues as well as the framebuffer. If you don't do the glClear, this caching design does not work properly, and you will get weird results that differ from one GPU type to another. So you really must do the glClear.
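In practice that means starting every frame roughly like this (a minimal sketch):
@Override
public void onDrawFrame(GL10 unused) {
    // Clear at the start of every frame so the tile-based driver can throw away
    // the previous frame's tile state instead of trying to preserve it.
    GLES20.glClearColor(0f, 0f, 0f, 1f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    // ... issue this frame's draw calls ...
}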

GLES20 calls in onDrawFrame cause a long delay

I have written a game in which, when onDrawFrame() is called, I update the game state first (game logic and the buffers for drawing) and then proceed to do the actual drawing. On the Moto G and Nexus 7 everything works smoothly and each onDrawFrame() call takes only 1-5 ms. However, on the Samsung Galaxy S3, 90% of the time the onDrawFrame() call takes as long as 30-50 ms.
Investigating the issue further, I found the problem lies entirely in the first render method, which I attach below:
(Edit: the block is now on glClear(), having removed the unnecessary calls to get the handles; please refer to the comments.)
public void render(float[] m, FloatBuffer vertex, FloatBuffer texture, ShortBuffer indices, int TextureNumber) {
    GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertex);
    GLES20.glEnableVertexAttribArray(mPositionHandle);

    GLES20.glVertexAttribPointer(mTexCoordLoc, 2, GLES20.GL_FLOAT, false, 0, texture);
    GLES20.glEnableVertexAttribArray(mTexCoordLoc);

    GLES20.glUniformMatrix4fv(mtrxhandle, 1, false, m, 0);

    int mSamplerLoc = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "s_texture");
    GLES20.glUniform1i(mSamplerLoc, TextureNumber);

    GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.capacity(), GLES20.GL_UNSIGNED_SHORT, indices);

    // Disable vertex arrays used
    GLES20.glDisableVertexAttribArray(mPositionHandle);
    GLES20.glDisableVertexAttribArray(mTexCoordLoc);
}
The above method is called 5 times in each onDrawFrame() to draw things with different texture atlases (the first 4 calls actually draw the game's background, which is one long rectangle). Having logged the time taken by each line of the code, I found that the lag on the S3 always resides in one of the lines below:
int mPositionHandle = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "vPosition");
or
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.capacity(), GLES20.GL_UNSIGNED_SHORT, indices);
The lag only occurs in the first call of render(); the subsequent calls take about 1-2 ms each. I have tried disabling the first render() call, but then the second one, which before took only 1-2 ms, becomes the source of the lag, on the same lines.
Does anyone have an idea of what is wrong in my code that the S3 cannot handle? The S3 seems to have a problem starting GL calls at the beginning of each onDrawFrame(), but why there is such behavior puzzles me. What can be done to decrease this "startup" delay?
Many thanks for your patience in reading.
Edited code to take Selvin's recommendation.
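(Selvin's recommendation was, presumably, to stop querying the handles inside render(). For reference, a sketch of caching them once after the program is linked; the attribute/uniform names not shown in the question are placeholders:)
// Done once, e.g. right after the program is linked:
mPositionHandle = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "vPosition");
mTexCoordLoc    = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "a_texCoord");
mtrxhandle      = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "uMVPMatrix");
mSamplerLoc     = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "s_texture");
// render() then uses only these cached ints instead of asking GL every frame.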
I fixed the issue by moving all my PNG image files from /drawable to /drawable-nodpi; the game speed goes back to normal on the S3.
I assume the auto-scaling done by Android left the raw bitmaps I supplied to OpenGL at "bad" sizes, causing unnecessary texture filtering work on each onDrawFrame, which made the call miss vsync and produced the long delay in glClear while waiting for the next vsync. Please correct me if I am wrong; I am still a newbie to graphics.
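That explanation is consistent with how drawable density scaling works. An alternative to moving the files would be to decode the bitmaps unscaled; a sketch (R.drawable.atlas is a placeholder, it runs inside a Context, and it assumes the target texture is already bound):
// Decode the PNG without density scaling so the texture keeps its original pixel size.
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false;
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.atlas, opts);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();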

Starfield optimization in libgdx

I want to create a static starfield in libgdx.
My first approach was to create a Decal per star and a DecalBatch over them.
When I draw a Decal I use a billboarding technique on it:
star.decal.setRotation(camera.direction, camera.up);
Next, I wanted to animate the alpha on the decals, so every so often I set a random value:
star.decal.setColor(1, 1, 1, 0.6f + ((float) Math.random() * 0.4f));
It works, but my FPS went down from 55 FPS to 25 FPS (because of my 500-1000 stars).
Can I do it with only one batch call in some way? Maybe a particle material with only one vertex list, drawn in GL_POINT mode, that always faces the camera?
How can I do this in libgdx?
The DecalBatch is more complex than what you need: on every frame it copies all the sprites' vertices into another array and does calculations on them to work out the scale, rotation, etc.
As you suspect, GL_POINT sprites will be much faster; even a mid-range device should be able to render something like 2000 points with different positions and colors at 60 fps.
Here is some old code of mine. It's in C and uses OpenGL ES 1.1, and there is probably a simpler way to do it in libgdx:
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);

glEnable(GL_POINT_SPRITE_OES);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, TXTparticle);
glTexEnvi(GL_POINT_SPRITE_OES, GL_COORD_REPLACE_OES, GL_TRUE);
glPointSize(30);

glColorPointer(4, GL_FLOAT, 32, particlesC);  // particlesC: the vertex colors
glVertexPointer(3, GL_FLOAT, 24, particlesV); // particlesV: the vertices
glDrawArrays(GL_POINTS, 0, vertvitLenght/6);

glDisable(GL_POINT_SPRITE_OES);
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
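For the libgdx side of the question, a rough equivalent would be a single Mesh of GL_POINTS rendered with a small shader, so all stars go through one draw call. This is an untested sketch with placeholder names (starCount, camera), not a drop-in implementation:
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.Mesh;
import com.badlogic.gdx.graphics.VertexAttribute;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;
import com.badlogic.gdx.graphics.glutils.ShaderProgram;

// One vertex per star: x, y, z position plus r, g, b, a color (alpha carries the twinkle).
float[] verts = new float[starCount * 7];
// ... fill verts with star positions and per-star alpha values ...

Mesh mesh = new Mesh(true, starCount, 0,
        new VertexAttribute(Usage.Position, 3, "a_position"),
        new VertexAttribute(Usage.ColorUnpacked, 4, "a_color"));
mesh.setVertices(verts);

String vert =
        "attribute vec3 a_position;\n" +
        "attribute vec4 a_color;\n" +
        "uniform mat4 u_projTrans;\n" +
        "varying vec4 v_color;\n" +
        "void main() {\n" +
        "  v_color = a_color;\n" +
        "  gl_PointSize = 2.0;\n" +
        "  gl_Position = u_projTrans * vec4(a_position, 1.0);\n" +
        "}";
String frag =
        "#ifdef GL_ES\nprecision mediump float;\n#endif\n" +
        "varying vec4 v_color;\n" +
        "void main() { gl_FragColor = v_color; }";
ShaderProgram shader = new ShaderProgram(vert, frag);

// In render(): one draw call for the whole starfield.
shader.begin();
shader.setUniformMatrix("u_projTrans", camera.combined);
mesh.render(shader, GL20.GL_POINTS);
shader.end();
To animate the twinkle you can rewrite the alpha components in verts and call mesh.setVertices() again, which is still far cheaper than one DecalBatch entry per star.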

Can't read pixels from GraphicBuffer on an Adreno GPU using Karthik's method (hacky alternative to glReadPixels)

Since July I have been developing an Android application to edit video files like .avi, .flv, etc. I use FFMPEG and OpenGL ES 2.0 to implement this application.
Because executing a filter effect like "Blur" on the CPU requires too many calculations, I decided to use OpenGL ES 2.0, applying the filter effect to a video frame with the GPU and a shader.
What I am trying to do is use a shader to apply a filter effect to a frame of video and then get the pixels that are stored in the framebuffer.
So I have to use glReadPixels, the only OpenGL ES 2.0 function that can be used to get pixels back from a framebuffer. But according to many GPU development guides, using glReadPixels is not recommended, and they warn about its potential cost. Also, the performance of glReadPixels differs depending on the GPU version and vendor. I could not firmly decide to use glReadPixels and tried to find another method for reading back the result of the GPU calculation.
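For reference, the standard readback path being discussed (and avoided) looks roughly like this in Java GLES20 terms, where width and height are the FBO dimensions:
// Read the currently bound framebuffer's color attachment into a direct ByteBuffer.
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);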
After a few days I found a hacky method for getting the pixel data by using the Android GraphicBuffer.
Here is the link.
Following that link, I applied Karthik's method to my code.
The only difference is:
//render method I made.
void renderFrame(){
    /* some code to init */

    glBindFramebuffer(GL_FRAMEBUFFER, iFBO);

    /* Set the viewport according to the FBO's texture. */
    glViewport(0, 0, mTexWidth, mTexHeight);

    /* Clear screen on FBO. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Different code compared to Karthik's.
    contents->setTexture();
    contents->draw(mPositionVarIndex, mTextrueCoIndex);
    contents->releaseData();

    /* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    LOGI("Read Graphic Buffer");
    // Just in case the buffer was not created yet
    void* vaddr;
    // Lock the buffer and retrieve a pointer where we are going to write the data
    buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
    if (vaddr == NULL) {
        LOGE("lock error");
        buffer->unlock();
        return;
    }

    /* some code that uses the pixels from the GraphicBuffer... */
}

void setTexture(){
    glGenTextures(1, mTexture);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}

void releaseData(){
    glDeleteTextures(1, mTexture);
    glDeleteBuffers(1, mVbo);
}

void draw(int positionIndex, int textureIndex){
    mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
    glEnableVertexAttribArray(positionIndex);
    // checkGlError("glEnableVertexAttribArray");
    glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    // checkGlError("glVertexAttribPointer");
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
    glEnableVertexAttribArray(textureIndex);
    glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
    checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones: one is a Samsung Galaxy S2, whose renderer is a Mali-400MP; the other is an LG Optimus G Pro, whose renderer is an Adreno(TM) 320. The Galaxy S2 works well with the above code and Karthik's method, but on the LG phone there are some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link,
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
Since I could only see errors related to Genlock, I cautiously guessed at some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp and the *.h files to find the reasons for those errors, but failed.
My questions are:
Is it the right approach to get a filter effect from GPU calculation? If not, how can I get a filter effect like "Blur", which requires so many calculations?
Does Karthik's method not work for Qualcomm GPUs? I want to know why those errors occurred only on the Qualcomm (Adreno) GPU.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
Unrelated but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there's now an example of applying filters with the GPU in Grafika. You can see a screen-recorded demo here.

Android OpenGL ES 2.0 -- glReadPixels() and glTexImage2D() drawing a black texture?

I'm working on some Android code for caching and redrawing a framebuffer object's color buffer between the loss and recreation of EGL contexts. Development is primarily happening on a Xoom tablet running Honeycomb. Anyway, what I'm trying to do is store the result of calling glReadPixels() on the FBO in a direct ByteBuffer, then use that buffer with glTexImage2D() and draw it back into the (now cleared) framebuffer. All of this seems to work fine: the ByteBuffer contains the right values ([-1, 0, 0, -1] etc. for a pixel, thanks to Java's inability to understand unsigned bytes), no GL errors seem to be thrown, and the quad is drawn to the right part of the screen (currently the top-left quarter of the framebuffer, for testing purposes).
However, no matter what I try, glTexImage2D() always outputs a plain black texture. I've had some issues with this before — when displaying Bitmaps, I eventually gave up trying to use the basic GLES20.glTexImage2D() with Buffers and skipped to using GLUtils.glTexImage2D(), which processes the Bitmap for you. Unfortunately, that's less of an option here (I did actually try converting the ByteBuffer to a Bitmap so I could use GLUtils, without much success), so I've really run out of ideas.
Can anyone think of anything that could be causing glTexImage2D() to not correctly process a perfectly good ByteBuffer? Any and all suggestions would be welcome.
ByteBuffer pixelBuffer;

void storePixels() {
    try {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbuf);
        pixelBuffer = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        gfx.checkGlError("store Pixels");
    } catch (OutOfMemoryError e) {
        pixelBuffer = null;
    }
}

void redrawPixels() {
    GLES20.glBindFramebuffer(GL20.GL_FRAMEBUFFER, fbuf);
    int[] texId = new int[1];
    GLES20.glGenTextures(1, texId, 0);
    int bufferTex = texId[0];
    GLES20.glBindTexture(GL20.GL_TEXTURE_2D, bufferTex);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, repeatX ? GL20.GL_REPEAT
            : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, repeatY ? GL20.GL_REPEAT
            : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_RGBA, width, height, 0, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
    gfx.drawTexture(bufferTex, width, height, Transform.IDENTITY, width/2, height/2, false, false, 1);
    GLES20.glDeleteTextures(1, IntBuffer.wrap(new int[] {bufferTex}));
    pixelBuffer = null;
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}
gfx.drawTexture() builds a quad and draws it to the currently bound framebuffer, by the way. That code has been well-tested in other parts of my project — it shouldn't be the issue here.
For those of you playing along at home, this code is in fact totally valid. Remember when I swore blind that "gfx.drawTexture() has been well-tested and shouldn't be the issue here"? Yeah, it was totally the issue. I was buffering vertices to draw without actually flushing them through a glDrawElements() call. Whoops.
