Using VBOs/IBOs in OpenGL ES 2.0 on Android

I am trying to create a simple test program on Android (API 10) using OpenGL ES 2.0 to draw a simple rectangle. I can get this to work with float buffers referencing the vertices directly, but I would rather do it with VBOs/IBOs. I have spent countless hours looking for a simple explanation (tutorial), but have yet to come across one. My code compiles and runs just fine, but nothing shows up on the screen other than the clear color.
Here are some code chunks to help explain how I have it set up right now.
Part of onSurfaceChanged():
int[] buffers = new int[2];
GLES20.glGenBuffers(2, buffers, 0);
rectVerts = buffers[0];
rectInds = buffers[1];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, rectVerts);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, (rectBuffer.limit()*4), rectBuffer, GLES20.GL_STATIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, rectInds);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, (rectIndices.limit()*4), rectIndices, GLES20.GL_STATIC_DRAW);
Part of onDrawFrame():
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, rectVerts);
GLES20.glEnableVertexAttribArray(0);
GLES20.glVertexAttribPointer(0, 3, GLES20.GL_FLOAT, false, 0, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, rectInds);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, 6, GLES20.GL_INT, 0);

I don't see anything immediately wrong, but here are some ideas you can check.
1) 'Compiling and running fine' is a useless metric for an OpenGL program. Errors are reported by actively calling glGetError and by checking the compile and link status of the shaders with glGet(Shader|Program)iv. Do you check for errors anywhere? (See the sketch at the end of this answer.)
2) You shouldn't be assuming that 0 is the correct index for vertices. It may work now but will likely break later if you change your shader. Get the correct index with glGetAttribLocation.
3) You're binding the verts buffer in onDrawFrame, but make sure the index buffer is also bound at the time glDrawElements is called. Is it always bound?
4) You could also try drawing your VBO with glDrawArrays to take the index buffer out of the equation entirely, just to narrow down which part is wrong.
Otherwise what you have looks correct as far as I can tell in that small snippet. Maybe something else outside of it is going wrong.
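For illustration, here is a rough sketch of the kind of checks and the glDrawArrays fallback mentioned above (mProgram and the "aPosition" attribute name are assumptions standing in for your own program handle and shader variable):
// Minimal error check; call it after any suspicious GL call.
public static void checkGlError(String op) {
    int error;
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        Log.e("GLTest", op + ": glError 0x" + Integer.toHexString(error));
    }
}

// After linking, verify the program actually linked:
int[] status = new int[1];
GLES20.glGetProgramiv(mProgram, GLES20.GL_LINK_STATUS, status, 0);
if (status[0] == 0) {
    Log.e("GLTest", GLES20.glGetProgramInfoLog(mProgram));
}

// Query the attribute index instead of assuming it is 0:
int posLoc = GLES20.glGetAttribLocation(mProgram, "aPosition");

// Debug draw without the index buffer (assumes the 4 corners are in strip order):
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, rectVerts);
GLES20.glEnableVertexAttribArray(posLoc);
GLES20.glVertexAttribPointer(posLoc, 3, GLES20.GL_FLOAT, false, 0, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
checkGlError("glDrawArrays");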

Related

Issues combining multiple textures

I am working with multiple particle systems which I am attempting to composite together in a framebuffer, to be used in another shader. Each compute shader works when I run it on its own, but something breaks when I attempt to run more than one of them sequentially. I have tried this several ways, so hopefully I am close with at least one of them.
Combining buffers via glBufferSubData:
VBO/VAO
//generate vbo and load data
GLES31.glGenBuffers(1, vbo, 0)
GLES31.glBindBuffer(GLES31.GL_SHADER_STORAGE_BUFFER, vbo[0])
GLES31.glBufferData(GLES31.GL_SHADER_STORAGE_BUFFER, 4 * (particleCoords.size + particleCoords2.size + particleCoords3.size), null, GLES31.GL_STATIC_DRAW)
GLES31.glBufferSubData(GLES31.GL_SHADER_STORAGE_BUFFER, 0, particleCoords.size * 4, particleCoordBuffer)
GLES31.glBufferSubData(GLES31.GL_SHADER_STORAGE_BUFFER, particleCoords.size * 4, particleCoords2.size * 4, particleCoord2Buffer)
GLES31.glBufferSubData(GLES31.GL_SHADER_STORAGE_BUFFER, (particleCoords.size + particleCoords2.size) * 4, particleCoords3.size * 4, particleCoord3Buffer)
GLES31.glBindBufferBase(GLES31.GL_SHADER_STORAGE_BUFFER, 0, vbo[0])
// generate vao
GLES31.glGenVertexArrays(1, vao, 0)
Update positions with compute shader
GLES31.glBindVertexArray(vao[0])
GLES31.glBindBuffer(GLES31.GL_ARRAY_BUFFER, vbo[0])
GLES31.glEnableVertexAttribArray(ShaderManager.particlePositionHandle)
GLES31.glVertexAttribPointer(ShaderManager.particlePositionHandle, COORDS_PER_VERTEX, GLES31.GL_FLOAT, false, COORDS_PER_VERTEX * 4, computeOffsetForIndex(index) / COORDS_PER_VERTEX)//computeOffsetForIndex(index) * 4)
GLES31.glUseProgram(ShaderManager.particleComputeProgram)
GLES31.glBindBuffer(GLES31.GL_SHADER_STORAGE_BUFFER, vbo[0])
GLES31.glUniform1i(ShaderManager.particleTimeHandle, time)
GLES31.glDispatchCompute((sizeForIndex(index) / COORDS_PER_VERTEX) / 128 + 1, 1, 1)
GLES31.glMemoryBarrier(GLES31.GL_ALL_BARRIER_BITS)
// cleanup
GLES31.glBindVertexArray(0)
GLES31.glBindBuffer(GLES31.GL_SHADER_STORAGE_BUFFER, 0)
GLES31.glDisableVertexAttribArray(ShaderManager.particlePositionHandle)
GLES31.glBindBuffer(GLES31.GL_ARRAY_BUFFER, 0)
Draw particles
GLES31.glUseProgram(ShaderManager.particleDrawProgram)
GLES31.glClear(GLES31.GL_COLOR_BUFFER_BIT)
GLES31.glActiveTexture(GLES31.GL_TEXTURE0)
GLES31.glBindTexture(GLES31.GL_TEXTURE_2D, snowTexture[0])
GLES31.glUniform1i(ShaderManager.particleTextureHandle, 0)
GLES31.glEnableVertexAttribArray(ShaderManager.particlePositionHandle)
GLES31.glVertexAttribPointer(ShaderManager.particlePositionHandle, COORDS_PER_VERTEX, GLES31.GL_FLOAT, false, COORDS_PER_VERTEX * 4, computeOffsetForIndex(index) / COORDS_PER_VERTEX)
GLES31.glBindVertexArray(vao[0])
GLES31.glDrawArrays(GLES31.GL_POINTS, computeOffsetForIndex(index) / COORDS_PER_VERTEX, sizeForIndex(index) / COORDS_PER_VERTEX)
// cleanup
GLES31.glDisableVertexAttribArray(ShaderManager.particlePositionHandle)
GLES31.glBindVertexArray(0)
GLES31.glBindFramebuffer(GLES31.GL_FRAMEBUFFER, 0)
GLES31.glBindTexture(GLES31.GL_TEXTURE0, 0)
onDrawFrame
GLES31.glBindFramebuffer(GLES31.GL_FRAMEBUFFER, particleFramebuffer[0])
GLES31.glBindTexture(GLES31.GL_TEXTURE_2D, particleFrameTexture[0])
GLES31.glTexImage2D(GLES31.GL_TEXTURE_2D, 0, GLES31.GL_RGBA, screenWidth, screenHeight, 0, GLES31.GL_RGBA, GLES31.GL_UNSIGNED_BYTE, null)
GLES31.glTexParameteri(GLES31.GL_TEXTURE_2D, GLES31.GL_TEXTURE_MAG_FILTER, GLES31.GL_LINEAR)
GLES31.glTexParameteri(GLES31.GL_TEXTURE_2D, GLES31.GL_TEXTURE_MIN_FILTER, GLES31.GL_LINEAR)
GLES31.glFramebufferTexture2D(GLES31.GL_FRAMEBUFFER, GLES31.GL_COLOR_ATTACHMENT0, GLES31.GL_TEXTURE_2D, particleFrameTexture[0], 0)
for (i in 0 until (ShaderManager.particleShaderInfo?.computeIds?.size ?: 0)) {
updateParticles(i)
drawParticles(i)
}
This results in the first particle system animating and drawing as anticipated. Any systems updated after the first do not animate, and they all draw at (0,0,0). Starting from index 1 gives the same result: only the first system updates properly and the remaining ones do not. This makes me think something is off when running the shaders one after the other, but it seems like it shouldn't be an issue, since the data in each call is unrelated.
Multiple VBO/VAO
What I thought made the most sense was using a separate VBO/VAO for each system, though it seems I'm missing something when it comes to swapping the buffer bound to GL_SHADER_STORAGE_BUFFER. This attempt resulted in none of my systems updating correctly. I understand that glVertexAttribPointer only cares about what is currently bound, but perhaps the compute shader's changes to the buffer are lost when the binding is swapped?
// generate vbo and load data
GLES31.glGenBuffers(1, vbo, 0)
GLES31.glBindBuffer(GLES31.GL_SHADER_STORAGE_BUFFER, vbo[0])
GLES31.glBufferData(GLES31.GL_SHADER_STORAGE_BUFFER, 4 * (particleCoords.size), particleCoordBuffer, GLES31.GL_STATIC_DRAW)
GLES31.glBindBufferBase(GLES31.GL_SHADER_STORAGE_BUFFER, 0, vbo[0])
// generate vao
GLES31.glGenVertexArrays(1, vao, 0)
//-------2----------
// generate vbo and load data
GLES31.glGenBuffers(1, vbo2, 0)
GLES31.glBindBuffer(GLES31.GL_SHADER_STORAGE_BUFFER, vbo2[0])
GLES31.glBufferData(GLES31.GL_SHADER_STORAGE_BUFFER, 4 * particleCoords2.size, particleCoord2Buffer, GLES31.GL_STATIC_DRAW)
GLES31.glBindBufferBase(GLES31.GL_SHADER_STORAGE_BUFFER, 0, vbo2[0])
// generate vao
GLES31.glGenVertexArrays(1, vao2, 0)
//--------3-----------
// generate vbo and load data
GLES31.glGenBuffers(1, vbo3, 0)
GLES31.glBindBuffer(GLES31.GL_SHADER_STORAGE_BUFFER, vbo3[0])
GLES31.glBufferData(GLES31.GL_SHADER_STORAGE_BUFFER, 4 * particleCoords3.size, particleCoord3Buffer, GLES31.GL_STATIC_DRAW)
GLES31.glBindBufferBase(GLES31.GL_SHADER_STORAGE_BUFFER, 0, vbo3[0])
// generate vao
GLES31.glGenVertexArrays(1, vao3, 0)
The update and draw functions for this approach are the same as above, except that the VBO and VAO are swapped out on each call for the objects belonging to that index. I understand that binding a buffer replaces the previous binding, so perhaps the persisted data is also lost at that point?
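For concreteness, the per-system update in this approach looks roughly like this (sketched in Java-style GLES31 calls; vboForIndex and vaoForIndex are placeholder helpers for my per-index lookups, and whether glBindBufferBase needs to be re-issued per dispatch is exactly the part I'm unsure about):
int currentVbo = vboForIndex(index);      // vbo[0], vbo2[0] or vbo3[0]
GLES31.glBindVertexArray(vaoForIndex(index));

GLES31.glUseProgram(ShaderManager.particleComputeProgram);
// Point SSBO binding 0 (the binding the compute shader reads) at this system's buffer.
GLES31.glBindBufferBase(GLES31.GL_SHADER_STORAGE_BUFFER, 0, currentVbo);
GLES31.glUniform1i(ShaderManager.particleTimeHandle, time);
GLES31.glDispatchCompute((sizeForIndex(index) / COORDS_PER_VERTEX) / 128 + 1, 1, 1);
// Same barrier as in the code above, so the compute writes are visible to the draw pass.
GLES31.glMemoryBarrier(GLES31.GL_ALL_BARRIER_BITS);

GLES31.glBindVertexArray(0);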
Hopefully I'm on the right track, but would welcome a better approach. Thanks for taking a look!
UPDATE
It appears my issue is not actually related to the compute shaders or the data structure being used. It seems the issue actually lies in sampling the framebuffer textures and mixing them in the final draw call.
I am currently mixing in a color based on the alpha channel of the framebuffer textures, but the output is puzzling. Doing this with each of the three textures successfully adds tex1 and tex2, but not tex3.
outColor = mix(outColor, vec4(1.0), texture2D(tex1, f_texcoord).a);
outColor = mix(outColor, vec4(1.0), texture2D(tex2, f_texcoord).a);
outColor = mix(outColor, vec4(1.0), texture2D(tex3, f_texcoord).a);
Commenting out the mix of tex1 results in the last two mixes working as expected. This very much confuses me, and I'm not sure what the cause could be. The textures are clearly assigned to the correct texture units, since I can access each of them properly; I'm just unable to add all three. I'm also sampling two other textures in this shader that work as expected.
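For reference, this is roughly how the three samplers are bound to separate texture units before the final draw (sketched in Java-style calls; finalDrawProgram, the texNHandle uniform locations and the second and third texture arrays are placeholder names for my own objects):
GLES31.glUseProgram(finalDrawProgram);

GLES31.glActiveTexture(GLES31.GL_TEXTURE1);
GLES31.glBindTexture(GLES31.GL_TEXTURE_2D, particleFrameTexture[0]);
GLES31.glUniform1i(tex1Handle, 1);   // tex1 samples unit 1

GLES31.glActiveTexture(GLES31.GL_TEXTURE2);
GLES31.glBindTexture(GLES31.GL_TEXTURE_2D, particleFrameTexture2[0]);
GLES31.glUniform1i(tex2Handle, 2);   // tex2 samples unit 2

GLES31.glActiveTexture(GLES31.GL_TEXTURE3);
GLES31.glBindTexture(GLES31.GL_TEXTURE_2D, particleFrameTexture3[0]);
GLES31.glUniform1i(tex3Handle, 3);   // tex3 samples unit 3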
Any thoughts would be greatly appreciated!

Additive blending without glClear

I want to do additive blending on a camera preview's surface texture bound to my OpenGL context.
I get a weirdly rendered texture when I enable blending (20 noisy squares get rendered). If I call GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); the preview is proper, but I lose my additive blending since I have cleared the buffer!
With glClear call
Without glClear call
I have no clue what the problem is; I am a newbie to OpenGL ES. Any suggestions?
I am including only the relevant code below. Ask for any other part of the code if necessary.
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
//compile shader, link program.... etc omitted for brevity
//enable additive blending
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE);
GLES20.glBlendEquation(GLES30.GL_MAX);
}
public void onDrawFrame(GL10 unused) {
// Do not want to do this, but if i do preview is normal but i lose my blending
//GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
surfaceTexture.updateTexImage();
GLES20.glUseProgram(_onscreenShader);
int th = GLES20.glGetUniformLocation(_onscreenShader, "sTexture");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, _rawVideoTexture);
GLES20.glUniform1i(th, 0);
GLES20.glVertexAttribPointer(_onscreenPositionAttribute, 2, GLES20.GL_FLOAT, false, 4 * 2, pVertex);
GLES20.glVertexAttribPointer(_onscreenUVAttribute, 2, GLES20.GL_FLOAT, false, 4 * 2, pTexCoord);
GLES20.glEnableVertexAttribArray(_onscreenPositionAttribute);
GLES20.glEnableVertexAttribArray(_onscreenUVAttribute);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
Most Android devices have a PowerVR, Mali, Adreno or Vivante GPU core, all of which are tile-based (deferred) renderers. This architecture requires the glClear operation at the start of each frame to tell the OpenGL ES driver when to clear its internal triangle-binning queues as well as the framebuffer. If you don't do the glClear, this caching design does not work properly and you will get weird results that differ from one GPU type to another. So you really must do the glClear.
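A minimal sketch of what that means for the frame loop, keeping the blend setup from the question (any accumulation that has to survive across frames would need to live in an offscreen render target you manage yourself, since the window surface contents are not guaranteed to be preserved between frames):
public void onDrawFrame(GL10 unused) {
    // Clear the window surface every frame, as the driver expects.
    GLES20.glClearColor(0f, 0f, 0f, 1f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

    surfaceTexture.updateTexImage();
    // ... draw the preview quad exactly as in the question; frame-to-frame
    // accumulation belongs in an FBO texture, not in the default framebuffer.
}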

GLES20 calls in onDrawFrame cause a long delay

I have written a game in which, when onDrawFrame() is called, I update the game state first (game logic and the buffers for drawing) and then proceed to do the actual drawing. On the Moto G and Nexus 7 everything works smoothly and each onDrawFrame() call takes only 1-5 ms. However, on the Samsung Galaxy S3, 90% of the time the onDrawFrame() call takes as long as 30-50 ms.
Investigating further, I found that the problem lies entirely in the first render method, which I attach below:
(Edit: the block is now on glClear(), having removed the unnecessary calls to get the handles; please refer to the comments.)
public void render(float[] m, FloatBuffer vertex, FloatBuffer texture, ShortBuffer indices, int TextureNumber) {
GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, 0, vertex);
GLES20.glEnableVertexAttribArray(mPositionHandle);
GLES20.glVertexAttribPointer(mTexCoordLoc, 2, GLES20.GL_FLOAT, false, 0, texture);
GLES20.glEnableVertexAttribArray(mTexCoordLoc);
GLES20.glUniformMatrix4fv(mtrxhandle, 1, false, m, 0);
int mSamplerLoc = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "s_texture");
GLES20.glUniform1i(mSamplerLoc, TextureNumber);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.capacity(), GLES20.GL_UNSIGNED_SHORT, indices);
// Disable vertex arrays used
GLES20.glDisableVertexAttribArray(mPositionHandle);
GLES20.glDisableVertexAttribArray(mTexCoordLoc);
}
The above method is called 5 times in each onDrawFrame() to draw things from different texture atlases (the first 4 calls actually draw the game's background, which is 1 long rectangle). Having logged the time taken on each line of code, I found that the lag I am seeing on the S3 always resides in one of the lines below:
int mPositionHandle = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "vPosition");
or
GLES20.glDrawElements(GLES20.GL_TRIANGLES, indices.capacity(), GLES20.GL_UNSIGNED_SHORT, indices);
The lag only occurs in the first call of render(), as the subsequent calls take about 1-2 ms each. I have tried disabling the first render() call, but then the second call, which before took only 1-2 ms, becomes the source of the lag, on the same lines.
Does anyone have an idea what is wrong in my code that the S3 cannot handle? It seems the S3 has a problem commencing GL calls at the beginning of each onDrawFrame(), but why there is such behavior puzzles me. What can be done to decrease the "startup" delay?
Many thanks for your patience in reading.
Edited code to take Selvin's recommendation.
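(Roughly, the handle lookups now happen once after the program is linked instead of inside render(); the a_texCoord and uMVPMatrix names below are guesses for my shader variable names, the rest are from the snippet above.)
// Looked up once after glLinkProgram, not once per draw call:
mPositionHandle = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "vPosition");
mTexCoordLoc    = GLES20.glGetAttribLocation(fhGraphicTools.sp_Image, "a_texCoord");
mtrxhandle      = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "uMVPMatrix");
mSamplerLoc     = GLES20.glGetUniformLocation(fhGraphicTools.sp_Image, "s_texture");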
I have fixed the issue: when I moved all my PNG image files from /drawable to /drawable-nodpi, the game speed went back to normal on the S3.
I assume the auto-scaling done by Android left the raw bitmaps I supplied to OpenGL at "bad" sizes, causing unnecessary texture filtering work on each onDrawFrame. That made the frame miss vsync, giving a long delay in glClear while waiting for the next vsync. Please correct me if I am wrong; I am still a newbie to graphics.
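For reference, an equivalent way to avoid the density scaling without moving the files is to decode the bitmaps with scaling disabled (context, textureId and R.drawable.atlas are placeholders here):
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inScaled = false;  // keep the PNG at its original pixel size
Bitmap atlas = BitmapFactory.decodeResource(context.getResources(), R.drawable.atlas, opts);

GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, atlas, 0);
atlas.recycle();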

Can't read pixels from GraphicBuffer on Adreno GPUs using Karthik's method (a hacky alternative to glReadPixels)

Since July, I have been developing an Android application to edit video files (.avi, .flv, etc.). I use FFMPEG and OpenGL ES 2.0 to implement this application.
Because executing a filter effect like "Blur" on the CPU requires too many calculations, I decided to use OpenGL ES 2.0 to apply filter effects to a video frame using the GPU and shaders.
What I am trying to do is use a shader to apply a filter effect to a frame of video and then get the pixels stored in the framebuffer.
So I have to use glReadPixels, the only OpenGL ES 2.0 method that can be used to get pixels from the framebuffer. But according to many GPU development guides, using glReadPixels is not recommended, and the guides warn about its potential cost. Also, the performance of glReadPixels differs depending on the GPU version and vendor. I could not firmly decide to use glReadPixels and tried to find another method for getting the pixels that result from the GPU calculation.
After a few days, I found a hacky method for getting the pixel data by using Android's GraphicBuffer.
Here is the link.
Following that link, I tried Karthik's method in my code.
The only difference is:
//render method I made.
void renderFrame(){
/* some codes to init */
glBindFramebuffer(GL_FRAMEBUFFER, iFBO);
/* Set the viewport according to the FBO's texture. */
glViewport(0, 0, mTexWidth , mTexHeight);
/* Clear screen on FBO. */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Different Code compare to Karthik's.
contents->setTexture();
contents->draw(mPositionVarIndex, mTextrueCoIndex);
contents->releaseData();
/* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
glBindFramebuffer(GL_FRAMEBUFFER,0);
LOGI("Read Graphic Buffer");
// Just in case the buffer was not created yet
void* vaddr;
// Lock the buffer and retrieve a pointer where we are going to write the data
buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
if (vaddr == NULL) {
LOGE("lock error");
buffer->unlock();
return;
}
/* some codes that use the pixels from GraphicBuffer...*/
}
void setTexture(){
glGenTextures(1, mTexture);
glBindTexture(GL_TEXTURE_2D, mTexture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
void releaseData(){
glDeleteTextures(1, mTexture);
glDeleteBuffers(1, mVbo);
}
void draw(int positionIndex, int textureIndex){
mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
glEnableVertexAttribArray(positionIndex);
// checkGlError("glEnableVertexAttribArray");
glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
// checkGlError("glVertexAttribPointer");
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
glEnableVertexAttribArray(textureIndex);
glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mTexture[0]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones. One is a Samsung Galaxy S2, whose renderer is a Mali-400 MP; the other is an LG Optimus G Pro, whose renderer is an Adreno (TM) 320. The Galaxy S2 works well with the above code and Karthik's method. But on the LG smartphone there are some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link,
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
I could only see errors related to Genlock, so I carefully guessed at some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp and the *.h files to find the reason for those errors, but failed.
My questions are:
Is this the right approach to get a filter effect from a GPU calculation? If not, how do I get a filter effect like "Blur", which requires so many calculations?
Does Karthik's method not work on Qualcomm GPUs? I want to know why those errors occur only on Qualcomm's Adreno GPU.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
Unrelated but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there's now an example of applying filters with the GPU in Grafika. You can see a screenrecorded demo here.
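For comparison, the plain glReadPixels path that the question is trying to avoid looks roughly like this on the Java side (width and height are the dimensions of the currently bound framebuffer):
// Allocate once and reuse across frames; reads RGBA8 from the bound framebuffer.
ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
        .order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);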

OpenGL Depth Buffer issue on Android

I am developing a 3D rendering engine for Android and have experienced some issues with the depth buffer. I am drawing some cubes: one big one and two small ones that fall on top of the bigger one. While rendering I can see that something is obviously wrong with the depth buffer, as seen in this screenshot:
The screenshot was taken on an HTC Hero (running Android 2.3.4) with OpenGL ES 1.1. The whole application is (still) targeted at OpenGL ES 1.1, and it looks the same on the emulator.
These are the calls in my onSurfaceCreated method in the renderer:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
Log.d(TAG, "onsurfacecreated method called");
int[] depthbits = new int[1];
gl.glGetIntegerv(GL_DEPTH_BITS, depthbits, 0);
Log.d(TAG, "Depth Bits: " + depthbits[0]);
gl.glDisable(GL_DITHER);
gl.glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
gl.glClearColor(1, 1, 1, 1);
gl.glClearDepthf(1f);
gl.glEnable(GL_CULL_FACE);
gl.glShadeModel(GL_SMOOTH);
gl.glEnable(GL_DEPTH_TEST);
gl.glMatrixMode(GL_PROJECTION);
gl.glLoadMatrixf(
GLUtil.matrix4fToFloat16(mFrustum.getProjectionMatrix()), 0);
setLights(gl);
}
The GL call for the depth bits returns 16 on the device and 0 on the emulator. It would have made sense if it only failed on the emulator, since there is obviously no depth buffer present there. (I've tried setting the EGLConfigChooser to true, so it would create a config with as close to a 16-bit depth buffer as possible, but that didn't work on the emulator. It wasn't necessary on the device.)
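(For reference, the explicit variant of that request in the GLSurfaceView setup looks like this; context and MyRenderer are placeholders:)
GLSurfaceView view = new GLSurfaceView(context);
// RGB888, no alpha, 16-bit depth, no stencil.
view.setEGLConfigChooser(8, 8, 8, 0, 16, 0);
view.setRenderer(new MyRenderer());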
In my onDrawFrame method I make the following OpenGL Calls:
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl.glClearDepthf(1);
And then for each of the cubes:
gl.glEnableClientState(GL_VERTEX_ARRAY);
gl.glFrontFace(GL_CW);
gl.glVertexPointer(3, GL_FIXED, 0, mVertexBuffer);
// gl.glColorPointer(4, GL_FIXED, 0, mColorBuffer);
gl.glEnableClientState(GL_NORMAL_ARRAY);
gl.glNormalPointer(GL_FIXED, 0, mNormalBuffer);
// gl.glEnable(GL_TEXTURE_2D);
// gl.glTexCoordPointer(2, GL_FLOAT, 0, mTexCoordsBuffer);
gl.glDrawElements(GL_TRIANGLES, mIndexBuffer.capacity(),
GL_UNSIGNED_SHORT, mIndexBuffer);
gl.glDisableClientState(GL_NORMAL_ARRAY);
gl.glDisableClientState(GL_VERTEX_ARRAY);
What am I missing? If more code is needed just ask.
Thanks for any advice!
I got it to work correctly now. The problem was not OpenGL; it was (as Banthar mentioned) a problem with the projection matrix. I manage the projection matrix myself, and the calculation of the final matrix was somehow corrupted (or at least not what OpenGL expects). I can't remember where I got the algorithm for my calculation, but once I changed it to the way OpenGL computes the projection matrix (or simply called glFrustumf(...) directly), it worked fine.
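(For reference, the fixed-function route mentioned above looks roughly like this; fov, aspect, near and far are placeholders for the camera parameters:)
gl.glMatrixMode(GL_PROJECTION);
gl.glLoadIdentity();
// Standard symmetric perspective frustum: top = near * tan(fov / 2).
float top = near * (float) Math.tan(Math.toRadians(fov / 2.0));
float right = top * aspect;
gl.glFrustumf(-right, right, -top, top, near, far);
gl.glMatrixMode(GL_MODELVIEW);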
Also try setting:
glDepthFunc(GL_LEQUAL);
glDepthMask(true);
