When I run my program on an emulator, it displays an ImageView splash screen and then a black screen for the remainder of the application, which uses GLSurfaceViews. The OpenGL rendering works well on my phone. I have tested the program on two computers (one low-performance, one high-performance) and neither displays the GLSurfaceViews. I have also run some of the OpenGL demos from the Google API Demos site on the emulator, and those don't display on either computer. My program uses OpenGL ES 1.1, but I have also tested with OpenGL ES 1.0 to no avail. How can I get OpenGL to display on an emulator? Thanks.
Here's an example of some simple square-rendering code that doesn't work on the emulator:
public void onDrawFrame(GL10 gl) {
    // This works
    gl.glClearColor(_red, _green, _blue, 1.0f);
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    // This doesn't
    float vertices[] = { .5f, .5f, 0, .5f, -.5f, 0, -.5f, .5f, 0, -.5f, -.5f, 0 };
    FloatBuffer vertexSquareBuffer = ByteBuffer.allocateDirect(4 * 3 * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    vertexSquareBuffer.put(vertices);
    vertexSquareBuffer.position(0);
    gl.glColor4f(1, 1, 0, 0.5f);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexSquareBuffer);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
}
Well, there are some possibilities. First, let's check whether something that should work does work on your emulator. Please go through this tutorial and run it on your emulator: http://www.droidnova.com/android-3d-game-tutorial-part-i,312.html (do this bit first).
That way we know if the problem is with your code or the emulator.
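One quick way to tell which GL implementation your surface actually gets is to log the renderer strings from the renderer callback. A minimal sketch, assuming android.util.Log and a standard GLSurfaceView.Renderer:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // On the emulator this typically names a software renderer;
    // on a phone it names the actual GPU.
    Log.d("GLInfo", "Renderer: " + gl.glGetString(GL10.GL_RENDERER));
    Log.d("GLInfo", "Vendor: " + gl.glGetString(GL10.GL_VENDOR));
    Log.d("GLInfo", "Version: " + gl.glGetString(GL10.GL_VERSION));
}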
After that, you might need to check whether all the required shared-object libraries are present.
Let me know in the comments how you go.
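P.S. On the code side, one thing worth double-checking: glVertexPointer only takes effect while the vertex-array client state is enabled. A sketch of your draw call with that added, reusing the fields from your snippet:
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glColor4f(1, 1, 0, 0.5f);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexSquareBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);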
The following code draws two cubes (one red, one green) on Android via JNI using OpenGL ES. On the virtual device the result is correct; on a real device, however, the result looks strange.
The structure (the 3D model and its 2D projection) is correct, but the colors differ from the AVD, and it looks like the depth test is not working. What's the problem?
Simply put:
AVD: correct result (red and green cubes, depth test on, with a specific camera pose)
Real device: strange result (same camera pose as the AVD, but the colors differ: there are two green cubes, and the depth test is not working)
float color1[] = {1.0f, 0.0f, 0.0f};  // red
float color2[] = {0.0f, 1.0f, 0.0f};  // green
int mColorHandle1;
int mColorHandle2;
glViewport(0, 1280, 720, 1280);
glEnable(GL_DEPTH_TEST);
// first cube (red)
glUniform4f(mColorHandle1, color1[0], color1[1], color1[2], 1.0f);
glVertexAttribPointer(gvPositionHandle, 3, GL_FLOAT, GL_FALSE, 0, gTriangleVertices1);
glEnableVertexAttribArray(gvPositionHandle);
glDrawArrays(GL_TRIANGLES, 0, 36);
// second cube (green)
glUniform4f(mColorHandle2, color2[0], color2[1], color2[2], 1.0f);
glDrawArrays(GL_TRIANGLES, 36, 36);
glDisableVertexAttribArray(gvPositionHandle);
glDisable(GL_DEPTH_TEST);
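For reference, uniform handles like mColorHandle1/mColorHandle2 have to be looked up from the linked program before glUniform4f can use them; an uninitialized handle is a common cause of exactly this kind of device-dependent behavior. A minimal sketch of the lookup, written here in Java GLES20 (the uniform name "uColor" and the program variable are assumptions, since that part isn't shown above):
// After glLinkProgram succeeds; "uColor" is an assumed uniform name.
int mColorHandle1 = GLES20.glGetUniformLocation(mProgram, "uColor");
if (mColorHandle1 == -1) {
    // -1 means the uniform was not found (or was optimized out);
    // glUniform4f calls with location -1 are silently ignored.
    Log.e("Renderer", "uColor uniform not found");
}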
I am trying to make a simple 2D game. I have noticed that the onDrawFrame() function is called at 60 FPS with vsync. However, for some reason, on older devices, if the rendering time exceeds 1 ms it skips a frame every few frames and the animation doesn't look smooth at all. And I am talking about devices that aren't too old...
90% of the drawing code looks like this:
// bind the tank texture and point GL at the vertex/texcoord buffers
gl.glBindTexture(GL11.GL_TEXTURE_2D, MainProgram.glSurfaceView.renderer.ResourceIdToTexture(R.drawable.tank));
gl.glFrontFace(GL11.GL_CW);
gl.glVertexPointer(3, GL11.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL11.GL_FLOAT, 0, tankTextureBuffer);
// position and scale the sprite, tint it with the owner's color, draw
float[] glCoords = World.WorldCoordsToGLCoords(tank.GetPosition().PosX(), tank.GetPosition().PosY());
float[] glDims = {0.025f, 0.025f};
gl.glPushMatrix();
gl.glTranslatef(glCoords[0], glCoords[1] + glDims[1], -1.0f);
gl.glScalef(glDims[0], glDims[1], 1.0f);
float[] tc = tank.GetOwner().GetColor().asFloatArray();
gl.glColor4f(tc[0], tc[1], tc[2], tc[3]);
gl.glDrawArrays(GL11.GL_TRIANGLE_STRIP, 0, vertex.length / 3);
gl.glColor4f(1f, 1f, 1f, 1f);
gl.glPopMatrix();
Still, it takes only about 1 millisecond to process all of the drawing, yet I have noticed that the FPS (I measured the number of times per second surfaceView.drawFrame is called) drops from 60 to around 30, and it looks almost disgusting.
How do other games have such smooth animations even on the oldest devices?
And how do I achieve this with GLES11?
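One technique many games use for this (a general approach, not something from the original post) is to advance animation by measured elapsed time rather than by a fixed per-frame step, so a dropped frame stretches the motion instead of visibly stalling it. A minimal sketch, with tankX, tankSpeed, and drawScene as assumed placeholder names:
private long lastFrameNanos = 0;

public void onDrawFrame(GL10 gl) {
    long now = System.nanoTime();
    // Seconds since the previous frame (0 on the very first frame).
    float dt = (lastFrameNanos == 0) ? 0f : (now - lastFrameNanos) / 1_000_000_000f;
    lastFrameNanos = now;
    // Units-per-second times elapsed time: a skipped frame simply
    // produces a larger step instead of a visible stall.
    tankX += tankSpeed * dt;
    drawScene(gl);
}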
iOS Code:
I have working code on iOS which prepares a 3D transformation for a UIView's layer:
CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0f/500.0f;
t = CATransform3DTranslate(t, 10.0f, 20.0f, 0.0f);
t = CATransform3DRotate(t, 0.25f * M_PI, -1.0f, 0.0f, 0.0f);
I'm trying to port the above code to Android by preparing an android.view.animation.Transformation t that does the same thing. It will be applied by ViewGroup.getChildStaticTransformation(View v, Transformation t).
Unfinished Android Code:
t.clear();
t.setTransformationType(Transformation.TYPE_MATRIX);
android.graphics.Camera camera = new android.graphics.Camera();
// set perspective (m34) here.. how??
camera.translate(10.0f, 20.0f, 0.0f);
camera.rotateX((float) -Math.toDegrees(0.25 * Math.PI));
camera.getMatrix(t.getMatrix());
My main issue:
The main problem is that I'm not sure how to set the perspective t.m34 = -1.0f/500.0f in Android. The docs are rather cryptic, and my best bet is Camera.setLocation(). Also, the docs say nothing about units, so what would be an appropriate value?
Another issue is that setLocation() is only available from API 12, so I would really need to set it manually in the Matrix instead (or via some transformation). Any ideas how?
Final comment:
I'm aware that there are probably more issues, like the translate() units, the transformation order, and generally the fact that Android transforms the camera while iOS transforms the object. I will get to all of these later :)
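For what it's worth, here is a sketch of the API 12+ route, based on how android.graphics.Camera is commonly described: the camera sits at z = -8 by default and one camera unit corresponds to 72 pixels, so the default perspective matches m34 ≈ -1/576. If that holds, m34 = -1/500 would look like this (treat the unit conversion as an assumption to verify):
android.graphics.Camera camera = new android.graphics.Camera();
// Assumes 1 camera unit == 72 px, so m34 = -1/d  =>  z = -d / 72.
// For m34 = -1/500: z = -500 / 72 ≈ -6.94 (setLocation is API 12+).
camera.setLocation(0.0f, 0.0f, -500.0f / 72.0f);
camera.translate(10.0f, 20.0f, 0.0f);
camera.rotateX((float) -Math.toDegrees(0.25 * Math.PI));
camera.getMatrix(t.getMatrix());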
I am developing a 3D rendering engine for Android and have run into some issues with the depth buffer. I am drawing some cubes: one big one and two small ones that fall on top of the bigger one. While rendering, I noticed that something is obviously wrong with the depth buffer, as seen in this screenshot:
This screenshot was taken on an HTC Hero (running Android 2.3.4) with OpenGL ES 1.1. The whole application is (still) targeted at OpenGL ES 1.1, and it looks the same on the emulator.
These are the calls in my onSurfaceCreated method in the renderer:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    Log.d(TAG, "onsurfacecreated method called");

    int[] depthbits = new int[1];
    gl.glGetIntegerv(GL_DEPTH_BITS, depthbits, 0);
    Log.d(TAG, "Depth Bits: " + depthbits[0]);

    gl.glDisable(GL_DITHER);
    gl.glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    gl.glClearColor(1, 1, 1, 1);
    gl.glClearDepthf(1f);
    gl.glEnable(GL_CULL_FACE);
    gl.glShadeModel(GL_SMOOTH);
    gl.glEnable(GL_DEPTH_TEST);

    gl.glMatrixMode(GL_PROJECTION);
    gl.glLoadMatrixf(
            GLUtil.matrix4fToFloat16(mFrustum.getProjectionMatrix()), 0);

    setLights(gl);
}
The GL call for the depth bits returns 16 on the device and 0 on the emulator. It would have made sense if it only failed on the emulator, since there obviously is no depth buffer present there. (I've tried setting the EGLConfigChooser to true, so it would create a config with as close to a 16-bit depth buffer as possible, but that didn't work on the emulator. It wasn't necessary on the device.)
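For reference, besides the boolean variant, GLSurfaceView also has an overload of setEGLConfigChooser that requests explicit channel sizes; a sketch (the surrounding view setup is assumed):
GLSurfaceView view = new GLSurfaceView(context);
// Request RGBA 8888 with a 16-bit depth buffer and no stencil.
// Must be called before setRenderer().
view.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
view.setRenderer(renderer);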
In my onDrawFrame method I make the following OpenGL calls:
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl.glClearDepthf(1);
And then for each of the cubes:
gl.glEnableClientState(GL_VERTEX_ARRAY);
gl.glFrontFace(GL_CW);
gl.glVertexPointer(3, GL_FIXED, 0, mVertexBuffer);
// gl.glColorPointer(4, GL_FIXED, 0, mColorBuffer);
gl.glEnableClientState(GL_NORMAL_ARRAY);
gl.glNormalPointer(GL_FIXED, 0, mNormalBuffer);
// gl.glEnable(GL_TEXTURE_2D);
// gl.glTexCoordPointer(2, GL_FLOAT, 0, mTexCoordsBuffer);
gl.glDrawElements(GL_TRIANGLES, mIndexBuffer.capacity(),
GL_UNSIGNED_SHORT, mIndexBuffer);
gl.glDisableClientState(GL_NORMAL_ARRAY);
gl.glDisableClientState(GL_VERTEX_ARRAY);
What am I missing? If more code is needed just ask.
Thanks for any advice!
I got it to work correctly now. The problem was not OpenGL; it was (as Banthar mentioned) a problem with the projection matrix. I am managing the projection matrix myself, and the calculation of the final matrix was somehow corrupted (or at least not what OpenGL expects). I can't remember where I got the algorithm for my calculation, but once I changed it to the way OpenGL computes the projection matrix (or called glFrustumf(...) directly), it worked fine.
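For reference, the matrix that glFrustumf(l, r, b, t, n, f) builds is documented in the GL man pages; here is a sketch of the same thing as a column-major float[16] suitable for glLoadMatrixf (the helper name is mine, and android.opengl.Matrix.frustumM computes the equivalent):
// Column-major perspective frustum, matching glFrustumf.
static float[] frustum(float l, float r, float b, float t, float n, float f) {
    float[] m = new float[16];  // all other entries stay 0
    m[0] = 2 * n / (r - l);
    m[5] = 2 * n / (t - b);
    m[8] = (r + l) / (r - l);
    m[9] = (t + b) / (t - b);
    m[10] = -(f + n) / (f - n);
    m[11] = -1;
    m[14] = -2 * f * n / (f - n);
    return m;
}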
Try enabling:
- glDepthFunc(GL_LEQUAL);
- glDepthMask(true);
I'm currently learning OpenGL ES programming on Android (2.1). I started with the obligatory rotating cube. It rotates fine, but I can't get the depth buffer to work: the polygons are always displayed in the order the GL commands render them. I do this during GL initialization:
gl.glClearColor(.5f, .5f, .5f, 1);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glClearDepthf(1f);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glDepthFunc(GL10.GL_LEQUAL);
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT, GL10.GL_NICEST);
On surface-change I do this:
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100f);
When I enable backface culling, everything looks correct. But backface culling is only a speed optimization, so it should also work with just the depth buffer, shouldn't it? So what is missing here?
Found it myself. It wasn't GL code, it was Android code:
view.setEGLConfigChooser(false);
The "false" in this line explicitly says that no Z-buffer should be allocated. After switching it to "true", it worked perfectly.
I was using the GL2JNIView provided in the hello-gl2 sample in NDK r10, and I was also having this issue. The problem was that when creating the GL2JNIView object, I didn't specify the depth size in the constructor, which the sample's private ConfigChooser class needs in order to find the right EGL configuration.
The view has these two constructors:
public GL2JNIView(Context context, boolean translucent, int depth, int stencil) {...}
public GL2JNIView(Context context) {...}
So create it with an explicit 16-bit depth size:
myView = new GL2JNIView(this, false, 16, 0);
instead of
myView = new GL2JNIView(this);