Android glGetFloatv called unimplemented "OpenGL ES API" [duplicate]

I'm learning OpenGL ES and would like a more intuitive way of interacting with 3D objects than the one suggested by Google in the TouchRotateActivity sample.
To do that, I would like to multiply my ModelView matrix by the ModelView matrix from the previous state.
But I've run into the following problem: glGetFloatv returns all zeroes in my float array, and I don't understand why (my ModelView matrix is not empty: if it were, I wouldn't see my cube on the screen).
Could someone help me figure out what the problem is? Here are the changes in the code.
private float[] previous;

public CubeRenderer() {
    mCube = new Cube();
    previous = new float[16];
}

public void onDrawFrame(GL10 gl) {
    GL11 gl11 = (GL11) gl;
    gl11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
    gl11.glMatrixMode(GL11.GL_MODELVIEW);
    gl11.glLoadIdentity();
    gl11.glTranslatef(0, 0, -3.0f);
    gl11.glRotatef(mAngleX, 0, 1, 0);
    gl11.glRotatef(mAngleY, 1, 0, 0);
    gl11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
    gl11.glEnableClientState(GL11.GL_COLOR_ARRAY);
    /*if (!previous.equals(new float[16]))
        gl11.glMultMatrixf(previous, 0);*/
    gl11.glGetFloatv(GL11.GL_MODELVIEW_MATRIX, previous, 0);
    Log.d("matrix size", Integer.toString(previous.length));
    for (int i = 0; i < previous.length; i++)
        Log.d(Integer.toString(i), Float.toString(previous[i]));
    mCube.draw(gl11);
}
Thank you in advance.

Depending on your device you might be using the PixelFlinger software GL renderer, which unfortunately does not implement glGetFloatv, at least as of version 1.2. If this is the case, checking the logcat output should reveal messages to this effect.
The solution is to handle the matrices yourself, so there's no need to retrieve them from OpenGL in the first place.
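A minimal sketch of that approach, assuming the transforms from the question and using android.opengl.Matrix to build the modelview on the CPU (the field names are illustrative):

import android.opengl.Matrix;

private final float[] modelView = new float[16];

public void onDrawFrame(GL10 gl) {
    GL11 gl11 = (GL11) gl;
    gl11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

    // Build the matrix on the CPU with android.opengl.Matrix...
    Matrix.setIdentityM(modelView, 0);
    Matrix.translateM(modelView, 0, 0, 0, -3.0f);
    Matrix.rotateM(modelView, 0, mAngleX, 0, 1, 0);
    Matrix.rotateM(modelView, 0, mAngleY, 1, 0, 0);

    // ...then hand the finished matrix to OpenGL in one call.
    // 'previous' can now live entirely on the Java side: copy modelView
    // into it here instead of calling glGetFloatv.
    gl11.glMatrixMode(GL11.GL_MODELVIEW);
    gl11.glLoadMatrixf(modelView, 0);

    mCube.draw(gl11);
}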

I don't program in Java, so for all I know your problem could be in the way the memory is passed to glGetFloatv. In any case, I found this page floating around out there; maybe it will help you.

Related

Add more teapots on an Image Target using Vuforia

I'm a newbie in Android Vuforia AR development. After searching Google and the Vuforia forum without results, I've come here for your suggestions. I've successfully replaced the teapot with my own 3D object; now I need to add more teapots to the "stones" target, like in this image link. Have you ever worked on a case like this? Please give me some pointers to begin.
Thanks and best regards!
Are you using Unity? Here are two suggestions:
You can programmatically instantiate prefabs on an image target following this code, just add additional transforms:
https://developer.vuforia.com/forum/faq/unity-how-can-i-dynamically-attach-my-3d-model-image-target
Alternatively, in your Scene Hierarchy, you can make additional GameObjects children of the ImageTarget prefab (probably the easiest way), and adjust their position using the Scene Editor.
First, grab a fresh copy of the modelview matrix before transforming it. Second, bind your modelViewProjection matrix before using it:
modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(trackable->getPose());
SampleUtils::rotatePoseMatrix(5.0f, 0.0f, 0.0f, 1.0f, &modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                             &modelViewMatrix.data[0]);
SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                            &modelViewMatrix.data[0],
                            &modelViewProjection.data[0]);
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                   (GLfloat*) &modelViewProjection.data[0]);
SampleUtils::checkGlError("ImageTargets renderFrame");
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT,
               (const GLvoid*) &teapotIndices[0]);

Porting iOS CATransform3D perspective (field m34) to Android using android.graphics.Camera

iOS Code:
I have working code on iOS which prepares a 3D transformation for a UIView's layer:
CATransform3D t = CATransform3DIdentity;
t.m34 = -1.0f/500.0f;
t = CATransform3DTranslate(t, 10.0f, 20.0f, 0.0f);
t = CATransform3DRotate(t, 0.25f * M_PI, -1.0f, 0.0f, 0.0f);
I'm trying to port the above code to Android by preparing an android.view.animation.Transformation t that does the same thing. It will be executed by ViewGroup.getChildStaticTransformation(View v, Transformation t).
Unfinished Android code:
t.clear();
t.setTransformationType(Transformation.TYPE_MATRIX);
android.graphics.Camera camera = new android.graphics.Camera();
// set perspective (m34) here.. how??
camera.translate(10.0f, 20.0f, 0.0f);
camera.rotateX((float) -Math.toDegrees(0.25 * Math.PI));
camera.getMatrix(t.getMatrix());
My main issue:
The main problem is that I'm not sure how to set the perspective t.m34 = -1.0f/500.0f in Android. The docs are rather cryptic, and my best bet is using Camera.setLocation(). Also, the docs say nothing about units, so what would be an appropriate value?
Another issue is that setLocation() is only available from API 12, so I would really need to set it manually in the Matrix instead (or via some transformation). Any ideas how?
Final comment:
I'm aware that there are probably more issues.. like the translate() units, transformation order and generally the issue that we transform the camera in Android but the object in iOS. I will get to all of these later :)
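A possible starting point on API 12+, sketched rather than definitive. The conversion factor is an assumption to verify: Skia's Sk3DView, which backs android.graphics.Camera, reportedly uses a default camera distance of 8 units at 72 pixels per unit (an effective m34 of roughly -1/576), so matching m34 = -1/d would mean placing the camera at -d/72 units:

import android.graphics.Camera;
import android.graphics.Matrix;

// Sketch: approximate CATransform3D's m34 = -1/d with android.graphics.Camera.
// Assumption: Camera's z unit is 72 pixels and the default distance is 8 units,
// so setLocation(0, 0, -d / 72) should reproduce m34 = -1/d.
Camera camera = new Camera();
float d = 500.0f;                      // iOS: t.m34 = -1.0f/500.0f
camera.setLocation(0, 0, -d / 72.0f); // API 12+ only
camera.translate(10.0f, 20.0f, 0.0f);
camera.rotateX((float) -Math.toDegrees(0.25 * Math.PI));
Matrix m = new Matrix();
camera.getMatrix(m);

For pre-API-12 devices, one would have to adjust the perspective row of the resulting 3x3 Matrix directly (via Matrix.getValues()/setValues()), since setLocation() is unavailable there.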

Using VBOs/IBOs in OpenGL ES 2.0 on Android

I am trying to create a simple test program on Android (API 10) using OpenGL ES 2.0 to draw a simple rectangle. I can get this to work with float buffers referencing the vertices directly, but I would rather do it with VBOs/IBOs. I have spent countless hours trying to find a simple explanation (tutorial) but have yet to come across one. My code compiles and runs just fine, but nothing shows up on the screen other than the clear color.
Here are some code chunks to help explain how I have it set up right now.
Part of onSurfaceChanged():
int[] buffers = new int[2];
GLES20.glGenBuffers(2, buffers, 0);
rectVerts = buffers[0];
rectInds = buffers[1];
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, rectVerts);
GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER, (rectBuffer.limit()*4), rectBuffer, GLES20.GL_STATIC_DRAW);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, rectInds);
GLES20.glBufferData(GLES20.GL_ELEMENT_ARRAY_BUFFER, (rectIndices.limit()*4), rectIndices, GLES20.GL_STATIC_DRAW);
Part of onDrawFrame():
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, rectVerts);
GLES20.glEnableVertexAttribArray(0);
GLES20.glVertexAttribPointer(0, 3, GLES20.GL_FLOAT, false, 0, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, rectInds);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, 6, GLES20.GL_INT, 0);
I don't see anything immediately wrong, but here are some ideas you can look into.
1) 'Compiling and running fine' is a useless metric for an OpenGL program. Errors are reported by actively calling glGetError and by checking the compile and link status of the shaders with glGet(Shader|Program)iv. Do you check for errors anywhere? (See the sketch after this answer.)
2) You shouldn't be assuming that 0 is the correct index for vertices. It may work now but will likely break later if you change your shader. Get the correct index with glGetAttribLocation.
3) You're binding the verts buffer onDraw, but I don't see anything about the indices buffer. Is that always bound?
4) You could also try drawing your VBO with glDrawArrays to get the index buffer out of the equation, just to help debugging to see which part is wrong.
Otherwise what you have looks correct as far as I can tell in that small snippet. Maybe something else outside of it is going wrong.
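A minimal sketch of the error checks suggested in (1), assuming GLES 2.0 on Android; the method names are illustrative:

import android.opengl.GLES20;
import android.util.Log;

// Drain the GL error queue and log anything found.
public static void checkGlError(String op) {
    int error;
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        Log.e("GLTest", op + ": glError 0x" + Integer.toHexString(error));
    }
}

// Compile a shader and verify its compile status.
public static int compileShader(int type, String source) {
    int shader = GLES20.glCreateShader(type);
    GLES20.glShaderSource(shader, source);
    GLES20.glCompileShader(shader);
    int[] status = new int[1];
    GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
    if (status[0] == 0) {
        Log.e("GLTest", "Compile failed: " + GLES20.glGetShaderInfoLog(shader));
        GLES20.glDeleteShader(shader);
        return 0;
    }
    return shader;
}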

OpenGL Depth Buffer issue on Android

I am developing a 3D rendering engine for Android. I have experienced some issues with the depth buffer. I am drawing some cubes: one big one and two smaller ones that fall on top of the bigger one. While rendering, I noticed that something is obviously wrong with the depth buffer, as seen in this screenshot:
This screenshot was taken on an HTC Hero (running Android 2.3.4) with OpenGL ES 1.1. The whole application is (still) targeted at OpenGL ES 1.1. It looks the same on the emulator.
These are the calls in my onSurfaceCreated method in the renderer:
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    Log.d(TAG, "onsurfacecreated method called");

    int[] depthbits = new int[1];
    gl.glGetIntegerv(GL_DEPTH_BITS, depthbits, 0);
    Log.d(TAG, "Depth Bits: " + depthbits[0]);

    gl.glDisable(GL_DITHER);
    gl.glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    gl.glClearColor(1, 1, 1, 1);
    gl.glClearDepthf(1f);
    gl.glEnable(GL_CULL_FACE);
    gl.glShadeModel(GL_SMOOTH);
    gl.glEnable(GL_DEPTH_TEST);

    gl.glMatrixMode(GL_PROJECTION);
    gl.glLoadMatrixf(
            GLUtil.matrix4fToFloat16(mFrustum.getProjectionMatrix()), 0);

    setLights(gl);
}
The GL Call for the depth bits returns 16 on the device and 0 on the emulator. It would've made sense if it only didn't work on the emulator since there obviously is no depth buffer present. (I've tried setting the EGLConfigChooser to true, so it would create a Config with as close to 16 bits depth buffer as possible, but that didn't work on the emulator. It wasn't necessary on the device.)
In my onDrawFrame method I make the following OpenGL Calls:
gl.glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
gl.glClearDepthf(1);
And then for each of the cubes:
gl.glEnableClientState(GL_VERTEX_ARRAY);
gl.glFrontFace(GL_CW);
gl.glVertexPointer(3, GL_FIXED, 0, mVertexBuffer);
// gl.glColorPointer(4, GL_FIXED, 0, mColorBuffer);
gl.glEnableClientState(GL_NORMAL_ARRAY);
gl.glNormalPointer(GL_FIXED, 0, mNormalBuffer);
// gl.glEnable(GL_TEXTURE_2D);
// gl.glTexCoordPointer(2, GL_FLOAT, 0, mTexCoordsBuffer);
gl.glDrawElements(GL_TRIANGLES, mIndexBuffer.capacity(),
GL_UNSIGNED_SHORT, mIndexBuffer);
gl.glDisableClientState(GL_NORMAL_ARRAY);
gl.glDisableClientState(GL_VERTEX_ARRAY);
What am I missing? If more code is needed just ask.
Thanks for any advice!
I got it to work correctly now. The problem was not OpenGL. It was (as Banthar mentioned) a problem with the projection matrix. I manage the projection matrix myself, and the calculation of the final matrix was somehow corrupted (or at least not what OpenGL expects). I can't remember where I got the algorithm for my calculation, but once I changed it to the way OpenGL calculates the projection matrix (or directly called glFrustumf(...)), it worked fine.
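For reference, a minimal sketch of building such a projection matrix on the CPU with android.opengl.Matrix.frustumM (which computes the same matrix as glFrustumf) and loading it; the frustum bounds here are illustrative:

float[] projection = new float[16];
float near = 1.0f, far = 100.0f;
float aspect = (float) width / height;  // from onSurfaceChanged
android.opengl.Matrix.frustumM(projection, 0, -aspect, aspect, -1, 1, near, far);

gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadMatrixf(projection, 0);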
Try enabling:
glDepthFunc(GL_LEQUAL);
glDepthMask(true);

First Person Camera rotation in 3D

I have written a first-person camera class for Android.
The class is really simple: the camera object has its three axes X, Y and Z, and there are functions to create the ModelView matrix (i.e. calculateModelViewMatrix()), rotate the camera along its X and Y axes, and translate the camera along its Z axis.
I think that my ModelView matrix calculation is correct, and I can also translate the camera along the Z axis.
Rotation along the X axis seems to work, but along the Y axis it gives strange results.
Another problem with the rotation is that instead of the camera rotating, my 3D model starts rotating around its own axis.
I have written another implementation based on a look-at point, using OpenGL ES's GLU.gluLookAt() to obtain the ModelView matrix, but it seems to suffer from exactly the same problems.
EDIT
First of all thanks for your reply.
I have actually made a second implementation of the Camera class, this time using the rotation functions provided in android.opengl.Matrix class as you said.
I have provided the code below, which is much simpler.
To my surprise, the results are exactly the same.
This means that my rotation functions and Android's rotation functions produce the same results.
I did a simple test and looked at my data: I rotated the look-at point 1 degree at a time around the Y axis and watched the coordinates. It seems that my look-at point lags behind the exact rotation angle, e.g. at 20 degrees it has only rotated 10 to 12 degrees, and after 45 degrees it starts reversing back.
There is a class android.opengl.Matrix, which is a collection of static methods that do everything you need on a float[16] you pass in. I highly recommend you use those functions instead of rolling your own. You'd probably want either setLookAtM with the look-at point calculated from your camera angles (using sin and cos, as you are doing in your code; I assume you know how to do this), or to build the matrix from rotations directly.
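A minimal sketch of the setLookAtM route, assuming yaw/pitch in radians and a float[16] viewMatrix (all names here are illustrative):

// Derive a look-at point one unit ahead of the eye from yaw/pitch
// (yaw = 0 looks down -Z; adjust signs to match your conventions).
float cx = eyeX + (float) (Math.cos(pitch) * Math.sin(yaw));
float cy = eyeY + (float) Math.sin(pitch);
float cz = eyeZ - (float) (Math.cos(pitch) * Math.cos(yaw));
Matrix.setLookAtM(viewMatrix, 0,
        eyeX, eyeY, eyeZ,   // camera position
        cx, cy, cz,         // point the camera looks at
        0f, 1f, 0f);        // up vector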
-- edit in response to new answer --
(you should probably have edited your original question, by the way - your answer as another question confused me for a bit)
Ok, so here's one way of doing it. This is uncompiled and untested. I decided to build the matrix manually instead; perhaps that'll give a bit more information about what's going on...
class TomCamera {
    // These are our inputs - eye position, and the orientation of the camera.
    public float mEyeX, mEyeY, mEyeZ; // position
    public float mYaw, mPitch, mRoll; // Euler angles

    // This is the output matrix to pass to OpenGL.
    public float[] mCameraMatrix = new float[16];

    // Convert inputs to outputs.
    public void createMatrix() {
        // Create a camera matrix (YXZ order is pretty standard).
        // You may want to negate some of these constant 1s to match expectations.
        Matrix.setRotateM(mCameraMatrix, 0, mYaw, 0, 1, 0);
        Matrix.rotateM(mCameraMatrix, 0, mPitch, 1, 0, 0);
        Matrix.rotateM(mCameraMatrix, 0, mRoll, 0, 0, 1);
        Matrix.translateM(mCameraMatrix, 0, -mEyeX, -mEyeY, -mEyeZ);
    }
}
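For completeness, a hypothetical way of feeding that matrix to the GLES 1.x fixed-function pipeline (touchDeltaX/touchDeltaY are assumed to come from your touch handler, and the 0.5f scale is a tuning choice):

camera.mYaw += touchDeltaX * 0.5f;    // degrees per pixel of drag
camera.mPitch += touchDeltaY * 0.5f;
camera.createMatrix();
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadMatrixf(camera.mCameraMatrix, 0);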
