I'm trying to write a VR application using OpenGL on Android. I know it would be very simple with the Google Cardboard SDK, but I want to do it entirely in OpenGL so that I understand it clearly. There are a couple of things I'm not clear on, and I hope someone can help me clarify them.
What are off-axis and on-axis projection? Does Google Cardboard use off-axis projection?
I know that in order to create a stereo view for VR, the camera should be translated by d/2, where d is the distance between the two eyes. I tried something like this:
Matrix.setLookAtM(mViewMatrix, 0, 1, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f); // translate +1 along the x axis for the right eye
Matrix.setLookAtM(mViewMatrix, 0, -1, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f); // translate -1 along the x axis for the left eye
Now, suppose the real value of d is 5 cm, so d/2 = 2.5 cm. By how much do I have to translate the camera to get this right? I don't know how to map 5 cm in the real world into OpenGL coordinates.
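A minimal sketch of one way to handle the mapping, assuming you simply decide that one OpenGL unit equals one meter (the scale, the 5 cm value and the matrix names below are my own assumptions, nothing forced by OpenGL - the rest of the scene just has to be modeled at the same scale):
// Assumption: 1 OpenGL unit == 1 meter, so d = 5 cm == 0.05 units.
float eyeSeparation = 0.05f;                  // d in world units
float halfSeparation = eyeSeparation / 2.0f;  // d/2 = 0.025 units

// Left eye: shift the eye (and the look-at point, to keep both eye cameras parallel) by -d/2 along x.
Matrix.setLookAtM(mLeftViewMatrix, 0, -halfSeparation, 0f, -3f, -halfSeparation, 0f, 0f, 0f, 1.0f, 0.0f);

// Right eye: shift by +d/2 along x.
Matrix.setLookAtM(mRightViewMatrix, 0, halfSeparation, 0f, -3f, halfSeparation, 0f, 0f, 0f, 1.0f, 0.0f);
If you instead pick, say, 1 unit = 1 cm, the same 5 cm becomes a translation of 2.5 units; the number only has meaning relative to the scale you chose for the scene.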
I'm looking forward to your help. Sorry for my bad English. Thank you!
I'm a beginner in OpenGL ES on Android (Java).
I'm following this tutorial: http://developer.android.com/training/graphics/opengl/projection.html
Things basically go well, except that I have some trouble with matrix multiplication that I didn't have on desktop OpenGL (where I used GLM instead of the Java/Android framework).
My projection+view matrix doesn't work correctly when I do projection * view * vec4(..coords..)
The only way to get my geometry to show up is to multiply on the right:
vec4(...) * MVP, where MVP is Projection * View.
As far as I can tell, it shouldn't work, but... it does, at least a little bit. I get massive clipping problems this way, but it's still better than nothing :D
My problem is quite similar to this one (same tutorial, but it looks like it got updated since): Android OpenGL weirdness with the setLookAtM method
But it looks like I have the opposite problem now, and I've tried every combination I could think of to solve it (transposing MVP in GLSL, multiplying view by projection instead of the other way around, etc.).
Here's my code:
// GLSL:
gl_Position = vec4(vPosition, 1.0) * MVP;
Where MVP is mMVPMatrix, which is defined as ...
Matrix.setLookAtM(mCameraMatrix, 0, 8, 8, -8, 0f, 0f, 0f, 0.0f, 1.0f, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mCameraMatrix, 0);
And the projection is:
float ratio = (float) width / height;
android.opengl.Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
Any idea?
Problem solved: I just had to set my MVP matrix to identity before computing it, and do this every frame.
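For anyone hitting the same thing, a minimal per-frame sketch of that fix (same array names as above; the Matrix.setIdentityM call is the only addition):
// Reset the result matrix at the start of every frame before rebuilding it.
Matrix.setIdentityM(mMVPMatrix, 0);

// Rebuild the view matrix and combine: MVP = Projection * View.
Matrix.setLookAtM(mCameraMatrix, 0, 8, 8, -8, 0f, 0f, 0f, 0.0f, 1.0f, 0.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mCameraMatrix, 0);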
I started using OpenGL ES 2.0 by following the official Android tutorial: http://developer.android.com/training/graphics/opengl/index.html . Basically, I could mostly copy and paste it for my game, which uses just a few colored polygons: https://play.google.com/store/apps/details?id=de.timedout.mosaic.app
I tested the game on several devices and it worked. However, I just tested it on a Samsung Galaxy S3 (Android 4.1.2), and there it only shows the background and no polygons, without any error message. So I went back to the official tutorial and tested the code for both the OpenGL ES 1.0 and OpenGL ES 2.0 versions on the Galaxy S3, with the same result: it draws the background but no polygons! At the same time, other OpenGL benchmarks from the market run fine on the device. Does that mean the official tutorial code has a flaw? I am a total OpenGL beginner; does someone have a hint for me on this problem?
Ok, I found the reason. The shapes are positioned on the z=0.0f plane.
static float triangleCoords[] = { // in counterclockwise order:
0.0f, 0.622008459f, 0.0f, // top
-0.5f, -0.311004243f, 0.0f, // bottom left
0.5f, -0.311004243f, 0.0f // bottom right
};
The camera is defined as:
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
With this camera, the polygons at z = 0.0f end up exactly on the near clipping plane: the tutorial's frustum uses near = 3, and the eye sits exactly 3 units away from the z = 0 plane. Some devices still display geometry lying right on the near plane; on the Galaxy S3 the GPU obviously clips it away.
A simple solution is to move the camera slightly back, or to nudge the objects slightly off the z = 0 plane so they fall inside the frustum. I chose to move the objects slightly:
static float triangleCoords[] = { // in counterclockwise order:
0.0f, 0.622008459f, 0.00001f, // top
-0.5f, -0.311004243f, 0.00001f, // bottom left
0.5f, -0.311004243f, 0.00001f // bottom right
};
Now it works like a charm on all my devices. I hope this can help anyone who has the same trouble.
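For reference, the other fix mentioned above - pulling the camera back slightly instead of moving the geometry - would look something like this (the extra 0.01f is an arbitrary small offset of my own choosing, just enough to bring the z = 0.0f plane inside the frustum):
// Eye at z = -3.01 instead of -3, so the z = 0 geometry is no longer exactly on the near plane.
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, -3.01f, 0f, 0f, 0f, 0f, 1.0f, 0.0f);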
I am very confused about the use of the GLU.gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ) method. All I want is to zoom in on the 3D cube.
When I increase/decrease the value of eyeZ, the camera moves forward/backward relative to the cube. It's all fine up to a certain limit of eyeZ, but when I increase eyeZ beyond that limit, the effect reverses: instead of zooming in, it starts zooming out.
I might not know enough OpenGL to understand the above method, but could anyone tell me what the basic reason behind this is?
I referred to this link:
http://jerome.jouvie.free.fr/opengl-tutorials/Tutorial8.php
If you want, I can post my code here:
public void onDrawFrame(GL10 gl)
{
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    //gl.glTranslatef(xPos, yPos, -zoomFactor);
    GLU.gluLookAt(gl, eyeX, eyeZ, eyeZ, 0f, 0f, 0f, 0f, 1f, 0f); // note: eyeZ is passed for both the eye Y and Z parameters
    gl.glRotatef(mAngleX, 0, 1, 0);
    gl.glRotatef(mAngleY, -1, 0, 0);
    // Draw the model
    cube.draw(gl);
}
This is the method where I am using the gluLookAt method.
GLU.gluLookAt(gl, eyeX, eyeZ, eyeZ, 0f, 0f, 0f, 0f, 1f, 0f); is a call that places your camera and points it at a particular spot, in this case (0, 0, 0) (the middle three parameters are the point being looked at, and the last three are your up vector). So if you keep moving your camera towards the point you are looking at, eventually it will pass through it and come out the other side, and since GLU.gluLookAt always turns the camera to face that point, it will turn around to face the object now behind it, giving you the impression that you are zooming out even though you keep moving in the same direction.
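If the goal is just zooming, one simple guard - a sketch under the assumption that the cube sits at the origin and the camera starts on the positive z side; MIN_ZOOM_DISTANCE and zoomIn are made-up names - is to clamp eyeZ so the eye can never reach the point it is looking at:
// Never let the eye get closer than this to the cube at the origin,
// so it cannot pass through the look-at point and flip around.
private static final float MIN_ZOOM_DISTANCE = 1.0f;

public void zoomIn(float amount)
{
    eyeZ = Math.max(MIN_ZOOM_DISTANCE, eyeZ - amount);
}
(And in onDrawFrame, the second eye coordinate should presumably be eyeY rather than eyeZ.)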
I am showing a textured quad, centered around [0, 0, -10], with width and height = 10000. The camera is positioned at [0, 0, 0] and looks down the negative z-axis (eyepoint = [0, 0, 0], center = [0, 0, -1]):
GLU.gluLookAt(gl, 0f, 0f, 0f, 0f, 0f, -1f, 0f, 1f, 0f);
Lighting and depth test are disabled.
In orthographic mode, the quad is displayed perfectly, with texture and all - I can even zoom and pan around.
However, when switching to perspective mode via:
GLU.gluPerspective(gl, 60.0f, w / h, 1.0f, 1000.0f);
then the view is just blank. Has anybody got any idea what could cause this?
UPDATE:
Using glFrustum instead of gluPerspective, it works:
gl.glFrustumf(-scaledHalfW, scaledHalfW, -scaledhalfH, scaledhalfH, 1.0f, 100.0f);
But why does gluPerspective not show anything?
Is w / h an integer division maybe?
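If w and h are declared as int, then w / h is indeed integer division, so the aspect ratio passed to gluPerspective ends up as 0 (portrait) or a truncated whole number (landscape), which easily produces a blank view. A sketch of the fix, assuming int width/height variables:
// Cast before dividing so the aspect ratio is a real fraction, not an integer division.
float aspect = (float) w / h;
GLU.gluPerspective(gl, 60.0f, aspect, 1.0f, 1000.0f);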
I am displaying a quad in a pseudo-2D canvas via OpenGL.
To do so, I use orthographic projection via:
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(-ratio, ratio, -1, 1, 0, 10000);
The coordinates of the displayed quad are:
float[] quadCoords = {-10.0f, -10.0f, 5.0f,
10.0f, -10.0f, 5.0f,
10.0f, 10.0f, 5.0f,
-10.0f, 10.0f, 5.0f};
This quad is rendered as two triangles (I'll spare you the code). I am also applying a texture, which works nicely.
The "camera" is defined before rendering the quad like so:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, -10.0f, -10.0f, -5, -10.0f, -10.0f, 0f, 0f, 1.0f, 0.0f);
As you can see, the camera is centered on [-10, -10, 0], which should be the bottom-left corner of the quad. However, when rendering the scene, it looks like this:
This appears to be the bottom RIGHT corner - but it shouldn't be. I checked, and it turns out the X axis is flipped. Am I doing something wrong with gluLookAt? Or have I missed something?
OK, that's a little silly, but I found the answer minutes after writing this question (hours after the problem occurred):
The "camera" is looking at the back side of the quad. Assigning 0 for all z-coordinates of the quad and +1 for the z-coordinate of the eyepoint in gluLookAt fixed it.