find if matrices intersect for culling test? - android

I am developing a 2D game where I can rotate my view using the rotation sensor and view different textures on the screen.
I am scattering all the textures using this method:
public void position(ShaderProgram program, float[] rotationMatrix, float[] projectionMatrix, float longitude, float latitude, float radius)
{
    this.radius = radius;
    viewMat = new float[MATRIX_SIZE];
    mvpMatrix = new float[MATRIX_SIZE];
    // correct the coordinate system to fit landscape orientation
    SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, viewMat);
    // correct the axes so that the Y axis points to the sky and Z points to the front
    Matrix.rotateM(viewMat, 0, -90f, 1f, 0f, 0f);
    // first rotation - longitude
    Matrix.rotateM(viewMat, 0, longitude, 0f, 1f, 0f);
    // second rotation - latitude
    Matrix.rotateM(viewMat, 0, latitude, 1f, 0f, 0f);
    // controls the distance at which the texture is viewed (currently only the Z translation is used)
    Matrix.translateM(viewMat, 0, 0f, 0f, radius);
    // multiply the adjusted view matrix with the projection matrix
    Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, viewMat, 0);
    // send the MVP matrix to the shader
    GLES20.glUniformMatrix4fv(program.getMatrixLocation(), 1, false, mvpMatrix, 0);
}
However, when I render a large number of textures, the framerate becomes very laggy, so I thought about using culling.
How should I perform the culling test when I have a different view matrix for every texture?
What I mean is: how do I check whether the matrix that represents where I'm viewing right now intersects the matrix that represents each texture, so I can decide whether to draw it?

There are many ways of doing this, but each of them will need more than just a matrix. A matrix alone (assuming the center of the object is at (0,0,0) before any matrix is applied) will not handle cases where you may see only a part of the object.
You may define the boundaries of the original object with 8 points, such as the corners of a cube. If you drew these 8 points with the same matrix as the object, they would appear around the object, defining a surface that boxes the object itself.
These points may then be multiplied with your resulting matrix (the whole MVP matrix), which projects them into the OpenGL drawable part of the coordinate system. Now you only need to check whether any of these points is inside [-1, 1] on every axis; if so, you must draw the object. So x, y and z must all be between -1 and 1.
Update:
Actually that will not be enough, as the shapes may intersect even when all 8 of the points are outside those coordinates. You will need a proper algorithm to find the intersection of the 2 shapes...
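For the simple 8-corner test described above, a minimal sketch might look like this (the isAnyCornerVisible helper and the corners array are illustrative names, not from the question's code; a production version would still need the box/frustum intersection mentioned in the update):

// Sketch: project each bounding-box corner with the MVP matrix and test
// whether it lands inside the normalized device coordinate cube.
private static boolean isAnyCornerVisible(float[] mvpMatrix, float[][] corners) {
    float[] clip = new float[4];
    for (float[] corner : corners) {
        float[] point = {corner[0], corner[1], corner[2], 1f};
        Matrix.multiplyMV(clip, 0, mvpMatrix, 0, point, 0); // to clip space
        if (clip[3] <= 0f) continue;                        // behind the camera
        float x = clip[0] / clip[3];                        // perspective divide -> NDC
        float y = clip[1] / clip[3];
        float z = clip[2] / clip[3];
        if (x >= -1f && x <= 1f && y >= -1f && y <= 1f && z >= -1f && z <= 1f) {
            return true; // this corner is visible, so draw the object
        }
    }
    return false;
}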

Related

Mapping real-world coordinate to OpenGL coordinate system

I'm currently trying to implement an AR browser based on indoor maps, but I'm facing several problems. Let's take a look at the figure:
In this figure, I've already changed the coordinates to OpenGL's right-handed coordinate system.
In our real-world scenario,
given the angle FOV/2 and the camera height h, I can get the nearest visible point P(0,0,-n).
Given the angle B and the camera height h, I can get a point Q(0,0,-m) between the nearest visible point and the farthest visible point.
Here comes the problem: when I finish setting up my vertices (including P and Q) and use the method Matrix.setLookAtM like
Matrix.setLookAtM(modelMatrix, 0, 0f, h, 0f, 0f, -2000f, 0f, 0f, 1f, 0f);
the aspect ratio is incorrect.
If the camera height h is set to 0.92 and the FOV is set to 68 degrees, n should be 1.43, but in OpenGL the coordinate of the nearest point is not (0,0,-1.43f). So I'm wondering how to fix this problem: how do I map real-world coordinates to OpenGL's coordinate system?
In a rendering, each mesh of the scene is usually transformed by the model matrix, the view matrix and the projection matrix.
Model matrix:
The model matrix defines the location, orientation and relative size of a mesh in the scene. The model matrix transforms the vertex positions of the mesh to world space.
View matrix:
The view matrix describes the direction and position from which the scene is looked at. The view matrix transforms from world space to view (eye) space. In the coordinate system of the viewport, the X-axis points to the right, the Y-axis up and the Z-axis out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
The view matrix can be set up by Matrix.setLookAtM.
Projection matrix:
The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. The projection matrix transforms from view space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates. At perspective projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
The perspective projection matrix can be set up by Matrix.perspectiveM.
You can set up a separate view matrix and a separate projection matrix and finally multiply them. The aspect ratio and the field of view are parameters to Matrix.perspectiveM:
float[] viewM = new float[16];
Matrix.setLookAtM(viewM, 0, 0f, 0f, 0f, 0f, -2000f, 0f, 0f, 1.0f, 0.0f);
float[] prjM = new float[16];
Matrix.perspectiveM(prjM, 0, fovy, aspect, zNear, zFar);
float[] viewPrjM = new float[16];
Matrix.multiplyMM(viewPrjM, 0, prjM, 0, viewM, 0);
Thanks to @Rabbid76's support, I finally figured it out myself.
Figure 1: Real-life scenario
Figure 2: OpenGL scenario
In real life, if we are facing north, our coordinate system would be like this:
x points to the east
y points to the north
z points to the sky
So given a camera held by a user, assuming its height is 1.5 meters and its field of view is 68 degrees, we can deduce that the nearest visible point is located at P(0, 2.223, 0). We can set the angle B to 89 degrees, so segment QP will be the visible ground on the smartphone screen.
How can we map real-life coordinates to the OpenGL coordinate system? I found that we must go through several steps:
Assign the camera position to be the origin (e.g. C in figure 2).
Because OpenGL always draws from (1,1) to (-1,-1), we must assign the distance from C to C' to be 1, so that C' is (0, -1, 0).
Finally, we calculate the ratio between the camera height in real life and the segment CC' in OpenGL, and apply it to the other coordinates.
By doing the steps above, we can map real-world coordinates to the OpenGL coordinate system magically.
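A minimal sketch of that scaling step (the names cameraHeight and scale are illustrative; the point is that dividing real-world distances by the camera height makes the segment CC' exactly 1 unit long):

// Sketch: scale real-world coordinates so the camera height maps to the
// unit-length segment CC' in OpenGL.
float cameraHeight = 1.5f;       // h in meters, from the example above
float scale = 1f / cameraHeight; // real-world meters -> GL units

// e.g. the nearest visible point P(0, 2.223, 0) in meters becomes:
float[] pGl = { 0f * scale, 2.223f * scale, 0f * scale };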

Rotating an image based texture in opengl android

I am drawing an image-based texture using OpenGL in Android and trying to rotate it about its center.
But the result is not as expected: it appears skewed.
The first screen grab is the texture drawn without rotation, and the second one is drawn with a 10-degree rotation.
Code snippet is as below:
mViewWidth = viewWidth;   // viewport width
mViewHeight = viewHeight; // viewport height
float ratio = (float) viewWidth / viewHeight;
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
.....
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 5, 0f, 0f, 0f, 0.0f, 1.0f, 0.0f);
Matrix.setRotateM(mRotationMatrix, 0, 10, 0, 0, 1.0f);
Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, temp, 0, mRotationMatrix, 0);
GLES20.glUniformMatrix4fv(mRotationMatrixHandle, 1, false, mRotationMatrix, 0);
And in shader:
....
" gl_Position = uMVPMatrix*a_position;\n"
....
The black area in the first screen grab is the area of the GLSurfaceView, and the grey area is where I am trying to draw the image.
The image is already at the origin, and I think there is no need to translate it before rotating.
The basic problem is that you're scaling your geometry to adjust for the screen aspect ratio before you apply the rotation.
It might not be obvious that you're actually scaling the geometry. But by calculating the coordinates you use for drawing to adjust for the aspect ratio, you are effectively applying a non-uniform scaling transformation to the geometry. And if you then rotate the result, it gets distorted.
What you need to do is apply the rotation before you scale. This will require some reorganization of your current code. Since you apply the scaling before you pass the coordinates to OpenGL, and then do the rotation in the shader, you can't easily change the order. You either have to:
Apply both transformations, in the proper order, to the input coordinates before you pass them to OpenGL, and remove the rotation from the shader code.
Apply both transformations, in the proper order, in the shader code. To do this, you would not modify the input coordinates to adjust for the aspect ratio, and would pass a scaling factor into the shader instead.
For the first option, applying a 2D rotation in your own code is easy enough, and it looks like you only have 4 vertices, so there is no efficiency concern. Still, the second option is certainly more elegant. So instead of scaling the coordinates in your client code, pass a scaling factor as a uniform into the shader. Then, in the GLSL code, apply the rotation first, and scale the resulting coordinates.
Another option is to build the complete transformation matrix (again applying the individual transformations in the correct order) and pass that matrix into the shader, as sketched below.
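A minimal sketch of that matrix-building option (the 10-degree angle matches the question; aspectRatio is an assumed variable holding width/height). Because column-major matrices transform vertices as v' = M * v, the rightmost factor applies first, so the rotation goes to the right of the aspect scale:

// Sketch: rotate first, then apply the non-uniform aspect scale.
float[] rotM = new float[16];
float[] scaleM = new float[16];
float[] transformM = new float[16];

Matrix.setRotateM(rotM, 0, 10f, 0f, 0f, 1f);          // 10 degrees around Z
Matrix.setIdentityM(scaleM, 0);
Matrix.scaleM(scaleM, 0, 1f / aspectRatio, 1f, 1f);   // aspect correction
Matrix.multiplyMM(transformM, 0, scaleM, 0, rotM, 0); // scale * rotation
// pass transformM to the shader and use it in place of the separate rotation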

OpenGL ES translation and rotation

Wasn't sure whether to post this here or on GameDev, but since it's not really game development I decided to ask here.
I'm trying OpenGL ES 2 on Android, and right now I have a simple setup. I load an object from a .obj file, display it on the screen, and then I can rotate the camera around the object using touch controls. My viewMatrix is set up like this:
double[] dist = {DISTANCE * Math.sin(yawAngle) * Math.abs(Math.cos(pitchRollAngle)),
                 DISTANCE * Math.sin(pitchRollAngle),
                 DISTANCE * Math.cos(yawAngle) * Math.abs(Math.cos(pitchRollAngle))};
Matrix.setLookAtM(viewMatrix, 0, (float) dist[0], (float) dist[1], (float) dist[2], 0f, 0f, 0f, 0f, 1.0f, 0.0f);
And my projection matrix is just this:
Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1, 1, 3, 100);
I set the yaw / pitchRoll angles from touch events. This works OK: when the object is in the center of the screen, I can rotate around it like I should. But then I try to move the object, say, 1 unit along the X axis like this:
float[] modelMatrix = new float[16];
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, 1, 0, 0);
And then multiply all of them like this:
float[] MVPMatrix = new float[16];
Matrix.multiplyMM(MVPMatrix, 0, modelMatrix, 0, viewMatrix, 0);
Matrix.multiplyMM(MVPMatrix, 0, projectionMatrix, 0, MVPMatrix, 0);
The object spins around in place, but I want it to rotate around the (0, 0, 0) point. What am I doing wrong?
This is one of the most general questions about matrix multiplication. In your case you need to first rotate the object and then translate it. If you translate first to X and then rotate, the object will appear at position X but rotated around its own axis. If you rotate first and then translate by X, the object will not appear at X but at the point obtained by rotating X itself. This is the expected result and is how it works.
To understand what happens: a matrix actually consists of 3 base vectors and a center. For the identity matrix the base vectors are (1,0,0), (0,1,0), (0,0,1), with center (0,0,0). When you multiply this matrix with some transformation, the base vectors themselves are transformed. The result is that the matrix carries its own coordinate system, in which the transformations seem "logical". A consequence is that a rotation will never move the object's own center.
I know this all sounds complicated, but it is actually very easy to imagine the effect: think of matrix multiplications as commands given to a character seen from a first-person view. When you say "go forward" (translate), you take a step forward; now "turn 90 degrees", and you turn 90 degrees while staying in the same location; now "go forward" again, and you take another step forward, but this is no longer the same direction you faced at the beginning...
So what you do is say "go forward by 1", then "turn by ANGLE degrees". This results in the object staying in the same location and spinning around its own axis.
What you should do is say "turn toward your goal" (rotate by ANGLE), then "go forward by 1", and maybe even "turn back by -ANGLE" so you face the same direction as you did at the beginning.
I hope this explanation will help you.
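Translated into Android matrix calls, that command sequence might look like the sketch below (the angle value and the Y rotation axis are illustrative). Because rotateM and translateM post-multiply, each call acts in the object's current local frame, so the calls read top to bottom in the same order as the commands:

// Sketch of "turn toward your goal, go forward, turn back".
float angle = 45f; // illustrative
Matrix.setIdentityM(modelMatrix, 0);
Matrix.rotateM(modelMatrix, 0, angle, 0f, 1f, 0f);   // turn toward the goal
Matrix.translateM(modelMatrix, 0, 1f, 0f, 0f);       // go forward by 1
Matrix.rotateM(modelMatrix, 0, -angle, 0f, 1f, 0f);  // turn back to the original facing
// The object ends up at the rotated position with its original orientation.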
I remember having a similar problem in Delphi 2010 when I had to use OpenGL there. The trick was to place the object back at (0, 0, 0), apply the rotation, and then place it back at its original position before drawing the frame.
I can't recall exactly how I did it, nor do I have access to that code anymore as it belonged to my previous employer, but that's what I remember.
The problem was in this line:
Matrix.multiplyMM(MVPMatrix, 0, modelMatrix, 0, viewMatrix, 0);
I just switched the model and view matrices so that they are multiplied the other way around, and it works! Like this:
Matrix.multiplyMM(MVPMatrix, 0, viewMatrix, 0, modelMatrix, 0);
Thanks to @Zubaja for pointing me in the right direction!

How do I rotate a triangle around its vertex located at (0,0,0) in OpenGL 2

I'm trying to make a hexagon out of 6 triangles using rotation and translation. Rather than making multiple translate calls, I want to translate the triangle downward once and then rotate it around the Z axis by 60 degrees six times (my sketch may help with that explanation: http://i.imgur.com/SrrXcA3.jpg). After repeating the drawTriangle() and rotate() calls six times, I should have a hexagon.
Currently my code looks like this:
public void onDrawFrame(GL10 unused)
{
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);   // start by clearing the screen for each frame
    GLES20.glUseProgram(mPerVertexProgramHandle); // tell OpenGL to use the shader program we've compiled

    // Get pointers to the program's variables. Instance variables so we can break apart code
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "uMVPMatrix");
    mPositionHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "aPosition");
    mColorHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "aColor");

    // Prepare the model matrix!
    Matrix.setIdentityM(mModelMatrix, 0);                          // start modelMatrix as identity (no transformations)
    Matrix.translateM(mModelMatrix, 0, 0.0f, -0.577350269f, 0.0f); // shift the triangle down the Y axis so that its top vertex is at (0,0,0)
    drawTriangle(mModelMatrix);                                    // draw the triangle with the given model matrix

    Matrix.rotateM(mModelMatrix, 0, 60f, 0.0f, 0.0f, 1.0f);
    drawTriangle(mModelMatrix);
}
Here's my problem: it appears my triangle isn't rotating around (0,0,0); instead it rotates around the triangle's center (as shown in this picture: http://i.imgur.com/oiLFSCE.png).
Is it possible to rotate the triangle around (0,0,0), where its vertex is located?
Are you really sure that your constant -0.577350269f is the correct value for the triangle center?
Also, your code looks unfinished (you fetch an MVP matrix handle but never use it in the code); could you provide more information?
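As a side note on the rotation itself: Android's Matrix.rotateM and Matrix.translateM post-multiply (M = M * R, M = M * T), and vertices are transformed as v' = M * v, so the last call is applied to the vertices first. To rotate each triangle around (0,0,0) rather than around its own center, issue the rotation before the translation in each iteration. A minimal sketch under that assumption:

// Sketch: draw six triangles rotated around the world origin (0,0,0).
// The translation shifts the top vertex to the origin first; the rotation
// then spins the translated triangle around the origin.
for (int i = 0; i < 6; i++) {
    Matrix.setIdentityM(mModelMatrix, 0);
    Matrix.rotateM(mModelMatrix, 0, 60f * i, 0f, 0f, 1f);
    Matrix.translateM(mModelMatrix, 0, 0f, -0.577350269f, 0f);
    drawTriangle(mModelMatrix);
}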

Android OpenGL ES - use glTranslatef or update vertices directly

Would there be a performance increase if my app modified the object's vertices directly instead of using glTranslatef?
The vertices of the NPC object are set as follows; this allows them to be 1/10th of the screen width because of a previous call to gl.glScalef():
protected float[] vertices = {
    0f, 0f, -1f, // Bottom Left
    1f, 0f, -1f, // Bottom Right
    0f, 1f, -1f, // Top Left
    1f, 1f, -1f  // Top Right
};
At the moment I have a collection of NPC objects which are drawn on the screen; when they move, their X and Y values are updated, and my onDraw uses those values to draw the NPCs in the correct place.
onDraw(GL10 gl){
    for(int i = 0; i < npcs.size(); i++){
        NPC npc = npcs.get(i);
        npc.move();
        translate(npc.x, npc.y);
        npc.draw(gl);
    }
}
translate(x, y) - pushes and pops the matrix while calling gl.glTranslatef(), making calculations relative to the screen size and ratio
npc.draw(gl) - enables client state and draws the arrays
Would there be an increase in performance if the move function changed the vertices of the NPC object directly? For example:
move(){
    // ... do normal movement calculations
    float[] newVertices = {
        x,             y,              z,
        x + npc.width, y,              z,
        x,             y + npc.height, z,
        x + npc.width, y + npc.height, z
    };
    vertexBuffer.put(newVertices);
    vertexBuffer.position(0);
}
I am about to create a short test to see if I can measure any performance increase, but I wanted to ask if anyone had previous experience with this.
The best way is simply to use the translate function, since translating the model-view matrix only means manipulating 3 float values, while the cost of changing the vertex data is directly proportional to the number of vertices you have.
With all due respect, the way you proposed is very inconvenient, and you should stick to matrix manipulation in place of vertex manipulation.
You can refer to this document for more information about matrix changes during translation operations:
http://www.songho.ca/opengl/gl_transform.html
Cheers
Maurizio
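A minimal sketch of the matrix-based approach this answer recommends, using the GL10 API already shown in the question (the drawNpc helper is an illustrative name; the question's translate() presumably does something similar):

// Sketch: position an NPC with the model-view matrix instead of
// rewriting its vertex buffer; only the matrix changes per frame.
void drawNpc(GL10 gl, NPC npc) {
    gl.glPushMatrix();                  // save the current model-view matrix
    gl.glTranslatef(npc.x, npc.y, 0f);  // 3 floats, independent of vertex count
    npc.draw(gl);                       // draws the same static vertex data
    gl.glPopMatrix();                   // restore the matrix for the next NPC
}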
After creating a test case in my current OpenGL app, there seems to be no performance increase when changing the vertices directly over using gl.glTranslatef().
As Maurizio Benedetti pointed out, you will start to see a difference only when your vertex count is sufficiently large.
