Let's say I have an OpenGL 4x4 matrix I use for some transformation. Inside my call I use "translate" many times, but at the end I want to "wrap" that translation around a specific size. In 2D terms: if I translate X by 210 and want to wrap it into a "50 width" box, the result should be a translation of 10 (210 % 50).
Since I need to convert the coordinates into screen pixels, I initialize my matrix this way:
private float[] mScreenMatrix = {
    2f / width,  0f,           0f, 0f,  // column 0: x scale, pixels -> [-1, 1]
    0f,          -2f / height, 0f, 0f,  // column 1: y scale, flipped so pixel y grows downwards
    0f,          0f,           0f, 0f,  // column 2: z collapsed to 0
    -1f,         1f,           0f, 1f   // column 3: moves the origin to the top-left corner
};
So, if width is 50 and I call Matrix.translateM(mScreenMatrix, 0, 210, 0, 0), how can I then "wrap" this matrix so the final translation on x is just 10?
You can't (without doing extra work), because that wrap introduces modulo arithmetic (or a toroidal topology), which doesn't match the way OpenGL's NDC space (which roughly translates to the volume you can see in the window) is laid out. When a primitive reaches outside NDC space it gets clipped, so that what remains is within NDC space.
So in order to get a toroidal topology you have to duplicate the primitives that get clipped at the NDC boundary and reintroduce them at the opposite end of NDC space. The only ways to do this are either explicit submission of the extra geometry or using a geometry shader to create it in situ.
If you are using an orthographic projection, you can render to a texture and then wrap that texture as you wish in a second render pass. With a perspective projection the same method still works, but the result will look unrealistic.
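If all you actually need is for the object as a whole to reappear at the wrapped position, and you don't care about primitives that straddle the wrap boundary, you can apply the modulo to the translation amount before it ever reaches the matrix. A minimal sketch using android.opengl.Matrix, where tx (the accumulated x-translation) and boxWidth are illustrative names:
// Wrap the accumulated x-translation into [0, boxWidth) before applying it.
// Note: Java's % can return negative values, hence the correction below.
float wrappedX = tx % boxWidth;           // 210 % 50 == 10
if (wrappedX < 0f) wrappedX += boxWidth;
Matrix.translateM(mScreenMatrix, 0, wrappedX, 0f, 0f);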
Related
I am developing a 2D game where I can rotate my view using the rotation sensor and view different textures on screen.
I am scattering all the textures using this method:
public void position(ShaderProgram program, float[] rotationMatrix, float[] projectionMatrix, float longitude, float latitude, float radius)
{
    this.radius = radius;
    viewMat = new float[MATRIX_SIZE];
    mvpMatrix = new float[MATRIX_SIZE];
    // correct the coordinate system to fit landscape orientation
    SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, viewMat);
    // correct the axes so that Y points to the sky and Z to the front
    Matrix.rotateM(viewMat, 0, -90f, 1f, 0f, 0f);
    // first rotation - longitude
    Matrix.rotateM(viewMat, 0, longitude, 0f, 1f, 0f);
    // second rotation - latitude
    Matrix.rotateM(viewMat, 0, latitude, 1f, 0f, 0f);
    // controls the viewing distance of the texture (currently only the z translation is used)
    Matrix.translateM(viewMat, 0, 0f, 0f, radius);
    // multiply the adjusted view matrix with the projection matrix
    Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, viewMat, 0);
    // send the mvp matrix to the shader
    GLES20.glUniformMatrix4fv(program.getMatrixLocation(), 1, false, mvpMatrix, 0);
}
However, when I render a large number of textures the framerate becomes very laggy, so I thought about using culling.
How should I perform the culling test when I have a different view matrix for every texture?
What I mean is: how do I check whether the matrix representing where I'm viewing right now intersects with the matrix representing each texture, so I can decide whether to draw it?
There are many ways of doing this, but each of them will need more than just a matrix. A matrix alone (assuming the center of the object is at 0,0 before any matrix is applied) will not handle cases where you may see only a part of the object.
You may define the boundaries of the original object with 8 points, as for a cube. If you drew these 8 points with the same matrix as the object, they would appear around the object, defining a box that bounds the object itself.
These points can then be multiplied with your resulting matrix (the whole MVP matrix), which projects them into the drawable part of OpenGL's coordinate system. Then you only need to check whether any of these points is inside [-1, 1] on every axis; if so, you must draw the object. So x, y and z must each be between -1 and 1.
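A minimal sketch of that test, assuming a column-major MVP matrix as produced by android.opengl.Matrix and the 8 box corners given in object space (the method and parameter names are illustrative):
// Returns true if any of the 8 bounding-box corners lands inside the
// NDC cube after projection; corners holds 8 {x, y, z} points.
boolean anyCornerVisible(float[] mvpMatrix, float[][] corners) {
    float[] clip = new float[4];
    for (float[] c : corners) {
        float[] point = { c[0], c[1], c[2], 1f };
        Matrix.multiplyMV(clip, 0, mvpMatrix, 0, point, 0);
        float w = clip[3];
        if (w <= 0f) continue; // corner is behind the camera
        float x = clip[0] / w, y = clip[1] / w, z = clip[2] / w;
        if (x >= -1f && x <= 1f && y >= -1f && y <= 1f && z >= -1f && z <= 1f) {
            return true;
        }
    }
    return false; // not sufficient on its own -- see the update below
}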
Update:
Actually, that alone will not be enough, since the shapes may intersect even when all 8 points are outside those coordinates. You will need a proper algorithm to find the intersection of the 2 shapes...
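One common conservative refinement (my suggestion, not part of the answer above) is to take the axis-aligned bounds of the 8 projected corners in NDC and test whether that box overlaps the [-1, 1] square; this can over-draw (false positives) but removes the false negative just described, as long as corners behind the camera are handled separately:
// ndcCorners holds the 8 corners already divided by w, as {x, y, z} each.
boolean boundsOverlapNdc(float[][] ndcCorners) {
    float minX = Float.MAX_VALUE, maxX = -Float.MAX_VALUE;
    float minY = Float.MAX_VALUE, maxY = -Float.MAX_VALUE;
    for (float[] p : ndcCorners) {
        minX = Math.min(minX, p[0]); maxX = Math.max(maxX, p[0]);
        minY = Math.min(minY, p[1]); maxY = Math.max(maxY, p[1]);
    }
    // overlap test between the corners' bounding box and the NDC square
    return maxX >= -1f && minX <= 1f && maxY >= -1f && minY <= 1f;
}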
I am drawing an image-based texture using OpenGL in Android and trying to rotate it about its center.
But the result is not as expected; it appears skewed.
The first screen grab is the texture drawn without rotation, and the second is drawn with a 10-degree rotation.
Code snippet is as below:
mViewWidth = viewWidth;   // viewport width
mViewHeight = viewHeight; // viewport height
float ratio = (float) viewWidth / viewHeight;
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
.....
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 5, 0f, 0f, 0f, 0.0f, 1.0f, 0.0f);
Matrix.setRotateM(mRotationMatrix, 0, 10, 0, 0, 1.0f);
Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, temp, 0, mRotationMatrix, 0);
GLES20.glUniformMatrix4fv(mRotationMatrixHandle, 1, false, mRotationMatrix, 0);
And in shader:
....
" gl_Position = uMVPMatrix*a_position;\n"
....
The black area in the first screen grab is the area of GLSurfaceView and the grey area is the area where I am trying to draw the image.
The image is already at the origin, and I think there is no need to translate it before rotating.
The basic problem is that you're scaling your geometry to adjust for the screen aspect ratio before you apply the rotation.
It might not be obvious that you're actually scaling the geometry. But by calculating the coordinates you use for drawing to adjust for the aspect ratio, you are effectively applying a non-uniform scaling transformation to the geometry. And if you then rotate the result, it will get distorted.
What you need to do is apply the rotation before you scale. This will require some reorganization of your current code. Since you apply the scaling before you pass the coordinates to OpenGL, and then do the rotation in the shader, you can't easily change the order. You either have to:
1. Apply both transformations, in the proper order, to the input coordinates before you pass them to OpenGL, and remove the rotation from the shader code.
2. Apply both transformations, in the proper order, in the shader code. To do this, you would not modify the input coordinates for the aspect ratio; instead, pass a scaling factor into the shader.
For the first option, applying a 2D rotation in your own code is easy enough, and it looks like you only have 4 vertices, so there is no efficiency concern. Still, the second option is certainly more elegant. So instead of scaling the coordinates in your client code, pass a scaling factor as a uniform into the shader. Then, in the GLSL code, apply the rotation first, and scale the resulting coordinates.
Another option is to build the complete transformation matrix (again based on applying the individual transformations in the correct order) and pass that matrix into the shader.
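A minimal sketch of that last option with android.opengl.Matrix, assuming a 10-degree z-rotation as in the question; aspect and mMatrixHandle are illustrative names. With column vectors, the matrix written last in a multiplication is applied to the vertices first, so the rotation goes on the right:
float[] scaleM = new float[16];
float[] rotM   = new float[16];
float[] modelM = new float[16];
Matrix.setIdentityM(scaleM, 0);
Matrix.scaleM(scaleM, 0, 1f / aspect, 1f, 1f); // aspect-ratio correction
Matrix.setRotateM(rotM, 0, 10f, 0f, 0f, 1f);   // rotate about z
// modelM = scaleM * rotM: vertices are rotated first, then scaled
Matrix.multiplyMM(modelM, 0, scaleM, 0, rotM, 0);
GLES20.glUniformMatrix4fv(mMatrixHandle, 1, false, modelM, 0);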
I'm trying to make 2D graphics for my Android app consisting of six thin rectangles, each taking up about 1/6th of the screen's width and all of its height. I'm not sure of the right way to determine the visible bounds of the x and y OpenGL coordinates on screen. Eventually I will need to write logic that tests which of the 6 rectangles a touch event occurs in, so I have been trying to solve this problem by remapping OpenGL's coordinate plane onto the device's screen coordinate plane (where the origin (0,0) is at the top left of the screen instead of the middle).
I declare one of my six rectangles like so:
private float vertices1[] = {
2.0f, 10.0f, 0.0f, // 0, Top Left
2.0f, -1.0f, 0.0f, // 1, Bottom Left
4.0f, -1.0f, 0.0f, // 2, Bottom Right
4.0f, 10.0f, 0.0f, // 3, Top Right
};
but since I'm not sure what the visible limits on the x and y axes are (in OpenGL's coordinate system), I have no concrete way of knowing what vertices my rectangle needs to be instantiated with to occupy 1/6th of the display. What's the ideal way to do this?
I've tried approaches such as using glOrthof() to remap OpenGL's coordinates into easy-to-work-with device screen coordinates:
gl.glViewport(0, 0, width, height);
// Select the projection matrix
gl.glMatrixMode(GL10.GL_PROJECTION);
// Reset the projection matrix
gl.glLoadIdentity();
// Calculate the aspect ratio of the window
GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
gl.glOrthof(0.0f, width, height, 0.0f, -1.0f, 5.0f);
// Select the modelview matrix
gl.glMatrixMode(GL10.GL_MODELVIEW);
// Reset the modelview matrix
gl.glLoadIdentity();
but when I do, my rectangle disappears completely.
You certainly don't want to use a perspective projection for 2D graphics. That just doesn't make much sense. A perspective projection is for... well, creating a perspective projection, which is only useful if your objects are actually placed in 3D space.
Even worse, you have two calls to set up a perspective matrix:
GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
gl.glOrthof(0.0f, width, height, 0.0f, -1.0f, 5.0f);
While that's legal, it rarely makes sense. What essentially happens if you do this is that both projections are applied in succession. So the first thing to do is get rid of the gluPerspective() call.
To place your 6 rectangles, you have a few options. Almost the easiest one is to not apply any transformations at all. This means that you will specify your input coordinates in normalized device coordinates (aka NDC), which is a range of [-1.0, 1.0] in both the x- and y-direction. So for 6 rectangles rendered side by side, you would use a y-range of [-1.0, 1.0] for all the rectangles, and an x-range of [-1.0, -2.0/3.0] for the first, [-2.0/3.0, -1.0/3.0] for the second, etc.
Another option is that you use an orthographic projection that makes specifying the rectangles even more convenient. For example, a range of [0.0, 6.0] for x and [0.0, 1.0] for y would make it particularly easy:
gl.glOrthof(0.0f, 6.0f, 0.0f, 1.0f, -1.0f, 1.0f);
Then all rectangles have a y-range of [0.0, 1.0], the first rectangle has an x-range of [0.0, 1.0], the second rectangle [1.0, 2.0], etc.
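A minimal sketch of that setup in ES 1.x, matching the question's code style; the loop index i and the triangle-strip vertex order are illustrative:
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0.0f, 6.0f, 0.0f, 1.0f, -1.0f, 1.0f); // x: [0, 6], y: [0, 1]
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();

// vertices of rectangle i (i = 0..5), as a triangle strip
float[] vertices = {
    i,      0f, 0f, // bottom left
    i + 1f, 0f, 0f, // bottom right
    i,      1f, 0f, // top left
    i + 1f, 1f, 0f, // top right
};
This mapping also makes the touch test trivial: the rectangle index for a touch is simply (int) (touchX / screenWidth * 6).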
BTW, if you're just starting with OpenGL, I would pass on ES 1.x, and directly learn ES 2.0. ES 1.x is a legacy API at this point, and I wouldn't use it for any new development.
Would there be a performance increase if my app modified the objects' vertices instead of using glTranslatef?
The vertices of the NPC object are set as follows; this allows them to be 1/10th of the screen width because of a previous call to gl.glScalef():
protected float[] vertices = {
0f, 0f, -1f, //Bottom Left
1f, 0f, -1f, //Bottom Right
0f, 1f, -1f, //Top Left
1f, 1f, -1f //Top Right
};
At the moment I have a collection of NPC objects which are drawn on the screen; when they move, their X and Y values are updated, and my onDraw accesses them to draw the NPCs in the correct place.
public void onDraw(GL10 gl) {
    for (int i = 0; i < npcs.size(); i++) {
        NPC npc = npcs.get(i);
        npc.move();
        translate(npc.x, npc.y);
        npc.draw(gl);
    }
}
translate(x, y) - pushes and pops the matrix around a call to gl.glTranslatef(), doing its calculations relative to the screen size and ratio
npc.draw(gl) - enables the client state and draws the arrays
Would there be an increase in performance if the move function changed the vertices of the NPC object directly? For example:
move() {
    // ... do normal movement calculations
    float[] newVertices = {
        x,             y,              z,
        x + npc.width, y,              z,
        x,             y + npc.height, z,
        x + npc.width, y + npc.height, z
    };
    vertexBuffer.put(newVertices);
    vertexBuffer.position(0);
}
I am about to create a short test to see if I can see any performance increase, but I wanted to ask if anyone had any previous experience with this.
The best way is simply to use the translate function, since translating the modelview matrix consists of manipulating 3 float values, while the cost of changing the vertex data is directly proportional to the number of vertices you have.
With all due respect, the way you proposed is very inconvenient, and you should stick to matrix manipulation instead of vertex manipulation.
You can refer to this document for more information about matrix changes during translation operations:
http://www.songho.ca/opengl/gl_transform.html
Cheers
Maurizio
After creating a test state in my current OpenGL app, there seems to be no performance increase when changing the vertices directly over using gl.glTranslatef().
Like Maurizio Benedetti pointed out, you will start to see a difference only when your vertex count is sufficiently large.
I am trying to understand the basic concepts of the coordinate system in OpenGL, so I have been building a test application from guides online.
Presently I have drawn a simple square to the screen, using the simple coordinates:
-1.0f, 1.0f, 0.0f, // 0, Top Left
-1.0f, -1.0f, 0.0f, // 1, Bottom Left
1.0f, -1.0f, 0.0f, // 2, Bottom Right
1.0f, 1.0f, 0.0f, // 3, Top Right
In my application I run the following code:
GLU.gluPerspective(gl, 45.0f, (float) width / (float) height, 0.1f, 100.0f);
My basic understanding is that this code sets the field of view to a 45-degree angle and the aspect ratio to the window's width-to-height ratio.
Another thing I am doing is setting the viewing position 4 units back on the Z axis: gl.glTranslatef(0, 0, -4);
This is what the result looks like in Landscape...
And in Portrait...
My questions are:
How does the coordinate system work? How many pixels does one unit represent, and how do changing the orientation and the width-to-height ratio affect the equation?
If I wanted to draw a square the size of the screen, with a 45-degree field of view and a viewing position of z = -4, how does one figure out the required width and height in units?
I'll try to answer the best I can.
There wouldn't be any reason to change the width-to-height ratio or the 45-degree angle. Doing it the way you have it keeps things from being stretched horizontally or vertically in an unusual way. Because you are using a perspective view, you have 3D space with depth, as opposed to an orthographic view where there is no depth. In doing glTranslatef(0, 0, -4) what you've actually done is change the MODELVIEW matrix, moving it 4 in the negative z direction, presumably before actually drawing the square. By default, the "camera" sits at (0, 0, 0) with Y (up) as the upward direction.
You may be able to translate 3D space to pixels, but with a perspective view I'm not at all sure you'd really want or need to. A 2D orthographic view would be a different story, though, as many people use OpenGL for 2D games as well. If you want a square exactly the size of the screen, orthographic is probably the way to go, and a few searches should turn up the pixel-density-to-2D-space comparison.
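That said, the second question can be answered with basic trigonometry (this derivation is my addition, not part of the answer above): for a symmetric perspective frustum, the visible height at distance d in front of the camera is 2 * d * tan(fovY / 2), and the visible width is that height times the aspect ratio:
float d = 4.0f;                        // from gl.glTranslatef(0, 0, -4)
float aspect = (float) width / height; // as passed to gluPerspective
double fovY = Math.toRadians(45.0);
float visibleHeight = (float) (2.0 * d * Math.tan(fovY / 2.0));
float visibleWidth  = visibleHeight * aspect;
// a quad spanning [-visibleWidth/2, visibleWidth/2] by
// [-visibleHeight/2, visibleHeight/2] at z = -4 exactly fills the screen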