Android OpenGL ES - use glTranslatef or update vertices directly

Would there be a performance increase if my app were to modify the objects' vertices directly instead of using glTranslatef?
The vertices of the NPC object are set as follows; a previous call to gl.glScalef() makes them 1/10th of the screen width:
protected float[] vertices = {
    0f, 0f, -1f, // Bottom Left
    1f, 0f, -1f, // Bottom Right
    0f, 1f, -1f, // Top Left
    1f, 1f, -1f  // Top Right
};
At the moment I have a collection of NPC objects which are drawn on the screen. When they move, their X and Y values are updated, and my onDraw accesses these to draw the NPCs in the correct place.
public void onDraw(GL10 gl) {
    for (int i = 0; i < npcs.size(); i++) {
        NPC npc = npcs.get(i);
        npc.move();
        translate(npc.x, npc.y);
        npc.draw(gl);
    }
}
translate(x, y) - pushes and pops the matrix around a call to gl.glTranslatef(), performing its calculations relative to the screen size and ratio
npc.draw(gl) - enables the client state and draws the vertex arrays
Would there be an increase in performance if the move function changed the vertices of the NPC object directly? For example:
move() {
    // ... do normal movement calculations
    float[] newVertices = {
        x, y, z,
        x + npc.width, y, z,
        x, y + npc.height, z,
        x + npc.width, y + npc.height, z
    };
    vertexBuffer.put(newVertices);
    vertexBuffer.position(0);
}
I am about to create a short test to see if I can measure any performance increase, but I wanted to ask whether anyone has prior experience with this.

The best way is simply to use the translate function: translating the model-view matrix means manipulating 3 float values, while changing the vertex information directly is work proportional to the number of vertices you have.
With all due respect, the way you proposed is very inconvenient; you should stick to matrix manipulation instead of vertex manipulation.
You can refer to this document for more information about matrix changes during translation operations:
http://www.songho.ca/opengl/gl_transform.html
Cheers
Maurizio
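For a concrete sense of why the matrix route scales better: a translation only rewrites the matrix's translation column (3 floats, whatever the mesh size), while per-vertex updates rewrite 3 floats per vertex. The standalone sketch below (plain Java, no GL calls; it assumes the column-major layout used by OpenGL's fixed-function matrices) checks that both routes land the quad in the same place:

```java
public class TranslateVsVertices {
    // Apply a column-major 4x4 matrix to the point (x, y, z, 1).
    static float[] transform(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[4] * y + m[8]  * z + m[12],
            m[1] * x + m[5] * y + m[9]  * z + m[13],
            m[2] * x + m[6] * y + m[10] * z + m[14]
        };
    }

    public static void main(String[] args) {
        float[] quad = { 0f, 0f, -1f, 1f, 0f, -1f, 0f, 1f, -1f, 1f, 1f, -1f };
        float tx = 3f, ty = 2f;

        // Route A: write the translation into the matrix (3 float writes,
        // regardless of how many vertices the mesh has).
        float[] m = { 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1 };
        m[12] = tx;
        m[13] = ty;

        // Route B: rewrite every vertex (3 * N float writes).
        float[] moved = quad.clone();
        for (int i = 0; i < moved.length; i += 3) {
            moved[i] += tx;
            moved[i + 1] += ty;
        }

        // Both routes agree on the final positions.
        for (int i = 0; i < quad.length; i += 3) {
            float[] p = transform(m, quad[i], quad[i + 1], quad[i + 2]);
            if (p[0] != moved[i] || p[1] != moved[i + 1] || p[2] != moved[i + 2])
                throw new AssertionError("routes disagree at vertex " + i / 3);
        }
        System.out.println("matrix translate == per-vertex update");
    }
}
```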

After creating a test state in my current OpenGL app, there seems to be no performance increase when changing the vertices directly over using gl.glTranslatef().

As Maurizio Benedetti pointed out, you will start to see a difference only when your vertex count is sufficiently large.

Related

find if matrices intersect for culling test?

I am developing a 2D game where I can rotate my view using the rotation sensor and view different textures on screen.
I am scattering all the textures using this method:
public void position(ShaderProgram program, float[] rotationMatrix, float[] projectionMatrix, float longitude, float latitude, float radius)
{
    this.radius = radius;
    viewMat = new float[MATRIX_SIZE];
    mvpMatrix = new float[MATRIX_SIZE];
    // correct coordinate system to fit landscape orientation
    SensorManager.remapCoordinateSystem(rotationMatrix, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, viewMat);
    // correct the axes so that the Y axis points to the sky and Z to the front
    Matrix.rotateM(viewMat, 0, -90f, 1f, 0f, 0f);
    // first rotation - longitude
    Matrix.rotateM(viewMat, 0, longitude, 0f, 1f, 0f);
    // second rotation - latitude
    Matrix.rotateM(viewMat, 0, latitude, 1f, 0f, 0f);
    // controls the viewing distance of the texture (currently only z translation is used)
    Matrix.translateM(viewMat, 0, 0f, 0f, radius);
    // multiply the adjusted view matrix with the projection matrix
    Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, viewMat, 0);
    // send the mvp matrix to the shader
    GLES20.glUniformMatrix4fv(program.getMatrixLocation(), 1, false, mvpMatrix, 0);
}
However, when I render a large number of textures, the framerate becomes very laggy, so I thought about using culling.
How should I perform the culling test when I have a different view matrix for every texture?
What I mean is: how do I check whether the matrix representing where I'm viewing right now intersects with the matrix representing each texture, so that I can decide whether to draw it?
There are many ways of doing this, but each of them needs more than just a matrix. A matrix alone (assuming the center of the object is at 0,0 before any matrix is applied) will not handle cases where you may see only a part of the object.
You can define the boundaries of the original object with 8 points, like the corners of a cube. If you drew these 8 points with the same matrix as the object, they would appear around the object, defining a surface that boxes the object itself.
These points can then be multiplied by your resulting matrix (the whole MVP matrix), which projects them into the part of the coordinate system OpenGL can draw. Now you only need to check whether any of these points lies inside [-1, 1] on every axis - that is, whether x, y and z are all between -1 and 1 - and if so, you must draw the object.
Update:
Actually that will not be enough, as an intersection may happen even if all of the 8 points are outside those coordinates. You will need a proper algorithm to find the intersection of the two shapes...
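As a first, conservative pass, the 8-corner idea can be sketched in plain Java (no Android dependencies; it assumes a column-major MVP like android.opengl.Matrix produces, and applies the perspective divide before the [-1, 1] check). As the update notes, a corner inside NDC means "may be visible", while all corners outside does not by itself prove invisibility:

```java
public class CornerCull {
    // Returns true if any of the 8 bounding-box corners (packed x,y,z) lands
    // inside NDC after projection by the column-major mvp matrix.
    static boolean anyCornerInNdc(float[] mvp, float[] corners) {
        for (int i = 0; i < corners.length; i += 3) {
            float x = corners[i], y = corners[i + 1], z = corners[i + 2];
            float cx = mvp[0] * x + mvp[4] * y + mvp[8]  * z + mvp[12];
            float cy = mvp[1] * x + mvp[5] * y + mvp[9]  * z + mvp[13];
            float cz = mvp[2] * x + mvp[6] * y + mvp[10] * z + mvp[14];
            float cw = mvp[3] * x + mvp[7] * y + mvp[11] * z + mvp[15];
            if (cw <= 0f) continue; // corner is behind the eye
            float nx = cx / cw, ny = cy / cw, nz = cz / cw;
            if (nx >= -1 && nx <= 1 && ny >= -1 && ny <= 1 && nz >= -1 && nz <= 1)
                return true;
        }
        return false;
    }

    public static void main(String[] args) {
        float[] identity = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
        // Unit cube centered at the origin: inside NDC under the identity MVP.
        float[] cube = {
            -.5f,-.5f,-.5f,  .5f,-.5f,-.5f, -.5f,.5f,-.5f,  .5f,.5f,-.5f,
            -.5f,-.5f, .5f,  .5f,-.5f, .5f, -.5f,.5f, .5f,  .5f,.5f, .5f
        };
        System.out.println(anyCornerInNdc(identity, cube));

        // Same cube shifted far to the right: every corner outside NDC.
        float[] far = cube.clone();
        for (int i = 0; i < far.length; i += 3) far[i] += 10f;
        System.out.println(anyCornerInNdc(identity, far));
    }
}
```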

Finding/Remapping bounds of OpenGL ES coordinate plane

I'm trying to make 2D graphics for my Android app consisting of six thin rectangles, each taking up about 1/6th of the screen in width and the full screen height. I'm not sure of the right way to determine the bounds of OpenGL's x and y coordinate plane on screen. Eventually I will need to write logic that tests which of the 6 rectangles a touch event occurs in, so I have been trying to solve this by remapping OpenGL's coordinate plane onto the device's screen coordinate plane (where the origin (0,0) is at the top left of the screen instead of the middle).
I declare one of my six rectangles like so:
private float vertices1[] = {
    2.0f, 10.0f, 0.0f, // 0, Top Left
    2.0f, -1.0f, 0.0f, // 1, Bottom Left
    4.0f, -1.0f, 0.0f, // 2, Bottom Right
    4.0f, 10.0f, 0.0f  // 3, Top Right
};
but since I'm not sure what the visible limits are on the x and y axes (in the OpenGL coordinate system), I have no concrete way of knowing what vertices my rectangle needs to be instantiated with to occupy 1/6th of the display. What's the ideal way to do this?
I've tried approaches such as using glOrthof() to remap OpenGL's coordinates into easy-to-work-with device screen coordinates:
gl.glViewport(0, 0, width, height);
// Select the projection matrix
gl.glMatrixMode(GL10.GL_PROJECTION);
// Reset the projection matrix
gl.glLoadIdentity();
// Calculate the aspect ratio of the window
GLU.gluPerspective(gl, 45.0f,(float) width / (float) height,0.1f, 100.0f);
gl.glOrthof(0.0f,width,height, 0.0f, -1.0f, 5.0f);
// Select the modelview matrix
gl.glMatrixMode(GL10.GL_MODELVIEW);
// Reset the modelview matrix
gl.glLoadIdentity();
but when I do, my rectangle disappears completely.
You certainly don't want to use a perspective projection for 2D graphics. That just doesn't make much sense. A perspective projection is for... well, creating a perspective projection, which is only useful if your objects are actually placed in 3D space.
Even worse, you have two calls to set up a perspective matrix:
GLU.gluPerspective(gl, 45.0f,(float) width / (float) height,0.1f, 100.0f);
gl.glOrthof(0.0f,width,height, 0.0f, -1.0f, 5.0f);
While that's legal, it rarely makes sense. What essentially happens if you do this is that both projections are applied in succession. So the first thing to do is get rid of the gluPerspective() call.
To place your 6 rectangles, you have a few options. Almost the easiest one is to not apply any transformations at all. This means that you will specify your input coordinates in normalized device coordinates (aka NDC), which is a range of [-1.0, 1.0] in both the x- and y-direction. So for 6 rectangles rendered side by side, you would use a y-range of [-1.0, 1.0] for all the rectangles, and an x-range of [-1.0, -2.0/3.0] for the first, [-2.0/3.0, -1.0/3.0] for the second, etc.
Another option is that you use an orthographic projection that makes specifying the rectangles even more convenient. For example, a range of [0.0, 6.0] for x and [0.0, 1.0] for y would make it particularly easy:
gl.glOrthof(0.0f, 6.0f, 0.0f, 1.0f, -1.0f, 1.0f);
Then all rectangles have a y-range of [0.0, 1.0], the first rectangle has a x-range of [0.0, 1.0], the second rectangle [1.0, 2.0], etc.
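With that [0.0, 6.0] x-range ortho, the touch-event test mentioned in the question becomes a one-liner: screen x maps linearly to a rectangle index. A small sketch (the helper name is hypothetical):

```java
public class TouchToRect {
    // Maps a touch x coordinate in pixels to one of the 6 rectangle indices,
    // assuming the rectangles evenly split the screen width left to right.
    static int rectIndex(float touchX, float screenWidth) {
        int i = (int) (6f * touchX / screenWidth);
        return Math.max(0, Math.min(5, i)); // clamp the right screen edge
    }

    public static void main(String[] args) {
        System.out.println(rectIndex(0f, 1080f));    // left edge -> rect 0
        System.out.println(rectIndex(540f, 1080f));  // centre -> rect 3
        System.out.println(rectIndex(1080f, 1080f)); // right edge, clamped to 5
    }
}
```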
BTW, if you're just starting with OpenGL, I would pass on ES 1.x, and directly learn ES 2.0. ES 1.x is a legacy API at this point, and I wouldn't use it for any new development.

Wrap OpenGL Matrix translation

Let's say I have an OpenGL 4x4 matrix I use for some transformation. Inside my call I use "translate" many times, but at the end I want to "wrap" that translation around a specific size. In 2D terms: say I translate X by 210, then I want to wrap that translation into a "50 width" box, resulting in a translation of 10 (210 % 50).
Since I need to convert the coordinates into screen pixels, I initialize my matrix this way:
private float[] mScreenMatrix = {
    2f / width, 0f,           0f, 0f,
    0f,         -2f / height, 0f, 0f,
    0f,         0f,           0f, 0f,
    -1f,        1f,           0f, 1f
};
So, if width is 50 and I call Matrix.translateM(210, 0, 0), how can I then "wrap" this matrix so the final translation on x is just 10?
You can't (without doing extra work), because that wrap introduces modulo arithmetic (or a toroidal topology), which doesn't match the way OpenGL's NDC space (which roughly translates to the volume you can see in the window) is laid out. When a primitive reaches out of NDC space, it gets clipped so that what remains is within NDC space.
So in order to get a toroidal topology you have to duplicate the primitives that get clipped by NDC and reintroduce them at the opposite end of NDC. The only ways to do this are to explicitly submit the extra geometry or to use the geometry shader to create it in situ.
If you are using an orthographic projection, you can render to a texture and then wrap it as you wish in a second render pass. With a perspective projection you can still use the same method, but the result will look unrealistic.
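If all you actually need is for the final x translation to wrap (and you don't need clipped geometry to reappear on the other side), a cheap alternative is to accumulate the translation as a plain float, wrap it with a floored modulo, and only then write it into the matrix. A minimal sketch, assuming the column-major layout android.opengl.Matrix uses (translation in elements 12-14); note this wraps only the matrix value, not the geometry:

```java
public class WrapTranslate {
    // Floored modulo: keeps the result in [0, width) even for negative t
    // (Java's % operator alone returns negative values for negative t).
    static float wrap(float t, float width) {
        return ((t % width) + width) % width;
    }

    public static void main(String[] args) {
        float tx = 0f;
        tx += 210f;                    // many translate calls, accumulated
        float wrapped = wrap(tx, 50f); // 210 wraps into the 50-wide box

        float[] m = { 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 };
        m[12] = wrapped;               // write the wrapped x translation
        System.out.println(m[12]);
        System.out.println(wrap(-30f, 50f)); // negative input also wraps
    }
}
```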

How do I rotate a triangle around its vertex located at (0,0,0) in OpenGL 2

I'm trying to make a hexagon with 6 triangles using rotation and translation. Rather than making multiple translate calls, I instead want to translate the triangle downward once and rotate around the Z axis at 60 degrees six times (my sketch may help with that explanation: http://i.imgur.com/SrrXcA3.jpg). After repeating the drawTriangle() and rotate() methods six times, I should have a hexagon.
Currently my code looks like this:
public void onDrawFrame(GL10 unused)
{
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT); // start by clearing the screen for each frame
    GLES20.glUseProgram(mPerVertexProgramHandle); // tell OpenGL to use the shader program we've compiled
    // Get pointers to the program's variables. Instance variables so we can break apart code
    mMVPMatrixHandle = GLES20.glGetUniformLocation(mPerVertexProgramHandle, "uMVPMatrix");
    mPositionHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "aPosition");
    mColorHandle = GLES20.glGetAttribLocation(mPerVertexProgramHandle, "aColor");
    // Prepare the model matrix!
    Matrix.setIdentityM(mModelMatrix, 0); // start modelMatrix as identity (no transformations)
    Matrix.translateM(mModelMatrix, 0, 0.0f, -0.577350269f, 0.0f); // shift the triangle down the y axis so that its top point is at (0,0,0)
    drawTriangle(mModelMatrix); // draw the triangle with the given model matrix
    Matrix.rotateM(mModelMatrix, 0, 60f, 0.0f, 0.0f, 1.0f);
    drawTriangle(mModelMatrix);
}
Here's my problem: it appears my triangle isn't rotating around (0,0,0), but instead rotates around the triangle's center (as shown in this picture: http://i.imgur.com/oiLFSCE.png).
Is it possible to rotate the triangle around (0,0,0), where its vertex is located?
Are you really sure that your constant -0.577350269f is the correct value for the triangle's center?
Also, your code looks unfinished (you use an MVP handle but never use it in the code); could you provide more information?
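One likely cause, sketched here as an assumption about the intended behavior: android.opengl.Matrix calls post-multiply, so the code's setIdentity -> translate -> rotate sequence yields M = T * R, which rotates the original vertices and then shifts them down, moving the apex. Rebuilding the matrix each iteration as setIdentity -> rotate(i * 60) -> translate yields M = R * T instead; since T puts the apex at the origin and rotation fixes the origin, the apex stays pinned at (0,0,0) for all six triangles. A standalone 2D check of that math (plain Java, no GL):

```java
public class HexagonApex {
    static final float H = 0.577350269f; // the question's y offset

    // Apply rotate(deg)-then-translate(0, -H) to a point: p' = R * (T * p).
    static float[] rotateThenTranslate(float deg, float x, float y) {
        float ty = y - H; // translate first (T * p)
        double r = Math.toRadians(deg);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[] { c * x - s * ty, s * x + c * ty }; // then rotate
    }

    public static void main(String[] args) {
        // The triangle's top vertex starts at (0, H); T moves it to the
        // origin, so every rotation about z leaves it there.
        for (int i = 0; i < 6; i++) {
            float[] apex = rotateThenTranslate(i * 60f, 0f, H);
            if (Math.abs(apex[0]) > 1e-6f || Math.abs(apex[1]) > 1e-6f)
                throw new AssertionError("apex drifted at " + (i * 60) + " degrees");
        }
        System.out.println("apex stays at (0,0) for all 6 triangles");
    }
}
```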

Pixel based collision detection problem with OpenGLES 2.0 under Android

This is my first post here, so apologies for any blunders.
I'm developing a simple action game using OpenGL ES 2.0 and Android 2.3. The game framework I'm currently working on is based on two-dimensional sprites which exist in a three-dimensional world. My world entities possess information such as a position within the imaginary world, a rotation value in the form of a float[] matrix, an OpenGL texture handle, as well as an Android Bitmap handle (I'm not sure whether the latter is necessary, as I'm doing the rasterisation with the OpenGL machine, but for the time being it is there for my convenience). That is briefly the background; now to the problematic issue.
Presently I'm stuck with pixel-based collision detection, as I'm not sure which object (the OGL texture or the Android Bitmap) I need to sample. I've already tried to sample the Android Bitmap, but it didn't work for me at all - many run-time crashes related to reading outside of the bitmap. To be able to read the pixels from the bitmap, I've used the Bitmap.create method to obtain a properly rotated sprite. Here's the code snippet:
android.graphics.Matrix m = new android.graphics.Matrix();
if (o1.angle != 0.0f) {
    m.setRotate(o1.angle);
    b1 = Bitmap.createBitmap(b1, 0, 0, b1.getWidth(), b1.getHeight(), m, false);
}
Another issue, which might add to the problem or even be the main problem, is that my rectangle of intersection (the rectangle indicating the two-dimensional space shared by both objects) is built from parts of two bounding boxes computed with OpenGL's Matrix.multiplyMV functionality (code below). Could it be that the Android and OpenGL matrix computation methods aren't equivalent?
Matrix.rotateM(mtxRotate, 0, -angle, 0, 0, 1);
// original bitmap size, equal to the sprite size in its model space,
// as well as in world space
float[] rect = new float[] {
    origRect.left,  origRect.top,    0.0f, 1.0f,
    origRect.right, origRect.top,    0.0f, 1.0f,
    origRect.left,  origRect.bottom, 0.0f, 1.0f,
    origRect.right, origRect.bottom, 0.0f, 1.0f
};
android.opengl.Matrix.multiplyMV(rect, 0, mtxRotate, 0, rect, 0);
android.opengl.Matrix.multiplyMV(rect, 4, mtxRotate, 0, rect, 4);
android.opengl.Matrix.multiplyMV(rect, 8, mtxRotate, 0, rect, 8);
android.opengl.Matrix.multiplyMV(rect, 12, mtxRotate, 0, rect, 12);
// computation of the object's bounding box (necessary because the object
// has just been rotated and its bounding rectangle no longer matches it)
float left = rect[0];
float top = rect[1];
float right = rect[0];
float bottom = rect[1];
for (int i = 4; i < 16; i += 4) {
    left = Math.min(left, rect[i]);
    top = Math.max(top, rect[i+1]);
    right = Math.max(right, rect[i]);
    bottom = Math.min(bottom, rect[i+1]);
}
Cheers,
First note that there is a bug in your code: you cannot use Matrix.multiplyMV() with the source and destination vector being the same. The function correctly calculates the x coordinate, but it then overwrites it in the source vector - and it still needs the original x to calculate the y, z and w coordinates, which therefore come out wrong. Also note that it would be easier to use bounding spheres for the first collision-detection step, as they do not require such complicated matrix-transformation code.
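The aliasing problem can be reproduced without Android. A sketch with a hand-rolled multiply in the column-major layout android.opengl.Matrix uses: the in-place version overwrites x before y is computed, while copying the input to temporaries first gives the correct result:

```java
public class AliasBug {
    // In-place, buggy: overwrites v[off] (x) before computing y, z and w.
    static void multiplyMVInPlace(float[] v, int off, float[] m) {
        for (int row = 0; row < 4; row++) {
            v[off + row] = m[row] * v[off] + m[4 + row] * v[off + 1]
                         + m[8 + row] * v[off + 2] + m[12 + row] * v[off + 3];
        }
    }

    // Safe: copy the input vector to temporaries first.
    static void multiplyMVSafe(float[] v, int off, float[] m) {
        float x = v[off], y = v[off + 1], z = v[off + 2], w = v[off + 3];
        for (int row = 0; row < 4; row++) {
            v[off + row] = m[row] * x + m[4 + row] * y
                         + m[8 + row] * z + m[12 + row] * w;
        }
    }

    public static void main(String[] args) {
        // 90-degree rotation about z, column-major: maps (1,0) to (0,1).
        float[] rotZ90 = { 0,1,0,0, -1,0,0,0, 0,0,1,0, 0,0,0,1 };
        float[] a = { 1, 0, 0, 1 };
        float[] b = { 1, 0, 0, 1 };
        multiplyMVSafe(a, 0, rotZ90);    // correct: (0, 1)
        multiplyMVInPlace(b, 0, rotZ90); // y computed from the new x: wrong
        System.out.println(a[0] + " " + a[1]);
        System.out.println(b[0] + " " + b[1]);
    }
}
```

The practical fix in the question's code is to write the results into a second array (or a different offset) rather than back into rect.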
Then, the collision detection itself. You should not read from bitmaps or textures. What you should do instead is build a silhouette for your object (that is pretty easy - a silhouette is just a list of positions). After that, you need to build convex objects that fill the (non-convex) silhouette, which can be achieved with e.g. the ear clipping algorithm. It may not be the fastest, but it is very easy to implement and has to be done only once. Once you have the convex objects, you can transform their coordinates using a matrix and detect collisions with your world (there are many nice articles on ray-triangle intersection you can use), and you get the same precision as if you were using pixel-based collision detection.
I hope it helps ...
