Move pixels around in OpenGL-ES (Android)

I have a texture that I can render in OpenGL-ES with an orthogonal identity matrix:
gst_gl_shader_set_uniform_matrix_4fv(shader, "u_transformation", 1, FALSE, identity_matrix);
I want to move "the pixels around": half of the top is going to the left, half of the bottom is going to the right as shown on the image below. Is there an "easy" way to do that? I'm on Android.
Through this related answered question, How to crop/clip in OpenGL using u_transformation, I was able to keep either the top part 'a' or the bottom part 'e'. Would there be a way to do a "double gst_gl_shader_set_uniform_matrix_4fv" after "cutting" the scene in two?

The transformation that you want here cannot be represented by a transformation matrix. Matrices can only represent certain classes of transformations. In 3D space:
A 3x3 matrix represents a linear transformation. Typical examples include rotation, scaling, mirroring, shearing.
A 4x3 matrix represents an affine transformation. On top of the above, this includes translations.
If you extend the 3D space to homogeneous coordinates with 4 components, a 4x4 matrix can represent additional classes of transformations, like projections.
The transformation in your sketch is none of the above. So applying a matrix to your vertices will not be able to do this.
So what can you do? Two options come to mind:
If you can easily draw the two parts (top/bottom, left/right) separately, you can obviously do that, and simply change the transformation between rendering the two parts.
Apply the logic in your shader code.
For option 2, you could do this either in the vertex or fragment shader. If you have no primitives that cross the boundary between the two parts, handling it in the vertex shader would be more efficient. Otherwise, similar logic can be used in the fragment shader.
Sketching the critical parts for the vertex shader case, let's say you currently have the following that gives you the arrangement in the left side of your sketch:
// Calculate output position and store it in variable "pos".
gl_Position = pos;
To get the second arrangement, the logic could look like this (completely untested...):
if (pos.y > 0.0) {
gl_Position = vec4(0.5 * pos.x - 0.5, 2.0 * pos.y - 1.0, pos.zw);
} else {
gl_Position = vec4(0.5 * pos.x + 0.5, 2.0 * pos.y + 1.0, pos.zw);
}
The idea is that you check whether the vertex is in the top or bottom half, and scale/shift it accordingly to map the top half of the coordinate space into the left half, and the bottom half of the coordinate space into the right half.
This could be streamlined some more by replacing the conditional with a sign operation:
float s = sign(pos.y);
gl_Position = vec4(0.5 * pos.x - s * 0.5, 2.0 * pos.y - s, pos.zw);
Some more care will be needed if pos.w is not 1.0, which happens if you e.g. applied a perspective projection to your vertices. In that case, you'll have to incorporate the division by w in the calculations above.
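The mapping can be sanity-checked on the CPU. Below is a small Java sketch (the class and method names are made up for illustration) that applies the same sign-based formula to a few NDC points:

```java
// SplitMapDemo.java -- CPU check of the sign-based split mapping from the shader.
public class SplitMapDemo {

    // Maps an NDC point: top half -> left half, bottom half -> right half.
    static float[] split(float x, float y) {
        float s = Math.signum(y); // same behavior as GLSL sign()
        return new float[] { 0.5f * x - s * 0.5f, 2.0f * y - s };
    }

    public static void main(String[] args) {
        // Top-center of the screen (0, 1) lands at the top-center of the LEFT half.
        float[] a = split(0.0f, 1.0f);
        System.out.printf("(0, 1)  -> (%.2f, %.2f)%n", a[0], a[1]);

        // Bottom-center (0, -1) lands at the bottom-center of the RIGHT half.
        float[] b = split(0.0f, -1.0f);
        System.out.printf("(0, -1) -> (%.2f, %.2f)%n", b[0], b[1]);
    }
}
```

Running the checks confirms that the whole top half of the coordinate space compresses into x < 0 and the bottom half into x > 0.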

The formulas described in Reto's answer only 'semi'-work: they produce the "a" on the left or the "e" on the right, but not both at the same time.
The solution I found is to double the number of vertices and indices and adjust the vertex coordinates like this:
static const GLfloat vertices[] = {
    // x, y, z, u, v
    // right half of the screen, first half of the texture (v in [0.0, 0.5])
     1.0f,  1.0f, 0.0f, 1.0f, 0.0f,
     0.0f,  1.0f, 0.0f, 0.0f, 0.0f,
     0.0f, -1.0f, 0.0f, 0.0f, 0.5f,
     1.0f, -1.0f, 0.0f, 1.0f, 0.5f,
    // left half of the screen, second half of the texture (v in [0.5, 1.0])
     0.0f,  1.0f, 0.0f, 1.0f, 0.5f,
    -1.0f,  1.0f, 0.0f, 0.0f, 0.5f,
    -1.0f, -1.0f, 0.0f, 0.0f, 1.0f,
     0.0f, -1.0f, 0.0f, 1.0f, 1.0f
};
static const GLushort indices[] = { 0, 1, 2, 0, 2, 3, 4, 5, 6, 4, 6, 7 };

Related

Drawing a rectangle based on user touch in Android OpenGL

I am able to draw a rectangle in android openGL using
private float vertices[] = {
-1.0f, 0.8f, 0.0f, // V1 - bottom left
-1.0f, 1.0f, 0.0f, // V2 - top left
1.0f, 0.8f, 0.0f, // V3 - bottom right
1.0f, 1.0f, 0.0f // V4 - top right
};
But I want to set the vertices from the user's touch, using event.getX() and event.getY(). Those values seem to make the rectangle go off-screen.
How do I make the rectangle be drawn where the user touches?
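For reference, the usual conversion (not from the original thread) maps the touch position from screen pixels into the default [-1, 1] clip-space range, flipping y because screen y grows downward while GL y grows upward. A minimal Java sketch, assuming the viewport covers the whole surface:

```java
// TouchToNdc.java -- converts touch pixels to OpenGL normalized device
// coordinates, assuming a viewport that covers the entire view.
public class TouchToNdc {

    // Screen x grows right; NDC x also grows right.
    static float toNdcX(float touchX, float viewWidth) {
        return 2.0f * touchX / viewWidth - 1.0f;
    }

    // Screen y grows DOWN; NDC y grows UP, hence the flip.
    static float toNdcY(float touchY, float viewHeight) {
        return 1.0f - 2.0f * touchY / viewHeight;
    }

    public static void main(String[] args) {
        // A touch at the center of a 320x480 screen maps to the NDC origin.
        System.out.println(toNdcX(160f, 320f)); // 0.0
        System.out.println(toNdcY(240f, 480f)); // 0.0
        // The top-left corner of the screen maps to (-1, 1).
        System.out.println(toNdcX(0f, 320f));   // -1.0
        System.out.println(toNdcY(0f, 480f));   // 1.0
    }
}
```

The resulting NDC values can be used directly as vertex coordinates when no projection/modelview transform is applied (identity matrices).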

Finding/Remapping bounds of OpenGL ES coordinate plane

I'm trying to make 2D graphics for my Android app consisting of six thin rectangles, each taking up about 1/6th of the screen's width and its full height. I'm not sure of the right way to determine the bounds of the x and y OpenGL coordinate plane on screen. Eventually I will need to write logic that tests which of the 6 rectangles a touch event occurs in, so I have been trying to solve this problem by remapping OpenGL's coordinate plane into the device's screen coordinate plane (where the origin (0,0) is at the top left of the screen instead of the middle).
I declare one of my six rectangles like so:
private float vertices1[] = {
2.0f, 10.0f, 0.0f, // 0, Top Left
2.0f, -1.0f, 0.0f, // 1, Bottom Left
4.0f, -1.0f, 0.0f, // 2, Bottom Right
4.0f, 10.0f, 0.0f, // 3, Top Right
};
but since I'm not sure what the visible limits are on the x and y axes (in the OpenGL coordinate system), I have no concrete way of knowing what vertices my rectangle needs in order to occupy 1/6th of the display. What's the ideal way to do this?
I've tried approaches such as using glOrthof() to remap OpenGL's coordinates into easy-to-work-with device screen coordinates:
gl.glViewport(0, 0, width, height);
// Select the projection matrix
gl.glMatrixMode(GL10.GL_PROJECTION);
// Reset the projection matrix
gl.glLoadIdentity();
// Calculate the aspect ratio of the window
GLU.gluPerspective(gl, 45.0f,(float) width / (float) height,0.1f, 100.0f);
gl.glOrthof(0.0f,width,height, 0.0f, -1.0f, 5.0f);
// Select the modelview matrix
gl.glMatrixMode(GL10.GL_MODELVIEW);
// Reset the modelview matrix
gl.glLoadIdentity();
but when I do, my rectangle disappears completely.
You certainly don't want to use a perspective projection for 2D graphics. That just doesn't make much sense. A perspective projection is for... well, creating a perspective projection, which is only useful if your objects are actually placed in 3D space.
Even worse, you have two calls to set up a perspective matrix:
GLU.gluPerspective(gl, 45.0f,(float) width / (float) height,0.1f, 100.0f);
gl.glOrthof(0.0f,width,height, 0.0f, -1.0f, 5.0f);
While that's legal, it rarely makes sense. What essentially happens if you do this is that both projections are applied in succession. So the first thing to do is get rid of the gluPerspective() call.
To place your 6 rectangles, you have a few options. Probably the easiest one is to not apply any transformations at all. This means that you will specify your input coordinates in normalized device coordinates (aka NDC), which is a range of [-1.0, 1.0] in both the x- and y-direction. So for 6 rectangles rendered side by side, you would use a y-range of [-1.0, 1.0] for all the rectangles, and an x-range of [-1.0, -2.0/3.0] for the first, [-2.0/3.0, -1.0/3.0] for the second, etc.
Another option is that you use an orthographic projection that makes specifying the rectangles even more convenient. For example, a range of [0.0, 6.0] for x and [0.0, 1.0] for y would make it particularly easy:
gl.glOrthof(0.0f, 6.0f, 0.0f, 1.0f, -1.0f, 1.0f);
Then all rectangles have a y-range of [0.0, 1.0], the first rectangle has a x-range of [0.0, 1.0], the second rectangle [1.0, 2.0], etc.
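The NDC variant of those column bounds can be computed generically. A small Java sketch (hypothetical helper, not from the original answer):

```java
// SixColumns.java -- NDC x-bounds for n equal-width columns side by side.
public class SixColumns {

    // Left NDC x-bound of column i (0-based) out of n columns spanning [-1, 1].
    static float left(int i, int n) {
        return -1.0f + 2.0f * i / n;
    }

    // Right bound of column i is the left bound of the next column.
    static float right(int i, int n) {
        return left(i + 1, n);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 6; i++) {
            System.out.printf("column %d: x in [%.3f, %.3f]%n",
                    i, left(i, 6), right(i, 6));
        }
    }
}
```

The same helper also answers the touch-test part of the question: a touch converted to NDC falls in column i exactly when `left(i, 6) <= x < right(i, 6)`.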
BTW, if you're just starting with OpenGL, I would pass on ES 1.x, and directly learn ES 2.0. ES 1.x is a legacy API at this point, and I wouldn't use it for any new development.

android opengl texture overlapping

I am following NeHe's tutorials.
I intend to make a menu, or at least buttons, with OpenGL, yet objects overlap the menu.
My code in the drawFrame function of the renderer:
gl.glLoadIdentity();
gl.glScalef(0.05f, 0.05f, 0.05f);
gl.glTranslatef(0.0f, 0.0f, z-zKonum);
gl.glRotatef(xAcisi, 1.0f, 0.0f, 0.0f);
gl.glRotatef(yAcisi, 0.0f, 1.0f, 0.0f);
dokukup.ciz(gl);
gl.glLoadIdentity();
gl.glTranslatef(3.6f, -1.5f, z);
tusYukari.ciz(gl);
gl.glLoadIdentity();
gl.glTranslatef(2.5f, -1.5f, z);
tusAsagi.ciz(gl);
How do I make my menu buttons dominant (always on top) when geometry overlaps them?
You can get the buttons to appear always on top by drawing the buttons last and disabling depth testing when drawing the buttons. Then make sure to enable depth testing again before drawing the next frame so that your 3D geometry renders properly.
In your drawFrame function you would do the following steps:
Enable depth testing
Draw the main scene geometry
Disable depth testing
Draw the buttons
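The effect of those steps on a single pixel can be illustrated with a toy depth buffer. This is pure Java for illustration, not actual GL calls; the class and field names are made up:

```java
// DepthToggleDemo.java -- toy 1-pixel framebuffer showing why drawing the
// buttons last with depth testing disabled puts them on top.
public class DepthToggleDemo {
    float colorBuf = 0f;                  // 0 = cleared "color"
    float depthBuf = Float.MAX_VALUE;     // cleared depth (far)
    boolean depthTest = true;

    // Mimics a fragment write: with depth testing on, closer fragments win
    // and update the depth buffer; with it off, every fragment wins.
    void draw(float color, float depth) {
        if (!depthTest || depth < depthBuf) {
            colorBuf = color;
            if (depthTest) depthBuf = depth; // GL also skips depth writes when disabled
        }
    }

    public static void main(String[] args) {
        DepthToggleDemo fb = new DepthToggleDemo();
        fb.depthTest = true;
        fb.draw(1f, 0.3f);   // scene geometry, close to the camera
        fb.depthTest = false;
        fb.draw(2f, 0.9f);   // button, farther away -- still wins
        System.out.println(fb.colorBuf); // 2.0: the button ends up on top
    }
}
```

With depth testing left enabled, the button fragment at depth 0.9 would have failed the test against the scene geometry at 0.3 and the button would be hidden.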

draw opengl texture at full screen

I want to draw an OpenGL texture at full screen.
(texture: 128x128 ===> device screen: 320x480)
The code below works, but the texture appears small.
I have to use only the glFrustumf function (not glOrthof).
How can I draw the texture at full screen size?
// this is android source code
float ratio = (float) screenWidth / screenHeight;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
GLU.gluLookAt(gl, 0.0f, 0.0f, -2.5f, // eye
0.0f, 0.0f, 0.0f, // center
0.0f, 1.0f, 0.0f); // up
// draw blah blah
Why do you have to use glFrustum only? Switching to glOrtho for drawing the background, then switching to glFrustum for regular drawing would be the canonical solution.
BTW: gluLookAt must happen in the modelview matrix, not in the projection matrix like you do right now. As it stands your code is broken and if you were a student in one of my OpenGL classes I'd give you negative points for this cardinal error.
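If glFrustumf really is the only option, another route is to size the quad so it exactly fills the frustum at its distance from the eye. With the near plane at distance 1 spanning [-ratio, ratio] x [-1, 1], similar triangles scale those half-extents by the eye-to-quad distance. A small Java sketch (hypothetical helper names, assuming the eye sits 2.5 units from the quad's plane as in the question's gluLookAt):

```java
// FrustumFullScreen.java -- quad size that fills a symmetric frustum at a
// given distance from the eye (similar-triangles argument).
public class FrustumFullScreen {

    // Half-extents of a quad at 'dist' from the eye that exactly fills a
    // frustum whose near plane at 'near' has the given half-width/height.
    static float[] fullScreenHalfExtents(float nearHalfW, float nearHalfH,
                                         float near, float dist) {
        float scale = dist / near; // frustum cross-section grows linearly
        return new float[] { nearHalfW * scale, nearHalfH * scale };
    }

    public static void main(String[] args) {
        float ratio = 320f / 480f; // the question's screen aspect
        // glFrustumf(-ratio, ratio, -1, 1, 1, 10) with the eye 2.5 units away:
        float[] he = fullScreenHalfExtents(ratio, 1f, 1f, 2.5f);
        System.out.printf("quad half-size: %.4f x %.4f%n", he[0], he[1]);
    }
}
```

So a quad spanning roughly [-1.667, 1.667] x [-2.5, 2.5] in the z = 0 plane would cover the whole screen under that frustum and eye position.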

Poor shading problem in Android OpenGL ES

I set up diffuse lighting:
private float[] lightAmbient = { 0.5f, 0.5f, 0.5f, 1.0f };
private float[] lightDiffuse = { 1.0f, 1.0f, 1.0f, 1.0f };
private float[] lightPosition = { 0.0f, 0.0f, 2.0f, 1.0f };
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_AMBIENT, lightAmbientBuffer);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_DIFFUSE, lightDiffuseBuffer);
gl.glLightfv(GL10.GL_LIGHT0, GL10.GL_POSITION, lightPositionBuffer);
gl.glEnable(GL10.GL_LIGHT0);
gl.glShadeModel(GL10.GL_SMOOTH);
But I get triangulated shading or flat color on the cube located at origin ( center) and rotated 45 deg around x and y. So the cube is directly in front of the light. Any reasons why I am getting such poor results? Attached is the cube image.
OpenGL ES 1.x calculates lighting only at the vertices of each triangle. The color is then interpolated across the triangle. Ideally, the shared vertices of the two triangles would produce the same colors, but a variety of situations can cause them not to.
It appears as though each of your cube faces is modeled with two triangles. You could decompose each face into more triangles, but that adds memory and could slow down drawing.
You could also move to OpenGL ES 2.0 and write a shader that computes lighting per fragment, which interpolates properly across the surface, but that will require rewriting the entire pipeline. OpenGL ES doesn't let you mix the old fixed-function style and shader-based implementations.
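To see why per-vertex lighting looks faceted, compare the value vertex interpolation yields at an edge midpoint with the exact per-fragment value. A small Java sketch, assuming an illustrative flat surface with normal (0, 0, 1) lit by the question's point light at z = 2 (names are made up):

```java
// GouraudDemo.java -- per-vertex (Gouraud) vs per-fragment diffuse lighting
// on a flat surface in the z = 0 plane with normal (0, 0, 1).
public class GouraudDemo {

    // Lambert term N.L for a point light at (lx, ly, lz), evaluated at
    // surface point (px, py, 0). N = (0, 0, 1), so N.L = Lz / |L|.
    static float lambert(float px, float py, float lx, float ly, float lz) {
        float dx = lx - px, dy = ly - py, dz = lz;
        float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        return dz / len;
    }

    public static void main(String[] args) {
        // Diffuse term at two vertices of an edge, light at (0, 0, 2).
        float a = lambert(-1f, 0f, 0f, 0f, 2f);
        float b = lambert( 1f, 0f, 0f, 0f, 2f);
        float interpolated = 0.5f * (a + b);        // what Gouraud shading shows
        float exact = lambert(0f, 0f, 0f, 0f, 2f);  // what per-fragment lighting shows
        System.out.printf("midpoint: interpolated %.3f vs exact %.3f%n",
                interpolated, exact);
    }
}
```

The interpolated value (about 0.894) underestimates the exact midpoint value (1.0), and the error pattern follows the triangle edges, which is exactly the triangulated look in the cube image.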
