Android OpenGL shader - show image at 1:1 size

Hi, I have a 512x512 texture that I would like to display within my GLSurfaceView at 100% scale, i.e. a 1:1 pixel-for-pixel view.
I am having trouble achieving this and need some assistance.
Every combination of settings in onSurfaceChanged and onDrawFrame results in a scaled image.
Can someone please direct me to an example where this is possible?
private float[] mProjectionMatrix = new float[16];

// where mWidth and mHeight are set to 512
public void onSurfaceChanged(GL10 gl, int mWidth, int mHeight) {
    GLES20.glViewport(0, 0, mWidth, mHeight);
    float left = -1.0f / (1 / ScreenRatio);
    float right = 1.0f / (1 / ScreenRatio);
    float bottom = -1.0f;
    float top = 1.0f;
    final float near = 1.0f;
    final float far = 10.0f;
    Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}

@Override
public void onDrawFrame(GL10 glUnused) {
    // ... stuff here
    Matrix.setIdentityM(mModelMatrix, 0);
    Matrix.translateM(mModelMatrix, 0, 0, 0, 1);
    Matrix.rotateM(mModelMatrix, 0, 0.0f, 1.0f, 1.0f, 0.0f);
    drawCube();
}
many thanks,

There are various options. The simplest, IMHO, is to not apply any view/projection transformations at all. Then draw a textured quad with a range of (-1.0, 1.0) for both the x and y coordinates. That will get your texture to fill the entire view. Since you want it displayed in a 512x512 part of the view, you can set the viewport to cover only that area:
glViewport(0, 0, 512, 512);
Another possibility is to reduce the range of your input coordinates so that they map to a 512x512 area of the screen, or to scale the coordinates in the vertex shader.
You didn't specify which version of OpenGL ES you use. In ES 3.0, you could also use glBlitFramebuffer() to copy the texture to your view.
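For illustration, a minimal sketch of the first approach (the program/texture handles and the quad drawing helper are placeholder names, not code from the question):

// Pass-through vertex shader: no view/projection matrices at all.
private final String vertexShaderCode =
    "attribute vec4 aPosition;" +
    "attribute vec2 aTexCoord;" +
    "varying vec2 vTexCoord;" +
    "void main() {" +
    "  vTexCoord = aTexCoord;" +
    "  gl_Position = aPosition;" +  // quad vertices are already in [-1, 1]
    "}";

public void onSurfaceChanged(GL10 unused, int width, int height) {
    // Cover only a 512x512 region of the surface (lower-left corner), so the
    // [-1, 1] quad maps to exactly 512x512 pixels: one texel per pixel.
    GLES20.glViewport(0, 0, 512, 512);
}

public void onDrawFrame(GL10 unused) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glUseProgram(mProgram);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTextureHandle);
    drawFullViewportQuad(); // draws two triangles spanning x,y in [-1, 1] with full texture coords
}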

Related

Learning camera OpenGL ES 2.0

I am trying to understand how the camera works in OpenGL ES, so I am trying to look at the same point with the two different projection types, Matrix.frustumM and Matrix.orthoM.
I would like to know what exactly I am doing when I use Matrix.frustumM or orthoM. I know that I apply them to the projection matrix, but I don't understand what the parameters define (left, right, bottom, top, near, far of what? Is it supposed to be the screen of the phone?). The same question applies to orthoM.
I want to draw a square on the screen at (0,0,0) with a width and height of 1f (like 2D, just to test the cameras),
but this is what I do. In onSurfaceCreated:
final float eyeX = 2f;
final float eyeY = 5f;
final float eyeZ = 8f;
final float lookX = 2f;
final float lookY = 5f;
final float lookZ = 0.0f;
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
and in onSurfaceChanged:
GLES20.glViewport(0, 0, width, height);
// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per the aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 25.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
That is what I saw on the phone.
Draw function:
public void dibujarBackground()
{
// Draw a plane
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mBackgroundDataHandle);
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.0f,2.0f, 0.0f);
drawBackground();
}
private void drawBackground()
{
coordinate.drawBackground(mPositionHandle, mNormalHandle, mTextureCoordinateHandle);
// This multiplies the view matrix by the model matrix, and stores the
// result in the MVP matrix
// (which currently contains model * view).
Matrix.multiplyMM(mMVPMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(mMVMatrixHandle, 1, false, mMVPMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glUniform3f(mLightPosHandle,Light.mLightPosInEyeSpace[0], Light.mLightPosInEyeSpace[1], Light.mLightPosInEyeSpace[2]);
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
}
Coords of the square:
final float[] backgroundPositionData = {
// In OpenGL counter-clockwise winding is default.
0f, 1f, 0.0f,
0f, 0f, 0.0f,
1f, 1f, 0.0f,
0f, 0f, 0.0f,
1f, 0f, 0.0f,
1f, 1f, 0.0f,
};
final float[] backgroundNormalData = {
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f,
0.0f, 0.0f, 1.0f, };
final float[] backgroundTextureCoordinateData = {
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f, };
Overall, what you end up with is a single matrix used to multiply the positions so that the visible fragments fall in the range [-1,1] in all 3 dimensions. That means if you use no matrix (or the identity), the coordinates must already be in this range to be visible. So the 3 matrix helpers you are using are really just conveniences to help you build a correct transformation:
Ortho is an orthographic projection. This means the visual representation of the x and y screen coordinates is not affected by the z coordinate at all; visually, an object does not appear smaller when it is further away. The values you pass to this convenience method are border values (left, right, top, bottom), which means a rectangle with those same coordinates will take up exactly the full screen. These values are mostly chosen to match your view coordinate system (left = 0, right = screenWidth, top = 0, bottom = screenHeight). There are also near and far parameters, which represent the clipping planes, so that positions nearer than near or further than far are not visible. This projection is mostly used for 2D drawing.
A frustum matrix is designed so that the x and y coordinates are reduced with increasing z. This means an object appears smaller when it is further away. The border parameters are tied to the near parameter, so a rectangle whose border coordinates lie at z = near appears full screen. near must be larger than zero in this case or the result is unpredictable. The far parameter is just a clipping plane; as with ortho, fragments are clipped if the z value is smaller than near or larger than far. The border parameters are best computed from the field of view (angle) and the screen aspect ratio: you use the tangent of the half-angle to get the desired effect. This method is mostly used for 3D drawing.
LookAt is a convenience used to transform all the objects to such positions and orientations that they appear to be affected by the camera position. Though this method is defined with vectors, you may imagine it as having a position and rotations: it creates a matrix that rotates all the objects by -rotation and translates them by -position.
Overall, the usage is then pretty simple. Each position is first multiplied by the model matrix, which represents the model's position in your scene; then by the matrix returned by lookAt, to simulate the camera; then by the projection matrix, which in most cases is either the ortho or the frustum matrix. The optimization is to multiply the matrices first on the CPU and then have the positions multiplied by the combined matrix on the GPU. Some variations exist where you split this into a "model-view matrix" and a "projection matrix"; this is used to compute things like lighting, where the position must not be affected by the projection matrix.
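To make that chain concrete, here is a minimal sketch against android.opengl.Matrix and GLES20 (the 60° field of view and the handle/field names are illustrative assumptions, not code from the question):

// Projection: frustum borders derived from a vertical field of view and the aspect ratio.
float near = 1.0f, far = 25.0f;
float fovY = (float) Math.toRadians(60.0);          // arbitrary example angle
float top = near * (float) Math.tan(fovY / 2.0);    // tangent of the half-angle
float bottom = -top;
float right = top * (float) width / height;         // widen by the aspect ratio
float left = -right;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
// For pure 2D drawing you would use an ortho projection instead, e.g.:
// Matrix.orthoM(mProjectionMatrix, 0, 0, width, height, 0, -1, 1);

// View: the "camera", looking from (0,0,5) at the origin with +Y up.
Matrix.setLookAtM(mViewMatrix, 0, 0f, 0f, 5f, 0f, 0f, 0f, 0f, 1f, 0f);

// Model: where this particular object sits in the scene.
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0.5f, 0.5f, 0f);

// Combine on the CPU: MVP = projection * view * model, then upload once per object.
float[] mvMatrix = new float[16];
Matrix.multiplyMM(mvMatrix, 0, mViewMatrix, 0, mModelMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mvMatrix, 0);
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);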

Android OpenGL weirdness with the setLookAtM method

As a beginner to Android and OpenGL ES 2.0, I'm testing simple things and seeing how it goes.
I downloaded the sample at http://developer.android.com/training/graphics/opengl/touch.html .
I changed the code to check whether I could animate a rotation of the camera around the (0,0,0) point, the center of the square.
So I did this:
public void onDrawFrame(GL10 unused) {
// Draw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Set the camera position (View matrix)
long time = SystemClock.uptimeMillis() % 4000L;
float angle = ((float) (2*Math.PI)/ (float) 4000) * ((int) time);
Matrix.setLookAtM(mVMatrix, 0, (float) (3*Math.sin(angle)), 0, (float) (3.0f*Math.cos(angle)), 0 ,0, 0, 0f, 1.0f, 0.0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
// Draw square
mSquare.draw(mMVPMatrix);
}
I expected the camera to always look at the center of the square (the (0,0,0) point), but that's not what happens. The camera is indeed rotating around the square, but the square does not stay in the center of the screen; instead it moves along the X axis.
I also expected that if we gave eyeX and eyeY the same values as centerX and centerY, like this:
Matrix.setLookAtM(mVMatrix, 0, 1, 1, -3, 1, 1, 0, 0f, 1.0f, 0.0f);
the square would keep its shape (I mean, your field of vision would be dragged along a plane parallel to the square), but that's also not what happens.
This is my projection matrix:
float ratio = (float) width / height;
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 2, 7);
What is going on here?
Looking at the source code of the example you downloaded, I can see why you're having that problem: it has to do with the order of the matrix multiplication.
Typically in OpenGL source you see matrices set up such that
transformed vertex = projMatrix * viewMatrix * modelMatrix * input vertex
However in the source example program that you downloaded, their shader is setup like this:
" gl_Position = vPosition * uMVPMatrix;"
With the position on the other side of the matrix. You can work with OpenGL in this way, but it requires that you reverse the lhs/rhs of your matrix multiplications.
Long story short, in your case, you should change your shader to read:
" gl_Position = uMVPMatrix * vPosition;"
and then I believe you will get the expected behavior.
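In other words, keep the shader and the CPU-side multiplication order consistent. A minimal sketch using the names from the question and the training sample:

// Vertex shader: matrix on the left, vertex on the right.
private final String vertexShaderCode =
    "uniform mat4 uMVPMatrix;" +
    "attribute vec4 vPosition;" +
    "void main() {" +
    "  gl_Position = uMVPMatrix * vPosition;" +
    "}";

// Matching order on the Java side: projection * view (and * model, if you have one).
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, mVMatrix, 0);
mSquare.draw(mMVPMatrix);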

Android OpenGL ES 2.0 screen coordinates to world coordinates

I'm building an Android application that uses OpenGL ES 2.0 and I've run into a wall. I'm trying to convert screen coordinates (where the user touches) to world coordinates. I've tried reading and playing around with GLU.gluUnProject but I'm either doing it wrong or just don't understand it.
This is my attempt....
public void getWorldFromScreen(float x, float y) {
int viewport[] = { 0, 0, width , height};
float startY = ((float) (height) - y);
float[] near = { 0.0f, 0.0f, 0.0f, 0.0f };
float[] far = { 0.0f, 0.0f, 0.0f, 0.0f };
float[] mv = new float[16];
Matrix.multiplyMM(mv, 0, mViewMatrix, 0, mModelMatrix, 0);
GLU.gluUnProject(x, startY, 0, mv, 0, mProjectionMatrix, 0, viewport, 0, near, 0);
GLU.gluUnProject(x, startY, 1, mv, 0, mProjectionMatrix, 0, viewport, 0, far, 0);
float nearX = near[0] / near[3];
float nearY = near[1] / near[3];
float nearZ = near[2] / near[3];
float farX = far[0] / far[3];
float farY = far[1] / far[3];
float farZ = far[2] / far[3];
}
The numbers I am getting don't seem right. Is this the right way to use this method? Does it work for OpenGL ES 2.0? Should I make the model matrix an identity matrix before these calculations (Matrix.setIdentityM(mModelMatrix, 0))?
As a follow-up, if this is correct, how do I pick the output Z? Basically, I always know at what distance I want the world coordinates to be, but the Z parameter in GLU.gluUnProject appears to be some kind of interpolation between the near and far planes. Is it just a linear interpolation?
Thanks in advance
/**
 * Calculates the transform from screen coordinate
 * system to world coordinate system coordinates
 * for a specific point, given a camera position.
 *
 * @param touch Vec2 point of screen touch, the
 *   actual position on the physical screen (e.g. 160, 240)
 * @param cam camera object with x,y,z of the
 *   camera and screenWidth and screenHeight of the device.
 * @return position in WCS.
 */
public Vec2 GetWorldCoords(Vec2 touch, Camera cam)
{
    // Initialize auxiliary variables.
    Vec2 worldPos = new Vec2();

    // SCREEN height & width (e.g. 320 x 480)
    float screenW = cam.GetScreenWidth();
    float screenH = cam.GetScreenHeight();

    // Auxiliary matrix and vectors to deal with OpenGL.
    float[] invertedMatrix, transformMatrix, normalizedInPoint, outPoint;
    invertedMatrix = new float[16];
    transformMatrix = new float[16];
    normalizedInPoint = new float[4];
    outPoint = new float[4];

    // Invert the y coordinate, as Android uses top-left and OpenGL bottom-left.
    int oglTouchY = (int) (screenH - touch.Y());

    /* Transform the screen point to clip space in OpenGL (-1,1). */
    normalizedInPoint[0] = (float) (touch.X() * 2.0f / screenW - 1.0);
    normalizedInPoint[1] = (float) (oglTouchY * 2.0f / screenH - 1.0);
    normalizedInPoint[2] = -1.0f;
    normalizedInPoint[3] = 1.0f;

    /* Obtain the transform matrix and then the inverse. */
    Print("Proj", getCurrentProjection(gl));
    Print("Model", getCurrentModelView(gl));
    Matrix.multiplyMM(transformMatrix, 0, getCurrentProjection(gl), 0, getCurrentModelView(gl), 0);
    Matrix.invertM(invertedMatrix, 0, transformMatrix, 0);

    /* Apply the inverse to the point in clip space. */
    Matrix.multiplyMV(outPoint, 0, invertedMatrix, 0, normalizedInPoint, 0);

    if (outPoint[3] == 0.0)
    {
        // Avoid /0 error.
        Log.e("World coords", "ERROR!");
        return worldPos;
    }

    // Divide by the w component (outPoint[3]) to find out the real position.
    worldPos.Set(outPoint[0] / outPoint[3], outPoint[1] / outPoint[3]);

    return worldPos;
}
Algorithm is further explained here.
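For completeness, a call site might look roughly like this (Vec2 and Camera are the helper classes assumed by the snippet above; the Vec2 constructor and the renderer/camera references are hypothetical):

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        // Convert the raw touch position into world coordinates.
        Vec2 world = renderer.GetWorldCoords(new Vec2(event.getX(), event.getY()), camera);
        Log.d("Touch", "world x=" + world.X() + " y=" + world.Y());
    }
    return true;
}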
Hopefully my question (and answer) should help you out:
How to find absolute position of click while zoomed in
It has not only the code but also diagrams and diagrams and diagrams explaining it :) Took me ages to figure it out as well.
IMHO one doesn't need to re-implement this function...
I experimented with Erol's solution and it worked, so thanks a lot for it Erol.
Furthermore, I played with
Matrix.orthoM(mtrxProjection, 0, left, right, bottom, top, near, far);
and it works fine as well in my tiny noob example 2D OpenGL ES 2.0 project:
public void onSurfaceChanged(GL10 unused, int width, int height) {...
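A typical pixel-space orthoM setup for such a 2D project might look like this (a sketch; the -1/1 near and far planes are an arbitrary choice, and mtrxProjection is the matrix named above):

public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // Orthographic projection in pixel units: no perspective, 1 world unit == 1 pixel.
    Matrix.orthoM(mtrxProjection, 0, 0f, width, 0f, height, -1f, 1f);
}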

frustumM / setLookAtM in OpenGL ES 2.0 on Android

I am playing around with OpenGL on the Android platform using the OpenGL ES 2.0 tutorial as a basis. The code in question is:
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    float ratio = (float) width / height;
    Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, 1, 9.9999f);
    Matrix.setLookAtM(mVMatrix, 0, 0.0f, 0.0f, -1.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);
    muMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
}
In particular, the "far" parameter to frustumM. Whenever that parameter is 10 the image (a triangle) does not appear. Any other value is OK. Why?
I have done a fair bit of reading - all to no avail. Even my friend (aka Google) has not been able to help me.
Thanks in advance.
When you calculate your projection matrix you specify the far plane to be at 9.9999f, which is why an eyeZ value of 10 or larger in the view matrix calculation will result in an empty image: you are looking outside of your frustum.
Please take a look at this article: http://www.lighthouse3d.com/tutorials/view-frustum-culling/
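A sketch of the idea, keeping the eye-to-geometry distance safely inside [near, far] (the values here are arbitrary, not the tutorial's):

float near = 1.0f;
float far  = 10.0f;
Matrix.frustumM(mProjMatrix, 0, -ratio, ratio, -1, 1, near, far);
// The geometry sits at z = 0, so its distance from the eye is |eyeZ| = 5,
// comfortably between near (1) and far (10).
Matrix.setLookAtM(mVMatrix, 0, 0.0f, 0.0f, -5.0f, 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f);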

Android OpenGL - difficulties with simple 2D transformations

I'm trying to migrate graphics in my game to OpenGL for performance reasons.
I need to draw an object using exact screen coordinates. Say a box 100x100 pixels in the center of 240x320 screen.
I need to rotate it around Z axis, preserving its size.
I need to rotate it around X axis, with perspective effect, preserving (or close to) its size.
I need to rotate it around Y axis, with perspective effect, preserving (or close to) its size.
Here's a picture.
So far I have managed to achieve the first two tasks:
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(120, 160, 0); // move rotation point
gl.glRotatef(angle, 0.0f, 0.0f, 1.0f); // rotate
gl.glTranslatef(-120, -160, 0); // restore rotation point
mesh.draw(gl); // draws 100x100 px rectangle with the following coordinates: (70, 110, 170, 210)
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0f, (float)width, (float)height, 0f, -1f, 1f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
But when I try to rotate my box around the X or Y axis, nasty things happen to it and there is no perspective effect. I tried using some other functions instead of glRotate (glFrustum, glPerspective, gluLookAt, applying a "skewing" matrix), but I couldn't make them work properly.
I'm trying to migrate graphics in my game to OpenGL for performance reasons.
I need to draw an object using exact screen coordinates. Say a box 100x100 pixels in the center of 240x320 screen.
For a perspective you also need some length for the lens, which determines the FOV. The FOV is the ratio of the visible extents to the viewing plane distance. In the case of the near plane it thus becomes {left,right,top,bottom}/near. For the sake of simplicity we assume a horizontal FOV and a symmetric projection, i.e.
FOV = 2*|left|/near = 2*|right|/near = extent/distance
or if you're more into angles
FOV = 2*tan(angular FOV / 2)
For a 90° FOV the length of the lens is half the width of the focal plane. Your focal plane is 240x320 pixels, so 120 to the left and right and 160 to the top and bottom. OpenGL does not really have a focus, but we can say that the middle plane between near and far is the "focal" plane.
So let's say objects will on average have an extent on the order of magnitude of the visible plane limits, i.e. for a visible plane of 240x360, an object will on average have a size of ~200px. It thus makes sense for the near-to-far clipping distance to be 200, i.e. ±100 about the focal plane. So for a FOV of 90° the focal plane is at distance
2*tan(90°/2) = extent/distance
2*tan(45°) = 2 = 240/distance
2*distance = 240
distance = 120
120; thus the near and far clipping distances are 20 and 220.
Last but not least, the near clip plane limits must be scaled by near_distance/focal_distance = 20/120:
So
left = -120 * 20/120 = -20
right = 120 * 20/120 = 20
bottom = -180 * 20/120 = -30
top = 180 * 20/120 = 30
So this gives us the glFrustum parameters:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-20, 20, -30, 30, 20, 220);
And last but not least we must move the world origin into the "focal" plane
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -120);
I need to rotate it around Z axis, preserving its size.
done.
I need to rotate it around X axis, with perspective effect, preserving (or close to) its size.
I need to rotate it around Y axis, with perspective effect, preserving (or close to) its size.
Perspective does not preserve size; that's what makes it a perspective. You can, however, use a very long lens, i.e. a small FOV, to keep the size change small.
Code Update
As a general pro-tip: do all OpenGL operations in the drawing handler. Don't set the projection in the reshape handler; it's ugly, and as soon as you want some HUD or other kind of overlay you'll have to discard it anyway. So here's how to change it:
public void onDrawFrame(GL10 gl) {
    // fov (in radians), extent, width and height are parameters set somewhere else
    // 2*tan(fov/2) = width/distance  =>
    float distance = (float) (width / (2.0 * Math.tan(fov / 2.0)));
    float near = distance - extent / 2;
    float far = distance + extent / 2;
    if (near < 1.0f) {
        near = 1.0f;
    }
    float left   = (-width / 2)  * near / distance;
    float right  = ( width / 2)  * near / distance;
    float bottom = (-height / 2) * near / distance;
    float top    = ( height / 2) * near / distance;

    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glViewport(0, 0, width, height);

    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glFrustumf(left, right, bottom, top, near, far);

    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glTranslatef(0, 0, -distance); // move the world origin to the "focal" plane
    gl.glTranslatef(120, 160, 0); // move rotation point
    gl.glRotatef(angle, 0.0f, 0.0f, 1.0f); // rotate
    gl.glTranslatef(-120, -160, 0); // restore rotation point
    mesh.draw(gl); // draws 100x100 px rectangle with the following coordinates: (70, 110, 170, 210)
}

public void onSurfaceChanged(GL10 gl, int new_width, int new_height) {
    width = new_width;
    height = new_height;
}
You need to use a perspective projection matrix and then use your model-view matrix to get the position and scaling right.
You're using an orthographic projection (glOrthof()), which explicitly disables perspective.
Its opposite is glFrustum(), often wrapped by gluPerspective(), which is easier to use but requires the GLU library.
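For example, the projection setup from the question could be replaced with something along these lines (a sketch; the 45° field of view is an arbitrary choice, and GLU here is android.opengl.GLU):

gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
// Perspective projection: vertical FOV, screen aspect ratio, near and far planes.
GLU.gluPerspective(gl, 45.0f, (float) width / height, 1.0f, 200.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();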
