Android - Finding the range of on-screen coordinates in OpenGL ES

How can I determine the bounds of the x and y coordinate planes displayed on screen in my OpenGL ES program?
I need to fill the screen with 6 identical shapes, all equal in width and height, but to do this I must determine what values the x and y coordinates range across (so I can set the shapes' vertices correctly). In other words, I need a programmatic way to find the values of -x and x, and -y and y.
What's the simplest way to do this? Should I be manipulating or reading the projection matrix, the modelView matrix, or neither?
I know onSurfaceChanged() has access to the layout's height and width, but I'm not certain whether these parameters are needed to find the on-screen coordinate bounds.
Below are the code snippets that show how I configure the frustum with the view and projection matrices:
public void onSurfaceCreated(GL10 glUnused, EGLConfig config)
{
// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
// Position the eye in front of the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = -0.5f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = -5.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
// Set the view matrix. This matrix can be said to represent the camera position.
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
...
}
public void onSurfaceChanged(GL10 glUnused, int width, int height)
{
// Set the OpenGL viewport to the same size as the surface.
GLES20.glViewport(0, 0, width, height);
layoutWidth = width; //test: string graphics
layoutHeight = height; //test: string graphics
// Create a new perspective projection matrix. The height will stay the same
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1.0f;
final float far = 10.0f;
screenSixth = findScreenSixths(width);
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}

This problem of yours seems a bit strange. I could understand wanting to manipulate the matrix so that all the objects end up on the screen, but what you are asking is how to place the spheres so they are on the screen. In this case the on-screen projection of the spheres does not change, and there is no reason to use the same matrix as you do for the rest of the scene. You may simply start with the identity and add a frustum to get the correct projection. After that, the border values (left, right, ...) will lie at the edges of your screen at a Z value of near. So place the spheres at positions such as (left + radius, top - radius, near).
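For illustration, a minimal sketch of reading the visible range back from the frustum parameters (assuming the same left/right/bottom/top/near values you pass to Matrix.frustumM(), and a view matrix that is effectively identity, i.e. you work in eye space):
// Sketch only: for a perspective frustum the visible x/y range at a given
// eye-space depth grows linearly with distance (similar triangles).
float depth = 5.0f; // hypothetical distance from the eye at which the shapes sit
float xMin = left * depth / near;
float xMax = right * depth / near;
float yMin = bottom * depth / near;
float yMax = top * depth / near;
// e.g. six identical shapes laid out in a 3x2 grid filling that range:
float shapeWidth = (xMax - xMin) / 3.0f;
float shapeHeight = (yMax - yMin) / 2.0f;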
If you still need some specific position for the spheres due to interaction with other objects, then you will most likely need to check the on-screen projections of the billboarded bounding square of each sphere. That means creating a square with the same center as the sphere and a width of twice the radius, then multiplying the square's positions with the billboarded version of the sphere's matrix. Proper billboarding is documented all over the web, but unless you do some strange operations on the matrices it usually works by simply setting the top-left 3x3 part of the matrix to identity.
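A minimal sketch of that last step (assuming a column-major float[16] model-view matrix, as android.opengl.Matrix uses):
float[] billboard = modelViewMatrix.clone();
// Overwrite the upper-left 3x3 (rotation/scale) part with identity so the
// bounding square always faces the camera; the translation (indices 12-14) is kept.
billboard[0] = 1f; billboard[1] = 0f; billboard[2] = 0f;
billboard[4] = 0f; billboard[5] = 1f; billboard[6] = 0f;
billboard[8] = 0f; billboard[9] = 0f; billboard[10] = 1f;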

Related

OpenGL inversion in y-axis

I'm using touch events to move shapes in OpenGL, but what happens is that the shapes on the opposite side of the screen move (along the x-axis). So if you try to move a shape at the bottom, a shape at the top moves instead. The top right corner is (0,480) and the bottom left (800,0). I've tried changing the numbers round in the view matrix but it hasn't worked. Why is this happening?
I'm sure I've set my view and projection matrices correctly. Here they are.
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
// Set the background clear color to gray.
GLES20.glClearColor(0.5f, 0.5f, 0.5f, 0.5f);
GLES20.glFrontFace(GLES20.GL_CCW); // Counter-clockwise winding.
GLES20.glEnable(GLES20.GL_CULL_FACE);// Use culling to remove back faces.
GLES20.glCullFace(GLES20.GL_BACK);// What faces to remove with the face culling.
GLES20.glEnable(GLES20.GL_DEPTH_TEST);// Enable depth testing
// Position the eye behind the origin.
final float eyeX = 0.0f;
final float eyeY = 0.0f;
final float eyeZ = -3.0f;
// We are looking toward the distance
final float lookX = 0.0f;
final float lookY = 0.0f;
final float lookZ = 0.0f;
// Set our up vector. This is where our head would be pointing were we holding the camera.
final float upX = 0.0f;
final float upY = 1.0f;
final float upZ = 0.0f;
Matrix.setLookAtM(mViewMatrix, 0, eyeX, eyeY, eyeZ, lookX, lookY, lookZ, upX, upY, upZ);
Matrix.setIdentityM(mViewMatrix, 0);
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
// Sets the current view port to the new size.
GLES20.glViewport(0, 0, width, height);
float RATIO = (float) width / (float) height;
// this projection matrix is applied to object coordinates in the onDrawFrame() method
Matrix.frustumM(mProjectionMatrix, 0, -RATIO, RATIO, -1, 1, 1, 10);
Matrix.setIdentityM(mProjectionMatrix, 0);
}
Update
The view seems to render properly, and the shapes appear on the screen where I want them to if I translate them or change the vertex coordinates slightly. What's not right is how it registers the touch events. Any ideas?
This is how I check the touch events:
if(shapeW < minX){minX = shapeW;}
if(shapeW > maxX){maxX = shapeW;}
if(shapeH < minY){minY = shapeH;}
if(shapeH > maxY){maxY = shapeH;}
//Log.i("Min&Max" + (i / 4), String.valueOf(minX + ", " + maxX + ", " + minY + ", " + maxY));
if(minX < MyGLSurfaceView.touchedX && MyGLSurfaceView.touchedX < maxX && minY < MyGLSurfaceView.touchedY && MyGLSurfaceView.touchedY < maxY)
{
xAng[j] = xAngle;
yAng[j] = yAngle;
Log.i("cube "+j, " pressed");
}
From the origin, the z-axis is positive coming towards you and negative going away into the screen. So if my assumption is correct that your shapes are drawn in the z = 0 plane, your eye is actually positioned behind them. Hence if you move an object one way it appears to move the other way. Try using a positive value for eyeZ instead.
For example, eye = (0, 0, 3), look = (0, 0, 0) would position the eye out of the origin towards you looking down into the screen. In contrast, using eye = (0, 0, -3), look = (0, 0, 0) would put the eye into the screen looking back out of it.
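For example, a minimal sketch of the suggested change (note that the Matrix.setIdentityM(mViewMatrix, 0) call right after setLookAtM() in the posted code overwrites the view matrix, so it would also need to be removed for this to take effect):
Matrix.setLookAtM(mViewMatrix, 0,
        0.0f, 0.0f, 3.0f,   // eye at +z, in front of the screen
        0.0f, 0.0f, 0.0f,   // looking back at the origin
        0.0f, 1.0f, 0.0f);  // up vector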

Android OpenGL shader - show image at 1:1 size

Hi, I have a 512x512 texture that I would like to display in my GLSurfaceView at 100% scale, i.e. a 1:1 pixel-for-pixel view.
I am having trouble achieving this and need some assistance.
Every combination of settings in onSurfaceChanged and onDrawFrame results in a scaled image.
Can someone please point me to an example of how this is done?
private float[] mProjectionMatrix = new float[16];
// where mWidth and mHeight are set to 512
public void onSurfaceChanged(GL10 gl, int mWidth, int mHeight) {
GLES20.glViewport(0, 0, mWidth, mHeight);
float left = -1.0f / (1 / ScreenRatio);
float right = 1.0f / (1 / ScreenRatio);
float bottom = -1.0f;
float top = 1.0f;
final float near = 1.0f;
final float far = 10.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
@Override
public void onDrawFrame(GL10 glUnused ) {
....stuff here
Matrix.setIdentityM(mModelMatrix, 0);
Matrix.translateM(mModelMatrix, 0, 0, 0, 1);
Matrix.rotateM(mModelMatrix, 0, 0.0f, 1.0f, 1.0f, 0.0f);
drawCube();
}
many thanks,
There are various options. The simplest, IMHO, is to not apply any view/projection transformations at all, and then draw a textured quad with a range of (-1.0, 1.0) for both the x- and y-coordinates. That would make your texture fill the entire view. Since you want it displayed in a 512x512 part of the view, you can set the viewport to cover only that area:
glViewport(0, 0, 512, 512);
Another possibility is to reduce the range of your input coordinates so they map to a 512x512 area of the screen, or to scale the coordinates in the vertex shader.
You didn't specify what version of OpenGL ES you use. In ES 3.0, you could also use glBlitFramebuffer() to copy the texture to your view.
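A minimal sketch of the first option, assuming a full-screen quad drawn with no projection/view/model transforms (the vertex shader passes the position attribute straight through):
// Quad covering the full normalized-device-coordinate range, as a triangle strip.
float[] quadVertices = {
        -1.0f, -1.0f,  // bottom-left
         1.0f, -1.0f,  // bottom-right
        -1.0f,  1.0f,  // top-left
         1.0f,  1.0f,  // top-right
};
// Restrict rendering to a 512x512 region so each texel maps to exactly one pixel.
GLES20.glViewport(0, 0, 512, 512);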

Android OpenGL ES 2.0 screen coordinates to world coordinates

I'm building an Android application that uses OpenGL ES 2.0 and I've run into a wall. I'm trying to convert screen coordinates (where the user touches) to world coordinates. I've tried reading and playing around with GLU.gluUnProject but I'm either doing it wrong or just don't understand it.
This is my attempt....
public void getWorldFromScreen(float x, float y) {
int viewport[] = { 0, 0, width , height};
float startY = ((float) (height) - y);
float[] near = { 0.0f, 0.0f, 0.0f, 0.0f };
float[] far = { 0.0f, 0.0f, 0.0f, 0.0f };
float[] mv = new float[16];
Matrix.multiplyMM(mv, 0, mViewMatrix, 0, mModelMatrix, 0);
GLU.gluUnProject(x, startY, 0, mv, 0, mProjectionMatrix, 0, viewport, 0, near, 0);
GLU.gluUnProject(x, startY, 1, mv, 0, mProjectionMatrix, 0, viewport, 0, far, 0);
float nearX = near[0] / near[3];
float nearY = near[1] / near[3];
float nearZ = near[2] / near[3];
float farX = far[0] / far[3];
float farY = far[1] / far[3];
float farZ = far[2] / far[3];
}
The numbers I am getting don't seem right. Is this the right way to use this method? Does it work for OpenGL ES 2.0? Should I make the model matrix an identity matrix before these calculations (Matrix.setIdentityM(mModelMatrix, 0))?
As a follow-up, if this is correct, how do I pick the output Z? Basically, I always know at what distance I want the world coordinates to be, but the Z parameter in GLU.gluUnProject appears to be some kind of interpolation between the near and far planes. Is it just a linear interpolation?
Thanks in advance
/**
* Calculates the transform from screen coordinate
* system to world coordinate system coordinates
* for a specific point, given a camera position.
*
* @param touch Vec2 point of the screen touch, the
* actual position on the physical screen (e.g. 160, 240)
* @param cam camera object with x, y, z of the
* camera and screenWidth and screenHeight of
* the device.
* @return position in WCS.
*/
public Vec2 GetWorldCoords( Vec2 touch, Camera cam)
{
// Initialize auxiliary variables.
Vec2 worldPos = new Vec2();
// SCREEN height & width (e.g. 320 x 480)
float screenW = cam.GetScreenWidth();
float screenH = cam.GetScreenHeight();
// Auxiliary matrix and vectors
// to deal with ogl.
float[] invertedMatrix, transformMatrix,
normalizedInPoint, outPoint;
invertedMatrix = new float[16];
transformMatrix = new float[16];
normalizedInPoint = new float[4];
outPoint = new float[4];
// Invert y coordinate, as android uses
// top-left, and ogl bottom-left.
int oglTouchY = (int) (screenH - touch.Y());
/* Transform the screen point to clip
space in ogl (-1,1) */
normalizedInPoint[0] =
(float) ((touch.X()) * 2.0f / screenW - 1.0);
normalizedInPoint[1] =
(float) ((oglTouchY) * 2.0f / screenH - 1.0);
normalizedInPoint[2] = - 1.0f;
normalizedInPoint[3] = 1.0f;
/* Obtain the transform matrix and
then the inverse. */
Print("Proj", getCurrentProjection(gl));
Print("Model", getCurrentModelView(gl));
Matrix.multiplyMM(
transformMatrix, 0,
getCurrentProjection(gl), 0,
getCurrentModelView(gl), 0);
Matrix.invertM(invertedMatrix, 0,
transformMatrix, 0);
/* Apply the inverse to the point
in clip space */
Matrix.multiplyMV(
outPoint, 0,
invertedMatrix, 0,
normalizedInPoint, 0);
if (outPoint[3] == 0.0)
{
// Avoid /0 error.
Log.e("World coords", "ERROR!");
return worldPos;
}
// Divide by the w (4th) component to find
// out the real position.
worldPos.Set(
outPoint[0] / outPoint[3],
outPoint[1] / outPoint[3]);
return worldPos;
}
The algorithm is further explained here.
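Regarding the follow-up about picking the output Z: window-space depth is not linear in world space under a perspective projection, so a simple approach is to unproject the touch at both the near plane (winZ = 0) and the far plane (winZ = 1) and intersect the resulting ray with the plane at your known world Z. A sketch, where unproject(), touchX, touchY and targetZ are hypothetical placeholders and unproject() wraps either GLU.gluUnProject() or the inverse-matrix code above (result already divided by w):
float[] nearPt = unproject(touchX, touchY, 0.0f); // point on the near plane
float[] farPt = unproject(touchX, touchY, 1.0f);  // point on the far plane
// Intersect the near->far ray with the plane z = targetZ (linear in world space).
float t = (targetZ - nearPt[2]) / (farPt[2] - nearPt[2]);
float worldX = nearPt[0] + t * (farPt[0] - nearPt[0]);
float worldY = nearPt[1] + t * (farPt[1] - nearPt[1]);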
Hopefully my question (and answer) should help you out:
How to find absolute position of click while zoomed in
It has not only the code but also diagrams and diagrams and diagrams explaining it :) Took me ages to figure it out as well.
IMHO one doesn't need to re-implement this function...
I experimented with Erol's solution and it worked, so thanks a lot for it Erol.
Furthermore, I played with
Matrix.orthoM(mtrxProjection, 0, left, right, bottom, top, near, far);
and it works fine as well in my tiny noob example 2D OpenGL ES 2.0 project:
public void onSurfaceChanged(GL10 unused, int width, int height) {...

Android: Can I have layers in a live wallpaper?

I want to create a live wallpaper where the bottom background layer slides along with the home-screen pages as you swipe, while another layer always stays on top of the background but under the app icons.
Is this possible and how can this be done?
You'll have to override public void onOffsetsChanged (float xOffset, float yOffset, float xOffsetStep, float yOffsetStep, int xPixelOffset, int yPixelOffset)
Using the value of xOffset, you can define a source rectangle that extracts part of your bitmap and draws that part on the screen.
To get a feel for how xOffset works, assume that there are 5 home-screen pages.
If your picture is 960 x 800 (width x height) and you want to draw a 480 x 800 portion each time, then you can define a source rectangle whose coordinates would be:
x1 = xOffset * (960 - 480); y1 = 0; x2 = x1 + 480; y2 = 800;
Then your destination rectangle would be the portion of the screen you want to draw onto.
You can then use the drawBitmap(Bitmap bitmap, Rect src, Rect dst, Paint paint) method to draw the bitmap onto the screen.
I had used this technique a long time ago. I didn't check this in code before posting, and there may be alternatives (like using canvas.translate()). But hopefully this should help you get started. :)
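A minimal sketch of that approach (assumed to live inside a WallpaperService.Engine; wallpaperBitmap is a placeholder for your wide background bitmap, and the numbers follow the example above):
@Override
public void onOffsetsChanged(float xOffset, float yOffset,
        float xOffsetStep, float yOffsetStep, int xPixelOffset, int yPixelOffset) {
    SurfaceHolder holder = getSurfaceHolder();
    Canvas canvas = holder.lockCanvas();
    if (canvas == null) return;
    try {
        // Slide a screen-wide window across the wide bitmap as the pages scroll.
        int srcX = (int) (xOffset * (wallpaperBitmap.getWidth() - canvas.getWidth()));
        Rect src = new Rect(srcX, 0, srcX + canvas.getWidth(), wallpaperBitmap.getHeight());
        Rect dst = new Rect(0, 0, canvas.getWidth(), canvas.getHeight());
        canvas.drawBitmap(wallpaperBitmap, src, dst, null);
    } finally {
        holder.unlockCanvasAndPost(canvas);
    }
}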

Android OpenGL - difficulties with simple 2D transformations

I'm trying to migrate graphics in my game to OpenGL for performance reasons.
I need to draw an object using exact screen coordinates. Say a box 100x100 pixels in the center of 240x320 screen.
I need to rotate it around Z axis, preserving its size.
I need to rotate it around X axis, with perspective effect, preserving (or close to) its size.
I need to rotate it around Y axis, with perspective effect, preserving (or close to) its size.
So far I have managed to achieve the first 2 tasks:
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glLoadIdentity();
gl.glTranslatef(120, 160, 0); // move rotation point
gl.glRotatef(angle, 0.0f, 0.0f, 1.0f); // rotate
gl.glTranslatef(-120, -160, 0); // restore rotation point
mesh.draw(gl); // draws 100x100 px rectangle with the following coordinates: (70, 110, 170, 210)
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(0f, (float)width, (float)height, 0f, -1f, 1f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
}
But when I try to rotate my box around the X or Y axis, nasty things happen to it and there is no perspective effect. I tried using some other functions instead of glRotatef (glFrustum, gluPerspective, gluLookAt, applying a "skewing" matrix), but I couldn't make them work properly.
I'm trying to migrate graphics in my game to OpenGL for performance reasons.
I need to draw an object using exact screen coordinates. Say a box 100x100 pixels in the center of 240x320 screen.
For a perspective you also need some length for the lens, which determines the FOV. The FOV is the ratio of the visible extent to the viewing-plane distance. In the case of the near plane it thus becomes {left,right,top,bottom}/near. For the sake of simplicity we assume a horizontal FOV and a symmetric projection, i.e.
FOV = 2*|left|/near = 2*|right|/near = extent/distance
or if you're more into angles
FOV = 2*tan(angular FOV / 2)
For a 90° FOV the length of the lens is half the width of the focal plane. Your focal plane is 240x320 pixels, so 120 to the left and right and 160 to the top and bottom. OpenGL does not really have a focus, but we can say that the middle plane between near and far is the "focal" plane.
So let's say an object will on average have an extent on the order of the visible plane's limits, i.e. for a visible plane of 240x320 an object will on average be ~200px in size. It thus makes sense for the near-to-far clipping distance to be 200, i.e. ±100 about the focal plane. So for a FOV of 90° the focal plane is at distance
2*tan(90°/2) = extent/distance
2*tan(45°) = 2 = 240/distance
2*distance = 240
distance = 120
120; thus the near and far clipping distances are 20 and 220.
Last but not least the near clip plane limits must be scaled by near_distance/focal_distance = 20/120
So
left = -120 * 20/120 = -20
right = 120 * 20/120 = 20
bottom = -160 * 20/120 ≈ -26.7
top = 160 * 20/120 ≈ 26.7
So this gives us the glFrustum parameters:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-20, 20, -26.7, 26.7, 20, 220);
And last but not least we must move the world origin into the "focal" plane
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -120);
I need to rotate it around Z axis, preserving its size.
done.
I need to rotate it around X axis, with perspective effect, preserving (or close to) its size.
I need to rotate it around Y axis, with perspective effect, preserving (or close to) its size.
Perspective does not preserve size; that's what makes it a perspective. You can use a very long lens, i.e. a small FOV.
Code Update
As a general pro-tip: do all OpenGL operations in the drawing handler. Don't set the projection in the reshape handler; it's ugly, and as soon as you want a HUD or some other kind of overlay you'll have to discard it anyway. So here's how to change it:
public void onDrawFrame(GL10 gl) {
// fov (in radians), extent, width, height are parameters set somewhere else
// 2*tan(fov/2) = width/distance =>
float distance = (float) (width / (2.0 * Math.tan(fov / 2.0)));
float near = distance - extent / 2;
float far = distance + extent / 2;
if (near < 1.0f) {
near = 1.0f;
}
float left = (-width / 2.0f) * near / distance;
float right = ( width / 2.0f) * near / distance;
float bottom = (-height / 2.0f) * near / distance;
float top = ( height / 2.0f) * near / distance;
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glViewport(0, 0, width, height);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustum(left, right, bottom, top, near, far);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatef(0, 0, -distance); // move the world origin into the "focal" plane
gl.glTranslatef(120, 160, 0); // move rotation point
gl.glRotatef(angle, 0.0f, 0.0f, 1.0f); // rotate
gl.glTranslatef(-120, -160, 0); // restore rotation point
mesh.draw(gl); // draws 100x100 px rectangle with the following coordinates: (70, 110, 170, 210)
}
public void onSurfaceChanged(GL10 gl, int new_width, int new_height) {
width = new_width;
height = new_height;
}
You need to use a perspective projection matrix and then use your model-view matrix to get the position and scaling right.
You're using an orthographic projection (glOrthof()), which explicitly disables perspective.
Its opposite is glFrustum(), often wrapped by gluPerspective(), which is easier to use but requires the GLU library.
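For instance, a minimal sketch using android.opengl.GLU with the GL10 pipeline from the question (the field of view and near/far values are just example choices):
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
// 45-degree vertical field of view; near/far chosen so the scene fits between them.
GLU.gluPerspective(gl, 45.0f, (float) width / height, 1.0f, 100.0f);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();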
