I have a question regarding transformations in OpenGL ES 2. I'm currently drawing a rectangle using a triangle fan, as depicted in the image below. The origin is located at its center, while its width and height are 0.6 and 2 respectively. I assume these sizes are given in model space. However, in order to maintain the ratio of height to width on a tablet or phone, one has to apply a projection that takes the proportions of the device (again width and height) into account. This is why I call orthoM(projectionMatrix, 0, -aspectRatio, aspectRatio, -1f, 1f, -1f, 1f); where the aspect ratio is given by float aspectRatio = (float) width / (float) height. This finally leads to the rectangle shown in the image below.

Now, I would like to move the rectangle along the x-axis to the border of the screen. However, I was not able to come up with the correct calculation to do so; either I moved it too little or too much. So what would the calculation look like? Furthermore, I'm a little bit confused about the sizes given in model space. What are the max and min values that can be achieved there?
Thanks a lot!
The vertex positions of the rectangle are in world space. One way to do this would be to take the screen coordinates you want to move to and transform them into world space.
For example:
If the screen is 300 x 200 and you are at the center, that is 0,0 in world space (or 150, 100 in screen space), and you want to translate to x = 300 in screen space.

So the transformation is: take the screen position, convert it to normalized device coordinates, then multiply by inverseOf(projection matrix * view matrix) and divide by the w component.

Here it is explained for the mouse, which is ultimately the same thing, except that you already know the z because it is the one you used for your rectangle (if it lies on the x,y plane): OpenGL Math - Projecting Screen space to World space coords.
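For what it's worth, a minimal sketch of that unprojection with android.opengl.Matrix might look like the following. The projectionMatrix is the float[16] from the question; viewMatrix, screenX/screenY and screenWidth/screenHeight are assumed names for your view matrix (if you use one), the tap position and the viewport size:

float[] pv = new float[16];
float[] invPV = new float[16];
Matrix.multiplyMM(pv, 0, projectionMatrix, 0, viewMatrix, 0); // with no view matrix, just invert projectionMatrix
Matrix.invertM(invPV, 0, pv, 0);

// screen -> normalized device coordinates (NDC); y is flipped
float ndcX = 2f * screenX / screenWidth - 1f;
float ndcY = 1f - 2f * screenY / screenHeight;
float[] ndc = {ndcX, ndcY, 0f, 1f}; // z = 0 because the rectangle lies on the x/y plane

// NDC -> world space, then divide by w
float[] world = new float[4];
Matrix.multiplyMV(world, 0, invPV, 0, ndc, 0);
float worldX = world[0] / world[3];
float worldY = world[1] / world[3];

With the ortho call from the question (and no other transforms), the right screen edge unprojects to x = aspectRatio, so translating the rectangle's center to aspectRatio - 0.3 (half its width of 0.6) should put it flush against the border.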
Related
I'm implementing a 3D card flip animation for Android (API > 14) and have an issue with big-screen tablets (> 2048 px). During problem investigation I've come to the following basic block:

I tried to simply transform a view (a plain ImageView) using a matrix and the camera's rotateY by some angle. It works fine for angle < 60 and angle > 120 (transformed and displayed), but the image disappears (is just not displayed) when the angle is between 60 and 120. Here is the code I use:
private void applyTransform(float degree)
{
    float[] values = {1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f};
    float centerX = image1.getMeasuredWidth() / 2.0f;
    float centerY = image1.getMeasuredHeight() / 2.0f;
    Matrix m = new Matrix();
    m.setValues(values);
    Camera camera = new Camera();
    camera.save();
    camera.rotateY(degree);
    camera.getMatrix(m);
    camera.restore();
    m.preTranslate(-centerX, -centerY); // 1 draws fine without these 2 lines
    m.postTranslate(centerX, centerY);  // 2
    image1.setImageMatrix(m);
}
And here is my layout XML
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <ImageView
        android:id="@+id/ImageView01"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_gravity="center"
        android:src="@drawable/naponer"
        android:clickable="true"
        android:scaleType="matrix">
    </ImageView>
</FrameLayout>
So I have the following cases:
works fine for any angle, any center point if running on small screens 800x480, 1024x720, etc...
works ok for angle < 60 and > 120 when running on big screen devices 2048x1536, 2560x1600...
works ok for any angle on any device if the rotation is not centered (matrix pre and post translations commented out)
fails (image disappears) when running on big screen device, rotation centered and angle is between 60 and 120 degrees.
Please tell me what I'm doing wrong and advise a workaround... thank you!!!
This problem is caused by the camera distance used to calculate the transformation. While the Camera class itself doesn't say much about the subject, it is better explained in the documentation for the View.setCameraDistance() method (emphasis mine):
Sets the distance along the Z axis (orthogonal to the X/Y plane on which views are drawn) from the camera to this view. The camera's distance affects 3D transformations, for instance rotations around the X and Y axis. (...)

The distance of the camera from the view plane can have an affect on the perspective distortion of the view when it is rotated around the x or y axis. For example, a large distance will result in a large viewing angle, and there will not be much perspective distortion of the view as it rotates. A short distance may cause much more perspective distortion upon rotation, and can also result in some drawing artifacts if the rotated view ends up partially behind the camera (which is why the recommendation is to use a distance at least as far as the size of the view, if the view is to be rotated.)
To be honest, I hadn't seen this particular effect (not drawing at all) before, but I suspected it could be related to this question about perspective distortion that I'd encountered in the past. :)
Therefore, the solution is to use the Camera.setLocation() method to ensure this doesn't happen.
An important distinction with the View.setCameraDistance() method is that the units are not the same, since setLocation() doesn't use pixels. While setCameraDistance() adjusts for density, setLocation() does not. Therefore, if you wanted to calculate an appropriate z-distance based on the view's dimensions, remember to adjust for density. For example:
float cameraDistance = Math.max(image1.getMeasuredHeight(), image1.getMeasuredWidth()) * 5;
float densityDpi = getResources().getDisplayMetrics().densityDpi;
camera.setLocation(0, 0, -cameraDistance / densityDpi);
Instead of using 12 lines to create the rotation matrix, you could just implement the one described here directly: http://en.wikipedia.org/wiki/Rotation_matrix
Depending on the effect you want, you might want to center the image on the axis you want to rotate around.
http://en.wikipedia.org/wiki/Transformation_matrix
Hmm, as for the image disappearing, I would guess it has something to do with either memory (out of memory, although that would throw an exception) or rounding problems. Maybe you could try increasing to double precision?
One thing that comes to mind is that cos(alpha) goes toward 0 when alpha goes toward PI/2. Other than that I don't see any correlation between the angles and why it doesn't work for big images.
You need to adjust your translate coordinates. When calculating the translation for your image you need to take the image size into account too. When you perform matrix calculations you set android:scaleType="matrix" for your ImageView, which aligns your image to the top left corner by default. Then, when you apply your pre/post translation, your image may end up outside the bounds of your ImageView (especially if the ImageView is relatively large and your image is relatively small, as is the case on big-screen tablets).
The following translation results in the image being rotated around its center Y axis and keeps the image aligned to the top left corner:
m.preTranslate(-imageWidth/2, 0);
m.postTranslate(imageWidth/2, 0);
The following alternative results in the image being rotated around its center Y/X axes and aligns the image to the center of the ImageView:
m.preTranslate(-imageWidth/2, -imageHeight/2);
m.postTranslate(centerX, centerY);
If your image is a bitmap you can use intrinsic width/height:
Drawable drawable = image1.getDrawable();
imageHeight = drawable.getIntrinsicHeight();
imageWidth = drawable.getIntrinsicWidth();
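Putting that together with the code from the question, a hedged sketch of applyTransform() using the drawable's intrinsic size for the pre-translation (this assumes the drawable is already set when the method runs) could look like this:

private void applyTransform(float degree) {
    Drawable drawable = image1.getDrawable();
    float imageWidth = drawable.getIntrinsicWidth();
    float imageHeight = drawable.getIntrinsicHeight();
    float centerX = image1.getMeasuredWidth() / 2.0f;
    float centerY = image1.getMeasuredHeight() / 2.0f;

    Matrix m = new Matrix();
    Camera camera = new Camera();
    camera.save();
    camera.rotateY(degree);
    camera.getMatrix(m);
    camera.restore();

    // rotate around the image's own center, then move it to the ImageView's center
    m.preTranslate(-imageWidth / 2f, -imageHeight / 2f);
    m.postTranslate(centerX, centerY);
    image1.setImageMatrix(m);
}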
I'm using OpenGL ES 2.0 on Android and I initialise my display like so:
float ratio = (float) width / height;
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7); //Using Orthographic as developing 2d
What I'm having trouble understanding is this:
Let's say my app is a 'fixed screen' game (like Pac-Man ie, no scrolling, just the whole game visible on the screen).
Now at the moment, if I draw a quad at -1 to +1 on both x and y I get something like this:
Obviously, this is because I am setting -ratio, ratio as seen above. So this is correct.
But am I supposed to use this as my 'whole' screen? With rather massive letterboxing on the left and right?
I want a rectangular display that is the whole height of the physical display (and as much of the width as possible), but this would mean drawing at less than -1 and more than +1, is this a problem?
I realise the option may be to use clipping if this was a scrolling game, but for this particular scenario I want the whole 'game board' on the screen and to be static (And to use as much of the available screen real estate as possible without 'stretching' thus causing elongation of my sprites).
As I like to work with 0,0 as the top of the screen, basically what I do is pass my draw method something like so:
quad1.drawQuad (10,0);
When the drawQuad method gets this, it basically takes the range from left to right as expressed by OpenGL and divides it by the screen width (so, in my case -1.7 through +1.7, that is 3.4/2560 = 0.001328125). And say I specify 10 as my X (as above), it will do something like:
-1.7 + (10*0.001328125) = -1.68671875
It then plots the quad at -1.68671875.
Doing this I am able to work with normal co-ords (and I just subtract rather than add for y axis so I can have 0 at the top).
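In case it helps to see it spelled out, here is a minimal sketch of the mapping described above; the 2560 width and the -1.7/+1.7 extents come from my setup, while the 1600 screen height is only an assumed example:

float left = -1.7f, right = 1.7f, top = 1.0f, bottom = -1.0f;
float glUnitsPerPixelX = (right - left) / 2560f;   // 3.4 / 2560 = 0.001328125
float glUnitsPerPixelY = (top - bottom) / 1600f;   // assumed 1600 px screen height

// drawQuad(10, 0): x in pixels from the left edge, y in pixels from the top edge
float glX = left + 10 * glUnitsPerPixelX;          // -1.7 + 0.01328125 = -1.68671875
float glY = top - 0 * glUnitsPerPixelY;            // subtract so that y = 0 is the top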
Is this a good way to do things?
Because with this method, at the moment, if I specify a 100,100 square, it isn't a square, it's a rectangle. However, on the plus side, I can fill the whole physical screen by scaling the quad by width x height.
You are drawing a 1x1 quad, so that is why you see a 1x1 quad. Try translating the quad 0.25 to the right or left and you will see that you can draw in that space too.
In graphics, you create an object, like a quad, in your case you made it 1x1. Then you position it wherever you want. If you do not position it, then it will be at the origin, which is what you see.
If you draw a wider shape, you will also see you can draw outside this area on the screen.
By the way, with your ortho matrix function, you aren't just specifying the screen aspect ratio, you are also specifying the coordinate unit size you have to work with. This is why a 1x1 quad fills the height of the screen: your upper and lower boundaries are set to 1 and -1. Your ratio is a little more than one, since your width is longer than your height, so your left and right boundaries are essentially something like -1.5 and 1.5 (whatever your ratio happens to be).
But you can also do something like this:
Matrix.orthoM(mProjMatrix, 0, -width/2, width/2, -height/2, height/2, 3, 7);
Here, your ratio is the same, but you are sending your ortho projection screen coordinates. (Disclaimer: I don't use the same math library you do, but this appears to be a conventional ortho matrix function based on the arguments you are passing to it.)
So let's say you have a 1000x500 pixel resolution. In OpenGL your origin of 0,0 is in the middle. So now your left edge is at (-500, y), your right edge at (500, y) and your top is (x, 250). So if you draw your 1x1 quad, it will be very tiny, but if you draw a 250x250 square, it will look like your 1x1 quad did in your previous ortho projection.
So you can specify the coordinates you want, the ratio, the unit size, etc. for how you want to work. Personally, I don't like specifying coordinates as fractions between 0 and 1, I like to think about them in the same sense as the screen pixels.
But whether or not you choose to do this, hopefully you understand what you are actually passing to these matrix functions.
One of the best ways to learn is draw an object to the screen and just play around with different numbers you send to your modelview and projection matrices so you can see what it is they are actually doing.
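As an illustration of the pixel-unit idea (and of getting 0,0 at the top of the screen, which the asker wanted), here is a hedged sketch using android.opengl.Matrix; width and height are assumed to be the surface dimensions from onSurfaceChanged():

Matrix.orthoM(mProjMatrix, 0,
        0f, width,     // left, right in pixels
        height, 0f,    // bottom, top swapped so y increases downward
        -1f, 1f);      // near/far chosen so geometry at z = 0 is visible; adjust to your view setup
// Now drawQuad(10, 0) can take raw pixel coordinates directly,
// and a 100x100 quad really is a square on screen.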
I'm writing a little cross platform game engine for iOS, Android and BADA. I have a question about setting the perspective to be consistent regardless of screen resolution and ratio.
I have the following set up for my flipped normalized orthographic projection which works fine:
glViewport(0, 0, mWidth, mHeight);
glMatrixMode(GL_PROJECTION);
glOrthof(-1.0, //LEFT
1.0, //RIGHT
-1.0 * mHeight / mWidth, //BOTTOM
1.0 * mHeight / mWidth, //TOP
-2.0, //NEAR
100.0); //FAR
On my iPhones this is fine and I get the desired positions of world objects, but on some of the Android devices and iPads the positions are off even though the correct ratio is retained.
All of the meshes have the right proportions but the position obviously alters, so that if something is aligned to the bottom of the screen, when rendered on the iPad the objects will be drawn partly off the screen.
So the question:
Is this correct, and do I need to place objects relative to the width and height of the viewport?
Or
Is there a way of setting up the orthographic projection so that, regardless of screen ratio, the positioning remains constant without distorting the proportions of world objects?
From what I know and the math I did, I am thinking the second isn't an option, because the projection is defined based on the ratio.
The iPad and iPhone screens have different proportions. The iPhone is 3:2 and the iPad is 4:3. Android phones have too wide a range of different proportions to list and I wouldn't really like to comment on Bada. So unless you're going to show your image in a letterbox or pillarbox, or stretch it so that the aspect ratio changes from device to device, the amount of your internal world that's visible is going to change between devices.
At the minute you've fixed the left and right parameters while calculating the top and bottom based on the screen proportions. So you'll get exactly the same amount of your world across the screen on every device but the amount on the vertical will change.
If your game involves a camera moving in 3d then there's really not much you can do about it. But since you're talking about things being aligned to sides of the screen I guess the camera moves in 2d?
As a general rule, if the camera moves along the vertical then you probably want to keep what you have. Your level layouts that are exactly the width of the screen will be the width of everybody's screen. Wider devices will be able to see further ahead or behind, but there you go.
If the camera moves along the horizontal then you probably want to switch to supplying fixed values for top and bottom, and calculating left and right as per the aspect ratio. So I guess that'd be:
glOrthof(-1.0 * mWidth / mHeight, //LEFT
1.0 * mWidth / mHeight, //RIGHT
-1.0 , //BOTTOM
1.0 , //TOP
-2.0, //NEAR
100.0); //FAR
In terms of being able to close this all off inside a library, you'll probably just need to receive a flag as to whether logical viewport width should be fixed and height adapted to the screen or height fixed and width adapted.
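For what it's worth, a rough sketch of that flag, written here against Android's GL10 binding (javax.microedition.khronos.opengles.GL10; the same glOrthof arguments apply in the C API used above); setupOrtho and fixWidth are hypothetical names:

void setupOrtho(GL10 gl, int mWidth, int mHeight, boolean fixWidth) {
    float aspect = (float) mHeight / (float) mWidth;
    gl.glViewport(0, 0, mWidth, mHeight);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    if (fixWidth) {
        // horizontal extent fixed at [-1, 1], vertical adapts to the aspect ratio
        gl.glOrthof(-1.0f, 1.0f, -aspect, aspect, -2.0f, 100.0f);
    } else {
        // vertical extent fixed at [-1, 1], horizontal adapts to the aspect ratio
        gl.glOrthof(-1.0f / aspect, 1.0f / aspect, -1.0f, 1.0f, -2.0f, 100.0f);
    }
}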
I am working on an Android Application in which a 3d scene is displayed and the user should be able to select an area by clicking/tapping the screen. The scene is pretty much a planar (game) board on which different objects are placed.
Now, the problem is how do I get the clicked area on the board from the actual screen-space coordinates?
I was planning on using gluUnProject(), as I have access to (almost) all the necessary parameters. Unfortunately I am missing the winZ param, and cannot get the current depth as the touch event is occurring in a different thread than the GL-thread.
My new plan is to still use gluUnProject, but with a winZ of 0, and then project the resulting point onto the board (the board stretches from 0,0,0 to 10,0,10 in model space). However, I can't seem to figure out how to do this.
I would be very happy if anyone could help me out with the maths needed to do this (matrices were never my strongest side), or perhaps find a better solution.
To clarify; here is an image of what I want to do:
The red rectangle represent the device screen, the green x is the touch event and the black square is the board (grey subdivisions represent a square of one unit). I need to figure out where on the board the touch has happened (in this case it is in square 1,1).
As you are basically working in 2D already (I presume you mean your 3D board stretches from 0,0,0 to 10,10,0 (x,y,z)), you could translate and interpolate/extrapolate the 2D/3D space coordinates from your screen space coordinates without gluUnProject(). You will need your screen resolution, and to pick the resolution of the 3D space grid you wish to convert to.

If both the screen and 3D space origins are aligned (0,0 screen space is at 0,0,0 in 3D space), and your screen dimensions are 320x240, then using your existing 10x10 3D grid, 320/10 = 32 and 240/10 = 24, so the screen space size of a single 1x1 area is 32x24. If the user presses on 162, 40, the user is pressing within (5, 1, 0) in the 3D space (162/32 >= 5 but < 6, 40/24 >= 1 but < 2).

If you need greater resolution than this you can select a higher 3D space grid resolution (i.e. using 20 instead of 10). You don't need to update the GL matrix to use this factor. Though it may make things simpler in some ways, I'm sure from a modeling perspective you would have additional work to do. Just be aware that for a factor like 20, 1,3 would be at (.5, 1.5, 0).

If your screen and 3D space origins are not already aligned, you will need to translate the screen space coordinate prior to this. If 0,0 screen space is at 10,10,0, you need to take your screen resolution and subtract the current point from it, making 0,0 into 320, 240 in this example; our example point of 162, 40 would become 158 (320 - 162), 200 (240 - 40).
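A minimal sketch of that direct mapping, using the 320x240 screen and 10x10 board from the example above:

int screenWidth = 320, screenHeight = 240;
int gridCols = 10, gridRows = 10;
float cellW = (float) screenWidth / gridCols;   // 32 px per board unit in x
float cellH = (float) screenHeight / gridRows;  // 24 px per board unit in y

int touchX = 162, touchY = 40;                  // the example press from the text
int boardX = (int) (touchX / cellW);            // 162 / 32 -> 5
int boardY = (int) (touchY / cellH);            // 40 / 24  -> 1
// the press lands in board square (5, 1)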
If you'd like an overview of the projection matrix and how that all works, which could help you understand where to put the screen space dimensions in the unproject matrix, read this chapter of the OpenGL red book. http://www.glprogramming.com/red/chapter03.html
Hope this helps, and good luck!
So, I managed to solve this by doing the following:
float[] clipPoint = new float[4];
int[] viewport = new int[]{0, 0, width, height};
//screenY/screenX are the screen-coordinates, y should be flipped:
screenY = viewport[3] - screenY;
//Calculate a z-value appropriate for the far clip:
float dist = 1.0f;
float z = (1.0f/clip[0] - 1.0f/dist)/(1.0f/clip[0]-1.0f/clip[1]);
//Use gluUnProject to create a 3d point in the far clip plane:
GLU.gluUnProject(screenX, screenY, z, vMatrix, 0, pMatrix, 0, viewport, 0, clipPoint, 0);
//Get a point representing the 'camera':
float eyeX = lookat[0] + eyeOffset[0];
float eyeY = lookat[1] + eyeOffset[1];
float eyeZ = lookat[2] + eyeOffset[2];
//Do some magic to calculate where the line between clipPoint and eye/camera would intersect the y-plane:
float dX = eyeX - clipPoint[0];
float dY = eyeY - clipPoint[1];
float dZ = eyeZ - clipPoint[2];
float resX = glu[0] - (dX/dY)*glu[1];
float resZ = glu[2] - (dZ/dY)*glu[1];
//resX and resZ is the wanted result.
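For reference, the "magic" at the end is a standard ray/plane intersection. Assuming glu holds the unprojected clipPoint (after the divide by w), the ray is P(t) = glu + t*(eye - glu); setting its y component to zero gives t = -glu[1]/dY, and substituting back yields resX = glu[0] - (dX/dY)*glu[1] and resZ = glu[2] - (dZ/dY)*glu[1], which is exactly what the last two lines compute.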
I am trying to learn OpenGL on Android. In the gl.glTranslatef(x,y,z) call, I am shifting my texture by some units in the +ve x direction. But I am unable to find out how many pixels 1 unit of x corresponds to.
Here is what I am doing:
I call gl.glViewport(0,0,width,height); // This will set my rectangle with 0,0 as the lower-left corner and then extend it to accommodate the width and height.
Then
I call gl.glFrustumf(-5,5,-7,7,3,7); // I am a little confused about how this call uses the dimensions I set in gl.glViewport.
How will -5 to 5 units from left to right in the above call translate to pixels on the screen of an Android device?
I mean, if width = 320 and height = 533 pixels, then how many pixels will be occupied on the screen due to the gl.glFrustumf call?
I am experimenting with the gl.glTranslatef call by specifying an x shift of 5.0, but it does not move the bitmap to the right or left edge of the screen; when I increase it to 6, part of it is still visible on the screen.
Thanks
Siddhesh
In short, I am searching for the maximum number of units (in terms of X) that will represent the extreme corners of my Android phone screen.
glViewport tells it what rectangle (in pixels) your OpenGL output should be displayed in.
glFrustum tells it what coordinates in your "world" units should be mapped to that viewport.
An important point: your glFrustum call includes not only a height and width, but also a depth. Since you are specifying a Frustum, not a cube, that means anything with a Z coordinate anywhere but the very front of your frustum will be scaled down appropriately for its distance from the viewer.
As such, when you do a glTranslatef, the distance by which a particular object will move (in terms of pixels) will depend on its distance from the viewer. The further away it is from the viewer, the fewer pixels a particular sideways or up/down translation will cover.
Depending on what else you're doing, one easy way to deal with this might be to use glOrtho instead of glFrustum. glOrtho gives orthographic mode, which means no perspective scaling is done, so a given X or Y distance will translate to the same number of pixels, regardless of distance from the viewer.
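For instance, a hedged sketch with glOrthof (GL10 binding) using the same extents the asker passed to the frustum call; with an orthographic projection, one unit maps to a fixed number of pixels regardless of z:

gl.glViewport(0, 0, width, height);      // e.g. 320 x 533 pixels
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glOrthof(-5f, 5f, -7f, 7f, 3f, 7f);   // left, right, bottom, top, near, far
// 10 x-units span 320 px, so 1 x-unit = 32 px; 14 y-units span 533 px, so 1 y-unit is about 38 px.
// A glTranslatef of 5.0 in x therefore moves an object exactly half the screen width.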