Want to have several alternative perspective options in GL - android

So I've set up the perspective for my view in the onSurfaceCreated method using this:
float size = .01f * (float) Math.tan(Math.toRadians(45.0) / 2);
float ratio = _width / _height;
// perspective:
gl.glFrustumf(-size, size, -size / ratio, size / ratio, 0.01f, 110.0f);
It's more or less identical to this tutorial:
http://www.droidnova.com/android-3d-game-tutorial-part-vi,436.html
BUT I want to be able to switch between 45.0 and other angles of view. When I try to change the values of
gl.glFrustumf
later, the screen goes blank! I can manually set the angle in the onSurfaceCreated method, but I can't figure out how to switch between two perspective views without restarting the app. Can anyone help?
Thanks for your time

Do you clear the projection matrix before calling glFrustum the second time? OpenGL somewhat unintuitively multiplies the frustum matrix with the previous matrix on the stack. If you call glFrustum several times in succession without resetting the projection matrix you will get a nonsensical result.
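A minimal sketch of that fix (class and method names are mine, not from the post): recompute the frustum extents for the new field of view, then reset the projection matrix with glLoadIdentity() before re-issuing glFrustumf. The pure-math part looks like:

```java
// Hypothetical helper: compute glFrustumf extents for a given vertical
// field of view, so the angle can be swapped at runtime. Before applying
// the result, reset the projection matrix or the new frustum multiplies
// the old one:
//   gl.glMatrixMode(GL10.GL_PROJECTION);
//   gl.glLoadIdentity();
//   gl.glFrustumf(e[0], e[1], e[2], e[3], 0.01f, 110.0f);
public final class FrustumHelper {
    // Returns {left, right, bottom, top} matching the tutorial's layout.
    public static float[] frustumExtents(double fovDegrees, float width,
                                         float height, float near) {
        float size = near * (float) Math.tan(Math.toRadians(fovDegrees) / 2.0);
        float ratio = width / height;
        return new float[] { -size, size, -size / ratio, size / ratio };
    }

    public static void main(String[] args) {
        // 90-degree fov: tan(45 deg) is 1, so the half-width equals near.
        float[] e = frustumExtents(90.0, 800f, 480f, 0.01f);
        System.out.println(e[0] + " " + e[1] + " " + e[2] + " " + e[3]);
    }
}
```

With this shape, switching between 45.0 and another angle is just a matter of calling the helper again with the new fov inside onDrawFrame (or wherever the switch happens) and re-applying the projection.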

Related

Projection and Translation in OpenGL ES 2

I have a question regarding transformations in OpenGL ES 2. I'm currently drawing a rectangle using triangle fans as depicted in the image below. The origin is located at its center, while its width and height are 0.6 and 2 respectively. I assume these sizes relate to model space. However, in order to maintain the ratio of height to width on a tablet or phone, one has to apply a projection that accounts for the proportions of the device (again width and height). This is why I call orthoM(projectionMatrix, 0, -aspectRatio, aspectRatio, -1f, 1f, -1f, 1f); where the aspectRatio is given by float aspectRatio = (float) width / (float) height. This finally leads to the rectangle shown in the image below.

Now, I would like to move the rectangle along the x-axis to the border of the screen. However, I have not been able to come up with the correct calculation to do so; either I moved it too little or too much. What would the calculation look like? Furthermore, I'm a little bit confused about the sizes given in model space. What are the max and min values that can be achieved there?
Thanks a lot!
The vertex positions of the rectangle are in world space. One way to do this is to take the screen coordinates you want to move to and transform them into world space.
For example:
If the screen is 300 x 200 and you are at the center, that is 0,0 in world space (or 150, 100 in screen space), and you want to translate to 300.
So the transformation should be: screen position to normalized device coordinates, then multiply by inverseOf(projection matrix * view matrix) and divide by the w component.
Here it is explained for mouse that it is finally the same, just that you know the z because it is the one you used for your rectangle already (if it is on the plane x,y): OpenGL Math - Projecting Screen space to World space coords.
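As a small illustration of the first step of that chain (plain math; the class and method names are my own): converting a screen-space point to normalized device coordinates, which is what then gets multiplied by inverseOf(projection * view).

```java
// Sketch: screen space to normalized device coordinates, the first step
// before multiplying by inverseOf(projection * view). Note the y flip:
// screen y grows downward, NDC y grows upward.
public final class NdcHelper {
    public static float[] screenToNdc(float sx, float sy,
                                      float screenW, float screenH) {
        float ndcX = 2f * sx / screenW - 1f; // left edge -> -1, right edge -> +1
        float ndcY = 1f - 2f * sy / screenH; // top edge -> +1, bottom edge -> -1
        return new float[] { ndcX, ndcY };
    }

    public static void main(String[] args) {
        // For the 300 x 200 example: the center (150, 100) maps to (0, 0),
        // and the right edge (300, 100) maps to (1, 0).
        float[] center = screenToNdc(150f, 100f, 300f, 200f);
        float[] edge = screenToNdc(300f, 100f, 300f, 200f);
        System.out.println(center[0] + "," + center[1] + " " + edge[0] + "," + edge[1]);
    }
}
```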

OpenGL ES 2.0 : flip y-coordinates using perspective projection

I wonder if there is an easy way to flip the y-coordinates when using a perspective projection? The threads about this issue seem to focus on orthographic projection. I am porting my Canvas-based game to OpenGL ES 2.0 and it has relatively complex collision detection, so a lot of the code assumes a y-axis that starts at 0 at the top of the screen and ends at the bottom, for instance at 2560.
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
game_width = width;
game_height = height;
GLES20.glViewport(0, 0, width, height);
// while the width will vary as per aspect ratio.
final float ratio = (float) width / height;
final float left = -ratio;
final float right = ratio;
final float bottom = -1.0f;
final float top = 1.0f;
final float near = 1f;
final float far = 40.0f;
Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}
There is very little difference between using an orthographic or frustum matrix here, so the simplest answer would be to simply swap the bottom and top parameters, or even set them to whatever you need.
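As a quick sketch of why the swap flips y (the expression below is the standard symmetric-frustum matrix entry, not code from the answer): the y-scaling term of the frustum matrix is 2*near/(top - bottom), so exchanging the two bounds negates it, mirroring every projected y.

```java
// The y-scaling entry of the standard symmetric frustum matrix.
// Swapping top and bottom negates it, which mirrors the y axis.
public final class FrustumFlip {
    public static float frustumYScale(float bottom, float top, float near) {
        return 2f * near / (top - bottom);
    }

    public static void main(String[] args) {
        System.out.println(frustumYScale(-1f, 1f, 1f)); // normal orientation: 1.0
        System.out.println(frustumYScale(1f, -1f, 1f)); // y flipped: -1.0
    }
}
```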
But to look into frustum a bit more:
What this method does is it creates a matrix that will scale the objects depending on the distance from near. It is designed so that an object at near is scaled by 1.0. So for instance if you put a rectangle with coordinates left, right, top, bottom as x and y then near as z and using no other matrix but the frustum the result will be exactly a full screen rectangle.
Objects closer than near will not be drawn, and those further away will be scaled depending on all parameters but far. The far parameter affects nothing but where your objects stop being drawn, so in most cases there is no difference if you use a very large far value, with one important exception: a large far value hurts depth-test precision. So when using a depth buffer, make this value as small as possible while still large enough to see all your objects.
In most cases we define the frustum with a field of view as an angle. You define constant near, far and fov, from which the border parameters are computed, for example right = tan(fov/2) * near and top = tan(fov/2) * near * (viewHeight/viewWidth). These are just examples though, as there are many ways to define it.
In your case there is no reason not to define these values as you please, for instance left = 0.0, right = width, bottom = height and top = 0.0. But you still need to define near and far values, which must be positive, and then if your objects are at a z distance of 0.0 they will all be clipped.
To avoid this it is best if you use a lookAt procedure which will generate another matrix that may define "camera" position in your scene. By simply putting it to z=-near you should see the objects exactly as with using orthographic projection. The problem now is that if you want to "zoom in" by putting the camera closer to the objects those objects will again not be drawn.
To achieve something like that you need to define some maximum scale for instance maxZoom = 10.0. What you would do then is divide all of the border parameters (top, left...) with that value. You would also apply this scale to the z value in your lookAt matrix to see the scene as not being zoomed.
So in general, to flip the coordinates you may modify the border values or play with the lookAt matrix. There are other ways as well, but these are pretty standard. I hope this clears up a few things for you.

Android view 3d rotate transformation on big resolution screens

I'm implementing a 3D card flip animation for Android (API > 14) and have an issue with big-screen tablets (resolutions above 2048 px). During problem investigation I've come to the following basic block:
I tried to just transform a view (a simple ImageView) using a matrix and the camera's rotateY by some angle. It works OK for angle < 60 and angle > 120 (transformed and displayed), but the image disappears (just not displayed) when the angle is between 60 and 120. Here is the code I use:
private void applyTransform(float degree)
{
float [] values = {1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f, 0.0f, 1.0f};
float centerX = image1.getMeasuredWidth() / 2.0f;
float centerY = image1.getMeasuredHeight() / 2.0f;
Matrix m = new Matrix();
m.setValues(values);
Camera camera = new Camera();
camera.save();
camera.rotateY(degree);
camera.getMatrix(m);
camera.restore();
m.preTranslate(-centerX, -centerY); // 1 draws fine without these 2 lines
m.postTranslate(centerX, centerY); // 2
image1.setImageMatrix(m);
}
And here is my layout XML
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent">
<ImageView
android:id="@+id/ImageView01"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="center"
android:src="@drawable/naponer"
android:clickable="true"
android:scaleType="matrix">
</ImageView>
</FrameLayout>
So I have the following cases:
works fine for any angle and any center point when running on small screens (800x480, 1024x720, etc.)
works OK for angle < 60 and angle > 120 when running on big-screen devices (2048x1536, 2560x1600, ...)
works OK for any angle on any device if the rotation is not centered (matrix pre- and post-translations commented out)
fails (image disappears) when running on a big-screen device with the rotation centered and the angle between 60 and 120 degrees.
Please tell what I'm doing wrong and advise some workaround... thank you!!!
This problem is caused by the camera distance used to calculate the transformation. While the Camera class itself doesn't say much about the subject, it is better explained in the documentation for the View.setCameraDistance() method (emphasis mine):
Sets the distance along the Z axis (orthogonal to the X/Y plane on
which views are drawn) from the camera to this view. The camera's
distance affects 3D transformations, for instance rotations around the
X and Y axis. (...)
The distance of the camera from the view plane can have an affect on
the perspective distortion of the view when it is rotated around the x
or y axis. For example, a large distance will result in a large
viewing angle, and there will not be much perspective distortion of
the view as it rotates. A short distance may cause much more
perspective distortion upon rotation, and can also result in some
drawing artifacts if the rotated view ends up partially behind the
camera (which is why the recommendation is to use a distance at
least as far as the size of the view, if the view is to be rotated.)
To be honest, I hadn't seen this particular effect (not drawing at all) before, but I suspected it could be related to this question related to perspective distortion I'd encountered in the past. :)
Therefore, the solution is to use the Camera.setLocation() method to ensure this doesn't happen.
An important distinction from the View.setCameraDistance() method is that the units are not the same, since setLocation() doesn't use pixels. While setCameraDistance() adjusts for density, setLocation() does not. Therefore, if you want to calculate an appropriate z-distance based on the view's dimensions, remember to adjust for density. For example:
float cameraDistance = Math.max(image1.getMeasuredHeight(), image1.getMeasuredWidth()) * 5;
float densityDpi = getResources().getDisplayMetrics().densityDpi;
camera.setLocation(0, 0, -cameraDistance / densityDpi);
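Restated as a plain helper (the class and method names are hypothetical), the arithmetic above is just: take the larger view dimension in pixels, apply the 5x margin, and divide by densityDpi before handing the negated value to Camera.setLocation().

```java
// Sketch of the density adjustment: compute a z for Camera.setLocation()
// from the view's pixel size. The 5x multiplier and the negative z follow
// the snippet above; the helper name is my own.
public final class CameraDistanceHelper {
    public static float cameraZ(float viewWidthPx, float viewHeightPx,
                                float densityDpi) {
        float distancePx = Math.max(viewWidthPx, viewHeightPx) * 5f;
        return -distancePx / densityDpi;
    }

    public static void main(String[] args) {
        // e.g. an 800 px view on a 160 dpi screen: -(800 * 5) / 160 = -25.0
        System.out.println(cameraZ(400f, 800f, 160f));
    }
}
```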
Instead of using 12 lines to create the rotation matrix, you could just implement the one on the first line of http://en.wikipedia.org/wiki/Rotation_matrix
Depending on the effect you want, you might want to center the image on the axis you want to rotate around.
http://en.wikipedia.org/wiki/Transformation_matrix
Hmm, as for the image disappearing, I would guess it has something to do with either memory (out of memory, although that would throw an exception) or rounding problems. Maybe you could try increasing to double precision?
One thing that comes to mind is that cos(alpha) goes toward 0 as alpha goes toward PI/2. Other than that I don't see any correlation between the angles and why it doesn't work for big images.
You need to adjust your translate coordinates. When calculating the translation for your image you need to take the image size into account too. When you perform matrix calculations you set android:scaleType="matrix" for your ImageView. This aligns your image at the top left corner by default. Then, when you apply your pre/post translation, your image may end up outside the bounds of your ImageView (especially if the ImageView is relatively large and your image relatively small, as is the case on big-screen tablets).
The following translation results in the image being rotated around its center Y axis and keeps the image aligned to the top left corner:
m.preTranslate(-imageWidth/2, 0);
m.postTranslate(imageWidth/2, 0);
The following alternative results in the image being rotated around its center Y/X axes and aligns the image to the center of the ImageView:
m.preTranslate(-imageWidth/2, -imageHeight/2);
m.postTranslate(centerX, centerY);
If your image is a bitmap you can use intrinsic width/height:
Drawable drawable = image1.getDrawable();
imageHeight = drawable.getIntrinsicHeight();
imageWidth = drawable.getIntrinsicWidth();

Android - calculating pixel rotation without matrix? And checking if pixel is in view

I'm hoping someone can help me out. I'm making an image manipulation app, and I found I needed a better way to load in large images.
My plan is to iterate through the "hypothetical" pixels of an image (a "for loop" covering the width/height of the base image, so each iteration represents a pixel), scale/translate/rotate each pixel's position relative to the view, use this information to determine which pixels are being displayed in the view itself, and then use a combination of BitmapRegionDecoder and BitmapFactory.Options to load in only the section of the image that the output actually needs, rather than the full (even if scaled) image.
So far I seem to have covered scale and translation of the image properly, but I can't figure out how to calculate rotation. Since it's not a real Bitmap pixel I can't use Matrix.rotate =( Here are the image transformations in the onDraw of the view; imgPosX and imgPosY hold the center point of the image:
m.setTranslate(-userImage.getWidth() / 2.0f, -userImage.getHeight() / 2.0f);
m.postScale(curScale, curScale);
m.postRotate(angle);
m.postTranslate(imgPosX, imgPosY);
mCanvas.drawBitmap(userImage.get(), m, paint);
and here is the math so far of how I'm trying to determine if an images pixel is on the screen:
for(int j = 0;j < imageHeight;j++) {
for(int i = 0;i < imageWidth;i++) {
//image starts completely center in view, assume image is original size for simplicity
//this is the original starting position for each pixel
int x = Math.round(((float) viewSizeWidth / 2.0f) - ((float) newImageWidth / 2.0f) + i);
int y = Math.round(((float) viewSizeHeight / 2.0f) - ((float) newImageHeight / 2.0f) + j);
//first we scale the pixel here, easy operation
x = Math.round(x * imageScale);
y = Math.round(y * imageScale);
//now we translate; we do this by determining how many pixels
//our image's x/y coordinates have moved from their original
//starting point. imgPosX and imgPosY start at the center
//of the view
x = x + Math.round((imgPosX - ((float) viewSizeWidth / 2.0f)));
y = y + Math.round((imgPosY - ((float) viewSizeHeight / 2.0f)));
//TODO need rotation here
}
}
So, assuming my math up until rotation is correct (probably not, but it appears to be working so far), how would I then calculate the rotation of that pixel's position? I've tried other similar questions like:
Link 1
Link 2
Link 3
Without using rotation, the pixels I expect to actually be on the screen are represented (I made a text file that outputs the results in 1's and 0's so I can have a visual representation of what's on the screen), but with the formula found in those questions the information isn't what is expected. (Scenario: I've rotated an image so only the top left corner is visible in the view. Using the info from Here to rotate the pixel, I should expect to see a triangular set of 1's in the upper left corner of the output file, but that's not the case.)
So, how would I calculate a pixel's position after rotation without using the Android matrix, but still get the same results?
And if I've just messed it up entirely my apologies =( Any help would be appreciated, this project has gone on for so long and I want to finally be done lol
If you need any more information I will provide as much as I possibly can =) Thank you for your time
I realize this question is particularly difficult so I will be posting a bounty as soon as SO allows.
You do not need to create your own Matrix, use the existing one.
http://developer.android.com/reference/android/graphics/Matrix.html
You can map bitmap coordinates to screen coordinates by using
float[] coords = {x, y};
m.mapPoints(coords);
float sx = coords[0];
float sy = coords[1];
If you want to map screen to bitmap coordinates, you can create the inverse matrix
Matrix inverse = new Matrix(m);
inverse.inverse();
inverse.mapPoints(...)
I think your overall approach is going to be slow, as doing pixel manipulation on the CPU from Java has a lot of overhead. When drawing bitmaps normally, the pixel manipulation is done on the GPU.
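That said, if you do want the by-hand formula instead of mapPoints, rotating a point around a center is standard 2D geometry (this sketch is mine, not from the post): translate the point so the center is at the origin, apply the rotation matrix, and translate back.

```java
// Pure-math equivalent of a single rotate-about-center mapping.
// Note: with screen coordinates (y grows downward), a positive angle
// appears clockwise on screen.
public final class PixelRotate {
    public static float[] rotateAbout(float x, float y,
                                      float cx, float cy, float degrees) {
        double r = Math.toRadians(degrees);
        float dx = x - cx, dy = y - cy;
        float rx = (float) (dx * Math.cos(r) - dy * Math.sin(r)) + cx;
        float ry = (float) (dx * Math.sin(r) + dy * Math.cos(r)) + cy;
        return new float[] { rx, ry };
    }

    public static void main(String[] args) {
        // (1, 0) rotated 90 degrees around the origin lands near (0, 1).
        float[] p = rotateAbout(1f, 0f, 0f, 0f, 90f);
        System.out.println(p[0] + "," + p[1]);
    }
}
```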

Getting a scale factor of a Bitmap after rotating it with a matrix

I've got the following problem, which I've tried to solve the whole day.
I load a Bitmap picture with its corresponding height and width into an ImageView.
Then I use a matrix for moving,scaling and rotating the image.
For scaling I'm using the postScale method.
For moving I'm using the postTranslate method.
Only for Rotating I'm using the preRotate method.
Now I need to get the factor I scaled the image with, because I later need this factor in another program.
Using the MSCALE_X and MSCALE_Y values of the matrix only works as long as I do no rotation. Once I rotate the image, the scale values don't fit anymore (because the matrix was multiplied using the formula shown in the API).
Now my Question is:
How can I still get the scale factor of the image after rotating it?
For the rotation factor (degrees) it is simple, because I store it in an extra variable which is incremented/decremented while rotating.
But for the scale factor it does not work, because if I first scale an image down to 50% and then rescale it up to 150% of that, then I rescaled it by a factor of 3, but the overall scaling factor is only 1.5.
Another example: even if I do not rescale the picture, its scaling factor values change when I rotate it.
//Edit:
Finally I solved the problem on my own :) (doing a bit of math, and then I figured something interesting (or let's say obvious) out).
Here my solution:
I figured out that the values MSCALE_X and MSCALE_Y are calculated using the cosine function (yeah, the basic math...). Using 0° rotation leads to the correct scalingWidth and scalingHeight within X and Y; 90° and 270° result in a scalingWidth/Height of 0, and 180° results in the scalingWidth/Height multiplied by -1.
This leads me to the idea to write the following function:
This function saves the current matrix into a new matrix. Then it rotates the new matrix back to the start state (0°). Now we can read the untouched values MSCALE_X and MSCALE_Y from our matrix (which are the correct scaling factors now).
I had the same problem. This is a simple way and the logic is sound (in my mind).
For: xScale==yScale
float scale = matrix.mapRadius(1f) - matrix.mapRadius(0f);
For: xScale != yScale
float[] points={0f,0f,1f,1f};
matrix.mapPoints(points);
float scaleX=points[2]-points[0];
float scaleY=points[3]-points[1];
If you are not translating, you may be able to get away with just one 1f vector/point. I've tested the xScale==yScale (mapRadius) variant and it seems to work.
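Another option (my own sketch, not from the answers above): when the matrix is composed only of rotation and scale, the scale factors are the Euclidean norms of the matrix columns, and rotation does not change a column's norm. This recovers the scale directly from the raw values, even after rotating.

```java
// Alternative sketch: for a matrix built only from rotation and scale,
// the scale factors are the column norms, which rotation leaves intact.
// The value order follows Android's Matrix.getValues():
// { MSCALE_X, MSKEW_X, MTRANS_X, MSKEW_Y, MSCALE_Y, MTRANS_Y, 0, 0, 1 }.
public final class ScaleExtract {
    public static float[] scalesFromValues(float[] v) {
        float scaleX = (float) Math.hypot(v[0], v[3]); // column (MSCALE_X, MSKEW_Y)
        float scaleY = (float) Math.hypot(v[1], v[4]); // column (MSKEW_X, MSCALE_Y)
        return new float[] { scaleX, scaleY };
    }

    public static void main(String[] args) {
        // rotate(30 deg) * scale(1.5): the column norms still report 1.5.
        double r = Math.toRadians(30);
        float s = 1.5f;
        float[] v = { (float) (s * Math.cos(r)), (float) (-s * Math.sin(r)), 0f,
                      (float) (s * Math.sin(r)), (float) (s * Math.cos(r)), 0f,
                      0f, 0f, 1f };
        System.out.println(scalesFromValues(v)[0] + " " + scalesFromValues(v)[1]);
    }
}
```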
I had a similar problem with an app I'm writing. I couldn't see an obvious and simple solution, so I just created a RectF object that had the same initial coords as the bitmap. Then, every time I adjusted the matrix, I'd apply the transformation to the RectF as well (using Matrix.mapRect()). This worked perfectly for me. It also allowed me to keep track of the absolute position of the edges of the bitmap.
