I created a sphere using OpenGL ES 2.0 on Android. In a perspective projection environment, I animate the sphere from [-1.5, -2, -2] to [-1.5, 2, -2]. The problem is that the sphere looks like an ellipse when it reaches the frustum boundary. In fact, it only looks like a circle when it is at [0, 0, -2]; the farther it moves away from [0, 0], the more it looks like an ellipse.
Is this the standard behavior? I thought a sphere should look like a circle from all angles of view. Could you please help?
You should reduce your field of view; what you are seeing is normal and is a side effect of the slightly artificial nature of a 3D projection: a 3D projection assumes the viewer is sitting a fixed distance from the screen, with their eyes positioned on the z axis through the centre of the screen, looking exactly forwards. Check out the related problems described here for a description of the same effect with a real camera.
Quite often the implicit default field of view is ninety degrees. But when you hold a phone in your hand it occupies much less than ninety degrees of your vision.
If you're using glFrustum, then try specifying smaller values for left, right, top and bottom. As a quick fix, just apply a glScalef of, say, 2.0 to your projection stack (or your ES 2.0 equivalent) after computing your projection matrix.
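For concreteness, here is a minimal ES 2.0 sketch, assuming you build your projection with android.opengl.Matrix (viewWidth/viewHeight and the numbers below are illustrative, not taken from your code):

float near = 1f, far = 10f;
float aspect = (float) viewWidth / viewHeight;  // viewWidth/viewHeight: your surface size
// With near = 1, top = 1 gives a ~90 degree vertical field of view;
// top = 0.5 narrows it to roughly 53 degrees, which is closer to how much
// of your vision the phone actually occupies.
float top = 0.5f;
float[] projection = new float[16];
android.opengl.Matrix.frustumM(projection, 0,
        -top * aspect, top * aspect,   // left, right
        -top, top,                     // bottom, top
        near, far);
// Or, the quick fix mentioned above: scale the projection after computing it.
// android.opengl.Matrix.scaleM(projection, 0, 2f, 2f, 1f);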
I'm asking this question because the search results on Google and Stack Overflow seem to be mostly about creating cylinder panoramas rather than about writing OpenGL ES 2.0 code to view them.
In short, I have a picture that I shot with my smartphone camera in Panorama mode. In this case, let's say the image is 19168x3040 pixels.
I've modified the MD360Player library to add a cylindrical projection (basically the spherical projection shape but with a different Y-component formula). The thing is, I don't seem to have got the projection correct. It looks like this now:
That is after I halved the cylinder's height - if I don't, the cylinder will fit the square's height, but the image will be stretched vertically.
I heard that I have to map the points on the cylinder onto a plane to get the projection right. It seems I have to do what Facebook did and have separate modes: one for when I touch the view to scroll and one for when I release my finger to view the mapped picture?
In that case, how do I map the currently viewed part of the cylinder onto a plane, especially the area at the zenith and nadir of the cylindrical projection?
The code for the cylinder projection will be posted if needed.
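In the meantime, here is a hedged sketch of the usual cylinder parametrisation (plain Java, not the MD360Player API; all names and counts are illustrative). Deriving the cylinder height from the panorama's aspect ratio gives an undistorted image on the cylinder, which is what halving the height was approximating:

int slices = 64;                      // horizontal segments around the cylinder
float radius = 1f;
float imageAspect = 3040f / 19168f;   // height / width of the panorama
// circumference = 2*pi*radius, so the undistorted cylinder height is:
float height = (float) (2.0 * Math.PI * radius * imageAspect);

float[] vertices  = new float[(slices + 1) * 2 * 3];
float[] texCoords = new float[(slices + 1) * 2 * 2];
for (int i = 0; i <= slices; i++) {
    double theta = 2.0 * Math.PI * i / slices;
    float x = (float) (radius * Math.sin(theta));
    float z = (float) (radius * Math.cos(theta));
    float u = (float) i / slices;
    // bottom vertex of this column
    vertices[i * 6]     = x;
    vertices[i * 6 + 1] = -height / 2f;
    vertices[i * 6 + 2] = z;
    texCoords[i * 4]     = u;
    texCoords[i * 4 + 1] = 1f;
    // top vertex of this column
    vertices[i * 6 + 3] = x;
    vertices[i * 6 + 4] = height / 2f;
    vertices[i * 6 + 5] = z;
    texCoords[i * 4 + 2] = u;
    texCoords[i * 4 + 3] = 0f;
}
// Draw as a triangle strip over (slices + 1) * 2 vertices.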
In OpenCV I use the camera to capture a scene containing two squares, a and b, both at the same distance from the camera, whose known real sizes are, say, 10 cm and 30 cm respectively. I find the pixel width of each square, which let's say are 25 and 40 pixels (to get the pixel width, OpenCV detects the squares as cv::Rect objects and I read their width field).
Now I remove square a from the scene and change the distance from the camera to square b. The program now measures the width of square b as, let's say, 80 pixels. Is there an equation, using the camera's configuration (resolution, DPI?), that I can use to work out what the corresponding pixel width of square a would be if it were placed back in the scene at the same distance as square b?
The math you need for your problem can be found in chapter 9 of "Multiple View Geometry in Computer Vision", which happens to be freely available online: https://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf.
The short answer to your problem is:
No, not in this exact form. Since you are working in a 3D world, you have one degree of freedom left. As a result, you need more information in order to eliminate this degree of freedom (e.g. by knowing the depth, the relation of the two squares to each other, the movement of the camera, ...). This mainly depends on your specific situation. In any case, reading and understanding chapter 9 of the book should help you out here.
PS: to me it seems your problem fits into the broader category of "baseline matching" problems. Reading up on this, in addition to epipolar geometry and the fundamental matrix, might help you out.
Since you write of "squares" with just a "width" in the image (as opposed to "trapezoids" with some wonky vertex coordinates), I assume that you are considering an ideal pinhole camera and ignoring any perspective distortion/foreshortening, i.e. there is no lens distortion and your planar objects are exactly parallel to the image/sensor plane.
Then it is a very simple 2D projective geometry problem, and no separate knowledge of the camera geometry is needed. Just write down the projection equations for the first situation: you have 4 unknowns (the camera focal length, the common depth of the squares, and the horizontal positions of, say, their left sides) and 4 equations (the projections of the left and right sides of each square). Solve the system and keep the focal length and the relative distance between the squares. Do the same for the second image, but now with known focal length, and compute the new depth and horizontal location of square b. Then add the previously computed relative distance to find where square a would be.
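Under the same ideal-pinhole assumptions there is also a shortcut worth noting: a fronto-parallel object of real width W at depth Z projects to f·W/Z pixels, so when the depth changes, every pixel width scales by the same factor, and that factor can be read off square b directly. A tiny worked sketch in Java with the numbers from the question (illustrative only):

double widthA1 = 25;   // pixel width of square a at the original depth
double widthB1 = 40;   // pixel width of square b at the original depth
double widthB2 = 80;   // pixel width of square b at the new depth

double scale   = widthB2 / widthB1;   // equals oldDepth / newDepth under the pinhole model
double widthA2 = widthA1 * scale;     // expected pixel width of square a at the new depth
System.out.println("square a would be about " + widthA2 + " px wide");  // 50.0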
In order to understand the transformations the camera performs to project the 3D world onto the 2D image, you need to know its calibration parameters. These are basically divided into two sets:
Intrinsic parameters: these are fixed parameters specific to each camera. They are normally represented by a matrix called K.
Extrinsic parameters: these depend on the camera's position in the 3D world. They are normally represented by two matrices, R and T, where the first represents the rotation and the second the translation.
In order to calibrate a camera you need a calibration pattern (basically a set of 3D points whose coordinates are known). There are several examples of this in the OpenCV library, which provides support for performing the camera calibration:
http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
Once you have your camera calibrated, you can transform from 3D to 2D easily with the following equation:
P_image = K · [R | T] · P_3D
So it does not only depend on the position of the camera; it depends on all the calibration parameters. The following presentation goes through the details of camera calibration and the different steps and equations used during the 3D <-> image transformations.
https://www.cs.umd.edu/class/fall2013/cmsc426/lectures/camera-calibration.pdf
With this in mind you can project any 3D point onto the image and get its coordinates there. The reverse transformation is not unique, since going back from 2D to 3D gives you a line (a ray of possible points) instead of a unique point.
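To make the equation above concrete, here is a hedged sketch that projects a single 3D point by hand in plain Java; K, R and T below are placeholder values, and in practice they come from your calibration:

double[][] K = {        // intrinsic matrix: focal lengths fx, fy and principal point cx, cy
        {800,   0, 320},
        {  0, 800, 240},
        {  0,   0,   1}
};
double[][] R = {        // rotation from world to camera coordinates (identity here)
        {1, 0, 0},
        {0, 1, 0},
        {0, 0, 1}
};
double[] T   = {0, 0, 0};          // translation from world to camera coordinates
double[] P3D = {0.1, -0.2, 2.0};   // 3D point in world coordinates

// camera coordinates: Pc = R * P3D + T
double[] Pc = new double[3];
for (int i = 0; i < 3; i++) {
    Pc[i] = R[i][0] * P3D[0] + R[i][1] * P3D[1] + R[i][2] * P3D[2] + T[i];
}
// homogeneous image coordinates: p = K * Pc, then divide by the third component
double[] p = new double[3];
for (int i = 0; i < 3; i++) {
    p[i] = K[i][0] * Pc[0] + K[i][1] * Pc[1] + K[i][2] * Pc[2];
}
double u = p[0] / p[2];
double v = p[1] / p[2];
System.out.println("pixel coordinates: (" + u + ", " + v + ")");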
I am using OpenGL ES 2.0 with Android and I would like to know how to do the following:
I think it's easier to explain with a picture...
How can I rectify this stretching? Note: I am working in 2D.
I've heard this problem is solved using something called a projection matrix. I have also read a Stack Overflow question saying that the Android documentation for setting up a projection matrix is not good. I tried it myself and couldn't get it to work.
This question is extremely poorly put. In the left image you have a rectangular view with the coordinate system [0,1]x[0,1] and a correctly drawn triangle, while in the right image you have the same view with the view coordinate system and a stretched triangle... Taking these two things into consideration, your triangle coordinates are already stretched to begin with (or there is an extra model-view matrix). If they weren't stretched, the triangle would be drawn correctly in the right image.
It is a very common issue that your scene gets stretched when dealing with different view ratios. In general, to solve this you are looking for something like glOrtho, which lets you define your own coordinate system. The input parameters for this method are left, right, top and bottom, and it is easiest to simply use screen coordinates (as presented in the right image). Another approach is to normalise this input to either [0,1]x[0,height/width] or [0,width/height]x[0,1]. These two methods represent "fit" and "fill"; which is which depends on whether the view's width is smaller or larger than its height (portrait or landscape).
When using a correct orthographic matrix, your square will always be a square without any additional matrices or multiplying of the vertex arrays... In your case it seems you have already multiplied your vertices, so I suggest you remove that, all of it. If you cannot, and those vertices will continue to be scaled incorrectly, I suggest you use the model-view matrix to rescale them.
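For reference, a hedged ES 2.0 sketch of the two normalised options described above, using android.opengl.Matrix (width and height are assumed to be your surface dimensions in pixels):

float[] projection = new float[16];
float aspect = (float) width / height;

if (width >= height) {
    // landscape: map [0,1] to the height, [0, width/height] to the width
    android.opengl.Matrix.orthoM(projection, 0, 0f, aspect, 0f, 1f, -1f, 1f);
} else {
    // portrait: map [0,1] to the width, [0, height/width] to the height
    android.opengl.Matrix.orthoM(projection, 0, 0f, 1f, 0f, 1f / aspect, -1f, 1f);
}
// Alternatively, use raw screen coordinates (as in the right image), y pointing down:
// android.opengl.Matrix.orthoM(projection, 0, 0f, width, height, 0f, -1f, 1f);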
I am planning to re-engineer my prototype of a 2D role-playing game into a "real product". In that case I am thinking about using OpenGL instead of the Android Canvas.
The reason I am considering it is that I want the game to work on devices with different screen resolutions. To do so, I thought about using an OpenGL camera view facing a wall on which my 2D game textures move. If the resolution of the current device is too small for the whole game content, I want to move the camera view so that the character is always in the middle until the camera frame reaches the edges of the "wall".
Is this a feasible solution, or would you rather choose a different way?
Is it even possible to draw sprites in OpenGL as I can with Canvas? Simply as several layers above each other: first the tiles, then the figures, with for example simple squares as life bars (first the background, then the red life above it) and so on. Currently I draw a sprite with Canvas like this:
// destination rectangle on the canvas, anchored at the sprite's position
positionRect = new Rect(
        this.getPositionX(),
        this.getPositionY() - (this.spriteHeight - Config.BLOCKSIZE),
        this.getPositionX() + Config.BLOCKSIZE,
        this.getPositionY() + Config.BLOCKSIZE);
// source rectangle inside the sprite bitmap
spritRect = new Rect(0, 0, Config.BLOCKSIZE, spriteHeight);
canvas.drawBitmap(this.picture, spritRect, positionRect, null);
If so, how do I start with getting the first background and maybe a first dot (a .png picture) drawn? I didn't find any tutorial that gives me the right kick-off. I know how to set up the project for a GLSurfaceView and so on.
You will need a bit of adjustment, but it is possible and quite easy. Since you can get a tutorial and start with some simple programs, I will just give you some pointers:
First of all, you should look into projections. You can use glFrustumf or glOrthof on the projection matrix. The first one is used more for 3D, so use ortho. The parameters of this method represent the coordinate-system borders of your screen. If you want them to be the same as most "view" systems, insert the values: top=0, left=0, right=view.width, bottom=view.height.
Now you can create a quad vertex buffer instead of a Rect, as in:
float[] buffer = {
        // triangle-strip order (zig-zag), matching draw(triangleStrip, 4) below
        origin.x,              origin.y,
        origin.x,              origin.y + size.height,
        origin.x + size.width, origin.y,
        origin.x + size.width, origin.y + size.height,
};
And texture coordinates in the same order:
float[] textureCoordinates = {
        0f, 0f,
        0f, 1f,
        1f, 0f,
        1f, 1f,
};
You will also need to load the texture(s) (in some initialization code, and only once if possible); for that, use Google or Stack Overflow, since it depends on the platform...
And this is pretty much all you need to put it together in your draw method:
enableClientState(vertexArray)
enableClientState(texCoordArray)
enable(texture2d)
//for each object:
vertexPointer(buffer)
texCoordPointer(textureCoordinates) //unless all are same
bindTexture(theTextureRepresentingTheSpriteYouWant)
draw(triangleStrip, 4)
As for moving, use translate on the model-view matrix:
pushMatrix()
translatef(x, y, .0)
drawScene()
popMatrix()
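Putting the pointers above together, here is a hedged ES 1.x (GL10) sketch of one sprite draw, e.g. called from your Renderer's onDrawFrame. textureId is assumed to be a texture you have already loaded, and the vertex order matches the triangle-strip layout used above:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.microedition.khronos.opengles.GL10;

void drawSprite(GL10 gl, int textureId, float x, float y, float w, float h) {
    float[] quad = {   // strip order: two triangles covering the quad
            0f, 0f,
            0f, h,
            w,  0f,
            w,  h,
    };
    float[] tex = {    // flip the v values if your texture appears upside down,
            0f, 0f,    // depending on how it was loaded
            0f, 1f,
            1f, 0f,
            1f, 1f,
    };
    FloatBuffer vb = asFloatBuffer(quad);
    FloatBuffer tb = asFloatBuffer(tex);

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glEnable(GL10.GL_TEXTURE_2D);

    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vb);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, tb);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);

    // position the sprite with the model-view matrix, as in the pseudocode above
    gl.glPushMatrix();
    gl.glTranslatef(x, y, 0f);
    gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
    gl.glPopMatrix();
}

private static FloatBuffer asFloatBuffer(float[] data) {
    FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    fb.put(data).position(0);
    return fb;
}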
When working with OpenGL, you aren't constrained to the "window" dimensions. You define a projection matrix and viewport for your world, and then to change the "camera" you just adjust the projection matrix and viewport. If I were you, I'd pick up a book on OpenGL before starting this project so you are aware of how OpenGL works.
Also, if you are working with Java, then you will want to use the GLSurfaceView class. It handles all of the threading and everything for you, so you don't need to worry about it.
I need to draw a spinning globe using OpenGL ES in Android. I think we need to draw a sphere and then apply a texture map to it. If I am correct, we cannot use the GLU utility library in OpenGL ES for drawing a sphere. I did find some code in Objective-C, but then I would have to make it work on Android.
http://www.iphone4gnew.com/procedural-spheres-in-opengl-es.html
Is there any other way to do this? I'm not sure how to approach this problem; could you give me some input that would set me looking in the right direction?
Thanks
You could actually create your own sphere rendering function.
A tessellated sphere is no more than a stack of n cone segments, each approximated with m slices.
This image (courtesy of dglwiki.de) illustrates this:
(The German text translates to: 'If the resolution is too low, the sphere degenerates into other symmetric bodies'.)
In order to construct the sphere, you'll need to specify the center point, radius, number of stacks and number of slices per stack.
The first pole of your sphere can be any point at a distance of radius from the center point. The vector from this point to the center point defines your sphere's axis of rotation (and thereby the position of the second pole).
Next, you'll need to approximate several equidistant circles of latitude on your sphere around the axis of rotation. The number of circles should be (number of stacks - 1). Each of these circles should have as many vertices as your desired number of slices.
Having calculated these, you have enough geometry information to construct your sphere's faces.
Begin with a triangle fan originating at one of the poles using the vertices of the first circle.
Then, construct triangle strips for each pair of neighbouring circles of latitude. The last step is to construct another triangle fan from the second pole to the last of your circles of latitude.
Using this approach, you can generate spheres of arbitrary smoothness.
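As a hedged illustration of the construction above (plain Java; the stack/slice counts are illustrative), this generates the grid of vertices with latitude/longitude texture coordinates for the globe texture. Rows of this grid then become the triangle fans at the poles and the triangle strips in between:

int stacks = 24, slices = 48;
float radius = 1f;

// (stacks + 1) * (slices + 1) grid of vertices: 3 position + 2 texture floats each
float[] positions = new float[(stacks + 1) * (slices + 1) * 3];
float[] texCoords = new float[(stacks + 1) * (slices + 1) * 2];

int p = 0, t = 0;
for (int i = 0; i <= stacks; i++) {
    double phi = Math.PI * i / stacks;            // 0 at one pole, PI at the other
    for (int j = 0; j <= slices; j++) {
        double theta = 2.0 * Math.PI * j / slices;
        positions[p++] = (float) (radius * Math.sin(phi) * Math.cos(theta));
        positions[p++] = (float) (radius * Math.cos(phi));
        positions[p++] = (float) (radius * Math.sin(phi) * Math.sin(theta));
        texCoords[t++] = (float) j / slices;      // longitude -> u
        texCoords[t++] = (float) i / stacks;      // latitude  -> v
    }
}
// Each pair of adjacent rows forms one triangle strip; the first and last rows
// collapse onto the poles, which is the triangle-fan case described above.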
In addition to what sum1 says, the Objective-C code you link to is mostly just C, which translates quite nicely to Java/Android. The technique it presents is very similar to the one sum1 suggests, although the author uses only one fan at the top and then draws the entire remainder of the sphere with a single triangle strip. In addition, his globe is "lying on its side", with the fan at the "east pole" and the other point at the "west pole".
However, you can either use the link you provide as-is, or make the adjustments easily enough.