In OpenCV I use the camera to capture a scene containing two squares a and b, both at the same distance from the camera, whose known real sizes are, say, 10cm and 30cm respectively. I find the pixel width of each square, which let's say is 25 and 40 pixels (to get the pixel width, OpenCV detects the squares as cv::Rect objects and I read their width field).
Now I remove square a from the scene and change the distance from the camera to square b. The program gets the width of square b now, which let's say is 80. Is there an equation, using the configuration of the camera (resolution, dpi?), that I can use to work out what the corresponding pixel width of square a would be if it were placed back in the scene at the same distance as square b?
The math you need for your problem can be found in chapter 9 of "Multiple View Geometry in Computer Vision", which happens to be freely available online: https://www.robots.ox.ac.uk/~vgg/hzbook/hzbook2/HZepipolar.pdf.
The short answer to your problem is:
No, not in this exact form. Since you are working in a 3D world, you have one degree of freedom left. As a result, you need more information to eliminate this degree of freedom (e.g. by knowing the depth, the relation of the two squares with respect to each other, the movement of the camera...). This mainly depends on your specific situation. In any case, reading and understanding chapter 9 of the book should help you out here.
PS: to me it seems like your problem fits into the broader category of "baseline matching" problems. Reading around about this, in addition to epipolar geometry and the fundamental matrix, might help you out.
Since you write of "squares" with just a "width" in the image (as opposed to "trapezoids" with some wonky vertex coordinates) I assume that you are considering an ideal pinhole camera and ignoring any perspective distortion/foreshortening - i.e. there is no lens distortion and your planar objects are exactly parallel to the image/sensor plane.
Then it is a very simple 2D projective geometry problem, and no separate knowledge of the camera geometry is needed. Just write down the projection equations for the first situation: you have 4 unknowns (the camera focal length, the common depth of the squares, and the horizontal positions of, say, their left sides) and 4 equations (the projections of the left and right sides of each square). Solve the system and keep the focal length and the relative distance between the squares. Do the same for the second image, but now with a known focal length, and compute the new depth and horizontal location of square b. Then add the previously computed relative distance to find where square a would be.
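To make this concrete under the pinhole assumption above (the symbols here are only for illustration: f is the focal length in pixels, W a square's real width, Z its depth and w its measured pixel width), the projection of a fronto-parallel square is simply

w = f · W / Z

Both squares share the same f and, in each image, the same Z, so their pixel widths are in the same ratio as their real widths: w_a / w_b = W_a / W_b. With the question's numbers that would give, in the ideal case, w_a ≈ 80 · 10/30 ≈ 26.7 px for square a at square b's new distance.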
In order to understand the transformations performed by the camera to project the 3D world onto the 2D image, you need to know its calibration parameters. These are basically divided into two sets:
Intrinsic parameters: these are fixed parameters that are specific to each camera. They are normally represented by a matrix called K.
Extrinsic parameters: these depend on the camera's position in the 3D world. They are normally represented by two matrices, R and T, where the first represents the rotation and the second the translation.
In order to calibrate a camera you need some pattern (basically a set of 3D points whose coordinates are known). There are several examples of this in the OpenCV library, which provides support for performing the camera calibration:
http://docs.opencv.org/doc/tutorials/calib3d/camera_calibration/camera_calibration.html
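As a rough sketch of what that tutorial does, using OpenCV's Java bindings (the board size, square size, calibrationImages and the buildBoardCorners helper are placeholders for this example):

// Chessboard calibration sketch (uses org.opencv.calib3d.Calib3d and org.opencv.core.*).
Size boardSize = new Size(9, 6);                 // inner-corner count of the printed chessboard
List<Mat> objectPoints = new ArrayList<>();      // known 3D corner coordinates (all on the z = 0 plane)
List<Mat> imagePoints  = new ArrayList<>();      // detected 2D corners, one Mat per calibration image

for (Mat gray : calibrationImages) {             // calibrationImages: your grayscale shots of the board
    MatOfPoint2f corners = new MatOfPoint2f();
    if (Calib3d.findChessboardCorners(gray, boardSize, corners)) {
        imagePoints.add(corners);
        objectPoints.add(buildBoardCorners(boardSize, squareSize));  // hypothetical helper you write
    }
}

Mat cameraMatrix = new Mat();                    // becomes K (the intrinsic parameters)
Mat distCoeffs   = new Mat();                    // lens distortion coefficients
List<Mat> rvecs = new ArrayList<>();             // per-view rotation (extrinsics)
List<Mat> tvecs = new ArrayList<>();             // per-view translation (extrinsics)
Calib3d.calibrateCamera(objectPoints, imagePoints, imageSize, cameraMatrix, distCoeffs, rvecs, tvecs);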
Once you have your camera calibrated you can transform from 3D to 2D easily by the following equation:
P_image = K · R · T · P_3D
So it does not only depend on the position of the camera; it depends on all the calibration parameters. The following presentation goes through the camera calibration details and the different steps and equations that are used during the 3D <-> image transformations.
https://www.cs.umd.edu/class/fall2013/cmsc426/lectures/camera-calibration.pdf
With this in mind you can project any 3D point into the image and get its coordinates in it. The reverse transformation is not unique, since going back from 2D to 3D gives you a line rather than a unique point.
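For example, once K, R and T are known, OpenCV can perform that forward projection for you. A minimal sketch with the Java bindings, reusing the cameraMatrix, distCoeffs, rvecs and tvecs from the calibration sketch above (the 3D point itself is made up for illustration):

// Project one 3D point into the image using the calibration results.
MatOfPoint3f objectPoint = new MatOfPoint3f(new Point3(0.10, 0.30, 2.0));  // arbitrary point, in metres
MatOfPoint2f imagePoint  = new MatOfPoint2f();

Calib3d.projectPoints(objectPoint,
        rvecs.get(0), tvecs.get(0),          // extrinsics of the view you are projecting into
        cameraMatrix,                        // K
        new MatOfDouble(distCoeffs),         // lens distortion
        imagePoint);

Point pixel = imagePoint.toArray()[0];       // the resulting 2D image coordinate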
Related
I am working on an AR app that needs to move an image depending on the device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values I get from the GRV sensor mean. For instance, in order to reach the same value on the Z axis I have to rotate the device 720 degrees, which seems odd.
If I could somehow convert these numbers to angles of the device's reference frame about the x, y and z axes, my problem would be solved.
I have googled this issue for days and didn't find any sensible information on the meaning of the GRV coordinates or how to use them.
TL;DR What do the numbers of the GRV sensor mean? And how do I convert them to angles?
As the docs state, the GRV sensor returns a 3D rotation vector, represented by three components:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This is confusing at first: (x, y, z) here is the unit vector along the axis of rotation, and θ (pronounced theta) is the angle of rotation about that axis, so the same θ appears in all three components - which isn't made clear at all.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason it's given to us in this format is that it can easily be used in matrix rotations, especially as a quaternion. In fact, these are three of the four components of a unit quaternion (the vector part); the remaining scalar component is cos(θ/2), which can be recovered because the quaternion has unit length. So the rotation vector is essentially a compact quaternion describing the device's orientation.
These are directly usable in OpenGL which is the Android (and the rest of the world's) 3D library of choice. Check this tutorial out for some OpenGL rotations info, this one for some general quaternion theory as applied to 3D programming in general, and this example by Google for Android which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
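As for the second part of the question (converting the values to angles), Android can do these conversions for you. Here is a minimal sketch of a listener callback, assuming a SensorEventListener registered for TYPE_GAME_ROTATION_VECTOR; the variable names are just placeholders:

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_GAME_ROTATION_VECTOR) return;

    // Full unit quaternion: Q[0] is the scalar part cos(theta/2), Q[1..3] are the x, y, z components.
    float[] Q = new float[4];
    SensorManager.getQuaternionFromVector(Q, event.values);

    // Or go straight to a rotation matrix and then to angles (azimuth, pitch, roll, in radians).
    float[] rotationMatrix = new float[9];
    float[] orientation = new float[3];
    SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
    SensorManager.getOrientation(rotationMatrix, orientation);

    double azimuthDegrees = Math.toDegrees(orientation[0]);  // rotation about the device's Z axis
}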
TLDR; This example is excellent.
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D, so it's actually a segment of a plane, like a sheet of paper. So you'd map your image, which in the jargon is called a texture, onto this plane segment.
Here is a tutorial on texturing cubes in OpenGL for Android, with example code and an in-depth discussion. From cubes it's a short step to a plane segment - it's just one face of a cube! In fact that's a good resource for getting to grips with OpenGL on Android; I'd recommend reading the previous and subsequent tutorial steps too.
You mentioned translation as well. Look at the onDrawFrame method in the Google code example. Note that there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate.
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch-coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations. Obviously the command move is a translation.
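To make the ordering point concrete with the same GL10 calls the Google example uses (a sketch only: rotationMatrix16 is assumed to be a length-16 array filled by SensorManager.getRotationMatrixFromVector, and drawTexturedQuad is a hypothetical helper that issues the quad's vertex and texture arrays):

@Override
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();

    gl.glTranslatef(0f, 0f, -3f);            // 1. translate: push the textured plane away from the camera...
    gl.glMultMatrixf(rotationMatrix16, 0);   // 2. ...then rotate it by the sensor-derived matrix

    // Swapping the two calls makes the plane orbit the camera instead of spinning in place.
    drawTexturedQuad(gl);
}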
I was inspired by Pokemon GO and wanted to make a simple prototype for learning purposes. I am a total beginner in image processing.
I did a little research on the subject and here is what I came up with. In order to place any 3D model in the real world, I must know its orientation. Say I am placing a cube on a table:
1) I need to know the angles $\theta$, $\phi$ and $\alpha$, where $\theta$ is the rotation about the global UP vector, $\phi$ is the rotation about the camera's FORWARD vector and $\alpha$ is the rotation about the camera's RIGHT vector.
2) Then I have to compose the three rotation matrices built from these Euler angles and multiply them into the object's transform (see the sketch after this list).
3) The object's position would be at the center point of the surface, for prototyping.
4) I can find the distance to the surface using the Android camera's built-in distance estimation based on focal length, and then scale the object accordingly.
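For reference, the composition described in steps 1 and 2 would be something like $M_{object}' = R_{right}(\alpha) \cdot R_{forward}(\phi) \cdot R_{up}(\theta) \cdot M_{object}$ (the multiplication order shown here is just one possible convention and an assumption on my part).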
Is there a more straightforward way to do this using OpenCV, or am I on the right track?
I know there are some posts on this topic, but I could not find my answer.
I want to calibrate my Android camera without a chessboard for 3D reconstruction, so I need the intrinsic and extrinsic parameters.
My first goal is to extract the real-world 3D coordinate system so I can place a 3D model on screen.
My steps:
From a picture of a building I extract 4 points that represent the real 3D coordinate system
/!\ this step requires camera calibration /!\
Convert them to 3D points (solvePnP, for example)
Then from my 3D axes I create an OpenGL projection and modelview matrix
My main problem is that I want to avoid a calibration step, so how can I calibrate without a chessboard? I have some data from Android, such as the focal length, and I can guess that the projection center is the center of my camera picture.
Any idea or advice, or another way to do it?
Here is the "no chessboard" calibration from qtcalib.
This scheme is recommended when you need to obtain a camera calibration from an image that doesn't contain a calibration chessboard. In this case, you can approximate the camera calibration if you know 4 points in the image that form a flat rectangle in the real world. It is important to note that the approximated calibration depends on the 4 selected points and on the values you set for the dimensions of the rectangle.
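A rough sketch of that idea with OpenCV's Java bindings: K is approximated from a focal length you already have and from the image centre (as you suggested), and the four selected points plus the rectangle's real dimensions go into solvePnP. The focal length, rectangle size and the points p0..p3 are placeholders:

// Approximate K from a known/guessed focal length (in pixels) and the image centre.
double fx = 1000.0, fy = 1000.0;                       // placeholder focal length in pixels
double cx = imageWidth / 2.0, cy = imageHeight / 2.0;  // assume the principal point is the image centre
Mat K = new Mat(3, 3, CvType.CV_64F);
K.put(0, 0, fx, 0, cx,  0, fy, cy,  0, 0, 1);

// The 4 points selected in the image, and the flat rectangle they form in the real world.
MatOfPoint2f imagePts  = new MatOfPoint2f(p0, p1, p2, p3);          // your selected pixel coordinates
MatOfPoint3f objectPts = new MatOfPoint3f(
        new Point3(0, 0, 0),                  new Point3(rectWidth, 0, 0),
        new Point3(rectWidth, rectHeight, 0), new Point3(0, rectHeight, 0));

// Recover the pose of the rectangle relative to the camera (the extrinsics).
Mat rvec = new Mat(), tvec = new Mat();
Calib3d.solvePnP(objectPts, imagePts, K, new MatOfDouble(), rvec, tvec);
// rvec/tvec can then be converted into the OpenGL modelview matrix mentioned in the question.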
I want to move an image in a 3-dimensional way in my Android application according to my device's movement. For this, I am getting x, y, z coordinate values through SensorEvent, but I am unable to find APIs to move an image in 3 dimensions. Could anyone please suggest a way (any APIs) to achieve this?
Depending on the particulars of your application, you could consider using OpenGL ES for manipulations in three dimensions. A quite common approach then would be to render the image onto a 'quad' (basically a flat surface consisting of two triangles) and manipulate that using matrices you construct based on the accelerometer data.
An alternative might be to look into extending the standard ImageView, which out of the box supports manipulations by 3x3 matrices. For rotation this will be sufficient, but obviously you will need an extra dimension for translation - which you're probably after, given your remark about 'moving' an image.
If you decide to go with the first suggestion, this example code should be quite useful to start with. You'll probably be able to plug your sensor data straight into that and simply add the required math for the matrix manipulations.
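If you go the OpenGL route, the matrix side could look roughly like this (a sketch only, using android.opengl.Matrix; x, y, z and sensorRotation stand for whatever you derive from your sensor events):

// Build a model matrix for the quad that carries the image texture.
float[] model       = new float[16];
float[] translation = new float[16];

Matrix.setIdentityM(translation, 0);
Matrix.translateM(translation, 0, x, y, z);            // position derived from your sensor values

// sensorRotation: a length-16 rotation matrix, e.g. from SensorManager.getRotationMatrixFromVector.
Matrix.multiplyMM(model, 0, translation, 0, sensorRotation, 0);

// Pass 'model' to your shader (ES 2.0) or load it with glMultMatrixf (ES 1.x)
// to position and orient the quad.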
Assume I have 3 cubes at random locations/orientations and I want to detect if any of the cubes is overlapping (or colliding) with another cube. This overlap or collision could also happen as the cubes' locations/rotations change in each frame. Please note that I am looking for an Android-based, OpenGL ES (1.0 or 1.1) based solution for this.
This isn't really an OpenGL problem - it just does rendering.
I don't know of any ready-made Android libraries for 3D collision detection, so you might just have to do the maths yourself. Efficient collision detection is generally the art of using quick, cheap tests to avoid doing more expensive analysis. For your problem, a good approach to detecting whether cube A intersects cube B would be to start with a quick rejection test, either of the following (a sketch of both follows the list):
Compute the bounding spheres for A and B - if the distance between the two spheres' centers is greater than the sum of their radii, then A and B do not intersect
Compute the axis-aligned bounding boxes for A and B - if the bounds do not intersect (very easy to test), then neither do A and B
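A minimal sketch of those two rejection tests in Java, assuming each cube is given as its 8 world-space corners in a float[8][3] (the helper names are mine):

// Bounding-sphere rejection test: the cubes cannot intersect if their spheres don't.
static boolean spheresOverlap(float[][] a, float[][] b) {
    float[] ca = centroid(a), cb = centroid(b);
    float ra = radius(a, ca), rb = radius(b, cb);
    float dx = ca[0] - cb[0], dy = ca[1] - cb[1], dz = ca[2] - cb[2];
    return dx * dx + dy * dy + dz * dz <= (ra + rb) * (ra + rb);
}

// Axis-aligned bounding-box rejection test: separation on any axis means no intersection.
static boolean aabbsOverlap(float[][] a, float[][] b) {
    for (int axis = 0; axis < 3; axis++) {
        if (max(a, axis) < min(b, axis) || max(b, axis) < min(a, axis)) return false;
    }
    return true;
}

static float[] centroid(float[][] pts) {
    float[] c = new float[3];
    for (float[] p : pts) { c[0] += p[0]; c[1] += p[1]; c[2] += p[2]; }
    c[0] /= pts.length; c[1] /= pts.length; c[2] /= pts.length;
    return c;
}

static float radius(float[][] pts, float[] c) {
    float r2 = 0f;
    for (float[] p : pts) {
        float dx = p[0] - c[0], dy = p[1] - c[1], dz = p[2] - c[2];
        r2 = Math.max(r2, dx * dx + dy * dy + dz * dz);
    }
    return (float) Math.sqrt(r2);
}

static float min(float[][] pts, int axis) {
    float m = pts[0][axis];
    for (float[] p : pts) m = Math.min(m, p[axis]);
    return m;
}

static float max(float[][] pts, int axis) {
    float m = pts[0][axis];
    for (float[] p : pts) m = Math.max(m, p[axis]);
    return m;
}

If neither test can rule out a collision, fall through to the more precise checks described below.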
If the bounds test indicates a possible collision, it's time for some maths. There are two ways to go from here: testing for vertex inclusion and testing for edge/face intersection.
Vertex inclusion is testing the vertices of A to see if they lie within B: either rotate the vertex into B's frame of reference to test for inclusion, or use the planes of B's faces directly in a frustum-culling style operation.
Edge/Face intersection is testing each of the edges of A for intersection with B's face triangles.
While the vertex inclusion test is a bit cheaper than edge/face testing, it's possible for cubes to intersect without encompassing each other's vertices, so a negative result does not mean there is no intersection. Similarly, it's possible for cubes to intersect without an intersection between an edge and a face (if one lies within the other). You'll have to do a little of both tests to catch every intersection. This can be avoided if you can make some assumptions about how the cubes can move from frame to frame, e.g.: if A and B were not touching last frame, it's unlikely that A is wholly within B now.