Find the Dimensions (Height, Width) of an Object Using the Camera - Android

I want to find a way to get the dimensions of an object using the camera. It sounds like a duplicate of
How to measure height, width and distance of object using camera?
but the solution there doesn't help me out. From the above link I did get some idea of how to find the distance (Measure Distance).
Can somebody suggest how I am supposed to get the width as well as the height of an object? Simple math or any idea would be really helpful.
Is there any possibility of achieving this using OpenCV?
Measure Height and Width
What I have tried so far:
Suppose we assume a fixed distance; then we can calculate the angle of elevation:
tan(α/2) = (l/2) / d,
hence
α = 2 * atan(l / (2d))
But we still don't know the value of l (the length of the object).
Another way to find the view angle:
double thetaV = Math.toRadians(camera.getParameters().getVerticalViewAngle());
double thetaH = Math.toRadians(camera.getParameters().getHorizontalViewAngle());
This doesn't seem to work either.

The actual physics of a lens is explained, for example, on this website of Georgia State University.
See this illustration which explains how you can use either the linear magnification or focal length relations to find out object size from image size:
In particular, -i / h' = o / h, and this relation holds true for all similar triangles (that is, an object of size 2h at distance 2o has the same size h' on the picture). So, as you can see, even with the full equation you can't know both the distance o and the size h of an object; however, one will give you the other.
On the other hand, two objects at the same distance o will see their sizes h1' and h2' on the image be proportional to their sizes in real life h1 and h2, since h1' / h1 = M = h2' / h2.
Hence if you know both o and h for one reference object, you know M; thus, knowing an object's size on film, you can deduce its real size from its distance and vice versa.
The -i / h' value is naturally expressed for the maximal h'. If an object fills the image exactly, it fills the field of view, and then the ratio of its size to its distance satisfies tan(α/2) = (l / 2) / d (note that in the conventions of the image below, d = o and l = 2 * h).
This α is what you call theta in your example. Now, from the object's size in the image you can work out the angle under which you see it, that is, what size l it would have if it were at distance d. From there, you can deduce the size of the object from its distance and vice versa.
Algorithm steps:
get ratio r = size of object in image (in px) / total size of image (in px).
Do this along the axis for which you know or plan to get the real object size, of course.
get the corresponding view angle, and multiply r by the tangent of half that angle:
r *= tan(camera.getParameters().getXXXXViewAngle() / 2)
r is now the tangent of the half-angle under which you see the object, hence the following relations are true: r = (l / 2) / d = h / o (with the respective drawing's notations).
If you know the distance d to the object, its size is l = 2 * r * d
If you know the size l of the object, it is at distance d = l / (2 * r)
This works for objects that the camera is actually pointed at; if they aren't centred, the maths may be off.
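Putting the steps together, here is a minimal sketch (Camera1 API; objectPx and imagePx are hypothetical measured values, and the fixed distance is an assumption):
Camera.Parameters params = camera.getParameters();
double halfAngle = Math.toRadians(params.getHorizontalViewAngle()) / 2.0;
// objectPx = object's measured width in the preview image, imagePx = total image width,
// both along the same axis as the view angle used
double r = ((double) objectPx / imagePx) * Math.tan(halfAngle);
double knownDistance = 2.0;                   // metres, assumed known
double objectWidth = 2.0 * r * knownDistance; // l = 2 * r * d
// or, knowing the size instead: double distance = objectWidth / (2.0 * r);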

Related

Converting Focal Length in millimeters to pixels - Android

In Android, I am currently accessing the camera's focal length by using getFocalLength() in Camera1. Camera2 is not an option.
I am trying to perform the following calculation: focal_length_pix = focal_length_m * 1 (pix) / pixel_width_m.
Basically this converts the focal length from metres to pixels. Now, I know the focal_length_m variable, but I am currently trying to figure out pixel_width_m, which is the width of a pixel (on the sensor) in metres.
I am struggling to find a way to calculate the width of a pixel on the sensor. Any suggestions, ideas would be much appreciated.
You can calculate the focal length in pixels as follows:
double focal_length_pix = (size.width * 0.5) / Math.tan(horizontalAngleView * 0.5 * Math.PI / 180);
where size comes from getPreviewSize() and horizontalAngleView comes from getHorizontalViewAngle().
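Put together as a runnable fragment (assuming an open Camera1 camera with a preview size already set):
Camera.Parameters params = camera.getParameters();
Camera.Size size = params.getPreviewSize();
double thetaH = Math.toRadians(params.getHorizontalViewAngle()); // view angle in radians
double focal_length_pix = (size.width * 0.5) / Math.tan(thetaH * 0.5);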

Android - calculating pixel rotation without matrix? And checking if pixel is in view

I'm hoping someone can help me out. I'm making an image manipulation app, and I found I needed a better way to load in large images.
My plan is to iterate through "hypothetical" pixels of an image (a for loop covering the width/height of the base image, so each iteration represents a pixel), scale/translate/rotate each pixel's position relative to the view, and then use this information to determine which pixels are being displayed in the view itself. I can then use a combination of BitmapRegionDecoder and BitmapFactory.Options to load in only the section of the image that the output actually needs, rather than the full (even if scaled) image.
So far I seem to have covered scale and translation of the image properly, but I can't seem to figure out how to calculate rotation. Since it's not a real Bitmap pixel I can't use Matrix.rotate =( Here are the image transformations in the onDraw of the view; imgPosX and imgPosY hold the centre point of the image:
m.setTranslate(-userImage.getWidth() / 2.0f, -userImage.getHeight() / 2.0f); // centre the image on the origin
m.postScale(curScale, curScale);   // scale about the origin
m.postRotate(angle);               // rotate about the origin
m.postTranslate(imgPosX, imgPosY); // move the image centre to (imgPosX, imgPosY)
mCanvas.drawBitmap(userImage.get(), m, paint);
and here is the math so far of how I'm trying to determine if an images pixel is on the screen:
for (int j = 0; j < imageHeight; j++) {
    for (int i = 0; i < imageWidth; i++) {
        // image starts completely centred in view; assume image is original size for simplicity
        // this is the original starting position for each pixel
        int x = Math.round(((float) viewSizeWidth / 2.0f) - ((float) newImageWidth / 2.0f) + i);
        int y = Math.round(((float) viewSizeHeight / 2.0f) - ((float) newImageHeight / 2.0f) + j);
        // first we scale the pixel here, easy operation
        x = Math.round(x * imageScale);
        y = Math.round(y * imageScale);
        // now we translate, by determining how many pixels our image's x/y
        // coordinates have differed from their original starting point;
        // imgPosX and imgPosY start in the centre of the view
        x = x + Math.round(imgPosX - ((float) viewSizeWidth / 2.0f));
        y = y + Math.round(imgPosY - ((float) viewSizeHeight / 2.0f));
        // TODO need rotation here
    }
}
So, assuming my math up until rotation is correct (probably not, but it appears to be working so far), how would I then calculate the rotation of each pixel's position? I've tried other similar questions like:
Link 1
Link 2
Link 3
Without using rotation, the pixels I expect to be on the screen are represented correctly (I made a text file that outputs the results as 1's and 0's so I can have a visual representation of what's on the screen), but with the formula found in those questions the information isn't what I expect. (Scenario: I've rotated an image so only the top-left corner is visible in the view. Using the info from Here to rotate the pixel, I should expect to see a triangular set of 1's in the upper-left corner of the output file, but that's not the case.)
So, how would I calculate a pixel's position after rotation without using the Android Matrix, but still get the same results?
And if I've just messed it up entirely, my apologies =( Any help would be appreciated; this project has gone on for so long and I want to finally be done lol
If you need any more information I will provide as much as I possibly can =) Thank you for your time
I realize this question is particularly difficult, so I will be posting a bounty as soon as SO allows.
You do not need to create your own Matrix; use the existing one.
http://developer.android.com/reference/android/graphics/Matrix.html
You can map bitmap coordinates to screen coordinates by using
float[] coords = {x, y};
m.mapPoints(coords);
float sx = coords[0];
float sy = coords[1];
If you want to map screen to bitmap coordinates, you can create the inverse matrix
Matrix inverse = new Matrix();
m.invert(inverse); // fills `inverse` with the inverse of m, if it is invertible
inverse.mapPoints(...)
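Applied to the loops in your question, a short sketch (reusing your own variable names) would be:
// map bitmap pixel (i, j) into view coordinates with the same matrix used
// in onDraw, then test whether it lands inside the view
float[] coords = {i, j};
m.mapPoints(coords);
boolean onScreen = coords[0] >= 0 && coords[0] < viewSizeWidth
        && coords[1] >= 0 && coords[1] < viewSizeHeight;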
I think your overall approach is going to be slow, as doing the pixel manipulation on the CPU from Java has a lot of overhead. When drawing bitmaps normally, the pixel manipulation is done on the GPU.

camera: image projection

I'd like to project images on a wall using the camera. The images, essentially, must scale according to the distance between the camera and the wall.
Firstly, I made distance calculations using right-triangle trigonometry (visionHeight * Math.tan(a)). It's not 100% exact, but still close to the real values.
Secondly, knowing the distance, we can try to figure out the whole panorama height by using the isosceles-triangle trigonometry formula: c = a * tan(A);
A = mCamera.getParameters().getVerticalViewAngle();
double panoramaHeight = 2 * distance * Math.tan(Math.toRadians(mCamera.getParameters().getVerticalViewAngle() / 2));
The results are about 30% greater than the actual object height, which is kind of weird.
I've also tried figuring out those angles using the same isosceles-triangle formula, but now knowing the distance and the height; I got angles of 28 and 48 degrees.
Does this mean that the Android camera doesn't render everything it shoots? And what other solutions can you suggest?
A web search shows that the values returned by getVerticalViewAngle() cannot be blindly trusted on all devices; also note that you should take the zoom level and aspect ratio into account. See Determine angle of view of smartphone camera.
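As a rough sketch combining your formula with the zoom correction from that linked answer (distance is the camera-to-wall distance you computed earlier):
Camera.Parameters p = mCamera.getParameters();
int zoom = p.getZoomRatios().get(p.getZoom()).intValue();     // 100 = unzoomed
double thetaV = Math.toRadians(p.getVerticalViewAngle());
thetaV = 2d * Math.atan(100d * Math.tan(thetaV / 2d) / zoom); // zoom-corrected angle
double panoramaHeight = 2d * distance * Math.tan(thetaV / 2d);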

Calculate angle of moving ball after collision with angled or sloped wall that is a 2D line segment

If you have a "ball" inside a 2D polygon, made up of say, 4 line segments that act as bounding walls, how do you calculate the angle of the ball after the collision with the irregularly sloped wall?
I know how to make the ball bounce if the wall is horizontal, vertical, or at a 45 degree angle. I also have my code setup to detect a collision with the wall.
I've read about dot products and normals, but I cannot figure out how to implement these in Java / Android. I'm completely stumped and feel like I've looked up everything 10 pages deep in Google 10 times now. I'm burned out trying to figure this out, I hope someone can help.
Apologies in advance: I don't know the correct Android types. I'm assuming you have a vector type with properties 'x' and 'y'.
If the wall were horizontal and the current velocity were 'vector' then it'd be as easy as:
vector.y = -vector.y;
And you'd leave the x component alone. So you need to do something analogous, but more general.
You do that by substituting the idea of the line normal (a vector perpendicular to the line) for hard coding for the y axis (which is perpendicular to the horizontal).
Since the normal is orthogonal to the line, it can be found by rotating the line by 90 degrees. In 2d, the vector (a, b) can be rotated by 90 degrees by converting it to (-b, a). Hence if you have a line from (x1, y1) to (x2, y2) then you can get the normal with:
vectorAlongLine.x = x2 - x1;
vectorAlongLine.y = y2 - y1;
normal.x = -vectorAlongLine.y;
normal.y = vectorAlongLine.x;
You don't actually care how long the original line was (and it'll affect computations later when you don't want it to), so you want to make the normal be of length 1 irrespective of its current length. You can do that by dividing it by its current length. So, e.g.
lengthOfNormal = Math.sqrt(normal.x*normal.x + normal.y*normal.y);
normal.x /= lengthOfNormal;
normal.y /= lengthOfNormal;
Using the Pythagorean theorem there to get the length.
With the horizontal line, flipping on the y axis was the same as (i) working out how far the vector extends along the y axis; and (ii) subtracting that amount twice: once to get the velocity to be 0 in that direction, and again to make it the negative of the original. That is, it's the same as:
distanceAlongNormal = vector.y;
vector.y -= 2.0 * distanceAlongNormal;
In the general case, the dot product is used to work out how far the vector extends along the normal, so it does the same job that taking vector.y does for the horizontal line. This is where you possibly have to take a bit of a leap of faith; it's a property of the dot product, and you can persuade yourself of it by inspecting a right-angled triangle. For now, note that if you had a horizontal line, you'd have ended up with the normal (0, 1). Since the dot product would be:
vector.x * normal.x + vector.y * normal.y
You'd compute:
distanceAlongNormal = vector.x * 0.0 + vector.y * 1.0;
Which is obviously the same thing as just taking the y component.
Having worked out the distance along the normal, you then want to subtract that amount times the normal, twice. The only additional step here is multiplying by the normal to get a 2d quantity to subtract; that's because you're looking to subtract in the direction of the normal. So the complete code, based on the normal computed earlier, is:
distanceAlongNormal = vector.x * normal.x + vector.y * normal.y;
vector.x -= 2.0 * distanceAlongNormal * normal.x;
vector.y -= 2.0 * distanceAlongNormal * normal.y;
If you hadn't made the normal have length 1, then you'd need to divide by its length here, since the dot product would scale the distanceAlongNormal value by that amount.
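Putting it all together, a self-contained sketch (Vector here stands for whatever 2d vector type you have, with public float fields x and y, as assumed at the top of this answer):
static void reflect(Vector vector, float x1, float y1, float x2, float y2) {
    // normal of the wall from (x1, y1) to (x2, y2): direction rotated by 90 degrees
    float nx = -(y2 - y1);
    float ny = x2 - x1;
    // normalise the normal to length 1
    float len = (float) Math.sqrt(nx * nx + ny * ny);
    nx /= len;
    ny /= len;
    // distance along the normal, subtracted twice to reflect the velocity
    float distanceAlongNormal = vector.x * nx + vector.y * ny;
    vector.x -= 2f * distanceAlongNormal * nx;
    vector.y -= 2f * distanceAlongNormal * ny;
}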
This might come in handy for you
http://www.tonypa.pri.ee/vectors/tut07.html

Determine angle of view of smartphone camera

I'm trying to determine the degree size of the field-of-view of a Droid Incredible smartphone's camera. I need to know this value for an application that I'm developing. Does anyone know how I can find out/calculate it programmatically?
The Camera.Parameters getHorizontalViewAngle() and getVerticalViewAngle() functions provide you with the base view angles. I say "base", because these apply only to the Camera itself in an unzoomed state, and the values returned by these functions do not change even when the view angle itself does.
Camera.Parameters p = camera.getParameters();
double thetaV = Math.toRadians(p.getVerticalViewAngle());
double thetaH = Math.toRadians(p.getHorizontalViewAngle());
Two things cause your "effective" view angle to change: zoom, and using a preview aspect ratio that does not match the camera aspect ratio.
Basic Math
The trigonometry of field of view (Θ) is fairly simple:
tan(Θ/2) = x / (2z)
x = 2z * tan(Θ/2)
x is the linear distance viewable at distance z; i.e., if you held up a ruler at distance z=1 meter, you would be able to see x meters of that ruler.
For instance, on my camera the horizontal field of view is 52.68° while the vertical field of view is 40.74°. Convert these to radians and plug them into the formula with an arbitrary z value of 100 m, and you get x values of 99.0 m (horizontal) and 74.2 m (vertical). This is a 4:3 aspect ratio.
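As a quick sanity check of those numbers in code:
double z = 100.0;                                        // distance in metres
double xH = 2 * z * Math.tan(Math.toRadians(52.68) / 2); // ≈ 99.0 m
double xV = 2 * z * Math.tan(Math.toRadians(40.74) / 2); // ≈ 74.2 m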
Zoom
Applying this math to zoom levels is only slightly harder. Now x remains constant and it is z that changes in a known ratio; we must determine the new Θ'.
tan (Θ/2) = x / (2z)
tan (Θ'/2) = x / (2z')
Θ' = 2 atan((z / z') tan(Θ/2))
Where z is the base zoom level (100), z' is the current zoom level (from CameraParameters.getZoomRatios), Θ is the base horizontal/vertical field of view, and Θ' is the effective field of view. Adding on degree->radian conversions makes this rather verbose.
private static double zoomAngle(double degrees, int zoom) {
    double theta = Math.toRadians(degrees);
    return 2d * Math.atan(100d * Math.tan(theta / 2d) / zoom);
}
Camera.Parameters p = camera.getParameters();
int zoom = p.getZoomRatios().get(p.getZoom()).intValue();
double thetaH = zoomAngle(p.getHorizontalViewAngle(), zoom);
double thetaV = zoomAngle(p.getVerticalViewAngle(), zoom);
Aspect Ratio
While the typical camera has a 4:3 aspect ratio, the preview may also be available in 5:3 and 16:9 ratios, and this seems to be accomplished by actually extending the horizontal field of view. This appears to be undocumented, hence unreliable, but by assuming that's how it works we can calculate the field of view.
The math is similar to the zoom calculations; however, in this case z remains constant and it is x that changes. By assuming that the vertical view angle remains unchanged while the horizontal view angle is varied as the aspect ratio changes, it's possible to calculate the new effective horizontal view angle.
tan(Θ/2) = v / (2z)
tan(Θ'/2) = h / (2z)
2z = v / tan(Θ/2)
Θ' = 2 atan((h/v) tan(Θ/2))
Here h/v is the aspect ratio and Θ is the base vertical field of view, while Θ' is the effective horizontal field of view.
Camera.Parameters p = camera.getParameters();
int zoom = p.getZoomRatios().get(p.getZoom()).intValue();
Camera.Size sz = p.getPreviewSize();
double aspect = (double) sz.width / (double) sz.height;
double thetaV = Math.toRadians(p.getVerticalViewAngle());
double thetaH = 2d * Math.atan(aspect * Math.tan(thetaV / 2));
thetaV = 2d * Math.atan(100d * Math.tan(thetaV / 2d) / zoom);
thetaH = 2d * Math.atan(100d * Math.tan(thetaH / 2d) / zoom);
As I said above, since this appears to be undocumented, it is simply a guess that it will apply to all devices; it should be considered a hack. The correct solution would be for the API to provide a separate pair of functions, getCurrentHorizontalViewAngle and getCurrentVerticalViewAngle.
Unless there's some API call for that (I'm not an Android programmer, I wouldn't know), I would just snap a picture of a ruler from a known distance away, see how much of the ruler is shown in the picture, then use trigonometry to find the angle like this:
Now you have the two distances l and d from the figure. With some simple trigonometry, one obtains:
tan(α/2) = (l/2) / d,
hence
α = 2 * atan(l / (2d))
With this formula you can calculate the horizontal field of view of your camera. Measuring the vertical f.o.v. works exactly the same way, except that you then need to view the object in its vertical position.
Then you can hard-code it as a constant in your program. (A named constant, of course, so it'd be easy to change :-p)
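For example (the value here is just a placeholder for whatever you measure):
// measured once with the ruler method above; placeholder value
private static final double HORIZONTAL_FOV_DEGREES = 54.8;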
I have a Droid Incredible as well. Android 2.2 introduced the functions you are looking for. In my code, I have:
public double getHVA() {
    return camera.getParameters().getHorizontalViewAngle();
}

public double getVVA() {
    return camera.getParameters().getVerticalViewAngle();
}
However, these require that you have the camera open. I'd be interested to know if there is a "best practices" way to not have to open the camera each time to get those values.
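One possible approach (just a sketch of an assumption, not an established best practice) is to read the angle once and cache it, so later calls don't need the camera open:
private static float sCachedHVA = -1f;

public float getHVACached() {
    if (sCachedHVA < 0) {
        Camera c = Camera.open(); // briefly open the camera once
        sCachedHVA = c.getParameters().getHorizontalViewAngle();
        c.release();
    }
    return sCachedHVA;
}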
@David Zaslavsky: how? What is the mathematical relationship between the zoom levels? I can't find it anywhere (I asked in this question: What do the Android camera zoom numbers mathematically represent?)
