Get real size based on pixel size - Android

I am trying to get the true object size in an Android app using the OpenCV library.
I've created an app that is able to recognize a car (looking at its side). Now the thing that I need to do is to get its true width and height.
What I have:
I have the height and width in pixels of the object (I'm wrapping the car in a rectangle), and I have the resolution of the camera that is used.
Can I convert pixels to true size in some way? I can't use the distance from me to the object because I don't know it.

Can I convert pixels to true size in some way? I can't use the distance from me to the object because I don't know it.
No, you can't without knowing the distance. The perspective projection of real world objects (the car) onto the 2D image plane results in an information loss. For example, take a small toy car near the camera and a normal car far from the camera. Both could result in the same projected size in pixels although their true sizes are different.
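To make that concrete, here is a toy Java illustration of the pinhole projection (all numbers are made up): projected pixels = focal length in pixels * real size / distance, so a small near object and a large far object can measure exactly the same in the image.

// Toy illustration of the pinhole projection: pixels = focalPx * realSize / distance.
// focalPx and all sizes/distances below are made-up example values.
public class ProjectionAmbiguity {
    static double projectedPixels(double focalPx, double realSizeM, double distanceM) {
        return focalPx * realSizeM / distanceM;
    }

    public static void main(String[] args) {
        double focalPx = 1000;                                // hypothetical focal length in pixels
        double toyCar  = projectedPixels(focalPx, 0.4, 1.0);  // 0.4 m toy car, 1 m away
        double realCar = projectedPixels(focalPx, 4.0, 10.0); // 4 m car, 10 m away
        System.out.println(toyCar + " px vs " + realCar + " px"); // both print 400.0 px
    }
}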

Related

How to measure distance using camera pixels?

How can I know the distance between a phone and an object which appears in the camera? Is there a way to measure it by using the camera's pixels? And if yes, approximately what accuracy will I have?
Are you talking about a camera inside the phone? If so, there is no number of pixels between the phone and the object.
In "laboratory conditions" you could measure the size of the known object in the image and use some empirical values to interpolate the distance between phone and object. Maybe openCV for Android helps to get acceptable results outside laboratory conditions.
I think there is a solution to your problem:
Focus on the object and use the focus distance. The results for objects far away will be useless; results for nearer objects should be acceptable.
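If you go the focus-distance route on Android, a minimal sketch with the Camera2 API could look like the following. Note that LENS_FOCUS_DISTANCE is reported in diopters and is only meaningful on devices whose focus-distance calibration is APPROXIMATE or CALIBRATED, so treat the result as a rough estimate.

import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;

// Reads the lens focus distance from each completed capture.
public class FocusDistanceCallback extends CameraCaptureSession.CaptureCallback {
    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
                                   CaptureRequest request,
                                   TotalCaptureResult result) {
        Float diopters = result.get(CaptureResult.LENS_FOCUS_DISTANCE);
        if (diopters != null && diopters > 0f) {
            float distanceMeters = 1f / diopters; // diopters are 1/meters
            // distanceMeters is a rough estimate; far objects report effectively "infinity"
        }
    }
}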

How to measure the real size of an object by using another object (square marker) of known real size in Android? [duplicate]

I have to make a mobile app that calculates the real-life size of an object in an image.
I have done some research on it and found a helpful question: How would you find the height of objects given an image?
The relation between the distance to the camera and the real-life size of the object isn't actually that complex: the ratio of the size of the object on the sensor to its size in real life is the same as the ratio between the focal length and the distance to the object.
distance to object (mm) = (focal length (mm) * real height of the object (mm) * image height (pixels)) / (object height (pixels) * sensor height (mm))
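Written out as code, the same relation looks like this (just a restatement of the formula above, plus the rearranged form that would give the real height if the distance were known; the method names are mine):

// distance (mm) = focal length (mm) * real height (mm) * image height (px)
//                 / (object height (px) * sensor height (mm))
public class PinholeFormula {
    static double distanceToObjectMm(double focalLengthMm, double realHeightMm,
                                     double imageHeightPx, double objectHeightPx,
                                     double sensorHeightMm) {
        return (focalLengthMm * realHeightMm * imageHeightPx)
                / (objectHeightPx * sensorHeightMm);
    }

    // Rearranged: if you knew the distance instead, the same relation gives the real height.
    static double realHeightMm(double distanceMm, double focalLengthMm,
                               double objectHeightPx, double imageHeightPx,
                               double sensorHeightMm) {
        return (distanceMm * objectHeightPx * sensorHeightMm)
                / (focalLengthMm * imageHeightPx);
    }
}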
But how do I get the value of the real height of the object if the distance is not known?
Do the tools that create 3D models from images have real-life dimensions?
The simple answer is you can't.
Incidentally, this is why humans have two eyes. If you want to judge size without a known distance, you'll need at least two reference points. This allows you to triangulate the position of the object, get a distance to it, and use your known focal distance to calculate the size.
The more complex answer is there are ways around this for example:
Cheat by using a known reference:
For example, if you have an object of known size in the scene, you can infer the distance; this is similar to what NASA does to calibrate its cameras (see the sketch at the end of this answer).
You can make safe assumptions if you're dealing with common objects, such as the height of one storey when analysing the image of a building.
Move your camera around:
This allows you to get more than one reference point with the same camera.
I suppose you could use the accelerometer to accurately measure the positional relation between the image captured at point T1 in time and point T2. This would give you two images of the same subject with a known distance between them. This then allows you to triangulate as if you had two eyes.
Whether normal hand-held camera jitters will be sufficient for triangulation, or whether the accelerometer will be accurate enough to inertially position the phone, I don't know.
Assume a distance:
If your app is designed to compare something on the scale of a human hand (or other bit of human anatomy), you can probably safely assume a distance based on what people will naturally do. The focus limits of the camera itself will also give an upper and lower range on how far an object can be and still be in focus. This will probably be within a tolerable margin of error.
As you mention in your question, there is an entire subfield dedicated to this question, and it is an active research area.
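For the "known reference" idea above, a minimal sketch (marker size and pixel measurements are made-up examples): if a marker of known real size sits roughly in the same plane as the target, the distance cancels out and pixel sizes scale linearly.

// Known-reference trick: mm-per-pixel from the marker, applied to the object.
public class KnownReference {
    static double estimateRealSizeMm(double markerRealSizeMm, double markerSizePx,
                                     double objectSizePx) {
        double mmPerPixel = markerRealSizeMm / markerSizePx;
        return objectSizePx * mmPerPixel;
    }

    public static void main(String[] args) {
        // e.g. a 200 mm square marker measuring 20 px wide, a car measuring 400 px wide
        System.out.println(estimateRealSizeMm(200, 20, 400)); // 4000.0 mm
    }
}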

What camera viewport width and height should I use with an orthographic camera?

I am a complete beginner at game programming and libGDX. I am really confused about what camera viewport size I should use. In some articles I found that they used 480x800, which is the same as the target device size. In other articles I found that they use meters, e.g. a 5x5 meter viewport.
So which method is better, and why (if you can, name some benefits)?
If I use meter units for the camera viewport, which is mapped first, width or height?
If I use 5x5 meters for a 480x800 pixel device, is the visible area of the world
height = 5 meters = 480 px and
width = 800/480 * 5 = 8.33 meters
or
width = 5 meters = 800 px and
height = 480/800 * 5 = 3 meters?
Is this the correct calculation of the visible world size, and which of the two applies?
I get confused when they start using meters for sizes everywhere instead of pixels, like an actor being 1x1 meters even though it is only 64x64 px. It is really difficult for me to estimate positions and sizes.
Please link any good article about the camera and camera units.
Whatever dimensions you specify, they'll be mapped to the entire screen by default. If the aspect ratios don't match, the displayed graphics will be stretched. So if you set your camera to a 5x5 coordinate system on a non-square screen without changing the drawing area, it'll be heavily distorted. If you render it in a square desktop window, it'll be fine.
The advantage of using a smaller coordinate system is that it's easier to calculate with, and possibly more meaningful in the context of a game - e.g. you can think of the units as meters, as you said. It's useful in cases where the content matters more than the exact positions on the screen - like drawing the game world.
Using larger coordinates which match the resolution of some devices can be more useful when you're drawing UI. You can see how large you should make each image, for example, if you target that resolution. (Scaling can cause distortions.)
But ultimately, it's a matter of preference. Personally, I like smaller coordinate systems more, so I recently coded my level select menu in a 20*12 system. (I did run into problems when rendering a BitmapFont though - they were not very well made for scaling like this.) Others might prefer to work with resolution-sized coordinates for gameplay rendering as well. What matters is that you should make sure you're not distorting the graphics too much by badly matching aspect ratios.
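For reference, here is a minimal libGDX sketch of the "small world units" setup described above (the 20x12 numbers are just the ones from this answer; FitViewport is one of several viewport strategies and letterboxes instead of stretching, so the aspect ratio is preserved):

import com.badlogic.gdx.ApplicationAdapter;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
import com.badlogic.gdx.utils.viewport.FitViewport;

public class WorldUnitsExample extends ApplicationAdapter {
    private OrthographicCamera camera;
    private FitViewport viewport;
    private SpriteBatch batch;

    @Override
    public void create() {
        camera = new OrthographicCamera();
        viewport = new FitViewport(20, 12, camera); // world is 20 x 12 "meters"
        batch = new SpriteBatch();
    }

    @Override
    public void resize(int width, int height) {
        viewport.update(width, height, true); // re-fit and center the camera on resize
    }

    @Override
    public void render() {
        camera.update();
        batch.setProjectionMatrix(camera.combined); // draw in world units, not pixels
        // batch.begin(); ... draw sprites sized in world units ... batch.end();
    }
}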
Try 136 for width and 204 for height.

GroundOverlay images with dimension lengths that are a power of 2

I'm using a GroundOverlay to display images over a Google Map on Android. The images I'm using are being pulled from a server and aren't guaranteed to have a width/height that is a power of 2.
According to the docs, if I don't make the side lengths a power of 2, the API will do this for me.
Note: When the image is added to the map it will be converted to an image with sides that are powers of two. You can avoid this conversion by using an original image with dimensions that are powers of two — for example, 128x512 or 1024x1024.
Does anyone on the Google Maps for Android team have any information on whether the images are stretched/cropped etc in order to make them have sides of length power 2?
If they are stretched/cropped is this taken into account when rendering the bitmap if I specify a certain LatLngBounds as the region to overlay the image onto (e.g. are the bounds increased to match the new width/height)?
Also, is the aspect ratio of the image preserved? Is this even possible if the original source image doesn't have an aspect ratio that would allow resizing to sides whose lengths are a power of 2?
Thanks,
Andy.
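I don't know what the Maps renderer does internally, but one way to sidestep the implicit conversion is to scale the bitmap to power-of-two dimensions yourself before building the overlay. A sketch (the helper names are mine; since the overlay is stretched to fill the given LatLngBounds anyway, changing the pixel aspect ratio this way should not change where it is drawn):

import android.graphics.Bitmap;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.model.BitmapDescriptorFactory;
import com.google.android.gms.maps.model.GroundOverlayOptions;
import com.google.android.gms.maps.model.LatLngBounds;

public final class OverlayHelper {
    // Smallest power of two that is >= n (assumes n > 0).
    static int nextPowerOfTwo(int n) {
        int highest = Integer.highestOneBit(n);
        return (highest == n) ? n : highest << 1;
    }

    static void addOverlay(GoogleMap map, Bitmap source, LatLngBounds bounds) {
        Bitmap scaled = Bitmap.createScaledBitmap(
                source,
                nextPowerOfTwo(source.getWidth()),
                nextPowerOfTwo(source.getHeight()),
                true /* filter */);
        map.addGroundOverlay(new GroundOverlayOptions()
                .image(BitmapDescriptorFactory.fromBitmap(scaled))
                .positionFromBounds(bounds));
    }
}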

Is it possible to measure the distance to an object with a camera?

Is it possible to measure the distance to an object with a phone camera?
I mean, in my application I start the camera, point it at the object (let's say a house), and then press a button, and it calculates the distance and shows it on the screen.
If it's possible, where can I find a tutorial or some information about it?
I accept that the question has been answered adequately (with the obvious caveats of requiring level ground and possible accuracy problems), but for those who don't believe it can be done, or who think it needs a video camera, let me explain the low-level math needed to do it.
The picture above shows me standing outside my house. The horizontal (d) is the distance I want to measure and the vertical (h) is the height above the ground at which I'm holding the camera. In this case 'h' is a known value when I'm holding the Android camera at eye level (approx. 67 inches or 1.7 metres). When I tilt the camera to aim it directly at the point where my house meets the ground, all the software needs to do is work out the angle (a) relative to vertical, and it can calculate 'd' using...
d = h * tan a
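In code that is a one-liner; the only real work is getting the tilt angle from the device orientation sensors, which is omitted in this sketch:

// d = h * tan(a). 'angleFromVerticalRad' is the camera's tilt away from pointing
// straight down (derived from the orientation sensors); 'cameraHeightMeters' is
// roughly eye level, e.g. 1.7 m.
public class GroundDistance {
    static double distanceMeters(double cameraHeightMeters, double angleFromVerticalRad) {
        return cameraHeightMeters * Math.tan(angleFromVerticalRad);
    }

    public static void main(String[] args) {
        // holding the phone at 1.7 m, tilted 80 degrees away from vertical
        System.out.println(distanceMeters(1.7, Math.toRadians(80))); // ~9.6 m
    }
}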
Well, you should read how ithinkdiff.com "measures" the distance:
Uses the angle of the iPhone to estimate the distance to a point on the ground.
Hold the iPhone in front of you, align the point in the camera and get a direct
reading of the distance. The distance can then be used in the speed tool.
So basically it takes the height at which you hold the phone (eye level), then you must point the camera at the point where the object touches the ground. The phone then measures the inclination and, with simple trigonometry, calculates the distance.
This is of course not very accurate. It gets less accurate the further the object is. Also it assumes that the ground is level.
Nope. The camera can only give you image data, and an image alone doesn't contain enough information to recover depth. If you had multiple images with location information, or even video, you could process them to triangulate the distance, but a single image alone is not enough.
You can use the same technique our eyes use to get a sense of depth and distance:
1) Get 2 images of the same object from two different camera positions.
2) The disparity, i.e. the pixel offset of the object between the two images, is inversely proportional to the distance between the camera and the object.
The implementation is available at https://github.com/agnelvishal/Distance-between-camera-and-object
Here is the research paper http://dsc.ijs.si/files/papers/S101%20Mrovlje.pdf
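A back-of-the-envelope version of that relation is the standard parallel-camera stereo formula, distance = baseline * focal length / disparity (the numbers below are illustrative, not taken from the linked paper or repository):

// Parallel stereo: the disparity is how far (in pixels) the object shifts
// between the two images taken 'baselineMeters' apart.
public class StereoDistance {
    static double distanceMeters(double baselineMeters, double focalLengthPx,
                                 double disparityPx) {
        return baselineMeters * focalLengthPx / disparityPx;
    }

    public static void main(String[] args) {
        // two shots 0.1 m apart, ~1000 px focal length, object shifts by 25 px
        System.out.println(distanceMeters(0.1, 1000, 25)); // 4.0 m
    }
}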
You can get the angle from the phone's accelerometer. If you calculate the tangent of this angle and multiply it by the height of the camera lens, you get the distance.
I think this app uses the approach MisterSquonk mentioned (it's free). Watch the "Trigonometry" technique.
I think you can calculate the distance between the camera and the object by using FastCV. With this you don't need to know the angle or the position of the camera above ground level. Take a look at this question here.
One way to achieve this is by using the DPI of your device. You can take a picture and calculate the height, but you'll need another object as a reference. The problem with this method could be the perspective between the objects.
I think it could be possible using the phone camera. Modern phones use lenses to focus on an object; if it is possible to know their focal length and their position (displacement) when focused on the chosen object, it is also possible to determine the distance.
No. Only with two cameras in stereo mode, like the Xbox 360 Kinect. It takes at least 3 points to triangulate distance.
