I am working on a project in which I have to calculate my device's height from the ground. I have searched all over the internet but could not find any solution.
Can anyone tell me what to do?
Take this with a grain of salt, a bit of humor and a sense of philosophy: replace the barometer in the story below with your smartphone.
http://naturelovesmath-en.blogspot.ca/2011/06/niels-bohr-barometer-question-myth.html
First it has to be clarified whether "height from ground" means altitude in the sense of "height above sea level", or how far the phone is from the floor when you are holding it in your hands.
For the second case:
As SonicWind states, you could do the trick using the camera.
It would require calibrating the camera and having a standard object.
Take a picture of the standard object, positioned on the ground, at the standard zoom level.
Recognize the object's size (or let the user select it in the picture) and calculate the distance to the object from its apparent size.
-> you have the distance to the ground.
The object might also be your shoes, etc. So if the application is meant for multiple users, you might let them enter their shoe sizes ;)
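A minimal sketch of the similar-triangles math this approach relies on, assuming the object's size in pixels has already been measured. The focal length can come from Camera.Parameters.getFocalLength(); the sensor height is a device-specific value you would have to calibrate or look up, and the method name is mine:

    // Rough pinhole-camera sketch: estimate the distance to an object of
    // known size. focalLengthMm can come from Camera.Parameters.getFocalLength();
    // sensorHeightMm must come from calibration or the device specs.
    public static double estimateDistanceMm(double realHeightMm,
                                            double focalLengthMm,
                                            double sensorHeightMm,
                                            int imageHeightPx,
                                            int objectHeightPx) {
        // Focal length in pixel units, then similar triangles:
        // distance / realHeight == focalLengthPx / objectHeightPx
        double focalLengthPx = focalLengthMm * imageHeightPx / sensorHeightMm;
        return realHeightMm * focalLengthPx / objectHeightPx;
    }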
This is an odd one, but OK, I like a challenge. The only way to do this realistically is to attach a sonar sensor to the phone (easily done with an Arduino). Other than that, all you can do is read the accelerometers to guesstimate the distance (put the phone on the ground, then lift it to the height you want; a rough sketch is below). It appears to be impossible otherwise (maybe some clever use of the camera...).
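For what it's worth, here is a minimal (and heavily caveated) sketch of that accelerometer guesstimate. Double integration drifts badly within seconds, so treat it as an illustration only; the sensor-listener registration is assumed to exist elsewhere:

    // Dead-reckoning sketch: integrate vertical acceleration twice to
    // guesstimate how far the phone was lifted. Drift makes this very
    // inaccurate; it only illustrates the idea.
    private double velocity = 0.0;  // m/s
    private double height = 0.0;    // m
    private long lastTimestamp = 0; // ns

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION) return;
        if (lastTimestamp != 0) {
            double dt = (event.timestamp - lastTimestamp) * 1e-9; // ns -> s
            double az = event.values[2]; // z-axis, gravity already removed;
                                         // only truly vertical if the phone is flat
            velocity += az * dt;         // first integration: velocity
            height += velocity * dt;     // second integration: displacement
        }
        lastTimestamp = event.timestamp;
    }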
My device has only two focus modes, AUTO and FIXED (as per getSupportedFocusModes()).
I want to set my camera at a fixed focus distance of 'x' (x being whatever I like, or whatever I can get from the camera..). (I'm aware of setFocusMode(Camera.Parameters.FOCUS_MODE_FIXED), but this seems to be fixed only on the farthest possible setting..)
Can this be done? (Android version 4.2.2)
Not trying to completely answer the question here, just trying to give it some direction.
So what you need here is driver support for that kind of operation. Then, at some point, your application would ask the driver to set a requested focus distance.
Another question is whether anyone really needs that kind of functionality.
Android documentation says:
public static final String FOCUS_MODE_FIXED
Focus is fixed. The camera is always in this mode if the focus is not adjustable. If the camera has auto-focus, this mode can fix the focus, which is usually at hyperfocal distance. Applications should not call autoFocus(AutoFocusCallback) in this mode.
Let's see what hyperfocal distance is.
Hyperfocal distance
From Wikipedia, the free encyclopedia
In optics and photography, hyperfocal distance is a distance beyond which all objects can be brought into an "acceptable" focus. There are two commonly used definitions of hyperfocal distance, leading to values that differ only slightly:
Definition 1: The hyperfocal distance is the closest distance at which a lens can be focused while keeping objects at infinity acceptably sharp. When the lens is focused at this distance, all objects at distances from half of the hyperfocal distance out to infinity will be acceptably sharp.
Definition 2: The hyperfocal distance is the distance beyond which all objects are acceptably sharp, for a lens focused at infinity.
The distinction between the two meanings is rarely made, since they have almost identical values. The value computed according to the first definition exceeds that from the second by just one focal length.
As the hyperfocal distance is the focus distance giving the maximum depth of field, it is the most desirable distance to set the focus of a fixed-focus camera.
So the focus is not set on the farthest possible setting; it is set so that all visible objects are acceptably sharp.
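For a sense of scale, the usual formula is H = f^2 / (N * c) + f. With assumed (not measured) smartphone-like values of focal length f = 4 mm, aperture N = 2.0 and circle of confusion c = 0.002 mm, that gives H = 16 / 0.004 + 4, roughly 4004 mm, so everything from about 2 m to infinity would be acceptably sharp.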
Returning to the question.
If you happen to be a developer of this particular camera's firmware, you can add any needed IOCTLs to your driver. But then you're still going to need to call them somehow. This can't be achieved without adding additional functions to the Android OS and recompiling Android itself along with its underlying Linux kernel.
So it seems you can't achieve this goal, at least not from user space.
One potential approach to achieve a fixed focus distance is to call autoFocus at the start of the camera life-cycle, and keep calling it sporadically until a condition is met. Once the condition is met, then instead of calling autoFocus, set a flag and call takePicture instead.
This is one solution that I have come to in order to get the desired effect that you might be looking to achieve.
So within my thread that is taking pictures continuously, the code looks something like this:
    if (needsFocus)
    {
        // Still converging: trigger another auto-focus pass.
        myCamera.autoFocus(autoFocusCallback);
    }
    else // focus is not needed anymore at this point
    {
        if (myCamera != null)
        {
            myCamera.startPreview();
            // takePicture expects shutter, raw and JPEG callbacks;
            // only the JPEG callback is used here.
            myCamera.takePicture(null, null, pictureCallback);
        }
    }
Once the condition is met, needsFocus is set to false. At this point the focus is fixed where I want it, and it won't change for the rest of the activity's lifetime. The condition in my case was the appearance of a particular object detected with the OpenCV library.
I might be wrong, but the way you phrase your question sounds like it comes from a classic DSLR-lens perspective.
On an Android mobile camera you don't actually have to worry that much about lens focus distance, unless your camera exposes such modes (which does not seem to be the case here, as you mention it only allows auto or fixed, instead of infinity, macro, continuous-video, etc.).
You can just set focus areas on the camera and let the SDK do its work: whether the object touched in the camera image is far or near, it's the SDK's job to calculate accordingly and focus for you. A sketch follows below.
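As a hedged sketch of that idea (assuming API level 14+ and a camera that reports at least one supported focus area; the method name is made up):

    import android.graphics.Rect;
    import android.hardware.Camera;
    import java.util.ArrayList;
    import java.util.List;

    // Let the SDK focus on a chosen region instead of trying to control
    // the focus distance directly. Here the region is the frame center.
    private void focusOnCenterArea(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        if (params.getMaxNumFocusAreas() > 0) {
            // Focus-area coordinates always range from -1000 to 1000,
            // regardless of the preview size.
            Rect centerRect = new Rect(-100, -100, 100, 100);
            List<Camera.Area> areas = new ArrayList<Camera.Area>();
            areas.add(new Camera.Area(centerRect, 1000)); // weight 1..1000
            params.setFocusAreas(areas);
            camera.setParameters(params);
            camera.autoFocus(new Camera.AutoFocusCallback() {
                @Override
                public void onAutoFocus(boolean success, Camera cam) {
                    // Focus pass finished; nothing else to do here.
                }
            });
        }
    }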
For an example, try this open-source camera project.
I have a nice idea for an Android application: I want to make a real scale, not like the fake ones out there. I was thinking about how to do it but don't have any idea.
EDIT: what "make a real scale" means: for example, I want to calculate the weight of a coin. I put the coin on the screen and the app calculates its weight, and if possible the scale should handle weights up to 50 grams.
I hope it's understood now.
Actually it is possible, just probably not precise enough. Most touch controllers can report touch pressure, and it is available via the MotionEvent.getPressure() call. So with some luck and tedious calibration you can measure the weight of something. It is going to work better with a cheaper resistive screen.
There is nothing in the Android SDK that supports measuring the weight of a coin.
You can get the pressure (0-1) from a touch event. Calibrate it with a coin of known mass (a US quarter is 5.67 g). Then if a quarter gives you a pressure value of 0.2, you know that to estimate weight you use the formula w = p * (5.67 / 0.2), where p is the value read from getPressure(). A sketch is below.
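A hedged sketch of that calibration idea; the 0.2 pressure reading is a made-up per-device constant you would measure yourself, since getPressure() is not calibrated in physical units:

    import android.util.Log;
    import android.view.MotionEvent;

    // Made-up calibration: on *your* device, a quarter reads pressure ~0.2.
    private static final double QUARTER_WEIGHT_GRAMS = 5.67; // US quarter
    private static final double QUARTER_PRESSURE = 0.2;

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        float p = event.getPressure(); // typically 0..1, device-dependent
        double grams = p * (QUARTER_WEIGHT_GRAMS / QUARTER_PRESSURE);
        Log.d("Scale", "Estimated weight: " + grams + " g");
        return true;
    }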
I don't know if you can scale it...
Good Luck.
This is my first post on this forum and I'm very new to programming. I want to build an application where I can see exactly where some GPS values are on my phone's screen. I know a lot of applications, like junaio, mixare and others, but they only show the direction to the objects and are not very accurate (they don't aim to project objects at their exact position on screen), so I want to build it myself. I'm programming on Android, but I think it would be the same on iPhone.
I followed the steps suggested by dabhaid:
There are three steps.
1) Determine your position and orientation using sensors.
2) Convert from GPS coordinate space to a planar coordinate space by determining the relative position and bearing of known GPS coordinates using e.g. great circle distance and bearing. (your device stays at the origin of the coordinate space with this scheme)
3) Do a perspective projection http://en.wikipedia.org/wiki/3D_projection#Perspective_projection to figure out where on the plane that is your display (ok, your camera sensor) the objects should appear, so you can augment them.
Step 1: easy, I have the GPS position and all orientations from my mobile device (x, y, z). For further refinement I can use some algorithm to smooth these values (average, low-pass filter, whatever).
Step 2: I don't know what is meant exactly by a planar coordinate space. I have some different approaches for converting my GPS coordinate space. One of them is ECEF (earth-centered), where (0,0,0) is the center of the earth. Somehow this doesn't look good to me, because every little change along ONE axis results in changes in the other two axes. So if I change the altitude, all three axes change. I don't know if I can follow step 3 with this coordinate system.
Step 2 mentions using the haversine formula. This would give me the distance to the point, but I don't get x, y, z from it. Do I have to calculate x and y using trigonometry (bearing as the angle, distance as the hypotenuse)? See the sketch below.
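Essentially yes. For short (AR-scale) distances you can skip the full haversine-plus-bearing construction and use an equirectangular approximation, which gives the same x, y directly. A sketch, with a method name of my own choosing:

    // Step 2 sketch: convert a GPS coordinate to a local planar (east, north)
    // offset in metres, with the device at the origin. The equirectangular
    // approximation is fine for distances of a few kilometres.
    public static double[] gpsToLocalXY(double devLat, double devLon,
                                        double targetLat, double targetLon) {
        final double EARTH_RADIUS_M = 6371000.0;
        double dLat = Math.toRadians(targetLat - devLat);
        double dLon = Math.toRadians(targetLon - devLon);
        double x = EARTH_RADIUS_M * dLon * Math.cos(Math.toRadians(devLat)); // east
        double y = EARTH_RADIUS_M * dLat;                                    // north
        return new double[] { x, y };
    }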
Step 3: This method looks really cool! If I have my coordinate space from step 2, I can calculate d_x, d_y, d_z using the formula on Wikipedia. But after this step I'm not finished, because I just have 3D coordinates, and to project onto my screen I only need two. The Wikipedia text continues by calculating b_x, b_y using e_x, e_y, e_z, which is the viewer's position relative to the display surface. How can I get these values from my mobile device (Android/iOS)? Another approach suggested by Wikipedia is to calculate b_x, b_y using s_x, s_y (the screen size) and r_x, r_y (the recording-surface size). Again, how can I get the recording-surface size from my mobile device? (A sketch of this step follows.)
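On the e_x, e_y, e_z question: in practice the viewer-to-surface distance collapses into the camera's field of view, which Android exposes via Camera.Parameters.getHorizontalViewAngle(). A hedged projection sketch, assuming the point is already in camera coordinates with z pointing forward:

    // Step 3 sketch: project a camera-space point (dx, dy, dz) onto the screen.
    public static float[] projectToScreen(double dx, double dy, double dz,
                                          int screenW, int screenH,
                                          double horizontalFovDeg) {
        if (dz <= 0) return null; // behind the camera, nothing to draw
        // Focal length in pixel units, derived from the horizontal FOV.
        double fPx = (screenW / 2.0) / Math.tan(Math.toRadians(horizontalFovDeg) / 2.0);
        float sx = (float) (screenW / 2.0 + fPx * dx / dz);
        float sy = (float) (screenH / 2.0 - fPx * dy / dz); // screen y grows downward
        return new float[] { sx, sy };
    }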
I can't find anything about it on the internet. It seems that nobody on Android/iOS has ever implemented a perspective projection before...
Thank you very much for all of your answers! Links to useful sites would also help!
I think you can find many answers in this other thread: Transform GPS-Points to Screen-Points with Perspective Projection in Android.
Hope it helped, bye!
Here's a simple solution I did on this issue.
A: Mapping GPS locations on the camera preview in Android
Hope it helped. :D
I would like to develop a personal app, and for this I need to detect my car's rotation.
In a previous thread I got an answer about which sensors are good for that, so that part is okay.
Now I would like to ask you to please summarize the essential mathematical relationships needed.
What I would like to see in my app:
- The car's rotation in degrees
- The car's actual speed (in general this app will be used in slow-speed situations, like 3-5 km/h)
I think the harder part of this is detecting the rotation in real time. It would be good if the app could work when I place the phone in a car holder in landscape or portrait mode.
So please summarize which equations, formulas and relationships are needed to calculate the car's rotation. And please tell me your recommendation for which motion/position sensors are best for this purpose (gravity, accelerometer, gyro, ...).
First I thought I would target Android 2.2 for better compatibility with my phones, but 2.3.3 is okay for me too. In that case I can use TYPE_ROTATION_VECTOR, which looks like a good thing, but I don't really know whether it would be useful for me or not.
I don't want full source code; I would like to develop it myself, but I need to know where to start, how deep the required math goes, and which areas of math are needed. As for the sensor question: I'm a bit confused, since there are many sensors that might be suitable.
Thanks,
There is no deep math that you need. You should use TYPE_MAGNETIC_FIELD together with TYPE_GRAVITY if it is available; otherwise use TYPE_ACCELEROMETER, but then you need to filter the accelerometer values with a Kalman filter or a low-pass filter. Use the direction of the back camera as the reference. This direction is the azimuth returned by getOrientation, but before calling getOrientation you need to call remapCoordinateSystem(inR, AXIS_X, AXIS_Z, outR) to get the correct azimuth. Then, as long as the device is not lying flat, it does not matter what the device orientation is (landscape or portrait). Just make sure the phone screen faces opposite to the car's direction of travel. A sketch of this is below.
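A minimal sketch of that azimuth pipeline (sensor registration omitted; onAzimuth is a hypothetical hook used by the turn-detection sketch further down):

    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_GRAVITY) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
        float[] inR = new float[9];
        float[] outR = new float[9];
        if (SensorManager.getRotationMatrix(inR, null, gravity, geomagnetic)) {
            // Remap so the azimuth follows the back camera's axis.
            SensorManager.remapCoordinateSystem(inR,
                    SensorManager.AXIS_X, SensorManager.AXIS_Z, outR);
            float[] orientation = new float[3];
            SensorManager.getOrientation(outR, orientation);
            float azimuthDeg = (float) Math.toDegrees(orientation[0]);
            onAzimuth(azimuthDeg); // hypothetical hook, see below
        }
    }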
Now declare two class members, startDirection and endDirection. In the beginning they have the same value. If the azimuth changes by more than, say, 3 degrees (there is always a little fluctuation), change endDirection to this value, and keep updating it until, say, 20 consecutive returned azimuths have the same value (you have to experiment with this number). This means the car has stopped turning; then calculate the difference between startDirection and endDirection, which gives you the degrees of rotation. Now set startDirection and endDirection to this new azimuth and wait for the next turn. A sketch follows.
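And a sketch of that bookkeeping; the 3-degree threshold and the 20-sample count are the experiment-with-me values from above, and wrap-around at +/-180 degrees is deliberately ignored to keep it short:

    private float startDirection, endDirection;
    private int stableCount = 0;

    private void onAzimuth(float azimuthDeg) {
        if (Math.abs(azimuthDeg - endDirection) > 3f) {
            endDirection = azimuthDeg; // still turning
            stableCount = 0;
        } else if (++stableCount >= 20) {
            // Azimuth has been stable long enough: the turn is over.
            float rotationDeg = endDirection - startDirection;
            // ...report rotationDeg, then arm for the next turn:
            startDirection = endDirection;
            stableCount = 0;
        }
    }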
Is it possible to measure the distance to an object with a phone camera?
I mean, in my application I start the camera, point it at the object (let's say a house), press a button, and it calculates the distance and shows it on the screen.
If it's possible, where can I find some tutorial or information about it?
I accept that the question has been answered adequately (with the obvious caveats of requiring level ground and possible accuracy problems), but for those who don't believe it can be done, or who think it needs a video camera, let me explain the low-level math needed to do it...
The picture above shows me standing outside my house. The horizontal distance (d) is what I want to measure and the vertical (h) is the height above the ground at which I'm holding the camera. In this case h is a known value when I'm holding the Android camera at eye level (approx. 67 inches, or 1.7 metres). When I tilt the camera to aim it directly at the point where my house meets the ground, all the software needs to do is work out the angle (a) relative to vertical, and it can then calculate d using...
d = h * tan a
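In code that whole calculation is one line. A tiny sketch, assuming the tilt angle is measured from vertical and the ground is level:

    // d = h * tan(a): distance along level ground from camera height and tilt.
    public static double distanceToGroundPoint(double cameraHeightM,
                                               double tiltFromVerticalDeg) {
        return cameraHeightM * Math.tan(Math.toRadians(tiltFromVerticalDeg));
    }
    // e.g. eye level h = 1.7 m, tilted 60 degrees from vertical:
    // distanceToGroundPoint(1.7, 60) ~= 2.94 m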
Well, you should read how ithinkdiff.com "measures" the distance:
Uses the angle of the iPhone to estimate the distance to a point on the ground.
Hold the iPhone in front of you, align the point in the camera and get a direct
reading of the distance. The distance can then be used in the speed tool.
So basically it takes the height at which you hold the phone (eye level), then you point the camera at the point where the object touches the ground. The phone measures the inclination, and with simple trigonometry it calculates the distance.
This is of course not very accurate. It gets less accurate the further away the object is. It also assumes that the ground is level.
Nope. The camera can only give you image data, and an image alone doesn't give you enough information to derive depth. With multiple images that you had location information for, or with video, you could process them to triangulate the distance, but a single image alone is not enough.
You can use the technique our eyes use to get a sense of depth and distance.
1) Take 2 images of the same object from two different camera positions.
2) The disparity (the pixel offset of the object between the 2 images) is inversely proportional to the distance between the camera and the object; see the sketch after the links below.
The implementation is available at https://github.com/agnelvishal/Distance-between-camera-and-object
Here is the research paper http://dsc.ijs.si/files/papers/S101%20Mrovlje.pdf
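The core relation in that paper is the standard stereo formula Z = B * f / d (baseline times focal length over disparity). A minimal sketch, assuming you have already matched the object in both images and know the focal length in pixel units:

    // Stereo sketch: distance from two shots taken a known baseline apart.
    public static double stereoDistanceM(double baselineM,
                                         double focalLengthPx,
                                         double disparityPx) {
        // Z = B * f / d -- distance is inversely proportional to disparity.
        return baselineM * focalLengthPx / disparityPx;
    }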
You have the tilt angle from the phone's accelerometer. If you calculate the tangent of this angle and multiply it by the height of the camera lens above the ground, you get the distance.
I think this app uses the approach MisterSquonk mentioned (it's free). Watch the "Trigonometry" technique.
I think that by using FastCV you can calculate the distance between the camera and the object. With this you don't need to know the angle or the position of the camera above ground level. Take a look at this question here.
One way to achieve this is by using the DPI of your device. You can take a picture and calculate the height, but you'll need another object as a reference; then you'll be able to estimate the distance. The problem with this method could be the perspective between the objects.
I think it could be possible using the phone camera. Modern phones use lenses to focus on an object. If it is possible to know the focal length and the lens position (displacement) used to focus on the chosen object, it's also possible to determine the distance.
No. Only with two cameras in stereo mode, like the Xbox 360 Kinect. It takes at least two viewpoints to triangulate a distance.