Mobile phone movement in space - android

Is it possible to track mobile phone movement in space? I need info like:
the vertical position of the phone and its movement in the 3D space of the real world.
So let's imagine that we are holding the phone 1.5 meters from the ground and moving it along a circular trajectory. I need to get the coordinates of the phone in the real world. Is this possible somehow?

It would be possible to do it GPS-style: you would have to move the z axis onto the x axis, basically making the 3D element part of the 2D element, and then reconstruct the 3D sphere using the newly obtained information to get a position in space. The problem with this method is that the Earth is round, and you can never get a single view that applies to all phones.

Related

Android accelerometer mapping movement to custom coordinate system

If I have a custom coordinate system X - left/right, Y - forward/backward, Z - up/down that is represented on my PC screen inside my Unreal project, how would I map the accelerometer values in a way that when I move my phone toward the PC screen (regardless of the phone orientation), my Y value goes up, and the same for the other axes?
I got something similar working with rotation by taking a "reference" rotation quaternion, inverting it and multiplying it by the current rotation quaternion, but I'm stuck on how to transform movement.
An example of my problem: if I'm moving my phone up with the screen pointing at the sky, my Z axis increases, which is what I want; but when I point my phone screen at my PC screen and move it forward, the Z axis again goes up, when in this case I would want my Y value to increase.
There is a similar question, Acceleration from device's coordinate system into absolute coordinate system, but that doesn't really solve my problem since I don't want to depend on the location of north for Y and so on.
Clarification of question intent
It sounds like what you want is the acceleration of your device with respect to your laptop. As you correctly mentioned, the similar question Acceleration from device's coordinate system into absolute coordinate system maps the local accelerometer data of a device with respect to a global frame of reference (FoR) (the Cartesian "flat" Earth FoR to be specific - as opposed to the ultra-realistic spherical Earth FoR).
What you know
From your device, you know the local Phone FoR, and from the link above, you can also find the behavior of your device with respect to a flat Earth FoR via a rotation matrix, which I'll call R_EP for Rotation in Earth FoR from Phone FoR. In order to represent the acceleration of your device with respect to your laptop, you will need to know how your laptop is oriented and positioned with respect to either your phone's FoR (A), or the flat Earth FoR (B), or some other FoR that is known to both your laptop and your phone, but I'll ignore that last case because the method is identical to B.
What you'll need
In the first case, A, this will allow you to construct a rotation matrix which I'll call R_LP for Rotation in Laptop FoR from Phone FoR - and that would be super convenient because that's your answer. But alas, life isn't fun without a little bit of a challenge.
In the second case, B, this will allow you to construct a rotation matrix which I'll call R_LE for Rotation in Laptop FoR from Earth FoR. Because the Hamilton product is associative (but NOT commutative: Are quaternions generally multiplied in an order opposite to matrices?), you can find the acceleration of your phone with respect to your laptop by daisy-chaining the rotations, like so:
a_P]L = R_LE * R_EP * a_P]P
Where the ] means "in the frame of", and a_P is acceleration of the Phone. So a_P]L is the acceleration of the Phone in the Laptop FoR, and a_P]P is the acceleration of the Phone in the Phone's FoR.
NOTE When "daisy-chaining" rotation matrices, it's important that they follow a specific order. Always make sure that the rotation matrices are multiplied in the correct order, see Sections 2.6 and 3.1.4 in [1] for more information.
Hint
To define your laptop's FoR (orientation and position) with respect to the global "flat" Earth FoR, you can place your phone on your laptop and set the current orientation and position as your laptop's FoR. This will let you construct R_LE.
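Sketch (plain Java) of the daisy-chain above, assuming R_EP is the 9-element row-major matrix from SensorManager.getRotationMatrix() for the current phone pose, and R_EL was captured once while the phone was lying flat on the laptop (so the phone frame coincided with the laptop frame at that instant). The helper names are mine, not from any library:

// 3x3 row-major matrix times 3x1 vector
static float[] mulMatVec(float[] m, float[] v) {
    return new float[] {
        m[0]*v[0] + m[1]*v[1] + m[2]*v[2],
        m[3]*v[0] + m[4]*v[1] + m[5]*v[2],
        m[6]*v[0] + m[7]*v[1] + m[8]*v[2]
    };
}

// 3x3 row-major matrix product
static float[] mulMatMat(float[] a, float[] b) {
    float[] c = new float[9];
    for (int r = 0; r < 3; r++)
        for (int col = 0; col < 3; col++)
            for (int k = 0; k < 3; k++)
                c[3*r + col] += a[3*r + k] * b[3*k + col];
    return c;
}

// For a rotation matrix, the inverse is just the transpose
static float[] transpose(float[] m) {
    return new float[] { m[0], m[3], m[6], m[1], m[4], m[7], m[2], m[5], m[8] };
}

// a_P]L = R_LE * R_EP * a_P]P
static float[] accelInLaptopFrame(float[] R_EL, float[] R_EP, float[] accelPhone) {
    float[] R_LE = transpose(R_EL);        // Laptop FoR from Earth FoR
    float[] R_LP = mulMatMat(R_LE, R_EP);  // Laptop FoR from Phone FoR
    return mulMatVec(R_LP, accelPhone);
}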
Misconceptions
A rotation quaternion, q, is NEITHER the orientation NOR attitude of one frame of reference relative to another. Instead, it represents a "midpoint" vector normal to the rotation plane about which vectors from one frame of reference are rotated to the other. This is why defining quaternions to rotate from a GLOBAL frame to a local frame (or vice-versa) is incredibly important. The ENU to NED rotation is a perfect example, where the rotation quaternion is [0; sqrt(2)/2; sqrt(2)/2; 0], a "midpoint" between the two abscissa (X) axes (in both the global and local frames of reference). If you do the "right-hand rule" with your three fingers pointing along the ENU orientation, and rapidly switch back and forth from the NED orientation, you'll see that the rotation from both FoRs is simply a rotation about [1; 1; 0] in the Global FoR.
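For a quick hands-on check of that ENU/NED example: the pure quaternion [0; sqrt(2)/2; sqrt(2)/2; 0] is a 180-degree rotation about [1; 1; 0]/sqrt(2), and a 180-degree rotation about a unit axis u reduces to v' = 2(u.v)u - v. The tiny Java program below (mine, just for illustration) rotates the ENU basis vectors and prints the NED ones:

public class EnuNedCheck {
    // 180-degree rotation of v about the unit axis u: v' = 2(u.v)u - v
    static double[] rotate180(double[] u, double[] v) {
        double d = u[0]*v[0] + u[1]*v[1] + u[2]*v[2];
        return new double[] { 2*d*u[0] - v[0], 2*d*u[1] - v[1], 2*d*u[2] - v[2] };
    }

    public static void main(String[] args) {
        double s = Math.sqrt(2) / 2;
        double[] axis  = { s, s, 0 };   // the "midpoint" axis between the two X axes
        double[] east  = { 1, 0, 0 };
        double[] north = { 0, 1, 0 };
        double[] up    = { 0, 0, 1 };
        // prints (0,1,0), (1,0,0), (0,0,-1): the axis swap and sign flip taking ENU to NED
        System.out.println(java.util.Arrays.toString(rotate180(axis, east)));
        System.out.println(java.util.Arrays.toString(rotate180(axis, north)));
        System.out.println(java.util.Arrays.toString(rotate180(axis, up)));
    }
}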
References
I cannot recommend the following open-source reference highly enough:
[1] "Quaternion kinematics for the error-state Kalman filter" by Joan SolĂ . https://hal.archives-ouvertes.fr/hal-01122406v5
For a "playground" to experiment with, and gain a "hands-on" understanding of quaternions:
[2] Visualizing quaternions, An explorable video series. Lessons by Grant Sanderson. Technology by Ben Eater https://eater.net/quaternions

Fixing device coordinate system regardless of device orientation?

I am successfully converting (latitude, longitude) coordinates to (East, North) coordinates and am trying to figure out a way to accurately place the (East, North) coordinates in the AR world.
Example of my current issue:
Device coordinate system:
A conversion from a (latitude, longitude) coordinate gives me an (East, North) coordinate of,
(+x, 0, +z) where x=East, z=North
Now, if I am facing northwards, the EN coordinate will be placed behind me, as the forward-facing axis is -z. If I am facing southwards, the EN coordinate will be placed behind me once again, because it is dependent on my device's orientation.
My question:
In ARCore is it possible to fix a device's coordinate system no matter what orientation the device is in? Or is there an algorithm that takes into account device orientation and allows static placement of Anchors?
EDIT:
I posted this same question on the ARCore-Sceneform GitHub and these are the answers I received:
At first, let's see what Google ARCore engineers say about World coordinates:
World Coordinate Space
As ARCore's understanding of the environment changes, it adjusts its model of the world to keep things consistent. When this happens, the numerical location (coordinates) of the camera and anchors can change significantly to maintain appropriate relative positions of the physical locations they represent.
These changes mean that every frame should be considered to be in a completely unique world coordinate space. The numerical coordinates of anchors and the camera should never be used outside the rendering frame during which they were retrieved. If a position needs to be considered beyond the scope of a single rendering frame, either an anchor should be created or a position relative to a nearby existing anchor should be used.
But!
I firmly believe that a smartphone with its camera and sensors begins its journey (in the ArSession) at the relative center of the coordinate space (it's relative for us; for ARCore it's absolute, of course):
// X, Y, Z coordinates must be counted from zero, mustn't they?
(x: 0.0, y: 0.0, z: 0.0)
Then, under certain circumstances, the world coordinate space might be changed.
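To follow that advice in code, a minimal sketch with the ARCore Java API (session, frame and hitResult are assumed to come from your existing AR setup) is to attach an Anchor and re-read its pose every frame instead of caching raw coordinates:

// e.g. from frame.hitTest(tapX, tapY).get(0)
Anchor anchor = hitResult.createAnchor();

// later, on every rendering frame:
if (anchor.getTrackingState() == TrackingState.TRACKING) {
    Pose pose = anchor.getPose();        // only valid for this frame
    float[] t = pose.getTranslation();   // use it now, don't store it across frames
}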

Measure sideways tilt of upright device

Some camera apps have a feature where they display a line on the screen which is always parallel to the horizon, no matter how the phone is tilted sideways. By "tilted sideways" I mean rotating the device around an axis that is perpendicular to the screen.
I have tried most of the "normal" rotation functions, such as combining the ROTATION_VECTOR sensor with getRotationMatrix() and getOrientation(), but none of the resulting axes seem to correspond to the one I'm looking for.
I also tried using only the accelerometer, normalizing all three axes and detecting how much gravity is in the X-axis. This works decently when the device is perfectly upright (i.e. not tilted forward/backward). But as soon as it's tilted forward or backward, the measured sideways tilt gets increasingly inaccurate, since gravity is now acting on two axes at the same time.
Any ideas on how to achieve this kind of sideways rotation detection in a way that works even if the phone is tilted/pitched slightly forward/backward?
The result of getRotationMatrix converts from the phone's local coordinate system to the world coordinate system. Its columns are therefore the principal axes of the phone, with the first being the phone's X-axis (the +ve horizontal axis when holding the phone in portrait mode), and the second being the Y.
To obtain the horizon's direction on the phone's screen, the line of intersection between the horizontal plane (world space) and the phone's plane must be found. First find the coordinates of the world Z-axis (pointing to the sky) in the phone's local basis, i.e. transpose(R) * [0, 0, 1]; its XY coordinates, given by R[2][0], R[2][1], give the vertical direction in screen space. The required horizon line direction is then R[2][1], -R[2][0].
When the phone is close to being horizontal, this vector becomes very small in magnitude and the horizon is no longer well-defined. Simply stop updating the horizon line below some threshold.
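A minimal sketch (Android, Java) of that recipe, assuming a listener registered for the ROTATION_VECTOR sensor; R is row-major, so R[2][0] and R[2][1] are R[6] and R[7]:

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;

    float[] R = new float[9];
    SensorManager.getRotationMatrixFromVector(R, event.values);

    // screen-space direction of "up" (world Z projected onto the screen plane)
    float upX = R[6];   // R[2][0]
    float upY = R[7];   // R[2][1]

    // the horizon line is perpendicular to that: (R[2][1], -R[2][0])
    float horizonX = upY;
    float horizonY = -upX;

    // near-horizontal phone: the projection collapses, so keep the previous line
    if (Math.hypot(horizonX, horizonY) < 0.1) return;

    float horizonAngle = (float) Math.atan2(horizonY, horizonX);
    // ... rotate the on-screen line by horizonAngle ...
}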

Can we Draw line in android canvas between two points given by gyroscope?

Is it possible to draw a line from point A (where the user touched first) to point B (where the user touched second) in Android, overlaid on the camera?
The user can touch the first point and then rotate the camera in another direction to tap the second point.
I am using the gyroscope, accelerometer and magnetometer (sensor fusion) and I get x, y, z coordinates on touch.
But can we draw a 3D image on a canvas wherever the user touches? Something similar to what the MagicPlan app is doing.
Thanks @chipopo, but the real concern is: is it possible to actually draw a line between two points given by the gyroscope sensor?
Short answer: no. The gyroscope is a rate sensor, not a position sensor. You need to do some math to get points.
Since you're on Android, I would recommend using the device orientation (azimuth/pitch/roll).
Once you have this, you need to decide on a radius that best fits your use case and also establish a reference orientation. Once you grab two orientations, it's up to you how to map spherical points to a 2D canvas.
One method I have used in the past is just plotting the delta pitch on Y and the delta heading on X; however, you may need to think about what roll means in the context of what your app is trying to do.
You could use OpenGL, but you probably want a drawing library of some sort.
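A rough sketch (Android, Java) of that delta-pitch/delta-heading mapping, assuming azimuth and pitch (in radians) come from SensorManager.getOrientation(), a reference orientation is stored on the first touch, and RADIUS_PX is an arbitrary scale you would tune:

static final float RADIUS_PX = 800f;

float refAzimuth, refPitch;   // stored when the user taps the first point
float centerX, centerY;       // center of the Canvas, in pixels

PointF orientationToCanvasPoint(float azimuth, float pitch) {
    // wrap the heading difference into (-pi, pi] so crossing north doesn't jump
    float dAzimuth = (float) Math.atan2(Math.sin(azimuth - refAzimuth),
                                        Math.cos(azimuth - refAzimuth));
    float dPitch = pitch - refPitch;
    float x = centerX + RADIUS_PX * dAzimuth;
    float y = centerY - RADIUS_PX * dPitch;   // screen Y grows downwards
    return new PointF(x, y);
}

// once both points are captured:
// canvas.drawLine(pointA.x, pointA.y, pointB.x, pointB.y, paint);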

Is it possible to measure distance to object with camera?

Is it possible to measure the distance to an object with the phone camera?
I mean, in my application I start the camera, point it at the object (let's say a house), then press a button, and it calculates the distance and shows it on screen.
If it's possible, where can I find some tutorial or information about it?
I accept the question has been answered adequately (with the obvious caveats of requiring level ground and possible accuracy problems) but for those who don't believe it can be done or that it needs a video camera, let me explain the low-level math needed to do it....
The picture above shows me standing outside my house. The horizontal (d) is the distance I want to measure and the vertical (h) is the height above the ground at which I'm holding the camera. In this case 'h' is a known value when I'm holding the Android camera at eye level (approx 67 inches or 1.7 metres). When I tilt the camera to aim it directly at the point where my house meets the ground, all the software needs to do is work out the angle (a) relative to vertical and it can calculate 'd' using...
d = h * tan a
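One possible sketch of that calculation (Android, Java), assuming the phone is held in portrait, roughly at eye level, and is only pitched (no roll); CAMERA_HEIGHT_METERS stands in for 'h':

static final double CAMERA_HEIGHT_METERS = 1.7;

double distanceToGroundPoint(float[] rotationVectorValues) {
    float[] R = new float[9];
    float[] orientation = new float[3];
    SensorManager.getRotationMatrixFromVector(R, rotationVectorValues);
    SensorManager.getOrientation(R, orientation);

    // orientation[1] is pitch: 0 when the phone lies flat (camera pointing straight down),
    // about -PI/2 when the phone is upright (camera aimed at the horizon)
    double angleFromVertical = Math.abs(orientation[1]);        // the angle 'a'
    return CAMERA_HEIGHT_METERS * Math.tan(angleFromVertical);  // d = h * tan a
}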
Well you should read how ithinkdiff.com "measures" the distance:
Uses the angle of the iPhone to estimate the distance to a point on the ground.
Hold the iPhone in front of you, align the point in the camera and get a direct
reading of the distance. The distance can then be used in the speed tool.
So basically it takes the height at which you hold the phone (eye level), then you point the camera at the point where the object touches the ground. The phone measures the inclination, and with simple trigonometry it calculates the distance.
This is of course not very accurate. It gets less accurate the further the object is. Also it assumes that the ground is level.
Nope. The camera can only give you image data, and an image alone doesn't give you enough information to determine depth. If you had multiple images with location information, or video, you could process them to triangulate the distance, but a single image alone is not enough to give you a distance.
You can use the technique our eyes use to get a sense of depth and distance.
1) Get 2 images of the same object from two different camera positions.
2) The pixel offset (disparity) of the object between the 2 images is inversely proportional to the distance between the camera and the object.
The implementation is available at https://github.com/agnelvishal/Distance-between-camera-and-object
Here is the research paper http://dsc.ijs.si/files/papers/S101%20Mrovlje.pdf
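A small sketch of that two-image idea, following the stereoscopic geometry used in the linked paper: D = B * x0 / (2 * tan(phi/2) * disparity), where B is how far the camera was moved sideways between the two shots, x0 is the image width in pixels, phi is the camera's horizontal field of view, and the disparity is the pixel shift of the object between the two images (all inputs are measured by you):

static double distanceFromStereoPair(double baselineMeters, double imageWidthPx,
                                     double horizontalFovRad, double disparityPx) {
    if (disparityPx <= 0) {
        throw new IllegalArgumentException("the object must shift between the two images");
    }
    return baselineMeters * imageWidthPx
            / (2.0 * Math.tan(horizontalFovRad / 2.0) * disparityPx);
}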
You have the angle in the phone's accelerometer. If you calculate the tangent of this angle and multiply it by the height of the camera lens, you get the distance.
I think this app uses the approach MisterSquonk mentioned (it's free). Watch the "Trigonometry" technique.
I think by using FastCV you can calculate the distance between the camera and the object. With this you don't need to know the angle or the position of the camera above ground level. Take a look at this question here.
One way to achieve this is using the DPI of your device. You can take a picture and calculate the height, but you'll need another object as a reference. The problem with this method could be the perspective between the objects.
I think it could be possible to do that using the phone camera. I know that modern phones use lenses to focus on an object. If it is possible to know their focal length and their position (displacement) when focusing on the chosen object, it's also possible to determine the distance.
No. Only with two cameras in stereo mode, like the Xbox 360 Kinect. It takes at least 3 points to triangulate distance.
