I'm trying to retrieve my current heading from an Android device using Delphi RAD Studio 10.1 Berlin.
The OrientationSensor exposes a True Heading property, but according to the Embarcadero knowledge base it is only supported on Windows.
So I think that, to do this, I need to combine the following variables into a single heading:
OrientationSensor1.Sensor.HeadingX
OrientationSensor1.Sensor.HeadingY
OrientationSensor1.Sensor.HeadingZ
As I only need the heading (and don't care about altitude), I believe I can disregard Z.
In return I need to retrieve the current heading, which should be in the range 0-360.
I used a formula I found online, which is:
angle = atan2(Y, X);
This seemed to help, but it was wildly inaccurate at some positions and negative at others.
Any help or advice would be appreciated.
Some details that may help:
It's a Multi-Device Application in Delphi.
It's only going on Android devices (and is also only being tested on them).
Thanks in advance.
Don't discard HeadingZ.
These three headings are not relative to the Earth's surface; they are relative to your device's orientation and tilt.
So in order to get the true heading you will have to take into account the heading for all three axes, and also the tilt information for all three axes.
You can read more about calculating the heading here: https://stackoverflow.com/a/1055051/3636228
Yes, the linked answer is for Objective-C, but the math behind it is the same in every programming language.
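To illustrate the idea, a tilt-compensated heading looks roughly like this (a sketch in Java syntax, but the math is the same in Delphi; the variable names and sign conventions are my assumptions and may need adjusting for your device's axes):

// mx, my, mz are the three heading (magnetic) components; pitch and roll
// describe the device's tilt, in radians.
static double tiltCompensatedHeading(double mx, double my, double mz,
                                     double pitch, double roll) {
    // Project the magnetic vector back onto the horizontal plane.
    double xh = mx * Math.cos(pitch) + mz * Math.sin(pitch);
    double yh = mx * Math.sin(roll) * Math.sin(pitch)
              + my * Math.cos(roll)
              - mz * Math.sin(roll) * Math.cos(pitch);

    // atan2 returns -180..180 degrees; shift the result into 0..360.
    double heading = Math.toDegrees(Math.atan2(yh, xh));
    return (heading + 360.0) % 360.0;
}

That also explains the negative values you were seeing: atan2 alone is signed, so the result has to be normalised into the 0-360 range.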
Related
I would like to know if there is a way to use different camera angles and zoom levels depending on the guidance context (e.g. highway = 40°, national road = 30°, departmental road = 20°, city = 10°), and whether this is integrated by default in the Mapbox SDK. Thanks.
I'm not sure I fully understand your question. But yes, you are able to assign a different zoom and pitch (angle) to your map instance.
When you say 'guidance', I'm interpreting that as scope or level. You're asking whether you can assign different zooms and pitches at different levels.
To update these simply call:
map.setZoom(zoom_value);
map.setPitch(pitch_value);
Always using the same zoom and pitch for a level can be tricky, though. Think about using the same zoom for the USA as you would for Sri Lanka, or compare city sizes like Reno, Nevada and Mexico City.
However, there is no built-in default that handles this for you.
You would probably be better off using fitBounds. However, you'll still need to adjust the pitch manually, because fitBounds doesn't touch it.
I am working on a project in which I have to calculate my device's height from the ground. I have searched all over the internet but could not find any solution.
Please, can anyone tell me what to do?
Take it with a grain of salt, a bit of humor and a sense of philosophy: replace the barometer with your smartphone.
http://naturelovesmath-en.blogspot.ca/2011/06/niels-bohr-barometer-question-myth.html
First it has to be clarified whether "height from ground" means altitude in the sense of "height above sea level", or how far the phone is away from the floor when you have it in your hands.
For the second case:
Like SonicWind states, you could do the trick using the camera.
It would require calibrating the camera and having a standard object.
Take a picture of the standard object, which has to be positioned on the ground, at a standard zoom.
Recognize the object's size (or select it in the picture) and calculate the distance to the object.
-> you have the distance to the ground.
The object might also be your shoes, etc. So if the application is meant for multiple users, you might allow them to enter their shoe sizes ;)
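To make the distance step concrete, it boils down to similar triangles once the camera is calibrated (a rough Java sketch; the parameter names are mine):

// Pinhole-camera distance estimate: calibration gives the focal length in
// pixels, and the standard object supplies the known real-world size.
static double distanceToObject(double focalLengthPx,
                               double realObjectHeightMeters,
                               double objectHeightInImagePx) {
    // Similar triangles: realSize / distance == imageSize / focalLength.
    return focalLengthPx * realObjectHeightMeters / objectHeightInImagePx;
}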
This is an odd one, but OK, I like a challenge. The only way to realistically do this is to attach a sonar sensor to the phone (easily done with an Arduino). Other than that, all you can do is write code that reads the accelerometers to guesstimate the distance (put the phone on the ground and lift it to the height you want). It appears to be impossible to do otherwise (maybe with some conceptual use of the camera).
This is my first post on this forum and I'm very new to programming. I want to build an application where I can see exactly where some GPS positions are on my phone's screen. I know a lot of applications, like junaio, mixare and others, but they only show the direction to the objects and are not very accurate (they don't aim to project them onto the exact position on screen) - so I want to build it myself. I'm programming on Android, but I think it would be the same on iPhone.
I followed the steps suggested by dabhaid:
There are three steps.
1) Determine your position and orientation using sensors.
2) Convert from GPS coordinate space to a planar coordinate space by determining the relative position and bearing of known GPS coordinates using e.g. great-circle distance and bearing. (Your device stays at the origin of the coordinate space with this scheme.)
3) Do a perspective projection http://en.wikipedia.org/wiki/3D_projection#Perspective_projection to figure out where on the plane that is your display (ok, your camera sensor) the objects should appear, so you can augment them.
Step 1: easy, I have the GPS position and all orientations from my mobile device (x, y, z). For further refinement, I can use some algorithm to smooth these values (average, low-pass filter, whatever).
Step 2: I don't know what exactly is meant by a planar coordinate space. I have some different approaches for converting my GPS coordinate space. One of them is ECEF (earth-centered), where 0,0,0 is the center of the earth. Somehow this doesn't look good to me, because every little change on ONE axis results in changes on the other two axes. So if I change the altitude, all three axes will change. I don't know if I can follow step 3 with this coordinate system.
Step 2 mentions using haversine - that would give me the distance to the point, but I don't get x,y,z from it. Do I have to calculate x,y using trigonometry (bearing (alpha) + distance (hypotenuse))?
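Something like this is what I have in mind for step 2 (an untested Java sketch; the helper name is mine):

// Put my own position at the origin and express a target GPS fix as
// metres east/north of me, using the haversine distance and the initial
// bearing between the two points.
static double[] gpsToLocalEastNorth(double myLat, double myLon,
                                    double targetLat, double targetLon) {
    final double R = 6371000.0; // mean Earth radius in metres

    double phi1 = Math.toRadians(myLat);
    double phi2 = Math.toRadians(targetLat);
    double dPhi = Math.toRadians(targetLat - myLat);
    double dLambda = Math.toRadians(targetLon - myLon);

    // Haversine great-circle distance.
    double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
             + Math.cos(phi1) * Math.cos(phi2)
               * Math.sin(dLambda / 2) * Math.sin(dLambda / 2);
    double distance = 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));

    // Initial bearing from my position to the target.
    double y = Math.sin(dLambda) * Math.cos(phi2);
    double x = Math.cos(phi1) * Math.sin(phi2)
             - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLambda);
    double bearing = Math.atan2(y, x);

    // Decompose into east (x) and north (y) components.
    return new double[] { distance * Math.sin(bearing),
                          distance * Math.cos(bearing) };
}

Is that the right idea?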
Step 3: This method looks really cool! If I have my coordinate space from step 2, I can calculate d_x, d_y, d_z using the formula on Wikipedia. But after this step I'm not finished yet, because I just have the coordinates, and to project them onto my screen I only need two coordinates.
The Wikipedia text continues by calculating b_x, b_y. It uses e_x, e_y, e_z, which is the viewer's position relative to the display surface -> how can I get these values from my mobile device (Android/iOS)? Another approach suggested by Wikipedia is calculating b_x, b_y with the formula that uses s_x, s_y, which is the screen size, and r_x, r_y, which is the recording surface size. Again, how can I get the recording surface size from my mobile device?
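My current understanding is that, once I know the camera's field of view and the screen size, step 3 reduces to something like this (an untested Java sketch; the names are mine, and I'm assuming the field of view can be read from the camera, e.g. Camera.Parameters#getHorizontalViewAngle on Android):

// dx, dy, dz: the point's coordinates relative to the camera, with dz
// pointing away from the camera along its optical axis (must be > 0,
// i.e. the point is in front of the camera).
static double[] projectToScreen(double dx, double dy, double dz,
                                double horizontalFovDegrees,
                                int screenWidthPx, int screenHeightPx) {
    // Focal length in pixels, derived from the horizontal field of view.
    double focalPx = (screenWidthPx / 2.0)
            / Math.tan(Math.toRadians(horizontalFovDegrees) / 2.0);

    // Standard pinhole projection, shifted so (0,0) is the top-left
    // corner of the screen and y grows downwards.
    double screenX = screenWidthPx / 2.0 + focalPx * dx / dz;
    double screenY = screenHeightPx / 2.0 - focalPx * dy / dz;
    return new double[] { screenX, screenY };
}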
I can't find anything about it on the internet. It seems that nobody on Android/iOS has ever implemented a perspective projection before...
Thank you very much for all of your answers! Also, links to useful sites would help!
I think you can find many answers in this other thread: Transform GPS-Points to Screen-Points with Perspective Projection in Android.
Hope it helped, bye!
Here's a simple solution I wrote for this issue.
A: Mapping GPS locations on the camera preview in Android
Hope it helped. :D
I need to get the area of a known object inside a scene in order to derive the distance from it. The problem is rectifying the area so that it is independent of the viewing angle.
I'm using OpenCV (on Android) with some Java code that is equivalent to this:
http://docs.opencv.org/doc/tutorials/features2d/feature_homography/feature_homography.html#feature-homography
In other words: how do I get the area of the object as it would be observed perpendicularly from that distance, given the H matrix?
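What I have in mind, roughly, is un-warping the object's outline in the scene through the inverse of H and measuring the area in the reference view (an untested sketch with OpenCV's Java bindings; the names are mine):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint2f;
import org.opencv.imgproc.Imgproc;

class RectifiedArea {
    // H maps the reference (fronto-parallel) object image into the scene,
    // as in the feature_homography tutorial, so its inverse maps the
    // scene outline back to the reference view.
    static double rectifiedArea(MatOfPoint2f sceneOutline, Mat H) {
        MatOfPoint2f rectified = new MatOfPoint2f();
        Core.perspectiveTransform(sceneOutline, rectified, H.inv());
        return Imgproc.contourArea(rectified);
    }
}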
Thank you in advance, and sorry for my poor English... :)
You can call cvCalibrateCamera, but I am not sure if it works with only one image. The algorithm it is based on can cope with the single-image case; see section 3.1, where it says "if n=1...". So in a pinch you can re-implement it.
I've been playing around with the Android accelerometer of late, using the Android SDK and the Adobe AIR for Android SDK on my Motorola Droid. What I've noticed is that the accelerometer works just fine, but I was wondering if it is possible to compensate in some fashion so that I don't have to use the device in a horizontal position. In other words, I want to use the accelerometer to control my visual display, but initialize it (or modify it in some way) so that I don't have to hold it perfectly flat (not much fun having to lean over the phone).
Does anyone know how I can hold the device comfortably in my hand, say at 45 degrees, and still use the accelerometer to provide forward/backward readings? Is this possible? Are any examples of this available?
You'll need some simple matrix multiplication math for that. "Calibrate" the rotation by taking the current matrix when you start the app and invert it, then multiply all subsequent matrices with it - that will give you the delta to the starting position.
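In Android terms the idea looks roughly like this (an untested sketch; the class and field names are mine):

import android.hardware.SensorManager;
import android.opengl.Matrix;

class RelativeOrientation {
    private final float[] initialInverse = new float[16];

    // Call once, while the user holds the phone in the desired neutral
    // position (e.g. tilted 45 degrees in the hand).
    void calibrate(float[] gravity, float[] geomagnetic) {
        float[] initial = new float[16];
        float[] inclination = new float[16];
        SensorManager.getRotationMatrix(initial, inclination, gravity, geomagnetic);
        Matrix.invertM(initialInverse, 0, initial, 0);
    }

    // Call on every sensor update: returns azimuth/pitch/roll (in radians)
    // relative to the calibrated starting position.
    float[] relativeOrientation(float[] gravity, float[] geomagnetic) {
        float[] current = new float[16];
        float[] inclination = new float[16];
        float[] delta = new float[16];
        SensorManager.getRotationMatrix(current, inclination, gravity, geomagnetic);

        // delta = current * inverse(initial); the multiplication order may
        // need swapping depending on the convention you want.
        Matrix.multiplyMM(delta, 0, current, 0, initialInverse, 0);

        float[] values = new float[3];
        SensorManager.getOrientation(delta, values);
        return values;
    }
}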
I wrote an application long, long ago which dealt with relative rotations. I've forgotten exactly what the code did, but from what I can see, it seems to go like this:
1) I get the initial rotation matrix using SensorManager.getRotationMatrix(R, I, gravity.clone(), geomagnetic.clone()); (gravity and geomagnetic are the respective acceleration and geomagnetic readings. Dunno why I use clones, but there must be some reason.)
2) Now, at any stage, get the current rotation matrix and call it R.
3) Calculate the inverse of the initial matrix and call it "initialInverse".
4) Multiply R with initialInverse (or the other way round, you'll have to figure it out).
5) Get your final orientation using SensorManager.getOrientation(delta, values).
I'm sorry, but I've totally forgotten what the above code does. I think I remember reading the words "Euler transform" somewhere when I wrote this app, so that's what it might be doing. Unfortunately I cannot give you the complete code, since I'll probably release this app on the market. However, if you need some more information, please let me know - I'll look into the code and get back to you.
I am working on a project of a similar nature where the accelerometer's function is not restricted by the device's position. My way of handling it is very simple: initialize the accelerometer with the current reading as the default. In other words, have a button that you press once you have the phone in the proper position; upon pressing the button, the current accelerometer readings (measures of G) become your reference (zero values), and you react to changes when you go above or below those readings... Hope this helps anyone... cheers