MapBox Navigation camera angles depending on the guidance - android

I would like to know whether there is a way to use different camera angles and zoom levels depending on the type of road being driven (e.g. highway = 40°, national road = 30°, departmental road = 20°, city = 10°), and whether this is integrated by default in the MapBox SDK. Thanks.

I'm not sure I fully understand your question, but yes, you can assign different zoom and pitch (angle) values to your map instance.
When you say 'guidance', I'm interpreting that as scope or road level: you're asking whether you can assign different zooms and pitches at different levels.
To update these, simply call:
map.setZoom(zoom_value);
map.setPitch(pitch_value);
Always using the same zoom and pitch for a given level can be tricky, though. Think about using the same zoom for the USA as you would for Sri Lanka, or compare city sizes like Reno, Nevada and Mexico City.
There is, however, no built-in default that handles this for you.
You would probably be better suited to use fitBounds. However, you will still need to adjust the pitch manually, because fitBounds does not change it.
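If you want to drive this from the current road class yourself, here is a minimal sketch assuming the pre-v10 Mapbox Maps SDK for Android, where zoom and pitch are set through a CameraPosition (the road-class enum and the zoom/tilt values below are placeholders of my own, not something the SDK provides):

import com.mapbox.mapboxsdk.camera.CameraPosition;
import com.mapbox.mapboxsdk.camera.CameraUpdateFactory;
import com.mapbox.mapboxsdk.maps.MapboxMap;

public class GuidanceCamera {

    // Hypothetical road classes; the values per class are placeholders to tune.
    public enum RoadClass { HIGHWAY, NATIONAL, DEPARTMENTAL, CITY }

    public static void applyCameraFor(MapboxMap mapboxMap, RoadClass roadClass) {
        double zoom;
        double tilt; // "pitch" is called tilt in the Android camera API
        switch (roadClass) {
            case HIGHWAY:      zoom = 13; tilt = 40; break;
            case NATIONAL:     zoom = 14; tilt = 30; break;
            case DEPARTMENTAL: zoom = 15; tilt = 20; break;
            default:           zoom = 16; tilt = 10; break; // CITY
        }
        CameraPosition position = new CameraPosition.Builder()
                .zoom(zoom)
                .tilt(tilt)
                .build();
        // Animate over one second so the transition is not jarring.
        mapboxMap.animateCamera(CameraUpdateFactory.newCameraPosition(position), 1000);
    }
}

You would call applyCameraFor() from wherever your navigation logic detects a change of road class.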


Getting Current Heading Delphi Multi-Device

I'm trying to retrieve the current heading from an Android device using Delphi RAD Studio 10.1 Berlin.
The OrientationSensor exposes a True Heading value; however, according to the Embarcadero knowledge base, this is only enabled on Windows.
So, to get the heading, I think I need to combine the following variables into one value:
OrientationSensor1.Sensor.HeadingX
OrientationSensor1.Sensor.HeadingY
OrientationSensor1.Sensor.HeadingZ
As I only need the heading (and don't care about altitude), I believe I can disregard Z.
In return, I need the current heading, which should be in the range 0-360.
I used a formula I found online:
angle = atan2(Y, X);
This seemed to help, but it was wildly inaccurate at some positions and negative at others.
Any help or advice would be appreciated.
Some details that may help:
It's a Multi-Device Application in Delphi.
It's only going on Android devices (and is also only being tested on them).
Thanks in advance.
Don't discard HeadingZ.
These three heading values are not relative to the world surface; they are relative to your device's orientation and tilt.
So in order to get the true heading, you have to take into account the heading on all three axes, and also the tilt information on all three axes.
You can read more information about calculating heading here: https://stackoverflow.com/a/1055051/3636228
Yes, the linked answer is for Objective-C, but the math behind it is the same in every programming language.
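For reference, here is a minimal sketch of the same tilt-compensated calculation using the standard Android sensor APIs in Java; the math carries over to the Delphi sensor values, and the class and field names here are just illustrative:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorManager;

// Combines accelerometer (tilt) and magnetometer (heading) readings into a
// tilt-compensated azimuth in the range 0-360 degrees.
public class HeadingHelper {
    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];

    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
    }

    public float currentHeading() {
        float[] rotation = new float[9];
        float[] orientation = new float[3];
        if (SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(rotation, orientation);
            float azimuth = (float) Math.toDegrees(orientation[0]); // -180..180
            return (azimuth + 360f) % 360f;                         // 0..360
        }
        return 0f; // not enough sensor data yet
    }
}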

How to Calculate Height of Android Phone from ground

I am working on a project in which I have to calculate my device's height from the ground. I have searched all over the internet but could not find any solution.
Can anyone tell me how to approach this?
Take it with a grain of salt, a bit of humor and a sense of philosophy: replace the barometer in this story with your smartphone.
http://naturelovesmath-en.blogspot.ca/2011/06/niels-bohr-barometer-question-myth.html
First, it has to be clarified whether "height from ground" means altitude in the sense of "height above sea level", or how far the phone is from the floor when you hold it in your hands.
For the second case:
Like SonicWind states, you could do the trick using the camera.
It would require calibrating the camera and having a standard reference object.
Take a picture of the standard object, which has to be positioned on the ground, at a standard zoom level.
Recognize the object's size - or let the user select it in the picture - and calculate the distance to the object.
-> you have the distance to the ground.
The object might also be your shoes, etc. So if the application is intended for multiple users, you might let them enter their shoe size ;)
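As a very rough sketch of that idea, assuming a pinhole camera model, a known real-world object height, and a focal length in pixels obtained from calibration (all parameter names are illustrative):

// Pinhole-camera distance estimate: all inputs are assumed to come from a
// prior calibration step and from measuring the object in the photo.
public static double distanceToObjectMeters(double realHeightMeters,
                                             double focalLengthPixels,
                                             double objectHeightPixels) {
    // distance = (real size * focal length in pixels) / size in the image (pixels)
    return realHeightMeters * focalLengthPixels / objectHeightPixels;
}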
This is an odd one, but OK, I like a challenge. The only way to realistically do this is to attach a sonar sensor to the phone (easily done with an Arduino). Other than that, all you can do is read the accelerometers and guesstimate the distance (put the phone on the ground and lift it to the height you want). It appears to be impossible to do otherwise (maybe some creative use of the camera...).

Transform Latitude,Longitude-Position on screen in augmented reality app

This is my first post on this forum and I'm very new to programming. I want to build an application where I can see exactly where some GPS positions are on my phone's screen. I know a lot of applications, like junaio, mixare and others, but they only show the direction to the objects and are not very accurate (they don't aim to project objects at their exact position on screen) - so I want to build it myself. I program on Android, but I think it would be the same on iPhone.
I followed the steps suggested by dabhaid:
There are three steps.
1) Determine your position and orientation using sensors.
2) Convert from GPS coordinate space to a planar coordinate space by determining the relative position and bearing of known GPS coordinates using e.g great circle distance and bearing. (your devices stays at the origin of the coordinate space with this scheme)
3) Do a perspective projection http://en.wikipedia.org/wiki/3D_projection#Perspective_projection to figure out where on the plane that is your display (ok, your camera sensor) the objects should appear, so you can augment them.
Step 1: easy, I have the GPS position and all orientations from my mobile device (x, y, z). For further refinement, I can use some algorithm to smooth these values (average, low-pass filter, whatever).
Step 2: I don't know what exactly is meant by a planar coordinate space. I have tried different approaches to convert my GPS coordinates. One of them is ECEF (earth-centered), where (0, 0, 0) is the center of the earth. Somehow this doesn't look right to me, because every little change on ONE axis results in changes on the other two axes. So if I change the altitude, all three axes change. I don't know if I can follow step 3 with this coordinate system.
Step 2 also mentions using the haversine formula - this would give me the distance to the point, but I don't get x, y, z from it. Do I have to calculate x and y using trigonometry (bearing (alpha) + distance (hypotenuse))?
Step 3: This method looks really cool! If I have my coordinate space from step 2, I can calculate d_x, d_y, d_z using the formula on Wikipedia. But after this step I'm not finished yet, because I only have camera-space coordinates, and for projecting onto my screen I need just two coordinates. The Wikipedia text continues by calculating b_x, b_y from e_x, e_y, e_z, which is the viewer's position relative to the display surface -> how can I get these values from my mobile device (Android/iOS)? Another approach suggested on Wikipedia is to calculate b_x, b_y using s_x, s_y, which is the screen size, and r_x, r_y, which is the recording surface size. Again, how can I get the recording surface size from my mobile device?
I can't find anything about this on the internet. It seems that nobody on Android/iOS has ever implemented a perspective projection before...
Thank you very much for all of your answers! Links to useful sites would also help!
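Regarding step 2, here is a minimal sketch of one common interpretation of "planar coordinate space": treat your own position as the origin and convert each target point to local east/north offsets computed from the great-circle distance and the initial bearing (plain Java; the helper name is illustrative):

// Converts a target lat/lon into east/north offsets (meters) relative to the
// observer, using the haversine distance and the initial bearing.
public static double[] toLocalPlane(double obsLat, double obsLon,
                                    double targetLat, double targetLon) {
    double R = 6371000.0; // mean earth radius in meters
    double phi1 = Math.toRadians(obsLat), phi2 = Math.toRadians(targetLat);
    double dPhi = Math.toRadians(targetLat - obsLat);
    double dLambda = Math.toRadians(targetLon - obsLon);

    // haversine distance
    double a = Math.sin(dPhi / 2) * Math.sin(dPhi / 2)
             + Math.cos(phi1) * Math.cos(phi2)
             * Math.sin(dLambda / 2) * Math.sin(dLambda / 2);
    double distance = 2 * R * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));

    // initial bearing from observer to target, clockwise from north
    double y = Math.sin(dLambda) * Math.cos(phi2);
    double x = Math.cos(phi1) * Math.sin(phi2)
             - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLambda);
    double bearing = Math.atan2(y, x);

    double east = distance * Math.sin(bearing);
    double north = distance * Math.cos(bearing);
    return new double[] { east, north };
}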
I think you can find many answers in this other thread: Transform GPS-Points to Screen-Points with Perspective Projection in Android.
Hope it helped, bye!
Here's a simple solution I wrote for this issue.
A: Mapping GPS locations on the camera preview in Android
Hope it helped. :D
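For step 3 itself, once a point is expressed in camera coordinates (d_x, d_y, d_z), the projection onto the screen reduces to a few lines. This sketch assumes the camera looks along the positive z axis and that the focal length in pixels is derived from the camera's horizontal field of view; it is an illustration of the Wikipedia formula, not code from the linked answers:

// Projects a point in camera coordinates (x right, y up, z forward, meters)
// onto screen pixels. focalPx ~ (screenWidthPx / 2) / tan(horizontalFov / 2).
public static float[] projectToScreen(double dx, double dy, double dz,
                                      double focalPx,
                                      int screenWidthPx, int screenHeightPx) {
    if (dz <= 0) {
        return null; // point is behind the camera, nothing to draw
    }
    double screenX = screenWidthPx / 2.0 + focalPx * dx / dz;
    double screenY = screenHeightPx / 2.0 - focalPx * dy / dz; // screen y grows downward
    return new float[] { (float) screenX, (float) screenY };
}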

Android: given a current location and the lat/long of places around me, how to decide which places are visible in the camera?

I am creating an AR app for Android which writes the names of places/buildings/etc. over the camera view when I point the live camera at them. I get my current location in lat and long, and I am also able to get a list of places (with their lat/long) within a certain radius of my current location.
However, the most confusing part to implement is showing only the places that are visible in the camera at that moment (and hiding the rest). One idea was to compute the device's azimuth, then compute the bearing from my current location to each of the places within the set radius, get the camera's horizontal view angle using getHorizontalViewAngle(), and with all these parameters check which place bearings fall into the interval [azimuth - getHorizontalViewAngle()/2 ; azimuth + getHorizontalViewAngle()/2].
However, I don't think this is a very efficient way. Can anyone suggest a solution, or maybe someone had a similar problem and found a good approach? If my problem is difficult to understand, let me know and I will try to explain it in more detail.
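For reference, a minimal sketch of the bearing/field-of-view filter described above (plain Java; the device azimuth and the place list are assumed to come from elsewhere in the app):

import android.location.Location;

// Returns true if the place's bearing from the observer lies within the
// camera's horizontal field of view, handling wrap-around at 0/360 degrees.
public static boolean isInView(Location me, Location place,
                               float deviceAzimuthDeg, float horizontalFovDeg) {
    float bearing = (me.bearingTo(place) + 360f) % 360f; // 0..360, clockwise from north
    float azimuth = (deviceAzimuthDeg + 360f) % 360f;
    float diff = Math.abs(bearing - azimuth);
    if (diff > 180f) {
        diff = 360f - diff; // take the shortest angular difference
    }
    return diff <= horizontalFovDeg / 2f;
}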
You are doing the right thing, but in our project we found it better (performance-wise) to use the rotation matrix instead of the azimuth. You can take a look at the source code of the mixare augmented reality engine. It's on GitHub: https://github.com/mixare/mixare
The core logic is in the MixView class. The main idea is to convert everything to vectors and project them onto a "virtual" sphere that surrounds the phone.
HTH,
Daniele

Android Photography App Double Exposure

I would like to create an Android app that combines two photographs to create something similar to what you would see in a double-exposure photograph. Can you give me any ideas on how to do this?
To get a true double exposure, all you should need to do is add together the R/G/B values of each pixel, with an upper limit of 255 for each component (for 24 bpp at least). If the result is too bright, you can always scale it down afterward.
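A minimal sketch of that per-pixel addition on Android, assuming both bitmaps have the same dimensions and use the ARGB_8888 config:

import android.graphics.Bitmap;
import android.graphics.Color;

// Adds the two images pixel by pixel, clamping each channel at 255.
public static Bitmap doubleExpose(Bitmap first, Bitmap second) {
    int width = first.getWidth();
    int height = first.getHeight();
    Bitmap result = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int p1 = first.getPixel(x, y);
            int p2 = second.getPixel(x, y);
            int r = Math.min(255, Color.red(p1) + Color.red(p2));
            int g = Math.min(255, Color.green(p1) + Color.green(p2));
            int b = Math.min(255, Color.blue(p1) + Color.blue(p2));
            result.setPixel(x, y, Color.rgb(r, g, b));
        }
    }
    return result;
}

getPixel/setPixel keeps the sketch short; for real images it is much faster to pull both bitmaps into int arrays with getPixels() and write the result back with setPixels().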
