Mapbox Navigation via Customised Routes - Android

Mapbox provides an awesome Navigation SDK for Android. What I have been trying to do is create my own routes, representing each point as a Feature in a GeoJSON file, and then pass them to the Map Matching module to get directions that I can then pass to the Navigation engine.
My solution consists of two main parts. The first one involves iterating through the points I want the navigation to go through, adding them as input to the .coordinates element of MapboxMapMatching.builder(), and subsequently converting the result with .toDirectionRoute(), per the Mapbox instructions and example here: https://www.mapbox.com/android-docs/java/examples/use-map-matching/
private void getWaypointRoute(List<Point> features) {
    originPosition = features.get(0);
    destinationPosition = features.get(features.size() - 1);

    MapboxMapMatching.builder()
            .accessToken(Mapbox.getAccessToken())
            .coordinates(features)
            .steps(true) // Setting this will determine whether to return steps and turn-by-turn instructions.
            .voiceInstructions(true)
            .bannerInstructions(true)
            .profile(DirectionsCriteria.PROFILE_DRIVING)
            .build().enqueueCall(new Callback<MapMatchingResponse>() {
                @Override
                public void onResponse(Call<MapMatchingResponse> call, Response<MapMatchingResponse> response) {
                    if (response.body() == null) {
                        Log.e(TAG, "Map matching has failed.");
                        return;
                    }
                    if (response.isSuccessful()) {
                        // Convert the best matching into a DirectionsRoute for the Navigation SDK.
                        currentRoute = response.body().matchings().get(0).toDirectionRoute();
                    }
                }

                @Override
                public void onFailure(Call<MapMatchingResponse> call, Throwable throwable) {
                    Log.e(TAG, "Map matching request failed: " + throwable.getMessage());
                }
            });
}
The second part involves simply passing 'currentRoute' to the NavigationLauncher, as shown below:
NavigationLauncherOptions options = NavigationLauncherOptions.builder()
        .origin(origin)
        .destination(destination)
        .directionsRoute(currentRoute)
        .shouldSimulateRoute(simulateRoute)
        .enableOffRouteDetection(false)
        .build();

// Call this method with Context from within an Activity
NavigationLauncher.startNavigation(MainActivity.this, options);
An example of the route can be seen here: Android Simulator Snapshot with Route. Each point along the route is an intersection and corresponds to a feature in my GeoJSON file.

The problem arises when I launch the navigation. Every time, either in the simulator or on a real device, each point is interpreted as a destination, so the voice guidance announces 'You have reached your first (second, third, etc.) destination'. I find this annoying, as I would like a single route with one destination and that's it. I only want these points so that I have my own custom path, instead of the shortest path typically returned by routing applications. I tried to avoid the problem by turning voiceInstructions off, but then the system goes bananas and the navigation screen jumps to lat/lng (0, 0), which is pretty much somewhere west of Africa.

Any help on how I could resolve this would be greatly appreciated, and I would be happy to buy a beer or two for the person who provides the right answer. I have reached out to Mapbox Support as well, but we have not found an answer to the problem, so I asked them to escalate it internally to their engineering team, as I believe that, although the problem I am solving is not uncommon, it is still not very well tested by developers. Cheers!

So here I am: after the kind support of Mapbox Support and Rafa Gutierrez, I can now answer this post myself.
The problem arises because MapboxMapMatching automatically treats every entry in .coordinates as a waypoint. If instead you explicitly set the waypoints to only two, origin and destination, the system is able to process the customised input route without treating each input coordinate as a destination. The code example below hopefully clarifies this:
MapboxMapMatching.builder()
        .accessToken(Mapbox.getAccessToken())
        .coordinates(lineStringRep.coordinates())
        .waypoints(OD)
        .steps(true)
        .voiceInstructions(true)
        .bannerInstructions(true)
        .profile(DirectionsCriteria.PROFILE_DRIVING)
        .build().enqueueCall(new Callback<MapMatchingResponse>()
where OD is an array of integers storing the first (origin) and last (destination) index of your coordinates:
OD[0] = 0;
OD[1] = features.size() - 1;
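Pulling that together, a minimal sketch of the declaration (assuming the builder's waypoints() overload accepts an Integer array of indices, as in the Mapbox Java SDK version used above):
// Only the origin and destination indices are marked as waypoints, so the
// intermediate coordinates shape the route without being announced as destinations.
Integer[] OD = new Integer[]{0, features.size() - 1};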

Related

Android Mapbox: how to capture marker clicks when using circle clustering layers

My current Android application employs the excellent Mapbox SDK:
implementation 'com.mapbox.mapboxsdk:mapbox-android-sdk:8.0.0'
implementation 'com.mapbox.mapboxsdk:mapbox-android-plugin-annotation-v7:0.6.0'
implementation 'com.mapbox.mapboxsdk:mapbox-android-plugin-localization-v7:0.9.0'
My application displays approx 50,000 markers and I am using CircleLayer clustering.
The application works as required/expected, apart from the fact that I cannot see how to detect when my user clicks on any of the low-level markers.
All the "Marker" related mapboxMap methods are all deprecated
and direct the developer to employ
use <a href="https://github.com/mapbox/mapbox-plugins-android/tree/master/plugin-annotation">
* Mapbox Annotation Plugin
However, I cannot see how to use the Annotation Plugin to detect clicks on my low-level markers.
What am I missing?
To detect any click on your CircleLayer you first need to implement the onMapClick or onMapLongClick methods. Then, on every detected click, you need to query your source layer and see if there are any features near that location.
If so, then you can get the N nearest features and handle their behaviour.
It should look like this:
@Override
public boolean onMapClick(@NonNull LatLng point) {
    // Get the clicked point coordinates
    PointF screenPoint = mapboxMap.getProjection().toScreenLocation(point);
    // Query the source layer in that location
    List<Feature> features = mapboxMap.queryRenderedFeatures(screenPoint, "MY_SOURCE_LAYER_ID");
    if (!features.isEmpty()) {
        // Get the first feature in the list
        Feature feature = features.get(0);
        // do stuff...
    }
    return true;
}
This is a very basic way of handling clicks on your layer's data. You can find the example I have slightly modified here.
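One step the snippet above assumes is that the map click listener is actually registered; a minimal sketch (addOnMapClickListener is the standard Maps SDK call, and "this" is an activity implementing MapboxMap.OnMapClickListener):
// Register for map clicks once the map (and its style) is ready.
mapboxMap.addOnMapClickListener(this);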

RoadElement.getPermanentDirectedLinkId() always returns 0

For some reason, this code always returns 0, no matter where I am:
public Long getClosestLinkID() {
    GeoCoordinate cur = HereMapsManager.instance.getPositionAnchor(); // returns my current position
    Long closest = -1L;
    RoadElement closest_elem = RoadElement.getRoadElement(cur, "fre");
    if (closest_elem != null) {
        closest = closest_elem.getPermanentDirectedLinkId();
    }
    return closest;
}
It finds a valid RoadElement, but calling getPermanentDirectedLinkId() (or getPermanentLinkId()) constantly returns 0.
Now, the documentation says:
Returns:
Permanent Link ID with direction of this element or 0 if not available.
So I tried random coordinates on roads all over France, and it keeps returning 0. I'm lost here.
getPermanentDirectedLinkId and getPermanentLinkId are unavailable when the public transport mode (RouteOptions.TransportMode#PUBLIC_TRANSPORT) is used. For all other transport modes, they are available only for routes calculated with online connectivity, so you should set your connectivity explicitly to ONLINE (setConnectivity(Connectivity.ONLINE)).
Also, check if you are in one of the below two modes:
Tracking - NavigationManager.startTracking()
Navigation - NavigationManager.startNavigation()
This is required in order to map-match your location to a route.
You have to explicitly download and use offline maps as well to get this information.
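As a rough illustration of those two requirements (online route calculation plus an active tracking/navigation session), here is a hedged sketch; the class and method names follow the HERE Premium SDK 3.x (CoreRouter, NavigationManager), but routePlan and the error handling are placeholders, not code from this answer:
// Force online routing so permanent link IDs can be populated.
CoreRouter router = new CoreRouter();
router.setConnectivity(CoreRouter.Connectivity.ONLINE);
router.calculateRoute(routePlan, new CoreRouter.Listener() {
    @Override
    public void onCalculateRouteFinished(List<RouteResult> results, RoutingError error) {
        if (error == RoutingError.NONE && !results.isEmpty()) {
            // Start navigation (or NavigationManager.startTracking()) so that
            // positions are map-matched onto the calculated route.
            NavigationManager.getInstance().startNavigation(results.get(0).getRoute());
        }
    }

    @Override
    public void onProgress(int percentage) {
    }
});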
Edit: adding more information based on the comment below. You can check the classes and methods supported for your SDK by looking at the pages below:
Starter SDK: {SDK-Download-location}/HERE_Android_SDK_Starter_v3.8_65/HERE-sdk/libs/docs/mapsdoc/index.html
Premium SDK: {SDK-Download-location}/HERE_Android_SDK_Premium_v3.8.0.104/sdk/HERE-sdk/libs/docs/mapsdoc-hybridplus/index.html

Generate and export point cloud from Project Tango

After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports this to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's github.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values (the points). I then save these frames in a PCLManager in the form of Vector3s. After I'm done scanning my room, I simply write all the Vector3s from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine: no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what PoseData actually does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then express those coordinates with respect to (wrt) the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example I linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // Camera-to-IMU transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU-to-device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU-to-depth transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud wrt your base frame of reference.
If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.
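One thing normalize() above does not show is where cameraPose comes from. Here is a hedged sketch of feeding it from the depth callback; getPoseAtTime(), the frame pair constructor and xyzIj.timestamp are standard Tango Java API, while pointCloud is assumed to be the List<Vector3> from the question and mExtrinsics the field set up earlier (use COORDINATE_FRAME_AREA_DESCRIPTION instead of START_OF_SERVICE if you enable Area Learning as described below):
@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    // Pose of the device at the exact timestamp of this depth frame.
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE);
    TangoPoseData devicePose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
    // Accumulate the normalized points; superimposed frames build up the room.
    pointCloud.addAll(normalize(xyzIj, devicePose, mExtrinsics));
}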
There is another way to do this with rotation matrices, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3,000 points), so it is not suitable for real-time 3D reconstruction.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C via the NDK and JNI. The library is well documented, but it is very painful to set up the environment and start using JNI (I'm stuck there at the moment, in fact).
Drifting
There is still a problem when I turn around with the device: the point cloud seems to spread out a lot.
I guess you are experiencing some drift.
Drift happens when you use Motion Tracking alone: it consists of lots of very small errors in estimating your pose that together cause a big error in your pose relative to the world. For instance, if you take your Tango device, walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet (or whatever you want), you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers lots of points and gives you a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. In this way you tell your Tango to estimate its pose not wrt where it was when you launched the app, but wrt some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
// Static initializer, so the frame pair is added exactly once.
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use FRAME_PAIRS as usual.
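For instance, to register it (a sketch; tangoUpdateListener is whatever OnTangoUpdateListener you already use for your depth callbacks):
// Register the area-description-based frame pair with the Tango service.
mTango.connectListener(FRAME_PAIRS, tangoUpdateListener);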
Then you have to modify your TangoConfig to tell Tango to use Area Learning, using the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
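The config is then applied when connecting (a one-line sketch using the standard Tango.connect call):
tango.connect(config);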
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show you how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key no longer exists (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.

SKPOITrackerManager not working as expected

I am using SKPOITrackerManager to track self-defined trackable POIs in navigation mode. The ArrayList of SKTrackablePOI objects has many elements placed near my route, but only one of them is tracked by onReceivedPOIs(): this method only ever returns a one-element list, and it is called 5 to 10 times for exactly one POI. I am sorry that I cannot post my complete code here due to a project agreement, but here are my settings in an implementation of the SKPOITrackerListener interface:
public void startPOITracking() {
    poiTrackerManager = new SKPOITrackerManager(this);
    SKTrackablePOIRule skTrackablePOIRule = new SKTrackablePOIRule();
    skTrackablePOIRule.setAerialDistance(15000);
    skTrackablePOIRule.setRouteDistance(15000);
    skTrackablePOIRule.setNumberOfTurns(15000);
    skTrackablePOIRule.setMaxGPSAccuracy(15000);
    skTrackablePOIRule.setEliminateIfUTurn(false);
    skTrackablePOIRule.setMinSpeedIgnoreDistanceAfterTurn(12000);
    skTrackablePOIRule.setMaxDistanceAfterTurn(150000);
    poiTrackerManager.startPOITrackerWithRadius(100, 0.5);
    poiTrackerManager.setRuleForPOIType(SKTrackablePOIType.INVALID, skTrackablePOIRule);
    poiTrackerManager.addWarningRulesforPoiType(SKTrackablePOIType.INVALID);
}
I have set the limits to very high values within SKTrackablePOIRule and still get only one POI. I can even comment out the line with poiTrackerManager.setRuleForPOIType(SKTrackablePOIType.INVALID, skTrackablePOIRule); and I still receive only a single POI. Maybe someone can help me understand my problem.
Here is something I used in the past:
poiTrackingManager = new SKPOITrackerManager(this);
SKTrackablePOIRule rule = new SKTrackablePOIRule();
rule.setAerialDistance(5000); // this is our main constraint: all POIs within 5000 m aerial distance should be detected
rule.setNumberOfTurns(100); // this has to be increased, otherwise some points will be disregarded
rule.setRouteDistance(10000); // this has to be increased, as the real road route will be longer than the aerial distance
rule.setMinSpeedIgnoreDistanceAfterTurn(20); // decrease this to evaluate all candidates
rule.setMaxDistanceAfterTurn(10000); // increase this to make sure we don't exclude any candidates
rule.setEliminateIfUTurn(false); // setting this to true (the default) excludes points that require a U-turn to reach
rule.setPlayAudioWarning(false);
Note: I'm not certain what the max/min values for these parameters are, as I've seen some issues when they are too high (they do affect the routing algorithm, more precisely how the road graph is explored, which could explain why it might malfunction at high values). I would say you should start with conservative values and then gradually increase them.
For startPOITrackerWithRadius I would use different values: if you set the radius to 100 (meters), this greatly reduces the number of POIs the SDK is able to analyze (even if the rules are good, POIs might not be analyzed because they don't fall within the "radius", i.e. aerial distance, around your current position):
poiTrackingManager.startPOITrackerWithRadius(1500, 0.5);
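Putting the relaxed rule and the larger radius together (a sketch assembled purely from the snippets above; SKTrackablePOIType.INVALID is the type used in the question's code):
poiTrackingManager = new SKPOITrackerManager(this);
// Register the relaxed rule for the POI type actually being tracked.
poiTrackingManager.setRuleForPOIType(SKTrackablePOIType.INVALID, rule);
poiTrackingManager.addWarningRulesforPoiType(SKTrackablePOIType.INVALID);
// A wider aerial radius so nearby POIs are not filtered out up front.
poiTrackingManager.startPOITrackerWithRadius(1500, 0.5);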
Also see http://sdkblog.skobbler.com/detecting-tracking-pois-in-your-way/ for more insight into how the POI tracker works.

Google Maps API for Android - detecting buildings/obstacles between me and location

Hypothetical question. I'm building an augmented reality app using the Google Maps API for Android. I'm wondering if there's any data I can use to determine whether a building lies between me and a specified location. I ask this because, when sufficiently zoomed in, there is clearly 3D data on the shape of buildings included on the map. I was wondering if there was a method like:
boolean buildingInTheWay(LatLng myLocation, LatLng destinationLocation);

if (buildingInTheWay(myLocation, destinationLocation)) {
    // Do something
} else {
    // Do something else
}
Perhaps there's also something that could be done where, if the route to a location is much longer than the bird's-eye path to it, there must be an obstacle in the way. Imagine two parallel streets like so:
- = street
x = buildings
A = start location
B = destination location
---------C----A-------
xxxxxxxx | xxxxxxxxx
---------D----B-------
Here, A to B would return true, as the route around the buildings is a lot longer than the direct distance. But C to D would return false, as the route following a road is almost exactly the same distance.
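A hedged sketch of that heuristic, assuming the road-route length (routeDistanceMeters) comes from whatever routing service you query, and using android.location.Location for the straight-line distance; the 1.5 ratio is an arbitrary placeholder of mine:
// Treats the destination as "probably obstructed" when the road route is
// much longer than the straight-line distance between the two points.
boolean probablyObstructed(LatLng myLocation, LatLng destinationLocation, double routeDistanceMeters) {
    float[] results = new float[1];
    Location.distanceBetween(myLocation.latitude, myLocation.longitude,
            destinationLocation.latitude, destinationLocation.longitude, results);
    return routeDistanceMeters > 1.5 * results[0]; // arbitrary threshold
}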
However, that's not very accurate - what about between buildings? I wonder if each building on Google Maps has lat/lng points for each of its corners?
Any thoughts, anyone?
