Google Maps Android: Issues drawing complex GeoJson polygons

I have a web service that returns GeoJSON polygons (time zones) and I'm trying to draw them as a layer over the map object using GeoJsonLayer.
I already tested the GeoJSON file in geojson.io and it looks fine (image at the bottom), but when I add it to the map it's not being filled and some extra lines appear far to the south; it looks like the polygon isn't being closed properly. This is the code I'm using to load the file (I have it locally in the raw folder at the moment):
val layer = GeoJsonLayer(mGoogleMap, R.raw.sample_timezones_response, requireContext())
layer.addLayerToMap()
val polygonStyle = GeoJsonPolygonStyle()
polygonStyle.fillColor = resources.getColor(R.color.color_main_green_200, null)
polygonStyle.zIndex = 10000f
layer.features.forEach { feature ->
    feature.polygonStyle = polygonStyle
}
This is the json file I'm trying with.
EDIT: the original JSON file was not following the right-hand rule. I fixed it with the Python library geojson-rewind; this is the fixed version, which passes the validation at https://geojsonlint.com/
I also tried updating to the latest version of the library (18.0.2) and switching to the newer renderer, but it displays the same way.
This is how it looks on Android:
This is how it's supposed to look, same JSON file in geojson.io:

When validating the geojson, I get the error:
Line 1: Polygons and MultiPolygons should follow the right-hand rule
https://www.rfc-editor.org/rfc/rfc7946#section-3.1.6
A linear ring MUST follow the right-hand rule with respect to the
area it bounds, i.e., exterior rings are counterclockwise, and
holes are clockwise.
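If you'd rather fix the winding on the device instead of preprocessing the file, the check that geojson-rewind performs can also be done by hand. A minimal sketch, assuming you parse each Polygon ring from the "coordinates" array into a list of [lng, lat] pairs yourself (none of this is part of the maps utils library):

import java.util.Collections;
import java.util.List;

public final class RingWinding {

    // Shoelace formula; the result is positive for a counterclockwise ring.
    static double signedArea(List<double[]> ring) {
        double sum = 0;
        for (int i = 0; i < ring.size() - 1; i++) {
            double[] p = ring.get(i);
            double[] q = ring.get(i + 1);
            sum += (q[0] - p[0]) * (q[1] + p[1]);
        }
        return -sum / 2.0;
    }

    // RFC 7946: exterior rings must be counterclockwise, holes clockwise.
    static void rewind(List<double[]> ring, boolean isHole) {
        double area = signedArea(ring);
        if ((isHole && area > 0) || (!isHole && area < 0)) {
            Collections.reverse(ring);
        }
    }
}

Reversing a closed ring keeps it closed, since the first and last points are identical.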

Related

Android - Google Maps gets stuck

I'm developing an app which displays a Google map and a bunch of markers on it. There are a lot of markers, so I divided them into smaller groups and display only those that fall within certain bounds depending on the current position of the camera.
To do that I'm using the GoogleMap.OnCameraIdleListener. First I remove the listener, do my calculations and drawing, and then I restore the listener in the Fragment containing my map:
@Override
public void onCameraIdle() {
    mMap.setOnCameraIdleListener(null);
    clearMap();
    findTheMarkersInBounds();
    displayTheMarkers();
    mMap.setOnCameraIdleListener(this);
}
This way I only draw the markers I need to display, and the performance is way better than having 1000 markers on the map at once. I also draw about the same number of polylines, but that's not the point now.
For some strange reason, after some panning and zooming the map doesn't respond anymore. I can't zoom or pan it. The app displays a dialog saying it is not responding and asking whether I want to wait or close the app. No errors are displayed in logcat. I can't tell exactly when this happens: sometimes after the first pan, sometimes I can move around for 2-3 minutes. The same thing happens on the emulator and on a physical device.
Has anyone experienced something like this? Thanks!
Or am I approaching this the wrong way? How else should I optimize the map to display about 1000 markers and polylines? (The markers have text on them, so they can't share the same Bitmap, and all of the polylines can have different colors and need to be clickable, so I can't combine them into one large polyline.)
EDIT: A little more info about my methods:
After all the marker positions are loaded from the internal database, I loop through all of them and, based on their position, place each one into the corresponding region. It's a 2D array of lists.
My whole area is divided into 32x32 smaller rectangular areas. When I'm searching for the markers to display, I determine which region is in view and display only the markers that are in that area.
This way I don't need to loop over all of the markers.
My methods (very simplified) look like this:
ArrayList<MarkerObject> markersToDisplay = new ArrayList<MarkerObject>();

private void findTheMarkersInBounds() {
    markersToDisplay.clear();
    LatLngBounds bounds = mMap.getProjection().getVisibleRegion().latLngBounds;
    int[] regionCoordinates = getRegionCoordinates(bounds); // i, j coordinates of my regions [0..31][0..31]
    markersToDisplay.addAll(subdividedMarkers[regionCoordinates[0]][regionCoordinates[1]]);
}

private void drawMarkers() {
    if ((markersToDisplay != null) && (markersToDisplay.size() > 0)) {
        for (int i = 0; i < markersToDisplay.size(); i++) {
            MarkerObject mo = markersToDisplay.get(i);
            LatLng position = new LatLng(mo.gpsLat, mo.gpsLon);
            BitmapDescriptor bitmapDescriptor = BitmapDescriptorFactory.fromBitmap(createMarker(getContext(), mo.title));
            GroundOverlay m = mMap.addGroundOverlay(groundOverlayOptions.image(bitmapDescriptor).position(position, 75));
            m.setClickable(true);
        }
    }
}
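getRegionCoordinates() is not shown; a hypothetical version that maps the center of the visible bounds onto the 32x32 grid might look like this (AREA_BOUNDS, describing the whole covered area, is an assumption):

private LatLngBounds AREA_BOUNDS; // bounds of the whole covered area, set once when the data is loaded

private int[] getRegionCoordinates(LatLngBounds visible) {
    LatLng center = visible.getCenter();
    double lonSpan = AREA_BOUNDS.northeast.longitude - AREA_BOUNDS.southwest.longitude;
    double latSpan = AREA_BOUNDS.northeast.latitude - AREA_BOUNDS.southwest.latitude;
    int i = (int) ((center.longitude - AREA_BOUNDS.southwest.longitude) / lonSpan * 32);
    int j = (int) ((center.latitude - AREA_BOUNDS.southwest.latitude) / latSpan * 32);
    // Clamp to the valid 0..31 range.
    return new int[] { Math.min(Math.max(i, 0), 31), Math.min(Math.max(j, 0), 31) };
}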
It is hard to help you without the source code of findTheMarkersInBounds() and displayTheMarkers(), but it seems you need a different approach to increase performance, for example:
improve your findTheMarkersInBounds() logic if possible;
run findTheMarkersInBounds() on a separate thread and show the markers not all at once but one by one (or in batches of 10-20) while the search is running (see the sketch after this list);
improve your displayTheMarkers() if possible; you may actually want to use custom drawing on a canvas (like in this answer) instead of creating thousands of Marker objects.
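A rough sketch of that background-plus-batching idea (assumptions: findTheMarkersInBounds(LatLngBounds) is refactored to take the bounds as a parameter and only touch plain lists, and addMarkerToMap(MarkerObject) is a hypothetical helper that adds a single marker or overlay):

private final ExecutorService executor = Executors.newSingleThreadExecutor();
private final Handler mainHandler = new Handler(Looper.getMainLooper());

@Override
public void onCameraIdle() {
    // Read the projection on the main thread; only the list work moves to the background.
    final LatLngBounds bounds = mMap.getProjection().getVisibleRegion().latLngBounds;
    executor.execute(() -> {
        findTheMarkersInBounds(bounds);
        List<MarkerObject> snapshot = new ArrayList<>(markersToDisplay);
        int batchSize = 20;
        for (int start = 0; start < snapshot.size(); start += batchSize) {
            final List<MarkerObject> batch =
                    snapshot.subList(start, Math.min(start + batchSize, snapshot.size()));
            // All map mutations must happen back on the main thread.
            mainHandler.post(() -> {
                for (MarkerObject mo : batch) {
                    addMarkerToMap(mo);
                }
            });
        }
    });
}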
For the question updates:
Small improvements first (they are used by the main one below):
pass the approximate maximum size of markersToDisplay as a constructor parameter:
ArrayList<MarkerObject> markersToDisplay = new ArrayList<MarkerObject>(1000);
Instead of for (int i=0; i<markersToDisplay.size(); i++) {
use for (MarkerObject mo : markersToDisplay) {
Do not create the LatLng position every time; create it once and store it in a field of MarkerObject (see the short sketch below).
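That cached field might look like this (a sketch; position is an assumed addition to MarkerObject):

// Once, when the MarkerObject is built from its database row:
mo.position = new LatLng(mo.gpsLat, mo.gpsLon);

// Later, in drawMarkers(), reuse it instead of allocating a new LatLng on every pass:
for (MarkerObject mo : markersToDisplay) {
    groundOverlayOptions.position(mo.position, 75);
    // ...
}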
Main improvement:
These lines are the source of the issues:
BitmapDescriptor bitmapDescriptor = BitmapDescriptorFactory.fromBitmap(createMarker(getContext(), mo.title));
GroundOverlay m = mMap.addGroundOverlay(groundOverlayOptions.image(bitmapDescriptor).position(position, 75));
IMHO, using Ground Overlays to show thousands of markers is a bad idea. A Ground Overlay is intended for showing a few custom maps over the default Google Map (like a local plan of a park or zoo). Use custom drawing on a canvas as in the link above. But if you decide to keep using Ground Overlays, do not recreate them every time: create each one once, store a reference to it in MarkerObject, and reuse it:
// once, when the marker is created (just an example)
mo.overlayOptions = new GroundOverlayOptions()
        .image(BitmapDescriptorFactory.fromBitmap(createMarker(getContext(), mo.title)))
        .position(mo.position, 75)
        .clickable(true);
...
// in your drawMarkers() - just add:
...
for (MarkerObject mo : markersToDisplay) {
    if (mo.overlayOptions == null) {
        mo.overlayOptions = createOverlayOptionsForThisMarker();
    }
    mMap.addGroundOverlay(mo.overlayOptions);
}
But IMHO, get rid of the thousands of Ground Overlays altogether and use custom drawing on a canvas.
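For reference, a very rough sketch of that canvas approach (not the code from the linked answer): a plain View placed over the map in the layout and redrawn whenever the camera moves. MarkerObject is the question's own class; everything else here is illustrative:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Point;
import android.util.AttributeSet;
import android.view.View;
import com.google.android.gms.maps.GoogleMap;
import com.google.android.gms.maps.Projection;
import com.google.android.gms.maps.model.LatLng;
import java.util.List;

public class MarkerCanvasView extends View {
    private GoogleMap map;
    private List<MarkerObject> markers;
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public MarkerCanvasView(Context context, AttributeSet attrs) {
        super(context, attrs);
        paint.setTextSize(32f);
    }

    public void bind(GoogleMap map, List<MarkerObject> markers) {
        this.map = map;
        this.markers = markers;
        // Redraw whenever the camera moves so the labels stay on their coordinates.
        map.setOnCameraMoveListener(this::invalidate);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        if (map == null || markers == null) return;
        Projection projection = map.getProjection();
        for (MarkerObject mo : markers) {
            Point p = projection.toScreenLocation(new LatLng(mo.gpsLat, mo.gpsLon));
            canvas.drawText(mo.title, p.x, p.y, paint);
        }
    }
}

Note that this sketch handles no clicks; hit-testing against the drawn positions would still have to be added for the clickable requirement.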
After further investigation and communication with the Google Maps Android tech support, we came to a solution: there is a bug in the GroundOverlay.setZIndex() method.
All you have to do is update to the newest API version. The bug is no longer present in Google Maps SDK v3.1.
At the moment it is in beta, but the migration is pretty straightforward.

Can Mapbox's MapSnapshotter be used to generate a bitmap containing annotations?

I'm attempting to use MapSnapshotter (a part of the Mapbox Maps SDK for Android) to generate a screenshot of a Mapbox map instance, including a single line annotation that I've added to the map.
I seem to be able to generate a static map bitmap as expected, however the image does not contain my line annotation.
Is there a way of having the line appear in the bitmap that MapSnapshotter generates, or is MapSnapshotter limited to capturing map screenshots sans annotations?
I'm using the example code provided in one of Mapbox's repositories for the moment. The only alterations that I've made are to add a new layer and source to the Mapbox map style such that it is displayed on the interactive Mapbox map.
In order to have any symbols/geometries visible on the snapshot taken with the MapSnapshotter, the desired layers would have to be added directly to the style. Alternatively, you can draw on top of the image like in this example, which adds a marker to a snapshot.
Another way is to render a normal, interactive map and take a picture of it with MapboxMap#snapshot.
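If you go the "draw on top of the image" route, a rough sketch might look like the following (mapboxMap, context, and imageView are assumed to exist, and the line's endpoints are placeholders). MapSnapshot.pixelForLatLng converts map coordinates into pixel positions on the snapshot bitmap:

MapSnapshotter.Options options = new MapSnapshotter.Options(500, 500)
        .withRegion(mapboxMap.getProjection().getVisibleRegion().latLngBounds)
        .withStyle(mapboxMap.getStyle().getUrl());

MapSnapshotter snapshotter = new MapSnapshotter(context, options);
snapshotter.start(snapshot -> {
    // Copy the bitmap so it is mutable, then draw the line onto it.
    Bitmap bitmap = snapshot.getBitmap().copy(Bitmap.Config.ARGB_8888, true);
    Canvas canvas = new Canvas(bitmap);

    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setColor(Color.RED);
    paint.setStrokeWidth(6f);

    PointF start = snapshot.pixelForLatLng(new LatLng(60.17, 24.94)); // placeholder endpoints
    PointF end = snapshot.pixelForLatLng(new LatLng(60.20, 25.00));
    canvas.drawLine(start.x, start.y, end.x, end.y, paint);

    imageView.setImageBitmap(bitmap);
});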
Could you explain how to add the desired layer directly to the style, as described in the answer above?
I added my custom symbol layer to the Mapbox style, but the snapshot still does not include the symbols in the bitmap.
FeatureCollection featureCollection = FeatureCollection.fromJson(geoJson);
Source source = new GeoJsonSource("my.data.source", featureCollection);
mapboxMap.addSource(source);
SymbolLayer myLayer = new SymbolLayer("my.layer.id", "my.source.id");
myLayer.withProperties(PropertyFactory.iconImage("my.image"));
mapboxMap.addLayer(myLayer);
// ...
MapSnapshotter.Options snapShotOptions = new MapSnapshotter.Options(500, 500);
snapShotOptions.withRegion(mapboxMap.getProjection()
.getVisibleRegion().latLngBounds);
snapShotOptions.withStyle(mapboxMap.getStyle().getUrl());
MapSnapshotter mapSnapshotter = new MapSnapshotter(this, snapShotOptions);
Thank you.

Google Tango - Getting pose data with IMU as base frame

I am working with Google Project Tango and I tried a basic example of getting pose data:
TangoCoordinateFramePair pair;
pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.target = TANGO_COORDINATE_FRAME_CAMERA_COLOR;
base = TANGO_SUPPORT_ENGINE_OPENGL;
target = TANGO_SUPPORT_ENGINE_OPENGL;
error = TangoSupport_getPoseAtTime(poseTimestamp, pair.base, pair.target, base, target, ROTATION_0, &pose);
This gives TANGO_SUCCESS.
However, if I only change the base to this
pair.base = TANGO_COORDINATE_FRAME_IMU;
...I keep getting TANGO_INVALID.
I tried using both the C API and the Unity SDK, and both give the same invalid result.
Why is that? Why can't I use TANGO_COORDINATE_FRAME_IMU?
I am trying to fix Camera offset as mentioned here:
Camera-Offset | Project Tango
but without any success...
TangoSupport_getPoseAtTime only works for getting a pose between a fixed coordinate frame and a moving coordinate frame. The TANGO_INVALID error results from the fact that TANGO_COORDINATE_FRAME_CAMERA_COLOR and TANGO_COORDINATE_FRAME_IMU are both moving coordinate frames.
In order to find the offset between TANGO_COORDINATE_FRAME_IMU and TANGO_COORDINATE_FRAME_CAMERA_COLOR (or between any pair of moving coordinate frames), you need to use TangoService_getPoseAtTime instead.
This code snippet should give you the transform you're looking for:
TangoCoordinateFramePair pair;
pair.base = TANGO_COORDINATE_FRAME_IMU;
pair.target = TANGO_COORDINATE_FRAME_CAMERA_COLOR;
TangoPoseData pose;
TangoErrorType result = TangoService_getPoseAtTime(0.0, pair, &pose);
Note also that since both of these coordinate frames are attached to the device (i.e. fixed with respect to the device and to each other), the pose resulting from this call will not change as the device moves.

Adjust origin of SVG graphic

tl;dr: Given an arbitrary SVG <path>, how can I adjust the path commands to set a new origin for the file?
Details
I'm using Adobe Illustrator to create vector graphics for use in Android. Illustrator has a bug where under (unknown) circumstances the viewBox and all coordinates created for your graphics do not match the coordinates you set for the canvas and objects in Illustrator. Instead of (for example) viewBox="-7 0 14 20" you might get viewBox="-53 75 14 20". Usually the path commands will be translated to fit inside that viewBox. Sometimes even that fails.
In a normal SVG file (a) this would rarely even be noticeable, and (b) it would be easy to fix by wrapping everything in a root-level <g transform="translate(46,-75)">. In an Android VectorDrawable this requires an extra <group> that I don't want to add, and (unlike SVG) I can't add a translate to the <path> itself. And in my code, the origin matters.
How can I adjust the path commands for all the paths in an SVG file to offset them by a fixed amount, allowing me to place the origin where I'd like?
A solution is to add explicit transform="translate(…,…)" onto each path that you want to adjust, view the SVG in a web browser, use the code in this question [note: my own] to bake the translate into the path data, and then use the Web inspector to get the changed values. Cumbersome.
Better: I've created a little online tool SVG Reoriginizer [no ads, no revenue] that lets you paste SVG code into the top box, click (or use alt+shift+arrow keys) to adjust the origin, and see the translated path commands shown in the bottom.
The core code is this function, which offsets the viewBox and all absolute path commands by the specified amount, and returns the changed SVG as a string:
function offsetOrigin(svg, dx, dy){
  svg.viewBox.baseVal.x -= dx;
  svg.viewBox.baseVal.y -= dy;
  [].forEach.call(svg.querySelectorAll('path'), function(path){
    var segs = path.pathSegList;
    for (var i=segs.numberOfItems; i--; ){
      var seg = segs.getItem(i), c = seg.pathSegTypeAsLetter;
      if (/[MLHVCSQTA]/.test(c)){
        if ('x'  in seg) seg.x  -= dx;
        if ('y'  in seg) seg.y  -= dy;
        if ('x1' in seg) seg.x1 -= dx;
        if ('x2' in seg) seg.x2 -= dx;
        if ('y1' in seg) seg.y1 -= dy;
        if ('y2' in seg) seg.y2 -= dy;
      }
    }
  });
  return (new XMLSerializer).serializeToString(svg);
}

Projecting unprojected radar images into osmdroid

I have written a radar weather app that uses osmdroid for map tiles and manually overlays NOAA RIDGE radar data. Everything is working great except that the radar images are unprojected, while the OpenStreetMap tiles are in the Web Mercator projection. The weather lies within the bounds it should, but the data is distorted.
I see three ways to fix this (in order of preference) but am having trouble with all three:
1) find a source of radar data already projected in mercator - hours of Googling later, I've found nothing
2) programmatically reproject the images right after I download them. Does anyone know a good API for this?
3) project them on the fly, perhaps with OpenLayers. I'm reading that OpenLayers can reproject, but can it be used on top of an osmdroid MapView?
Any ideas? Thanks for any help
Mike
GDAL is the way to go. There is no official Android build that I know of; however, some people have been successful in getting it running on Android. For example, Nutiteq has a build in the libs folder of their AdvancedMap3D sample project. Put the contents of both armeabi folders in your project's lib folder and you should be able to access the GDAL packages.
Then take a look at the GDAL in Java page. Look at the gdalinfo.java sample to get a feel for how to load and examine the parts of a GDAL dataset. To reproject your dataset, you will do something along the lines of:
SpatialReference sr = new SpatialReference();
sr.ImportFromProj4("+proj=merc +datum=WGS84");
String result[] = new String[1];
sr.ExportToPrettyWkt(result, 1);
String oldProjection = mDataset.getProjection();
String newProjection = result[0];
Dataset newDataset = gdal.AutoCreateWarpedVRT(mDataset, oldProjection, newProjection, gdalconst.GRA_NearestNeighbour, 0.0);
Dataset savedDataset = mDriver.CreateCopy(outpath, newDataset, 0, new String[] { "COMPRESS=LZW", "PREDICTOR=2" }, null, null);
newDataset.delete();
savedDataset.delete();
You may need to make a few adjustments, but that should get you most of the way there.
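For context, the mDataset and mDriver used in the snippet above would have been set up beforehand, roughly like this (a sketch; the input path is a placeholder, and the warp only works if the downloaded image carries georeferencing, e.g. via a world file):

import org.gdal.gdal.Dataset;
import org.gdal.gdal.Driver;
import org.gdal.gdal.gdal;
import org.gdal.gdalconst.gdalconst;

// Register all GDAL format drivers once, then open the downloaded radar image read-only.
gdal.AllRegister();
Dataset mDataset = gdal.Open("/sdcard/radar/latest.gif", gdalconst.GA_ReadOnly);
// GeoTIFF is a convenient target format for the warped copy written by CreateCopy().
Driver mDriver = gdal.GetDriverByName("GTiff");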
