Loading Open Cycle Map on OSMdroid - android

Sorry, I'm new to Android development.
I'm wondering if there is any way to load OpenCycleMap using osmdroid.
From the wiki, it seems there is no built-in source for it:
https://github.com/osmdroid/osmdroid/wiki/Map-Sources
Could anyone give me some tips on how to do this?
The only way I can think of is to define the tile source manually, as below.
Is there an easier way?
final String[] tileURLs = {"http://a.tile.thunderforest.com/cycle/",
        "http://b.tile.thunderforest.com/cycle/",
        "http://c.tile.thunderforest.com/cycle/"};
final ITileSource OCM =
        new XYTileSource("Open Cycle Map",
                0,      // min zoom
                19,     // max zoom
                512,    // tile size in pixels
                ".png", // image filename ending
                tileURLs,
                "from open cycle map");
Thanks a lot

Defining a tile source is the correct way to do it, and it's a perfectly fine way; many built-in tile sources are defined in the same manner.
However, according to the documentation at http://thunderforest.com/maps/opencyclemap/, you should obtain and use an API key:
Want to use these tiles? The generic tile format string for the
OpenCycleMap layer is:
https://tile.thunderforest.com/cycle/{z}/{x}/{y}.png?apikey=<insert-your-apikey-here>
Therefore you should include your API key:
final ITileSource OCM =
        new XYTileSource("Open Cycle Map",
                0,
                19,
                512,
                ".png?apikey=<insert-your-apikey-here>",
                tileURLs,
                "from open cycle map");
(This is just the modified code from the question. I haven't tested it, so some parameters may not be correct.)
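For completeness, here is how you would then apply the source to your map. This is a minimal sketch, assuming mapView is your existing org.osmdroid.views.MapView; note also that standard Thunderforest tiles are 256 px, so the 512 tile size above may need adjusting:
// Apply the custom tile source to an existing MapView.
// 'OCM' is the XYTileSource defined above; 'mapView' is assumed
// to be the org.osmdroid.views.MapView from your layout.
mapView.setTileSource(OCM);
mapView.setMultiTouchControls(true); // optional: enable pinch-to-zoom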

Related

Android - osmdroid - ResourceProxy.string

Can someone explain to me how to use this parameter?
If I use a custom online tile source, can I set it to null?
In which cases will this parameter be used?
XYTileSource source = new XYTileSource("custom", ResourceProxy.string.?, getMinZoom(), getMaxZoom(), 256, ".png", new String[]{});
According to https://github.com/osmdroid/osmdroid/wiki/How-to-use-the-osmdroid-library :
mapView.setTileSource(TileSourceFactory.MAPNIK);
How can I create a custom tile source?
Thanks.
It was deleted ages ago. If you are on a version of osmdroid that still uses the ResourceProxy.string class, it's time to update.
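For anyone landing here later: in current osmdroid versions (5.x and up) the ResourceProxy parameter is gone, and a custom tile source is just the remaining arguments. A minimal sketch, assuming the 6-argument XYTileSource constructor; the base URL is a placeholder, not a real tile server:
// Custom tile source without ResourceProxy (osmdroid 5.x+).
// The base URL below is a placeholder, not a real tile server.
ITileSource custom = new XYTileSource(
        "custom",   // source name, also used as the tile cache directory
        1, 18,      // min and max zoom levels
        256,        // tile size in pixels
        ".png",     // image filename ending
        new String[]{ "https://example.com/tiles/" });
mapView.setTileSource(custom);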

Generate and export point cloud from Project Tango

After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports this to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's github.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values: the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3s from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine: no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm fairly new to this subject and have a hard time understanding what PoseData actually does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from the java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then write those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // camera to IMU transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu =
            ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough; now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds you can get the 3D representation of your room.
There is another way to do this with a rotation matrix, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for a real-time 3D reconstruction application.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI. (In fact, I'm stuck at the moment.)
Drifting
There still is a problem when I turn around with the device. It seems that the point cloud spreads out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose that together cause a big error in your pose relative to the world. For instance, if you take your Tango device and walk in a circle while tracking your TangoPoseData, and then draw your trajectory in a spreadsheet, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers lots of points and gives a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use FRAME_PAIRS as usual, for example when registering your Tango update listener (see the sketch below).
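A sketch based on the standard Tango Java callbacks, assuming mTango is your Tango instance; the callback bodies are placeholders:
// Registering FRAME_PAIRS with the Tango update listener.
// Poses delivered to onPoseAvailable() are now expressed relative to
// the area description frame instead of the start-of-service frame.
mTango.connectListener(FRAME_PAIRS, new Tango.OnTangoUpdateListener() {
    @Override
    public void onPoseAvailable(TangoPoseData pose) {
        // pose is now wrt COORDINATE_FRAME_AREA_DESCRIPTION
    }
    @Override
    public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
        // depth frames, same as before
    }
    @Override
    public void onFrameAvailable(int cameraId) { }
    @Override
    public void onTangoEvent(TangoEvent event) { }
});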
Then you have to modify your TangoConfig to instruct Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
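After building the config you connect as usual; from then on, poses requested against COORDINATE_FRAME_AREA_DESCRIPTION come back drift-corrected (a one-liner sketch, assuming tango is the instance used above):
// Connect with the customized config.
tango.connect(config);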
Using this technique you'll get rid of those spreads.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key does not exist anymore (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.

Microsoft Band Android Tile XML

I am messing around with developing apps for the Band 2 using the Microsoft Band SDK and Android Studio. I have successfully tested the application on my device, but the problem I am having is understanding how the application gets linked to the tile and how that tile gets added to the Health app.
Where does the presentation XML reside? I read the Microsoft Band SDK.pdf, section 8.8 SIMPLE CUSTOM TILE EXAMPLE, but the example does not specify where the code needs to reside. Do I need to add it to the class file for the app or to a different file? And where does the tile icon get created? In Android Studio, and if so, where?
An example of how the class, tile XML, and icon get installed to the band would be nice.
Thanks!
The SDK includes some samples; have a look at the one entitled BandTileEvent to see the full implementation. The quick version is that your tile creation code should create a series of layouts (containing elements with IDs) and icons when the tile is made; then, to update it, you choose a layout and assign values to the elements' IDs. The key elements from the samples look like this (modified for readability):
private PageLayout createButtonLayout() {
    return new PageLayout(
            new FlowPanel(15, 0, 260, 105, FlowPanelOrientation.VERTICAL)
                    .addElements(new FilledButton(0, 5, 210, 45)
                            .setMargins(0, 5, 0, 0)
                            .setId(12)
                            .setBackgroundColor(Color.RED))
                    .addElements(new TextButton(0, 0, 210, 45)
                            .setMargins(0, 5, 0, 0)
                            .setId(21)
                            .setPressedColor(Color.BLUE))
    );
}
This creates a PageLayout object that is used in the tile creation process. The method should be used like this:
BandTile tile = new BandTile.Builder(YOUR_TILE_UUID, "Tile Title", tileIconBitmap)
        .setPageLayouts(createButtonLayout())
        .setPageIcons(getIconsToUse())
        .build();
client.getTileManager().addTile(context, tile);
Once the tile is on the band, you'll need to send an update; it should look something like this:
private void updatePages() throws BandIOException {
    client.getTileManager().setPages(tileId,
            new PageData(pageId1, 0)
                    .update(new FilledButtonData(12, Color.YELLOW))
                    .update(new TextButtonData(21, "Text Button")));
}
Once the tile is on your band, you can register an intent filter to receive these events; check the SDK samples for the exact intents used. You'll get notified when the tile is opened or closed and when buttons on it are pressed.
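As a rough sketch of that registration (the action and extra names below are from the SDK's TileEvent class as I recall them; verify them against the samples for your SDK version):
// Listen for tile events from the band.
BroadcastReceiver receiver = new BroadcastReceiver() {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (TileEvent.ACTION_TILE_BUTTON_PRESSED.equals(intent.getAction())) {
            // TILE_EVENT_DATA carries a TileButtonEvent describing the press
            TileButtonEvent data = intent.getParcelableExtra(TileEvent.TILE_EVENT_DATA);
            // data.getElementID() is the ID assigned in the layout (12 or 21 above)
        }
    }
};
IntentFilter filter = new IntentFilter();
filter.addAction(TileEvent.ACTION_TILE_OPENED);
filter.addAction(TileEvent.ACTION_TILE_BUTTON_PRESSED);
filter.addAction(TileEvent.ACTION_TILE_CLOSED);
context.registerReceiver(receiver, filter);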

How to apply Straight Skeleton Algorithm in android

I have a requirement like this. Here is a snapshot of my home class.
It contains several shop shapes. For that I have used this code:
ArrayList<Point> points2 = new ArrayList<Point>();
points2.add(vertex.Get(20, 0));
points2.add(vertex.Get(44, 0));
points2.add(vertex.Get(44, 25));
points2.add(vertex.Get(20, 25));
Polygon view1 = new Polygon(context, points2, android.R.color.holo_orange_dark, R.color.background_light, floor != 0);
view1.setId(0);
views.add(view1);
addView(view1);
This works for a static number of shapes. Now the requirement is that the number of shapes be dynamic, so I can't use the same code as above. The client has asked me to implement the Straight Skeleton Algorithm.
I googled it and found some help on implementing the same algorithm in plain Java:
Java library for creating straight skeleton?
That question covers it in core Java. I tried it out, but it is entirely core Java, and I need to implement it on Android. I've never worked on an issue like this before, so I'd appreciate help from anyone who has already implemented it on Android.
Thanks
You should try this:
https://code.google.com/p/campskeleton/
From this answer:
Java library for creating straight skeleton?

projecting unprojected radar images into osmdroid

I have written a radar weather app using osmdroid for map tiles, manually overlaying NOAA RIDGE radar data. Everything is working great, except that the radar images are unprojected, while the OpenStreetMap tiles use the Web Mercator projection. The weather lies within the bounds it should, but the data is distorted.
I see three ways to fix this (in order of preference) but am having trouble with all three:
1) Find a source of radar data already projected in Mercator. Hours of googling later, I've found nothing.
2) Programmatically reproject the images right after I download them. Does anyone know a good API for this?
3) Project them on the fly, perhaps with OpenLayers. I'm reading that OpenLayers can reproject, but can it be used on top of an osmdroid MapView?
Any ideas? Thanks for any help.
Mike
GDAL is the way to go. There is no official Android build that I know of, but some people have been successful in getting it running on Android. For example, Nutiteq has a build in the libs folder of their AdvancedMap3D sample project. Put the contents of both armeabi folders in your project's libs folder and you should be able to access the GDAL packages.
Then take a look at the GDAL in Java page. Look at the gdalinfo.java sample to get a feel for how to load and examine the parts of a GDAL dataset. To reproject your dataset, you will do something along these lines:
// Build the target projection (Mercator on WGS84) and export it as WKT.
SpatialReference sr = new SpatialReference();
sr.ImportFromProj4("+proj=merc +datum=WGS84");
String result[] = new String[1];
sr.ExportToPrettyWkt(result, 1);
String oldProjection = mDataset.GetProjection();
String newProjection = result[0];
// Warp the source dataset into the new projection...
Dataset newDataset = gdal.AutoCreateWarpedVRT(mDataset, oldProjection, newProjection,
        gdalconst.GRA_NearestNeighbour, 0.0);
// ...and persist the warped result with the output driver.
Dataset savedDataset = mDriver.CreateCopy(outpath, newDataset, 0,
        new String[] { "COMPRESS=LZW", "PREDICTOR=2" }, null, null);
newDataset.delete();
savedDataset.delete();
You may need to make a few adjustments, but that should get you most of the way there.
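One more note: before the reprojection snippet will run, GDAL has to be initialized and the dataset opened. A minimal sketch, where the input path and the GTiff output driver are my assumptions, not anything from the question:
// One-time GDAL setup before the reprojection code above.
gdal.AllRegister(); // register all format drivers
Dataset mDataset = gdal.Open("/sdcard/radar/latest.gif", gdalconst.GA_ReadOnly); // path is an assumption
Driver mDriver = gdal.GetDriverByName("GTiff"); // driver used by CreateCopy()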
