How to apply the Straight Skeleton Algorithm in Android

I have a requirement like this. Here is a snapshot of my home class: it contains several shop shapes, and for that I have used this code:
// Corners of one shop shape
ArrayList<Point> points2 = new ArrayList<Point>();
points2.add(vertex.Get(20, 0));
points2.add(vertex.Get(44, 0));
points2.add(vertex.Get(44, 25));
points2.add(vertex.Get(20, 25));
// Build the polygon view for this shop and add it to the layout
Polygon view1 = new Polygon(context, points2, android.R.color.holo_orange_dark, R.color.background_light, floor != 0);
view1.setId(0);
views.add(view1);
addView(view1);
This works for a static number of shapes. Now the requirement is that the number of shapes will be dynamic, so I can't keep using the code above. The client has asked me to implement the Straight Skeleton Algorithm.
I googled it and found some help for implementing the same algorithm in core Java:
Java library for creating straight skeleton?
That question covers it in plain Java. I tried it out, but it is entirely core Java; I need to implement it on Android, and I have never worked on a problem like this before. I need some help if someone has already implemented it on Android.
Thanks

You should try this:
https://code.google.com/p/campskeleton/
From this answer:
Java library for creating straight skeleton?
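To give a rough idea of how the library is driven, here is a sketch adapted from the campskeleton README: it builds one loop of edges for a footprint (using the corner coordinates from your snippet) and computes its straight skeleton. Treat the class names and signatures as assumptions to verify against the version of the library you pull in; it is plain Java, so it should compile on Android once its dependencies (e.g. vecmath) are on the classpath.
import org.twak.camp.Corner;
import org.twak.camp.Edge;
import org.twak.camp.Machine;
import org.twak.camp.Skeleton;
import org.twak.camp.Output.Face;
import org.twak.utils.Loop;

// Sketch only: adapted from the campskeleton README example.
// Corner coordinates taken from the shop shape in the question.
Machine speed = new Machine();            // controls the edge angle/"speed"
Loop<Edge> loop = new Loop<Edge>();

Corner c1 = new Corner(20, 0);
Corner c2 = new Corner(44, 0);
Corner c3 = new Corner(44, 25);
Corner c4 = new Corner(20, 25);

loop.append(new Edge(c1, c2));
loop.append(new Edge(c2, c3));
loop.append(new Edge(c3, c4));
loop.append(new Edge(c4, c1));

for (Edge e : loop)
    e.machine = speed;

// Compute the straight skeleton of the footprint.
Skeleton skeleton = new Skeleton(loop.singleton(), true);
skeleton.skeleton();

// skeleton.output.faces holds one face per input edge; walk its points
// to build the Polygon views you add to your layout.
for (Face face : skeleton.output.faces.values()) {
    // face.points is a list of loops of points (z == 0 for the input plane)
}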

Related

Generate and export point cloud from Project Tango

After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports it to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example on Google's GitHub.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values; the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3 from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine; no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up: the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what PoseData actually does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In that example they get the coordinates of two different points in two different coordinate systems and then express those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above).
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // IMU to color camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough; now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.
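For example, to superimpose several normalized clouds you can simply concatenate them and reuse the .xyz writer from the question. A minimal sketch; normalizedClouds and outFile are illustrative names, not part of the Tango API:
// Minimal sketch: merge several normalized clouds into one .xyz file.
ArrayList<Vector3> merged = new ArrayList<>();
for (ArrayList<Vector3> cloud : normalizedClouds) {
    merged.addAll(cloud);
}
try (OutputStream os = new FileOutputStream(outFile)) {
    for (Vector3 p : merged) {
        os.write((p.x + " " + p.y + " " + p.z + "\r\n").getBytes());
    }
}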
There is another way to do this with a rotation matrix, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for a real-time 3D reconstruction application.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI (I'm stuck there at the moment, in fact).
Drifting
There still is a problem when I turn around with the device. It seems that the point cloud spreads out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose that together cause a big error in your pose relative to the world. For instance, if you take your Tango device, walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet or whatever you like, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear idea about this topic I suggest watching this talk from Google I/O 2016. It covers a lot of points and gives you a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
{
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS as usual.
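For instance, you pass FRAME_PAIRS when connecting the Tango update listener, so every pose you receive is expressed against the area description frame. A sketch assuming the standard Tango Java callback interface; your SDK version may have additional callbacks to override:
// Sketch: poses delivered here are AREA_DESCRIPTION -> DEVICE.
mTango.connectListener(FRAME_PAIRS, new Tango.OnTangoUpdateListener() {
    @Override
    public void onPoseAvailable(TangoPoseData pose) {
        // Pose wrt the fixed area frame; use it when normalizing clouds.
    }

    @Override
    public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
        // Depth points, as before.
    }

    @Override
    public void onFrameAvailable(int cameraId) { }

    @Override
    public void onTangoEvent(TangoEvent event) { }
});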
Then you have to modify your TangoConfig to tell Tango to use Area Learning, using the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of those spreads.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key does not exist anymore (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.

Opengl - Creating a mini map from a gl view

I want to create a map from some OpenGL code that I wrote.
In order to do that I thought about taking a screenshot of the top view of the GL scene, yet I can't seem to find out how to do that.
Any suggestions?
A similar problem has been solved in the OSG example code here.
First you need to set up your view so that you are looking at the center of the scene from the top:
osg::Vec3 center = scene->getBound().center();
double radius = scene->getBound().radius();
view->getCamera()->setViewMatrixAsLookAt( center - lookDir*(radius*3.0), center, up );
view->getCamera()->setProjectionMatrixAsPerspective(
30.0f, static_cast<double>(width)/static_cast<double>(height), 1.0f, 10000.0f );
Then you need to use some OS-specific API to do something similar to the logic below:
osgViewer::ScreenCaptureHandler* scrn = new osgViewer::ScreenCaptureHandler();
osgViewer::ScreenCaptureHandler::WriteToFile* captureOper = new osgViewer::ScreenCaptureHandler::WriteToFile(tmpStr.m_szBuffer, "png");
scrn->setCaptureOperation(captureOper);
scrn->captureNextFrame(*_viewer);
_viewer->frame();
Of course, if you are not using OSG you need to find the equivalent APIs (in whatever library you are using) to achieve the same task.
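If you are on plain OpenGL ES on Android (e.g. GLES20) rather than OSG, the capture step boils down to reading the framebuffer back with glReadPixels after rendering the top-down view. A rough sketch (it must run on the GL thread, once the frame has been drawn):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.graphics.Bitmap;
import android.graphics.Matrix;
import android.opengl.GLES20;

// Rough sketch: copy the current GL framebuffer into a Bitmap.
public static Bitmap captureFrame(int width, int height) {
    ByteBuffer buf = ByteBuffer.allocateDirect(width * height * 4);
    buf.order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);

    Bitmap bmp = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    buf.rewind();
    bmp.copyPixelsFromBuffer(buf);

    // glReadPixels returns rows bottom-up, so flip the image vertically.
    Matrix flip = new Matrix();
    flip.postScale(1f, -1f);
    return Bitmap.createBitmap(bmp, 0, 0, width, height, flip, true);
}
The returned Bitmap can then be scaled down and drawn as the mini map overlay.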

projecting unprojected radar images into osmdroid

I have written a radar weather app using osmdroid for map tiles, manually overlaying NOAA RIDGE radar data. Everything is working great except that the radar images are unprojected, while the OpenStreetMap tiles are in a Web Mercator projection. The weather lies within the bounds it should, but the data is distorted.
I see three ways to fix this (in order of preference) but am having trouble with all three:
1) find a source of radar data already projected in Mercator - hours of Googling later, I've found nothing
2) programmatically reproject the images right after I download them. Does anyone know a good API for this?
3) project them on the fly, perhaps with OpenLayers. I'm reading that OpenLayers can reproject, but can it be used on top of an osmdroid MapView?
Any ideas? Thanks for any help
Mike
GDAL is the way to go. There is no official Android build that I know of, but some people have been successful in getting it running on Android. For example, Nutiteq has a build in the libs folder of their AdvancedMap3D sample project. Put the contents of both armeabi folders in your project's libs folder and you should be able to access the GDAL packages.
Then take a look at the GDAL in Java page, and at the gdalinfo.java sample, to get a feel for how to load and examine the parts of a GDAL dataset. To reproject your dataset, you will do something along the lines of:
// Build the target projection (Mercator on WGS84) as WKT
SpatialReference sr = new SpatialReference();
sr.ImportFromProj4("+proj=merc +datum=WGS84");
String result[] = new String[1];
sr.ExportToPrettyWkt(result, 1);
String oldProjection = mDataset.getProjection();
String newProjection = result[0];
// Warp the source dataset into the new projection and save a compressed copy
Dataset newDataset = gdal.AutoCreateWarpedVRT(mDataset, oldProjection, newProjection, gdalconst.GRA_NearestNeighbour, 0.0);
Dataset savedDataset = mDriver.CreateCopy(outpath, newDataset, 0, new String[] { "COMPRESS=LZW", "PREDICTOR=2" }, null, null);
newDataset.delete();
savedDataset.delete();
You may need to make a few adjustments, but that should get you most of the way there.
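Note that the snippet above assumes mDataset, mDriver and outpath already exist. A minimal setup sketch; the paths and the GTiff output driver are illustrative assumptions:
import org.gdal.gdal.Dataset;
import org.gdal.gdal.Driver;
import org.gdal.gdal.gdal;

// Illustrative setup: register drivers, open the downloaded radar image,
// and pick an output driver for CreateCopy.
gdal.AllRegister();
Dataset mDataset = gdal.Open("/sdcard/radar/latest_radar.tif");  // illustrative path
Driver mDriver = gdal.GetDriverByName("GTiff");                  // illustrative output format
String outpath = "/sdcard/radar/latest_radar_mercator.tif";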

Graph/Plots on ANDROID

I am new to Android and trying to learn how to create and plot graphs in Android. I've come across two libraries:
GraphView
AndroidPlot.
My intent would be to receive some sound file and plot it on a graph. So, for this purpose, which library would be better? I also want to know where I can see the complete implementation or definitions of these libraries, i.e. the structure and code of the APIs used in them.
I have also tried some sample code available on the net, but I'm looking for more sophisticated code that I could develop in Eclipse ADT and thus learn something from.
My intent would be to receive some sound file and plot it on a graph
Neither library does this by default. The libraries are used to plot a graph given a set of data points. Getting the data points from the sound file is up to you.
So, for this purpose, which library would be better?
Either library should be fine once you get the data points.
I also want to know where I can see the complete implementation or definitions of these libraries, i.e. the structure and code of the APIs used in them.
Check out the sources for GraphView and AndroidPlot.
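As for getting the data points out of the sound file in the first place, here is a minimal sketch for plain 16-bit PCM WAV files. It assumes the canonical 44-byte header and mono, little-endian samples; real files can contain extra chunks, so treat this only as a starting point:
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: read 16-bit PCM samples from a WAV file.
// Assumes the canonical 44-byte header, mono, little-endian data.
public static List<Integer> readWavSamples(String path) throws Exception {
    List<Integer> samples = new ArrayList<>();
    try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
        in.skipBytes(44);                        // skip the WAV header
        byte[] buf = new byte[2];
        while (in.read(buf) == 2) {
            // Assemble a little-endian 16-bit sample.
            samples.add((int) (short) ((buf[1] << 8) | (buf[0] & 0xFF)));
        }
    }
    return samples;
}
The resulting samples can be downsampled and passed to either GraphView or AndroidPlot as data points.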
I have used AChartEngine a few times and it works great. I modified it without difficulty.
If you are drawing a simple line graph, you can also just use a Canvas to draw it.
Use AndroidPlot. This code draws the content of Vector (in this case filled with zeros); you only have to pass the data from the wav file into the vector. You can check this thread for the reading part:
Android: Reading wav file and displaying its values
plot = (XYPlot) findViewById(R.id.Grafica);
plot.setDomainStep(XYStepMode.INCREMENT_BY_VAL, 0.5);
plot.setRangeStep(XYStepMode.INCREMENT_BY_VAL, 1);
plot.getGraphWidget().getGridBackgroundPaint().setColor(Color.rgb(255, 255, 255));
plot.getGraphWidget().getDomainGridLinePaint().setColor(Color.rgb(255, 255, 255));
plot.getGraphWidget().getRangeGridLinePaint().setColor(Color.rgb(255, 255, 255));
plot.getGraphWidget().setDomainLabelPaint(null);
plot.getGraphWidget().setRangeLabelPaint(null);
plot.getGraphWidget().setDomainOriginLabelPaint(null);
plot.getGraphWidget().setRangeOriginLabelPaint(null);
plot.setBorderStyle(BorderStyle.NONE, null, null);
plot.getLayoutManager().remove(plot.getLegendWidget());
plot.getLayoutManager().remove(plot.getDomainLabelWidget());
plot.getLayoutManager().remove(plot.getRangeLabelWidget());
plot.getLayoutManager().remove(plot.getTitleWidget());
//plot.getBackgroundPaint().setColor(Color.WHITE);
//plot.getGraphWidget().getBackgroundPaint().setColor(Color.WHITE);
// Check these boundaries; they weren't chosen for audio files.
plot.setRangeBoundaries(-25, 25, BoundaryMode.FIXED);

InicializarLasVariables();
// Fill the vector with zeros as placeholder data.
for (int i = 0; i < min / 11; i++) {
    DatoY = 0;
    Vector.add(DatoY);
}

XYSeries series = new SimpleXYSeries(Vector, SimpleXYSeries.ArrayFormat.Y_VALS_ONLY, "");
LineAndPointFormatter seriesFormat = new LineAndPointFormatter(Color.rgb(0, 0, 0), 0x000000, 0x000000, null);
plot.clear();
plot.addSeries(series, seriesFormat);

Implement page curl on android?

I was surfing the net looking for a nice effect for turning pages on Android and there just doesn't seem to be one. Since I'm learning the platform, it seemed like a nice thing to be able to do.
I managed to find a page here: http://wdnuon.blogspot.com/2010/05/implementing-ibooks-page-curling-using.html
- (void)deform
{
    Vertex2f  vi;   // Current input vertex
    Vertex3f  v1;   // First stage of the deformation
    Vertex3f *vo;   // Pointer to the finished vertex
    CGFloat R, r, beta;

    for (ushort ii = 0; ii < numVertices_; ii++)
    {
        // Get the current input vertex.
        vi = inputMesh_[ii];
        // Radius of the circle circumscribed by vertex (vi.x, vi.y) around A on the x-y plane
        R = sqrt(vi.x * vi.x + pow(vi.y - A, 2));
        // Now get the radius of the cone cross section intersected by our vertex in 3D space.
        r = R * sin(theta);
        // Angle subtended by arc |ST| on the cone cross section.
        beta = asin(vi.x / R) / sin(theta);

        // *** MAGIC!!! ***
        v1.x = r * sin(beta);
        v1.y = R + A - r * (1 - cos(beta)) * sin(theta);
        v1.z = r * (1 - cos(beta)) * cos(theta);

        // Apply a basic rotation transform around the y axis to rotate the curled page.
        // These two steps could be combined through simple substitution, but are left
        // separate to keep the math simple for debugging and illustrative purposes.
        vo = &outputMesh_[ii];
        vo->x = (v1.x * cos(rho) - v1.z * sin(rho));
        vo->y = v1.y;
        vo->z = (v1.x * sin(rho) + v1.z * cos(rho));
    }
}
which gives example code (above) for iPhone, but I have no idea how I would go about implementing it on Android. Could any of the math gods out there please help me out with how to implement this in Android Java?
Is it possible using the native drawing APIs, or would I have to use OpenGL? Could I mimic the behaviour somehow?
Any help would be appreciated. Thanks.
EDIT:
I found a Bitmap Mesh example in the Android API demos: http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/BitmapMesh.html
Maybe someone could help me with an equation to simply fold the top right corner inward diagonally across the page, to create a similar effect that I can later apply shadows to in order to give it more depth?
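For reference, a fairly direct Java transliteration of the deform() loop above could look like the sketch below. The mesh layout and the names inputMesh, outputMesh, numVertices, A, theta and rho are assumptions mirroring the blog post, not an existing Android API:
// Sketch: Java port of the iOS deform() loop. inputMesh holds (x, y) pairs,
// outputMesh holds (x, y, z) triples; A, theta and rho are the cone/rotation
// parameters from the blog post. All names are illustrative.
void deform(float[] inputMesh, float[] outputMesh, int numVertices,
            double A, double theta, double rho) {
    for (int i = 0; i < numVertices; i++) {
        double x = inputMesh[2 * i];
        double y = inputMesh[2 * i + 1];

        // Radius of the circle circumscribed by (x, y) around A on the x-y plane.
        double R = Math.sqrt(x * x + Math.pow(y - A, 2));
        // Radius of the cone cross-section intersected by the vertex in 3D space.
        double r = R * Math.sin(theta);
        // Angle subtended by the arc on the cone cross-section.
        double beta = Math.asin(x / R) / Math.sin(theta);

        double v1x = r * Math.sin(beta);
        double v1y = R + A - r * (1 - Math.cos(beta)) * Math.sin(theta);
        double v1z = r * (1 - Math.cos(beta)) * Math.cos(theta);

        // Rotate the curled page around the y axis.
        outputMesh[3 * i]     = (float) (v1x * Math.cos(rho) - v1z * Math.sin(rho));
        outputMesh[3 * i + 1] = (float) v1y;
        outputMesh[3 * i + 2] = (float) (v1x * Math.sin(rho) + v1z * Math.cos(rho));
    }
}
The resulting vertices could then be fed to an OpenGL ES vertex buffer, or projected to 2D for use with Canvas.drawBitmapMesh() as in the BitmapMesh demo.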
I'm experimenting with a page curl effect on Android using OpenGL ES at the moment. It's quite a sketch, actually, but maybe it gives some idea of how to implement page curl for your needs, if you're interested in a 3D page flip implementation.
As for the formula you're referring to, I tried it out and didn't like the result too much. I'd say it simply doesn't fit a small screen very well, so I started to hack together a simpler solution.
Code can be found here:
https://github.com/harism/android_page_curl/
While writing this I'm in the midst of deciding how to implement "fake" soft shadows, and whether to create a proper application to show off this page curl effect. Also, this is pretty much one of the very few OpenGL implementations I've ever done, so it shouldn't be taken too much as a proper example.
I just created an open source project which features a page curl simulation in 2D using the native canvas: https://github.com/moritz-wundke/android-page-curl
I'm still working on it to add adapters and such to make it usable as a standalone view.
EDIT: Links updated.
EDIT: Missing files have been pushed to the repo.
I'm pretty sure you'd have to use OpenGL for a nice effect. The basic UI framework's capabilities are quite limited; you can only do basic transformations (alpha, translate, rotate) on Views using animations.
It might be possible to mimic something like that in 2D using a FrameLayout with a custom View in it, though.
