GMaps: Crazy values for computeDistanceBetween in Android?

I'm using google.maps.geometry.spherical.computeDistanceBetween() to compute the distance between relatively close points (10-30 meters). This works perfectly on Linux (Chrome and Firefox), but sometimes gives me crazy results on Android. One case I got was this:
var p1 = new google.maps.LatLng(-22.960584,-43.206687999999986);
var p2 = new google.maps.LatLng(-22.960584,-43.206939000000034);
alert(google.maps.geometry.spherical.computeDistanceBetween(p1,p2));
It should give 25 meters or so, yet once I got hundreds of thousands of meters. It doesn't always return crazy values, just "sometimes", perhaps related to doing lots of computations?
Is this a well-known bug? If it is, I cannot use this method and will have to write my own.
Thanks,
L.

As far as I can tell, this is an Android bug. I think this could very well explain why in the MyTracks app I usually get random points on other continents and huge distances.
I computed the distance with the method below, and now I always get correct values. If this is what it looks like, it is a very serious bug for any Android app that uses this function.
In case anyone cares, this is the distance function between two LatLng points p and q:
function dist(p, q) {
    var c = Math.PI / 180;
    // Google (gives randomly wrong results in Android!)
    //return google.maps.geometry.spherical.computeDistanceBetween(p, q);
    // Chord
    //return 9019995.5222 * Math.sqrt((1 - Math.cos(c*(p.lat()-q.lat())))
    //    + (1 - Math.cos(c*(p.lng()-q.lng()))) * Math.cos(c*p.lat()) * Math.cos(c*q.lat()));
    // Taylor expansion of the chord
    return 111318.845 * Math.sqrt(Math.pow(p.lat()-q.lat(), 2)
        + Math.pow(p.lng()-q.lng(), 2) * Math.cos(c*p.lat()) * Math.cos(c*q.lat()));
}
Notice that these are computations for the chord, that is, the distance in R^3, not the geodesic distance on the sphere. That is certainly more than enough for hiking/car-travel computations using GPS. I ended up using the Taylor expansion since it is precise to 1/10 mm at these distances and easier on the CPU.
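For the record, here is where the constant comes from (my own back-of-the-envelope check, not from the Maps documentation). Applying 1 - cos x ≈ x²/2 to the chord formula gives, for coordinates in degrees,

$$d \approx \frac{\pi R}{180}\sqrt{(\Delta\mathrm{lat})^2 + (\Delta\mathrm{lng})^2\cos\varphi_p\cos\varphi_q}, \qquad \frac{\pi R}{180} \approx 111318.845\ \mathrm{m/deg}\ \text{for}\ R \approx 6378.1\ \mathrm{km}.$$

For the two points in the question (Δlat = 0, Δlng ≈ 0.000251°, lat ≈ -22.96°) this yields

$$d \approx 111318.845 \times 0.000251 \times \cos(22.96^\circ) \approx 25.7\ \mathrm{m},$$

which is the expected 25 meters or so.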

Related

Generate and export point cloud from Project Tango

After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports it to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's GitHub.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values; the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3 from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine; no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what the PoseData exactly does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get this way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then write those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that, I call mExstrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // camera to IMU transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics, this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds, you get a 3D representation of your room.
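To make the wiring explicit, here is a minimal sketch of how one might call this from the depth callback. This sketch is mine, not from the example: it assumes fields mTango and mExstrinsics as set up above, the pointCloud list from the question, and START_OF_SERVICE as the base frame:
@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    // Pose of the device at the exact time the depth frame was captured,
    // expressed with respect to the START_OF_SERVICE base frame.
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE);
    TangoPoseData cloudPose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
    // Bring the raw points into the base frame before storing them.
    pointCloud.addAll(normalize(xyzIj, cloudPose, mExstrinsics));
}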
There is another way to do this with rotation matrices, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for real-time 3D reconstruction.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI. (In fact, I'm stuck there at the moment.)
Drifting
There still is a problem when I turn around with the device. It seems that the point cloud spreads out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose that together cause a big error in your pose relative to the world. For instance, if you take your Tango device, walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet or whatever you like, you'll notice that the tablet never returns to its starting point, because it drifts away.
The solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers lots of points and gives you a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS as usual.
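For instance, "as usual" here means passing the list when connecting the listener. A quick sketch of mine (mTango and the empty callback bodies are placeholders, not code from the answer):
mTango.connectListener(FRAME_PAIRS, new Tango.OnTangoUpdateListener() {
    @Override
    public void onPoseAvailable(TangoPoseData pose) {
        // pose is now expressed wrt the AREA_DESCRIPTION base frame
    }
    @Override
    public void onXyzIjAvailable(TangoXyzIjData xyzIj) { /* ... */ }
    @Override
    public void onFrameAvailable(int cameraId) { }
    @Override
    public void onTangoEvent(TangoEvent event) { }
});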
Then you have to modify your TangoConfig to tell Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover by itself.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of those spreads.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key does not exist anymore (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.

SKPOITrackerManager not working as expected

I am using SKPOITrackerManager to track self-defined trackable POIs in navigation mode. The ArrayList of SKTrackablePOI objects has many elements placed near my route, but only one of them is tracked by onReceivedPOIs(): the method returns a one-element list, though it is called 5 to 10 times for exactly that one POI. I am sorry that I cannot post my complete code here due to a project agreement, but here are my settings in an implementation of the SKPOITrackerListener interface:
public void startPOITracking() {
    poiTrackerManager = new SKPOITrackerManager(this);
    SKTrackablePOIRule skTrackablePOIRule = new SKTrackablePOIRule();
    skTrackablePOIRule.setAerialDistance(15000);
    skTrackablePOIRule.setRouteDistance(15000);
    skTrackablePOIRule.setNumberOfTurns(15000);
    skTrackablePOIRule.setMaxGPSAccuracy(15000);
    skTrackablePOIRule.setEliminateIfUTurn(false);
    skTrackablePOIRule.setMinSpeedIgnoreDistanceAfterTurn(12000);
    skTrackablePOIRule.setMaxDistanceAfterTurn(150000);
    poiTrackerManager.startPOITrackerWithRadius(100, 0.5);
    poiTrackerManager.setRuleForPOIType(SKTrackablePOIType.INVALID, skTrackablePOIRule);
    poiTrackerManager.addWarningRulesforPoiType(SKTrackablePOIType.INVALID);
}
I have set the limits to very high values within SKTrackablePOIRule and I still get only one POI. I can even comment out the line poiTrackerManager.setRuleForPOIType(SKTrackablePOIType.INVALID, skTrackablePOIRule); and I still receive only a single POI. Maybe someone can help me understand my problem.
Here is something I used in the past:
poiTrackingManager = new SKPOITrackerManager(this);
SKTrackablePOIRule rule = new SKTrackablePOIRule();
rule.setAerialDistance(5000); // main constraint: all POIs within 5000 m aerial distance should be detected
rule.setNumberOfTurns(100); // has to be increased, otherwise some points will be disregarded
rule.setRouteDistance(10000); // has to be increased, as the real road route will be longer than the aerial distance
rule.setMinSpeedIgnoreDistanceAfterTurn(20); // decrease this to evaluate all candidates
rule.setMaxDistanceAfterTurn(10000); // increase this to make sure we don't exclude any candidates
rule.setEliminateIfUTurn(false); // setting this to true (the default) excludes points that require a U-turn to reach
rule.setPlayAudioWarning(false);
Note: I'm not certain what the max/min values for these parameters are, as I've seen some issues when they are too high (they do affect the routing algorithm, more precisely how the road graph is explored, which could explain why it malfunctions at high values). I would start with conservative values and then gradually increase them.
For startPOITrackerWithRadius I would use different values: if you set the radius to 100 (meters), you greatly reduce the number of POIs the SDK is able to analyze (even if the rules are good, the POIs might not be analyzed because they don't fall within the "radius", i.e. the aerial distance, around your current position):
poiTrackingManager.startPOITrackerWithRadius(1500, 0.5);
Also see http://sdkblog.skobbler.com/detecting-tracking-pois-in-your-way/ for more insight into how the POITracker works.

Box2d - Is there a way to check whether there is a body at a specific location?

In my game, a body is randomly relocated on the screen after the user does something. However, if the object is relocated on top of another body, then both are pushed slightly (to make room!). I would like to check the location of the randomly generated coordinates first, so that the relocation only takes place if the position is free (within a certain diameter anyway).
Something like location.hasBody(). There surely must be a function for this that I haven't found. Thanks!
There is no way to query the world with a point and get the body directly, but what you can do is query the world with a small box around the point. The QueryCallback below (the standard Box2D testbed helper) keeps the first fixture whose shape actually contains the point:
// Callback that remembers the first fixture actually containing the point.
class QueryCallback : public b2QueryCallback {
public:
    QueryCallback(const b2Vec2& point) : m_point(point), m_fixture(NULL) {}
    bool ReportFixture(b2Fixture* fixture) {
        if (fixture->TestPoint(m_point)) {
            m_fixture = fixture;
            return false; // found one, stop the query
        }
        return true; // keep looking
    }
    b2Vec2 m_point;
    b2Fixture* m_fixture;
};
// p is the b2Vec2 position to test: make a small box around it.
b2AABB aabb;
b2Vec2 d;
d.Set(0.001f, 0.001f);
aabb.lowerBound = p - d;
aabb.upperBound = p + d;
// Query the world for overlapping shapes.
QueryCallback callback(p);
m_world->QueryAABB(&callback, aabb);
if (callback.m_fixture)
{
    // a fixture was found at that position
}
Solution originally posted here: Cocos2d-iphone forum
Not sure if Box2D includes a 'clean' way to do it. I'd just manually iterate over all bodies in the world just before adding a new one, and manually check whether their positions plus radius/size overlap with the new body's shape.
try
b2Vec2 vec = body->GetPosition(); // in meters
or
CGPoint pos = ccp(body->GetPosition().x * PTM_RATIO, body->GetPosition().y * PTM_RATIO); // in pixels

Assigning a rare drop chance to object in simple Actionscript 3.0 game

I am pretty new to all this Flash CS6 ActionScript 3.0 stuff, and I was hoping to find out some different ways to apply a rare drop chance to a movie clip array in AS3. I have random-chance code that works pretty well for enemies, as they fall more often; however, I'd like hearts to fall rarely, for my player to catch and gain a life.
Here is the code I have so far; it drops far too many hearts. I've tried fiddling with the numbers but I only seem to make it worse. Any suggestions?
function makeHeart():void
{
    var chance:Number = Math.floor(Math.random() * 60);
    if (chance <= 1 + level)
    {
        var tempHeart:MovieClip = new Heart();
        tempHeart.speed = 3;
        tempHeart.x = Math.round(Math.random() * 800);
        tempHeart.cacheAsBitmapMatrix = tempHeart.transform.concatenatedMatrix;
        tempHeart.cacheAsBitmap = true;
        trace("tempHeart");
        addChild(tempHeart);
        hearts.push(tempHeart);
    }
}
Well, either this is too simple a question, or I just didn't understand it.
If I did understand it correctly, here's the way out:
Let's say you want a 1% chance of a falling heart. Since you're using the Number class for your chance variable, and Math.random() returns a Number as well, you don't need any conversions.
Math.random() returns a Number (float) between 0 and 1, not including 1,
so your code for a 1% chance could look something like this:
var chance:Number = Math.random();
if (chance <= 0.01)
{
    // spawn a heart here
}
And yes, since you call fewer unneeded functions, it also runs faster.
Math.random() gives a very precise number, far beyond 1/100, so you can use a much smaller number for the chance if you want. Here's one value returned from Math.random():
Math.random(); // 0.9044877095147967
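If you still want the chance to grow with level, as the original makeHeart() did, the same idea extends to something like chance <= 0.01 + level * 0.002 (the per-level step of 0.002 is an arbitrary value of mine to tune, not from the question).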

Detect if the Person falls down

I am creating an Android app where I need to detect whether a person has fallen down. I know this question has been asked and answered in other forums (use vector mathematics), but I am not getting accurate results out of it.
Below is my code to detect the fall:
@Override
public void onSensorChanged(SensorEvent arg0) {
    if (arg0.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        double xx = arg0.values[0];
        double yy = arg0.values[1];
        double zz = arg0.values[2];
        // Magnitude of the acceleration vector.
        double aaa = Math.round(Math.sqrt(Math.pow(xx, 2)
                + Math.pow(yy, 2)
                + Math.pow(zz, 2)));
        // Possible free fall: magnitude drops to 6 m/s^2 or below.
        if (aaa <= 6.0) {
            min = true;
        }
        if (min == true) {
            i++;
            // Possible landing: magnitude spikes to 13.5 m/s^2 or above.
            if (aaa >= 13.5) {
                max = true;
            }
        }
        if (min == true && max == true) {
            Toast.makeText(FallDetectionActivity.this, "FALL DETECTED!!!!!", Toast.LENGTH_LONG).show();
            i = 0;
            min = false;
            max = false;
        }
        // Give up if no spike follows the dip within a few samples.
        if (i > 4) {
            i = 0;
            min = false;
            max = false;
        }
    }
}
To explain the above code: I take the vector magnitude and check whether the value drops to 6 or below (during the fall) and then suddenly rises above 13.5 (on landing) to confirm the fall.
Now, I was told in the forums that if the device is still, the vector magnitude will be 9.8; during a fall it should be close to 0, and it should reach around 20 on landing. This doesn't seem to happen in my case. Can anybody suggest where I am going wrong?
There is a guy who developed an Android app for that. Maybe you can get some information from his site: http://ww2.cs.fsu.edu/~sposaro/iFall/. He also wrote an article explaining how he detects the fall. It is really interesting; you should check it out!
Link for the paper: http://ww2.cs.fsu.edu/~sposaro/publications/iFall.pdf
Summing up, the fall detection is based on the resultant of the X-Y-Z acceleration. Based on this value:
A fall generally starts with a free-fall period, making the resultant drop significantly below 1 g.
On impact with the ground, there is a peak in the amplitude of the resultant, with values higher than 3 g.
After that, if the person cannot move due to the fall, the resultant remains close to 1 g.
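A rough sketch of that three-phase idea in code (my own illustration based on the description above, not code from the paper; the exact thresholds, timeouts and tolerance are assumptions to tune on real data):
// Three-phase fall heuristic: free fall, impact, then stillness.
// magnitude is the acceleration resultant in m/s^2; all thresholds are assumed.
private static final double G = SensorManager.STANDARD_GRAVITY; // ~9.81 m/s^2
private long freeFallTime = -1, impactTime = -1;

private boolean checkFall(double magnitude, long nowMs) {
    if (magnitude < 0.6 * G) {
        freeFallTime = nowMs;                     // phase 1: well below 1 g
    } else if (magnitude > 3.0 * G
            && freeFallTime > 0 && nowMs - freeFallTime < 1000) {
        impactTime = nowMs;                       // phase 2: impact spike above 3 g
    } else if (impactTime > 0 && nowMs - impactTime > 2000) {
        // phase 3: roughly 1 g for a while after the impact -> lying still
        boolean still = Math.abs(magnitude - G) < 0.15 * G;
        freeFallTime = -1;
        impactTime = -1;                          // reset the state machine
        return still;
    }
    return false;
}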
The following will happen if the person / phone falls down:
the absolute acceleration vector value goes to 0 (with some noise, of course)
there will be a fair spike in the absolute vector value on landing (up to the maximal value provided by the accelerometer)
When the phone is immobile, you have a vector whose modulus is earth gravity, pointing up.
Your code is basically correct, but I would use some averaging, because the accelerometers used in phones are cheap crap: noisy and lacking precision.
Adding averaging to your signal means a moving average. It depends on your window size. For example, say I have a vector with the numbers 1, 2, 3, 4, 5, 6, and my window size is 2. The moving average takes every two consecutive numbers and averages them: (1+2)/2, then moves on to the next pair, (2+3)/2, and so on.
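A minimal sketch of such a smoothing step (my own illustration; the window size of 5 is an arbitrary assumption). Feed it the raw magnitude and use the returned value in place of aaa above:
// Simple moving average over the last WINDOW magnitude samples.
private static final int WINDOW = 5;            // assumed size, tune on real data
private final double[] samples = new double[WINDOW];
private int count = 0;

private double smoothedMagnitude(double magnitude) {
    samples[count % WINDOW] = magnitude;        // overwrite the oldest sample
    count++;
    int n = Math.min(count, WINDOW);            // fewer samples at startup
    double sum = 0;
    for (int k = 0; k < n; k++) {
        sum += samples[k];
    }
    return sum / n;
}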
