Context
I need to get the current speed of the aircraft.
Efforts
Looking at the documentation, I can see that there is a method, WaypointMissionOperator.getAutoFlightSpeed, which should give me that info.
Problem
While implementing the code, I found that this method is missing. Was it moved, renamed, or removed?
Environment
OS: Android
DJI SDK version: 4.13.1
Have a look at the FlightControllerState class: it contains the current status of pretty much all of the aircraft's components.
Specifically, register a callback via FlightController.setStateCallback(). The state it delivers includes the velocity components you want: getVelocityX, getVelocityY and getVelocityZ.
There are also keys for each of these; the documentation gives the details.
Given the X, Y and Z velocities, the speed is simply the magnitude of the velocity vector:
double speed = Math.sqrt(vx * vx + vy * vy + vz * vz);
where vx, vy and vz are the values returned by getVelocityX(), getVelocityY() and getVelocityZ().
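For concreteness, here is a minimal sketch of how that fits together, assuming you already have a connected aircraft object; the names aircraft and speed are mine, while the callback and getters are from the DJI documentation:
FlightController flightController = aircraft.getFlightController();
flightController.setStateCallback(new FlightControllerState.Callback() {
    @Override
    public void onUpdate(FlightControllerState state) {
        // Velocities are in m/s, so the magnitude is the current speed in m/s.
        float vx = state.getVelocityX();
        float vy = state.getVelocityY();
        float vz = state.getVelocityZ();
        double speed = Math.sqrt(vx * vx + vy * vy + vz * vz);
        // use `speed` here
    }
});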
Related
After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports this to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's github.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values, i.e. the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3 objects from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
int size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = pointCloud.get(i).x + " "
            + pointCloud.get(i).y + " "
            + pointCloud.get(i).z + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine: no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud, everything is messed up: the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what PoseData actually does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get that way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved it:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then express those coordinates wrt the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example I linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // IMU to RGB camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then, when you get the point cloud, you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud wrt your base frame of reference.
If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.
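If it helps, here is roughly how I call normalize() from the point cloud callback; mExtrinsics is the DeviceExtrinsics returned by setupExtrinsics() above, and pclManager stands in for whatever accumulator you use (I'm borrowing the name from your question):
@Override
public void onXyzIjAvailable(TangoXyzIjData xyzIj) {
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE);
    // Ask for the pose at the exact timestamp of this cloud, so the points
    // are transformed with the pose they were captured at.
    TangoPoseData cameraPose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
    pclManager.addAll(normalize(xyzIj, cameraPose, mExtrinsics));
}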
There is another way to do this with rotation matrices, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for real-time 3D reconstruction.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI (in fact, I'm stuck there at the moment).
Drifting
There is still a problem when I turn around with the device: the point cloud seems to spread out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose that together add up to a large error in your pose relative to the world. For instance, if you take your Tango device, walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet, you'll notice that the tablet never returns to its starting point, because it is drifting away.
Solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers a lot of ground and gives you a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell your Tango to estimate its pose not wrt where it was when you launched the app, but wrt some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use FRAME_PAIRS as usual.
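For reference, this is how FRAME_PAIRS is handed to the listener; the callback signatures are from the Tango Java API, the bodies are just placeholders:
mTango.connectListener(FRAME_PAIRS, new Tango.OnTangoUpdateListener() {
    @Override
    public void onPoseAvailable(TangoPoseData pose) {
        // pose is now expressed wrt COORDINATE_FRAME_AREA_DESCRIPTION,
        // i.e. a fixed point in the area, not where the app was launched.
    }
    @Override
    public void onXyzIjAvailable(TangoXyzIjData xyzIj) { /* point clouds as before */ }
    @Override
    public void onFrameAvailable(int cameraId) { }
    @Override
    public void onTangoEvent(TangoEvent event) { }
});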
Then you have to modify your TangoConfig to tell Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (Area Description File).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key no longer exists (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.
Currently, I am adding a list of annotations to a mapview with code similar to the following:
// Add to map view
SKAnnotation annotation = new SKAnnotation(i++);
annotation.getLocation().setLongitude(result.longitude);
annotation.getLocation().setLatitude(result.latitude);
annotation.setMininumZoomLevel(1);
annotation.setAnnotationType(SKAnnotation.SK_ANNOTATION_TYPE_PURPLE);
mapView.addAnnotation(annotation, SKAnimationSettings.ANIMATION_POP_OUT);
Yet whenever I view the annotations on the map, they disappear once I zoom out past zoom level 4.0. Looking at the docs for the Annotation class (and confirming in the code), I see that the default minimum zoom level is 4, yet my call to .setMininumZoomLevel seems to be ignored.
Does anyone have any insight into what is happening or if this might be a known bug within the SDK?
I'm using Skobbler 2.5 on Android.
Thanks for any help in the matter!
Based on Ando's comment on the original question and the documentation referenced here, I updated the code to use the workaround he described, which allows annotations to show down to zoom level 2.
Original code:
SKAnnotation annotation = new SKAnnotation(i++);
annotation.getLocation().setLongitude(result.longitude);
annotation.getLocation().setLatitude(result.latitude);
annotation.setMininumZoomLevel(1); // Note: this does not work
annotation.setAnnotationType(SKAnnotation.SK_ANNOTATION_TYPE_PURPLE);
mapView.addAnnotation(annotation, SKAnimationSettings.ANIMATION_POP_OUT);
Updated code:
SKAnnotation annotation = new SKAnnotation(i++);
annotation.getLocation().setLongitude(result.longitude);
annotation.getLocation().setLatitude(result.latitude);
annotation.setMininumZoomLevel(2);
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);
if (metrics.densityDpi < DisplayMetrics.DENSITY_HIGH) {
    annotation.setImagePath(SKMaps.getInstance().getMapInitSettings().
            getMapResourcesPath() + "/.Common/icon_greypin#2x.png");
    // set the size of the image in pixels
    annotation.setImageSize(128);
} else {
    annotation.setImagePath(SKMaps.getInstance().getMapInitSettings().
            getMapResourcesPath() + "/.Common/icon_greypin#3x.png");
    // set the size of the image in pixels
    annotation.setImageSize(256);
}
mapView.addAnnotation(annotation, SKAnimationSettings.ANIMATION_POP_OUT);
A couple of things to note:
.setImagePath() and .setImageSize() are both deprecated in the latest SDK, even though they're still referenced in the documentation above. I'm not sure whether that means there is another way to display images via an absolute path, or whether this functionality is simply being phased out.
In my particular example we're using the purple pin to display annotations, but the absolute-path file name for that pin is actually icon_greypin. The other pin image files appear to be named appropriately, however.
Anyway, this served as a solution to my particular problem until the SDK is updated, so I'm marking it as the answer; I hope it helps someone else. Thanks to Ando for the step in the right direction!
I am using SKPOITrackerManager to track self-defined trackable POIs in navigation mode. The ArrayList of SKTrackablePOI objects has many elements placed near my route, but only one of them is tracked by onReceivedPOIs(): the method returns a one-element list, and it is called 5 to 10 times for exactly that one POI. I'm sorry that I cannot post my complete code here due to a project agreement, but here are my settings in an implementation of the SKPOITrackerListener interface:
public void startPOITracking() {
    poiTrackerManager = new SKPOITrackerManager(this);
    SKTrackablePOIRule skTrackablePOIRule = new SKTrackablePOIRule();
    skTrackablePOIRule.setAerialDistance(15000);
    skTrackablePOIRule.setRouteDistance(15000);
    skTrackablePOIRule.setNumberOfTurns(15000);
    skTrackablePOIRule.setMaxGPSAccuracy(15000);
    skTrackablePOIRule.setEliminateIfUTurn(false);
    skTrackablePOIRule.setMinSpeedIgnoreDistanceAfterTurn(12000);
    skTrackablePOIRule.setMaxDistanceAfterTurn(150000);
    poiTrackerManager.startPOITrackerWithRadius(100, 0.5);
    poiTrackerManager.setRuleForPOIType(SKTrackablePOIType.INVALID, skTrackablePOIRule);
    poiTrackerManager.addWarningRulesforPoiType(SKTrackablePOIType.INVALID);
}
I have set the limits to very high values within SKTrackablePOIRule and I still get only one POI. I can even comment out the line poiTrackerManager.setRuleForPOIType(SKTrackablePOIType.INVALID, skTrackablePOIRule); and I still receive only a single POI. Maybe someone can help me understand the problem.
Here is something I used in the past:
poiTrackingManager = new SKPOITrackerManager(this);
SKTrackablePOIRule rule = new SKTrackablePOIRule();
rule.setAerialDistance(5000); // this is our main constraint: all POIs within 5000 m aerial distance should be detected
rule.setNumberOfTurns(100); // this has to be increased – otherwise some points will be disregarded
rule.setRouteDistance(10000); // this has to be increased, as the real road route will be longer than the aerial distance
rule.setMinSpeedIgnoreDistanceAfterTurn(20); // decrease this to evaluate all candidates
rule.setMaxDistanceAfterTurn(10000); // increase this to make sure we don't exclude any candidates
rule.setEliminateIfUTurn(false); // setting this to true (the default) excludes points that require a U-turn to reach
rule.setPlayAudioWarning(false);
Note: I'm not certain what the max/min values for these parameters are, as I've seen some issues when they are too high (they do affect the routing algorithm, more precisely how the road graph is explored, which could explain why it might malfunction at high values). I would say you should start with conservative values and then gradually increase them.
For startPOITrackerWithRadius I would use different values: if you set the radius to 100 (meters), this greatly reduces the number of POIs the SDK can analyze (even if the rules are good, POIs won't be analyzed if they don't fall within the "radius" (aerial distance) around your current position):
poiTrackingManager.startPOITrackerWithRadius(1500, 0.5);
Also see http://sdkblog.skobbler.com/detecting-tracking-pois-in-your-way/ for more insight into how the POI tracker works.
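One more thing worth double-checking: the tracker can only evaluate POIs it has been given. From memory (so please verify against the javadoc of your SDK version), I hand the candidate list to the manager with setTrackedPOIs after starting the tracker; trackablePOIs below is your existing ArrayList<SKTrackablePOI>:
poiTrackingManager.startPOITrackerWithRadius(1500, 0.5);
poiTrackingManager.setRuleForPOIType(SKTrackablePOIType.INVALID, rule);
poiTrackingManager.addWarningRulesforPoiType(SKTrackablePOIType.INVALID);
poiTrackingManager.setTrackedPOIs(SKTrackablePOIType.INVALID, trackablePOIs);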
With Unity, the CardboardHead script is added to the main camera and that handles everything quite nicely, but I need to be able to "recenter" the view on demand. The only option I see so far is to rotate the entire scene, yet this seems like something the SDK would address first-hand, and I can't find anything in the docs.
With Oculus Mobile SDK (GearVR), it would be OVRCamera.ResetCameraPositionOrientation(Vector3.one, Vector3.zero, Vector3.up, Vector3.zero); though they handle it nicely each time the viewer is put on so it's rarely needed there.
There's a "target" parameter on the CardboardHead that lets you use to another gameobject as a reference for rotation. Or you can use a dummy parent gameobject. Either way, when you want to recenter, you set this reference object's rotation so that the CardboardHead is now pointing forward. Add this function to an script on the CardboardHead (or just add it into that script):
public void Recenter() {
    Transform reference = target != null ? target : transform.parent;
    if (reference != null) {
        reference.rotation = Quaternion.Inverse(transform.rotation) * reference.rotation;
        // next line is optional -- try it with and without
        reference.rotation = Quaternion.FromToRotation(reference.up, Vector3.up) * reference.rotation;
    }
}
Cardboard.SDK.Recenter(); should do the trick.
From the release notes: "Recenter orientation: Added Recenter() function to Cardboard.SDK, which resets the head tracker so the phone's current heading becomes the forward direction (+Z axis)."
I couldn't find this in the API/SDK docs, but it's in the release notes for the v0.4.5 update.
You can rotate the Cardboard Main to point in a certain direction.
This is what worked for me when I wanted the app to start up pointing a certain way. Since the CardboardHead points at Vector3.zero on startup if no target is assigned, I run a function during Start() on the CardboardMain that points it in the direction I want.
Of course, if you're already rotating CardboardMain for some other reason, it may be possible to use this same method by creating a parent of the CardboardHead (child of CardboardMain) and doing the same thing.
This question is a bit old, but for Google VR SDK 1.50+ you can do:
transform.eulerAngles = new Vector3(newRot.x, newRot.y, newRot.z);
UnityEngine.VR.InputTracking.Recenter();
Also, if you don't want to get confused, you need to grab the GvrEditorEmulator instance and recenter it as well:
#if UNITY_EDITOR
gvrEditorEmulator.Recenter();
#endif
Recentering the GvrEditorEmulator doesn't seem to work very well at the moment, though; if you disable it, you'll see that the recentering works for the main camera.
Does Android have a similar method to the iPhone AVAudioPlayer's averagePowerForChannel?
I want to get an average reading of the amplitude as a value.
I don't think there is a built-in function but you can calculate it yourself.
To do this, calculate the root-mean-square (RMS) of a block of consecutive samples:
rms = sqrt((sample0^2 + sample1^2 + sample2^2 + sample3^2) / numberOfSamples)
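As a sketch, that calculation in Java for a block of 16-bit PCM samples (e.g. a short[] buffer filled by android.media.AudioRecord.read(); the method name rmsAmplitude is mine) looks like:
static double rmsAmplitude(short[] samples, int numberOfSamples) {
    double sumOfSquares = 0;
    for (int i = 0; i < numberOfSamples; i++) {
        // Widen to double before squaring to avoid short/int overflow.
        sumOfSquares += (double) samples[i] * samples[i];
    }
    return Math.sqrt(sumOfSquares / numberOfSamples);
}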
The following links should be helpful, as they contain the full source code of two excellent Android sound-related projects:
Ringdroid
Rehearsal Assistant