I'm facing the following problem while developing a Tango app, and I'm not sure whether I'm on the right track.
What I'm trying to achieve:
The user takes a picture. In the background the app saves the current point cloud and pose to persistent storage.
The server receives that image, does some magic processing behind the scenes, and sends an (x, y) coordinate back to the app (async and unrelated to the current Tango session).
The app is restarted, a new Tango session is started, and a 3D object is shown at (x, y) using the persisted copy of the point cloud and pose.
I expect to be able to use these parameters - (x, y), point cloud and pose - in the following algorithm and get back a pose, i.e. a Rajawali object that RajawaliRenderer knows how to render.
Tango is initialized according to the following coordinate frame pair:
TANGO_WORLD_BASE_COORDINATE_FRAME = new TangoCoordinateFramePair(
        TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
        TangoPoseData.COORDINATE_FRAME_DEVICE
);
Plane fit using intersection point -
private void convertByIntersectionPoint(float x, float y,
        TangoPointCloudData tangoPointCloudData, TangoPoseData devicePose,
        TangoPoseData colorTdepthPose) {
    if (tangoPointCloudData != null) {
        TangoSupport.IntersectionPointPlaneModelPair intersectionPointPlaneModelPair =
                TangoSupport.fitPlaneModelNearPoint(tangoPointCloudData,
                        colorTdepthPose, x, y);
        if (devicePose.statusCode == TangoPoseData.POSE_VALID) {
            mRenderer.updateObjectPose(
                    intersectionPointPlaneModelPair.intersectionPoint,
                    intersectionPointPlaneModelPair.planeModel,
                    devicePose);
        }
    }
}
It throws a TangoErrorException in TangoSupport.fitPlaneModelNearPoint.
To my understanding, fitPlaneModelNearPoint should be a pure algorithm that doesn't rely on the current Tango session, but I cannot be sure because I don't have its implementation.
Any help would be much appreciated.
Okay, it was totally my mistake.
There was a bug in serializing the point cloud.
The Gson library does not know how to deserialize into a subclass and always constructs the parent class - which in this case produced corrupted data.
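For illustration, a minimal sketch of that failure mode and two common ways around it (the class names here are hypothetical, not the ones from my app): if the stored field is declared as the parent type, Gson rebuilds the parent and silently drops the subclass data, unless you either deserialize into the concrete class or register a deserializer for the parent.

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializer;

// Hypothetical classes illustrating the problem.
class CloudData { double timestamp; }
class XyzCloudData extends CloudData { float[] xyz; }

class CloudStorage {
    private static final Gson PLAIN = new Gson();

    // Works: deserialize straight into the concrete subclass.
    static XyzCloudData load(String json) {
        return PLAIN.fromJson(json, XyzCloudData.class);
    }

    // Alternative when the declared type must stay CloudData: tell Gson which
    // concrete class to build by registering a deserializer for the parent type.
    private static final Gson WITH_ADAPTER = new GsonBuilder()
            .registerTypeAdapter(CloudData.class,
                    (JsonDeserializer<CloudData>) (json, type, ctx) ->
                            PLAIN.fromJson(json, XyzCloudData.class))
            .create();

    static CloudData loadAsParent(String json) {
        return WITH_ADAPTER.fromJson(json, CloudData.class);
    }
}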
I'm developing an app using ARCore. In this app I need:
1) to place an object that always stays at the same pose in world space. Following the "Working with Anchors" article recommendations (https://developers.google.com/ar/develop/developer-guides/anchors) I'm attaching an anchor to the ARCore Session. That is, I'm not using Trackables at all.
2) as a secondary requisite the object must be placed automatically, that is, without tapping on the screen.
I've managed to solve the two requisites, having now the object "floating" in front of me, this way (very common code):
private void onSceneUpdate(FrameTime frameTime) {
    ...
    if (_renderable != null && _anchorNode == null) {
        float[] position = {0f, 0f, -10f};
        float[] rotation = {0, 0, 0, 1};
        //
        Anchor anchor = _arFragment.getArSceneView().getSession().createAnchor(new Pose(position, rotation));
        //
        _anchorNode = new AnchorNode(anchor);
        _anchorNode.setRenderable(_renderable);
        _anchorNode.setParent(_arFragment.getArSceneView().getScene());
        _anchorNode.setLocalScale(new Vector3(0.01f, 0.01f, 0.01f)); // cm -> m
        ...
    }
}
As I want the object to be on the floor, I need to find out the height of my physical (device) camera above the floor, in order to subtract that value from the current object's Y coordinate:
float[] position = {0f,HERE_THE_VALUE_TO_SUBTRACT_FROM_CAMERA_HEIGHT,-10f};
I have tried different camera/device pose retrieval APIs, namely frame.getAndroidSensorPose(), frame.getCamera().getPose() and frame.getCamera().getDisplayOrientedPose(), but none of them returns valid values.
Thanks for your advice.
P.S.: Certainly, it's an easy implementation when plane Trackables are used, but here I have the other requisites named above.
EDIT after Michael Dougan's comments.
Well, I think we then have two ways to achieve the requisites:
1) leave the code without changes, keep using the Session anchor, and ask the user to launch the app and then follow a "calibration process" with the device on the floor. As this is a professional-use app, and not a consumer one, we think that is perfectly suitable;
2) go ahead with the good old Trackables, using the usual floor plane as an anchor and including the pose of that anchor in the calculation of the position of the 3D model (see the sketch below).
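A minimal sketch of option 2 (the helper name and the "lowest tracked plane is the floor" heuristic are my assumptions, not part of the original code): once ARCore is tracking an upward-facing horizontal plane, its center pose gives the floor height in world space, and the camera height above the floor is just the difference of the two Y coordinates.

import com.google.ar.core.Frame;
import com.google.ar.core.Plane;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

// Returns the camera height above the floor, or null if no floor plane is tracked yet.
Float cameraHeightAboveFloor(Session session, Frame frame) {
    Float floorY = null;
    for (Plane plane : session.getAllTrackables(Plane.class)) {
        if (plane.getTrackingState() == TrackingState.TRACKING
                && plane.getType() == Plane.Type.HORIZONTAL_UPWARD_FACING) {
            float y = plane.getCenterPose().ty();
            if (floorY == null || y < floorY) {
                floorY = y; // keep the lowest candidate as the floor
            }
        }
    }
    if (floorY == null) {
        return null;
    }
    return frame.getCamera().getPose().ty() - floorY;
}

The returned value could then be subtracted from the anchor's Y coordinate before creating the Pose in the snippet above.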
I am working with Google Project Tango and I tried a basic example of getting pose data:
TangoCoordinateFramePair pair;
pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.target = TANGO_COORDINATE_FRAME_CAMERA_COLOR;
base = TANGO_SUPPORT_ENGINE_OPENGL;
target = TANGO_SUPPORT_ENGINE_OPENGL;
error = TangoSupport_getPoseAtTime(poseTimestamp, pair.base, pair.target, base, target, ROTATION_0, &pose);
This gives TANGO_SUCCESS.
However, if I only change the base to this:
pair.base = TANGO_COORDINATE_FRAME_IMU;
...I keep getting TANGO_INVALID.
I tried using both the C API and the Unity SDK, and both give the same invalid result.
Why is that? Why can't I use TANGO_COORDINATE_FRAME_IMU?
I am trying to fix Camera offset as mentioned here:
Camera-Offset | Project Tango
but without any success...
TangoSupport_getPoseAtTime only works for getting a pose between a fixed coordinate frame and a moving coordinate frame. The TANGO_INVALID error results from the fact that TANGO_COORDINATE_FRAME_CAMERA_COLOR and TANGO_COORDINATE_FRAME_IMU are both moving coordinate frames.
In order to find the offset between TANGO_COORDINATE_FRAME_IMU and TANGO_COORDINATE_FRAME_CAMERA_COLOR (or between any pair of moving coordinate frames), you need to use TangoService_getPoseAtTime instead.
This code snippet should give you the transform you're looking for:
TangoCoordinateFramePair pair;
pair.base = TANGO_COORDINATE_FRAME_IMU;
pair.target = TANGO_COORDINATE_FRAME_CAMERA_COLOR;
TangoPoseData pose;
TangoErrorType result = TangoService_getPoseAtTime(0.0, pair, &pose);
Note also that since both of these coordinate frames move with the device (i.e. they are in a fixed position with respect to the device, and to each other), the pose resulting from this call will not change as the device moves.
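For reference, the equivalent query through the Java API looks like this (a sketch assuming a connected Tango instance named mTango, as in the Java snippets elsewhere on this page):

TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
        TangoPoseData.COORDINATE_FRAME_IMU,
        TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR);
// Timestamp 0.0 requests the most recent pose; since both frames are rigidly
// attached to the device, this extrinsic transform is effectively constant.
TangoPoseData imuTcolor = mTango.getPoseAtTime(0.0, framePair);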
I am using Unity with Tango and I am having problems getting pose data.
A Unity application with the Tango Unity SDK is built for an Android device; the device gets pose data and sends it to a computer where additional processing is done using OpenGL.
My question is: in which coordinate system is the pose data returned, since I can't define the engine like with the C API?
Unity handles getting pose data like this, and nothing additional can be passed:
#if UNITY_EDITOR
GetEmulatedPoseAtTime(poseData, timeStamp, framePair);
#else // ANDROID
int returnValue = API.TangoService_getPoseAtTime(timeStamp, framePair, poseData);
if (returnValue != Common.ErrorType.TANGO_SUCCESS)
{
Debug.Log(CLASS_NAME + ".GetPoseAtTime() Could not get pose at time : " + timeStamp);
}
#endif
Just to prove that my application with OpenGL works as it should, I've created a Tango project using the C API with the same idea (get pose data and send it):
TangoCoordinateFramePair pair;
pair.base = TANGO_COORDINATE_FRAME_START_OF_SERVICE;
pair.target = TANGO_COORDINATE_FRAME_DEVICE;
base = TANGO_SUPPORT_ENGINE_OPENGL;
target = TANGO_SUPPORT_ENGINE_OPENGL;
error = TangoSupport_getPoseAtTime(poseTimestamp, pair.base, pair.target, base, target, ROTATION_0, &pose);
... and this works.
I thought the data might be in the Tango coordinate system, and I tried to convert the pose data with the C# equivalents of the functions QuatTangoToGl and Vec3GlToTango from here, but it is still not correct.
So, in which coordinate system is the pose data in the Unity SDK, and is it possible to somehow define which engine I want?
I realized that I could expose the function TangoSupport_getPoseAtTime in TangoSupport and add the enums EngineType and RotationType (with values matching the C API).
So, I added this in TangoSupport.cs under TangoSupportAPI:
[DllImport(TANGO_SUPPORT_UNITY_DLL)]
public static extern int TangoSupport_getPoseAtTime(
double timestamp, TangoEnums.TangoCoordinateFrameType baseFrame, TangoEnums.TangoCoordinateFrameType targetFrame,
Common.EngineType baseEngine, Common.EngineType targetEngine, Common.RotationType rotation, [In, Out] TangoPoseData pose);
and added a corresponding wrapper function in the TangoSupport class.
Now I get poses that are set correctly in the OpenGL project.
Without defining engine types, the returned poseData is in the Tango coordinate system.
After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports this to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's github.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values - the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3 from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine, with no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what the PoseData exactly does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly; but the data you get this way is relative to where the device is and how the device is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then express those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above).
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // IMU to color camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then when you get the point cloud you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough: now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.
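For completeness, a usage sketch that ties this back to the .xyz export loop in the question (mTango, mExtrinsics, pointCloud and the callback argument name are assumptions taken from the snippets above):

// Inside onXyzIjAvailable(TangoXyzIjData tangoXyzIjData): fetch the device pose at
// the cloud's timestamp, normalize the points, then collect them for the export.
TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
        TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
        TangoPoseData.COORDINATE_FRAME_DEVICE);
TangoPoseData devicePose = mTango.getPoseAtTime(tangoXyzIjData.timestamp, framePair);
pointCloud.addAll(normalize(tangoXyzIjData, devicePose, mExtrinsics));

With Area Learning enabled (see below), COORDINATE_FRAME_AREA_DESCRIPTION can be used as the base frame instead of COORDINATE_FRAME_START_OF_SERVICE.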
There is another way to do this with a rotation matrix, explained here.
My solution is pretty slow (it takes around 700 ms on the dev kit to normalize a cloud of ~3000 points), so it is not suitable for a real-time 3D reconstruction application.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI (in fact, I'm stuck at the moment).
Drifting
There is still a problem when I turn around with the device: the point cloud seems to spread out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of a lot of very small errors in estimating your pose that together add up to a big error in your pose relative to the world. For instance, if you take your Tango device and walk in a circle while tracking your TangoPoseData, and then draw your trajectory in a spreadsheet or whatever you want, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers lots of points and gives you a nice introduction.
Using area learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. In this way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS as usual.
Then you have to modify your TangoConfig in order to tell Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
//Turning depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
//Turning motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
//If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY,true);
//Tango tries to store and remember places and rooms,
//this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION,true);
//Turns the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of those spreads.
PS
In the talk I linked above, at around 22:35, they show you how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key does not exist anymore (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.
We track a tablet with markers and the OptiTrack system. For this we use a JNI wrapper to access functions of the NatNet SDK. After receiving the position data on the server we stream it back (in real time) to the client - the tablet itself - and render an augmented reality scene with the libgdx framework. The target platform is Android.
Here is some sample data we are receiving (7 values: x,y,z, qx, qy, qz, qw):
-0,465436 0,888108 -0,991635 0,331507 0,091413 -0,379475 0,858921
-0,438584 0,888583 -0,982334 0,356608 0,092872 -0,364935 0,855002
-0,414451 0,892762 -0,973772 0,365460 0,096244 -0,348293 0,857828
-0,394074 0,900471 -0,963359 0,365230 0,109444 -0,323559 0,865990
The first three values describe the position in the room. We scale the values of the z-axis by a factor of 3 to get an authentic translation from the small values we are receiving to the size we need in our rendered libgdx scene on the tablet itself, and it works fine! The last 4 values are the rotation of the tracked tablet as a quaternion. This is really new to me, as I have never worked with quaternions before.
libgdx supports rotations in a 3D scene with quaternions, but after handing the values over to the concrete ModelInstance the rotation is totally unexpected and full of errors. Here is the important code:
//gets the rotation of the latest received rigidbody position data
//and stores them into a libgdx Quaternion
Quaternion rotation = rigidBody.getQuat();
modelInst.transform.rotate(rotation);
...
public Quaternion getQuat() {
    float qx = rigidBody.qx;
    float qy = rigidBody.qy;
    float qz = rigidBody.qy;
    float qw = rigidBody.qw;
    Quaternion rot = new Quaternion(qx, qy, qz, qw);
    return rot;
}
It looks obvious to me so far, but it does not work. When I rotate the tablet, the model also translates and rotates in wrong and unexpected directions, even though I am not translating the tablet. I've been looking for solutions for a week now. First I tried different permutations of the parameters which I hand over to the libgdx Quaternion class. The rotation values seem to be in a relative (local) form in a right-handed coordinate system. At least this is what's written in the official NatNet User Guide on page 12 (NatNet User Guide). I think libgdx uses absolute quaternions, but I couldn't figure it out for sure. If that is the case... how could I transform relative quaternions to absolute ones? Or maybe it has something to do with our scaling of the z-axis values?
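As an aside, a minimal sketch of feeding one streamed sample into libgdx as an absolute pose (the helper and its parameter names are mine, not from the code above, and whether the incoming quaternion also needs a handedness conversion is a separate question). Setting the transform from the sample each frame avoids accumulating rotations, which is what repeated transform.rotate() calls would do:

import com.badlogic.gdx.math.Matrix4;
import com.badlogic.gdx.math.Quaternion;
import com.badlogic.gdx.math.Vector3;

// Overwrites the instance transform with the absolute pose from one sample
// (x, y, z, qx, qy, qz, qw), applying the same z scaling described above.
void applySample(Matrix4 transform, float x, float y, float z,
                 float qx, float qy, float qz, float qw, float zScale) {
    Vector3 position = new Vector3(x, y, z * zScale);
    Quaternion rotation = new Quaternion(qx, qy, qz, qw); // libgdx order: x, y, z, w
    transform.set(position, rotation); // set, rather than accumulate with rotate()
}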
We appreciate every bit of help. Thank you in advance!