Project Tango has a Motion Tracking API. I'm curious what the best way is to track motion in a similar way (i.e. track the position and orientation of the user's device in full six degrees of freedom) on standard Android and iOS devices, using any kind of 3rd party SDKs and/or physical additions (like markers or beacons)?
You might be interested in visual odometry.
From this documentation:
Tango implements Motion Tracking using visual-inertial odometry, or VIO, to estimate where a device is relative to where it started.
Standard visual odometry uses camera images to determine a change in position by looking at the relative position of different features in those images. For example, if you took a photo of a building from far away and then took another photo from closer up, it would be possible to calculate the distance the camera moved based on the change in size and position of the building in the photos.
Visual-inertial odometry supplements visual odometry with inertial motion sensors capable of tracking a device's rotation and acceleration. This allows a Tango device to estimate both its orientation and movement within a 3D space with even greater accuracy. Unlike GPS, Motion Tracking using VIO works indoors.
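To make the "standard visual odometry" idea above a bit more concrete, here is a minimal two-frame sketch using OpenCV's Java bindings (ORB features, an essential matrix, and pose recovery). The image paths, focal length and principal point are placeholders, and a real VIO pipeline would fuse this with IMU data and resolve scale, which this sketch does not do.

```java
import org.opencv.calib3d.Calib3d;
import org.opencv.core.*;
import org.opencv.features2d.BFMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;

import java.util.ArrayList;
import java.util.List;

public class TwoFrameOdometry {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // load the OpenCV native library

        // Two consecutive grayscale frames (placeholder paths)
        Mat frame1 = Imgcodecs.imread("frame1.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat frame2 = Imgcodecs.imread("frame2.png", Imgcodecs.IMREAD_GRAYSCALE);

        // Detect and describe features in both frames
        ORB orb = ORB.create();
        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat desc1 = new Mat(), desc2 = new Mat();
        orb.detectAndCompute(frame1, new Mat(), kp1, desc1);
        orb.detectAndCompute(frame2, new Mat(), kp2, desc2);

        // Match features between the two frames
        BFMatcher matcher = BFMatcher.create(Core.NORM_HAMMING, true);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);

        // Collect the matched point coordinates
        List<Point> pts1 = new ArrayList<>(), pts2 = new ArrayList<>();
        List<KeyPoint> k1 = kp1.toList(), k2 = kp2.toList();
        for (DMatch m : matches.toList()) {
            pts1.add(k1.get(m.queryIdx).pt);
            pts2.add(k2.get(m.trainIdx).pt);
        }
        MatOfPoint2f p1 = new MatOfPoint2f(pts1.toArray(new Point[0]));
        MatOfPoint2f p2 = new MatOfPoint2f(pts2.toArray(new Point[0]));

        // Estimate the relative camera motion: rotation R and translation direction t.
        // Focal length and principal point stand in for the real camera calibration.
        double focal = 700.0;
        Point pp = new Point(320, 240);
        Mat essential = Calib3d.findEssentialMat(p1, p2, focal, pp, Calib3d.RANSAC, 0.999, 1.0);
        Mat R = new Mat(), t = new Mat();
        Calib3d.recoverPose(essential, p1, p2, R, t, focal, pp);

        System.out.println("Rotation:\n" + R.dump());
        System.out.println("Translation direction (scale unknown):\n" + t.dump());
    }
}
```

Note that monocular odometry like this only gives the translation up to an unknown scale; that is exactly the gap the inertial sensors fill in VIO.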
Related
Using ARCore and/or Sceneform, would it be possible to place circles accurately on a real-life object? Let's say I had a real-world table and a known set of coordinates where small (10 mm) AR "stickers" need to be placed. They could be on the top/side/underside of the table and need to be placed accurately to the millimetre. I am currently solving this problem with a number of fixed mounted lasers. Would this be possible to accomplish using ARCore on a mobile device - either a phone or AR/smart glasses? Accuracy is critical, so how accurate could a solution using ARCore be?
I think you may find that current AR on mobile devices would struggle to meet your requirements.
Partly because, in my experience, there is a certain amount of drift or movement with Anchors, especially when you move the view quickly or leave and come back to a view. Given the technologies used to create and locate anchors, i.e. motion sensors, the camera, etc., it is natural that this will not give consistent millimetre accuracy.
Possibly a bigger issue for you at this time is occlusion - currently ARCore does not support this. This means that if you place your renderable behind a real object, it will still be drawn in front of, or on top of, that object as you move away or zoom out.
If you use multiple markers or AR "stickers", your solution will be fairly precise, since the locations of your circles will be calculated relative to those markers. Image or marker based tracking is quite impressive with any Augmented Reality SDK. However, markers as small as 10 mm can cause problems for detection. I would recommend creating these markers using an AugmentedImageDatabase, where you can specify the real-world size of the images, which helps with tracking. Then you can check whether ARCore can detect your images on the table. ARCore is not the fastest SDK when it comes to detecting images, but it can continue tracking even when the markers are not in the frame. If you need fast detection of markers, I would recommend the Vuforia SDK.
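For reference, adding your stickers to an AugmentedImageDatabase with their physical width looks roughly like this (a sketch only; the image names, bitmaps and the 10 mm width are placeholders matching the question):

```java
import android.graphics.Bitmap;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

public class MarkerSetup {
    // Configure an ARCore session to track the table "stickers" as augmented images.
    // sticker1/sticker2 are placeholder bitmaps loaded elsewhere.
    public static void configureImageTracking(Session session, Bitmap sticker1, Bitmap sticker2) {
        AugmentedImageDatabase database = new AugmentedImageDatabase(session);

        // Specifying the physical width in metres (10 mm here) helps ARCore
        // estimate the marker's pose sooner and more accurately.
        database.addImage("table_sticker_01", sticker1, 0.01f);
        database.addImage("table_sticker_02", sticker2, 0.01f);

        Config config = new Config(session);
        config.setAugmentedImageDatabase(database);
        session.configure(config);
    }
}
```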
I tried to achieve this using Google's Cloud Anchors, but they have a limitation of 24 hours (after that the cloud anchors become invalid).
Another way is creating a replica of the space in Unity, but that would be too lengthy a process.
Please suggest any other ways, or any idea how they achieved it here: https://www.insidernavigation.com/#solution
And how can the common coordinate system be saved in the cloud or locally?
Current versions of ARCore and ARKit have limited persistence capabilities. So a workaround - which I think is what they use on the site you linked - is to use images/QR codes to localise the device at a known real-world position, and then use the device's SLAM capabilities to track the device's movement and pose from there.
So, for example, you can have a QR code or image that represents position (1, 1) facing north in the real world. Conveniently, you can use ARCore/ARKit's image tracking to detect that image. When that specific image is tracked by the device, you can confidently determine that the device is at position (1, 1) (or close to it). You then use that information to plot a dot on a map at (1, 1).
As you move, you can track the deltas in the AR camera's pose (position and rotation) to determine if you moved forward, turned etc. You can then use these deltas to update the position of that dot on your map.
There is intrinsic drift in this, as SLAM isn't perfect. But the AR frameworks should have some way to compensate for this using feature detection, or the user can re-localize by looking for another QR/image target.
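A rough sketch of that idea using ARCore's Java API: when the known image is tracked, record its pose as the map origin, then express later camera poses relative to it. The image name, the 1 metre = 1 map unit scale, and the map-drawing call are assumptions for illustration.

```java
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.TrackingState;

public class IndoorLocator {
    private Pose referenceImagePose; // pose of the known QR/image target when first tracked

    // Call once per ARCore frame.
    public void onFrame(Frame frame) {
        // Localize: when the known image is tracked, remember its world pose.
        for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
            if (image.getTrackingState() == TrackingState.TRACKING
                    && "entrance_marker".equals(image.getName())) { // placeholder image name
                referenceImagePose = image.getCenterPose();
            }
        }
        if (referenceImagePose == null) {
            return; // not localized yet
        }

        // Device pose expressed relative to the image target: these are the
        // "deltas" that can be mapped onto a floor plan anchored at (1, 1).
        Pose cameraPose = frame.getCamera().getPose();
        Pose relative = referenceImagePose.inverse().compose(cameraPose);

        float mapX = 1.0f + relative.tx(); // assumed: 1 ARCore metre == 1 map unit
        float mapY = 1.0f + relative.tz();
        // plotDotOnMap(mapX, mapY);  // hypothetical map-drawing call
    }
}
```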
As far as I know, this kind of virtual positioning system has not been introduced yet in Google ARCore. In the link you provided, they are using iBeacons for positioning.
Yup, I believe it could be possible. Currently, most developed approaches have their limitations. I am working on finding another way using a fusion of Cloud Anchors with iBeacons.
I am trying to implement extended tracking. Everything works fine until the user moves the device quickly. Basically, I track an image and see a 3D model. The model stays in place in the real world if I move my camera around at slow speed, but if I move the device quickly the 3D model sticks to the view of my screen, which is not right. I guess it's a bug in Vuforia.
Thanks,
Vanshika
It is not a bug. Extended tracking uses the visual information in the camera images from frame to frame to try to keep track of where the camera is relative to the trackable -- there is no other way; a camera does not have position-tracking hardware. If the device is moved slowly, successive camera frames will partly contain 'the same' things, and the tracker can estimate its own movement from that information (although there will be some drift). When the camera moves too fast, there is no information shared from frame to frame that the camera can use to determine its own viewpoint change. I believe the 3D model will only 'stick' to your screen if you do not disable it / its renderer when tracking of it is lost, i.e. in an OnTrackingLost() type of method, as found in the DefaultTrackableEventHandler.
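The handler mentioned above is part of Vuforia's Unity samples (C#). Purely to illustrate the pattern, here is the same idea sketched in plain Java with hypothetical renderer and status types, not Vuforia's actual API:

```java
// Hypothetical sketch of the "disable the model when tracking is lost" pattern.
public class TrackableVisibilityHandler {

    public enum Status { TRACKED, EXTENDED_TRACKED, LOST } // simplified status set

    private final ModelRenderer model; // hypothetical renderer for the 3D model

    public TrackableVisibilityHandler(ModelRenderer model) {
        this.model = model;
    }

    // Called by the tracking SDK whenever the trackable's status changes.
    public void onTrackableStateChanged(Status newStatus) {
        if (newStatus == Status.TRACKED || newStatus == Status.EXTENDED_TRACKED) {
            model.setVisible(true);   // tracking (re)acquired: show the model
        } else {
            model.setVisible(false);  // tracking lost: hide it so it cannot "stick" to the screen
        }
    }

    // Minimal stand-in interface so the sketch is self-contained.
    public interface ModelRenderer {
        void setVisible(boolean visible);
    }
}
```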
I’m trying to create a simple AR simulation in Unity, and I want to speed up the process of re-localizing based on the ADF after I lose tracking in game. For example, is it better to have landmarks that are 3D shapes in the environment that are unchanging, or is it better to have landmarks that are 2D markings?
If it has to be one of these two, I would say 2D markings (visual features) would be preferred. First, Tango is not using the depth sensor for relocalization or pose estimation, so 3D geometry does not necessarily help the tracking. In an extreme case, if the device is in a pure white environment (with no shadows) with lots of boxes in it, it will still lose tracking eventually, because there are no visual features to track.
On the other hand, an empty room with lots of posters in it, even though it's not that "interesting" geometrically, is good for tracking because it has enough visual features to track.
Tango's Motion Tracking API uses a MonoSLAM algorithm. It uses the wide-angle camera and motion sensors to estimate the pose of the device. It does not take depth information into account when estimating the device's pose vector.
In general, SLAM algorithms use feature detectors like Harris corner detection or FAST feature detection to detect and track features. So it's better to put up 2D markers rich in features, such as a random pattern or a painting. This will help feature tracking in MonoSLAM and generate a rich ADF. Putting up 2D patterns in different places and at different 3D levels will further improve Project Tango's tracking.
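To see why a feature-rich poster beats a blank wall, you can simply count what a FAST detector finds in each image. A quick OpenCV (Java) sketch, with placeholder image paths:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.FastFeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;

public class FeatureCount {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // load the OpenCV native library

        FastFeatureDetector fast = FastFeatureDetector.create();

        // Placeholder paths: a textured poster vs. a plain white wall.
        for (String path : new String[]{"poster.png", "white_wall.png"}) {
            Mat image = Imgcodecs.imread(path, Imgcodecs.IMREAD_GRAYSCALE);
            MatOfKeyPoint keypoints = new MatOfKeyPoint();
            fast.detect(image, keypoints);
            // The poster should yield far more corners than the blank wall,
            // which is what makes it useful for SLAM tracking and a rich ADF.
            System.out.println(path + ": " + keypoints.toList().size() + " FAST corners");
        }
    }
}
```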
I'm planning on doing an AR application that will just use GPS technology to get a location, and then use the compass/gyroscope for tracking 6DOF viewfinder movements. It's a personal project for my own development, but I'm looking for starting places, as it's a new field to me, so this might be a slightly open-ended question with more than one right answer. By using GPS I am hoping to simplify the development of my first AR application at the cost of its accuracy.
The idea for this AR is not to use any vision processing (relying on GPS only), and to display 3D models on the screen at roughly correct distances (up to a point) from where the user is standing. It sounds simple, given that games work in a 3D world with a viewpoint and the locations of faces/objects/models etc. to draw. My target platforms will be mobile devices & tablets, potentially running one of these OSs: WM6, Windows Phone 7 or Android.
Most of the applications I have seen use markers and use AR-ToolKit or ARTag, and those that use GPS tend to just display a point of interest or a flat box on screen to state that you're at a desired location.
I've done some very limited work with 3D graphics programming, but are there any libraries that you think may be able to get me started on this, rather than building everything from the bottom up? Ignoring the low accuracy of GPS (with regard to AR), I will have a defined point in 3D space (constantly moving due to the GPS fix), and then a defined point at which to render a 3D model in the same 3D space.
I've seen some examples of applications which are similar, but nothing I can expand on, so can anyone suggest places to start or libraries to use that might be suitable for my project?
Sensor-based AR is doable from scratch without using any libraries. All you're doing is estimating your camera's pose in 6DOF using the sensors and GPS, and then performing a perspective projection which projects a known 3D point onto your camera's focal plane. You define your camera matrix from the sensors and GPS, and perform the projection on each new camera frame. If you get this up and running, that's plenty sufficient to begin projecting billboards, images etc. into the camera frame.
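As a minimal sketch of that projection step in Java (assuming you already have a camera position from GPS, a world-to-camera rotation from the compass/gyro, and rough intrinsics guessed from the device's field of view; all the concrete numbers here are placeholders):

```java
public class PointProjector {
    // Assumed intrinsics for illustration: roughly a 60 degree FOV on a 640x480 frame.
    static final double FOCAL_PX = 554.0;
    static final double CX = 320.0, CY = 240.0;

    /**
     * Projects a world-space point into pixel coordinates for a pinhole camera.
     *
     * @param worldPoint  the 3D point to draw, in world coordinates (e.g. metres east/up/north)
     * @param cameraPos   the camera position in the same world frame (from GPS)
     * @param worldToCam  3x3 rotation taking world axes to camera axes (from compass/gyro)
     * @return {u, v} pixel coordinates, or null if the point is behind the camera
     */
    static double[] project(double[] worldPoint, double[] cameraPos, double[][] worldToCam) {
        // Translate into the camera-centred frame, then rotate into camera axes.
        double[] d = {worldPoint[0] - cameraPos[0],
                      worldPoint[1] - cameraPos[1],
                      worldPoint[2] - cameraPos[2]};
        double xc = worldToCam[0][0] * d[0] + worldToCam[0][1] * d[1] + worldToCam[0][2] * d[2];
        double yc = worldToCam[1][0] * d[0] + worldToCam[1][1] * d[1] + worldToCam[1][2] * d[2];
        double zc = worldToCam[2][0] * d[0] + worldToCam[2][1] * d[1] + worldToCam[2][2] * d[2];

        if (zc <= 0) return null; // behind the camera, nothing to draw

        // Perspective divide, then shift to the principal point.
        double u = FOCAL_PX * (xc / zc) + CX;
        double v = FOCAL_PX * (yc / zc) + CY;
        return new double[]{u, v};
    }
}
```

Run this per frame for every point of interest and draw your billboard at the returned pixel coordinates.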
Once you have a pin-hole camera model working you can try to compensate for your camera's wide-angle lens, for lens distortion etc.
For calculating relative distances there's the haversine formula.
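A straightforward haversine implementation for turning two GPS fixes into a distance in metres (using the usual spherical-Earth approximation):

```java
public class Haversine {
    static final double EARTH_RADIUS_M = 6_371_000.0; // mean Earth radius, spherical model

    /** Great-circle distance in metres between two lat/lon points given in degrees. */
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }

    public static void main(String[] args) {
        // Example: two nearby points, expect a distance on the order of tens of metres.
        System.out.println(distanceMeters(51.5007, -0.1246, 51.5010, -0.1250));
    }
}
```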
Moving to 3D models will probably be the most difficult part. It can be tricky to introduce camera frames into OpenGL on mobile devices. I don't have any experience on Windows Mobile or Android, so I can't help there.
In any case have fun, it's really nice to see your virtual elements in the world for the first time!