Area learning after Google Tango - android

Area learning was a key feature of Google Tango which allowed a Tango device to locate itself in a known environment and save/load a map file (an Area Description File, ADF).
Since then Google has announced that it is shutting down Tango and putting its effort into ARCore, but I don't see anything related to area learning in the ARCore documentation.
What is the future of area learning on Android? Is it possible to achieve it on a non-Tango, ARCore-enabled device?

Currently, Tango's area learning is not supported by ARCore, and ARCore's offerings are not nearly as functional. First, Tango was able to take precise measurements of the surroundings, whereas ARCore uses mathematical models to make approximations. ARCore's modeling is nowhere near competitive with Tango's measurement capabilities; it appears to model only certain flat surfaces at the moment. [1]
Second, area learning on Tango allowed a program to access previously captured ADF files, but ARCore does not currently support this -- meaning that the user has to hardcode the initial starting position. [2]
Google is working on a Visual Positioning Service that would live in the cloud and allow a client to compare local point maps with ground-truth point maps to determine indoor position [3]. I suspect that this functionality will only work reliably if the original point map is generated using a rig with a depth sensor (i.e. not in your own house with your smartphone), although mobile visual SLAM has had some success. This also seems like a perfect task for deep learning, so there might be robust solutions on the horizon. [4]
[1] ARCore official docs: https://developers.google.com/ar/discover/concepts#environmental_understanding
[2] ARCore, ARKit: Augmented Reality for everyone, everywhere! https://www.cologne-intelligence.de/blog/arcore-arkit-augmented-reality-for-everyone-everywhere/
[3] Google 'Visual Positioning Service' AR Tracking in Action: https://www.youtube.com/watch?v=L6-KF0HPbS8
[4] Announcing the Matterport3D Research Dataset: https://matterport.com/blog/2017/09/20/announcing-matterport3d-research-dataset/

The Google Developers channel on YouTube now has Google ARCore videos.
These videos teach you how to create shared AR experiences across Android and iOS devices and how to build apps using the new APIs revealed in the Google Keynote: Cloud Anchors, Augmented Images, Augmented Faces and Sceneform. You'll come out understanding how to implement them, how they work in each environment, and what opportunities they unlock for your users.
Hope this helps.

Related

ARCore alternative on Android

I'm developing an Android app with Augmented Reality in order to display points of interest at given locations. I do not need face, plane or object recognition, only placing some points at specific locations (lat/long).
It seems ARCore on Android only supports a few devices; my customer requires broader device support, as the AR view is the core of the app.
I was wondering if there are alternatives to ARCore on Android that support placing points of interest at given coordinates and cover a larger number of Android devices.
Thanks for any tip.
Well, there is this location-based AR framework for Android: https://github.com/bitstars/droidar
However, it hasn't been maintained for quite a long time. You can also look at Vuforia, however it's not free:
https://developer.vuforia.com/
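If neither framework fits, the core of a location-based AR view is fairly small: convert the GPS delta between the device and each point of interest into a local east/north offset in metres, then place the point at that offset in your scene, oriented with the compass. Below is a minimal sketch of that conversion (an equirectangular approximation, fine at the distances where such points are visible); it is not tied to any particular AR SDK, and the class name is just for illustration:

```java
// Minimal sketch: convert a POI's lat/long into a local east/north offset (metres)
// relative to the device, using an equirectangular approximation.
public final class GeoOffset {
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    /** Returns { east, north } offset in metres from the device to the POI. */
    public static double[] toLocalOffset(double deviceLat, double deviceLon,
                                         double poiLat, double poiLon) {
        double dLat = Math.toRadians(poiLat - deviceLat);
        double dLon = Math.toRadians(poiLon - deviceLon);
        double meanLat = Math.toRadians((deviceLat + poiLat) / 2.0);
        double east = EARTH_RADIUS_M * dLon * Math.cos(meanLat);  // metres east
        double north = EARTH_RADIUS_M * dLat;                     // metres north
        return new double[] { east, north };
    }
}
```

You then rotate that offset into camera space using the device's orientation sensors (accelerometer + magnetometer), which is roughly what sensor-based frameworks like DroidAR do internally.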

Is it possible to develop indoor navigation using ARCore?

I tried to achieve this using Google's Cloud Anchors, but they have a limitation of 24 hours (after that the cloud anchors become invalid).
Another way is creating a replica of the environment in Unity, but that would be too lengthy a process.
Please suggest any other ways, or any idea how they achieved it here: https://www.insidernavigation.com/#solution
And how can the common coordinate system be saved in the cloud or locally?
Current versions of ARCore and ARKit have limited persistence capabilities. So a workaround - which I think is what they use on the site you linked - is to use images/QR codes to localise the device against a known real-world position, and then use the device's SLAM capabilities to track the device's movement and pose.
So for example, you can have a QR code or image that represents position 1,1 facing north in the real world. Conveniently, you can use ARCore/ARKit's image tracking to detect that image. When that specific image is tracked by the device, you can confidently determine that the device is at position 1,1 (or close to it). You then use that information to plot a dot on a map at 1,1.
As you move, you can track the deltas in the AR camera's pose (position and rotation) to determine whether you moved forward, turned, etc. You can then use these deltas to update the position of that dot on your map.
There is intrinsic drift in this, as SLAM isn't perfect. But the AR frameworks should have some way to compensate for this using feature detection, or the user can re-localize by looking at another QR/image target.
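To make that concrete, here is a rough Java sketch of the idea using ARCore's Augmented Images (it assumes the marker has already been added to an AugmentedImageDatabase in the session config). The marker-to-map mapping, the marker name and the map-plotting method are hypothetical; the ARCore calls themselves (Frame.getUpdatedTrackables, AugmentedImage.getCenterPose, Camera.getPose, Pose.compose/inverse) are from the standard SDK:

```java
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.Camera;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.TrackingState;

// Sketch: localize against a known image marker, then track camera pose deltas.
public class MarkerLocalizer {
    // Hypothetical: pose of the marker "entrance_poster" in your own map frame
    // (e.g. position 1,1 on the floor plan; identity rotation for brevity).
    private final Pose markerPoseInMap = Pose.makeTranslation(1f, 0f, 1f);

    // Transform from ARCore's session frame to the map frame; null until localized.
    private Pose arToMap = null;

    public void onUpdate(Frame frame) {
        // 1. Look for the known marker among the augmented images updated this frame.
        for (AugmentedImage img : frame.getUpdatedTrackables(AugmentedImage.class)) {
            if (img.getTrackingState() == TrackingState.TRACKING
                    && "entrance_poster".equals(img.getName())) {
                // markerPoseInMap = arToMap * markerPoseInAr
                //   =>  arToMap = markerPoseInMap * markerPoseInAr^-1
                arToMap = markerPoseInMap.compose(img.getCenterPose().inverse());
            }
        }

        // 2. Once localized, convert the camera pose into map coordinates every frame.
        Camera camera = frame.getCamera();
        if (arToMap != null && camera.getTrackingState() == TrackingState.TRACKING) {
            Pose deviceInMap = arToMap.compose(camera.getPose());
            updateDotOnMap(deviceInMap.tx(), deviceInMap.tz()); // hypothetical 2D plot
        }
    }

    private void updateDotOnMap(float x, float z) { /* draw the dot on the floor plan */ }
}
```

Re-detecting the same (or another) marker simply recomputes arToMap, which is the re-localization step that cancels accumulated drift.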
As far as I know, such a visual positioning system has not been introduced yet in Google ARCore. In the link you provided, they are using iBeacon for positioning.
Yup, I believe it could be possible. Currently, most developed approaches have their limitations. I am working on finding another way based on a fusion of Cloud Anchors with iBeacon.
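For reference, the Cloud Anchors flow mentioned in the question looks roughly like the sketch below (ARCore 1.x Java API; the 24-hour resolve window is a service-side limit, so it cannot be extended from the client). The helper class and its method names are just for illustration:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

// Sketch of the basic Cloud Anchor host/resolve flow.
public class CloudAnchorHelper {
    private final Session session;
    private Anchor hostedAnchor;   // local anchor being uploaded
    private Anchor resolvedAnchor; // anchor re-created from a cloud ID

    public CloudAnchorHelper(Session session) {
        this.session = session;
        Config config = new Config(session);
        config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);
        session.configure(config);
    }

    /** Start uploading a local anchor; poll its cloud state each frame. */
    public void host(Anchor localAnchor) {
        hostedAnchor = session.hostCloudAnchor(localAnchor);
    }

    /** Re-create an anchor later (or on another device) from its cloud ID. */
    public void resolve(String cloudAnchorId) {
        resolvedAnchor = session.resolveCloudAnchor(cloudAnchorId);
    }

    public Anchor getResolvedAnchor() {
        return resolvedAnchor;
    }

    /** Call once per frame; returns the cloud ID once hosting has succeeded. */
    public String pollHostedId() {
        if (hostedAnchor != null
                && hostedAnchor.getCloudAnchorState() == Anchor.CloudAnchorState.SUCCESS) {
            return hostedAnchor.getCloudAnchorId(); // store this, plus map metadata, in your backend
        }
        return null;
    }
}
```

The cloud ID and whatever map coordinates you associate with it are what you would persist (in the cloud or locally); an iBeacon fix could then, for example, help decide which stored anchor to resolve nearby.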

ARCore: Emulator and Unity

I'd like to test ARCore using Unity/C# before buying an Android device - can I use Unity and the ARCore emulator, without having a device, to put together an AR app using just a camera on my PC, and does the camera require a specific spec?
I read that Android Studio Beta now supports ARCore in the Emulator to test an app in a virtual environment right from the desktop, but I can't tell if that update is integrated into Unity.
https://developers.googleblog.com/2018/02/announcing-arcore-10-and-new-updates-to.html
Any tips on how people might interact with the app using a PC camera would be really helpful.
Thank you for your help !
Sergio
ARCore uses a combination of the device's IMU and camera. The camera tracks feature points in space and uses a cluster of those points to create a plane for your models. The IMU generates 3D sensor data which is passed to ARCore to track the device's movements.
Judging from the requirements above, we can say a webcam just isn't going to work, since it lacks the IMU needed by ARCore. The camera alone won't be able to track the device's position, which may lead to objects drifting all over the place (if you managed to get it working at all). Even Google's page and Reddit threads indicate that it just won't work.
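If you do want to check at runtime whether a given device (or emulator image) can run ARCore at all, the SDK exposes a support check. A minimal sketch using the standard ArCoreApk API, wrapped in a hypothetical helper class:

```java
import android.app.Activity;
import com.google.ar.core.ArCoreApk;

// Minimal sketch: ask the ARCore SDK whether this device or emulator image is supported.
public final class ArSupportCheck {
    public static boolean isSupported(Activity activity) {
        ArCoreApk.Availability availability =
                ArCoreApk.getInstance().checkAvailability(activity);
        if (availability.isTransient()) {
            // The result may not be known yet; re-query after a short delay in real code.
            return false;
        }
        return availability.isSupported(); // UNSUPPORTED_DEVICE_NOT_CAPABLE => false
    }
}
```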

2D markings vs 3D objects: How to optimize re-localizing with ADF using Google Tango?

I’m trying to create a simple AR simulation in Unity, and I want to speed up the process of re-localizing based on the ADF after I lose tracking in game. For example, is it better to have landmarks that are 3D shapes in the environment that are unchanging, or is it better to have landmarks that are 2D markings?
If it has to be one of these two, I would say 2D markings (visual features) would be preferred. First, Tango does not use the depth sensor for relocalization or pose estimation, so 3D geometry does not necessarily help tracking. In an extreme case, if the device is in a pure white environment (with no shadows) with lots of boxes in it, it will still lose tracking eventually, because there are no visual features to track.
On the other hand, if there is an empty room with lots of posters in it, even though it is not that "interesting" geometrically, it is good for tracking because there are enough visual features to track.
The motion tracking API of Tango uses a MonoSLAM algorithm. It uses a wide-angle camera and motion sensors to estimate the pose of the device. It doesn't take depth information into consideration when estimating the device's pose vector.
In general, SLAM algorithms use feature detectors like Harris corner detection and FAST feature detection to detect features and track them. So it's better to put up 2D markers rich in features, say any random pattern or painting. This will help feature tracking in MonoSLAM and generate a rich ADF. Putting up 2D patterns at different places and at different heights will further improve Project Tango's tracking.
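If you want a quick way to compare candidate marker images before printing them, you can count how many corners a feature detector finds in each; more (well-spread) corners usually means an easier tracking target. A rough sketch using OpenCV's Java bindings (this is not a Tango API, and the image path is just a placeholder):

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.FastFeatureDetector;
import org.opencv.imgcodecs.Imgcodecs;

// Rough heuristic: count FAST corners in a candidate marker image.
// More corners, spread over the image, generally means a better tracking target.
public final class MarkerRichness {
    static { System.loadLibrary(Core.NATIVE_LIBRARY_NAME); } // load the OpenCV native library

    public static long countFastCorners(String imagePath) {
        Mat gray = Imgcodecs.imread(imagePath, Imgcodecs.IMREAD_GRAYSCALE);
        MatOfKeyPoint keypoints = new MatOfKeyPoint();
        FastFeatureDetector detector = FastFeatureDetector.create();
        detector.detect(gray, keypoints);
        return keypoints.total();
    }

    public static void main(String[] args) {
        // Placeholder path: score a few candidate posters/patterns and pick the richest.
        System.out.println(countFastCorners("poster_candidate.png"));
    }
}
```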

Desktop based Augmented Reality Application

I am developing an AR-based application which contains around 30-50 models. Is it possible to develop it on Android, given that there might be memory problems on mobile devices? Is there any desktop-based AR API/SDK that can be used with 3D animation?
Yes, you can create an Android application for augmented reality. There are many such applications on the Android market, especially GPS-based ones. Handling 50 models might cause a memory problem, although on high-end devices like the Samsung Galaxy S4 and Note 2 I don't think you will face memory issues. Furthermore, you can place your models on a dedicated server from which your application can fetch them; this reduces the chance of memory issues.
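The "fetch from a server" part is plain Android networking; here is a minimal sketch that downloads a model file into the app's cache directory on demand (the URL and file name are placeholders, and in a real app this must run off the UI thread):

```java
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Minimal sketch: download a 3D model into a local cache directory so the APK
// does not have to ship all 30-50 models. Call from a background thread.
public final class ModelFetcher {
    public static File fetch(String modelUrl, File cacheDir, String fileName) throws Exception {
        File target = new File(cacheDir, fileName);
        if (target.exists()) {
            return target; // already cached, skip the download
        }
        HttpURLConnection conn = (HttpURLConnection) new URL(modelUrl).openConnection();
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(target)) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        } finally {
            conn.disconnect();
        }
        return target;
    }
}
```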
Some basic examples for AR on Android are given here:
http://readwrite.com/2010/12/01/3-augmented-reality-tutorials#awesm=~ohLxX5jDGJLml9
I haven't worked on AR applications for desktop, but I think this might help:
http://www.arlab.com/
Does desktop application include WebGL applications in the web browser?
If so, then you might want to check out skarf.js, a framework that I have written for handling JavaScript augmented reality libraries in Three.js (JavaScript 3D library that wraps WebGL). It currently integrates two JavaScript-based augmented reality libraries: JSARToolKit and js-aruco.
The skarf.js framework takes care of a number of things for you, including automatic loading of models when the associated markers are detected (association is specified in a JSON file). There is also a GUI marker system which allows users to control settings using AR markers.
Integration with Three.js is just one line of code to create a Skarf instance and another line of code to update.
There are videos, live demos, source codes, examples and documentation available. Check out http://cg.skeelogy.com/skarfjs/ for more info.
