3D object as marker in augmented reality - Android

I want to build an augmented reality application that detects a 3D object (a real object) as a marker. I googled and found projects doing something similar, such as using a building or another place as the marker, but those rely on GPS, and I don't want that kind of approach. I need marker detection like the 2D marker tracking that already works in the Vuforia SDK. Is it possible to use a 3D or real object as a marker in augmented reality? If yes, could you suggest an SDK (similar to Vuforia) that would help me build this for iPhone and Android?
Thanks in advance, have a great day.

Metaio is a fairly easy to use AR SDK available for both Android and iPhone that has 3D tracking capabilities:
https://dev.metaio.com/sdk/tracking-configuration/optical-tracking-technologies/markerless-3d/
The non-watermarked version that supports 3D tracking, however, is expensive. A free alternative (currently) is to use the Junaio browser that Metaio makes to provide a 3D tracking AR experience:
http://www.junaio.com/develop/quickstart/3d-tracking-and-junaio/
I have experience with both of them, but I have not used the 3D tracking features yet, so I am not sure whether you could track an actual building; from the demos, objects such as boxes appear to be supported well.
Good luck!

Related

ARCore alternative on Android

I'm developing an Android app with augmented reality in order to display points of interest at given locations. I do not need face, plane or object recognition, only placing some points at specific coordinates (lat/long).
It seems ARCore on Android only supports a few devices, and my customer requires broader device support, as the AR view is the core of the app.
I was wondering if there are alternatives to ARCore on Android that supports placing points of interest at some coordinates, covering a large number of Android devices.
Thanks for any tip.
Well, there is this location-based AR framework for Android: https://github.com/bitstars/droidar
However, it hasn't been maintained for quite a long time. You can also look at Vuforia, though it's not free:
https://developer.vuforia.com/
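Framework aside, the core math for location-based AR is simple enough to sketch directly: compute the great-circle bearing from the device to each POI, then map the bearing offset against the device's compass heading to a horizontal screen position. A minimal sketch in Python (the function names, the 60° horizontal field of view, and the 1080 px screen width are assumptions, not from any SDK):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def poi_screen_x(device_heading_deg, poi_bearing_deg, fov_deg=60.0, screen_width=1080):
    """Horizontal pixel position for a POI overlay, or None if it is off-screen."""
    # Signed heading offset in (-180, 180].
    delta = (poi_bearing_deg - device_heading_deg + 540.0) % 360.0 - 180.0
    if abs(delta) > fov_deg / 2:
        return None  # POI is outside the camera's field of view
    return int((delta / fov_deg + 0.5) * screen_width)
```

With this sketch, a POI due east of the device (bearing 90°) is off-screen while the device faces north, and a POI straight ahead lands at the horizontal center of the screen; the same idea works on any device with a camera, GPS and compass, which is why it covers far more hardware than ARCore.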

Area learning after Google Tango

Area learning was a key feature of Google Tango which allowed a Tango device to locate itself in a known environment and save/load a map file (ADF).
Since then Google has announced that it's shutting down Tango and putting its effort into ARCore, but I don't see anything related to area learning in ARCore documentation.
What is the future of area learning on Android? Is it possible to achieve it on a non-Tango, ARCore-enabled device?
Currently, Tango's area learning is not supported by ARCore and ARCore's offerings are not nearly as functional. First, Tango was able to take precise measurements of the surroundings, whereas ARCore is using mathematical models to make approximations. Currently, the ARCore modeling is nowhere near competitive with Tango's measurement capabilities; it appears to only model certain flat surfaces at the moment. [1]
Second, the area learning on Tango allowed the program to access previously captured ADF files, but ARCore does not currently support this -- meaning that the user has to hardcode the initial starting position. [2]
Google is working on a Visual Positioning Service that would live in the cloud and allow a client to compare local point maps with ground-truth point maps to determine indoor position [3]. I suspect that this functionality will only work reliably if the original point map is generated using a rig with a depth sensor (i.e. not in your own house with your smartphone), although mobile visual SLAM has had some success. This also seems like a perfect task for deep learning, so there might be robust solutions on the horizon. [4]
[1] ARCore official docs https://developers.google.com/ar/discover/concepts#environmental_understanding
[2] ARCore, ARKit: Augmented Reality for everyone, everywhere! https://www.cologne-intelligence.de/blog/arcore-arkit-augmented-reality-for-everyone-everywhere/
[3] Google 'Visual Positioning Service' AR Tracking in Action
https://www.youtube.com/watch?v=L6-KF0HPbS8
[4] Announcing the Matterport3D Research Dataset. https://matterport.com/blog/2017/09/20/announcing-matterport3d-research-dataset/
There are now Google ARCore videos on the Google Developers channel on YouTube.
These videos teach users how to create shared AR experiences across Android and iOS devices and how to build apps using the new APIs revealed in the Google Keynote: Cloud Anchors, Augmented Images, Augmented Faces and Sceneform. You'll come away understanding how to implement them, how they work in each environment, and what opportunities they unlock for your users.
Hope this helps.

AR Object tracking Android

I am looking for an AR SDK on Android, that allows 3D object tracking with an OBJ-file or similar as the target reference.
We previously used Metaio for this, but since it was bought by Apple, we are searching for an alternative.
Vuforia is not an option, because its 3D object tracking is feature-based as opposed to edge-based.
Is there another AR SDK for Android that supports edge-based CAD tracking?

2D markings vs 3D objects: How to optimize re-localizing with ADF using Google Tango?

I’m trying to create a simple AR simulation in Unity, and I want to speed up the process of re-localizing based on the ADF after I lose tracking in game. For example, is it better to have landmarks that are 3D shapes in the environment that are unchanging, or is it better to have landmarks that are 2D markings?
If it has to be one of these two, I would say 2D markings (visual features) would be preferred. First, Tango does not use the depth sensor for relocalization or pose estimation, so 3D geometry does not necessarily help tracking. In an extreme case, if the device is in a pure-white environment (with no shadows) full of boxes, it will still lose tracking eventually, because there are no visual features to track.
On the other hand, an empty room with lots of posters in it is good for tracking, even though its geometry is not that "interesting", because it provides enough visual features to track.
Tango's motion tracking API uses a MonoSLAM algorithm: it estimates the device pose from the wide-angle camera and motion sensors, and does not take depth information into account.
In general, SLAM algorithms use feature detectors such as Harris corner detection or FAST to detect and track features. So it's better to put up 2D markers rich in features, say a random pattern or a painting. This helps feature tracking in MonoSLAM and produces a richer ADF. Putting up 2D patterns in different places and at different heights will further improve Tango's tracking.
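To make the "rich in features" point concrete, here is a toy corner counter in pure Python. It is not an actual Harris or FAST implementation; it just counts pixels whose 3x3 neighborhood shows intensity variation in both the x and y directions, which is a crude stand-in for what corner detectors respond to. A uniform "white wall" yields no such pixels, while a checkerboard-like poster yields many:

```python
def count_corner_like_pixels(img, threshold=4):
    """Count interior pixels whose local intensity gradient is strong in
    both x and y - a crude stand-in for a corner response score."""
    h, w = len(img), len(img[0])
    count = 0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = abs(img[y][x + 1] - img[y][x - 1])  # horizontal variation
            gy = abs(img[y + 1][x] - img[y - 1][x])  # vertical variation
            if gx >= threshold and gy >= threshold:
                count += 1
    return count

# A featureless white surface vs. a high-contrast 2x2 checkerboard pattern.
blank_wall = [[255] * 8 for _ in range(8)]
checker = [[255 if (x // 2 + y // 2) % 2 else 0 for x in range(8)] for y in range(8)]
```

The blank wall scores zero, mirroring the "pure white environment" case above where tracking is eventually lost, while the checkered pattern produces corner-like pixels throughout, which is why posters and random patterns make good relocalization landmarks.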

Android Application : Augmented Reality or Image Recognition

I am interested in developing an Android application that uses the Android device's camera to detect moving "targets".
The three types of targets I need to detect and distinguish between are pedestrians, runners (joggers) and cyclists.
The augmented reality SDKs I have looked at only seem to offer face recognition, which doesn't sound like it can detect entire people.
Have I misunderstood what augmented reality SDKs can provide?
There is a big list of AR SDKs (also for Android platform):
Augmented reality SDKs
However, to be honest, I strongly doubt that you will find any SDK (free or paid) for your task. It is too specific, so you should probably write it yourself using OpenCV.
OpenCV will let you detect objects (more or less), and then you will need to write some algorithm for classification. I would recommend classification based on object speed.
Then, when you have your object classified you can add any AR SDK to add something to your picture.
