Location-based AR - Android

I am new to Augmented Reality, though I know Android development.
I am trying to create an app whose main aim is to overlay the camera preview with an image when the device camera is pointing at a particular building or place. The camera preview should be overlaid with the image if and only if the camera is pointing at the correct building from the correct direction. The overlay image and its related data will be loaded from a back-end. I have gone through Mixare, but it does not give me the correct solution.
What I don't understand is the elevation/altitude concept: where do I get this from? Which open-source SDK is better for this app? How do I crack this application?

As for altitude, you should forget it. The altitude returned by GPS is terrible: just crossing from one side of the street to the other, the value returned by GPS can differ by 50 meters. Also, the direction of the back camera derived from the sensors will not be accurate enough for buildings that are close together. If you restrict your app to known buildings or places, you can adjust using some image recognition, but it is still very hard.

Try downloading the Wikitude sample app from the Wikitude website. It contains a sample project, SimpleArBrowser, which is what you are looking for. Hope this helps.

I don't know of any open-source SDK, but I used the Wikitude SDK, which is quite useful for your implementation. The main advantage of the Wikitude SDK is that you can use your own server for back-end data, which is not possible with the other alternative, Layar. @Hoan is right that altitude information from GPS is really inaccurate, but you can give it a go, since the inaccuracy only becomes noticeable when you are really near the point of interest; it works okay for distances greater than roughly 250 meters (not confirmed). The only problem you might get is from compass deflections, which are really large when you are near a strong EM field or near metal. But that's a chance you have to take, and nothing can be done about it.

If you want to place AR objects based on the latitude and longitude of a location with a given altitude, that is now possible using Google's ARCore Geospatial API.
Docs:
https://developers.google.com/ar/develop/geospatial
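A minimal Java sketch of how that looks, assuming you already have an ARCore Session and know the building's coordinates (the class and method names here are just illustrative, not part of the API):

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Config;
import com.google.ar.core.Earth;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

class GeospatialHelper {

    // Call once after creating the ARCore session.
    static void enableGeospatial(Session session) {
        Config config = new Config(session);
        config.setGeospatialMode(Config.GeospatialMode.ENABLED);
        session.configure(config);
    }

    // Call per frame; returns null until Earth tracking is available.
    static Anchor anchorAt(Session session, double latitude, double longitude, double altitude) {
        Earth earth = session.getEarth();
        if (earth == null || earth.getTrackingState() != TrackingState.TRACKING) {
            return null;
        }
        // latitude/longitude in degrees, altitude in meters above the WGS84 ellipsoid;
        // the four floats are an identity quaternion (no extra rotation).
        return earth.createAnchor(latitude, longitude, altitude, 0f, 0f, 0f, 1f);
    }
}
```

You then attach your overlay renderable to the returned anchor with whatever rendering layer you use (Sceneform, Filament, OpenGL, etc.).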

Related

Is it possible to accurately place a circle relative to a real-life object using ARCore?

Using ARCore and/or Sceneform, would it be possible to place circles accurately on a real-life object? Let's say I have a real-world table and a known set of coordinates where small (10 mm) AR "stickers" need to be placed. They could be on the top/side/underside of the table and need to be placed accurately to the millimetre. I am currently solving this problem with a number of fixed-mounted lasers. Would this be possible to accomplish using ARCore on a mobile device, either a phone or AR/smart glasses? Accuracy is critical, so how accurate could a solution using ARCore be?
I think you may find that current AR on mobile devices would struggle to meet your requirements.
Partly because, in my experience, there is a certain amount of drift or movement with anchors, especially when you move the view quickly or leave and come back to a view. Given the technologies available to create and locate anchors (movement sensors, camera, etc.), it is natural that this will not give consistent millimetre accuracy.
Possibly a bigger issue for you at this time is occlusion: ARCore does not currently support it. This means that if you place your renderable behind an object, it will still be drawn in front of, or on top of, that object as you move away or zoom out.
If you use multiple markers or AR "stickers", your solution will be pretty precise, since the locations of your circles will be calculated relative to those markers. Image- or marker-based tracking is quite impressive with any augmented reality SDK. However, markers as small as 10 mm can cause problems for detection. I would recommend registering these markers in an AugmentedImageDatabase, where you can specify the real-world size of the images, which helps tracking. Then you can check whether ARCore can detect your images on the table. ARCore is not the fastest SDK when it comes to detecting images, but it can continue tracking even when markers are not in the frame. If you need fast detection of markers, I would recommend the Vuforia SDK.
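A rough Java sketch of that setup, assuming an existing ARCore Session and a marker image shipped in the app's assets (the file name and the 10 mm width are placeholders for your own markers):

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

import com.google.ar.core.Anchor;
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Frame;
import com.google.ar.core.Session;
import com.google.ar.core.TrackingState;

import java.io.IOException;

class MarkerTracking {

    // Register marker images, telling ARCore their physical width to help pose estimation.
    static void configureMarkers(Context context, Session session) throws IOException {
        AugmentedImageDatabase database = new AugmentedImageDatabase(session);
        Bitmap marker = BitmapFactory.decodeStream(context.getAssets().open("marker_01.png"));
        database.addImage("marker_01", marker, 0.01f); // 0.01 m = 10 mm physical width

        Config config = new Config(session);
        config.setAugmentedImageDatabase(database);
        config.setFocusMode(Config.FocusMode.AUTO); // helps with small, close-up markers
        session.configure(config);
    }

    // Per frame: anchor content on any marker that is currently tracked.
    static void onFrame(Frame frame) {
        for (AugmentedImage image : frame.getUpdatedTrackables(AugmentedImage.class)) {
            if (image.getTrackingState() == TrackingState.TRACKING) {
                Anchor anchor = image.createAnchor(image.getCenterPose());
                // Place the circle renderables relative to this anchor.
            }
        }
    }
}
```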

How to integrate location-based augmented reality in Android

I have been searching for a long time for augmented reality examples or documents that cover location-based services.
For example, I have to create an app that shows a location-based marker on the camera view. Whenever I point the camera in the direction of a given latitude and longitude, I need to display an AR marker in that direction, something like navigating the user toward it.
There are many SDKs available for this, such as:
ARCore,
Vuforia,
Wikitude
But the main problem with ARCore is that it is not supported on all devices, and the other examples I have checked need a compass sensor, which I also don't want because the compass sensor is not available on all devices either.
I have also searched Vuforia and Wikitude, but I don't understand how to display an AR object with respect to a latitude and longitude.
There are lots of calculations available, but I am not able to identify the actual one to use. Please help if anyone knows about this.
Thanks in advance!!!
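For reference, the usual calculation behind this kind of marker placement can be done with the standard Android Location class. A minimal sketch (deviceLat/deviceLng and poiLat/poiLng are placeholder inputs for your current fix and the target):

```java
import android.location.Location;

class PoiMath {

    // Returns {distance in meters, bearing in degrees 0..360 measured clockwise from north}.
    static float[] distanceAndBearing(double deviceLat, double deviceLng,
                                      double poiLat, double poiLng) {
        Location here = new Location("device");
        here.setLatitude(deviceLat);
        here.setLongitude(deviceLng);

        Location poi = new Location("poi");
        poi.setLatitude(poiLat);
        poi.setLongitude(poiLng);

        float distance = here.distanceTo(poi);                // great-circle distance in meters
        float bearing = (here.bearingTo(poi) + 360f) % 360f;  // bearingTo can return -180..180

        return new float[] { distance, bearing };
    }
}
```

Comparing this bearing with the device's current heading (from the rotation-vector sensor, or from an AR framework's camera pose) tells you whether the marker should be drawn left of, right of, or in front of the user.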

Is it possible to develop indoor navigation using ARCore?

I tried to achieve this using Google's Cloud Anchors, but they have a 24-hour limitation (after that, the cloud anchors become invalid).
Another way is creating a replica of the venue in Unity, but that would be too lengthy a process.
Please suggest any other ways or ideas: https://www.insidernavigation.com/#solution - how did they achieve it?
And how can the common coordinate system be saved in the cloud or locally?
Current versions of ARCore and ARKit have limited persistence capabilities. So a workaround, which I think is what they use on the site you linked, is to use images/QR codes to localise the device at a known real-world position, and then use the device's SLAM capabilities to track its movement and pose from there.
So, for example, you can have a QR code or image that represents position (1, 1) facing north in the real world. Conveniently, you can use ARCore/ARKit's image tracking to detect that image. When that specific image is tracked by the device, you can confidently determine that the device is at position (1, 1), or close to it. You then use that information to plot a dot on a map at (1, 1).
As you move, you track the deltas in the AR camera's pose (position and rotation) to determine whether you moved forward, turned, etc. You then use these deltas to update the position of that dot on your map.
There is intrinsic drift in this, as SLAM isn't perfect, but the AR frameworks should have some way to compensate for it using feature detection, and the user can always re-localize by looking at another QR/image target.
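A sketch of the second half of that idea in ARCore, assuming augmentedImage is the tracked QR/marker image and frame is the current ARCore frame; the marker's known floor-plan position is whatever you mapped it to (e.g. (1, 1)):

```java
import com.google.ar.core.AugmentedImage;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;

class IndoorLocalizer {

    // Returns the camera's offset (in meters) expressed in the tracked marker's frame.
    static float[] cameraOffsetFromMarker(Frame frame, AugmentedImage augmentedImage) {
        Pose markerPose = augmentedImage.getCenterPose();              // marker in ARCore world space
        Pose cameraPose = frame.getCamera().getDisplayOrientedPose();  // camera in ARCore world space

        // marker^-1 * camera = camera pose relative to the marker.
        Pose cameraInMarkerFrame = markerPose.inverse().compose(cameraPose);

        // Add these offsets to the marker's known floor-plan position to move the dot on the map.
        return new float[] { cameraInMarkerFrame.tx(), cameraInMarkerFrame.tz() };
    }
}
```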
As far as I know, this kind of virtual positioning system has not been introduced in Google ARCore yet. On the link you provided, they are using iBeacons for positioning.
Yup, I believe it could be possible. Currently, most developed approaches have their limitations. I am working on finding another way using a fusion of Cloud Anchors with iBeacons.

How does Yelp Monocle work?

I was wondering how Yelp Monocle works. It's a cool feature, especially in terms of augmented reality. I know they access GPS and compass data, and they have data about nearby places like hotels, bars, etc. But how do they calculate the orientation with respect to each place in real time as I rotate my device? Say my device is pointing east and there's a pizza place to the north; I rotate my device to the north. How does it know that I'm now facing the pizza place? What is the crucial bit of information being used to calculate this?
I am thinking of developing a similar kind of app for Android. Please let me know how I can approach this.
Well, when you know where you are, which way you are facing, and where your target is, you can calculate the rest. It's basic trigonometry.
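Concretely, the crucial bit is the device's azimuth. A sketch of getting it from the rotation-vector sensor and comparing it with the bearing to the POI (here bearingToPoi is assumed to come from something like Location#bearingTo, and the listener still has to be registered with SensorManager):

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

class AzimuthListener implements SensorEventListener {

    private final float[] rotationMatrix = new float[9];
    private final float[] orientation = new float[3];
    private float bearingToPoi; // degrees clockwise from north, e.g. from Location#bearingTo

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;

        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);
        float azimuth = (float) Math.toDegrees(orientation[0]);   // -180..180, 0 = north

        // Signed angle between where the device points and where the POI is (-180..180).
        float angleOffCenter = ((bearingToPoi - azimuth) + 540f) % 360f - 180f;

        // If |angleOffCenter| is within half the camera's horizontal field of view,
        // the POI is on screen; map angleOffCenter linearly to a horizontal pixel offset.
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```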

Augmented Reality for Android

I have a rather unusual augmented reality case to implement. Most AR frameworks I've found can be classified into two groups:
GPS-based ones
Ones based on visual markers (something like a QR code) located in the real world.
Basically, here is a list:
AndAr https://code.google.com/p/andar/
Mixare https://code.google.com/p/mixare/
DroidAr https://code.google.com/p/droidar/
But this does not fit my case. In simple words, I need to place a virtual marker floating in a room near one or several physical assets. I do have all the needed coordinates, but I'm not sure how I can show a marker floating 2 meters in front of the phone, because all of the above-mentioned frameworks' positioning APIs are based on degrees, minutes and seconds. I'm not sure how to correlate those two coordinate systems.
The Wikitude SDK allows you to put markers (so-called GeoObjects) relative to a position. The position can be the user's position or any position defined by latitude, longitude and altitude (in your case, this could be the user's initial position). The relative position is defined in meters north/east of that initial position.
For more information, have a look at the documentation at http://www.wikitude.com/external/doc/documentation/3.0/Reference/JavaScript%20Reference/index.html
It includes both geo-based AR and image recognition & tracking.
Disclaimer: I work for Wikitude.
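If you need to go the other way, from the GPS coordinates you have to the "meters north/east" such an SDK expects, a simple equirectangular approximation is usually enough at room or city-block scale. This is a rough sketch, not Wikitude API code:

```java
class GeoOffsets {

    // Approximate offset of (lat, lng) from a reference point, in meters north and east.
    static double[] metersNorthEast(double refLat, double refLng, double lat, double lng) {
        double metersPerDegreeLat = 111_320.0;                                    // roughly constant
        double metersPerDegreeLng = 111_320.0 * Math.cos(Math.toRadians(refLat)); // shrinks toward the poles
        double north = (lat - refLat) * metersPerDegreeLat;
        double east = (lng - refLng) * metersPerDegreeLng;
        return new double[] { north, east };
    }
}
```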
I use BeyondAR for this. It is small, simple, free and open source. Here is the link:
BeyondAR Framework

Categories

Resources