My task is to develop an Android application to be used by tourists. The basic use case: I am walking through the old part of a town, I start my app and point the camera at some spot, and an old building that is already gone is shown in its place as it looked before.
The first direction I explored was location-based recognition. I tried frameworks like Wikitude, MetaIO and DroidAR. None of these fully met my needs because, in my opinion, none of them takes advantage of the newest tools that should make this task easier and more robust, like the new Google Play Services Location API. I don't know if I could do better myself, but I would prefer not to write my own solution.
I am now considering marker-based recognition, but it would require additional work to place markers at the desired spots, and I don't believe the user would always be at the right angle and distance from the marker. I have seen a video that used some sort of edge detection, but none of the frameworks I tried offers that feature.
Do you know of a direction, technology or idea I could explore that might lead to a successful solution?
Augmented reality transforms the real-world coordinate system into the camera coordinate system. In location-based AR, the real-world coordinates come from the geographic coordinate system: we convert the GPS coordinate (latitude, longitude, altitude) to a navigation coordinate (East, North, Up), then transform the navigation coordinate into the camera coordinate and display it on the camera view.
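For reference, here is a minimal sketch of the GPS-to-ENU step described above, using the standard WGS84 constants. The class and method names are my own illustration and are not taken from the demo linked below.

```java
// Minimal sketch: convert a GPS point to East-North-Up (ENU) coordinates
// relative to a reference GPS fix (e.g. the device's current position),
// going through Earth-Centered Earth-Fixed (ECEF) coordinates.
public final class GeoToEnu {
    private static final double A = 6378137.0;          // WGS84 semi-major axis (m)
    private static final double E2 = 6.69437999014e-3;  // WGS84 first eccentricity squared

    // Geodetic (degrees, meters) -> ECEF (meters)
    private static double[] toEcef(double latDeg, double lonDeg, double alt) {
        double lat = Math.toRadians(latDeg), lon = Math.toRadians(lonDeg);
        double n = A / Math.sqrt(1.0 - E2 * Math.sin(lat) * Math.sin(lat));
        double x = (n + alt) * Math.cos(lat) * Math.cos(lon);
        double y = (n + alt) * Math.cos(lat) * Math.sin(lon);
        double z = (n * (1.0 - E2) + alt) * Math.sin(lat);
        return new double[] { x, y, z };
    }

    // ENU offset (meters) of 'point' relative to 'reference', both geodetic.
    public static double[] toEnu(double refLat, double refLon, double refAlt,
                                 double lat, double lon, double alt) {
        double[] ref = toEcef(refLat, refLon, refAlt);
        double[] p = toEcef(lat, lon, alt);
        double dx = p[0] - ref[0], dy = p[1] - ref[1], dz = p[2] - ref[2];
        double sinLat = Math.sin(Math.toRadians(refLat)), cosLat = Math.cos(Math.toRadians(refLat));
        double sinLon = Math.sin(Math.toRadians(refLon)), cosLon = Math.cos(Math.toRadians(refLon));
        double east  = -sinLon * dx + cosLon * dy;
        double north = -sinLat * cosLon * dx - sinLat * sinLon * dy + cosLat * dz;
        double up    =  cosLat * cosLon * dx + cosLat * sinLon * dy + sinLat * dz;
        return new double[] { east, north, up };
    }
}
```

The remaining step is to rotate the ENU vector into the camera frame using the device orientation (for example from SensorManager.getRotationMatrix) and project it onto the screen.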
I have just created a demo for you, without using any SDK:
https://github.com/dat-ng/ar-location-based-android
I personally recommend you use Wikitude, because I have created an AR app for Android using the Wikitude SDK.
I am also providing a link to an app developed by Wikitude itself.
See the link below:
https://play.google.com/store/apps/details?id=com.wikitude&hl=en
This app will give you a brief idea of how to explore place details using the Wikitude SDK. The SDK comes in free as well as paid versions, is well documented, and is very easy to implement. They also provide very good samples for beginners.
Refer to this link:
http://www.wikitude.com/products/wikitude-augmented-reality-sdk-mobile/wikitude-sdk-android/
I hope this puts you on the right track.
You already have some great ideas about your app. I think these links will help you learn more.
See the links below:
http://net.educause.edu/ir/library/pdf/ERB1101.pdf
http://www.adristorical-lands.eu/index.php/sq/augmented-reality-app
I hope this helps you go further with your project. Thank you.
I have spent several hours looking for a simple solution and still haven't found one.
The Mapbox style editor has this simple feature: you can hover and click over the map, and it shows a small popup listing all the terrain classes you have enabled in your map.
The question is how to do this in the Android version of Mapbox, given that I have installed my style. I want to click on any place on the map and get the same popup stating, for example, that there is a building, woods, or background here; at another spot it would state that this is a major road.
This IS doable, as Mapbox Studio itself shows. I can't believe it uses some API that isn't available to anyone else, yet no map provider offers such an API while still being able to draw the terrain correctly. What is so complex about adding this API?
And NO, I am not interested in the address. I am interested only in the terrain, for a simple task: distinguishing water from non-water, road from non-road, building from non-building. I don't care where it is by address, so reverse geocoding does not work. Or, put more simply: I need SIMPLER geolocation than an address.
Your question is kind of confusing, but I'll try to help. If I'm reading it correctly, you are trying to create an Android app that uses an API similar to Mapbox Studio's, one that allows the user to select and distinguish between objects on the map such as buildings, water, forest, etc.
If this is the case, then first you must understand that Mapbox Studio uses OpenStreetMap data to distinguish between objects. These objects are stored in a database with tags. It's tough to explain, so I'll just leave a brief wiki page that might help.
To my knowledge, there aren't any Android-specific APIs that will give you the kind of information you're looking for. However, if I were in your situation, I'd take a look at the Overpass API: it's a query tool that lets you send it coordinates and returns all the tags (such as building or water) at that location as a JSON object. From there you can parse and use the data in your app. It is very powerful, so if you decide to use it, I suggest reading up on how it works and testing queries on a website called Overpass Turbo.
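As an illustration, here is a rough sketch of that flow on Android, assuming the public overpass-api.de endpoint; the helper class and method below are my own naming. The Overpass QL query asks for the tags of all ways within roughly 20 m of a coordinate.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Hypothetical helper: query the Overpass API for the tags of all ways
// near a coordinate and return the raw JSON response as a string.
public final class OverpassClient {
    public static String tagsAround(double lat, double lon) throws Exception {
        // Overpass QL: all ways within 20 m of the point, output tags only.
        String query = "[out:json];way(around:20," + lat + "," + lon + ");out tags;";
        URL url = new URL("https://overpass-api.de/api/interpreter?data="
                + URLEncoder.encode(query, "UTF-8"));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        StringBuilder json = new StringBuilder();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                json.append(line);
            }
        } finally {
            conn.disconnect();
        }
        // Parse the JSON for tags such as "building", "highway" or "natural"="water".
        return json.toString();
    }
}
```

On Android this has to run off the main thread (for example in an AsyncTask or a background thread), and you would then inspect the returned tags to decide whether the point is water, a road, a building, and so on.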
Nevertheless, I hope this helps and I understood your question correctly.
I am developing an augmented-reality app to be used both on Google's Project Tango tablet and on ordinary Android devices. The AR on the normal devices is powered by Vuforia, so its libraries are available during development of the app.
While the Tango's capabilities offer a unique opportunity to create a marker-free AR system, the Pose data has significant drift that makes it difficult to justify Tango development due to the data's instability.
When Vuforia was being researched for eventual inclusion into the app, I came across its Extended Tracking capabilities. It uses some advanced Computer Vision to provide tentative information on the device's location without having the AR marker onscreen. I tried out the demo, and it actually works great. Fairly accurate within reason, and minimal drift (especially when compared to the Tango's pose data!)
I would like to implement this extended tracking feature into the Tango version of the app, but after viewing the documentation it appears that the only way to take advantage of the extended tracking feature is to activate it while viewing an AR marker, and then the capability takes over once the marker disappears from view.
Is there any way to activate this Extended Tracking feature without requiring an AR marker to source its original position, and simply use it to stabilize and correct error in the Tango's pose data? This seems like the most realistic solution to the drift problem that I've come up with yet, and I'd really like to be able to take advantage of this technology.
This is my first answer on Stack Overflow, so I hope it helps!
I have asked myself the same question about Vuforia, since it can often be more stable with extended tracking than with a marker. For example, when far from a marker and/or viewing it at an angle, tracking can be unstable; if I then cover up the marker, thereby forcing extended tracking, it works better! I haven't come across a way to use extended tracking on its own, but I haven't looked very far.
My suggestion is to look into using a UDT (user-defined target). The Vuforia samples show how to use UDTs: they are designed so that the user can take a photo of whatever they like and use it as a target. What you could do instead is take this photo automatically, without user input, and then use this UDT and the extended tracking that starts from the created target.
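Here is a rough sketch of that automatic capture, following the flow of Vuforia's UserDefinedTargets sample. The class names below are from the com.vuforia Java API (older releases used the com.qualcomm.vuforia package), and the surrounding class and method are mine, so treat this as an outline to adapt to your SDK version.

```java
import com.vuforia.ImageTargetBuilder;
import com.vuforia.ObjectTracker;
import com.vuforia.TrackerManager;

// Rough sketch: build a user-defined target from the current camera frame
// without any user input, so extended tracking can start from that target.
public final class AutoTarget {
    public static void captureCurrentFrameAsTarget() {
        ObjectTracker tracker = (ObjectTracker) TrackerManager.getInstance()
                .getTracker(ObjectTracker.getClassType());
        ImageTargetBuilder builder = tracker.getImageTargetBuilder();

        builder.startScan(); // start evaluating camera frames as target candidates

        // Once the current frame has enough features, turn it into a target.
        if (builder.getFrameQuality() != ImageTargetBuilder.FRAME_QUALITY.FRAME_QUALITY_NONE) {
            builder.build("auto_target", 320.0f); // target name, scene size width
        }
        builder.stopScan();

        // As in the UserDefinedTargets sample, the built target is then fetched
        // from the builder and added to the active dataset, after which extended
        // tracking can keep estimating the pose once the scene leaves the view.
    }
}
```

In practice you would wait until getFrameQuality() reports medium or high quality before calling build(), exactly as the sample's UI does with its capture button.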
Just a suggestion I thought might be useful. Personally, I find the Tango's tracking amazing and much better than Vuforia's extended tracking (to be expected, given the extra sensors), but I suppose it all depends on the environment.
Good luck, I hope this suggestion works out,
Beau
I want to provide an augmented reality feature in my app based on the user's location. For example, if the user frames a monument with the device's camera, a description of it should be displayed.
How can I implement this in an Android app?
What framework do I need to install?
Where can I find a few examples showing the basic functions?
EDIT
Rather than displaying information about the monuments framed by the device, I could simply show in which direction certain points of interest are located. But, given a certain direction (e.g. north), how can I determine what lies in that direction within a certain radius?
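To make that concrete, here is the kind of filtering I have in mind, sketched with only the standard android.location.Location API (distanceTo and bearingTo); the class name and parameters are just illustrative.

```java
import android.location.Location;
import java.util.ArrayList;
import java.util.List;

// Illustrative filter: keep only the POIs that lie within 'radiusMeters' of
// the user and roughly in the direction the user is facing (azimuth, degrees).
public final class PoiFilter {
    public static List<Location> inView(Location user, List<Location> pois,
                                        float azimuthDeg, float radiusMeters,
                                        float halfFovDeg) {
        List<Location> visible = new ArrayList<>();
        for (Location poi : pois) {
            float distance = user.distanceTo(poi);   // meters
            float bearing = user.bearingTo(poi);     // degrees east of true north
            // Smallest angle between the bearing and the viewing direction.
            float delta = Math.abs(((bearing - azimuthDeg) + 540f) % 360f - 180f);
            if (distance <= radiusMeters && delta <= halfFovDeg) {
                visible.add(poi);
            }
        }
        return visible;
    }
}
```

The azimuth itself would come from the compass / rotation-vector sensors; the open part of the question is where the list of candidate POIs comes from.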
I think this is a whole field of study...
For example, on Android, to work with location you have to use the LocationManager.
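A minimal sketch of requesting GPS updates with the LocationManager (this assumes the ACCESS_FINE_LOCATION permission is declared in the manifest; the class name is mine):

```java
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

// Minimal sketch: receive GPS position updates to drive a location-based AR view.
public class SimpleLocation implements LocationListener {
    public void start(Context context) {
        LocationManager lm = (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        // Update at most every 2 s or every 1 m of movement.
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER, 2000L, 1f, this);
    }

    @Override public void onLocationChanged(Location location) {
        // location.getLatitude() / getLongitude() feed the AR overlay
    }
    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onProviderDisabled(String provider) { }
}
```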
For the monument part, you could use iBeacons on Android, for example.
Briefly, what you're looking for there is an "IPS" (indoor positioning system). I dare say this is not the place to ask for a whole app design.
Good luck.
I have found the solution to the question above by myself: I'm using Metaio for Android! It is a powerful tool which provides a lot of examples of augmented reality!
I want to make a simple location-based app for Android. What are my (free) options? I have looked at SDKs like Wikitude and Vuforia, but they are too much for my needs. I do not need image recognition and such... I just need to show, on the camera view, the direction and distance to some coordinate/location.
Try the Junaio and Metaio augmented reality frameworks. They are easy to work with. Also take a look at this augmented reality framework comparison table; you can pick GPS-enabled frameworks from it.
I suggest you use the APIs called Junaio or Metaio. I worked with them just a few days ago. They are very nice to work with, and the results are also very good, with good efficiency. Just give them a try. Here is the link.
As far as I understand, Vuforia is a good starting point for developing AR applications on the Android platform.
The docs for simple virtual buttons are quite good, but how would one combine this with location-based data?
For example:
At the application level, both markers and location-based data should be used; so one would need, e.g., Vuforia plus another component for integrating location-based data.
To give a deeper insight into what should be possible, here is an example:
You walk through a landscape where the phone can
1.) recognize its position and show location-based points on the screen, and
2.) recognize objects in your view and perform actions upon "touching" them (virtual buttons, as I learned).
So my final question is:
Do you know of examples of frameworks or demo apps where such a task is accomplished by tying Vuforia together with some location-based AR product/framework XYZ?
Please excuse me if I am not as precise as needed; I searched SO, but (as far as I can see) there are no such questions already.
For location-based AR like Wikitude, Layar, etc., which draw POIs on the camera view based on location, you have to fetch the POIs from different APIs like OpenStreetMap, Twitter, etc., parse them, and then draw that data on the camera view. Sensor values and GPS values are used to decide where each POI appears.
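As a sketch of the sensor side (standard Android SensorManager API; the wrapping class is my own), the azimuth obtained below is what you compare against the GPS bearing to each POI to decide where it lands on the camera view:

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Minimal sketch: derive the device azimuth (compass heading) from the
// accelerometer and magnetometer readings.
public class AzimuthTracker implements SensorEventListener {
    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];
    private final float[] rotation = new float[9];
    private final float[] orientation = new float[3];

    @Override public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
        if (SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(rotation, orientation);
            float azimuthDeg = (float) Math.toDegrees(orientation[0]); // 0 = north
            // Compare azimuthDeg with the bearing to each POI to place it on screen.
        }
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```

Register the listener for TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD with SensorManager.registerListener; in a real app you would also smooth the values, since the raw readings are noisy.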
To start with, you can go through the Mixare code, which is open source.
You can combine this code with the Vuforia camera view.
Follow my answer here for details. Good luck.