I am developing an augmented-reality app to be used both on Google's Project Tango tablet and on ordinary Android devices. The AR on the normal devices is powered by Vuforia, so its libraries are already available in the app's development.
While the Tango's capabilities offer a unique opportunity to create a marker-free AR system, its pose data drifts significantly, which makes it difficult to justify developing for the Tango.
While researching Vuforia for eventual inclusion in the app, I came across its Extended Tracking capability. It uses computer vision to estimate the device's position even when the AR marker is not onscreen. I tried out the demo, and it works great: reasonably accurate, with minimal drift (especially compared to the Tango's pose data!).
I would like to add this extended tracking feature to the Tango version of the app, but from the documentation it appears that the only way to use it is to activate it while viewing an AR marker; the feature then takes over once the marker disappears from view.
Is there any way to activate Extended Tracking without requiring an AR marker to establish its initial position, and simply use it to stabilize and correct the error in the Tango's pose data? This is the most realistic solution to the drift problem I've come up with so far, and I'd really like to be able to take advantage of this technology.
This is my first answer on Stack Overflow, so I hope it helps!
I have asked myself the same question about Vuforia, because it can often be more stable with extended tracking than with a marker. For example, when far from a marker and/or at a steep angle, tracking can be unstable; if I then cover up the marker, forcing extended tracking to take over, it works better! I haven't come across a way to use only extended tracking, but I haven't looked very hard.
My suggestion is that you look into using a UDT (user-defined target). The Vuforia samples show how to use UDTs: they are designed so that the user can take a photo of whatever they like and use it as a target. What you could do is take that photo automatically, without user input, and then use the extended tracking from the created target, roughly as sketched below.
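As a rough illustration, something along these lines should be possible, following the pattern of Vuforia's UserDefinedTargets sample. This is only a sketch: the class, method, and constant names are from the Vuforia 5.x/6.x Java API and may differ in other SDK versions, and the target name and scene size are arbitrary.

```java
// Sketch only, based on the pattern in Vuforia's UserDefinedTargets sample
// (Java API, Vuforia 5.x/6.x era); verify names against your SDK version.
import com.vuforia.DataSet;
import com.vuforia.ImageTargetBuilder;
import com.vuforia.ObjectTracker;
import com.vuforia.Trackable;
import com.vuforia.TrackableSource;
import com.vuforia.TrackerManager;

public class AutoUserDefinedTarget {

    // Call while the camera is running: captures the current view as a target
    // with no user interaction, then enables extended tracking on it.
    public static void buildFromCurrentFrame(DataSet userDefDataSet) {
        ObjectTracker tracker = (ObjectTracker) TrackerManager.getInstance()
                .getTracker(ObjectTracker.getClassType());
        ImageTargetBuilder builder = tracker.getImageTargetBuilder();

        builder.startScan();
        // Only build if the current frame has enough trackable features.
        if (builder.getFrameQuality() >= ImageTargetBuilder.FRAME_QUALITY.FRAME_QUALITY_MEDIUM) {
            builder.build("auto_udt", 0.3f); // arbitrary name and scene size (metres)
        }
        builder.stopScan();

        // The real sample polls getTrackableSource() until it is non-null,
        // and swaps the dataset from inside the Vuforia update callback.
        TrackableSource source = builder.getTrackableSource();
        if (source != null) {
            tracker.stop();
            tracker.deactivateDataSet(userDefDataSet);
            Trackable trackable = userDefDataSet.createTrackable(source);
            if (trackable != null) {
                trackable.startExtendedTracking();
            }
            tracker.activateDataSet(userDefDataSet);
            tracker.start();
        }
    }
}
```

Whether extended tracking started this way is stable enough to correct the Tango's drift would still need testing, since the automatically built target is only as good as whatever happens to be in front of the camera at that moment.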
Just a suggestion I thought might be useful. Personally, I find the Tango's tracking amazing and much better than Vuforia's extended tracking (to be expected, given the extra sensors), but I suppose it all depends on the environment.
Good luck, I hope this suggestion works,
Beau
I want to develop an Android augmented-reality application in which the app should have a function to reconstruct destroyed objects (e.g., buildings/statues), as shown in the following video:
https://www.youtube.com/watch?v=WOVjISxlhpU
I have gone through the Metaio, Wikitude, and Vuforia sites; each has its own difficulties. In the end I found that Vuforia has a feature called Smart Terrain, which is used for 3D animation and game development. The issue is that only limited tutorials are available for developing a customized application.
Through the link above I also found armedia.it and hyperspaces.inglobetechnologies.com; those too have only limited tutorials with code.
Please let me know if any other SDK is available that could fulfill this app feature, and share any useful tutorials for doing this with the SDKs above.
Thanks in advance
I do not think that there are any publicly available tools that can help you do this, except perhaps that armedia app. Reading through it, it seems like their approach is kind of laborious and fragile (align photos of user viewpoints with accurate 3D model). If you can't work through the tutorials these tools have, then posting here isn't going to get you what you want: SO is for asking specific technical questions (e.g., help fixing problems in code you have tried to build), not for general guidance and help.
FWIW, I don't think Vuforia's visual tracking will work for something like this, as it is aimed at MUCH smaller-scale things (for which you can build a target or object), typically things smaller than a person. Metaio is no longer available (Apple bought them).
Non-visual tracking (GPS + orientation) is not sufficient to attain this kind of tight registration.
I want to provide an augmented-reality service in my app using the user's location. For example, if the user frames a monument with the device's camera, a description of it should be displayed.
How can I implement this in an Android app?
What framework do I need to install?
Where can I find a few examples showing the basic functions?
EDIT
Rather than displaying information about the monuments framed by the device, I could simply show the direction in which certain points of interest are located. But, given a certain direction (e.g., north), how can I determine what lies in that direction within a certain radius?
I think this is a whole field of study...
For example, in Android you implement location using the LocationManager.
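A rough sketch of that, combined with the bearing/radius filter asked about in the edit (the Poi class and the 500 m / 20 degree thresholds are just made-up examples):

```java
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;
import java.util.ArrayList;
import java.util.List;

public class PoiFinder implements LocationListener {

    // Hypothetical POI store: a name plus latitude/longitude.
    static class Poi {
        String name;
        double lat, lon;
        Poi(String name, double lat, double lon) { this.name = name; this.lat = lat; this.lon = lon; }
    }

    private final List<Poi> pois = new ArrayList<>();

    public void start(LocationManager locationManager) {
        // Request GPS fixes at most every 5 s / 10 m (requires location permission).
        locationManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 10, this);
    }

    @Override
    public void onLocationChanged(Location here) {
        // Keep POIs roughly to the north of the user (bearing within +/-20 degrees)
        // and closer than 500 m.
        for (Poi poi : pois) {
            Location target = new Location("poi");
            target.setLatitude(poi.lat);
            target.setLongitude(poi.lon);

            float distanceMeters = here.distanceTo(target);
            float bearingDegrees = here.bearingTo(target); // -180..180, 0 = true north

            if (distanceMeters < 500 && Math.abs(bearingDegrees) < 20) {
                // poi lies "in that direction within the radius" -> show it
            }
        }
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
    @Override public void onProviderEnabled(String provider) {}
    @Override public void onProviderDisabled(String provider) {}
}
```

Location.bearingTo() returns the initial bearing in degrees east of true north, so comparing it against the direction you care about, and the distance against your radius, answers the "what is in that direction" part of the question.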
For the monument part, you could use iBeacons on Android, for example.
Briefly, what you're looking for is an IPS (indoor positioning system). I dare say this is not the place to ask for a whole app design.
Good luck.
I found the solution to the question above by myself. I'm using Metaio for Android! It is a powerful tool that provides a lot of examples of augmented reality!
My task is to develop an application for Android that will be used by tourists. Basic use case: I am walking through the old part of some town, I start my app and point the camera at some place, and an old building that is already gone is displayed in its place as it looked before.
The first direction I explored was location-based recognition. I tried some frameworks like Wikitude, Metaio, and DroidAR. None of these fully met my needs, because (in my opinion) none of them used the newest tools that should make this task easier, like the new Google Play Services Location API. I don't know if I could do better myself, but I would prefer not to write my own solution.
I am now thinking about exploring marker-based recognition, but it would require additional work to place markers at the desired places, and I don't believe users would always be at the right angle and distance from a marker. I have seen a video that used some sort of edge detection, but none of the frameworks I tried had this feature.
Do you know of any direction, technology, or idea that I could explore and that might lead to a successful solution?
Augmented reality transforms the real-world coordinate system into the camera coordinate system. In location-based AR, the real-world coordinates are geographic: we convert the GPS coordinate (latitude, longitude, altitude) to a navigation coordinate (east, north, up), then transform the navigation coordinate into the camera coordinate and display it on the camera view.
I just created a demo for you, not using any SDK:
https://github.com/dat-ng/ar-location-based-android
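For reference, the geographic-to-ENU step described above can be approximated without any SDK. A minimal sketch (simple equirectangular approximation, fine for points within a few kilometres; the navigation-to-camera transform and the rendering are omitted):

```java
public final class GeoToEnu {

    private static final double EARTH_RADIUS_M = 6371000.0;

    // Converts a GPS point to East/North/Up metres relative to a reference point
    // (e.g. the device's own GPS fix). Equirectangular approximation.
    public static double[] toEnu(double lat, double lon, double alt,
                                 double refLat, double refLon, double refAlt) {
        double dLat = Math.toRadians(lat - refLat);
        double dLon = Math.toRadians(lon - refLon);

        double east  = dLon * Math.cos(Math.toRadians(refLat)) * EARTH_RADIUS_M;
        double north = dLat * EARTH_RADIUS_M;
        double up    = alt - refAlt;

        return new double[] { east, north, up };
    }
}
```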
I personally recommend "Wikitude", because I have created an AR app for Android using the Wikitude SDK.
Here is a link to an app that was developed by Wikitude itself:
https://play.google.com/store/apps/details?id=com.wikitude&hl=en
This app will give you a good idea of how to explore place details using the Wikitude SDK. The SDK comes in free and paid versions. It is well documented and very easy to implement, and they provide good sample practices for beginners.
Refer to this link:
http://www.wikitude.com/products/wikitude-augmented-reality-sdk-mobile/wikitude-sdk-android/
I hope this puts you on the right track.
You already have some great ideas for your app. I think these links will help you learn more.
See the links below:
http://net.educause.edu/ir/library/pdf/ERB1101.pdf
http://www.adristorical-lands.eu/index.php/sq/augmented-reality-app
I hope this helps you go further in your project. Thank you.
For my final year project at university, I am extending an application called Rviz for Android. This is an application for Android tablets that uses ROS (robot operating system) to display information coming from robots. The main intent of the project is essentially the opposite of traditional augmented reality - instead of projecting something digital onto a view of the real world, I am projecting a view of the real world, coming from the tablet's camera, onto a digital view of the world (an abstract map). The intended purpose of the application is that the view of the real world should move on the map as the tablet moves around.
To move the view of the camera feed on screen, I am using the tablet's accelerometer and calculating distance travelled. This is inherently flawed, and as such, the movement is far from accurate (this itself doesn't matter that much - it's great material for my report). To improve the movement of the camera feed, I wish to use markers placed at predefined positions in the real world, with the intent that, if a marker is detected, the view jumps to the position of the marker. Unfortunately, while there are many SDKs out there that deal with marker detection (such as the Qualcomm SDK), they are all geared towards proper augmented reality (that is to say, overlaying something on top of a marker).
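For reference, a simplified sketch of this kind of naive double integration (not the project's actual code) looks roughly like the following, and it shows why the movement drifts: any noise or bias in the accelerometer gets integrated twice.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class DeadReckoning implements SensorEventListener {

    private final float[] velocity = new float[3];
    private final float[] position = new float[3];
    private long lastTimestampNs = 0;

    public void start(SensorManager sensorManager) {
        // TYPE_LINEAR_ACCELERATION already has gravity removed.
        Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION);
        sensorManager.registerListener(this, accel, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (lastTimestampNs != 0) {
            float dt = (event.timestamp - lastTimestampNs) * 1e-9f; // ns -> s
            for (int i = 0; i < 3; i++) {
                // Integrate acceleration twice; noise and bias accumulate quickly.
                velocity[i] += event.values[i] * dt;
                position[i] += velocity[i] * dt;
            }
        }
        lastTimestampNs = event.timestamp;
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) {}
}
```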
So far, the only two frameworks I have identified that could be somewhat useful are OpenCV (which looks very promising indeed, though I'm not very experienced with C++) and AndAR, which again seems very focused on traditional AR uses, but I might be able to modify. Would either of these frameworks be appropriate here? Is there any other way I could implement a solution?
If it helps at all, this is the source code of the application I am extending, and this is the source code for ROS on Android (the code which uses the camera is in the "android_gingerbread_mr1" folder). I can also provide a link to my extensions to Rviz, if that would also help. Thank you very much!
Edit: the main issue I'm having at the moment is trying to integrate the two separate classes which access the camera (JavaCameraView in OpenCV, and CameraPreviewView in ROS). They both need to be active at the same time, but they do different things. I'm sure I can combine them. As previously mentioned, I'll link to/upload the classes in question if needed.
Have a look at the section about Template Matching in the OpenCV documentation. This thread may also be useful.
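A bare-bones version of that template-matching idea with OpenCV's Java bindings (OpenCV 3.x; the score threshold is arbitrary) might look like this. On Android you would typically initialise OpenCV first (e.g. via OpenCVLoader) and pass in the Mat delivered by JavaCameraView.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.imgproc.Imgproc;

public class MarkerTemplateMatch {

    // Returns the top-left corner of the best match of `template` inside `frame`,
    // or null if the best normalized score is below `minScore` (e.g. 0.8).
    public static Point findMarker(Mat frame, Mat template, double minScore) {
        Mat result = new Mat();
        Imgproc.matchTemplate(frame, template, result, Imgproc.TM_CCOEFF_NORMED);

        Core.MinMaxLocResult mm = Core.minMaxLoc(result);
        return (mm.maxVal >= minScore) ? mm.maxLoc : null;
    }
}
```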
So I've managed to find a solution to my problem, and it's completely different from what I thought it would be. All of the image processing is offloaded onto a computer, and is performed by a ROS node. This node uses a library called ArUco to detect markers. The markers are generated by a separate program provided with the library, and each has its own unique ID.
When a marker is detected, a ROS message is published containing the marker's ID. The app on the tablet receives the message, and moves the real-world view according to which marker it receives. It works pretty well, though it's a bit unreliable, because I have to use a low image quality to make rendering and transport of the image quicker. And that's my solution! I may post my source code here once the project is completely finished.
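In outline, the tablet side can be a plain rosjava subscriber along these lines; the topic name and the Int32 message type below are assumptions for this sketch, not necessarily what the project uses:

```java
import org.ros.message.MessageListener;
import org.ros.namespace.GraphName;
import org.ros.node.AbstractNodeMain;
import org.ros.node.ConnectedNode;
import org.ros.node.topic.Subscriber;

public class MarkerIdListener extends AbstractNodeMain {

    @Override
    public GraphName getDefaultNodeName() {
        return GraphName.of("rviz_android/marker_id_listener");
    }

    @Override
    public void onStart(ConnectedNode connectedNode) {
        // Topic name and Int32 message type are placeholders.
        Subscriber<std_msgs.Int32> subscriber =
                connectedNode.newSubscriber("aruco/marker_id", std_msgs.Int32._TYPE);

        subscriber.addMessageListener(new MessageListener<std_msgs.Int32>() {
            @Override
            public void onNewMessage(std_msgs.Int32 message) {
                int markerId = message.getData();
                // Look up the marker's predefined position and jump the
                // camera-feed view there (application-specific).
            }
        });
    }
}
```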
I read this paper about real-time object recognition and found it very interesting. I'm currently working on a location-aware app that is similar to the sport of orienteering.
For control-point representation (a checkpoint/waypoint), the initial idea was to use QR-code scanning to verify that a user has been there, and thereby give directions to the next control point.
Now it looks much more interesting to use some sort of natural object/scene recognition to verify that a user is at a specific location on a route (if possible and effective).
Does anyone know more about this, with specific code examples or experiments rather than just a conceptual presentation?
Solutions for this are complex, still in the early stages of development, and prone to inaccuracies. Have a look at OpenCV to begin with.
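As a concrete starting point, one common OpenCV technique for this kind of scene verification is local-feature matching, e.g. ORB descriptors with a brute-force Hamming matcher: extract features from the live camera frame and from a stored reference photo of the control point, and accept the location if enough descriptors match. A rough sketch, assuming the OpenCV 3.x Java bindings (the distance threshold and the required "good match" count are things you would have to tune):

```java
import org.opencv.core.DMatch;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

public class SceneMatcher {

    private final ORB orb = ORB.create();
    private final DescriptorMatcher matcher =
            DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);

    // Counts "good" feature matches between the live camera frame and a
    // reference photo of the control point; a high count suggests the user
    // is looking at the right scene. Threshold values are arbitrary.
    public int countGoodMatches(Mat frameGray, Mat referenceGray) {
        MatOfKeyPoint kp1 = new MatOfKeyPoint(), kp2 = new MatOfKeyPoint();
        Mat desc1 = new Mat(), desc2 = new Mat();

        orb.detectAndCompute(frameGray, new Mat(), kp1, desc1);
        orb.detectAndCompute(referenceGray, new Mat(), kp2, desc2);
        if (desc1.empty() || desc2.empty()) return 0;

        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(desc1, desc2, matches);

        int good = 0;
        for (DMatch m : matches.toList()) {
            if (m.distance < 40) good++; // Hamming distance threshold (tunable)
        }
        return good;
    }
}
```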