Best sensor selection for an AR application - Android

Hello, I want to ask which is the best sensor I can use for an augmented-reality application. My AR app uses the mobile's camera and finds points of interest in the live view, and I want to detect when a POI is in the camera's field of view. I have read a lot of articles and want to decide which option is best. Here are my choices:
1) Compass with accelerometer
2) Rotation vector
I thought the only solution was 1), but now I think 2) is simpler to implement and more accurate than the first. Thanks in advance!

Mostly, you need to use multiple sensors. I've written a POI app and use the accelerometer, GPS, compass, and orientation sensors to determine the current device position and field of view.
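The core test this answer relies on can be sketched without any Android APIs: once the sensors give you a device azimuth and you have computed the bearing from the user to the POI, the POI is visible when the angle between the two lies within half the camera's horizontal field of view. A minimal plain-Java sketch (class and method names are mine, not from the answer's app):

```java
// Minimal sketch: decide whether a POI is inside the camera's horizontal
// field of view, given the device azimuth (degrees, 0 = north, from the
// rotation vector or compass) and the bearing from the user to the POI.
public class FovCheck {
    // Smallest signed difference between two angles, in degrees (-180, 180].
    static double angleDiff(double a, double b) {
        double d = (a - b) % 360.0;
        if (d > 180.0) d -= 360.0;
        if (d <= -180.0) d += 360.0;
        return d;
    }

    // True if the POI bearing lies within +/- half the horizontal FOV
    // of the direction the camera is pointing.
    static boolean poiInView(double deviceAzimuthDeg, double poiBearingDeg,
                             double horizontalFovDeg) {
        return Math.abs(angleDiff(poiBearingDeg, deviceAzimuthDeg))
                <= horizontalFovDeg / 2.0;
    }

    public static void main(String[] args) {
        // Camera points due north with a 60 degree horizontal FOV.
        System.out.println(poiInView(0.0, 20.0, 60.0));   // inside the FOV
        System.out.println(poiInView(0.0, 40.0, 60.0));   // outside the FOV
        System.out.println(poiInView(350.0, 10.0, 60.0)); // wrap-around, inside
    }
}
```

The wrap-around normalisation matters: a naive subtraction would report a 340-degree gap between azimuth 350 and bearing 10, when the real angle is 20 degrees.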
You should probably start from the basics of augmented reality before attempting an app like a POI one, as they are considerably complex.
Might I humbly recommend my book, Pro Android Augmented Reality?
Even if you don't get a copy, you can still pick up the source code from its GitHub repo. Chapter 9 contains code for a POI example app that shows nearby tweets and Wikipedia articles.

Related

Android - Augmented Reality using METAIO

I want to provide an augmented-reality service in my app using the user's location. For example, if the user frames a monument with the device's camera, a description of it should be shown.
How can I implement this in an Android app?
What framework do I need to install?
Where can I find a few examples showing the basic functions?
EDIT
Rather than displaying information about the monuments framed by the device, I could simply show the direction in which certain points of interest are located. But, given a certain direction (e.g. north), how can I determine what is in that direction within a certain radius?
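The EDIT question has a self-contained geometric answer: compute each POI's great-circle distance and initial bearing from the user, then keep the POIs whose bearing matches the target direction within a tolerance and whose distance is inside the radius. A plain-Java sketch (class and method names are mine; the haversine and bearing formulas are the standard spherical-Earth ones):

```java
// Sketch: given the user's location and a candidate POI, decide whether the
// POI lies within a radius AND in a given compass direction (e.g. north).
public class PoiFilter {
    static final double EARTH_RADIUS_M = 6_371_000.0;

    // Great-circle distance between two lat/lon points (haversine), metres.
    static double distanceM(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }

    // Initial bearing from point 1 to point 2, degrees clockwise from north.
    static double bearingDeg(double lat1, double lon1, double lat2, double lon2) {
        double phi1 = Math.toRadians(lat1), phi2 = Math.toRadians(lat2);
        double dLon = Math.toRadians(lon2 - lon1);
        double y = Math.sin(dLon) * Math.cos(phi2);
        double x = Math.cos(phi1) * Math.sin(phi2)
                 - Math.sin(phi1) * Math.cos(phi2) * Math.cos(dLon);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    static boolean inDirection(double lat, double lon, double poiLat, double poiLon,
                               double targetBearingDeg, double toleranceDeg,
                               double radiusM) {
        if (distanceM(lat, lon, poiLat, poiLon) > radiusM) return false;
        double d = Math.abs(bearingDeg(lat, lon, poiLat, poiLon) - targetBearingDeg);
        if (d > 180.0) d = 360.0 - d;
        return d <= toleranceDeg;
    }

    public static void main(String[] args) {
        // A POI about 1.1 km due north of the user: "north" within 2 km,
        // but not within a 500 m radius.
        System.out.println(inDirection(48.0, 11.0, 48.01, 11.0, 0.0, 15.0, 2000.0));
        System.out.println(inDirection(48.0, 11.0, 48.01, 11.0, 0.0, 15.0, 500.0));
    }
}
```

On Android you could also lean on `Location.distanceTo()` and `Location.bearingTo()` instead of hand-rolling the formulas.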
I think this is a whole field of study...
For example, in Android, to implement location you have to use the LocationManager.
To do the monument detection, you could use iBeacon for Android, for example.
Briefly, what you're looking for is an IPS (Indoor Positioning System). I dare say this is not the place to ask for a whole app design.
Good luck.
I have found the solution to the question above by myself: I'm using Metaio for Android! It is a powerful tool which provides a lot of examples about augmented reality!

Geolocation based augmented reality

My task is to develop an Android application to be used by tourists. Basic use case: I am walking through the old part of some town, I start my app and point the camera at some spot, and an old building that is long gone is rendered in its place as it looked before.
The first direction I explored was location-based recognition. I tried some frameworks like Wikitude, Metaio and DroidAR. None of these fully met my needs because, in my opinion, none of them used the newest tools that should make this task easier, such as the new Google Play Services Location API. I don't know if I could do better myself, but I would prefer not to write my own solution.
I am now considering marker-based recognition, but it would require extra work to place markers at the desired spots, and I don't believe users would stand at the right angle and distance to each marker. I have seen a video that used some sort of edge detection, but none of the frameworks I tried had this feature.
Do you know about some direction, technology or idea that I could explore and may lead to successful solution?
Augmented reality transforms the real-world coordinate system into the camera coordinate system. In location-based AR, the real-world coordinates are a geographic coordinate system. We convert the GPS coordinate (latitude, longitude, altitude) to a navigation coordinate (East, North, Up), then transform the navigation coordinate to the camera coordinate and display it on the camera view.
I just created a demo for you, without using any SDK:
https://github.com/dat-ng/ar-location-based-android
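The lat/lon/alt to East/North/Up step described above can, for the short distances a POI app deals with, be approximated with a local-tangent-plane conversion. A simplified sketch (the names and the spherical-Earth approximation are mine; real implementations typically go through ECEF using the WGS-84 ellipsoid):

```java
// Sketch of the coordinate chain described above: convert a GPS fix
// (lat, lon, alt) of a target into local ENU (East, North, Up) metres
// relative to the observer. Spherical-Earth approximation for brevity.
public class GeoToEnu {
    static final double R = 6_371_000.0; // mean Earth radius, metres

    // Returns {east, north, up} of the target relative to the observer.
    static double[] toEnu(double obsLat, double obsLon, double obsAlt,
                          double tgtLat, double tgtLon, double tgtAlt) {
        double latRad = Math.toRadians(obsLat);
        // One degree of longitude shrinks by cos(latitude).
        double east  = Math.toRadians(tgtLon - obsLon) * R * Math.cos(latRad);
        double north = Math.toRadians(tgtLat - obsLat) * R;
        double up    = tgtAlt - obsAlt;
        return new double[] {east, north, up};
    }

    public static void main(String[] args) {
        // Target 0.01 degrees north of the observer, 30 m higher.
        double[] enu = toEnu(48.0, 11.0, 500.0, 48.01, 11.0, 530.0);
        System.out.printf("E=%.0f N=%.0f U=%.0f%n", enu[0], enu[1], enu[2]);
    }
}
```

Once a POI is in ENU, projecting it into the camera view is a matter of applying the device's rotation matrix (on Android, from `SensorManager.getRotationMatrix` or the rotation-vector sensor) and the camera projection.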
I personally recommend you use Wikitude, because I have created an AR app for Android using the Wikitude SDK.
Here is a link to an app that was developed by Wikitude itself:
https://play.google.com/store/apps/details?id=com.wikitude&hl=en
This app will give you a brief idea of exploring place details using the Wikitude SDK. The SDK has free as well as paid libraries. It is well documented and very easy to implement, and they have provided very good sample practices for beginners.
Refer this link :
http://www.wikitude.com/products/wikitude-augmented-reality-sdk-mobile/wikitude-sdk-android/
I hope this puts you on the right track.
You already have some great ideas about your app. I guess these links will help you learn more.
See links below:
http://net.educause.edu/ir/library/pdf/ERB1101.pdf
http://www.adristorical-lands.eu/index.php/sq/augmented-reality-app
Hope this helps you go further in your project. Thank you.

Simple Marker Detection on Android (not traditional augmented reality), possibly with OpenCV

For my final year project at university, I am extending an application called Rviz for Android. This is an application for Android tablets that uses ROS (robot operating system) to display information coming from robots. The main intent of the project is essentially the opposite of traditional augmented reality - instead of projecting something digital onto a view of the real world, I am projecting a view of the real world, coming from the tablet's camera, onto a digital view of the world (an abstract map). The intended purpose of the application is that the view of the real world should move on the map as the tablet moves around.
To move the view of the camera feed on screen, I am using the tablet's accelerometer and calculating distance travelled. This is inherently flawed, and as such, the movement is far from accurate (this itself doesn't matter that much - it's great material for my report). To improve the movement of the camera feed, I wish to use markers placed at predefined positions in the real world, with the intent that, if a marker is detected, the view jumps to the position of the marker. Unfortunately, while there are many SDKs out there that deal with marker detection (such as the Qualcomm SDK), they are all geared towards proper augmented reality (that is to say, overlaying something on top of a marker).
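The inherent flaw mentioned above is easy to demonstrate numerically: double-integrating acceleration means any constant sensor bias turns into a position error that grows with the square of time. A small sketch (the bias and sample rate are illustrative numbers, not from this project):

```java
// Sketch of why accelerometer dead reckoning drifts: a constant sensor
// bias, integrated twice, produces position error growing roughly as
// 0.5 * bias * t^2 even when the device is not moving at all.
public class DeadReckoning {
    // Integrate acceleration samples (m/s^2) at a fixed interval dt (s)
    // into a displacement, via velocity.
    static double integratePosition(double[] accel, double dt) {
        double v = 0.0, x = 0.0;
        for (double a : accel) {
            v += a * dt;   // first integration: velocity
            x += v * dt;   // second integration: position
        }
        return x;
    }

    public static void main(String[] args) {
        // Device is stationary, but the sensor reports a 0.05 m/s^2 bias.
        int samples = 1000;          // 10 s at 100 Hz
        double dt = 0.01;
        double[] accel = new double[samples];
        java.util.Arrays.fill(accel, 0.05);
        // Apparent displacement after 10 s despite zero real movement
        // (about 2.5 m, matching 0.5 * 0.05 * 10^2):
        System.out.printf("%.2f m%n", integratePosition(accel, dt));
    }
}
```

This quadratic growth is exactly why periodic absolute fixes, such as the markers proposed below, are needed to reset the accumulated error.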
So far, the only two frameworks I have identified that could be somewhat useful are OpenCV (which looks very promising indeed, though I'm not very experienced with C++) and AndAR, which again seems very focused on traditional AR uses, but I might be able to modify. Would either of these frameworks be appropriate here? Is there any other way I could implement a solution?
If it helps at all, this is the source code of the application I am extending, and this is the source code for ROS on Android (the code which uses the camera is in the "android_gingerbread_mr1" folder). I can also provide a link to my extensions to Rviz, if that would help. Thank you very much!
Edit: the main issue I'm having at the moment is trying to integrate the two separate classes which access the camera (JavaCameraView in OpenCV, and CameraPreviewView in ROS). They both need to be active at the same time, but they do different things. I'm sure I can combine them. As previously mentioned, I'll link to/upload the classes in question if needed.
Have a look at the section about Template Matching in the OpenCV documentation. This thread may also be useful.
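For intuition about what template matching does under the hood, here is a deliberately naive plain-Java version: slide the template across the image and score every offset, keeping the best one. OpenCV's `matchTemplate` does the same far faster and offers several scoring functions (e.g. `TM_SQDIFF`, `TM_CCOEFF_NORMED`); the sum-of-absolute-differences score below is just for illustration:

```java
// Naive template matching over grayscale images stored as 2D int arrays:
// for every valid offset, compute the sum of absolute differences (SAD)
// between the template and the image patch, and keep the lowest score.
public class TemplateMatch {
    // Returns {bestRow, bestCol} of the offset with the lowest SAD score.
    static int[] match(int[][] image, int[][] tmpl) {
        int bestR = 0, bestC = 0;
        long bestScore = Long.MAX_VALUE;
        for (int r = 0; r + tmpl.length <= image.length; r++) {
            for (int c = 0; c + tmpl[0].length <= image[0].length; c++) {
                long score = 0;
                for (int i = 0; i < tmpl.length; i++)
                    for (int j = 0; j < tmpl[0].length; j++)
                        score += Math.abs(image[r + i][c + j] - tmpl[i][j]);
                if (score < bestScore) { bestScore = score; bestR = r; bestC = c; }
            }
        }
        return new int[] {bestR, bestC};
    }

    public static void main(String[] args) {
        int[][] image = {
            {0, 0, 0, 0},
            {0, 9, 8, 0},
            {0, 7, 9, 0},
            {0, 0, 0, 0},
        };
        int[][] tmpl = {{9, 8}, {7, 9}};
        int[] at = match(image, tmpl);
        System.out.println(at[0] + "," + at[1]); // prints 1,1
    }
}
```

Note that plain template matching is sensitive to scale, rotation, and lighting, which is why marker systems like ArUco (the solution eventually adopted below) use binary coded patterns instead.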
So I've managed to find a solution to my problem, and it's completely different from what I thought it would be. All of the image processing is offloaded onto a computer, and is performed by a ROS node. This node uses a library called ArUco to detect markers. The markers are generated by a separate program provided with the library, and each has its own unique ID.
When a marker is detected, a ROS message is published containing the marker's ID. The app on the tablet receives the message, and moves the real-world view according to which marker it receives. It works pretty well, though it's a bit unreliable, because I have to use a low image quality to make rendering and transport of the image quicker. And that's my solution! I may post my source code here once the project is completely finished.

How to use Qualcomm Vuforia in combination with location based markers

As far as I understand, Vuforia is a good starting point for developing AR applications on the Android platform.
The docs for Simple Virtual Buttons are quite good, but how would one combine this with location-based data?
For Example:
On the application level, both markers and location-based data should be used, so one would need e.g. Vuforia plus another component for integrating location-based data.
To get a deeper insight of what should be possible, here is an example:
You walk through a landscape where the phone can
1) recognize its position and show location-based points on the screen, and
2) recognize objects in your view and perform actions upon "touching" them (virtual buttons, as I learned).
So my final question is:
Do you know examples of frameworks or demo-apps, where such a task is being accomplished by tying Vuforia together with location based AR Product/Framework XYZ?
Please excuse me if I am not as precise as needed - I searched SO, but as far as I can see there are no such questions already.
For location-based AR like Wikitude, Layar, etc., which draw POIs on the camera view based on location, you have to fetch POIs from different APIs (OpenStreetMap, Twitter, etc.), parse them, and put the data on the camera view. Sensor values and GPS values are used for this.
To start, you can go through the Mixare code, which is open source.
You can combine this code with the Vuforia camera view.
Follow my answer here for details. Good luck.
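The "put the data on the camera view" step described above boils down to projecting the angle between the device heading and the POI bearing onto a horizontal pixel position. A simple linear-projection sketch (names are mine; Mixare-style overlays do something similar, usually with a proper camera projection rather than this linear mapping):

```java
// Sketch: map the angular offset between device heading and POI bearing
// to an x pixel on screen, so the POI label can be drawn over the camera
// preview at roughly the right place.
public class PoiProjector {
    // Returns the x pixel for the POI, or -1 if it is outside the FOV.
    static int screenX(double deviceAzimuthDeg, double poiBearingDeg,
                       double horizontalFovDeg, int screenWidthPx) {
        // Normalise the angular offset into (-180, 180].
        double d = (poiBearingDeg - deviceAzimuthDeg) % 360.0;
        if (d > 180.0) d -= 360.0;
        if (d < -180.0) d += 360.0;
        double half = horizontalFovDeg / 2.0;
        if (Math.abs(d) > half) return -1;
        // -half maps to x = 0, +half maps to x = screenWidthPx.
        return (int) Math.round((d + half) / horizontalFovDeg * screenWidthPx);
    }

    public static void main(String[] args) {
        // 60 degree FOV on a 1080 px wide preview, device pointing north.
        System.out.println(screenX(0.0, 0.0, 60.0, 1080));   // centre: 540
        System.out.println(screenX(0.0, 15.0, 60.0, 1080));  // right of centre: 810
        System.out.println(screenX(0.0, 90.0, 60.0, 1080));  // off-screen: -1
    }
}
```

The y coordinate is handled the same way using the device pitch and the vertical field of view.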

Image recognition for Android augmented reality app

First, sorry about my poor English.
I'm planning to build an augmented-reality app for the Android platform, and the main feature is the ability for the user to take a shot of a shop and have the application recognize the shop being photographed. I don't know if the best option would be one of the many existing image recognition APIs; I think it would need to be something more specific. Maybe my own bank of images would help.
My plan was to have a database of stores with their locations, use one of the many image recognition tools, and search my database for the matching location. But I found that all the image search engines (Kooba, IQEngines, etc.) are not free and not at all cheap. So I would like a tool that could work with a limited catalogue, like the shop images in a single shopping mall, and accept photos from smartphones (both Android and iPhone).
Can someone help me get started?
I did something similar for my dissertation at university: I developed an application which detected signposts, read the content on them, and then personalised/prioritised it depending on the user's preferences (with mixed success).
As part of this I had to look into Image Recognition.
Two things you may want to look at are:
The Qualcomm QCAR SDK. This was a little too image-specific for what I was after, but if you restrict it to a small range of shops it may work. It would require a collection of shop images to match against - I don't know how successful it would be.
What I implemented used JavaCV (a wrapper for OpenCV), which also has an Android port. It seems to allow for image recognition a bit more generally than the previous option, which is why I used it. It would require you to run your own training to create a classifier, though (unless there is another way of doing image recognition within it). But there are a number of guides that can help with that.
I used it for recognising signposts with reasonable success from just some basic training, though it did tend to produce a number of false positives.
Within my application I then used location to match up with previous detections etc.
Hopefully these may get you started.
