I've been reading for the last two days and I'm still not sure about this. I'd like to know if there is any way to read, in my program (an Android app), where the user is pointing the phone (for example: the user is looking northeast, at such-and-such degrees).
Is there an API for this, or do I have to program it myself with the phone's compass and accelerometer?
Edit:
What I want is for that blue dot (the user's position) to know where its triangular area (the field of view) is pointing.
Thanks
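On the API question: Android does expose this through the sensor framework. A common approach is to listen to the TYPE_ROTATION_VECTOR sensor, convert its reading to a rotation matrix with SensorManager.getRotationMatrixFromVector(), and then call SensorManager.getOrientation() to get the azimuth. The math behind that last step is small enough to show directly; here is a plain-Java sketch of it (the sample matrices below are illustrative, not real sensor output):

```java
// Sketch: extracting a compass azimuth (0 = north, 90 = east) from a
// row-major 3x3 rotation matrix -- the same calculation Android's
// SensorManager.getOrientation() applies to the matrix produced from
// the TYPE_ROTATION_VECTOR sensor.
public class Azimuth {
    // r maps device coordinates to world coordinates (x east, y north, z up).
    public static double azimuthDegrees(float[] r) {
        double az = Math.atan2(r[1], r[4]); // radians, -pi..pi
        double deg = Math.toDegrees(az);
        return (deg + 360.0) % 360.0;       // normalize to 0..360
    }

    public static void main(String[] args) {
        // Identity matrix: device lying flat, top edge pointing north.
        float[] north = {1, 0, 0,  0, 1, 0,  0, 0, 1};
        // Device rotated so its top edge points east.
        float[] east  = {0, 1, 0, -1, 0, 0,  0, 0, 1};
        System.out.println(azimuthDegrees(north)); // 0.0
        System.out.println(azimuthDegrees(east));
    }
}
```

In a real app the matrix comes from the sensor callback, and you would low-pass filter the result, since raw compass readings jitter noticeably.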
Related
I have a question about using ARcore.
How could one develop an app that recognizes the environment and places certain virtual elements at specific spots in the scene? For example, when viewing a hallway with a few doors, a staircase, and an exit, the app would place a virtual board (sign) over the staircase with the word 'staircase' written on it; on each door, a board with the name of the room; and on the exit, a board saying 'exit'. Is this possible? It is not a geolocation app, because GPS would not be used; I wanted to do this purely through recognition of the environment.
Even with Vuforia I found it difficult, and so far I have not managed it.
Can someone help me? Is there a manual or tutorial about this? Preferably not a video.
I thank everyone.
You will not go to space today
You want to do something that virtually no software can do yet. The only system I've seen that can do anything even remotely close to what you want is the Microsoft HoloLens, and even it can't do what you're asking. The HoloLens can only identify the physical shape of the environment, providing details such as "floor-like, 3.7 square meters" and "wall-like, 10.2 square meters," and it does so in 8 cm cube increments (any given cube is updated once every few minutes).
On top of that, you want to identify "doors." Doors come in all shapes and sizes and in different colors. Recognizing each door in a hallway and then somehow identifying each one with a room number from a camera image? Yeah, no. We don't have the technology to do that yet, not outside Google Labs and Boston Dynamics.
You will not find what you are looking for.
I am very new to Android and have almost no experience. Recently my client had an idea: he wants a custom map (in .png/.jpg/.jpeg format) on which, using GPS only, his current location is shown with a marker, along with the location he is supposed to reach. Those two markers have to be connected with a "path" that acts as a sort of navigation from one marker to the other. One of the requirements is that it must be done without any use of Google Maps. My question is: is this even possible?
The only idea I have is to get coordinates from GPS, map image pixels to coordinates proportionally, and put a marker where the user is supposed to be. Is there a better option than this?
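For what it's worth, that proportional approach does work for small, north-up map images. A minimal sketch of the mapping, with made-up geographic bounds (a real app needs the image's actual corner coordinates, and a proper map projection once the covered area gets large):

```java
// Sketch: linear mapping from GPS coordinates to a pixel position on a
// north-up map image. Assumes the area is small enough that linear
// interpolation between the image's corner coordinates is acceptable.
public class MapProjection {
    // Geographic bounds of the image (hypothetical values in a real app).
    final double north, south, east, west;
    final int widthPx, heightPx;

    MapProjection(double north, double south, double east, double west,
                  int widthPx, int heightPx) {
        this.north = north; this.south = south;
        this.east = east;   this.west = west;
        this.widthPx = widthPx; this.heightPx = heightPx;
    }

    // Returns {x, y} pixel position for a lat/lng inside the bounds.
    int[] toPixels(double lat, double lng) {
        int x = (int) Math.round((lng - west) / (east - west) * widthPx);
        // Screen y grows downward, latitude grows upward, hence (north - lat).
        int y = (int) Math.round((north - lat) / (north - south) * heightPx);
        return new int[]{x, y};
    }

    public static void main(String[] args) {
        MapProjection m = new MapProjection(45.0, 44.0, 17.0, 16.0, 1000, 800);
        int[] p = m.toPixels(44.5, 16.5); // geographic center of the image
        System.out.println(p[0] + "," + p[1]);
    }
}
```

Drawing the "path" between the two markers is then just a line (or a polyline of route points) drawn between the two pixel positions on a canvas over the image.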
I think you can try the method at the link below, which can turn your images into interactive map layers that can be displayed on websites, used on mobile phones, tablets, and GPS devices, in map mashups, or opened in desktop GIS software, Google Maps, or Google Earth.
http://www.maptiler.com/
I was wondering how the Yelp Monocle works. It's a cool feature, especially in terms of augmented reality. I know they access GPS and compass data, and they have data about nearby places like hotels, bars, etc. But how do they calculate the orientation of those places relative to the device in real time as I rotate it? Say my device is pointing east and there's a pizza place to the north. I rotate my device to the north. How does it know that I'm now facing the pizza place? What is the crucial piece of information used to calculate this?
I am thinking of developing a similar kind of app for Android. Please let me know how I can approach this.
Well, when you know where you are, which way you are facing, and where your target is, you can calculate the rest. It's basic trigonometry.
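That trigonometry looks roughly like this: compute the initial great-circle bearing from your position to the point of interest, then subtract the device's compass azimuth to find where the POI sits relative to the direction you're facing. This is a sketch using the standard initial-bearing formula; the coordinates in the example are made up:

```java
// Sketch: bearing to a point of interest, and its angle relative to the
// direction the device is currently facing.
public class Bearing {
    // Initial great-circle bearing from (lat1,lng1) to (lat2,lng2), degrees 0..360.
    public static double bearing(double lat1, double lng1, double lat2, double lng2) {
        double f1 = Math.toRadians(lat1), f2 = Math.toRadians(lat2);
        double dl = Math.toRadians(lng2 - lng1);
        double y = Math.sin(dl) * Math.cos(f2);
        double x = Math.cos(f1) * Math.sin(f2)
                 - Math.sin(f1) * Math.cos(f2) * Math.cos(dl);
        return (Math.toDegrees(Math.atan2(y, x)) + 360.0) % 360.0;
    }

    // Where the POI sits relative to the device's facing direction:
    // 0 = straight ahead, positive = to the right, negative = to the left.
    public static double relativeAngle(double deviceAzimuth, double targetBearing) {
        double d = (targetBearing - deviceAzimuth) % 360.0;
        if (d > 180.0) d -= 360.0;
        if (d < -180.0) d += 360.0;
        return d;
    }

    public static void main(String[] args) {
        // A target due east of the origin, while facing north:
        System.out.println(relativeAngle(0.0, bearing(0.0, 0.0, 0.0, 1.0)));
    }
}
```

An AR overlay like Monocle's then only draws the POIs whose relative angle falls inside the camera's horizontal field of view, positioned proportionally across the screen.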
I am new for Augmented Reality though I know Android development.
I am trying to create an app whose main aim is to overlay the camera preview with an image when the device camera is pointing at a particular building or place. The camera preview will be overlaid with the image if and only if the camera is pointing at the correct building from the correct direction. The overlay image and its related data will be loaded from a back-end. I have gone through mixar, but it does not give the correct solution.
I also don't understand the elevation/altitude concept: where will I get this from? Which open-source SDK is better for this app? How do I crack this application?
For altitude, you should forget it. The altitude returned by GPS is terrible: just crossing from one side of the street to the other, the reported altitude can differ by 50 meters. Also, the direction of the back camera derived from the sensors will not be accurate enough to distinguish buildings close together. If you restrict your app to known buildings or places, you can adjust using some image recognition, but it is still very hard.
Try downloading the Wikitude sample app from the Wikitude website. It contains a sample project, SimpleArBrowser, which is what you are searching for. Hope this helps.
I don't know of any open-source SDK, but I used the Wikitude SDK, which is quite useful for your implementation. The main advantage of the Wikitude SDK is that you can use your own server for back-end data, which is not possible with the other alternative, Layar. And #Hoan is right that altitude information from GPS is really inaccurate, but you can give it a go, as the inaccuracy is only visible when you are really near the point of interest. It works okay for distances greater than about 250 meters (not confirmed). The only problem you might get is from compass deflections, which are really large when you are near a strong EM field or near metals. But that's a chance you'd have to take, and nothing can be done about it.
If you want to place AR objects based on the latitude and longitude of a location, with a given altitude, that's now possible using Google's ARCore Geospatial API.
Docs:
https://developers.google.com/ar/develop/geospatial
I want to create some sort of graphical arrow, or possibly draw an arrow over a compass, to show the user which direction the wind is coming from. This would obviously change given the orientation of the person's handset.
My application can tell me which direction (in degrees) the wind is coming from.
My question is, what is the best way to implement something like this?
Thanks
In Eclipse, create a new Android project and select "Create project from existing sample". Choose the target Android version and then ApiDemos. There you will find a Compass application and many other examples that can help you draw your screen.
I guess the best would be if your wind arrow were in 3D or simulated 3D, so that it does not matter how the user is holding the device; they would always look at the wind arrow from an elevated virtual vantage point.
In the same ApiDemos there is also a "Sensors" demo, which draws the physical orientation of the device.
Draw a compass, draw the wind arrow accordingly.
If the device knows its orientation, rotate the whole thing so that the N on the compass points toward actual north.
Then ask users whether they are happy with this setup; if not, find out why, improve, and so on. But start with something dead simple, like the above.
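The "draw the wind arrow accordingly" step reduces to one subtraction: the arrow's on-screen rotation is the wind's from-direction minus the device's compass heading, normalized to 0..360. On Android you would feed the result to something like Canvas.rotate() before drawing the arrow bitmap; the helper below is an illustrative sketch of just the angle math:

```java
// Sketch: on-screen rotation for a wind arrow on a device-oriented compass.
public class WindArrow {
    // Clockwise screen rotation (degrees) for an arrow that points toward
    // where the wind is coming FROM, given the device's compass azimuth.
    public static double arrowRotation(double windFromDegrees,
                                       double deviceAzimuthDegrees) {
        double r = (windFromDegrees - deviceAzimuthDegrees) % 360.0;
        return (r + 360.0) % 360.0; // normalize to 0..360
    }

    public static void main(String[] args) {
        // Wind from the west (270 degrees), device facing north:
        System.out.println(arrowRotation(270.0, 0.0)); // 270.0
    }
}
```

If you want the arrow to show where the wind is blowing *toward* instead, add 180 degrees before normalizing.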