I'm wondering if it's possible to switch to a "navigation view" with the Google Maps API.
By navigation view, I mean the "3D follower perspective" you experience when using Google Maps to navigate to a certain location.
I do not want any navigation functionality; I just want something like this follower perspective.
I've seen this view in a Windows Phone app, which additionally used about every sensor the device offered, such as the compass and gyroscope, to achieve almost a kind of augmented-reality feel: the map turned the same way you did.
Is anything comparable available on Android?
This is certainly possible.
Google I/O 2013 had a nice demo that shows something similar to what you want to achieve.
Start watching at 21:30: http://www.youtube.com/watch?v=_oZiK_NJuG8
They actually show some of the code you would use there.
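In case it helps, here is a minimal sketch of the idea with the Maps Android API v2: keep the camera centered on the user, rotate it to the device heading, and tilt it toward the horizon. The FollowCamera helper, the azimuth parameter (which you would feed from the sensors, e.g. the rotation vector), and the zoom/tilt values are illustrative, not taken from the talk:

    import com.google.android.gms.maps.CameraUpdateFactory;
    import com.google.android.gms.maps.GoogleMap;
    import com.google.android.gms.maps.model.CameraPosition;
    import com.google.android.gms.maps.model.LatLng;

    public final class FollowCamera {
        private FollowCamera() {}

        // Animate the camera into a tilted, rotated "follower" view.
        // Call from a location/sensor callback with the latest fix and
        // the compass azimuth in degrees east of true north.
        public static void follow(GoogleMap map, LatLng here, float azimuth) {
            CameraPosition position = new CameraPosition.Builder()
                    .target(here)      // keep the camera centered on the user
                    .zoom(18f)         // street-level zoom (illustrative)
                    .bearing(azimuth)  // rotate the map to the device heading
                    .tilt(65f)         // tilt toward the horizon for the 3D feel
                    .build();
            map.animateCamera(CameraUpdateFactory.newCameraPosition(position));
        }
    }

Feeding azimuth from the rotation-vector sensor gives you the "map turns as you turn" effect you describe from the Windows Phone app.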
My application's output would be a panorama photo. I looked around for solutions but couldn't find one I liked. Almost all of what I saw were exact copies of a years-old project back on code.google.com, using OpenGL ES 1.0 only.
I'd like good performance if possible, and I also want to avoid reinventing the wheel. Google's Open Spherical Camera API (https://developers.google.com/streetview/open-spherical-camera/) is great, but I need an implementation for a viewer.
I'm wondering if it's possible to reuse StreetViewPanorama's capabilities to implement a 360° viewer while leaving out the Google Maps portion. It looks like getStreetViewPanorama requires a MapView. If not StreetViewPanorama, is there another API I could use?
Or is there some intent I can fire? In that case, I should mention that I'd like to periodically update the displayed 360° image.
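For what it's worth, the Maps Android API does ship Street View as a standalone component: StreetViewPanoramaView works without any MapView. The catch for this use case is that it only renders Google's own imagery, so as far as I know it cannot display your own, periodically updated panoramas. A minimal sketch of the standalone usage (the position is illustrative):

    import android.app.Activity;
    import android.os.Bundle;
    import com.google.android.gms.maps.StreetViewPanoramaOptions;
    import com.google.android.gms.maps.StreetViewPanoramaView;
    import com.google.android.gms.maps.model.LatLng;

    public class PanoramaActivity extends Activity {
        private StreetViewPanoramaView panoramaView;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Standalone Street View -- no MapView/GoogleMap involved.
            StreetViewPanoramaOptions options = new StreetViewPanoramaOptions()
                    .position(new LatLng(40.7579747, -73.9855426)); // illustrative
            panoramaView = new StreetViewPanoramaView(this, options);
            panoramaView.onCreate(savedInstanceState);
            setContentView(panoramaView);
        }

        // Forward the lifecycle, as with MapView.
        @Override protected void onResume()  { super.onResume();  panoramaView.onResume(); }
        @Override protected void onPause()   { panoramaView.onPause();   super.onPause(); }
        @Override protected void onDestroy() { panoramaView.onDestroy(); super.onDestroy(); }
    }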
I spent several hours looking for a simple solution and still haven't found one.
The Mapbox style editor has this simple feature: you can hover and click over the map, and it shows a small popup listing all the terrain classes you enabled in your style at that point.
My question is how to do this in the Android version of Mapbox, given that I have installed my style. I want to click on any place on the map and get the same popup stating, for example, that there is a building, woods, and background here, or, at another place, that this is a major road.
This IS doable, as Mapbox Studio itself shows. I can't believe it uses some API not available to anyone else; no map provider offers this API, yet they are all able to draw the terrain correctly. What is so complex about adding this API?
And NO, I am not interested in the address. I am interested exactly in the terrain, for a simple task: distinguish water from non-water, road from non-road, building from non-building. I don't care where it is by address, so reverse geocoding does not work. Put more simply: I need SIMPLER geolocation than an address.
Your question is kind of confusing, but I'll try to help. If I'm reading it correctly, you are trying to create an Android app that, like Mapbox Studio, lets the user select/distinguish between objects on the map such as buildings, water, forest, etc.
If this is the case, then first you must understand that Mapbox Studio uses OpenStreetMap data to distinguish between objects. These objects are stored in a database with tags. It's tough to explain briefly, so I'll just point you to the OpenStreetMap wiki page on tags, which might help.
To my knowledge, there isn't any Android-specific API that will give you the kind of information you're looking for. However, if I were in your dilemma, I'd take a look at the Overpass API: it's a query tool that lets you send it coordinates, and it will return all the tags (such as building or water) at that location within a JSON object. From there you can parse and use the data in your app. It is very powerful, so I suggest reading up on how to use it and testing your queries on a website called Overpass Turbo, if you decide to use it.
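To make that concrete, here is a minimal sketch of such a query using Overpass QL's is_in statement, which returns the tagged areas (building, water, ...) enclosing a point. The coordinates are illustrative, and on Android you would run this off the main thread (e.g. in an AsyncTask or a background thread):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;
    import java.nio.charset.StandardCharsets;

    public class OverpassExample {
        public static void main(String[] args) throws Exception {
            double lat = 48.8584, lon = 2.2945; // illustrative coordinates
            // "is_in" returns all OSM areas containing the given point,
            // together with their tags (e.g. building=yes, natural=water).
            String query = "[out:json];is_in(" + lat + "," + lon + ");out tags;";

            HttpURLConnection conn = (HttpURLConnection)
                    new URL("https://overpass-api.de/api/interpreter").openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream os = conn.getOutputStream()) {
                os.write(("data=" + URLEncoder.encode(query, "UTF-8"))
                        .getBytes(StandardCharsets.UTF_8));
            }

            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    conn.getInputStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // JSON with the tags of each enclosing area
                }
            }
        }
    }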
Nevertheless, I hope this helps and that I understood your question correctly.
I am developing an augmented-reality app to be used both on Google's Project Tango tablet and on ordinary Android devices. The AR on the normal devices is powered by Vuforia, so its libraries are available in the development of the app.
While the Tango's capabilities offer a unique opportunity to create a marker-free AR system, the pose data has significant drift, which makes it difficult to justify Tango development given the data's instability.
When Vuforia was being researched for eventual inclusion in the app, I came across its Extended Tracking capability. It uses some advanced computer vision to provide tentative information on the device's location without having the AR marker on screen. I tried out the demo, and it actually works great: fairly accurate within reason, with minimal drift (especially when compared to the Tango's pose data!).
I would like to implement this extended tracking feature into the Tango version of the app, but after viewing the documentation it appears that the only way to take advantage of the extended tracking feature is to activate it while viewing an AR marker, and then the capability takes over once the marker disappears from view.
Is there any way to activate this Extended Tracking feature without requiring an AR marker to seed its initial position, and simply use it to stabilize and correct the error in the Tango's pose data? This seems like the most realistic solution to the drift problem that I've come up with yet, and I'd really like to be able to take advantage of this technology.
This is my first answer on Stack Overflow, so I hope it can help!
I too have asked myself the same question about Vuforia, as it can often be more stable with extended tracking than with a marker. When far from a marker and/or viewing it at an angle, for example, tracking can be unstable; if I then cover up the marker, thereby forcing extended tracking, it works better! I've not come across a way to use just extended tracking, but I haven't looked very far.
My suggestion is that you look into using a UDT (user-defined target). In the Vuforia samples you can find how to use UDTs. They are made so that the user can take a photo of whatever they like as a target, but what you could maybe do is take this photo automatically, without user input, and then use this UDT and the extended tracking from the created target, roughly as sketched below.
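To sketch that idea (and only as a sketch: the class and method names below are recalled from the Vuforia UserDefinedTargets sample and are assumptions to verify against your SDK version), the silent capture could look roughly like this:

    import com.vuforia.ImageTargetBuilder;
    import com.vuforia.ObjectTracker;
    import com.vuforia.TrackerManager;

    public final class SilentUdtCapture {
        private SilentUdtCapture() {}

        // Try to build a user-defined target from the current camera frame
        // without user input, so extended tracking can take over afterwards.
        // NOTE: sketch only -- API names follow the UserDefinedTargets sample
        // from memory and must be checked against your Vuforia version.
        public static boolean captureTarget(String targetName) {
            ObjectTracker tracker = (ObjectTracker) TrackerManager.getInstance()
                    .getTracker(ObjectTracker.getClassType());
            if (tracker == null) return false;

            ImageTargetBuilder builder = tracker.getImageTargetBuilder();
            builder.startScan(); // begin evaluating camera frames for features

            // Only build a target if the current frame has enough features.
            int quality = builder.getFrameQuality();
            if (quality == ImageTargetBuilder.FRAME_QUALITY.FRAME_QUALITY_MEDIUM
                    || quality == ImageTargetBuilder.FRAME_QUALITY.FRAME_QUALITY_HIGH) {
                // Second argument is the scene size (width) of the new target.
                return builder.build(targetName, 320.0f);
            }
            return false;
        }
    }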
A further thought you may find useful: personally, I find the tracking of the Tango amazing and much better than Vuforia's extended tracking (to be expected with the extra sensors), but I suppose it all depends on the environment.
Good luck, I hope this suggestion works,
Beau
I want to provide an augmented-reality service in my app using the location of the user. For example, if the user frames a monument with the device's camera, a description of it should be provided.
How can I implement this in an Android app?
What framework do I need to install?
Where can I find a few examples showing the basic functions?
EDIT
Rather than displaying information on the monuments framed by the device, I could simply show in which direction certain points of interest are located. But given a certain direction (e.g. north), how can I determine what lies in that direction within a certain radius?
I think this is a whole field of study...
For example, on Android, to implement location you have to use the LocationManager.
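As a small sketch of that, plus the direction question from your edit: android.location.Location already gives you distanceTo() and bearingTo(), so once you have a fix you can filter your own list of POIs by direction and radius. The PoiFinder class, the 500 m radius, and the 15° tolerance here are illustrative, and you need the ACCESS_FINE_LOCATION permission:

    import android.location.Location;
    import android.location.LocationListener;
    import android.location.LocationManager;
    import android.os.Bundle;
    import java.util.ArrayList;
    import java.util.List;

    public class PoiFinder implements LocationListener {

        // Your own list of points of interest (assumption: you build this yourself).
        private final List<Location> poiLocations = new ArrayList<>();

        public void start(LocationManager manager) {
            // Request a fix every 5 s / 10 m; the values are illustrative.
            manager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 5000, 10, this);
        }

        @Override
        public void onLocationChanged(Location here) {
            // Which POIs lie roughly to the north (bearing ~0°) within 500 m?
            for (Location poi : poiLocations) {
                float distanceMeters = here.distanceTo(poi);
                float bearingDegrees = here.bearingTo(poi); // -180..180, 0 = north
                if (distanceMeters <= 500 && Math.abs(bearingDegrees) <= 15) {
                    // This POI is within the radius and roughly to the north.
                }
            }
        }

        @Override public void onStatusChanged(String provider, int status, Bundle extras) {}
        @Override public void onProviderEnabled(String provider) {}
        @Override public void onProviderDisabled(String provider) {}
    }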
To do the monument part, you could use iBeacons on Android, for example.
Briefly, part of what you're looking for is an "IPS" (indoor positioning system). I dare say, though, that this is not the place to ask for a whole app design.
Good luck.
I have found the solution to the question above by myself: I'm using Metaio for Android! It is a powerful tool that provides a lot of examples of augmented reality!
I would like to create a navigation application with the MapQuest SDK for Android that gives real-time turn-by-turn directions using GPS after a route is created, like when you start navigation in Google Maps at the press of a button. Is it possible to implement this feature using just the MapQuest SDK, or would I require another API?
If it is possible, is there a way to extract the guidance cues ('turn left/right') and use them within the program?
Yes, it is possible. I'm currently developing my Bachelor's thesis (a mobile navigation service) with the MapQuest API for Android, so as soon as I have submitted it I can offer you the source code, too (that will be in about a month). In the meantime, I can of course give you some help.
Here is a nice tutorial from MapQuest on how to implement the routing functionality. This is not real-time turn-by-turn guidance, but it gives you a first impression of where to begin!
You can get all instructions from MapQuest here.
If you prefer another API to display the map, that is not a problem, as the guidance includes all shape points of the route. I chose the MapQuest API because it makes displaying the map a bit easier at first glance. However, I recommend that you draw the route on the map yourself, because the built-in method does not always work properly.
I hope I could help you with that, and if you are willing to wait a month, I will post the link to my GitHub repository with the source code here.
Best,
Marius
EDIT:
So I submitted my work and can now give you access to my source code. You can find my GitHub repository here.
I think the getGuidance() function in the NaviActivity will be a good starting point for your application. It fetches the guidance information from MapQuest and converts it into a JSONObject. The Route class then grabs the required information and sorts it into arrays.
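To give you a feel for what that looks like, here is a minimal standalone sketch along the same lines, using MapQuest's directions web service. YOUR_KEY and the locations are placeholders, and the route/legs/maneuvers/narrative field names are recalled from the v2 directions response, so verify them against the current docs:

    import org.json.JSONArray;
    import org.json.JSONObject;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class GuidanceExample {
        public static void main(String[] args) throws Exception {
            // Placeholder API key and illustrative locations.
            String url = "http://www.mapquestapi.com/directions/v2/route"
                    + "?key=YOUR_KEY&from=Lancaster,PA&to=York,PA";

            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(
                    new URL(url).openStream(), StandardCharsets.UTF_8))) {
                String line;
                while ((line = in.readLine()) != null) body.append(line);
            }

            // Assumed layout: route -> legs[] -> maneuvers[] -> narrative
            // ("Turn left onto ...") -- check against the actual response.
            JSONObject route = new JSONObject(body.toString()).getJSONObject("route");
            JSONArray legs = route.getJSONArray("legs");
            for (int i = 0; i < legs.length(); i++) {
                JSONArray maneuvers = legs.getJSONObject(i).getJSONArray("maneuvers");
                for (int j = 0; j < maneuvers.length(); j++) {
                    System.out.println(maneuvers.getJSONObject(j).getString("narrative"));
                }
            }
        }
    }

On Android you would run the request off the main thread and feed the narrative strings into your turn-by-turn UI or text-to-speech.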
I hope this will help you with your application. For further questions, do not hesitate to ask :)
Best,
Marius