Google Maps VR in Android Application - android

I have an Android application with a map at the moment.
The native Google Maps application has a button that appears in Street View and, when double-tapped, converts Street View to VR mode. More specifically, it splits the Street View into two separate images, one for each eye, to be used with Google Cardboard. It then also tracks device movements and adjusts the FOV accordingly.
I'm building my own app, and would like to use this VR feature of maps in my own application.
One thought I have is to load two street view images immediately adjacent to each other, and then update the FOV by reading the device sensors (sketched below).
I can see this being slow, as my implementation is not likely to be as snappy as Google's.
Is there a way to use Google's own VR mode in a native app?
If not, what would you suggest doing to achieve the same effect?
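For what it's worth, here is a minimal sketch of that fallback idea, assuming the Android Maps SDK's StreetViewPanoramaView (one per eye) and the rotation-vector sensor; the leftPano/rightPano instances and the stereo layout are hypothetical, and you would still need Cardboard lens distortion for a proper result:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import com.google.android.gms.maps.StreetViewPanorama;
import com.google.android.gms.maps.model.StreetViewPanoramaCamera;

// Sketch: drive two StreetViewPanorama cameras (one per eye) from the
// rotation-vector sensor. Assumes both StreetViewPanoramaViews have already
// been initialised and have delivered their panoramas (leftPano, rightPano).
public class StereoStreetViewController implements SensorEventListener {

    private final StreetViewPanorama leftPano;
    private final StreetViewPanorama rightPano;
    private final float[] rotationMatrix = new float[9];
    private final float[] orientation = new float[3];

    public StereoStreetViewController(StreetViewPanorama left, StreetViewPanorama right) {
        this.leftPano = left;
        this.rightPano = right;
    }

    public void start(SensorManager sensorManager) {
        Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_GAME);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        SensorManager.getOrientation(rotationMatrix, orientation);
        float bearing = (float) Math.toDegrees(orientation[0]);   // azimuth
        float tilt = (float) -Math.toDegrees(orientation[1]);     // pitch

        StreetViewPanoramaCamera camera = new StreetViewPanoramaCamera.Builder()
                .bearing(bearing)
                .tilt(Math.max(-90f, Math.min(90f, tilt)))
                .zoom(1.0f)
                .build();
        // Point both "eyes" at the same orientation; duration 0 = follow directly.
        leftPano.animateTo(camera, 0);
        rightPano.animateTo(camera, 0);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}

This only mimics the stereo layout; it does not reproduce Google's lens distortion or low-latency rendering, which is why the built-in mode will likely feel smoother.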

Related

Area learning after Google Tango

Area learning was a key feature of Google Tango which allowed a Tango device to locate itself in a known environment and save/load a map file (ADF).
Since then Google has announced that it's shutting down Tango and putting its effort into ARCore, but I don't see anything related to area learning in ARCore documentation.
What is the future of area learning on Android? Is it possible to achieve that on a non-Tango / ARCore-enabled device?
Currently, Tango's area learning is not supported by ARCore, and ARCore's offerings are not nearly as functional. First, Tango was able to take precise measurements of the surroundings, whereas ARCore uses mathematical models to make approximations. ARCore's modeling is nowhere near competitive with Tango's measurement capabilities; at the moment it appears to model only certain flat surfaces. [1]
Second, the area learning on Tango allowed the program to access previously captured ADF files, but ARCore does not currently support this -- meaning that the user has to hardcode the initial starting position. [2]
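To make that limitation concrete, here is a minimal sketch using the ARCore Java API: without an ADF-style map to relocalise against, a "known" starting position can only be expressed as a hardcoded offset from wherever tracking happened to begin (the translation values below are arbitrary placeholders):

import com.google.ar.core.Anchor;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;

// Sketch: ARCore's world origin is simply where the session started tracking,
// so content for a "known" location has to be anchored at a hardcoded pose.
public final class HardcodedStart {
    public static Anchor placeContentAtAssumedStart(Session session) {
        // Hypothetical offset: one metre in front of wherever the app was launched.
        Pose assumedStart = Pose.makeTranslation(0f, 0f, -1f);
        // This is only correct if the user really did start the app at the
        // assumed physical position and orientation - there is no relocalisation.
        return session.createAnchor(assumedStart);
    }
}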
Google is working on a Visual Positioning Service that would live in the cloud and allow a client to compare local point maps with ground-truth point maps to determine indoor position [3]. I suspect that this functionality will only work reliably if the original point map is generated using a rig with a depth sensor (i.e. not in your own house with your smartphone), although mobile visual SLAM has had some success. This also seems like a perfect task for deep learning, so there might be robust solutions on the horizon. [4]
[1] ARCore official docs: https://developers.google.com/ar/discover/concepts#environmental_understanding
[2] ARCore, ARKit: Augmented Reality for everyone, everywhere! https://www.cologne-intelligence.de/blog/arcore-arkit-augmented-reality-for-everyone-everywhere/
[3] Google "Visual Positioning Service" AR Tracking in Action: https://www.youtube.com/watch?v=L6-KF0HPbS8
[4] Announcing the Matterport3D Research Dataset: https://matterport.com/blog/2017/09/20/announcing-matterport3d-research-dataset/
The Google Developers channel on YouTube now has Google ARCore videos.
These videos teach users how to create shared AR experiences across Android and iOS devices and how to build apps using the new APIs revealed in the Google Keynote: Cloud Anchors, Augmented Images, Augmented Faces, and Sceneform. You'll come out understanding how to implement them, how they work in each environment, and what opportunities they unlock for your users.
Hope this helps.

Android markers without using Google Maps

I am very new to Android, and I basically have almost no experience. Recently my client came up with an idea: he wants a custom map (a .png/.jpg/.jpeg image) on which, using GPS only, his current location is displayed with a marker, along with the location he is supposed to reach. Those two markers have to be connected with a "path" that acts as a sort of navigation from one marker to the other. One of the requirements is that it must be done without any use of Google Maps. My question here is: is that even possible?
The only idea I have is to get coordinates from GPS, map pixels on the image to coordinates proportionally, and put a marker where the user is supposed to be. Is there a better option than this?
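That proportional mapping is essentially a linear interpolation between the geographic coordinates of the image corners and its pixel size; here is a minimal sketch, assuming the image is north-up and covers a small enough area that a linear fit is acceptable (class and field names are made up):

// Sketch: map a GPS fix onto a custom map image, given the lat/lng of the
// image's top-left and bottom-right corners and its size in pixels.
public final class ImageProjection {

    private final double topLat, leftLng, bottomLat, rightLng;
    private final int widthPx, heightPx;

    public ImageProjection(double topLat, double leftLng,
                           double bottomLat, double rightLng,
                           int widthPx, int heightPx) {
        this.topLat = topLat;
        this.leftLng = leftLng;
        this.bottomLat = bottomLat;
        this.rightLng = rightLng;
        this.widthPx = widthPx;
        this.heightPx = heightPx;
    }

    // Returns {x, y} in image pixels for the given GPS coordinate.
    public float[] toPixel(double lat, double lng) {
        float x = (float) ((lng - leftLng) / (rightLng - leftLng) * widthPx);
        float y = (float) ((topLat - lat) / (topLat - bottomLat) * heightPx);
        return new float[] { x, y };
    }
}

You would draw the marker bitmaps (and a line between them) at those pixel positions on a custom View or Canvas; for larger areas or rotated images you need a proper map projection, which is what the MapTiler approach in the answer below automates.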
I think you can try out the method in the link below, which can turn your images into interactive map layers that can be displayed on websites, used on mobile phones, tablets, and GPS devices, included in map mashups, or opened in desktop GIS software, Google Maps, or Google Earth.
http://www.maptiler.com/

Making a scene from real photographs for Google Cardboard

I know about Google Cardboard, and I want to make, say, a campus tour of a company that is controlled by head movements in Google Cardboard, using Unity. How do I make the campus buildings from the real photographs I took with my camera? I am new to Unity but quite familiar with Android coding; could you link to a Unity tutorial?
Second, is my approach toward this idea good with Unity, or should it be done with Android?
I want to make something like this YouTube link. Please suggest.
In theory you could position a great many images in 3D space and have a scene with a very modern-art-like look - but it's much easier to make a spherical image and drag it onto a sphere.
You can use a digital camera and Hugin to stitch photos manually for better quality, or just take any Android phone with a gyroscope and a semi-good camera and do a photo sphere.
After getting a spherical image, just drag it onto a sphere with reversed normals and put the VR camera in its center - voilà, you've got a VR app (a bare-bones OpenGL ES equivalent is sketched after this answer).
A good idea would be to allow the user some interaction, such as moving between scenes. Usually there is a point where you look and either wait a bit or press the Cardboard button.
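If you ever skip Unity and render the photo sphere yourself with Android's OpenGL ES, the "reversed normals" trick amounts to drawing the inside of a textured sphere; here is a rough sketch of just the relevant state (the sphere geometry, texture loading, and drawTexturedSphere helper are hypothetical and omitted):

import android.opengl.GLES20;
import android.opengl.Matrix;

// Sketch: instead of reversing the sphere's normals in a modelling tool, keep
// the sphere as-is and cull its front faces so only the inside is rendered.
public final class PhotosphereRenderer {
    public void drawInsideSphere(float[] viewMatrix, float[] projectionMatrix) {
        GLES20.glEnable(GLES20.GL_CULL_FACE);
        GLES20.glCullFace(GLES20.GL_FRONT);   // hide the outside, show the inside

        // The camera sits at the sphere's centre, so the view matrix only
        // rotates with head tracking and never translates.
        float[] viewProjection = new float[16];
        Matrix.multiplyMM(viewProjection, 0, projectionMatrix, 0, viewMatrix, 0);

        // drawTexturedSphere(viewProjection);  // hypothetical helper: draws the
        // equirectangular photo mapped onto a unit sphere around the origin
    }
}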

StreetView gyroscope functionality

In the official Google Maps application, when a user rotates the device, Street View is rotated. I did some research and didn't find how to enable gyroscope functionality in the Google Maps v2 API for Street View. I can implement this by myself, but maybe there IS such functionality and I just haven't found it?
There aren't any built-in features in StreetView to enable this functionality. If you would like to do it yourself, you can check the following links:
Set the camera orientation point of view
Animate the camera movements
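Here is a minimal sketch of that do-it-yourself route, assuming you already obtain a StreetViewPanorama (e.g. via getStreetViewPanoramaAsync) and compute azimuth/pitch in degrees from your own sensor listener:

import com.google.android.gms.maps.StreetViewPanorama;
import com.google.android.gms.maps.model.StreetViewPanoramaCamera;

public final class PanoramaOrientation {
    // Sketch: feed device orientation into the two linked calls - build a
    // camera orientation, then animate the panorama to it.
    public static void apply(StreetViewPanorama panorama,
                             float azimuthDegrees, float pitchDegrees) {
        StreetViewPanoramaCamera camera = new StreetViewPanoramaCamera.Builder()
                .bearing(azimuthDegrees)                          // compass heading
                .tilt(Math.max(-90f, Math.min(90f, pitchDegrees)))
                .zoom(panorama.getPanoramaCamera().zoom)          // keep current zoom
                .build();
        panorama.animateTo(camera, 0);                            // 0 ms = track directly
    }
}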
It looks like Google recently updated their API to include accelerometer-based tracking of the Street View point of view. Try this example on your phone.
From the documentation on Motion tracking on mobile devices:
On devices that support device orientation events, the API offers users the ability to change the Street View point of view based on the movement of the device. Users can look around by moving their devices. This is called motion tracking or device rotation tracking.
More information (like how to disable it) is in that link.

Augmented reality Vuforia- What is a marker

I'm new to augmented reality, but what is meant by the term "marker"? I have done a web search, and it says the marker is a place where content will be shown on the mobile device, but I'm still not clear. Here is what I found out so far:
Augmented reality is hidden content, most commonly hidden behind marker images, that can be included in printed and film media, as long as the marker is displayed for a suitable length of time, in a steady position for an application to identify and analyze it. Depending on the content, the marker may have to remain visible.
There are a couple of types of marker in Vuforia: ones you define yourself after uploading them to their CMS online, ones that you can create at run time, and preset markers that just have information around the edge. They are where your content will appear. You can see a video here where my business card is the marker and the 3D content is rendered on top: http://youtu.be/MvlHXKOonjI
When the app sees the marker, it will work out the pose (position and rotation) of the marker and apply that to any 3D content you want to load; that way, as you move around the marker, the content stays in the same relative position to the marker (sketched below).
And one final heads-up: this is much easier in Unity 3D than using the iOS or Android native versions. I've done quite a lot of it, and it saves a lot of time.
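To make the pose idea concrete: assuming you can get the marker's pose out of the tracker as a 4x4 column-major matrix (the usual OpenGL convention), applying it to your content is just a matrix multiplication before drawing; here is a rough sketch with Android's Matrix helper (names are illustrative, not Vuforia API):

import android.opengl.Matrix;

// Sketch: final transform = projection * markerPose * model, so the 3D model
// "sticks" to the marker as the camera moves around it.
public final class MarkerTransforms {
    public static float[] composeMvp(float[] projection, float[] markerPose, float[] model) {
        float[] modelView = new float[16];
        float[] mvp = new float[16];
        // Place the model relative to the marker...
        Matrix.multiplyMM(modelView, 0, markerPose, 0, model, 0);
        // ...then project it into clip space for rendering.
        Matrix.multiplyMM(mvp, 0, projection, 0, modelView, 0);
        return mvp;
    }
}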
A marker is nothing but a target for your augmented reality app. Whenever you see your marker through the AR camera, the models of your augmented reality app will be shown!
It will be easy to understand if you develop your first app in augmented reality! :)
TLDR: Augmented Reality markers ~= Google Goggles + Activator
AR markers are real-world objects or locations that, when identified in sight, trigger associated actions in your AR system, usually something like displaying annotations (à la Rap Genius for the objects around you).
Example:
Imagine you are gazing through your AR glasses. (You will see both what is displayed on the lenses, if anything, and the world around you.)
As you drive past a series of road cones closing the rightmost lane ahead, the AR software analyzes the scene and identifies several road cones in formation. This pattern is programmed to launch traffic-notification software, which, in conjunction with your AR device's built-in GPS, obtains the latest information about what is going on here and what to expect.
In this way, the road cone formation is a marker: something particular and pre-defined triggers some action, namely obtaining and providing special information regarding your surroundings.
