ARCore Stereoscopic Rendering in Android

I would like to implement an Android application (minimum API 27) for a term project that lets users experience ARCore and mixed-reality features in headsets such as Google Cardboard, with a stereoscopic view. In my preliminary research, I couldn't find valid resources or approaches for achieving stereoscopic vision on ARCore, apart from some approaches in Unity3D and OpenGL and some frameworks such as Vuforia. As far as I know, ARCore 1.5 does not currently support this feature.
I have considered using the Cardboard SDK and the ARCore SDK together, but I am not sure whether that would be a naive approach or whether it would provide a solid foundation for the project and future work.
Is there a workaround to get the desired stereoscopic view for ARCore in native Android, and how could I implement a stereoscopic view for the given case? (I am not asking for an actual implementation, just brainstorming.)
Thanks in advance.

There is Google Sceneform, a scenegraph-based visualization engine for AR applications, and there is a feature request asking for the same thing you have planned. To answer your question: there is a Java visualization library used throughout Google's own ARCore examples, and it has support for stereoscopic rendering. What is missing from their implementation is accounting for lens distortion.
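To make the approach concrete, here is a minimal sketch of the usual two-pass technique: render the same scene twice per frame into side-by-side viewports, with the view matrix offset by half the interpupillary distance per eye. The `drawScene` helper and the IPD value are assumptions for illustration only, and lens-distortion correction (which the Cardboard SDK would normally handle) is deliberately omitted:

```java
import android.opengl.GLES20;
import android.opengl.Matrix;

public class StereoRenderer {
    // Half of a ~64 mm interpupillary distance, in meters (assumed value).
    private static final float EYE_OFFSET = 0.032f;

    /** Draws the scene twice, side by side, from the ARCore camera pose. */
    public void onDrawFrame(float[] cameraViewMatrix, float[] projectionMatrix,
                            int surfaceWidth, int surfaceHeight) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

        // Left eye: camera shifted left, drawn into the left half of the surface.
        drawEye(cameraViewMatrix, projectionMatrix, +EYE_OFFSET,
                0, 0, surfaceWidth / 2, surfaceHeight);

        // Right eye: mirror-image offset, right half of the surface.
        drawEye(cameraViewMatrix, projectionMatrix, -EYE_OFFSET,
                surfaceWidth / 2, 0, surfaceWidth / 2, surfaceHeight);
    }

    private void drawEye(float[] view, float[] proj, float offsetX,
                         int vpX, int vpY, int vpW, int vpH) {
        float[] eyeView = new float[16];
        float[] eyeOffset = new float[16];
        // Translate in eye space: post-multiplying shifts the camera sideways.
        Matrix.setIdentityM(eyeOffset, 0);
        Matrix.translateM(eyeOffset, 0, offsetX, 0f, 0f);
        Matrix.multiplyMM(eyeView, 0, eyeOffset, 0, view, 0);

        GLES20.glViewport(vpX, vpY, vpW, vpH);
        drawScene(eyeView, proj); // hypothetical: your existing ARCore render pass
    }

    private void drawScene(float[] view, float[] proj) {
        // Render the camera background and virtual objects here.
    }
}
```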

Related

Which JS version supports plane detection for Augmented Reality?

I am making an AR app for Android which uses a WebView to host a URL; it should detect a plane and put a 3D object on it.
Can anyone tell me which JS API supports plane detection for both horizontal and vertical planes?
It's not a version issue; it's an issue of the WebXR Device API. At the moment it officially supports VR features only. If you need AR features, you must use the WebXR Hit Test API as well.
Excerpt from Augmented reality for the web:
In Chrome 67, we announced the WebXR Device API for both augmented reality (AR) and virtual reality (VR), though only the VR features were enabled. VR is an experience based purely on what's in a computing device. AR on the other hand allows you to render virtual objects in the real world. To allow placement and tracking of those objects, we just added the WebXR Hit Test API to Chrome Canary, a new method that helps immersive web code place objects in the real world.
The immersive web samples use a rendering framework created just for the demos, called Cottontail, and Three.js has supported WebXR since May 2018.
Hope this helps.

Area learning after Google Tango

Area learning was a key feature of Google Tango that allowed a Tango device to locate itself in a known environment and save/load a map file (an ADF, or Area Description File).
Since then, Google has announced that it is shutting down Tango and putting its effort into ARCore, but I don't see anything related to area learning in the ARCore documentation.
What is the future of area learning on Android? Is it possible to achieve it on a non-Tango, ARCore-enabled device?
Currently, Tango's area learning is not supported by ARCore, and ARCore's offerings are not nearly as functional. First, Tango was able to take precise measurements of its surroundings, whereas ARCore uses mathematical models to make approximations. At the moment, ARCore's modeling is nowhere near competitive with Tango's measurement capabilities; it appears to model only certain flat surfaces. [1]
Second, area learning on Tango allowed a program to access previously captured ADF files, but ARCore does not currently support this, meaning the user has to hardcode the initial starting position. [2]
Google is working on a Visual Positioning Service that would live in the cloud and allow a client to compare local point maps with ground-truth point maps to determine indoor position [3]. I suspect that this functionality will only work reliably if the original point map is generated using a rig with a depth sensor (i.e. not in your own house with your smartphone), although mobile visual SLAM has had some success. This also seems like a perfect task for deep learning, so there might be robust solutions on the horizon. [4]
[1] ARCore official docs: https://developers.google.com/ar/discover/concepts#environmental_understanding
[2] ARCore, ARKit: Augmented Reality for everyone, everywhere! https://www.cologne-intelligence.de/blog/arcore-arkit-augmented-reality-for-everyone-everywhere/
[3] Google 'Visual Positioning Service' AR Tracking in Action: https://www.youtube.com/watch?v=L6-KF0HPbS8
[4] Announcing the Matterport3D Research Dataset: https://matterport.com/blog/2017/09/20/announcing-matterport3d-research-dataset/
There are now Google ARCore videos on the Google Developers channel on YouTube.
These videos teach users how to create shared AR experiences across Android and iOS devices and how to build apps using the new APIs revealed in the Google Keynote: Cloud Anchors, Augmented Images, Augmented Faces and Sceneform. You'll come away understanding how to implement them, how they work in each environment, and what opportunities they unlock for your users.
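Of these, Cloud Anchors are currently the closest ARCore equivalent to sharing a known position across devices or sessions. A minimal sketch of hosting and resolving an anchor with the ARCore Java API follows; it assumes the Session was configured with cloud anchor mode enabled and an API key set up, and where you store the returned ID is up to you:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Anchor.CloudAnchorState;
import com.google.ar.core.Session;

public class CloudAnchorHelper {
    private final Session session;
    private Anchor hostedAnchor;

    public CloudAnchorHelper(Session session) {
        this.session = session;
    }

    /** Starts hosting a local anchor; ARCore uploads surrounding feature data. */
    public void host(Anchor localAnchor) {
        hostedAnchor = session.hostCloudAnchor(localAnchor);
    }

    /** Call once per frame; returns the cloud anchor ID once hosting succeeds. */
    public String pollHostedId() {
        if (hostedAnchor != null
                && hostedAnchor.getCloudAnchorState() == CloudAnchorState.SUCCESS) {
            return hostedAnchor.getCloudAnchorId(); // persist this ID somewhere
        }
        return null; // still TASK_IN_PROGRESS, or one of the ERROR_* states
    }

    /** On another device (or in a later session), re-localize against that ID. */
    public Anchor resolve(String cloudAnchorId) {
        return session.resolveCloudAnchor(cloudAnchorId);
    }
}
```

Note that this is relocalization against a cloud-hosted feature map around one anchor, not full ADF-style area learning; the anchor must be resolved in the same physical space where it was hosted.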
Hope this helps.

Android Application: Augmented Reality or Image Recognition

I am interested in developing an Android application that uses the device's camera to detect moving "targets".
The three types of targets I need to detect and distinguish between are pedestrians, runners (joggers) and cyclists.
The augmented reality SDKs I have looked at only seem to offer face recognition, which doesn't sound like it can detect entire people.
Have I misunderstood what augmented reality SDKs can provide?
There is a big list of AR SDKs (including ones for the Android platform):
Augmented reality SDKs
However, to be honest, I strongly doubt that you will find any SDK (free or paid) for your task. It is too specific, so you should probably write it yourself using OpenCV.
OpenCV will let you detect objects (more or less), and then you will need to write some algorithm for classification. I would recommend classifying based on object speed, as sketched below.
Then, once you have your objects classified, you can add any AR SDK to overlay something on your picture.
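A rough sketch of that pipeline with OpenCV's Java bindings: background subtraction finds moving blobs, and a naive speed threshold labels them. The blob-size and speed thresholds are made-up illustration values, and converting pixel motion to meters per second would require calibration that is not shown here:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.BackgroundSubtractorMOG2;
import org.opencv.video.Video;

import java.util.ArrayList;
import java.util.List;

public class MovingTargetClassifier {
    private final BackgroundSubtractorMOG2 subtractor =
            Video.createBackgroundSubtractorMOG2();
    private final Mat foregroundMask = new Mat();

    /** Returns bounding boxes of moving blobs in the current camera frame. */
    public List<Rect> detect(Mat frame) {
        subtractor.apply(frame, foregroundMask);
        List<MatOfPoint> contours = new ArrayList<>();
        Imgproc.findContours(foregroundMask, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
        List<Rect> boxes = new ArrayList<>();
        for (MatOfPoint contour : contours) {
            Rect box = Imgproc.boundingRect(contour);
            if (box.area() > 500) { // ignore tiny blobs; threshold is a guess
                boxes.add(box);
            }
        }
        return boxes;
    }

    /** Very naive speed-based labeling; thresholds in m/s are illustrative only. */
    public String classify(double speedMetersPerSecond) {
        if (speedMetersPerSecond < 2.0) return "pedestrian";
        if (speedMetersPerSecond < 5.0) return "runner";
        return "cyclist";
    }
}
```

You would track each box across frames (e.g. by nearest-centroid matching) to estimate its speed before calling classify.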

Video Capture and Implementation for Google Cardboard (like the Paul McCartney app)

Regards, community.
I want to build an app similar to this one, with my own content of course.
1. How do I capture 360-degree video (cameras, format, ingest, audio)?
2. Implementation:
2.1 Which Cardboard SDK works best for my purposes (Android or Unity)?
2.2 Do you know any blogs, websites, tutorials, or samples that could support me?
Thank you
MovieTextures are a great way to do this in Unity, but unfortunately MovieTextures are not implemented on Android (maybe this will change in Unity 5).
For a simple wrap-a-texture-on-a-sphere app, the Cardboard Java SDK should work. But if you would rather use Unity because of other aspects of the app, the next best approach would be to allocate a RenderTexture, grab its GL texture ID, and pass it to a native plugin that you write.
That native code would decode the video stream and fill the texture with each frame. Unity can then handle the rest of the rendering, as detailed in the previous answer.
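On the Android side, the standard way to get decoded video frames into an OpenGL texture is MediaPlayer plus SurfaceTexture. A minimal sketch of that step (the video path is a placeholder, and in the Unity case the texture ID would be the one handed over from the RenderTexture plugin rather than one created here):

```java
import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

import java.io.IOException;

public class VideoToTexture {
    private SurfaceTexture surfaceTexture;
    private MediaPlayer mediaPlayer;

    /** Creates an external OES texture and routes decoded video frames into it. */
    public int start(String videoPath) throws IOException {
        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        int textureId = textures[0];
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);

        surfaceTexture = new SurfaceTexture(textureId);
        Surface surface = new Surface(surfaceTexture);

        mediaPlayer = new MediaPlayer();
        mediaPlayer.setDataSource(videoPath); // placeholder path
        mediaPlayer.setSurface(surface);
        surface.release(); // MediaPlayer keeps its own reference
        mediaPlayer.setLooping(true);
        mediaPlayer.prepare();
        mediaPlayer.start();
        return textureId;
    }

    /** Call on the GL thread each frame, before sampling the texture. */
    public void updateFrame() {
        surfaceTexture.updateTexImage();
    }
}
```

Your sphere's fragment shader would then sample this as a samplerExternalOES texture.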
First of all, you need content, and to record stereo 360 video you'll need a rig of at least 12 cameras. Such rigs can be purchased for GoPro cams. That's going to be expensive.
The recently released Unity 5 is a great option, and I strongly suggest using it. The most basic way of doing 360 stereo video in Unity is to create two spheres with MovieTextures showing your 360 video. Then you turn them "inside out" so that they display their back faces instead of their front faces; this can be done with a simple shader that enables front-face culling and removes the mirror effect. Then you place your cameras inside the spheres. If you are using the Google Cardboard SDK camera rig, put the spheres on different culling layers and make each camera see only the appropriate sphere. Remember to position the spheres correctly relative to the cameras.
There may be other ways to do this that lead to better results, but they won't be as simple. You can also look for paid scripts/plugins/assets that do 360 video in VR.

Augmented reality in Android

I have just started developing an Android application based on augmented reality. I want to know whether any plugins or development kits are required for augmented reality. I have gone through some development kits, listed below:
ARToolKit
FLARToolKit and FLARManager for Adobe Flash
SLARToolkit
AR-media™ Plugin for Google™ SketchUp™
NyARToolkit
LinceoVR
HandyAR
Total Immersion – D’Fusion Studio
Unifeye Mobile
and I want to know which would be the best fit. My application is based on text translation and face recognition. Thank you!
To do face recognition, try OpenCV. It has built-in library functions that can make life easier; a minimal detection sketch follows the links below.
OpenCV - Android
OpenCV - Java
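As a starting point, here is a hedged sketch of face detection (detection only, not full recognition) with OpenCV's Java bindings; the Haar cascade file path is a placeholder you would point at the haarcascade_frontalface_default.xml file shipped with the OpenCV SDK:

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class FaceDetector {
    private final CascadeClassifier cascade;

    public FaceDetector(String cascadePath) {
        // e.g. ".../haarcascade_frontalface_default.xml" (placeholder path)
        cascade = new CascadeClassifier(cascadePath);
    }

    /** Returns bounding boxes of faces found in an RGBA camera frame. */
    public Rect[] detect(Mat frameRgba) {
        Mat gray = new Mat();
        Imgproc.cvtColor(frameRgba, gray, Imgproc.COLOR_RGBA2GRAY);
        Imgproc.equalizeHist(gray, gray); // improve contrast before detection

        MatOfRect faces = new MatOfRect();
        cascade.detectMultiScale(gray, faces);
        return faces.toArray();
    }
}
```

Recognizing who a face belongs to would be a second step, e.g. with the face module in OpenCV's contrib packages.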
Augmented reality (AR) is a term for a live direct or indirect view of a physical, real-world environment whose elements are augmented by virtual computer-generated sensory input, such as sound or graphics. It is related to a more general concept called mediated reality, in which a view of reality is modified (possibly even diminished rather than augmented) by a computer. As a result, the technology functions by enhancing one’s current perception of reality.
http://mobile.tutsplus.com/tutorials/android/android_augmented-reality/
