I know about Google Cardboard and I want to make, say, a campus tour of a company, driven by head movements in Google Cardboard, using Unity. How do I build the campus buildings from the real photos I took with my camera? I am new to Unity but quite familiar with Android development. Could you link a Unity tutorial?
And secondly, is Unity a good fit for this idea, or should it be done natively on Android?
I want to make something like the one in this YouTube link. Please suggest an approach.
In theory you could position a large number of images in 3D space and end up with a scene that looks like very modern art, but it's much easier to make a spherical image and map it onto a sphere.
You can use a digital camera and Hugin to stitch the photos manually for better quality, or just take any Android phone with a gyroscope and a reasonably good camera and capture a photo sphere.
After getting a spherical image, just drag it onto a sphere with reversed normals and put the VR camera at its center - voilà, you've got a VR app.
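Here is a minimal Unity C# sketch of the reversed-normals trick, assuming the equirectangular photo is already assigned to the sphere's material (the script name and setup are mine):

    using UnityEngine;

    // Attach to the sphere that carries the panorama texture. Flips the mesh
    // normals and the triangle winding so the image is visible from inside.
    public class InsideOutSphere : MonoBehaviour
    {
        void Start()
        {
            Mesh mesh = GetComponent<MeshFilter>().mesh;

            // Point every normal inward.
            Vector3[] normals = mesh.normals;
            for (int i = 0; i < normals.Length; i++)
                normals[i] = -normals[i];
            mesh.normals = normals;

            // Reverse winding so the inward faces survive back-face culling.
            int[] triangles = mesh.triangles;
            for (int i = 0; i < triangles.Length; i += 3)
            {
                int tmp = triangles[i];
                triangles[i] = triangles[i + 1];
                triangles[i + 1] = tmp;
            }
            mesh.triangles = triangles;
        }
    }

With the Cardboard camera rig placed at the sphere's center, head tracking does the rest.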
A good idea would be to allow the user some interaction, such as moving between scenes. Usually there is a hotspot: the user looks at it and either waits a moment or presses the Cardboard button.
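A hedged sketch of that look-and-wait pattern in Unity C# (the scene names, timing, and the script itself are my assumptions, not part of the original answer):

    using UnityEngine;
    using UnityEngine.SceneManagement;

    // Attach to a hotspot object with a collider; if the user's gaze stays
    // on it for dwellTime seconds, the next scene is loaded.
    public class GazeHotspot : MonoBehaviour
    {
        public string nextScene;      // hypothetical: set in the Inspector
        public float dwellTime = 2f;  // seconds the user must keep looking

        private float gazedFor;

        void Update()
        {
            Transform cam = Camera.main.transform;
            RaycastHit hit;
            bool looking = Physics.Raycast(cam.position, cam.forward, out hit)
                           && hit.collider.gameObject == gameObject;

            gazedFor = looking ? gazedFor + Time.deltaTime : 0f;
            if (gazedFor >= dwellTime)
                SceneManager.LoadScene(nextScene);
        }
    }

On most Cardboard viewers the button registers as a screen touch, so additionally checking Input.GetMouseButtonDown(0) during the gaze hit covers the press-the-button variant.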
I have searched a lot about developing a 360° camera like Google Street View, but I still haven't found a solution.
I tried this panoramagl-android project, but it is not what I am looking for.
So could anyone please give me an idea or suggest anything for creating a spherical camera application?
360 images and videos are generally created with dedicated cameras, or with rigs of regular cameras, and the results are then 'stitched' together to produce the 360 representation.
The usual way to represent a 360 image or video at this time is an equirectangular projection, similar to the technique used to depict the spherical globe on flat maps of the world.
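As a rough illustration of that projection (a minimal sketch; axis conventions and the wrap point vary between tools), a viewing direction on the unit sphere maps to a normalized position in the flat image like this:

    using UnityEngine;

    // Equirectangular mapping: a direction on the unit sphere becomes a
    // normalized (u, v) coordinate in the flat panorama image.
    public static class Equirect
    {
        public static Vector2 DirectionToUV(Vector3 dir)
        {
            dir.Normalize();
            float u = 0.5f + Mathf.Atan2(dir.x, -dir.z) / (2f * Mathf.PI); // longitude
            float v = 0.5f + Mathf.Asin(dir.y) / Mathf.PI;                 // latitude
            return new Vector2(u, v);
        }
    }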
If you are trying to do this with a regular phone, you face the issue that you only have one camera, so you won't get images from multiple cameras at the same time to stitch together. This may be easier to understand visually: the original answer illustrated it with a photo of a multi-camera capture rig.
You then need software to 'stitch' the different videos together. There are quite a few options, many of them proprietary; VideoStitch is probably the best known at this time: http://www.video-stitch.com/.
Note that this is processing intensive, so it is nearly always done on relatively high-powered servers rather than on mobile devices.
I am trying to show four videos at once using Google Cardboard. These are normal 2D videos shot on a normal 16:9 camera. What I want and need is one video in front of you; you turn your head 90 degrees and see another video, turn again and see another, until you come back around to the front video. Please see my Pablo Picasso Microsoft Paint skills to visualize what I am talking about...
So basically what I need is something like four VR movie-theater screens that a person can look around between. Is there a program I could use, or do I have to do some programming to make this happen? Searching for this is not easy with all the VR articles that pop up. Any help pointing me in the right direction would be greatly appreciated!
I actually found an app that does all of this for me. The app is called 360 Virtual Reality Player (Google Play Store), and it takes any 2D video and turns it into a head-tracked VR video. Once I found the app, all I needed to do was stitch the videos together side by side, with a black bar in between them, using OpenCV, to get the desired effect.
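If you would rather build the four-screen rig yourself in Unity, here is a minimal sketch (Unity 5.6+ for VideoPlayer; the names, sizes, and distances are my assumptions):

    using UnityEngine;
    using UnityEngine.Video;

    // Places four 16:9 quads around the camera, one every 90 degrees, so
    // turning your head brings each screen into view.
    public class FourScreenRig : MonoBehaviour
    {
        public VideoClip[] clips = new VideoClip[4]; // assign in the Inspector
        public float distance = 4f;                  // screen distance from viewer

        void Start()
        {
            for (int i = 0; i < 4; i++)
            {
                GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
                float angle = i * 90f * Mathf.Deg2Rad;
                quad.transform.position =
                    new Vector3(Mathf.Sin(angle), 0f, Mathf.Cos(angle)) * distance;

                // A Quad's visible face is its local -Z side, so pointing
                // local +Z away from the origin turns the face toward a
                // camera placed at the origin.
                quad.transform.rotation = Quaternion.LookRotation(quad.transform.position);
                quad.transform.localScale = new Vector3(16f, 9f, 1f) * 0.4f;

                // A VideoPlayer on a renderer draws onto its material by default.
                VideoPlayer player = quad.AddComponent<VideoPlayer>();
                player.clip = clips[i];
                player.isLooping = true;
            }
        }
    }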
I'm new to augmented reality. What is meant by the term 'marker'? I have done a web search, and it says a marker is a place where content will be shown on the mobile device, but I'm still not clear on it. Here is what I have found so far:
Augmented reality is hidden content, most commonly hidden behind marker images, that can be included in printed and film media, as long as the marker is displayed for a suitable length of time, in a steady position, for an application to identify and analyze it. Depending on the content, the marker may have to remain visible.
There are a couple of types of markers in Vuforia: ones you define yourself after uploading them to the Vuforia CMS online, ones you can create at run time, and preset markers that carry their identifying information around the edge. They are where your content will appear. You can see a video here where my business card is the marker and the 3D content is rendered on top: http://youtu.be/MvlHXKOonjI
When the app sees the marker, it works out the marker's pose (position and rotation) and applies it to any 3D content you want to load; that way, as you move around the marker, the content stays in the same position relative to it.
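As a generic illustration of that idea (not Vuforia's actual API; the callback names here are hypothetical), the pose from the tracker is simply copied onto your content:

    using UnityEngine;

    // Keeps a 3D model glued to a tracked marker using the pose the
    // tracking layer reports each frame.
    public class MarkerContentAnchor : MonoBehaviour
    {
        public Transform content; // the 3D model shown on top of the marker

        // Called by your (hypothetical) tracking layer while the marker is visible.
        public void OnMarkerPose(Vector3 position, Quaternion rotation)
        {
            content.gameObject.SetActive(true);
            content.SetPositionAndRotation(position, rotation);
        }

        // Called when tracking is lost, so the content doesn't float in mid-air.
        public void OnMarkerLost()
        {
            content.gameObject.SetActive(false);
        }
    }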
And one final heads-up: this is much easier in Unity 3D than using the iOS or Android native SDKs. I've done quite a lot of it, and it saves a lot of time.
A marker is simply a target for your augmented reality app. Whenever the AR camera sees your marker, your app's models are shown on top of it!
It will be easier to understand once you have developed your first augmented reality app! :)
TLDR: Augmented Reality markers ~= Google Goggles + Activator
AR markers can be real-world objects or locations that your AR system recognizes in view and that trigger an associated action, such as displaying annotations (à la Rap Genius for the objects around you).
Example:
Imagine you are gazing through your AR glasses. (You will see both whatever is displayed on the lenses, if anything, and the world around you.)
As you drive past a series of road cones closing the rightmost lane ahead, the AR software analyzes the scene and identifies several road cones in formation. This pattern is programmed to launch traffic-notification software which, in conjunction with the headset's built-in GPS, obtains the latest information about what is going on here and what to expect.
In this way, the road-cone formation is a marker: something particular and pre-defined triggers some action, in this case obtaining and presenting information about your surroundings.
I am working on an augmented reality app that requires an image tracker placed at a distance. A typical target would be a billboard or a scoreboard at a basketball game. I have tried Qualcomm's Vuforia SDK, but it seems to work only when the marker is within about 3 feet of the camera. When you move further away, I think the image loses detail and the AR engine can no longer recognize the tracker.
In theory, if the marker is large and bright enough, with clearly defined details and border markings for tracking purposes, shouldn't it work?
Also, is there any way for an AR app to recognize ANY flat surface, such as a table or a hardwood floor with a variety of colors and textures, as long as it is flat? Typical applications would be a virtual keyboard or a chess board.
thanks,
Joe
AR marker tracking is about recognizing known images, not arbitrary shapes. The AR engine's input is the image from the camera, and there is no way to reliably infer an arbitrary surface's shape from that alone, so the answer to your second question is: no.
PS: In my case (iOS), the default marker is detected from about 1.5 m and can be tracked out to about 4 m. I think the camera's resolution is an important factor and can affect tracking efficiency.
Our experience is that a marker of about 20x20 cm is readable by the Vuforia SDK at a distance of about 5 meters. That seems to be the very limit.
I have implemented an augmented reality application for Android using Adobe AIR for Android, FLARManager, and Away3DLite.
The program works fine in Flash. However, when I publish it to my mobile phone (an HTC Nexus One) or run it in the emulator, my camera doesn't activate, and all I can see is the background color and the frame-rate display.
I think the problem is the Camera3D I have used, which is the FLARCamera_Away3DLite from FLARManager.
This is how I set up my camera:
import com.transmote.flar.camera.FLARCamera_Away3DLite;
import flash.geom.Rectangle;

// The Away3DLite scene camera, driven by FLARManager's marker tracking.
private var camera3D:FLARCamera_Away3DLite;

// Size the camera's viewport to the full stage.
this.camera3D = new FLARCamera_Away3DLite(this.flarManager, new Rectangle(0, 0, this.stage.stageWidth, this.stage.stageHeight));
I would really appreciate any advice you can give me.
Thank you, George
I think you are misunderstanding the camera class. The camera class you are using is the camera in your 'virtual' 3D world: it films your 3D scene, and that 'footage' goes to the View class, which projects the 3D world onto 2D. Your screen is a 2D surface and cannot show 3D directly; the camera class, in combination with the view, converts your 3D scene into the 2D image shown on screen.
But since you want to make an AR app, what you mean is the camera of the phone, and you can't use the Away3D camera class for that. This tutorial shows how to access the camera of your Android phone in Flash.
The steps you want to take are: get your phone's camera feed and paint it on the screen; use FLARToolkit to determine the position of your marker; adjust the 3D model to the marker's position; and, last but not least, show the 3D model on the screen (using the Away3D/Papervision camera and view). So basically you have two layers in your Flash app: a background layer, which is the feed from your phone camera, and another layer on top of it, which is your view from Away3D or Papervision.
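For comparison, the same two-layer idea sketched in Unity C# (an assumption of mine, not part of the Flash setup): the live camera feed fills a background quad while the 3D content is drawn over it.

    using UnityEngine;

    // Layer 1: show the device camera's image on this object's material
    // (e.g. a full-screen quad behind the scene). Layer 2 - the 3D model
    // posed from marker tracking - renders on top of it.
    public class CameraFeedBackground : MonoBehaviour
    {
        private WebCamTexture feed;

        void Start()
        {
            feed = new WebCamTexture();
            GetComponent<Renderer>().material.mainTexture = feed;
            feed.Play();
        }
    }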
I think if you combine these tutorials, you can build your application:
Use your phone camera
Augmented Reality with FLARManager
AR basics