Displaying 180 degree stereoscopic video instead of 360? - android

My VR-researcher friend asked me to develop a simple Android (and eventually iOS) app for Google Cardboard that displays her 180 degree video. She stitched it together from her GoPro Hero 3 rig, and it's in what I believe is a quite unusual projection. I've been researching this a lot, but I haven't seen anything like it, although I must admit I'm new to VR video.
Google's documentation for Cardboard specifies the equirectangular-panoramic projection as the only one supported at the moment, and it doesn't seem like anyone uses this bizarre projection besides her. She normally loads the video into Kolor Eyes on her MacBook - where it looks perfectly fine - and displays it on an Oculus, and claims this projection, based on the eye's curvature, improves immersion. I must agree it was deeply immersive when I tried it myself for her research project. (The full-length video is available here, but YouTube compresses it to a degree that ruins immersion completely, which is why the app is even justified to begin with.)
She would like a more portable version for demonstrations, with a modified Cardboard with straps or something added, but when I load the video into the simplevideowidget sample in the Google VR SDK it looks terrible, presumably because of the unexpected projection.
Unity appears to be a no-go, as it does not support high-resolution video without expensive plugins like Easy Movie Texture. It also seems like overkill to use Unity just to display a video.
TL;DR: How do I display this oddly projected 180 degree stereoscopic video in an Android app with the Google VR SDK (or something else entirely)? Why does the video file work fine on the Oculus but not at all with Cardboard? I'm thinking it must have something to do with the field of view, somehow.

Try this example; maybe it will help you. In it, you can change the sphere to 180, 230, or whatever you want.
https://github.com/ashqal/MD360Player4Android
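For reference, here is a minimal sketch (not MD360Player4Android's actual API) of what "changing the sphere to 180 or 230 degrees" means geometrically: the projection sphere is built with a reduced longitude span, so the video wraps only partway around the viewer instead of a full 360 degrees.

    // Sketch: vertex positions for a partial sphere whose horizontal span
    // is limited to e.g. 180 or 230 degrees, centered in front of the
    // viewer. The video frame is then texture-mapped onto this mesh.
    public class PartialSphere {
        public static float[] build(int stacks, int slices,
                                    float radius, float horizontalDegrees) {
            float hSpan = (float) Math.toRadians(horizontalDegrees);
            float[] verts = new float[(stacks + 1) * (slices + 1) * 3];
            int i = 0;
            for (int st = 0; st <= stacks; st++) {
                double phi = Math.PI * st / stacks; // latitude, pole to pole
                for (int sl = 0; sl <= slices; sl++) {
                    // Longitude covers only [-hSpan/2, +hSpan/2] instead of
                    // the full circle used for equirectangular 360 video.
                    double theta = -hSpan / 2 + hSpan * sl / (double) slices;
                    verts[i++] = (float) (radius * Math.sin(phi) * Math.sin(theta)); // x
                    verts[i++] = (float) (radius * Math.cos(phi));                   // y
                    verts[i++] = (float) (radius * Math.sin(phi) * Math.cos(theta)); // z
                }
            }
            return verts;
        }
    }

Texture coordinates would map u across the slices and v across the stacks in the same loop.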

Related

How to show 4 videos at once using Google Cardboard?

I am trying to show four videos at once using Google Cardboard. These are normal 2D videos shot on a normal 16:9 camera. What I want is one video in front of you; then you turn your head 90 degrees and you see another video, turn again and see another, until you hit the front video again. Please see my Pablo Picasso Microsoft Paint skills to visualize what I am talking about...
So basically what I need is four VR movie-theater screens that a person can look around at. Is there a program I could use, or do I have to do some programming to make this happen? Searching for this is not easy with all the VR articles that pop up. Any help pointing me in the right direction would be greatly appreciated!
I actually found an app that did all of this for me. The app is called 360 Virtual Reality Player (Google Play Store), and it takes any 2D video and makes it into a head-tracked VR video. Once I found this app, all I needed to do was stitch the videos together with a black bar in between them using OpenCV to get the desired effect.
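For anyone wanting to reproduce the stitching step, here is a rough sketch using OpenCV's Java bindings. The file names, bar width, and codec are placeholders, and all four clips are assumed to share the same resolution and frame rate:

    import java.util.Arrays;
    import org.opencv.core.Core;
    import org.opencv.core.CvType;
    import org.opencv.core.Mat;
    import org.opencv.core.Size;
    import org.opencv.videoio.VideoCapture;
    import org.opencv.videoio.VideoWriter;
    import org.opencv.videoio.Videoio;

    public class StitchFour {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
            // Hypothetical input clips, assumed identical in size and fps.
            String[] inputs = {"front.mp4", "right.mp4", "back.mp4", "left.mp4"};
            VideoCapture[] caps = new VideoCapture[4];
            for (int i = 0; i < 4; i++) caps[i] = new VideoCapture(inputs[i]);

            int h = (int) caps[0].get(Videoio.CAP_PROP_FRAME_HEIGHT);
            int w = (int) caps[0].get(Videoio.CAP_PROP_FRAME_WIDTH);
            double fps = caps[0].get(Videoio.CAP_PROP_FPS);
            int barWidth = 40; // black separator between "screens"

            Mat bar = Mat.zeros(h, barWidth, CvType.CV_8UC3);
            VideoWriter out = new VideoWriter("strip.avi",
                    VideoWriter.fourcc('M', 'J', 'P', 'G'), fps,
                    new Size(4 * (w + barWidth), h));

            Mat a = new Mat(), b = new Mat(), c = new Mat(), d = new Mat();
            Mat row = new Mat();
            while (caps[0].read(a) && caps[1].read(b)
                    && caps[2].read(c) && caps[3].read(d)) {
                // One bar after each frame so the strip tiles cleanly
                // when wrapped around the viewer.
                Core.hconcat(Arrays.asList(a, bar, b, bar, c, bar, d, bar), row);
                out.write(row);
            }
            out.release();
            for (VideoCapture cap : caps) cap.release();
        }
    }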

Video capture and implementation for Google Cardboard (like the Paul McCartney app)

Regards, community.
I just want to build an app similar to this one, with my own content of course.
1. How do I capture 360 degree video (cameras, format, ingest, audio)?
2. Implementation:
2.1 Which Cardboard SDK works best for my purposes (Android or Unity)?
2.2 Do you know of any blogs, websites, tutorials, or samples I can lean on?
Thank you
MovieTextures are a great way to do this in Unity, but unfortunately they are not implemented on Android (maybe this will change in Unity 5). See the docs here:
For a simple wrap-a-texture-on-a-sphere app, the Cardboard Java SDK should work. But if you would rather use Unity due to other aspects of the app, the next best way to do this would be to allocate a RenderTexture and then grab the GL id and pass it to a native plugin that you would write.
This native code would be decoding the video stream, and for each frame it would fill the texture. Then Unity can handle the rest of the rendering, as detailed by the previous answer.
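On Android, the decode-into-a-texture part of such a plugin can actually be written in Java rather than C++, using a SurfaceTexture bound to an external OES texture. A rough sketch follows (the path is hypothetical and error handling is omitted); note that Unity's RenderTexture id refers to a regular 2D texture, so a copy/blit step between the external texture and the RenderTexture would still be needed:

    import android.graphics.SurfaceTexture;
    import android.media.MediaPlayer;
    import android.opengl.GLES11Ext;
    import android.opengl.GLES20;
    import android.view.Surface;

    public class VideoTextureDecoder {
        private SurfaceTexture surfaceTexture;
        private MediaPlayer player;
        private int textureId;

        // Call on the GL thread once a GL context is current.
        public void start() throws Exception {
            int[] tex = new int[1];
            GLES20.glGenTextures(1, tex, 0);
            textureId = tex[0];
            GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);

            surfaceTexture = new SurfaceTexture(textureId);
            player = new MediaPlayer();
            player.setDataSource("/sdcard/pano_video.mp4"); // hypothetical path
            player.setSurface(new Surface(surfaceTexture));
            player.prepare();
            player.start();
        }

        // Call once per rendered frame, still on the GL thread.
        public void updateFrame() {
            surfaceTexture.updateTexImage(); // latest decoded frame -> texture
        }
    }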
First of all, you need content, and to record stereo 360 video you'll need a rig of at least 12 cameras. Such rigs can be purchased for GoPro cams. That's going to be expensive.
The recently released Unity 5 is a great option and I strongly suggest using it. The most basic way of doing 360 stereo video in Unity is to create two spheres with MovieTextures showing your 360 video. Then you turn them "inside out", so that they display their back faces instead of their front faces. This can be done with a simple shader that enables front-face culling and removes the mirror effect. Then you place your cameras inside the spheres. If you are using the Google Cardboard SDK camera rig, put the spheres on different culling layers and make each camera see only the appropriate sphere. Remember to position the spheres properly relative to the cameras.
There may be other ways to do this, leading to better results, but they won't be as simple. You can also look for some paid scripts/plugins/assets to do 360 video in VR.

How to develop a Smart Magnifier using the mobile camera?

I would like to use the mobile camera to develop a smart magnifier that can zoom and freeze-frame what we are viewing, so we don't have to keep holding the device steady while we read. It should also be able to change colors, as shown in the image linked below.
https://lh3.ggpht.com/XhSCrMXS7RCJH7AYlpn3xL5Z-6R7bqFL4hG5R3Q5xCLNAO0flY3Fka_xRKb68a2etmhL=h900-rw
Since I'm new to Android I have no idea how to start. Do you have any ideas?
Thanks in advance for your help :)
I've done something similar and published it here. I have to warn you though: this is not a task to start Android development with. Not because of the development skills required; the showstopper here is the need for a massive number of devices to test on.
Basically, two reasons:
The Camera API is quite complicated, and different hardware devices behave differently. Forget about using the emulator; you would need a bunch of real devices.
There is a newer API, Camera2, for API level 21 and higher, and the old Camera API is deprecated (in a kind of "in limbo" state).
I have posted some custom Camera code on GitHub here, to show some of the hurdles involved.
So the easiest way out in your situation would be the camera intent approach: when you get your picture back (it is a JPEG file), just decode it and zoom in on the center of the resulting bitmap.
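A minimal sketch of that intent-plus-crop approach, assuming an Activity with hypothetical photoFile and imageView fields; error handling and storage permissions are omitted (and on newer Android versions a FileProvider Uri would be required for EXTRA_OUTPUT):

    // Sketch: launch the system camera, then crop into the center of the
    // returned JPEG to simulate a magnifier zoom.
    static final int REQUEST_CAPTURE = 1;

    void launchCamera() {
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        // photoFile is a hypothetical File in external storage.
        intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(photoFile));
        startActivityForResult(intent, REQUEST_CAPTURE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_CAPTURE && resultCode == RESULT_OK) {
            Bitmap full = BitmapFactory.decodeFile(photoFile.getAbsolutePath());
            // Keep the middle half in each dimension: a 2x "digital zoom"
            // once the view scales the crop back up to full size.
            int w = full.getWidth() / 2, h = full.getHeight() / 2;
            Bitmap zoomed = Bitmap.createBitmap(full, w / 2, h / 2, w, h);
            imageView.setImageBitmap(zoomed); // hypothetical ImageView field
        }
    }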
Good Luck

Camera is being rotated 90 degree in air for android

In my AS3 Flex Mobile application for Android, I am using the camera, and it is automatically rotated 90 degrees before I have even done any video rotation myself. It seems to be a known bug in AIR, but I was wondering if anyone has found a solution, since this is a pretty important feature for mobile application developers.
I've tried doing some rotation manually in my code, but that only fixes the view on my own display; it still sends the wrongly rotated video to the receiver.
If any code is required I will add the snippets
Please let me know.
As you mentioned, this is a known bug with AIR, and it is not consistent either. On some devices the orientation is correct, but on others (and all iOS devices, I believe, though I haven't fully tested that) it is rotated as you are seeing. For example, it was always oriented correctly on my Nexus 4 and Nexus 5, but a friend's Moto X is rotated incorrectly.
Unfortunately, I don't believe there is anything you can do short of having the user perform a calibration (i.e. overlay a straight line, tell them to hold it horizontally, and have them click a button) and then rotating the camera display, and any images you capture, accordingly.
That being said, if you are using the camera to take photos, I highly recommend using CameraUI instead, which is the native implementation.
I faced the same issue today, but I'm developing in Java, not with AIR, so I don't know if it is the same bug. For me the solution was to add this line before starting the recording:
mMediaRecorder.setOrientationHint(90);
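In context, a rough sketch of where that call fits in a typical Java recording setup (camera wiring and exception handling omitted; the hint must be set before prepare(), and the right angle may vary by device):

    MediaRecorder recorder = new MediaRecorder();
    recorder.setVideoSource(MediaRecorder.VideoSource.CAMERA);
    recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
    recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
    recorder.setOutputFile("/sdcard/clip.mp4"); // hypothetical path
    // Writes a rotation flag into the container so players display the
    // track upright; it does not rotate the actual pixels.
    recorder.setOrientationHint(90);
    recorder.prepare(); // throws IOException in real code
    recorder.start();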

Video on canvas with panning - how to solve performance issues (also: HTML5, Flash or something else?)

Long version:
I have a very particular issue. I'm a multimedia artist working at the moment with an animator; we are trying to create an interactive animation that I want to make available online as a website and as a free app on the App Store and the Android Market.
But here's the key problem I am faced with now.
The output video of the actual animation will be massive in resolution - probably something like four or more times HD resolution, but for a reason. The idea is to let the viewer watch only a part of the video at one time - like panning around in Google Maps or any other canvas-like view (e.g. MMORPGs or strategy games). So you only see a part of the whole image at any one time, and you can move around to see what's "behind the corner".
So the animation would be a Google Maps-alike canvas (panning and perhaps zooming if there's enough time to implement it) but with video instead of images.
The big problem that comes up is performance. I was thinking the best approach would be to scale the video down for different devices accordingly. But even considering only desktop computers for now: scaling down to 720p for an HD screen means a total of about four times 720p in resolution, which is probably too much for an average computer to decode (full HD is quite often already problematic) - and the output resolution would be more than the 4K standard (5120x2880, whilst 4K is 4096x2160). Anyhow, that is unacceptable.
But I reached the conclusion that there is really no point in decoding and rendering the parts of the video that are invisible to the user anyway (why waste CPU and GPU time on that), since we know that only about 1/6th of the full canvas would be visible at any given time.
This inspired an idea: maybe I could split the output video into blocks - something between 8 and 64 files stacked together side by side like cells in a table - then keep a timecode timer running in some variable and enable the video blocks on demand. As the user drags the canvas, a block becoming visible would automatically start playback of its file at the timecode read from the global variable. Some heuristics could anticipate the user's movement and prematurely activate invisible blocks, to remove any delay caused by seeking within a video and starting playback. Blocks that are no longer visible could deactivate themselves after a certain amount of time.
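To make the on-demand idea concrete, here is a small language-agnostic sketch (written here in Java) of the core visibility test: given the viewport's pan offset over the full canvas, compute which tiles in the grid must currently be playing. A prefetch margin could be added to viewW/viewH to implement the anticipation heuristic.

    import java.util.ArrayList;
    import java.util.List;

    public class TileVisibility {
        // Returns the (col, row) indices of every tile overlapping the viewport.
        public static List<int[]> visibleTiles(
                int viewX, int viewY, int viewW, int viewH, // viewport, canvas px
                int tileW, int tileH, int cols, int rows) { // tile grid geometry
            List<int[]> tiles = new ArrayList<>();
            int firstCol = Math.max(0, viewX / tileW);
            int lastCol  = Math.min(cols - 1, (viewX + viewW - 1) / tileW);
            int firstRow = Math.max(0, viewY / tileH);
            int lastRow  = Math.min(rows - 1, (viewY + viewH - 1) / tileH);
            for (int r = firstRow; r <= lastRow; r++)
                for (int c = firstCol; c <= lastCol; c++)
                    tiles.add(new int[]{c, r});
            return tiles;
        }
    }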
So my first attempt was to try and see what are my choices platform-wise and I really see it comes down to:
HTML5 with JavaScript (heavily using the <video> tag)
Adobe Flash (using Flash Builder to deploy the apps to all the different devices)
And HTML5 would really be preferable.
So I did some research to see if it would be possible to synchronize more than one video at a time in HTML5. Unfortunately it's far from perfect: there are two available "hacks" which work well in Firefox but are buggy in WebKit (the videos often get out of sync by more than a few frames, sometimes even up to half a second, which would be quite visible if it were a single video split into blocks/segments). Not to mention that I have not even tried it on mobile platforms (Android/iOS).
These two methods/hacks are Rick Waldron's sync as shown here:
http://weblog.bocoup.com/html5-video-synchronizing-playback-of-two-videos/
And the other one, also developed by Rick, is mediagroup.js (this one doesn't work in Chrome at all):
https://github.com/rwldrn/mediagroup.js
My test here: http://jsfiddle.net/NIXin/EQbAx/10/
(I've hidden the controller, because it always plays back earlier than the rest of the clips for some reason.)
So after explaining all that I would really appreciate any feedback from you guys - what would be the best way of solving this problem and on which platform. Is HTML5 mature enough?
Short version:
If I still haven't made it clear what I need: think of a video zoomed in at 600%, so that you can't see everything (some bits are off screen), and you need to pan around by dragging with your mouse (or flicking your finger on mobile devices) to see what's going on in different places of the video. How could I do that (have the video run smoothly) across platforms, while retaining the high quality and resolution of the video?
Thanks a lot, let me know if you need any more details or any clarification of the matter.
