All the guides I have seen, like:
https://www.youtube.com/watch?v=NfG63Ge3aQk
use a main prefab, but I have never found any package that comes remotely close to containing a camera prefab. Also, when I follow the guide and download the package:
https://developers.google.com/vr/unity/download#google-vr-sdk-for-unity
even the demo scenes have no special camera, and they contain faults.
Also, to do without the SDK, I have used Unity's Input.gyro.attitude, but it gives jittery movement even when the phone is held still.
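For illustration, jitter like this is usually tamed with a low-pass filter over the readings. A minimal sketch, assuming a plain camera with no SDK involved (the class name and smoothing factor are placeholders of my own):

using UnityEngine;

// Sketch: smooth Input.gyro.attitude with a slerp-based low-pass filter so the
// camera stops twitching while the phone is held still.
public class GyroSmoother : MonoBehaviour
{
    [Range(0f, 1f)] public float smoothing = 0.1f; // lower = smoother but laggier

    private Quaternion filtered = Quaternion.identity;

    void Start()
    {
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Convert the right-handed gyro quaternion into Unity's left-handed space.
        Quaternion g = Input.gyro.attitude;
        Quaternion unitySpace =
            Quaternion.Euler(90f, 0f, 0f) * new Quaternion(g.x, g.y, -g.z, -g.w);

        // Blend toward the new reading instead of snapping to it each frame.
        filtered = Quaternion.Slerp(filtered, unitySpace, smoothing);
        transform.localRotation = filtered;
    }
}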
To my knowledge, the CardboardMain prefab is no longer included in the SDK. I'm fairly sure that now you just need to enable the Cardboard SDK in Player Settings (the "Virtual Reality Supported" checkbox) and add a GVR raycaster script to your main camera.
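If you prefer to enable it from a script rather than the checkbox, something like this should work; a minimal sketch, assuming Unity 2017+ where these calls live in UnityEngine.XR (older versions have the same API under UnityEngine.VR):

using System.Collections;
using UnityEngine;
using UnityEngine.XR;

// Sketch: load and enable the Cardboard VR device at runtime. Assumes
// "cardboard" is listed under Player Settings > XR Settings > Virtual Reality SDKs.
public class EnableCardboard : MonoBehaviour
{
    IEnumerator Start()
    {
        XRSettings.LoadDeviceByName("cardboard");
        yield return null; // the device switch takes effect on the next frame
        XRSettings.enabled = true;
    }
}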
I've followed the steps here to install the HelloAR example scene onto my Samsung Galaxy S8+ (Model number: SM-G955F) using Unity 2018.1.
However, when I start the app, I see this:
I didn't receive a notification to give the app permission to use the camera initially, but I allowed it manually and restarted the app, to no effect.
What can I do to fix this?
I solved this problem by unticking the Scenes/SampleScene entry, which is ticked in Build Settings.
Seems like it was a problem with the version of the SDK I was using. I managed to fix the issue by downloading the latest files from the official repo, and replacing the existing files in my Unity project.
I have spent a week on this already, but I'm not an expert, so I may be missing something, or I may even be trying to do something impossible.
I have a Unity project (with Daydream VR), and my goal is to get a WebView working on a 3D surface (as a texture) in VR mode on Android.
I know it sounds complicated, and I bet it is.
This is where I am so far:
First of all, I started with the samples from Oculus. They made a plugin that links an Android MediaPlayer (a native Android class) to a Unity texture. This is relatively easy because Android's media player already uses a SurfaceTexture to render itself, so with the plugin they just created a bridge between the two environments.
I got this working easily.
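For context, the Unity side of such a bridge boils down to wrapping the GL texture id that the native plugin owns, so no pixels are copied between the two worlds. A rough sketch of the idea (the plugin class and method names are placeholders, not the actual Oculus API):

using System;
using UnityEngine;

// Sketch: wrap a GL texture owned by an Android plugin's SurfaceTexture so a
// Unity material can display it without copying pixels.
public class NativeTextureBridge : MonoBehaviour
{
    private AndroidJavaObject plugin;

    void Start()
    {
        // "com.example.MediaPlugin" stands in for the real plugin class.
        plugin = new AndroidJavaObject("com.example.MediaPlugin");
        int textureId = plugin.Call<int>("getTextureId"); // illustrative name

        Texture2D external = Texture2D.CreateExternalTexture(
            1024, 1024, TextureFormat.RGBA32, false, false, new IntPtr(textureId));
        GetComponent<Renderer>().material.mainTexture = external;
    }

    void Update()
    {
        // The plugin must call SurfaceTexture.updateTexImage() on the GL thread
        // each frame, or the texture never shows new content.
        plugin.Call("requestUpdate"); // illustrative name
    }
}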
Then I found this article: http://www.felixjones.co.uk/neo%20website/Android_View/
and deduced that the same trick could be used to get a native Android WebView into a Unity texture.
From the article, I thought the only thing I needed was a custom Android WebView that renders to a SurfaceView (so it would be quite similar to the MediaPlayer case), and then the Oculus plugin would work too.
Of course, nothing went like that...
Step 1/
I made an Android plugin for Unity that exposes a custom WebView which (as far as I can tell) renders to a SurfaceView (but I'm not sure this part is quite right).
see => JrmgxWebView.java
Step 2/
I compiled the Oculus plugin, with minor changes.
see => *.cpp
Step 3/
I adapted the Unity script that links the Android plugin and the Oculus plugin, and added it to a 3D cube (with the script attached) in a simple Daydream scene.
see => WebviewPlayerSampleWeb.cs
I get logs, and from what I understand, the problem is that the SurfaceView never gets refreshed. I can see that my custom WebView is instantiated correctly; most of the code works, and there are no errors (and no crashes).
But the texture only changes from white (its starting value) to black (when the plugin is initialized), and then nothing happens.
The textureId coming from the plugin gets stuck at either 0 or 29, and AndroidSurfaceTexture->GetNanoTimeStamp() never returns new values.
On the Java side I played with .invalidate() and used .loadUrl() and .loadData(); on the C++ side, I checked each method to confirm everything was being called. I have spent so much time on this code that I can't even remember all the combinations I tried.
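For reference, the per-frame check described above amounts to something like this; a simplified sketch where the extern and event names are stand-ins for the actual plugin API in the gist:

using System.Runtime.InteropServices;
using UnityEngine;

// Sketch: each frame, run the plugin's update on the render thread, then ask
// the SurfaceTexture whether it delivered a new frame since the last check.
public class SurfaceTimestampCheck : MonoBehaviour
{
    [DllImport("OculusMediaSurface")] // placeholder plugin library name
    private static extern long GetSurfaceNanoTimestamp();

    private long lastTimestamp;

    void Update()
    {
        // Let the native plugin run updateTexImage() on Unity's render thread.
        GL.IssuePluginEvent(0); // the event id is illustrative

        long ts = GetSurfaceNanoTimestamp();
        if (ts != lastTimestamp)
        {
            lastTimestamp = ts; // a new frame arrived; in my case this never fires
        }
    }
}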
What is wrong with this?
I'm sure I'm missing a basic point here.
Most of the code is here: https://gist.github.com/jrmgx/019cba403769dda27a6d42e48c461b1b
PS: Be nice to me; it's been a week, and it is my first time with Unity, C++, JNI and C#.
Thank you in advance.
Is it possible to get a photo from the camera in a PhoneGap app with only an HTML input, without the Camera API?
Scenario
I developed an app and packaged it with PhoneGap (Build).
Users can submit photos using an HTML file input, but there is no option to take a new photo with the camera, only gallery and files (and any other storage app like Drive or Dropbox, where applicable).
I would like to enable the camera option, but without PhoneGap's Camera API.
Solutions
I've tried the following solutions without success:
1. The "capture" attribute (from Raymond Camden post)
With this method, all you have to do is to add the "capture" attribute like this:
<input type="file" capture="camera" accept="image/*" id="JustChooseAnID">
Raymond explains that with this method you don't need PhoneGap's Camera API, since Google developed this option and showed it in a Google I/O presentation; according to the Google devs, and also to Raymond, it works.
But... not for me.
2. Configure PhoneGap to ask for camera permission (from Jorge Lizaso's question)
"Maybe the camera option is not showing up because your app doesn't have the proper permissions"
In the question above, four methods are mentioned for indirectly requesting and receiving the proper permissions to use the camera.
I have already tried methods 2 and 4, and although my app now asks for camera permission and I granted it properly, when I use the file input all the other options are presented (like gallery and files), but not the camera.
3 (and 1). Use the latest version of WebKit, via Crosswalk, to ensure solution #1 works
Since my Android WebView might be outdated (and not compatible with solution #1), I decided to include the best WebView possible in my app: the latest version of Google Chromium.
But... no success again.
1, 2 and 3 combined
The result: no... success.
Yes, there is a solution, without the need for Cordova/PhoneGap's Camera API.
For reasons beyond my knowledge, the WebView used by CocoonJS (named WebView+) is far better optimized and adjusted for HTML5 apps than PhoneGap's Crosswalk (please note, I am not affiliated with Cocoon).
I discarded my PhoneGap config.xml settings completely and tried the Cocoon builder. I used solution #1 (the capture attribute on the file input) and added these plugins via Cocoon:
<cocoon:plugin name="cordova-plugin-camera"/>
<cocoon:plugin name="cordova-plugin-media-capture"/>
<cocoon:plugin name="cordova-plugin-media"/>
<cocoon:plugin name="cordova-plugin-file"/>
<cocoon:plugin name="cordova-plugin-device"/>
It's important to note that I installed these plugins only to obtain the needed permissions; I am not actually using them at all.
More experienced developers will probably have better solutions but, for now, this is the best way I have found to solve this problem.
I just bought a Google Cardboard Viewer and decided to take a shot at making an app. I have played around with OpenGL and have no problem with the basics of that.
I created a new project following the source from the only sample I could find, "Treasure Hunt", linked from the Cardboard "Getting Started" page.
I can run my own test application and Google's Treasure Hunt sample on my Nexus 7 tablet, but I get an error on my phone:
JNI DETECTED ERROR IN APPLICATION: can't call void com.google.vrtoolkit.cardboard.CardboardView$StereoRenderer.onNewFrame(com.google.vrtoolkit.cardboard.HeadTransform) on instance of my.app.cardboardtest.CardboardRenderer
Since I get the same error from Google's own sample, I don't think I need to share any of my code specifically. This is a link to their code:
https://github.com/googlesamples/cardboard-java
I have a T-Mobile Galaxy Note 3 running Android 5.0
Is this the final tipping point for me to upgrade my phone? lol
This seems to be a problem with your build in Android Studio.
Try moving the cardboard.jar file (which contains the vrtoolkit.so binary) into a folder called /libs and pointing to it there.
The upshot is that Android cannot find vrtoolkit.so in the deployed APK. The binary is a recent addition to the jar, and the docs are older than that.
Or just stick with Eclipse until Studio has settled in a little more.
Just a heads-up: the learning curves around VR development are turbulent, like the event horizon of a black hole...
I have tried to use the Project Tango Unity examples found here: https://github.com/googlesamples/tango-examples-unity with no success.
I have never used Unity before, and I have successfully completed the Codelab: Motion Tracking example from the developer resources: https://developers.google.com/project-tango/apis/unity/unity-codelab-motion-tracking.
I downloaded the example files mentioned at the top and opened one of the projects (I tried several; all have the same problem). Once a project is open, I open the scene that comes with it, go to Build Settings, change the platform to Android, change the Bundle Identifier, set the Minimum API Level to Jelly Bean (API level 17), and finally click "Build and Run".
The build starts, everything goes fine, and the Unity logo shows up on the Tango. And that's it: the Unity logo shows up, there are no error messages on the computer, but nothing more happens. The program stalls at the Unity logo.
Am I missing some crucial step to get the example projects to actually run?
Thanks in advance!
The example code is a Unity 5 based project, so please use a 5.x version of Unity to open it. If you use a Unity 4.x version, the scenes won't open properly.
However, the code lab is based on Unity 4.6 and is independent from our sample code. I understand this is a little confusing at the moment; we are working on making sure the versions are unified.
In short, I would suggest you open our example code with a Unity 5.x version. You should be able to run it without a problem.
It was a blank, empty scene build; that's why nothing loaded after the Unity logo.
You need to go to Unity's Build Settings and drag and drop all the scenes from the examples folder into it.
Once you have all the scenes there, make sure that AreaDescriptionManagement is at the top of the list.
That is the only step you were missing.
Now, build and run. Everything should be fine.
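If you would rather set this up from code, a small editor script can populate the scene list with AreaDescriptionManagement first. A sketch, with scene paths that are guesses at the example project's layout, so adjust them to match yours:

using UnityEditor;
using UnityEngine;

// Editor sketch: fill Build Settings with the example scenes, keeping
// AreaDescriptionManagement first so it loads at startup.
public static class TangoBuildScenes
{
    [MenuItem("Tools/Set Tango Example Scenes")]
    public static void SetScenes()
    {
        string[] paths =
        {
            // Placeholder paths; use the actual paths from the examples folder.
            "Assets/TangoSDK/Examples/Scenes/AreaDescriptionManagement.unity",
            "Assets/TangoSDK/Examples/Scenes/AreaLearning.unity",
            "Assets/TangoSDK/Examples/Scenes/MotionTracking.unity",
        };

        var scenes = new EditorBuildSettingsScene[paths.Length];
        for (int i = 0; i < paths.Length; i++)
            scenes[i] = new EditorBuildSettingsScene(paths[i], true);

        EditorBuildSettings.scenes = scenes; // index 0 is the startup scene
        Debug.Log("Tango example scenes added to Build Settings.");
    }
}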