I'm extremely new to Unity and VR, and I've just finished working through some Unity tutorials on YouTube. Unfortunately there isn't much documentation on the process one should follow to make a VR app with Unity.
What I need is to replicate the photo sphere viewer from the Cardboard app for Android. I need to do this using Unity if possible.
The photospheres were taken with a Nexus 4's camera using the photo sphere option and look like the image below:
I tried following this really nice walkthrough, which attaches a cubemap skybox to the lighting. The problem is that the top and bottom faces of the cube don't show the proper image.
I tried a six-sided skybox too, but I'm pretty lost about how to proceed with that, primarily because I only have one photosphere image while the six-sided skybox takes six input textures.
I also tried following this link but the information there is slightly overwhelming.
Any help or pointers in the right direction would be extremely appreciated!
Thank you :)
I also went through the tutorial at the link you provided, but it seems they did everything "manually".
Since you have Unity 3D and the Cardboard SDK for Unity, you don't have to configure the cameras yourself.
Please follow this tutorial instead.
There is an alternative approach: create a sphere and put the camera inside it:
http://zhvillues.tumblr.com/post/126331275376/creating-a-360-viewer-using-unity-3d
You have to apply a custom shader to the sphere so the inside of it is rendered: the shader flips the normals and culls the front faces, so the texture is visible from within.
Shader "Custom/sphereShader" {
Properties {
_MainTex ("Base (RGB)", 2D) = "white" {}
_Color ("Main Color", Color) = (1,1,1,0.5)
}
SubShader {
Tags { "RenderType" = "Opaque" }
Cull Front
CGPROGRAM
#pragma surface surf Lambert vertex:vert
sampler2D _MainTex;
struct Input {
float2 uv_MainTex;
float4 color : COLOR;
};
void vert(inout appdata_full v)
{
v.normal.xyz = v.normal * -1;
}
void surf (Input IN, inout SurfaceOutput o) {
fixed3 result = tex2D(_MainTex, IN.uv_MainTex);
o.Albedo = result.rgb;
o.Alpha = 1;
}
ENDCG
}
Fallback "Diffuse"
}
I have been working with the ARCore demo code provided by Google in Android Studio, and I would like to avoid using Unity if I can for this task.
By default each detected plane is shown as white triangles, with the negative space transparent. I would like the plane to use a texture that can be tiled throughout the environment instead, for example a grass texture.
The default image the plane uses is a file called trigrid.png, and it is set in HelloArActivity.java:
https://github.com/google-ar/arcore-android-sdk/blob/master/samples/java_arcore_hello_ar/app/src/main/java/com/google/ar/core/examples/java/helloar/HelloArActivity.java
I tried to replace that with an image file that was just a grass texture, called floor.png. The plane just appears all white and doesn't display the grass at all.
try {
    mPlaneRenderer.createOnGlThread(/*context=*/this, "floor.png");
} catch (IOException e) {
    Log.e(TAG, "Failed to read plane texture");
}
I have tried adding
GLES20.glEnable(GLES20.GL_BLEND);
in the drawPlanes function, but that didn't seem to help. I also commented out some of the color changes in drawPlanes as well:
//GLES20.glClearColor(1, 1, 1, 1);
//GLES20.glColorMask(false, false, false, true);
//GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
//GLES20.glColorMask(true, true, true, true);
I'm not sure what is required to make the texture show. It could have to do with the plane_fragment.shader file, but I don't have any experience with shaders.
Any insight would be helpful.
The shaders are very important. If you want to do any graphics programming in OpenGL, you need to know at least a little bit about shaders. They are programs that run on the graphics processing unit (GPU) to determine the color of every pixel of every frame. A quick intro to shaders is here: https://youtu.be/AyNZG_mqGVE.
To answer your question: you can use a new fragment shader which just draws your texture and does not mix in other colors. This is a quick and dirty solution; in the long term you will definitely want to clean up the code so it no longer references the uniform variables that are no longer used.
Specifically:
Create a new file called plane_simple_fragment.shader in the src/main/res/raw directory (the R.raw reference below requires the file to live under res/raw).
Open it in the editor and add the following code:
precision mediump float;

uniform sampler2D u_Texture;
varying vec3 v_TexCoordAlpha;

void main() {
    // Sample the texture directly, without blending in any grid color.
    gl_FragColor = texture2D(u_Texture, v_TexCoordAlpha.xy);
}
Then in PlaneRenderer change to the new shader by replacing R.raw.plane_fragment with R.raw.plane_simple_fragment.
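For reference, the swap is a one-line change at the point where PlaneRenderer compiles its fragment shader. In the sample it looks roughly like the line below (ShaderUtil.loadGLShader is the helper the sample code uses; the exact name and signature may differ between SDK versions):

// In PlaneRenderer.createOnGlThread(): load the simplified fragment
// shader instead of the stock plane_fragment one.
int passthroughShader = ShaderUtil.loadGLShader(TAG, context,
        GLES20.GL_FRAGMENT_SHADER, R.raw.plane_simple_fragment);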
I am trying to make a slice effect, like in Fruit Ninja, when the user moves their finger across the screen on an Android device.
I have a movie clip named Particle which contains a circle.
I tried the following:
stage.addEventListener(MouseEvent.MOUSE_DOWN, startSlice);
stage.addEventListener(MouseEvent.MOUSE_UP, endSlice);

function startSlice(e:MouseEvent):void
{
    stage.addEventListener(MouseEvent.MOUSE_MOVE, drawSlice);
}

function endSlice(e:MouseEvent):void
{
    // Stop spawning particles once the finger is lifted.
    stage.removeEventListener(MouseEvent.MOUSE_MOVE, drawSlice);
}

function drawSlice(e:MouseEvent):void
{
    // Drop one particle at the current pointer position.
    var p:Particle = new Particle();
    addChild(p);
    p.x = mouseX;
    p.y = mouseY;
}
But when I run it, the slice is broken up; I want it to be seamless.
Adding e.updateAfterEvent() in the mouse move handler might improve performance a bit.
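The gaps themselves appear because MOUSE_MOVE only fires at discrete intervals, so a fast swipe skips many pixels between events. The usual fix is to remember the previous pointer position and spawn particles along the whole segment to the current one. Here is the idea sketched in Java (spawnParticle stands in for your Particle creation; the same loop maps directly onto your AS3 drawSlice handler):

// Fill the gap between the previous and current pointer positions by
// spawning particles at most 'step' pixels apart along the segment.
void drawSlice(float lastX, float lastY, float x, float y, float step) {
    float dx = x - lastX, dy = y - lastY;
    int n = Math.max(1, (int) (Math.hypot(dx, dy) / step));
    for (int i = 1; i <= n; i++) {
        float t = i / (float) n;
        spawnParticle(lastX + dx * t, lastY + dy * t);
    }
}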
In general your code will probably not work as well as the particle effects you see in games such as Fruit Ninja. For good results you will need to use a particle engine such as:
StarDust: https://code.google.com/p/stardust-particle-engine/
Flint: http://flintparticles.org/
Partigen: http://www.desuade.com/partigen/engine
Starling Particle: https://github.com/PrimaryFeather/Starling-Extension-Particle-System
I know Starling works great on Android; I'm not sure about the others.
I'm working on a Google Cardboard project. Right now I have a demo for Android where you can look around in a scene I built in Unity 3D. Everything is working fine and looking good, but what I really want is:
I want to walk forward when I press the Google Cardboard magnet button.
I found a few scripts on the web, but I don't know exactly how to make these scripts work in my Unity project.
Can anybody help me further with this?
Assuming you are able to read the magnet input correctly, this is how I did an FPS-style controller script:
1. In Unity 5, import the asset package Standard Assets/Characters.
2. Create an instance of RigidBodyFPSController.prefab from that package.
3. Remove its child object, "MainCamera".
4. Import the Google Cardboard unitypackage.
5. Replace the "MainCamera" you removed in step 3 with CardboardMain.prefab.
6. Update or modify a copy of the RigidbodyFirstPersonController.cs GetInput() method.
GetInput() with Google Cardboard forward movement fallback:
private Vector2 GetInput()
{
    Vector2 input = new Vector2
    {
        x = Input.GetAxis("Horizontal"),
        y = Input.GetAxis("Vertical")
    };
    // If GetAxis are empty, try alternate input methods.
    if (Math.Abs(input.x) + Math.Abs(input.y) < 2 * float.Epsilon)
    {
        // IsMoving is the flag for forward movement. This is the bool that
        // would be toggled by a click of the Google Cardboard magnet.
        if (IsMoving)
        {
            input = new Vector2(0, 1); // Go straight forward by setting positive Vertical.
        }
    }
    movementSettings.UpdateDesiredTargetSpeed(input);
    return input;
}
Google's SDK only supports detecting a magnet "click". If you want to hold down the magnet to move forward, I recommend using Cardboard Controls+ from the Unity3D Asset Store.
I am using Camera.Face to detect a face and min3D to load a 3D model.
I want the model to move with the face, but it is not working well.
@Override
public void updateScene() {
    if (mFaces == null) {
        // No faces detected: reset the model to the origin.
        animeModel.position().x = animeModel.position().y = animeModel.position().z = 0;
        return;
    }
    for (Face face : mFaces) {
        if (face == null) {
            continue;
        }
        // Move the model to the center of the detected face rectangle.
        animeModel.position().x = face.rect.centerX();
        animeModel.position().y = face.rect.centerY();
    }
}
Are the model's coordinates and the rectangle's coordinates in different systems (world coordinates versus screen coordinates, or something like that)?
How can I solve this?
UPDATE:
I have tried printing both the model's coordinates and the face's coordinates. The two values are completely different.
How do I convert face.rect.centerX() to animeModel.position().x?
Here is an article all about how a face tracking demo was developed:
http://www.smallscreendesign.com/2011/02/07/about-face-detection-on-android-%E2%80%93-part-1/
That app is also available on the Play Store. Part 1 of the above article has some performance metrics on recognition time; it looks like it may take two seconds or more to detect a face.
You could use the code in that article for your prototyping. You may discover that face detection doesn't happen fast or often enough to track a face in real time.
Here is the documentation for face tracking on the Android Developer site:
http://developer.android.com/reference/android/hardware/Camera.Face.html
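On the coordinate question: as that page documents, Camera.Face rectangles are reported in a driver coordinate space running from (-1000, -1000) at the top-left of the field of view to (1000, 1000) at the bottom-right, while your min3D model lives in the scene's own world space, so the raw values will never match. As a first step you can map the face center into view pixels; a minimal sketch, assuming the preview fills the view and ignoring display rotation and front-camera mirroring:

// Map a Camera.Face coordinate from the driver space (-1000..1000)
// into pixel coordinates of the preview view.
float toViewX(float faceX, int viewWidth) {
    return (faceX + 1000f) / 2000f * viewWidth;
}

float toViewY(float faceY, int viewHeight) {
    return (faceY + 1000f) / 2000f * viewHeight;
}

From there you still have to convert view pixels into whatever range your model's position uses, for example by scaling against the 3D viewport, since min3D positions are not screen pixels either.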
UPDATE:
Check out this library: https://code.google.com/p/asmlib-opencv/
I was surfing the net looking for a nice page-turning effect for Android, and there just doesn't seem to be one. Since I'm learning the platform, it seemed like a nice thing to be able to do.
I managed to find a page here: http://wdnuon.blogspot.com/2010/05/implementing-ibooks-page-curling-using.html
- (void)deform
{
    Vertex2f vi;   // Current input vertex
    Vertex3f v1;   // First stage of the deformation
    Vertex3f *vo;  // Pointer to the finished vertex
    CGFloat R, r, beta;

    for (ushort ii = 0; ii < numVertices_; ii++)
    {
        // Get the current input vertex.
        vi = inputMesh_[ii];
        // Radius of the circle circumscribed by vertex (vi.x, vi.y) around A on the x-y plane.
        R = sqrt(vi.x * vi.x + pow(vi.y - A, 2));
        // Now get the radius of the cone cross section intersected by our vertex in 3D space.
        r = R * sin(theta);
        // Angle subtended by arc |ST| on the cone cross section.
        beta = asin(vi.x / R) / sin(theta);
        // *** MAGIC!!! ***
        v1.x = r * sin(beta);
        v1.y = R + A - r * (1 - cos(beta)) * sin(theta);
        v1.z = r * (1 - cos(beta)) * cos(theta);
        // Apply a basic rotation transform around the y axis to rotate the curled page.
        // These two steps could be combined through simple substitution, but are left
        // separate to keep the math simple for debugging and illustrative purposes.
        vo = &outputMesh_[ii];
        vo->x = (v1.x * cos(rho) - v1.z * sin(rho));
        vo->y = v1.y;
        vo->z = (v1.x * sin(rho) + v1.z * cos(rho));
    }
}
That gives example code for the iPhone (above), but I have no idea how to implement it on Android. Could any of the math gods out there help me with how to implement this in Android Java?
Is it possible using the native drawing APIs, or would I have to use OpenGL? Could I mimic the behaviour somehow?
Any help would be appreciated. Thanks.
EDIT:
I found a Bitmap Mesh example in the Android API demos: http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/BitmapMesh.html
Maybe someone could help me out with an equation to simply fold the top-right corner inward diagonally across the page, creating a similar effect that I can later apply shadows to for more depth?
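A rough sketch of the kind of thing I mean, in case it helps: Canvas.drawBitmapMesh lets you reposition every mesh vertex, so a crude diagonal fold can be had by reflecting each vertex that lies beyond the fold line back across it (foldX and foldY here are made-up parameters marking where the fold line crosses the top and right edges):

// Crude diagonal fold of the top-right corner using drawBitmapMesh.
// The fold line runs from (foldX, 0) on the top edge to (w, foldY) on
// the right edge; vertices beyond the line are mirrored back across it.
void drawFolded(Canvas canvas, Bitmap page, float foldX, float foldY) {
    final int cols = 20, rows = 20;
    final float[] verts = new float[(cols + 1) * (rows + 1) * 2];
    final float w = page.getWidth(), h = page.getHeight();
    final float dirX = w - foldX, dirY = foldY;     // fold line direction
    final float len = (float) Math.hypot(dirX, dirY);
    final float nx = -dirY / len, ny = dirX / len;  // unit normal of the fold line
    int i = 0;
    for (int r = 0; r <= rows; r++) {
        for (int c = 0; c <= cols; c++) {
            float x = w * c / cols;
            float y = h * r / rows;
            float d = (x - foldX) * nx + y * ny;    // signed distance to the line
            if (d < 0) {                            // vertex is on the corner side,
                x -= 2 * d * nx;                    // so reflect it across the fold
                y -= 2 * d * ny;
            }
            verts[i++] = x;
            verts[i++] = y;
        }
    }
    canvas.drawBitmapMesh(page, cols, rows, verts, 0, null, 0, null);
}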
I'm experimenting with a page curl effect on Android using OpenGL ES at the moment. It's quite a sketch actually, but maybe it gives some idea of how to implement a page curl for your needs, if you're interested in a 3D page-flip implementation.
As for the formula you're referring to: I tried it out and didn't like the result too much. I'd say it simply doesn't fit a small screen very well, so I started to hack together a simpler solution.
Code can be found here:
https://github.com/harism/android_page_curl/
While writing this I'm in the midst of deciding how to implement 'fake' soft shadows, and whether to create a proper application to show off this page curl effect. Also, this is pretty much one of the very few OpenGL implementations I've ever done, so it shouldn't be taken as a canonical example.
I just created an open source project which features a page curl simulation in 2D using the native canvas: https://github.com/moritz-wundke/android-page-curl
I'm still working on it to add adapters and such to make it usable as a standalone view.
EDIT: Links updated.
EDIT: Missing files have been pushed to the repo.
I'm pretty sure you'd have to use OpenGL for a nice effect. The basic UI framework's capabilities are quite limited; you can only do basic transformations (alpha, translate, rotate) on Views using animations.
Though it might be possible to mimic something like that in 2D using a FrameLayout with a custom View in it, as sketched below.
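For what it's worth, a minimal sketch of that 2D idea (all names and numbers here are made up, and the geometry is deliberately crude): a custom View stacked in a FrameLayout above the next page's content, which fakes the turn by clipping away the corner so the page underneath shows through, then drawing the turned-back flap as a mirrored triangle:

// Hypothetical custom View faking a folded top-right corner in 2D.
// 'fold' is how far the corner has been pulled in along the top and
// right edges; animate it from touch events for a page-turn feel.
public class PageCurlView extends View {
    private final Bitmap page;
    private final Paint flapPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float fold = 120f;

    public PageCurlView(Context context, Bitmap page) {
        super(context);
        this.page = page;
        flapPaint.setColor(0xFFEEEEEE); // back side of the paper
    }

    @Override
    protected void onDraw(Canvas canvas) {
        float w = getWidth();
        // Clip out the corner triangle so the View underneath
        // (the next page) shows through where the page has lifted.
        Path corner = new Path();
        corner.moveTo(w - fold, 0);
        corner.lineTo(w, fold);
        corner.lineTo(w, 0);
        corner.close();
        canvas.save();
        canvas.clipPath(corner, Region.Op.DIFFERENCE);
        canvas.drawBitmap(page, 0, 0, null);
        canvas.restore();
        // The corner (w, 0) mirrored across the fold line lands at
        // (w - fold, fold), so the turned-back flap is this triangle:
        Path flap = new Path();
        flap.moveTo(w - fold, 0);
        flap.lineTo(w, fold);
        flap.lineTo(w - fold, fold);
        flap.close();
        canvas.drawPath(flap, flapPaint);
    }
}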