I'm working on a Google Cardboard project. Right now I have a demo for Android where you can look around in a scene I built in Unity 3D. Everything is working fine and looking good, but what I really want is:
I want to walk forward when I press the Google Cardboard magnet button.
I found a few scripts on the web, but I don't know exactly how to make them work in my Unity project.
Can anybody help me with this?
Assuming you are able to read the magnet input correctly, here is how I built an FPS-style controller:
1. In Unity 5, import the asset package Standard Assets/Characters.
2. Create an instance of RigidBodyFPSController.prefab from that package.
3. Remove its child object, "MainCamera".
4. Import the Google Cardboard unitypackage.
5. Replace the "MainCamera" you removed in step #3 with CardboardMain.prefab.
6. Update or modify a copy of the RigidbodyFirstPersonController.cs GetInput() method.
GetInput() with Google Cardboard forward movement fallback:
private Vector2 GetInput()
{
    Vector2 input = new Vector2
    {
        x = Input.GetAxis("Horizontal"),
        y = Input.GetAxis("Vertical")
    };

    // If the axes are empty, try alternate input methods.
    if (Math.Abs(input.x) + Math.Abs(input.y) < 2 * float.Epsilon)
    {
        // IsMoving is the flag for forward movement. This is the bool that
        // would be toggled by a click of the Google Cardboard magnet.
        if (IsMoving)
        {
            input = new Vector2(0, 1); // go straight forward by setting positive Vertical
        }
    }

    movementSettings.UpdateDesiredTargetSpeed(input);
    return input;
}
Google's SDK only supports detecting a magnet "click". If you want to hold down the magnet to move forward, I recommend using Cardboard Controls+ from the Unity3D Asset Store.
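In case it helps, here is a minimal sketch of how IsMoving could be toggled per click. It assumes the legacy Cardboard Unity SDK, where Cardboard.SDK.Triggered is (as far as I recall) true only during the frame the magnet is pulled; the field and placement are illustrative:

// Flag read by GetInput() above.
private bool IsMoving;

// Inside the controller's existing Update() (or any per-frame hook):
if (Cardboard.SDK.Triggered) // true only on the frame of a magnet pull
{
    IsMoving = !IsMoving; // one click starts walking, the next stops
}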
I am developing an application in Unity for the Android mobile platform, in which I rotate an object with a single-finger touch gesture using this script:
using UnityEngine;

public class MouseDragRotate : MonoBehaviour {
    float rotationSpeed = 0.02f;

    void OnMouseDrag()
    {
        float XaxisRotation = Input.GetAxis("Mouse X") * rotationSpeed;
        float YaxisRotation = Input.GetAxis("Mouse Y") * rotationSpeed;
        // select the axis by which you want to rotate the GameObject
        transform.RotateAround(Vector3.down, XaxisRotation);
        transform.RotateAround(Vector3.right, YaxisRotation);
    }
}
The problem is that this script works only on Unity primitives, for example cubes, spheres, and capsules, but not on third-party 3D objects.
So the question is simply: why does this script not work on third-party 3D objects?
You have to have some sort of Collider attached to the 3D model/object you want to interact with. You can add a BoxCollider to any imported object, or, if there is a MeshFilter attached, you can also add a MeshCollider.
You should also make sure the script you show is added to the right top-level object and not to a nested component of that object.
If you are still having problems, please show us more about the objects you are trying to apply this to, and what components and options are set on them.
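For instance, here is a minimal sketch that guarantees a collider is present (the class name is mine, and choosing between the collider types is an assumption; you can just as well add the collider in the Inspector):

using UnityEngine;

public class EnsureCollider : MonoBehaviour
{
    void Awake()
    {
        // OnMouseDrag only fires on objects that have a Collider.
        if (GetComponent<Collider>() == null)
        {
            // A MeshCollider needs mesh data from a MeshFilter;
            // otherwise fall back to a simple BoxCollider.
            if (GetComponent<MeshFilter>() != null)
                gameObject.AddComponent<MeshCollider>();
            else
                gameObject.AddComponent<BoxCollider>();
        }
    }
}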
After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports this to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's github.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values, i.e. the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3s from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
int size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine; no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud, everything is messed up: the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm kind of new to this subject and have a hard time understanding what the PoseData does exactly. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I guess you are using TangoXyzIjData correctly, but the data you get that way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In this example they get the coordinates of two different points in two different coordinate systems and then write those coordinates wrt the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // IMU to color camera transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU to device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU to depth camera transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then when you get the point cloud you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());

    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }

    return normalizedCloud;
}
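In case it's unclear where cameraPose comes from: a hedged sketch is to query the pose at the cloud's timestamp inside onXyzIjAvailable (the frame pair below is an assumption; adapt it to whatever base frame you use):

// Inside onXyzIjAvailable(TangoXyzIjData xyzIj):
TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
        TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
        TangoPoseData.COORDINATE_FRAME_DEVICE);
TangoPoseData cameraPose = mTango.getPoseAtTime(xyzIj.timestamp, framePair);
ArrayList<Vector3> points = normalize(xyzIj, cameraPose, mExtrinsics);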
This should be enough; now you have a point cloud wrt your base frame of reference.
If you superimpose two or more of these "normalized" clouds, you can get a 3D representation of your room.
There is another way to do this with a rotation matrix, explained here.
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for a real-time 3D reconstruction application.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up your environment and start using JNI (in fact, I'm stuck at the moment).
Drifting
There is still a problem when I turn around with the device: the point cloud seems to spread out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of a lot of very small errors in estimating your pose that together cause a big error in your pose relative to the world. For instance, if you take your Tango device, walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet or whatever you like, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is using Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers lots of points and gives you a nice introduction.
Using Area Learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell your Tango to estimate its pose not wrt where it was when you launched the app, but wrt some fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();

// Static initializer, so the pair is added exactly once.
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use FRAME_PAIRS as usual.
Then you have to modify your TangoConfig to tell Tango to use Area Learning, using the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
// Turn the depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
// Turn motion tracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING, true);
// If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY, true);
// Tango tries to store and remember places and rooms;
// this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION, true);
// Turn the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
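Just to close the loop, you then connect with this config as usual (standard Tango lifecycle call):

// Connect using the config prepared above.
tango.connect(config);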
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show you how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION. This key no longer exists (at least in the Java API); use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.
I'm using Unity 5.3.4 to create an Android game for the Samsung Gear VR. I'm able to walk around in my scene with my Bluetooth controller using the FPSController from the Standard Assets package. However, the player moves in the direction its (non-existent) body is facing, not in the direction it is looking. This makes walking around rather unnatural, because "moving forward" doesn't follow your gaze.
I have found several solutions for this around a number of forums, but none seem to work. How can I achieve this behaviour?
Found a working solution by changing the C# code in FirstPersonController.cs:
Change line 100 in method FixedUpdate() containing
Vector3 desiredMove = transform.forward*m_Input.y + transform.right*m_Input.x;
into:
Vector3 desiredMove = m_Camera.transform.forward * m_Input.y + m_Camera.transform.right * m_Input.x;
This way the current transform of the Camera is used to calculate the desired player movement.
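If you are using the stock Standard Assets script, note that the lines just below line 100 project desiredMove onto the surface normal reported by a sphere cast, which conveniently flattens out the camera's pitch again, so looking down shouldn't steer the player into the ground.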
I am new to Unity and I am trying to learn the basics. I study physics at school (tenth grade).
What I have done so far: I added a ball to my project and applied gravity to it with a Rigidbody.
I want to make the ball jump suddenly into the air when there is a touch input, as in Flappy Bird, for example.
My script is basic:
void Update()
{
    if (Input.touchCount == 1)
    {
        GetComponent<Rigidbody>().AddForce(new Vector2(0, 10), ForceMode.Impulse);
    }
}
With this script, the ball falls (gravity), and when I touch the screen its Y coordinate changes, but it happens suddenly (no animation) and only by about ~1, and the ball continues falling (I can't keep it on screen). Also, I can make it jump only once; if I press multiple times it still jumps only once, as you can see here:
https://vid.me/aRfk
Thank you for helping.
I created the same scene in the Unity3D editor and played a little with the same setup you have. And yes, I had similar problems adding force in Update, and also (but less) in FixedUpdate. But adding force in LateUpdate seems to work okay.
Here is my code:
public Rigidbody2D rb2dBall;

void LateUpdate()
{
    if (Input.GetKeyUp(KeyCode.Space))
    {
        rb2dBall.AddForce(Vector2.up * 10f, ForceMode2D.Impulse);
    }
}
I also turned on interpolation on the Rigidbody2D.
I can't say why the physics behaves like this; it could be some bug in Unity3D 5.x, since applying force in FixedUpdate should work fine.
Firstly, when working with physics in Unity, it's highly recommended to use the FixedUpdate method instead of Update; see http://docs.unity3d.com/ScriptReference/MonoBehaviour.FixedUpdate.html
The second thing is that maybe you are not applying enough force to the ball; the force needed for a decent impulse will depend on the mass of your ball's Rigidbody. So try something like this:
GetComponent<Rigidbody>().AddForce(Vector2.up * 100, ForceMode.Impulse);
Change the fixed value 100 to suit your needs.
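Putting both answers together, here is a minimal sketch (class and field names are mine, and checking TouchPhase.Began is an assumption that also addresses the jump-only-once symptom by registering each touch exactly once): sample the touch in Update, apply the impulse in FixedUpdate:

using UnityEngine;

public class BallJump : MonoBehaviour
{
    public float jumpForce = 100f;

    private Rigidbody rb;
    private bool jumpRequested;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void Update()
    {
        // TouchPhase.Began is true only on the first frame of a touch,
        // so holding a finger down doesn't queue an impulse every frame.
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            jumpRequested = true;
        }
    }

    void FixedUpdate()
    {
        // Apply physics changes in FixedUpdate, as recommended above.
        if (jumpRequested)
        {
            jumpRequested = false;
            rb.AddForce(Vector3.up * jumpForce, ForceMode.Impulse);
        }
    }
}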
I'm new to Unity and I am trying to build a solar system exploration app in Unity. I have the environment set up, and now all I need is the ability to look around smoothly (by tilting and moving the phone itself, which is an Android). I can look around, but if I do a complete 180, the phone's physical orientation seems to become inverted relative to the visual movement in-game: after I turn 180 degrees, tilting the phone down shifts my view in-game to the right, and tilting up shifts it to the left. Here is the code I have thus far:
#pragma strict

private var quatMult : Quaternion;
private var quatMap : Quaternion;

function Start () {
    Input.gyro.enabled = true;
}

function Update () {
    #if UNITY_ANDROID
    quatMap = Input.gyro.attitude;
    #endif
    transform.localRotation = Quaternion.Euler(90, 0, 0) * quatMap * Quaternion(0, 0, 1, 0) /*quatMult*/;
}
Any help is greatly appreciated. Thanks.
This should be what you're looking for: https://gist.github.com/chanibal/baf46307c4fee3c699d5. Just drag it onto the camera and it should work.
You might want to remove the reset-on-touch part (the Input.touchCount > 0 check in Update) and the debug information (the OnGUI method).
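If you'd rather not pull in the whole gist, a minimal C# sketch of the usual gyro-to-Unity remapping looks like this (the quaternion conversion below is the commonly used one, but treat it as an assumption and verify on your device):

using UnityEngine;

public class GyroCamera : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true;
    }

    void Update()
    {
        // Convert the gyro's right-handed attitude into Unity's left-handed
        // frame, then rotate the whole frame upright.
        Quaternion att = Input.gyro.attitude;
        transform.localRotation =
            Quaternion.Euler(90f, 0f, 0f) * new Quaternion(att.x, att.y, -att.z, -att.w);
    }
}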