First Person Camera rotation in 3D - android

I have written a first-person camera class for Android.
The class is really simple: the camera object has its three axes,
X, Y and Z,
and there are functions to create the ModelView matrix (i.e. calculateModelViewMatrix()),
rotate the camera around its X and Y axes,
and translate the camera along its Z-axis.
I think my ModelView matrix calculation is correct, and I can also translate the camera along the Z-axis.
Rotation around the X-axis seems to work, but around the Y-axis it gives strange results.
Another problem with the rotation is that instead of the camera being rotated, my 3D model starts to rotate around its own axis.
I have written another implementation based on a look-at point, using OpenGL ES's GLU.gluLookAt() function to obtain the ModelView matrix, but it seems to suffer from exactly the same problems.
EDIT
First of all, thanks for your reply.
I have actually made a second implementation of the Camera class, this time using the rotation functions provided in the android.opengl.Matrix class, as you suggested.
I have provided the code below, which is much simpler.
To my surprise, the results are exactly the same.
This means that my rotation functions and Android's rotation functions are producing the same results.
I did a simple test and looked at my data.
I rotated the look-at point 1 degree at a time around the Y-axis and looked at the coordinates. It seems that my look-at point is lagging behind the exact rotation angle, e.g. at 20 degrees it has only rotated 10 to 12 degrees,
and after 45 degrees it starts reversing back.

There is a class android.opengl.Matrix which is a collection of static methods that do everything you need on a float[16] you pass in. I highly recommend you use those functions instead of rolling your own. You'd probably want setLookAtM with the look-at point calculated from your camera angles (using sin and cos as you are doing in your code - I assume you know how to do this).
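For example, a minimal sketch of the setLookAtM approach might look like this (uncompiled; the field names and the yaw/pitch-to-direction convention are my assumptions, not taken from your code):
import android.opengl.Matrix;

public class LookAtCamera {
    public float eyeX, eyeY, eyeZ;  // camera position
    public float yawDeg, pitchDeg;  // orientation in degrees

    public final float[] viewMatrix = new float[16];

    // Builds a view matrix by deriving the look-at point from yaw/pitch.
    public void updateViewMatrix() {
        float yaw = (float) Math.toRadians(yawDeg);
        float pitch = (float) Math.toRadians(pitchDeg);

        // Unit direction vector from the Euler angles (one common convention).
        float dirX = (float) (Math.cos(pitch) * Math.sin(yaw));
        float dirY = (float) Math.sin(pitch);
        float dirZ = (float) (-Math.cos(pitch) * Math.cos(yaw));

        Matrix.setLookAtM(viewMatrix, 0,
                eyeX, eyeY, eyeZ,                       // eye
                eyeX + dirX, eyeY + dirY, eyeZ + dirZ,  // center = eye + direction
                0f, 1f, 0f);                            // up vector
    }
}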
-- edit in response to new answer --
(You should probably have edited your original question, by the way - posting your follow-up as an answer confused me for a bit.)
Ok, so here's one way of doing it. This is uncompiled and untested. I decided to build the matrix manually instead; perhaps that'll give a bit more information about what's going on...
class TomCamera {
    // These are our inputs - eye position, and the orientation of the camera.
    public float mEyeX, mEyeY, mEyeZ;  // position
    public float mYaw, mPitch, mRoll;  // Euler angles

    // This is the output matrix to pass to OpenGL.
    public float mCameraMatrix[] = new float[16];

    // Convert inputs to outputs.
    public void createMatrix() {
        // Create a camera matrix (YXZ order is pretty standard).
        // You may want to negate some of these constant 1s to match expectations.
        Matrix.setRotateM(mCameraMatrix, 0, mYaw, 0, 1, 0);
        Matrix.rotateM(mCameraMatrix, 0, mPitch, 1, 0, 0);
        Matrix.rotateM(mCameraMatrix, 0, mRoll, 0, 0, 1);
        Matrix.translateM(mCameraMatrix, 0, -mEyeX, -mEyeY, -mEyeZ);
    }
}
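As a follow-up, a view matrix like mCameraMatrix is normally combined with a projection matrix before being handed to the shader. A minimal, equally untested sketch of that step (the perspective values and variable names here are illustrative assumptions):
// Typically called from onSurfaceChanged / onDrawFrame of a GLSurfaceView.Renderer.
void buildMvpMatrix(TomCamera camera, int width, int height, float[] modelMatrix, float[] mvpMatrix) {
    float[] projectionMatrix = new float[16];
    float[] viewProjection = new float[16];

    // Perspective projection for the current surface aspect ratio.
    Matrix.perspectiveM(projectionMatrix, 0, 60f, (float) width / height, 0.1f, 100f);

    // Build the camera (view) matrix, then combine: MVP = projection * view * model.
    camera.createMatrix();
    Matrix.multiplyMM(viewProjection, 0, projectionMatrix, 0, camera.mCameraMatrix, 0);
    Matrix.multiplyMM(mvpMatrix, 0, viewProjection, 0, modelMatrix, 0);
    // mvpMatrix can now be uploaded with glUniformMatrix4fv.
}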

Related

Panning the view of a gameObject instead of the camera in Unity3d?

I'm having a hard time panning the view of a gameObject in Unity3d. I'm new to scripting and I'm trying to develop an AR (Augmented Reality) application for Android.
I need to have a gameObject (e.g. a model of a floor), from the normal top-down view, rendered in a "pseudo" iso view, inclined at 45 degrees. As the gameObject is inclined, I need a panning function on its view, using four (4) buttons (left, right, forward (or up), backward (or down)).
The problem is that I cannot use any of the known panning script snippets from the forums, as the AR camera has to stay static in the scene.
I should mention that I need the panning function to be active only in the isometric view (which I already compute with another script), not in the top-down view. So there should be no problem with the inclination of the gameObject's axes, right?
Below are two mockup images of the states the gameObject (model floor) is rendered in, and the script code (from the Unity reference) that I'm currently using, which is not very functional for my needs.
Here is the code snippet for the left movement of the gameObject. I use the same code with changed -/+ speed values for the other movements, but I only get it to move up and down, not forward and backward:
#pragma strict

// The target gameObject.
var target : Transform;
// Speed in units per sec.
var speedLeft : float = -10;

private static var isPanLeft = false;

function FixedUpdate()
{
    if (isPanLeft == true)
    {
        // The step size is equal to speed times frame time.
        var step = speedLeft * Time.deltaTime;
        // Move the model's position a step closer to the target.
        transform.position = Vector3.MoveTowards(transform.position, target.position, step);
    }
}

static function doPanLeft()
{
    isPanLeft = !isPanLeft;
}
It would be great if someone would be kind enough to take a look at this post and suggest the easiest way to code this functionality, as I'm a newbie.
Furthermore, if sample code or a tutorial can be provided, it would be appreciated, as I can learn a lot from it. Thank you all in advance for your time and answers.
If I understand correctly, you have a camera with a fixed rotation and position, and you have an object you want to move up/down/left/right from the camera's perspective.
To rotate an object to a set of angles you simply do:
transform.rotation = Quaternion.Euler(45, 45, 45);
Then to move it, you use the camera's up/right/forward vectors in world space. For example, to move it up and to the left:
transform.position += camera.transform.up;
transform.position -= camera.transform.right;
If you only have one camera in your scene, you can access its transform via Camera.main.transform.
An example of how to move it when someone presses the left arrow:
if (Input.GetKeyDown(KeyCode.LeftArrow))
{
    transform.position -= camera.transform.right;
}

transfer Optitrack-NatNet-Quaternion to a Rotation of a 3-D Model in a libgdx Scene with scaled axes

We track a tablet with markers and the OptiTrack system. For this we use a JNI wrapper to get access to the functions of the NatNet SDK. After receiving the position data on the server, we stream it back (in real time) to the client - the tablet itself - and render an augmented reality scene with the libgdx framework. The target platform is Android.
Here is some sample data we are receiving (7 values: x, y, z, qx, qy, qz, qw):
-0,465436 0,888108 -0,991635 0,331507 0,091413 -0,379475 0,858921
-0,438584 0,888583 -0,982334 0,356608 0,092872 -0,364935 0,855002
-0,414451 0,892762 -0,973772 0,365460 0,096244 -0,348293 0,857828
-0,394074 0,900471 -0,963359 0,365230 0,109444 -0,323559 0,865990
The first three values describe the position in the room. We scale the values of the z-axis by a factor of 3 to get a faithful translation from the small values we are receiving to the size we need in our rendered libgdx scene on the tablet, and that works fine! The last 4 values describe the rotation of the tracked tablet as a quaternion. This is really new to me, as I have never worked with quaternions before.
libgdx supports rotations in a 3D scene with quaternions, but after handing the values over to the concrete ModelInstance, the rotation is totally unexpected and full of errors. Here is the important code:
// Gets the rotation of the latest received rigid body data
// and stores it in a libgdx Quaternion.
Quaternion rotation = rigidBody.getQuat();
modelInst.transform.rotate(rotation);
...
public Quaternion getQuat() {
    float qx = rigidBody.qx;
    float qy = rigidBody.qy;
    float qz = rigidBody.qy;
    float qw = rigidBody.qw;
    Quaternion rot = new Quaternion(qx, qy, qz, qw);
    return rot;
}
This looks obvious to me so far, but it does not work. When I rotate the tablet, the model also translates and rotates in wrong, unexpected directions, without the tablet itself being translated. I've been looking for a solution for a week now. First I tried different permutations of the parameters that I hand over to the libgdx Quaternion class. The rotation values seem to be in a relative (local) form in a right-handed coordinate system; at least that is what is written in the official NatNet User Guide on page 12 (NatNet User Guide). I think libgdx uses absolute quaternions, but I couldn't figure that out for sure. If that is the case, how could I transform relative quaternions into absolute ones? Or maybe it has something to do with our scaling of the z-axis values?
We appreciate every bit of help. Thank you in advance!

Unity2D Android Touch misbehaving

I am attempting to translate an object depending on the touch position of the user.
The problem is that when I test it out, the object disappears as soon as I drag my finger across my phone screen. I am not entirely sure what's going on with it.
If somebody can guide me, that would be great :)
Thanks.
This is the code:
#pragma strict

function Update () {
    for (var touch : Touch in Input.touches)
    {
        if (touch.phase == TouchPhase.Moved) {
            transform.Translate(0, touch.position.y, 0);
        }
    }
}
The problem is that you're moving the object by touch.position.y. This isn't a point in the world, it's a point on the touch screen. What you'll want is probably Camera.main.ScreenToWorldPoint(touch.position).y, which will give you the in-world position of wherever you've touched.
Of course, Translate takes a vector indicating a distance, not a final destination, so simply sticking the above into it still won't work as you're intending.
Instead maybe try this:
Vector3 EndPos = Camera.main.ScreenToWorldPoint(touch.position);
float speed = 1f;
transform.position = Vector3.Lerp(transform.position, EndPos, speed * Time.deltaTime);
which should move the object towards your finger while at the same time keeping its movements smooth-looking.
You'll want to ask this question at Unity's dedicated Questions/Answers site: http://answers.unity3d.com/index.html
There are very few people who come to Stack Overflow for Unity-specific questions, unless they relate to Android/iOS-specific features.
As for the cause of your problem, touch.position.y is defined in screen space (pixels), whereas transform.Translate expects world units (meters). You can convert between the two using the Camera.ScreenToWorldPoint() method, then create a vector from the camera position and the screen world point. With this vector you can then either intersect some geometry in the scene or simply use it as a point in front of the camera.
http://docs.unity3d.com/Documentation/ScriptReference/Camera.ScreenToWorldPoint.html

rotate an Object, but translate the Object always in its own front Axis

I want to program a racing game for Android. My problem is that when I rotate the car and then translate its position, it doesn't translate in the new direction the car is facing, but always along the X axis of the world.
Here is my wrong code... thank you
gl.glTranslatef(car.position.x, car.position.y, car.position.z);
gl.glRotatef(car.currentAngle, 0, 1, 0);
OpenGL uses matrices to create images.
Matrix multiplication is not commutative, so the order of transformations matters: if you rotate an object and then translate it, it ends up in a different place than if you translate first and then rotate.
A solution is to keep the rotation and the translation separate and apply them in the order you actually want, so you can translate anywhere you like without the rotation interfering with the translation.
To see the effect of the multiplication order on your object, try this: alternate rotating and translating your object about 8 times each. You will notice that your object drifts away (or off screen) instead of rotating in a circle while changing position.
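As an illustration of the order difference in matrix terms, here is a small sketch using android.opengl.Matrix (uncompiled and purely illustrative; it is not from the question's code):
import android.opengl.Matrix;

public class RotateTranslateOrderDemo {
    public static void buildMatrices() {
        float[] rotateThenTranslate = new float[16];
        float[] translateThenRotate = new float[16];

        // Case 1: M = R * T (setRotateM, then translateM).
        // The 5-unit offset is expressed in the rotated (car-local) frame,
        // so "forward" follows the car's heading.
        Matrix.setRotateM(rotateThenTranslate, 0, 90f, 0f, 1f, 0f);
        Matrix.translateM(rotateThenTranslate, 0, 5f, 0f, 0f);

        // Case 2: M = T * R (translateM, then rotateM - the same order as the
        // glTranslatef/glRotatef calls in the question). The car spins in place
        // and the translation stays along the world axes, which matches the
        // behaviour described in the question.
        Matrix.setIdentityM(translateThenRotate, 0);
        Matrix.translateM(translateThenRotate, 0, 5f, 0f, 0f);
        Matrix.rotateM(translateThenRotate, 0, 90f, 0f, 1f, 0f);
    }
}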
OK, I have the solution. All I have to do is translate my car along the new direction vector, which changes with the car's current angle :)
if (accel < 0)
    position.add((float) Math.sin(currentAngle * Math.PI/180)/5, 0, (float) Math.cos(currentAngle * Math.PI/180)/5);
if (accel > 0)
    position.sub((float) Math.sin(currentAngle * Math.PI/180)/5, 0, (float) Math.cos(currentAngle * Math.PI/180)/5);
and in the rendering class
gl.glTranslatef(car.position.x, car.position.y, car.position.z);
gl.glRotatef(car.currentAngle, 0, 1, 0);

Calibrating 3d Accelerometer for 2d Game

I am making a 2D game. The phone is held horizontally and a character moves up/down and left/right to avoid obstacles. The character is controlled by the accelerometer on the phone. Everything works fine if the player doesn't mind that (0,0) (the point where the character stands still) corresponds to the phone being held perfectly flat. In this scenario it's possible to just read the X and Y values directly and use them to control the character. The accelerometer values are between -10 and 10 (they get multiplied by an acceleration constant to determine the movement speed of the character); libgdx is the framework used.
The problem is that holding the phone flat for (0,0) isn't very comfortable, so the idea is to calibrate it so that (0,0) is set to the phone's orientation at a specific point in time.
Which brings me to my question: how would I do this? I tried just reading the current X and Y values and then subtracting them. The problem with that is that when the phone is held at a 90-degree angle, the X offset value is 10 (which is the max value), so it becomes impossible to move in that direction because the value will never go over 10 (10 - 10 = 0). The Z axis has to come into play here somehow, I'm just not sure how.
Thanks for the help. I tried explaining as best I can; I did try searching for a solution, but I don't even know the proper term for what I'm looking for.
An old question, but I am providing the answer here as I couldn't find a good answer for Android or LibGDX anywhere. The code below is based on a solution someone posted for iOS (sorry, I have lost the reference).
You can do this in three parts:
Capture a vector representing the neutral direction:
Vector3 tiltCalibration = new Vector3(
Gdx.input.getAccelerometerX(),
Gdx.input.getAccelerometerY(),
Gdx.input.getAccelerometerZ() );
Transform this vector into a rotation matrix:
public void initTiltControls( Vector3 tiltCalibration ) {
    Vector3.tmp.set( 0, 0, 1 );
    Vector3.tmp2.set( tiltCalibration ).nor();
    Quaternion rotateQuaternion = new Quaternion().setFromCross( Vector3.tmp, Vector3.tmp2 );
    Matrix4 m = new Matrix4( Vector3.Zero, rotateQuaternion, new Vector3( 1f, 1f, 1f ) );
    this.calibrationMatrix = m.inv();
}
Whenever you need inputs from the accelerometer, first run them through the rotation matrix:
public void handleAccelerometerInputs( float x, float y, float z ) {
    Vector3.tmp.set( x, y, z );
    Vector3.tmp.mul( this.calibrationMatrix );
    x = Vector3.tmp.x;
    y = Vector3.tmp.y;
    z = Vector3.tmp.z;
    [use x, y and z here]
    ...
}
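For context, one way to wire these pieces together might look like this (a sketch only; the update method and the calibrateButtonPressed flag are assumptions, not part of the answer above):
// Sketch of an update method in the game class that owns the two methods above.
public void update(boolean calibrateButtonPressed) {
    if (calibrateButtonPressed) {
        // Capture the current resting orientation as the new neutral position.
        Vector3 tiltCalibration = new Vector3(
                Gdx.input.getAccelerometerX(),
                Gdx.input.getAccelerometerY(),
                Gdx.input.getAccelerometerZ());
        initTiltControls(tiltCalibration);
    }

    // Feed each frame's raw readings through the calibration matrix.
    handleAccelerometerInputs(
            Gdx.input.getAccelerometerX(),
            Gdx.input.getAccelerometerY(),
            Gdx.input.getAccelerometerZ());
}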
For a simple solution you can look at the methods:
Gdx.input.getAzimuth(), Gdx.input.getPitch(), Gdx.input.getRoll()
The downside is that those apparently use the internal compass to give your device's rotation relative to North/South/East/West. I only tested this very briefly, so I'm not 100% sure about it. Might be worth a look.
The more complex method involves some trigonometry: basically, you have to calculate the angle the phone is held at from Gdx.input.getAccelerometerX/Y/Z(). It must be something like this (for rotation around the longer side of the phone):
Math.atan(Gdx.input.getAccelerometerX() / Gdx.input.getAccelerometerZ());
For both approaches you then store the initial angle and subtract it again later. You have to watch out for the ranges though: Math.atan(...) only returns values between -Pi/2 and Pi/2, so Math.atan2(...) (which covers -Pi to Pi) may be the safer choice.
Hopefully that'll get you started somehow. You might search for "Accelerometer to pitch/roll/rotation" and similar, too.
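A rough sketch of that angle-offset idea, under the assumptions above (libgdx accelerometer API, tilt around the phone's longer side only; the class and field names are made up for illustration):
import com.badlogic.gdx.Gdx;

public class TiltAngleCalibration {
    private float neutralAngle;  // angle captured at calibration time, in radians

    // Call once to define the current tilt as the "character stands still" position.
    public void calibrate() {
        neutralAngle = currentAngle();
    }

    // Tilt relative to the calibrated neutral position, in radians.
    public float calibratedAngle() {
        return currentAngle() - neutralAngle;
    }

    // Raw tilt around the phone's longer side, using atan2 to get the full range.
    private float currentAngle() {
        return (float) Math.atan2(
                Gdx.input.getAccelerometerX(),
                Gdx.input.getAccelerometerZ());
    }
}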
