I developed a 2D game in Unity. When I first ran the game in the Web Player, the screen was cut off at both ends. Now I am building an Android APK from it, and on Android devices it is also cut off at both ends. We are using the Free Aspect ratio in the Game view. I don't know much about screen sizes, so how can I overcome this problem?
CameraFollow.cs script:
using UnityEngine;
using System.Collections;

public class CameraFollow : MonoBehaviour
{
    public float xMargin = 1f;   // Distance in the x axis the player can move before the camera follows.
    public float yMargin = 1f;   // Distance in the y axis the player can move before the camera follows.
    public float xSmooth = 8f;   // How smoothly the camera catches up with its target movement in the x axis.
    public float ySmooth = 8f;   // How smoothly the camera catches up with its target movement in the y axis.
    public Vector2 maxXAndY;     // The maximum x and y coordinates the camera can have.
    public Vector2 minXAndY;     // The minimum x and y coordinates the camera can have.

    private Transform player;    // Reference to the player's transform.

    void Awake ()
    {
        // Setting up the reference.
        player = GameObject.FindGameObjectWithTag("Player").transform;
        //camera.orthographicSize = 640 / screenwidth * screenheight / 2;
    }

    bool CheckXMargin()
    {
        // Returns true if the distance between the camera and the player in the x axis is greater than the x margin.
        return Mathf.Abs(transform.position.x - player.position.x) > xMargin;
    }

    bool CheckYMargin()
    {
        // Returns true if the distance between the camera and the player in the y axis is greater than the y margin.
        return Mathf.Abs(transform.position.y - player.position.y) > yMargin;
    }

    void FixedUpdate ()
    {
        TrackPlayer();
    }

    void TrackPlayer ()
    {
        // By default the target x and y coordinates of the camera are its current x and y coordinates.
        float targetX = transform.position.x;
        float targetY = transform.position.y;

        // If the player has moved beyond the x margin...
        if (CheckXMargin())
            // ... the target x coordinate should be a Lerp between the camera's current x position and the player's current x position.
            targetX = Mathf.Lerp(transform.position.x, player.position.x, xSmooth * Time.deltaTime);

        // If the player has moved beyond the y margin...
        if (CheckYMargin())
            // ... the target y coordinate should be a Lerp between the camera's current y position and the player's current y position.
            targetY = Mathf.Lerp(transform.position.y, player.position.y, ySmooth * Time.deltaTime);

        // The target x and y coordinates should not be larger than the maximum or smaller than the minimum.
        targetX = Mathf.Clamp(targetX, minXAndY.x, maxXAndY.x);
        targetY = Mathf.Clamp(targetY, minXAndY.y, maxXAndY.y);

        // Set the camera's position to the target position with the same z component.
        transform.position = new Vector3(targetX, targetY, transform.position.z);
    }
}
Since you are making a 2D game, I presume you are using an orthographic camera. Unlike a perspective camera, an orthographic camera shows more of the world on the screen the bigger your resolution is.
You need to normalize the orthographic size to a wanted resolution:
camera.orthographicSize = 640.0f / Screen.width * Screen.height / 2.0f;
In the above code the orthographic size is normalized to a 640-pixel width (note the float literals, which avoid integer division).
Free Aspect is just the current Game view; it does not reflect what will be seen in the final product. I recommend using one of the preset ratios, or, if you're up to date with Unity3D, you can add your own custom resolutions.
So if you're designing for a phone held vertically (GS4 used as an example), you can set it to 1080x1920. This will give you a better representation of what you will see on your device.
Related
I have an object that moves on a terrain with a third-person camera following it. After I move it for some distance in different directions, it begins to shake or vibrate, even if it is not moving and the camera is just rotating around it. This is the movement code of the object:
double& delta = engine.getDeltaTime();
GLfloat velocity = delta * movementSpeed;
glm::vec3 t(glm::vec3(0, 0, 1) * (velocity * 3.0f));
//translate the object matrix before rendering
matrix = glm::translate(matrix, t);
//get the forward vector of the matrix
glm::vec3 f(matrix[2][0], matrix[2][1], matrix[2][2]);
f = glm::normalize(f);
f = f * (velocity * 3.0f);
f = -f;
camera.translate(f);
and this is the camera rotation code:
void Camera::rotate(GLfloat xoffset, GLfloat yoffset, glm::vec3& c, double& delta, GLboolean constrainpitch) {
    xoffset *= (delta * this->rotSpeed);
    yoffset *= (delta * this->rotSpeed);
    pitch += yoffset;
    yaw += xoffset;
    if (constrainpitch) {
        if (pitch >= maxPitch) {
            pitch = maxPitch;
            yoffset = 0;
        }
        if (pitch <= minPitch) {
            pitch = minPitch;
            yoffset = 0;
        }
    }
    glm::quat Qx(glm::angleAxis(glm::radians(yoffset), glm::vec3(1.0f, 0.0f, 0.0f)));
    glm::quat Qy(glm::angleAxis(glm::radians(xoffset), glm::vec3(0.0f, 1.0f, 0.0f)));
    glm::mat4 rotX = glm::mat4_cast(Qx);
    glm::mat4 rotY = glm::mat4_cast(Qy);
    view = glm::translate(view, c);
    view = rotX * view;
    view = view * rotY;
    view = glm::translate(view, -c);
}
float is sometimes not enough.
I use double-precision matrices on the CPU side to avoid such problems, but as you are on Android that might not be possible. For the GPU keep using floats, as there are no 64-bit interpolators yet.
Big numbers are usually the problem.
If your world is big, you are passing big numbers into the equations, multiplying any errors, and only at the final stage is everything translated relative to the camera position. The errors stay multiplied while the numbers get clamped, so the error-to-data ratio gets big.
To reduce this problem, convert all vertices to a coordinate system with its origin at or near your camera before rendering. You can ignore rotations; just offset the positions.
This way you get larger errors only far away from the camera, which with a perspective projection is not really visible anyway. For more info see:
ray and ellipsoid intersection accuracy improvement
Use a cumulative transform matrix instead of Euler angles.
For more info see Understanding 4x4 homogenous transform matrices and the links at the bottom of that answer.
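To illustrate the rebasing idea (the question's code is C++/GLM, but the technique is language-agnostic), here is a minimal Java sketch; the Vec3 class and method names are invented for illustration, and only the origin shift is shown, not the full render path:
// Rebase world-space positions so the camera sits at the origin
// before any single-precision matrix math runs on them.
final class Vec3 {
    float x, y, z;
    Vec3(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    Vec3 minus(Vec3 o) { return new Vec3(x - o.x, y - o.y, z - o.z); }
}

final class CameraRelativeRebase {
    // Subtract the camera position from every vertex; the numbers fed into
    // the transform equations stay small near the camera, so rounding
    // errors grow only far away, where perspective hides them.
    static Vec3[] rebase(Vec3[] worldVertices, Vec3 cameraPosition) {
        Vec3[] relative = new Vec3[worldVertices.length];
        for (int i = 0; i < worldVertices.length; i++) {
            relative[i] = worldVertices[i].minus(cameraPosition);
        }
        return relative;
    }
}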
This sounds like a numerical effect to me. Even small offsets coming from your game object will influence the rotation of the following camera, and with small movements/rotations it looks like a vibrating object/camera.
So what you can do is:
Check whether the movement is above a threshold value before calculating a new rotation for your camera.
When you are above this threshold, do a linear interpolation between the old and the new rotation using a lerp on the quaternion (see this Unity answer to get a better idea of how your code can look: Unity lerp discussion). A sketch of this idea follows below.
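A rough Java sketch of that threshold-plus-lerp idea (the Quat class, the follow hook, and the threshold and blend values are invented for illustration, not taken from the question's engine):
final class Quat {
    float x, y, z, w;
    Quat(float x, float y, float z, float w) { this.x = x; this.y = y; this.z = z; this.w = w; }

    // Normalized linear interpolation: a cheap stand-in for slerp that is
    // stable for the small angle differences blended here.
    static Quat nlerp(Quat a, Quat b, float t) {
        // Interpolate along the shorter arc by flipping one sign if needed.
        float dot = a.x * b.x + a.y * b.y + a.z * b.z + a.w * b.w;
        float s = dot < 0 ? -1f : 1f;
        float x = a.x + t * (s * b.x - a.x);
        float y = a.y + t * (s * b.y - a.y);
        float z = a.z + t * (s * b.z - a.z);
        float w = a.w + t * (s * b.w - a.w);
        float len = (float) Math.sqrt(x * x + y * y + z * z + w * w);
        return new Quat(x / len, y / len, z / len, w / len);
    }
}

class FollowCamera {
    static final float MOVE_THRESHOLD = 1e-4f; // tune to your world scale
    Quat rotation = new Quat(0, 0, 0, 1);

    void follow(float distanceMoved, Quat targetRotation) {
        // Ignore sub-threshold jitter entirely...
        if (distanceMoved < MOVE_THRESHOLD) return;
        // ...and blend toward the new rotation instead of snapping to it.
        rotation = Quat.nlerp(rotation, targetRotation, 0.1f);
    }
}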
I am using the code from http://www.codeproject.com/Articles/146145/Android-3D-Carousel and the following is my current output.
When I rotate the carousel, the size of the front image should be increased.
The front image's size has to be increased only when it comes to the front position.
We can increase the size of the image using getChildStaticTransformation in the Carousel class, but I don't know how to do it.
To just increase the size of the selected image, change the Z coordinate in Carousel.java:
private void Calculate3DPosition(CarouselItem child, int diameter, float angleOffset) {
    angleOffset = angleOffset * (float) (Math.PI / 180.0f);
    float x = -(float) (diameter / 2 * android.util.FloatMath.sin(angleOffset)) + diameter / 2 - child.getWidth() / 2;
    float z = diameter / 2 * (1.0f - (float) android.util.FloatMath.cos(angleOffset));
    float y = -getHeight() / 2 + (float) (z * android.util.FloatMath.sin(mTheta));
    child.setItemX(x);
    child.setItemZ(z - 200); // subtract as much as you want to bring the item closer on the Z axis
    child.setItemY(y);
}
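For the getChildStaticTransformation route the question asks about, here is a hedged sketch of what the override could look like. This is not from the CodeProject source: the 1.2f scale factor and the use of getSelectedView() are assumptions, the view group must have called setStaticTransformationsEnabled(true), and the types are android.view.View, android.view.animation.Transformation and android.graphics.Matrix:
@Override
protected boolean getChildStaticTransformation(View child, Transformation t) {
    t.clear();
    t.setTransformationType(Transformation.TYPE_MATRIX);
    // Scale up only the child currently in the front (selected) position.
    if (child == getSelectedView()) {
        Matrix m = t.getMatrix();
        // 1.2f is an example factor; pivot on the child's center.
        m.postScale(1.2f, 1.2f, child.getWidth() / 2f, child.getHeight() / 2f);
    }
    return true;
}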
I'm making an application to track a vehicle based on GPS coordinates.
I created a SurfaceView to draw the field, the vehicle, and its path (route).
The result looked like this:
The black dots represent the incoming GPS coordinates, and the blue rectangles are the area covered by the path traveled (the width of the path is configurable).
The way I'm drawing this covered area with blue rectangles is what my question is about.
With that, I need to handle some situations.
I need to calculate the field's rotation angle so that the path always gets left behind. (completed)
I need to calculate the rotation angle of each rectangle so that they face towards the vehicle. (completed)
In the future I will need to:
Detect when the vehicle passes twice over the same place (based on the path traveled).
Calculate the total area (m²) traveled by the vehicle.
I would like some tips for drawing this path.
My code:
public void draw(Canvas canvas) {
    Log.d(getClass().getSimpleName(), "draw");
    canvas.save();
    // translate the canvas to the vehicle position
    canvas.translate((float) center.cartesian(0), (float) center.cartesian(1));
    float fieldRotation = 0;
    if (trackerHistory.size() > 1) {
        /*
         Before drawing the path, take only the last position and find the rotation angle of the field.
        */
        Vector lastPosition = new Vector(convertToTerrainCoordinates(lastPoint));
        Vector preLastPosition = new Vector(convertToTerrainCoordinates(preLastPoint));
        float shift = (float) lastPosition.distanceTo(preLastPosition);
        /*
         Treating the last coordinate as a triangle, 'preLastPosition' holds the legs, while 'shift' is the hypotenuse.
        */
        // If the Y offset is negative, then the opposite side is the Y displacement
        if (preLastPosition.cartesian(1) < 0) {
            // dividing the opposite side by the hypotenuse gives the sine of the angle that must be rotated
            double sin = preLastPosition.cartesian(1) / shift;
            // when Y is negative, it is necessary to add or subtract 90 degrees depending on the value of X.
            // Math.asin() calculates the arc (in radians) for the previously calculated sine,
            // and Math.toDegrees() converts the radians to degrees.
            if (preLastPosition.cartesian(0) < 0) {
                fieldRotation = (float) (Math.toDegrees(Math.asin(sin)) - 90d);
            } else {
                fieldRotation = (float) (Math.abs(Math.toDegrees(Math.asin(sin))) + 90d);
            }
        }
        // if not, the opposite side is the X offset
        else {
            // dividing the opposite side by the hypotenuse gives the sine of the angle that must be rotated.
            double senAngulo = preLastPosition.cartesian(0) / shift;
            // Math.asin() calculates the arc (in radians) for the previously calculated sine,
            // and Math.toDegrees() converts the radians to degrees.
            fieldRotation = (float) Math.toDegrees(Math.asin(senAngulo));
        }
    }
    final float dpiTrackerWidth = Navigator.meterToDpi(trackerWidth); // width of rect
    final Path positionHistory = new Path(); // to draw the route
    final Path circle = new Path(); // to draw the positions
    /*
     Iterate over the historical positions and draw the path
    */
    for (int i = 1; i < trackerHistory.size(); i++) {
        Vector currentPosition = new Vector(convertToTerrainCoordinates(trackerHistory.get(i))); // vector with X and Y position
        Vector lastPosition = new Vector(convertToTerrainCoordinates(trackerHistory.get(i - 1))); // vector with X and Y position
        circle.addCircle((float) currentPosition.cartesian(0), (float) currentPosition.cartesian(1), 3, Path.Direction.CW);
        circle.addCircle((float) lastPosition.cartesian(0), (float) lastPosition.cartesian(1), 3, Path.Direction.CW);
        if (isInsideOfScreen(currentPosition.cartesian(0), currentPosition.cartesian(1)) ||
                isInsideOfScreen(lastPosition.cartesian(0), lastPosition.cartesian(1))) {
            /*
             Calculate the angle from the triangle sides
            */
            float shift = (float) currentPosition.distanceTo(lastPosition);
            Vector dif = lastPosition.minus(currentPosition);
            float sin = (float) (dif.cartesian(0) / shift);
            float degrees = (float) Math.toDegrees(Math.asin(sin));
            /*
             Create a Rect to draw the displacement between two coordinates
            */
            RectF rect = new RectF();
            rect.left = (float) (currentPosition.cartesian(0) - (dpiTrackerWidth / 2));
            rect.right = rect.left + dpiTrackerWidth;
            rect.top = (float) currentPosition.cartesian(1);
            rect.bottom = rect.top - shift;
            Path p = new Path();
            Matrix m = new Matrix();
            p.addRect(rect, Path.Direction.CCW);
            m.postRotate(-degrees, (float) currentPosition.cartesian(0), (float) currentPosition.cartesian(1));
            p.transform(m);
            positionHistory.addPath(p);
        }
    }
    // rotate the map so that the route points downward
    canvas.rotate(fieldRotation);
    canvas.drawPath(positionHistory, paint);
    canvas.drawPath(circle, paint2);
    canvas.restore();
}
My goal is to have something like this application: https://play.google.com/store/apps/details?id=hu.zbertok.machineryguide (but only in 2D for now)
EDIT:
To clarify my doubts a bit more:
I don't have much experience with this. I would like a better way to draw the path; with rectangles it did not come out very well. Note that in the curves there are some empty spaces.
Another point is the rotation of the rectangles: I'm rotating them at drawing time. I believe this will make it difficult to detect overlaps.
I believe I need math help with the rotation of the objects and with overlap detection, and also help with drawing the path as a filled shape.
After some research time, I came to a successful outcome. I will comment on my reasoning and on how the solution turned out.
As I explained in the question, along the way I have the coordinates traveled by the vehicle, and also a setting for the width of the path to be drawn.
The LibGDX library has a number of features ready, such as an implementation of an orthographic camera to work with positioning, rotation, etc.
With LibGDX I converted the GPS coordinates into the side points of the traveled path. Like this:
The next challenge was to fill the traveled path. First I tried using rectangles, but the result was as shown in my question.
So the solution was to build triangles using the sides of the path as vertices. Like this:
Then simply fill in the triangles. Like this:
Finally, using the stencil buffer, I set up OpenGL to highlight overlaps. Like this:
Other issues fixed (a sketch of the math follows below):
To calculate the covered area, simply sum the areas of the triangles that exist along the path.
To detect overlapping, just check whether the current position of the vehicle is inside an already-drawn triangle.
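A minimal Java sketch of both checks, assuming plain float coordinates (the class and method names are mine, not from the sample source):
final class TriangleMath {
    // Signed double area of triangle (a, b, c); positive if counter-clockwise.
    static float cross(float ax, float ay, float bx, float by, float cx, float cy) {
        return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
    }

    // Area of one triangle; sum this over all triangles of the path strip.
    static float area(float ax, float ay, float bx, float by, float cx, float cy) {
        return Math.abs(cross(ax, ay, bx, by, cx, cy)) / 2f;
    }

    // The point is inside when it lies on the same side of all three edges.
    static boolean contains(float px, float py,
                            float ax, float ay, float bx, float by, float cx, float cy) {
        float d1 = cross(ax, ay, bx, by, px, py);
        float d2 = cross(bx, by, cx, cy, px, py);
        float d3 = cross(cx, cy, ax, ay, px, py);
        boolean hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
        boolean hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
        return !(hasNeg && hasPos);
    }
}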
Thanks to:
AlexWien for the attention and for his time.
Conner Anderson for the LibGDX videos.
And a special thanks to Luis Eduardo for the knowledge; he helped me a lot.
The sample source code.
Usually such a path is drawn using a "path" facility of the graphics library.
With it you can create a polyline and give it a line width.
You further specify how corners are filled (BEVEL_JOIN, MITER_JOIN).
The main question is whether the path is drawn while driving or afterwards.
Afterwards is no problem.
Drawing while driving might be a bit tricky if you want to avoid redrawing the path each second.
When using a Path with moveTo and lineTo to create a polyline, you can set a line width, and the graphics library will do all of that for you.
Then there will be no gaps, since it is a polyline.
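With Android's android.graphics API, that idea looks roughly like this (a sketch only: points, dpiTrackerWidth, and canvas stand in for the question's own data, and Paint/Path are assumed imported):
Paint pathPaint = new Paint();
pathPaint.setStyle(Paint.Style.STROKE);
pathPaint.setStrokeWidth(dpiTrackerWidth);   // the configurable path width
pathPaint.setStrokeJoin(Paint.Join.ROUND);   // fills the corner gaps between segments
pathPaint.setStrokeCap(Paint.Cap.ROUND);
pathPaint.setAntiAlias(true);

Path route = new Path();
route.moveTo(points.get(0).x, points.get(0).y);
for (int i = 1; i < points.size(); i++) {
    route.lineTo(points.get(i).x, points.get(i).y);
}
canvas.drawPath(route, pathPaint);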
The official development documentation suggests the following way of obtaining the quaternion from the 3D rotation rate vector (wx, wy, wz).
// Create a constant to convert nanoseconds to seconds.
private static final float NS2S = 1.0f / 1000000000.0f;
// Maximum allowable margin of error when normalizing the axis (choose for your application).
private static final float EPSILON = 0.000000001f;
private final float[] deltaRotationVector = new float[4];
private float timestamp;

public void onSensorChanged(SensorEvent event) {
    // This timestep's delta rotation is to be multiplied by the current rotation
    // after computing it from the gyro sample data.
    if (timestamp != 0) {
        final float dT = (event.timestamp - timestamp) * NS2S;
        // Axis of the rotation sample, not normalized yet.
        float axisX = event.values[0];
        float axisY = event.values[1];
        float axisZ = event.values[2];
        // Calculate the angular speed of the sample
        float omegaMagnitude = (float) Math.sqrt(axisX * axisX + axisY * axisY + axisZ * axisZ);
        // Normalize the rotation vector if it's big enough to get the axis
        // (that is, EPSILON should represent your maximum allowable margin of error)
        if (omegaMagnitude > EPSILON) {
            axisX /= omegaMagnitude;
            axisY /= omegaMagnitude;
            axisZ /= omegaMagnitude;
        }
        // Integrate around this axis with the angular speed by the timestep
        // in order to get a delta rotation from this sample over the timestep.
        // We will convert this axis-angle representation of the delta rotation
        // into a quaternion before turning it into the rotation matrix.
        float thetaOverTwo = omegaMagnitude * dT / 2.0f;
        float sinThetaOverTwo = (float) Math.sin(thetaOverTwo);
        float cosThetaOverTwo = (float) Math.cos(thetaOverTwo);
        deltaRotationVector[0] = sinThetaOverTwo * axisX;
        deltaRotationVector[1] = sinThetaOverTwo * axisY;
        deltaRotationVector[2] = sinThetaOverTwo * axisZ;
        deltaRotationVector[3] = cosThetaOverTwo;
    }
    timestamp = event.timestamp;
    float[] deltaRotationMatrix = new float[9];
    SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
    // User code should concatenate the delta rotation we computed with the current rotation
    // in order to get the updated rotation.
    // rotationCurrent = rotationCurrent * deltaRotationMatrix;
}
My question is:
It is quite different from the acceleration case, where computing the resultant acceleration from the accelerations ALONG the three axes makes sense.
I am really confused about why the resultant rotation rate can be computed from the component rotation rates AROUND the three axes. It does not make sense to me.
Why would this method - finding the composite rotation rate magnitude - even work?
Since your title does not really match your questions, I'm trying to answer as much as I can.
Gyroscopes don't give an absolute orientation (as the ROTATION_VECTOR does) but only rotational velocities around the axes they are built to 'rotate' around. This is due to the design and construction of a gyroscope. Imagine the construction below: the golden thing is rotating, and due to the laws of physics it does not want to change its rotation. Now you can rotate the frame and measure these rotations.
Now if you want to obtain something like a 'current rotational state' from the gyroscope, you have to start with an initial rotation, call it q0, and constantly add the tiny rotational differences that the gyroscope measures around the axes to it: q1 = q0 + gyro0, q2 = q1 + gyro1, ...
In other words: the gyroscope gives you the difference it has rotated around the three constructed axes, so you are not composing absolute values but small deltas.
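This also answers why taking the magnitude of (wx, wy, wz) works: over a small timestep the three angular rates combine into one axis-angle rotation, because infinitesimal rotations commute to first order. With $\boldsymbol{\omega} = (\omega_x, \omega_y, \omega_z)$:
$$\theta = \lVert\boldsymbol{\omega}\rVert \, dT, \qquad \hat{\mathbf{n}} = \frac{\boldsymbol{\omega}}{\lVert\boldsymbol{\omega}\rVert}, \qquad \Delta q = \Bigl(\hat{\mathbf{n}} \sin\tfrac{\theta}{2},\ \cos\tfrac{\theta}{2}\Bigr)$$
which is exactly what the documentation code above computes: omegaMagnitude is $\lVert\boldsymbol{\omega}\rVert$ and thetaOverTwo is $\theta/2$.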
Now this is very general and leaves a couple of questions unanswered:
Where do I get an initial position from? Answer: have a look at the Rotation Vector Sensor - you can use the quaternion obtained from there as an initialisation.
How do I 'sum' q and gyro?
Depending on the current representation of a rotation: If you use a rotation matrix, a simple matrix multiplication should do the job, as suggested in the comments (note that this matrix-multiplication implementation is not efficient!):
/**
 * Performs naive n^3 matrix multiplication and returns C = A * B
 *
 * @param A Matrix in array form (e.g. 3x3 => 9 values)
 * @param B Matrix in array form (e.g. 3x3 => 9 values)
 * @return A * B
 */
public float[] naivMatrixMultiply(float[] B, float[] A) {
    int mA, nA, mB, nB;
    mA = nA = (int) Math.sqrt(A.length);
    mB = nB = (int) Math.sqrt(B.length);
    if (nA != mB)
        throw new RuntimeException("Illegal matrix dimensions.");
    float[] C = new float[mA * nB];
    for (int i = 0; i < mA; i++)
        for (int j = 0; j < nB; j++)
            for (int k = 0; k < nA; k++)
                C[i + nA * j] += (A[i + nA * k] * B[k + nB * j]);
    return C;
}
To use this method, imagine that mRotationMatrix holds the current state; these two lines do the job:
SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
mRotationMatrix = naivMatrixMultiply(mRotationMatrix, deltaRotationMatrix);
// Apply rotation matrix in OpenGL
gl.glMultMatrixf(mRotationMatrix, 0);
If you choose to use quaternions, imagine again that mQuaternion contains the current state:
// Perform Quaternion multiplication
mQuaternion.multiplyByQuat(deltaRotationVector);
// Apply Quaternion in OpenGL
gl.glRotatef((float) (2.0f * Math.acos(mQuaternion.getW()) * 180.0f / Math.PI),mQuaternion.getX(),mQuaternion.getY(), mQuaternion.getZ());
Quaternion multiplication is described here - equation (23). Make sure you apply the multiplication correctly, since it is not commutative! A short sketch follows below.
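For reference, a Hamilton-product sketch in plain Java (the [x, y, z, w] layout matches deltaRotationVector above; this is an illustration, not the answer's actual Quaternion class):
// Order matters: result = a * b applies the rotation b first, then a.
static float[] multiply(float[] a, float[] b) {
    float ax = a[0], ay = a[1], az = a[2], aw = a[3];
    float bx = b[0], by = b[1], bz = b[2], bw = b[3];
    return new float[] {
        aw * bx + ax * bw + ay * bz - az * by,  // x
        aw * by - ax * bz + ay * bw + az * bx,  // y
        aw * bz + ax * by - ay * bx + az * bw,  // z
        aw * bw - ax * bx - ay * by - az * bz   // w
    };
}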
If you simply want to know the rotation of your device (I assume this is what you ultimately want), I strongly recommend the ROTATION_VECTOR sensor. On the other hand, gyroscopes are quite precise for measuring rotational velocity and have a very good dynamic response, but they suffer from drift and don't give you an absolute orientation (relative to magnetic north or to gravity).
UPDATE: If you want to see a full example, you can download the source-code for a simple demo-app from https://bitbucket.org/apacha/sensor-fusion-demo.
Makes sense to me. Acceleration sensors typically work by having some measurable quantity change when force is applied along the axis being measured. E.g., if gravity is pulling down on the sensor measuring that axis, it conducts electricity better. So now you can tell how hard gravity, or acceleration in some direction, is pulling. Easy.
Meanwhile, gyros are things that spin (OK, or bounce back and forth in a straight line like a tweaked diving board). The gyro is spinning; now you spin, and the gyro will look like it is spinning faster or slower depending on the direction you spun. Or if you try to move it, it will resist and try to keep going the way it was going. So you just get a rotation change out of measuring it. Then you have to figure out the force from the change by integrating all the changes over the amount of time.
Typically none of these things is a single sensor either. They are often three different sensors arranged perpendicular to each other, each measuring a different axis. Sometimes all the sensors are on the same chip, but they are still different things on the chip, measured separately.
I'm using libgdx to develop a basic 3D game for Android, and I'm having difficulty properly orienting the camera given the three rotation angles provided by the compass (azimuth - rotation about Z, roll - rotation about Y, pitch - rotation about X). I've had some slight success with the following code, in that I can properly aim the virtual camera down the Z-axis and X-axis as I expected. (Angles are in degrees, [-180, 180].)
camera.direction.x = 0;
camera.direction.y = 0;
camera.direction.z = 1;
camera.up.x = -1;
camera.up.y = 0;
camera.up.z = 0;
camera.rotate(azimuth,0,0,1);
camera.rotate(roll,0,1,0);
camera.rotate(pitch,1,0,0);
I've also had some success with this, but it does not orient the camera's up-vector. (Angles have been converted to radians in this version)
float x,y,z;
roll = (float) (roll + Math.PI/2.0);
x = (float) (Math.cos(azimuth) * Math.cos(roll));
y = (float) (Math.sin(azimuth) * Math.cos(roll));
z = (float) (Math.sin(roll));
Vector3 lookat = new Vector3(x,y,z);
camera.lookAt(lookat.x, lookat.y, lookat.z);
Can someone shed some light on how to properly orient the virtual camera from these three angles?
Also, I'm trying to have the phone in landscape mode such that the top of the phone is to the left and the bottom is to the right. Hence, in the camera's default orientation (all rotations at 0, top of the phone aimed north), the camera should aim toward the ground (positive Z) with its up vector aiming east (negative X).
After coding other things for a while, I eventually reached a point where I was trying to separate the rendering, simulation, and input. Because of that, I came up with the following solution, which works for me. I haven't tested it rigorously, but it appears to do what I want (ignoring camera roll).
On the android part of the program, I needed to set the orientation to landscape mode:
<activity android:name=".MySuperAwesomeApplication"
          android:label="@string/app_name"
          android:screenOrientation="landscape">
I created a player class to store yaw, pitch, roll, and position
public class Player {
    public final Vector3 position = new Vector3(0, 1.5f, 0);
    /** Angle left or right of the vertical */
    public float yaw = 0.0f;
    /** Angle above or below the horizon */
    public float pitch = 0.0f;
    /** Angle about the direction as defined by yaw and pitch */
    public float roll = 0.0f;
}
And then when I update the player based on input I do the following:
player.yaw = -Gdx.input.getAzimuth();
player.pitch = -Gdx.input.getRoll()-90;
player.roll = -Gdx.input.getPitch();
Note that pitch maps to input.roll and roll maps to input.pitch. Not sure why, but it works for me. Finally update the camera:
camera.direction.x = 0;
camera.direction.y = 0;
camera.direction.z = 1;
camera.up.x = 0;
camera.up.y = 1;
camera.up.z = 0;
camera.position.x = 0;
camera.position.y = 0;
camera.position.z = 0;
camera.update();
// The world up vector is <0,1,0>
camera.rotate(player.yaw,0,1,0);
Vector3 pivot = camera.direction.cpy().crs(camera.up);
camera.rotate(player.pitch, pivot.x,pivot.y,pivot.z);
camera.rotate(player.roll, camera.direction.x, camera.direction.y, camera.direction.z);
camera.translate(player.position.x, player.position.y, player.position.z);
camera.update();
EDIT: Added camera roll to the code. For me on my Droid 2, roll appears to only take values in [-90, 90], such that if you rotate past -90 or 90 the values start changing back toward 0.