So I'm creating a game for Android that uses the roll of the device to set the position of the main character. The position is updated every time the .onSensorChanged(event) method gets run. The problem is, even when I run my app with the phone lying flat on a tabletop, the values change significantly (by about two degrees). The data also doesn't seem to be very precise, as differences in angle come in increments of about 0.4 degrees per change. Using
Log.e(TAG, "roll: " + event.values[2]);
Sequential degree output looks like this:
roll: 2.265
roll: 2.265
roll: 2.265
roll: 1.843
roll: 2.265
roll: 2.265
roll: 2.75
roll: 2.265
I've also implemented the algorithm below to limit my character's movement on the screen, but it isn't enough, and it severely limits the speed and responsiveness of the character's movement relative to the current roll data (I would like the character's placement to track the roll as close to 1:1 as possible).
public void onSensorChanged(SensorEvent event)
{
    if (event.sensor.getType() == Sensor.TYPE_ORIENTATION)
    {
        float newX = -(int) event.values[2] * CCDirector.sharedDirector().displaySize().width / 10.0f +
                CCDirector.sharedDirector().displaySize().width / 2.0f;
        sam.updatePosition(newX);
        Log.e(TAG, "roll: " + event.values[2]);
    }
}
public void updatePosition(float newX)
{
    float winSizeX = CCDirector.sharedDirector().displaySize().width;
    float contentWidth = this.sprite.getContentSize().width;
    // tries to eliminate jumpy behaviour by the character by limiting his speed
    double tolerance = 0.01;
    if (Math.abs(newX - this.sprite.getPosition().x) > winSizeX * tolerance)
        newX = this.sprite.getPosition().x - ((this.sprite.getPosition().x - newX) * 0.1f);
    this.sprite.setPosition(newX, this.sprite.getPosition().y);
}
I'm wondering if this is normal or if it's the phone I'm testing on (a Sony Xperia Play).
Thanks for any input.
How about a moving average to smooth your data on the fly?
It will lag a little, but the lag should be unnoticeable.
On the other hand, I would not use Euler angles (such as roll).
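For illustration, a minimal moving-average sketch might look like the following (the window size and the smoothRoll helper are hypothetical; the smoothed value would then replace event.values[2] in the newX calculation above):

// Simple moving average over the last WINDOW roll samples (hypothetical helper).
// A larger WINDOW smooths more but lags more.
private static final int WINDOW = 10;
private final float[] rollSamples = new float[WINDOW];
private int sampleIndex = 0;
private int sampleCount = 0;

private float smoothRoll(float rawRoll) {
    rollSamples[sampleIndex] = rawRoll;
    sampleIndex = (sampleIndex + 1) % WINDOW;
    if (sampleCount < WINDOW) {
        sampleCount++;
    }
    float sum = 0f;
    for (int i = 0; i < sampleCount; i++) {
        sum += rollSamples[i];
    }
    return sum / sampleCount;
}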
I want to be able to detect a situation where the phone has an acceleration towards the ground (probably means that the Gravity sensor has to be used here also).
I have read a lot about this topic in the Android docs, about high- and low-pass filters, and in other posts, and right now what I have is a code sample that gets the acceleration along the X, Y and Z axes after stripping out gravity:
if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
    final float alpha = 0.8f;

    // Low-pass filter: isolate the gravity component
    gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0];
    gravity[1] = alpha * gravity[1] + (1 - alpha) * event.values[1];
    gravity[2] = alpha * gravity[2] + (1 - alpha) * event.values[2];

    // High-pass filter: remove gravity to get linear acceleration
    linear_acceleration[0] = event.values[0] - gravity[0];
    linear_acceleration[1] = event.values[1] - gravity[1];
    linear_acceleration[2] = event.values[2] - gravity[2];
}
So linear_acceleration is supposedly the acceleration along the X, Y and Z axes without gravity.
This is all nice, but the obvious problem is that it depends on how the user holds the phone. For example, in the elevator: if they hold it flat, parallel to the ground, the Z axis will change; if they hold it upright, the Y axis will change, etc.
So, for example, if the user holds the phone diagonally, the acceleration will be "divided" between the different axes, and some math, taking the direction of gravity into account, will be needed to calculate the actual acceleration in that direction.
Correct me if I am wrong?
Is there a reliable way to detect downward (towards the earth) acceleration? Maybe using other sensors, like the gyroscope?
BTW, about the TYPE_LINEAR_ACCELERATION type: I read this answer, which says it's actually not very accurate.
Use some basic physics. Acceleration is a vector. The magnitude of a vector v is always equal to (v.v)^0.5, the square root of the dot product, or in simpler terms (x^2 + y^2 + z^2)^0.5. That will tell you the amount of acceleration, but not whether it's towards or away from the earth.
If you need to know whether it's going towards or away from the earth, you can combine that with data from SensorManager.getOrientation. You may need to do that before the user enters the elevator, though: the orientation code uses gravity as one of its inputs, so it may be thrown off if you try to use it inside a moving elevator. You'd need to test it out.
If you need to break it down into acceleration along the earth's x, y and z axes, it's simple geometry: take the angles from the orientation result and use trigonometry to convert between axes. If you don't know the formulas, you need to read up on trig a bit, or you'll get them wrong even if I give them to you.
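As a rough sketch of the first two points only (the names accelValues and magnetValues are assumptions standing in for the latest accelerometer and magnetometer readings; this is not a complete solution, just the magnitude plus the standard SensorManager.getOrientation call mentioned above):

// Magnitude of the acceleration vector: sqrt(x^2 + y^2 + z^2)
float x = event.values[0], y = event.values[1], z = event.values[2];
double magnitude = Math.sqrt(x * x + y * y + z * z);

// Orientation from the accelerometer + magnetometer readings
float[] rotationMatrix = new float[9];
float[] orientation = new float[3]; // azimuth, pitch, roll (radians)
if (SensorManager.getRotationMatrix(rotationMatrix, null, accelValues, magnetValues)) {
    SensorManager.getOrientation(rotationMatrix, orientation);
    // orientation[1] (pitch) and orientation[2] (roll) describe how the device
    // is tilted relative to the ground, which is the extra information needed
    // to decide whether the measured acceleration points towards the earth.
}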
I also wanted to be able to measure just vertical movement. This is how I did it, and it worked for me. (First time posting on this site, so apologies if the formatting is off.)
Use two different Android sensors: TYPE_LINEAR_ACCELERATION and TYPE_GRAVITY.
Linear acceleration will give you the acceleration along the phone's X, Y and Z axes, and Gravity will do the same, but just for gravity. You know that the magnitude of the gravity vector should be about 9.8 m/s², but it will be split between the X, Y and Z components depending on the phone's orientation.
I won't go into the math too much, but the following will give you vertical acceleration without gravity. If you want to understand it a bit more, run through some values as if the phone were held vertically, then horizontally; it works even if the phone is held at an angle.
vertical acceleration = (LinearAccelX * GravityX / 9.8) + (LinearAccelY * GravityY / 9.8) + (LinearAccelZ * GravityZ / 9.8)
See code below (irrelevant parts removed):
public class MainActivity extends AppCompatActivity implements SensorEventListener {

    SensorManager sm;
    Sensor linearaccelerometer;
    Sensor gravity;

    double Xaccel;
    double Yaccel;
    double Zaccel;
    double gravityX;
    double gravityY;
    double gravityZ;
    double verticalAccel;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        sm = (SensorManager) getSystemService(SENSOR_SERVICE);
        linearaccelerometer = sm.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION);
        gravity = sm.getDefaultSensor(Sensor.TYPE_GRAVITY);
        sm.registerListener(this, linearaccelerometer, SensorManager.SENSOR_DELAY_NORMAL);
        sm.registerListener(this, gravity, SensorManager.SENSOR_DELAY_NORMAL);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        Sensor sensor = event.sensor;
        if (sensor.getType() == Sensor.TYPE_LINEAR_ACCELERATION) {
            Xaccel = (double) event.values[0];
            Yaccel = (double) event.values[1];
            Zaccel = (double) event.values[2];
        }
        if (sensor.getType() == Sensor.TYPE_GRAVITY) {
            gravityX = (double) event.values[0];
            gravityY = (double) event.values[1];
            gravityZ = (double) event.values[2];
        }
        // Project the linear acceleration onto the gravity direction (dot product / 9.8)
        verticalAccel = (Xaccel * gravityX / 9.8) + (Yaccel * gravityY / 9.8) + (Zaccel * gravityZ / 9.8);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // Not used
    }
}
I am developing a simple game in which a character flies when you tap/click the screen; keep tapping and the character keeps flying (somewhat similar to Flappy Bird and Jetpack Joyride). However, the movement is not smooth at all, unlike in Jetpack Joyride.
Here is sample of my code.
Variable initialization
maxSpeedLimit = spriteHeight/10;
speed = maxSpeedLimit/2; //half of the max speed
touch event
public void onTapOrClick(int action) {
    if (action == UP) {
        sprite.up = true;
    } else {
        sprite.up = false;
    }
}
Sprite update called from game loop
public void update() {
    if (up) {
        y -= speed; // fly up
    } else {
        y += speed; // fly down
    }
    if (speed < maxSpeedLimit) {
        speed++; // maybe a cheap way to add velocity/acceleration
    }
}
I think speed++ is not a smooth way to increase the speed; maybe adding some time-related variable to the increment would improve it. Moreover, adding gravity would make it more realistic, but I have no idea how to do that. I have read a few blogs, but first, I am not able to search with the correct keywords, and second, they are hard to understand because they contain platform-specific code. Please help.
I am making this game on Android, but code in any language is accepted (HTML5, JavaScript, Android, Flash or anything else).
Q: How do I add acceleration and gravity to an object (sprite) that flies when the user taps or clicks and falls on release?
Something similar to Jetpack Joyride (only up-and-down movement).
UPDATED
After @scottt's advice I have implemented the dY += gravity + flapping approach, and I can now feel the gravity; however, there are 2 issues.
My screen height is 480, and the sprite immediately hits the upper (y=0) and lower (y=480) boundaries, because, I think, the speed of the sprite keeps increasing.
When it touches the ground, it seems very heavy and takes a long time to lift the sprite back up into the air.
Somehow there should be a limit on dY, which is constantly being added to the y location.
Here is the update code.
int downSpeed = 1;  // gravity (constant downward acceleration)
int upSpeed = -2;   // flap acceleration (negative = up in screen coordinates)
int dy = 0;

private void update() {
    if (flapping) {
        upSpeed = -2; // flying: apply upward acceleration
    } else {
        upSpeed = 0;
    }

    dy += downSpeed + upSpeed;
    if (dy < -10) {
        dy = -10; // limit the rise speed
    } else if (dy > 8) {
        dy = 8;   // limit the fall (gravity) speed
    }

    y += dy; // apply the vertical speed to the y location

    if (y > GAME_HEIGHT - sprite.getHeight()) {
        y = GAME_HEIGHT - sprite.getHeight(); // reset y if it touches the ground
        dy = 0; // reset the speed, otherwise it feels very heavy when rising again
    } else if (y < 0) {
        y = 0;  // reset y if it touches the upper limit
        dy = 0; // reset the speed, otherwise it takes a long time to fall (dy would be negative)
    }
}
Gravity is just a downward acceleration. An acceleration is in turn just a change in velocity (directed speed). For the following discussion, I make the following assumptions to keep things simpler:
your horizontal speed is constant
input tapping is called 'flapping'
gravity behaves normally
a simplistic, rather than scientific, approach and vocabulary are OK for our purposes
For each pass through the update loop, all of the various accelerations must be summed and the total added to the current speed. In your situation, there are 2 possible accelerations: gravity and flapping. Gravity is constant and negative (it works downward), while flapping only occurs during tapping and is positive (upward).
Let's set gravity to -10 pixels per loop, tapping to be +25 pixels per loop, and initial height to 500. Some initial definitions are:
static final int gravity = -10; // constant downward acceleration
static final int flapping = 25; // upward acceleration whenever isFlapping is true
Boolean isFlapping = false; // Is the bird flapping
int dY = 0; // current vertical speed
int y = 500; // current vertical position
Each time through the loop, the speed calculation (where the flapping term is only applied while isFlapping is true, and contributes 0 otherwise) would be:
dY += gravity + flapping;
So the first time through (with no flapping), the speed calculation would be dY = 0 + (-10) + 0 = -10. The second time, dY = -10 + (-10) + 0 = -20. The 5th time, dY = -40 + (-10) + 0 = -50. Each time through, the downward speed is 10 more than the time before.
The height is simple. Each time through, the height changes by the current vertical speed. So:
y += dY;
So the first time through, the height would be y = 500 + (-10) = 490. The second time, y = 490 + (-20) = 470. And the 5th time, y = 400 + (-50) = 350. Because the rate of falling increases each time through, the bird will plummet faster and faster until splat!
That's where flapping comes in. Each time through the loop where flapping is occurring, a +25 is applied to the dY calculation. So let's assume the bird starts flapping in the 6th iteration. The dY calculation would be dY = -50 + (-10) + 25 = -35, and the height would be y = 350 + (-35) = 315. The next time through would give dY = -35 + (-10) + 25 = -20, and the height would be y = 315 + (-20) = 295. Still falling, but more slowly. The time after that, dY = -20 + (-10) + 25 = -5 and y = 295 + (-5) = 290. The time after that finally shows a gain in height: dY = -5 + (-10) + 25 = 10 and y = 290 + 10 = 300.
All that said, you'll definitely need to play with the numbers until you get a satisfying result.
TLDR: You don't want to change the height directly using gravity and flapping. Instead you want to use gravity and flapping to calculate the speed for each iteration and then use that to adjust the height.
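For concreteness, here is a minimal sketch of what the update loop described above might look like, using the names defined earlier (gravity, flapping, isFlapping, dY, y) and applying the flapping term only while isFlapping is true:

private void update() {
    // Sum this frame's accelerations: gravity always, flapping only while tapping.
    dY += gravity + (isFlapping ? flapping : 0);

    // The height changes by the current vertical speed.
    y += dY;

    // In this coordinate system y grows upwards, so y <= 0 means the bird hit the ground.
    if (y <= 0) {
        y = 0;
        dY = 0; // splat: stop the fall
    }
}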
The official development documentation suggests the following way of obtaining the quaternion from the 3D rotation rate vector (wx, wy, wz).
// Create a constant to convert nanoseconds to seconds.
private static final float NS2S = 1.0f / 1000000000.0f;
private final float[] deltaRotationVector = new float[4];
private float timestamp;

public void onSensorChanged(SensorEvent event) {
    // This timestep's delta rotation to be multiplied by the current rotation
    // after computing it from the gyro sample data.
    if (timestamp != 0) {
        final float dT = (event.timestamp - timestamp) * NS2S;
        // Axis of the rotation sample, not normalized yet.
        float axisX = event.values[0];
        float axisY = event.values[1];
        float axisZ = event.values[2];

        // Calculate the angular speed of the sample
        float omegaMagnitude = sqrt(axisX*axisX + axisY*axisY + axisZ*axisZ);

        // Normalize the rotation vector if it's big enough to get the axis
        // (that is, EPSILON should represent your maximum allowable margin of error)
        if (omegaMagnitude > EPSILON) {
            axisX /= omegaMagnitude;
            axisY /= omegaMagnitude;
            axisZ /= omegaMagnitude;
        }

        // Integrate around this axis with the angular speed by the timestep
        // in order to get a delta rotation from this sample over the timestep.
        // We will convert this axis-angle representation of the delta rotation
        // into a quaternion before turning it into the rotation matrix.
        float thetaOverTwo = omegaMagnitude * dT / 2.0f;
        float sinThetaOverTwo = sin(thetaOverTwo);
        float cosThetaOverTwo = cos(thetaOverTwo);
        deltaRotationVector[0] = sinThetaOverTwo * axisX;
        deltaRotationVector[1] = sinThetaOverTwo * axisY;
        deltaRotationVector[2] = sinThetaOverTwo * axisZ;
        deltaRotationVector[3] = cosThetaOverTwo;
    }
    timestamp = event.timestamp;
    float[] deltaRotationMatrix = new float[9];
    SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
    // User code should concatenate the delta rotation we computed with the current rotation
    // in order to get the updated rotation.
    // rotationCurrent = rotationCurrent * deltaRotationMatrix;
}
My question is:
This is quite different from the acceleration case, where computing the resultant acceleration from the accelerations ALONG the 3 axes makes sense.
I am really confused about why the resultant rotation rate can likewise be computed from the component rotation rates AROUND the 3 axes. It does not make sense to me.
Why would this method, finding the magnitude of the composite rotation rate, even work?
Since your title does not really match your questions, I'm trying to answer as much as I can.
Gyroscopes don't give an absolute orientation (as the ROTATION_VECTOR does) but only rotational velocities around the axes they are built to measure. This is due to the design and construction of a gyroscope. Imagine a classic gimballed gyroscope: the rotor is spinning and, due to the laws of physics, it resists changes to its rotation. Now you can rotate the frame around it and measure those rotations.
Now if you want to obtain something like the 'current rotational state' from the gyroscope, you have to start with an initial rotation, call it q0, and constantly add the tiny rotational differences that the gyroscope measures around the axes to it: q1 = q0 + gyro0, q2 = q1 + gyro1, ...
In other words: the gyroscope gives you the amount it has rotated around its three axes since the last sample, so you are not composing absolute values but small deltas.
Now this is very general and leaves a couple of questions unanswered:
Where do I get an initial position from? Answer: Have a look at the Rotation Vector Sensor - you can use the Quaternion obtained from there as an initialisation
How to 'sum' q and gyro?
Depending on the current representation of a rotation: If you use a rotation matrix, a simple matrix multiplication should do the job, as suggested in the comments (note that this matrix-multiplication implementation is not efficient!):
/**
 * Performs naive n^3 matrix multiplication and returns C = A * B
 *
 * @param A Matrix in array form (e.g. 3x3 => 9 values)
 * @param B Matrix in array form (e.g. 3x3 => 9 values)
 * @return A * B
 */
public float[] naivMatrixMultiply(float[] B, float[] A) {
    int mA, nA, mB, nB;
    mA = nA = (int) Math.sqrt(A.length);
    mB = nB = (int) Math.sqrt(B.length);
    if (nA != mB)
        throw new RuntimeException("Illegal matrix dimensions.");

    float[] C = new float[mA * nB];
    for (int i = 0; i < mA; i++)
        for (int j = 0; j < nB; j++)
            for (int k = 0; k < nA; k++)
                C[i + nA * j] += (A[i + nA * k] * B[k + nB * j]);
    return C;
}
To use this method, imagine that mRotationMatrix holds the current state; these lines do the job:
SensorManager.getRotationMatrixFromVector(deltaRotationMatrix, deltaRotationVector);
mRotationMatrix = naivMatrixMultiply(mRotationMatrix, deltaRotationMatrix);
// Apply rotation matrix in OpenGL
gl.glMultMatrixf(mRotationMatrix, 0);
If you choose to use quaternions, imagine again that mQuaternion contains the current state:
// Perform Quaternion multiplication
mQuaternion.multiplyByQuat(deltaRotationVector);
// Apply Quaternion in OpenGL
gl.glRotatef((float) (2.0f * Math.acos(mQuaternion.getW()) * 180.0f / Math.PI),mQuaternion.getX(),mQuaternion.getY(), mQuaternion.getZ());
Quaternion multiplication is described here (equation (23)). Make sure you apply the multiplication in the correct order, since it is not commutative!
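If no Quaternion class is at hand, a bare-bones Hamilton product is easy to write. The sketch below assumes both quaternions are stored as {x, y, z, w}, matching the layout of deltaRotationVector above:

// Hamilton product q = a * b, with both quaternions stored as {x, y, z, w}.
// The order matters: a * b is generally not equal to b * a.
public static float[] multiplyQuat(float[] a, float[] b) {
    float[] q = new float[4];
    q[0] = a[3] * b[0] + a[0] * b[3] + a[1] * b[2] - a[2] * b[1]; // x
    q[1] = a[3] * b[1] - a[0] * b[2] + a[1] * b[3] + a[2] * b[0]; // y
    q[2] = a[3] * b[2] + a[0] * b[1] - a[1] * b[0] + a[2] * b[3]; // z
    q[3] = a[3] * b[3] - a[0] * b[0] - a[1] * b[1] - a[2] * b[2]; // w
    return q;
}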
If you simply want to know the rotation of your device (I assume this is what you ultimately want), I strongly recommend the ROTATION_VECTOR sensor. Gyroscopes, on the other hand, are quite precise for measuring rotational velocity and have a very good dynamic response, but they suffer from drift and don't give you an absolute orientation (relative to magnetic north or to gravity).
UPDATE: If you want to see a full example, you can download the source-code for a simple demo-app from https://bitbucket.org/apacha/sensor-fusion-demo.
Makes sense to me. Acceleration sensors typically work by having some measurable quantity change when force is applied to the axis being measured. E.g. if gravity is pulling down on the sensor measuring that axis, it conducts electricity better. So now you can tell how hard gravity, or acceleration in some direction, is pulling. Easy.
Meanwhile, gyros are things that spin (OK, or bounce back and forth in a straight line like a tweaked diving board). The gyro is spinning; now you spin, and the gyro appears to spin faster or slower depending on the direction you spun. Or if you try to move it, it resists and tries to keep going the way it was going. So all you get out of measuring it is a change in rotation, and you then have to work out the total rotation by integrating all those changes over time.
Typically none of these things are one sensor either. They are often 3 different sensors arranged perpendicular to each other, each measuring a different axis. Sometimes all the sensors are on the same chip, but they are still different things on the chip measured separately.
I've managed to get the accelerometer's values (x, y, z). Is there an easy way to make a circle move with those values? I would also like it to stop at the edges of the screen. Thanks!
I think you can do something like this (note: partially pseudocode):
public void onSensorChanged(int sensor, float[] values) {
    // adjust someNumber to desired speed
    // values[1] can be -180 to 180
    float xChange = someNumber * values[1];
    // values[2] can be -90 to 90
    float yChange = someNumber * 2 * values[2];

    // only move the object if it will stay within the bounds
    if (object.xPos + xChange > 0 && object.xPos + xChange < xBorder) {
        object.xPos += xChange;
    }
    if (object.yPos + yChange > 0 && object.yPos + yChange < yBorder) {
        object.yPos += yChange;
    }

    // force a repaint of your surface here
}
Where:
onSensorChanged is a method that is called each time the accelerometer moves... I'm not sure whether you are using SensorManager, but it seems convenient for your scenario. Note that you must implement this method yourself.
object is the circle you want to move.
xBorder and yBorder are the maximum bounds for the object's movement. The minimum bounds are assumed to be 0 and 0, though you can use whatever you like.
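As a small variation on the bounds check above, the position could be clamped to the border instead of discarding the whole move when it would leave the screen (same hypothetical object, xBorder and yBorder names):

// Clamp the new position to the screen bounds instead of skipping the move entirely.
object.xPos = Math.max(0, Math.min(xBorder, object.xPos + xChange));
object.yPos = Math.max(0, Math.min(yBorder, object.yPos + yChange));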
I have to write a compass app in Android. The only thing the user sees on the screen is a cube with a red wall which has to point north. This is not important. What's important is that I need to rotate that cube according to the rotation of the device itself so that the red wall continues to point north no matter how the phone is held. My code is simple and straightforward:
@Override
public void onSensorChanged(SensorEvent event) {
    synchronized (this) {
        switch (event.sensor.getType()) {
            case Sensor.TYPE_ACCELEROMETER:
                direction = event.values[2];
                break;
            case Sensor.TYPE_ORIENTATION:
                if (direction < 0) {
                    angleX = event.values[1];
                    angleY = -event.values[2];
                    angleZ = event.values[0];
                } else {
                    angleX = -event.values[1];
                    angleY = -event.values[2];
                    angleZ = event.values[0];
                }
                break;
        }
    }
}
I have added this extra direction variable that simply stores whether the phone's display is pointing downwards or upwards. I don't know if I need it, but it seems to fix some bugs. I am using the SensorSimulator for Android, but whenever my pitch slider goes into the [-90, 90] interval, the other variables get mixed up; it's like they get a 180-degree offset. But I can't detect when I am in this interval, because the range of the pitch is -90 to 90, so I can move that slider from left to right and I will always be in that interval.
This was all just to show you how far my code has advanced. I am not saying how this problem should be solved, because I would probably only steer myself into a dead end. You see, I have been trying to write this app for 3 days now, and you can imagine how annoyed my boss is. I have read all sorts of tutorials and tried every formula I could find or think of. So please help me. All I need to know is how to rotate my cube, whose rotation angles are EULER ANGLES in degrees.
Here's some code I wrote to do something pretty similar, really only caring about the rotation of the device in the roll direction. Hope it helps! It just uses the accelerometer values to determine the roll and pitch; there's no need to get the orientation of the view.
public void onSensorChanged(SensorEvent event) {
    float x = -1 * event.values[0] / SensorManager.GRAVITY_EARTH;
    float y = -1 * event.values[1] / SensorManager.GRAVITY_EARTH;
    float z = -1 * event.values[2] / SensorManager.GRAVITY_EARTH;

    float signedRawRoll = (float) (Math.atan2(x, y) * 180 / Math.PI);
    float unsignedRawRoll = Math.abs(signedRawRoll);
    float rollSign = signedRawRoll / unsignedRawRoll;
    float rawPitch = Math.abs(z * 180);

    // Use a basic low-pass filter to only keep the gravity in the accelerometer values for the X and Y axes;
    // adjust the filter weight based on pitch, as roll is harder to define as pitch approaches 180.
    float filterWeight = rawPitch > 165 ? 0.85f : 0.7f;
    float newUnsignedRoll = filterWeight * Math.abs(this.roll) + (1 - filterWeight) * unsignedRawRoll;
    this.roll = rollSign * newUnsignedRoll;
    if (Float.isInfinite(this.roll) || Float.isNaN(this.roll)) {
        this.roll = 0;
    }
    this.pitch = filterWeight * this.pitch + (1 - filterWeight) * rawPitch;

    for (IAngleListener listener : listeners) {
        listener.deviceRollAndPitch(this.roll, this.pitch);
    }
}