Preface: I'm very new to graphics.
I'm trying to learn a bit more about AR and have a pet project that should let me explore my location for AR objects. Effectively, I want to stick a flat image in the real world and use my phone's sensors to move it so that it is only visible when I'm looking at it.
I followed the examples on the Android Developers page and I have both a rotation matrix and an orientation. Both seem to make sense. For now I just want to fix my image due north, but I'll eventually randomize it to be X degrees off north, as well as some up/down, etc.
I'm lost as to how to place my image in 3D space and how to use the rotation vector to determine the image's position on the screen.
Any help or links to articles would be greatly appreciated.
I'm using the sample code from the Android Developers page:

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        mGravity = event.values;
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        mGeomagnetic = event.values;
    }
    if (mGravity == null || mGeomagnetic == null) {
        return;
    }
    float R[] = new float[16];
    float I[] = new float[9];
    if (SensorManager.getRotationMatrix(R, I, mGravity, mGeomagnetic)) {
        Log.d(TAG, ...);
    }
}
My gut instinct was to take the Matrix.invertM of R and then use Matrix.multiplyMV to multiply the inverted rotation matrix by the world position of the object I want to render (for now, let's assume the object is 5 m north of me, putting it at (0, 5, 0, 0)). I used a compass and a level to make sure my surface is flat and my phone is oriented north. The result of the print statement above is the following rotation matrix:
-0.37, 0.93, 0.02, 0.00
-0.92, -0.37, 0.07, 0.00
0.07, 0.01, 1.00, 0.00
0.00, 0.00, 0.00, 1.00
This seems wrong as the matrix should be the identity matrix:
1, 0, 0, 0
0, 1, 0, 0
0, 0, 1, 0
0, 0, 0, 1
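In code, the gut-instinct approach above would look roughly like this sketch (assuming R is the float[16] filled in by getRotationMatrix above):

float[] invR = new float[16];
float[] worldPos = {0f, 5f, 0f, 1f};   // 5 m north of the device (world frame: X east, Y north, Z up)
float[] devicePos = new float[4];

if (android.opengl.Matrix.invertM(invR, 0, R, 0)) {
    android.opengl.Matrix.multiplyMV(devicePos, 0, invR, 0, worldPos, 0);
    // devicePos is the object's position in device coordinates; it still needs
    // to go through a projection to become screen coordinates.
}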
Rather than trying to see where you went wrong, I would highly recommend that you dig into it yourself. There are samples and answered questions on the web that show exactly what you want, so you can try them and learn from them (one example: Android sensors for OpenGL; there are also some on GitHub).
Also, since you have said you are very new to graphics, I would suggest first doing some heavy reading on matrices, transformations, and the 3D space.
Examples: http://www.songho.ca/opengl/gl_transform.html
https://www.raywenderlich.com/3664/opengl-tutorial-for-ios-opengl-es-2-0
Related
I'm very new to working with the sensors on an Android device. I have the following code (which I obtained from various tutorials).
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR)
    {
        float[] orientationVals = new float[3];
        // Convert the rotation-vector to a 4x4 matrix.
        SensorManager.getRotationMatrixFromVector(mRotationMatrix, event.values);
        SensorManager.getOrientation(mRotationMatrix, orientationVals);
        orientationVals[0] = (float) Math.toDegrees(orientationVals[0]);
        orientationVals[1] = (float) Math.toDegrees(orientationVals[1]);
        orientationVals[2] = (float) Math.toDegrees(orientationVals[2]);
        txtAzimuth.setText("Azimuth : " + d.format(orientationVals[0]) + '°');
        txtPitch.setText("Pitch : " + d.format(orientationVals[1]) + '°');
        txtRoll.setText("Roll : " + d.format(orientationVals[2]) + '°');
    }
}
Okay, so I've got these values. How do I use them to move an image (a marker of sorts) based on the movement of the phone? For example, if I move the phone to the right, the image should move to the left, and when the phone is later moved back to the left, the image should move right.
I've done a lot of googling, and what I've found is mostly OpenGL stuff.
Like this:
OpenGl world using camera to find object
That question asks for what I want, except that its object moves randomly; I want the object's position to be decided by me.
I don't want to use OpenGL for this as it would be overkill for my app. Could I somehow map the values produced by the sensors to moving the image?
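Something like the sketch below is what I have in mind, if it is even a reasonable approach (markerView and PIXELS_PER_DEGREE are just placeholders of mine):

private static final float PIXELS_PER_DEGREE = 20f;
private float startAzimuth = Float.NaN;

private void updateMarker(float azimuthDegrees) {
    if (Float.isNaN(startAzimuth)) {
        startAzimuth = azimuthDegrees;   // remember the initial heading
    }
    float delta = azimuthDegrees - startAzimuth;
    // Phone turns right (azimuth grows) -> image slides left, and vice versa.
    markerView.setTranslationX(-delta * PIXELS_PER_DEGREE);
}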
I am trying to detect yaw, pitch, and roll using the following code:

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor == mAccelerometer) {
        System.arraycopy(event.values.clone(), 0, mLastAccelerometer, 0, event.values.length);
        mLastAccelerometerSet = true;
    } else if (event.sensor == mMagnetometer) {
        System.arraycopy(event.values.clone(), 0, mLastMagnetometer, 0, event.values.length);
        mLastMagnetometerSet = true;
    }
    if (mLastAccelerometerSet && mLastMagnetometerSet) {
        SensorManager.getRotationMatrix(mR, null, mLastAccelerometer, mLastMagnetometer);
        SensorManager.getOrientation(mR, mOrientation);
        mOrientation[0] = (float) Math.toDegrees(mOrientation[0]);
        mOrientation[1] = (float) Math.toDegrees(mOrientation[1]);
        mOrientation[2] = (float) Math.toDegrees(mOrientation[2]);
        mLastAccelerometerSet = false;
        mLastMagnetometerSet = false;
        ManageSensorChanges();
    }
}
This works fine, apart from one issue: when the phone is upside down, or starts to go upside down (even when it is tilted a little forward in portrait mode), the angles go haywire and spit out seemingly random values!
Why is this happening, and is there any solution to it?
In case someone is stuck with this issue:
Strange behavior with android orientation sensor
In summary, the trouble is with the use of Euler angles, so there is little that can be done about the above issue at extreme device angles, apart from using quaternions (which are suitable for OpenGL or 3D projections).
I am using it for 2D drawing in my app (to create a parallax effect), so I just ended up using the accelerometer values directly (the range is obviously -9.8 to 9.8). It works perfectly and is an annoyingly easy solution for my crude needs. This technique is not acceptable for 3D projection; if you need precise measurements, see https://bitbucket.org/apacha/sensor-fusion-demo.
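For the record, the 2D parallax trick boils down to something like the sketch below (parallaxView and MAX_SHIFT_PX are placeholders of my own, not part of any API):

private static final float MAX_SHIFT_PX = 40f;

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        float x = event.values[0];   // roughly -9.8 .. 9.8 m/s^2
        float y = event.values[1];
        // Shift the view opposite to the tilt so the content appears to lag behind.
        parallaxView.setTranslationX(-x / SensorManager.GRAVITY_EARTH * MAX_SHIFT_PX);
        parallaxView.setTranslationY( y / SensorManager.GRAVITY_EARTH * MAX_SHIFT_PX);
    }
}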
Let's say you have the acceleration readings in all three dimensions, i.e. X, Y, and Z. How do you infer from the readings that the phone was tilted left or right? The readings are generated every 20 ms.
I actually want the logic for inferring the tilt from the readings. The tilt needs to be smooth.
A tilt can be detected in a few different ways. You can take into account one axis, two axes, or all three, depending on how accurate you want the result to be and how much you feel like fighting with the maths.
If you use only one axis, it is quite simple. Imagine the phone lying completely horizontal and being tilted to one side (the article linked below illustrates this): using just one axis, say the x axis, is enough, since you can accurately detect a change on that axis; even a small movement changes its value.
But if your application only reads that axis and the user is holding the phone almost vertical, the change on the x axis will be really small even when the phone is rotated through a large angle.
Anyway, for applications that only need coarse resolution, a single axis can be used.
Referring to basic trigonometry, the projection of the gravity vector onto the x axis produces an output acceleration equal to the sine of the angle between the accelerometer's x axis and the horizon.
This means that, given the acceleration value of one axis, you can calculate the angle the device is at: the value the sensor gives you equals 9.8 * sin(angle), so doing the maths yields the actual angle.
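In code, that works out to a one-liner along these lines (ax being the raw x-axis accelerometer reading, with the device otherwise at rest):

double angleDegrees = Math.toDegrees(Math.asin(ax / SensorManager.GRAVITY_EARTH));  // GRAVITY_EARTH ≈ 9.81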
But don't worry, you don't even have to do this. Since the values are roughly proportional to the angle (the article linked below tabulates this), you can work directly with the sensor value without caring about which angle it represents, as long as you don't need great accuracy: a change in the value means a proportional change in the angle, so with a few tests you will find out how big the change has to be in order to be relevant to you.
So, if you track the value over time and compare successive readings, you can figure out how big the rotation was. To do this:

1. Consider just one axis; this will be the x axis.
2. Write a function that returns the difference in the sensor value for that axis between one call and the next.
3. Decide on a maximum time and a minimum sensor difference that you will consider a valid movement (e.g. a big rotation is only good if it is fast enough, and a fast movement is only good if the change in angle is big enough).
4. If you detect two measurements that satisfy those conditions, note that half a tilt has been done (in a boolean, for instance) and start measuring again, but now the new reference value is the value that was considered the half tilt.
5. If the last difference was positive, you now need a negative difference, and if it was negative, you now need a positive one; this is the "coming back" part. So compare the new reference value with the new values coming from the sensor and check whether one satisfies the conditions you decided on in step 3.
6. If you find a valid value (satisfying both the value-difference and time conditions), you have a tilt. If you don't get a good value before the time is consumed, reset everything: let your reference value be the last one, reset the timers, set the half-tilt-done boolean back to false, and keep measuring.
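A rough sketch of those steps in code might look like this (the threshold values, field names, and the onTiltDetected() callback are all illustrative assumptions, not tuned or standard names):

// Illustrative thresholds; tune them experimentally (step 3).
private static final float MIN_DELTA = 3.0f;   // minimum change in the sensor value
private static final long MAX_TIME_MS = 500;   // maximum time for a valid half tilt

private float referenceX;       // current reference value
private long referenceTime;
private boolean halfTiltDone;   // "half tilt done" flag (step 4)
private float halfTiltSign;

void onNewXValue(float x, long nowMs) {
    float delta = x - referenceX;
    boolean bigEnough = Math.abs(delta) >= MIN_DELTA;
    boolean fastEnough = nowMs - referenceTime <= MAX_TIME_MS;

    if (!halfTiltDone && bigEnough && fastEnough) {
        // First half of the tilt detected (step 4).
        halfTiltDone = true;
        halfTiltSign = Math.signum(delta);
        referenceX = x;
        referenceTime = nowMs;
    } else if (halfTiltDone && bigEnough && fastEnough
            && Math.signum(delta) == -halfTiltSign) {
        // Movement came back in the opposite direction: full tilt (steps 5-6).
        onTiltDetected();
        reset(x, nowMs);
    } else if (!fastEnough) {
        // Time consumed without a valid movement: reset everything (step 6).
        reset(x, nowMs);
    }
}

private void reset(float x, long nowMs) {
    referenceX = x;
    referenceTime = nowMs;
    halfTiltDone = false;
}

private void onTiltDetected() {
    // React to the completed tilt here.
}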
I hope this is good enough for you. You can surely find libraries or code snippets to help you out, but as you say, it is good to know the logic of inferring the tilt from the readings.
The figures mentioned above were taken from this article, which I recommend reading if you want to improve the accuracy and consider two or three axes for the tilt.
The CommonsWare Sensor Monitor app does a pretty good job with this. It converts the sensor readouts to X, Y, Z values on each sensor reading, so from there it is pretty easy to determine which way the device is moving.
https://github.com/commonsguy/cw-omnibus/tree/master/Sensor/Monitor
Another item worth noting (from the Commonsware book):
There are four standard delay periods, defined as constants on the SensorManager class:

SENSOR_DELAY_NORMAL, which is what most apps would use for broad changes, such as detecting a screen rotating from portrait to landscape
SENSOR_DELAY_UI, for non-game cases where you want to update the UI continuously based upon sensor readings
SENSOR_DELAY_GAME, which is faster (less delay) than SENSOR_DELAY_UI, to try to drive a higher frame rate
SENSOR_DELAY_FASTEST, which is the “firehose” of sensor readings, without delay
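For example, registering an accelerometer listener at the UI rate looks roughly like this (sensorManager and listener are assumed to exist already):

Sensor accel = sensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
sensorManager.registerListener(listener, accel, SensorManager.SENSOR_DELAY_UI);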
You can use the accelerometer and magnetic field sensor to accomplish this. You can call this method in your onSensorChanged method to detect whether the phone was tilted upwards. This currently only works if the phone is held horizontally. Check the actual blog post for a more complete solution.
http://www.ahotbrew.com/how-to-detect-forward-and-backward-tilt/
public boolean isTiltUpward()
{
    if (mGravity != null && mGeomagnetic != null)
    {
        float R[] = new float[9];
        float I[] = new float[9];
        boolean success = SensorManager.getRotationMatrix(R, I, mGravity, mGeomagnetic);
        if (success)
        {
            float orientation[] = new float[3];
            SensorManager.getOrientation(R, orientation);
            /*
             * If the roll is positive, you're in reverse landscape (landscape right),
             * and if the roll is negative you're in landscape (landscape left).
             *
             * Similarly, you can use the pitch to differentiate between portrait and
             * reverse portrait: if the pitch is positive, you're in reverse portrait,
             * and if the pitch is negative you're in portrait.
             *
             * orientation -> azimuth, pitch and roll
             */
            pitch = orientation[1];
            roll = orientation[2];
            inclineGravity = mGravity.clone();
            double norm_Of_g = Math.sqrt(inclineGravity[0] * inclineGravity[0]
                    + inclineGravity[1] * inclineGravity[1]
                    + inclineGravity[2] * inclineGravity[2]);
            // Normalize the accelerometer vector
            inclineGravity[0] = (float) (inclineGravity[0] / norm_Of_g);
            inclineGravity[1] = (float) (inclineGravity[1] / norm_Of_g);
            inclineGravity[2] = (float) (inclineGravity[2] / norm_Of_g);
            // Checks if the device is flat on the ground or not
            int inclination = (int) Math.round(Math.toDegrees(Math.acos(inclineGravity[2])));
            /*
             * Float.compareTo returns > 0 if the first value is greater, < 0 if it is
             * smaller, and 0 if the two are equal, e.g.:
             *
             * Float obj1 = new Float("10.2");
             * Float obj2 = new Float("10.20");
             * int retval = obj1.compareTo(obj2);   // retval == 0 here
             */
            Float objPitch = new Float(pitch);
            Float objZero = new Float(0.0);
            Float objZeroPointTwo = new Float(0.2);
            Float objZeroPointTwoNegative = new Float(-0.2);
            int objPitchZeroResult = objPitch.compareTo(objZero);
            int objPitchZeroPointTwoResult = objZeroPointTwo.compareTo(objPitch);
            int objPitchZeroPointTwoNegativeResult = objPitch.compareTo(objZeroPointTwoNegative);
            // Tilted upward: rolled to the left, pitch within (-0.2, 0.2) rad,
            // and the device inclined between 30 and 40 degrees from flat.
            if (roll < 0
                    && ((objPitchZeroResult > 0 && objPitchZeroPointTwoResult > 0)
                        || (objPitchZeroResult < 0 && objPitchZeroPointTwoNegativeResult > 0))
                    && (inclination > 30 && inclination < 40))
            {
                return true;
            }
            else
            {
                return false;
            }
        }
    }
    return false;
}
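A minimal sketch of driving the method above from a sensor callback could look like this (the field names match the snippet above; the rest of the listener wiring is assumed):

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        mGravity = event.values.clone();
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        mGeomagnetic = event.values.clone();
    }
    if (isTiltUpward()) {
        // React to the upward tilt, e.g. update the UI.
    }
}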
Is this what you're looking for?
public class AccelerometerHandler implements SensorEventListener
{
    float accelX;
    float accelY;
    float accelZ;

    public AccelerometerHandler(Context paramContext)
    {
        SensorManager localSensorManager =
                (SensorManager) paramContext.getSystemService(Context.SENSOR_SERVICE);
        // Register for accelerometer updates if the device has an accelerometer.
        if (localSensorManager.getSensorList(Sensor.TYPE_ACCELEROMETER).size() != 0)
            localSensorManager.registerListener(this,
                    localSensorManager.getSensorList(Sensor.TYPE_ACCELEROMETER).get(0),
                    SensorManager.SENSOR_DELAY_GAME);
    }

    public float getAccelX()
    {
        return this.accelX;
    }

    public float getAccelY()
    {
        return this.accelY;
    }

    public float getAccelZ()
    {
        return this.accelZ;
    }

    public void onAccuracyChanged(Sensor paramSensor, int paramInt)
    {
    }

    public void onSensorChanged(SensorEvent paramSensorEvent)
    {
        this.accelX = paramSensorEvent.values[0];
        this.accelY = paramSensorEvent.values[1];
        this.accelZ = paramSensorEvent.values[2];
    }
}
I know that what I am about to ask has been discussed before, but after going through those discussions I still couldn't find a complete answer, so I am asking a new question.
When I tried integrating JPCT-AE with QCAR, everything went as expected: I got my model-view matrix from renderFrame in JNI and successfully transferred it to Java, and the JPCT model was shown perfectly, as expected. But when I tried to pass this matrix to the JPCT world camera, my model disappeared.
My code, in onSurfaceChanged:
world = new World();
world.setAmbientLight(20, 20, 20);
sun = new Light(world);
sun.setIntensity(250, 250, 250);
cube = Primitives.getCube(1);
cube.calcTextureWrapSpherical();
cube.strip();
cube.build();
world.addObject(cube);
cam = world.getCamera();
cam.moveCamera(Camera.CAMERA_MOVEOUT, 10);
cam.lookAt(cube.getTransformedCenter());
SimpleVector sv = new SimpleVector();
sv.set(cube.getTransformedCenter());
sv.y -= 100;
sv.z -= 100;
sun.setPosition(sv);
MemoryHelper.compact();
And in onDraw:
com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
mResult.setDump(modelviewMatrix ); //modelviewMatrix i get from Qcar
cube.setRotationMatrix(mResult);
cam.setBack(mResult);
fb.clear(back);
world.renderScene(fb);
world.draw(fb);
fb.display();
After some research I found that QCAR uses a right-handed coordinate system, meaning that positive X goes right, positive Y goes up, and positive Z comes out of the screen, whereas in the JPCT coordinate system positive X goes right, positive Y goes down, and positive Z goes into the screen.
(Figure: the QCAR coordinate system.)
I know that the matrix QCAR gives me is a 4x4 matrix holding 3x3 rotation values and a translation vector. I am posting the matrices to be clearer:
modelviewmatrix:
1.512537 -159.66255 -10.275316 0.0
-89.86529 -1.1592013 4.7839375 0.0
-8.619186 10.179538 -159.44305 0.0
59.182976 93.205956 437.2832 1.0
The model-view matrix after reversing it using cam.setBack(modelviewMatrix.invert(modelviewMatrix)):
5.9083453E-5 -0.01109448 -3.3668696E-4 0.0
0.0040540528 -3.8752193E-4 0.0047518034 0.0
-0.004756433 -4.6811014E-4 0.0040459237 0.0
0.7533285 0.4116795 2.7063704 0.9999999
If I remove matrix elements 13, 14, and 15, assuming a 3x3 rotation matrix, the model is rotated properly, but the translation (the in-and-out movement of the image) is not there.
In the end I don't know what change to the translation vector is needed, so please tell me what I am missing here.
QCAR::Matrix44F inverseMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);
QCAR::Matrix44F invTransposeMatrix = SampleMath::Matrix44FTranspose(inverseMatrix);
Then pass the invTransposeMatrix value to Java:
env->SetFloatArrayRegion(modelviewArray, 0, 16, invTransposeMatrix.data);
env->CallVoidMethod(obj, method, modelviewArray);
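On the Java side, consuming that array could look roughly like the sketch below; note that the 180° rotation around X, meant to bridge the Y-up/Z-out versus Y-down/Z-in difference described in the question, is my assumption about how to express the flip, not something taken from either SDK's documentation:

public void updateModelviewMatrix(float[] modelviewMatrix) {
    com.threed.jpct.Matrix m = new com.threed.jpct.Matrix();
    m.setDump(modelviewMatrix);
    m.rotateX((float) Math.PI);   // assumed axis flip between QCAR and JPCT conventions
    cam.setBack(m);
}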
I have a sensor manager that returns a rotationMatrix based on the device's magnetometer and accelerometer. I have also been trying to calculate the yaw, pitch, and roll of the user's device, but I'm finding that pitch and roll interfere with each other and give inaccurate results. Is there a way to extract yaw, pitch, and roll of the device from the rotationMatrix?
EDIT
Trying to interpret Blender's answer below (which I am thankful for, but I'm not quite there yet), I am trying to get the angles from a rotation matrix like this:
float R[] = phoneOri.getMatrix();
double rmYaw = Math.atan2(R[4], R[0]);
double rmPitch = Math.acos(-R[8]);
double rmRoll = Math.atan2(R[9], R[10]);
I don't know whether I am referencing the wrong parts of the matrix, but I am not getting the results I would expect. I was hoping to get values in degrees but am getting strange numbers instead.
My matrix comes from my sensor manager, which looks like this:
public void onSensorChanged(SensorEvent evt) {
    int type = evt.sensor.getType();
    if (type == Sensor.TYPE_ORIENTATION) {
        yaw = evt.values[0];
        pitch = evt.values[1];
        roll = evt.values[2];
    }
    if (type == Sensor.TYPE_MAGNETIC_FIELD) {
        orientation[0] = (orientation[0] * 1 + evt.values[0]) * 0.5f;
        orientation[1] = (orientation[1] * 1 + evt.values[1]) * 0.5f;
        orientation[2] = (orientation[2] * 1 + evt.values[2]) * 0.5f;
    } else if (type == Sensor.TYPE_ACCELEROMETER) {
        acceleration[0] = (acceleration[0] * 2 + evt.values[0]) * 0.33334f;
        acceleration[1] = (acceleration[1] * 2 + evt.values[1]) * 0.33334f;
        acceleration[2] = (acceleration[2] * 2 + evt.values[2]) * 0.33334f;
    }
    if ((type == Sensor.TYPE_MAGNETIC_FIELD) || (type == Sensor.TYPE_ACCELEROMETER)) {
        float newMat[] = new float[16];
        SensorManager.getRotationMatrix(newMat, null, acceleration, orientation);
        if (displayOri == 0 || displayOri == 2) {
            SensorManager.remapCoordinateSystem(newMat,
                    SensorManager.AXIS_X * -1, SensorManager.AXIS_MINUS_Y * -1, newMat);
        } else {
            SensorManager.remapCoordinateSystem(newMat,
                    SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, newMat);
        }
        matrix = newMat;
    }
}
Sample matrix when the device is lying face up on a table:
0.9916188, -0.12448014, -0.03459576, 0.0
0.12525482, 0.9918981, 0.021199778, 0.0
0.031676512,-0.025355382, 0.9991765, 0.0
0.0, 0.0, 0.0, 1
ANSWER
double rmPitch = Math.toDegrees( Math.acos(R[10]));
I believe Blender's answer is not correct, since he gave a transformation from a rotation matrix to Euler angles of the z-x-z extrinsic convention, while roll, pitch, and yaw are a different kind of Euler angles (z-y-x extrinsic).
The actual transformation formula would rather be:
yaw=atan2(R(2,1),R(1,1));
pitch=atan2(-R(3,1),sqrt(R(3,2)^2+R(3,3)^2));
roll=atan2(R(3,2),R(3,3));
Source
Feedback: this implementation turned out to lack numerical stability near the singularity of the representation (gimbal lock). In C++, I therefore recommend using the Eigen library with the following line of code:
R.eulerAngles(2,1,0).reverse();
(More details here)
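Applied to the flattened, row-major rotation matrix that SensorManager.getRotationMatrix() fills in, those formulas become roughly the sketch below (assuming a float[9] array R; for a float[16] 4x4 matrix the same entries sit at R[4], R[0], R[8], R[9], and R[10]):

// R(row,col) above maps to R[(row - 1) * 3 + (col - 1)] for a row-major float[9].
double yaw   = Math.atan2(R[3], R[0]);                                   // R(2,1), R(1,1)
double pitch = Math.atan2(-R[6], Math.sqrt(R[7] * R[7] + R[8] * R[8]));  // -R(3,1), R(3,2), R(3,3)
double roll  = Math.atan2(R[7], R[8]);                                   // R(3,2), R(3,3)
// Math.toDegrees(...) converts any of these from radians to degrees.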
Yaw, pitch and roll correspond to Euler angles. You can convert a transformation matrix to Euler angles pretty easily:
SensorManager provides SensorManager.getOrientation to get all three angles.
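For example (the angles come back in radians, in the order azimuth/yaw, pitch, roll):

// rotationMatrix is the float[9] or float[16] filled in by SensorManager.getRotationMatrix().
float[] orientation = new float[3];
SensorManager.getOrientation(rotationMatrix, orientation);
float yawDeg   = (float) Math.toDegrees(orientation[0]);   // azimuth
float pitchDeg = (float) Math.toDegrees(orientation[1]);
float rollDeg  = (float) Math.toDegrees(orientation[2]);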