Android Camera AutoFocus on Demand

The built-in camcorder app (like the one on the HTC EVO) seems to call camera.autoFocus() only when the preview image changes; if you hold the camera steady, no camera.autoFocus() happens.
I would like to duplicate this behavior while camera.startPreview() is active as in the initial preview setup code below:
camera = Camera.open();
Camera.Parameters parameters = camera.getParameters();
List<String> focusModes = parameters.getSupportedFocusModes();
if (focusModes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
}
camera.setParameters(parameters);
camera.setPreviewDisplay(holder);
camera.startPreview();
All the examples I found for autoFocus() seem to call it every 500 ms to 2000 ms, or once just before the picture is taken or recording is started.
The EVO camcorder app seems to use a sensor or an algorithm to trigger autoFocus(). However this trigger is implemented, it works exceptionally well. Does anyone have any knowledge of how to trigger autoFocus() on demand when it is needed, such as when the camera is moved closer to or farther from the subject, or is panned slightly?
Thank you,
Gerry

Android introduced continuous autofocus in API level 9 (Gingerbread). It works better than calling Camera.autoFocus() periodically.
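For example, a minimal sketch against the old android.hardware.Camera API, reusing the camera object from the question's snippet (FOCUS_MODE_CONTINUOUS_PICTURE, the still-capture counterpart, needs API 14+):
Camera.Parameters parameters = camera.getParameters();
List<String> modes = parameters.getSupportedFocusModes();
if (modes.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO)) {
    // API 9+: the camera driver keeps refocusing on its own while the preview runs
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_VIDEO);
} else if (modes.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
}
camera.setParameters(parameters);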

I had the same problem in one of my applications.
My solution was to use a sensor listener and trigger autofocus when the device moved past some threshold. Here is the code:
public void setCameraFocus(AutoFocusCallback autoFocus) {
    String focusMode = mCamera.getParameters().getFocusMode();
    if (focusMode.equals(Camera.Parameters.FOCUS_MODE_AUTO)
            || focusMode.equals(Camera.Parameters.FOCUS_MODE_MACRO)) {
        mCamera.autoFocus(autoFocus);
    }
}
The callback for auto focus:
// This is the autofocus callback
private AutoFocusCallback myAutoFocusCallback = new AutoFocusCallback() {
    public void onAutoFocus(boolean autoFocusSuccess, Camera arg1) {
        // Focus pass finished; allow the next focus request
        mAutoFocus = true;
    }
};
And this is how the focus call is triggered:
public void onSensorChanged(SensorEvent event) {
    if (mInvalidate) {
        mView.invalidate();
        mInvalidate = false;
    }
    float x = event.values[0];
    float y = event.values[1];
    float z = event.values[2];
    if (!mInitialized) {
        mLastX = x;
        mLastY = y;
        mLastZ = z;
        mInitialized = true;
    }
    float deltaX = Math.abs(mLastX - x);
    float deltaY = Math.abs(mLastY - y);
    float deltaZ = Math.abs(mLastZ - z);
    // Autofocus when the device has moved enough on any axis,
    // but only while a previous autofocus is not still running.
    if (deltaX > .5 && mAutoFocus) {
        mAutoFocus = false;
        mPreview.setCameraFocus(myAutoFocusCallback);
    }
    if (deltaY > .5 && mAutoFocus) {
        mAutoFocus = false;
        mPreview.setCameraFocus(myAutoFocusCallback);
    }
    if (deltaZ > .5 && mAutoFocus) {
        mAutoFocus = false;
        mPreview.setCameraFocus(myAutoFocusCallback);
    }
    mLastX = x;
    mLastY = y;
    mLastZ = z;
}
You can see the complete project here: http://adblogcat.com/a-camera-preview-with-a-bounding-box-like-google-goggles/
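For the onSensorChanged snippet above to fire, the accelerometer listener has to be registered somewhere. A minimal sketch, assuming the activity itself implements SensorEventListener and holds an mSensorManager field:
@Override
protected void onResume() {
    super.onResume();
    mSensorManager = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
    Sensor accelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
    mSensorManager.registerListener(this, accelerometer, SensorManager.SENSOR_DELAY_NORMAL);
}

@Override
protected void onPause() {
    super.onPause();
    // Always unregister, otherwise the sensor keeps draining the battery
    mSensorManager.unregisterListener(this);
}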

It is quite possible to trigger a refocus with a simpler technique: if you flash a white box within the camera's view (drawn from code, not a real box), it will rapidly trigger a refocus. I own the EVO 4G, and one of the previous posters is correct: ever since the Gingerbread update it continually refocuses without the scene ever needing to change.

For taking pictures, you can set a continuous focus mode. From the documentation:
Applications can call autoFocus(AutoFocusCallback) in this mode. If the autofocus is in the middle of scanning, the focus callback will return when it completes. If the autofocus is not scanning, the focus callback will immediately return with a boolean that indicates whether the focus is sharp or not. The apps can then decide if they want to take a picture immediately or to change the focus mode to auto, and run a full autofocus cycle.
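A hedged sketch of the pattern the quoted documentation describes, assuming a continuous focus mode has already been set and that camera and jpegCallback exist (both names are illustrative):
camera.autoFocus(new Camera.AutoFocusCallback() {
    @Override
    public void onAutoFocus(boolean success, Camera cam) {
        // If the continuous scan had already converged this fires immediately;
        // otherwise it fires once the current scan completes.
        cam.takePicture(null, null, jpegCallback);
    }
});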

I would make use of the SensorEventListener. All you need to do is listen to sensor events and fire autofocus once the phone's orientation has changed by a sufficient threshold.
http://developer.android.com/reference/android/hardware/SensorEventListener.html
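A rough sketch of that idea, using the accelerometer/magnetometer orientation rather than raw acceleration deltas. It reuses mAutoFocus, mPreview and myAutoFocusCallback from the answer above; mGravity, mGeomagnetic, mLastAzimuth and FOCUS_THRESHOLD are illustrative fields, not part of any API:
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) mGravity = event.values.clone();
    if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) mGeomagnetic = event.values.clone();
    if (mGravity == null || mGeomagnetic == null) return;

    float[] r = new float[9];
    float[] orientation = new float[3];
    if (SensorManager.getRotationMatrix(r, null, mGravity, mGeomagnetic)) {
        SensorManager.getOrientation(r, orientation);
        float azimuth = (float) Math.toDegrees(orientation[0]);
        // Refocus only when the heading has changed enough and no focus run is in progress
        if (Math.abs(azimuth - mLastAzimuth) > FOCUS_THRESHOLD && mAutoFocus) {
            mAutoFocus = false;
            mLastAzimuth = azimuth;
            mPreview.setCameraFocus(myAutoFocusCallback);
        }
    }
}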

Related

Android pinch zoom on Camera delay

I have a custom SurfaceView for my CameraPreview in my app, and I am trying to implement pinch zoom by implementing these two methods:
@Override
public boolean onTouchEvent(MotionEvent event) {
    Camera camera = getCamera();
    if (camera == null) {
        return true;
    }
    Camera.Parameters params = camera.getParameters();
    int action = event.getAction();
    if (event.getPointerCount() > 1) {
        if (action == MotionEvent.ACTION_POINTER_DOWN) {
            MCLog.v(TAG, "Single ");
            mDist = getFingerSpacing(event);
            MCLog.w(TAG, "Original distance " + mDist);
        } else if (action == MotionEvent.ACTION_MOVE && params.isZoomSupported()) {
            camera.cancelAutoFocus();
            handleZoom(event, params);
        }
    } else {
        if (action == MotionEvent.ACTION_UP) {
            mFirstTime = false;
            handleFocus(event, params);
        }
    }
    return true;
}
private void handleZoom(MotionEvent event, Camera.Parameters params) {
    if (mFirstTime) {
        mDist = getFingerSpacing(event);
        mFirstTime = false;
        return;
    }
    List<Integer> zoomRatios = params.getZoomRatios();
    int maxZoom = params.getMaxZoom();
    int zoom = params.getZoom();
    double spacing = getFingerSpacing(event);
    MCLog.w(TAG, String.format("Old zoom is: %s", zoom));
    // Percentage of displacement
    MCLog.w(TAG, String.format("Original distance is: %s, new displacement is %s", mDist, spacing));
    double percentage = (mDist + spacing) / mDist;
    if (mDist > spacing) {
        percentage *= -1;
    }
    MCLog.w(TAG, String.format("Percentage is: %s", percentage));
    zoom = (int) (zoom + percentage);
    MCLog.w(TAG, String.format("New zoom is: %s", zoom));
    if (zoom > maxZoom) {
        zoom = maxZoom;
    }
    if (zoom < 0) {
        zoom = 0;
    }
    mDist = spacing;
    params.setZoom(zoom);
    if (mZoomListener != null) {
        mZoomListener.onZoomChanged(zoomRatios.get(zoom));
    }
    getCamera().setParameters(params);
}
This seems to be working, but the zoom has a slight delay that gets longer the more I zoom into the image: I stop pinching and the image keeps zooming in.
I couldn't find any other implementation of pinch zoom for the camera besides this one, so maybe it is doing something wrong.
Since you're seeing the logging continue after you lift your finger, that probably means you're not processing your touch event queue fast enough.
That setParameters call is not particularly fast.
So you'll need to rate-limit somehow, and drop touch events that you don't have time to handle. There are many options of varying kinds of tradeoffs.
I'm not very familiar with the input APIs, so I'm not sure if there's some parameter you can just tweak to reduce the rate of calls - maybe just don't do anything unless the change in zoom is above some threshold, and then increase the threshold until the zooming doesn't lag?
Or you can send the zoom calls to another thread to actually invoke setParameters, and just drop a zoom call on the floor if that thread is already busy processing a previous call.
Or better, have a 'nextZoom' parameter that your zoom setting thread looks at once it finishes its prior call, and then just have the touch event handler update nextZoom on each invocation. The zoom setting thread then always checks if the value has changed once it finishes the last set call, and if so, sets it again.
Then you'll always get the newest zoom level, and they won't pile up either.
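A sketch of that last suggestion. All names here (mPendingZoom, requestZoom) are illustrative, and it assumes your setup tolerates calling setParameters off the UI thread; only the newest requested zoom is ever applied:
private final java.util.concurrent.atomic.AtomicInteger mPendingZoom =
        new java.util.concurrent.atomic.AtomicInteger(-1);
private final java.util.concurrent.ExecutorService mZoomExecutor =
        java.util.concurrent.Executors.newSingleThreadExecutor();

// Called from onTouchEvent instead of setParameters: only the newest value is kept.
private void requestZoom(int zoom) {
    if (mPendingZoom.getAndSet(zoom) == -1) {
        mZoomExecutor.execute(new Runnable() {
            @Override
            public void run() {
                int next;
                // Keep applying until no newer request is pending (-1 is the "empty" sentinel)
                while ((next = mPendingZoom.getAndSet(-1)) != -1) {
                    Camera camera = getCamera();
                    if (camera == null) return;
                    Camera.Parameters params = camera.getParameters();
                    params.setZoom(next);
                    camera.setParameters(params);   // the slow call now runs off the UI thread
                }
            }
        });
    }
}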

Android Tilt issue with Accelerometer and Magnetometer

I am trying to detect Yaw, Pitch & roll using the following code:
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor == mAccelerometer) {
        System.arraycopy(event.values, 0, mLastAccelerometer, 0, event.values.length);
        mLastAccelerometerSet = true;
    } else if (event.sensor == mMagnetometer) {
        System.arraycopy(event.values, 0, mLastMagnetometer, 0, event.values.length);
        mLastMagnetometerSet = true;
    }
    if (mLastAccelerometerSet && mLastMagnetometerSet) {
        SensorManager.getRotationMatrix(mR, null, mLastAccelerometer, mLastMagnetometer);
        SensorManager.getOrientation(mR, mOrientation);
        mOrientation[0] = (float) Math.toDegrees(mOrientation[0]);
        mOrientation[1] = (float) Math.toDegrees(mOrientation[1]);
        mOrientation[2] = (float) Math.toDegrees(mOrientation[2]);
        mLastAccelerometerSet = false;
        mLastMagnetometerSet = false;
        ManageSensorChanges();
    }
}
This works fine apart from one issue:
when the phone is upside down, or starts to go upside down (even if tilted only a little forward in portrait mode), the angles go haywire and random values are spat out.
Why is this happening, and is there any solution?
In case someone else is stuck with this issue:
Strange behavior with android orientation sensor
In summary, the trouble is with the use of Euler angles, so there is little that can be done about the issue above at extreme device angles, apart from using quaternions (which are suitable for OpenGL or 3D projections).
I am using the readings for 2D drawing in my app (to create a parallax effect), so I just ended up using the accelerometer values (whose range is obviously -9.8 to 9.8). It works perfectly and is an annoyingly easy solution for my crude needs. This technique is not acceptable for 3D projection; if you need precise measurements, see https://bitbucket.org/apacha/sensor-fusion-demo.
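If you ever do need a full 3D orientation without the Euler-angle problems described above, one option (a sketch, assuming a device that reports TYPE_ROTATION_VECTOR) is to read the rotation vector and convert it to a quaternion:
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
        float[] q = new float[4];                      // w, x, y, z
        SensorManager.getQuaternionFromVector(q, event.values);
        // Use q directly (e.g. to build an OpenGL rotation) instead of Euler angles.
    }
}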

rotate 3D Object (Spatial) with compass

I've generated a 3D overlay using jMonkey in my Android app. Everything works fine: my ninja model is walking in a loop. Awesome!
Now I want to rotate the camera according to the direction of the phone. I thought the compass would be the best way, BUT unfortunately I have a lot of problems. So here I go.
I've created a method that is invoked in the activity:
public void rotate(float x, float y, float z) {
    Log.d(TAG, "simpleUpdate: new rotation: " + y);
    newX = x;
    newY = y;
    newZ = z;
    newPosition = true;
}
In the 'simpleUpdate' method I've handled it this way:
if (newPosition && ninja != null) {
    Log.d(TAG, "simpleUpdate: rotation: " + newY);
    ninja.rotate((float) Math.toRadians(newX), (float) Math.toRadians(newY), (float) Math.toRadians(newZ));
    newPosition = false;
}
In my activity I'm checking whether the phone has moved:
if (lastAzimuth != (int) azimuthInDegress) {
    lastAzimuth = (int) azimuthInDegress;
I cast to int so the distortion won't be such a big problem.
    if ((com.fixus.towerdefense.model.SuperimposeJME) app != null) {
        ((com.fixus.towerdefense.model.SuperimposeJME) app).rotate(0f, azimuthInDegress, 0f);
    }
At the moment I only want to rotate it around the Y axis.
Now the main problem is that the rotation is more like a jump than a rotation. When I move my phone a bit and there is a 6 degree difference (I see this in my log), the model rotates by something like 90 degrees and then turns back. This has nothing to do with the rotation or the change taken from my compass.
Any ideas?
UPDATE
I think I got it. The rotate method rotates from the current state by the value I pass, so it behaves like old Y rotation + new value. I am now applying the difference between the current value and the old value, and it now looks almost fine. Is this the right way?
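A sketch of that difference approach (lastAppliedY is an illustrative field; Spatial.rotate() in jMonkey takes radians and is relative to the current rotation):
if (newPosition && ninja != null) {
    float deltaY = newY - lastAppliedY;                 // change in azimuth since the last applied rotation, degrees
    ninja.rotate(0f, (float) Math.toRadians(deltaY), 0f);
    lastAppliedY = newY;
    newPosition = false;
}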

How to get a phone's azimuth with compass readings and gyroscope readings?

I wish to get my phone's current orientation by the following method:
Get the initial orientation (azimuth) first via the getRotationMatrix() and getOrientation().
Add the integration of gyroscope reading over time to it to get the current orientation.
Phone Orientation:
The phone's x-y plane is kept parallel to the ground plane, i.e., it is held in a "texting-while-walking" orientation.
"getOrientation()" return values:
The Android API allows me to easily get the orientation, i.e., azimuth, pitch, roll, from getOrientation().
Please note that this method always returns its values within the ranges [0, -PI] and [0, PI].
My Problem:
Since the integration of the gyroscope reading, denoted by dR, may be quite big, when I do CurrentOrientation += dR the CurrentOrientation may exceed the [0, -PI] and [0, PI] ranges.
What manipulations are needed so that I can ALWAYS get the current orientation within the [0, -PI] and [0, PI] ranges?
I have tried the following in Python, but I highly doubt its correctness.
rotation = scipy.integrate.trapz(gyroSeries, timeSeries)  # integration
if (headingDirection - rotation) < -np.pi:
    headingDirection += 2 * np.pi
elif (headingDirection - rotation) > np.pi:
    headingDirection -= 2 * np.pi
# Complementary filter
headingDirection = ALPHA * (headingDirection - rotation) + (1 - ALPHA) * np.mean(azimuth[np.array(stepNo.tolist()) == i])
if headingDirection < -np.pi:
    headingDirection += 2 * np.pi
elif headingDirection > np.pi:
    headingDirection -= 2 * np.pi
Remarks
This is NOT that simple, because it involves the following trouble-makers:
The orientation sensor reading goes from 0 to -PI, and then DIRECTLY JUMPS to +PI and gradually gets back to 0 via +PI/2.
The integration of the gyroscope reading also leads to some trouble: should I add dR to the orientation or subtract it?
Do please refer to the Android Documentations first, before giving a confirmed answer.
Estimated answers will not help.
The orientation sensor actually derives its readings from the real magnetometer and the accelerometer.
I guess maybe this is the source of the confusion. Where is this stated in the documentation? More importantly, does the documentation somewhere explicitly state that the gyro readings are ignored? As far as I know the method described in this video is implemented:
Sensor Fusion on Android Devices: A Revolution in Motion Processing
This method uses the gyros and integrates their readings. This pretty much renders the rest of the question moot; nevertheless I will try to answer it.
The orientation sensor is already integrating the gyro readings for you, that is how you get the orientation. I don't understand why you are doing it yourself.
You are not doing the integration of the gyro readings properly, it is more complicated than CurrentOrientation += dR (which is incorrect). If you need to integrate the gyro readings (I don't see why, the SensorManager is already doing it for you) please read Direction Cosine Matrix IMU: Theory how to do it properly (Equation 17).
Don't try integrating with Euler angles (aka azimuth, pitch, roll), nothing good will come out.
Please use either quaternions or rotation matrices in your computations instead of Euler angles. If you work with rotation matrices, you can always convert them to Euler angles, see
Computing Euler angles from a rotation matrix by Gregory G. Slabaugh
(The same is true for quaternions.) There are (in the non-degenerate case) two ways to represent a rotation, that is, you will get two sets of Euler angles. Pick the one that is in the range you need. (In the case of gimbal lock, there are infinitely many Euler angles; see the PDF above.) Just promise you won't start using Euler angles again in your computations after the rotation matrix to Euler angles conversion.
It is unclear what you are doing with the complementary filter. You can implement a pretty damn good sensor fusion based on the Direction Cosine Matrix IMU: Theory manuscript, which is basically a tutorial. It's not trivial to do it but I don't think you will find a better, more understandable tutorial than this manuscript.
One thing that I had to discover myself when I implemented sensor fusion based on this manuscript was that the so-called integral windup can occur. I took care of it by bounding the TotalCorrection (page 27). You will understand what I am talking about if you implement this sensor fusion.
UPDATE: Here I answer your questions that you posted in comments after accepting the answer.
I think the compass gives me my current orientation by using gravity and magnetic field, right? Is gyroscope used in the compass?
Yes, if the phone is more or less stationary for at least half a second, you can get a good orientation estimate by using gravity and the compass only. Here is how to do it: Can anyone tell me whether gravity sensor is as a tilt sensor to improve heading accuracy?
No, the gyroscopes are not used in the compass.
Could you please kindly explain why the integration done by me is wrong? I understand that if my phone's pitch points up, euler angle fails. But any other things wrong with my integration?
There are two unrelated things: (i) the integration should be done differently, (ii) Euler angles are trouble because of the Gimbal lock. I repeat, these two are unrelated.
As for the integration: here is a simple example how you can actually see what is wrong with your integration. Let x and y be the axes of the horizontal plane in the room. Get a phone in your hands. Rotate the phone around the x axis (of the room) by 45 degrees, then around the y axis (of the room) by 45 degrees. Then, repeat these steps from the beginning but now rotate around the y axis first, and then around the x axis. The phone ends up in a totally different orientation. If you do the integration according to CurrentOrientation += dR you will see no difference! Please read the above linked Direction Cosine Matrix IMU: Theory manuscript if you want to do the integration properly.
As for the Euler angles: they screw up the stability of the application and it is enough for me not to use them for arbitrary rotations in 3D.
I still don't understand why you are trying to do it yourself, why you don't want to use the orientation estimate provided by the platform. Chances are, you cannot do better than that.
I think you should avoid the deprecated "orientation sensor" and use sensor fusion methods like the rotation vector and getRotationMatrix that already implement fusion algorithms (in particular Invensense's), which already use gyroscope data.
If you want a simple sensor fusion algorithm, a so-called balance filter can be used (refer to http://www.filedump.net/dumped/filter1285099462.pdf). The approach is illustrated at
http://postimg.org/image/9cu9dwn8z/
It integrates the gyroscope to get the angle, then high-pass filters the result to remove drift, and adds it to the smoothed accelerometer and compass results. The integrated, high-pass-filtered gyro data and the accelerometer/compass data are weighted so that the two parts add to one, so the output is an accurate estimate in units that make sense.
For the balance filter, the time constant may be tweaked to tune the response. The shorter the time
constant, the better the response but the more acceleration noise will be allowed to pass through.
To see how this works, imagine you have the newest gyro data point (in rad/s) stored in gyro, the newest angle measurement from the accelerometer stored in angle_acc, and dt is the time from the last gyro data until now. Then your new angle would be calculated using
angle = b * (angle + gyro * dt) + (1 - b) * angle_acc;
You may start by trying b = 0.98, for instance. You will also probably want to use a fast gyroscope measurement interval dt so the gyro doesn't drift more than a couple of degrees before the next measurement is taken. The balance filter is useful and simple to implement, but it is not the ideal sensor fusion approach.
Invensense’s approach involves some clever algorithms and probably some form of Kalman filter.
Source: Professional Android Sensor Programming, Adam Stroud.
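A minimal Java sketch of that balance-filter formula. Names like mAngle and onNewSamples are illustrative; gyroRate is the calibrated gyro rate around the relevant axis in rad/s, angleAcc the angle derived from the accelerometer/compass, and dt the sample interval in seconds:
private static final float B = 0.98f;   // filter constant suggested above
private float mAngle;                   // filtered angle, radians

void onNewSamples(float gyroRate, float angleAcc, float dt) {
    // Trust the integrated gyro for fast changes, the accelerometer/compass for the long term.
    mAngle = B * (mAngle + gyroRate * dt) + (1 - B) * angleAcc;
}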
If the azimuth value is inaccurate due to magnetic interference, there is nothing you can do to eliminate it as far as I know. To get a stable azimuth reading you need to filter the accelerometer values if TYPE_GRAVITY is not available; in that case I am pretty sure the device does not have a gyro, so the only filter you can use is a low-pass filter. The following code is an implementation of a stable compass using TYPE_GRAVITY and TYPE_MAGNETIC_FIELD.
public class Compass implements SensorEventListener
{
    public static final float TWENTY_FIVE_DEGREE_IN_RADIAN = 0.436332313f;
    public static final float ONE_FIFTY_FIVE_DEGREE_IN_RADIAN = 2.7052603f;

    private SensorManager mSensorManager;
    private float[] mGravity;
    private float[] mMagnetic;
    // If the device is flat mOrientation[0] = azimuth, mOrientation[1] = pitch
    // and mOrientation[2] = roll, otherwise mOrientation[0] is equal to Float.NaN
    private float[] mOrientation = new float[3];
    private LinkedList<Float> mCompassHist = new LinkedList<Float>();
    private float[] mCompassHistSum = new float[]{0.0f, 0.0f};
    private int mHistoryMaxLength;

    public Compass(Context context)
    {
        mSensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        // Adjust the history length to fit your needs; the faster the sensor rate,
        // the larger the value needed for a stable result.
        mHistoryMaxLength = 20;
    }

    public void registerListener(int sensorRate)
    {
        Sensor magneticSensor = mSensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
        if (magneticSensor != null)
        {
            mSensorManager.registerListener(this, magneticSensor, sensorRate);
        }
        Sensor gravitySensor = mSensorManager.getDefaultSensor(Sensor.TYPE_GRAVITY);
        if (gravitySensor != null)
        {
            mSensorManager.registerListener(this, gravitySensor, sensorRate);
        }
    }

    public void unregisterListener()
    {
        mSensorManager.unregisterListener(this);
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy)
    {
    }

    @Override
    public void onSensorChanged(SensorEvent event)
    {
        if (event.sensor.getType() == Sensor.TYPE_GRAVITY)
        {
            mGravity = event.values.clone();
        }
        else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD)
        {
            mMagnetic = event.values.clone();
        }
        if (!(mGravity == null || mMagnetic == null))
        {
            // Fills mOrientation as a side effect
            getOrientation();
        }
    }

    private void getOrientation()
    {
        float[] rotMatrix = new float[9];
        if (SensorManager.getRotationMatrix(rotMatrix, null, mGravity, mMagnetic))
        {
            float inclination = (float) Math.acos(rotMatrix[8]);
            // device is flat
            if (inclination < TWENTY_FIVE_DEGREE_IN_RADIAN
                    || inclination > ONE_FIFTY_FIVE_DEGREE_IN_RADIAN)
            {
                float[] orientation = SensorManager.getOrientation(rotMatrix, mOrientation);
                mCompassHist.add(orientation[0]);
                mOrientation[0] = averageAngle();
            }
            else
            {
                mOrientation[0] = Float.NaN;
                clearCompassHist();
            }
        }
    }

    private void clearCompassHist()
    {
        mCompassHistSum[0] = 0;
        mCompassHistSum[1] = 0;
        mCompassHist.clear();
    }

    public float averageAngle()
    {
        int totalTerms = mCompassHist.size();
        if (totalTerms > mHistoryMaxLength)
        {
            float firstTerm = mCompassHist.removeFirst();
            mCompassHistSum[0] -= Math.sin(firstTerm);
            mCompassHistSum[1] -= Math.cos(firstTerm);
            totalTerms -= 1;
        }
        float lastTerm = mCompassHist.getLast();
        mCompassHistSum[0] += Math.sin(lastTerm);
        mCompassHistSum[1] += Math.cos(lastTerm);
        float angle = (float) Math.atan2(mCompassHistSum[0] / totalTerms, mCompassHistSum[1] / totalTerms);
        return angle;
    }
}
In your activity, instantiate a Compass object (say in onCreate), call registerListener in onResume and unregisterListener in onPause:
private Compass mCompass;

@Override
protected void onCreate(Bundle savedInstanceState)
{
    super.onCreate(savedInstanceState);
    mCompass = new Compass(this);
}

@Override
protected void onPause()
{
    super.onPause();
    mCompass.unregisterListener();
}

@Override
protected void onResume()
{
    super.onResume();
    mCompass.registerListener(SensorManager.SENSOR_DELAY_NORMAL);
}
It's better to let Android's implementation of orientation detection handle it. The values you get range from -PI to PI, and you can convert them to degrees (0-360). Some relevant parts:
Saving data to be processed:
@Override
public void onSensorChanged(SensorEvent sensorEvent) {
    switch (sensorEvent.sensor.getType()) {
        case Sensor.TYPE_ACCELEROMETER:
            mAccValues[0] = sensorEvent.values[0];
            mAccValues[1] = sensorEvent.values[1];
            mAccValues[2] = sensorEvent.values[2];
            break;
        case Sensor.TYPE_MAGNETIC_FIELD:
            mMagValues[0] = sensorEvent.values[0];
            mMagValues[1] = sensorEvent.values[1];
            mMagValues[2] = sensorEvent.values[2];
            break;
    }
}
Calculating roll, pitch and yaw (azimuth). mR and mI are arrays that hold the rotation and inclination matrices, and mO is a temporary array. The array mResults contains the values in degrees at the end:
private void updateData() {
    SensorManager.getRotationMatrix(mR, mI, mAccValues, mMagValues);
    /**
     * arg 2: which world (according to app) axis the device's x axis aligns with
     * arg 3: which world (according to app) axis the device's y axis aligns with
     * world x = app's x = app's east
     * world y = app's y = app's north
     * device x = device's left side = device's east
     * device y = device's top side = device's north
     */
    switch (mDispRotation) {
        case Surface.ROTATION_90:
            SensorManager.remapCoordinateSystem(mR, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, mR2);
            break;
        case Surface.ROTATION_270:
            SensorManager.remapCoordinateSystem(mR, SensorManager.AXIS_MINUS_Y, SensorManager.AXIS_X, mR2);
            break;
        case Surface.ROTATION_180:
            SensorManager.remapCoordinateSystem(mR, SensorManager.AXIS_MINUS_X, SensorManager.AXIS_MINUS_Y, mR2);
            break;
        case Surface.ROTATION_0:
        default:
            mR2 = mR;
    }
    SensorManager.getOrientation(mR2, mO);
    //-- upside down when abs(roll) > 90 --
    if (Math.abs(mO[2]) > PI_BY_TWO) {
        //-- fix: azimuth always to true north, even when device upside down, realistic --
        mO[0] = -mO[0];
        //-- fix: roll never upside down, even when device upside down, unrealistic --
        //mO[2] = mO[2] > 0 ? PI - mO[2] : - (PI - Math.abs(mO[2]));
        //-- fix: pitch comes from the opposite side when device goes upside down, realistic --
        mO[1] = -mO[1];
    }
    CircleUtils.convertRadToDegrees(mO, mOut);
    CircleUtils.normalize(mOut);
    //-- write --
    mResults[0] = mOut[0];
    mResults[1] = mOut[1];
    mResults[2] = mOut[2];
}
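CircleUtils here is the answer author's own helper class; a plausible sketch of what those two calls do, under that assumption (the implementations below are hypothetical):
final class CircleUtils {
    // Convert each radian value to degrees.
    static void convertRadToDegrees(float[] radians, float[] degreesOut) {
        for (int i = 0; i < radians.length; i++) {
            degreesOut[i] = (float) Math.toDegrees(radians[i]);
        }
    }

    // Wrap every value into the [0, 360) range.
    static void normalize(float[] degrees) {
        for (int i = 0; i < degrees.length; i++) {
            degrees[i] = ((degrees[i] % 360f) + 360f) % 360f;
        }
    }
}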

Android: getAngleChange - weird results

I'm currently experimenting with some of the sensors of Android phones. For testing I'm using a Samsung Galaxy S; as it does not have a gyroscope, I'm using the accelerometer and the magnetic field sensor.
What I basically want to do is to get a certain angle when moving the device. Let me explain: consider you are holding the phone in landscape mode in front of your face and then you turn yourself by 90 degrees to the right.
I use the following code to get the current rotation matrix:
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        mGravity = event.values.clone();
    }
    if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        mGeomagnetic = event.values.clone();
    }
    if (mGravity != null && mGeomagnetic != null) {
        float R[] = new float[9];
        float I[] = new float[9];
        boolean success = SensorManager.getRotationMatrix(R, I, mGravity, mGeomagnetic);
    }
}
This works well, and then I use SensorManager.getAngleChange(angleChange, R, lastR); to obtain the angle change.
I then get (roughly) 65° in angleChange[1] if I turn myself as described above and do not tilt the phone or change anything else...
But if I also tilt the phone by 90° when turning myself (so that display is looking to the ceiling afterwards) I get (roughly) 90° in angleChange[1].
I'm very confused now as to why such a rotation affects the value in angleChange[1], and, on the other hand, why it is needed to get the expected 90°.
What I want to achieve is to get the angle when moving the phone as described above (not in 90° steps, but that sort of orientation change), no matter which other orientation changes (along the two other axes) are made.
Is there any possibility for this?
