I am making a Raspberry Pi robot with an FPV (first-person view) camera mounted on a pan/tilt servo. I want to make it VR compatible by connecting it to my phone, but my phone doesn't have a gyroscope sensor to detect horizontal movements; it only has a magnetometer and an accelerometer. How can I combine data from the accelerometer and magnetometer to make a virtual gyroscope that can move with my camera? I am a noob at all of this.
Your phone should have a rotation vector sensor that is already fusing the two. You will not get better results than it.
Note that this will not be as high quality as a proper gyroscope and will have artifacts if the robot moves.
If you're still interested in how to do this yourself: you can get roll and pitch information from the accelerometer, then get yaw information from the magnetometer. It's best to find a library for 3D math and do this with quaternions or matrices. This seems like a use case where you will hit gimbal lock easily, so Euler angles will be problematic.
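For a rough idea of the math, here is a minimal sketch of that accelerometer-plus-magnetometer fusion (the class and method names are made up, and the axis signs assume an aerospace-style frame of x forward, y right, z down, which may not match your device):

    // Tilt-compensated orientation from accelerometer + magnetometer.
    // Roll/pitch come from the gravity direction, so they are only valid
    // while the device is not accelerating.
    public final class TiltCompass {
        /** Returns {roll, pitch, yaw} in radians. a = accelerometer, m = magnetometer. */
        public static double[] orientation(double ax, double ay, double az,
                                           double mx, double my, double mz) {
            double roll = Math.atan2(ay, az);
            double pitch = Math.atan2(-ax, Math.sqrt(ay * ay + az * az));

            // Rotate the magnetic field back into the horizontal plane,
            // then take its heading as yaw.
            double bx = mx * Math.cos(pitch) + my * Math.sin(pitch) * Math.sin(roll)
                      + mz * Math.sin(pitch) * Math.cos(roll);
            double by = my * Math.cos(roll) - mz * Math.sin(roll);
            double yaw = Math.atan2(-by, bx);

            return new double[] { roll, pitch, yaw };
        }
    }

The accelerometer-only roll/pitch goes wrong whenever the robot accelerates, which is exactly why the built-in rotation vector sensor's fusion is the better option.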
I guess you want to use this for VR? Don't try to move the servos to compensate for head movement directly; you'll only make a motion sickness generator. Look at how timewarp works: you move the servos in the general direction the person is looking and render the video reprojected onto a sphere. This way you have almost zero lag.
I am trying to understand the process of sensor fusion, and along with it, Kalman filtering.
My goal is to detect a fall of a device using an accelerometer and a gyroscope.
Most papers, such as this one, describe how to overcome drift from the gyroscope and noise from the accelerometer. Eventually the sensor fusion provides us with better measurements of roll, pitch and yaw, but not better acceleration.
Is it possible to get better acceleration results by sensor fusion and in turn use those for fall detection? Better roll, pitch and yaw alone are not enough to detect a fall.
However, this source recommends smoothing the accelerometer (Ax, Ay, Az) and gyroscope (Gx, Gy, Gz) readings individually using a Kalman filter, then using a classification algorithm such as k-NN or clustering to detect a fall with supervised learning.
The classification part is not my problem; it is whether I should fuse the sensors (3D accelerometer and 3D gyroscope) or smooth the sensors separately, given my goal of detecting a fall.
Several clarifications
A Kalman filter is typically used to perform sensor fusion for position and orientation estimation, usually to combine an IMU (accelerometer and gyro) with some non-drifting absolute measurement (computer vision, GPS).
A complementary filter is typically used to get a good orientation estimate by combining the accelerometer (noisy but non-drifting) with the gyro (accurate but drifting). You can think of the orientation estimate as coming primarily from the gyro, but corrected using the accelerometer.
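For intuition, a one-axis complementary filter is only a few lines. A minimal sketch (the class name and the 0.98 blend factor are illustrative assumptions, not tuned values):

    // One-axis complementary filter: integrate the gyro (accurate short-term,
    // drifts long-term) and pull the result toward the accelerometer-derived
    // angle (noisy short-term, but stable long-term).
    public final class ComplementaryFilter {
        private static final double ALPHA = 0.98; // weight given to the gyro path
        private double angle; // radians

        /** gyroRate in rad/s, accelAngle in radians, dt in seconds. */
        public double update(double gyroRate, double accelAngle, double dt) {
            angle = ALPHA * (angle + gyroRate * dt) + (1 - ALPHA) * accelAngle;
            return angle;
        }
    }

The closer ALPHA is to 1, the more you trust the gyro between accelerometer corrections.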
For the application of fall detection using an IMU, I believe that acceleration is very important. There is no known way to "correct" the acceleration reading, and thinking of it this way is likely the wrong approach. My suggestion is to use the accelerations as one of your inputs to the system and collect a bunch of data simulating the fall situation; you might be surprised how many viable signals are there.
I don't think you need a Kalman filter to detect a fall. A simple accelerometer will be able to detect the fall of a device. If you apply a low-pass filter to smooth the accelerometer and check whether the total acceleration stays close to zero for more than a certain duration, you can flag it as a fall (in free fall the device accelerates at -g, 9.8 m/s², so the accelerometer itself reads near zero).
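A rough sketch of that heuristic (the class name, the 2 m/s² threshold, and the 300 ms duration are illustrative guesses, not validated values):

    // Free-fall heuristic: low-pass filter the acceleration magnitude and
    // report a fall once it stays near zero for a minimum duration.
    public final class FreeFallDetector {
        private static final double SMOOTHING = 0.9;              // low-pass factor
        private static final double THRESHOLD = 2.0;              // m/s^2, "close to zero"
        private static final long MIN_DURATION_NS = 300_000_000L; // 300 ms

        private double smoothedMagnitude = 9.81; // start at rest (1 g)
        private long fallStartNs = -1;

        /** Feed one accelerometer sample; returns true once a fall is detected. */
        public boolean update(double ax, double ay, double az, long timestampNs) {
            double magnitude = Math.sqrt(ax * ax + ay * ay + az * az);
            smoothedMagnitude = SMOOTHING * smoothedMagnitude + (1 - SMOOTHING) * magnitude;

            if (smoothedMagnitude < THRESHOLD) {
                if (fallStartNs < 0) fallStartNs = timestampNs;
                return timestampNs - fallStartNs >= MIN_DURATION_NS;
            }
            fallStartNs = -1;
            return false;
        }
    }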
The issue with the above approach is that if the device is rotating fast, the acceleration won't be close to zero. For a more robust solution, you can implement a simple complementary filter (search for Mahony) rather than a Kalman filter for this application.
I am trying to estimate heading from accelerometer, gyro and magnetometer data. I have implemented a complementary filter from here.
Holding the phone in my hand, I walked 15 steps in a straight line and tried to estimate the Euler angles as given in the link above. But when I plot the raw data, I observe that the magnetometer data deviates. Here are the images of the raw sensor data.
My question is: how do I estimate the Euler angles so that they indicate I am walking in a straight line?
Were you walking indoors, or somewhere there might be electrical or magnetic fields? Judging by the magnetometer graph, that's what's happening. If so, I'm afraid there's no solution to your problem.
Try running a compass app, or even carrying a toy compass, and then walk the same path. I bet you'll see the compass swing as you walk.
I've been in environments where moving a compass three feet down a table can cause it to swing 180 degrees.
As an aside: you almost never want Euler angles. They have degenerate cases. Try to use transformation matrices or quaternions if you can.
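On Android, for instance, you can pull a quaternion straight from the rotation vector sensor instead of going through Euler angles at all. A minimal sketch (the class name is made up; registration of the listener is omitted):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class QuaternionListener implements SensorEventListener {
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
            float[] q = new float[4]; // filled as [w, x, y, z]
            SensorManager.getQuaternionFromVector(q, event.values);
            // Compose or interpolate q directly; no gimbal lock to worry about.
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }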
I need to create an app that calculates the device's velocity, with x/y/z speed.
My idea is to use the device's accelerometer and gyroscope, like this pipeline.
I wanted to know whether the accelerometer and gyroscope are the right sensor choice for this (in the pipeline).
Which rotation matrix should I use for this?
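Since the pipeline diagram isn't shown here, the following is only one common way such a pipeline is wired on Android, a sketch under assumptions: it takes orientation from TYPE_ROTATION_VECTOR and gravity-free acceleration from TYPE_LINEAR_ACCELERATION, rotates the latter into the world frame, and integrates once. Be warned that the integrated velocity drifts quickly because accelerometer noise accumulates:

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class VelocityEstimator implements SensorEventListener {
        private final float[] rotationMatrix = new float[9]; // device -> world
        private final float[] velocity = new float[3];       // world frame, m/s
        private long lastTimestampNs = -1;

        @Override
        public void onSensorChanged(SensorEvent event) {
            switch (event.sensor.getType()) {
                case Sensor.TYPE_ROTATION_VECTOR:
                    // Orientation from the fused rotation vector sensor.
                    SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
                    break;
                case Sensor.TYPE_LINEAR_ACCELERATION:
                    // Gravity-removed acceleration, still in the device frame.
                    if (lastTimestampNs > 0) {
                        float dt = (event.timestamp - lastTimestampNs) * 1e-9f;
                        for (int i = 0; i < 3; i++) {
                            // Row i of the rotation matrix maps device to world.
                            float worldAccel = rotationMatrix[3 * i] * event.values[0]
                                    + rotationMatrix[3 * i + 1] * event.values[1]
                                    + rotationMatrix[3 * i + 2] * event.values[2];
                            velocity[i] += worldAccel * dt; // integrate once
                        }
                    }
                    lastTimestampNs = event.timestamp;
                    break;
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }

You would register this one listener for both Sensor.TYPE_ROTATION_VECTOR and Sensor.TYPE_LINEAR_ACCELERATION.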
So, right now I'm grabbing the accelerometer data and converting it to a rough estimate of the angle at which the phone is being held. For now I'm just focused on the yaw axis.
My area of interest is between 0 and 45 degrees on the yaw axis, so I made a limited queue of the past 5 to 10 readings and compared the numbers to determine whether the angle is going up or down. This kind of works, but it is slow and not as precise or reliable as I'd want it to be.
Is there a way to determine which direction your phone is rotating with just the accelerometer and the magnetic field sensor, without keeping a history of past readings? I'm really new to sensor manipulation and Android in general. Any help understanding this would be great.
It's not clear exactly what you're looking for here, position or velocity. Generally speaking, you don't want to get a position measurement by integrating the accelerometer data; there's a lot of error associated with that calculation.
If you literally want the "direction your phone is rotating," rather than the angular position, you can get that directly from the gyroscope sensor, which provides rotational velocities; the sign of the velocity gives you the direction without storing any data. Be aware that not every phone has a gyroscope sensor, though the newer ones generally do.
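For example (a minimal sketch; the class name is made up, and it assumes you care about rotation around the device's z axis):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    public class SpinDirectionListener implements SensorEventListener {
        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_GYROSCOPE) return;
            float yawRate = event.values[2]; // rad/s around the device z axis
            boolean counterClockwise = yawRate > 0; // the sign alone is the direction
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }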
If you want the absolute orientation of the phone (angular position), you can use the Rotation Vector sensor. This is a combined sensor that automatically fuses data from several of the physical sensors in one go and provides additional accuracy. From this, you can get roll, pitch and yaw with a single measurement. Basically, you first get your data from the TYPE_ROTATION_VECTOR sensor, then feed it to getRotationMatrixFromVector, and pass the output of that to getOrientation (see the same page as the previous link), which will spit out roll-pitch-yaw measurements for you. You might need to rotate the axes around a bit to get the angles measured positive in the direction you want.
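Put together, that pipeline looks roughly like this (a minimal sketch; the class name is made up and sensor registration is omitted):

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class OrientationListener implements SensorEventListener {
        private final float[] rotationMatrix = new float[9];
        private final float[] orientation = new float[3];

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
            SensorManager.getOrientation(rotationMatrix, orientation);
            float azimuth = orientation[0]; // yaw, in radians
            float pitch = orientation[1];
            float roll = orientation[2];
            // ... use the angles ...
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { }
    }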
I am writing an OpenGL application in which I have to rotate the camera when the Android device is rotated/tilted along the Z axis.
I tried SensorManager.getOrientation(R, orientVals) using the magnetometer and accelerometer sensors, but the values fluctuate a lot.
A gyroscope is also available on my device.
Since I am animating (rotating) the camera, I need smooth rotation values.
Please guide me in this regard.
See How do you calculate the rate of rotation using the accelerometer values in Android for a particular axis on how to read Android's software-derived sensors that combine the data from the accelerometers, magnetometers, and (if available) gyroscopes.
To smooth the values, use a low-pass filter or (better but more complicated) a Kalman filter. I suspect that Android's software-derived sensors, such as the rotation vector sensor, already use a Kalman filter to combine data from the different sensors. (One could search the source code...)
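A simple exponential low-pass filter is often enough to stop the jitter. A minimal sketch (the class name and the 0.9 coefficient are illustrative assumptions; raise the coefficient for more smoothing at the cost of more lag):

    // Exponential low-pass filter for a vector of sensor readings,
    // e.g. the three angles returned by SensorManager.getOrientation().
    public final class LowPassFilter {
        private static final float ALPHA = 0.9f; // closer to 1 = smoother, laggier
        private float[] smoothed;

        public float[] filter(float[] raw) {
            if (smoothed == null) smoothed = raw.clone();
            for (int i = 0; i < raw.length; i++) {
                smoothed[i] = ALPHA * smoothed[i] + (1 - ALPHA) * raw[i];
            }
            return smoothed;
        }
    }

One caveat: angles wrap around at ±π, so naive smoothing misbehaves near the wrap point; filtering quaternions or the rotation vector itself avoids that.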