I have written a simple Activity that is a SensorEventListener for Sensor.TYPE_ACCELEROMETER.
In my onSensorChanged(SensorEvent event) I just pick up the X, Y, Z values and write them to a file.
In addition to X, Y, Z I write a label, which is specific to the activity I am performing,
so each line is X,Y,Z,label.
This is how I obtain my activity profile. I would like suggestions on what operations to perform after data collection to remove noise and get the best data for an activity.
The main intent of this data collection is to build a user activity detection application using a neural network library (Neuroph for Android).
Just for fun I wrote a pedometer a few weeks ago, and it would have been able to detect the three activities that you mentioned. I'd make the following observations:
1. In addition to Sensor.TYPE_ACCELEROMETER, Android also has Sensor.TYPE_GRAVITY and Sensor.TYPE_LINEAR_ACCELERATION. If you log the values of all three, you'll notice that the values of TYPE_ACCELEROMETER are always equal to the sum of the values of TYPE_GRAVITY and TYPE_LINEAR_ACCELERATION. The onSensorChanged(…) method first gives you TYPE_ACCELEROMETER, followed by TYPE_GRAVITY and TYPE_LINEAR_ACCELERATION, which are the results of its internal methodology for splitting the accelerometer readings into gravity and the acceleration that's not due to gravity. Given that you're interested in the acceleration due to activities rather than the acceleration due to gravity, you may find TYPE_LINEAR_ACCELERATION is better for what you need.
2. Whatever sensors you use, the X, Y, Z that you're measuring will depend on the orientation of the device. However, for detecting the activities that you mention, the result can't depend on, e.g., whether the user is holding the device in a portrait or landscape position, or whether the device is flat or vertical, so the individual values of X, Y and Z won't be any use. Instead you'll have to look at the length of the vector, i.e. sqrt(X*X + Y*Y + Z*Z), which is independent of the device orientation (a minimal listener sketch follows this answer).
3. You only need to smooth the data if you're feeding it into something which is sensitive to noise. Instead, I'd say that the data is the data, and you'll get the best results if you use mechanisms which aren't sensitive to noise and hence don't need the data to be smoothed. By definition, smoothing is discarding data. You want to design an algorithm that takes noisy data in at one end and outputs the current activity at the other end, so don't prejudge whether it's necessary to include smoothing as part of that algorithm.
Here is a graph of sqrt(X*X + Y*Y + Z*Z) from Sensor.TYPE_ACCELEROMETER which I recorded when I was building my pedometer. The graph shows the readings measured as I walked 100 steps. The green line is sqrt(X*X + Y*Y + Z*Z), the blue line is an exponentially weighted moving average of the green line (which gives me the average level of the green line), and the red line shows my algorithm counting steps. I was able to count the steps just by looking for the maxima and minima and where the green line crosses the blue line. I didn't use any smoothing or Fast Fourier Transforms. In my experience, for this sort of thing the simplest algorithms often work best: although complex ones might work in some situations, it's harder to predict how they'll behave in all situations. And robustness is a vital characteristic of any algorithm :-).
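For illustration, here is a minimal sketch of the kind of listener described above. The class name MagnitudeListener is hypothetical; it assumes the listener is registered for Sensor.TYPE_LINEAR_ACCELERATION.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Minimal sketch: compute the orientation-independent magnitude of linear acceleration.
public class MagnitudeListener implements SensorEventListener {
    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_LINEAR_ACCELERATION) {
            float x = event.values[0], y = event.values[1], z = event.values[2];
            // sqrt(X*X + Y*Y + Z*Z): length of the vector, independent of orientation.
            double magnitude = Math.sqrt(x * x + y * y + z * z);
            // ... write 'magnitude' plus the activity label to your log file here
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
}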
This sounds like an interesting problem!
Have you plotted your data against time to get a feel for it, to see what kind of noise you are dealing with, and to help decide how you might pre-process your data for input to the detector?
    ^
    |
  A |
    |
    |
    |_________________>
           time
I'd start with lines for each activity:
|A| = sqrt(Ax*Ax + Ay*Ay + Az*Az), the magnitude of acceleration
|V| = sqrt(Vx*Vx + Vy*Vy + Vz*Vz), the magnitude of velocity (approximate the velocity components by calculating the area of the trapezoids formed by your data points; see the sketch below)
... etc
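As one possible reading of the trapezoid suggestion above, here is a sketch of the integration. The method name and array inputs are illustrative, not from any library, and note that integration drift accumulates quickly with noisy sensor data.

// Sketch: approximate velocity by integrating acceleration samples with the
// trapezoid rule. 'times' holds sample timestamps in seconds and 'accel' the
// corresponding acceleration values (hypothetical inputs).
static double[] integrateTrapezoid(double[] times, double[] accel) {
    double[] velocity = new double[accel.length];
    for (int i = 1; i < accel.length; i++) {
        double dt = times[i] - times[i - 1];
        // Area of the trapezoid formed by two consecutive data points.
        velocity[i] = velocity[i - 1] + 0.5 * (accel[i] + accel[i - 1]) * dt;
    }
    return velocity;
}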
Maybe you can work out the orientation of the phone by attempting to detect gravity, then rotate your vectors to a 'standard' orientation (e.g. positive Z axis = up). If you can do that, then the different axes may become more meaningful. For example, walking with the phone in a pocket would tend to produce velocity in the horizontal plane, which might be distinguished from walking with the phone in hand by motion in the vertical plane.
As for filters: if the data appears noisy, a simple starting point is to apply a moving average to smooth it. This is a common technique for sensor data in general (a short sketch follows the link):
https://en.wikipedia.org/wiki/Moving_average
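A minimal sketch of such a moving average; the function name and window size are illustrative.

// Sketch: simple (unweighted) moving average over a window of 'window' samples.
static double[] movingAverage(double[] data, int window) {
    double[] out = new double[data.length];
    double sum = 0;
    for (int i = 0; i < data.length; i++) {
        sum += data[i];
        if (i >= window) sum -= data[i - window]; // drop the sample leaving the window
        out[i] = sum / Math.min(i + 1, window);
    }
    return out;
}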
Also, this post seems relevant to your question:
How to remove Gravity factor from Accelerometer readings in Android 3-axis accelerometer
Things I identified:
1. The data has to be preprocessed into the form you need; in my case I just want 3 inputs and one output.
2. The data has to be subjected to smoothing (five-point smoothing, or any other technique that suits you best) so that noise gets filtered out (not completely, though). A moving average is one such technique.
3. Linearized data would be good, because you don't have any idea how the data was sampled; use interpolation to help you linearize the data.
4. Finally, use the FFT (Fast Fourier Transform) to extract the recipe out of the dish, that is, to extract features out of your dataset (a naive sketch follows).
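To make point 4 concrete, here is a naive O(n^2) DFT sketch. A real application would use an FFT library, but the magnitude spectrum it returns is the kind of frequency-domain feature meant above; all names are illustrative.

// Sketch: naive discrete Fourier transform producing a magnitude spectrum
// for a window of samples. Only the first half is informative for real input.
static double[] magnitudeSpectrum(double[] x) {
    int n = x.length;
    double[] mag = new double[n / 2];
    for (int k = 0; k < n / 2; k++) {
        double re = 0, im = 0;
        for (int t = 0; t < n; t++) {
            double angle = 2 * Math.PI * k * t / n;
            re += x[t] * Math.cos(angle);
            im -= x[t] * Math.sin(angle);
        }
        mag[k] = Math.sqrt(re * re + im * im); // energy at frequency bin k
    }
    return mag;
}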
Related
I am trying to use Dynamic Time Warping (DTW) to detect gestures performed with a smartphone, using the accelerometer sensor. I have already implemented a simple DTW algorithm.
So basically I am comparing arrays of accelerometer data (x, y, z) with DTW. One array contains my predefined gesture, the other should contain the measured values. My problem is that the accelerometer sensor continuously measures new values, and I don't know when to start the comparison with my predefined value sequence.
I would need to know when the gesture starts and when it ends, but this might be different for different gestures. In my case all supported gestures start and end at the same point, but as far as I know I can't reliably calculate the traveled distance from acceleration.
So to sum things up: how would you determine the right time to compare my arrays using DTW?
Thanks in advance!
The answer is: you compare your predefined gesture to EVERY subsequence.
You can do this much faster than real time (see [a]).
You need to z-normalize EVERY subsequence, and z-normalize your predefined gesture.
So, by analogy, if your stream was...
NOW IS THE WINTER OF OUR DISCONTENT, MADE GLORIOUS SUMMER..
and your predefined word was MADE, you could compare it with every marked word beginning (denoted by white space):
DTW(MADE, NOW)
DTW(MADE, IS)
DTW(MADE, THE)
DTW(MADE, WINTER)
etc.
In your case, you don't have markers; you have this...
NOWISTHEWINTEROFOURDISCONTENTMADEGLORIOUSSUMMER..
So you just test every offset:
DTW(MADE, NOWI)
DTW(MADE, OWIS)
DTW(MADE, WIST)
DTW(MADE, ISTH)
::
DTW(MADE, TMAD)
DTW(MADE, MADE) // Success!
eamonn
[a] https://www.youtube.com/watch?v=d_qLzMMuVQg
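A rough sketch of this scheme, assuming one-dimensional samples (for 3-axis accelerometer data you might use the vector magnitude per sample). All function names are illustrative, and the brute-force loop ignores the speedups discussed in [a].

// z-normalize a sequence: subtract the mean, divide by the standard deviation.
static double[] zNormalize(double[] s) {
    double mean = 0, sd = 0;
    for (double v : s) mean += v;
    mean /= s.length;
    for (double v : s) sd += (v - mean) * (v - mean);
    sd = Math.sqrt(sd / s.length);
    double[] out = new double[s.length];
    for (int i = 0; i < s.length; i++) out[i] = sd == 0 ? 0 : (s[i] - mean) / sd;
    return out;
}

// Classic O(n*m) DTW distance between two sequences.
static double dtw(double[] a, double[] b) {
    double[][] d = new double[a.length + 1][b.length + 1];
    for (double[] row : d) java.util.Arrays.fill(row, Double.POSITIVE_INFINITY);
    d[0][0] = 0;
    for (int i = 1; i <= a.length; i++)
        for (int j = 1; j <= b.length; j++) {
            double cost = Math.abs(a[i - 1] - b[j - 1]);
            d[i][j] = cost + Math.min(d[i - 1][j - 1], Math.min(d[i - 1][j], d[i][j - 1]));
        }
    return d[a.length][b.length];
}

// Compare the z-normalized template against every z-normalized subsequence.
static int bestOffset(double[] stream, double[] template) {
    double[] q = zNormalize(template);
    int best = -1;
    double bestDist = Double.POSITIVE_INFINITY;
    for (int off = 0; off + template.length <= stream.length; off++) {
        double[] window = java.util.Arrays.copyOfRange(stream, off, off + template.length);
        double dist = dtw(q, zNormalize(window));
        if (dist < bestDist) { bestDist = dist; best = off; }
    }
    return best;
}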
You want to apply DTW not only to a time series, but to a continuously evolving stream. Therefore you will have to use a sliding window of n recent data points.
This is exactly what eamonn described in his second example. His target pattern consists of 4 events (M, A, D, E), and therefore he uses a sliding window with a length of 4.
Yet in this case he makes the assumption that the data stream contains no distortions, such as (M, A, A, D, E). The advantage of DTW is that it allows these kinds of distortions and still recognizes the distorted target pattern as a match. In your case, distortions in time are likely to happen. I assume that you want equal gestures, performed either slowly or quickly, to be recognized as the same gesture.
Thus, the length of the sliding window must be greater than the length of the target pattern (to be able to detect a slow target gesture). This is computationally expensive.
Finally, my point is: I want to recommend this paper to you, the
SPRING algorithm by Sakurai, Faloutsos and Yamamuro.
They optimized the DTW algorithm for data streams. You will no longer need n*n computations per incoming event, but only n. It is basically DTW, but it cuts out all unnecessary computations and only takes the best possible alignment of the template onto the stream into account.
p.s. most of what I know about time-series and pattern matching, I learned by reading what Eamonn Keogh provided. Thanks a lot, Mr. Keogh.
I have been working with Android's calibrated magnetometer for some time, feeding it, together with the rotation vector values, into our algorithm to calculate the correct yaw/orientation relative to North. Setting aside that we do not yet fully project the yaw onto a plane parallel to the ground (to get true yaw independent of pitch), we have noticed that even after we calibrate the magnetometer - using the calibrated magnetometer values and moving the phone in figure eights and other movements/orientations - the calibrated values seem to eventually try to recalibrate.
With this in mind, we decided to look specifically at the uncalibrated values given by Android within our JNI code. Within the struct "ASensorEvent" there is "uncalibrated_magnetic", which is the struct "AUncalibratedEvent" - all of this is defined in "android/sensor.h". I assumed that this would give me uncalibrated values; however, I was mistaken - at least on the devices I checked it on - and was given the supposedly calibrated values. Given that in "sensor.h" the only sensor enums explicitly defined are...
ASENSOR_TYPE_ACCELEROMETER = 1,
ASENSOR_TYPE_MAGNETIC_FIELD = 2,
ASENSOR_TYPE_GYROSCOPE = 4,
ASENSOR_TYPE_LIGHT = 5,
ASENSOR_TYPE_PROXIMITY = 8
...I decided to directly type in 14, assuming this would give me the uncalibrated magnetometer values, since this is the value associated with the uncalibrated magnetometer outside of JNI: http://developer.android.com/reference/android/hardware/Sensor.html#TYPE_MAGNETIC_FIELD
This gave the uncalibrated magnetometer values that corresponded with those outside of JNI.
So, at this point, we decided to plot the values given and we noticed something strange.
Here, the x-axis shows the y-values and the y-axis the z-values given by the uncalibrated magnetometer - though the particular choice of axes is irrelevant, since the effect can be seen across all axes. At the bottom left, you'll notice a "j" figure rotated roughly 150 degrees clockwise. These "j" figure values occurred at the beginning of data collection and lasted for around 20 seconds.
We haven't always seen this in our data collection, but we have seen it around 50% of the time. I really have no idea what this is. I assume it isn't some weird hard-iron offset, since I would expect such an offset to be close to the offset visible in the majority of the data, and I assume it isn't soft-iron skew, because the environment was consistently the same, at least after the first second, until the end of data collection (which lasted about 200 s), and sometimes was the same throughout the whole trace.
I guess we are starting to speculate that we are not truly getting uncalibrated/raw values.
Thanks in advance.
As written on http://developer.android.com/guide/topics/sensors/sensors_position.html#sensors-pos-magunc:
"Factory calibration and temperature compensation are still applied to the magnetic field." Hope it helps!
For an Android application, I need to get magnetic field measurements across the axis of global (world's) coordinate system. Here is how I'm going (guessing) to implement this. Please, correct me if necessary. Also, please, note that the question is about algorithmic part of the task, and not about Android APIs for sensors - I have an experience with the latter.
The first step is to obtain TYPE_MAGNETIC_FIELD sensor data (M) and TYPE_ACCELEROMETER sensor data (G). The second is supposed to be used according to Android's documentation, but I'm not sure whether it shouldn't be TYPE_GRAVITY instead (again as G), because the accelerometer does not seem to provide pure gravity.
Next step is to get rotation matrices via getRotationMatrix(R, I, G, M), where R and I are rotation and inclination matrix correspondingly.
And now comes the most questionable part: in order to convert the M vector into the world's coordinate system, I suppose I should multiply [R * I] * M.
I'm not sure this is a correct way for transforming magnetic field reading into another basis. Also, I don't know if remapCoordinateSystem should be used in addition or as replacement for something above.
If there exists some source code which does this thing already, I'd appreciate posting a link, but I don't want to use big general purposes libraries (for example, for augmented reality support) for this specific task, because I'd like to keep it as simple as possible.
P.S.
I decided to add some information to the original post for clarity.
Let us suppose a device rests on a table and continuously reads data from its magnetic sensor. Each measurement contains 3 values, representing the magnetic field along the X, Y, Z axes of the device's local coordinate system. I take it that I can neglect environmental field fluctuations (smoothed by a low-pass filter), so these 3 values should remain almost the same for as long as the device remains in place. If we rotate the device around any axis, the values change, because we change the local coordinate system; but the field itself has not actually changed. So I want to translate the local X, Y, Z field measurements into values X', Y', Z' that keep their respective values regardless of device rotation, provided that the device is not moved from its location (only rotated).
I've implemented the algorithm described above and got regular and noticeable changes in the values X', Y', Z' obtained through the suggested transformations, so there is something wrong with it.
P.P.S.
By chance I've found an exact duplicate of my question here on SO - How can I get the magnetic field vector, independent of the device rotation? - but unfortunately the answer contains my own suggestions, and the OP of that question confirms that they do not work.
The coordinates of M with respect to the world coordinate system are just the multiplication R * M.
The rotation matrix R is, mathematically, the change-of-basis matrix from the device coordinate system to the world coordinate system.
Let X, Y, Z be the device coordinate basis and W_1, W_2, W_3 be the world coordinate basis; then
M = m_1 X + m_2 Y + m_3 Z
and also
M = c_1 W_1 + c_2 W_2 + c_3 W_3
where R * (m_1, m_2, m_3)^T = (c_1, c_2, c_3)^T.
A low-pass filter is only used to filter out accelerations in the X, Y directions. remapCoordinateSystem is used to change the order of the basis, i.e. changing from W_1, W_2, W_3 to W_1, W_3, W_2. (A code sketch of the R * M computation follows.)
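A minimal sketch of this computation using SensorManager. The method name is illustrative; 'gravity' and 'geomagnetic' are assumed to hold the latest float[3] readings from TYPE_GRAVITY (or TYPE_ACCELEROMETER) and TYPE_MAGNETIC_FIELD.

import android.hardware.SensorManager;

// Sketch: express the magnetic vector M in world coordinates via world = R * M.
static float[] magneticFieldInWorldCoords(float[] gravity, float[] geomagnetic) {
    float[] R = new float[9];
    float[] I = new float[9];
    // Returns false if the device is free-falling or the field is degenerate.
    if (!SensorManager.getRotationMatrix(R, I, gravity, geomagnetic)) return null;
    float[] world = new float[3];
    for (int row = 0; row < 3; row++) {
        // world = R * M, with R stored row-major as a 3x3 matrix.
        world[row] = R[3 * row] * geomagnetic[0]
                + R[3 * row + 1] * geomagnetic[1]
                + R[3 * row + 2] * geomagnetic[2];
    }
    return world;
}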
The magnetometer sensor on your device returns a 3-vector in device coordinates. You can use getRotationMatrix() to get a matrix that could be used to convert that device-coordinates vector to world coordinates. You could also learn about quaternions and use TYPE_ROTATION_VECTOR directly. However, there's no quaternion library in Android (that I know of), and that's a discussion beyond the scope of this question.
However, none of this will do you any good, because the device orientation information is itself based in part on the values from the magnetometers. In other words, the device will always tell you that the magnetic vector is facing exactly North.
Now, what you can do is get magnetic dip. This is one of the outputs from getRotationMatrix(), although you'll have to convert a matrix to an angle for it to be useful. That too, is beyond the scope of this question.
Finally, your last option is to build a table which is level and which has an arrow on it pointing true north. (You'll have to align it by the stars at night or something.) Then, place your device flat on the table with the top of the device facing north. In this case, device coordinates will be the same as world coordinates and the magnetometer sensor will produce the values you want.
Your comments indicate that you're interested in local variations. There's simply no way to get true north with your Android device alone. Theoretically, you could build a table as I described, and then walk around holding the device in strictly the same orientation as before, keeping an eye on the table for reference. I doubt you could pull it off, though.
You could try using gyros in your app to help you keep the device oriented exactly the same way at all times, but the gyros in any Android device you use are likely to drift too much for this to work.
Or perhaps we still don't understand what you're trying to do. Bottom line, though, is that you simply cannot get a global coordinate system with an Android device alone -- whatever you get will always be aligned with the local magnetic field at that exact spot.
I am working on an android app that requires the detection of vertical motion. When moving the tablet upward, the Gyroscope, Accelerometer, and Linear Acceleration sensors give a corresponding value indicating upward or downward motion.
The problem I have is that these sensors will also read an upward/downward motion when you tilt the tablet towards the user or away from the user. For example, the x value in the gyroscope represents the vertical plane. But when you tilt the device forwards, the x value will change.
When I make this motion, the same sensor that reads vertical motion reads a value for this.
The same goes for the rest of the sensors. I have tried to use orientation coupled with the gyro to form the condition: if the pitch is not changing but the x value is going up/down, then we have vertical motion. The problem with this is that if the user moves the device up while it is tilted slightly, it no longer works. I also tried making it so that if there is a change in tilt, then there is no vertical motion. But the loop iterates so quickly that there may be a change in tilt for 1/100 of a second but none in the next.
Is there any way I can read only vertical changes and not changes in the device's pitch?
Here is what I want to detect:
edit:
"Please come up with a mathematically sound definition of what you consider 'moving upwards.'"
This was my initial question: how can I write a function to decide when the tablet is moving upwards or downwards? I consider a vertical translation to be moving upwards. Now how do I detect this? I simply do not know where to begin; thank you.
Ok, even though this question is fairly old, I see a lot of confusion in the present answer and comments, so in case anyone finds this, I intend to clear a few things up.
The Gyroscope
First of all, the gyroscope does not measure vertical motion as per your definition (a translatory motion). It measures rotation around each of the axes, as defined in the standard Android sensor coordinate system (see the axis figure in the Android sensor documentation). Thus, tilting your device forwards and backwards indeed rotates it around the x axis, and therefore you will see non-zero values in the x value of your gyroscope sensor.
the x value in the gyroscope represents the vertical plane.
I'm not sure what is meant by "the vertical plane"; however, the x value certainly does not represent the plane itself, nor the orientation of the device within that plane.
The x value of the gyroscope sensor represents the current angular velocity of the device around the x axis (i.e. the rate of change of rotation).
But when you tilt the device forwards, the x value will change. When I make this motion, the same sensor that reads vertical motion reads a value for this.
Not quite sure what you're referring to here. "The same sensor that reads vertical motion" I assume is the gyroscope, but as previously said, it does not read vertical motion. It does exactly what it says on the tin.
The device coordinate system
This is more in response to user Ali's answer than the original question, but it remains relevant in either case.
The individual outputs of the linear acceleration sensor (or any other sensor, for that matter) are expressed in the coordinate system of the device, as described above. This means that if you rotate the device slightly, the outputs will no longer be parallel to any world axis they coincided with before. As such, you will either have to enforce that the device is held in a particular orientation for your application, or take the changing orientation into account.
The ROTATION_VECTOR sensor, combined with quaternion math or the getRotationMatrixFromVector() method, is one way to translate your measurements from device coordinates to world coordinates (a listener sketch follows below). There are other ways to achieve the same goal, but once achieved, the way you hold your device won't matter for measuring vertical motion.
In either case, the axis you're looking for is the y axis, not the z axis.
(If by any chance you meant "along the device y axis" by "vertical", then ignore all the orientation stuff and simply use the linear acceleration sensor.)
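A minimal listener sketch of this approach. The class name is hypothetical; it assumes registration for both TYPE_ROTATION_VECTOR and TYPE_LINEAR_ACCELERATION, and readings before the first rotation event are not meaningful.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch: rotate linear acceleration from device coordinates into world coordinates.
public class WorldAccelListener implements SensorEventListener {
    private final float[] rotationMatrix = new float[9];
    private final float[] worldAccel = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
            // Keep the latest device-to-world rotation matrix.
            SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        } else if (event.sensor.getType() == Sensor.TYPE_LINEAR_ACCELERATION) {
            float[] a = event.values;
            for (int row = 0; row < 3; row++) {
                // world = R * device, with R stored row-major as a 3x3 matrix.
                worldAccel[row] = rotationMatrix[3 * row] * a[0]
                        + rotationMatrix[3 * row + 1] * a[1]
                        + rotationMatrix[3 * row + 2] * a[2];
            }
            // worldAccel now holds the acceleration in world coordinates; the
            // component along the world's "up" axis gives the vertical motion.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}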
Noise
You mentioned some problems regarding noise and update rates in the question, so I'll just mention this here. The simplest and one of the more common ways to get nice, consistent data from something that varies very often is to use a low-pass filter. What type of filter is best depends on the application, but I find that an exponential moving average filter is viable in most cases (a minimal sketch follows).
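A minimal sketch of such a filter; 'alpha' is a hypothetical tuning constant in (0, 1].

// Sketch: exponential moving average as a low-pass filter.
// Smaller alpha smooths more but reacts more slowly to real changes.
static double ewma(double previousOutput, double newSample, double alpha) {
    return previousOutput + alpha * (newSample - previousOutput);
}

Feeding each new sample through ewma(...) with, say, alpha = 0.1 gives a smoothed track similar in spirit to the blue average line in the pedometer answer earlier in this thread.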
Finishing thoughts
Note that if you take proper care of the orientation, your transformed linear acceleration output will be a good approximation of vertical motion (well, change in motion) without filtering any noise.
Also, if you want to measure vertical "motion", as in velocity, you need to integrate the accelerometer output. For various reasons, this doesn't really turn out too well in most cases, although the errors are less severe when estimating velocity than when trying to measure position.
OK, I suspect it is only a partial answer.
If you want to detect vertical movement, you only need linear acceleration, the device orientation doesn't matter. See
iOS - How to tell if device is raised/dropped (CoreMotion)
or
how to calculate phone's movement in the vertical direction from rest?
For some reason you are concerned with the device orientation as well, and I have no idea why. I suspect that you want to detect something else. So please tell us more and then I will improve my answer.
UPDATE
I read the post on CoreMotion, and you mentioned that higher z and lower x and y mean vertical motion; can you elaborate?
I will write in pseudo code. You measured the (x, y, z) linear acceleration vector. Compute
rel_z = z/sqrt(x^2+y^2+z^2+1.0e-6)
If rel_z > 0.9 then the acceleration towards the z direction dominates (vertical motion). Note that the constant 0.9 is arbitrary and may require tweaking (should be a positive number less than 1). The 1.0e-6 is there to avoid accidental division by zero.
You may have to add another constraint that z is sufficiently large. I don't know your device, whether it measures gravity as 1 or 9.81. I assume it measures it as 1.
So all in all:
if (rel_z > 0.9 && abs(z) > 0.1) { /* we have vertical movement */ }
Again, the constant 0.1 is arbitrary and may require tweaking. It should be positive.
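For completeness, a runnable Java version of the pseudocode above, keeping the same arbitrary constants and the same assumption that gravity is measured as 1; the (x, y, z) values are assumed to come from the linear acceleration sensor.

// Sketch: true if the measured acceleration is dominated by the z direction.
static boolean isVerticalMotion(float x, float y, float z) {
    // 1.0e-6 avoids accidental division by zero.
    double relZ = z / Math.sqrt(x * x + y * y + z * z + 1.0e-6);
    return relZ > 0.9 && Math.abs(z) > 0.1; // both thresholds may need tweaking
}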
UPDATE 2
I do not want this because rotating it towards me is not moving it upwards
It is moving upwards: The center of mass is moving upwards. My code has the correct behavior.
Please come up with a mathematically sound definition of what you consider "moving upwards."
In the Android documentation, the third parameter is specified as
float[] gravity
and then it specifies
[0 0 g] = R * gravity (g = magnitude of gravity)
Now, in most of the examples online I can see everyone sending accelerometer values to getRotationMatrix, but shouldn't I send only gravity values?
For example, if the mobile phone has a gravity sensor,
should I send its raw output to getRotationMatrix?
If it doesn't have one, should I send accelerometer values? Should I extract the non-gravity components first (as accelerometer values are acceleration minus g)?
Will using gravity sensor values be more reliable than using accelerometer values on mobile phones that have that sensor?
Thanks in advance! Guillermo.
I think the reason you only see examples using the accelerometer values is that the gravity sensor was only introduced in API 9, and also that most phones might not provide these values separately from the accelerometer values, or don't have the sensor, etc.
Another reason would be that in most cases the results tend to be the same: what the accelerometer sensor outputs is the device's linear acceleration plus gravity, and most of the time the phone will be standing still or moving at a constant velocity, so the device's linear acceleration will be zero.
From the getRotationMatrix Android docs:
The matrices returned by this function are meaningful only when the device is not free-falling and it is not close to the magnetic north. If the device is accelerating, or placed into a strong magnetic field, the returned matrices may be inaccurate.
Now, you're asking whether the gravity data would be more reliable? Well, there is nothing like testing, but I suppose it wouldn't make much difference, and it really depends on which application you have in mind. Also, obtaining clean gravity values from the raw accelerometer is not trivial and requires filtering, so you could end up with noisy results (a low-pass sketch follows).
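As a sketch of that filtering idea (not necessarily what any particular phone does internally), gravity can be approximated from raw accelerometer data with a simple low-pass filter. The class name and ALPHA value are hypothetical; the filtered array could then be passed as the third argument to getRotationMatrix().

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

// Sketch: isolate the gravity component of raw accelerometer readings.
public class GravityFilterListener implements SensorEventListener {
    private static final float ALPHA = 0.8f; // closer to 1 = smoother, slower to react
    private final float[] gravity = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            for (int i = 0; i < 3; i++) {
                // Keep the slowly varying component (gravity); discard fast changes.
                gravity[i] = ALPHA * gravity[i] + (1 - ALPHA) * event.values[i];
            }
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}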