I'm trying to run some pattern-detection experiments with Android motion sensor data, especially accelerometer data. First, in a recording mode, I capture the sensor data associated with my pattern. Then, in a detection mode, I repeat the same pattern and somehow match the two datasets (using data analytics, machine learning, etc.) to detect whether I performed the same pattern again, so I can raise a flag.
For instance, I gather sensor data while rotating the phone in a circle or an S-like movement. Then I would like to detect whether a later movement matches that pattern or not.
One way I think it could be done is to keep 100 samples of my gesture, say, a gesture 5 seconds long sampled at 20 Hz. Then we would apply ML to the second dataset to see whether it is almost a "match".
Does anyone have experience with this kind of sensor data recognition? Any help or suggestions on how to achieve it?
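One simple, non-ML baseline for this kind of matching is to resample both recordings to the same fixed length and compare them directly. Below is a minimal sketch (single axis, NumPy; the resample length and threshold are arbitrary assumptions, not a tuned recipe):

```python
import numpy as np

def resample(signal, n=100):
    """Linearly resample a 1-D signal to exactly n points."""
    signal = np.asarray(signal, dtype=float)
    old = np.linspace(0.0, 1.0, len(signal))
    new = np.linspace(0.0, 1.0, n)
    return np.interp(new, old, signal)

def gesture_distance(a, b, n=100):
    """Mean absolute difference between two resampled, zero-centred gestures."""
    a = resample(a, n)
    b = resample(b, n)
    return float(np.mean(np.abs((a - a.mean()) - (b - b.mean()))))

def is_match(recorded, candidate, threshold=0.5):
    """Crude matcher: a small distance means 'same gesture'. Threshold is arbitrary."""
    return gesture_distance(recorded, candidate) < threshold
```

A real gesture would use all three axes (and likely dynamic time warping to tolerate speed differences), but a distance baseline like this is useful to have before reaching for ML.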
Related
I'd like my app to be able to detect when the user carrying the phone falls, using only accelerometer data (because it's the only sensor available on all smartphones).
I first tried to implement an algorithm to detect free fall: total acceleration nearing zero, followed by a high-acceleration spike from hitting the ground, then a short period of motionlessness to rule out false positives such as a user quickly walking downstairs. But there are a lot of ways to fall, and for my implementation I can always find a case where a fall is missed, or where a fall is wrongly detected.
I think Machine Learning can help me solve this issue, by learning from a lot of sensor values coming from different devices, with different sampling rates, what is a fall and what is not.
TensorFlow seems to be what I need, as it can run on Android, but while I could find tutorials for using it for offline image classification (here, for example), I couldn't find any help for building a model that learns patterns from motion sensor values.
I tried to learn TensorFlow from its Getting Started page, but failed, probably because I'm not fluent in Python and have no machine learning background. (I'm fluent in Java and Kotlin, and used to the Android APIs.)
I'm looking for help from the community to help me use Tensorflow (or something else in machine learning) to train my app to recognize falls and other motion sensors patterns.
As a reminder, Android reports motion sensor values at an irregular rate, but provides a nanosecond timestamp for each sensor event, which can be used to infer the time elapsed since the previous event; the readings are provided as a 32-bit float for each axis (x, y, z).
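Because the events arrive at an irregular rate, most models will want the stream interpolated onto a fixed-rate grid first. A minimal sketch of that pre-processing (NumPy, single axis; the 50 Hz target rate and linear interpolation are assumptions):

```python
import numpy as np

def to_fixed_rate(timestamps_ns, values, rate_hz=50):
    """Interpolate irregularly-timed sensor events onto a fixed-rate grid.

    timestamps_ns: event timestamps in nanoseconds, as Android reports them.
    values: one reading per timestamp (a single axis).
    Returns (grid_seconds, resampled_values).
    """
    t = (np.asarray(timestamps_ns, dtype=float) - timestamps_ns[0]) / 1e9
    n = int(t[-1] * rate_hz) + 1          # number of fixed-rate samples
    grid = np.arange(n) / rate_hz
    return grid, np.interp(grid, t, np.asarray(values, dtype=float))
```

The same function would be applied per axis; the resulting fixed-length windows are what a classifier is trained on.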
If your data is well organized, then you might be able to use the Java-based Weka machine learning environment:
http://www.cs.waikato.ac.nz/ml/weka/
You can use Weka to experiment with all the different algorithms on your data. Weka uses an ARFF file for the data; it's pretty easy to create one if you have your data in JSON or CSV.
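For illustration, writing an ARFF file by hand takes only a few lines. A sketch (the relation, attribute names and labels below are made up for the example):

```python
def write_arff(path, relation, attributes, rows):
    """Write numeric feature rows plus a nominal class label as a Weka ARFF file.

    attributes: list of numeric attribute names.
    rows: list of (feature_list, label) pairs.
    """
    labels = sorted({label for _, label in rows})
    with open(path, "w") as f:
        f.write(f"@RELATION {relation}\n\n")
        for name in attributes:
            f.write(f"@ATTRIBUTE {name} NUMERIC\n")
        # the class attribute is nominal: an enumeration of the observed labels
        f.write(f"@ATTRIBUTE class {{{','.join(labels)}}}\n\n@DATA\n")
        for features, label in rows:
            f.write(",".join(str(v) for v in features) + f",{label}\n")
```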
Once you find an algorithm/model that works, you can easily put it into your Android app:
http://weka.wikispaces.com/Use+Weka+in+your+Java+code
You really don't need TensorFlow if you don't require deep learning algorithms, which I don't think you do. If you did need deep learning, then Deeplearning4j is a Java-based open-source solution that runs on Android:
https://deeplearning4j.org/android
STEP 1)
Create a training database.
You need samples of accelerometer data labelled 'falling' and 'not falling'.
So you will basically record the acceleration in different situations and label each recording. To give an order of magnitude of the quantity of data: 1,000 to 100,000 periods of 0.5 to 5 seconds.
STEP 2)
Use scikit-learn with Python. Try different models to classify your data.
X is a matrix whose rows are your samples over the three acceleration axes.
Y is your target (falling / not falling).
You will create a classifier that can classify X to Y.
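A minimal sketch of this step with scikit-learn. Everything below is an assumption for illustration: the windows are synthetic stand-ins for real labelled recordings, the summary-statistics features and the RandomForest choice are just one reasonable starting point:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def features(window):
    """Collapse an (n_samples, 3) acceleration window into simple statistics."""
    w = np.asarray(window)
    return np.concatenate([w.mean(axis=0), w.std(axis=0), w.min(axis=0), w.max(axis=0)])

def fake_window(falling):
    """Synthetic stand-in for one labelled recording (100 samples, 3 axes)."""
    w = rng.normal(0.0, 0.3, size=(100, 3))
    w[:, 2] += 9.81                                # gravity on z while at rest
    if falling:
        w[40:60, 2] -= 9.81                        # free-fall dip toward zero g
        w[60] += rng.normal(0.0, 1.0, 3) + 25.0    # impact spike on all axes
    return w

X = np.array([features(fake_window(i % 2 == 0)) for i in range(400)])
y = np.array(["falling" if i % 2 == 0 else "not_falling" for i in range(400)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
score = clf.score(X_test, y_test)
```

With real recordings you would replace `fake_window` with windows cut from your labelled data; the held-out `score` is what tells you whether to try a different model.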
STEP 3)
Make your classifier compatible with Android.
sklearn-porter will port your classifier to the language you like:
https://github.com/nok/sklearn-porter
STEP 4)
Implement the ported classifier in your app and feed it sensor data.
I want to detect a specific pattern of motion on an Android mobile phone, e.g. whether I do five sit-stands.
[Note: I am currently detecting the motion but the motion in all direction is the same.]
What I need is:
I need to differentiate the motion downward, upward, forward and backward.
I need to find the height of the mobile phone from ground level (and the height of the person holding it).
Is there any sample project which has pattern motion detection implemented?
This isn't impossible, but it may not be extremely accurate, even though the accuracy of the accelerometers and gyroscopes in phones has improved a lot.
What your app will be doing is taking sensor data and performing a regression analysis.
1) You will need to build a model of data that you classify as five sit-stands. This could be done by asking the user to perform five sit-stands, or by shipping the app with a more fine-tuned model built from data you've collected beforehand. There may be tricks you can use, such as loading several models for people of different heights and asking users to enter their height in the app, so the best model is used.
2) When run, your app will be trying to fit the data from the sensors (Android has great libraries for this), to the model that you've made. Hopefully, when the user performs five sit-stands, he will generate a set of motion data similar enough to your definition of five sit-stands that your algorithm accepts it as such.
A lot of the work here is assembling and classifying your model, and playing with it until you get acceptable accuracy. Focus on what makes a sit-stand unique compared with other up-and-down motions. For instance, there might be a telltale sign of extending the legs in the data, followed by a different shape for straightening up fully. Or, if you expect the phone to be in a pocket, you may not see much rotational motion, so you can reject test sets that register a lot of change from the gyroscope.
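The "telltale sign" idea can be prototyped with something as crude as counting peaks in the total acceleration magnitude, one peak per repetition. A sketch (the threshold is an arbitrary assumption you would tune against recorded data):

```python
import math

def magnitude(ax, ay, az):
    """Total acceleration magnitude from one x/y/z accelerometer reading."""
    return math.sqrt(ax * ax + ay * ay + az * az)

def count_peaks(signal, threshold):
    """Count local maxima above a threshold: a crude repetition counter."""
    peaks = 0
    for i in range(1, len(signal) - 1):
        if signal[i] > threshold and signal[i] > signal[i - 1] and signal[i] >= signal[i + 1]:
            peaks += 1
    return peaks
```

Feeding a magnitude trace of five sit-stands into `count_peaks` and checking for exactly five peaks is roughly the baseline that a trained model then has to beat.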
It is impossible. You can recognize downward and upward motion by comparing acceleration with the main gravity force, but how do you know whether your phone is in your back pocket while you rise, or just in your waving hand while you say hello? Was it 5 stand-ups or 5 hellos?
Forward and backward are even more unpredictable. What is forward for an upside-down phone? What is forward at all, from the phone's point of view?
And ground level, as well as height, is completely beyond measurement. The phone will move and produce exactly the same accelerations for a dwarf or a giant; it depends more on the person's behavior than on their height.
It's a topic of research, and I'm probably way too late to post here, but I'm foraging the literature anyway, so why not?
All kinds of machine learning approaches have been applied to the issue; I'll mention some along the way. Andrew Ng's MOOC on machine learning gives you an entry point into the field and into Matlab/Octave that you can instantly put into practice, and it demystifies the monsters too ("support vector machine").
I'd like to detect whether somebody is drunk from phone acceleration and maybe angle, so I'm flirting with neural networks for the issue (they're good for basically every issue, if you can afford the hardware), since I don't want to assume pre-defined patterns to look for.
Your task could be approached pattern-based, it seems; similar approaches have been applied to classify golf swings, dancing, everyday walking patterns, and twice to drunk-driving detection, where one paper addresses the issue of finding a baseline for what is actually longitudinal motion as opposed to every other direction. That might help with the baselines you need, like what ground level is.
It is a dense shrub of aspects and approaches, below just some more.
Lim e.a. 2009: Real-time End Point Detection Specialized for Acceleration Signal
He & Yin 2009: Activity Recognition from Acceleration Data Based on Discrete Cosine Transform and SVM
Dhoble e.a. 2012: Online Spatio-Temporal Pattern Recognition with Evolving Spiking Neural Networks utilising Address Event Representation, Rank Order, and Temporal Spike Learning
Panagiotakis e.a.: Temporal segmentation and seamless stitching of motion patterns for synthesizing novel animations of periodic dances
This one uses visual data, but walks you through a Matlab implementation of a neural network classifier:
Symeonidis 2000: Hand Gesture Recognition Using Neural Networks
I do not necessarily agree with Alex's response. This is possible (although maybe not as accurate as you would like) using the accelerometer, device rotation and a lot of trial/error and data mining.
The way I see this working is by defining a specific way the user holds the device (or the device is locked and positioned on the user's body). As they go through the motions, the orientation combined with acceleration and time will determine what sort of motion is being performed. You will need classes like OrientationEventListener, SensorEventListener, SensorManager and Sensor, and various timers, e.g. Runnables or TimerTasks.
From there, you need to gather a lot of data. Observe, record and study what the numbers are for doing specific actions, and then come up with a range of values that define each movement and sub-movements. What I mean by sub-movements is, maybe a situp has five parts:
1) Rest position where phone orientation is x-value at time x
2) Situp started where phone orientation is range of y-values at time y (greater than x)
3) Situp is at final position where phone orientation is range of z-values at time z (greater than y)
4) Situp is in rebound (the user is falling back down to the floor) where phone orientation is range of y-values at time v (greater than z)
5) Situp is back at rest position where phone orientation is x-value at time n (greatest and final time)
Add acceleration to this as well, because there are certain circumstances where acceleration can be assumed. For example, my hypothesis is that people perform the actual situp (steps 1-3 in the breakdown above) with faster acceleration than when they fall back. In general, most people fall back more slowly because they cannot see what's behind them. That can also be used as an additional condition to determine the direction of the user. This is probably not true for all cases, however, which is why your data mining is necessary: I can also hypothesize that if someone has done many situps, the final situp is very slow and they just collapse back down to rest position from exhaustion. In that case the acceleration would be the opposite of my initial hypothesis.
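The five sub-movements above could be sketched as a small state machine over an orientation value. In this sketch the input is a pitch-like angle in degrees, and every name and threshold is an illustrative assumption to be replaced by mined values:

```python
# Phases of one situp as a state machine over a pitch-like orientation value.
REST, RISING, TOP, FALLING = range(4)

def count_situps(pitch_series, rest=10.0, top=70.0):
    """Count full rest -> top -> rest cycles in a stream of pitch angles (degrees)."""
    state, reps = REST, 0
    for p in pitch_series:
        if state == REST and p > rest:
            state = RISING                 # left the rest position
        elif state == RISING and p > top:
            state = TOP                    # reached the final situp position
        elif state == TOP and p < top:
            state = FALLING                # rebounding back toward the floor
        elif state == FALLING and p < rest:
            state = REST
            reps += 1                      # one complete cycle finished
    return reps
```

A real implementation would add acceleration and timing conditions to each transition, exactly as the breakdown above suggests.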
Lastly, check out Motion Sensors: http://developer.android.com/guide/topics/sensors/sensors_motion.html
All in all, it is really a numbers game combined with your own "guestimation". But you might be surprised at how well it works. Perhaps (hopefully) good enough for your purposes.
Good luck!
I'm working on sensor fusion with the accelerometer, gyroscope and magnetic field sensors on Android. Thanks to SensorManager I can be notified of each new value from these sensors.
In practice, and this is the case for my Nexus 5 (I'm not sure about other Android devices), acceleration, rotation rate and magnetic field are sampled at the same time; we can verify this using event.timestamp.
On other systems (like iOS, Xsens...), the sensor SDK provides a notification with these 3 vectors at the same time.
Of course, when I receive acceleration(t), I can write a few lines of code with arrays to wait for rotationRate(t) and magneticField(t). But if there is a way to access these 3 vectors together directly, it would be very interesting to know!
Another question related to sensor data: is there any advice from the Android team to device manufacturers to provide data in chronological order?
Thank you,
Thibaud
Short answer, no, Android doesn't provide a way to get all the sensor readings as it reads them.
Furthermore, the behavior you've observed with SensorManager, namely that readings from different sensors happen to share a timestamp suggesting they were read together, should not be relied upon. No documentation guarantees this behavior (it is also likely a quirk of your test setup and update configuration), so relying on it could come back to bite you in a future update (and trying to take advantage of it is likely much harder to get right, or fast, than the approach I outline below).
Generally, unless all results are generated by the same sensor, it is impossible to get them all "at the same time". Furthermore, just about all of these sensors are noisy, so you would need to do some smoothing anyway if you read them as fast as possible.
So, what you could do is sample them pretty quickly, then at specific intervals, report the latest sample from all sensors (or some smoothed value that accounts for the delta between sample time and report time). This is a trivial amount of extra code, especially if you're already smoothing noisy sensor data.
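The "sample fast, report the latest snapshot at intervals" pattern might look like the sketch below, with a Python class standing in for per-sensor onSensorChanged callbacks feeding a fixed-interval reporting timer (the class and method names are made up for illustration):

```python
class SensorSnapshot:
    """Keep the latest reading per sensor; emit a combined snapshot on demand."""

    def __init__(self, sensors):
        self.latest = {name: None for name in sensors}

    def on_reading(self, sensor, value, timestamp_ns):
        """Called from each sensor's callback as fast as readings arrive."""
        self.latest[sensor] = (value, timestamp_ns)

    def snapshot(self):
        """Called on a fixed-interval timer; returns combined readings,
        or None until every sensor has reported at least once."""
        if any(v is None for v in self.latest.values()):
            return None
        return dict(self.latest)
```

On Android the `snapshot()` call would sit in a timer task, and each stored value could be a smoothed estimate rather than the raw latest reading.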
There is a workaround for this particular problem. When multiple registered listeners are present in an activity at the same time, the timestamps for those events may be misleading. But you can add multiple Fragment objects to the activity, each with a different context, and listen to each sensor from a separate fragment. With this approach the sensor-reading timestamps become reliable.
Or listen in parallel threads if you know about concurrency...
I was trying to port Sebastian Madgwick's sensor fusion algorithm ( http://www.x-io.co.uk/node/8 ) to Android, but the first results don't seem to be correct: the resulting quaternion moves everywhere while the phone is steady. One problem might be that I'm not able to sample the three sensors (gyro, accelerometer and magnetometer) at the same time, but it looks like the Android sensor manager doesn't allow that.
Has anybody succeeded in porting the algorithm with better results?
Thanks in advance
I haven't implemented this on Android, but I have it working on an iPad 2 for an augmented reality application I'm working on for my MSc thesis. To get it working smoothly I found it's best to set the update rate for the sensors in line with the frame rate (so 30 Hz for me), but it's probably worth experimenting to see what's best for your device.
I'm not sure exactly what you mean by moving everywhere, but sensor drift will probably cause a noticeable amount of error: my objects slowly rotate at random when the device is at rest. Very annoying, but something you have to accept when using IMUs.
Also, make sure you update the quaternion after you have a new reading from all of the sensors, instead of after each sensor gets a new reading separately.
In an Android app I'm making, I would like to detect when a user holding the phone in his hand makes a gesture like he would when throwing a frisbee. I have seen a couple of apps implement this, but I can't find any example code or tutorial on the web.
It would be great to get some thoughts on how this could be done, and of course it would be even better with some example code or a link to a tutorial.
The accelerometer provides you with a stream of 3D vectors. When your phone is held in a hand, the vector's direction is opposite to the earth's gravity pull and its magnitude is the same (this way you can determine phone orientation).
If the user lets it fall, the vector's magnitude will drop toward 0 (the same process as weightlessness on a space station).
If the user makes a gesture without throwing the phone, the direction will shift, and the amplitude will rise, then fall, then rise again (when the user stops the movement). To learn what this looks like, you can do some research by recording accelerometer data while performing the desired gestures.
Keep in mind that the accelerometer is pretty noisy: you will have to do some averaging over nearby values to get meaningful results.
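The averaging could be as simple as a trailing moving average over each axis (a sketch; the window length is an assumption to tune against your data):

```python
def moving_average(values, window=5):
    """Smooth a noisy sensor stream with a trailing moving average."""
    out, acc = [], []
    for v in values:
        acc.append(v)
        if len(acc) > window:
            acc.pop(0)                 # keep only the last `window` readings
        out.append(sum(acc) / len(acc))
    return out
```

An exponential filter (`smoothed = alpha * reading + (1 - alpha) * smoothed`) is a common alternative that needs no buffer at all.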
I think one workable approach to matching gestures would be invariant moments (like the Hu moments used in image recognition): the accelerometer vector over time defines a 4-dimensional space, and you would need a set of scaling/rotation-invariant moments. Designing such a set is not easy, but computing them is not complicated.
Once you have your moments, you can use standard techniques for matching vectors to clusters (see the "moments" and "cluster" modules from our javaocr project: http://javaocr.svn.sourceforge.net/viewvc/javaocr/trunk/plugins/ ).
PS: you may get away with just speed over time, which produces a 2-dimensional space and can be analysed with javaocr on the spot.
Not exactly what you are looking for:
Store the orientation readings in an array and compare.
Tracking orientation works well. Perhaps you can do something similar with the accelerometer data (without any integration).
A similar question is Drawing in air with Android phone.
I am curious what other answers you will get.