Read angular change in phone through accelerometer - android

Given that I can read the effects of gravity on my device on three axes, I'm looking for some math that will get me a more logical view of what's happening to the device in my hands. I want to create a two dimensional control based on tilting the device forward and back, and tilting the device to the right and left.
The seemingly complex part is that I'd like to have the controls behave in a predictable way regardless of the starting position that the device happened to be in when starting the controls. For instance, if the user is in bed holding the phone upside down above their head, everything should still work the same from the user's perspective even though the numbers coming off of the accelerometer will be entirely different. I can envision some sort of transformation that would yield numbers that look like the device is starting off on a flat table, given the actual state of the device when the controls start to be used.
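One way to get the "as if it started flat on a table" behaviour, sketched below under the assumption that only the accelerometer is available: capture the gravity vector at the moment the controls start, build a reference frame around it, and report every later reading as tilt relative to that frame. The class and method names are made up for illustration, and note that rotation about the gravity axis (yaw) cannot be recovered from the accelerometer alone.

    // A minimal sketch, not a drop-in solution.
    public class RelativeTiltTracker {
        private final float[] up = new float[3];      // baseline gravity direction ("table up")
        private final float[] right = new float[3];   // reference right axis
        private final float[] forward = new float[3]; // reference forward axis
        private boolean calibrated = false;

        /** Call once with an accelerometer reading taken when the controls start. */
        public void calibrate(float[] accel) {
            normalize(accel, up);
            // Use the device x-axis as a seed for the "forward/right" directions,
            // falling back to the y-axis if x happens to be parallel to gravity.
            float[] seed = Math.abs(up[0]) < 0.9f ? new float[]{1, 0, 0} : new float[]{0, 1, 0};
            cross(seed, up, forward);   // perpendicular to up
            normalize(forward, forward);
            cross(up, forward, right);  // completes the orthonormal frame
            calibrated = true;
        }

        /**
         * Returns {tiltForwardBack, tiltLeftRight} in radians relative to the starting pose.
         * Both are 0 while the device is held exactly as it was at calibration time;
         * the sign convention is arbitrary and can be flipped to taste.
         */
        public float[] relativeTilt(float[] accel) {
            if (!calibrated) throw new IllegalStateException("call calibrate() first");
            float[] g = new float[3];
            normalize(accel, g);
            float fwd = dot(g, forward);   // how far gravity has leaned toward "forward"
            float rgt = dot(g, right);     // how far gravity has leaned toward "right"
            return new float[]{(float) Math.asin(clamp(fwd)), (float) Math.asin(clamp(rgt))};
        }

        private static void normalize(float[] v, float[] out) {
            float n = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
            out[0] = v[0] / n; out[1] = v[1] / n; out[2] = v[2] / n;
        }
        private static void cross(float[] a, float[] b, float[] out) {
            out[0] = a[1] * b[2] - a[2] * b[1];
            out[1] = a[2] * b[0] - a[0] * b[2];
            out[2] = a[0] * b[1] - a[1] * b[0];
        }
        private static float dot(float[] a, float[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
        private static float clamp(float x) { return Math.max(-1f, Math.min(1f, x)); }
    }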

Related

Is it possible to detect Android devices laying side by side

I am developing an Android application that requires devices to be laid side by side and/or above and below each other.
I know I can use the Nearby API to detect devices "Nearby" however I need something a little more "Finer Grained".
My app needs to be able to identify a device lying to the left, above, to the right, or below, while all the devices are lying flat on a table (for instance).
I can find nothing on the web that describes this use case.
Is it possible?
UPDATE
My use case is that I want Android devices to be able to detect any number of "Other Devices" lying either to their left or right. The devices will be laid out horizontally with a "small" gap between each one.
In the same way that you might layout children's lettered blocks to spell out a word or phrase, or numbered blocks to make a sum.
Not only should the devices in the line be able to detect their immediate neighbours to their left and right, but the two devices at either end should also be able to detect that they are the start and end (reading left to right) of the line.
Using position sensors is a likely way to solve this. TYPE_PROXIMITY gives the distance to a nearby object. TYPE_MAGNETIC_FIELD gives the geomagnetic field strength on the x/y/z axes.
For more, read Position Sensors.
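For reference, a minimal sketch of reading both sensors (the activity name is arbitrary; what you do with the values is the hard part):

    import android.app.Activity;
    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;
    import android.os.Bundle;
    import android.util.Log;

    public class SensorProbeActivity extends Activity implements SensorEventListener {
        private SensorManager sensorManager;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            sensorManager = (SensorManager) getSystemService(SENSOR_SERVICE);
        }

        @Override
        protected void onResume() {
            super.onResume();
            Sensor proximity = sensorManager.getDefaultSensor(Sensor.TYPE_PROXIMITY);
            Sensor magnetic = sensorManager.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD);
            if (proximity != null) {
                sensorManager.registerListener(this, proximity, SensorManager.SENSOR_DELAY_NORMAL);
            }
            if (magnetic != null) {
                sensorManager.registerListener(this, magnetic, SensorManager.SENSOR_DELAY_NORMAL);
            }
        }

        @Override
        protected void onPause() {
            super.onPause();
            sensorManager.unregisterListener(this);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_PROXIMITY) {
                // Distance to a nearby object in cm (many devices only report "near" or "far").
                Log.d("SensorProbe", "proximity: " + event.values[0]);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                // Geomagnetic field strength in microtesla on the x/y/z axes.
                Log.d("SensorProbe", "magnetic: " + event.values[0] + ", " + event.values[1] + ", " + event.values[2]);
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Not needed for this sketch.
        }
    }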
Make your own mock GPS (a local positioning system, to be exact). I don't have a link for this, but it's definitely possible; check out how GPS works to get an idea. Wi-Fi and Bluetooth are signals, but you know what else is a signal?
A: SOUND
Make each phone emit a loud beep in turn and measure the audio strength on the receivers. This might work better than Wi-Fi/Bluetooth. Once you have measured relative distances between every pair of phones, it only takes a good algorithm to find the relative positions.
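A rough sketch of the "measure audio strength" half of that idea, assuming the RECORD_AUDIO permission is granted. How RMS loudness maps to distance is left entirely open, since it depends on speaker volume, surfaces and echo:

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class LoudnessProbe {
        /** Records one buffer from the microphone and returns its RMS amplitude. */
        public static double measureRms() {
            int sampleRate = 44100;
            int bufferSize = AudioRecord.getMinBufferSize(sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, bufferSize);
            short[] buffer = new short[bufferSize];
            recorder.startRecording();
            int read = recorder.read(buffer, 0, buffer.length);
            recorder.stop();
            recorder.release();

            double sumSquares = 0;
            for (int i = 0; i < read; i++) {
                sumSquares += (double) buffer[i] * buffer[i];
            }
            // Higher RMS roughly means a louder, and therefore probably closer, source.
            return Math.sqrt(sumSquares / Math.max(read, 1));
        }
    }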
A possible alternative solution: use image processing. Get something like OpenCV for Android and set up one phone as a master. This will only work for a 2D layout.
Another "idea" - use the cameras. Stick A board on top of your surface with 4 QR codes in each corner. (This will help identify the edges and orientation of your phone). If you're looking for a 3D layout and the phones have sufficient in-between space, you could stick a QR behind every phone and show a QR on the screen of every phone.
All of these are solutions. Maybe you can use individual ones, maybe you can use a combination. Who knows.
An idea, in case it's relevant for your use case:
Setup phase
start your app on each device in "pairing mode".
Each device will show a QR code containing the key required for communicating with the device (for example via Firebase), and screen details: size in pixels. It will also draw a rectangle at the screen boundaries.
A different phone, external to this layout will run your app as a "master", taking a picture of the phones from above.
Now you need to write an algorithm to identify the screens, their locations and orientations, and to extract the QR codes for analysis. Not easy, but doable.
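A sketch of the QR-extraction step, assuming the ZXing library is available on the master phone and the photo has already been converted to an ARGB pixel array (e.g. with Bitmap.getPixels); identifying the screen rectangles themselves is a separate problem:

    import com.google.zxing.BinaryBitmap;
    import com.google.zxing.RGBLuminanceSource;
    import com.google.zxing.Result;
    import com.google.zxing.common.HybridBinarizer;
    import com.google.zxing.multi.qrcode.QRCodeMultiReader;

    public class ScreenFinder {
        /** pixels is the ARGB pixel array of the master's photo. */
        public static Result[] findQrCodes(int[] pixels, int width, int height) throws Exception {
            RGBLuminanceSource source = new RGBLuminanceSource(width, height, pixels);
            BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
            // Each Result carries the decoded payload (getText(): key, screen size)
            // and the finder-pattern positions in the image (getResultPoints()),
            // which give an approximate location and orientation for each phone.
            return new QRCodeMultiReader().decodeMultiple(bitmap);
        }
    }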
Interaction phase
Now all the phones (this should work with more than two phones) can combine their screens to show parts of the same movie, for example.
It seems not, if you have only 2 devices. But if you have external sources (with known positions) of some signal (audio, vibration, BT or WiFi radio, etc.) that the devices can detect with adequate accuracy, and the devices' clocks are synchronized, you can do this by comparing the time of signal arrival (or the signal strength) on both devices.
Or, if you can add some sensors to one of the devices, you can create an "other device locator", for example like this sound locator.
UPDATE
In the updated formulation, the issue is still not solvable: it's possible to determine which two devices are at the edges, but you cannot determine which one is on the left and which is on the right. At least one device needs to know that it is, for example, the leftmost one - then each device in turn generates a sound, the others receive it, and they determine their order from the differences in arrival time. But an anchor point and time synchronization are necessary.
As I understand your use case, it is possible to find the number of devices surrounding a host device using the Nearby API and other techniques. But finding how many devices are on each side? I don't think that is possible with current mobile hardware and technology. Considering all factors, magnetic sensors are the only remotely plausible solution, and current phones have no such capability.
The following points are based on the answers above.
Of the sensor types mentioned above (TYPE_ACCELEROMETER, TYPE_LINEAR_ACCELERATION, TYPE_MAGNETIC_FIELD, TYPE_ORIENTATION), the magnetometer-based ones react to the magnetic field around the device (a compass reacts to a magnet). You can try an app using TYPE_MAGNETIC_FIELD and test how it reacts when another device is close to it (I think it will react).
But the point I am trying to make is: if you put three devices on one side and four devices on the other side, the MAGNETIC_FIELD sensor only reads the combined relative magnetic field. So we can't identify how many devices are on each side unless you do some serious calculations.
The second point: someone suggested the TYPE_PROXIMITY sensor, but it is not meant to serve this purpose. Current phones measure the proximity of an object in cm relative to the device's screen, and this sensor is typically used to determine whether a handset is being held up to a person's ear.
Another remote possibility is using the location sensor: each device could determine its coordinates and communicate them to the host (using NFC, for example). But the problem is that your use case says the devices are very close to each other, so the distances are not measurable using location services.
To conclude, it is not possible to identify the number of devices on each side of a host device with current mobile hardware. It could be achieved with an external sensor that extends the phone's capabilities - for example, a phone case equipped with such a capability - which would open a window to other use cases and applications as well.
I can think of a way, but it may require a bit of work. First, check that the two devices are lying flat by getting the device orientation and using the accelerometer or rotation vector to check pitch, roll, etc.
When you are sure that they are lying flat, send data from one device to the other using BT or WiFi. The data should include the send time. Check the receive time on the other device, and also account for the latency of sending and receiving data. If you can see noticeable time differences (in ms) for small differences in distance between the devices, it would be easy to estimate roughly how close they are. You could also ask users to hold their devices 1 meter (or some fixed distance) apart to calibrate the travel time for the BT or WiFi signal you send.
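A minimal sketch of the timing part, using round-trip time so the two clocks don't have to be synchronized; it assumes the peer simply echoes the byte back, and the host/port are placeholders. Whether the resulting times can resolve centimetre-scale gaps is very much an open question:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.Socket;

    public class PingTimer {
        /** Sends a one-byte ping and measures the round-trip time in nanoseconds. */
        public static long roundTripNanos(String host, int port) throws Exception {
            try (Socket socket = new Socket(host, port)) {
                socket.setTcpNoDelay(true);          // don't let Nagle buffering add latency
                OutputStream out = socket.getOutputStream();
                InputStream in = socket.getInputStream();
                long start = System.nanoTime();
                out.write(1);                        // one-byte ping
                out.flush();
                in.read();                           // wait for the one-byte echo from the peer
                return System.nanoTime() - start;
            }
        }
    }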

Detect pattern of motion on an Android device

I want to detect a specific pattern of motion on an Android mobile phone, e.g. if I do five sit-stands.
[Note: I am currently detecting the motion, but the motion in all directions looks the same.]
What I need is:
I need to differentiate the motion downward, upward, forward and backward.
I need to find the height of the mobile phone from ground level (and the height of the person holding it).
Is there any sample project which has pattern motion detection implemented?
This isn't impossible, given that the accuracy of the accelerometers and gyroscopes in phones has improved a lot, but it may not be extremely accurate.
What your app will be doing is taking sensor data and performing a regression analysis.
1) You will need to build a model of data that you classify as five sit and stands. This could be done by asking the user to do five sit and stands, or by loading the app with a more fine-tuned model from data that you've collected beforehand. There may be tricks you could do, such as loading several models of people with different heights, and asking the user to submit their own height in the app, to use the best model.
2) When run, your app will be trying to fit the data from the sensors (Android has great libraries for this), to the model that you've made. Hopefully, when the user performs five sit-stands, he will generate a set of motion data similar enough to your definition of five sit-stands that your algorithm accepts it as such.
A lot of the work here is assembling and classifying your model, and playing with it until you get an acceptable accuracy. Focus on what makes a stand-sit unique to other up and down motions - For instance, there might be a telltale sign of extending the legs in the data, followed by a different shape for straightening up fully. Or, if you expect the phone to be in the pocket, you may not have a lot of rotational motion, so you can reject test sets that registered lots of change from the gyroscope.
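As a toy illustration of the "fit live data to your model" step, here is the crudest possible matcher: mean squared error between a recorded accelerometer-magnitude template and a sliding window of live samples. The class name, threshold and template are all placeholders; a real implementation would at least normalize for speed and amplitude differences:

    public class TemplateMatcher {
        private final double[] template;   // magnitudes recorded during one known sit-stand
        private final double threshold;    // tuned on your own training data

        public TemplateMatcher(double[] template, double threshold) {
            this.template = template;
            this.threshold = threshold;
        }

        /** Counts how many non-overlapping stretches of the live signal resemble the template. */
        public int countMatches(double[] live) {
            int count = 0;
            for (int start = 0; start + template.length <= live.length; ) {
                double mse = 0;
                for (int i = 0; i < template.length; i++) {
                    double d = live[start + i] - template[i];
                    mse += d * d;
                }
                mse /= template.length;
                if (mse < threshold) {
                    count++;
                    start += template.length;  // skip past the matched repetition
                } else {
                    start++;
                }
            }
            return count;
        }
    }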
It is impossible. You can recognize downward and upward by comparing acceleration with the main gravity force, but how do you know whether your phone is in your back pocket when you rise, or just in your waving hand when you say hello? Was it 5 stand-ups or 5 hellos?
Forward and backward are even more unpredictable. What is forward for an upside-down phone? What is forward at all, from the phone's point of view?
And ground level, as well as height, is completely beyond measurement. The phone will move and produce accelerations in exactly the same way for a short person or a tall one - it depends more on the person's behaviour (or stillness) than on height.
It's a topic of research, and I'm probably way too late to post here, but I'm foraging through the literature anyway, so what?
All kinds of machine learning approaches have been set on the issue; I'll mention some along the way. Andrew Ng's MOOC on machine learning gives you an entry point to the field and into Matlab/Octave that you can instantly put into practice, and it demystifies the monsters too ("support vector machine").
I'd like to detect whether somebody is drunk from phone acceleration and maybe angle, so I'm flirting with neural networks for the issue (they're good for basically every issue, if you can afford the hardware), since I don't want to assume pre-defined patterns to look for.
Your task could be approached pattern-based, it seems - an approach that has been applied to classify golf motions, dancing, everyday walking patterns, and, twice, drunk-driving detection, where one paper addresses the issue of finding a baseline for what actually is longitudinal motion as opposed to every other direction; that could perhaps contribute to finding the baselines you need, like what ground level is.
It is a dense shrub of aspects and approaches; below are just some more.
Lim e.a. 2009: Real-time End Point Detection Specialized for Acceleration Signal
He & Yin 2009: Activity Recognition from acceleration data Based on
Discrete Consine Transform and SVM
Dhoble e.a. 2012: Online Spatio-Temporal Pattern Recognition with Evolving Spiking Neural Networks utilising Address Event Representation, Rank Order, and Temporal Spike Learning
Panagiotakis e.a.: Temporal segmentation and seamless stitching of motion patterns for synthesizing novel animations of periodic dances
This one uses visual data, but walks you through a Matlab implementation of a neural network classifier:
Symeonidis 2000: Hand Gesture Recognition Using Neural Networks
I do not necessarily agree with Alex's response. This is possible (although maybe not as accurate as you would like) using the accelerometer, device rotation, and a LOT of trial and error and data mining.
The way I see this working is by defining a specific way that the user holds the device (or the device is locked and positioned on the user's body). As they go through the motions, the orientation combined with acceleration and time will determine what sort of motion is being performed. You will need to use classes like OrientationEventListener, SensorEventListener, SensorManager and Sensor, and various timers, e.g. Runnables or TimerTasks.
From there, you need to gather a lot of data. Observe, record and study what the numbers are for doing specific actions, and then come up with a range of values that define each movement and sub-movements. What I mean by sub-movements is, maybe a situp has five parts:
1) Rest position where phone orientation is x-value at time x
2) Situp started where phone orientation is range of y-values at time y (greater than x)
3) Situp is at final position where phone orientation is range of z-values at time z (greater than y)
4) Situp is in rebound (the user is falling back down to the floor) where phone orientation is range of y-values at time v (greater than z)
5) Situp is back at rest position where phone orientation is x-value at time n (greatest and final time)
Add acceleration to this as well, because there are certain circumstances where acceleration can be assumed. For example, my hypothesis is that people perform the actual situp (steps 1-3 in my above breakdown) at a faster acceleration than when they are falling back. In general, most people fall slower because they cannot see what's behind them. That can also be used as an additional condition to determine the direction of the user. This is probably not true for all cases, however, which is why your data mining is necessary. Because I can also hypothesize that if someone has done many situps, that final situp is very slow and then they just collapse back down to rest position due to exhaustion. In this case the acceleration will be opposite of my initial hypothesis.
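To make the five-part breakdown concrete, here is a sketch of it as a state machine driven by device pitch and timestamps. Every threshold below is an invented placeholder; the real ranges are exactly what the data mining described above has to produce:

    public class SitupPhaseTracker {
        private enum Phase { REST, RISING, TOP, FALLING }
        private Phase phase = Phase.REST;
        private long phaseStartMs = 0;
        private int completed = 0;

        /** Feed one orientation sample; returns the number of completed sit-ups so far. */
        public int onSample(float pitchDegrees, long timestampMs) {
            switch (phase) {
                case REST:                               // part 1: lying back, phone roughly flat
                    if (pitchDegrees > 20) enter(Phase.RISING, timestampMs);
                    break;
                case RISING:                             // part 2: torso coming up
                    if (pitchDegrees > 60) enter(Phase.TOP, timestampMs);
                    else if (pitchDegrees < 10) enter(Phase.REST, timestampMs); // aborted rep
                    break;
                case TOP:                                // part 3: fully sat up
                    if (pitchDegrees < 50) enter(Phase.FALLING, timestampMs);
                    break;
                case FALLING:                            // part 4: falling back toward the floor
                    if (pitchDegrees < 10) {             // part 5: back at rest, count the rep
                        // Only count the rep if the falling phase took a plausible amount
                        // of time; this debounces sensor jitter.
                        if (timestampMs - phaseStartMs > 300) completed++;
                        enter(Phase.REST, timestampMs);
                    }
                    break;
            }
            return completed;
        }

        private void enter(Phase next, long timestampMs) {
            phase = next;
            phaseStartMs = timestampMs;
        }
    }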
Lastly, check out Motion Sensors: http://developer.android.com/guide/topics/sensors/sensors_motion.html
All in all, it is really a numbers game combined with your own "guesstimation". But you might be surprised at how well it works. Perhaps (hopefully) good enough for your purposes.
Good luck!

I was wondering if anyone knew what the 3 axes on an accelerometer are?

I am trying to create an algorithm to record data from an accelerometer, and I was wondering if anyone knew what the x, y and z axis values are exactly?
If you have a look at the Sensors Overview:
Measures the acceleration force in m/s^2 that is applied to a device on
all three physical axes (x, y, and z), including the force of gravity.
Well, I can't go into too much detail, but I can give you some insight.
The accelerometer measures the forces acting on the phone, so if you put the phone flat on a table it should output roughly x=0, y=0, z=9.8 m/s^2 (which axis carries the ~9.8 depends on which way the device is facing). The reason for this is that the table applies a force to the phone, keeping it from falling to the ground. As a consequence, a reading of all zeroes would imply free-fall.
This means that often one of the first tasks when gathering accelerometer data is to determine which axis (or combination of axes) is reading the force of gravity, and thereby which part of the phone is facing up.
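A small sketch of that first task, given one accelerometer reading in m/s^2 with gravity included (the helper name is arbitrary):

    import android.hardware.SensorManager;

    public class GravityAxis {
        /** Returns e.g. "+z" when the phone lies flat on its back on a table. */
        public static String dominantAxis(float[] accel) {
            // Near free-fall: no axis is meaningful.
            double magnitude = Math.sqrt(accel[0]*accel[0] + accel[1]*accel[1] + accel[2]*accel[2]);
            if (magnitude < 0.3 * SensorManager.GRAVITY_EARTH) return "free-fall?";

            // Pick the axis with the largest absolute reading.
            int axis = 0;
            for (int i = 1; i < 3; i++) {
                if (Math.abs(accel[i]) > Math.abs(accel[axis])) axis = i;
            }
            String name = new String[]{"x", "y", "z"}[axis];
            return (accel[axis] >= 0 ? "+" : "-") + name;
        }
    }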
This guy explains it rather well: https://www.youtube.com/watch?v=KZVgKu6v808
Hope this is clear enough, otherwise attend to the documentation as mharper suggests.

Detect horizontal and vertical movement of the Smartphone on a table

I'm trying to write a small Android game where the phone is placed on the table.
On the screen there is a ball whose movement the user controls by moving the phone.
Throughout the game the user won't lift the phone from the table.
At the beginning the ball is placed in the middle of the screen.
Pushing the phone away from the user should move the ball toward the top of the smartphone screen. And from the current position of the ball, moving the phone back toward the user and to the right will move the ball accordingly.
I read the Android Motion Sensors Guide carefully, but I still couldn't work out which sensor or sensors I should use.
I would love to get any directions.
First of all, TYPE_LINEAR_ACCELERATION, TYPE_ROTATION_VECTOR and TYPE_GRAVITY are not physical sensors but virtual sensors derived from sensor fusion.
Secondly, from Android 4+ these fused sensors make use of the device gyroscope, so they WON'T work if the device doesn't have a gyroscope.
So if you want to make a generic app for all phones prefer using only Accelerometer (TYPE_ACCELEROMETER).
Now, in your case, since the user won't lift the phone from the table, you can easily subtract the gravity component from the accelerometer readings. See http://developer.android.com/reference/android/hardware/SensorEvent.html
under the section Sensor.TYPE_ACCELEROMETER (code is given there too).
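For convenience, a sketch of that isolation step along the lines of the snippet in the linked docs: a low-pass filter estimates gravity, and subtracting it leaves the linear part you can use for the ball.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;

    public class GravityFilter implements SensorEventListener {
        private final float[] gravity = new float[3];
        private final float[] linear = new float[3];

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
            final float alpha = 0.8f; // low-pass constant: closer to 1 = slower, smoother gravity estimate

            // Isolate the force of gravity with a low-pass filter...
            gravity[0] = alpha * gravity[0] + (1 - alpha) * event.values[0];
            gravity[1] = alpha * gravity[1] + (1 - alpha) * event.values[1];
            gravity[2] = alpha * gravity[2] + (1 - alpha) * event.values[2];

            // ...and subtract it to get the linear (table-plane) acceleration.
            linear[0] = event.values[0] - gravity[0];
            linear[1] = event.values[1] - gravity[1];
            linear[2] = event.values[2] - gravity[2];
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Not needed for this sketch.
        }
    }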
Now you can look at How can I find distance traveled with a gyroscope and accelerometer? to find the linear displacement; the first answer states there is NO use for the gyroscope here. (Or you can just google for finding the displacement/linear velocity from accelerometer readings.)
Hope all of this gives you a good idea of where to start.
It's really difficult to do this type of linear position sensing using the types of sensors that smartphones have. Acceleration is the second derivative of position with respect to time. So in theory, you could use the accelerometer data, integrating twice in time to achieve the absolute position. In practice, noise makes that calculation inaccurate.
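To make the point concrete, here is what "integrating twice in time" looks like for a single axis (a bare-bones sketch, not something to ship): any small bias or noise in the acceleration accumulates in the velocity and then grows roughly quadratically in the position estimate.

    public class DeadReckoning1D {
        private double velocity = 0;        // m/s
        private double position = 0;        // m
        private long lastTimestampNs = -1;

        /** Feed one (already gravity-compensated) acceleration sample along one axis. */
        public double onSample(double accel, long timestampNs) {
            if (lastTimestampNs > 0) {
                double dt = (timestampNs - lastTimestampNs) / 1e9;  // seconds
                velocity += accel * dt;      // first integration: acceleration -> velocity
                position += velocity * dt;   // second integration: velocity -> position
            }
            lastTimestampNs = timestampNs;
            return position;
        }
    }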
On the other hand, if you're really talking about putting the camera facedown on the table, maybe you can come up with some clever way of using OpenCV's optical flow features to detect motion using the camera, almost like an optical mouse. I don't know how you would light the surface (the flash would probably burn through the battery) but it might be possible to work something out.

How to use the sensor values in the same way as google skymap?

I am trying to replicate how Google Sky Map makes use of sensor inputs. I currently use getOrientation and the accelerometer values to determine up and rotation. This works fine while the phone is held perpendicular to the horizon, but starts to fail when the phone is pointed towards the ground. For example, when the phone is laid flat on a table, spinning it does not affect the reading I use (accel_x), but when it is held perpendicular the values are useful and I am able to rotate the display based on them.
(I currently display a basic horizon quad)
I am thinking that I need to make use of more than one value here but am quite at a loss of how to do it.
Any pointers?
Also, is there a 2.1 alternative to Display.getRotation?
Thanks in advance
Note: I am only interested in the roll (in the aviation sense) of the phone, regardless of the pitch in which it is held.
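For what it's worth, a sketch of one possible approach (not a verified answer): fuse the accelerometer and magnetometer into a rotation matrix with SensorManager.getRotationMatrix, remap it so the Y axis runs along the camera axis (the remapping the SensorManager docs suggest for augmented-reality style apps), and read the roll from getOrientation. The listener below assumes it has been registered for both TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD:

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    public class RollTracker implements SensorEventListener {
        private final float[] accel = new float[3];
        private final float[] magnetic = new float[3];

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                System.arraycopy(event.values, 0, accel, 0, 3);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                System.arraycopy(event.values, 0, magnetic, 0, 3);
            }

            float[] r = new float[9];
            float[] remapped = new float[9];
            float[] orientation = new float[3];
            if (SensorManager.getRotationMatrix(r, null, accel, magnetic)) {
                // Remap so the Y axis runs along the camera axis (the AR example in the docs).
                SensorManager.remapCoordinateSystem(r, SensorManager.AXIS_X, SensorManager.AXIS_Z, remapped);
                SensorManager.getOrientation(remapped, orientation);
                float rollRadians = orientation[2];  // rotation about the camera axis
                // use rollRadians to rotate the horizon quad
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) {
            // Not needed for this sketch.
        }
    }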
