I am working on an application that requires me to handle the fling process manually rather than handing it to the framework. What I want to achieve is basically to calculate the number of pixels a ListView moves when it receives a fling action. Since onScroll already provides the distance as a delta, I have handled that part easily. But is there a way to get the fling distance, given that only a velocity parameter is passed to the super method?
Note: I have to move another view in accordance with the fling distance, so I need to get it simultaneously, just like onScroll provides it.
Thanks.
Three years have passed with no answer yet, but I found a workaround to achieve it.
It is actually kind of an advanced topic with a lot of nuances, but basically you can refer to the Android source code (the OverScroller class in particular) and use this method. You will need to copy it into your own class to use it.
private double getSplineFlingDistance(int velocity) {
    final double l = getSplineDeceleration(velocity);
    final double decelMinusOne = DECELERATION_RATE - 1.0;
    return mFlingFriction * PHYSICAL_COEF * Math.exp(DECELERATION_RATE / decelMinusOne * l);
}
Other methods and values can be obtained from the same class.
The link to the source code: https://android.googlesource.com/platform/frameworks/base/+/jb-release/core/java/android/widget/OverScroller.java
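For convenience, here is a sketch of the supporting pieces that method relies on, adapted from the same OverScroller source (the names follow the snippet above; the exact constants can differ between Android versions, so double-check them against the file linked above):

// Adapted from android.widget.OverScroller; uses android.content.Context,
// android.hardware.SensorManager and android.view.ViewConfiguration.
private static final float DECELERATION_RATE = (float) (Math.log(0.78) / Math.log(0.9));
private static final float INFLEXION = 0.35f; // tension lines cross at (INFLEXION, 1)

private final float mFlingFriction = ViewConfiguration.getScrollFriction();
private float PHYSICAL_COEF; // compute once, e.g. in your constructor

private void initPhysicalCoef(Context context) {
    final float ppi = context.getResources().getDisplayMetrics().density * 160.0f;
    PHYSICAL_COEF = SensorManager.GRAVITY_EARTH // g (m/s^2)
            * 39.37f                            // inches per meter
            * ppi                               // pixels per inch
            * 0.84f;                            // look-and-feel tuning
}

private double getSplineDeceleration(int velocity) {
    return Math.log(INFLEXION * Math.abs(velocity) / (mFlingFriction * PHYSICAL_COEF));
}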
Keep in mind that on some devices the value can be slightly different. Some vendors change the formula depending on their requirements and hardware to make scrolling feel smoother.
It looks like the original question ended up unanswered, but it was formulated pretty well, so I landed here and started my own research. Here are my results.
My question was: What is the final value at the end of Android standard FlingAnimation?
new FlingAnimation(new FloatValueHolder(0f))
    .addEndListener((animation, canceled, value, velocity) -> {
        // what will `value` be here?
    });
I needed that value before the animation starts, based on the start velocity, to make some preparations at the destination point of the FlingAnimation.
Actually I started with the OverScroller.java mentioned by @Adil Aliyev. I collected all the relevant portions of code, but the result was way less than what actually came from the animation.
Then I took a look into FlingAnimation.java together with DynamicAnimation.java.
The key function in FlingAnimation.java to start the research was:
MassState updateValueAndVelocity(float value, float velocity, long deltaT) {
After playing with some equations I composed this final code. It doesn't give a totally exact estimation down to the last digit, but it is very close. I will use it for my needs. You are welcome to as well:
final float DEFAULT_FRICTION = -4.2f;
final float VELOCITY_THRESHOLD_MULTIPLIER = 1000f / 16f;
// multiply by the friction you pass to .setFriction(1.1f), or by 1f if you use the default
float mFriction = 1.1f * DEFAULT_FRICTION;
final float THRESHOLD_MULTIPLIER = 0.75f;
float mVelocityThreshold = THRESHOLD_MULTIPLIER * VELOCITY_THRESHOLD_MULTIPLIER;

// time (in ms) until the velocity decays below the threshold
double time = Math.log(mVelocityThreshold / startVelocity) * 1000d / mFriction;
// distance covered during that time
double flingDistance = startVelocity / mFriction * (Math.exp(mFriction * time / 1000d) - 1);
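Wrapped up as a helper (the same equations, just parameterized; the method name and the Math.abs guard for negative velocities are mine), assuming startVelocity is in px/s as DynamicAnimation uses:

/**
 * Rough estimate of how far a FlingAnimation started with `startVelocity` (px/s) and the
 * friction multiplier passed to setFriction() (1f by default) will travel before it settles.
 */
static double estimateFlingDistance(float startVelocity, float frictionMultiplier) {
    final float friction = frictionMultiplier * -4.2f;       // DEFAULT_FRICTION
    final float velocityThreshold = 0.75f * (1000f / 16f);   // velocity below which the animation ends
    if (Math.abs(startVelocity) <= velocityThreshold) {
        return 0; // too slow to fling anywhere
    }
    // time (ms) until |velocity| decays below the threshold: v(t) = v0 * exp(friction * t / 1000)
    double time = Math.log(velocityThreshold / Math.abs(startVelocity)) * 1000d / friction;
    // integrating v(t) from 0 to `time` gives the travelled distance
    return startVelocity / friction * (Math.exp(friction * time / 1000d) - 1);
}

The two expressions are just the exponential decay v(t) = v0 * exp(friction * t / 1000) from updateValueAndVelocity(), first solved for the moment the velocity crosses the threshold and then integrated over that interval.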
I am using the code below to detect the movement of the device, i.e., I would like to know whether the device is moving or not. I have also used the Google Activity Recognition API, which provides different activity modes like WALKING, ON_FOOT, STILL, etc. without using GPS. I would like to achieve the same with sensors, but I am not able to get it accurately.
The issue with the following code is that as soon as I move the device quickly, for example picking it up from the table, I get the result "moving", even though the device is not actually moving around afterwards.
// called from onSensorChanged() with a TYPE_ACCELEROMETER event
double speed = getAccelerometer(event.values);
// then checking the speed
if (speed > 0.9 && speed < 1.1) {
    // device is not moving
} else {
    // device is moving
}
/**
 * @return the magnitude of the acceleration vector, normalized by gravity
 */
private double getAccelerometer(float[] values) {
    // Movement
    float x = values[0];
    float y = values[1];
    float z = values[2];
    float accelerationSquareRoot =
            (float) ((x * x + y * y + z * z) / (9.80665 * 9.80665));
    return Math.sqrt(accelerationSquareRoot);
}
Can anyone guide me on how to make this logic accurate, so that I can identify whether the device is moving or not?
The accelerometer returns acceleration data, and by Newton's laws of motion, if the measured acceleration stays constant the body is either not moving or moving at constant speed (which is quite unlikely in your case).
Therefore, if you keep reading roughly the same data on all three axes (or better, values within a fairly strict range) from the accelerometer over time, it means the phone is not moving; otherwise it is.
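As a rough sketch of that idea (my own illustration, not code from the answer): keep a short window of recent magnitudes and treat the device as still only while their spread stays below a small threshold. The window size and threshold are guesses you would need to tune.

import java.util.ArrayDeque;
import java.util.Deque;

class MovementDetector {
    private static final int WINDOW_SIZE = 50;          // ~1 s of samples at 50 Hz (assumption)
    private static final double STILL_THRESHOLD = 0.05; // tune experimentally

    private final Deque<Double> window = new ArrayDeque<>();

    /** Feed the normalized magnitude from getAccelerometer() for each sensor event. */
    boolean isMoving(double magnitude) {
        window.addLast(magnitude);
        if (window.size() > WINDOW_SIZE) {
            window.removeFirst();
        }
        if (window.size() < WINDOW_SIZE) {
            return false; // not enough data yet
        }
        // standard deviation of the magnitudes in the window
        double mean = 0;
        for (double m : window) mean += m;
        mean /= window.size();
        double var = 0;
        for (double m : window) var += (m - mean) * (m - mean);
        var /= window.size();
        return Math.sqrt(var) > STILL_THRESHOLD;
    }
}

A quick jolt (picking the phone up) only raises the spread for one window's worth of samples, so it no longer flags the device as permanently "moving".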
For this purpose you should use the Activity Recognition API, which gives you events like walking, still, driving, etc. Activity recognition uses sensor data and also the location service when it is running. For more on what it is and how to use it, see the link below:
https://developers.google.com/location-context/activity-recognition/
I am currently attempting to get a person's speed through the use of Google Play Services.
I've managed to connect to the services and get location updates, but the problem is working out the speed of the person in question; that is, I am having trouble dividing the distance by the time.
I posted this previously, and someone suggested that casting to long would fix the problem. The question was quickly marked as a duplicate, so I didn't get to continue the discussion, but I wanted to say it did not work.
Below is my current code.
float Distance = OldLocation.distanceTo(location);
//Getting Difference in seconds
long TimeDiff = location.getTime()-OldLocation.getTime();
float SecondDiff = TimeUnit.MILLISECONDS.toSeconds(TimeDiff);
//Finally working out speed (m/s)
if (SecondDiff == 0) {
SecondDiff = 1;
}
float Speed = ((long)Distance/SecondDiff);
Speed keeps resulting in 0 rather than the proper value. I've tried casting to long, but that didn't help.
I've also tried:
float Speed = ((long)Distance/(long)SecondDiff);
and
float Speed = (long)(Distance/SecondDiff);
What could be the source of my problem here?
Why don't you just get the speed from each Location object via the getSpeed() method, like:
float speed = location.getSpeed();
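If you do still want to compute it manually (for example when the provider doesn't supply a speed), note that one likely reason your version returns 0 is the (long) cast: it truncates any distance below one metre to 0 before the division. Keeping everything in float avoids that; a sketch using the same variables as your snippet:

float distance = OldLocation.distanceTo(location);            // metres
long timeDiffMs = location.getTime() - OldLocation.getTime(); // milliseconds
float seconds = timeDiffMs / 1000f;                           // keep fractions of a second

float speed = seconds > 0 ? distance / seconds : 0f;          // m/s, plain float division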
I am trying to calculate the approximate position of an Android phone in a room. I tried different methods such as location (which is terrible indoors) and gyroscope + compass. I only need to know the approximate position after walking for 5-10 seconds, so I think integrating the linear acceleration could be enough. I know the error is terrible because of error propagation, but maybe it will work in my setup. I only need the approximate position in order to point a camera at the Android phone.
I coded the double integration but I am doing something wrong. If the phone is static on a table, the position (x, y, z) always keeps increasing. What is the problem?
static final float NS2S = 1.0f / 1000000000.0f; // nanoseconds to seconds

float[] last_values = null;
float[] velocity = null;
float[] position = null;
float[] acceleration = null;
long last_timestamp = 0;

SensorManager mSensorManager;
Sensor mAccelerometer;

public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_LINEAR_ACCELERATION)
        return;

    if (last_values != null) {
        float dt = (event.timestamp - last_timestamp) * NS2S;

        // subtract a fixed bias measured while the phone was at rest
        acceleration[0] = event.values[0] - 0.0188f;
        acceleration[1] = event.values[1] - 0.00217f;
        acceleration[2] = event.values[2] + 0.01857f;

        for (int index = 0; index < 3; ++index) {
            // trapezoidal integration: acceleration -> velocity -> position
            velocity[index] += (acceleration[index] + last_values[index]) / 2 * dt;
            position[index] += velocity[index] * dt;
        }
    } else {
        last_values = new float[3];
        acceleration = new float[3];
        velocity = new float[3];
        position = new float[3];
        velocity[0] = velocity[1] = velocity[2] = 0f;
        position[0] = position[1] = position[2] = 0f;
    }

    System.arraycopy(acceleration, 0, last_values, 0, 3);
    last_timestamp = event.timestamp;
}
These are the positions I get when the phone is on the table (no motion). The (x, y, z) values keep increasing even though the phone is still.
And these are the positions after calculating a moving average for each axis and subtracting it from each measurement. The phone is also still.
How can I improve the code, or is there another method to get the approximate position inside a room?
There are unavoidable measurement errors in the accelerometer. These are caused by tiny vibrations of the table, imperfections in the manufacturing, etc. Accumulating these errors over time results in a random walk. This is why positioning systems can only use accelerometers as a positioning aid, through some filter; they still need an absolute reference such as GPS (which doesn't work well indoors).
There is a great deal of current research into indoor positioning systems. Some areas of research into systems that can take advantage of existing infrastructure are WiFi and LED-lighting positioning. There is no obvious solution yet, but I'm sure a dedicated solution will be needed for accurate, reliable indoor positioning.
You said the position always keeps increasing. Do you mean the x, y, and z components only ever become positive, even after resetting several times? Or do you mean the position keeps drifting from zero?
If you output the raw acceleration measurements when the phone is still, you should see the measurement errors. Put a bunch of these measurements in an Excel spreadsheet and calculate the mean and the standard deviation. The mean should be zero for all axes. If not, there is a bias that you can remove in your code with a simple averaging filter (calculate a running average and subtract that from each result). The standard deviation will show you how far you can expect to drift in each axis after N time steps, as standard_deviation * sqrt(N). This should help you mathematically determine the expected accuracy as a function of time (or N time steps).
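To make that concrete, a small sketch of the kind of bias/noise estimate described above (my illustration; the sample count is arbitrary):

// Collects N resting samples per axis, then reports the bias (mean) and noise (std. dev.).
class BiasEstimator {
    private static final int N = 500; // number of samples to collect while the phone lies still
    private final float[][] samples = new float[N][3];
    private int count = 0;

    /** Call with event.values while the phone is at rest; returns true once enough samples are collected. */
    boolean addSample(float[] values) {
        if (count < N) {
            System.arraycopy(values, 0, samples[count], 0, 3);
            count++;
        }
        return count == N;
    }

    /** mean[axis] is the bias to subtract; std[axis] predicts drift roughly as std * sqrt(steps). */
    void compute(float[] mean, float[] std) {
        for (int axis = 0; axis < 3; axis++) {
            double m = 0;
            for (int i = 0; i < N; i++) m += samples[i][axis];
            m /= N;
            double var = 0;
            for (int i = 0; i < N; i++) var += (samples[i][axis] - m) * (samples[i][axis] - m);
            var /= N;
            mean[axis] = (float) m;
            std[axis] = (float) Math.sqrt(var);
        }
    }
}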
Brian is right; there are already deployed indoor positioning systems that work with infrastructure you can easily find in (almost) any room.
One of the solutions that has proven to be most reliable is WiFi fingerprinting. I recommend you take a look at indoo.rs - www.indoo.rs - they are pioneers in the industry and have a pretty developed system already.
This may not be the most elegant or reliable solution, but in my case it serves the purpose.
Note: in my case, I am grabbing a location before the user even enters the activity that needs indoor positioning, and I am only concerned with a rough estimate of how much they have moved around.
I have a sensor manager that creates a rotation matrix based on the device orientation (using Sensor.TYPE_ROTATION_VECTOR). That obviously doesn't give me forward, backward, or sideways movement, only the device orientation. With that orientation I have a good idea of the user's bearing in degrees (which way they are facing), and using the step detector sensor (Sensor.TYPE_STEP_DETECTOR) available since KitKat 4.4, I make the assumption that a step is 1 metre in the direction the user is facing.
Again, I know this is not foolproof or very accurate, but depending on your purpose it might be a simple enough solution.
Every time a step is detected I basically call this function:
public void computeNewLocationByStep() {
    Location newLocal = new Location("");

    double vAngle = getBearingInDegrees();          // returns my user's bearing
    double vDistance = 1 / g.kEarthRadiusInMeters;  // 1 m step as an angular distance; kEarthRadiusInMeters = 6353000
    vAngle = Math.toRadians(vAngle);

    double vLat1 = Math.toRadians(_location.getLatitude());
    double vLng1 = Math.toRadians(_location.getLongitude());

    // standard "destination point given distance and bearing" formula on a sphere
    double vNewLat = Math.asin(Math.sin(vLat1) * Math.cos(vDistance) +
            Math.cos(vLat1) * Math.sin(vDistance) * Math.cos(vAngle));
    double vNewLng = vLng1 + Math.atan2(Math.sin(vAngle) * Math.sin(vDistance) * Math.cos(vLat1),
            Math.cos(vDistance) - Math.sin(vLat1) * Math.sin(vNewLat));

    newLocal.setLatitude(Math.toDegrees(vNewLat));
    newLocal.setLongitude(Math.toDegrees(vNewLng));

    stepCount = 0;
    _location = newLocal;
}
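For completeness, this is roughly how I'd expect the step detector to be wired up to that function (a sketch with made-up class and field names; computeNewLocationByStep() is assumed to live in the same class):

public class StepTracker implements SensorEventListener {
    private final SensorManager sensorManager;

    public StepTracker(Context context) {
        sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
    }

    public void start() {
        Sensor stepDetector = sensorManager.getDefaultSensor(Sensor.TYPE_STEP_DETECTOR);
        if (stepDetector != null) { // not every device has a hardware step detector
            sensorManager.registerListener(this, stepDetector, SensorManager.SENSOR_DELAY_NORMAL);
        }
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_STEP_DETECTOR) {
            computeNewLocationByStep(); // each event reports exactly one detected step
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}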
I'm working on implementing gesture recognition in my app, using the Gestures Builder to create a library of gestures. I'm wondering if having multiple variations of a gesture will help or hinder recognition (or performance). For example, I want to recognize a circular gesture. I'm going to have at least two variations - one for a clockwise circle, and one for counterclockwise, with the same semantic meaning so that the user doesn't have to think about it. However, I'm wondering if it would be desirable to save several gestures for each direction, for example, of various radii, or with different shapes that are "close enough" - like egg shapes, ellipses, etc., including different angular rotations of each. Anybody have experience with this?
OK, after some experimentation and reading of the Android source, I've learned a little... First, it appears that I don't necessarily have to worry about creating different gestures in my gesture library to cover different angular rotations or directions (clockwise/counterclockwise) of my circular gesture. By default, a GestureStore uses a sequence type of SEQUENCE_SENSITIVE (meaning that the starting and ending points matter) and an orientation style of ORIENTATION_SENSITIVE (meaning that the rotational angle matters). However, these defaults can be overridden with setOrientationStyle(ORIENTATION_INVARIANT) and setSequenceType(SEQUENCE_INVARIANT).
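In code that looks roughly like this (a minimal sketch; R.raw.gestures is assumed to be the library file saved from Gestures Builder, and the classes are from android.gesture):

GestureLibrary gestureLibrary = GestureLibraries.fromRawResource(context, R.raw.gestures);
// ignore rotation and stroke direction / start point when matching
gestureLibrary.setOrientationStyle(GestureStore.ORIENTATION_INVARIANT);
gestureLibrary.setSequenceType(GestureStore.SEQUENCE_INVARIANT);
if (!gestureLibrary.load()) {
    // handle the load failure
}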
Furthermore, to quote from the comments in the source... "when SEQUENCE_SENSITIVE is used, only single stroke gestures are currently allowed" and "ORIENTATION_SENSITIVE and ORIENTATION_INVARIANT are only for SEQUENCE_SENSITIVE gestures".
Interestingly, ORIENTATION_SENSITIVE appears to mean more than just "orientation matters". Its value is 2, and the comments associated with it and some related (undocumented) constants imply that you can request different levels of sensitivity.
// at most 2 directions can be recognized
public static final int ORIENTATION_SENSITIVE = 2;
// at most 4 directions can be recognized
static final int ORIENTATION_SENSITIVE_4 = 4;
// at most 8 directions can be recognized
static final int ORIENTATION_SENSITIVE_8 = 8;
During a call to GestureLibrary.recognize(), the orientation type value (1, 2, 4, or 8) is passed through to GestureUtils.minimumCosineDistance() as the parameter numOrientations, whereupon some calculations are performed that are above my pay grade (see below). If someone can explain this, I'm interested. I get that it is calculating the angular difference between two gestures, but I don't understand the way it's using the numOrientations parameter. My expectation is that if I specify a value of 2, it finds the minimum distance between gesture A and two variations of gesture B -- one variation being "normal B", and the other being B spun around 180 degrees. Thus, I would expect a value of 8 to consider 8 variations of B, spaced 45 degrees apart. However, even though I don't fully understand the math below, it doesn't look to me like a numOrientations value of 4 or 8 is used directly in any calculations, although values greater than 2 do result in a distinct code path. Maybe that's why those other values are undocumented.
/**
 * Calculates the "minimum" cosine distance between two instances.
 *
 * @param vector1
 * @param vector2
 * @param numOrientations the maximum number of orientation allowed
 * @return the distance between the two instances (between 0 and Math.PI)
 */
static float minimumCosineDistance(float[] vector1, float[] vector2, int numOrientations) {
    final int len = vector1.length;
    float a = 0;
    float b = 0;
    for (int i = 0; i < len; i += 2) {
        a += vector1[i] * vector2[i] + vector1[i + 1] * vector2[i + 1];
        b += vector1[i] * vector2[i + 1] - vector1[i + 1] * vector2[i];
    }
    if (a != 0) {
        final float tan = b / a;
        final double angle = Math.atan(tan);
        if (numOrientations > 2 && Math.abs(angle) >= Math.PI / numOrientations) {
            return (float) Math.acos(a);
        } else {
            final double cosine = Math.cos(angle);
            final double sine = cosine * tan;
            return (float) Math.acos(a * cosine + b * sine);
        }
    } else {
        return (float) Math.PI / 2;
    }
}
Based on my reading, I theorized that the simplest and best approach would be to have one stored circular gesture, setting the sequence type and orientation to invariant. That way, anything circular should match pretty well, regardless of direction or orientation. So I tried that, and it did return high scores (in the range of about 25 to 70) for pretty much anything remotely resembling a circle. However, it also returned scores of 20 or so for gestures that were not even close to circular (horizontal lines, V shapes, etc.). So, I didn't feel good about the separation between what should be matches and what should not. What seems to be working best is to have two stored gestures, one in each direction, and using SEQUENCE_SENSITIVE in conjunction with ORIENTATION_INVARIANT. That's giving me scores of 2.5 or higher for anything vaguely circular, but scores below 1 (or no matches at all) for gestures that are not circular.
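For reference, the recognition call that produced those scores looks roughly like this (a sketch; the gesture names and the 2.5 threshold are mine, taken from the numbers above):

// `gesture` is the android.gesture.Gesture delivered by GestureOverlayView's OnGesturePerformedListener
ArrayList<Prediction> predictions = gestureLibrary.recognize(gesture);
if (!predictions.isEmpty()) {
    Prediction best = predictions.get(0); // predictions are sorted best-first
    boolean isCircle = "circle_cw".equals(best.name) || "circle_ccw".equals(best.name);
    if (isCircle && best.score > 2.5) {
        // treat it as a circular gesture
    }
}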
I'm writing an application and my aim is to detect when a user is walking.
I'm using a Kalman filter like this:
float kFilteringFactor = 0.6f;

// smooth the raw accelerometer values to estimate gravity
gravity[0] = (accelerometer_values[0] * kFilteringFactor) + (gravity[0] * (1.0f - kFilteringFactor));
gravity[1] = (accelerometer_values[1] * kFilteringFactor) + (gravity[1] * (1.0f - kFilteringFactor));
gravity[2] = (accelerometer_values[2] * kFilteringFactor) + (gravity[2] * (1.0f - kFilteringFactor));

// remove gravity to get the linear acceleration
linear_acceleration[0] = accelerometer_values[0] - gravity[0];
linear_acceleration[1] = accelerometer_values[1] - gravity[1];
linear_acceleration[2] = accelerometer_values[2] - gravity[2];

float magnitude = 0.0f;
magnitude = (float) Math.sqrt(linear_acceleration[0] * linear_acceleration[0]
        + linear_acceleration[1] * linear_acceleration[1]
        + linear_acceleration[2] * linear_acceleration[2]);
magnitude = Math.abs(magnitude);

if (magnitude > 0.2) {
    // walking
}
The gravity[] array is initialized with zeros.
I can detect whether the user is walking or not (by looking at the magnitude of the acceleration vector), but my problem is that when the user is not walking and simply moves the phone, it looks as if he is walking.
Am I using the right filter?
Is it right to watch only the magnitude of the vector, or do I have to look at the individual values?
Google provides an API for this called DetectedActivity that can be obtained using the ActivityRecognitionApi. Those docs can be accessed here and here.
DetectedActivity has the method public int getType() to get the current activity of the user and also public int getConfidence() which returns a value from 0 to 100. The higher the value returned by getConfidence(), the more certain the API is that the user is performing the returned activity.
Here is a summary of the constants returned by getType():
int IN_VEHICLE The device is in a vehicle, such as a car.
int ON_BICYCLE The device is on a bicycle.
int ON_FOOT The device is on a user who is walking or running.
int RUNNING The device is on a user who is running.
int STILL The device is still (not moving).
int TILTING The device angle relative to gravity changed significantly.
int UNKNOWN Unable to detect the current activity.
int WALKING The device is on a user who is walking.
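As a rough sketch of how this is typically wired up (the PendingIntent and the receiver it targets are assumed to exist elsewhere; the interval and confidence threshold are arbitrary, and newer Android versions also require the ACTIVITY_RECOGNITION permission):

// requires the play-services-location dependency
ActivityRecognitionClient client = ActivityRecognition.getClient(context);
client.requestActivityUpdates(10000 /* ms */, pendingIntent);

// in the BroadcastReceiver / service targeted by pendingIntent:
if (ActivityRecognitionResult.hasResult(intent)) {
    ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
    DetectedActivity mostProbable = result.getMostProbableActivity();
    if (mostProbable.getType() == DetectedActivity.WALKING && mostProbable.getConfidence() > 75) {
        // the user is most likely walking
    }
}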
My first intuition would be to run an FFT analysis on the sensor history, and see what frequencies have high magnitudes when walking.
It's essentially seeing what walking "sounds like", treating the accelerometer sensor inputs like a microphone and seeing the frequencies that are loud when walking (in other words, at what frequency is the biggest acceleration happening).
I'd guess you'd be looking for a high magnitude at some low frequency (like footstep rate) or maybe something else. It would be interesting to see the data.
My guess is you run the FFT and look for the magnitude at some frequency to be greater than some threshold, or the difference between magnitudes of two of the frequencies is more than some amount. Again, the actual data would determine how you attempt to detect it.
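As an illustration of that idea (my own sketch, not from the answer): instead of a full FFT you can probe a single candidate step frequency with the Goertzel algorithm over a window of magnitude samples; the frequency, window length, and threshold below are guesses to tune against real data.

/**
 * Returns the spectral power of `samples` (acceleration magnitudes, sampled at sampleRateHz)
 * at targetFreqHz, using the Goertzel algorithm. Compare the result against an experimentally
 * chosen threshold to decide whether a step rhythm is present.
 */
static double goertzelPower(float[] samples, double sampleRateHz, double targetFreqHz) {
    int n = samples.length;
    int k = (int) (0.5 + n * targetFreqHz / sampleRateHz); // nearest DFT bin
    double omega = 2.0 * Math.PI * k / n;
    double coeff = 2.0 * Math.cos(omega);
    double sPrev = 0.0, sPrev2 = 0.0;
    for (float sample : samples) {
        double s = sample + coeff * sPrev - sPrev2;
        sPrev2 = sPrev;
        sPrev = s;
    }
    return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
}

// e.g. with 128 magnitude samples at 50 Hz, probe ~2 Hz (a typical step rate):
// boolean walking = goertzelPower(window, 50.0, 2.0) > THRESHOLD;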
For walking detection I use the derivative applied to a smoothed signal from the accelerometer. When the derivative is greater than a threshold value I can assume it was a step. But I guess it's not best practice; furthermore it only works when the phone is placed in a trouser pocket.
The following code was used in this app: https://play.google.com/store/apps/details?id=com.tartakynov.robotnoise
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
        return;
    }
    final float z = smooth(event.values[2]); // scalar kalman filter
    if (Math.abs(z - mLastZ) > LEG_THRSHOLD_AMPLITUDE) {
        mInactivityCount = 0;
        int currentActivity = (z > mLastZ) ? LEG_MOVEMENT_FORWARD : LEG_MOVEMENT_BACKWARD;
        if (currentActivity != mLastActivity) {
            mLastActivity = currentActivity;
            notifyListeners(currentActivity);
        }
    } else {
        if (mInactivityCount > LEG_THRSHOLD_INACTIVITY) {
            if (mLastActivity != LEG_MOVEMENT_NONE) {
                mLastActivity = LEG_MOVEMENT_NONE;
                notifyListeners(LEG_MOVEMENT_NONE);
            }
        } else {
            mInactivityCount++;
        }
    }
    mLastZ = z;
}
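The smooth() helper isn't shown in the answer; as a stand-in, a simple exponential low-pass filter gives a similar effect (the smoothing factor is a guess):

private float mSmoothed = 0f;
private static final float SMOOTHING = 0.3f; // 0..1, smaller = smoother

/** Simple exponential low-pass filter used here as a placeholder for the answer's smooth(). */
private float smooth(float value) {
    mSmoothed += SMOOTHING * (value - mSmoothed);
    return mSmoothed;
}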
EDIT: I don't think it's accurate enough, since when walking normally the average acceleration is near 0. The most you could do by measuring acceleration is detect when someone starts or stops walking (but as you said, it's difficult to distinguish that from the device being moved by someone standing in one place).
So... what I wrote earlier probably wouldn't work anyway:
You can "predict" whether the user is moving by discarding the cases when the user is clearly not moving (obvious). The first two options coming to my mind are:
Check whether the phone is "hidden", using the proximity and (optionally) light sensors. This method is less accurate but easier.
Control the continuity of the movement: if the phone is moving for more than, say, 10 seconds and the movement is not negligible, then you consider the user is walking. I know it's not perfect either, but it's difficult without using any kind of positioning. By the way, why don't you just use the LocationManager?
Try detecting the up-and-down oscillations, the fore-and-aft oscillations, and the frequency of each, and make sure they stay aligned within bounds on average. That way you detect walking, and specifically that person's gait style, which should remain relatively constant over several consecutive steps to qualify as moving.
As long as the last 3 oscillations line up within reason, conclude that walking is occurring, provided this is also true:
You measure horizontal acceleration and update a velocity value with it. Velocity will drift with time, so keep a moving average of velocity smoothed over the duration of a step; as long as it doesn't drift by more than, say, half of walking speed per 3 oscillations, it's walking, but only if it initially rose to walking speed within a short time, i.e. half a second or perhaps 2 oscillations.
All of that should just about cover it.
Of course, a little AI would help make things simpler, or just as complex but amazingly accurate, if you fed all of these as inputs to a neural network, i.e. as preprocessing.