How can I get a real-time exercise count and joint angle using ML Kit? I have looked at https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html for counting push-up and squat exercises.
I am getting the angle with the following method:
fun getAngle(firstPoint: PoseLandmark, midPoint: PoseLandmark, lastPoint: PoseLandmark): Double {
    var result = Math.toDegrees(
        atan2(lastPoint.getPosition().y - midPoint.getPosition().y,
              lastPoint.getPosition().x - midPoint.getPosition().x)
            - atan2(firstPoint.getPosition().y - midPoint.getPosition().y,
                    firstPoint.getPosition().x - midPoint.getPosition().x))
    result = Math.abs(result) // Angle should never be negative
    if (result > 180) {
        result = 360.0 - result // Fold reflex angles so the result is always in 0..180
    }
    return result
}
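For reference, here is a hypothetical usage sketch (in Java) feeding three ML Kit landmarks into getAngle() to obtain the knee angle for a squat; the choice of LEFT_* landmarks and the null checks are assumptions, not part of the original snippet:

// Hypothetical usage: knee angle (hip-knee-ankle) from a detected Pose.
PoseLandmark hip = pose.getPoseLandmark(PoseLandmark.LEFT_HIP);
PoseLandmark knee = pose.getPoseLandmark(PoseLandmark.LEFT_KNEE);
PoseLandmark ankle = pose.getPoseLandmark(PoseLandmark.LEFT_ANKLE);
if (hip != null && knee != null && ankle != null) {
    double kneeAngle = getAngle(hip, knee, ankle); // ~180 standing, shrinks as you squat
}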
I have added some logic on my side, but I would still like to know if there is a proper way to do this. What I am doing is checking the angle on every frame.
I want to display a repetition count and feedback based on the exercise the user is doing.
I made a simple demo of squat counting: https://www.youtube.com/watch?v=XKrZV864rEQ
I just made three simple logical judgments:
(1) The elbows must be higher than the shoulders; otherwise, prompt "Please hold your hands behind your head."
(2) Standing straight is judged by the angle between the thigh and the calf; the results are currently not good.
(3) The distance between the feet is compared with the shoulder width, and the feet should be spread to a certain proportion of the shoulder width; otherwise, prompt "Please spread your feet to shoulder width."
Both standing and squatting are judged by taking the person's leg length / 5 as the minimum movement unit, i.e. the minimum distance from the last coordinate, because the person's distance from the camera affects the coordinate scale.
My English is poor; most of these sentences were translated by Google Translate.
Here are several things you could try:
(1) You need to ask your users to face the camera in a certain way; e.g., sideways is probably the easiest for detecting squats and frontal would be the hardest, so you could try something in between. How high the camera sits (on the ground, at head level, etc.) can also affect the angles.
(2) You can then calculate and track the angle between torso and thigh and the angle between thigh and calf to determine whether a squat is done (see the sketch after this list).
(3) About feedback, you may set some expected angles, and if the user's angle is smaller than that, you could say "squat deeper"...
(4) To get the expected angles, you would need to find some sample images and run the detector on them.
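As a minimal sketch of point (2), a two-threshold state machine on the knee angle avoids double counting caused by frame-to-frame jitter; the class name and both threshold values are illustrative assumptions to be tuned from sample images as in point (4):

// Minimal rep-counting sketch; thresholds are assumed values, not measured ones.
public class SquatCounter {
    private static final double ENTER_SQUAT = 100.0; // knee angle below this => "down"
    private static final double EXIT_SQUAT = 160.0;  // knee angle above this => "up"

    private boolean isDown = false;
    private int reps = 0;

    // Feed the knee angle (hip-knee-ankle) from every processed frame.
    public int onKneeAngle(double kneeAngle) {
        if (!isDown && kneeAngle < ENTER_SQUAT) {
            isDown = true;           // user reached the bottom of the squat
        } else if (isDown && kneeAngle > EXIT_SQUAT) {
            isDown = false;
            reps++;                  // one full down-up cycle completed
        }
        return reps;
    }
}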
I need to find the angle of a vehicle's turn, measured in degrees.
Location points update at equal intervals (1 sec), so the device records about 4-5 points during a turn. I have shown this schematically in the picture.
Is it possible to calculate the angle of turn using Location? If it is possible, how?
What I tried:
Create two geometric vectors from points 3, 4 and 1, 2 respectively and find the angle between those vectors. I calculated the vector components like Vector1 = (lat2 - lat1; lon2 - lon1). I'm not sure this approach can be applied to Location coordinates.
Use location1.bearingTo(location2). But this doesn't give the expected results; it seems to give "compass" bearings. Perhaps I could use it somehow, but I'm not sure.
I also tried a few trigonometric formulas from other answers, but they didn't give the expected angle.
EDIT: Solution
The accepted answer works great. But to complete it, I have to show my angleDifference method. This one works for me:
public int getAngleDifference(int currentAngle) {
    int r = 0;
    angleList.add(currentAngle);
    if (angleList.size() == 4) {
        int d = Math.abs(angleList.get(0) - angleList.get(3)) % 360;
        r = d > 180 ? 360 - d : d;
        angleList.clear();
    }
    return r;
}
I add points to the list until there are 4 of them, then calculate the angle difference between the 1st and 4th points for better results.
Hope it helps someone!
vect1 = LatLon2 - LatLon1; // vector subtraction
vect2 = LatLon4 - LatLon3;
By definition, the dot product has the property:
vect1.vect2 = ||vect1||*||vect2||*Cos(theta)
Here's a breakdown of the notation
The term vect1.vect2 is the dot product of vect1 and vect2.
The dot product can be broken down component-wise: let v1 = <x1,y1> and v2 = <x2,y2> be two arbitrary vectors; their dot product is:
v1.v2 = x1*x2 + y1*y2
and the magnitude of some arbitrary vector v is:
||v|| = sqrt(v.v), which is a scalar.
The above is equivalent to the Euclidean distance formula with components x and y:
||v|| = sqrt(x^2 + y^2)
Getting the angle
Find a value for theta given the two vectors vect1 and vect2:
theta = Math.acos(vect1.vect2 / (||vect1|| * ||vect2||))
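Putting this together, here is a minimal Java sketch of the dot-product approach; it assumes the four points have already been projected into a local Cartesian frame (e.g., metres east/north of a reference point), since raw lat/lon degrees are not Cartesian, as the next answer points out:

// Turn angle between vect1 = p2 - p1 and vect2 = p4 - p3; points given as {x, y}.
static double turnAngleDegrees(double[] p1, double[] p2, double[] p3, double[] p4) {
    double v1x = p2[0] - p1[0], v1y = p2[1] - p1[1];
    double v2x = p4[0] - p3[0], v2y = p4[1] - p3[1];
    double dot = v1x * v2x + v1y * v2y;             // vect1.vect2
    double mag1 = Math.sqrt(v1x * v1x + v1y * v1y); // ||vect1||
    double mag2 = Math.sqrt(v2x * v2x + v2y * v2y); // ||vect2||
    return Math.toDegrees(Math.acos(dot / (mag1 * mag2)));
}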
Approach 1 does not work as you described: lat/lon are not Cartesian coordinates (one degree of longitude expressed in metres is not the same as one degree of latitude; they only match at the equator). You would first have to transform them into a (local) Cartesian system.
There is also an error in the drawing: the angle marked with "?" is placed on the wrong side. You most probably want the angle 180 - ?.
In your example the car is turning less than 90°, although your angle shows more than 90°.
To understand this better, make another drawing where the car turns left by only 10 degrees. In your drawing this would show up as 170°, which is wrong.
Approach 2 works better, but you need to sum up the angle differences.
You have to write yourself a method
double angleDifference(double angle1, double angle2);
This looks easier than it is, although the code is only a few lines long.
Make sure you have test cases that cover the behaviour when crossing the 360° limit.
Example: a turn from bearing 10 to bearing 350 should give either 20 or -20, depending on whether you want the method to return the absolute value or the relative (signed) angle.
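A minimal sketch of such a method, under the assumption that bearings are in degrees and a signed result in (-180, 180] is wanted:

static double angleDifference(double angle1, double angle2) {
    double d = (angle2 - angle1) % 360.0;
    if (d > 180.0) d -= 360.0;    // e.g. 10 -> 350 becomes -20, not +340
    if (d <= -180.0) d += 360.0;  // e.g. 350 -> 10 becomes +20, not -340
    return d;
}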
I rotated my Android device around the x axis (from -180 degrees to 180 degrees); see the image below.
I assumed that only the rotation vector's x value would change, and that y and z might carry some noise but should not differ much between readings.
However, this is what I received instead; see
https://docs.google.com/spreadsheets/d/1ZLoSKI8XNjI1v4exaXxsuMtzP0qWTP5Uu4C3YTwnsKo/edit?usp=sharing
I suspect my sensor has some problem.
Any idea? Thank you very much.
Jimmy
Your sensor is fine. The rotation vector entries cannot simply be related to the rotation angle around a particular axis. A SensorEvent consists of a timestamp, sensor, accuracy and values; depending on the sensor, the values float[] varies in size from 1 to 5. The rotation vector's values are based on a unit quaternion, together representing the orientation of the world frame relative to your smartphone-fixed frame.
They are unitless, and positive is counter-clockwise.
The orientation of the phone is represented by the rotation necessary to align the East-North-Up coordinates with the phone's coordinates; that is, applying the rotation to the world frame (X,Y,Z) would align it with the phone coordinates (x,y,z).
If the vector were a rotation matrix, one could write it as v_body = R_rot_vec * v_world, pushing a world vector into the smartphone-fixed description.
Furthermore, about the vector:
The three elements of the rotation vector are equal to the last three components of a unit quaternion <cos(θ/2), x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)>.
Q: So what to do with it? Depending on your Euler-angle convention (24 possible sequences, 12 of them valid) you can calculate the corresponding angles u := [ψ, θ, φ], e.g. by applying the 123 (XYZ) sequence; the quat2eulerxyz() helper below does exactly that. If you already have the rotation matrix entries, you can extract the Euler angles of the 321 sequence the same way, with q1-q3 always being values[0-2]. (Don't get confused by u_ijk, as the reference (Diebel) uses different conventions compared to the standard.)
But wait: your linked table only has 3 values, which is similar to what I get. This is one SensorEvent of mine; the last three numbers are printed from values[]:
timestamp sensortype accuracy values[0] values[1] values[2]
23191581386897 11 -75 -0.0036907701 -0.014922042 0.9932963
Four quaternion components minus three reported values leaves one unknown. The first component q0 is redundant information (the documentation also says it should be available under values[3], depending on your API level), so we can use the norm (= length) to recover q0 from the other three: set ||q|| = 1 and solve for q0, giving q0 = sqrt(1 - q1^2 - q2^2 - q3^2). Now all of q0-q3 are known.
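In code, that recovery is a one-liner; note that Android also ships a helper, SensorManager.getQuaternionFromVector(), which does the same job (the Math.max clamp below is a defensive assumption against rounding noise):

// Recover q0 from the three reported components via ||q|| = 1.
float q1 = event.values[0], q2 = event.values[1], q3 = event.values[2];
float q0 = (float) Math.sqrt(Math.max(0f, 1f - q1 * q1 - q2 * q2 - q3 * q3));

// Equivalent, using the framework helper (q[0] = w, i.e. q0):
float[] q = new float[4];
SensorManager.getQuaternionFromVector(q, event.values);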
Furthermore, my Android 4.4.2 device does not provide the fourth value, the estimated heading accuracy (in radians), inside values[4], so I evaluate event.accuracy instead:
for (SensorEvent e : currentEvent) {
    if (e != null) {
        String toMsg = "";
        for (int i = 0; i < e.values.length; i++) {
            toMsg += " " + String.valueOf(e.values[i]);
        }
        iBinder.msgString(String.valueOf(e.timestamp) + " "
                + String.valueOf(e.sensor.getType()) + " "
                + String.valueOf(e.accuracy) + toMsg, 0);
    }
}
Put those equations into code and you will get things sorted.
Here is a short conversion helper that converts quaternions to Euler angles using either XYZ or ZYX. It can be run from the shell; see GitHub. (BSD-licensed)
The relevant part for XYZ
/* quaternion to euler in XYZ (seq: 123) */
double* quat2eulerxyz(double* q) {
    /* euler angles */
    double psi   = atan2( -2.*(q[2]*q[3] - q[0]*q[1]), q[0]*q[0] - q[1]*q[1] - q[2]*q[2] + q[3]*q[3] );
    double theta = asin( 2.*(q[1]*q[3] + q[0]*q[2]) );
    double phi   = atan2( 2.*(-q[1]*q[2] + q[0]*q[3]), q[0]*q[0] + q[1]*q[1] - q[2]*q[2] - q[3]*q[3] );
    /* save variables by simply pushing them back into the array and returning */
    q[1] = psi;
    q[2] = theta;
    q[3] = phi;
    return q;
}
Here are some examples applying quaternions to Euler angles:
**Q:** What does the sequence ijk stand for? Take two coordinate frames A and B superposed on each other (all axes aligned) and start rotating frame B: first around the i-axis by angle `psi`, then around the j-axis by angle `theta`, and last around the k-axis by angle `phi`. It could also be α, β, γ for i, j, k. *I avoid the numeric labels as they are confusing (Diebel vs. other papers).*
R(psi,theta,phi) = R_z(phi) R_y(theta) R_x(psi)
The trick is that the elementary rotations are applied from right to left, although we read the sequence from left to right.
Those are the three elementary rotations you go through to get from
A to B: *v_B = R(psi,theta,phi) v_A*
**Q:** So how do you get the Euler angles/quaternion to turn from [0°,0°,0°] to e.g. [0°,90°,0°]? First align both frames from the pictures, i.e. the known device frame B with the "invisible" world frame A. You are done superposing when the angles all reach [0°,0°,0°]: just figure out where north, south and east are where you are sitting right now and point the device's frame B in those directions. Now when you rotate around the y-axis counter-clockwise by 90°, you will have the desired [0°,90°,0°] when converting the quaternion.
*Julian*
*Kinematics source: Diebel (Stanford), with solid info on the mechanics background (careful: for Diebel, XYZ is denoted u_321 (1,2,3) while ZYX is u_123 (3,2,1)); this is a good starting point.*
I am making a 2D game. The phone is held horizontally and a character moves up/down and left/right to avoid obstacles. The character is controlled by the phone's accelerometer. Everything works fine as long as the player doesn't mind that (0,0), the point where the character stands still, is where the phone is held perfectly flat. In that scenario it's possible to read the X and Y values directly and use them to control the character. The accelerometer values are between -10 and 10 (multiplied by an acceleration constant to set the character's movement speed); libgdx is the framework used.
The problem is that (0,0) at perfectly flat isn't very comfortable, so the idea is to calibrate it so that (0,0) is set to the phone's orientation at a specific point in time.
Which brings me to my question: how would I do this? I tried reading the current X and Y values and subtracting them. The problem is that when the phone is held at a 90-degree angle, the X offset is 10 (the maximum value), so it becomes impossible to move in that direction because the reading can never exceed 10 (10 - 10 = 0). The Z axis has to come into play here somehow; I'm just not sure how.
Thanks for the help. I tried explaining as best I can; I did search for a solution, but I don't even know the proper term for what I'm looking for.
An old question, but I am providing the answer here as I couldn't find a good answer for Android or LibGDX anywhere. The code below is based on a solution someone posted for iOS (sorry, I have lost the reference).
You can do this in three parts:
Capture a vector representing the neutral direction:
Vector3 tiltCalibration = new Vector3(
        Gdx.input.getAccelerometerX(),
        Gdx.input.getAccelerometerY(),
        Gdx.input.getAccelerometerZ() );
Transform this vector into a rotation matrix:
public void initTiltControls( Vector3 tiltCalibration ) {
    Vector3.tmp.set( 0, 0, 1 );
    Vector3.tmp2.set( tiltCalibration ).nor();
    Quaternion rotateQuaternion = new Quaternion().setFromCross( Vector3.tmp, Vector3.tmp2 );
    Matrix4 m = new Matrix4( Vector3.Zero, rotateQuaternion, new Vector3( 1f, 1f, 1f ) );
    this.calibrationMatrix = m.inv();
}
Whenever you need inputs from the accelerometer, first run them through the rotation matrix:
public void handleAccelerometerInputs( float x, float y, float z ) {
    Vector3.tmp.set( x, y, z );
    Vector3.tmp.mul( this.calibrationMatrix );
    x = Vector3.tmp.x;
    y = Vector3.tmp.y;
    z = Vector3.tmp.z;
    [use x, y and z here]
    ...
}
For a simple solution you can look at the methods:
Gdx.input.getAzimuth(), Gdx.input.getPitch(), Gdx.input.getRoll()
The downside is that these somehow use the internal compass to give your device's rotation relative to North/South/East/West. I only tested this very briefly, so I'm not 100% sure about it, but it might be worth a look.
The more complex method involves some trigonometry: basically you have to calculate the angle the phone is held at from Gdx.input.getAccelerometerX/Y/Z(). It must be something like this (for rotation along the longer side of the phone):
Math.atan(Gdx.input.getAccelerometerX() / Gdx.input.getAccelerometerZ());
For both approaches you then store the initial angle and subtract it later on. Watch out for the ranges, though: Math.atan(...) returns values between -Pi/2 and Pi/2, so you may want Math.atan2(x, z) for the full -Pi to Pi range.
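A minimal sketch of that store-and-subtract idea, assuming rotation around the device's longer side and using atan2 for the full range (the variable names are illustrative):

// At calibration time: remember the neutral angle.
float zeroAngle = (float) Math.atan2(Gdx.input.getAccelerometerX(),
                                     Gdx.input.getAccelerometerZ());

// Every frame: use the offset from the neutral angle instead of the raw axis value.
float angle = (float) Math.atan2(Gdx.input.getAccelerometerX(),
                                 Gdx.input.getAccelerometerZ());
float offset = angle - zeroAngle;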
Hopefully that'll get you started somehow. You might search for "Accelerometer to pitch/roll/rotation" and similar, too.
I am creating an app in Android where I need to detect whether a person has fallen down. I know this question has been asked and answered in other forums, with the advice to use vector mathematics, but I am not getting accurate results from it.
Below is my code to detect the fall:
@Override
public void onSensorChanged(SensorEvent arg0) {
    if (arg0.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        double xx = arg0.values[0];
        double yy = arg0.values[1];
        double zz = arg0.values[2];
        // Magnitude of the acceleration vector, rounded to the nearest integer.
        double aaa = Math.round(Math.sqrt(Math.pow(xx, 2)
                + Math.pow(yy, 2)
                + Math.pow(zz, 2)));
        if (aaa <= 6.0) {       // free-fall phase: well below 1 g (~9.8)
            min = true;
        }
        if (min == true) {
            i++;
            if (aaa >= 13.5) {  // impact phase: spike above 1 g
                max = true;
            }
        }
        if (min == true && max == true) {
            Toast.makeText(FallDetectionActivity.this, "FALL DETECTED!!!!!", Toast.LENGTH_LONG).show();
            i = 0;
            min = false;
            max = false;
        }
        if (i > 4) {            // impact didn't follow soon enough; reset
            i = 0;
            min = false;
            max = false;
        }
    }
}
To explain the above code: I take the vector magnitude and check whether it drops to 6 or below (during the fall) and then suddenly rises to 13.5 or above (on landing) to confirm a fall.
Now, I was told in the forums that if the device is still, the vector magnitude will be about 9.8; during a fall it should be close to 0, and it should reach around 20 on landing. This doesn't seem to happen in my case. Can anybody suggest where I am going wrong?
There is a guy who developed an Android app for exactly that. Maybe you can get some information from his site: http://ww2.cs.fsu.edu/~sposaro/iFall/. He also wrote an article explaining how he detects the fall. It is really interesting; you should check it out!
Link for the paper: http://ww2.cs.fsu.edu/~sposaro/publications/iFall.pdf
Summarizing: the fall detection is based on the resultant of the X-Y-Z acceleration. Based on this value:
A fall generally starts with a free-fall period, making the resultant drop significantly below 1 g.
On impact with the ground, there is a peak in the amplitude of the resultant, with values higher than 3 g.
After that, if the person cannot move due to the fall, the resultant remains close to 1 g.
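As a minimal sketch of the first two phases (the thresholds and the time window are illustrative assumptions to tune against the paper, not values taken from it):

// Free fall followed quickly by a hard impact => probable fall.
private static final double FREE_FALL_THRESHOLD = 3.0; // m/s^2, well below 1 g (9.81)
private static final double IMPACT_THRESHOLD = 29.4;   // m/s^2, roughly 3 g
private static final long MAX_GAP_MS = 1000;           // free fall -> impact window

private boolean freeFallSeen = false;
private long freeFallTime;

void onResultant(double resultant, long nowMs) {
    if (resultant < FREE_FALL_THRESHOLD) {
        freeFallSeen = true;
        freeFallTime = nowMs;
    } else if (freeFallSeen && resultant > IMPACT_THRESHOLD
            && nowMs - freeFallTime < MAX_GAP_MS) {
        freeFallSeen = false;
        // Fall detected; optionally also check the resultant stays near 1 g afterwards.
    }
}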
The following will happen if a person / phone falls down:
the absolute acceleration vector value goes to 0 (with some noise, of course);
there will be a fair spike in the absolute vector value on landing (up to the maximum value the accelerometer can provide).
When the phone is immobile, you have a vector with modulus equal to earth gravity, pointing up.
Your code is basically correct, but I would use some averaging, because the accelerometers used in phones are cheap crap: noisy and lacking precision.
Adding averaging to your signal means taking a moving average. The result depends on your window size. For example, say I have a vector with the numbers 1, 2, 3, 4, 5, 6 and my window size is 2. The moving average takes every two consecutive numbers and averages them: (1+2)/2, then moving one step to the next pair, (2+3)/2, and so on.
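A minimal sketch of such a moving average over the magnitude signal (the window size is an assumption to tune; a few samples is typical):

// Moving average with the given window size; output is shorter by windowSize - 1.
static double[] movingAverage(double[] signal, int windowSize) {
    double[] out = new double[signal.length - windowSize + 1];
    for (int i = 0; i < out.length; i++) {
        double sum = 0;
        for (int j = 0; j < windowSize; j++) {
            sum += signal[i + j];
        }
        out[i] = sum / windowSize;
    }
    return out;
}

// Usage: movingAverage(new double[]{1, 2, 3, 4, 5, 6}, 2) -> {1.5, 2.5, 3.5, 4.5, 5.5}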
I'm writing an application and my aim is to detect when a user is walking.
I'm separating gravity from linear acceleration with a simple low-pass filter, like this:
float kFilteringFactor = 0.6f;

// Low-pass filter to estimate gravity per axis.
gravity[0] = (accelerometer_values[0] * kFilteringFactor) + (gravity[0] * (1.0f - kFilteringFactor));
gravity[1] = (accelerometer_values[1] * kFilteringFactor) + (gravity[1] * (1.0f - kFilteringFactor));
gravity[2] = (accelerometer_values[2] * kFilteringFactor) + (gravity[2] * (1.0f - kFilteringFactor));

// High-pass residual: linear acceleration with gravity removed.
linear_acceleration[0] = (accelerometer_values[0] - gravity[0]);
linear_acceleration[1] = (accelerometer_values[1] - gravity[1]);
linear_acceleration[2] = (accelerometer_values[2] - gravity[2]);

float magnitude = (float) Math.sqrt(
        linear_acceleration[0] * linear_acceleration[0]
      + linear_acceleration[1] * linear_acceleration[1]
      + linear_acceleration[2] * linear_acceleration[2]); // sqrt is already non-negative

if (magnitude > 0.2)
    // walking
The array gravity[] is initialized with 0s.
I can detect when a user is walking by looking at the magnitude of the linear-acceleration vector, but my problem is that when a user is not walking and merely moves the phone, it looks as if they were walking.
Am I using the right filter?
Is it right to watch only the magnitude of the vector, or do I have to look at the individual axis values?
Google provides an API for this called DetectedActivity, which can be obtained using the ActivityRecognitionApi. The docs can be accessed here and here.
DetectedActivity has the method public int getType() to get the current activity of the user, and also public int getConfidence(), which returns a value from 0 to 100. The higher the value returned by getConfidence(), the more certain the API is that the user is performing the returned activity.
Here is a summary of the constants returned by getType():
int IN_VEHICLE The device is in a vehicle, such as a car.
int ON_BICYCLE The device is on a bicycle.
int ON_FOOT The device is on a user who is walking or running.
int RUNNING The device is on a user who is running.
int STILL The device is still (not moving).
int TILTING The device angle relative to gravity changed significantly.
int UNKNOWN Unable to detect the current activity.
int WALKING The device is on a user who is walking.
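As a hypothetical usage sketch: after registering for updates through the ActivityRecognitionApi, the Intent delivered to your PendingIntent can be unpacked like this (the IntentService and the 75 confidence cut-off are illustrative assumptions):

// Inside the IntentService (or receiver) that receives activity updates.
@Override
protected void onHandleIntent(Intent intent) {
    if (ActivityRecognitionResult.hasResult(intent)) {
        ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
        DetectedActivity activity = result.getMostProbableActivity();
        if (activity.getType() == DetectedActivity.WALKING
                && activity.getConfidence() >= 75) {
            // Treat the user as walking.
        }
    }
}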
My first intuition would be to run an FFT analysis on the sensor history and see which frequencies have high magnitudes when walking.
It's essentially seeing what walking "sounds like": treating the accelerometer inputs like a microphone and finding the frequencies that are loud while walking (in other words, at what frequency the biggest acceleration is happening).
I'd guess you'd be looking for a high magnitude at some low frequency (around the footstep rate) or maybe something else. It would be interesting to see the data.
My guess is you run the FFT and check whether the magnitude at some frequency exceeds a threshold, or whether the difference between the magnitudes of two frequencies exceeds some amount. Again, the actual data would determine how you detect it.
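For illustration, a naive single-bin DFT magnitude can stand in for a full FFT when probing a handful of candidate frequencies; this assumes samples holds a fixed-rate window of acceleration magnitudes, and a real FFT library would be the better choice in production:

// Magnitude of frequency bin k over the sample window (naive DFT, O(n) per bin).
static double magnitudeAt(double[] samples, int k) {
    double re = 0, im = 0;
    for (int n = 0; n < samples.length; n++) {
        double angle = 2 * Math.PI * k * n / samples.length;
        re += samples[n] * Math.cos(angle);
        im -= samples[n] * Math.sin(angle);
    }
    return Math.sqrt(re * re + im * im);
}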
For walking detection I use the derivative applied to the smoothed accelerometer signal. When the derivative exceeds a threshold value, I take it as a step. I guess it's not best practice, though, and it only works when the phone is in a trouser pocket.
The following code was used in this app: https://play.google.com/store/apps/details?id=com.tartakynov.robotnoise
@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) {
        return;
    }
    final float z = smooth(event.values[2]); // scalar kalman filter
    if (Math.abs(z - mLastZ) > LEG_THRSHOLD_AMPLITUDE) {
        mInactivityCount = 0;
        int currentActivity = (z > mLastZ) ? LEG_MOVEMENT_FORWARD : LEG_MOVEMENT_BACKWARD;
        if (currentActivity != mLastActivity) {
            mLastActivity = currentActivity;
            notifyListeners(currentActivity);
        }
    } else {
        if (mInactivityCount > LEG_THRSHOLD_INACTIVITY) {
            if (mLastActivity != LEG_MOVEMENT_NONE) {
                mLastActivity = LEG_MOVEMENT_NONE;
                notifyListeners(LEG_MOVEMENT_NONE);
            }
        } else {
            mInactivityCount++;
        }
    }
    mLastZ = z;
}
EDIT: I don't think this is accurate enough, since when walking normally the average acceleration is near 0. The most you can do with raw acceleration is detect when someone starts or stops walking (but, as you said, it's difficult to distinguish that from the device being moved by someone standing in one place).
So what I wrote earlier probably wouldn't work anyway:
You can "predict" whether the user is moving by discarding the cases when the user is clearly not moving. The first two options that come to my mind are:
Check whether the phone is "hidden", using the proximity and (optionally) light sensors. This method is less accurate but easier.
Check the continuity of the movement: if the phone has been moving for more than, say, 10 seconds and the movement is not negligible, consider that the user is walking. I know it isn't perfect either, but it's difficult without using any kind of positioning; by the way, why don't you just use LocationManager?
Try detecting the up-and-down oscillations and the fore-and-aft oscillations, plus the frequency of each, and make sure they stay aligned within bounds on average. You would then be detecting not just walking but that specific person's gait style, which should remain relatively constant for several steps at a time to qualify as walking.
As long as the last 3 oscillations line up within reason, conclude that walking is occurring, provided this also holds:
Measure horizontal acceleration and integrate it into a velocity value. Velocity will drift with time, so keep a moving average of velocity smoothed over the duration of a step; as long as it doesn't drift by more than, say, half of walking speed per 3 oscillations, it's walking, but only if it initially rose to walking speed within a short time, i.e. half a second or perhaps 2 oscillations.
All of that should just about cover it.
Of course, a little AI would help make things simpler, or just as complex but amazingly accurate, if you fed all of these as preprocessed inputs to a neural network.