I'm using the latest UX SDK 4.12 on Android with a DJI Mavic 2 Enterprise.
I upload two waypoints in a WaypointMission. The aircraft heads in the direction of flight:
WaypointMissionHeadingMode.AUTO
Once the last waypoint is reached, I send a startShootPhoto command to take a single photo.
For some strange reason, the aircraft turns to absolute north or north-west before taking the photo and then turns back to its original heading.
Can anyone please suggest how to keep the aircraft heading in the same direction while taking the photo?
I found the following attribute for controlling the flight orientation, and it solved the problem. It seems the camera was not what was causing the aircraft to turn.
aircraft.getFlightController().setFlightOrientationMode(FlightOrientationMode.AIRCRAFT_HEADING,
        new CommonCallbacks.CompletionCallback() {
            @Override
            public void onResult(DJIError djiError) {
                Log.i(TAG, "FlightOrientationMode " + djiError);
            }
        });
WaypointMissionHeadingMode mHeadingMode = WaypointMissionHeadingMode.AUTO;
waypointMissionBuilder = new WaypointMission.Builder()
        .finishedAction(mFinishedAction)
        .headingMode(mHeadingMode)
        .autoFlightSpeed(SPEED)
        .maxFlightSpeed(SPEED)
        .flightPathMode(WaypointMissionFlightPathMode.NORMAL);
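For reference, here is a minimal sketch of how the built mission could then be loaded and uploaded through the SDK 4.x WaypointMissionOperator (this follows the standard operator flow; the coordinates and altitude below are placeholder values):

WaypointMissionOperator operator =
        DJISDKManager.getInstance().getMissionControl().getWaypointMissionOperator();

// Placeholder waypoints; replace with your real coordinates.
double lat1 = 0, lon1 = 0, lat2 = 0, lon2 = 0;
float altitude = 30f;
waypointMissionBuilder.addWaypoint(new Waypoint(lat1, lon1, altitude));
waypointMissionBuilder.addWaypoint(new Waypoint(lat2, lon2, altitude));

// loadMission() returns null on success, or an error describing what is wrong.
DJIError loadError = operator.loadMission(waypointMissionBuilder.build());
if (loadError == null) {
    operator.uploadMission(djiError -> Log.i(TAG, "uploadMission: " + djiError));
}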
Related
How can I get a real-time exercise count and angle using ML Kit? I checked https://ai.googleblog.com/2020/08/on-device-real-time-body-pose-tracking.html for push-up and squat exercise counting.
I am getting the angle with the following method:
fun getAngle(firstPoint: PoseLandmark, midPoint: PoseLandmark, lastPoint: PoseLandmark): Double {
    var result = Math.toDegrees(atan2(lastPoint.getPosition().y - midPoint.getPosition().y,
            lastPoint.getPosition().x - midPoint.getPosition().x)
            - atan2(firstPoint.getPosition().y - midPoint.getPosition().y,
                    firstPoint.getPosition().x - midPoint.getPosition().x))
    result = Math.abs(result) // Angle should never be negative
    if (result > 180) {
        result = 360.0 - result // Always get the acute representation of the angle
    }
    return result
}
I have added my own logic, but I would still like help if there is a more proper approach. What I am doing is checking the angle on every frame.
I want to display a count and feedback based on the exercise the user is doing.
I made a simple demo of squat counting: https://www.youtube.com/watch?v=XKrZV864rEQ
I just made three simple logical checks:
(1) The elbows must be higher than the shoulders; otherwise the prompt "please hold your hands behind your head" is shown.
(2) Standing straight is judged by the angle between the thigh and the calf; this currently does not work well.
(3) The distance between the legs is compared with the shoulder width; the feet should be a certain proportion of the shoulder width apart, otherwise the prompt "please spread your feet to shoulder width" is shown.
Both standing and squatting are judged by taking the person's leg length / 5 as the minimum movement unit, i.e. the minimum distance from the last coordinate, because the distance between the person and the camera affects the coordinate scale.
My English is poor; most of these sentences were translated with Google Translate.
Here are several things you could try:
(1) You need to ask your users to face the camera in a certain way, e.g. a side view might be the easiest for detecting a squat and a frontal view would be the hardest. You could try something in between. How high the camera sits (on the ground, at head level, etc.) can also affect the angle.
(2) Then you can calculate and track the angle between the body and the thigh and the angle between the thigh and the calf to determine whether a squat is done.
(3) For feedback, you may set some expected angles, and if the user's angle is smaller than expected, you could say "squat deeper"...
(4) To get the expected angles, you would need to find some sample images and run the detector on them.
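As an illustration of points (2) and (3), here is a minimal rep-counting sketch in Java (the DOWN_ANGLE and UP_ANGLE thresholds are assumptions you would tune from sample images, as in point (4)):

// Counts one squat when the knee angle dips below DOWN_ANGLE
// and then rises back above UP_ANGLE.
public class SquatCounter {
    private static final double DOWN_ANGLE = 90.0;  // assumed "deep enough" threshold
    private static final double UP_ANGLE = 160.0;   // assumed "standing again" threshold

    private boolean isDown = false;
    private int reps = 0;

    // kneeAngle would come from getAngle(hip, knee, ankle) on every frame.
    public int update(double kneeAngle) {
        if (!isDown && kneeAngle < DOWN_ANGLE) {
            isDown = true;               // reached the bottom of the squat
        } else if (isDown && kneeAngle > UP_ANGLE) {
            isDown = false;              // stood back up: one full rep
            reps++;
        }
        return reps;
    }
}

The isDown state is also a natural place to hook feedback: if the knee angle bottoms out above DOWN_ANGLE before the user rises again, you could prompt "squat deeper".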
I'm using Wikitude (on Android with the JavaScript APIs) to show a transparent video inside the AR experience. I don't have a marker on which to put my video. The video has its own coordinates (relative to the user's position) and I want to place it at an exact position so that the user can see it when their device is pointing in that direction.
To do that, I used an AR.RelativeLocation object and placed a VideoDrawable at a particular position.
// the location
var location = new AR.RelativeLocation(null, 5, 5, 0);

// the video
var video = new AR.VideoDrawable("assets/transparentVideo.mp4", 5, {
    scale: {
        x: 1,
        y: 1,
        z: 1
    },
    isTransparent: true,
    onLoaded: this.worldLoaded
});
video.play(0); // the video starts immediately

// the GeoObject showing the video
var obj = new AR.GeoObject(location, {
    drawables: {
        cam: [video]
    }
});
The problem is that the video is not stable at all. When I turn my device I see the video approximately at its position, but it is not fixed: it follows my camera movements for a while, as if it were placed using the motion sensor rather than the gyroscope. Is there a way to stabilize it?
The Wikitude Android SDK does not use the gyroscope at the moment, but a combination of accelerometer and compass to calculate the orientation, and there is no method to change the accuracy. What you can try is calibrating your compass as described here: https://support.google.com/maps/answer/6145351?hl=en or here: https://android.stackexchange.com/questions/10145/how-can-i-calibrate-the-compass-on-my-phone, and keeping your distance from things that can influence compass accuracy, such as other electrical devices.
I'm new to scripting and I'm trying really hard to make an AR application for Android mobiles in Unity3d. I ran into a problem that I cannot solve no matter what I've thought of.
Here is the situation. I have an AR camera scripted with the NyARToolkit plugin for Augmented Reality for Unity3d.
This AR camera, tagged as "MainCamera", needs to be switched off when a certain distance from the marker that renders the model is reached (i.e. when the Android phone is moved away).
When this distance is reached, I want to switch off "MainCamera" and load another Unity scene.
The following code is what I've tried so far, without much success. I should mention that this script is attached to "MainCamera", and this is the gameObject that needs to be disabled.
Here is the script:
#pragma strict

var mainCamera : Transform;
var camera : GameObject;

function Update() {
    var distance = Vector3.Distance(mainCamera.position, transform.position);
    if (distance < 0) {
        Debug.Log("CloseUp camera is on : " + distance);
    }
    if (distance > 1) {
        Debug.Log("CloseUp camera is off : " + distance);
        Camera.main.gameObject.SetActiveRecursively(false);
    }
}
Can somebody take a look at my code snippet and post a working edit, so I can get the distance right and switch the camera off when the user moves the Android mobile away from the marker?
Thank you all in advance for your answers.
"Edit 1"
I know I'm not even close to the function need to make work on mobile, but this link, will give you a view of that functionality. I need to get the distance from the marker to mobile first, and if that distance is e.g. above 1.5 m, the AR camera should be switched off and a new level should be loaded.
You didn't mention what exactly wasn't working. Based on what I could piece together from your post and the code sample you provided, you just need to add the level-loading code.
#pragma strict

var mainCamera : Transform;
var camera : GameObject;

function Update() {
    var distance = Vector3.Distance(mainCamera.position, transform.position);
    if (distance < 0) {
        Debug.Log("CloseUp camera is on : " + distance);
    }
    if (distance > 1) {
        Debug.Log("CloseUp camera is off : " + distance);
        // If you are loading a new scene you likely don't need to bother turning off the camera
        Camera.main.gameObject.SetActiveRecursively(false);
        Application.LoadLevel("nameOfTheLevelToLoad");
    }
}
Here's the code I use...
float DistanceMainCamPosSpawnedPrefab = Vector3.Distance(
        MainCam.transform.position,
        m_SpawnedOnePrefab.transform.position); // m_SpawnedOnePrefab would be your marker

if (DistanceMainCamPosSpawnedPrefab > .3f && DistanceMainCamPosSpawnedPrefab < .5f)
{
    // run your code here
}
I am using Camera.Face to detect faces and min3D to load 3D models.
I want to make the model move with the face, but it is not working well.
@Override
public void updateScene() {
    if (mFaces == null) {
        animeModel.position().x = animeModel.position().y = animeModel.position().z = 0;
        return;
    }
    for (Face face : mFaces) {
        if (face == null) {
            continue;
        }
        animeModel.position().x = face.rect.centerX();
        animeModel.position().y = face.rect.centerY();
    }
}
Are the model's coordinates and the rectangle's coordinates in different systems?
(world coordinates vs. screen coordinates, or something like that?)
How can I solve this?
UPDATE:
I have tried to get the model's coordinates and the face's coordinates.
The two values are totally different.
How do I convert face.rect.centerX() to animeModel.position().x?
Here is an article all about how a face tracking demo was developed:
http://www.smallscreendesign.com/2011/02/07/about-face-detection-on-android-%E2%80%93-part-1/
That app is also available on the Play Store. Part 1 of the article above has some performance metrics on recognition time; it looks like it may take two seconds or more to detect a face.
You could use the code in that article for prototyping. You may discover that face detection doesn't happen fast or often enough to track a face in real time.
Here is the documentation for face tracking on the Android Developer site:
http://developer.android.com/reference/android/hardware/Camera.Face.html
UPDATE:
Check out this library: https://code.google.com/p/asmlib-opencv/
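Regarding the coordinate question in the update: android.hardware.Camera.Face rectangles are reported in the camera driver's coordinate space, which runs from (-1000, -1000) to (1000, 1000) regardless of preview size, so they will not match a min3D position directly. Here is a rough sketch of mapping a face center into view pixels (a hypothetical helper; it ignores display rotation and front-camera mirroring, which the Camera.Face documentation handles with a Matrix):

// Maps Camera.Face coordinates (-1000..1000 on both axes) to view pixels.
// Hypothetical helper: no rotation or mirroring handling.
public static float faceToViewX(float faceX, int viewWidth) {
    return (faceX + 1000f) / 2000f * viewWidth;
}

public static float faceToViewY(float faceY, int viewHeight) {
    return (faceY + 1000f) / 2000f * viewHeight;
}

The resulting pixel position would still have to be converted into the min3D scene's coordinate system (or the model kept on a screen-aligned plane) before being assigned to animeModel.position().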
I am creating an Android app where I need to detect whether a person has fallen down. I know this question has been asked in other forums and answered with "use vector mathematics", but I am not getting accurate results from it.
Below is my code to detect the fall:
@Override
public void onSensorChanged(SensorEvent arg0) {
    if (arg0.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        double gvt = SensorManager.STANDARD_GRAVITY;
        float vals[] = arg0.values;
        //int sensor=arg0.sensor.getType();
        double xx = arg0.values[0];
        double yy = arg0.values[1];
        double zz = arg0.values[2];
        // magnitude of the acceleration vector
        double aaa = Math.round(Math.sqrt(Math.pow(xx, 2)
                + Math.pow(yy, 2)
                + Math.pow(zz, 2)));
        if (aaa <= 6.0) {
            min = true;
            //mintime=System.currentTimeMillis();
        }
        if (min == true) {
            i++;
            if (aaa >= 13.5) {
                max = true;
            }
        }
        if (min == true && max == true) {
            Toast.makeText(FallDetectionActivity.this, "FALL DETECTED!!!!!", Toast.LENGTH_LONG).show();
            i = 0;
            min = false;
            max = false;
        }
        if (i > 4) {
            i = 0;
            min = false;
            max = false;
        }
    }
}
To explain the above code: I take the vector sum and check whether the value drops to 6.0 or below (during the fall) and then suddenly rises to 13.5 or above (on landing) to confirm the fall.
I was told in the forums that when the device is still, the vector sum should be about 9.8; during a fall it should be close to 0, and on landing it should go to around 20. This doesn't seem to happen in my case. Can anybody suggest where I am going wrong?
There is a guy who developed an Android app for that. Maybe you can get some information from his site: http://ww2.cs.fsu.edu/~sposaro/iFall/. He also wrote an article explaining how he detects falls. It is really interesting; you should check it out!
Link to the paper: http://ww2.cs.fsu.edu/~sposaro/publications/iFall.pdf
Summing up, the fall detection is based on the resultant of the X-Y-Z acceleration. Based on this value:
A fall generally starts with a free-fall period, making the resultant drop significantly below 1 g.
On impact with the ground, there is a peak in the amplitude of the resultant, with values higher than 3 g.
After that, if the person cannot move due to the fall, the resultant remains close to 1 g.
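As a rough illustration of that three-phase pattern, here is a minimal Java sketch (the thresholds and the stillness window are assumptions based on the summary above, not values taken from the paper):

import android.hardware.SensorManager;

public class FallPhaseDetector {
    private enum Phase { WAIT_FREE_FALL, WAIT_IMPACT, WAIT_STILLNESS }

    private Phase phase = Phase.WAIT_FREE_FALL;
    private int stillSamples = 0;

    // Feed the resultant acceleration in m/s^2; returns true once a fall is detected.
    public boolean update(double magnitude) {
        double g = magnitude / SensorManager.STANDARD_GRAVITY; // resultant in g
        switch (phase) {
            case WAIT_FREE_FALL:
                if (g < 0.6) phase = Phase.WAIT_IMPACT;         // free-fall dip below 1 g
                break;
            case WAIT_IMPACT:
                if (g > 3.0) {                                  // impact spike above 3 g
                    phase = Phase.WAIT_STILLNESS;
                    stillSamples = 0;
                }
                break;
            case WAIT_STILLNESS:
                if (Math.abs(g - 1.0) < 0.2) {
                    stillSamples++;                             // lying still near 1 g
                } else {
                    stillSamples = 0;
                }
                if (stillSamples > 100) {                       // roughly 2 s at 50 Hz
                    phase = Phase.WAIT_FREE_FALL;
                    return true;
                }
                break;
        }
        return false;
    }
}

A real implementation would also reset the intermediate phases after a timeout, so a single dip does not leave the detector armed forever.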
The following will happen if the person / phone falls down:
the absolute acceleration vector value goes to 0 (with some noise, of course)
there will be a fair spike in the absolute vector value on landing (up to the maximum value the accelerometer provides)
When the phone is immobile, you have a vector whose modulus is earth gravity, pointing up.
Your code is basically correct, but I would add some averaging, because the accelerometers used in phones are cheap crap: noisy and lacking precision.
Adding averaging to your signal means a moving average, and it depends on your window size. For example, say I have a vector with the numbers 1, 2, 3, 4, 5, 6 and my window size is 2. The moving average takes each window of two numbers from the vector and averages it: you take (1+2)/2, then slide the window one step to get (2+3)/2, and so on.
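A minimal sketch of such a smoother follows (the window size is an assumption to tune against your sensor's sampling rate); you would pass each new magnitude through add() before applying your min/max thresholds:

// Simple moving average over the last `size` samples.
public class MovingAverage {
    private final double[] window;
    private int index = 0;
    private int count = 0;
    private double sum = 0;

    public MovingAverage(int size) {
        window = new double[size];
    }

    // Adds a new sample and returns the current average of the window.
    public double add(double value) {
        sum -= window[index];    // drop the sample leaving the window
        window[index] = value;   // store the new sample
        sum += value;
        index = (index + 1) % window.length;
        if (count < window.length) {
            count++;
        }
        return sum / count;
    }
}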