I'm trying to get the Euler angles of a Face detected by FaceDetector.
Here is what I use to output to Logcat:
Log.v("debug", " X: " + face.pose(Face.EULER_X) + " Y: " + face.pose(Face.EULER_Y) + " Z: " + face.pose(Face.EULER_Z) );
But it always returns 0.0 for all three, no matter what angle the face is at. Any ideas why?
Yeah, the FaceDetector from API 1 never returns a pose angle; you can verify this in the source code.
The newer FaceDetectionListener from API 14 will return a pose angle, but it's only available on a limited number of devices right now. Not even all devices running API 14 can use it; you have to call getMaxNumDetectedFaces() to see whether your device supports the API.
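For example, you can probe for support at runtime before relying on it. Here is a minimal sketch, assuming the (now deprecated) android.hardware.Camera API; camera lifecycle and error handling are omitted:

Camera camera = Camera.open();
// getMaxNumDetectedFaces() returns 0 when the camera driver
// doesn't support face detection
if (camera.getParameters().getMaxNumDetectedFaces() > 0) {
    camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
        @Override
        public void onFaceDetection(Camera.Face[] faces, Camera camera) {
            // Inspect the detected faces here
        }
    });
    camera.startFaceDetection(); // must be called after startPreview()
}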
Alternatively, you can try OpenCV. A couple of options for that are http://code.opencv.org/projects/opencv/wiki/OpenCV4Android and http://code.google.com/p/javacv/. In my experience they aren't worth the hassle unless you really, really need the pose angle.
There are a few similar questions here. Check out the first answer from this link:
Android Facedetector pose values are always 0
And see below for someone who says they solved the problem:
Android Face Detection
Set the detector's mode to ACCURATE_MODE via setMode().
Here is an example that worked for me in Kotlin:
val detector = FaceDetector.Builder(context)
    // setClassificationType() takes ALL_CLASSIFICATIONS or NO_CLASSIFICATIONS,
    // not ACCURATE_MODE; the detection mode goes in setMode()
    .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
    .setMode(FaceDetector.ACCURATE_MODE)
    .setTrackingEnabled(true)
    .build()
With the latest version (ML Kit's FaceDetectorOptions), developers need to set the performance mode to PERFORMANCE_MODE_ACCURATE, not PERFORMANCE_MODE_FAST:
setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
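In context, that looks something like the following sketch with ML Kit's face detection (com.google.mlkit:face-detection); image stands in for an InputImage you've built from your camera frame or bitmap, and the Euler angles then come back per face (getHeadEulerAngleX() requires a recent ML Kit release):

FaceDetectorOptions options = new FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
        .build();
FaceDetector detector = FaceDetection.getClient(options);
detector.process(image)
        .addOnSuccessListener(faces -> {
            for (Face face : faces) {
                float rotY = face.getHeadEulerAngleY(); // head turn left/right
                float rotZ = face.getHeadEulerAngleZ(); // head roll
                float rotX = face.getHeadEulerAngleX(); // head nod up/down
            }
        });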
Context
I need to get the current speed of the aircraft.
Efforts
Looking at the documentation, I can see that the method WaypointMissionOperator.getAutoFlightSpeed exists, which should give me that info.
Problem
While implementing the code, I found that this method was missing. Was it moved, renamed, or removed?
Environment
OS: Android
DJI SDK version: 4.13.1
Have a look at the FlightControllerState class; it contains the current status of pretty much all of the aircraft's components.
Specifically, you should look at FlightController.setStateCallback(). The callback delivers current state information; you want getVelocityX(), getVelocityY() and getVelocityZ().
There are also keys for each; the documentation gives the details.
Having the X, Y and Z velocities, the speed can be computed as:
speed = sqrt(velocityX^2 + velocityY^2 + velocityZ^2)
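Putting it together, a minimal sketch against DJI Mobile SDK 4.x (how you obtain the Aircraft instance depends on your app's product registration code):

FlightController flightController = aircraft.getFlightController();
flightController.setStateCallback(new FlightControllerState.Callback() {
    @Override
    public void onUpdate(@NonNull FlightControllerState state) {
        // Velocities are in m/s, in the North-East-Down frame
        float vx = state.getVelocityX();
        float vy = state.getVelocityY();
        float vz = state.getVelocityZ();
        double speed = Math.sqrt(vx * vx + vy * vy + vz * vz);
    }
});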
I have trained a yolo-tiny model on my own dataset. The model works great in Python with OpenCV, but when I run the same model with OpenCV (3.4.3) on an Android smartphone, I get false detections at the top edge of the frame. I am using the dnn tutorial from OpenCV.
The net is created like this:
// Net net = Dnn.readNet(getPath("my_yolov3-tiny.weights", this), getPath("my_yolov3-tiny.cfg", this));
Net net = Dnn.readNetFromDarknet(getPath("my_yolov3-tiny.cfg", this), getPath("my_yolov3-tiny.weights", this));
The result is the same with both methods.
I am logging the detection to Logcat with the following code:
Log.e(TAG, "detection 0th object: classID=" + classId + " - label: " + label + " - xleft: " + xLeftBottom + " - yLeft: " + yLeftBottom + " - xright: " + xRightTop + " - yright: " + yRightTop);
and get the following output:
classID=0 - label: [my_object_name]: 0.24151088297367096 - xleft: 43 - yLeft: 0 - xright: 0 - yright: 0
I get detections, even though the frame is black. Is there any known problem in this version?
I'm sorry, the information you provide is not enough for us to help find the bug.
If you feed a black screen to your transfer-learned model on a normal Ubuntu PC, does it return false detections as well? If yes, it's a model problem. If no, go to the next step.
If your transfer-learned model works in both OpenCV and Python, then there shouldn't be an issue running it on Android. It seems you have a bug somewhere. Post all of your code, or the key part where you think you might have made a mistake.
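For comparison, here is what the standard yolov3-tiny forward pass looks like with OpenCV's Java dnn bindings (net is your readNetFromDarknet result, frame is a placeholder for the input Mat); if your preprocessing differs in scale factor, input size, or channel order (swapRB), that alone can produce spurious boxes near the frame edges:

// Standard preprocessing for a yolov3-tiny model; the input size (416x416)
// and the 1/255 scale factor must match your .cfg file
Mat blob = Dnn.blobFromImage(frame, 1.0 / 255.0, new Size(416, 416),
        new Scalar(0, 0, 0), /* swapRB = */ true, /* crop = */ false);
net.setInput(blob);
List<Mat> outputs = new ArrayList<>();
net.forward(outputs, net.getUnconnectedOutLayersNames());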
If you need to run it on Android urgently to finish a school project, you can follow this project to get it done. Just switch the model to yours.
https://github.com/ishay2b/android-yolo
I am currently working on face recognition in Android. I spent a reasonable amount of time on the internet and found the FaceDetector.Face class in Android. These are the utilities of this class:
Constants
float CONFIDENCE_THRESHOLD
int EULER_X The x-axis Euler angle of a face.
int EULER_Y The y-axis Euler angle of a face.
int EULER_Z The z-axis Euler angle of a face.
Public Methods
float confidence()
float eyesDistance()
void getMidPoint(PointF point)
float pose(int euler)
The problem is, I do not know how to use these methods, and I cannot find any tutorial or example source code for this. Should I use eyesDistance() for differentiating people? For example, Sarah's eye distance is 6.51 and John's is 6.82: when the code measures a person's eye distance and it is 6.82, does that tell you "it is John"? Is this the way to identify people, and if not, what is the algorithm for that? Or should I use the EULER constants, and in what way? I think I am going to use these methods for face recognition, but I do not know how.
Or can you suggest another solution for face recognition?
Any help would be appreciated.
The FaceDetector class doesn't do what you think it does. Specifically, it doesn't do Facial Recognition, but instead Facial Detection (hence the class name).
It analyzes an image and returns Faces found in the image. It makes no distinction between Faces (you can't tell if it's John's Face or Sarah's Face) other than the distance between their eyes - but that isn't really a valid comparison point. It just gives you the Faces found and the confidence level that the objects found are actually Faces.
Ex:
int maxNumFaces = 2; // Set this to whatever you want
FaceDetector fd = new FaceDetector(imageWidth, imageHeight, maxNumFaces);
FaceDetector.Face[] faces = new FaceDetector.Face[maxNumFaces];
try {
    // Note: the bitmap must be in RGB_565 format
    int numFacesFound = fd.findFaces(image, faces);
    for (int i = 0; i < numFacesFound; ++i) {
        FaceDetector.Face face = faces[i];
        PointF midPoint = new PointF();
        face.getMidPoint(midPoint); // fills in the point between the eyes
        Log.d(TAG, "Face " + i + " found with " + face.confidence() + " confidence!");
        Log.d(TAG, "Face " + i + " eye distance " + face.eyesDistance());
        Log.d(TAG, "Face " + i + " pose " + face.pose(FaceDetector.Face.EULER_Y));
        Log.d(TAG, "Face " + i + " midpoint (between eyes) " + midPoint);
    }
} catch (IllegalArgumentException e) {
    // From the docs: thrown if the Bitmap dimensions don't match the dimensions
    // defined at initialization, or the given array is not sized equal to the
    // maxFaces value defined at initialization
}
As Tushar said, the FaceDetector only detects faces; you can't recognize them using FaceDetector. The eye distance output is measured in pixels, not in cm or inches; it represents how big the face is inside the bitmap image. The Euler angles are supposed to represent the 3D rotation of the head. However, on any API level before 14, the Euler angle values will always be 0.0 (they are not computed), so take care with this.
If you want to do face recognition, you can use opencv. I haven't used it myself, but I think it is available on Android.
Face Recognition in OpenCV
http://docs.opencv.org/trunk/modules/contrib/doc/facerec/
If you don't want to or can't add OpenCV to your project, you can program the face recognition yourself. It takes some time, but it's not that hard. You can implement some variation of eigenfaces: http://www.youtube.com/watch?v=LYgBqJorF44&list=PLd3hlSJsX_Imk_BPmB_H3AQjFKZS9XgZm&index=16
Good luck!
I'm trying to detect a shake event using Cordova 2.2.0 for Android devices.
I found some questions related to this topic, but they're in native code, for example this question and this question.
Does anyone know how to detect this event using PhoneGap/Cordova, or should I write a plugin?
You can try shake.js. I've been looking into it but haven't implemented it yet. It looks promising.
Use the accelerometer and store the previous values (x, y and z). By defining thresholds for x, y and z, you can detect shaking whenever the difference between the previous values and the current ones is higher than the thresholds.
You can also use the magnitude of the acceleration vector (acc = sqrt(x*x + y*y + z*z)) or the timestamp to obtain better results.
Cordova offers the device-motion plugin, which (surprisingly) exposes a navigator.accelerometer object, instead of aligning with the W3C deviceorientation/devicemotion standard published since 2011.
When the device lies flat on a surface, the (x, y, z) acceleration will be roughly (0, 0, 9.81). The basic idea for detecting a shake is to watch the acceleration at a given frequency, calculate the delta from the previous sample, and decide whether it's larger than a threshold.
var sensitivity = 20; // total delta (m/s^2) that counts as a shake; tune for your device
var previousAcceleration = { x: null, y: null, z: null };

navigator.accelerometer.watchAcceleration(onSuccess, onError, { frequency: 300 });

// Assess the current acceleration parameters to determine a shake
function onSuccess(acceleration) {
    // Skip the very first sample, since there is nothing to compare it against
    if (previousAcceleration.x !== null) {
        var accelerationChange = {
            x: Math.abs(previousAcceleration.x - acceleration.x),
            y: Math.abs(previousAcceleration.y - acceleration.y),
            z: Math.abs(previousAcceleration.z - acceleration.z)
        };
        if (accelerationChange.x + accelerationChange.y + accelerationChange.z > sensitivity) {
            // Shake detected, invoke callback
        }
    }
    previousAcceleration = {
        x: acceleration.x,
        y: acceleration.y,
        z: acceleration.z
    };
}

function onError() {
    // Accelerometer not available
}
A plugin doing that is Lee Crossley's cordova-plugin-shake-detection.
Does Android have a similar method to the iPhone AVAudioPlayer's averagePowerForChannel?
I want to get an average reading of the amplitude as a value.
I don't think there is a built-in function but you can calculate it yourself.
To do this, calculate the root-mean-square (RMS) average over a window of consecutive samples:
rms = sqrt((sample0^2 + sample1^2 + ... + sampleN-1^2) / numberOfSamples)
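For illustration, a minimal sketch that computes the RMS of a buffer of 16-bit PCM samples, e.g. one filled by AudioRecord.read() (buffer is a placeholder for your short[] sample array):

// Root-mean-square amplitude of a buffer of 16-bit PCM samples
double sumOfSquares = 0;
for (short sample : buffer) {
    sumOfSquares += (double) sample * sample;
}
double rms = Math.sqrt(sumOfSquares / buffer.length);
// Optionally express it in dB relative to full scale (32768 for 16-bit audio)
double db = 20 * Math.log10(rms / 32768.0);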
The following links should be helpful to you, as they contain the full source code of two excellent Android sound-related projects.
Ringdroid
Rehearsal Assistant