I am currently working on face recognition in Android. I spent a reasonable amount of time searching the internet and found the FaceDetector.Face class in Android. These are the members of this class:
Constants
float CONFIDENCE_THRESHOLD
int EULER_X The x-axis Euler angle of a face.
int EULER_Y The y-axis Euler angle of a face.
int EULER_Z The z-axis Euler angle of a face.
Public Methods
float confidence()
float eyesDistance()
void getMidPoint(PointF point)
float pose(int euler)
The problem is, I do not know how to use these methods and I cannot find any tutorial or example source code for them. The question is: should I use eyesDistance() for differentiating people? For example, Sarah's eye distance is 6.51 cm and John's is 6.82 cm. When the code calculates a person's eye distance and it comes out as 6.82, does that tell you "it is John"? Is this the way to identify people? Or what is the algorithm for that? Or should I use the EULER constants? In what way? I think I am going to use these methods for face recognition, but I do not know how to use them.
Or can you suggest another solution for face recognition?
Any help would be appreciated.
The FaceDetector class doesn't do what you think it does. Specifically, it doesn't do Facial Recognition, but instead Facial Detection (hence the class name).
It analyzes an image and returns Faces found in the image. It makes no distinction between Faces (you can't tell if it's John's Face or Sarah's Face) other than the distance between their eyes - but that isn't really a valid comparison point. It just gives you the Faces found and the confidence level that the objects found are actually Faces.
Ex:
int maxNumFaces = 2; // Set this to whatever you want
// Note: the Bitmap passed to findFaces() must use the RGB_565 config
FaceDetector fd = new FaceDetector(imageWidth, imageHeight, maxNumFaces);
FaceDetector.Face[] faces = new FaceDetector.Face[maxNumFaces];
PointF midPoint = new PointF();
try {
    int numFacesFound = fd.findFaces(image, faces);
    for (int i = 0; i < numFacesFound; ++i) {
        FaceDetector.Face face = faces[i];
        if (face == null) continue;
        face.getMidPoint(midPoint); // fills midPoint with the point between the eyes
        Log.d("FaceDetection", "Face " + i + " found with " + face.confidence() + " confidence!");
        Log.d("FaceDetection", "Face " + i + " eye distance (pixels): " + face.eyesDistance());
        Log.d("FaceDetection", "Face " + i + " pose (Y axis): " + face.pose(FaceDetector.Face.EULER_Y));
        Log.d("FaceDetection", "Face " + i + " midpoint (between eyes): " + midPoint);
    }
} catch (IllegalArgumentException e) {
    // From the docs: thrown if the Bitmap dimensions don't match the dimensions
    // defined at initialization, or the given array is not sized equal to the
    // maxFaces value defined at initialization
}
As Tushar said, the FaceDetector only detects faces; you can't recognize them using FaceDetector. The eye distance output is measured in pixels, not in cm or inches; it represents how big the face is inside the bitmap image. The Euler angles are supposed to represent the 3D rotation of the head. However, on any API level before 14 the Euler angle values will always be 0.0 (they are not computed), so take care with this.
If you want to do face recognition, you can use opencv. I haven't used it myself, but I think it is available on Android.
Face Recognition in OpenCV
http://docs.opencv.org/trunk/modules/contrib/doc/facerec/
If you don't want to or can't add OpenCV to your project, you can program the face recognition yourself. It takes some time, but it's not that hard. You can implement some variation of eigenfaces: http://www.youtube.com/watch?v=LYgBqJorF44&list=PLd3hlSJsX_Imk_BPmB_H3AQjFKZS9XgZm&index=16
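To give an idea of the shape of such a solution, here is a minimal sketch of just the matching step of eigenfaces, assuming the mean face and the eigenface basis were already computed offline with PCA (all class, field and method names below are my own invention, not an existing API):

public class EigenfaceMatcher {

    private final double[] meanFace;      // average of all training face vectors
    private final double[][] eigenfaces;  // each row is one eigenface (unit vector)
    private final double[][] gallery;     // projected coefficients of the known faces
    private final String[] labels;        // name belonging to each gallery entry

    public EigenfaceMatcher(double[] meanFace, double[][] eigenfaces,
                            double[][] gallery, String[] labels) {
        this.meanFace = meanFace;
        this.eigenfaces = eigenfaces;
        this.gallery = gallery;
        this.labels = labels;
    }

    // Project a face image (flattened to a grayscale vector) onto the eigenface basis.
    private double[] project(double[] face) {
        double[] coeffs = new double[eigenfaces.length];
        for (int i = 0; i < eigenfaces.length; i++) {
            double dot = 0;
            for (int j = 0; j < face.length; j++) {
                dot += (face[j] - meanFace[j]) * eigenfaces[i][j];
            }
            coeffs[i] = dot;
        }
        return coeffs;
    }

    // Return the label of the known face whose coefficients are closest (nearest neighbour).
    public String recognize(double[] face) {
        double[] coeffs = project(face);
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int k = 0; k < gallery.length; k++) {
            double dist = 0;
            for (int i = 0; i < coeffs.length; i++) {
                double d = coeffs[i] - gallery[k][i];
                dist += d * d;
            }
            if (dist < bestDist) {
                bestDist = dist;
                best = k;
            }
        }
        return labels[best];
    }
}

In a real implementation you would also put a threshold on bestDist so that unknown faces are rejected instead of being matched to whoever happens to be closest.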
Good luck!
I am working on a project that requires finding the patterns made by device movement (like a golf swing, for example). I've searched a lot and still couldn't find any prepackaged library for this.
Now I'm trying to build one from scratch. To do this, I've retrieved gyroscope data from the device to find those patterns, but I have been unsuccessful so far.
These are the cases, in a nutshell:
Case 1: Detect a wave motion like a golf swing.
Case 2: Plot this motion in 3D so that the user can view the motion of the device.
Current source code (data from gyroscope)
float[] values = event.values;
// Movement
float x = values[0];
float y = values[1];
float z = values[2];
xAxis.setText("X : " + (int)x + " rad/s");
yAxis.setText("Y : " + (int)y + " rad/s");
zAxis.setText("Z : " + (int)z + " rad/s");
boolean waveFactor = (((int)z) > 3) && (((int)x) > 1);
if(waveFactor) {
Toast.makeText(context, "Horizontal wave success", Toast.LENGTH_SHORT).show();
}
Any sort of help/direction is well appreciated.
A gyroscope is not enough for your plans; you will also need accelerometer data. Also take into account that the axis information in the event is in a coordinate system tied to the device, not the real world, so you will need more sophisticated code to detect and evaluate movement. I did some small projects to record and display FFT-analysed data from the accelerometer. Feel free to take inspiration from them:
https://github.com/ko5tik/accmeter/blob/master/src/main/java/de/pribluda/android/accmeter/Sampler.java
https://github.com/ko5tik/accanalyser
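As a starting point for the coordinate-system issue, here is a minimal sketch (field names and structure are my own, not from the projects above) of rotating device-frame acceleration into the world frame with SensorManager.getRotationMatrix(); it assumes you are registered for both TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD:

private final float[] gravity = new float[3];
private final float[] geomagnetic = new float[3];
private final float[] rotation = new float[9];
private final float[] inclination = new float[9];

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        System.arraycopy(event.values, 0, gravity, 0, 3);
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        System.arraycopy(event.values, 0, geomagnetic, 0, 3);
    }
    if (SensorManager.getRotationMatrix(rotation, inclination, gravity, geomagnetic)) {
        // world = R * device: rotate the raw (gravity-including) acceleration
        // into the world frame using the row-major 3x3 rotation matrix.
        float wx = rotation[0] * gravity[0] + rotation[1] * gravity[1] + rotation[2] * gravity[2];
        float wy = rotation[3] * gravity[0] + rotation[4] * gravity[1] + rotation[5] * gravity[2];
        float wz = rotation[6] * gravity[0] + rotation[7] * gravity[1] + rotation[8] * gravity[2];
        // wz is the vertical component (gravity plus vertical motion); feed
        // wx, wy, wz into the pattern detection instead of the raw device axes.
    }
}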
I'm a beginner with the Vuforia library. I'm trying to calculate the real distance, in cm, between the AR camera and an image target via Vuforia in Android Studio. I found this code on the Vuforia forums, but when I try it, I don't get good results.
TrackableResult result = state.getTrackableResult(tIdx);
Trackable trackable = result.getTrackable();
Matrix44F modelViewMatrix_Vuforia =Tool.convertPose2GLMatrix(result.getPose());
Matrix44F inversMV = SampleMath.Matrix44FInverse(modelViewMatrix_Vuforia);
Matrix44F invTranspMV = SampleMath.Matrix44FTranspose(inversMV);
float cam_x = invTranspMV.getData()[12];
float cam_y = invTranspMV.getData()[13];
float cam_z = invTranspMV.getData()[14];
Log.v("QCV", "Posx=" + cam_x + ",posy=" + cam_y + ",posz=" + cam_z);
float distance = new Float(Math.sqrt(cam_x * cam_x + cam_y * cam_y + cam_z * cam_z));
Log.v("distance ",""+distance);
Can you help me please? Is there another function or some code to calculate the real distance in cm?
Thanks
As far as I know, you should take cam_x, cam_y and cam_z from the original target matrix, meaning modelViewMatrix_Vuforia in your code. The rest of your calculation is fine. You can look at the official Vuforia article: Determining target distance for native Android and iOS.
As for your second question - Vuforia has no way of knowing the real size. However, you can find it yourself by the size of the target you've defined in the dataset and the size of the printed target. This is also explained in the link above.
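In other words, a minimal sketch based on the code in the question (same classes, just reading the translation directly from the pose matrix instead of the inverted and transposed one):

Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(result.getPose());
float[] mv = modelViewMatrix_Vuforia.getData();

// Translation part of the column-major model-view matrix.
float tx = mv[12];
float ty = mv[13];
float tz = mv[14];

// The distance comes out in the same units as the target size defined in the
// dataset; if that size matches the printed target and was entered in cm,
// this value is in cm.
float distance = (float) Math.sqrt(tx * tx + ty * ty + tz * tz);
Log.v("QCV", "distance = " + distance);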
I rotated my Android device around the x axis (from -180 degrees to 180 degrees); see the image below.
I assumed that only the rotation vector's x value would change; y and z might have some noise, but there should not be much variation in their values.
However, this is what I received. Kindly see:
https://docs.google.com/spreadsheets/d/1ZLoSKI8XNjI1v4exaXxsuMtzP0qWTP5Uu4C3YTwnsKo/edit?usp=sharing
I suspect my sensor has some problem.
Any idea? Thank you very much.
Jimmy
Your sensor is fine. The rotation vector entries cannot simply be related to the rotation angle around a particular axis. The SensorEvent structure consists of timestamp, sensor, accuracy and values. Depending on the sensor, the values float[] varies in size from 1 to 5. The rotation vector's values are based on a unit quaternion, together forming a vector that represents the orientation of the world frame relative to your smartphone-fixed frame.
They are unitless and positive counter-clockwise.
The orientation of the phone is represented by the rotation necessary to align the East-North-Up coordinates with the phone's coordinates. That is, applying the rotation to the world frame (X,Y,Z) would align them with the phone coordinates (x,y,z).
If the vector were written as a rotation matrix, one could express it as v_body = R_rot_vec * v_world, pushing the world vector into a smartphone-fixed description.
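As a side note (not part of the derivation that follows), the SDK can also do this conversion for you. A minimal sketch, assuming event comes from a TYPE_ROTATION_VECTOR sensor inside onSensorChanged():

float[] rotationMatrix = new float[9];
float[] orientation = new float[3];

// Build the rotation matrix from the rotation vector, then derive angles from it.
SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
SensorManager.getOrientation(rotationMatrix, orientation);

float azimuth = orientation[0]; // radians
float pitch   = orientation[1]; // radians
float roll    = orientation[2]; // radians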
Furthermore about the vector:
The three elements of the rotation vector are equal to the last three components of a unit quaternion <cos(θ/2), x*sin(θ/2), y*sin(θ/2), z*sin(θ/2)>.
Q: So what do you do with it? Depending on your Euler-angle convention (24 possible sequences, 12 valid ones) you can calculate the corresponding angles u := [ψ,θ,φ], e.g. by applying the 123 (XYZ) sequence; the explicit formulas are in the quat2eulerxyz() helper further down. If you already have the rotation matrix entries, you can get the Euler angles from them directly via the 321 (ZYX) sequence.
Here q1-q3 are always values[0-2] (don't get confused by the u_ijk notation, as the Diebel reference uses different conventions compared to the standard). But wait, your linked table only has 3 values, which is similar to what I get. This is one SensorEvent of mine; the last three numbers are printed from values[]:
timestamp sensortype accuracy values[0] values[1] values[2]
23191581386897 11 -75 -0.0036907701 -0.014922042 0.9932963
4 quaternion components - 3 reported values = 1 unknown. The first component q0 is redundant information (the docs say it should be available under values[3], depending on your API level), so we can use the norm (= length) to calculate q0 from the other three: set ||q|| = 1 and solve for q0. Now all of q0-q3 are known.
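For example, a small sketch of that step using the sample row above (on newer API levels SensorManager.getQuaternionFromVector() can also recover the full quaternion for you):

// Recover q0 from the three reported components, using ||q|| = 1.
float q1 = -0.0036907701f, q2 = -0.014922042f, q3 = 0.9932963f; // values[0..2] above
float q0 = (float) Math.sqrt(Math.max(0f, 1f - (q1 * q1 + q2 * q2 + q3 * q3)));
// q0 is about 0.115 here; its sign is ambiguous, but q and -q describe the same rotation.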
Furthermore, my Android 4.4.2 device does not provide the estimated heading accuracy (in radians) inside values[4], so I evaluate event.accuracy instead:
for (SensorEvent e : currentEvent) {
    if (e != null) {
        String toMsg = "";
        for (int i = 0; i < e.values.length; i++) {
            toMsg += " " + String.valueOf(e.values[i]);
        }
        iBinder.msgString(String.valueOf(e.timestamp) + " " + String.valueOf(e.sensor.getType())
                + " " + String.valueOf(e.accuracy) + toMsg, 0);
    }
}
Put those equations into code and you will get things sorted.
Here is a short conversion helper that converts quaternions to Euler angles using either the XYZ or the ZYX sequence. It can be run from the shell: github (BSD-licensed).
The relevant part for XYZ
/* quaternion to euler in XYZ (seq: 123) */
double* quat2eulerxyz(double* q) {
    /* euler angles */
    double psi   = atan2( -2.*(q[2]*q[3] - q[0]*q[1]) , q[0]*q[0] - q[1]*q[1] - q[2]*q[2] + q[3]*q[3] );
    double theta = asin(   2.*(q[1]*q[3] + q[0]*q[2]) );
    double phi   = atan2(  2.*(-q[1]*q[2] + q[0]*q[3]) , q[0]*q[0] + q[1]*q[1] - q[2]*q[2] - q[3]*q[3] );
    /* save variables by simply pushing them back into the array and returning it */
    q[1] = psi;
    q[2] = theta;
    q[3] = phi;
    return q;
}
Here are some notes on going between quaternions and Euler angles:
Q: What does the sequence ijk stand for? Take two coordinate frames A and B superposed on each other (all axes aligned) and start rotating frame B about the i-axis by angle psi, then about the j-axis by angle theta, and last about the k-axis by phi. It could also be α, β, γ for i, j, k. I don't use the numbers as they are confusing (Diebel vs. other papers).
R(psi, theta, phi) = R_z(phi) R_y(theta) R_x(psi)
The trick is that the elementary rotations are applied from right to left, even though we read the sequence from left to right.
Those are the three elementary rotations you go through to get from A to B: v_B = R(psi, theta, phi) v_A
Q: So how do the Euler angles/quaternions get from [0°, 0°, 0°] to e.g. [0°, 90°, 0°]? First align both frames, i.e. the known device frame B with the "invisible" world frame A. You are done superposing when the angles all reach [0°, 0°, 0°]. Just figure out where east and north are and which way is up where you are sitting right now, and point the device's frame B in those directions. Now when you rotate around the y-axis counter-clockwise by 90°, you will get the desired [0°, 90°, 0°] when converting the quaternion.
Julian
Kinematics source: Diebel (Stanford), with solid info on the mechanics background (careful: Diebel denotes XYZ as u_321 (1,2,3) while ZYX is u_123 (3,2,1)); it is a good starting point.
Recently I have done some research on using both the accelerometer and the gyroscope to track a smartphone without the help of GPS (see this post):
Indoor Positioning System based on Gyroscope and Accelerometer
For that purpose I will need the orientation (angles: pitch, roll, etc.), so here is what I have done so far:
public void onSensorChanged(SensorEvent arg0) {
    if (arg0.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        accel[0] = arg0.values[0];
        accel[1] = arg0.values[1];
        accel[2] = arg0.values[2];
        pitch = Math.toDegrees(Math.atan2(accel[1], Math.sqrt(Math.pow(accel[2], 2) + Math.pow(accel[0], 2))));
        tv2.setText("Pitch: " + pitch + "\n" + "Roll: " + roll);
    } else if (arg0.sensor.getType() == Sensor.TYPE_GYROSCOPE) {
        if (timestamp != 0) {
            final float dT = (arg0.timestamp - timestamp) * NS2S;
            angle[0] += arg0.values[0] * dT;
            filtered_angle[0] = (0.98f) * (filtered_angle[0] + arg0.values[0] * dT) + (0.02f) * (pitch);
        }
        timestamp = arg0.timestamp;
    }
}
Here I'm trying to get the angle (just for testing) from my accelerometer (pitch) and from integrating the gyroscope x-axis over time, filtering it with a complementary filter:
filtered_angle[0] = (0.98f) * (filtered_angle[0] + gyro_x * dT) + (0.02f)* (pitch)
with dT being more or less 0.009 seconds.
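In case it helps, here is a minimal standalone sketch of the same kind of filter (the class and parameter names are mine); the key points are that both inputs must use the same units (both degrees or both radians) and that the weight decides how quickly the accelerometer pulls the gyro estimate back from its drift:

public final class ComplementaryFilter {
    private final float alpha; // e.g. 0.98f: trust the gyro for fast changes
    private float angle;       // filtered angle estimate

    public ComplementaryFilter(float alpha) {
        this.alpha = alpha;
    }

    // gyroRate and accelAngle must use consistent units (e.g. deg/s and deg);
    // dT is the time step in seconds.
    public float update(float gyroRate, float dT, float accelAngle) {
        angle = alpha * (angle + gyroRate * dT) + (1f - alpha) * accelAngle;
        return angle;
    }
}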
But I don't know why my angles are not really accurate... When the device is positioned flat on the table (screen facing up):
Pitch (angle from accel) = 1.5 (average)
Integrated gyro = 0 and growing (normal, it's drifting)
filtered gyro angle = 1.2
and when I lift the phone by 90° (so the screen faces the wall in front of me):
Pitch (angle from accel) = 86 (maximum)
Integrated gyro = it has drifted away, which is normal
filtered gyro angle = 83 (maximum)
So the angles never reach 90? Even if I try to lift the phone a bit more...
Why doesn't it go all the way to 90°? Are my calculations wrong, or is the quality of the sensors just poor?
Another thing I'm wondering about: with Android I don't "read out" the sensor values, I'm notified when they change. The problem is that, as you can see in the code, the accelerometer and the gyroscope share the same callback, so when I compute the filtered angle I use a pitch from the accelerometer measured 0.009 seconds earlier, no? Could that be the source of my problem?
Thank you !
I can only repeat myself.
You get position by integrating the linear acceleration twice but the error is horrible. It is useless in practice. In other words, you are trying to solve the impossible.
What you actually can do is to track just the orientation.
Roll, pitch and yaw are evil, do not use them. Check in the video I already recommended, at 38:25.
Here is an excellent tutorial on how to track orientation with gyros and accelerometers.
Similar questions that you might find helpful:
track small movements of iphone with no GPS
What is the real world accuracy of phone accelerometers when used for positioning?
how to calculate phone's movement in the vertical direction from rest?
iOS: Movement Precision in 3D Space
How to use Accelerometer to measure distance for Android Application Development
Distance moved by Accelerometer
How can I find distance traveled with a gyroscope and accelerometer?
I wrote a tutorial on the use of the complementary filter for orientation tracking with a gyroscope and accelerometer: http://www.pieter-jan.com/node/11 - maybe it can help you.
I tested your code and found that the scale factor is probably not consistent.
Converting the pitch to radians (so it is on the same scale as the integrated gyro) gives a better result.
In my test, the filtered result is ~90 degrees.
pitch = (float) Math.toDegrees(Math.atan2(accel[1], Math.sqrt(accel[2] * accel[2] + accel[0] * accel[0])));
pitch = (float) (pitch * Math.PI / 180.0); // back to radians, same scale as the integrated gyro
filtered_angle = weight * (filtered_angle + event.values[0] * dT) + (1.0f - weight) * (pitch);
I tried it, and this will give you an angle of 90:
filtered_angle = (filtered_angle / 83) * 90;
I'm trying to get the Euler angles of a Face detected by FaceDetector.
Here is what I use to output to Logcat:
Log.v("debug", " X: " + face.pose(Face.EULER_X) + " Y: " + face.pose(Face.EULER_Y) + " Z: " + face.pose(Face.EULER_Z) );
But it always returns 0.0 for all three, no matter what angle the face is at. Any ideas why?
Yeah the FaceDetector from API 1 never returns a pose angle. You can look at the source code to verify.
The newer FaceDetectionListener from API 14 will return a pose angle, but it's only available on a limited number of devices right now. Not even all devices running API 14 can use it. You have to call getMaxNumDetectedFaces() to see if your device supports that API.
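For example, a minimal sketch of that check (assuming you already have an open Camera instance and have called startPreview()):

Camera.Parameters params = camera.getParameters();
if (params.getMaxNumDetectedFaces() > 0) {
    camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
        @Override
        public void onFaceDetection(Camera.Face[] faces, Camera camera) {
            for (Camera.Face face : faces) {
                // face.rect is in the camera coordinate system (-1000..1000),
                // face.score ranges from 1 to 100.
                Log.d("FaceDetection", "score=" + face.score + " rect=" + face.rect);
            }
        }
    });
    camera.startFaceDetection(); // must be called after startPreview()
}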
You can alternately try using OpenCV. A couple options for that are http://code.opencv.org/projects/opencv/wiki/OpenCV4Android and http://code.google.com/p/javacv/. In my experience they aren't worth the hassle unless you really, really need the pose angle.
There are a few similar questions here. Check out the first answer from this link:
Android Facedetector pose values are always 0
And see below for someone that says they solved the problem:
Android Face Detection
Set the detector's mode to ACCURATE_MODE via setMode().
Here is an example that worked for me in Kotlin:
val detector = FaceDetector.Builder(context)
    .setClassificationType(FaceDetector.ALL_CLASSIFICATIONS)
    .setMode(FaceDetector.ACCURATE_MODE)
    .setTrackingEnabled(true)
    .build()
In the latest version, developers need to set the performance mode to PERFORMANCE_MODE_ACCURATE, not PERFORMANCE_MODE_FAST:
setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
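For context, a minimal sketch of how that option is typically applied when building the detector (this assumes the ML Kit face detection API, which is where FaceDetectorOptions comes from; shown in Java):

FaceDetectorOptions options = new FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_ACCURATE)
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)
        .build();
FaceDetector detector = FaceDetection.getClient(options);
// Detected Face objects then expose the head rotation via
// getHeadEulerAngleX(), getHeadEulerAngleY() and getHeadEulerAngleZ().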