FaceDetector.findFaces accuracy params - android

I have been implementing FaceDetector.findFaces() in my app to detect faces in a selected Bitmap, and I see that it only works for fully clear, frontal faces.
Is there a way to apply some kind of 'accuracy' parameter so that a partially visible face is still accepted?
In my app I want to restrict profile picture selection to images showing a face, and the code is plain simple:
private boolean faceIsDetected(Bitmap image) {
    // FaceDetector requires an RGB_565 bitmap
    Bitmap image2 = image.copy(Bitmap.Config.RGB_565, false);
    FaceDetector faceDetector = new FaceDetector(image2.getWidth(), image2.getHeight(), 5);
    return faceDetector.findFaces(image2, new FaceDetector.Face[5]) > 0;
}
I have checked the generated bitmap, and it follows the requirements: it is RGB_565 and it has an even width.
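For reference, a minimal sketch of enforcing those two requirements before detection (cropping one pixel column is just one way to force an even width):
// Hedged sketch: force an even width and RGB_565 before detection.
private Bitmap prepareForDetection(Bitmap src) {
    int evenWidth = src.getWidth() & ~1; // drop one column if the width is odd
    Bitmap cropped = Bitmap.createBitmap(src, 0, 0, evenWidth, src.getHeight());
    return cropped.copy(Bitmap.Config.RGB_565, false);
}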

The Face object already includes a confidence() value. From the documentation:
Returns a confidence factor between 0 and 1. This indicates how certain what has been found is actually a face. A confidence factor above 0.3 is usually good enough.
You would need to define your own minimum value for this field and decide what is good enough for you.
There is also a CONFIDENCE_THRESHOLD constant that defines the minimum confidence factor of good face recognition (which is 0.4F).
In my experiments this value typically oscillates between 0.5 and 0.53, and I have never seen anything outside that range. You are probably better off using ML Kit.
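Put together, a minimal sketch of that check built into the method above (MIN_CONFIDENCE is a threshold you would tune yourself; the CONFIDENCE_THRESHOLD constant, 0.4f, is a natural starting point):
// Accept the bitmap only if at least one detected face clears a
// minimum confidence; the threshold here is illustrative.
private static final float MIN_CONFIDENCE = FaceDetector.Face.CONFIDENCE_THRESHOLD; // 0.4f

private boolean faceIsDetected(Bitmap image) {
    Bitmap image2 = image.copy(Bitmap.Config.RGB_565, false);
    FaceDetector faceDetector = new FaceDetector(image2.getWidth(), image2.getHeight(), 5);
    FaceDetector.Face[] faces = new FaceDetector.Face[5];
    int found = faceDetector.findFaces(image2, faces);
    for (int i = 0; i < found; i++) {
        if (faces[i].confidence() >= MIN_CONFIDENCE) {
            return true;
        }
    }
    return false;
}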

Related

Adjustable but consistent/raw feed from Camera X (no AE or AWB)

Note: I figured out why the image was too dark at max SENSOR_SENSITIVITY: it was just a matter of EXPOSURE_TIME being too short; I had to pump it up to 8 digits (the value is in nanoseconds). Now all I have to do is save the values for SENSOR_SENSITIVITY and EXPOSURE_TIME. So this question is answered, unless there's a better way?
Using CameraX (Camera2Interop) I want to turn off auto-exposure and white balance, because I want to manually adjust the brightness and ideally have it locked at that value until I adjust it again.
I'm not capturing an image, just working with the preview and image analysis.
When turning CONTROL_MODE or AE off, the preview & analysis are black (unless pointed directly at a light bulb). I can get an image by adjusting SENSOR_SENSITIVITY, but the image is still too dark at the maximum and has noise in it. When I modify EXPOSURE_TIME the image goes 100% black no matter what.
Using AE_LOCK + AE_EXPOSURE_COMPENSATION instead fixes the black preview and prevents auto-correction, but isn't consistent, because I don't have a way to reset the brightness to what was set previously. For example: if I open the main camera app and go back to mine, the brightness is now locked at a different value.
Camera2Interop.Extender extender = new Camera2Interop.Extender(builder);
//extender.setCaptureRequestOption(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_OFF);
//extender.setCaptureRequestOption(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, -1);
extender.setCaptureRequestOption(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_MACRO);
//extender.setCaptureRequestOption(CaptureRequest.SENSOR_EXPOSURE_TIME, 10);
//extender.setCaptureRequestOption(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_OFF);
//extender.setCaptureRequestOption(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
extender.setCaptureRequestOption(CaptureRequest.CONTROL_AWB_MODE, CaptureRequest.CONTROL_AWB_MODE_OFF);
//extender.setCaptureRequestOption(CaptureRequest.SENSOR_SENSITIVITY, 99999);
extender.setCaptureRequestOption(CaptureRequest.CONTROL_AWB_LOCK, true);
extender.setCaptureRequestOption(CaptureRequest.CONTROL_AE_LOCK, true);
//extender.setCaptureRequestOption(CaptureRequest.BLACK_LEVEL_LOCK, true);
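Per the note above, the black image with AE off is usually just EXPOSURE_TIME being too short. A minimal sketch of full manual exposure with the same extender (the values are illustrative and should be checked against the device's CameraCharacteristics ranges):
// Manual exposure: AE off, then an explicit time and sensitivity.
// SENSOR_EXPOSURE_TIME is in nanoseconds, hence the 8-digit value.
extender.setCaptureRequestOption(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF);
extender.setCaptureRequestOption(CaptureRequest.SENSOR_EXPOSURE_TIME, 33000000L); // ~1/30 s, illustrative
extender.setCaptureRequestOption(CaptureRequest.SENSOR_SENSITIVITY, 400);         // ISO 400, illustrative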
Another issue: when adjusting brightness via EXPOSURE_COMPENSATION, it seems to compound. For example, if it's set to -3, the image gets a little darker with every start of the application, or probably every time startCameraX is run. Perhaps this is an easy fix by moving it out of startCameraX, but I was attempting to start with, or reset to, its previously set value.
ExposureState exposureState = camera.getCameraInfo().getExposureState();
if (!exposureState.isExposureCompensationSupported()) return;
Range<Integer> range = exposureState.getExposureCompensationRange();
int index = exposureState.getExposureCompensationIndex();
if (range.contains(-3) && index != -3) { // check the value actually being set
    camera.getCameraControl().setExposureCompensationIndex(-3);
}
Also, all the warnings that come with the Camera2 interop extender make it seem odd that this is the official solution.
Other failed attempts: canceling focus and metering, and setting the metering point to 0x0 pixels.
camera.getCameraControl().cancelFocusAndMetering(); //.setExposureCompensationIndex(12);
MeteringPointFactory meteringPointFactory = previewView.getMeteringPointFactory();
MeteringPoint meteringPoint = meteringPointFactory.createPoint(0, 0, 0);
FocusMeteringAction action = new FocusMeteringAction.Builder(meteringPoint).build();
        //.setAutoCancelDuration(1, TimeUnit.MICROSECONDS).build();
cameraControl.startFocusAndMetering(action);

Android Camera2 API - Set AE-regions not working

In my Camera2 API project for Android, I want to set a region for my exposure calculation. Unfortunately it doesn't work; on the other hand, the focus region works without any problems.
Device: Samsung S7 / Nexus 5
1.) Initial values for CONTROL_AF_MODE & CONTROL_AE_MODE
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_AUTO);
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON);
2.) Create the MeteringRectangle List
meteringFocusRectangleList = new MeteringRectangle[]{new MeteringRectangle(0,0,500,500,1000)};
3.) Check if it is supported by the device and set the CONTROL_AE_REGIONS (same for CONTROL_AF_REGIONS)
if (camera2SupportHandler.cameraCharacteristics.get(CameraCharacteristics.CONTROL_MAX_REGIONS_AE) > 0) {
    camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_REGIONS, meteringFocusRectangleList);
}
4.) Tell the camera to start Exposure control
camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CameraMetadata.CONTROL_AE_PRECAPTURE_TRIGGER_START);
The CONTROL_AE_STATE is always in CONTROL_AE_STATE_SEARCHING, but doesn't use the configured regions...
After long testing & development I've found an answer.
The coordinate system - Camera1 API vs. Camera2 API
(The image referenced here showed the two systems: red = Camera1, green = Camera2, with a blue rect marking a possible focus/exposure area for Camera1.) Camera1 uses a fixed -1000..1000 coordinate system, whereas with the Camera2 API you must first query the maximum width and height of the sensor's active array. Please find more info here.
Initial values for CONTROL_AF_MODE & CONTROL_AE_MODE: see the question above.
Set the CONTROL_AE_REGIONS: see the question above.
Set the CONTROL_AE_PRECAPTURE_TRIGGER:
// This is how to tell the camera to start AE control
camera2SupportHandler.mCaptureSession.setRepeatingRequest(camera2SupportHandler.mPreviewRequestBuilder.build(), captureCallbackListener, camera2SupportHandler.mBackgroundHandler);
camera2SupportHandler.mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER, CaptureRequest.CONTROL_AE_PRECAPTURE_TRIGGER_START);
// Rebuild the request so the one-shot capture actually carries the trigger
camera2SupportHandler.mCaptureSession.capture(camera2SupportHandler.mPreviewRequestBuilder.build(), captureCallbackListener, camera2SupportHandler.mBackgroundHandler);
The captureCallbackListener gives feedback on the AE control (and of course on AF control too).
So this configuration works for most Android phones. Unfortunately it doesn't work for the Samsung S6/S7. For this reason I've tested their Camera SDK, which can be found here.
After deep investigation I've found the config field SCaptureRequest.METERING_MODE. By setting this to SCaptureRequest.METERING_MODE_MANUAL, the AE area works on Samsung phones as well.
I'll add an example to github asap.
Recently I had the same problem and finally found a solution that helped me.
All I needed to do was to step 1 pixel in from the edges of the active sensor rectangle. Instead of this rectangle in your example:
meteringRectangleList = new MeteringRectangle[]{new MeteringRectangle(0,0,500,500,1000)};
I would use this:
meteringRectangleList = new MeteringRectangle[]{new MeteringRectangle(1,1,500,500,1000)};
and it started working like magic on both the Samsung and the Nexus 5!
(Note that you should also step 1 pixel in from the right/bottom edges if you use maximum values there.)
It seems that many vendors have poorly implemented this part of the documentation:
If the metering region is outside the used android.scaler.cropRegion returned in capture result metadata, the camera device will ignore the sections outside the crop region and output only the intersection rectangle as the metering region in the result metadata. If the region is entirely outside the crop region, it will be ignored and not reported in the result metadata.
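For illustration, a minimal sketch of that 1-pixel inset applied to a full-sensor AE region (assuming characteristics holds the device's CameraCharacteristics; METERING_WEIGHT_MAX is 1000):
// Query the active array and inset the metering region by 1 pixel on
// every edge, per the workaround described above.
Rect active = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
MeteringRectangle region = new MeteringRectangle(
        1, 1,                          // 1px in from the top-left corner
        active.width() - 2,            // 1px in from the right edge
        active.height() - 2,           // 1px in from the bottom edge
        MeteringRectangle.METERING_WEIGHT_MAX);
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_REGIONS, new MeteringRectangle[]{ region });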

play-services-vision: How do I sync the Face Detection speed with the Camera Preview speed?

I have some code that allows me to detect faces in a live camera preview and draw a few GIFs over their landmarks using the play-services-vision library provided by Google.
It works well enough when the face is static, but when the face moves at a moderate speed, the face detector takes longer than one camera frame to detect the landmarks at the face's new position. I know it might have something to do with the bitmap draw speed, but I took steps to minimize the lag there.
(Basically, I get complaints that the GIFs' repositioning isn't 'smooth enough'.)
EDIT: I did try running the coordinate detection code...
List<Landmark> landmarksList = face.getLandmarks();
for (int i = 0; i < landmarksList.size(); i++) {
    Landmark current = landmarksList.get(i);
    //canvas.drawCircle(translateX(current.getPosition().x), translateY(current.getPosition().y), FACE_POSITION_RADIUS, mFacePositionPaint);
    //canvas.drawCircle(current.getPosition().x, current.getPosition().y, FACE_POSITION_RADIUS, mFacePositionPaint);
    if (current.getType() == Landmark.LEFT_EYE) {
        //Log.i("current_landmark", "l_eye");
        leftEyeX = translateX(current.getPosition().x);
        leftEyeY = translateY(current.getPosition().y);
    }
    if (current.getType() == Landmark.RIGHT_EYE) {
        //Log.i("current_landmark", "r_eye");
        rightEyeX = translateX(current.getPosition().x);
        rightEyeY = translateY(current.getPosition().y);
    }
    if (current.getType() == Landmark.NOSE_BASE) {
        //Log.i("current_landmark", "n_base");
        noseBaseY = translateY(current.getPosition().y);
        noseBaseX = translateX(current.getPosition().x);
    }
    if (current.getType() == Landmark.BOTTOM_MOUTH) {
        botMouthY = translateY(current.getPosition().y);
        botMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "b_mouth " + translateX(current.getPosition().x) + " " + translateY(current.getPosition().y));
    }
    if (current.getType() == Landmark.LEFT_MOUTH) {
        leftMouthY = translateY(current.getPosition().y);
        leftMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "l_mouth " + translateX(current.getPosition().x) + " " + translateY(current.getPosition().y));
    }
    if (current.getType() == Landmark.RIGHT_MOUTH) {
        rightMouthY = translateY(current.getPosition().y);
        rightMouthX = translateX(current.getPosition().x);
        //Log.i("current_landmark", "r_mouth " + translateX(current.getPosition().x) + " " + translateY(current.getPosition().y));
    }
}
eyeDistance = (float) Math.sqrt(Math.pow((double) Math.abs(rightEyeX - leftEyeX), 2) + Math.pow(Math.abs(rightEyeY - leftEyeY), 2));
eyeCenterX = (rightEyeX + leftEyeX) / 2;
eyeCenterY = (rightEyeY + leftEyeY) / 2;
noseToMouthDist = (float) Math.sqrt(Math.pow((double) Math.abs(leftMouthX - noseBaseX), 2) + Math.pow(Math.abs(leftMouthY - noseBaseY), 2));
...in a separate thread within the View draw method, but it just nets me a SIGSEGV error.
My questions:
Is syncing the Face Detector's processing speed with the Camera Preview framerate the right thing to do in this case, or is it the other way around, or is it some other way?
As the Face Detector finds the faces in a camera preview frame, should I drop the frames that the preview feeds before the FD finishes? If so, how can I do it?
Should I just use setClassificationMode(NO_CLASSIFICATIONS) and setTrackingEnabled(false) in a camera preview just to make the detection faster?
Does the play-services-vision library use OpenCV, and which is actually better?
EDIT 2:
I read a research paper claiming that face detection and the other functions available in OpenCV run faster on Android thanks to OpenCV's native processing. I was wondering whether I can leverage that to speed up face detection.
There is no way you can guarantee that face detection will be fast enough to show no visible delay, even when the head motion is moderate. Even if you manage to optimize the hell out of it on your development device, you will surely find another model, among the thousands out there, that is too slow.
Your code should be resilient to such situations. You can predict the face position a second ahead, assuming that it moves smoothly. If the users decide to twitch their head or device, no algorithm can help.
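As a toy illustration of such prediction, a constant-velocity extrapolation of the face center might look like this (all names and units are made up for the sketch):
// Hypothetical constant-velocity predictor for the face center.
// update() is fed each detector result; predict() extrapolates.
class FacePredictor {
    private float x, y, vx, vy; // last position (px) and velocity (px/ms)
    private long t;             // timestamp of the last update (ms)

    void update(float nx, float ny, long nt) {
        if (t != 0 && nt > t) { // need two samples before a velocity exists
            vx = (nx - x) / (nt - t);
            vy = (ny - y) / (nt - t);
        }
        x = nx; y = ny; t = nt;
    }

    float[] predict(long now) { // position extrapolated to 'now'
        return new float[] { x + vx * (now - t), y + vy * (now - t) };
    }
}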
If you use the deprecated Camera API, you should pre-allocate a buffer and use setPreviewCallbackWithBuffer(). This way you can guarantee that the frames arrive at your image processor one at a time. You should also not forget to open the Camera on a background thread, so that the onPreviewFrame() callback (http://developer.android.com/reference/android/hardware/Camera.PreviewCallback.html#onPreviewFrame(byte[], android.hardware.Camera)), where your heavy image processing takes place, does not block the UI thread.
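A minimal sketch of that buffered setup, assuming the default NV21 preview format (processFrame() is a hypothetical stand-in for your detection work):
// Pre-allocate one buffer sized for an NV21 preview frame, then
// recycle it after each frame so new frames are dropped until the
// processor is ready again.
Camera.Parameters params = camera.getParameters();
Camera.Size size = params.getPreviewSize();
int bufferSize = size.width * size.height * ImageFormat.getBitsPerPixel(ImageFormat.NV21) / 8;
camera.addCallbackBuffer(new byte[bufferSize]);
camera.setPreviewCallbackWithBuffer(new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        processFrame(data);          // hypothetical: your face detection
        cam.addCallbackBuffer(data); // hand the buffer back for the next frame
    }
});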
Yes, OpenCV face detection may be faster in some cases, but more importantly it is more robust than the Google face detector.
Yes, it's better to turn the classifier off if you don't care about smiles and open eyes. The performance gain may vary.
I believe that turning tracking off will only slow the Google face detector down, but you should make your own measurements and choose the best strategy.
The most significant gain can be achieved by turning setProminentFaceOnly() on, but again I cannot predict the actual effect of this setting on your device.
There's always going to be some lag, since any face detector takes some amount of time to run. By the time you draw the result, you will usually be drawing it over a future frame in which the face may have moved a bit.
Here are some suggestions for minimizing lag:
The CameraSource implementation provided by Google's vision library automatically handles dropping preview frames when needed so that it can keep up the best that it can. See the open source version of this code if you'd like to incorporate a similar approach into your app: https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/ui/camera/CameraSource.java#L1144
Using a lower camera preview resolution, such as 320x240, will make face detection faster.
If you're only tracking one face, using the setProminentFaceOnly() option will make face detection faster. Using this and LargestFaceFocusingProcessor as well will make this even faster.
To use LargestFaceFocusingProcessor, set it as the processor of the face detector. For example:
Tracker<Face> tracker = *your face tracker implementation*;
detector.setProcessor(
        new LargestFaceFocusingProcessor.Builder(detector, tracker).build());
Your tracker implementation will receive face updates for only the largest face that it initially finds. In addition, it will signal back to the detector that it only needs to track that face for as long as it is visible.
If you don't need to detect smaller faces, setting setMinFaceSize() larger will make face detection faster: it's quicker to detect only larger faces, since no time is spent looking for smaller ones.
You can turn off classification if you don't need eye-open or smile indication. However, this only gives a small speed advantage.
Using the tracking option will make this faster as well, but at some expense in accuracy. It uses a predictive algorithm for some intermediate frames, to avoid the expense of running full face detection on every frame.
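Combining those suggestions, a minimal sketch of the detector setup might look like this (the values are illustrative, not recommendations):
// play-services-vision detector tuned for speed, per the list above.
FaceDetector detector = new FaceDetector.Builder(context)
        .setProminentFaceOnly(true)   // only look for one large face
        .setMinFaceSize(0.2f)         // ignore faces smaller than 20% of the frame
        .setClassificationType(FaceDetector.NO_CLASSIFICATIONS) // skip smile/eye checks
        .setTrackingEnabled(true)     // cheap predictive updates between detections
        .build();
detector.setProcessor(
        new LargestFaceFocusingProcessor.Builder(detector, tracker).build());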

How to improve OpenCV face detection performance in android?

I am working on an Android project in which I am using OpenCV to detect faces in all the images in the gallery. The face extraction runs in a service that keeps working until all the images are processed. It stores the detected faces in internal storage and also shows them in a grid view if the activity is open.
My code is:
CascadeClassifier mJavaDetector = null;

public void getFaces() {
    for (int i = 0; i < size; i++) {
        imagepath = urls.get(i);
        // decodeFile() takes a path, not a File
        defaultBitmap = BitmapFactory.decodeFile(imagepath, bitmapFactoryOptions);
        mJavaDetector = new CascadeClassifier(FaceDetector.class.getResource("lbpcascade_frontalface").getPath());
        // Mat takes (rows, cols, type); bitmapToMat will size it anyway
        Mat image = new Mat(defaultBitmap.getHeight(), defaultBitmap.getWidth(), CvType.CV_8UC1);
        Utils.bitmapToMat(defaultBitmap, image);
        MatOfRect faceDetections = new MatOfRect();
        try {
            mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0, new Size(20, 20), new Size(image.width(), image.height()));
        } catch (Exception e) {
            e.printStackTrace();
        }
        if (faceDetections.toArray().length > 0) {
            // faces found for this image
        }
    }
}
Everything works, but it detects faces very slowly. When I debugged the code, I found that the line taking the time is:
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
I have checked multiple posts about this problem but didn't find a solution.
Please tell me what I should do to solve it.
Any help would be greatly appreciated. Thank you.
You should pay attention to the parameters of detectMultiScale():
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. This parameter is used to create a scale pyramid, which is necessary because the model has a fixed size during training; without the pyramid, the only detectable size would be that fixed one (which can also be read from the XML). The multi-scale representation makes face detection scale-invariant, i.e., large and small faces are detected with the same detection window.
scaleFactor depends on the size of your trained detector; in practice you want it as high as possible while still getting "good" results, so this should be determined empirically.
Your value of 1.1 can be a good one for this purpose: a relatively small resizing step (reducing the size by 10%) increases the chance that a size matching the model is found. If your trained detector has size 10x10, then you can detect faces of size 11x11, 12x12 and so on. But a factor of 1.1 requires roughly double the number of pyramid layers (and about 2x the computation time) compared to 1.2.
minNeighbors – Parameter specifying how many neighbours each candidate rectangle should have to retain it.
The cascade classifier works with a sliding-window approach: a window slides over the image, then the image is resized and searched again, until it cannot be resized further. In every iteration the true outputs of the cascade classifier are stored, but unfortunately it also detects many false positives. To eliminate the false positives and get a proper face rectangle out of the detections, a neighbourhood approach is applied. 3-6 is a good value. If the value is too high you can lose true positives too.
minSize – Given the sliding-window approach of minNeighbors, this is the smallest window the cascade can detect. Objects smaller than this are ignored. Usually cv::Size(20, 20) is enough for face detection.
maxSize – Maximum possible object size. Objects bigger than that are ignored.
Finally you can try different classifiers based on different features (such as Haar, LBP, HoG). Usually, LBP classifiers are a few times faster than Haar's, but also less accurate.
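As a concrete illustration of those knobs, a tuned call might look like this (the 1.2 / 4 / 60x60 values are illustrative starting points, not recommendations):
// Coarser pyramid, moderate neighbour filter, and a larger minimum
// size: all three trade a little recall for a lot of speed.
mJavaDetector.detectMultiScale(
        image, faceDetections,
        1.2,               // roughly half the pyramid layers of 1.1
        4,                 // within the 3-6 range discussed above
        0,                 // flags (ignored by newer cascades)
        new Size(60, 60),  // skip faces smaller than 60x60 px
        new Size(image.width(), image.height()));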
And it is also strongly recommended to look over these questions:
Recommended values for OpenCV detectMultiScale() parameters
OpenCV detectMultiScale() minNeighbors parameter
Instead of reading images as a Bitmap and then converting them to a Mat via Utils.bitmapToMat(defaultBitmap, image), you can directly use Mat image = Highgui.imread(imagepath);. You can check here for the imread() function.
Also, the line below takes too much time because the detector is looking for faces of at least Size(20, 20), which is pretty small. Check this video for a visualization of face detection using OpenCV.
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));

How to collide objects with high speed in Unity

I am trying to create a game for Android, and I have a problem with high-speed objects: they don't collide.
I have a Sphere with a Sphere Collider and a Bouncy material, and a Rigidbody with these parameters (Gravity = false, Interpolate = Interpolate, Collision Detection = Continuous Dynamic).
I also have 3 walls with Box Colliders and Bouncy materials.
This is my code for the Sphere:
function IncreaseBallVelocity() {
    rigidbody.velocity *= 1.05;
}

function Awake() {
    rigidbody.AddForce(4, 4, 0, ForceMode.Impulse);
    InvokeRepeating("IncreaseBallVelocity", 2, 2);
}
In Project Settings I set "Min Penetration For Penalty Force" = 0.001 and "Solver Iteration Count" = 50.
When I play, it works fine at the start (the ball bounces), but when the speed gets too high, the Sphere just passes through the wall.
Can anyone help me?
Thanks.
Edited
var hit : RaycastHit;
var mainGameScript : MainGame;
var particles_splash : GameObject;

function Awake() {
    rigidbody.AddForce(4, 4, 0, ForceMode.Impulse);
    InvokeRepeating("IncreaseBallVelocity", 2, 2);
}

function Update() {
    if (rigidbody.SweepTest(transform.forward, hit, 0.5))
        Debug.Log(hit.distance + "mts distance to obstacle");
    if (transform.position.y < -3) {
        mainGameScript.GameOver();
        //Application.LoadLevel("Menu");
    }
}

function IncreaseBallVelocity() {
    rigidbody.velocity *= 1.05;
}

function OnCollisionEnter(collision : Collision) {
    Instantiate(particles_splash, transform.position, transform.rotation);
}
EDIT: added more info
Fixed Timestep = 0.02, Maximum Allowed Timestep = 0.333.
There is no difference between running the game in the editor player and on Android.
No, it looks OK when I set 0.01.
My Paddle is a Box Collider without a Rigidbody; the walls are the same.
They are all in the same layer (when the speed is normal it all works); the values in PhysicsManager are the defaults (as in the image), except "Solver Iteration Count" = 50.
No. When I change the speed it passes through another wall.
I am using a standard cube, but I expand/shrink it to fit my screen and other objects; when I expand the wall more, it's OK, it bounces.
No. It's a simple project, a simple example from this video: http://www.youtube.com/watch?v=edfd1HJmKPY
I don't use gravity.
See:
Similar SO Question
A community script that uses ray tracing to help manage fast objects
UnityAnswers post leading to the script in (2)
You could also try changing the fixed time step for physics. The smaller this value, the more often Unity calculates the physics of the scene. But be warned: making this value too small, say <= 0.005, will likely result in an unstable game, especially on a portable device.
The script above is best for bullets or small objects. You can also manually force rigidbody collision tests:
public class example : MonoBehaviour {
    public RaycastHit hit;

    void Update() {
        if (rigidbody.SweepTest(transform.forward, out hit, 10))
            Debug.Log(hit.distance + "mts distance to obstacle");
    }
}
I think the main problem is the direct manipulation of the Rigidbody's velocity. I would try the following to solve the problem.
Redesign your code to ensure that IncreaseBallVelocity and every other manipulation of the Rigidbody is called within FixedUpdate. Check that there are no other manipulations of Transform.position.
Try to replace setting the velocity directly with AddForce or similar methods, so the physics engine has a better chance to account for all dependencies.
If more items (the main player character, ...) are involved in the physics calculation, ensure that their code runs in FixedUpdate too.
Another point I stumbled upon is meshes that are scaled a lot. A GameObject with scale <= 0.01 or >= 100 definitely has a negative impact on the physics calculation. According to the docs and this Unity forum entry from one of the gurus, you should avoid Transform.scale values != 1.
Still not happy? OK, then the next test is starting with high velocities but no acceleration. At this stage we want to know whether the high velocity itself or the acceleration is to blame. It would be interesting to know the velocity values at which the physics engine starts to fail; please post them so we can compare.
EDIT: Some more things to investigate
6.7 m/sec does not sound like much, so I guess there is a special reason, or a combination of reasons, why things go wrong.
Is your Maximum Allowed Timestep high enough? For testing I suggest 5 to 10x the Fixed Timestep. Note that this might kill the frame rate, but that can be fixed later.
Is there any difference between running the game in the editor player and on Android?
Did you notice any drops in frame rate because of the 0.01 Fixed Timestep? That would indicate that the physics engine might be in trouble.
Could it be that there are static colliders (objects having a collider but no Rigidbody) that are moved around or otherwise manipulated? This would cause heavy recalculations within PhysX.
What about the layers: are all walls on the same layer, and are the involved layers configured appropriately in the collision detection matrix?
Does the no-bounce effect always happen at the same wall? If so, can you copy the 1st wall and put it in place of the second one, to see if there is something wrong with that specific wall?
If it is not too much effort, I would try to set up some standard cubes as walls, just to be sure that transform.scale is not to blame (I have had really bad experiences with this).
Do you manipulate gravity or TimeManager.timeScale from within a script?
BTW: are you using gravity? (Should be no problem just
