I am experiencing a strange issue with the FaceDetector from the Android SDK. The code below works and detects faces correctly when the picture comes from the back camera, but no matter what I try, when the picture comes from the front camera, no face is detected.
FaceDetector.Face[] faces = new FaceDetector.Face[1];
FaceDetector faceDetector = new FaceDetector(width, height, 1);
int facesFound = faceDetector.findFaces(picture, faces);
Log.d(TAG, "Face found: "+(facesFound == 1));
I am trying to find an explanation for this, but I haven't reached any conclusion. I have even tried stripping the picture's metadata, in case the FaceDetector was somehow refusing to detect faces in pictures coming from the front camera.
There are various factors that could be the cause of the problem. Here are some I've run into (a sketch addressing the first two follows the list):
The bitmap is not upright
The bitmap is not in RGB_565 pixel format
The image is poor quality (too dark, too noisy, poor resolution, obscured by a finger, etc.)
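For the first two, a minimal sketch of how the bitmap could be prepared before detection (source and rotationDegrees are placeholders for your own bitmap and orientation; front-camera images often need a different rotation than back-camera ones):
// Rotate the bitmap upright, then convert it to RGB_565, which is the only
// configuration android.media.FaceDetector accepts (and the width must be even).
Matrix matrix = new Matrix();
matrix.postRotate(rotationDegrees);
Bitmap upright = Bitmap.createBitmap(source, 0, 0,
        source.getWidth(), source.getHeight(), matrix, true);
Bitmap picture = upright.copy(Bitmap.Config.RGB_565, false);

FaceDetector.Face[] faces = new FaceDetector.Face[1];
FaceDetector faceDetector = new FaceDetector(picture.getWidth(), picture.getHeight(), 1);
int facesFound = faceDetector.findFaces(picture, faces);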
Related
I'm building a camera that needs to detect the user's face/eyes and measure distance through the eyes.
I found this project: https://github.com/IvanLudvig/Screen-to-face-distance. It works great (really, I tested it on at least 10 people and all measurements were very close or spot on), but it doesn't use a preview of the front camera.
My app already had a selfie-camera feature I wrote myself, but using the old Camera API, and I couldn't find a way to make the camera preview and the face-distance measurement work together there; I always got an error that the camera was already in use.
I decided to move to camera2 so I could use more than one camera stream, and I'm still learning how to run two streams at the same time for different purposes. Documentation on this seems scarce, so I'm fairly lost.
Now, am I on the right path with this?
Also, in his project, Ivan uses this:
Camera camera = frontCam();
Camera.Parameters campar = camera.getParameters();
F = campar.getFocalLength();
angleX = campar.getHorizontalViewAngle();
angleY = campar.getVerticalViewAngle();
sensorX = (float) (Math.tan(Math.toRadians(angleX / 2)) * 2 * F);
sensorY = (float) (Math.tan(Math.toRadians(angleY / 2)) * 2 * F);
This is the old Camera API; how can I do the same with the new one?
Judging from this answer: Android camera2 API get focus distance in AF mode
Do I need to get the minimum and maximum focal lengths?
For the horizontal and vertical angles I found this one: What is the Android Camera2 API equivalent of Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle()?
The rest I believe is done by Google's Cloud Vision API
EDIT:
I got it to work with camera2, using GMS's own sample classes, CameraSourcePreview and GraphicOverlay, to draw whatever I want on top of the preview and detect faces.
Now to get the camera characteristics:
CameraManager manager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE);
CameraCharacteristics character = null;
try {
    // hard-coded camera id "1" (typically the front camera)
    character = manager.getCameraCharacteristics(String.valueOf(1));
} catch (CameraAccessException e) {
    Log.e(TAG, "CamAcc1Error.", e);
}
// SENSOR_INFO_PHYSICAL_SIZE reports the physical sensor dimensions in millimetres
angleX = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getWidth();
angleY = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getHeight();
sensorX = (float) (Math.tan(Math.toRadians(angleX / 2)) * 2 * F);
sensorY = (float) (Math.tan(Math.toRadians(angleY / 2)) * 2 * F);
This pretty much gives me mm accuracy to face distance, which is exactly what I needed.
Now what is left is getting a picture from this preview with GMS's CameraSourcePreview, so that I can use it later.
Final Edit here:
I solved the picture issue but forgot to edit here. All the examples that use camera2 to take a picture are really complicated (rightly so; it's a more capable API than the old Camera and has a lot of options), but it can be simplified to what I did here:
mCameraSource.takePicture(null, bytes -> {
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    if (bitmap != null) {
        // rotate 180 degrees and mirror vertically before saving
        Matrix matrix = new Matrix();
        matrix.postRotate(180);
        matrix.postScale(1, -1);
        Bitmap rotateBmp = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(),
                bitmap.getHeight(), matrix, false);
        saveBmp2SD(STORAGE_PATH, rotateBmp);
        rotateBmp.recycle();
        bitmap.recycle();
    }
});
That's all I needed to take a picture and save it to a location I specified. Don't mind the recycling here; it's not quite right yet and I'm still working on it.
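saveBmp2SD isn't shown above; a minimal sketch of what it might look like (STORAGE_PATH is assumed to be a writable directory path):
private void saveBmp2SD(String dir, Bitmap bmp) {
    // hypothetical helper: compress the bitmap to a JPEG file in the given directory
    File file = new File(dir, System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(file)) {
        bmp.compress(Bitmap.CompressFormat.JPEG, 90, out);
    } catch (IOException e) {
        Log.e(TAG, "Failed to save bitmap", e);
    }
}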
It looks like that bit of math is calculating the physical dimensions of the image sensor via the angle-of-view equation: sensorWidth = 2 * f * tan(horizontalViewAngle / 2), and likewise for the height.
The camera2 API has the sensor dimensions as part of the camera characteristics directly: SENSOR_INFO_PHYSICAL_SIZE.
In fact, if you want to get the field of view in camera2, you have to use the same equation in the other direction, since FOVs are not part of camera characteristics.
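For example, a minimal sketch of that inverse calculation (cameraId and picking the first reported focal length are assumptions, and CameraAccessException handling is omitted):
CameraManager manager = (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);

// physical sensor size and focal length(s), both reported in millimetres
SizeF sensorSize = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
float focalLength = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)[0];

// field of view = 2 * atan(sensorSize / (2 * focalLength))
double horizontalFov = Math.toDegrees(2 * Math.atan(sensorSize.getWidth() / (2 * focalLength)));
double verticalFov = Math.toDegrees(2 * Math.atan(sensorSize.getHeight() / (2 * focalLength)));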
Beyond that, it looks like the example you linked just uses the old camera API to fetch that FOV information, and then closes the camera and uses the Vision API to actually drive the camera. So you'd have to look at the vision API docs to see how you can give it camera input instead of having it drive everything. Or you could use the camera API's built-in face detector, which on many devices gives you eye locations as well.
https://developers.google.com/android/reference/com/google/android/gms/vision/face/FaceDetector.Builder
I'm using the above Google Play services FaceDetector in my app for face detection. I made sure my phone has the minimum Google Play services version (8.3 on my phone), but I still can't get face detection to work. I imported the Google Play services library into my Eclipse project. Here's the code:
@Override
protected void onPreExecute()
{
    detector = new FaceDetector.Builder(MainContext)
            .setTrackingEnabled(false)
            //.setProminentFaceOnly(true)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS) //required
            .build();
}

private void detectTheFace(Bitmap converted)
{
    Frame frame = new Frame.Builder().setBitmap(converted).build();
    faces = detector.detect(frame);
}
I don't know whether the bitmap used for detection has to be in RGB_565 configuration, but I converted it anyway. I tried with and without changing the RGB configuration and got the same result: the faces SparseArray is always of size 0, meaning it never detects a face. For context, I'm running the face detection in an AsyncTask because I want it to run in the background.
I had the same problem, i.e. it was working fine on a Nexus but not on a Galaxy. I resolved it by rotating the bitmap by 90 degrees whenever detector.detect() returns zero faces. The maximum number of retries is 3, because a fourth rotation would give you back the original bitmap.
Bitmap rotateBitmap(Bitmap bitmapToRotate) {
Matrix matrix = new Matrix();
matrix.postRotate(90);
Bitmap rotatedBitmap = Bitmap.createBitmap(bitmapToRotate, 0, 0,
bitmapToRotate.getWidth(), bitmapToRotate.getHeight(), matrix,
true);
return rotatedBitmap;
}
Check whether the result returned by detector.detect() has zero size; if so, the code below should run:
if (faces.size() == 0) {
    if (rotationCounter < 3) {
        rotationCounter++;
        bitmap = rotateBitmap(bitmap);
        // call detector.detect() again here with the rotated bitmap
    }
}
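Put together, a minimal sketch of the retry loop might look like this (detector, rotateBitmap() and the starting bitmap are the ones from above):
SparseArray<Face> faces = detector.detect(new Frame.Builder().setBitmap(bitmap).build());
int rotationCounter = 0;
// retry up to three times, rotating the bitmap by 90 degrees each time
while (faces.size() == 0 && rotationCounter < 3) {
    rotationCounter++;
    bitmap = rotateBitmap(bitmap);
    faces = detector.detect(new Frame.Builder().setBitmap(bitmap).build());
}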
You can check whether the bitmap needs rotating without writing the code above: from your original code, try capturing the image with the phone in landscape mode, or simply rotate the image by 90 degrees before passing it to the detector.
To solve this problem cleanly, use the orientation stored in the photo's EXIF data.
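A minimal sketch of that approach, assuming the photo is available at photoPath and has already been decoded into bitmap (IOException handling omitted):
// read the EXIF orientation tag and rotate the bitmap to match it
ExifInterface exif = new ExifInterface(photoPath);
int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION,
        ExifInterface.ORIENTATION_NORMAL);

int degrees = 0;
switch (orientation) {
    case ExifInterface.ORIENTATION_ROTATE_90: degrees = 90; break;
    case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
    case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
}

if (degrees != 0) {
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    bitmap = Bitmap.createBitmap(bitmap, 0, 0,
            bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}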
I want to use the Vision API on Android to detect a face and the landmarks on it.
I followed the Vision API sample:
https://github.com/googlesamples/android-vision/tree/master/visionSamples/photo-demo/
My issues are:
1) I cannot understand the details of this object while debugging:
FaceDetector detector = new FaceDetector.Builder(context)
.setTrackingEnabled(false)
.setLandmarkType(FaceDetector.ALL_LANDMARKS)
.setProminentFaceOnly(true)
.build();
[Screenshot showing the debugger's view of 'detector']
I cannot understand fields such as 'zzbbc', 'zzbbd', etc.
2)
Frame frame = new Frame.Builder().setBitmap(bitmap).build();
SparseArray<Face> faces = detector.detect(frame);
Here the size of faces is returned as zero.
No exception is thrown; I can see the image, but the rectangle and dots are not drawn.
Can anyone please help me out with this issue?
zzbbc, zzbbd, etc. are internal details of the implementation that aren't meant to be inspected. You don't need to know what these are to use the API.
In this case, no faces were detected. Note that the "prominentFaceOnly" setting means the detector only looks for a single large face (i.e., one filling more than about a third of the image width). If the faces in your photo are smaller than this, they will not be detected.
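For smaller faces, a minimal sketch of a less restrictive configuration (whether this helps with your particular photo is an assumption); isOperational() is also worth checking, since the detector silently returns no faces until its native library has finished downloading:
FaceDetector detector = new FaceDetector.Builder(context)
        .setTrackingEnabled(false)
        .setLandmarkType(FaceDetector.ALL_LANDMARKS)
        // no setProminentFaceOnly(true), so smaller faces are considered too
        .build();

if (!detector.isOperational()) {
    // the native face-detection library is not available yet on this device
    Log.w(TAG, "Face detector dependencies are not yet available");
}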
I am implementing an app that uses real-time image processing on live images from the camera. It was working, with limitations, using the now deprecated android.hardware.Camera; for improved flexibility & performance I'd like to use the new android.hardware.camera2 API. I'm having trouble getting the raw image data for processing however. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware).
I got the overall framework (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples) working, and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(Class), the recommended surface for obtaining image data directly would be ImageReader (correct?).
So I've set up my capture requests as:
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(mSurfaceTexture);
...
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:
Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
...but while (as said) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to
mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();
...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried all of them (plugged in as 'format' above); all support the chosen resolution according to getOutputSizes(format), but none gives the desired result:
NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
YV12 and YUV_420_888: this is the weirdest result -- I can see the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled considerably horizontally, but not vertically).
What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?
Any hints appreciated.
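One general note on reading the Y plane, independent of the scaling issue: the plane's row stride can be larger than the image width, so copying row by row keeps padding bytes out of the grayscale buffer. A minimal sketch, assuming img is the Image from acquireLatestImage():
Image.Plane yPlane = img.getPlanes()[0];
ByteBuffer yBuffer = yPlane.getBuffer();
int rowStride = yPlane.getRowStride();
int width = img.getWidth();
int height = img.getHeight();

byte[] gray = new byte[width * height];
for (int row = 0; row < height; row++) {
    // jump to the start of each row and copy only 'width' bytes of it
    yBuffer.position(row * rowStride);
    yBuffer.get(gray, row * width, width);
}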
I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);).
At least when requesting YV12/YUV_420_888, the streamer appears to not crop the image in any way, but instead scale it non-uniformly, to reach the requested frame size. The images have the correct proportions when requesting a 16:9 format (of which there are only two higher-res ones, unfortunately). Seems a bit odd to me -- it doesn't appear to happen when requesting JPEG, or with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.
I feel that it's not a really satisfactory solution, because it means that you can't rely on the list of output formats, but instead have to find the sensor size first, find formats with the same aspect ratio, then downsample the image yourself (as needed)...
I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.
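For what it's worth, a minimal sketch of picking an output size whose aspect ratio matches the active sensor array (the 0.01 tolerance is an arbitrary assumption):
StreamConfigurationMap map =
        mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Rect active = mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
double sensorRatio = (double) active.width() / active.height();

Size chosen = null;
for (Size size : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    double ratio = (double) size.getWidth() / size.getHeight();
    // keep the smallest size whose aspect ratio matches the sensor's
    if (Math.abs(ratio - sensorRatio) < 0.01
            && (chosen == null || size.getWidth() < chosen.getWidth())) {
        chosen = size;
    }
}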
I had the same problem and found a solution.
The first part of the problem is setting the size of the surface buffer:
// We configure the size of default buffer to be the size of camera preview we want.
//texture.setDefaultBufferSize(width, height);
This is where the image gets skewed, not in the camera. You should comment it out, and then scale the image up when displaying it.
int[] rgba = new int[width * height];
//getImage(rgba);
nativeLoader.convertImage(width, height, data, rgba);

Bitmap bmp = mBitmap;
bmp.setPixels(rgba, 0, width, 0, 0, width, height);

Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
    //canvas.drawBitmap(bmp, 0, 0, null); //configureTransform(width, height), null);
    //canvas.drawBitmap(bmp, configureTransform(width, height), null);
    canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240), new Rect(0, 0, 640 * 2, 480 * 2), null);
    //canvas.drawBitmap(bmp, (canvas.getWidth() - 320) / 2, (canvas.getHeight() - 240) / 2, null);
    mTextureView.unlockCanvasAndPost(canvas);
}
image.close();
You can play around with the values to fine tune the solution for your problem.
Hello, I am new to OpenCV and Android. I'm running the tutorial examples, but the camera orientation is rotated. I have tried this code to fix it:
mRgba = inputFrame.rgba();
Mat mRgbaT = mRgba.t();
Core.flip(mRgba.t(), mRgbaT, 1);
Imgproc.resize(mRgbaT, mRgbaT, mRgba.size());
return mRgbaT;
This works fine for a minute or so: frames are captured and rendered correctly, but after about a minute I get a SIGSEGV (BpMemory) failure and the application crashes.
Is there any workaround to change the camera orientation without having to do a flip and image resize?
Many thanks for your help and advice.
If you are using android.hardware.Camera in a custom camera application, your issue is probably that the preview defaults to landscape orientation, and that is what sets the orientation of the frame when you convert it into a Mat.
Can you provide some more context for how you are passing the camera output into the module where you are using OpenCV?
After you open the camera call:
mCamera.setDisplayOrientation(90);
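Rather than hard-coding 90, the rotation can be computed from the camera's mounting orientation and the current display rotation. This follows the snippet in the Camera.setDisplayOrientation() documentation; cameraId and the Activity reference are assumptions:
// look up how the camera sensor is mounted relative to the device
Camera.CameraInfo info = new Camera.CameraInfo();
Camera.getCameraInfo(cameraId, info);
int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
int degrees = 0;
switch (rotation) {
    case Surface.ROTATION_0: degrees = 0; break;
    case Surface.ROTATION_90: degrees = 90; break;
    case Surface.ROTATION_180: degrees = 180; break;
    case Surface.ROTATION_270: degrees = 270; break;
}
int result;
if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
    result = (info.orientation + degrees) % 360;
    result = (360 - result) % 360; // compensate for the front camera's mirroring
} else {
    result = (info.orientation - degrees + 360) % 360;
}
mCamera.setDisplayOrientation(result);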
You can use something like this to change the orientation.
Mat draw = new Mat(mRgba.rows(), mRgba.cols(), mRgba.type());
Mat show = new Mat(mRgba.rows(), mRgba.cols(), mRgba.type());
Mat how = new Mat(mRgba.rows(), mRgba.cols(), mRgba.type());

Core.transpose(mRgba, how);                       // transpose the camera frame
Imgproc.resize(how, show, show.size(), 0, 0, 0);  // scale back to the original frame size
Core.flip(show, draw, 1);                         // flip around the y-axis to complete the rotation
return draw;
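On the crash from the question: mRgba.t() and the new Mat(...) calls above allocate native memory on every frame, and if nothing releases it, the app eventually runs out. A minimal sketch, assuming the standard CvCameraViewListener2 callbacks, that preallocates and reuses two Mats instead:
private Mat mTmpA;
private Mat mTmpB;

@Override
public void onCameraViewStarted(int width, int height) {
    // allocated once, reused for every frame
    mTmpA = new Mat();
    mTmpB = new Mat();
}

@Override
public void onCameraViewStopped() {
    mTmpA.release();
    mTmpB.release();
}

@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat rgba = inputFrame.rgba();
    Core.transpose(rgba, mTmpA);               // rotate by transposing...
    Core.flip(mTmpA, mTmpB, 1);                // ...and flipping around the y-axis
    Imgproc.resize(mTmpB, mTmpA, rgba.size()); // back to the original preview size
    return mTmpA;
}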