https://developers.google.com/android/reference/com/google/android/gms/vision/face/FaceDetector.Builder
I'm using the above Google Play services API in my app for face detection. I made sure my phone has the minimum Google Play services version (8.3 on my phone), but I still can't get face detection to work. I imported the library by adding the Google Play services library to my Eclipse project. Here's the code:
@Override
protected void onPreExecute() {
    detector = new FaceDetector.Builder(MainContext)
            .setTrackingEnabled(false)
            //.setProminentFaceOnly(true)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS) // required
            .build();
}

private void detectTheFace(Bitmap converted) {
    Frame frame = new Frame.Builder().setBitmap(converted).build();
    faces = detector.detect(frame);
}
I don't know whether the bitmap you pass to the detector has to be in RGB_565 configuration, but I converted it anyway. I tried with and without changing the RGB configuration and it yields the same result: the faces SparseArray has size 0, meaning it never detects a face. For context on the code above, I'm running the face detection in an AsyncTask because I want it to run in the background.
I had the same problem: it worked fine on a Nexus but not on a Galaxy. I resolved it by rotating the bitmap by 90 degrees whenever detector.detect() returns zero faces. The maximum number of retries after calling detector.detect() is 3, because a fourth rotation gives you back the same bitmap.
Bitmap rotateBitmap(Bitmap bitmapToRotate) {
    Matrix matrix = new Matrix();
    matrix.postRotate(90);
    return Bitmap.createBitmap(bitmapToRotate, 0, 0,
            bitmapToRotate.getWidth(), bitmapToRotate.getHeight(),
            matrix, true);
}
Check whether the SparseArray returned by detector.detect() has zero size; if so, the code below should run:
if (faces.size() == 0) {
    if (rotationCounter < 3) {
        rotationCounter++;
        bitmap = rotateBitmap(bitmapToRotate);
        // call detector.detect() again here
    }
}
You can check whether the bitmap needs rotating without writing the code above: from your original code, try capturing the image with the phone in landscape mode, or rotate the image by 90 degrees before detection.
To solve this problem, use the orientation specification from the EXIF of the photo.
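A sketch of the EXIF approach, assuming the photo is available at a file path (photoPath and bitmap are placeholder names; reading the EXIF data can throw IOException):

```java
// Read the EXIF orientation tag and rotate the bitmap upright
// before handing it to the detector.
ExifInterface exif = new ExifInterface(photoPath);
int orientation = exif.getAttributeInt(ExifInterface.TAG_ORIENTATION,
        ExifInterface.ORIENTATION_NORMAL);
int degrees = 0;
switch (orientation) {
    case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
    case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
    case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
}
if (degrees != 0) {
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    bitmap = Bitmap.createBitmap(bitmap, 0, 0,
            bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}
```

This avoids the blind rotate-and-retry loop: the EXIF tag tells you directly how the photo was captured.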
Related
Similar to what is laid out in the tutorial, I am initializing CameraX's ImageAnalysis use case with this code:
ImageAnalysis imageAnalysis =
new ImageAnalysis.Builder()
.setTargetResolution(new Size(1280, 720))
.setTargetRotation(Surface.ROTATION_90)
.setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
.build();
imageAnalysis.setAnalyzer(executor, new ImageAnalysis.Analyzer() {
    @Override
    public void analyze(@NonNull ImageProxy image) {
        int rotationDegrees = image.getImageInfo().getRotationDegrees();
        // insert your code here.
    }
});
cameraProvider.bindToLifecycle((LifecycleOwner) this, cameraSelector, imageAnalysis, preview);
I am trying to use the setTargetRotation method, but I am not clear on how I am supposed to apply this rotation to the output image, which the docs describe only vaguely:
The rotation value of ImageInfo will be the rotation, which if applied to the output image, will make the image match target rotation specified here.
If I set a breakpoint in the analyze() method shown above, the image object does not get rotated when I change the setTargetRotation value, so I assume the docs are telling me to grab the orientation with getTargetRotation(), in the sense that these two pieces of code (builder vs. analyzer) are written separately and this information can be passed between them without actually applying any rotation. Did I understand this correctly? It doesn't make sense to me, because setTargetResolution actually changes the size sent via the ImageProxy; I'd expect setTargetRotation to likewise apply the rotation, but apparently it does not.
If my understanding is correct, is there an optimal efficient way to rotate these ImageProxy objects after entering the analyze method? Right now I'm doing it after converting to Bitmap via
Bitmap myImg = BitmapFactory.decodeStream(someInputStream);
Matrix matrix = new Matrix();
matrix.postRotate(30);
Bitmap rotated = Bitmap.createBitmap(myImg, 0, 0, myImg.getWidth(), myImg.getHeight(),
matrix, true);
The idea above came from here, but I'd think this is not the most efficient way to do it. I'm sure I could also come up with a way to transpose the arrays, but that could get tedious and messy quickly. Isn't there a way to set up the ImageAnalysis builder to deliver the rotated ImageProxy directly, rather than having to make a bitmap of everything?
The rotation value of ImageInfo will be the rotation, which if applied
to the output image, will make the image match target rotation
specified here.
An example to understand the definition would be to assume the target rotation matches the device's orientation. Applying the returned rotation to the output image will result in the image being upright, i.e. matching the device's orientation. So if the device is in its natural portrait position, the rotated output image will also be in that orientation.
Output image + Rotated by rotation value --> Upright output image
CameraX's documentation includes a section about rotations, since it can be a confusing topic. You can check it out here.
Going back to your question about setTargetRotation and the ImageAnalysis use case: it isn't meant to rotate the images passed to the Analyzer, but it does affect the rotation information of the images, i.e. ImageProxy.getImageInfo().getRotationDegrees(). Transforming images (rotating, cropping, etc.) can be an expensive operation, so CameraX does not modify the analysis frames; instead it provides the metadata required to make sense of the output images, metadata that can then be used with the image processors that analyze the frames.
If you need to rotate each analysis frame, using the Bitmap approach is one way, but it can be costly. Another more performant way may be to do it in native code.
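A minimal sketch of the Bitmap route inside the analyzer, assuming you already have some toBitmap(image) conversion for the YUV frame (that helper is a placeholder, not a CameraX API):

```java
@Override
public void analyze(@NonNull ImageProxy image) {
    // CameraX tells us how far the frame is from the target rotation.
    int rotationDegrees = image.getImageInfo().getRotationDegrees();
    Bitmap frame = toBitmap(image); // placeholder YUV -> Bitmap conversion
    if (rotationDegrees != 0) {
        Matrix matrix = new Matrix();
        matrix.postRotate(rotationDegrees);
        frame = Bitmap.createBitmap(frame, 0, 0,
                frame.getWidth(), frame.getHeight(), matrix, true);
    }
    // frame is now upright with respect to the target rotation.
    image.close(); // always close the ImageProxy so analysis can continue
}
```

Note that Bitmap.createBitmap allocates a second full-size bitmap per frame, which is why the native-code route can be worth the effort for high frame rates.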
I'm building a camera that needs to detect the user's face/eyes and measure distance through the eyes.
I found this project, https://github.com/IvanLudvig/Screen-to-face-distance. It works great, but it doesn't use a preview of the front camera. (Really, I tested it on at least 10 people, and all measurements were REALLY close or perfect.)
My app already had a selfie-camera part that I wrote myself using the old Camera API, and I couldn't find a way to make both the camera preview and the face-distance measurement work together with it; I would always get an error that the camera was already in use.
I decided to move to camera2 so I can use more than one camera stream, and I'm still learning the process of having two streams at the same time for different things. Documentation on this seems scarce, and I'm really lost.
Now, am I on the right path to this?
Also, in his project, Ivan uses this:
Camera camera = frontCam();
Camera.Parameters campar = camera.getParameters();
F = campar.getFocalLength();
angleX = campar.getHorizontalViewAngle();
angleY = campar.getVerticalViewAngle();
sensorX = (float) (Math.tan(Math.toRadians(angleX / 2)) * 2 * F);
sensorY = (float) (Math.tan(Math.toRadians(angleY / 2)) * 2 * F);
This is the old camera API, how can I call this on the new one?
Judging from this answer: Android camera2 API get focus distance in AF mode
Do I need to get the min and max focal lengths?
For the horizontal and vertical angles I found this one: What is the Android Camera2 API equivalent of Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle()?
The rest, I believe, is done by Google's Cloud Vision API.
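Pulling the linked answers together on camera2, a sketch of reading the focal length (this assumes you already hold a CameraCharacteristics object for the front camera; the variable names are illustrative):

```java
// Most fixed-focus phone cameras report exactly one focal length (in mm).
float[] focalLengths = characteristics.get(
        CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
float f = focalLengths[0];

// The physical sensor size (in mm) replaces the old view-angle math entirely.
SizeF sensorSize = characteristics.get(
        CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
float sensorX = sensorSize.getWidth();
float sensorY = sensorSize.getHeight();
```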
EDIT:
I got it to work with camera2, using GMS's own example, CameraSourcePreview and GraphicOverlay, to display whatever I want together with the preview and detect faces.
Now to get the camera characteristics:
CameraManager manager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE);
try {
    character = manager.getCameraCharacteristics(String.valueOf(1));
} catch (CameraAccessException e) {
    Log.e(TAG, "CamAcc1Error.", e);
}
// SENSOR_INFO_PHYSICAL_SIZE already gives the sensor dimensions in mm,
// so there is no need to run them through the angle-of-view formula again.
sensorX = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getWidth();
sensorY = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getHeight();
This pretty much gives me mm accuracy to face distance, which is exactly what I needed.
Now what is left is getting a picture from this preview with GMS's CameraSourcePreview, so that I can use later.
Final Edit here:
I solved the picture issue but forgot to edit it in here. The thing is, all the examples that use camera2 to take a picture are really complicated (rightly so, it's a better API than Camera and has a lot of options), but it can be simplified to what I did here:
mCameraSource.takePicture(null, bytes -> {
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    if (bitmap != null) {
        Matrix matrix = new Matrix();
        matrix.postRotate(180);
        matrix.postScale(1, -1);
        rotateBmp = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(),
                bitmap.getHeight(), matrix, false);
        saveBmp2SD(STORAGE_PATH, rotateBmp);
        rotateBmp.recycle();
        bitmap.recycle();
    }
});
That's all I needed to take a picture and save it to a location I specified. Don't mind the recycling here; it's not right, and I'm working on it.
It looks like that bit of math is calculating the physical dimensions of the image sensor via the angle-of-view equation: sensorSize = 2 * focalLength * tan(angleOfView / 2).
The camera2 API has the sensor dimensions as part of the camera characteristics directly: SENSOR_INFO_PHYSICAL_SIZE.
In fact, if you want to get the field of view in camera2, you have to use the same equation in the other direction, since FOVs are not part of camera characteristics.
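As a sketch, inverting that same equation recovers the field of view from camera2's SENSOR_INFO_PHYSICAL_SIZE and a focal length from LENS_INFO_AVAILABLE_FOCAL_LENGTHS (the class name here is made up for illustration; the math is the only substance):

```java
public class FovUtil {
    // Inverse of sensorSize = 2 * f * tan(fov / 2):
    // fov = 2 * atan(sensorDim / (2 * f)), both lengths in mm.
    public static double fieldOfViewDegrees(double sensorDimMm, double focalLengthMm) {
        return Math.toDegrees(2.0 * Math.atan(sensorDimMm / (2.0 * focalLengthMm)));
    }
}
```

For example, a 4.8 mm-wide sensor behind a 4.0 mm lens gives a horizontal field of view of roughly 62 degrees.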
Beyond that, it looks like the example you linked just uses the old camera API to fetch that FOV information, and then closes the camera and uses the Vision API to actually drive the camera. So you'd have to look at the vision API docs to see how you can give it camera input instead of having it drive everything. Or you could use the camera API's built-in face detector, which on many devices gives you eye locations as well.
I am experiencing a strange issue with the FaceDetector from the Android SDK. The code below is working fine and detecting faces correctly when using the back camera, but no matter what, when the picture is from the front camera, no face is detected.
FaceDetector.Face[] faces = new FaceDetector.Face[1];
FaceDetector faceDetector = new FaceDetector(width, height, 1);
int facesFound = faceDetector.findFaces(picture, faces);
Log.d(TAG, "Face found: "+(facesFound == 1));
I am trying to find an explanation for this but haven't reached any conclusion. I have even tried cleaning the picture's metadata, in case the FaceDetector was set to ignore faces in pictures coming from the front camera.
There are various factors that could be the cause of the problem. Here are some I've run into:
The bitmap is not upright
The bitmap is not in 565 pixel format
The image is poor quality (too dark, too noisy, poor resolution, obscured by a finger, etc.)
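For the pixel-format point, a one-line sketch that copies an existing bitmap into the 565 format the legacy android.media.FaceDetector expects (bitmap is a placeholder for your decoded image):

```java
// FaceDetector.findFaces only works on RGB_565 bitmaps; copy into that format.
Bitmap converted = bitmap.copy(Bitmap.Config.RGB_565, false);
```

Front-camera photos are also commonly mirrored and rotated relative to back-camera ones, which ties this question back to the first cause in the list.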
I have searched and found simple code to rotate an image. I am pulling the image out of an ImageView into a bitmap, rotating it, then putting it back. I realize this is not the most effective method, but I don't think it should crash without giving an error message in the catch block.
Here is my code. The only value passed in is "r" or "l", depending on which direction I want to rotate. Smaller images (1500x1500 or less) work just fine; things go bad around the 2500x2500 size.
public void rotate(String dir)
{
try
{
float angle = (dir.equals("r") ? 90 : -90);
Bitmap bitmap = ((BitmapDrawable) imageView.getDrawable()).getBitmap();
Matrix matrix = new Matrix();
matrix.reset();
matrix.postRotate(angle);
bitmap = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(), matrix, false);
imageView.setImageBitmap(bitmap);
}
catch(Exception e)
{
Utilities.logError(e.toString());
}
}
Any clue as to why it is crashing and why it doesn't throw an exception? I just get a message "Unfortunately, process ... has stopped" and get kicked back to my app's welcome screen.
Oh, for kicks I hard-coded the angle to zero and it didn't crash. I suspect it is just taking too long to rotate and Android is having a fit, but I'm not sure how to confirm that is the problem or how to tell Android to wait a little longer.
Even if I reduce the preview image for the rotation, when I go to save I will have to rotate the full size image at least once and will hit this same issue. Won't I?
I can more or less guarantee, without looking at the logs, that you're getting an OutOfMemoryError (which is an Error, not an Exception, so your catch block never sees it).
You need to use smaller images, or use a different method to rotate that doesn't use as much memory (you're allocating two 2500x2500 bitmaps at the same time here; that's tons!).
Try using a RotateAnimation to get your effect instead.
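A sketch of the animation route, which rotates the view on screen without allocating a second full-size bitmap (the 90-degree angle is hard-coded here for illustration):

```java
// Rotate the ImageView itself; no new Bitmap is created.
RotateAnimation rotate = new RotateAnimation(0f, 90f,
        Animation.RELATIVE_TO_SELF, 0.5f,   // pivot X: view centre
        Animation.RELATIVE_TO_SELF, 0.5f);  // pivot Y: view centre
rotate.setDuration(0);      // jump straight to the final angle
rotate.setFillAfter(true);  // keep the view rotated afterwards
imageView.startAnimation(rotate);
```

The underlying bitmap is untouched, so when you eventually save the full-size image you would still need one real rotation, but only one, and not on every preview interaction.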
Hope this helps :)
My app is using camera to take photos. The problem is, the photo is rotated by 90 degrees. The app is designed to run in portrait orientation and I have set
android:configChanges="orientation|screenSize"
to avoid orientation changes. I thought I managed to fix it with
parameters.setRotation(90);
but it turns out that this varies across devices (tested on a Lenovo ThinkPad tablet and a couple of smartphones). I tried reading the photo's EXIF data, but orientation is not included there. I know there are many similar posts, but most of them concern the default camera app. Could someone explain what causes this problem and how I can fix it? Thanks in advance.
Try this to get the image the way you want:
public static Bitmap createRotatedBitmap(Bitmap bm, float degree) {
    if (degree == 0) {
        return bm; // nothing to rotate; returning null here would be a bug
    }
    Matrix matrix = new Matrix();
    matrix.preRotate(degree);
    return Bitmap.createBitmap(bm, 0, 0, bm.getWidth(), bm.getHeight(),
            matrix, true);
}
bitmap = createRotatedBitmap(bitmap, 90);
Yes, the orientation will not be exactly the same on all devices; it is completely hardware-dependent and can vary from device to device. You can't fix it automatically. One option is to let the user set the rotation once: when your application launches for the first time, get the base rotation angle from the user, save it to the settings, and apply it from then on.
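A sketch of that save-once approach with SharedPreferences (the preference file and key names are made up for illustration):

```java
// Persist the user-chosen base rotation once, then reuse it on every capture.
SharedPreferences prefs = context.getSharedPreferences("camera_prefs",
        Context.MODE_PRIVATE);

// After the user confirms the correct orientation on first launch:
prefs.edit().putInt("base_rotation", 90).apply();

// Later, before saving a captured photo:
int baseRotation = prefs.getInt("base_rotation", 0);
if (baseRotation != 0) {
    bitmap = createRotatedBitmap(bitmap, baseRotation);
}
```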