I'm building a camera app that needs to detect the user's face/eyes and measure the screen-to-face distance through the eyes.
I found this project: https://github.com/IvanLudvig/Screen-to-face-distance. It works great (really, I tested it on at least 10 people and all measurements were very close or spot on), but it doesn't use a preview of the front camera.
My app already had a selfie-camera part that I wrote myself using the old Camera API, but I couldn't find a way to make the camera preview and the face-distance measurement work together there: I would always get an error that the camera was already in use.
I decided to move to camera2 so I could use more than one camera stream, and I'm still learning this process of having two streams at the same time for different purposes. The documentation on this seems scarce, and I'm really lost.
Now, am I on the right path with this?
Also, in his project, Ivan uses this:
Camera camera = frontCam();
Camera.Parameters campar = camera.getParameters();
F = campar.getFocalLength();              // focal length in mm
angleX = campar.getHorizontalViewAngle(); // horizontal field of view in degrees
angleY = campar.getVerticalViewAngle();   // vertical field of view in degrees
// Physical sensor dimensions (mm) from the angle-of-view equation:
// sensorSize = 2 * F * tan(angle / 2)
sensorX = (float) (Math.tan(Math.toRadians(angleX / 2)) * 2 * F);
sensorY = (float) (Math.tan(Math.toRadians(angleY / 2)) * 2 * F);
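To get a feel for the numbers (hypothetical values, not from any particular device): with F = 3.5 mm and angleX = 60 degrees, sensorX = 2 * 3.5 * tan(30°) ≈ 4.04 mm, a plausible width for a phone camera sensor.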
This is the old camera API; how can I do the same with the new one?
Judging from this answer: Android camera2 API get focus distance in AF mode
Do I need to get the min and max focal lengths?
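From that answer, a minimal sketch of reading the focal length with camera2 (assuming characteristics was already obtained from CameraManager.getCameraCharacteristics()) might be:

float[] focalLengths =
        characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
// Most phone lenses are fixed, so this array usually holds a single value (in mm):
float F = focalLengths[0];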
For the horizontal and vertical angles I found this one: What is the Android Camera2 API equivalent of Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle()?
The rest, I believe, is done by Google's Cloud Vision API.
EDIT:
I got it to work on camera2 using GMS's own example, with CameraSourcePreview and GraphicOverlay to display whatever I want together with the preview and to detect faces.
Now to get the camera characteristics:
CameraManager manager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE);
try {
    // id "1" is the front camera on this device
    character = manager.getCameraCharacteristics(String.valueOf(1));
} catch (CameraAccessException e) {
    Log.e(TAG, "CamAcc1Error.", e);
}
// SENSOR_INFO_PHYSICAL_SIZE is already the physical sensor size in mm,
// so, unlike with the old API, no angle-of-view conversion is needed:
sensorX = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getWidth();
sensorY = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getHeight();
This pretty much gives me millimetre accuracy for the face distance, which is exactly what I needed.
What's left now is getting a picture from this preview with GMS's CameraSourcePreview, so I can use it later.
Final Edit here:
I solved the picture issue but forgot to edit it in here. All the examples that use camera2 to take a picture are really complicated (rightly so, it's a better API than camera and has a lot of options), but it can be simplified to what I did here:
mCameraSource.takePicture(null, bytes -> {
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    if (bitmap != null) {
        // Undo the front camera's rotation and mirroring before saving
        Matrix matrix = new Matrix();
        matrix.postRotate(180);
        matrix.postScale(1, -1);
        rotateBmp = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(),
                bitmap.getHeight(), matrix, false);
        saveBmp2SD(STORAGE_PATH, rotateBmp);
        rotateBmp.recycle();
        bitmap.recycle();
    }
});
That's all I needed to take a picture and save it to a location I specified. Don't mind the recycling here; it's not right yet, and I'm working on it.
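(saveBmp2SD is my own helper; a minimal sketch of what such a helper can look like, with the implementation details assumed rather than copied from my code:)

private void saveBmp2SD(String dir, Bitmap bmp) {
    // Hypothetical implementation: write the bitmap as a JPEG into the given directory
    File file = new File(dir, System.currentTimeMillis() + ".jpg");
    try (FileOutputStream out = new FileOutputStream(file)) {
        bmp.compress(Bitmap.CompressFormat.JPEG, 95, out);
    } catch (IOException e) {
        Log.e(TAG, "Could not save bitmap", e);
    }
}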
It looks like that bit of math is calculating the physical dimensions of the image sensor via the angle-of-view equation: sensorSize = 2 * F * tan(angle / 2).
The camera2 API has the sensor dimensions as part of the camera characteristics directly: SENSOR_INFO_PHYSICAL_SIZE.
In fact, if you want to get the field of view in camera2, you have to use the same equation in the other direction, since FOVs are not part of camera characteristics.
Beyond that, it looks like the example you linked just uses the old camera API to fetch that FOV information, and then closes the camera and uses the Vision API to actually drive the camera. So you'd have to look at the Vision API docs to see how you can give it camera input instead of having it drive everything. Or you could use the camera API's built-in face detector, which on many devices gives you eye locations as well.
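As a hedged sketch (variable names assumed), going the other direction with the same equation looks like:

SizeF sensorSize =
        characteristics.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);
float[] focalLengths =
        characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
float f = focalLengths[0]; // mm
// FOV = 2 * atan(sensorSize / (2 * f)), per axis:
double fovX = 2 * Math.toDegrees(Math.atan(sensorSize.getWidth() / (2 * f)));
double fovY = 2 * Math.toDegrees(Math.atan(sensorSize.getHeight() / (2 * f)));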
Related
I'm using ML Kit face detection in my Android application, implemented with the Camera2 library. The functionality works fine and I get a bounding box for the face detected in the camera preview. But when I compare it with the ML Kit sample application, I think there is a mistake in my bounding-box resize calculation.
private fun drawAdjustedFaceBoundBox(canvas: Canvas) {
    val myPaint = Paint()
    myPaint.color = Color.rgb(220, 249, 10)
    myPaint.strokeWidth = 5f
    myPaint.style = Paint.Style.STROKE
    if (analyzedImageSize.width != 0 && analyzedImageSize.height != 0) {
        val horizontalScaleFactor =
            previewScreenSize.width / analyzedImageSize.width.toFloat()
        val verticalScaleFactor =
            previewScreenSize.height / analyzedImageSize.height.toFloat()
        val adjustedBoundRect = RectF()
        adjustedBoundRect.top = (objectBound.top * verticalScaleFactor) + surfaceTop.toFloat()
        adjustedBoundRect.left = objectBound.left * horizontalScaleFactor
        adjustedBoundRect.right = objectBound.right * horizontalScaleFactor
        adjustedBoundRect.bottom = (objectBound.bottom * verticalScaleFactor) + surfaceTop.toFloat()
        val adjustedMirrorObjectBound = RectF(adjustedBoundRect)
        if (mirrorCoordinates) {
            val originalRight = adjustedBoundRect.right
            val originalLeft = adjustedBoundRect.left
            // Mirror the coordinates since it's the front-facing camera
            adjustedMirrorObjectBound.left = previewScreenSize.width - originalRight
            adjustedMirrorObjectBound.right = previewScreenSize.width - originalLeft
        }
        canvas.drawRect(adjustedMirrorObjectBound, myPaint)
    }
}
This is the function I use for calculating the resized bounding box. I'll explain my approach.
The ML Kit analysis image size is 480 * 640, and the camera preview window is 1080 * 2131. I'm using the front-facing camera. What I first did was get a scale factor for each axis: X is named horizontalScaleFactor and Y is named verticalScaleFactor. Then I multiplied the original bounding-box values by the matching scale factors. surfaceTop.toFloat() is the distance from the top of the screen to my camera preview view; in my example it's 0. Since I'm using the front-facing camera, I mirrored the coordinates after the scale-up calculation. After applying these calculations there is a bounding box on my detected face, but it doesn't cover my whole face like in the ML Kit sample project. When I move my face left and right, the bounding box moves with the face correctly, so the mirroring calculation seems right. I suspect something is wrong with my scale-up calculation. Please refer to the images below to get an idea of the difference between my application and the sample application.
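(For reference, with the numbers above: horizontalScaleFactor = 1080 / 480 = 2.25, while verticalScaleFactor = 2131 / 640 ≈ 3.33. The two buffers have different aspect ratios, 3:4 versus roughly 1:2, so the two axes end up scaled by very different factors.)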
My Application preview bound box
ML kit sample application bound box
Basically, my bounding box doesn't cover the ears and chin areas. This seems like more of a math problem than a code issue. Please help me identify what I'm doing wrong.
Edit
This is my repository; I'm comparing it against this sample repository's face detection implementation. Please check the dev branch on my repo.
Edit 2
This is how it looks when the application is running. I don't have access to the proxy image since it isn't saved to storage. The proxy image is created by the CameraX library, which is what's used for the image capturing.
What I'm trying to achieve:
What I achieve with my TensorFlow app:
Background:
I'm using a TensorFlow Lite application that works badly because the images produced with the flash on are blurry, and many similar white objects can't be distinguished. (The images are taken inside a sort of cup, which is a closed environment; this is why the flash is always on.)
If you want to reproduce the project, you can follow the instructions below:
Set up the working directory:
git clone https://github.com/tensorflow/examples.git
Open the project with Android Studio by taking the following steps:
Open Android Studio. After it loads, select "Open an existing Android Studio project".
In the file selector, choose examples/lite/examples/image_classification/android from your working directory to load the project.
In LegacyCameraConnectionFragment.java, in the function onSurfaceTextureAvailable (around line 90), this code is added to keep the flash on all the time (it's off by default):
List<String> flashModes = parameters.getSupportedFlashModes();
if (flashModes.contains(android.hardware.Camera.Parameters.FLASH_MODE_TORCH)) {
    parameters.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
}
More info about installation can be found here: https://www.tensorflow.org/lite/models/image_classification/overview#example_applications_and_guides
What I tried:
First, I tried tweaking the camera parameters:
// camera2 path (CameraConnectionFragment):
previewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_MACRO);
previewRequestBuilder.set(CaptureRequest.FLASH_MODE,
        CaptureRequest.FLASH_MODE_TORCH);
// old Camera API path (LegacyCameraConnectionFragment):
parameters.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_MACRO);
parameters.set("iso", "100");
parameters.setJpegQuality(100);
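For the camera2 path, a hedged sketch of the same intent (my own reading of the docs, not from the example): FLASH_MODE_TORCH is only honored when auto-exposure is not in an auto-flash mode, and a fixed ISO only takes effect with auto-exposure disabled.

// Torch + macro focus in a camera2 preview request:
previewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE,
        CaptureRequest.CONTROL_AE_MODE_ON); // auto-flash AE modes would override FLASH_MODE
previewRequestBuilder.set(CaptureRequest.FLASH_MODE,
        CaptureRequest.FLASH_MODE_TORCH);
previewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_MACRO);
// Fixing ISO requires turning auto-exposure off entirely:
// previewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_OFF);
// previewRequestBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, 100);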
Then I tried to implement autofocus with the following code (which seems to do some focusing, but the image still stays almost the same):
private Camera.AutoFocusCallback myAutoFocusCallback = new Camera.AutoFocusCallback() {
    @Override
    public void onAutoFocus(boolean success, Camera camera) {
        if (success) {
            LegacyCameraConnectionFragment.camera.cancelAutoFocus();
        }
    }
};
public void doTouchFocus(final Rect tfocusRect) {
    try {
        Camera mCamera = LegacyCameraConnectionFragment.camera;
        List<Camera.Area> focusList = new ArrayList<Camera.Area>();
        Camera.Area focusArea = new Camera.Area(tfocusRect, 1000);
        focusList.add(focusArea);
        Camera.Parameters param = mCamera.getParameters();
        param.setFocusAreas(focusList);
        param.setMeteringAreas(focusList);
        mCamera.setParameters(param);
        mCamera.autoFocus(myAutoFocusCallback);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        float x = event.getX();
        float y = event.getY();
        Rect touchRect = new Rect(
                (int) (x - 100),
                (int) (y - 100),
                (int) (x + 100),
                (int) (y + 100));
        // Map from view coordinates to the [-1000, 1000] space
        // that Camera.Area / setFocusAreas expects:
        final Rect targetFocusRect = new Rect(
                touchRect.left * 2000 / previewWidth - 1000,
                touchRect.top * 2000 / previewHeight - 1000,
                touchRect.right * 2000 / previewWidth - 1000,
                touchRect.bottom * 2000 / previewHeight - 1000);
        doTouchFocus(targetFocusRect);
    }
    return false;
}
Third, I tried checking some repos:
The first repo is Camera2Basic:
https://github.com/googlearchive/android-Camera2Basic
It produces the same bad results.
Then I tried OpenCamera's source code, which can be found on SourceForge:
https://sourceforge.net/projects/opencamera/files/test_20200301/
That app produces really good results, but after a few days I still couldn't figure out which part to take from it to make this work. I believe it has to do with the focus, but I wasn't able to work out how to extract the relevant code.
I also watched some YouTube videos and went over ten posts here about Android's Camera API v1 and v2, and tried to fix it on my own.
I have no idea how to continue; any ideas are highly appreciated.
You are using a very old implementation of the Camera API in LegacyCameraConnectionFragment, which is deprecated. You should use android.hardware.camera2, which is used in the other TensorFlow example, CameraConnectionFragment. Recently, CameraX was released to beta; there are probably fewer examples for you to follow online, but some people are enjoying it already. More info about CameraX here.
Looks like CameraConnectionFragment.java is already using optimal settings for you?
// Auto focus should be continuous for camera preview.
previewRequestBuilder.set(
        CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
// Flash is automatically enabled when necessary.
previewRequestBuilder.set(
        CaptureRequest.CONTROL_AE_MODE,
        CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
PS: I don't think you should hard-code or even 'fine-tune' your camera settings so that they only work in a narrow set of cases. Let the API do the work for you.
Suggestion: for OpenCamera, switch everything to manual and then see if you can find a combination of flash, shutter speed, focus mode, and ISO that does what you expect; then you could try to reproduce those parameters in your code.
Also, I would suggest using the Camera2 API in the future, as it allows more fine-grained control over some parameters. See if you can use CameraConnectionFragment instead of LegacyCameraConnectionFragment.
Addendum: OpenCamera saves all parameters in an image's EXIF data, so you could just look up the parameters for the "good" image using any decent image viewer.
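If you'd rather read those EXIF values in code, a small sketch using ExifInterface (the file path is a placeholder; some of these tags require API level 24+):

// Inspect the capture parameters OpenCamera recorded in the JPEG's EXIF data:
ExifInterface exif = new ExifInterface("/path/to/good_photo.jpg");
String exposure = exif.getAttribute(ExifInterface.TAG_EXPOSURE_TIME);
String aperture = exif.getAttribute(ExifInterface.TAG_F_NUMBER);
String iso = exif.getAttribute(ExifInterface.TAG_ISO_SPEED_RATINGS);
String focalLength = exif.getAttribute(ExifInterface.TAG_FOCAL_LENGTH);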
I'm working on an app using Android's Camera2 API. So far I've been able to get a preview displayed within a TextureView. The app is by default in landscape mode. When using the emulator the preview appears upside-down. On my physical Nexus 5 the preview is usually displayed correctly (landscape, not upside-down), but occasionally it is rotated by 90 degrees yet stretched to the dimensions of the screen.
I thought this would be easy and that the following code would return the necessary information about the current orientation:
// display rotation
getActivity().getWindowManager().getDefaultDisplay().getRotation();
// sensor orientation
mManager.getCameraCharacteristics(mCameraId).get(CameraCharacteristics.SENSOR_ORIENTATION);
... I was pretty surprised to see that the above code always returned 1 for the display rotation and 90 for the sensor orientation, regardless of whether the preview was rotated by 90 degrees or not. (Within the emulator the sensor orientation is always 270, which kind of makes sense if I assume 90 to be the correct orientation.)
I also checked the width and height within onMeasure in the AutoMeasureTextureView (adapted from Android's Camera2 example) that I'm using to create my TextureView, but no luck either: the width and height reported within onMeasure are always the same regardless of the preview rotation.
So I'm clueless about how to tackle this issue. Does anyone have an idea what could cause the occasional hiccups in my preview orientation?
[Edit]
A detail I just found out: whenever the preview appears rotated, onSurfaceTextureSizeChanged in the TextureView.SurfaceTextureListener does not seem to get called. The documentation for onSurfaceTextureSizeChanged says it is called whenever the SurfaceTexture's buffer size changes. I have a method createCameraPreviewSession (copied from Android's Camera2 example) in which I set the default buffer size of my texture like this:
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
From my logging output I can tell that onSurfaceTextureSizeChanged is called exactly after that; however, not always... (or does setting the default buffer size sometimes silently fail?).
I think I can answer my own question: I created my Camera2 fragment following Android's Camera2 example. However, I didn't consider the method configureTransform to be important since, unlike the example code, my application is forced to landscape mode anyway. That assumption turned out to be wrong. Since reintegrating configureTransform into my code, I haven't experienced any more hiccups.
Update: The original example within the Android documentation pages doesn't seem to exist anymore. I've updated the link which is now pointing to the code on Github.
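For reference, the configureTransform method in the Camera2Basic sample looks roughly like this (mTextureView and mPreviewSize are fields of the example fragment; quoted from memory, so check the linked source):

private void configureTransform(int viewWidth, int viewHeight) {
    int rotation = getActivity().getWindowManager().getDefaultDisplay().getRotation();
    Matrix matrix = new Matrix();
    RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);
    RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());
    float centerX = viewRect.centerX();
    float centerY = viewRect.centerY();
    if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
        bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());
        matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
        float scale = Math.max(
                (float) viewHeight / mPreviewSize.getHeight(),
                (float) viewWidth / mPreviewSize.getWidth());
        matrix.postScale(scale, scale, centerX, centerY);
        matrix.postRotate(90 * (rotation - 2), centerX, centerY);
    } else if (Surface.ROTATION_180 == rotation) {
        matrix.postRotate(180, centerX, centerY);
    }
    mTextureView.setTransform(matrix);
}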
I followed the whole textureView.setTransform(matrix) approach listed above, and it worked. However, I was also able to set the rotation manually using the much simpler textureView.setRotation(270), without needing to create a Matrix.
I had also faced a similar issue on a Nexus device. The code below works for me.
Call this function before opening the camera and also in onResume().
private void transformImage(int width, int height) {
    if (textureView == null) {
        return;
    }
    try {
        Matrix matrix = new Matrix();
        int rotation = getWindowManager().getDefaultDisplay().getRotation();
        RectF textureRectF = new RectF(0, 0, width, height);
        // Note the swapped width/height: the buffer is rotated relative to the view
        RectF previewRectF = new RectF(0, 0, textureView.getHeight(), textureView.getWidth());
        float centerX = textureRectF.centerX();
        float centerY = textureRectF.centerY();
        if (rotation == Surface.ROTATION_90 || rotation == Surface.ROTATION_270) {
            previewRectF.offset(centerX - previewRectF.centerX(),
                    centerY - previewRectF.centerY());
            matrix.setRectToRect(textureRectF, previewRectF, Matrix.ScaleToFit.FILL);
            // (float) width / width is always 1, so this is effectively
            // max(1f, (float) height / width); it mirrors the scale step
            // in the Camera2Basic sample's configureTransform
            float scale = Math.max((float) width / width, (float) height / width);
            matrix.postScale(scale, scale, centerX, centerY);
            matrix.postRotate(90 * (rotation - 2), centerX, centerY);
        }
        textureView.setTransform(matrix);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
https://developers.google.com/android/reference/com/google/android/gms/vision/face/FaceDetector.Builder
I'm using the above Google service in my app for face detection. I made sure my phone has the minimum Google Play services version, which on my phone is 8.3, but I still can't get the face detection to work! I imported the library by importing the Google Play services library into my Eclipse project. Here's the code:
@Override
protected void onPreExecute() {
    detector = new FaceDetector.Builder(MainContext)
            .setTrackingEnabled(false)
            //.setProminentFaceOnly(true)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS) // required
            .build();
}
private void detectTheFace(Bitmap converted) {
    Frame frame = new Frame.Builder().setBitmap(converted).build();
    faces = detector.detect(frame);
}
I don't know if it's necessary for the bitmap used for face detection to have the RGB_565 configuration, but I did it anyway. I tried with and without changing the RGB configuration, and it yields the same results: the faces SparseArray is of size 0, meaning it never detects a face. For some context on the above code, I'm executing the face detection in an AsyncTask because I want to run it in the background.
I had the same problem, i.e. it was working fine on a Nexus but not on a Galaxy. I resolved the problem by rotating the bitmap by 90 degrees whenever detector.detect() returns a face list of size zero. The maximum number of retries after calling detector.detect() is 3, because the 4th rotation gives you back the original bitmap.
Bitmap rotateBitmap(Bitmap bitmapToRotate) {
    Matrix matrix = new Matrix();
    matrix.postRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(bitmapToRotate, 0, 0,
            bitmapToRotate.getWidth(), bitmapToRotate.getHeight(), matrix,
            true);
    return rotatedBitmap;
}
Check whether the result returned by detector.detect() has zero size; if so, the code below should run:
if (faces.size() == 0) {
    if (rotationCounter < 3) {
        rotationCounter++;
        bitmap = rotateBitmap(bitmapToRotate);
        // call detector.detect() again here
    }
}
You can check whether rotating the bitmap is needed without writing the code above: from your original code, try capturing the image in the phone's landscape mode, or just rotate the image by 90 degrees and capture it.
To solve this problem, use the orientation specification from the photo's EXIF data.
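A minimal sketch of that idea (photoPath and the surrounding bitmap handling are assumed):

ExifInterface exif = new ExifInterface(photoPath);
int orientation = exif.getAttributeInt(
        ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
int degrees = 0;
switch (orientation) {
    case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
    case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
    case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
}
if (degrees != 0) {
    // Rotate the bitmap upright before running face detection on it
    Matrix matrix = new Matrix();
    matrix.postRotate(degrees);
    bitmap = Bitmap.createBitmap(bitmap, 0, 0,
            bitmap.getWidth(), bitmap.getHeight(), matrix, true);
}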
I am implementing an app that does real-time image processing on live images from the camera. It was working, with limitations, using the now-deprecated android.hardware.Camera; for improved flexibility and performance I'd like to use the new android.hardware.camera2 API. However, I'm having trouble getting the raw image data for processing. This is on a Samsung Galaxy S5. (Unfortunately, I don't have another Lollipop device handy to test on other hardware.)
I got the overall framework (with inspiration from the 'HdrViewFinder' and 'Camera2Basic' samples) working, and the live image is drawn on the screen via a SurfaceTexture and a GLSurfaceView. However, I also need to access the image data (grayscale only is fine, at least for now) for custom image processing. According to the documentation for StreamConfigurationMap.isOutputSupportedFor(Class), the recommended surface to obtain image data directly would be ImageReader (correct?).
So I've set up my capture requests as:
mSurfaceTexture.setDefaultBufferSize(640, 480);
mSurface = new Surface(surfaceTexture);
...
mImageReader = ImageReader.newInstance(640, 480, format, 2);
...
List<Surface> surfaces = new ArrayList<Surface>();
surfaces.add(mSurface);
surfaces.add(mImageReader.getSurface());
...
mCameraDevice.createCaptureSession(surfaces, mCameraSessionListener, mCameraHandler);
and in the onImageAvailable callback for the ImageReader, I'm accessing the data as follows:
Image img = reader.acquireLatestImage();
ByteBuffer grayscalePixelsDirectByteBuffer = img.getPlanes()[0].getBuffer();
...but while (as said) the live image preview is working, there's something wrong with the data I get here (or with the way I get it). According to
mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP).getOutputFormats();
...the following ImageFormats should be supported: NV21, JPEG, YV12, YUV_420_888. I've tried all of them (plugged in for 'format' above); all support the set resolution according to getOutputSizes(format), but none of them gives the desired result:
NV21: ImageReader.newInstance throws java.lang.IllegalArgumentException: NV21 format is not supported
JPEG: This does work, but it doesn't seem to make sense for a real-time application to go through JPEG encode and decode for each frame...
YV12 and YUV_420_888: this is the weirdest result: I can get the grayscale image, but it is flipped vertically (yes, flipped, not rotated!) and significantly squished (scaled significantly horizontally, but not vertically).
What am I missing here? What causes the image to be flipped and squished? How can I get a geometrically correct grayscale buffer? Should I be using a different type of surface (instead of ImageReader)?
Any hints appreciated.
I found an explanation (though not necessarily a satisfactory solution): it turns out that the sensor array's aspect ratio is 16:9 (found via mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)).
At least when requesting YV12/YUV_420_888, the streamer appears not to crop the image in any way, but instead scales it non-uniformly to reach the requested frame size. The images have the correct proportions when requesting a 16:9 format (of which, unfortunately, there are only two higher-res ones). This seems a bit odd to me: it doesn't appear to happen when requesting JPEG, with the equivalent old camera API functions, or for stills; and I'm not sure what the non-uniformly scaled frames would be good for.
I feel that it's not a really satisfactory solution, because it means that you can't rely on the list of output formats; instead you have to find the sensor size first, find formats with the same aspect ratio, then downsample the image yourself (as needed)...
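As a sketch of that workaround (variable names assumed, and only as far as the reasoning above goes):

// Pick a YUV_420_888 output size whose aspect ratio matches the active sensor array,
// so the streamer doesn't have to scale non-uniformly:
Rect active = mCameraInfo.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
float sensorAspect = (float) active.width() / active.height();
StreamConfigurationMap map =
        mCameraInfo.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size best = null;
for (Size s : map.getOutputSizes(ImageFormat.YUV_420_888)) {
    float aspect = (float) s.getWidth() / s.getHeight();
    if (Math.abs(aspect - sensorAspect) < 0.01f
            && (best == null || s.getWidth() < best.getWidth())) {
        best = s; // smallest matching size keeps per-frame processing cheap
    }
}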
I don't know if this is the expected outcome here or a 'feature' of the S5. Comments or suggestions still welcome.
I had the same problem and found a solution.
The first part of the problem is setting the size of the surface buffer:
// We configure the size of default buffer to be the size of camera preview we want.
//texture.setDefaultBufferSize(width, height);
This is where the image gets skewed, not in the camera. You should comment it out, and then apply an up-scaling of the image when displaying it.
int[] rgba = new int[width * height];
nativeLoader.convertImage(width, height, data, rgba);
Bitmap bmp = mBitmap;
bmp.setPixels(rgba, 0, width, 0, 0, width, height);
Canvas canvas = mTextureView.lockCanvas();
if (canvas != null) {
    // Up-scale while drawing: 320x240 source rect into a 1280x960 destination rect
    canvas.drawBitmap(bmp, new Rect(0, 0, 320, 240),
            new Rect(0, 0, 640 * 2, 480 * 2), null);
    mTextureView.unlockCanvasAndPost(canvas);
}
image.close();
You can play around with the values to fine tune the solution for your problem.