New vision API - Picture size - android

I've been working on a project and managed to get face detection working, with focus, thanks to SO.
I am now taking pictures, but when using the front camera on my Nexus 5 with a preview size of 1280x960, the play services seem to set the picture size to 320x240.
I checked: 1280x960 is supported for both preview and picture.
I tried changing the parameters using reflection (same as for the focus), but nothing changed.
It seems the picture size needs to be changed before starting the preview...
I've been trying to read and debug the obfuscated code, but I can't figure out why the library decides to go for this low resolution :-(
The code used is close to what's included in the sample; I just added the possibility to take a picture using CameraSource.takePicture(...).
You can find the code in the samples repo.
Code to reproduce the issue => here
I changed the camera init to:
mCameraSource = new CameraSource.Builder(context, detector)
        .setRequestedPreviewSize(1280, 960)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .setRequestedFps(30.0f)
        .build();
Added a button and connected a click listener:
findViewById(R.id.snap).setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        mCameraSource.takePicture(null, new CameraSource.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] bytes) {
                Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
                Log.d("BITMAP", bmp.getWidth() + "x" + bmp.getHeight());
            }
        });
    }
});
Log output:
BITMAP: 320x240
Thanks for the help!

We have recently open sourced the CameraSource class. See here:
https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/ui/camera/CameraSource.java
This version includes a fix for the picture size issue. It will automatically select the highest resolution that the camera supports which matches the aspect ratio of the preview.
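For illustration, a rough sketch of that selection idea against the old android.hardware.Camera API (not the library's actual code) could look like this:
Camera.Parameters params = camera.getParameters();
Camera.Size preview = params.getPreviewSize();
double targetRatio = (double) preview.width / preview.height;
Camera.Size best = null;
for (Camera.Size candidate : params.getSupportedPictureSizes()) {
    double ratio = (double) candidate.width / candidate.height;
    if (Math.abs(ratio - targetRatio) > 0.01) {
        continue; // skip sizes with a different aspect ratio than the preview
    }
    if (best == null || candidate.width * candidate.height > best.width * best.height) {
        best = candidate; // keep the largest matching size seen so far
    }
}
if (best != null) {
    params.setPictureSize(best.width, best.height);
    camera.setParameters(params);
}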

Tensorflow's app Camera Images are Blurry

What I'm trying to achieve: (image omitted)
What I achieve w/ my TensorFlow app: (image omitted)
Background:
I'm using the TensorFlow Lite image classification application, and it works badly because the images produced with the flash on are blurry, so many similar white objects can't be distinguished. (The images are taken inside a sort of cup, which is a closed environment; this is why the flash is always on.)
If you want to reproduce the project, you can follow the instructions below:
Set up the working directory
git clone https://github.com/tensorflow/examples.git
Open the project with Android Studio by taking the following steps:
Open Android Studio. After it loads, select "Open an existing Android Studio project".
In the file selector, choose examples/lite/examples/image_classification/android from your
working directory to load the project.
In LegacyCameraConnectionFragment.java, in the function onSurfaceTextureAvailable, the following code is added around line 90 to turn the flash on all the time (it's off by default):
List<String> flashModes = parameters.getSupportedFlashModes();
if (flashModes.contains(android.hardware.Camera.Parameters.FLASH_MODE_TORCH)) {
    parameters.setFlashMode(android.hardware.Camera.Parameters.FLASH_MODE_TORCH);
}
More info about installation can be found here: https://www.tensorflow.org/lite/models/image_classification/overview#example_applications_and_guides
What I tried:
First, I tried tweaking the camera parameters:
previewRequestBuilder.set(
        CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_MACRO);
previewRequestBuilder.set(
        CaptureRequest.FLASH_MODE,
        CaptureRequest.FLASH_MODE_TORCH);
parameters.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_MACRO);
parameters.set("iso", "100");
parameters.setJpegQuality(100);
Then I tried to implement autofocus with the following code (which seems to do some focusing, but the image stays almost the same):
private Camera.AutoFocusCallback myAutoFocusCallback = new Camera.AutoFocusCallback() {
    @Override
    public void onAutoFocus(boolean arg0, Camera arg1) {
        if (arg0) {
            LegacyCameraConnectionFragment.camera.cancelAutoFocus();
        }
    }
};

public void doTouchFocus(final Rect tfocusRect) {
    try {
        Camera mCamera = LegacyCameraConnectionFragment.camera;
        List<Camera.Area> focusList = new ArrayList<Camera.Area>();
        Camera.Area focusArea = new Camera.Area(tfocusRect, 1000);
        focusList.add(focusArea);
        Camera.Parameters param = mCamera.getParameters();
        param.setFocusAreas(focusList);
        param.setMeteringAreas(focusList);
        mCamera.setParameters(param);
        mCamera.autoFocus(myAutoFocusCallback);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        float x = event.getX();
        float y = event.getY();
        Rect touchRect = new Rect(
                (int) (x - 100),
                (int) (y - 100),
                (int) (x + 100),
                (int) (y + 100));
        // Map view coordinates to the camera's [-1000, 1000] focus-area coordinate space.
        final Rect targetFocusRect = new Rect(
                touchRect.left * 2000 / previewWidth - 1000,
                touchRect.top * 2000 / previewHeight - 1000,
                touchRect.right * 2000 / previewWidth - 1000,
                touchRect.bottom * 2000 / previewHeight - 1000);
        doTouchFocus(targetFocusRect);
    }
    return false;
}
Third, I tried checking some repos:
First repo is Camera2Basic
https://github.com/googlearchive/android-Camera2Basic
This repo produces the same bad results.
Then I tried OpenCamera's source code, which can be found on SourceForge:
https://sourceforge.net/projects/opencamera/files/test_20200301/
The app produces really good results, but after a few days I still couldn't figure out which part I should take from there to make this work. I believe it has to do with the focus, but I wasn't able to work out how to extract that code.
I also watched some YouTube videos and went over ten posts here about Android's Camera API v1 and v2 and tried to fix it on my own.
I have no idea how to continue; any ideas are highly appreciated.
You are using a very old implementation of the Camera API in LegacyCameraConnectionFragment, which is deprecated. You should use android.hardware.camera2, which is used in the other TensorFlow example, CameraConnectionFragment. Recently, CameraX was released to beta; there are probably fewer examples for you to follow online, but some people are enjoying it already. More info about CameraX here.
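If you want to experiment with CameraX, a minimal preview binding in Java might look roughly like this (previewView, lifecycleOwner and context are placeholder names, not from the TensorFlow sample; uses the androidx.camera.core, androidx.camera.lifecycle and androidx.camera.view artifacts):
ListenableFuture<ProcessCameraProvider> providerFuture =
        ProcessCameraProvider.getInstance(context);
providerFuture.addListener(() -> {
    try {
        ProcessCameraProvider cameraProvider = providerFuture.get();
        Preview preview = new Preview.Builder().build();
        // previewView is a PreviewView declared in the layout.
        preview.setSurfaceProvider(previewView.getSurfaceProvider());
        cameraProvider.unbindAll();
        cameraProvider.bindToLifecycle(
                lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview);
    } catch (ExecutionException | InterruptedException e) {
        Log.e("CameraX", "Could not bind camera use cases", e);
    }
}, ContextCompat.getMainExecutor(context));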
Looks like CameraConnectionFragment.java is already using optimal settings for you?
// Auto focus should be continuous for camera preview.
previewRequestBuilder.set(
        CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
// Flash is automatically enabled when necessary.
previewRequestBuilder.set(
        CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
PS: I don't think you should hard-code or even 'fine-tune' your camera settings so that they only work in a few narrow cases. Let the API do the work for you.
Suggestion: for OpenCamera, switch everything to manual and then see if you can find a combination of flash, shutter speed, focus mode, and ISO that does what you expect; then you could try to reproduce those parameters in your code.
Also, I would suggest using the Camera2 API in the future, as it allows more fine-grained control over some parameters. See if you can use CameraConnectionFragment instead of LegacyCameraConnectionFragment.
Addendum: OpenCamera saves all parameters in an image's EXIF data, so you could just look up the parameters for the "good" image using any decent image viewer.
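If you want to read those EXIF parameters programmatically rather than in an image viewer, a small sketch with android.media.ExifInterface could look like this (the file path is a placeholder, not from the original post):
try {
    ExifInterface exif = new ExifInterface("/sdcard/DCIM/OpenCamera/IMG.jpg"); // placeholder path
    String exposure = exif.getAttribute(ExifInterface.TAG_EXPOSURE_TIME);
    String iso = exif.getAttribute(ExifInterface.TAG_ISO_SPEED_RATINGS);
    String focalLength = exif.getAttribute(ExifInterface.TAG_FOCAL_LENGTH);
    String flash = exif.getAttribute(ExifInterface.TAG_FLASH);
    Log.d("EXIF", "exposure=" + exposure + " iso=" + iso
            + " focal=" + focalLength + " flash=" + flash);
} catch (IOException e) {
    e.printStackTrace();
}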

How to get ARCore AcquireCameraImageBytes() in color?

I've been stuck on this problem for days. I am trying to do some image processing in my project. I have looked over the computer vision example ARCore provides, but it only shows how to access the camera frame in black and white. I need color for my images. I've already looked at Save AcquireCameraImageBytes() from Unity ARCore to storage as an image and had no luck.
I have a function called GetImage() that is supposed to return the camera frame as a texture.
public Texture GetImage() {
    if (!Frame.CameraImage.AcquireCameraImageBytes().IsAvailable) {
        return null;
    }
    var ptr = Frame.CameraImage.AcquireCameraImageBytes();
    var bufferSize = ptr.Width * ptr.Height * 4;
    if (_tex == null) {
        _tex = new Texture2D(ptr.Width, ptr.Height, TextureFormat.RGBA32, false, false);
    }
    if (_bytes == null) {
        _bytes = new byte[bufferSize];
    }
    Marshal.Copy(_ptr.Y, _bytes, 0, bufferSize);
    _tex.LoadRawTextureData(_bytes);
    _tex.Apply();
    return _tex;
}
I run into two problems with this code.
First, after a few seconds my app freezes with the error "failed to acquire camera image with status error resources exhausted".
The second issue is if I do manage to display the image it is not in color and is repeated four times like in this post ARCore for Unity save camera image.
Does anyone have a working example of accessing ARCore images in color or know what is wrong with my code?
With "Frame.CameraImage.AcquireCameraImageBytes();" you get a greyscale image in the YUV-420-888 format (see doc) So with the right YUV to RGB transformation you should get a color camera image. (But I didn't find a right YUV to RGB transformation, yet)
I had a similiar questsion: ARCore Save Camera Image (Unity C#) on Button click in my updated question I posted code which get me the greyscale image from AcquireCameraImagesBytes().
If you want to do image processing, did you already looked at the Unity Computer Vision Example. example? See the _OnImageAvailable With the TextureReader API you can get colored images.
Also this github issue could be helpful: https://github.com/google-ar/arcore-unity-sdk/issues/221

Android Camera2 take picture while processing frames

I am using the Camera2 API to create a camera component that can scan barcodes and has the ability to take pictures during scanning. It kind of works, but the preview flickers: it looks like previous frames, and sometimes green frames, interrupt the realtime preview.
My code is based on Google's Camera2Basic. I'm just adding one more ImageReader and its surface as a new output and target for the CaptureRequest.Builder. One of the readers uses JPEG and the other YUV. The flickering disappears when I remove the JPEG reader's surface from the outputs (i.e. not passing it into createCaptureSession).
There's quite a lot of code, so I created a gist: click. I tried to get rid of completely irrelevant code.
Is the device you're testing on a LEGACY-level device?
If so, any captures targeting a JPEG output may be much slower since they can run a precapture sequence, and may briefly pause preview as well.
But it should not cause green frames, unless there's a device-level bug.
In case anyone ever struggles with this: there is a table in the docs showing that when 3 targets are specified, the YUV ImageReader can only use images up to the preview size (at most 1920x1080). Reducing its size helped!
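A rough sketch of that setup (placeholder names and sizes; in real code pick sizes from StreamConfigurationMap.getOutputSizes()): keep the YUV stream at or below 1920x1080 while the JPEG stream stays at full resolution, and pass all three targets to one session.
// Placeholder sizes; query StreamConfigurationMap for the sizes your device supports.
Size yuvSize = new Size(1280, 720);    // processing stream, kept at/below 1920x1080
Size jpegSize = new Size(4032, 3024);  // full-resolution still capture
ImageReader yuvReader = ImageReader.newInstance(
        yuvSize.getWidth(), yuvSize.getHeight(), ImageFormat.YUV_420_888, 2);
ImageReader jpegReader = ImageReader.newInstance(
        jpegSize.getWidth(), jpegSize.getHeight(), ImageFormat.JPEG, 2);
// All three targets (preview surface plus both readers) go into one capture session.
cameraDevice.createCaptureSession(
        Arrays.asList(previewSurface, yuvReader.getSurface(), jpegReader.getSurface()),
        sessionStateCallback, backgroundHandler);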
Yes you can. Assuming that you configure your preview to feed the ImageReader with YUV frames (because you could also put JPEG there, check it out), like so:
mImageReaderPreview = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(), ImageFormat.YUV_420_888, 1);
You can process those frames inside your OnImageAvailable listener:
@Override
public void onImageAvailable(ImageReader reader) {
    Image mImage = reader.acquireNextImage();
    if (mImage == null) {
        return;
    }
    try {
        // Do some custom processing like YUV to RGB conversion, cropping, etc.
        mFrameProcessor.setNextFrame(mImage);
        mImage.close();
    } catch (IllegalStateException e) {
        Log.e("TAG", e.getMessage());
    }
}

Android camera API blurry image on Samsung devices

After implementing the Camera2 API for the in-app camera, I noticed that on Samsung devices the images appear blurry. After searching about that I found the Samsung Camera SDK (http://developer.samsung.com/galaxy#camera). After implementing the SDK, the images are now fine on the Samsung Galaxy S7, but on the Galaxy S6 they are still blurry. Has anyone experienced this kind of issue with Samsung devices?
EDIT:
To complement @rcsumner's comment: I am setting autofocus by using
mPreviewBuilder.set(SCaptureRequest.CONTROL_AF_TRIGGER, SCaptureRequest.CONTROL_AF_TRIGGER_START);
mSCameraSession.capture(mPreviewBuilder.build(), new SCameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(SCameraCaptureSession session, SCaptureRequest request, STotalCaptureResult result) {
        isAFTriggered = true;
    }
}, mBackgroundHandler);
It is a long-exposure image where the user has to take a picture of a static, non-moving object. For this I am using CONTROL_AF_MODE_MACRO:
mCaptureBuilder.set(SCaptureRequest.CONTROL_AF_MODE, SCaptureRequest.CONTROL_AF_MODE_MACRO);
and also I am enabling auto flash if it is available
requestBuilder.set(SCaptureRequest.CONTROL_AE_MODE,
        SCaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
I am not really an expert in this API, I mostly followed the SDK example app.
There could be a number of issues causing this problem. One prominent one is the dimensions of your output image.
I ran the Camera2 API and the preview was clear, but the output was quite blurry:
val characteristics: CameraCharacteristics? = cameraManager.getCameraCharacteristics(cameraId)
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
        ?.getOutputSizes(ImageFormat.JPEG) // The issue
var width = imageDimension.width
var height = imageDimension.height
if (size != null) {
    width = size[0].width; height = size[0].height
}
val imageReader = ImageReader.newInstance(width, height, ImageFormat.JPEG, 5)
The line below was returning a dimension of about 245x144, which was way too small to be sent to the image reader. Somehow the output was being stretched to that size, which made it end up blurry. Therefore I removed this line:
val size = characteristics?.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)?.getOutputSizes(ImageFormat.JPEG) // this was returning a small size
Setting the width and height manually resolved the issue.
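As an alternative to hard-coding the dimensions, a common approach (used, for example, in Google's Camera2Basic sample) is to pick the largest JPEG size the camera reports. A rough Java sketch, assuming characteristics is non-null:
// Pick the largest reported JPEG output size instead of hard-coding width/height.
StreamConfigurationMap map =
        characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size largest = Collections.max(
        Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
        (a, b) -> Long.signum((long) a.getWidth() * a.getHeight()
                - (long) b.getWidth() * b.getHeight()));
ImageReader imageReader = ImageReader.newInstance(
        largest.getWidth(), largest.getHeight(), ImageFormat.JPEG, 5);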
You're setting the AF trigger for one frame, but then are you waiting for AF to complete? For AF_MODE_MACRO (are you verifying the device lists support for this AF mode?) you need to wait for AF_STATE_FOCUSED_LOCKED before the image is guaranteed to be stable and sharp. (You may also receive NOT_FOCUSED_LOCKED if the AF algorithm can't reach sharp focus, which could be because the object is just too close for the lens, or the scene is too confusing)
On most modern devices, it's recommended to use CONTINUOUS_PICTURE and not worry about AF triggering unless you really want to lock focus for some time period. In that mode, the device will continuously try to focus to the best of its ability. I'm not sure all that many devices support MACRO, to begin with.
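For illustration, a minimal sketch of waiting for the AF state with plain camera2 names (the Samsung SDK mirrors them with an "S" prefix; previewBuilder, session, backgroundHandler and captureStillPicture are placeholder names, and the session calls throw CameraAccessException, which you handle as in your existing code):
// The preview runs as a repeating request with afWatcher attached; the single
// AF-trigger capture is issued afterwards and the watcher fires the still capture
// once the state reaches FOCUSED_LOCKED (or NOT_FOCUSED_LOCKED).
CameraCaptureSession.CaptureCallback afWatcher = new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(CameraCaptureSession session, CaptureRequest request,
                                   TotalCaptureResult result) {
        Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
        if (afState != null
                && (afState == CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED
                        || afState == CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED)) {
            captureStillPicture(); // placeholder: issue the actual still capture here
        }
    }
};
session.setRepeatingRequest(previewBuilder.build(), afWatcher, backgroundHandler);
// Fire the AF trigger once; the repeating callback above watches for the locked state.
previewBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CaptureRequest.CONTROL_AF_TRIGGER_START);
session.capture(previewBuilder.build(), afWatcher, backgroundHandler);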

Face not detected using vision google service

https://developers.google.com/android/reference/com/google/android/gms/vision/face/FaceDetector.Builder
I'm using the above Google service in my app for face detection. I made sure my phone has the minimum Google Play services version, which on my phone is 8.3, but I still can't get face detection to work! I imported the library by adding the Google Play services library to my Eclipse project. Here's the code:
@Override
protected void onPreExecute() {
    detector = new FaceDetector.Builder(MainContext)
            .setTrackingEnabled(false)
            //.setProminentFaceOnly(true)
            .setLandmarkType(FaceDetector.ALL_LANDMARKS) // required
            .build();
}
private void detectTheFace(Bitmap converted) {
    Frame frame = new Frame.Builder().setBitmap(converted).build();
    faces = detector.detect(frame);
}
I don't know if the bitmap used for detection has to be in RGB_565 configuration, but I did it anyway. I tried with and without changing the RGB configuration and it yields the same results. Basically the faces SparseArray has size 0, meaning it never detects a face. Just to give some context on the above code: I'm executing the face detection in an AsyncTask because I want to run it in the background.
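For reference, a sketch of the RGB_565 conversion mentioned above (placeholder variable names; whether the conversion is actually required is not clear from the docs):
// Copy the source bitmap into RGB_565 before handing it to the detector.
Bitmap converted = original.copy(Bitmap.Config.RGB_565, false);
Frame frame = new Frame.Builder().setBitmap(converted).build();
SparseArray<Face> faces = detector.detect(frame);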
I had the same problem, i.e. it was working fine on a Nexus but not on a Galaxy. I resolved it by rotating the bitmap by 90 degrees whenever detector.detect() returns a face list of size zero. The maximum number of retries after calling detector.detect() is 3, because the 4th rotation gives you back the same bitmap.
Bitmap rotateBitmap(Bitmap bitmapToRotate) {
    Matrix matrix = new Matrix();
    matrix.postRotate(90);
    Bitmap rotatedBitmap = Bitmap.createBitmap(bitmapToRotate, 0, 0,
            bitmapToRotate.getWidth(), bitmapToRotate.getHeight(), matrix,
            true);
    return rotatedBitmap;
}
Check whether the faces returned by detector.detect() have zero size; if so, the code below should run:
if (faces.size() == 0) {
    if (rotationCounter < 3) {
        rotationCounter++;
        bitmap = rotateBitmap(bitmapToRotate);
        // call detector.detect() again here
    }
}
You can check whether the bitmap needs rotating without writing the code above: from your original code, try to capture the image with the phone in landscape mode, or just rotate the image by 90 degrees and capture it.
To solve this problem, use the orientation specification from the EXIF of the photo.
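A sketch of that approach using android.media.ExifInterface (photoPath and bitmap are placeholder names, not from the question):
try {
    ExifInterface exif = new ExifInterface(photoPath);
    int orientation = exif.getAttributeInt(
            ExifInterface.TAG_ORIENTATION, ExifInterface.ORIENTATION_NORMAL);
    int degrees = 0;
    switch (orientation) {
        case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
        case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
        case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
    }
    if (degrees != 0) {
        // Rotate the bitmap to the upright orientation before running the detector.
        Matrix matrix = new Matrix();
        matrix.postRotate(degrees);
        bitmap = Bitmap.createBitmap(bitmap, 0, 0,
                bitmap.getWidth(), bitmap.getHeight(), matrix, true);
    }
} catch (IOException e) {
    e.printStackTrace();
}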
