What I'm trying to achieve:
What I actually get with my TensorFlow app:
Background:
I'm using a TensorFlow Lite application that performs badly because the images produced with the flash on are blurry, and many similar white objects can't be distinguished. (The images are taken inside a sort of cup, a closed environment, which is why the flash is always on.)
If you want to reproduce the project, you can follow the instructions below:
Set up the working directory
git clone https://github.com/tensorflow/examples.git
Open the project with Android Studio by taking the following steps:
Open Android Studio. After it loads, select "Open an existing Android Studio project".
In the file selector, choose examples/lite/examples/image_classification/android from your working directory to load the project.
In LegacyCameraConnectionFragment.java, in the onSurfaceTextureAvailable function (around line 90), this code is added to keep the flash on all the time (it's off by default):
List<String> flashModes = parameters.getSupportedFlashModes();
if (flashModes.contains(android.hardware.Camera.Parameters.FLASH_MODE_TORCH)) {
    parameters.setFlashMode(android.hardware.Camera.Parameters.FLASH_MODE_TORCH);
}
More info about installation can be found here: https://www.tensorflow.org/lite/models/image_classification/overview#example_applications_and_guides
What I tried:
First, I tried tweaking the camera parameters:
// Camera2 (CaptureRequest) attempts:
previewRequestBuilder.set(
    CaptureRequest.CONTROL_AF_MODE,
    CaptureRequest.CONTROL_AF_MODE_MACRO);
previewRequestBuilder.set(
    CaptureRequest.FLASH_MODE,
    CaptureRequest.FLASH_MODE_TORCH);

// Legacy Camera (Camera.Parameters) attempts:
parameters.setFlashMode(Camera.Parameters.FLASH_MODE_TORCH);
parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_MACRO);
parameters.set("iso", "100");
parameters.setJpegQuality(100);
Then, I tried to implement autofocus with the following code (which seems to do some focusing, but the image still stays almost the same):
private Camera.AutoFocusCallback myAutoFocusCallback = new Camera.AutoFocusCallback() {
    @Override
    public void onAutoFocus(boolean success, Camera camera) {
        if (success) {
            LegacyCameraConnectionFragment.camera.cancelAutoFocus();
        }
    }
};
public void doTouchFocus(final Rect tfocusRect) {
    try {
        Camera mCamera = LegacyCameraConnectionFragment.camera;
        List<Camera.Area> focusList = new ArrayList<Camera.Area>();
        Camera.Area focusArea = new Camera.Area(tfocusRect, 1000);
        focusList.add(focusArea);
        Camera.Parameters param = mCamera.getParameters();
        param.setFocusAreas(focusList);
        param.setMeteringAreas(focusList);
        mCamera.setParameters(param);
        mCamera.autoFocus(myAutoFocusCallback);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        float x = event.getX();
        float y = event.getY();
        Rect touchRect = new Rect(
            (int) (x - 100),
            (int) (y - 100),
            (int) (x + 100),
            (int) (y + 100));
        // Map view coordinates to the camera's [-1000, 1000] focus-area coordinate system.
        final Rect targetFocusRect = new Rect(
            touchRect.left * 2000 / previewWidth - 1000,
            touchRect.top * 2000 / previewHeight - 1000,
            touchRect.right * 2000 / previewWidth - 1000,
            touchRect.bottom * 2000 / previewHeight - 1000);
        doTouchFocus(targetFocusRect);
    }
    return false;
}
Third, I tried checking some existing repos.
The first repo is Camera2Basic:
https://github.com/googlearchive/android-Camera2Basic
This repo produces the same bad results.
Then I tried OpenCamera's source code, which can be found on sourceforge.net:
https://sourceforge.net/projects/opencamera/files/test_20200301/
That app produces really good results, but after a few days I still couldn't figure out which part I should take from it to make this work. I believe it has to do with the focus, but I wasn't able to work out how to extract the relevant code.
I also watched some YouTube videos and went over 10+ posts here about Android's Camera API v1 and v2 and tried to fix it on my own.
I have no idea how to continue; any ideas are highly appreciated.
You are using a very old implementation of the Camera API in LegacyCameraConnectionFragment, which is deprecated. You should use android.hardware.camera2, which is used in the other TensorFlow example, CameraConnectionFragment. Recently, CameraX was released to beta; there are probably fewer examples for you to follow online, but some people are already enjoying it. More info about CameraX can be found in its official documentation.
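If you do decide to try CameraX, here is a minimal sketch of binding a preview and keeping the torch on. This is my own example (not code from the TensorFlow sample), and it assumes an androidx.camera.view.PreviewView called previewView, a LifecycleOwner such as your activity, and a Context called context:

// Sketch: bind a CameraX preview and keep the torch permanently on.
ListenableFuture<ProcessCameraProvider> providerFuture =
        ProcessCameraProvider.getInstance(context);
providerFuture.addListener(() -> {
    try {
        ProcessCameraProvider provider = providerFuture.get();
        Preview preview = new Preview.Builder().build();
        preview.setSurfaceProvider(previewView.getSurfaceProvider());
        androidx.camera.core.Camera camera = provider.bindToLifecycle(
                lifecycleOwner, CameraSelector.DEFAULT_BACK_CAMERA, preview);
        camera.getCameraControl().enableTorch(true); // flash stays on for the preview
    } catch (ExecutionException | InterruptedException e) {
        e.printStackTrace();
    }
}, ContextCompat.getMainExecutor(context));

CameraX runs continuous autofocus by default, so you should not need the touch-to-focus workarounds from the legacy API.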
It looks like CameraConnectionFragment.java is already using sensible settings for you:
// Auto focus should be continuous for camera preview.
previewRequestBuilder.set(
    CaptureRequest.CONTROL_AF_MODE,
    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
// Flash is automatically enabled when necessary.
previewRequestBuilder.set(
    CaptureRequest.CONTROL_AE_MODE,
    CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
PS: I don't think you should hard-code or even 'fine-tune' your camera settings so that they only work in a narrow set of cases. Let the API do the work for you.
Suggestion: for OpenCamera, switch everything to manual and then see if you can find a combination of flash, shutter speed, focus mode, and ISO that does what you expect; then you could try to reproduce those parameters in your code.
Also, I would suggest using the Camera2 API in the future, as it allows more fine-grained control over some parameters. See if you can use CameraConnectionFragment instead of LegacyCameraConnectionFragment.
Addendum: OpenCamera saves all parameters in an image's EXIF data, so you could just look up the parameters for the "good" image using any decent image viewer.
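A quick sketch of reading those tags back in code with android.media.ExifInterface (photoPath is a placeholder for wherever OpenCamera saved the "good" JPEG):

// Read the capture parameters OpenCamera stored in the image's EXIF data.
try {
    ExifInterface exif = new ExifInterface(photoPath);
    String iso      = exif.getAttribute(ExifInterface.TAG_ISO_SPEED_RATINGS);
    String exposure = exif.getAttribute(ExifInterface.TAG_EXPOSURE_TIME);
    String focal    = exif.getAttribute(ExifInterface.TAG_FOCAL_LENGTH);
    Log.d("EXIF", "ISO=" + iso + " exposure=" + exposure + " focal=" + focal);
} catch (IOException e) {
    e.printStackTrace();
}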
I'm building a camera app that needs to detect the user's face/eyes and measure the distance to them through the eyes.
I found that this project, https://github.com/IvanLudvig/Screen-to-face-distance, works great, but it doesn't use a preview of the front camera (really, I tested it on at least 10 people; all measurements were really close or perfect).
My app already had a selfie-camera part that I wrote using the old camera API, and I couldn't find a way to have both the camera preview and the face-distance measurement work together; I always got an error that the camera was already in use.
I decided to move to camera2 so I can use more than one camera stream, and I'm still learning this process of having two streams at the same time for different things. By the way, documentation on this seems scarce, and I'm really lost.
Now, am I on the right path to this?
Also, in his project, Ivan uses this:
Camera camera = frontCam();
Camera.Parameters campar = camera.getParameters();
F = campar.getFocalLength();
angleX = campar.getHorizontalViewAngle();
angleY = campar.getVerticalViewAngle();
sensorX = (float) (Math.tan(Math.toRadians(angleX / 2)) * 2 * F);
sensorY = (float) (Math.tan(Math.toRadians(angleY / 2)) * 2 * F);
This is the old camera API, how can I call this on the new one?
Judging from this answer: Android camera2 API get focus distance in AF mode
Do I need to get the min and max focal lengths?
For the horizontal and vertical angles I found this one: What is the Android Camera2 API equivalent of Camera.Parameters.getHorizontalViewAngle() and Camera.Parameters.getVerticalViewAngle()?
The rest I believe is done by Google's Cloud Vision API
EDIT:
I got it to work on camera2, using GMS's own example, CameraSourcePreview and GraphicOverlay to display whatever I want to display together the preview and detect faces.
Now to get the camera characteristics:
CameraManager manager = (CameraManager) this.getSystemService(Context.CAMERA_SERVICE);
try {
    character = manager.getCameraCharacteristics(String.valueOf(1));
} catch (CameraAccessException e) {
    Log.e(TAG, "CamAcc1Error.", e);
}
// SENSOR_INFO_PHYSICAL_SIZE is the physical sensor size in millimetres.
angleX = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getWidth();
angleY = character.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE).getHeight();
sensorX = (float) (Math.tan(Math.toRadians(angleX / 2)) * 2 * F);
sensorY = (float) (Math.tan(Math.toRadians(angleY / 2)) * 2 * F);
This pretty much gives me mm accuracy to face distance, which is exactly what I needed.
Now what is left is getting a picture from this preview with GMS's CameraSourcePreview, so that I can use later.
Final edit here:
I solved the picture issue, but I forgot to edit here. The thing is, all the examples using camera2 to take a picture are really complicated (rightly so, it's a better API than camera and has a lot of options), but it can be really simplified to what I did here:
mCameraSource.takePicture(null, bytes -> {
    Bitmap bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
    if (bitmap != null) {
        Matrix matrix = new Matrix();
        matrix.postRotate(180);
        matrix.postScale(1, -1);
        rotateBmp = Bitmap.createBitmap(bitmap, 0, 0, bitmap.getWidth(),
                bitmap.getHeight(), matrix, false);
        saveBmp2SD(STORAGE_PATH, rotateBmp);
        rotateBmp.recycle();
        bitmap.recycle();
    }
});
That's all I needed to take a picture and save it to a location I specified. Don't mind the recycling here, it's not right; I'm working on it.
It looks like that bit of math is calculating the physical dimensions of the image sensor via the angle-of-view equation, angle = 2 * arctan(sensorSize / (2 * focalLength)), rearranged as sensorSize = 2 * focalLength * tan(angle / 2).
The camera2 API has the sensor dimensions as part of the camera characteristics directly: SENSOR_INFO_PHYSICAL_SIZE.
In fact, if you want to get the field of view in camera2, you have to use the same equation in the other direction, since FOVs are not part of camera characteristics.
Beyond that, it looks like the example you linked just uses the old camera API to fetch that FOV information, and then closes the camera and uses the Vision API to actually drive the camera. So you'd have to look at the vision API docs to see how you can give it camera input instead of having it drive everything. Or you could use the camera API's built-in face detector, which on many devices gives you eye locations as well.
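As a rough sketch of that "other direction" with camera2 (manager and cameraId are placeholders, and this assumes the lens reports a single focal length):

// Derive horizontal/vertical field of view from camera2 characteristics.
try {
    CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
    SizeF sensor = chars.get(CameraCharacteristics.SENSOR_INFO_PHYSICAL_SIZE);        // mm
    float f = chars.get(CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS)[0];  // mm
    double fovX = 2 * Math.toDegrees(Math.atan(sensor.getWidth()  / (2 * f)));
    double fovY = 2 * Math.toDegrees(Math.atan(sensor.getHeight() / (2 * f)));
    Log.d(TAG, "FOV " + fovX + " x " + fovY + " degrees");
} catch (CameraAccessException e) {
    Log.e(TAG, "Failed to read camera characteristics", e);
}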
I am making a custom camera app, and I am using the Android Camera API for that. I know this API is deprecated now and it is recommended to use the Camera2 API.
But I only need to preview the camera with some zoom.
Below is the code for setting the zoom
Camera.Parameters parameters = camera.getParameters();
parameters.setZoom(30);
parameters.setPreviewFpsRange(
    previewFpsRange[Camera.Parameters.PREVIEW_FPS_MIN_INDEX],
    previewFpsRange[Camera.Parameters.PREVIEW_FPS_MAX_INDEX]);
parameters.setPreviewFormat(IMAGE_FORMAT);
camera.setParameters(parameters);
Now, the problem is that the camera zoom is not equal across devices: some devices zoom in a reasonable amount, whereas others zoom in far too much.
I am not able to find any explanation over the internet regarding the same.
I need the same zoom level across all the Android devices irrespective of the camera quality and MP.
The legal value for Camera.Parameters.setZoom() goes from 0 to Camera.Parameters.getMaxZoom(), as you can read in the documentation of setZoom().
What you have to do is normalize the zoom factor across all devices, which can be done with the following method:
private void setZoom(float zoom) { // zoom is expected to be in the range 0..1
    try {
        if (isCameraOpened() && mCameraParameters.isZoomSupported()) {
            int maxZoom = mCameraParameters.getMaxZoom();
            int scaledValue = (int) (zoom * maxZoom);
            mCameraParameters.setZoom(scaledValue);
            mZoom = zoom;
            mCamera.setParameters(mCameraParameters);
        }
    } catch (Exception | Error throwable) {
        Log.e(TAG, Objects.requireNonNull(throwable.getMessage()));
    }
}
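For example, calling it with a fraction then gives the same relative zoom everywhere, independent of each device's getMaxZoom():

setZoom(0.3f); // roughly one third of the device's maximum zoom on every device

Note that this equalizes the relative zoom, not the exact field of view; devices with different lenses may still frame the scene somewhat differently at "one third of max".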
I'm an experienced native iOS developer making my first foray into Android through Unity. I'm trying to set up a custom shader, but I'm having some trouble with the Normal maps. I've got them working perfectly in the Unity simulator on my computer, but when I build to an actual device (Samsung Galaxy S8+), the Normal maps don't work at all.
I'm using Mars as my test case. Here's the model running in the simulator on my computer:
And here's a screenshot from my device, running exactly the same code.
I've done a LOT of research, and apparently using Normal maps on Android with Unity is not an easy thing. There are a lot of people asking about it, but almost every answer I've found has said the trick is to override the texture import settings, and force it to be "Truecolor" which seems to be "RGBA 32 Bit" according to Unity's documentation. This hasn't helped me, though.
Another thread suggested reducing the Aniso Level to zero, and another suggested turning off Mip Maps. I don't know what either of those are, but neither helped.
Here's my shader code, simplified but containing all references to Normal mapping:
void surf (Input IN, inout SurfaceOutputStandard o) {
    half4 d = tex2D (_MainTex, IN.uv_MainTex);
    half4 n = tex2D (_BumpMap, IN.uv_BumpMap);
    o.Albedo = d.rgb;
    o.Normal = UnpackNormal(n);
    o.Metallic = 0.0;
    o.Smoothness = 0.0;
}
I've seen some threads suggesting replacements for the "UnpackNormal()" function in the shader code, indicating that it might not be the thing to do on Android or mobile in general, but none of the suggested replacements have changed anything for better or worse: the normal maps continue to work in the simulator, but not on the device.
I've even tried making my own normal maps programmatically from a grayscale heightmap, to try to circumvent any import settings I may have done wrong. Here's the code I used, and again it works in the simulator but not on the device.
public Texture2D NormalMap(Texture2D source, float strength = 10.0f) {
    float xLeft, xRight, yUp, yDown, xDelta, yDelta;
    Texture2D normalTexture = new Texture2D(source.width, source.height, TextureFormat.RGBA32, false, true);
    for (int y = 0; y < source.height; y++) {
        for (int x = 0; x < source.width; x++) {
            xLeft  = source.GetPixel(x - 1, y).grayscale * strength;
            xRight = source.GetPixel(x + 1, y).grayscale * strength;
            yUp    = source.GetPixel(x, y - 1).grayscale * strength;
            yDown  = source.GetPixel(x, y + 1).grayscale * strength;
            xDelta = ((xLeft - xRight) + 1) * 0.5f;
            yDelta = ((yUp - yDown) + 1) * 0.5f;
            normalTexture.SetPixel(x, y, new Color(xDelta, yDelta, 1.0f, yDelta));
        }
    }
    normalTexture.Apply();
    return normalTexture;
}
Lastly, in the Build Settings, I've got the Platform set to Android and I've tried it using Texture Compression set to both "Don't Override" and "ETC (default)". The former was the original setting and the latter seemed to be Unity's suggestion both by the name and in the documentation.
I'm sure there's just some flag I haven't checked or some switch I haven't flipped, but I can't for the life of me figure out what I'm doing wrong here, or why there would be such a stubborn difference between the simulator and the device.
Can anyone help a Unity newbie out, and show me how these damn Normal maps are supposed to work on Android?
Check under:
Edit -> Project Settings -> Quality
The Android column is usually set to Fastest, which lowers texture quality on the device and can be why the normal maps only show up in the editor.
I've been working on a project and managed to get face detection working, with focus, thanks to SO.
I am now taking pictures, but using the front camera on my Nexus 5 and a preview size of 1280x960, the play services seem to set the picture size to 320x240.
I checked: 1280x960 is supported for both preview and picture sizes.
I tried changing the parameters using reflection (same as for the focus), but nothing changed.
It seems to be necessary to change that before starting the preview...
I've been trying to read and debug the obfuscated code, but I can't get why the library decides to go for this low resolution :-(
The code used is close to what's included in the sample, just added the possibility to take a picture using CameraSource.takePicture(...)
You can find the code in the samples repo
Code to reproduce the issue => here
I changed the camera init to:
mCameraSource = new CameraSource.Builder(context, detector)
        .setRequestedPreviewSize(1280, 960)
        .setFacing(CameraSource.CAMERA_FACING_FRONT)
        .setRequestedFps(30.0f)
        .build();
Added a button and connected a click listener:
findViewById(R.id.snap).setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        mCameraSource.takePicture(null, new CameraSource.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] bytes) {
                Bitmap bmp = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
                Log.d("BITMAP", bmp.getWidth() + "x" + bmp.getHeight());
            }
        });
    }
});
Log output:
BITMAP: 320x240
Thanks for the help!
We have recently open sourced the CameraSource class. See here:
https://github.com/googlesamples/android-vision/blob/master/visionSamples/barcode-reader/app/src/main/java/com/google/android/gms/samples/vision/barcodereader/ui/camera/CameraSource.java
This version includes a fix for the picture size issue. It will automatically select the highest resolution that the camera supports which matches the aspect ratio of the preview.
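If you want to reproduce that selection yourself (for instance while still on the closed-source version), here is a rough sketch of the same strategy with the legacy API. This is not the library's actual code, and camera is assumed to be the underlying android.hardware.Camera instance:

// Pick the largest supported picture size whose aspect ratio matches the preview.
Camera.Parameters params = camera.getParameters();
Camera.Size preview = params.getPreviewSize();
double targetRatio = (double) preview.width / preview.height;
Camera.Size best = null;
for (Camera.Size size : params.getSupportedPictureSizes()) {
    double ratio = (double) size.width / size.height;
    boolean sameRatio = Math.abs(ratio - targetRatio) < 0.01;
    if (sameRatio && (best == null || size.width * size.height > best.width * best.height)) {
        best = size;
    }
}
if (best != null) {
    params.setPictureSize(best.width, best.height);
    camera.setParameters(params);
}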
A week ago I started researching the Android camera API. I successfully initialized the camera and started the preview, and it worked fine. Then I found out I wasn't initializing and releasing the camera properly, so I overhauled the code somewhat, and now I have a problem that didn't occur initially: extremely low FPS. About 0.5, that's 2 seconds per frame. Interestingly enough, I get one frame with a delay, then a second frame immediately after (1-15 ms), followed again by a 2-second delay before the next frame.
This is my camera initialization code:
m_openedCamera = Camera.open(id);
m_surfaceHolder = new SurfaceView(MyApplication.instance().getApplicationContext()).getHolder();
Assert.assertNotNull(m_openedCamera);

// This is required on A500 for some reason
Camera.Parameters params = m_openedCamera.getParameters();
params.setPreviewFormat(ImageFormat.NV21);
params.setPreviewSize(320, 240);

if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.ICE_CREAM_SANDWICH) {
    params.setRecordingHint(true);
    params.setAutoExposureLock(true);
    params.setAutoWhiteBalanceLock(true);
}
m_openedCamera.setParameters(params);

int bitsPerPx = ImageFormat.getBitsPerPixel(ImageFormat.NV21);
int width = params.getPreviewSize().width;
int height = params.getPreviewSize().height;
int size = (int) (width * height * bitsPerPx / 8.0);
m_openedCamera.addCallbackBuffer(new byte[size]);
m_openedCamera.addCallbackBuffer(new byte[size]);
m_openedCamera.addCallbackBuffer(new byte[size]);
m_openedCamera.addCallbackBuffer(new byte[size]);

m_openedCamera.setErrorCallback(this);
m_openedCamera.setPreviewDisplay(m_surfaceHolder);
m_openedCameraFacing = facing;
m_openedCamera.setPreviewCallback(this);
m_openedCamera.startPreview();
I have only just added the callback buffers; it hasn't changed anything. In my initial code from a week ago I had no surface view, but removing it now has no effect either.
This occurs on my second, much newer tablet as well, even though the FPS is higher there (8-10) and there's no double frame; the frames are spaced evenly. The FPS used to be at least 20. Light conditions haven't changed between now and then, by the way.
Update: tried opening the camera in a separate thread as described here - no change.
params.setRecordingHint(true); is meant for the case where video recording is about to start.
Please check it again with params.setRecordingHint(false).
Please make your own preview callback and register it, for example:
private final Camera.PreviewCallback mPreviewCallback = new Camera.PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) { /* process the frame here */ }
};
....
m_openedCamera.setPreviewCallback(mPreviewCallback);
Regarding PreviewCallback, you can refer to CameraTest.java in the CTS directory: cts/tests/tests/hardware/src/android/hardware/cts
From your comment below:
"I have successfully inited the camera and started preview, and it worked fine. Then I've found out I wasn't initializing and releasing the camera properly"
I guess you were using the default camera parameters defined in the camera HAL at first. Although it is not clear what "initializing" means, I think it most likely refers to the camera parameter settings.
So, I'd like to suggest you remove the code for params and then test it again.
Or, you can test it one by one:
1. Remove only "params.setPreviewSize(320, 240);"
2. Remove only "params.setPreviewFormat(ImageFormat.NV21);"
3. Remove both 1 and 2.
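As a minimal sketch of that stripped-down test (no parameter tweaking at all, reusing the fields from your question; mPreviewCallback is whatever callback you register):

// Open the camera and start the preview with the HAL's default parameters only.
m_openedCamera = Camera.open(id);
m_surfaceHolder = new SurfaceView(MyApplication.instance().getApplicationContext()).getHolder();
try {
    m_openedCamera.setPreviewDisplay(m_surfaceHolder);
} catch (IOException e) {
    e.printStackTrace();
}
m_openedCamera.setPreviewCallback(mPreviewCallback);
m_openedCamera.startPreview();

If the frame rate comes back with the default parameters, re-add your settings one at a time to find the one that causes the slowdown.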