I'm building a camera app designed to work exclusively on the Pixel 3 XL. I'm using the camera2 API and would like to take a picture with the front-facing camera with HDR and/or Night Mode enabled. This is the code snippet where I set up my capture request:
final CaptureRequest.Builder captureBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureBuilder.addTarget(mStillImageReader.getSurface());
captureBuilder.set(CaptureRequest.CONTROL_SCENE_MODE, CameraMetadata.CONTROL_SCENE_MODE_HDR);
captureBuilder.set(CaptureRequest.CONTROL_AWB_MODE, CameraMetadata.CONTROL_AWB_MODE_AUTO);
captureBuilder.set(CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_ON);
captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CameraMetadata.CONTROL_AF_MODE_AUTO);
captureBuilder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_USE_SCENE_MODE); // required for CONTROL_SCENE_MODE to take effect
captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, 0);
...
mCaptureSession.capture(captureBuilder.build(), mCaptureCallback, mBackgroundHandler);
I was hoping to get something close to what Android's native camera app produces when it's set to shoot in Night Mode or HDR+. Does anybody know if I need to do more than just set flags in the capture request to get the desired behavior?
In Android, image processing algorithms can be implemented at the HAL level as well as at the application level. The Android framework sits between the HAL and the application and provides an interface to interact with the camera. When you call the set method:
captureBuilder.set(
CaptureRequest.CONTROL_SCENE_MODE,
CameraMetadata.CONTROL_SCENE_MODE_HDR);
you request the HAL to perform HDR before returning the image to the application. Note that the HAL is implemented by OEMs or vendors (you can consider Pixel to be an OEM), and it's up to the implementer to support the different CONTROL_SCENE_MODE values. You can and should query the available scene modes using:
// Get the camera characteristics for the current camera
CameraCharacteristics cameraCharacteristics = getCameraCharacteristics(cameraFacing);
int[] modes = cameraCharacteristics.get(
        CameraCharacteristics.CONTROL_AVAILABLE_SCENE_MODES);
// Check whether the modes array contains the scene mode you wish to apply
boolean hdrSupported = false;
for (int mode : (modes != null ? modes : new int[0])) {
    if (mode == CameraMetadata.CONTROL_SCENE_MODE_HDR) {
        hdrSupported = true;
    }
}
If you run this on a Pixel 3 XL, you may find that HDR is not listed as supported. Correct me if I am wrong.
If you do not wish to write your own algorithms for HDR and Night Mode, you'll have to rely on the available scene modes, and for that the best approach is to query and validate. Alternatively, you can request YUV or RAW images from camera2 and run them through HDR or Night Mode algorithms at the application level, as in the sketch below.
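A minimal sketch of that application-level route, assuming existing width, height, and backgroundHandler values; processHdrBurst() is a hypothetical placeholder for your own HDR / night-mode pipeline:

val yuvReader = ImageReader.newInstance(
    width, height, ImageFormat.YUV_420_888, /* maxImages = */ 3)
yuvReader.setOnImageAvailableListener({ reader ->
    reader.acquireNextImage()?.use { image ->
        processHdrBurst(image) // your own algorithm runs here (hypothetical)
    }
}, backgroundHandler)
// Then add yuvReader.surface as a target of your capture requests.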
TL;DR:
The Google Pixel's native camera app is most likely doing more image processing on top of the image returned by camera2, so its output cannot be replicated by leveraging camera2 alone, no matter what configuration you set.
HDR+ / HDR does not seem to be supported on Google's own Pixel phones, even with CameraX (and of course not with Camera2).
Maybe it's worth trying CameraX with a 3rd party extension, like this.
The problem is, CameraX is in a (very) alpha state as of now, and its fate is not clear either.
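For reference, here is roughly what enabling the HDR extension looks like with the newer, stable ExtensionsManager API (the alpha-era extender API differed); cameraProvider, preview, imageCapture, context, and lifecycleOwner are assumed to already exist:

// Sketch: bind a preview/capture pair with the HDR extension, if available.
// Note: getInstanceAsync returns a future; .get() blocks for brevity here.
val extensionsManager =
    ExtensionsManager.getInstanceAsync(context, cameraProvider).get()
val baseSelector = CameraSelector.DEFAULT_BACK_CAMERA
if (extensionsManager.isExtensionAvailable(baseSelector, ExtensionMode.HDR)) {
    val hdrSelector = extensionsManager
        .getExtensionEnabledCameraSelector(baseSelector, ExtensionMode.HDR)
    cameraProvider.bindToLifecycle(lifecycleOwner, hdrSelector, preview, imageCapture)
}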
Related
I have various Samsung devices (up to the S22 Ultra), and on them it is very easy to access the ultra-wide lens camera, because CameraManager.cameraIdList returns 4 cameras, which include both the general back camera and the ultra-wide camera.
But many other devices (Xiaomi, Vivo and many others) return only the two general cameras: back and front.
Some users of my apps have said that they are able to use the ultra-wide camera with apps like mcpro24fps and gcam; one user confirmed this on a Xiaomi POCO X3 (Android 11).
How can such apps access all the cameras?
Also, the Camera2 API usually reports that video stabilization is not supported (the manufacturer doesn't expose it through this API):
val characteristics = getCameraCharacteristics(context, cameraIdx)
val modes =
characteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES)
?: intArrayOf()
val supported = CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON in modes
supported is usually false, and even on the devices where it returns true, it seems no video stabilization actually happens: there is no crop applied to the frames and basically no effect, judging from my app users' responses.
So the following code doesn't change anything, even when the camera2 API reports that video stabilization is supported:
if (isVideoStabilizationSupported()) {
    captureRequestBuilder.set(
        CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE,
        CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON
    )
}
But again, mcpro24fps and gcam support this as well. Maybe it's a custom solution, but I don't really understand how you could implement something custom on top of the Camera2 API without affecting performance, because it would have to be implemented at a low level.
Update: maybe such apps access the ultra-wide lens camera by using the new zoom ratio parameter:
captureRequestBuilder.set(CaptureRequest.CONTROL_ZOOM_RATIO, 0.6f)
This works on my Samsung devices, with a zoom ratio range of 0.6...10, so I can switch between lenses without changing the camera ID.
The long-term goal for Android's camera APIs is that multi-camera clusters (such as a combination of ultrawide/wide/tele cameras) can be used by applications without having to specially code for it.
That's done via the logical multi-camera APIs. When implemented, that means there'll be one logical camera, composed of two or more physical cameras. What you see in the camera ID list is the logical camera, and you can also get the list of the physical cameras from CameraCharacteristics#getPhysicalCameraIds(). With this arrangement, you can see extended zoom ranges such as what you mention (0.6...10), and the camera implementation will automatically switch to the ultrawide or telephoto cameras when you zoom out or zoom in. That's subject to various other conditions; for example, most telephoto lenses can't focus very close, so if you zoom in while focused on a nearby object, the camera will likely stay with the default wide camera; similarly, tele cameras are often worse in low light, so digital zoom may result in better quality than optical zoom plus more amplification.
If you have a particular reason to use the underlying uw/wide/tele camera, you can include them in the stream configuration via setPhysicalCameraId() and use them directly; that lets you force which camera is streamed from if you want to provide a 'use telephoto' button in your UI, instead of letting the logical camera try to use its best judgement to select the active camera.
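A rough sketch of both steps, assuming an existing cameraManager, cameraDevice, surface, executor, and stateCallback:

// List the physical cameras behind a logical camera and pin the stream
// to one of them via OutputConfiguration.setPhysicalCameraId().
val chars = cameraManager.getCameraCharacteristics(logicalCameraId)
val isLogical = chars.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)
    ?.contains(CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA) == true
if (isLogical) {
    val physicalIds = chars.physicalCameraIds // e.g. ultrawide / wide / tele
    val config = OutputConfiguration(surface).apply {
        setPhysicalCameraId(physicalIds.first()) // force this physical camera
    }
    cameraDevice.createCaptureSession(SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR, listOf(config), executor, stateCallback))
}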
Unfortunately, not all devices have migrated to the logical camera API; those devices might be listing the multi-camera clusters as individual camera IDs as you've seen, or may just hide some cameras from apps entirely. The main problem with this is that it requires an app to do extra work to just zoom out/in with best quality, and the variety of implementations makes it hard to code up in an app so that it works on all devices.
I'm working on an Android app that streams video with another party, and I'm currently looking at brightening the image in low-light scenes.
I noticed that the Google Duo app's low light mode does this really well.
Is it achievable using just the Camera2 API? The reason I am not using the CameraX API is that my app also uses the Vonage (formerly TokBox) SDK for two-way video calls, and its SDK sample code currently uses the Camera2 API and hasn't migrated to CameraX yet.
What I tried is setting CONTROL_AE_MODE (auto-exposure) to CONTROL_AE_MODE_ON. This helps a bit, but the image quality is nowhere near as good as the Google Duo app's. I'm looking at decreasing the camera FPS next, but what else can I do to brighten the image?
If you set CaptureRequest.CONTROL_AE_MODE to CaptureRequest.CONTROL_AE_MODE_OFF, then you can control the ISO (CaptureRequest.SENSOR_SENSITIVITY) and exposure time (CaptureRequest.SENSOR_EXPOSURE_TIME) manually.
You can read the available range of sensor sensitivity (ISO) like this, since it varies from device to device and camera to camera: val sensorSensitivityRange = cameraCharacteristics?.get(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE) as Range<Int>?
So in low-light mode you could control the brightness yourself, and in normal mode, you let the camera do it automatically.
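A minimal sketch of such a manual low-light mode, assuming an existing captureRequestBuilder and cameraCharacteristics:

val isoRange = cameraCharacteristics.get(
    CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE)
val exposureRange = cameraCharacteristics.get(
    CameraCharacteristics.SENSOR_INFO_EXPOSURE_TIME_RANGE)
if (isoRange != null && exposureRange != null) {
    captureRequestBuilder.set(
        CaptureRequest.CONTROL_AE_MODE, CameraMetadata.CONTROL_AE_MODE_OFF)
    // Push ISO toward the top of the supported range...
    captureRequestBuilder.set(CaptureRequest.SENSOR_SENSITIVITY, isoRange.upper)
    // ...and lengthen the exposure, e.g. to 1/10 s (the value is in
    // nanoseconds), clamped to what the sensor actually supports.
    captureRequestBuilder.set(
        CaptureRequest.SENSOR_EXPOSURE_TIME, exposureRange.clamp(100_000_000L))
}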
More information here:
https://developer.android.com/reference/android/hardware/camera2/CaptureRequest
I have a Samsung S10, which has a video stabilization feature.
Using the default system camera app I can see the difference when it's enabled versus disabled: first, when enabled, the preview is slightly zoomed in; second, the effect is noticeable during device movements.
I tried to enable stabilization in my own app using the Camera2 API, also with FullHD and the rear camera (the same as the default system app).
I found that characteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES) returns only CONTROL_VIDEO_STABILIZATION_MODE_OFF, so this is not supported.
But characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_OPTICAL_STABILIZATION) contains CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON.
So, as I understand it, this is exactly the option to enable (optical) video stabilization, and it should do the same as the default system app.
But when I set the following in the camera capture session configuration, it doesn't change anything: no zoomed preview (as there was with the default system camera app) and no difference during movement. The video from my app is the same as the default camera app would produce with video stabilization disabled:
captureRequestBuilder.set(
CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE,
CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON
)
So setting this parameter doesn't change anything.
Why does video stabilization work in the default system camera app but not in my own app using the Camera2 API?
There are two types of stabilization that camera devices can support on Android:
Video stabilization (Electronic Image Stabilization / EIS): This is done as a post-processing step after image capture, by analyzing camera motion between frames and compensating for it by shifting the image a bit. That's why there's a zoom-in effect, to give some room for that shift/warp. This stabilizes video over time, making consecutive image frames stable. The controls for this are accessed via the CONTROL_VIDEO_STABILIZATION_MODE setting, as you've discovered.
Optical image stabilization (OIS): This is a set of high-speed gyros and magnets around the camera lens, which rapidly shifts the lens (or sometimes the sensor) as the camera moves to stabilize the image. The amount of motion is limited, but it's very rapid, so it stabilizes images during a single exposure, not across multiple frames. So it's generally only useful for snapshots, not video. This is accessed via LENS_OPTICAL_STABILIZATION_MODE.
Unfortunately, many Android manufacturers do not make their EIS implementations available to applications outside of their default camera app. That's because making a good EIS implementation is complicated, and the manufacturers want to limit it to only working with their own app's recording mode (which is a single target to fix). For example, EIS for recorded video often applies a 1-second delay so that it can adjust image transforms for future frames, in addition to past ones, which is hard to do for real-time apps. Some manufacturers make simpler algorithms visible, or otherwise manage to make EIS work for everyone, but for others, the device doesn't list support for EIS even when the built-in app uses it.
Turning on OIS probably works fine; you'd only see an effect on long-exposure images, which will be blurry due to hand shake when OIS is off but sharp when OIS is on. Since it's a self-contained hardware unit, it's easy for manufacturers to make the on-switch available to everyone.
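As a sketch (assuming existing characteristics and captureRequestBuilder objects), an app can prefer EIS when the device advertises it and fall back to OIS otherwise:

val eisModes = characteristics.get(
    CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES) ?: intArrayOf()
val oisModes = characteristics.get(
    CameraCharacteristics.LENS_INFO_AVAILABLE_OPTICAL_STABILIZATION) ?: intArrayOf()
when {
    // EIS: stabilizes across frames (video)
    CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON in eisModes ->
        captureRequestBuilder.set(
            CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE,
            CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON)
    // OIS: stabilizes within a single exposure (stills)
    CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON in oisModes ->
        captureRequestBuilder.set(
            CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE,
            CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON)
}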
I'm trying to get the current preview's shutter speed and ISO settings.
I cannot find a way to do this using CameraX or Camera2. Is this not something that is available?
Failing that, is there a way to get the settings that were used to take a photo?
For Camera2, this information is available in the CaptureResult objects you get for every captured image, via onCaptureCompleted. However, not all devices support listing this information; only devices that list the READ_SENSOR_SETTINGS capability will do this. That includes all devices that list hardware level FULL or better, and may include some devices at the LIMITED level.
Specifically, you want to look at SENSOR_SENSITIVITY for ISO and SENSOR_EXPOSURE_TIME for shutter speed.
If you want the values used for a JPEG capture, look at the CaptureResult that comes from the CaptureRequest you used to request the JPEG.
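A sketch of reading both values from each completed capture (the keys return null on devices without READ_SENSOR_SETTINGS):

val captureCallback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        val iso = result.get(CaptureResult.SENSOR_SENSITIVITY)
        val exposureTimeNs = result.get(CaptureResult.SENSOR_EXPOSURE_TIME)
        Log.d("CameraSettings", "ISO=$iso exposure=${exposureTimeNs}ns")
    }
}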
I was also able to GET (only) the ISO settings using the Jetpack/CameraX API, through Camera2CameraInfo.from and then getCameraCharacteristic. But it seems there is no way to SET them using CameraX.
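For example, a sketch of that read-only route, assuming a bound androidx.camera.core.Camera instance:

val sensitivityRange = Camera2CameraInfo.from(camera.cameraInfo)
    .getCameraCharacteristic(CameraCharacteristics.SENSOR_INFO_SENSITIVITY_RANGE)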
In the latest releases of Jetpack/CameraX it is also possible to set the ISO/shutter speed.
The example below sets the ISO for the preview (written in C#, but it is similar to Java):
var builder = new Preview.Builder();
var ext1 = new Camera2Interop.Extender(builder)
.SetCaptureRequestOption(CaptureRequest.ControlAeMode, (int)ControlAEMode.Off)
.SetCaptureRequestOption(CaptureRequest.SensorSensitivity, 3200);
var preview = builder.SetTargetName("PREVIEW").Build();
preview.SetSurfaceProvider(sfcPreview.SurfaceProvider);
For my project I need to implement an HDR feature on a device running Android Jelly Bean. From the code I see that when HDR (High Dynamic Range) is selected, the application sends SCENE_MODE_HDR to the HAL layer. I am the developer of the camera HAL layer. What am I supposed to do when I get scene mode = SCENE_MODE_HDR? Do I need to ask the driver to give 3 images with different exposure compensation values, with the application taking care of stitching the images to make the HDR image?
Or, like panorama mode, can the Android application and framework layers take care of HDR by themselves?
Scene mode SCENE_MODE_HDR seems to have been introduced in Android Jelly Bean 4.2, and as far as I know, HDR here indicates hardware HDR, which is meant to be implemented by the camera vendor.
I think the driver needs to handle this: not only capturing 3 images with different exposure compensation values, but also doing the image composition and tone mapping.
So from the application's point of view, the camera app just sets the scene mode to SCENE_MODE_HDR and takes a picture; the HDR image is then delivered to the onPictureTaken() callback.
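A sketch of that application-side view with the legacy android.hardware.Camera API (the one current on Jelly Bean):

@Suppress("DEPRECATION")
val camera = android.hardware.Camera.open()
val params = camera.parameters
if (params.supportedSceneModes?.contains(
        android.hardware.Camera.Parameters.SCENE_MODE_HDR) == true) {
    params.sceneMode = android.hardware.Camera.Parameters.SCENE_MODE_HDR
    camera.parameters = params
}
camera.takePicture(null, null) { data, _ ->
    // data holds the HDR JPEG already composed at the HAL level
}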