I have a Samsung S10, which has a video stabilization feature.
Using the default system Camera app I can see the difference between having it enabled and disabled: first, when it's enabled the preview is slightly zoomed in; second, the effect is noticeable during device movement.
I tried to enable stabilization in my own app using the Camera2 API, with FullHD resolution and the rear camera (the same as with the default system app).
I checked that characteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES) returns only CONTROL_VIDEO_STABILIZATION_MODE_OFF, so that mode is not supported.
But characteristics.get(CameraCharacteristics.LENS_INFO_AVAILABLE_OPTICAL_STABILIZATION)
has CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON
So, as I understand it, this is exactly the option to enable (optical) video stabilization, and it should behave the same as in the default system app.
But when I add the following to the capture request for my camera capture session, nothing changes: there is no zoomed preview (as there was with the default system camera app) and no difference during movement, so the video from my app looks the same as it would from the default camera app with video stabilization disabled:
captureRequestBuilder.set(
CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE,
CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON
)
So setting this parameter doesn't change anything.
Why does video stabilization work in the default system camera app but not in my own app using the Camera2 API?
There are two types of stabilization that camera devices can support on Android:
Video stabilization (Electronic Image Stabilization / EIS): This is done as a post-processing step after image capture, by analyzing camera motion between frames and compensating for it by shifting the image a bit. That's why there's a zoom-in effect, to give some room for that shift/warp. This stabilizes video over time, making consecutive image frames stable. The controls for this are accessed via the CONTROL_VIDEO_STABILIZATION_MODE setting, as you've discovered.
Optical image stabilization (OIS): This is a set of high-speed gyros and magnets around the camera lens, which rapidly shifts the lens (or sometimes the sensor) as the camera moves to stabilize the image. The amount of motion is limited, but it's very rapid, so it stabilizes images during a single exposure, not across multiple frames. So it's generally only useful for snapshots, not video. This is accessed via LENS_OPTICAL_STABILIZATION_MODE.
Unfortunately, many Android manufacturers do not make their EIS implementations available to applications outside of their default camera app. That's because making a good EIS implementation is complicated, and the manufacturers want to limit it to their own app's recording mode (a single, fixed target to optimize for). For example, EIS for recorded video often applies a 1-second delay so that it can adjust image transforms based on future frames as well as past ones, which is hard to do for real-time apps. Some manufacturers expose simpler algorithms, or otherwise manage to make EIS work for everyone, but for others the device doesn't list support for EIS even though the built-in app uses it.
Turning on OIS probably works fine - you'd only see an effect on long-exposure images, which will be blurry from hand shake when OIS is off but sharp when OIS is on. Since it's a self-contained hardware unit, it's easy for manufacturers to make the on-switch available to everyone.
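For completeness, here is a minimal sketch (assuming you already have the CameraCharacteristics and the CaptureRequest.Builder from your session setup) that checks what each key advertises and enables whichever stabilization mode is actually listed; the keys and values are the standard Camera2 ones, but whether enabling them has any visible effect is up to the device:
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.CaptureRequest

// Sketch: enable EIS if the device advertises it, otherwise fall back to OIS.
fun applyStabilization(characteristics: CameraCharacteristics, builder: CaptureRequest.Builder) {
    val eisModes = characteristics.get(
        CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES
    ) ?: intArrayOf()
    val oisModes = characteristics.get(
        CameraCharacteristics.LENS_INFO_AVAILABLE_OPTICAL_STABILIZATION
    ) ?: intArrayOf()

    if (CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON in eisModes) {
        // Electronic stabilization: stabilizes across frames, usually crops/zooms the output.
        builder.set(
            CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE,
            CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON
        )
    }
    if (CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON in oisModes) {
        // Optical stabilization: reduces blur within a single exposure, no crop.
        builder.set(
            CaptureRequest.LENS_OPTICAL_STABILIZATION_MODE,
            CameraMetadata.LENS_OPTICAL_STABILIZATION_MODE_ON
        )
    }
}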
So I have several Samsung devices (up to the S22 Ultra), and on them it is very easy to access the ultra-wide camera, because CameraManager.cameraIdList returns 4 cameras, including the regular back camera and the ultra-wide camera.
But many other devices (Xiaomi, Vivo, and many others) return only two cameras: back and front.
Some users of my apps have said that they are able to use the ultra-wide camera with apps like mcpro24fps and gcam; for example, one user with a Xiaomi POCO X3 (Android 11).
How can such apps access all the cameras?
Also, the Camera2 API usually reports that video stabilization is not supported (the manufacturer doesn't expose it through this API):
val characteristics = getCameraCharacteristics(context, cameraIdx)
val modes =
characteristics.get(CameraCharacteristics.CONTROL_AVAILABLE_VIDEO_STABILIZATION_MODES)
?: intArrayOf()
val supported = CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON in modes
supported is usually false, and even on the devices where it returns true, it seems it still doesn't really do any video stabilization: there is no crop of the frames and basically no effect, based on responses from users of my app.
So the following code doesn't change anything even when the Camera2 API reports that video stabilization is supported:
if (isVideoStabilizationSupported()) {
    captureRequestBuilder.set(
        CaptureRequest.CONTROL_VIDEO_STABILIZATION_MODE,
        CameraMetadata.CONTROL_VIDEO_STABILIZATION_MODE_ON
    )
}
But again, mcpro24fps and gcam support this as well. Maybe it's a custom solution, but I don't really understand how you could implement something custom on top of the Camera2 API without affecting performance, because it would have to be implemented at a low level.
Update: maybe such apps can access the ultra-wide camera by using the new zoom ratio capture parameter:
captureRequestBuilder.set(CaptureRequest.CONTROL_ZOOM_RATIO, 0.6f)
It works for my Samsung devices with a 0.6...10 zoom ratio range, so I can switch between lenses without changing the camera id.
The long-term goal for Android's camera APIs is that multi-camera clusters (such as a combination of ultrawide/wide/tele cameras) can be used by applications without having to specially code for it.
That's done via the logical multi-camera APIs. When implemented, there's one logical camera composed of two or more physical cameras. What you see in the camera ID list is the logical camera, and you can also get the list of the physical cameras from CameraCharacteristics#getPhysicalCameraIds(). With this arrangement, you can see extended zoom ranges such as the one you mention (0.6...10), and the camera implementation will automatically switch to the ultrawide or telephoto camera when you zoom out or in. That's subject to various other conditions; for example, most telephoto lenses can't focus very close, so if you zoom in while focused on a nearby object, the camera will likely stay with the default wide camera; similarly, tele cameras are often worse in low light, so digital zoom may give better quality than optical zoom plus more amplification.
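As a small sketch (assuming API level 30+ and the characteristics/captureRequestBuilder objects you already have), you can query the zoom-ratio range the logical camera exposes and set a ratio below 1.0 to let it switch to the ultrawide camera:
// Sketch: query the zoom range of the logical camera and zoom out if it allows < 1.0.
val zoomRange = characteristics.get(CameraCharacteristics.CONTROL_ZOOM_RATIO_RANGE)
if (zoomRange != null && zoomRange.lower < 1.0f) {
    // A ratio below 1.0 asks the logical camera to switch to its ultrawide physical camera.
    captureRequestBuilder.set(CaptureRequest.CONTROL_ZOOM_RATIO, zoomRange.lower)
}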
If you have a particular reason to use the underlying uw/wide/tele camera, you can include them in the stream configuration via setPhysicalCameraId() and use them directly; that lets you force which camera is streamed from if you want to provide a 'use telephoto' button in your UI, instead of letting the logical camera try to use its best judgement to select the active camera.
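A hedged sketch of that direct approach (API level 28+; the cameraDevice, previewSurface, executor, and stateCallback here are assumed to already exist in your code) might look like this:
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.params.OutputConfiguration
import android.hardware.camera2.params.SessionConfiguration
import android.view.Surface
import java.util.concurrent.Executor

// Sketch: list the physical cameras behind a logical camera and stream from one of them.
fun openPhysicalCameraStream(
    characteristics: CameraCharacteristics,
    cameraDevice: CameraDevice,
    previewSurface: Surface,
    executor: Executor,
    stateCallback: CameraCaptureSession.StateCallback
) {
    val physicalIds = characteristics.physicalCameraIds // empty if not a logical camera
    val chosenId = physicalIds.firstOrNull() ?: return   // in practice, pick the specific ID you want (e.g. the ultrawide)

    val outputConfig = OutputConfiguration(previewSurface).apply {
        setPhysicalCameraId(chosenId) // force streaming from this physical camera
    }
    val sessionConfig = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        listOf(outputConfig),
        executor,
        stateCallback
    )
    cameraDevice.createCaptureSession(sessionConfig)
}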
Unfortunately, not all devices have migrated to the logical camera API; those devices might be listing the multi-camera clusters as individual camera IDs as you've seen, or may just hide some cameras from apps entirely. The main problem with this is that it requires an app to do extra work to just zoom out/in with best quality, and the variety of implementations makes it hard to code up in an app so that it works on all devices.
In my app I'm trying to use ArCore as sort of a "camera assistant" in a custom camera view.
To be clear - I want to display images to the user in their camera view and have them capture images that don't contain the AR models.
From what I understand, in order to capture an image with ArCore I'll have to use the Camera2 API which is enabled by configuring the session to use the "shared Camera".
However, I can't seem to configure the camera to use any high-end resolutions (I'm using a Pixel 3, so I should be able to go as high as 12MP).
In the "shared camera example", they toggle between Camera2 and ArCore (a shame there's no API for CameraX) and it has several problems:
In the ArCore mode the image is blurry (I assume that's because the depth sensor is disabled as stated in their documentation)
In the Camera2 mode I can't enhance the resolution at all.
I can't use the Camera2 API to capture an image while displaying models from ArCore.
Is this requirement at all possible at the moment?
I have not yet worked with the shared camera in ARCore, but I can say a few things regarding the main point of your question.
In ARCore you can configure both CPU image size and GPU image size. You can do that by checking all available camera configurations (available through Session.getSupportedCameraConfigs(CameraConfigFilter cameraConfigFilter)) and selecting your preferred one by passing it back to the ARCore Session. On each CameraConfig you can check which CPU image size and GPU texture size you will get.
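As a rough sketch (assuming a recent ARCore SDK and that you simply want the largest CPU image; session is your existing ARCore Session, and the config change must happen while the session is paused):
import com.google.ar.core.CameraConfig
import com.google.ar.core.CameraConfigFilter
import com.google.ar.core.Session

// Sketch: pick the CameraConfig with the largest CPU image size and apply it.
fun selectLargestCpuImageConfig(session: Session) {
    val configs = session.getSupportedCameraConfigs(CameraConfigFilter(session))
    val best: CameraConfig? = configs.maxByOrNull { cfg ->
        cfg.imageSize.width * cfg.imageSize.height // CPU image size; the GPU size is cfg.textureSize
    }
    if (best != null) {
        session.cameraConfig = best
    }
}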
Probably you are currently using (maybe by default?) a CameraConfig with the lowest CPU image size, 640x480 pixels if I remember correctly, so yes, it definitely looks blurry when rendered (but this has nothing to do with the depth sensor).
Sounds like you could just select a higher CPU image and you're good to go... but unfortunately that's not the case because that configuration applies to every frame. Getting higher resolution CPU images will result in much lower performance. When I tested this I got about 3-4 frames per second on my test device, definitely not ideal.
So now what? I think you have 2 options:
Pause the ARCore session, switch to a higher CPU image for 1 frame, get the image and switch back to the "normal" configuration.
Probably you are already getting a nice GPU image - maybe not the best, due to the camera preview, but hopefully good enough. Not sure how you are rendering it, but with some OpenGL skills you can copy that texture. Not directly, of course, because of the whole GL_TEXTURE_EXTERNAL_OES thing... but rendering it onto another framebuffer and then reading back the texture attached to it can work (see the sketch after this list). Of course, you might need to deal with texture coordinates yourself (full image vs visible area), but that's another topic.
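A minimal sketch of that framebuffer round-trip, assuming oesTextureId is the camera texture and drawCameraQuad() is whatever routine you already use to draw it with a GL_TEXTURE_EXTERNAL_OES shader (both names are hypothetical placeholders):
import android.opengl.GLES20
import java.nio.ByteBuffer

// Sketch: render the external camera texture into an offscreen framebuffer,
// then read the RGBA pixels back to the CPU. Must run on the GL thread.
fun readCameraFrame(
    oesTextureId: Int,
    width: Int,
    height: Int,
    drawCameraQuad: (Int) -> Unit
): ByteBuffer {
    val fbo = IntArray(1)
    val tex = IntArray(1)
    GLES20.glGenFramebuffers(1, fbo, 0)
    GLES20.glGenTextures(1, tex, 0)

    // Allocate a plain RGBA texture to receive the copy.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0])
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR)

    // Attach it to an offscreen framebuffer and draw the external texture into it.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0])
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, tex[0], 0)
    GLES20.glViewport(0, 0, width, height)
    drawCameraQuad(oesTextureId)

    // Read the rendered pixels back.
    val pixels = ByteBuffer.allocateDirect(width * height * 4)
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels)

    // Restore the default framebuffer and clean up.
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0)
    GLES20.glDeleteTextures(1, tex, 0)
    GLES20.glDeleteFramebuffers(1, fbo, 0)
    return pixels
}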
Regarding CameraX, note that it wraps the Camera2 API in order to provide camera use cases so that app developers don't have to worry about the camera lifecycle. As I understand it, it would not be suitable for ARCore to use CameraX, as I imagine they need full control of the camera.
I hope that helps a bit!
We are developing our own Android-based hardware and we wish to use Vuforia (developed via Unity3D) for certain applications. However, we are having problems making Vuforia work well with our current camera orientation settings.
On our hardware, when the camera is placed horizontally, everything works fine; that is, when the camera is parallel to the placement of the display. However, we need to place the camera vertically, in other words with a 90 degree difference to the placement of the display. These are all hardware settings. Our kernel is programmed for this arrangement, and every other program that uses the camera works compatibly with everything, including our IMU sensors. However, apps developed with Vuforia behave completely oddly when the camera is placed vertically.
We assume the problem is related to Vuforia's algorithms for processing raw camera data, but we are not sure. Moreover, we do not know how to fix the situation. Further details:
-When "Enable Video Background" is on, the projected image is distorted and no video feed is available. The AR projection appears on a black background with distorted dimensions.
-When "Enable Video Background" is on and the device is rotated, the black background is replaced by flickering solid colors.
-When "Enable Video Background" is off, the AR projection has normal dimensions (no distortion) however it is tracked with wrong axis settings. For example, when the target moves left in real world, the projection moves up.
-When "Enable Video Background" is off and the device is rotated, the AR projection is larger compared to its appearance when the device is in it's default state.
I will be glad to provide any more information you need.
Thank you very much, have a nice day.
PS: We have found out that applications that use the camera as their main purpose (camera apps, barcode scanners, etc.) work fine, while apps for which camera usage is an extra feature (such as some games) have the same problem as Vuforia. This makes me think that apps that access the camera directly work fine, whereas those that use the Android API and classes fail for some reason.
First, understand that every platform deals with cameras differently, and beyond that, different Android phone manufacturers deal with them differently as well. In my testing WITHOUT Vuforia, I had to transform the plane I cast the video feed onto by 0,-90,90 for Android/iPhone and -270,-90,90 for the Windows Surface tablet. Beyond this, the iPhone rear camera was mirrored, and the Android front camera was mirrored, as was the Surface front camera. That is easy to account for, but an annoying issue is that the Google Pixel and Samsung front cameras were mirrored across the y axis (as were ALL iOS devices on the back camera), while the Nexus 6P was mirrored across the x axis. What I am getting at is that there are a LOT of devices to account for on Android, so try more than just that one device. Vuforia so far has dealt with my Pixel and 4 of my iOS devices just fine.
As for how to fix your problem:
Go into your Player Settings in Unity and look at the orientation. There are a few options here; my application only uses portrait, so I force portrait and it seems to work fine (none of the problems I had to account for in the scenario above). Vuforia previously did NOT support auto-rotation, so make sure you have the latest version, since it sounds like that is what you need. If auto-rotate is set and it is still not working right, you may have to account for that specific device (don't do this for all devices until after you test those devices). To account for a device, use an if statement (or construct a case statement if you have this problem with multiple devices) and then reflect or translate as needed. Cross-platform development systems (like Unity) don't always get everything perfect, since there is basically no standard. In these cases you have to account for the differences directly, by creating a method with a case statement inside it, so you can cleanly and modularly handle all the necessary devices. It is a pain, but it beats developing for all devices separately.
One more thing: make sure you check out the Vuforia configuration file, as it has some settings such as camera mirror and direction. These seem to be public settings, so you should also be able to script them in your case statement in the event you need to use "Flip Horizontally" for one phone but not another.
I am building an Android app where I need to capture the camera preview and display it on 2 views on the display (one view is flipped and rotated as in the picture example dual view).
In addition I need to stream the captured video preview over Direct WiFi to another device.
These 3 activities need to happen in real time, and video latency is critical: it should be as small as the Android platform is capable of on each device. (On my Samsung A8, the max frame rate for FullHD capture is 30fps, so with the 4 buffers Android uses in its camera-to-screen pipeline, the latency between the source image and the display is around 120ms, which is fine for this device.)
I tried using normal TextureView with Camera and Camera2 API but did not find a way to display 2 different instances of the camera preview.
I tried using OpenGL, which lets me display 2 instances of the view, but I don't know how to capture a byte stream for sending over WiFi. I have already implemented the Direct WiFi link, but I am not sure how to produce a streaming source with OpenGL.
My question is with which Android technologies can I achieve these multiple requirements all at once?
Gary
For my project I need to implement the HDR feature on a device running Android Jelly Bean. From the code I see that when HDR (High Dynamic Range) is selected, the application sends SCENE_MODE_HDR to the HAL layer. I am a developer on the Camera HAL layer. What am I supposed to do when I get scene mode = SCENE_MODE_HDR? Do I need to ask the driver for 3 images with different exposure compensation values, with the application taking care of stitching the images into the HDR image?
Or, like panorama mode, can the Android application and framework layers take care of HDR by themselves?
The scene mode SCENE_MODE_HDR seems to have been introduced in Android Jelly Bean 4.2, and as far as I know, HDR here means hardware HDR, which is meant to be implemented by the camera vendor.
I think the driver needs to handle this: not only capturing 3 images with different exposure compensation values, but also doing the image composition and tone mapping.
So from the application's point of view, the camera application just sets the scene mode to SCENE_MODE_HDR and takes a picture; the HDR image is then delivered to the onPictureTaken() callback.
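A minimal sketch from the app side with the (Jelly Bean era) android.hardware.Camera API, assuming the device reports SCENE_MODE_HDR as supported and the preview is already running:
import android.hardware.Camera

// Sketch: request HDR from the legacy Camera API and capture; the HAL/vendor does the
// multi-exposure capture, composition and tone mapping behind the scenes.
@Suppress("DEPRECATION")
fun captureHdr(camera: Camera, onJpeg: (ByteArray) -> Unit) {
    val params = camera.parameters
    if (Camera.Parameters.SCENE_MODE_HDR in (params.supportedSceneModes ?: emptyList())) {
        params.sceneMode = Camera.Parameters.SCENE_MODE_HDR
        camera.parameters = params
    }
    camera.takePicture(null, null, Camera.PictureCallback { data, _ ->
        onJpeg(data) // a single, already-composed HDR JPEG arrives here
    })
}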