How does the Android Camera2 API support multi-surface outputs? - android

Say that I set up 3 output surfaces:
SurfaceView at 480p
MediaCodec input surface at 1080p
ImageReader in YUV format at 720p
How does Android manage to generate the different resolutions and requested data formats?
How is the camera sensor driver involved in this?
Or can the camera sensor only output one particular resolution and format, with the Android camera framework doing the rest of the job?
What techniques does the camera framework use for scaling and format conversion?
Is any hardware acceleration involved?
--
I just learned that if the underlying implementation is the legacy camera1 API,
OpenGL ES is used to scale and render to the multiple surfaces.
https://cs.android.com/android/platform/superproject/+/master:frameworks/base/core/java/android/hardware/camera2/legacy/RequestThreadManager.java
Or is this implemented by the OEM's camera HAL?

In general, the ISP (image signal processor) that receives the raw camera data has several scaler units, which can crop and scale each frame to multiple different sizes.
So on Android, the generation of multiple outputs is handled by the camera HAL of the device, typically by heavily leveraging the ISP's scaling capability. Most high-end devices run the image sensor at full resolution (in photo mode at least) and downscale in hardware for preview or other outputs.
The legacy code you point to was for implementing the new camera2 API on top of the deprecated camera1 API, which required doing scaling in the Android framework, where the GPU was leveraged for this since there's no generic access to ISP scaling units outside the camera HAL. That code isn't used for modern devices that have an up-to-date camera HAL.
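For concreteness, here is a minimal sketch of how the three targets from the question could be attached to a single camera2 capture session; the HAL/ISP then produces all three streams from the same sensor frames. It assumes an already-opened cameraDevice, a configured mediaCodec, a SurfaceView whose buffer has already been fixed to a supported 480p size, and a background handler (all placeholder names); the sizes are illustrative and should really come from StreamConfigurationMap.getOutputSizes().

    Surface previewSurface = surfaceView.getHolder().getSurface();   // 480p SurfaceView
    Surface codecSurface = mediaCodec.createInputSurface();          // 1080p encoder input
    ImageReader yuvReader = ImageReader.newInstance(1280, 720,
            ImageFormat.YUV_420_888, /* maxImages= */ 3);            // 720p YUV for CPU access

    List<Surface> targets = Arrays.asList(
            previewSurface, codecSurface, yuvReader.getSurface());

    cameraDevice.createCaptureSession(targets, new CameraCaptureSession.StateCallback() {
        @Override
        public void onConfigured(CameraCaptureSession session) {
            try {
                CaptureRequest.Builder builder =
                        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
                builder.addTarget(previewSurface);
                builder.addTarget(codecSurface);
                builder.addTarget(yuvReader.getSurface());
                // One repeating request feeds all three outputs at their own sizes/formats.
                session.setRepeatingRequest(builder.build(), null, backgroundHandler);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void onConfigureFailed(CameraCaptureSession session) { /* handle error */ }
    }, backgroundHandler);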

Related

How to capture a high res still image (JPEG) during ARCore face augmentation session?

The official "Shared camera access with ARCore" documentation mentions that high-end phones can support the following simultaneous streams:
2x YUV CPU streams, e.g. 640x480 and 1920x1080
1x GPU stream, e.g. 1920x1080
1x occasional high res still image (JPEG), e.g. 12MP
The shared camera sample app does not show how to capture a "high res still image (JPEG), e.g. 12MP". I also saw that it configures the camera preview with a supported resolution queried from ARCore's supported camera configs. However, in my case I have to use the front camera, which only has one supported config: 640 x 480.
My question is: in the face augmentation use case with the front camera, how can I use this shared camera API to capture a high-res still image (JPEG), e.g. 12MP, during an ARCore session?
Thanks a lot.

Is it possible to capture a High-Res image while using ArCore?

In my app I'm trying to use ArCore as sort of a "camera assistant" in a custom camera view.
To be clear - I want to display images for the user in his camera and have him capture images that don't contain the AR models.
From what I understand, in order to capture an image with ArCore I'll have to use the Camera2 API which is enabled by configuring the session to use the "shared Camera".
However, I can't seem to configure the camera to use any high-end resolutions (I'm using a Pixel 3, so I should be able to go as high as 12MP).
In the "shared camera example", they toggle between Camera2 and ArCore (a shame there's no API for CameraX) and it has several problems:
In the ArCore mode the image is blurry (I assume that's because the depth sensor is disabled as stated in their documentation)
In the Camera2 mode I can't enhance the resolution at all.
I can't use the Camera2 API to capture an image while displaying models from ArCore.
Is this requirement at all possible at the moment?
I have not yet worked with the shared camera and ARCore, but I can say a few things regarding the main point of your question.
In ARCore you can configure both CPU image size and GPU image size. You can do that by checking all available camera configurations (available through Session.getSupportedCameraConfigs(CameraConfigFilter cameraConfigFilter)) and selecting your preferred one by passing it back to the ARCore Session. On each CameraConfig you can check which CPU image size and GPU texture size you will get.
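As a rough sketch of that selection (assuming an existing ARCore session; the "pick the largest CPU image" criterion is just an example):

    CameraConfigFilter filter = new CameraConfigFilter(session);
    List<CameraConfig> configs = session.getSupportedCameraConfigs(filter);
    CameraConfig best = configs.get(0);
    for (CameraConfig config : configs) {
        Size cpu = config.getImageSize();     // CPU (YUV) image size
        Size gpu = config.getTextureSize();   // GPU texture size
        if (cpu.getWidth() * cpu.getHeight()
                > best.getImageSize().getWidth() * best.getImageSize().getHeight()) {
            best = config;
        }
    }
    // The config can only be changed while the session is paused.
    session.setCameraConfig(best);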
Probably you are currently using (maybe by default?) a CameraConfig with the lowest CPU image size, 640x480 pixels if I remember correctly, so yes, it definitely looks blurry when rendered (but this has nothing to do with the depth sensor).
Sounds like you could just select a higher CPU image and you're good to go... but unfortunately that's not the case because that configuration applies to every frame. Getting higher resolution CPU images will result in much lower performance. When I tested this I got about 3-4 frames per second on my test device, definitely not ideal.
So now what? I think you have 2 options:
Pause the ARCore session, switch to a higher CPU image for 1 frame, get the image and switch back to the "normal" configuration.
Probably you are already getting a nice GPU image, maybe not the best due to camera Preview, but hopefully good enough? Not sure how you are rendering it, but with some OpenGL skills you can copy that texture. Not directly, of course, because of the whole GL_TEXTURE_EXTERNAL_OES thing... but rendering it onto another framebuffer and then reading the texture attached to it could work. Of course you might need to deal with texture coordinates yourself (full image vs visible area) but that's another topic.
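A rough sketch of that framebuffer copy (assuming an existing EGL context on the current thread, a shader program that samples samplerExternalOES behind the placeholder drawExternalTexture(), and the camera texture id and its width/height as inputs):

    int[] fbo = new int[1];
    int[] rgbaTex = new int[1];
    GLES20.glGenFramebuffers(1, fbo, 0);
    GLES20.glGenTextures(1, rgbaTex, 0);

    // Plain RGBA texture that will receive the copy of the external (OES) camera texture.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, rgbaTex[0]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, rgbaTex[0], 0);

    GLES20.glViewport(0, 0, width, height);
    drawExternalTexture(cameraOesTextureId);  // placeholder: full-screen quad sampling the OES texture

    // Read the copied pixels back to the CPU (or keep rgbaTex for further GL processing).
    ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);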
Regarding CameraX, note that it wraps the Camera2 API in order to provide camera use cases so that app developers don't have to worry about the camera lifecycle. As I understand it, CameraX would not be suitable for ARCore, as I imagine they need full control of the camera.
I hope that helps a bit!

Android camera2: what kind of upsampling/downsampling is done to fit my streams?

I want to get image streams with the least distortion possible (no noise reduction, etc.) without having to deal with RAW outputs.
I'm working with two streams (one when using the deprecated camera), one for the preview and one for processing. I understand the camera2 API, but I am wondering what kind of upsampling/downsampling is used when fitting the sensor output to the surfaces.
More specifically, I'm working on zoomed images, and according to the camera2 documentation concerning cropping and the references:
For non-raw streams, any additional per-stream cropping will be done to maximize the final pixel area of the stream.
The whole concept is easy enough to understand, but it's also mentioned that:
Output streams use this rectangle to produce their output, cropping to a smaller region if necessary to maintain the stream's aspect ratio, then scaling the sensor input to match the output's configured resolution.
But I haven't been able to find any info about this scaling. Which method is used (filter-based, bicubic, edge-directed, etc.)? Is there a way to get this info? And is there a way I can actually choose which one is used?
Concerning the deprecated camera, I'm guessing the zoom is just simpler, in the sense that it's probably equivalent to having SCALER_CROPPING_TYPE_CENTER_ONLY with only a limited set of crop regions corresponding to the exposed zoom ratios. But is the image scaling the same as in camera2? If someone could shed some light I'd be happy.
Real life example
Camera sensor: 5312x2988 (16:9)
I want a 4x zoom so the crop region should be (1992, 1120, 1328, 747)
(btw what happens to odd sizes? for instance with SCALER_CROPPING_TYPE_CENTER_ONLY devices?)
Now I have a surface of size (1920, 1080); the crop area and the stream ratio match, but the 1328x747 region must be transformed to fill the 1920x1080 surface. The nature of this transformation is what I want to know.
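For reference, a minimal sketch of how that 4x crop region could be computed and applied via SCALER_CROP_REGION (assuming a device controlled through that key rather than the newer CONTROL_ZOOM_RATIO; characteristics and builder are placeholder names):

    Rect active = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
    float zoom = 4f;
    int cropW = (int) (active.width() / zoom);   // 5312 / 4 = 1328
    int cropH = (int) (active.height() / zoom);  // 2988 / 4 = 747
    int left = (active.width() - cropW) / 2;     // (5312 - 1328) / 2 = 1992
    int top = (active.height() - cropH) / 2;     // (2988 - 747) / 2 = 1120 (integer division)
    builder.set(CaptureRequest.SCALER_CROP_REGION,
            new Rect(left, top, left + cropW, top + cropH));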
The scaling algorithm used depends on the device; generally for power efficiency and speed, scaling is done in hardware blocks usually at the end of camera image signal processor (ISP) pipeline.
Therefore, you can't generally rely on it being any particular kind of scaling or filtering. Unfortunately, if you want to understand the entire processing pipeline, you have to start with RAW and implement it yourself.
If you're on the same device, the old camera API and the new camera2 API talk to the same hardware abstraction layer and the same hardware scalers, so the scaling output will generally match exactly for the same resolution (with the exception of LEGACY-level devices, where camera2 may need additional GPU-based scaling, which will be bilinear downsampling - but you don't really know when this would apply).

Android Camera2 ImageReader Image Format YUV

I've got an Android application which does motion detection and video recording. It supports both the Camera and Camera2 APIs in order to provide backwards compatibility. I'm using an ImageReader with the Camera2 API in order to do motion detection. I'm currently requesting JPEG format images, which are very slow. I understand that requesting YUV images would be faster, but is it true that the YUV format varies depending on which device is being used? I just wanted to check before I give up on optimizing this.
All devices will support NV21 and YV12 formats for the old camera API (since API 12), and for camera2, all devices will support YUV_420_888.
YUV_420_888 is a flexible YUV format, so it can represent multiple underlying formats (including NV21 and YV12). So you'll need to check the pixel and row strides in the Images from the ImageReader to ensure you're reading through the 3 planes of data correctly.
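As an illustration, a sketch of walking the three planes of a YUV_420_888 Image while respecting the strides, so the same code works whether the underlying layout is NV21-like (interleaved chroma, pixelStride 2) or YV12-like (planar, pixelStride 1); imageReader is assumed to be an ImageReader configured for YUV_420_888:

    Image image = imageReader.acquireLatestImage();
    if (image != null) {
        for (int p = 0; p < 3; p++) {                  // 0 = Y, 1 = U, 2 = V
            Image.Plane plane = image.getPlanes()[p];
            ByteBuffer buf = plane.getBuffer();
            int rowStride = plane.getRowStride();
            int pixelStride = plane.getPixelStride();  // 1 = tightly packed, 2 = interleaved UV
            int w = (p == 0) ? image.getWidth() : image.getWidth() / 2;
            int h = (p == 0) ? image.getHeight() : image.getHeight() / 2;
            for (int row = 0; row < h; row++) {
                for (int col = 0; col < w; col++) {
                    byte value = buf.get(row * rowStride + col * pixelStride);
                    // ... use value ...
                }
            }
        }
        image.close();                                 // always release the buffer
    }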
If you need full frame rate, you need to work in YUV - JPEG has a lot of encoding overhead and generally won't run faster than 2-10fps, while YUV will run at 30fps at least at preview resolutions.
I solved this problem by using the luminance (Y) values only, the format for which doesn't vary between devices. For the purposes of motion detection, a black and white image is fine. This also gets around the problem on API Level 21 where some of the U and V data is missing when using the ImageReader.
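A minimal sketch of that luma-only approach (the threshold and the frame-differencing metric are illustrative, not from the original answer; the caller is responsible for closing the Image):

    private byte[] prevLuma;  // packed Y plane of the previous frame

    boolean detectMotion(Image image, double threshold) {
        Image.Plane yPlane = image.getPlanes()[0];
        ByteBuffer buf = yPlane.getBuffer();
        int rowStride = yPlane.getRowStride();
        int width = image.getWidth();
        int height = image.getHeight();

        // Copy the Y plane row by row into a tightly packed array (drops any row padding).
        byte[] luma = new byte[width * height];
        for (int row = 0; row < height; row++) {
            buf.position(row * rowStride);
            buf.get(luma, row * width, width);
        }

        boolean motion = false;
        if (prevLuma != null) {
            long diff = 0;
            for (int i = 0; i < luma.length; i++) {
                diff += Math.abs((luma[i] & 0xFF) - (prevLuma[i] & 0xFF));
            }
            motion = (double) diff / luma.length > threshold;  // mean absolute difference
        }
        prevLuma = luma;
        return motion;
    }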

Android Camera2 capture image skewed

Update: This looks like it's related to this: Image data from Android camera2 API flipped & squished on Galaxy S5 - I consider this a bug, since the Nexus 5/6 work correctly and it makes no sense to have to obtain the full sensor size and then crop manually to reach the desired aspect ratio; one might as well not use the "supported" output sizes at all!
Problem:
Get characteristics of a camera using Camera2 API, and extract output sizes suitable for a MediaCodec.class
Create a MediaCodec input surface with one of the suitable camera output sizes. Feed the output to some MediaMuxer or whatever, to see the output.
Start camera capture requests using the codec's created surface as the target.
Codec output has the correct size. But the result differs by device:
Nexus 5/6: everything ok on Android 5/6.
Samsung tablet with Android 5.1: for some resolutions, the image is obviously stretched, indicating that the camera output resolution does not match the surface size. This becomes very obvious when starting to rotate the camera - the image becomes more and more skewed since it's not aligned with the X/Y axes. For some other resolutions the output is OK. There is no pattern here related to either the size or the aspect ratio.
No problem, one would say. Maybe the surface is not created exactly at the specified width and height, or whatever (even if the output sizes were extracted specifically for a MediaCodec.class target).
So, I created an OpenGL context, generated a texture and a SurfaceTexture for it, set its default buffer size to the camera output size, and created a Surface using the texture. I won't go into the gory details of drawing that to a TextureView or back to the MediaCodec's EGL surface. The result is the same - the camera2 capture request outputs a distorted image only for some resolutions.
Digging deeper: calling getTransformMatrix on the SurfaceTexture immediately after updateTexImage - the matrix is always the identity matrix, as expected.
So, the real problem here is that the camera is NOT capturing at the size of the provided target surface. The solution would thereby be to get the actual size the camera is capturing, and the rest is pure GL matrix transforms to draw correctly. But - HOW DO I GET THAT?
Note: using the old Camera API, with exactly the same "preview size" and the same surface as the target (either MediaCodec's or the custom one) - ALL IS FINE! But I can't use the old camera API, since it's both deprecated and also seems to have a max capture size of 1080p, while the Camera2 API goes beyond that, and I need to support 4k recording.
I encountered a similar issue on the SM-A7009, an API level 21 LEGACY camera2 device.
The preview is stretched; surfaceTexture.setDefaultBufferSize does not work, as the framework overrides these values when the preview starts.
The preview sizes reported from StreamConfigurationMap.getOutputSizes(SurfaceTexture.class) are not all supported.
Only three of them are supported.
$ adb shell dumpsys media.camera |grep preview-size
preferred-preview-size-for-video: 1920x1080
preview-size: 1440x1080
preview-size-values: 1920x1080,1440x1080,1280x720,1056x864,960x720,880x720,800x480,720x480,640x480,528x432,352x288,320x240,176x144
The system dump lists many preview sizes; after checking all of them, I found that only 1440x1080, 640x480, and 320x240 are supported.
The supported preview sizes all have a 1.33333 aspect ratio, the same ratio as reported by CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE.
So I think it's a bug in some Samsung devices with the LEGACY camera2 implementation on API 21.
My workaround is to fall back to the deprecated camera API on these devices.
Hope this is helpful for anyone who reaches here.
So yes, this is a bug on those Samsung devices.
Generally this happens when you ask for multiple different aspect ratios on output, and the device-specific camera code trips over itself on cropping and scaling all of them correctly. You may be able to avoid it by ensuring all requested sizes have the same aspect ratio.
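One way to apply that advice (illustrative sketch; characteristics is assumed to be the CameraCharacteristics of the opened camera) is to keep only the codec output sizes whose aspect ratio matches the other streams you plan to request:

    StreamConfigurationMap map =
            characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    Size preview = new Size(1440, 1080);  // example: the size used by the other output
    List<Size> sameRatio = new ArrayList<>();
    for (Size s : map.getOutputSizes(MediaCodec.class)) {
        // Compare ratios with a small tolerance to absorb rounding.
        if (Math.abs((float) s.getWidth() / s.getHeight()
                - (float) preview.getWidth() / preview.getHeight()) < 0.01f) {
            sameRatio.add(s);
        }
    }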
The resolution is probably actually what you asked for - but it's been incorrectly scaled (you could test this with an ImageReader at the problematic size, where you get an explicit buffer you can poke at).
We are adding additional testing to the Android compliance tests to try to ensure these kinds of stretched outputs don't continue to happen.
