Android OpenGL camera2 texture resolution

Intro
I'm using CaptureRequest.Builder and providing two surfaces as targets: a SurfaceTexture for GL drawing and an ImageReader (getSurface()) for frame processing.
I set the desired resolution of 640x480 both via surfaceTexture.setDefaultBufferSize() and when creating the ImageReader with ImageReader.newInstance().
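For reference, a minimal sketch of that two-target setup (previewTexture, cameraDevice, sessionStateCallback and backgroundHandler are assumed to exist already; this is an illustration, not the repo's exact code):

// Request both outputs at 640x480 (4:3).
previewTexture.setDefaultBufferSize(640, 480);
Surface glSurface = new Surface(previewTexture);

ImageReader reader = ImageReader.newInstance(
        640, 480, ImageFormat.YUV_420_888, /* maxImages */ 2);

CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(glSurface);
builder.addTarget(reader.getSurface());

cameraDevice.createCaptureSession(
        Arrays.asList(glSurface, reader.getSurface()),
        sessionStateCallback, backgroundHandler);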
The problem
It appears that the SurfaceTexture's OpenGL texture receives an image with a different aspect ratio than 640x480 (4:3); it looks more like 2560x1440 (16:9), with the SurfaceTexture occupying the full screen (see IMG 3). It has to be noted that both phones have the same 16:9 screen aspect ratio. (The texture offset from the sides is intentional.)
Below, IMG 1 is the source image displayed on a monitor, while IMG 2 and IMG 3 are phone screen captures where the current activity's view shows the camera preview.
The question
How can I correctly force SurfaceTexture or GLSurfaceView.Renderer to receive a texture image with a specific aspect ratio (4:3 in this case) from the camera, for a View of arbitrary size?
Relevant
It has to be noted that there is an app that addresses this issue, but it does so by changing the view size to match the desired aspect ratio.
See also:
GitHub repo.
Relevant source code.
Related post.
Related post.
IMG 1 - Displayed source image:
IMG 2 - Good aspect ratio (Google Nexus 5 - Android 6.0.1):
IMG 3 - Bad texture aspect ratio (LG-D855 - Android 5.0):

"Fixed" the issue by updating LG phone to Android 6.0.

Related

Android CameraX seems to crop analysis images slightly for specific PreviewView sizes

Dependencies used:
implementation 'androidx.camera:camera-camera2:1.1.0-alpha04'
implementation 'androidx.camera:camera-lifecycle:1.1.0-alpha04'
implementation 'androidx.camera:camera-view:1.0.0-alpha24'
For simplicity, pretend you're using a device with:
a portrait default orientation,
a 1080x1920 screen resolution,
and ImageAnalysis configured to use 1080x1920 as the frame-analysis resolution (roughly the setup sketched below).
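A minimal sketch of that configuration might look like this (names like previewView, lifecycleOwner, context and analysisExecutor are illustrative, not my actual code):

previewView.setScaleType(PreviewView.ScaleType.FILL_CENTER);

Preview preview = new Preview.Builder().build();
preview.setSurfaceProvider(previewView.getSurfaceProvider());

ImageAnalysis analysis = new ImageAnalysis.Builder()
        .setTargetResolution(new Size(1080, 1920))  // requested analysis size
        .build();
analysis.setAnalyzer(analysisExecutor, image -> {
    // Analysis frames arrive here; I convert them to NV21 and save them.
    image.close();
});

ListenableFuture<ProcessCameraProvider> future =
        ProcessCameraProvider.getInstance(context);
future.addListener(() -> {
    try {
        future.get().bindToLifecycle(lifecycleOwner,
                CameraSelector.DEFAULT_BACK_CAMERA, preview, analysis);
    } catch (ExecutionException | InterruptedException e) {
        // ignored in this sketch
    }
}, ContextCompat.getMainExecutor(context));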
When a PreviewView is laid out with a size close to 1080x1920 and FILL_CENTER as the scale type, we get analysis frames with the same contents as we see in the PreviewView. That's fine.
When the PreviewView becomes a little narrower, for example 700x1920, parts of the frames on the left and right look cropped in the preview. That's fine too.
But when we resize the PreviewView to a landscape shape, for example 1080x500, an issue appears. The PreviewView shows the camera frames cropped at the top and bottom, which is expected. What's not expected is that the analysis frames themselves become very slightly cropped at the left and right.
I expect the same horizontal capture bounds in both the PreviewView and the frames (because with this scaling only the vertical parts should be cropped), but the PreviewView actually shows a little more along its width than the corresponding frame. The frame loses about 20px horizontally, which shouldn't happen: given the frame size (1080x1920) and the PreviewView size (1080x500), only the vertical parts should be clipped.
Attaching an illustration:
Has anyone encountered the same behaviour in CameraX?
UPD: How did I find this out? By saving the frames (images) to files.
Frame buffers are converted to cv::Mats with:
#include <jni.h>
#include <opencv2/imgproc.hpp>

// Wraps the NV21 byte[] coming from ImageAnalysis as a single-channel YUV
// Mat (luma plane plus interleaved chroma rows) and converts it to BGR.
void nv21_buffer_to_bgr_mat(JNIEnv* env,
                            jbyteArray nv21_buffer,
                            jint width,
                            jint height,
                            cv::Mat& mat)
{
    // To YUV.
    jbyte* nv21_buffer_bytes = env->GetByteArrayElements(nv21_buffer, nullptr);
    cv::Mat yuv(height + height / 2, width, CV_8UC1, nv21_buffer_bytes);
    // To BGR.
    cv::cvtColor(yuv, mat, cv::COLOR_YUV2BGR_NV21, 3);
    env->ReleaseByteArrayElements(nv21_buffer, nv21_buffer_bytes, 0);
}
UPD2: Maybe the layout-inspector images for the cases described above will be helpful:
The underlying camera2 API has a fairly strictly defined set of rules for how multiple output resolutions interact, in terms of cropping.
The docs for the SCALER_CROP_REGION (digital zoom) control have useful diagrams about this; for your case, just assume the CROP_REGION covers the whole active array.
On top of that, CameraX PreviewView applies further cropping to its input, since the camera device itself only supports a few sizes. So PreviewView will pick a supported resolution that has a reasonable aspect ratio, and then crop it if necessary when displaying it to fill your View layout area.
So, the relationship between the ImageAnalysis and Preview use cases and their fields of view depends on both the resolutions CameraX selects for the use cases under the hood, and PreviewView's additional cropping.
What's probably happening here is that your landscape PreviewView is selecting an underlying resolution that's wider than ImageAnalysis is selecting. You could check what's actually selected via adb shell dumpsys media.camera if you have developer access to a device, while your app is running. That command dumps out a lot of info, including exactly what CameraX has configured the camera device to do.
You could try using the setTargetAspectRatio or the setTargetResolution methods on Preview.Builder to match the aspect ratio of the ImageAnalysis; that should ensure that the PreviewView is only ever a crop of what ImageAnalysis receives.
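For example, a sketch of matching the two use cases (RATIO_16_9 is just illustrative; use whichever ratio fits your layout):

Preview preview = new Preview.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_16_9)
        .build();

ImageAnalysis analysis = new ImageAnalysis.Builder()
        .setTargetAspectRatio(AspectRatio.RATIO_16_9)
        .build();

Note that a use case accepts either setTargetAspectRatio or setTargetResolution, not both; setting both on the same builder throws an IllegalArgumentException.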

Crop the preview image using Android Camera2 API

The goal is to crop the preview shown on the surface for frames coming from a Camera2 API capture session, but not to crop the video that is actually recorded.
For example, I have a streaming resolution of 1920x1080 (16:9) and, just for instance, a screen size of 2000x3000 (2:3, i.e. 16:24 next to the stream's 16:9). I'd like the recorded video to stay at the original streaming resolution of 1920x1080, but I want the preview to fill all the available space without resizing the View. Scaling the stream up at its own aspect ratio until it covers the screen gives 5333x3000 (3000 * 16/9 ≈ 5333), so the frame data corresponding to that surface size (5333x3000, I suppose) would then be "cut" down to 2000x3000 by removing (5333 - 2000) / 2 pixels from both the left and the right.
Is it possible?
P.S.: the bad thing is that the Google sample for the camera2 API resizes the view itself, and these "blank areas" are undesirable for me. I haven't found anything that even closely matches my problem.
P.P.S.: AFAIU, this SO solution crops the frame that comes from the camera itself, but I need my video to stay at the original resolution.
If you're using a TextureView, you can probably adjust its transform matrix to scale up the preview (and cut off the edges in the process). Read the existing matrix, fix up the aspect ratio and scale it up, and then save the new matrix.
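A rough, untested sketch of that adjustment (bufferWidth/bufferHeight are the camera preview size; the matrix is applied on top of TextureView's default stretch-to-fit):

// Scale the preview up uniformly so it covers the view, cropping the edges.
void applyCenterCrop(TextureView view, int bufferWidth, int bufferHeight) {
    float viewW = view.getWidth();
    float viewH = view.getHeight();
    // TextureView stretches the buffer to the view by default, so undo that
    // per-axis stretch and apply a single uniform "cover" scale instead.
    float cover = Math.max(viewW / bufferWidth, viewH / bufferHeight);
    Matrix m = new Matrix();
    m.setScale(bufferWidth * cover / viewW, bufferHeight * cover / viewH,
            viewW / 2f, viewH / 2f);  // pivot at the view centre
    view.setTransform(m);
}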
But please note that saving a different field of view than what you're showing to the user is probably going to get you negative reactions - please use the preview to frame what they want to record, and if what you're saving has extra stuff, the recorded video won't match expectations. Of course, maybe this isn't a concern for your use case.

Android camera (deprecated) preview resizes when recording

The problem appears on my Nexus 6P device. After going through all the motions to detect optimal previews, aspect ratios, video sizes etc, I end up with the following:
SurfaceView dimens 2392x1440 (full screen)
Once re-measured with the aspect ratio of the camera preview resolution, this changes to 2560x1440 or 2392x1351 depending on the calculation method.
Camera preview size: 1920x1080
This is set on the camera params using setPreviewSize():
params.setPreviewSize(mOptimalPreviewSizes.width, mOptimalPreviewSizes.height);
Media recorder video size: 1920x1080 (forced by the settings)
This is set on the media recorder:
mMediaRecorder.setVideoSize(mOptimalVideoSize.width, mOptimalVideoSize.height);
When I click the record button, the camera preview 'zooms in', i.e. resizes to the video size. If I change the video size setting to, for example, 3840x2160, the preview works fine with no resizing.
I was under the impression that it is possible to set the video size separately from the preview size, so I'm a bit confused as to why I'm seeing this and how I can work around it.
EDIT:
As an example, OpenCamera seems to be able to separate preview surface resolution from video resolution. https://sourceforge.net/p/opencamera/code/ci/master/tree/src/net/sourceforge/opencamera/
To make sure, I've added a line just before video_recorder.prepare(); to set a custom video size (video_recorder.setVideoSize(640,480);). The preview surface was still measuring near full screen at 2392x1351, and camera preview was still set to 1920x1080. I've also double checked the resulting video and it was 640x480 as expected. Unfortunately I cannot see anything in their code that would indicate how this is achieved.
EDIT 2:
I've also noticed that this 'zooming' action always happens by the same amount. Regardless of whether I'm recording at 1920x1080 or 320x200, the preview gets zoomed and loses about a centimetre of the picture that was available before the recording started. The end video has the expected cropping in relation to the resolution.
The problem seems to be related to video stabilisation. I suppose this actually makes sense: stabilising against shakes usually requires cropping, and when the preview resolution is already significantly lower than the surface size, that crop makes the picture 'jump' or 'zoom' in. Disabling video stabilisation fixes the issue.
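With the deprecated android.hardware.Camera API that looks roughly like this (camera is the opened camera instance; a sketch, not tested on the Nexus 6P):

Camera.Parameters params = camera.getParameters();
if (params.isVideoStabilizationSupported()) {
    params.setVideoStabilization(false);  // avoid the stabilisation crop
}
camera.setParameters(params);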

How is the camera preview connected with the final image output?

I've always been under the impression that the preview and the final output are not connected in any way, meaning that I can set the preview to some arbitrary dimension and the final JPG will be whatever resolution I set in the parameters. But I just ran into a very odd situation where the image data coming back in the byte[] of the JPG callback differs depending on what dimensions I set my preview to.
Can someone enlighten me on what actual relationship the preview has to the final JPG? (Or point me to documentation on said relationship.)
TIA
[Edit]
As per ravi's answer, this was my assumption as well; however, based on the evidence, I see no alternative but to surmise that they are, in fact, directly connected. I'll post code if necessary (though there's a lot of it), but here's what I'm doing.
I have a preview screen where the user takes a photo of themselves. I then display the picture captured (from the jpg callback bitmap data) in a subsequent draw view and allow them to trace a shape over their photo. I then pass the points of their polygon into a class that cuts that shape out of the original image, and gives back the cut image.
All of this works, BUT depending on how I present the PREVIEW, the polygon-cutting class crashes with an array-index-out-of-bounds error as it tries to access pixels on the final image that simply don't exist. This effect is produced EXCLUSIVELY by altering the shape of the preview View's dimensions. I'm not altering ANYTHING else in the code, and yet, just by mis-shaping my preview view, I can reproduce this error 100% of the time.
I can't see an explanation other than that the preview and the final are directly connected somehow, since I'm never operating on the preview's data, I only display it in a SurfaceView and then move on to deal exclusively with the data from the JPG callback following the user having taken their photo.
There is no relation between the preview resolution and the final image that is captured.
They are completely independent (at least for still image capture). The preview resolution and aspect ratio are not interrelated with the final image resolution and aspect ratio in any way.
In the camera application that I have written, the preview is always VGA, but the image I capture varies from 5MP to VGA (depending on the device's capability); the two sizes are set through separate parameters, as sketched below.
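A hypothetical old-API illustration (both sizes must be picked from getSupportedPreviewSizes() and getSupportedPictureSizes(); 2592x1944 is just an example 5MP size):

Camera.Parameters params = camera.getParameters();
params.setPreviewSize(640, 480);      // VGA preview
params.setPictureSize(2592, 1944);    // ~5MP still capture, if supported
camera.setParameters(params);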
Perhaps if you can explain the situation it would be more helpful.
We are currently developing a camera application and face very similar problems. In our case, we want to display a 16:9 preview while capturing a 4:3 picture. On most devices this works without any problems, but on some (e.g. Galaxy Nexus, LG Optimus 3D) the output picture depends on the preview you've chosen. On those devices, the resulting pictures are distorted when the preview ratio differs from the picture ratio.
We tried to fix this by changing the preview resolution to a better one just before capturing the image, but this does not work on some devices and raises an error when restarting the preview after the capture is finished.
We also tried to fix this by enlarging the SurfaceView to full-screen width and beyond full-screen height to make a 16:9 preview out of a 4:3 preview. But this does not work either, because a SurfaceView cannot be taller than the screen height.
So there IS a connection on SOME devices, and we really want to know how to fix or work around this.

How does Android handle differences between preview size/ratio and actual SurfaceView size?

I'm writing a small android app where a user can place an image inside the live preview of the camera and take a picture of this. The app will then combine the two images appropriately -- All of this is working fine.
I understand you can get/set the PreviewSize using Camera.getParameters(); I assume this is related to the size of the realtime "camera feed".
However, the size of my SurfaceView where the camera preview is shown is different from the reported (and used) PreviewSizes. For example, in the emulator my available SurfaceView happens to be 360x215, while the PreviewSize is 320x240. Still, the entire SurfaceView is filled with the preview.
But the picture that's generated in the end is (also?) 320x240. How does Android compensate for these differences in size and aspect ratio? Is the image truncated?
Or am I simply misunderstanding what the PreviewSize is about - is this related to the size of the generated pictures, or is it related to the "realtime preview" that's projected on the SurfaceView? Are there any non-trivial Camera examples that deal with this?
I need to know what transformation takes place to, eventually, copy/scale the image correctly into the photo; hence these questions.
I am trying to figure this out myself. Here is what I found out so far.
The SurfaceView has an internal surface called mSurface which is actually used as the camera feed and the encoder feed. So this buffer has to be the actual size at which you want to do the recording.
You can set the size of this mSurface independently of the SurfaceView by using the setFixedSize method, as sketched below.
Now say you want to perform an HD recording, so mSurface needs a 1280x720 resolution, but your SurfaceView can't be that big (assuming you are running on a phone with a WVGA screen). So you set the view to a smaller resolution than 1280x720 that maintains the same aspect ratio.
Android then resizes the HD buffer down to the preview resolution; no cropping is done, the buffer is just resized to the SurfaceView's resolution.
So at this point both mSurface and the preview size that you set on the camera are the same resolution, and hence the resultant video recording will also be at that resolution.
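A sketch of that (surfaceView is the preview SurfaceView; 1280x720 is the hypothetical recording size):

SurfaceHolder holder = surfaceView.getHolder();
// Fix the producer buffer (camera/encoder side) at the recording resolution;
// the compositor scales it to whatever size the view is laid out at.
holder.setFixedSize(1280, 720);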
That being said, I am still struggling to get my VGA recorder to work on a Nexus S; it is working on an LG Maha device. :)
