1) The Camera previews at 1920 x 1080
2) I record at 960 x 540
3) I want to be able to specify what portion of the 1920 x 1080 preview should be saved into the video and change this on-the-fly.
In effect this would give me the ability to do digital zooming as well as digital panning of the Camera. What APIs or code samples could help me out here?
I've looked at the Camera2 API and samples. It looks like you can only set one viewport for the device, not per output.
You'll have to implement this zooming yourself; the camera API produces the same field of view on all of its outputs, regardless of the resolution of each output (though it does crop different aspect ratios differently, to avoid stretching). The camera2 SCALER_CROP_REGION (used for digital zoom) will zoom/pan all outputs equally.
The simplest way to do this is probably to send the 1080p output to the GPU, and from the GPU, render to the screen with the full FOV, and render to a media recorder with just the region of the image you want to record.
It's not terribly straightforward, since you'll need to write quite a bit of OpenGL code to accomplish this.
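A minimal sketch of one way to express that per-output crop in the GL draw code, assuming the camera frame is bound as an external texture; cropX/cropY/cropW/cropH are hypothetical parameters describing the normalized region of the 1920x1080 frame you want recorded:

```kotlin
// Sketch: texture coordinates for a full-screen quad, remapped so that only a
// chosen sub-region of the camera frame is sampled. cropX/cropY/cropW/cropH are
// hypothetical values in [0, 1], relative to the full 1920x1080 frame.
fun cropTexCoords(cropX: Float, cropY: Float, cropW: Float, cropH: Float): FloatArray =
    floatArrayOf(
        cropX,         cropY,          // bottom-left
        cropX + cropW, cropY,          // bottom-right
        cropX,         cropY + cropH,  // top-left
        cropX + cropW, cropY + cropH   // top-right
    )
```

Use these coordinates only for the draw call that targets the MediaRecorder/MediaCodec surface and keep the full 0..1 range for the on-screen draw; updating the crop values per frame then gives on-the-fly digital pan and zoom of the recording without affecting the preview.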
Related
Currently, I am following Google's sample code in Kotlin for the Camera2 API. Everything seems to be working fine in terms of video recording. However, I do have different requirements for my project, as listed below.
I need to record a video in three possible ways: 640 x 640 (square), Y x 640 (portrait), or 640 x Y (landscape), on a portrait screen, where Y is a number less than 640.
640 x 640 (square):
I have a Samsung S9+, which supports only one resolution with a 1:1 aspect ratio, 384x384, but when I post to Instagram they create a video with 720 x 720 resolution with good quality. So the question is: how is Instagram enlarging a low-resolution video without losing quality?
W? x 640 (portrait):
I need to find an equal or higher resolution with the closest matching aspect ratio, and later on I can run an FFmpeg command to match the required size, right?
640 x H? (landscape):
I can follow the same approach as in the portrait use case. However, the real question is how to record a landscape video if the screen is in portrait orientation.
I have already researched each use case a lot and am now open to any possible solution: FFmpeg, OpenGL, MediaMuxer, MediaCodec, or anything else.
Any hints, links, or suggestions would be highly appreciated. Thanks in advance.
640 x 640 (square): Instagram is likely capturing video at 720p (1280x720) and then cropping to 720x720 in their own code.
Generally, the camera has only a few resolutions available, and they all tend to be landscape. If you need portrait resolutions (or landscape resolutions in portrait orientation), you will probably need to do your own cropping.
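For the size-selection part of that approach, a minimal sketch (assuming the crop itself happens later in your own GL or FFmpeg step) could pick the smallest codec-capable size the device reports that still covers the target:

```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.media.MediaCodec
import android.util.Size

// Sketch: pick the smallest supported codec output size that still covers the
// desired target (e.g. 640x640 after cropping). The crop itself would then be
// done in your own GL/FFmpeg step, as described above.
fun pickRecordingSize(characteristics: CameraCharacteristics, targetW: Int, targetH: Int): Size? {
    val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
        ?: return null
    return map.getOutputSizes(MediaCodec::class.java)
        .filter { it.width >= targetW && it.height >= targetH }
        .minByOrNull { it.width * it.height }
}
```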
I want to get image streams with the fewest distortions possible (no noise reduction, etc.) without having to deal with RAW outputs.
I'm working with two streams (one when using the deprecated camera): one for the preview and one for the processing. I understand the camera2 API, but I'm wondering what kind of upsampling/downsampling is used when fitting the sensor output to the surfaces.
More specifically, I'm working with zoomed images, and according to the camera2 documentation on cropping and the reference pages:
For non-raw streams, any additional per-stream cropping will be done to maximize the final pixel area of the stream.
The whole concept is easy enough to understand, but it's also mentioned that:
Output streams use this rectangle to produce their output, cropping to a smaller region if necessary to maintain the stream's aspect ratio, then scaling the sensor input to match the output's configured resolution.
But I haven't been able to find any info about this scaling. Which method is used (filter-based, bicubic, edge-directed, etc.)? Is there a way to get this information? And is there a way I can actually choose which one is used?
Concerning the deprecated camera, I'm guessing the zoom is just simpler, in the sense that it's probably equivalent to having SCALER_CROPPING_TYPE_CENTER_ONLY with only a limited set of crop regions corresponding to the exposed zoom ratios. But is the image scaling the same as in camera2? If someone could shed some light, I'd be happy.
Real life example
Camera sensor: 5312x2988 (16:9)
I want a 4x zoom, so the crop region should be (1992, 1120, 1328, 747).
(By the way, what happens with odd sizes, for instance on SCALER_CROPPING_TYPE_CENTER_ONLY devices?)
Now I have a surface of size (1920, 1080); the crop region and the stream aspect ratio match, but the 1328x747 area must be upscaled to fill the 1920x1080 surface. The nature of this transformation is what I want to know.
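For reference, a crop region like that can be computed as a simple centred rectangle; note that SCALER_CROP_REGION takes an android.graphics.Rect (left/top/right/bottom), so the width/height form above needs converting:

```kotlin
import android.graphics.Rect

// Sketch: centred crop region for an N-x digital zoom on the active array.
// For the 5312x2988 sensor above and zoom = 4 this gives left=1992, top=1120,
// width=1328, height=747 (top would be 1120.5, truncated - hence the odd-size question).
fun zoomCropRegion(activeArray: Rect, zoom: Float): Rect {
    val cropW = (activeArray.width() / zoom).toInt()
    val cropH = (activeArray.height() / zoom).toInt()
    val left = activeArray.left + (activeArray.width() - cropW) / 2
    val top = activeArray.top + (activeArray.height() - cropH) / 2
    // The result is what you would pass to CaptureRequest.SCALER_CROP_REGION.
    return Rect(left, top, left + cropW, top + cropH)
}
```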
The scaling algorithm used depends on the device; generally for power efficiency and speed, scaling is done in hardware blocks usually at the end of camera image signal processor (ISP) pipeline.
Therefore, you can't generally rely on it being any particular kind of scaling or filtering. Unfortunately, if you want to understand the entire processing pipeline, you have to start with RAW and implement it yourself.
If you're on the same device, the old camera API and the new camera2 API talk to the same hardware abstraction layer and the same hardware scalers, so the scaling output will generally match exactly for the same resolution (with the exception of LEGACY-level devices, where camera2 may need additional GPU-based scaling, which will be bilinear downsampling - but you don't really know when this applies).
Update: this looks like it's related to "Image data from Android camera2 API flipped & squished on Galaxy S5". I consider this a bug, since the Nexus 5/6 work correctly, and it makes no sense to have to capture the full sensor size and then crop manually to reach the desired aspect ratio - you might as well not use the "supported" output sizes at all!
Problem:
Get the characteristics of a camera using the Camera2 API, and extract the output sizes suitable for MediaCodec.class.
Create a MediaCodec input surface with one of the suitable camera output sizes. Feed the codec's output to some MediaMuxer or whatever, to see the result.
Start camera capture requests using the codec's created surface as the target.
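A minimal sketch of that setup, with the camera and encoder objects assumed to be already opened and configured, and the capture-session/muxer wiring omitted:

```kotlin
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraDevice
import android.media.MediaCodec
import android.view.Surface

// Sketch of the three steps above. The encoder is assumed to be configured for
// surface input; capture session creation and MediaMuxer wiring are omitted.
fun startRecordingRequest(
    characteristics: CameraCharacteristics,
    cameraDevice: CameraDevice,
    codec: MediaCodec
): Surface {
    val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
    val sizes = map.getOutputSizes(MediaCodec::class.java)    // step 1: one of these feeds the encoder's MediaFormat
    val inputSurface = codec.createInputSurface()              // step 2: must be called after configure()
    val request = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD)
    request.addTarget(inputSurface)                            // step 3: codec surface as the capture target
    return inputSurface
}
```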
The codec output has the correct size, but the result differs by device:
Nexus 5/6: everything is OK on Android 5/6.
Samsung tablet with Android 5.1: for some resolutions, the image is obviously stretched, indicating that the camera output resolution does not match the surface size. This becomes very obvious when rotating the camera - the image becomes more and more skewed, since it's not aligned with the X/Y axes. For some other resolutions the output is OK. There is no pattern here related to either the size or the aspect ratio.
No problem, one would say. Maybe the surface is not created exactly at the specified width and height, or whatever (even if the output sizes were extracted specifically for a MediaCodec.class target).
So, I created an OpenGL context, generated a texture and a SurfaceTexture for it, set its default buffer size to the camera output size, and created a Surface using the texture. I won't go into the gory details of drawing that to a TextureView or back to the MediaCodec's EGL surface. The result is the same - the Camera2 capture request outputs a distorted image only for some resolutions.
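Roughly, the SurfaceTexture part of that setup looks like this (textureId is assumed to be a GL_TEXTURE_EXTERNAL_OES texture name created on the GL thread):

```kotlin
import android.graphics.SurfaceTexture
import android.util.Size
import android.view.Surface

// Sketch of the SurfaceTexture path described above.
fun createCameraSurface(textureId: Int, outputSize: Size): Pair<SurfaceTexture, Surface> {
    val surfaceTexture = SurfaceTexture(textureId)
    surfaceTexture.setDefaultBufferSize(outputSize.width, outputSize.height)
    return surfaceTexture to Surface(surfaceTexture)  // the Surface is used as the capture target
}
```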
Digging deeper: calling getTransformMatrix on the SurfaceTexture immediately after updateTexImage - the matrix is always the identity matrix, as expected.
So, the real problem here is that the camera is NOT capturing at the size of the provided target surface. The solution would therefore be to get the actual size the camera is capturing at, and the rest is pure GL matrix transforms to draw correctly. But - HOW DO I GET THAT?
Note: using the old Camera API, with exactly the same "preview size" and the same surface as the target (either MediaCodec's or the custom one) - ALL IS FINE! But I can't use the old camera API, since it's both deprecated and also seems to have a max capture size of 1080p, while the Camera2 API goes beyond that, and I need to support 4k recording.
I encountered a similar issue on the SM-A7009 with API level 21, a LEGACY-level camera2 device.
The preview is stretched, and surfaceTexture.setDefaultBufferSize does not help; the framework overrides the value when the preview starts.
The preview sizes reported from StreamConfigurationMap.getOutputSizes(SurfaceTexture.class) are not all supported.
Only three of them are supported.
```
$ adb shell dumpsys media.camera | grep preview-size
preferred-preview-size-for-video: 1920x1080
preview-size: 1440x1080
preview-size-values: 1920x1080,1440x1080,1280x720,1056x864,960x720,880x720,800x480,720x480,640x480,528x432,352x288,320x240,176x144
```
The system dump lists many preview sizes; after checking all of them, I found that only 1440x1080, 640x480, and 320x240 are supported.
The supported preview sizes all have a 1.33333 (4:3) aspect ratio - the same ratio as the rectangle reported by CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE.
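Based on that observation, a minimal sketch of a filter that keeps only the sizes matching the active-array aspect ratio (on this device, apparently the only ones that actually work):

```kotlin
import android.graphics.SurfaceTexture
import android.hardware.camera2.CameraCharacteristics
import android.util.Size
import kotlin.math.abs

// Sketch: keep only preview sizes whose aspect ratio matches the active array
// (4:3 on this device), since those were the only ones that worked here.
fun sizesMatchingSensorRatio(characteristics: CameraCharacteristics): List<Size> {
    val active = characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE)
        ?: return emptyList()
    val sensorRatio = active.width().toFloat() / active.height()
    val map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)
        ?: return emptyList()
    return map.getOutputSizes(SurfaceTexture::class.java)
        .filter { abs(it.width.toFloat() / it.height - sensorRatio) < 0.01f }
}
```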
So I think it's a bug on some Samsung devices with the LEGACY camera2 implementation on API 21.
My workaround is to make these devices use the deprecated Camera API.
Hope this is helpful for anyone who reaches here.
So yes, this is a bug on those Samsung devices.
Generally this happens when you ask for multiple different aspect ratios on output, and the device-specific camera code trips over itself on cropping and scaling all of them correctly. You may be able to avoid it by ensuring all requested sizes have the same aspect ratio.
The resolution is probably actually what you asked for - but it's been incorrectly scaled (you could test this with an ImageReader at the problematic size, where you get an explicit buffer you can poke at).
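A minimal sketch of that check (the 1280x720 size is a placeholder for whichever resolution misbehaves, and backgroundHandler is assumed to be a Handler on a background thread):

```kotlin
import android.graphics.ImageFormat
import android.media.ImageReader
import android.os.Handler
import android.util.Log

// Sketch: request the problematic size via an ImageReader so you get an explicit
// buffer whose real geometry you can inspect directly.
fun makeDebugReader(backgroundHandler: Handler): ImageReader {
    val reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 2)
    reader.setOnImageAvailableListener({ r ->
        val image = r.acquireLatestImage() ?: return@setOnImageAvailableListener
        Log.d("CropCheck", "got ${image.width}x${image.height}, rowStride=${image.planes[0].rowStride}")
        image.close()
    }, backgroundHandler)
    return reader  // reader.surface is then added as the capture target instead of the codec surface
}
```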
We are adding additional testing to the Android compliance tests to try to ensure these kinds of stretched outputs don't continue to happen.
I recently started learning Unity 2D, coming from an iOS background. I am using the FWVGA Landscape (854x480) dimensions. I resized a 1136x640 background image (which I use for iOS) to 854x480 and made a sprite with the image, but it only took up a small portion of the screen, and I had to resize the image in Unity. What are the general rules for converting dimensions from iOS to Android in Unity to get the dimensions to fit?
1136x640 is approximately the ratio 16:9, as is 1920x1080 (1080p). Note a quick way to check the ratio: 16/9 = 1.777…, 1920/1080 = 1.777…, 1136/640 ≈ 1.78, 854/480 ≈ 1.78 - all (approximately) the same 16:9 ratio.
So, for Android phones that are also 16:9, you don't "need" to resize your assets unless, say, you want to increase the resolution for quality's sake on more powerful Android devices.
If you want to do it quick and easy, then you want to draw your game in a camera at 1136x640 and then scale the camera view to match the resolution of the device you are running on (be it 1920x1080, 854x480, etc. - all the same ratio).
You get problems with devices that are not 16:9, like tablets or phones that are, say, 16:10, 5:3, 3:2, or the iPads and Nexus 9 tablets at 4:3. This is a long subject though, and there are lots of guides around to help. Try searching for "Unity 2d multiple screen ratios".
I think there are just too many variables to make this work "simply"
854x480 doesn't sound like a 'standard' width x height to me, and Unity is a 3D game engine, which means its camera sits on the Z axis, "looking down upon" a 2D game.
So there's the camera's Z position (which Unity apparently tries to 'adjust'), the phone's resolution, the phone's physical size, the size of the image you are using, etc. There are just a lot of variables. I would recommend https://stackoverflow.com/a/21375938 to see if that helps.
I'm writing an Android application which makes some use of stereoscopic image data from the camera on the HTC Evo 3D. I'm trying to access the data using the standard Android API together with some 3D-specific functions provided by the HTC OpenSense API. So far, I can access the camera in stereoscopic mode and I can grab the image data using the onPreviewFrame() callback method.
However, the "raw" image data (the data[] byte array) available in onPreviewFrame() is not complete. The image I get is a correct side-by-side stereoscopic picture, but its horizontal size is reduced by a factor of two. For example, when I set the camera preview size to 1280x720 px, I expect a 2560x720 px image (two images at the desired 1280x720 px resolution), but what I get is a picture of 1280x720 resolution, half of which comes from the right camera and the other half from the left one. I don't know why the horizontal resolution is reduced.
There is a similar thread on this forum, but the answer doesn't really solve the problem. Although DisplaySetting.setStereoscopic3DFormat() returns true in my program, it doesn't seem to have any effect on the display or the image data.
Has anyone any experience with this issue?
The resolution halving is by design: the parallax barrier display causes 3D photos to be captured at half the horizontal resolution.