Android ImageFormat size comparison

Is there any comparison as to the size in memory or on disk of the various ImageFormat types available on Android?
I'm looking for the lowest size format (or alternatively the most compression) with little regard to image quality. I see the JPEG and HEIC formats are the only ones with the word "compressed" so I'd assume the lowest footprint would use one of these, but I'm not seeing any info that would allow me to compare these (or the potential other formats) with regard to memory size.
I was thinking to try generating one of each format and comparing, but I see no easy way to capture or convert to the vast majority of the types listed. For example, using the CameraX ImageAnalysis use case, it seems I can only get images in YUV_420_888 format. There are a few 3rd party or SO answers that could potentially convert to a few other formats, but I'm not seeing any easy way to compare these formats to compare the inherent sizes of each.
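For the uncompressed formats, at least, the sizes can be compared without capturing anything: each format has a fixed bits-per-pixel figure (the values Android reports via ImageFormat.getBitsPerPixel()). The sketch below is a rough back-of-envelope comparison, assuming the documented bpp values; JPEG and HEIC are excluded because their size depends entirely on the image content and encoder settings.

```java
// Rough uncompressed per-frame sizes for common Android image formats,
// using the bits-per-pixel values that ImageFormat.getBitsPerPixel()
// documents. Compressed formats (JPEG, HEIC) have no fixed size and
// cannot be compared this way.
public class FrameSizes {
    static long frameBytes(int width, int height, int bitsPerPixel) {
        return (long) width * height * bitsPerPixel / 8;
    }

    public static void main(String[] args) {
        int w = 1920, h = 1080;
        System.out.println("YUV_420_888 / NV21 / YV12 (12 bpp): " + frameBytes(w, h, 12));
        System.out.println("RGB_565 / YUY2 (16 bpp):            " + frameBytes(w, h, 16));
        System.out.println("RAW_SENSOR (16 bpp):                " + frameBytes(w, h, 16));
        System.out.println("RGBA_8888 (32 bpp):                 " + frameBytes(w, h, 32));
    }
}
```

By this measure the planar YUV formats are already the smallest uncompressed option (half the size of RGB_565, a third of RGBA_8888); only JPEG/HEIC would go lower, at the cost of an encode step.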

Related

Camera2 - most efficient way to obtain a still capture Bitmap

To start with the question: what is the most efficient way to initialize and use ImageReader with the camera2 api, knowing that I am always going to convert the capture into a Bitmap?
I'm playing around with the Android camera2 samples, and everything is working quite nicely. However, for my purposes I always need to perform some post processing on captured still images, for which I require a Bitmap object. Presently I am using BitmapFactory.decodeByteArray(...) using the bytes coming from the ImageReader.acquireNextImage().getPlanes()[0].getBuffer() (I'm paraphrasing). While this works acceptably, I still feel like there should be a way to improve performance. The captures are encoded in ImageFormat.Jpeg and need to be decoded again to get the Bitmap, which seems redundant. Ideally I'd obtain them in PixelFormat.RGB_888 and just copy that to a Bitmap using Bitmap.copyPixelsFromBuffer(...), but it doesn't seem like initializing an ImageReader with that format has reliable device support. YUV_420_888 could be another option, but looking around SO it seems that it requires jumping through some hoops to decode into a Bitmap. Is there a recommended way to do this?
The question is what you are optimizing for.
Jpeg is without doubt the easiest format, supported by all devices. Decoding it to a bitmap is not as redundant as it seems, because encoding the picture into jpeg is usually performed by dedicated hardware. This means minimal bandwidth is used to transmit the image from the sensor to your application; on some devices this is the only way to get maximum resolution. BitmapFactory.decodeByteArray(...) is often backed by a special hardware decoder, too. The major problem with this call is that it may cause an out-of-memory exception, because the output bitmap is too big. That is why you will find many examples that do subsampled decoding, tuned for the use case where the bitmap must be displayed on the phone screen.
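The subsampled decoding mentioned above boils down to one piece of arithmetic: picking the largest power-of-two inSampleSize that still leaves the decoded bitmap at least as big as required. A sketch of that helper (the math only; in an Android app the result would be assigned to BitmapFactory.Options.inSampleSize before calling decodeByteArray):

```java
// Picks the largest power-of-two sample size such that the decoded
// bitmap is still at least reqWidth x reqHeight. BitmapFactory rounds
// inSampleSize down to a power of two anyway, so only powers of two
// are considered.
public class SampleSize {
    static int calculateInSampleSize(int width, int height, int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            int halfWidth = width / 2;
            int halfHeight = height / 2;
            // Keep doubling while both dimensions stay above the request.
            while (halfHeight / inSampleSize >= reqHeight
                    && halfWidth / inSampleSize >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // A 12 MP capture decoded for a ~1080p screen only needs every 2nd pixel.
        System.out.println(calculateInSampleSize(4032, 3024, 1080, 810)); // 2
    }
}
```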
If your device supports the required resolution with RGB_8888, go for it: this needs minimal post-processing. But scaling such an image down may be more CPU intensive than dealing with Jpeg, and memory consumption may be huge. Anyway, only a few devices support this format for camera capture.
As for YUV_420_888 and other YUV formats, the advantages over Jpeg are even smaller than for RGB.
If you need the best quality image and don't have memory limitations, you should go for RAW images which are supported on most high-end devices these days. You will need your own conversion algorithm, and probably make different adaptations for different devices, but at least you will have full command of the picture acquisition.
After a while I now sort of have an answer to my own question, albeit not a very satisfying one. After much consideration I attempted the following:
Setup a ScriptIntrinsicYuvToRGB RenderScript of the desired output size
Take the Surface of the used input allocation, and set this as the target surface for the still capture
Run this RenderScript when a new allocation is available and convert the resulting bytes to a Bitmap
This actually worked like a charm, and was super fast. Then I started noticing weird behavior from the camera, which happened on other devices as well. As it would turn out, the camera HAL doesn't really recognize this as a still capture. This means that (a) the flash / exposure routines don't fire in this case when they need to and (b) if you have initiated a precapture sequence before your capture auto-exposure will remain locked unless you manage to unlock it using AE_PRECAPTURE_TRIGGER_CANCEL (API >= 23) or some other lock / unlock magic which I couldn't get to work on either device. Unless you're fine with this only working in optimal lighting conditions where no exposure adjustment is necessary, this approach is entirely useless.
I have one more idea, which is to setup an ImageReader with a YUV_420_888 output and incorporating the conversion routine from this answer to get RGB pixels from it. However, I'm actually working with Xamarin.Android, and RenderScript user scripts are not supported there. I may be able to hack around that, but it's far from trivial.
For my particular use case I have managed to speed up JPEG decoding to acceptable levels by carefully arranging background tasks with subsampled decodes of the versions I need at multiple stages of my processing, so implementing this likely won't be worth my time any time soon. If anyone is looking for ideas on how to approach something similar, though, that's what you could do.
Change the ImageReader instance to use a different ImageFormat, like this:
ImageReader.newInstance(width, height, ImageFormat.JPEG, 1)

How to find out maximum bitmap size to be loaded into an ImageView? [duplicate]

I would like to know if there is any kind of limitation on the texture size that can be used in an Android OpenGL ES 2.0 project. I understand that having a huge 4096x4096 texture is a bit meaningless, as it is rendered on a small screen. But what if the requirement is to switch between many textures at run time? And what if I want a texture atlas to do a quick single upload instead of multiple smaller texture uploads? Please let me know your ideas in this regard.
Also, I am sure there has to be a limitation on the size of image a device can process, as the memory on the device is limited. But I would like to know if it is resolution based or size based. I mean, if a device has a limitation of 1024x1024 image size, can it handle a compressed 2048x2048 texture that would be approximately the same size as an uncompressed 1024x1024 one?
Also, please let me know roughly what the limitation on texture size or resolution would generally be for normal devices running Android 2.2 and above.
Also, please let me know if there are any best practices when handling high-resolution images in OpenGL ES 2.0 to get the best performance at both load time and run time.
There is a hardware limitation on texture sizes. To look them up manually, you can go to a site such as glbenchmark.com (here displaying details about the Google Galaxy Nexus).
To automatically find the maximum size from your code, you can use something like:
int[] max = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, max, 0); // puts the maximum texture size in the array
(This is for GL10; with GLES20 the equivalent is the static call GLES20.glGetIntegerv(GLES20.GL_MAX_TEXTURE_SIZE, max, 0).)
When it comes to processing or editing an image, you usually work with an instance of Bitmap in Android. This holds the uncompressed values of your image and is thus resolution dependent. However, it is recommended that you use compressed textures for your OpenGL applications, as this improves memory-use efficiency (note that you cannot modify these compressed textures).
From the previous link:
Texture compression can significantly increase the performance of your
OpenGL application by reducing memory requirements and making more
efficient use of memory bandwidth. The Android framework provides
support for the ETC1 compression format as a standard feature [...]
You should take a look at this document which contains many good practices and hints about texture loading and usage. The author explicitly writes:
Best practice: Use ETC for texture compression.
Best practice: Make sure your geometry and texture resolutions are
appropriate for the size they're displayed at. Don't use a 1k x 1k
texture for something that's at most 500 pixels wide on screen. The
same for geometry.
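To make the question's "compressed 2048x2048 vs uncompressed 1024x1024" comparison concrete, here is the byte arithmetic, assuming ETC1's documented 4 bits per pixel against ARGB_8888's 32. Note that GL_MAX_TEXTURE_SIZE is a limit on dimensions, not bytes: a 2048x2048 ETC1 texture still requires 2048-pixel texture support even though it occupies less memory than a 1024x1024 uncompressed bitmap.

```java
// Byte counts for the resolution-vs-size question: ETC1 stores 4 bits
// per pixel, an uncompressed ARGB_8888 bitmap stores 32. The hardware
// texture-size limit, however, is checked against dimensions, not bytes.
public class TextureMemory {
    static long argb8888Bytes(int w, int h) { return (long) w * h * 4; } // 32 bpp
    static long etc1Bytes(int w, int h)     { return (long) w * h / 2; } //  4 bpp

    public static void main(String[] args) {
        System.out.println("1024x1024 ARGB_8888: " + argb8888Bytes(1024, 1024)); // 4 MiB
        System.out.println("2048x2048 ETC1:      " + etc1Bytes(2048, 2048));     // 2 MiB
    }
}
```

So the answer to that sub-question is "no": a device whose GL_MAX_TEXTURE_SIZE is 1024 cannot use a 2048x2048 texture regardless of how small its compressed form is.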

SSIM between different image resolutions

I need to compare the quality of streaming between Linux desktop server and an Android client. So I have two images one from Linux server and the other one from Android client and they have different resolution.
My question is how to calculate SSIM between these two images (I do not need code, just a direction to the solution). I already have SSIM code in C++, but it only compares images of the same resolution.
Thanks!
What you are trying to achieve is not entirely straightforward. Comparing the objective quality of two images of different resolution using a metric like SSIM is ill-defined. This is due to a plethora of factors, among the foremost being:
a) Sampling-related, where the comparison of a number of samples in the reference image against a subset of those samples in the downscaled (degraded) image is not particularly meaningful (how do you compare something against nothing?)
b) Due to characteristics of the human visual system, most notably the contrast sensitivity function, which in this case can be summed up as that a human observer will perceive the lightness of the samples differently, due to differing spatial frequencies.
The way forward here depends on the actual problem formulation, which is not clear. What is it exactly that you are trying to measure? If you are measuring the quality of a resampling algorithm, that is something for which SSIM is not an appropriate metric. If you are trying to measure the relative quality of two different image sets, then the obvious answer is to create two uncompressed images at the different resolutions, encode and measure them separately, and compare their respective SSIM values.
Do note that the original SSIM does not account for various HVS properties which would come into play, and you may be better off trying to find a different metric. Also, in case you mean to measure the quality of a stream (i.e. video), SSIM does not perform particularly well.
You could use this workaround:
Scale down the bigger image to match the resolution of the smaller image.
After the scale-down you can compare the images with SSIM.
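A hypothetical sketch of that workaround on grayscale arrays: nearest-neighbour downscaling of the larger image, followed by a global SSIM over the whole image. Note this is a simplification for illustration; the standard SSIM averages local scores over an 11x11 Gaussian-weighted sliding window rather than using one global window.

```java
// Workaround sketch: resize the larger grayscale image to the smaller
// resolution, then compute a single global SSIM score (the real metric
// averages windowed local scores; this variant only shows the pipeline).
public class SsimDemo {
    static final double C1 = 6.5025, C2 = 58.5225; // (0.01*255)^2, (0.03*255)^2

    // Nearest-neighbour resize; real code would use a proper filter.
    static double[][] resize(double[][] src, int w, int h) {
        int sh = src.length, sw = src[0].length;
        double[][] dst = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                dst[y][x] = src[y * sh / h][x * sw / w];
        return dst;
    }

    static double ssim(double[][] a, double[][] b) {
        int h = a.length, w = a[0].length, n = w * h;
        double ma = 0, mb = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) { ma += a[y][x]; mb += b[y][x]; }
        ma /= n; mb /= n;
        double va = 0, vb = 0, cov = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                va  += (a[y][x] - ma) * (a[y][x] - ma);
                vb  += (b[y][x] - mb) * (b[y][x] - mb);
                cov += (a[y][x] - ma) * (b[y][x] - mb);
            }
        va /= n; vb /= n; cov /= n;
        return ((2 * ma * mb + C1) * (2 * cov + C2))
             / ((ma * ma + mb * mb + C1) * (va + vb + C2));
    }

    public static void main(String[] args) {
        double[][] server = new double[64][64];            // "server" frame
        for (int y = 0; y < 64; y++)
            for (int x = 0; x < 64; x++) server[y][x] = (x + y) % 256;
        double[][] client = resize(server, 32, 32);        // lower-res "client" frame
        System.out.println(ssim(resize(server, 32, 32), client)); // identical -> 1.0
    }
}
```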

What is the fastest Android camera image format?

I'm grabbing images from an Android camera using onPreviewFrame(). I only really need the grayscale portion of the image, which I'm passing as a byte array to some native code.
On the newer OS versions that I'm supporting, both YV12 and NV21 preview formats are supported, with NV21 as the default.
Is one of these likely to be faster (i.e., matching the hardware format and requiring no extra processing by the OS) on most devices? Is it something entirely dependent on the device manufacturer?
The manufacturer might matter depending on camera type and hardware. The NV21 format is the Android default. As far as speed goes, I doubt you'll notice a difference regardless of format. Personally I would use NV21 (YCrCb). Here is a link on the different formats: Types of Android Image Formatting
I believe any processing you do will take orders of magnitude longer than the hardware conversion between frame formats. If it doesn't, the difference will be completely device-dependent.
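One detail worth noting for the grayscale use case: in both NV21 and YV12, the luma (Y) plane is the first width*height bytes of the buffer, so extracting the grayscale portion is the same cheap copy for either preview format. A minimal sketch:

```java
// In both NV21 and YV12 the Y (luma) plane occupies the first
// width*height bytes of the 12-bpp frame buffer, so the grayscale
// extraction is format-independent: a single contiguous copy.
public class LumaExtract {
    static byte[] lumaPlane(byte[] frame, int width, int height) {
        byte[] y = new byte[width * height];
        System.arraycopy(frame, 0, y, 0, width * height);
        return y;
    }

    public static void main(String[] args) {
        int w = 4, h = 2;
        byte[] frame = new byte[w * h * 3 / 2]; // 12-bpp YUV frame (Y then chroma)
        for (int i = 0; i < frame.length; i++) frame[i] = (byte) i;
        byte[] y = lumaPlane(frame, w, h);
        System.out.println(y.length); // 8
    }
}
```

This is one reason the format choice barely matters here: the bytes you actually need sit in the same place either way.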

