In terms of the RAM used by a drawable when it is rendered on the screen, does it make any difference whether the drawable is a vector or a bitmap?
I understand that vectors take less storage space, but I'm asking about the resident RAM needed to render one, since in theory it is still being drawn onto a canvas with the same number of pixels in the end.
Thanks!
From the documentation I read a while ago (I had the same question as you):
The difference between the two options is the size of the APK you release. Vector drawables help reduce APK size.
The initial loading of a vector graphic can cost more CPU cycles than the corresponding raster image. Afterward, memory use and performance are similar between the two. We recommend that you limit a vector image to a maximum of 200 x 200 dp; otherwise, it can take too long to draw.
Once drawn on a view, both options consume the same amount of RAM.
My reference source: https://developer.android.com/studio/write/vector-asset-studio.html#about
Use vector drawables for simple shapes. Using them for complex artwork will rapidly increase the size of the APK.
I’m building an image-intensive social app where images are sent from the server to the device. When the device has a smaller screen resolution, I need to resize the bitmaps on-device to match their intended display sizes.
The problem is that using createScaledBitmap causes me to run into a lot of out-of-memory errors after resizing a horde of thumbnail images.
What’s the most memory efficient way to resize bitmaps on Android?
This answer is summarised from Loading Large Bitmaps Efficiently, which explains how to use inSampleSize to load a down-scaled bitmap version.
In particular, Pre-scaling Bitmaps explains the details of the various methods, how to combine them, and which are the most memory efficient.
There are three dominant ways to resize a bitmap on Android which have different memory properties:
createScaledBitmap API
This API takes an existing bitmap and creates a NEW bitmap with the exact dimensions you’ve selected.
On the plus side, you get exactly the image size you’re looking for (regardless of how it looks). The downside is that this API requires an existing bitmap in order to work, meaning the image has to be loaded, decoded, and a bitmap created before a new, smaller version can be made. That is ideal in terms of getting your exact dimensions, but horrible in terms of additional memory overhead. As such, it’s kind of a deal breaker for most app developers, who tend to be memory conscious.
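As a rough sketch of that memory cost (the file path and target dimensions here are placeholders, not from the answer): both the full-size and the scaled bitmap exist in memory at the same time.
// Full decode first: allocates the complete source bitmap.
Bitmap source = BitmapFactory.decodeFile("/sdcard/photo.jpg");

// Second allocation at the target size; both bitmaps now live in memory.
Bitmap scaled = Bitmap.createScaledBitmap(source, 320, 240, true /* filter */);

// Free the full-size pixels as soon as possible (createScaledBitmap may
// return the same object if no scaling was needed).
if (scaled != source) {
    source.recycle();
}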
inSampleSize flag
BitmapFactory.Options has a property called inSampleSize that resizes your image while decoding it, avoiding the need to decode to a temporary bitmap. The integer value used here loads an image at 1/x of its size in each dimension. For example, setting inSampleSize to 2 returns an image that’s half the size, and setting it to 4 returns an image that’s a quarter of the size. Basically, the decoded size will always be some power of two smaller than your source size.
From a memory perspective, using inSampleSize is a really fast operation. Effectively, it only decodes every Xth pixel of your image into the resulting bitmap. There are two main issues with inSampleSize, though:
It doesn’t give you exact resolutions. It only decreases the size of your bitmap by some power of 2.
It doesn’t produce the best-quality resize. Most resizing filters produce good-looking images by reading blocks of pixels and then weighting them to produce the resized pixel in question. inSampleSize avoids all this by just reading every few pixels. The result is fast and low on memory, but quality suffers.
If you only need to shrink your image by some power-of-two factor, and filtering isn't a concern, then you won't find a more memory-efficient (or performance-efficient) method than inSampleSize.
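For example, a minimal sketch (the file path is a placeholder): decoding the same source with inSampleSize set to 2 and 4 allocates roughly 1/4 and 1/16 of the full-size pixel memory, respectively.
BitmapFactory.Options options = new BitmapFactory.Options();

options.inSampleSize = 2;  // half the width and height -> ~1/4 of the pixels
Bitmap half = BitmapFactory.decodeFile("/sdcard/photo.jpg", options);

options.inSampleSize = 4;  // quarter the width and height -> ~1/16 of the pixels
Bitmap quarter = BitmapFactory.decodeFile("/sdcard/photo.jpg", options);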
inScaled, inDensity, inTargetDensity flags
If you need to scale an image to dimensions that aren't an exact power-of-two reduction of the source, you'll need the inScaled, inDensity and inTargetDensity flags of BitmapFactory.Options. When the inScaled flag is set, the system derives the scaling factor for your bitmap by dividing inTargetDensity by inDensity.
mBitmapOptions.inScaled = true;
mBitmapOptions.inDensity = srcWidth;
mBitmapOptions.inTargetDensity = dstWidth;
// will load & resize the image to be 1/inSampleSize dimensions
mCurrentBitmap = BitmapFactory.decodeResources(getResources(),
mImageIDs, mBitmapOptions);
Using this method will resize your image and also apply a ‘resizing filter’ to it; that is, the end result will look better because some additional math is taken into account during the resizing step. But be warned: that extra filter step takes extra processing time and can quickly add up for big images, resulting in slow resizes and extra memory allocations for the filter itself.
It’s generally not a good idea to apply this technique to an image that’s significantly larger than your desired size, due to the extra filtering overhead.
Magic Combination
From a memory and performance perspective, the best results come from combining these options: set the inSampleSize, inScaled, inDensity and inTargetDensity flags together.
inSampleSize will first be applied to the image, getting it to the next power-of-two LARGER than your target size. Then, inDensity & inTargetDensity are used to scale the result to exact dimensions that you want, applying a filter operation to clean up the image.
Combining these two is a much faster operation, since the inSampleSize step reduces the number of pixels that the subsequent density-based step has to run its resizing filter over.
mBitmapOptions.inScaled = true;
mBitmapOptions.inSampleSize = 4;
mBitmapOptions.inDensity = srcWidth;
mBitmapOptions.inTargetDensity = dstWidth * mBitmapOptions.inSampleSize;
// will load & resize the image to be 1/inSampleSize dimensions
mCurrentBitmap = BitmapFactory.decodeFile(fileName, mBitmapOptions);
If you need to fit an image to specific dimensions, with some nicer filtering, then this technique is the best bridge to getting the right size in a fast, low-memory operation.
Getting the image dimensions without decoding the whole image
In order to resize your bitmap, you’ll need to know the incoming dimensions. You can use the inJustDecodeBounds flag to get the dimensions of the image without having to actually decode the pixel data.
// Decode just the boundaries
mBitmapOptions.inJustDecodeBounds = true;
BitmapFactory.decodeFile(fileName, mBitmapOptions);
srcWidth = mBitmapOptions.outWidth;
srcHeight = mBitmapOptions.outHeight;
// now reset inJustDecodeBounds to false and decode again at the size you want
You can use this flag to decode the size first, and then calculate the proper values for scaling to your target resolution.
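Putting it all together, here is a hedged sketch of the whole flow described above; decodeSampledBitmap, fileName, and dstWidth are illustrative names rather than part of the original answer.
// Sketch: bounds-only decode, then the combined inSampleSize + density resize.
static Bitmap decodeSampledBitmap(String fileName, int dstWidth) {
    BitmapFactory.Options options = new BitmapFactory.Options();

    // 1. Read only the dimensions.
    options.inJustDecodeBounds = true;
    BitmapFactory.decodeFile(fileName, options);
    int srcWidth = options.outWidth;

    // 2. Largest power of two that still keeps the decoded image >= dstWidth.
    int sampleSize = 1;
    while (srcWidth / (sampleSize * 2) >= dstWidth) {
        sampleSize *= 2;
    }

    // 3. Real decode: inSampleSize shrinks by a power of two, then the
    //    density pair filters the result down to exactly dstWidth.
    options.inJustDecodeBounds = false;
    options.inSampleSize = sampleSize;
    options.inScaled = true;
    options.inDensity = srcWidth;
    options.inTargetDensity = dstWidth * sampleSize;
    return BitmapFactory.decodeFile(fileName, options);
}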
As nice (and accurate) as this answer is, it's also very complicated. Rather than re-invent the wheel, consider libraries like Glide, Picasso, UIL, Ion, or any number of others that implement this complex and error prone logic for you.
Colt himself even recommends taking a look at Glide and Picasso in the Pre-scaling Bitmaps Performance Patterns Video.
By using libraries, you can get every bit of efficiency mentioned in Colt's answer, but with vastly simpler APIs that work consistently across every version of Android.
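For example, with Glide (assuming the Glide dependency is already set up; the URL and view names here are placeholders), the whole load-and-downscale flow collapses to a single call, and Glide sizes the decode to the target ImageView for you:
Glide.with(context)
     .load("https://example.com/photo.jpg")
     .into(imageView);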
I'm currently facing several performance issues (out-of-memory errors) when handling a vast number of bitmaps. As this is a problem that can be fixed, I'm wondering if anybody can explain to me the difference between the following methods.
If I only want to load an image into an ImageView I usually use:
imageView.setImageDrawable(getResources().getDrawable(R.drawable.id));
If I want to sample the drawable beforehand I usually use (here without sampling):
Bitmap bm = BitmapFactory.decodeResource(getResources(), R.drawable.id);
imageView.setImageBitmap(bm);
My question is related to performance optimisation. I'm wondering whether it is better to provide as many drawables as possible in the different drawable folders (so the drawables nearly match the required resolution on each device), or whether it is better to downsample high-quality drawables at runtime. What does setImageDrawable do internally? Does it decode the resource using BitmapFactory, just without sampling? There seems to be a trade-off between the actual size of the app and the CPU and memory load at runtime.
If you're concerned about APK size, then having as many drawables as possible is not the ideal way to go. But don't forget: when you decode a bitmap, you can pass a sample size so it will scale down to the screen size and only give you the pixels you need, so older phones with smaller screens won't need to decode 8 MP images.
Check BitmapFactory.Options.
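For instance, a rough sketch of decoding a resource at roughly the ImageView's size instead of shipping one drawable per density bucket (this assumes the view has already been measured; names mirror the question above):
BitmapFactory.Options options = new BitmapFactory.Options();

// Read only the resource bounds.
options.inJustDecodeBounds = true;
BitmapFactory.decodeResource(getResources(), R.drawable.id, options);

// Pick the largest power-of-two sample size that keeps the decoded
// width at or above the view width.
int targetWidth = Math.max(1, imageView.getWidth());  // guard against an unmeasured view
int sampleSize = 1;
while (options.outWidth / (sampleSize * 2) >= targetWidth) {
    sampleSize *= 2;
}

options.inJustDecodeBounds = false;
options.inSampleSize = sampleSize;
imageView.setImageBitmap(
        BitmapFactory.decodeResource(getResources(), R.drawable.id, options));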
Firstly, I am aware of the recommended approach of using inJustDecodeBounds and inSampleSize to load bitmaps at a size close to the desired size. This is, however, a fairly broad approach that only gets an image approximately at the target size.
I have thought of utilising options.inDensity and options.inTargetDensity to trick the native loader into scaling an image more precisely to the desired target size. Basically, I set options.inDensity to the actual width of the image and options.inTargetDensity to the desired width, and I do indeed get an image at the desired size (the aspect ratio happens to remain the same in this case). I then call image.setDensity(DENSITY_NONE) on the resulting image and all appears to work OK.
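For concreteness, here is a minimal sketch of what I'm doing (fileName and desiredWidth are just illustrative names):
BitmapFactory.Options options = new BitmapFactory.Options();

// First pass: read the actual dimensions only.
options.inJustDecodeBounds = true;
BitmapFactory.decodeFile(fileName, options);

// Second pass: let the native decoder scale by desiredWidth / actual width.
options.inJustDecodeBounds = false;
options.inScaled = true;
options.inDensity = options.outWidth;    // actual width of the image
options.inTargetDensity = desiredWidth;  // exact width I want

Bitmap scaled = BitmapFactory.decodeFile(fileName, options);
scaled.setDensity(Bitmap.DENSITY_NONE);  // prevent further density scaling when drawn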
Anyone know of anything wrong with this approach? Any thoughts on memory efficiency and image quality?
I have always had better image management with OpenGL 2.0 and surface views.
Sounds brilliant to me! (Can't believe the Android devs wrote the code but didn't expose the functionality in a sane and sensible way.)
I do have one concern. I have good reason to believe that Android is unable to deal with instantiated bitmaps larger than 2048 pixels in either dimension. If the internal rescaling code isn't sufficiently intelligent, it may fail when loading bitmaps larger than 2048x2048.
I was thinking about this myself: using inDensity and inTargetDensity to scale a bitmap up or down on decode. It works well, but unfortunately it yields very bad performance (animation) results. I was hoping I could use this as a "universal" approach to scale up or down on decode, similar to inSampleSize, which unfortunately only downsamples. There seems to be a different native implementation: inSampleSize performs well, with no obvious performance impact, whereas inDensity/inTargetDensity introduces a noticeable performance impact (like slow motion).
Or am I missing something?
I am trying to cache big bitmaps for drawing on screen in Android, but now I am facing an OutOfMemoryError saying that the bitmap allocation exceeds the VM budget.
I need to minimize the size of the bitmap but I cannot reduce the resolution. For my use case, I need to only save the shape of the bitmap and apply color later when actually drawing, so I am using ALPHA_8 as the bitmap configuration.
I want to know if there is a 1-bit-per-pixel (either completely opaque or completely transparent) bitmap configuration, or any similar way to save memory?
Reducing the color depth from 8 bits to 1 would, of course, help a little. However, it doesn't really solve the problem but just postpones it. It only means that you'll get the OOME later but you'll still get it.
Consider moving your cache from RAM to disk and, optionally, add a smaller RAM-based cache on top of it to improve performance.
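For the RAM layer, a minimal sketch using the platform android.util.LruCache sized by byte count rather than entry count (the one-eighth-of-heap budget is just a common rule of thumb, not a requirement):
// Memory cache keyed by image id/URL, bounded by total bitmap bytes.
final int maxBytes = (int) (Runtime.getRuntime().maxMemory() / 8);
LruCache<String, Bitmap> memoryCache = new LruCache<String, Bitmap>(maxBytes) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getByteCount();  // charge each entry its real pixel size
    }
};

// On a miss, fall back to the disk cache (or re-decode), then put() the
// bitmap back into the memory cache for the next draw.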