I’m building an image-intensive social app where images are sent from the server to the device. When the device has smaller screen resolutions, I need to resize the bitmaps, on device, to match their intended display sizes.
The problem is that using createScaledBitmap causes me to run into a lot of out-of-memory errors after resizing a horde of thumbnail images.
What’s the most memory efficient way to resize bitmaps on Android?
This answer is summarised from Loading Large Bitmaps Efficiently, which explains how to use inSampleSize to load a down-scaled bitmap version. In particular, Pre-scaling Bitmaps explains the details of the various methods, how to combine them, and which are the most memory efficient.
There are three dominant ways to resize a bitmap on Android which have different memory properties:
createScaledBitmap API
This API will take in an existing bitmap, and create a NEW bitmap with the exact dimensions you’ve selected.
On the plus side, you get exactly the image size you're looking for (regardless of how it looks). The downside is that this API requires an existing bitmap in order to work: the image has to be loaded, decoded, and a bitmap created before you can create a new, smaller version. That's ideal in terms of getting your exact dimensions, but horrible in terms of additional memory overhead. As such, it's something of a deal breaker for most app developers, who tend to be memory conscious.
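As a rough back-of-envelope sketch of that overhead (plain Java, with illustrative dimensions; the real cost on a device also depends on the bitmap config and heap state): while createScaledBitmap runs, both the source and destination bitmaps are alive at once.

```java
/** Rough peak-memory estimate for createScaledBitmap: the source and
 *  destination bitmaps coexist, so their pixel buffers add up. */
public class CreateScaledCost {
    static final int BYTES_PER_PIXEL = 4; // ARGB_8888, the default config

    public static long peakBytes(int srcW, int srcH, int dstW, int dstH) {
        long src = (long) srcW * srcH * BYTES_PER_PIXEL;
        long dst = (long) dstW * dstH * BYTES_PER_PIXEL;
        return src + dst; // both allocations overlap in time
    }

    public static void main(String[] args) {
        // A 2048x1536 photo scaled down to a 512x384 thumbnail:
        System.out.println(peakBytes(2048, 1536, 512, 384)); // 13369344 (~12.75 MB)
    }
}
```

For a single thumbnail that may be tolerable; decoding a "horde" of them this way is how the OP's out-of-memory errors arise.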
inSampleSize flag
BitmapFactory.Options has a property named inSampleSize that will resize your image while decoding it, avoiding the need to decode to a temporary bitmap. The integer value used here loads an image at 1/x of its original size. For example, setting inSampleSize to 2 returns an image that's half the size, and setting it to 4 returns an image that's a quarter of the size. In short, image sizes will always be some power of two smaller than your source size.
From a memory perspective, using inSampleSize is a really fast operation. Effectively, it only decodes every Xth pixel of your image into your resulting bitmap. There are two main issues with inSampleSize, though:
It doesn’t give you exact resolutions. It only decreases the size of your bitmap by some power of 2.
It doesn’t produce the best-quality resize. Most resizing filters produce good-looking images by reading blocks of pixels and then weighting them to produce the resized pixel in question. inSampleSize avoids all this by just reading every few pixels. The result is quite performant and uses little memory, but quality suffers.
If you're only dealing with shrinking your image by some power-of-two factor, and filtering isn't an issue, then you can't find a more memory-efficient (or performance-efficient) method than inSampleSize.
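The resulting dimensions can be sketched with simple arithmetic (plain Java; the exact decoded size can differ by a pixel or two depending on the codec, so treat this as an approximation):

```java
/** Approximate decoded dimension under inSampleSize: the decoder keeps
 *  roughly every Nth pixel in each direction. */
public class SampleSizeMath {
    public static int sampledDimension(int src, int inSampleSize) {
        return src / inSampleSize;
    }

    public static void main(String[] args) {
        // On Android this corresponds to:
        //   options.inSampleSize = 4;
        //   BitmapFactory.decodeFile(path, options);
        System.out.println(sampledDimension(2048, 4)); // 512
    }
}
```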
inScaled, inDensity, inTargetDensity flags
If you need to scale an image to a dimension that's not a power of two smaller, then you'll need the inScaled, inDensity and inTargetDensity flags of BitmapFactory.Options. When the inScaled flag is set, the system derives the scaling value to apply to your bitmap by dividing inTargetDensity by inDensity.
mBitmapOptions.inScaled = true;
mBitmapOptions.inDensity = srcWidth;
mBitmapOptions.inTargetDensity = dstWidth;
// will load & resize the image to dstWidth/srcWidth of its original size
mCurrentBitmap = BitmapFactory.decodeResource(getResources(),
        mImageID, mBitmapOptions);
Using this method will resize your image and also apply a 'resizing filter' to it; that is, the end result will look better because some additional math has been taken into account during the resizing step. But be warned: that extra filter step takes extra processing time and can quickly add up for big images, resulting in slow resizes and extra memory allocations for the filter itself.
It’s generally not a good idea to apply this technique to an image that’s significantly larger than your desired size, due to the extra filtering overhead.
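The scale the decoder derives can be sketched as follows (plain Java; the constants are illustrative, and the decoder's rounding may differ by a pixel):

```java
/** Width produced by the density-based decode:
 *  the decoder scales by inTargetDensity / inDensity. */
public class DensityScaleMath {
    public static int scaledWidth(int srcWidth, int inDensity, int inTargetDensity) {
        return Math.round(srcWidth * (inTargetDensity / (float) inDensity));
    }

    public static void main(String[] args) {
        // With inDensity = source width and inTargetDensity = desired width,
        // the result is exactly the desired, non-power-of-two width:
        System.out.println(scaledWidth(2048, 2048, 700)); // 700
    }
}
```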
Magic Combination
From a memory and performance perspective, you can combine these options for the best results, setting the inSampleSize, inScaled, inDensity and inTargetDensity flags together.
inSampleSize will first be applied to the image, getting it to the next power-of-two LARGER than your target size. Then, inDensity & inTargetDensity are used to scale the result to exact dimensions that you want, applying a filter operation to clean up the image.
Combining the two is a much faster operation, since the inSampleSize step reduces the number of pixels on which the subsequent density-based step needs to apply its resizing filter.
mBitmapOptions.inScaled = true;
mBitmapOptions.inSampleSize = 4;
mBitmapOptions.inDensity = srcWidth;
mBitmapOptions.inTargetDensity = dstWidth * mBitmapOptions.inSampleSize;
// will load & resize the image to exactly the target (dstWidth) dimensions
mCurrentBitmap = BitmapFactory.decodeFile(fileName, mBitmapOptions);
If you need to fit an image to specific dimensions with some nicer filtering, then this technique is the best bridge to getting the right size in a fast, low-memory-footprint operation.
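Why the code above multiplies inTargetDensity by inSampleSize can be checked with a quick calculation (plain Java sketch; numbers are illustrative): the sub-sampling step shrinks by the sample factor, and the density step must compensate for it to land on the exact target.

```java
/** Final width after inSampleSize sub-sampling followed by the density
 *  scale, with inTargetDensity = dstWidth * inSampleSize. */
public class CombinedScaleMath {
    public static int finalWidth(int srcWidth, int dstWidth, int inSampleSize) {
        int sampled = srcWidth / inSampleSize;                     // cheap pow-2 decode step
        float densityScale = (dstWidth * inSampleSize) / (float) srcWidth;
        return Math.round(sampled * densityScale);                 // filtered exact step
    }

    public static void main(String[] args) {
        // 2048px source, 700px target, sampled down 4x first:
        System.out.println(finalWidth(2048, 700, 4)); // 700
    }
}
```

The filter only has to touch a 512px-wide intermediate instead of the full 2048px source, which is where the speed and memory win comes from.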
Getting the image size without decoding the whole image
In order to resize your bitmap, you'll need to know the incoming dimensions. You can use the inJustDecodeBounds flag to get the dimensions of the image without needing to actually decode the pixel data.
// Decode just the boundaries
mBitmapOptions.inJustDecodeBounds = true;
BitmapFactory.decodeFile(fileName, mBitmapOptions);
srcWidth = mBitmapOptions.outWidth;
srcHeight = mBitmapOptions.outHeight;
//now go resize the image to the size you want
You can use this flag to decode the size first, and then calculate the proper values for scaling to your target resolution.
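That calculation can be sketched as a small pure-Java helper (the method name is my own; on Android the inputs would come from outWidth/outHeight after an inJustDecodeBounds decode):

```java
/** Largest power-of-two inSampleSize that keeps both decoded
 *  dimensions at or above the requested size. */
public class InSampleSizeCalc {
    public static int calculateInSampleSize(int srcW, int srcH, int reqW, int reqH) {
        int inSampleSize = 1;
        while ((srcW / (inSampleSize * 2)) >= reqW
                && (srcH / (inSampleSize * 2)) >= reqH) {
            inSampleSize *= 2;
        }
        return inSampleSize;
    }

    public static void main(String[] args) {
        // 4000x3000 source, 1024x768 target: halving once still covers
        // the target, halving twice would undershoot, so the answer is 2.
        System.out.println(calculateInSampleSize(4000, 3000, 1024, 768)); // 2
    }
}
```

The decoded result is then still at or above the target, ready for the exact density-based (or createScaledBitmap) step.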
As nice (and accurate) as this answer is, it's also very complicated. Rather than reinvent the wheel, consider libraries like Glide, Picasso, UIL, Ion, or any number of others that implement this complex and error-prone logic for you.
Colt himself even recommends taking a look at Glide and Picasso in the Pre-scaling Bitmaps Performance Patterns Video.
By using libraries, you can get every bit of efficiency mentioned in Colt's answer, but with vastly simpler APIs that work consistently across every version of Android.
Related
Firstly, I am aware of the recommended approach of using inJustDecodeBounds and inSampleSize to load bitmaps at a size close to the desired size. However, this is a fairly broad approach that only gets an image approximately to the target size.
I have thought of utilising options.inDensity and options.inTargetDensity to trick the native loader into scaling an image more precisely to the desired target size. Basically, I set options.inDensity to the actual width of the image and options.inTargetDensity to the desired width, and I do indeed get an image at the desired size (the aspect ratio happens to remain the same in this case). I then call image.setDensity(DENSITY_NONE) on the resulting image and all appears to work OK.
Anyone know of anything wrong with this approach? Any thoughts on memory efficiency and image quality?
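One property of the trick described above, sketched in plain Java (illustrative numbers only): because the decoder applies a single uniform scale factor, the height follows the aspect ratio automatically.

```java
/** With inDensity = source width and inTargetDensity = target width,
 *  one uniform scale is applied, so height tracks the aspect ratio. */
public class DensityTrickMath {
    public static int resultingHeight(int srcW, int srcH, int dstW) {
        return Math.round(srcH * (dstW / (float) srcW));
    }

    public static void main(String[] args) {
        // A 1600x1200 source scaled to a 640px target width:
        System.out.println(resultingHeight(1600, 1200, 640)); // 480
    }
}
```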
I have always gotten better image management with OpenGL ES 2.0 and surface views.
Sounds brilliant to me! (Can't believe the Android devs wrote the code but didn't expose the functionality in a sane and sensible way.)
I do have one concern: I have good reason to believe that Android is unable to deal with instantiated bitmaps larger than 2048 pixels in either dimension. If the internal rescaling code isn't sufficiently intelligent, it may fail when loading bitmaps larger than that.
I was thinking about this myself, using inDensity and inTargetDensity to scale a bitmap up or down on decode. It works well, but unfortunately it yields very bad performance (animation) results. I was hoping I could use this as a "universal" approach to scale up or down on decode, similar to inSampleSize, which unfortunately only downsamples. There seems to be a different native implementation: inSampleSize performs well, with no obvious performance impact, whereas inDensity/inTargetDensity introduced a noticeable performance impact (like slow motion).
Or am I missing something?
I have an application that displays lots of images, and the images can be of varying size, up to the full screen dimensions of the device. The images are all downloaded, and I use ImageMagick to reduce the colors of each image to compress the file size, while keeping the dimensions the same, to reduce download time. The reduced color space is fine for my application.
The problem is that when I load the image into a Bitmap in Android, the in-memory size is much larger because of Android's default Bitmap config of ARGB_8888, and I do need the alpha channel. Since ARGB_4444 is deprecated and had performance issues, I am not using that. I am wondering if there is any other way to reduce the memory footprint of the loaded Bitmap while keeping the dimensions the same as the original?
---- Update ---
After discussions here and lots of other searching/reading, it does not appear that there is a straightforward way to do this. I could use ARGB_4444, which stores each pixel as 2 bytes rather than 4. The problem there is that ARGB_4444 is deprecated, and I try not to use deprecated parts of the API for obvious reasons. Android recommends use of ARGB_8888, which is the default; since I need alpha, I have to use that. Since there are many applications that do not need such a complex color space, it would be nice to see ARGB_4444 or something similar become part of the supported API.
If you don't need alpha (transparency) you can use:
Bitmap.Config.RGB_565
which uses 2 bytes per pixel instead of 4, reducing memory by half.
You should also look at BitmapFactory.Options:
inSampleSize
inScaled
These values, correctly set for your requirements, may have a very positive effect on a Bitmap's memory requirements.
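The savings from the pixel format alone are easy to quantify (plain Java sketch; a 1080x1920 full-screen image is assumed for illustration):

```java
/** In-memory size of a decoded bitmap: width * height * bytes per pixel. */
public class PixelFormatCost {
    public static long bitmapBytes(int w, int h, int bytesPerPixel) {
        return (long) w * h * bytesPerPixel;
    }

    public static void main(String[] args) {
        long argb8888 = bitmapBytes(1080, 1920, 4); // ARGB_8888: 4 bytes/pixel
        long rgb565   = bitmapBytes(1080, 1920, 2); // RGB_565:   2 bytes/pixel
        System.out.println(argb8888); // 8294400
        System.out.println(rgb565);   // 4147200
    }
}
```

Note the file size on disk (PNG/JPEG) is unrelated to this: a heavily compressed download still decodes to the full width * height * bytesPerPixel in memory, which is the problem the question describes.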
Read a similar answer I posted here
Basically, Bitmap.recycle() is required on pre-Honeycomb devices to reduce memory usage.
Can someone suggest a library that can do the simplest operations, like scale, crop, and rotate, without loading the image fully into memory?
The situation: I need to scale an image down from a very large size, but the scaled-down image is still too large to allocate in memory (using the standard Android tools). Since I only need to upload the scaled-down version, I thought of scaling it through a native library and uploading it through a FileInputStream.
I've tried to use ImageMagick and it does the job, but performance is very poor (maybe there is a way to speed things up?).
Might want to check out OpenCV for Android
You can use the standard Android Bitmap functionality by pulling the image into memory, but allowing Android to sample the image down as it is loaded.
For example:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2;
Bitmap myBitmap = BitmapFactory.decodeStream(inputstream,null,options);
This will load your bitmap into memory with half the memory footprint of the full image. You can experiment with changing the inSampleSize to get a good fit for your application.
You can also calculate the sample size on the fly: if you know the final image size you are aiming for, you can get the current dimensions of the image before you load it into memory and calculate the sample size using the equation inSampleSize = originalSize / requiredSize. Sample size works best when it is a power of 2, though, so you can adjust for this.
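The "adjust for this" step can be done with a bit-twiddling one-liner (plain Java sketch; rounding down keeps the decoded image at least as large as the target):

```java
/** Round a raw size ratio down to the nearest power of two, since
 *  power-of-two inSampleSize values are honored most predictably. */
public class SampleRounding {
    public static int toPowerOfTwo(int ratio) {
        return Math.max(1, Integer.highestOneBit(ratio));
    }

    public static void main(String[] args) {
        System.out.println(toPowerOfTwo(6)); // 4 -- rounds down, not up
        System.out.println(toPowerOfTwo(8)); // 8
        System.out.println(toPowerOfTwo(0)); // 1 -- never sample below 1
    }
}
```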
Edit:
A great example here https://stackoverflow.com/a/823966/637545
I'm making a 2D TD game for Android. In this game I of course need textures/graphics for the monsters, the towers, etc. I have decided to keep all of the pictures included in each unit's attacking cycle in one image, and all of the pictures included in each unit's walking cycle in another.
The problem is that I've got a lot of different units. As a result, if I want each monster texture to have a resolution of 100x100 px, the walking bitmap ends up as a 7000x15000 px picture. This of course crashes my application, but at the same time I need everything inside that picture, and I don't want to reduce the resolution. How can I use these pictures without running out of memory? Or do I need to organize my graphics in another way? If so, I would appreciate it if you could tell me how.
A little calculation: your bitmap has 7000x15000 pixels, i.e. 105,000,000 pixels. Every pixel needs 3 or 4 bytes (depending on whether you have transparency or not). Assuming you're using transparency, that's 4 bytes per pixel, so in total 420,000,000 bytes, i.e. roughly 400 MB.
So, you'll definitely want to reorganize your setup.
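For completeness, the arithmetic above as a runnable check (plain Java):

```java
/** Memory needed to hold the 7000x15000 sprite sheet uncompressed. */
public class SpriteSheetCost {
    public static void main(String[] args) {
        long pixels = 7000L * 15000L; // 105,000,000 pixels
        long bytes  = pixels * 4;     // 4 bytes/pixel with an alpha channel
        System.out.println(bytes);    // 420000000 (~400 MB)
    }
}
```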
Are you sure you're using all 10,000 images? The complete sprite sheets for most games generally range in the hundreds or low thousands. On a 640x480 screen you can only fit 24 different characters of that size without overlapping, and having that many characters all moving around at once is probably going to be too confusing anyway.
One thing you can do to reduce your sprite-sheet size is to reduce the frame rate of the sprites, so that multiple consecutive game frames are rendered using the same sprite image. Many older games use 6-8 frames for run cycles and they look great. Simpler creeps can cut even more and use only 3-4 images.
Another thing you can do is smarter character and level design, so that you don't actually need all characters at the same time. Put each character in its own file and load them depending on what you need for a particular level. You can also reuse sprites with different colors to indicate stronger versions of another sprite; the recolored sprite does not actually exist in the sprite sheet as a separate character, but is instead composed at runtime. If your characters have visible equipment, you also don't need a sprite for every combination; instead, compose the equipment sprites onto the character images at runtime.
You can also reduce the color depth of your sprites. Most handsets support rendering the RGB565 pixel format, and in many cases the full RGB888 is probably more color than you actually need.
Also, you should use lower-resolution images for lower-DPI handsets (which are generally lower powered as well). On those handsets your 100x100 sprites would look grossly oversized.
Also, you probably don't need 100x100 px sprites for all objects. Many objects would likely be much smaller than that, and you can use a smaller sprite cell size for them.
As Lie has suggested, you should structure your game properly so that you don't need all the resources at once. Your current memory requirement is too high. You could either use the RGB_565 configuration for your bitmaps or sub-sample the images; just reducing the frame rate won't work, as the memory required in each frame would still be very large. For sub-sampling an image resource you can use the following code sample:
BitmapFactory.Options boundsOp = new BitmapFactory.Options();
boundsOp.inJustDecodeBounds = true;
BitmapFactory.decodeFile(pathToFile, boundsOp);
if (boundsOp.outWidth == -1) {
    Log.i("Error", "error");
}

int width = boundsOp.outWidth;
int height = boundsOp.outHeight;
int inSampleSize = 1;
int temp = Math.max(width, height);
while (temp > MAX_WIDTH) {
    inSampleSize *= 2;
    temp /= 2;
}

BitmapFactory.Options resample = new BitmapFactory.Options();
// RGB_565 configuration halves the bytes per pixel
resample.inPreferredConfig = Config.RGB_565;
resample.inSampleSize = inSampleSize;
bmp = BitmapFactory.decodeFile(pathToFile, resample);
bmp = Bitmap.createScaledBitmap(bmp, MAX_WIDTH, MAX_HEIGHT, true);
Besides all these things, you also need to recycle the bitmaps when you are not using them.
Simple example:
BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inSampleSize = scale;
Bitmap bmp = BitmapFactory.decodeStream(is, null, opts);
When I pass a scale value that is not a power of two, the bitmap is still scaled by the closest power-of-two value. For example, if scale = 3 then the actual scale becomes 2 for some reason. Maybe it's because I'm using hardware acceleration?
Anyway, how can I scale a bitmap by a non-power-of-two value without allocating memory for the full bitmap?
P.S. I know that using a power of two is much faster, but in my case time isn't that critical, and I need the image scaled by exactly the provided scale (otherwise it becomes too big or too small). I'm working with image processing, so a big image itself isn't such a problem (the ImageView scales it to the required size), but it takes extra time to apply a filter, for example.
If you read the documentation for inSampleSize:
Note: the decoder will try to fulfill this request, but the resulting bitmap may have different dimensions than precisely what has been requested. Also, powers of 2 are often faster/easier for the decoder to honor.
You are not guaranteed exact dimensions. Since memory sounds like a concern, I would use your current method to get the image larger than what you need, but to something that fits your memory budget better than the source image. Then use a different method, such as Bitmap.createScaledBitmap, to get it to your exact dimensions.
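The two-step approach can be sketched as follows (plain Java helper with illustrative numbers; on Android, step 2 would be Bitmap.createScaledBitmap on the intermediate decode):

```java
/** Step 1 of the two-step resize: the largest power-of-two inSampleSize
 *  that keeps the decoded width at or above the requested width. The
 *  intermediate is then shrunk to the exact size in step 2. */
public class TwoStepResize {
    public static int intermediateWidth(int srcW, int reqW) {
        int sample = 1;
        while (srcW / (sample * 2) >= reqW) {
            sample *= 2;
        }
        return srcW / sample;
    }

    public static void main(String[] args) {
        // 3000px source, 700px target: decode at 750px (inSampleSize = 4)...
        System.out.println(intermediateWidth(3000, 700)); // 750
        // ...then createScaledBitmap(bmp, 700, targetHeight, true) for the exact fit,
        // paying the full-bitmap cost only for the small 750px intermediate.
    }
}
```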