I'm having some memory issues with my app. It can pick an image from the user's personal gallery and store it in a file to be rendered on screen. The issue is that the limit on image size is very small, which, I've discovered, makes pretty much every image on my device too large to handle. So the method itself became useless, since it can't even handle moderately sized images. I'm experiencing this only on iOS devices so far.
Is there a solution? Can I compress or minimize the image to a smaller size in some way? Maybe cut all images down to the same resolution, like Instagram's system does?
If you want to reduce the image size in bytes, there are at least 3 areas you can work on:
1. Reduce image dimensions (pixel resolution). This almost always causes loss of quality, but if your users are viewing on small-screen devices and you don't resize too much, the loss won't be significant. You can also use interpolation to minimize the visual degradation when resizing.
2. Reduce bit depth (color resolution). If the image is full color (32 or 24 bits per pixel), you can sometimes get away with reducing it to a lower color count, such as 8-bit. Again, this will cause quality loss, but you can use dithering to reduce it.
3. Use better compression. Most images are already compressed these days, but in some cases you can re-compress an image to make the file smaller. One example is JPEG, which supports different quality factors. There are also different sub-types (color sampling frequencies) in JPEG: if you save an image as 4:1:1 instead of 4:4:4, it will contain less color content and become smaller in byte size, but the difference is usually not noticeable to the human eye. This post has details on changing the JPEG quality factor on iOS. A sketch combining points 1 and 3 follows this list.
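The question is about iOS, but for a concrete illustration of points 1 and 3 together, here is a minimal Kotlin/Android sketch: scale the pixels down with interpolation, then re-encode as JPEG at a lower quality factor. The maxDimension and quality values are arbitrary examples, not recommendations:

    import android.graphics.Bitmap
    import java.io.ByteArrayOutputStream

    // Scale a bitmap down and re-encode it as JPEG at a lower quality.
    // maxDimension and quality are arbitrary example values.
    fun shrinkAndRecompress(source: Bitmap, maxDimension: Int = 1280, quality: Int = 75): ByteArray {
        // Point 1: reduce pixel dimensions, preserving the aspect ratio.
        val scale = maxDimension.toFloat() / maxOf(source.width, source.height)
        val scaled = if (scale < 1f) {
            Bitmap.createScaledBitmap(
                source,
                (source.width * scale).toInt(),
                (source.height * scale).toInt(),
                true // filter: interpolate to reduce visual degradation
            )
        } else {
            source // already small enough
        }

        // Point 3: use lossier compression via the JPEG quality factor.
        val out = ByteArrayOutputStream()
        scaled.compress(Bitmap.CompressFormat.JPEG, quality, out)
        return out.toByteArray()
    }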
Related
Apologies in advance for such a basic question, but this is my first app and I can't quite find a clear answer for my situation. All the images in my app are stored in the drawable folder, I'm NOT downloading any images from the internet. All the information I come across when it comes to multiple image sizes seems to refer to the occasion when the app is fetching images from the internet.
So currently most of the images in my app are one size, customized for the largest bucket, xxxhdpi. However, I understand the app is doing some work to "shrink down" those images for xxhdpi-sized screens.
I'm having second thoughts about this one size fits all approach. I'm thinking that perhaps the app doing the work to shrink the image down might take up extra memory and negatively impact performance. I've been looking at the Android Studio Profiler and I've been trying to understand the Graphics Process when I look at the Memory Graph.
More generally speaking, is there a benefit to having the smallest-size images possible, even for xxxhdpi? For example, does it hurt (memory-wise or in some other aspect) to use a .png image when I could use a lower-quality .jpg? Again, just to be super clear, this is just the scenario where the app has all of its images in the drawable folder. My app has options where players can change the game background and other images, so I want to be sure I'm optimizing the images for best performance. Thanks.
Memory. If you load a bitmap of x by y pixels, in memory that takes 4*x*y bytes. For a full-screen image, you can expect that to be 4000*1000*4, or 16 MB. That's a good chunk of memory on a small device, which also tends to have less RAM. If instead it needed one at half the resolution, you would have 2000*500*4, or 4 MB.
Obviously this scales with size. The smaller your images, the less memory wasted. I wouldn't argue that you need to provide every size, but if you're using large images I'd provide more than one. Also, for anything that isn't incredibly complex (like icons) I'd consider vector images instead (although that's a CPU time vs memory tradeoff).
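If you do end up shipping only large drawables, you can also cut the runtime cost by downsampling at decode time. A minimal sketch of the standard BitmapFactory inSampleSize pattern; the function and its parameters are illustrative, not from the question:

    import android.content.res.Resources
    import android.graphics.Bitmap
    import android.graphics.BitmapFactory

    // Decode a large drawable at (roughly) the requested size instead of full size.
    fun decodeSampled(res: Resources, resId: Int, reqWidth: Int, reqHeight: Int): Bitmap {
        // First pass: read only the image bounds, no pixel allocation.
        val bounds = BitmapFactory.Options().apply { inJustDecodeBounds = true }
        BitmapFactory.decodeResource(res, resId, bounds)

        // Pick the largest power-of-two sample size that keeps both
        // dimensions at or above the requested size.
        var sampleSize = 1
        while (bounds.outWidth / (sampleSize * 2) >= reqWidth &&
               bounds.outHeight / (sampleSize * 2) >= reqHeight) {
            sampleSize *= 2
        }

        // Second pass: decode for real at 1/sampleSize of the resolution,
        // which cuts memory by sampleSize^2 (each pixel is still 4 bytes).
        return BitmapFactory.decodeResource(res, resId, BitmapFactory.Options().apply {
            inSampleSize = sampleSize
        })
    }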
You mentioned png vs jpg. There are two things to consider there: apk size and image quality. JPG is smaller, so it will lead to a smaller apk. PNG is lossless, so it will have higher quality (although whether that matters requires a human visual check; it matters less than you'd think for a lot of things). Interestingly, it doesn't affect the amount of memory used at runtime, because both are held in the Bitmap object uncompressed.
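You can check that last point yourself. A small hedged sketch: decode the same picture stored once as .png and once as .jpg and compare the decoded buffer sizes (the two resource ids are placeholders for your own drawables):

    import android.content.res.Resources
    import android.graphics.BitmapFactory

    // Decode the same picture from a PNG resource and a JPG resource.
    // pngId and jpgId are placeholders for whatever drawables you use.
    fun compareDecodedSizes(res: Resources, pngId: Int, jpgId: Int) {
        val fromPng = BitmapFactory.decodeResource(res, pngId)
        val fromJpg = BitmapFactory.decodeResource(res, jpgId)

        // Both report the same uncompressed size: width * height * 4 bytes
        // (for the default ARGB_8888 config), regardless of file format.
        println("PNG-backed bitmap: ${fromPng.allocationByteCount} bytes")
        println("JPG-backed bitmap: ${fromJpg.allocationByteCount} bytes")
    }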
The iOS Mail app used to have a handy feature (I believe they removed it with the advent of Mail Drop) that would give you the option to select a scaled version of an image attachment. The great thing about this feature was that it would actually calculate and display the file size of each of the scaled images.
The server I am uploading images to has a small file size limit (10 MB), and I would like to emulate this functionality in order to prevent uploads that exceed this limit.
Assuming that the image is not actually scaled down three times just to determine the file size of each scaled version, how would I go about doing this?
I have not been able to find any information regarding some type of formula to calculate the file size of a scaled down image based on the size of the original image.
Given the delay I've always seen when I pop that dialog on older devices, which is a non-trivial delay, I challenge your assumption that Apple isn't just doing the dirty deed here, i.e. writing those JPGs directly to memory/disk and reading the size rather than calculating it: let data = UIImageJPEGRepresentation(image, 0.6)!
The trick is likely that people are falling down the "it needs to be at least 80% quality to be real!" hole. That's only true if you have a terrible JPG/media library. In reality, if you're writing a reasonably complex UIImage to memory/disk and you don't need transparency, then 60% is plenty.
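This question is about iOS, but the measure-by-encoding idea is easy to sketch on any platform; here it is in Kotlin for Android, with arbitrary scale factors and that same 60% quality. The only reliable "formula" is to compress and count the bytes:

    import android.graphics.Bitmap
    import java.io.ByteArrayOutputStream

    // Measure the real encoded size of an image at several scales by
    // actually compressing it. The scales and the 60 quality factor
    // are arbitrary example values.
    fun encodedSizesByScale(
        source: Bitmap,
        scales: List<Float> = listOf(1f, 0.5f, 0.25f)
    ): Map<Float, Int> =
        scales.associateWith { scale ->
            val scaled = Bitmap.createScaledBitmap(
                source,
                (source.width * scale).toInt().coerceAtLeast(1),
                (source.height * scale).toInt().coerceAtLeast(1),
                true
            )
            val out = ByteArrayOutputStream()
            scaled.compress(Bitmap.CompressFormat.JPEG, 60, out)
            out.size() // compress, then count bytes
        }

For a 10 MB server cap, you would then pick the largest scale whose measured size fits under the limit.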
We started using vector drawables in our Android application.
I have read about performance issues faced while using raster images in android applications.
Can anyone explain the reason why there is a performance issue?
Is it okay to use plenty of vector drawables in an application?
Thanks in advance!
This isn't really android specific. It's more to do with different image formats. A raster image has a "fixed" size, in the sense that it is always comprised of the same number of pixels, which is one of the major factors in file size (and memory footprint once it's loaded). This also affects your ability to transform the image.
If you want to shrink a raster image, you have to drop pixels, which is necessarily a lossy transform (even though the smaller size makes it difficult or impossible to notice the lost data). To enlarge the image, you have to interpolate pixels: add data that wasn't there in the original image, which means the image will start to pixelate.
With a vector image, on the other hand, the data stored is not in terms of pixels. Instead it stores "paths" that instruct the computer on how to draw the image. These paths are size-independent, which means the image can be scaled up or down with no loss of data or quality. Since the pixel size doesn't matter, only the data needed to hold the paths (and other metadata) is stored in a vector image file. This means the file is (generally) much smaller than the equivalent raster image, and so takes up less memory when loaded.
Using a vector will mean your app takes less memory and is more easily adaptable to different screen sizes, because Android can shrink or expand your graphics to fit without losing any quality.
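To make the size-independence concrete, here is a hedged Kotlin sketch that rasterizes one vector drawable at any requested size; the resource id is a placeholder for whatever vector asset you pass in:

    import android.content.Context
    import android.graphics.Bitmap
    import android.graphics.Canvas
    import androidx.core.content.ContextCompat

    // Rasterize a vector drawable at an arbitrary size. Because the file
    // stores paths, not pixels, any target size renders without quality loss.
    // resId is a placeholder for your own vector drawable resource.
    fun renderVector(context: Context, resId: Int, sizePx: Int): Bitmap {
        val drawable = ContextCompat.getDrawable(context, resId)!!
        val bitmap = Bitmap.createBitmap(sizePx, sizePx, Bitmap.Config.ARGB_8888)
        drawable.setBounds(0, 0, sizePx, sizePx)
        drawable.draw(Canvas(bitmap)) // paths are rasterized here (the CPU cost)
        return bitmap
    }

Rendering the same asset at 48 px and at 512 px produces equally sharp results, because the paths are re-evaluated for each target size; that re-evaluation is exactly the CPU time vs memory tradeoff mentioned above.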
Raster graphics exist to handle images that can't easily be converted to vectors, unlike simple shapes. The technique behind raster graphics uses pixels, whereas vectors use lines, known as paths in Android.
A complex image expressed as a vector ends up with a large number of path elements, and Android has to evaluate all of them to generate the image. So complex vectors can take more time to render than simply loading a given bitmap.
As far as I know, you can't embed raster data inside a vector drawable; it only supports vector paths.
Good luck
Emre
I am working on an app where I have to download images from the internet and display them inside the app. I am using Universal Image Loader so far, but recently I ran into an issue where the app would not display images that are huge in size, for example 700 x 7661. I have read several posts and answers related to this, but nothing seems to be a reliable solution. It appears that in a hardware-accelerated app the image size is limited by the OpenGL texture size limit, and answers here on Stack Overflow suggest resizing the image to a smaller size. I know disabling hardware acceleration fixes it, but that is not an option because it makes the whole app jittery.
My question is how we can achieve that accurately, because devices have different OpenGL texture limits: some devices support 2048 x 2048, some support 4096 x 4096, and some support even smaller sizes. If we resize the image to suit the smallest limit, it will appear blurred on high-resolution devices, so I am sort of out of clues as to how to approach this issue. I have tried Picasso, Fresco and Glide, and all of them have a similar issue.
A sample of the image I am trying to load is http://i.imgur.com/ADpTC2W.jpg?1
Regards
You can read the device resolution and resize your images to match it.
For getting the resolution, see this: How to get screen resolution
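A minimal sketch of reading the screen size, assuming you just want the pixel dimensions from DisplayMetrics:

    import android.content.Context

    // Read the current screen dimensions in pixels via DisplayMetrics.
    fun screenSize(context: Context): Pair<Int, Int> {
        val metrics = context.resources.displayMetrics
        return metrics.widthPixels to metrics.heightPixels
    }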
You could look up the maximum texture size using Canvas.getMaximumBitmapWidth() and Canvas.getMaximumBitmapHeight(), and then resize the images before putting them into ImageViews.
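Since the limit is a property of the drawing canvas, one hedged approach is to query it from a hardware-accelerated Canvas (e.g. inside onDraw) and downscale only when needed. This custom view is illustrative, not taken from any of the libraries mentioned:

    import android.content.Context
    import android.graphics.Bitmap
    import android.graphics.Canvas
    import android.view.View

    // Draws a bitmap, scaling it down only if it exceeds the canvas's
    // texture limit. getMaximumBitmapWidth/Height reflect the real GL
    // limit only on a hardware-accelerated canvas, hence the onDraw query.
    class SafeImageView(context: Context) : View(context) {
        var bitmap: Bitmap? = null

        override fun onDraw(canvas: Canvas) {
            val source = bitmap ?: return
            val scale = minOf(
                1f,
                canvas.maximumBitmapWidth.toFloat() / source.width,
                canvas.maximumBitmapHeight.toFloat() / source.height
            )
            // In real code, cache this scaled copy instead of rescaling
            // on every draw pass.
            val safe = if (scale < 1f) {
                Bitmap.createScaledBitmap(
                    source,
                    (source.width * scale).toInt(),
                    (source.height * scale).toInt(),
                    true
                )
            } else source
            canvas.drawBitmap(safe, 0f, 0f, null)
        }
    }

Because the limit is read per device at draw time, a 2048-limit phone and a 4096-limit phone each get the largest image they can actually display, instead of everyone being forced down to the smallest common size.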
My current project is about Android image processing. But if my phone camera is only about 1-2 megapixels, will it affect the result of preprocessing like grayscale and binarization?
Your phone camera won't affect any pre-processing you perform, in that your pre-processing code will act just the same regardless of the number of megapixels in your camera. Garbage in, garbage out still applies: if you start with a low-quality, poor-contrast, blurred picture, you aren't going to be able to turn it into something fantastic you want to hang on your wall. Additionally, as Mizuki alluded to in his comment, a 1-2 megapixel phone image is far higher resolution than the average image used on the internet, and those can be binarised and greyscaled just fine.
As for the two methods of preprocessing you mentioned in your question:
Binarization
This just converts an image into a two-colour version, normally black and white, though other colours are possible. The number of pixels in the image doesn't matter for this, other than it taking longer if there are more pixels to process. Low-quality mobile phone cameras can sometimes produce low-contrast photos, and this may make it harder for the binarization algorithm to correctly determine the threshold at which pixels should be mapped to either colour.
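As an illustration, a minimal fixed-threshold binarization in Kotlin; the threshold of 128 is an arbitrary midpoint, and a real implementation facing low-contrast photos would usually pick it adaptively (e.g. with Otsu's method):

    import android.graphics.Bitmap
    import android.graphics.Color

    // Fixed-threshold binarization: every pixel becomes black or white.
    // threshold = 128 is an arbitrary midpoint; low-contrast input may
    // need an adaptively chosen threshold instead.
    fun binarize(source: Bitmap, threshold: Int = 128): Bitmap {
        val result = source.copy(Bitmap.Config.ARGB_8888, true)
        val pixels = IntArray(result.width * result.height)
        result.getPixels(pixels, 0, result.width, 0, 0, result.width, result.height)
        for (i in pixels.indices) {
            val p = pixels[i]
            // Standard luminance weighting of the RGB channels.
            val lum = (0.299 * Color.red(p) + 0.587 * Color.green(p) + 0.114 * Color.blue(p)).toInt()
            pixels[i] = if (lum < threshold) Color.BLACK else Color.WHITE
        }
        result.setPixels(pixels, 0, result.width, 0, 0, result.width, result.height)
        return result
    }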
Greyscale
Converting an image to greyscale is done by manipulating the colours of each pixel so, again, the number of pixels should only increase the preprocessing time, not change the result.
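For comparison, the corresponding per-pixel greyscale conversion, again just a sketch using the standard luminance weights; note the loop length, not the output, is all that grows with pixel count:

    import android.graphics.Bitmap
    import android.graphics.Color

    // Greyscale conversion: replace each pixel's RGB with its luminance.
    fun toGreyscale(source: Bitmap): Bitmap {
        val result = source.copy(Bitmap.Config.ARGB_8888, true)
        val pixels = IntArray(result.width * result.height)
        result.getPixels(pixels, 0, result.width, 0, 0, result.width, result.height)
        for (i in pixels.indices) {
            val p = pixels[i]
            val lum = (0.299 * Color.red(p) + 0.587 * Color.green(p) + 0.114 * Color.blue(p)).toInt()
            pixels[i] = Color.argb(Color.alpha(p), lum, lum, lum)
        }
        result.setPixels(pixels, 0, result.width, 0, 0, result.width, result.height)
        return result
    }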