JPEG images have different pixel values across multiple devices - android

I noticed that when reading in an identical photograph in JPEG format on different devices, the pixel values do not match up. They are close, but different. When the file is converted to PNG, the pixel values do match up.
This seems to be due to the decompression algorithms differing across devices; that's what comes to mind anyway. Is there a way to read in JPEG files so that the same pixel values are retrieved from the photograph on every device? I don't see an option for this in BitmapFactory.Options.
I currently apply the following options to keep the image at its original size when working on pixel values across devices:
BitmapFactory.Options options = new BitmapFactory.Options();
options.inScaled = false;
options.inPreferQualityOverSpeed = true;
I currently compare a few sample pixels with the following (close matches, but not equal):
int[] pixels = new int[bitmapF.getWidth() * bitmapF.getHeight()];
bitmapF.getPixels(pixels, 0, bitmapF.getWidth(), 0, 0, bitmapF.getWidth(), bitmapF.getHeight());
Log.d("pixel entries", "pixels = " + pixels[233] + " - " + pixels[4002] + " - " + pixels[11391]);
Note: if I read in a PNG version of the same file (losslessly compressed), the values are identical across devices, as expected.
The Samsung Galaxy S4 and Samsung Galaxy S5, for example, even produce different pixel values from the same JPEG stored in the assets folder (running the same test activity).
pixels[233], for instance, is -5205635 on the S5 but -5336451 on the S4. pixels[4002] is a little off as well, but pixels[11391] is equal across both devices for this JPEG.

The JPEG standard does not require that decoder implementations produce bit-for-bit identical output images. Unfortunately the standards document specifying decoder requirements, ISO 10918-2, is apparently not freely available online, but Wikipedia says:
...the JPEG standard (and the similar MPEG standards) includes some precision requirements for the decoding, including all parts of the decoding process (variable length decoding, inverse DCT, dequantization, renormalization of outputs); the output from the reference algorithm must not exceed:
a maximum 1 bit of difference for each pixel component
low mean square error over each 8×8-pixel block
[etc.]
Differences between different decoder outputs using the same input are generally due to differing levels of internal precision, particularly in performing the IDCT. Another possible source of differences is smoothing, which attempts to reduce "blockiness" artifacts.
Like you, I would expect that setting inPreferQualityOverSpeed would produce the same output, but nothing actually guarantees that. I can think of at least a couple of ways you could get small variations on two different phones:
The phones may run different versions of Android where the implementation of BitmapFactory changed (e.g. perhaps inPreferQualityOverSpeed was broken and then fixed, or vice versa), or
The phones may provide different hardware features (e.g. vector instruction set, DSP co-processor, etc.) that BitmapFactory leverages. Even differences in scalar floating-point units can cause discrepancies, especially with JIT compilation producing the actual machine instructions.
Given the wiggle room in the standard plus your experimental observations, it appears the only way to guarantee bit-for-bit agreement is to perform decoding within your own application. Perhaps you can find some alternative Android-compatible library.
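Before committing to a custom decoder, it may be worth quantifying how far apart two devices actually are. As a rough sketch (reusing the bitmapF from the question; java.nio.ByteBuffer, java.security.* and java.math.BigInteger are the only extra imports), you can hash the entire decoded pixel buffer and compare a single log line across devices instead of spot-checking individual indices:
int[] allPixels = new int[bitmapF.getWidth() * bitmapF.getHeight()];
bitmapF.getPixels(allPixels, 0, bitmapF.getWidth(), 0, 0, bitmapF.getWidth(), bitmapF.getHeight());
ByteBuffer buffer = ByteBuffer.allocate(allPixels.length * 4);
buffer.asIntBuffer().put(allPixels);
try {
    // Identical decodes produce identical digests; a difference anywhere changes the hash.
    MessageDigest md = MessageDigest.getInstance("SHA-256");
    Log.d("decodeHash", new BigInteger(1, md.digest(buffer.array())).toString(16));
} catch (NoSuchAlgorithmException e) {
    Log.e("decodeHash", "SHA-256 not available", e);
}
If the hashes match, the decoders agree bit for bit; if not, you at least know immediately which devices disagree.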

I suppose you should also check if compressed PNGs appear the same way across devices.
http://pngquant.org/
If the answer is yes, then the only thing remaining would be to figure out how to programmatically convert those images, on the phone, to that same kind of compressed PNG.
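For the on-device conversion step, a minimal sketch with the standard Bitmap API is below; note that Bitmap.compress writes an ordinary (non-palette-quantized) PNG, so it will not reproduce pngquant's output exactly, and the asset name and output file here are made up for illustration (this assumes you are inside an Activity or other Context):
try {
    Bitmap bitmap = BitmapFactory.decodeStream(getAssets().open("photo.jpg"));
    FileOutputStream out = new FileOutputStream(new File(getFilesDir(), "photo.png"));
    // PNG is lossless, so the quality argument is ignored for this format.
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, out);
    out.close();
} catch (IOException e) {
    Log.e("pngConvert", "conversion failed", e);
}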

Much of a JPEG decoder's work involves real-number calculations. For performance, this is usually done with fixed-point integer arithmetic, which introduces rounding errors. Slight variations are a natural part of working with JPEG.
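A tiny illustration (with made-up numbers, not actual JPEG coefficients) of how a fixed-point path and a floating-point path can land on neighbouring integer pixel values:
double exact = 100.0 * 0.7071067811865476;   // floating point: 70.71067811865476
int viaFixedPoint = (100 * 46341) >> 16;     // 0.7071 as 16.16 fixed point, truncated: 70
int viaRounding = (int) Math.round(exact);   // rounded floating point: 71
// Two decoders doing "the same" math can therefore disagree by one in a pixel component.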

Yes, the pixel color values are different across devices. This is very annoying, especially if you want to compare colors. The solution is to compare colors as a human would perceive them, i.e. to treat visually equal colors as equal.
One of the best methods for comparing two colors by human perception is CIE76. The difference is called Delta E; when it is less than 1, the human eye cannot recognize the difference.
You can find a wonderful color utilities class (ColorUtils) that includes CIE76 comparison methods. It was written by Daniel Strebel, University of Zurich.
From the ColorUtils class I use the method:
static double colorDifference(int r1, int g1, int b1, int r2, int g2, int b2)
r1,g1,b1 - RGB values of the first color
r2,g2,b2 - RGB values of the second color that you would like to compare
If you work with Android, you can get these values like this:
r1 = Color.red(pixel);
g1 = Color.green(pixel);
b1 = Color.blue(pixel);
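If you cannot pull that class into your project, here is a minimal self-contained sketch of the same CIE76 comparison (sRGB converted to CIE Lab via the D65 white point, then the Euclidean distance taken). The constants are the standard sRGB/D65 values; this is an illustration of the formula, not a copy of Strebel's implementation:
// CIE76 Delta E: Euclidean distance between two colors in CIE Lab space.
static double colorDifference(int r1, int g1, int b1, int r2, int g2, int b2) {
    double[] lab1 = rgbToLab(r1, g1, b1);
    double[] lab2 = rgbToLab(r2, g2, b2);
    double dL = lab1[0] - lab2[0];
    double da = lab1[1] - lab2[1];
    double db = lab1[2] - lab2[2];
    return Math.sqrt(dL * dL + da * da + db * db);
}
static double[] rgbToLab(int r, int g, int b) {
    // sRGB (0-255) -> linear RGB (0-1)
    double rl = invGamma(r / 255.0), gl = invGamma(g / 255.0), bl = invGamma(b / 255.0);
    // linear RGB -> CIE XYZ, normalized by the D65 white point
    double x = (0.4124 * rl + 0.3576 * gl + 0.1805 * bl) / 0.95047;
    double y = (0.2126 * rl + 0.7152 * gl + 0.0722 * bl);
    double z = (0.0193 * rl + 0.1192 * gl + 0.9505 * bl) / 1.08883;
    double fx = labF(x), fy = labF(y), fz = labF(z);
    return new double[] { 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz) };
}
static double invGamma(double c) {
    return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
static double labF(double t) {
    return t > 0.008856 ? Math.cbrt(t) : 7.787 * t + 16.0 / 116.0;
}
Two pixels whose difference comes out below roughly 1 can then be treated as visually identical, which absorbs the small cross-device decoding variations discussed above.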

Resize the media to the required size, or use HTML attributes to disable scaling of the image.
Another option would be to load a thumbnail representation first to save bandwidth and let the user decide afterwards.

I was having the same problem.
PNG and JPEG images seem to be rendered with slight approximations by different devices.
We solved the problem by using BMP images (whose file sizes are unfortunately a lot bigger).

Related

inDither = true Android

Could someone explain what is really happening when setting inDither = true in the context of configuring a bitmap in Android?
On developer.android.com one can read the following about
Config.RGB_565
This configuration can produce slight visual artifacts depending on the configuration of the source. For instance, without dithering, the result might show a greenish tint. To get better results dithering should be applied
I had this problem until I followed this recommendation, that is:
options.inPreferredConfig = Config.RGB_565;
options.inDither = true;
So my question: how does one understand inDither in Android? It's one thing to know when to use a setting... another to fully understand it.
Thanks in advance!
When the number of supported colors is low, moving from one color to another (in a gradient) will cause visible bands to appear (fewer steps in between).
Dithering reduces this by adding random noise at the color steps. With dithering, noise made from the available colors gives the illusion of colors that are not available.
RGB_565 has lower precision (2 bytes per pixel) than ARGB_8888 (4 bytes per pixel). Due to the reduced color range, RGB_565 bitmaps can show banding. Hence, the dither flag is used to improve the perceived image and give an illusion of more colors.
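To see concretely why RGB_565 bands, here is a small sketch of the quantization involved: the red channel keeps only its top 5 bits, so eight neighbouring 8-bit values collapse onto a single level.
for (int r8 = 200; r8 <= 207; r8++) {
    int r5 = r8 >> 3;                  // 200..207 all quantize to 25
    int back = (r5 << 3) | (r5 >> 2);  // re-expanded to 8 bits: all become 206
    Log.d("banding", r8 + " -> " + back);
}
// Eight distinct input shades become one output shade; in a smooth gradient this shows up
// as a visible band, and dithering adds noise before quantizing to hide it.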

android bitmap keep dimensions same reduce memory

I have an application that displays lots of images, and the images can be of varying size, up to the full screen dimensions of the device. The images are all downloaded, and I use ImageMagick to reduce the colors of each image to compress the file size, while keeping the dimensions the same, to reduce download time. The reduced color space is fine for my application.
The problem is that when I load an image into a Bitmap on Android, the in-memory size is much larger because of Android's Bitmap config of ARGB_8888, and I do need the alpha channel. Since ARGB_4444 is deprecated and had performance issues, I am not using that. I am wondering whether there is any other way to reduce the memory footprint of the loaded Bitmap while keeping the dimensions the same as the original?
---- Update ----
After discussions here and lots of other searching/reading, it does not appear that there is a straightforward way to do this. I could use ARGB_4444, which stores each pixel as 2 bytes rather than 4. The problem there is that ARGB_4444 is deprecated, and I try not to use deprecated parts of the API for obvious reasons. Android recommends use of ARGB_8888, which is the default; since I need alpha, I have to use that. Since there are many applications that do not need such a complex color space, it would be nice to see ARGB_4444 or something similar become part of the supported API.
If you don't need alpha (transparency) you can use:
Bitmap.Config.RGB_565
which uses 2 bytes per pixel, instead of 4, reducing size by half.
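To put rough numbers on that, here is a quick sketch of the in-memory cost of a full-screen bitmap, assuming a hypothetical 1080x1920 screen:
int pixels = 1080 * 1920;        // 2,073,600 pixels
int argb8888Bytes = pixels * 4;  // 8,294,400 bytes, roughly 7.9 MB
int rgb565Bytes = pixels * 2;    // 4,147,200 bytes, roughly 4.0 MB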
You should also look at BitmapFactory.Options:
inSampleSize
inScaled
These values, correctly set for your requirements, may have a very positive effect on the Bitmap's memory requirements.
Regards.
Read a similar answer I posted here.
Basically, calling Bitmap.recycle() is required on pre-Honeycomb devices to reduce memory usage.
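A minimal sketch of that recycling pattern (the guards are just defensive, since touching a recycled bitmap throws an exception):
if (bitmap != null && !bitmap.isRecycled()) {
    bitmap.recycle();  // frees the native pixel memory promptly on pre-Honeycomb devices
    bitmap = null;     // drop the reference so it is not used again by mistake
}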

How to avoid outOfMemory Error on a bitmap without reducing the resolution

I'm making a 2D TD game for Android. In this game I of course need textures/graphics for the monsters, for the towers, etc. I have decided to keep all of the pictures belonging to the units' attack cycles in one image, and all of the pictures belonging to the units' walking cycles in another.
The problem is that I've got a lot of different units. This means that if I want each monster texture to have a resolution of 100x100 px, the walking bitmap ends up as a 7000x15000 px picture. This of course crashes my application, but at the same time I need everything that is inside that picture, and I don't want to reduce the resolution. How can I use these pictures without running out of memory? Or do I need to organize my graphics another way - if so, I would appreciate it if you could tell me how.
A little calculation: your bitmap has 7000x15000 pixels, that is 105,000,000 pixels. Every pixel needs 3 or 4 bytes (depending on whether you have transparency or not). Assuming you're using transparency, that is 4 bytes per pixel, so in total 420,000,000 bytes, or about 400 MB.
So, you'll definitely want to reorganize your setup.
Are you sure you're using all 10,000 images? The complete sprite sheets for most games generally range in the hundreds or low thousands. On a 640x480 screen you can only fit 24 non-overlapping characters of that size; having more characters of that size on a single screen, all moving around, is probably going to be too confusing.
One thing you can do to reduce your spritesheet size is to reduce the frame rate of the sprites, so that multiple consecutive game frames are rendered using the same sprite image. Many older games use 6-8 frames for run cycles and they look great. Simpler creeps can cut even more and use only 3-4 images.
Another thing you can do is smarter character and level design so that you don't actually need all characters at the same time. Put each character in its own file and load only what you need for a particular level. You can also reuse sprites with different colors to indicate a stronger version of another sprite; the recolored sprite does not actually exist in the spritesheet as a separate character but is composed at runtime (see the sketch below). If your characters have visible equipment, you also don't need a sprite for every combination; instead, compose the equipment sprites into the character images at runtime.
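As a rough sketch of that runtime recoloring idea (the tint value and the base sprite variable are made up for illustration), you can draw the base sprite through a color filter instead of storing a second copy in the sheet:
Bitmap recolored = Bitmap.createBitmap(base.getWidth(), base.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(recolored);
Paint paint = new Paint();
paint.setColorFilter(new LightingColorFilter(0xFFFF8080, 0x00000000)); // reddish "stronger" variant
canvas.drawBitmap(base, 0, 0, paint);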
You can also reduce the color depth of your sprites; most handsets support rendering the RGB565 pixel format, and in many cases full RGB888 is more color than you actually need.
Also, you should use lower-resolution images for lower-DPI handsets (which are generally lower powered as well). On those handsets your 100x100 sprites would look grossly oversized.
Also, you probably don't need 100x100 pixel sprites for all objects. Many objects would probably be much smaller than that, and you can use a smaller sprite cell size for them.
As Lie has suggested, you should structure your game so that you don't need all the resources at once. Your current memory requirement is simply too much. You can either use the RGB_565 configuration for your bitmaps or sub-sample the image. Just reducing the frame rate won't work, as the memory requirement of each frame is still very large. For sub-sampling the image resource you can use the following code sample:
// First pass: decode only the bounds to get the image dimensions.
BitmapFactory.Options boundsOp = new BitmapFactory.Options();
boundsOp.inJustDecodeBounds = true;
BitmapFactory.decodeFile(pathToFile, boundsOp);
if (boundsOp.outWidth == -1) {
    Log.i("Error", "could not decode image bounds");
}
int width = boundsOp.outWidth;
int height = boundsOp.outHeight;
// Pick a power-of-two sample size that brings the larger side under MAX_WIDTH.
int inSampleSize = 1;
int temp = Math.max(width, height);
while (temp > MAX_WIDTH) {
    inSampleSize *= 2;
    temp /= 2;
}
// Second pass: decode the sub-sampled bitmap in RGB_565 to halve the per-pixel cost.
BitmapFactory.Options resample = new BitmapFactory.Options();
resample.inPreferredConfig = Config.RGB_565;
resample.inSampleSize = inSampleSize;
Bitmap bmp = BitmapFactory.decodeFile(pathToFile, resample);
// Optionally scale to the exact target size afterwards.
bmp = Bitmap.createScaledBitmap(bmp, MAX_WIDTH, MAX_HEIGHT, true);
Besides all these things, you also need to recycle the bitmaps when you are not using them.

Single Pixel Color Correction

I've created an Android application that produces an image as output. This image has pixel errors that are unavoidable. The images are held in an integer array with a size of the image's length * width. The pixels are in the ARGB_8888 color configuration. I've been searching for a method to both find and approximate what the correct value of a pixel should be based on the surrounding pixels. Here is an example output that needs to be color corrected.
A median filter is your best friend in this situation. This kind of defect is called salt-and-pepper noise.
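Here is a minimal 3x3 median-filter sketch over an ARGB_8888 int[] image, applied per channel with the alpha left untouched and the border pixels skipped for brevity; it is a generic illustration of the technique, not code from any particular library:
static int[] medianFilter3x3(int[] src, int width, int height) {
    int[] out = src.clone();
    int[] r = new int[9], g = new int[9], b = new int[9];
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int k = 0;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int p = src[(y + dy) * width + (x + dx)];
                    r[k] = (p >> 16) & 0xFF;
                    g[k] = (p >> 8) & 0xFF;
                    b[k] = p & 0xFF;
                    k++;
                }
            }
            java.util.Arrays.sort(r);
            java.util.Arrays.sort(g);
            java.util.Arrays.sort(b);
            int a = src[y * width + x] & 0xFF000000;  // keep the original alpha
            out[y * width + x] = a | (r[4] << 16) | (g[4] << 8) | b[4];  // median of each channel
        }
    }
    return out;
}
Isolated wrong pixels get replaced by the median of their neighbourhood, while flat areas and most edges pass through largely unchanged.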
That doesn't look (or sound) like what is normally meant by "color correction". Look for a despeckle algorithm.
Gaussian filter might work, or Crimmins Speckle Removal. You'll probably want to understand how Kernel Filters work.

Android 2.2 distorts picture colors?

I have some .png files in my app. I need to load them at runtime and get the exact colors of certain pixels from them. Importantly, I do not want to scale these pictures. I don't show them in the UI directly; they serve as maps.
Now, on Android 1.5, there's no problem with this. I put these images in the '/res/drawable' dir, load them with BitmapFactory into a Bitmap object, and use it to get the color of the desired pixels. E.g. pixel (100, 50) has the color RGB(100, 1, 100).
On Android 2.2, though, the same procedure results in varying colors (for the same pixel), so I get RGB(99, 3, 102) / RGB(101, 2, 99) / etc. for the same (100, 50) pixel. I checked the resolution of the Bitmap object, and it seems that it didn't get scaled.
Could somebody explain why I get distorted colour values?
Solved: it appears that on Android 2.2 I have to set the correct bitmap configuration explicitly. Somehow, versions below 2.2 managed this on their own (or maybe fewer configs are supported there and the system guessed the config correctly; I don't know).
Anyways, here's the code I use now:
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inDither=false;
opt.inPreferredConfig = Bitmap.Config.ARGB_8888;
Bitmap mask = BitmapFactory.decodeResource(getResources(), R.drawable.picture, opt);
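As a quick sanity check that the decode really honoured the requested configuration and did not scale, something like the following log line (using only standard Bitmap getters) may help:
Log.d("mask", "config=" + mask.getConfig()
        + " size=" + mask.getWidth() + "x" + mask.getHeight()
        + " pixel(100,50)=0x" + Integer.toHexString(mask.getPixel(100, 50)));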
Make yourself a bitmap that is entirely the same color as the pixel in question. Make this bitmap the same resolution as the one you're currently using. Load it up and check the RGB values of the same pixel (or any pixel) you are having problems with.
This should tell you whether your problem is scaling, which is what I think it is, or possibly a problem in the color translation.
If you don't find an answer quickly, my pragmatist streak would ask how hard it is to parse the .png yourself, to get completely deterministic results independent of any changes in the platform.
My 2.3 devices (Nexus One and S) worked fine without setting "opt.inPreferredConfig", but it appears that 2.2 requires it for accurate RGBs.
