Bitmap pixel values differ after setPixel() and getPixel() on same pixel - android

I'm developing a steganography app for a class project that lets a user encode a secret message image within another image. I use Bitmap.getPixel(x, y) to retrieve a pixel's value, modify the integer to carry the message bits, and then use Bitmap.setPixel(x, y, color) to place the modified pixel back in the bitmap. After decoding the image and retrieving the hidden message I noticed some pixels were discolored, and I found that certain pixels do not contain the correct value after being modified. If I use
int before = encoded_value;
bitmap.setPixel(x, y, before);
int after = bitmap.getPixel(x, y);
then for most pixels before == after, but for some before != after. If I keep modifying before by adding or subtracting one (which only changes the color in the message image slightly) and setting the pixel again, before and after still differ.

Having a few pixels off in the decoded image wouldn't be a big deal. However, when one of the problem pixels lands where I encoded the message image's dimensions, the decoded dimension string will usually contain a non-digit character, such as "2q3x300", which throws an exception when the application tries to parse the string into integers. I've also tried getting and setting pixels through an integer buffer, but the same pixels cause problems. Through some debugging I've found that particular pixels and particular values are the culprits: if I encode the image with a doubled dimension string, "213213x300300", it is decoded as "2q32q3x300300". I'm not sure whether this is a problem that starts when decoding the bitmap from the image file or a bug in the getPixel() and setPixel() methods. It seems that certain pixels cause a single bit to flip. In the example above, 1 (ASCII value 49, or 00110001) is decoded as q (ASCII value 113, or 01110001); only a single bit differs between what was set and what was read back, yet it can cause big problems. The bitmap is in ARGB_8888 and thus should be able to hold any value an int can. For the steganography to work I modify the two least significant bits of the alpha, red, green and blue values to store one byte per pixel; I reduce the message image to RGB_332 so that it fits in those bits (a sketch of this packing appears below, after the images). One more example is trying to set a pixel to -53029643:
I put        -53029643 = 1111 1100 1101 0110 1101 0100 1111 0101
get returns  -53029387 = 1111 1100 1101 0110 1101 0101 1111 0101
XOR                      0000 0000 0000 0000 0000 0001 0000 0000 = 256
These two values differ only in the least significant bit of the green byte, yet decoding the returned value by taking the two least significant bits from each byte yields 00100101 instead of 00100001. The bytes are in the form RRRGGGBB, so the green value changes from 000 to 001, and the resulting bitmap contains a pixel with a green value of 001000 instead of 000000 (the decoded bitmap is in RGB_565), i.e. a decimal green value of 8 instead of 0. Here are the three PNG images: the first is the carrier image (the image that has the message image encoded into it), the second is the message image (the image that is encoded in the carrier image), and the third is the message image after it has been decoded from the carrier image.
carrier image
message image
decoded message image. This image is in 8-bit color; the pixels that are more red or green than they should be are the ones affected by this error.
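For clarity, here is a sketch of the packing scheme described above (the method names are just illustrative). One RGB_332 message byte, laid out as RRRGGGBB, goes into the two low bits of each channel of an ARGB_8888 pixel:

static int embedByte(int carrierPixel, int messageByte) {
    int p = carrierPixel & 0xFCFCFCFC;           // clear the 2 low bits of A, R, G, B
    p |= ((messageByte >>> 6) & 0x3) << 24;      // byte bits 7-6 -> alpha
    p |= ((messageByte >>> 4) & 0x3) << 16;      // byte bits 5-4 -> red
    p |= ((messageByte >>> 2) & 0x3) << 8;       // byte bits 3-2 -> green
    p |=  (messageByte        & 0x3);            // byte bits 1-0 -> blue
    return p;
}

static int extractByte(int pixel) {
    return ((pixel >>> 24) & 0x3) << 6
         | ((pixel >>> 16) & 0x3) << 4
         | ((pixel >>>  8) & 0x3) << 2
         |  (pixel         & 0x3);
}

Running extractByte on the example above reproduces the failure: 0xFCD6D4F5 (-53029643) yields 00100001, while the value actually read back, 0xFCD6D5F5 (-53029387), yields 00100101.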

This defect in the Android API appears to be the same issue I'm having. Since there was no way to solve this while creating the bitmap from a pixel array or while decoding into the message image, I've had to resort to filtering the image after it has been decoded. I'm using a median filter that has been modified to target the problem pixels; here is an example of an image with and without the median filter.
unfiltered
filtered

This looks very much like an issue I had where (only on some devices) bitmaps loaded from the drawable folder would be changed slightly. I solved it by putting the bitmaps in the "raw" folder (e.g. res\raw). Hope that helps.
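A minimal sketch of loading that way (R.raw.carrier is an illustrative resource name); unlike drawable resources, raw resources are never density-scaled by the system:

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.InputStream;

static Bitmap loadUnscaled(Context context) {
    // openRawResource() returns the file's bytes untouched, so BitmapFactory
    // decodes the original pixels with no density scaling applied.
    InputStream in = context.getResources().openRawResource(R.raw.carrier);
    return BitmapFactory.decodeStream(in);
}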

Your bitmap is probably set up as 565, i.e. 5 bits for red, 6 for green, 5 for blue. As such, you can only have 32 distinct values for red/blue and 64 for green.
If you need more precision, you need to create an 8888 bitmap.
Example:
Bitmap bitmap = Bitmap.createBitmap(128, 128, Bitmap.Config.ARGB_8888);

I know this is an old post, but I was able to solve the issue by alternately adding and subtracting 1 from each incorrect byte in the ARGB integer until the least significant bit of each of those bytes was correct. I believe this gets the pixel as close as possible to the intended value while keeping the least significant bit of every byte correct.
Additionally, I have reported this bug to Google.
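A sketch of that workaround, assuming an ARGB_8888 bitmap (the method name and the bound of eight attempts are illustrative, not part of the original approach):

import android.graphics.Bitmap;

static void setPixelWithLsbFix(Bitmap bmp, int x, int y, int wanted) {
    bmp.setPixel(x, y, wanted);
    for (int shift = 0; shift < 32; shift += 8) {   // blue, green, red, alpha bytes
        int wantBit = (wanted >>> shift) & 1;
        // Try offsets +1, -1, +2, -2, ... from the intended byte value until its
        // least significant bit survives the setPixel()/getPixel() round trip.
        for (int n = 1; n <= 8 && ((bmp.getPixel(x, y) >>> shift) & 1) != wantBit; n++) {
            int delta = (n % 2 == 1) ? (n + 1) / 2 : -(n / 2);
            int byteVal = Math.max(0, Math.min(255, ((wanted >>> shift) & 0xFF) + delta));
            int candidate = (bmp.getPixel(x, y) & ~(0xFF << shift)) | (byteVal << shift);
            bmp.setPixel(x, y, candidate);
        }
    }
}

For what it's worth, a likely root cause: ARGB_8888 bitmaps are stored with premultiplied alpha by default, so when the stored alpha is below 255 the color bytes are premultiplied on write and un-premultiplied on read, and that rounding can flip low bits exactly as described in the question.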

Related

Tensorflow: DecodeJpeg method gives different pixel values on desktop and mobile for the same image

I have used Tensorflow's DecodeJpeg to read images while training a model. In order to use the same method on an Android device, I compiled Tensorflow with Bazel for Android with DecodeJpeg.
I tried reading the same image on my desktop, which is an x86_64 machine running Windows. I ran the DecodeJpeg method on an image with default values, with dct_method set to '', INTEGER_FAST, and INTEGER_ACCURATE.
I did the same on an arm64 device with the same image, but the pixel values were significantly different under the same settings.
For instance, at (100,100,1) the value on the desktop is 213, while it is 204 on arm64.
How can I make sure that the pixel values are the same across these two devices? (Attached: the test image I used.)
Update:
On Gimp at (100,100) the pixel values are (179,203,190)
For dct_method set to INTEGER_FAST, the value at (100,100) on x86_64 is (171, 213, 165), on arm it is (180, 204, 191)
For dct_method set to INTEGER_ACCURATE, the value at (100,100) on x86_64 is (170, 212, 164), on arm it is (179, 203, 190)
It is (170, 212, 164) with PIL, which is what I get with cv2.imread as well.
According to the tensorflow decode_jpeg documentation, I expect that it is related to some attribute used when you decode the JPEG.
Most probably the channels attribute and/or the ratio attribute and/or the fancy_upscaling attribute.
Any of these can change the value of the pixels...
Concerning the channels:
The attr channels indicates the desired number of color channels for the decoded image.
Accepted values are:
0: Use the number of channels in the JPEG-encoded image.
1: output a grayscale image.
3: output an RGB image.
Concerning the ratio:
The attr ratio allows downscaling the image by an integer factor during decoding. Allowed values are 1, 2, 4, and 8. This is much faster than downscaling the image later.
Concerning the fancy_upscaling:
fancy_upscaling: An optional bool. Defaults to True. If true use a slower but nicer upscaling of the chroma planes (yuv420/422 only).
Please note that you may also have to explicitly specify a value for dct_method (e.g. dct_method='INTEGER_ACCURATE') because, according to the documentation, if you don't specify a value it will use a system-specific default. In my opinion, the empty dct_method argument is the most probable reason why you don't get the same result on x86_64 and ARM, since in that case
the internal jpeg library changes to a version that does not have that
specific option

Saving Bitmap to PNG file changes Pixel data

I'm trying to modify the LSB of pixels in order to store information in a picture. The encoding and decoding do work; however, when I store the bitmap to a PNG file, which should be lossless, and reload it, the pixel values have changed. This of course leads to wrong values when I put the characters back together, though most of the time this can be fixed by subtracting 136 from the byte before making a char out of it. The problem has to be in the storing and reloading, because when I pass the bitmap directly to the decoder everything works fine.
Please try giving a quality value when you store the image.
bitmap.compress(Bitmap.CompressFormat.PNG, 0, imageOut);
The second parameter is the quality value, which is between 0 and 100.
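For what it's worth, the Android documentation notes that PNG is lossless and ignores the quality setting, so a quick round-trip check (a sketch; the method name, file path and log tag are illustrative) can show whether saving and reloading by itself changes pixel values, or whether the change happens elsewhere (for example in a bitmap config or alpha conversion):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.util.Log;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

static void checkPngRoundTrip(Bitmap original, File file, int x, int y) throws IOException {
    try (FileOutputStream out = new FileOutputStream(file)) {
        original.compress(Bitmap.CompressFormat.PNG, 100, out);   // quality is ignored for PNG
    }
    Bitmap reloaded = BitmapFactory.decodeFile(file.getPath());
    Log.d("png-roundtrip", "before=" + Integer.toHexString(original.getPixel(x, y))
            + " after=" + Integer.toHexString(reloaded.getPixel(x, y)));
}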

JPEG images have different pixel values across multiple devices

I noticed that when reading an identical photograph across devices in the JPEG format, the pixel values do not match up. They are close, but different. When the files are converted to PNG, the pixel values match.
It would seem this is due to the decompression algorithms differing across devices; that's what comes to mind, anyway. Is there a way to read in JPEG files so that the same pixels are retrieved from the photograph across devices? I don't see an option in the BitmapFactory Options component.
Currently applying the following to maintain size when working on pixel values of an image across devices:
Options options = new Options();
options.inScaled = false;
options.inPreferQualityOverSpeed = true;
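For completeness, here is how those options would be applied when decoding (a sketch; the asset name is illustrative):

import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.IOException;
import java.io.InputStream;

static Bitmap decodeAsset(AssetManager assets) throws IOException {
    BitmapFactory.Options options = new BitmapFactory.Options();
    options.inScaled = false;                    // don't density-scale the decoded bitmap
    options.inPreferQualityOverSpeed = true;     // request the more accurate JPEG decode path
    try (InputStream in = assets.open("photo.jpg")) {
        return BitmapFactory.decodeStream(in, null, options);
    }
}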
Currently comparing pixels with the following just to look at a few (close matches, but not equal):
int[] pixels = new int[bitmapF.getWidth() * bitmapF.getHeight()];
bitmapF.getPixels(pixels, 0, bitmapF.getWidth(), 0, 0, bitmapF.getWidth(), bitmapF.getHeight());
Log.d("pixel entries", "pixels = " + pixels[233] + " - " + pixels[4002] + " - " + pixels[11391]);
Note: If reading in a PNG version of that same file which is uncompressed, the values are identical as expected.
The Samsung Galaxy S4 and the Samsung Galaxy S5, for example, produce different pixels from the same JPEG (running the same test activity) stored in the assets folder.
pixels[233], for instance, is -5205635 on the S5 but -5336451 on the S4. pixels[4002] is a little off as well, but pixels[11391] is equal across both devices for this JPEG picture.
The JPEG standard does not require that decoder implementations produce bit-for-bit identical output images. Unfortunately the standards document specifying decoder requirements, ISO 10918-2, is apparently not freely available online but Wikipedia says:
...the JPEG standard (and the similar MPEG standards) includes some precision requirements for the decoding, including all parts of the decoding process (variable length decoding, inverse DCT, dequantization, renormalization of outputs); the output from the reference algorithm must not exceed:
a maximum 1 bit of difference for each pixel component
low mean square error over each 8×8-pixel block
[etc.]
Differences between different decoder outputs using the same input are generally due to differing levels of internal precision, particularly in performing the IDCT. Another possible source of differences is smoothing, which attempts to reduce "blockiness" artifacts.
Like you, I would expect that setting inPreferQualityOverSpeed would produce the same output but nothing actually guarantees that. I can think of at least a couple ways that you could get small variations on two different phones:
The phones may run different versions of Android where the implementation of BitmapFactory changed (e.g. perhaps inPreferQualityOverSpeed was broken and then fixed, or vice versa), or
The phones may provide different hardware features (e.g. vector instruction set, DSP co-processor, etc.) that BitmapFactory leverages. Even differences in scalar floating-point units can cause discrepancies, especially with JIT compilation producing the actual machine instructions.
Given the wiggle room in the standard plus your experimental observations, it appears the only way to guarantee bit-for-bit agreement is to perform decoding within your own application. Perhaps you can find some alternative Android-compatible library.
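One cheap way to test whether two devices agree bit-for-bit (a sketch; the method name and log tag are illustrative): hash the decoded pixel buffer on each device and compare the printed digests.

import android.graphics.Bitmap;
import android.util.Log;
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Arrays;

static void logPixelDigest(Bitmap bmp) throws NoSuchAlgorithmException {
    int[] pixels = new int[bmp.getWidth() * bmp.getHeight()];
    bmp.getPixels(pixels, 0, bmp.getWidth(), 0, 0, bmp.getWidth(), bmp.getHeight());
    // Serialize the pixels and hash them; equal digests mean identical decodes.
    ByteBuffer buf = ByteBuffer.allocate(pixels.length * 4);
    buf.asIntBuffer().put(pixels);
    byte[] digest = MessageDigest.getInstance("MD5").digest(buf.array());
    Log.d("decode-check", "pixel digest = " + Arrays.toString(digest));
}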
I suppose you should also check if compressed PNGs appear the same way across devices.
http://pngquant.org/
If the answer is yes, then the only thing remaining would be to figure out how to programmatically convert those images, on the phone, to that same kind of compressed PNG.
Much of a JPEG decoder's work involves real number calculations. Usually this is done using fixed point integer arithmetic for performance. This introduces rounding errors. Slight variations are a natural part of working with JPEG.
Yes, the pixel color values are different across devices. This is very annoying, especially if you want to compare colors. The solution is to compare colors that are visually equal (equal to human perception).
One of the best methods to compare two colors by human perception is CIE76. The difference is called Delta-E. When it is less than 1, the human eye can not recognize the difference.
You can find a wonderful color utilities class (ColorUtils), which includes CIE76 comparison methods. It was written by Daniel Strebel, University of Zurich.
From ColorUtils.class I use the method:
static double colorDifference(int r1, int g1, int b1, int r2, int g2, int b2)
r1,g1,b1 - RGB values of the first color
r2,g2,b2 - RGB values of the second color that you would like to compare
If you work with Android, you can get these values like this:
r1 = Color.red(pixel);
g1 = Color.green(pixel);
b1 = Color.blue(pixel);
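If you can't pull in that class, here is a self-contained CIE76 sketch of my own (not Strebel's implementation), assuming sRGB input and the D65 white point:

static double deltaE76(int r1, int g1, int b1, int r2, int g2, int b2) {
    double[] lab1 = rgbToLab(r1, g1, b1);
    double[] lab2 = rgbToLab(r2, g2, b2);
    return Math.sqrt(Math.pow(lab1[0] - lab2[0], 2)
            + Math.pow(lab1[1] - lab2[1], 2)
            + Math.pow(lab1[2] - lab2[2], 2));
}

static double[] rgbToLab(int r, int g, int b) {
    // sRGB -> linear RGB -> XYZ (D65) -> CIELAB
    double rl = linearize(r / 255.0), gl = linearize(g / 255.0), bl = linearize(b / 255.0);
    double x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl;
    double y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl;
    double z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl;
    double fx = labF(x / 0.95047), fy = labF(y), fz = labF(z / 1.08883);
    return new double[] { 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz) };
}

static double linearize(double c) {
    return c > 0.04045 ? Math.pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
}

static double labF(double t) {
    return t > 216.0 / 24389.0 ? Math.cbrt(t) : (24389.0 / 27.0 * t + 16) / 116;
}

A Delta-E below roughly 1 means the two colors are indistinguishable to the eye, which matches the CIE76 rule mentioned above.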
Resize the media to the required size, or use HTML attributes to disable scaling of the image.
Another option would be to allow the user to decide after loading a thumbnail representation to save bandwidth.
I was having the same problem.
PNG and JPEG images seem to be rendered with approximation by different devices.
We solved the problem by using BMP images (whose file sizes are unfortunately a lot bigger).

Single Pixel Color Correction

I've created an Android application that produces an image as output. This image has pixel errors that are unavoidable. Images are held in an integer array whose size is the image's length times its width. The pixels are in the ARGB_8888 color configuration. I've been searching for a method to both find and approximate what the correct value of a pixel should be based on the surrounding pixels. Here is an example output that needs to be color corrected.
A median filter is your best friend in this situation; this kind of defect is called salt-and-pepper noise.
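A minimal 3x3 median filter sketch over an ARGB_8888 int[] (the function name is illustrative; border pixels are left untouched for brevity, and each channel is filtered independently):

import java.util.Arrays;

static int[] medianFilter3x3(int[] pixels, int width, int height) {
    int[] out = pixels.clone();
    int[] window = new int[9];
    for (int y = 1; y < height - 1; y++) {
        for (int x = 1; x < width - 1; x++) {
            int argb = 0;
            for (int shift = 0; shift < 32; shift += 8) {     // B, G, R, A bytes
                int i = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        window[i++] = (pixels[(y + dy) * width + (x + dx)] >>> shift) & 0xFF;
                Arrays.sort(window);
                argb |= window[4] << shift;                   // median of the 9 samples
            }
            out[y * width + x] = argb;
        }
    }
    return out;
}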
That doesn't look (or sound) like what is normally meant by "color correction". Look for a despeckle algorithm.
A Gaussian filter might work, or Crimmins speckle removal. You'll probably want to understand how kernel filters work.

Image Processing on Android

I'm writing an application for Android.
I need to do some image processing on the picture taken from the camera.
I use Camera.PictureCallback to get the photo, and I receive the picture in a byte array.
The problem is that I want to operate on every pixel of the photo (some filtering and other stuff), so I guess having the photo in a byte array is not a bad idea. But I don't know how to interpret the information in this byte array... The only way I know to do the processing is to use BitmapFactory.decodeByteArray() and then work with the Bitmap object. Is this a good way to handle a lot of image processing?
Right now I use something look like this:
Bitmap mPhotoPicture = BitmapFactory.decodeByteArray(imageData, 0, imageData.length);
mPhotoPicture = mPhotoPicture.copy(Bitmap.Config.RGB_565, true);
I appreciate any help.
I'm not sure if decoding into a byte array is the best way to do it on Android, but I can offer what I know about image processing in general.
If you're using RGB_565, each pixel is 16 bits, or two of those bytes. The first 5 bits are red, the next 6 are green, and the last 5 are blue. Dealing with that is hairy in Java. I suggest you work with an easier format like ARGB_8888, which means each pixel is 32 bits, or four bytes, and each byte is its own value (alpha, red, green, blue).
To test, try setting every fourth byte, like [3], [7], [11], etc., to 0. That should take out all of a particular channel, in this case, all the blue.
[2], [6], [10], etc. would be all the green values for each pixel.
(Note, the four components might go in the opposite order because I'm not sure about endianness! So I might have just told you how to take out the alpha, not the blue…)
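A sketch of that test (the method name is illustrative; keep the endianness caveat above in mind when interpreting which channel you cleared):

static void zeroEveryFourthByte(byte[] pixels, int channelIndex) {
    // channelIndex 0..3 selects which byte of each 4-byte pixel to clear;
    // which color channel that corresponds to depends on the buffer's byte order.
    for (int i = channelIndex; i < pixels.length; i += 4) {
        pixels[i] = 0;
    }
}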
