Hi, I am creating an Android app and I ran into a problem with coloring bitmaps.
I am using the following simple code:
for (int i = 0; i < pixels.length; i++) {
    if (pixels[i] == Color.WHITE) {
        pixels[i] = Color.RED;
    }
}
where pixels is the bitmap's pixel array.
The problem is that along the edges of the colored area I get a thin layer of pixels that aren't colored. I understand this happens because that edge layer isn't pure white (it is partially shaded toward black). How do I get around this?
I hope I made my question clear enough.
Right now you are matching and replacing one specific integer value for white. However, in your original bitmap that white bleeds into the surrounding colors, so around the edges of the white patches you have whitish values that are slightly different.
You need to change your algorithm to take a color-matching tolerance into account. For that you'll have to split both your pixel and your key color into their three color channels and check each channel individually for whether the difference is within a tolerance value.
That way you can match the whitish pixels around the edges. But even with added tolerance you cannot simply replace matching pixels with red; you would get hard, aliased red edges and it wouldn't look pretty. I wrote a similar algorithm a while ago and got around that aliasing issue by doing some color blending in HSV color space:
public Bitmap changeColor(Bitmap src, int keyColor, int replColor, int tolerance) {
    Bitmap copy = src.copy(Bitmap.Config.ARGB_8888, true);
    int width = copy.getWidth();
    int height = copy.getHeight();
    int[] pixels = new int[width * height];
    src.getPixels(pixels, 0, width, 0, 0, width, height);

    // Key color split into channels for the per-channel tolerance test
    int sR = Color.red(keyColor);
    int sG = Color.green(keyColor);
    int sB = Color.blue(keyColor);

    // Replacement color converted to HSV once, up front
    int tR = Color.red(replColor);
    int tG = Color.green(replColor);
    int tB = Color.blue(replColor);
    float[] hsv = new float[3];
    Color.RGBToHSV(tR, tG, tB, hsv);
    float targetHue = hsv[0];
    float targetSat = hsv[1];
    float targetVal = hsv[2];

    for (int i = 0; i < pixels.length; ++i) {
        int pixel = pixels[i];
        if (pixel == keyColor) {
            // Exact match: replace outright
            pixels[i] = replColor;
        } else {
            int pR = Color.red(pixel);
            int pG = Color.green(pixel);
            int pB = Color.blue(pixel);
            int deltaR = Math.abs(pR - sR);
            int deltaG = Math.abs(pG - sG);
            int deltaB = Math.abs(pB - sB);
            if (deltaR <= tolerance && deltaG <= tolerance && deltaB <= tolerance) {
                // Near match: take the target hue/saturation but scale the
                // pixel's own value, which preserves the anti-aliased shading
                Color.RGBToHSV(pR, pG, pB, hsv);
                hsv[0] = targetHue;
                hsv[1] = targetSat;
                hsv[2] *= targetVal;
                pixels[i] = Color.HSVToColor(Color.alpha(pixel), hsv);
            }
        }
    }
    copy.setPixels(pixels, 0, width, 0, 0, width, height);
    return copy;
}
keyColor and replColor are ARGB-encoded integer values such as Color.WHITE and Color.RED. tolerance is a value from 0 to 255 that specifies the key-color matching tolerance per color channel. I had to rewrite that snippet a bit to remove framework specifics, so I hope I didn't introduce any mistakes.
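For example, a call might look like this (a sketch; srcBitmap is a placeholder, and the tolerance of 40 is just an arbitrary starting point to tune):

// Hedged usage sketch: replace white-ish pixels with red.
Bitmap recolored = changeColor(srcBitmap, Color.WHITE, Color.RED, 40);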
As a word of warning: Java (on Android) is pretty slow at image processing. If it's not fast enough for you, you should rewrite the algorithm in C, for example, and use the NDK.
UPDATE: Color replace algorithm in C
Here is the same algorithm written in C. The last function is the actual algorithm; it takes the bitmap's pixel array as its argument. You need to create a header file with that function declaration, set up the usual NDK build boilerplate, and create an additional Java class with the following method declaration:
native static void changeColor(int[] pixels, int width, int height, int keyColor, int replColor, int tolerance);
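For completeness, a minimal sketch of that wrapper class might look like this. The class name and library name are assumptions; they must match the JNI symbol in the C file and whatever module name your NDK build actually produces:

public class ClassName {
    static {
        // Assumed module name; must match your NDK build output (libchangecolor.so)
        System.loadLibrary("changecolor");
    }

    native static void changeColor(int[] pixels, int width, int height,
            int keyColor, int replColor, int tolerance);
}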
C implementation:
#include <jni.h>
#include <math.h>
#include <stdint.h>  /* uint8_t, uint16_t */
#include <stdlib.h>  /* abs() */

#define MIN(x,y) (((x) < (y)) ? (x) : (y))
#define MAX(x,y) (((x) > (y)) ? (x) : (y))
/* Clamp an int into the 0..255 byte range. */
int clamp_byte(int val) {
    if (val > 255) {
        return 255;
    } else if (val < 0) {
        return 0;
    } else {
        return val;
    }
}

/* Pack four channels into an ARGB int, matching android.graphics.Color. */
int encode_argb(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return ((a & 0xFF) << 24) | ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | (b & 0xFF);
}

int alpha(int c) { return (c >> 24) & 0xFF; }
int red(int c)   { return (c >> 16) & 0xFF; }
int green(int c) { return (c >> 8) & 0xFF; }
int blue(int c)  { return c & 0xFF; }

typedef struct struct_hsv {
    uint16_t h;  /* hue: 0..359 */
    uint8_t  s;  /* saturation: 0..255 */
    uint8_t  v;  /* value: 0..255 */
} hsv;
// http://www.ruinelli.ch/rgb-to-hsv
hsv rgb255_to_hsv(uint8_t r, uint8_t g, uint8_t b) {
    uint8_t min, max, delta;
    hsv result;
    int h;
    min = MIN(r, MIN(g, b));
    max = MAX(r, MAX(g, b));
    result.v = max;         // v, 0..255
    delta = max - min;      // 0..255, <= v
    if (delta != 0 && max != 0) {
        result.s = ((int) delta) * 255 / max;  // s, 0..255
        if (r == max) {
            h = (g - b) * 60 / delta;          // between yellow & magenta
        } else if (g == max) {
            h = 120 + (b - r) * 60 / delta;    // between cyan & yellow
        } else {
            h = 240 + (r - g) * 60 / delta;    // between magenta & cyan
        }
        if (h < 0) h += 360;
        result.h = h;
    } else {
        // grey (r == g == b) or black: hue and saturation are undefined
        result.h = 0;
        result.s = 0;
    }
    return result;
}
int hsv_to_argb(hsv color, uint8_t alpha) {
    int i;
    uint8_t r, g, b;
    float f, p, q, t, h, s, v;
    h = (float) color.h;
    s = (float) color.s;
    v = (float) color.v;
    s /= 255;
    if (s == 0) {
        // achromatic (grey)
        return encode_argb(color.v, color.v, color.v, alpha);
    }
    h /= 60;    // sector 0 to 5
    i = floor(h);
    f = h - i;  // fractional part of h
    p = (unsigned char) (v * (1 - s));
    q = (unsigned char) (v * (1 - s * f));
    t = (unsigned char) (v * (1 - s * (1 - f)));
    switch (i) {
        case 0:  r = v; g = t; b = p; break;
        case 1:  r = q; g = v; b = p; break;
        case 2:  r = p; g = v; b = t; break;
        case 3:  r = p; g = q; b = v; break;
        case 4:  r = t; g = p; b = v; break;
        default: r = v; g = p; b = q; break;  // case 5
    }
    return encode_argb(r, g, b, alpha);
}
JNIEXPORT void JNICALL Java_my_package_name_ClassName_changeColor(
        JNIEnv* env, jclass clazz, jintArray bitmapArray, jint width, jint height,
        jint keyColor, jint replColor, jint tolerance) {
    jint* pixels = (*env)->GetPrimitiveArrayCritical(env, bitmapArray, 0);
    // Key color channels for the tolerance test
    int sR = red(keyColor);
    int sG = green(keyColor);
    int sB = blue(keyColor);
    // Replacement color converted to HSV once, up front
    int tR = red(replColor);
    int tG = green(replColor);
    int tB = blue(replColor);
    hsv cHsv = rgb255_to_hsv(tR, tG, tB);
    int targetHue = cHsv.h;
    int targetSat = cHsv.s;
    int targetVal = cHsv.v;
    int i;
    int max = width * height;
    for (i = 0; i < max; ++i) {
        int pixel = pixels[i];
        if (pixel == keyColor) {
            pixels[i] = replColor;
        } else {
            int pR = red(pixel);
            int pG = green(pixel);
            int pB = blue(pixel);
            int deltaR = abs(pR - sR);
            int deltaG = abs(pG - sG);
            int deltaB = abs(pB - sB);
            if (deltaR <= tolerance && deltaG <= tolerance && deltaB <= tolerance) {
                // Keep the pixel's own value (scaled); swap in hue/saturation
                cHsv = rgb255_to_hsv(pR, pG, pB);
                cHsv.h = targetHue;
                cHsv.s = targetSat;
                cHsv.v = ((int) cHsv.v * targetVal) / 255;
                pixels[i] = hsv_to_argb(cHsv, alpha(pixel));
            }
        }
    }
    (*env)->ReleasePrimitiveArrayCritical(env, bitmapArray, pixels, 0);
}
Related
I want to change a bitmap loaded with format Bitmap.Config.ARGB_8888 to transparent via JNI.
#define MAKE_RGBA(r,g,b,a) (((a) << 24) | ((r) << 16) | ((g) << 8) | (b))
...
AndroidBitmapInfo info;
memset(&info, 0, sizeof(info));
AndroidBitmap_getInfo(env, bitmap, &info);
int res = AndroidBitmap_lockPixels(env, bitmap, &pixels);
int tr = 0, tg = 0, tb = 0, ta = 0;
int width = info.width;
int height = info.height;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        void *pixel = ((uint32_t *) pixels) + y * width + x;
        *((uint32_t *) pixel) = MAKE_RGBA(tr, tg, tb, ta);
    }
}
AndroidBitmap_unlockPixels(env, bitmap);
I read one pixel back in the app like this:
int value = paintBitmap.getPixel(0, 0);
int r = (value & 0x00ff0000)>>16;
int g = (value & 0x0000ff00)>>8;
int b = value & 0x000000ff;
int a = (value & 0xff000000)>>24;
But I get the value -16777216 with alpha -1, and the transformed bitmap is totally black.
How can I solve this problem? Thank you!
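A side note on the alpha coming out as -1: in Java, >> is an arithmetic shift that extends the sign bit, so (value & 0xff000000) >> 24 yields -1 whenever the alpha byte is 0xFF. Masking after the shift, or using the unsigned >>> operator, keeps each channel in the 0..255 range. A minimal sketch of the channel extraction:

int value = paintBitmap.getPixel(0, 0);
int a = (value >>> 24) & 0xFF; // unsigned shift: alpha stays in 0..255
int r = (value >> 16) & 0xFF;
int g = (value >> 8) & 0xFF;
int b = value & 0xFF;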
I am building a scanner app and trying to determine the "preview quality" from the camera's preview callback. I want to customize the camera's AUTO_FLASH_MODE so that it is turned on if the environment is too dark.
How can I detect a high average of dark pixels? That would mean the preview is dark and I need to turn on the camera's flash light.
Either find out how to access the pixel values of your image and calculate the average intensity yourself, or use any image processing library to do so.
Dark pixels have low values, bright pixels have high values.
You want the sum of all red, green and blue values divided by three times your pixel count.
Define a threshold for when to turn on the flash, but keep in mind that you will need a new exposure time afterwards.
Prefer flash over a longer exposure time, as long exposure times yield higher image noise.
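If the goal is only an average intensity, note that with the default NV21 preview format the first width * height bytes of the callback buffer are already the luma (Y) plane, so you can average those bytes directly without any YUV-to-RGB conversion or Bitmap allocation. A minimal sketch, assuming NV21 data; the sampling stride of 16 and the darkness threshold of 40 are arbitrary values to tune:

private static boolean isTooDark(byte[] nv21, int width, int height) {
    final int frameSize = width * height; // Y (luma) plane comes first in NV21
    long sum = 0;
    int samples = 0;
    for (int i = 0; i < frameSize; i += 16) { // sample every 16th pixel for speed
        sum += nv21[i] & 0xFF;                // luma is an unsigned byte
        samples++;
    }
    return (sum / samples) < 40; // threshold: tune empirically
}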
I tried this approach, but I think it spends unnecessary time processing the bitmap before taking the average color:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Size cameraResolution = resolution;
    PreviewCallback callback = this.callback;
    if (cameraResolution != null && callback != null) {
        int format = camera.getParameters().getPreviewFormat();
        SourceData source = new SourceData(data, cameraResolution.width,
                cameraResolution.height, format, getCameraRotation());
        callback.onPreview(source);
        final int[] rgb = decodeYUV420SP(data, cameraResolution.width, cameraResolution.height);
        Bitmap bmp = Bitmap.createBitmap(rgb, cameraResolution.width,
                cameraResolution.height, Bitmap.Config.ARGB_8888);
        if (bmp != null) {
            // average a 60x60 patch from the middle of the frame
            Bitmap resizebitmap = Bitmap.createBitmap(bmp,
                    bmp.getWidth() / 2, bmp.getHeight() / 2, 60, 60);
            int color = getAverageColor(resizebitmap);
            Log.i("Color Int", color + "");
            String strColor = String.format("#%06X", 0xFFFFFF & color);
            Log.d("strColor", strColor);
            if (!mIsOn) {
                if (color <= -16777216) { // 0xFF000000: minimum color code (full dark)
                    mIsOn = true;
                    setTorch(true);
                    Log.d("Yahooooo", "" + color);
                }
            }
            Log.i("Pixel Value", "Average color: " + Integer.toHexString(color));
        }
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
private int[] decodeYUV420SP(byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    int[] rgb = new int[width * height];
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00)
                    | ((b >> 10) & 0xff);
        }
    }
    return rgb;
}
private int getAverageColor(Bitmap bitmap) {
    int redBucket = 0;
    int greenBucket = 0;
    int blueBucket = 0;
    int pixelCount = 0;
    for (int y = 0; y < bitmap.getHeight(); y++) {
        for (int x = 0; x < bitmap.getWidth(); x++) {
            int c = bitmap.getPixel(x, y);
            pixelCount++;
            redBucket += Color.red(c);
            greenBucket += Color.green(c);
            blueBucket += Color.blue(c);
            // does alpha matter?
        }
    }
    return Color.rgb(redBucket / pixelCount, greenBucket / pixelCount,
            blueBucket / pixelCount);
}
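As an aside on the processing time: reading pixels one at a time with getPixel() is slow, and a single bulk getPixels() call does the same averaging with far fewer calls into the Bitmap. A hedged rewrite of the averaging loop along those lines:

private int getAverageColor(Bitmap bitmap) {
    int w = bitmap.getWidth();
    int h = bitmap.getHeight();
    int[] px = new int[w * h];
    bitmap.getPixels(px, 0, w, 0, 0, w, h); // one bulk read
    long red = 0, green = 0, blue = 0;
    for (int c : px) {
        red += (c >> 16) & 0xFF;
        green += (c >> 8) & 0xFF;
        blue += c & 0xFF;
    }
    int n = px.length;
    return Color.rgb((int) (red / n), (int) (green / n), (int) (blue / n));
}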
I am using a convolution matrix in my Android app to emboss images.
I have defined the class for it as:
public class ConvolutionMatrix {
    public static final int SIZE = 3;

    public double[][] Matrix;
    public double Factor = 1;
    public double Offset = 1;

    public ConvolutionMatrix(int size) {
        Matrix = new double[size][size];
    }

    public void setAll(double value) {
        for (int x = 0; x < SIZE; ++x) {
            for (int y = 0; y < SIZE; ++y) {
                Matrix[x][y] = value;
            }
        }
    }

    public void applyConfig(double[][] config) {
        for (int x = 0; x < SIZE; ++x) {
            for (int y = 0; y < SIZE; ++y) {
                Matrix[x][y] = config[x][y];
            }
        }
    }

    public static Bitmap computeConvolution3x3(Bitmap src, ConvolutionMatrix matrix) {
        int width = src.getWidth();
        int height = src.getHeight();
        Bitmap result = Bitmap.createBitmap(width, height, src.getConfig());
        int A, R, G, B;
        int sumR, sumG, sumB;
        int[][] pixels = new int[SIZE][SIZE];
        for (int y = 0; y < height - 2; ++y) {
            for (int x = 0; x < width - 2; ++x) {
                // get pixel matrix
                for (int i = 0; i < SIZE; ++i) {
                    for (int j = 0; j < SIZE; ++j) {
                        pixels[i][j] = src.getPixel(x + i, y + j);
                    }
                }
                // get alpha of center pixel
                A = Color.alpha(pixels[1][1]);
                // init color sum
                sumR = sumG = sumB = 0;
                // get sum of RGB on matrix
                for (int i = 0; i < SIZE; ++i) {
                    for (int j = 0; j < SIZE; ++j) {
                        sumR += (Color.red(pixels[i][j]) * matrix.Matrix[i][j]);
                        sumG += (Color.green(pixels[i][j]) * matrix.Matrix[i][j]);
                        sumB += (Color.blue(pixels[i][j]) * matrix.Matrix[i][j]);
                    }
                }
                // get final red, clamped to 0..255
                R = (int) (sumR / matrix.Factor + matrix.Offset);
                if (R < 0) { R = 0; } else if (R > 255) { R = 255; }
                // get final green
                G = (int) (sumG / matrix.Factor + matrix.Offset);
                if (G < 0) { G = 0; } else if (G > 255) { G = 255; }
                // get final blue
                B = (int) (sumB / matrix.Factor + matrix.Offset);
                if (B < 0) { B = 0; } else if (B > 255) { B = 255; }
                // apply new pixel
                result.setPixel(x + 1, y + 1, Color.argb(A, R, G, B));
            }
        }
        // final image
        return result;
    }
}
It gives me the proper result, but it takes too much time to calculate. Is there any way to make the calculation faster and more efficient?
The core of your slowdown is:
// apply new pixel
result.setPixel(x + 1, y + 1, Color.argb(A, R, G, B));
Setting each pixel one by one is a reasonable amount of work per iteration; those calls aren't free in the Bitmap class. It's far better to call getPixels() once, work on the raw pixel array, and then write it back in one go when you're done.
You could also hardcode the emboss: with that kernel, most of the time you're grabbing a bunch of data and multiplying it by zero, so you can easily cheat and grab just the three pixels you care about.
private static int hardEmboss(int[] pixels, int stride, int index, int[][] matrix, int parts) {
    // ignoring the matrix: hardcoded for the emboss kernel below
    int p1 = pixels[index];
    int p2 = pixels[index + stride + 1];
    int p3 = pixels[index + stride + stride + 2];
    int r = 2 * ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF) - ((p3 >> 16) & 0xFF);
    int g = 2 * ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF) - ((p3 >> 8) & 0xFF);
    int b = 2 * (p1 & 0xFF) - (p2 & 0xFF) - (p3 & 0xFF);
    return 0xFF000000 | (crimp(r) << 16) | (crimp(g) << 8) | crimp(b);
}
Assuming your emboss kernel is:
int[][] matrix = new int[][] {
    { 2,  0,  0},
    { 0, -1,  0},
    { 0,  0, -1}
};
Also, unbeknownst to most everybody, there's a flaw in the standard convolution algorithm: returning the result pixel to the center is an error. If you return it to the upper-left corner instead, you can simply process all the data in the same memory footprint, going left to right, top to bottom, in a scanline operation.
public static int crimp(int v) {
    return (v > 255) ? 255 : ((v < 0) ? 0 : v);
}

public static void applyEmboss(int[] pixels, int stride) {
    // stride should equal the bitmap width here,
    // and pixels.length == bitmap.height * bitmap.width
    int pos = 0;
    try {
        // Deliberately runs off the end of the array; the catch below
        // terminates the loop without a per-pixel bounds check.
        while (true) {
            int p1 = pixels[pos];
            int p2 = pixels[pos + stride + 1];
            int p3 = pixels[pos + stride + stride + 2];
            int r = 2 * ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF) - ((p3 >> 16) & 0xFF);
            int g = 2 * ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF) - ((p3 >> 8) & 0xFF);
            int b = 2 * (p1 & 0xFF) - (p2 & 0xFF) - (p3 & 0xFF);
            pixels[pos++] = 0xFF000000 | (crimp(r) << 16) | (crimp(g) << 8) | crimp(b);
        }
    } catch (ArrayIndexOutOfBoundsException e) {
        // end of array reached: done
    }
}
The disadvantage is that the image appears to shift left and up by one pixel; if you do another scanline pass backwards, you could shift it back. All the garbage ends up as two rows on the right and bottom edges (some of it filled with embossed nonsense, because I didn't slow the loop down to check for those places). So if you want to cut that off, reduce the height and width by 2 when you write the pixels back and leave the stride at the original width. Since all the good data ends up at the top left, you don't have to fiddle with the offset at all.
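Tying that back to the getPixels() advice above, usage might look like this (a sketch; bmp must be a mutable ARGB_8888 bitmap):

int w = bmp.getWidth();
int h = bmp.getHeight();
int[] px = new int[w * h];
bmp.getPixels(px, 0, w, 0, 0, w, h); // one bulk read
applyEmboss(px, w);                  // stride == width here
bmp.setPixels(px, 0, w, 0, 0, w, h); // one bulk write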
Also, you could just use RenderScript.
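For a 3x3 kernel specifically, the framework ships an intrinsic (API 17+), so a sketch along these lines avoids the hand-written loops entirely; treat it as a starting point rather than a drop-in replacement:

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicConvolve3x3;

// Sketch: run a 3x3 kernel (e.g. the emboss above) over a Bitmap.
public static Bitmap convolve3x3(Context context, Bitmap src, float[] coefficients) {
    Bitmap out = Bitmap.createBitmap(src.getWidth(), src.getHeight(), src.getConfig());
    RenderScript rs = RenderScript.create(context);
    Allocation inAlloc = Allocation.createFromBitmap(rs, src);
    Allocation outAlloc = Allocation.createFromBitmap(rs, out);
    ScriptIntrinsicConvolve3x3 conv =
            ScriptIntrinsicConvolve3x3.create(rs, Element.U8_4(rs));
    conv.setCoefficients(coefficients); // 9 values, row-major
    conv.setInput(inAlloc);
    conv.forEach(outAlloc);
    outAlloc.copyTo(out); // copy the result back into the Bitmap
    rs.destroy();
    return out;
}

For the emboss kernel above that would be called with new float[] {2, 0, 0, 0, -1, 0, 0, 0, -1}.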
Take a look at the Convolution Demo app, which compares a convolution implementation done in Java with one done in C++.
Needless to say, the C++ variant runs more than 10x faster.
So if you want speed, implement it via the NDK or via shaders.
I want to display the negative of a DICOM image. Using a bitmap, I did this:
public static Bitmap getNegativeImage(Bitmap img) {
    int w1 = img.getWidth();
    int h1 = img.getHeight();
    Bitmap gray = Bitmap.createBitmap(w1, h1, img.getConfig());
    int value, alpha, r, g, b;
    for (int i = 0; i < w1; i++) {
        for (int j = 0; j < h1; j++) {
            value = img.getPixel(i, j);
            alpha = getAlpha(value);
            r = 255 - getRed(value);
            g = 255 - getGreen(value);
            b = 255 - getBlue(value);
            value = createRGB(alpha, r, g, b);
            gray.setPixel(i, j, value);
        }
    }
    return gray;
}

public static int createRGB(int alpha, int r, int g, int b) {
    return (alpha << 24) + (r << 16) + (g << 8) + b;
}

public static int getAlpha(int rgb) {
    return (rgb >> 24) & 0xFF;
}

public static int getRed(int rgb) {
    return (rgb >> 16) & 0xFF;
}

public static int getGreen(int rgb) {
    return (rgb >> 8) & 0xFF;
}

public static int getBlue(int rgb) {
    return rgb & 0xFF;
}
But the inverted (negative) image comes out black. On tapping again the original image appears, but the inverted image is never displayed.
I don't see why your code won't work, but this should:
private static final int RGB_MASK = 0x00FFFFFF;

public Bitmap invert(Bitmap original) {
    // Create a mutable Bitmap to invert; the second argument makes it mutable
    Bitmap inversion = original.copy(Config.ARGB_8888, true);

    // Get info about the Bitmap
    int width = inversion.getWidth();
    int height = inversion.getHeight();
    int pixels = width * height;

    // Get original pixels
    int[] pixel = new int[pixels];
    inversion.getPixels(pixel, 0, width, 0, 0, width, height);

    // Modify pixels: XOR flips the RGB bits, leaving alpha intact
    for (int i = 0; i < pixels; i++)
        pixel[i] ^= RGB_MASK;
    inversion.setPixels(pixel, 0, width, 0, 0, width, height);

    // Return inverted Bitmap
    return inversion;
}
It creates a mutable copy of the Bitmap (not all Bitmaps are mutable) and inverts the RGB part of every pixel while leaving alpha intact.
Edit
I have an idea of why your code isn't working: you're assuming that the pixels are laid out as AARRGGBB.
I want to blur an image. I used:
public Bitmap mohu(Bitmap bmpOriginal, int hRadius, int vRadius) {
    int height = bmpOriginal.getHeight();
    int width = bmpOriginal.getWidth();
    int iterations = 5;
    int[] inPixels = new int[width * height];
    int[] outPixels = new int[width * height];
    Bitmap bmpSephia = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    Canvas canvas = new Canvas(bmpSephia);
    canvas.drawBitmap(bmpOriginal, 0, 0, null);
    int i = 0;
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            inPixels[i] = bmpOriginal.getPixel(x, y);
            i++;
        }
    }
    for (int k = 0; k < iterations; k++) {
        blur(inPixels, outPixels, width, height, hRadius);
        blur(outPixels, inPixels, height, width, vRadius);
    }
    bmpSephia.setPixels(outPixels, 0, width, 0, 0, width, height);
    return bmpSephia;
}
public static void blur(int[] in, int[] out, int width, int height, int radius) {
    int widthMinus1 = width - 1;
    int tableSize = 2 * radius + 1;
    int[] divide = new int[256 * tableSize];
    for (int i = 0; i < 256 * tableSize; i++)
        divide[i] = i / tableSize;
    int inIndex = 0;
    for (int y = 0; y < height; y++) {
        int outIndex = y;
        int ta = 0, tr = 0, tg = 0, tb = 0;
        for (int i = -radius; i <= radius; i++) {
            int rgb = in[inIndex + clamp(i, 0, width - 1)];
            ta += (rgb >> 24) & 0xff;
            tr += (rgb >> 16) & 0xff;
            tg += (rgb >> 8) & 0xff;
            tb += rgb & 0xff;
        }
        for (int x = 0; x < width; x++) {
            // note: output is written transposed, so the second pass
            // flips the image back while blurring the other axis
            out[outIndex] = (divide[ta] << 24) | (divide[tr] << 16)
                    | (divide[tg] << 8) | divide[tb];
            int i1 = x + radius + 1;
            if (i1 > widthMinus1)
                i1 = widthMinus1;
            int i2 = x - radius;
            if (i2 < 0)
                i2 = 0;
            int rgb1 = in[inIndex + i1];
            int rgb2 = in[inIndex + i2];
            ta += ((rgb1 >> 24) & 0xff) - ((rgb2 >> 24) & 0xff);
            tr += ((rgb1 & 0xff0000) - (rgb2 & 0xff0000)) >> 16;
            tg += ((rgb1 & 0xff00) - (rgb2 & 0xff00)) >> 8;
            tb += (rgb1 & 0xff) - (rgb2 & 0xff);
            outIndex += height;
        }
        inIndex += width;
    }
}
// clamp helper used by blur()
public static int clamp(int x, int a, int b) {
    return (x < a) ? a : (x > b) ? b : x;
}
The method works well for some images, but for others the result looks very crude. Can you give me some advice? I have read http://www.jhlabs.com/ip/blurring.html
and http://java.sun.com/products/java-media/jai/forDevelopers/jai1_0_1guide-unc/Image-enhance.doc.html#51172, but I cannot find a good method for Android.
Your intermediate image is RGB565. That means 16 bits: 5 for R, 6 for G and 5 for B. If the original image is RGB888, it will look bad after blurring. Can you not create an intermediate image in the same format as the original?
Also, if the original image is RGB888, how is it converted to 565? Your code has:
c = bmpOriginal.getPixel(x, y);
inPixels[i]=c;
It looks like there is no controlled conversion.
Your blur function is written for an ARGB image. As well as being inefficient because you've hard-coded for 565, if the original image is ARGB8888 then your conversion to RGB565 is going to do strange things with the alpha channel.
If this answer is not enough, it would be helpful to see some of the "bad" images this code creates.
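On the intermediate-format point, a minimal sketch of the fix would be to create the working bitmap in the source's own config rather than hard-coding RGB_565, falling back to ARGB_8888 when getConfig() returns null (which happens for some bitmap sources):

Bitmap.Config config = bmpOriginal.getConfig();
if (config == null) config = Bitmap.Config.ARGB_8888; // assumption: sensible default
Bitmap bmpSephia = Bitmap.createBitmap(width, height, config);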