I am building a scanner app and trying to determine the "preview quality" from the camera's preview callback. I want to customize the camera's AUTO_FLASH_MODE so that the flash is turned on if the environment is too dark.
How can I detect whether there is a high average of dark pixels? That would mean the preview is dark and I therefore need to turn on the camera's flash.
Either find out how to access the pixel values of your image and calculate the average intensity yourself, or use an image processing library to do so.
Dark pixels have low values, bright pixels have high values.
You want the sum of all red, green and blue values divided by three times your pixel count.
Define a threshold for when to turn on the flash, but keep in mind that you will then have to get a new exposure time.
Prefer flash over an exposure time increase, as long exposure times yield higher image noise.
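If you stay close to the camera, you don't even need an RGB conversion for this: with Android's default NV21 preview format, the first width × height bytes of each frame are the luma (Y) plane. A minimal sketch of the brightness check (untested; the threshold and the sampling step are arbitrary values you would tune):

// Estimate frame brightness directly from the NV21 luma plane.
// DARK_THRESHOLD (0..255) and the sampling step of 10 are assumptions to tune.
private static final int DARK_THRESHOLD = 40;

static boolean isFrameDark(byte[] nv21, int width, int height) {
    long sum = 0;
    int pixels = width * height; // the Y plane is the first width*height bytes
    for (int i = 0; i < pixels; i += 10) { // sample every 10th pixel for speed
        sum += nv21[i] & 0xff;
    }
    int samples = (pixels + 9) / 10;
    return (sum / samples) < DARK_THRESHOLD;
}

With that you could flip the torch on when isFrameDark returns true for a few consecutive frames, which avoids flicker from a single dark frame.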
I tried the bitmap-based approach, but I think it wastes processing time converting the frame to a bitmap and then computing an average color:
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    Size cameraResolution = resolution;
    PreviewCallback callback = this.callback;
    if (cameraResolution != null && callback != null) {
        int format = camera.getParameters().getPreviewFormat();
        SourceData source = new SourceData(data, cameraResolution.width,
                cameraResolution.height, format, getCameraRotation());
        callback.onPreview(source);
        final int[] rgb = decodeYUV420SP(data, cameraResolution.width, cameraResolution.height);
        Bitmap bmp = Bitmap.createBitmap(rgb, cameraResolution.width,
                cameraResolution.height, Bitmap.Config.ARGB_8888);
        if (bmp != null) {
            // Crop a 60x60 patch starting at the center of the frame
            Bitmap resizebitmap = Bitmap.createBitmap(bmp,
                    bmp.getWidth() / 2, bmp.getHeight() / 2, 60, 60);
            int color = getAverageColor(resizebitmap);
            Log.i("Color Int", color + "");
            String strColor = String.format("#%06X", 0xFFFFFF & color);
            Log.d("strColor", strColor);
            if (!mIsOn) {
                if (color <= -16777216) { // 0xFF000000, i.e. fully dark
                    mIsOn = true;
                    setTorch(true);
                    Log.d("Yahooooo", "" + color);
                }
            }
            Log.i("Pixel Value", "Average color: " + Integer.toHexString(color));
        }
    } else {
        Log.d(TAG, "Got preview callback, but no handler or resolution available");
    }
}
private int[] decodeYUV420SP(byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;
    int[] rgb = new int[width * height];
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
        }
    }
    return rgb;
}
private int getAverageColor(Bitmap bitmap) {
    int redBucket = 0;
    int greenBucket = 0;
    int blueBucket = 0;
    int pixelCount = 0;
    for (int y = 0; y < bitmap.getHeight(); y++) {
        for (int x = 0; x < bitmap.getWidth(); x++) {
            int c = bitmap.getPixel(x, y);
            pixelCount++;
            redBucket += Color.red(c);
            greenBucket += Color.green(c);
            blueBucket += Color.blue(c);
            // does alpha matter?
        }
    }
    return Color.rgb(redBucket / pixelCount,
            greenBucket / pixelCount, blueBucket / pixelCount);
}
Hi, I am creating an app for Android and I have encountered a problem with how to color bitmaps.
I am using the following simple code:
for (int i = 0; i < pixels.length; i++) {
    if (pixels[i] == Color.WHITE) {
        pixels[i] = Color.RED;
    }
}
where pixels is the array of pixels of the bitmap.
The problem is that at the edges of the colored area I get a thin layer of pixels that weren't colored. I understand this stems from that layer of white being somewhat shadowy (not entirely white, partially black). How do I get over this?
I hope I made my question clear enough.
Right now you are matching and replacing one specific integer value for white. However, in your original bitmap that white color bleeds into other colors, so the white values around the edges of these white patches are slightly different.
You need to change your algorithm to take a color matching tolerance into account. For that you'll have to split up your pixel and your key color into their three color channels and check individually whether the differences between them are within a certain tolerance value.
That way you can match the whitish pixels around the edges. But even with added tolerance you cannot just replace the matching pixels with plain red; you would get hard, aliased red edges and it wouldn't look pretty. I wrote a similar algorithm a while ago and got around that aliasing issue by doing some color blending in HSV color space:
public Bitmap changeColor(Bitmap src, int keyColor,
        int replColor, int tolerance) {
    Bitmap copy = src.copy(Bitmap.Config.ARGB_8888, true);
    int width = copy.getWidth();
    int height = copy.getHeight();
    int[] pixels = new int[width * height];
    src.getPixels(pixels, 0, width, 0, 0, width, height);

    int sR = Color.red(keyColor);
    int sG = Color.green(keyColor);
    int sB = Color.blue(keyColor);

    int tR = Color.red(replColor);
    int tG = Color.green(replColor);
    int tB = Color.blue(replColor);

    float[] hsv = new float[3];
    Color.RGBToHSV(tR, tG, tB, hsv);
    float targetHue = hsv[0];
    float targetSat = hsv[1];
    float targetVal = hsv[2];

    for (int i = 0; i < pixels.length; ++i) {
        int pixel = pixels[i];
        if (pixel == keyColor) {
            pixels[i] = replColor;
        } else {
            int pR = Color.red(pixel);
            int pG = Color.green(pixel);
            int pB = Color.blue(pixel);
            int deltaR = Math.abs(pR - sR);
            int deltaG = Math.abs(pG - sG);
            int deltaB = Math.abs(pB - sB);
            if (deltaR <= tolerance && deltaG <= tolerance
                    && deltaB <= tolerance) {
                // Keep the pixel's shading but adopt the replacement's hue/saturation
                Color.RGBToHSV(pR, pG, pB, hsv);
                hsv[0] = targetHue;
                hsv[1] = targetSat;
                hsv[2] *= targetVal;
                pixels[i] = Color.HSVToColor(Color.alpha(pixel), hsv);
            }
        }
    }
    copy.setPixels(pixels, 0, width, 0, 0, width, height);
    return copy;
}
keyColor and replColor are ARGB-encoded integer values such as Color.WHITE and Color.RED. tolerance is a value from 0 to 255 that specifies the key color matching tolerance per color channel. I had to rewrite that snippet a bit to remove framework specifics; I hope I didn't make any mistakes.
A word of warning: Java (on Android) is pretty slow at image processing. If it's not fast enough for you, you should rewrite the algorithm in C, for example, and use the NDK.
UPDATE: Color replace algorithm in C
Here is the implementation of the same algorithm written in C. The last function is the actual algorithm; it takes the bitmap's pixel array as an argument. You need to create a header file with that function declaration, set up the usual NDK compilation boilerplate, and create an additional Java class with the following method declaration:
native static void changeColor(int[] pixels, int width, int height, int keyColor, int replColor, int tolerance);
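For reference, a minimal wrapper class might look like the sketch below. The class, package and library names are placeholders: they must match the Java_my_package_name_ClassName_changeColor symbol in the C file and the module name in your NDK build.

// Hypothetical wrapper; lives in the package encoded in the JNI symbol above.
public final class ClassName {
    static {
        // Placeholder library name; must match LOCAL_MODULE in your Android.mk
        System.loadLibrary("colorreplace");
    }

    private ClassName() {}

    native static void changeColor(int[] pixels, int width, int height,
            int keyColor, int replColor, int tolerance);
}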
C implementation:
#include <math.h>
#include <stdint.h> // uint8_t, uint16_t
#include <stdlib.h> // abs
#include <jni.h>    // JNIEXPORT, JNICALL

#define MIN(x,y) ((x < y) ? x : y)
#define MAX(x,y) ((x > y) ? x : y)

int clamp_byte(int val) {
    if (val > 255) {
        return 255;
    } else if (val < 0) {
        return 0;
    } else {
        return val;
    }
}

int encode_argb(uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
    return ((a & 0xFF) << 24) | ((r & 0xFF) << 16) | ((g & 0xFF) << 8) | (b & 0xFF);
}
int alpha(int c) {
    return (c >> 24) & 0xFF;
}

int red(int c) {
    return (c >> 16) & 0xFF;
}

int green(int c) {
    return (c >> 8) & 0xFF;
}

int blue(int c) {
    return c & 0xFF;
}

typedef struct struct_hsv {
    uint16_t h;
    uint8_t s;
    uint8_t v;
} hsv;
// http://www.ruinelli.ch/rgb-to-hsv
hsv rgb255_to_hsv(uint8_t r, uint8_t g, uint8_t b) {
    uint8_t min, max, delta;
    hsv result;
    int h;
    min = MIN(r, MIN(g, b));
    max = MAX(r, MAX(g, b));
    result.v = max;          // v, 0..255
    delta = max - min;       // 0..255, < v
    if (delta != 0 && max != 0) {
        result.s = ((int) delta) * 255 / max; // s, 0..255
        if (r == max) {
            h = (g - b) * 60 / delta;         // between yellow & magenta
        } else if (g == max) {
            h = 120 + (b - r) * 60 / delta;   // between cyan & yellow
        } else {
            h = 240 + (r - g) * 60 / delta;   // between magenta & cyan
        }
        if (h < 0) h += 360;
        result.h = h;
    } else {
        // achromatic: r = g = b
        result.h = 0;
        result.s = 0;
    }
    return result;
}
int hsv_to_argb(hsv color, uint8_t alpha) {
    int i;
    uint8_t r, g, b;
    float f, p, q, t, h, s, v;
    h = (float) color.h;
    s = (float) color.s;
    v = (float) color.v;
    s /= 255;
    if (s == 0) {
        // achromatic (grey)
        return encode_argb(color.v, color.v, color.v, alpha);
    }
    h /= 60; // sector 0 to 5
    i = floor(h);
    f = h - i; // fractional part of h
    p = (unsigned char) (v * (1 - s));
    q = (unsigned char) (v * (1 - s * f));
    t = (unsigned char) (v * (1 - s * (1 - f)));
    switch (i) {
    case 0:
        r = v; g = t; b = p;
        break;
    case 1:
        r = q; g = v; b = p;
        break;
    case 2:
        r = p; g = v; b = t;
        break;
    case 3:
        r = p; g = q; b = v;
        break;
    case 4:
        r = t; g = p; b = v;
        break;
    default: // case 5
        r = v; g = p; b = q;
        break;
    }
    return encode_argb(r, g, b, alpha);
}
JNIEXPORT void JNICALL Java_my_package_name_ClassName_changeColor(
        JNIEnv* env, jclass clazz, jintArray bitmapArray, jint width, jint height,
        jint keyColor, jint replColor, jint tolerance) {
    jint* pixels = (*env)->GetPrimitiveArrayCritical(env, bitmapArray, 0);

    int sR = red(keyColor);
    int sG = green(keyColor);
    int sB = blue(keyColor);

    int tR = red(replColor);
    int tG = green(replColor);
    int tB = blue(replColor);

    hsv cHsv = rgb255_to_hsv(tR, tG, tB);
    int targetHue = cHsv.h;
    int targetSat = cHsv.s;
    int targetVal = cHsv.v;

    int i;
    int max = width * height;
    for (i = 0; i < max; ++i) {
        int pixel = pixels[i];
        if (pixel == keyColor) {
            pixels[i] = replColor;
        } else {
            int pR = red(pixel);
            int pG = green(pixel);
            int pB = blue(pixel);
            int deltaR = abs(pR - sR);
            int deltaG = abs(pG - sG);
            int deltaB = abs(pB - sB);
            if (deltaR <= tolerance && deltaG <= tolerance
                    && deltaB <= tolerance) {
                cHsv = rgb255_to_hsv(pR, pG, pB);
                cHsv.h = targetHue;
                cHsv.s = targetSat;
                cHsv.v = ((int) cHsv.v * targetVal) / 255;
                pixels[i] = hsv_to_argb(cHsv, alpha(pixel));
            }
        }
    }
    (*env)->ReleasePrimitiveArrayCritical(env, bitmapArray, pixels, 0);
}
I implemented a watercolor effect for images on Android, but it is too slow (it takes more than 2 minutes), so I am now trying to implement it in JNI for better speed.
Here is my Java code; inPixels is the pixel array of the bitmap.
// 'range' is the neighborhood radius; in this snippet it is a field of the
// surrounding class (the C version below uses 5).
protected int[] filterPixels(int width, int height, int[] inPixels) {
    int levels = 256;
    int index = 0;
    int[] rHistogram = new int[levels];
    int[] gHistogram = new int[levels];
    int[] bHistogram = new int[levels];
    int[] rTotal = new int[levels];
    int[] gTotal = new int[levels];
    int[] bTotal = new int[levels];
    int[] outPixels = new int[width * height];

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            // Build per-channel histograms of the (2*range+1)^2 neighborhood
            for (int i = 0; i < levels; i++)
                rHistogram[i] = gHistogram[i] = bHistogram[i] = rTotal[i] = gTotal[i] = bTotal[i] = 0;
            for (int row = -range; row <= range; row++) {
                int iy = y + row;
                int ioffset;
                if (0 <= iy && iy < height) {
                    ioffset = iy * width;
                    for (int col = -range; col <= range; col++) {
                        int ix = x + col;
                        if (0 <= ix && ix < width) {
                            int rgb = inPixels[ioffset + ix];
                            int r = (rgb >> 16) & 0xff;
                            int g = (rgb >> 8) & 0xff;
                            int b = rgb & 0xff;
                            int ri = r * levels / 256;
                            int gi = g * levels / 256;
                            int bi = b * levels / 256;
                            rTotal[ri] += r;
                            gTotal[gi] += g;
                            bTotal[bi] += b;
                            rHistogram[ri]++;
                            gHistogram[gi]++;
                            bHistogram[bi]++;
                        }
                    }
                }
            }
            // Take the average color of the most frequent bin per channel
            int r = 0, g = 0, b = 0;
            for (int i = 1; i < levels; i++) {
                if (rHistogram[i] > rHistogram[r])
                    r = i;
                if (gHistogram[i] > gHistogram[g])
                    g = i;
                if (bHistogram[i] > bHistogram[b])
                    b = i;
            }
            r = rTotal[r] / rHistogram[r];
            g = gTotal[g] / gHistogram[g];
            b = bTotal[b] / bHistogram[b];
            outPixels[index] = (inPixels[index] & 0xff000000) | (r << 16) | (g << 8) | b;
            index++;
        }
    }
    return outPixels;
}
(Output image not shown.)
I tried to convert this Java code to C, but I don't know what is wrong.
Here is the C code:
#include <string.h> // memcpy

void filterPixels(int width, int height, int inPixels[]) {
    int levels = 256;
    int index = 0;
    int rHistogram[levels];
    int gHistogram[levels];
    int bHistogram[levels];
    int rTotal[levels];
    int gTotal[levels];
    int bTotal[levels];
    int outPixels[width * height]; // note: a VLA this large can overflow the stack
    // loop variables
    int y, x, i, row, col, j;
    int range = 5;
    for (y = 0; y < height; y++) {
        for (x = 0; x < width; x++) {
            for (i = 0; i < levels; i++)
                rHistogram[i] = gHistogram[i] = bHistogram[i] = rTotal[i] = gTotal[i] = bTotal[i] = 0;
            for (row = -range; row <= range; row++) {
                int iy = y + row;
                int ioffset;
                if (0 <= iy && iy < height) {
                    ioffset = iy * width;
                    for (col = -range; col <= range; col++) {
                        int ix = x + col;
                        if (0 <= ix && ix < width) {
                            int rgb = inPixels[ioffset + ix];
                            int r = (rgb >> 16) & 0xff;
                            int g = (rgb >> 8) & 0xff;
                            int b = rgb & 0xff;
                            int ri = r * levels / 256;
                            int gi = g * levels / 256;
                            int bi = b * levels / 256;
                            rTotal[ri] += r;
                            gTotal[gi] += g;
                            bTotal[bi] += b;
                            rHistogram[ri]++;
                            gHistogram[gi]++;
                            bHistogram[bi]++;
                        }
                    }
                }
            }
            int r = 0, g = 0, b = 0;
            for (j = 1; j < levels; j++) {
                if (rHistogram[j] > rHistogram[r])
                    r = j;
                if (gHistogram[j] > gHistogram[g])
                    g = j;
                if (bHistogram[j] > bHistogram[b])
                    b = j;
            }
            r = rTotal[r] / rHistogram[r];
            g = gTotal[g] / gHistogram[g];
            b = bTotal[b] / bHistogram[b];
            outPixels[index] = (inPixels[index] & 0xff000000) | (r << 16) | (g << 8) | b;
            index++;
        }
    }
    // Copy the result back into inPixels; otherwise the filtered image in the
    // local outPixels array is lost when the function returns.
    memcpy(inPixels, outPixels, sizeof(int) * width * height);
}
I checked the pixel values in the Java code and the C code, and both are the same (for the same image).
Here is the code that calls the native function from my Android activity:
int[] pix = new int[oraginal.getWidth() * oraginal.getHeight()];
Bitmap bitmap = oraginal.copy(oraginal.getConfig(), true);
bitmap.getPixels(pix, 0, bitmap.getWidth(), 0, 0,bitmap.getWidth(), bitmap.getHeight());
filterPixelsJNI(bitmap.getWidth(), bitmap.getHeight(), pix);
bitmap.setPixels(pix, 0, bitmap.getWidth(), 0, 0,bitmap.getWidth(), bitmap.getHeight());
myView.setImageBitmap(bitmap);
This is my first try at JNI, so please help me with this.
UPDATE
public native void filterPixelsJNI( int width, int height, int inPixels[] );
JNI
JNIEXPORT void JNICALL Java_com_testndk_HelloWorldActivity_filterPixelsJNI(
        JNIEnv *env, jobject obj, jint width, jint height, jint inPixels[]) {
    filterPixels(width, height, inPixels);
}
filterPixels is the C function that is called from the JNI stub above.
There are several problems with your JNI code. The algorithmic part is probably correct, but you're not dealing with the Java array to C array conversion correctly.
First of all, the last argument of Java_com_testndk_HelloWorldActivity_filterPixelsJNI should be of type jintArray, and not jint []. This is how you pass a Java array to C code.
Once you get this array, you can't process it directly, you'll have to convert it to a C array:
JNIEXPORT void JNICALL Java_com_testndk_HelloWorldActivity_filterPixelsJNI(
        JNIEnv *env, jobject obj, jint width, jint height, jintArray inPixels) {
    jint *c_inPixels = (*env)->GetIntArrayElements(env, inPixels, NULL);
    filterPixels(width, height, c_inPixels);
    // passing 0 as the last argument copies the native array back to the Java array
    (*env)->ReleaseIntArrayElements(env, inPixels, c_inPixels, 0);
}
I advise you to look at the JNI documentation, which explains how to deal with arrays: http://docs.oracle.com/javase/1.5.0/docs/guide/jni/spec/functions.html
Note that there are now easier ways of processing Java Bitmap objects using the Android NDK. See another of my answers here for details.
I am using JavaCV in Android.
In my code, I have created an ImageComparator object (a class from the OpenCV 2 Cookbook examples:
http://code.google.com/p/javacv/source/browse/OpenCV2_Cookbook/src/opencv2_cookbook/chapter04/ImageComparator.scala?repo=examples
http://code.google.com/p/javacv/wiki/OpenCV2_Cookbook_Examples_Chapter_4) and use that object to compare images. If I use a file from the SD card, the comparator works:
File referenceImageFile = new File(absPath1); // Read an image.
IplImage reference = Util.loadOrExit(referenceImageFile,CV_LOAD_IMAGE_COLOR);
comparator = new ImageComparator(reference);
But it is not working when I create the IplImage from the camera preview; I get the following exception during the comparison "score" calculation:
score = referenceComparator.compare(grayImage) / imageSize;
java.lang.RuntimeException: /home/saudet/android/OpenCV-2.4.2/modules/core/src/convert.cpp:1196: error: (-215) i < src.channels() in function void cvSplit(const void*, void*, void*, void*, void*)
For the camera preview I am using the code from FacePreview to create the IplImage, but it creates the image in grayscale:
int f = SUBSAMPLING_FACTOR;
if (grayImage == null || grayImage.width() != width / f
        || grayImage.height() != height / f) {
    grayImage = IplImage.create(width / f, height / f, IPL_DEPTH_8U, 1);
}
int imageWidth = grayImage.width();
int imageHeight = grayImage.height();
int dataStride = f * width;
int imageStride = grayImage.widthStep();
ByteBuffer imageBuffer = grayImage.getByteBuffer();
for (int y = 0; y < imageHeight; y++) {
    int dataLine = y * dataStride;
    int imageLine = y * imageStride;
    for (int x = 0; x < imageWidth; x++) {
        imageBuffer.put(imageLine + x, data[dataLine + f * x]);
    }
}
How can I create a color IplImage from the camera to use with ImageComparator?
The code below seems to be working fine:
public void onPreviewFrame(final byte[] data, final Camera camera) {
    try {
        Camera.Size size = camera.getParameters().getPreviewSize();
        processImage(data, size.width, size.height);
        camera.addCallbackBuffer(data);
    } catch (RuntimeException e) {
        // The camera has probably just been released, ignore.
        Log.d("Exception", " " + e);
    }
}

protected void processImage(byte[] data, int width, int height) {
    score.clear();
    // First, downsample our image
    int f = SUBSAMPLING_FACTOR;
    IplImage _4image = IplImage.create(width, height, IPL_DEPTH_8U, f);
    int[] _temp = new int[width * height];
    if (_4image != null) {
        decodeYUV420SP(_temp, data, width, height);
        _4image.getIntBuffer().put(_temp);
    }
    Log.d("CompareAndroid", "processImage");
    int imageSize = _4image.width() * _4image.height();
    Iterator<ImageComparator> iterator = reference_List.iterator();
    // Compute histogram match and normalize by image size.
    // 1 means perfect match.
    while (iterator.hasNext()) {
        score.add(iterator.next().compare(_4image) / imageSize);
    }
    Log.d("CompareImages", "Score Size " + score.size());
    postInvalidate();
}
This decodeYUV420SP implementation also seems to work fine:
private void decodeYUV420SP(int[] rgb, byte[] yuv420sp, int width,
        int height) {
    int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0) y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            // note: b and r are packed in swapped (BGR) order here,
            // unlike the ARGB version earlier in this page
            rgb[yp] = 0xff000000 | ((b << 6) & 0xff0000) | ((g >> 2) & 0xff00) | ((r >> 10) & 0xff);
        }
    }
}
I haven't tested it, but something like this should work:
IplImage yuvimage = IplImage.create(width, height * 3 / 2, IPL_DEPTH_8U, 2);
IplImage rgbimage = IplImage.create(width, height, IPL_DEPTH_8U, 3);
cvCvtColor(yuvimage, rgbimage, CV_YUV2BGR_NV21);
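One detail the snippet leaves implicit is getting the preview bytes into yuvimage before the conversion. Assuming data is the NV21 byte[] delivered to onPreviewFrame, something like this should do it:

// Copy the raw NV21 preview bytes into the YUV image's buffer
yuvimage.getByteBuffer().put(data);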
I want to blur an image. I used:
public Bitmap mohu(Bitmap bmpOriginal, int hRadius, int vRadius) {
    int width, height, c;
    height = bmpOriginal.getHeight();
    width = bmpOriginal.getWidth();
    int iterations = 5;
    int[] inPixels = new int[width * height];
    int[] outPixels = new int[width * height];
    Bitmap bmpSephia = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    Canvas canvas = new Canvas(bmpSephia);
    int i = 0;
    canvas.drawBitmap(bmpOriginal, 0, 0, null);
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            c = bmpOriginal.getPixel(x, y);
            inPixels[i] = c;
            i++;
        }
    }
    for (int k = 0; k < iterations; k++) {
        blur(inPixels, outPixels, width, height, hRadius);
        blur(outPixels, inPixels, height, width, vRadius);
    }
    bmpSephia.setPixels(outPixels, 0, width, 0, 0, width, height);
    return bmpSephia;
}
public static void blur(int[] in, int[] out, int width, int height, int radius) {
    int widthMinus1 = width - 1;
    int tableSize = 2 * radius + 1;
    int[] divide = new int[256 * tableSize];
    for (int i = 0; i < 256 * tableSize; i++)
        divide[i] = i / tableSize;
    int inIndex = 0;
    for (int y = 0; y < height; y++) {
        int outIndex = y;
        int ta = 0, tr = 0, tg = 0, tb = 0;
        for (int i = -radius; i <= radius; i++) {
            int rgb = in[inIndex + ImageMath.clamp(i, 0, width - 1)];
            ta += (rgb >> 24) & 0xff;
            tr += (rgb >> 16) & 0xff;
            tg += (rgb >> 8) & 0xff;
            tb += rgb & 0xff;
        }
        for (int x = 0; x < width; x++) {
            out[outIndex] = (divide[ta] << 24) | (divide[tr] << 16) | (divide[tg] << 8) | divide[tb];
            int i1 = x + radius + 1;
            if (i1 > widthMinus1)
                i1 = widthMinus1;
            int i2 = x - radius;
            if (i2 < 0)
                i2 = 0;
            int rgb1 = in[inIndex + i1];
            int rgb2 = in[inIndex + i2];
            ta += ((rgb1 >> 24) & 0xff) - ((rgb2 >> 24) & 0xff);
            tr += ((rgb1 & 0xff0000) - (rgb2 & 0xff0000)) >> 16;
            tg += ((rgb1 & 0xff00) - (rgb2 & 0xff00)) >> 8;
            tb += (rgb1 & 0xff) - (rgb2 & 0xff);
            outIndex += height;
        }
        inIndex += width;
    }
}
The clamp helper (an int version, so its result can be used as an array index above):
public static int clamp(int x, int a, int b) {
    return (x < a) ? a : (x > b) ? b : x;
}
The method works well for some images, but for others the effect is poor and looks very crude. Can you give me some advice? I have read http://www.jhlabs.com/ip/blurring.html
and http://java.sun.com/products/java-media/jai/forDevelopers/jai1_0_1guide-unc/Image-enhance.doc.html#51172, but I cannot find a good method for Android.
Your intermediate image is RGB565. That means 16 bits: 5 bits for R, 6 for G and 5 for B. If the original image is RGB888, it will look bad after blurring. Can you not create an intermediate image in the same format as the original?
Also, if the original image is RGB888, how is it converted to 565? Your code has:
c = bmpOriginal.getPixel(x, y);
inPixels[i]=c;
It looks like there is no controlled conversion.
Your blur function is for an ARGB image. Besides being inefficient, since you've hard-coded the intermediate to 565, if the original image is ARGB8888 then your conversion to RGB565 is going to do strange things with the alpha channel.
If this answer is not enough, it would be helpful to see some of the "bad" images that this code creates.
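For illustration, a minimal change along those lines (assuming the source bitmap is ARGB_8888) would keep the working copy in the same 32-bit config and read the pixels row-major in one bulk call:

// Keep the intermediate in the source's 32-bit config instead of RGB_565,
// so no lossy conversion happens before the blur.
Bitmap working = bmpOriginal.copy(Bitmap.Config.ARGB_8888, true);
int[] inPixels = new int[width * height];
// One bulk, row-major read; also much faster than per-pixel getPixel(x, y).
working.getPixels(inPixels, 0, width, 0, 0, width, height);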