I am trying to change the alpha value of a bitmap per pixel in a for loop. The bitmap is created with createBitmap(source, x, y, w, h) from another bitmap. I've done a little test, but I can't seem to alter the alpha. Is it the setPixel() call, or the fact that the bitmap isn't ARGB?
I want to create a simple fade-out effect in the end, but for now I am not referencing the original pixel colors, just green at half alpha. Thanks if you can help :)
_left[1] = Bitmap.createBitmap(TestActivity.photo, 0, 0, 256, 256);
for (int i = 0; i < _left[1].getWidth(); i++) {
    for (int t = 0; t < _left[1].getHeight(); t++) {
        int a = (_left[1].getWidth() / 2) - i;
        int b = (_left[1].getHeight() / 2) - t;
        double dist = Math.sqrt((a * a) + (b * b));
        if (dist > 20) {
            _left[1].setPixel(i, t, Color.argb(128, 0, 255, 0));
        }
    }
}
UPDATE:
Okay, this is the result I came up with, if anyone wants to take a bitmap and fade it out radially. But yes, it is VERY slow without pixel arrays... Thanks Reuben for a step in the right direction.
public void fadeBitmap(Bitmap input, double fadeStartPercent, double fadeEndPercent, Bitmap output) {
    Bitmap tempalpha = Bitmap.createBitmap(input.getWidth(), input.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas printcanvas = new Canvas(output);
    int radius = input.getWidth() / 2;
    double fadelength = radius * (fadeEndPercent / 100);
    double fadestart = radius * (fadeStartPercent / 100);
    // Build an alpha mask: opaque inside fadestart, fading to fully
    // transparent between fadestart and fadelength.
    for (int i = 0; i < input.getWidth(); i++) {
        for (int t = 0; t < input.getHeight(); t++) {
            int a = (input.getWidth() / 2) - i;
            int b = (input.getHeight() / 2) - t;
            double dist = Math.sqrt((a * a) + (b * b));
            if (dist <= fadestart) {
                tempalpha.setPixel(i, t, Color.argb(255, 255, 255, 255));
            } else {
                int fadeoff = 255 - (int) ((dist - fadestart) * (255 / (fadelength - fadestart)));
                if (dist > radius * (fadeEndPercent / 100)) fadeoff = 0;
                tempalpha.setPixel(i, t, Color.argb(fadeoff, 255, 255, 255));
            }
        }
    }
    // Draw the input, then multiply its alpha by the mask with DST_IN.
    Paint alphaP = new Paint();
    alphaP.setAntiAlias(true);
    alphaP.setXfermode(new PorterDuffXfermode(Mode.DST_IN));
    printcanvas.drawBitmap(input, 0, 0, null);
    printcanvas.drawBitmap(tempalpha, 0, 0, alphaP);
}
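For anyone copying this, a usage sketch (the output bitmap must be mutable and the same size as the input; 50 and 100 are just example percentages):
// Fade starts at 50% of the radius and is fully transparent at the edge.
Bitmap output = Bitmap.createBitmap(input.getWidth(), input.getHeight(), Bitmap.Config.ARGB_8888);
fadeBitmap(input, 50, 100, output);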
The version of Bitmap.createBitmap() you are using returns an immutable bitmap, so Bitmap.setPixel() has no effect.
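A minimal fix is to crop first and then take a mutable copy; a sketch (Bitmap.copy() with its second argument true returns a bitmap that setPixel() can actually write to):
Bitmap crop = Bitmap.createBitmap(TestActivity.photo, 0, 0, 256, 256);
_left[1] = crop.copy(Bitmap.Config.ARGB_8888, true); // mutable and guaranteed ARGB_8888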
setPixel is appallingly slow anyway. Aim to use setPixels(), or, best of all, find a better way than manipulating bitmap pixels directly. I expect you could do something clever with a separate alpha-only bitmap and the right PorterDuff mode.
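For the array route, here is a sketch of the question's radial test done with one getPixels()/setPixels() round trip instead of per-pixel setPixel() calls (assumes bmp is the mutable ARGB_8888 bitmap from above; this version keeps each pixel's RGB and only halves the alpha, which is closer to the fade-out goal than the solid-green test):
int w = bmp.getWidth(), h = bmp.getHeight();
int[] px = new int[w * h];
bmp.getPixels(px, 0, w, 0, 0, w, h);
int cx = w / 2, cy = h / 2;
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        int dx = cx - x, dy = cy - y;
        if (Math.sqrt(dx * dx + dy * dy) > 20) {
            int i = y * w + x;
            px[i] = (128 << 24) | (px[i] & 0x00FFFFFF); // new alpha, original RGB
        }
    }
}
bmp.setPixels(px, 0, w, 0, 0, w, h);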
Related
In my app I want to edit images: brightness, contrast, etc. I found a tutorial and am trying this to change the contrast:
public static Bitmap createContrast(Bitmap src, double value) {
    // image size
    int width = src.getWidth();
    int height = src.getHeight();
    // create output bitmap
    Bitmap bmOut = Bitmap.createBitmap(width, height, src.getConfig());
    // color information
    int A, R, G, B;
    int pixel;
    // get contrast value
    double contrast = Math.pow((100 + value) / 100, 2);
    // scan through all pixels
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            // get pixel color
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            // apply filter contrast for every channel R, G, B
            R = Color.red(pixel);
            R = (int) (((((R / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (R < 0) { R = 0; }
            else if (R > 255) { R = 255; }
            G = Color.red(pixel);
            G = (int) (((((G / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (G < 0) { G = 0; }
            else if (G > 255) { G = 255; }
            B = Color.red(pixel);
            B = (int) (((((B / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (B < 0) { B = 0; }
            else if (B > 255) { B = 255; }
            // set new pixel color to output bitmap
            bmOut.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
    // return final image
    return bmOut;
}
Calling it as:
ImageView image = (ImageView)(findViewById(R.id.image));
//image.setImageBitmap(createContrast(bitmap));
But I don't see any effect happening on the image. Can you please help me find where I am going wrong?
I saw EffectFactory in API 14. Is there something similar, or any tutorial for image processing, that can be used on older versions?
There are three basic problems with this approach. The first two are coding issues. First, you are always calling Color.red; Color.green and Color.blue are nowhere to be found in your code. The second issue is that the calculation is too repetitive: since the colors are in the range [0, 255], it is much faster to precompute an array of 256 positions holding the contrast result for each i in [0, 255].
The third issue is more problematic: why did you pick this algorithm to improve contrast? The results are meaningless in RGB; you might get something better in a different color system. Here are the results you should expect, with your parameter value at 0, 10, 20, and 30:
And here is sample Python code performing the operation:
import sys
from PIL import Image

img = Image.open(sys.argv[1])
width, height = img.size
cvalue = float(sys.argv[2])  # Your parameter "value".
contrast = ((100 + cvalue) / 100) ** 2

def apply_contrast(c):
    c = (((c / 255.) - 0.5) * contrast + 0.5) * 255.0
    return min(255, max(0, int(c)))

# Build the lookup table.
ltu = []
for i in range(256):
    ltu.append(apply_contrast(i))

# The following "point" method applies a function to each
# value in the image. It considers the image as a flat sequence
# of values.
img = img.point(lambda x: ltu[x])
img.save(sys.argv[3])
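On Android, the same lookup-table idea might look like the sketch below. This is an assumption-laden sketch, not the tutorial's code: it builds the 256-entry table once, then rewrites all pixels through a single getPixels()/setPixels() round trip, which also avoids the slow per-pixel getPixel()/setPixel() calls.
public static Bitmap contrastWithLut(Bitmap src, double value) {
    double contrast = Math.pow((100 + value) / 100.0, 2);
    // Precompute the contrast curve for every possible channel value.
    int[] lut = new int[256];
    for (int i = 0; i < 256; i++) {
        int c = (int) ((((i / 255.0) - 0.5) * contrast + 0.5) * 255.0);
        lut[i] = Math.min(255, Math.max(0, c));
    }
    int width = src.getWidth(), height = src.getHeight();
    int[] px = new int[width * height];
    src.getPixels(px, 0, width, 0, 0, width, height);
    for (int i = 0; i < px.length; i++) {
        int p = px[i];
        // Look up each channel instead of recomputing the curve per pixel.
        px[i] = Color.argb(Color.alpha(p),
                lut[Color.red(p)], lut[Color.green(p)], lut[Color.blue(p)]);
    }
    Bitmap out = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    out.setPixels(px, 0, width, 0, 0, width, height);
    return out;
}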
I'm currently working on a program that applies edge detection to an area of the preview frame. I used the preview callback to get my cropped bitmap, and converted it to grayscale using the following code:
int height1 = 120;
int width2 = 120;
final Bitmap resizedBitmap = Bitmap.createBitmap(bmp, 260, 15, width2, height1);
try {
    int bWidth = resizedBitmap.getWidth();
    int bHeight = resizedBitmap.getHeight();
    int[] pixels = new int[bWidth * bHeight];
    resizedBitmap.getPixels(pixels, 0, bWidth, 0, 0, bWidth, bHeight);
    for (int y = 0; y < bHeight; y++) {
        for (int x = 0; x < bWidth; x++) {
            int index = y * bWidth + x;
            int R = (pixels[index] >> 16) & 0xff; // bitwise shifting
            int G = (pixels[index] >> 8) & 0xff;
            int B = pixels[index] & 0xff;
            int gray = (int) (.299 * R + .587 * G + .114 * B);
        }
    }
I am very new to this and would like to know whether gray ends up as a 2D array of 120x120 values, or whether the value of gray is just overwritten on each pass through the loop.
Apologies if this is very basic.
Well, maybe I'm missing something, but as far as I can see gray is overwritten on every iteration. You'd need something like
int[][] gray = new int[bWidth][bHeight];
// start loops
In the loop:
gray[x][y] = (int) (.299 * R + .587 * G + .114 * B);
I am doing histogram equalization on an image. I first get the RGB image and convert it to YUV. I run the histogram equalization algorithm on the Y' channel of YUV and then convert back to RGB. Is it me, or does the image look weird? Am I doing this correctly? This image is pretty bright; other images come out a little red.
Here are the before/after images:
The algorithm (the commented-out values are conversion coefficients I used previously; both sets yield pretty much the same results):
public static void createContrast(Bitmap src) {
    int width = src.getWidth();
    int height = src.getHeight();
    Bitmap processedImage = Bitmap.createBitmap(width, height, src.getConfig());
    int A = 0, R, G, B;
    int pixel;
    float[][] Y = new float[width][height];
    float[][] U = new float[width][height];
    float[][] V = new float[width][height];
    int[] histogram = new int[256];
    Arrays.fill(histogram, 0);
    int[] cdf = new int[256];
    Arrays.fill(cdf, 0);
    float min = 257;
    float max = 0;

    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            R = Color.red(pixel);
            G = Color.green(pixel);
            B = Color.blue(pixel);
            // convert to YUV
            /*Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
            U[x][y] = 0.492f * (B - Y[x][y]);
            V[x][y] = 0.877f * (R - Y[x][y]);*/
            Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
            U[x][y] = 0.565f * (B - Y[x][y]);
            V[x][y] = 0.713f * (R - Y[x][y]);
            // create a histogram
            histogram[(int) Y[x][y]] += 1;
            // get min and max values
            if (Y[x][y] < min) {
                min = Y[x][y];
            }
            if (Y[x][y] > max) {
                max = Y[x][y];
            }
        }
    }

    cdf[0] = histogram[0];
    for (int i = 1; i <= 255; i++) {
        cdf[i] = cdf[i - 1] + histogram[i];
    }

    float minCDF = cdf[(int) min];
    float denominator = width * height - minCDF;

    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            Y[x][y] = ((cdf[(int) Y[x][y]] - minCDF) / denominator) * 255;
            /*R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
            G = minMaxCalc(Y[x][y] - 0.395f * U[x][y] - 0.581f * V[x][y]);
            B = minMaxCalc(Y[x][y] + 2.032f * U[x][y]);*/
            R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
            G = minMaxCalc(Y[x][y] - 0.344f * U[x][y] - 0.714f * V[x][y]);
            B = minMaxCalc(Y[x][y] + 1.77f * U[x][y]);
            processedImage.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
}
My next step is to graph the histograms before and after. I just want to get an opinion here.
The question is a little old, but let me answer.
The reason is the way histogram equalization works: the algorithm tries to use the entire 0-255 range instead of the given image's range.
So if you give it a dark image, it will push relatively brighter pixels towards white and relatively darker pixels towards black.
If you give it a bright image, it will get darkened for the same reason.
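Concretely, the remapping implemented by the question's second loop is

Y' = 255 * (cdf(Y) - cdf_min) / (N - cdf_min)

where N = width * height. A pixel holding the image's maximum Y has cdf(Y) = N and so always maps to 255, however dim the original image was; the minimum always maps to 0. That is exactly the stretching described above.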
If I take a picture with my Samsung Galaxy S2, the picture is 3264 x 2448 px.
I want to run a color-range check on it, but it doesn't work.
However, if I make the picture smaller, for example 2500 x 2500 (so fewer pixels), then it does work. But I want to use the Galaxy S2's full picture size (3264 x 2448).
I think it is a memory issue?
I don't know exactly what the limit is.
But is there another way to bypass this issue?
This is a piece of code showing how I do it now:
bmp = BitmapFactory.decodeResource(getResources(), R.drawable.four_colors);
int width = bmp.getWidth();
int height = bmp.getHeight();
int[] pixels = new int[width * height];
bmp.getPixels(pixels, 0, width, 0, 0, width, height);
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int index = y * width + x;
        int R = (pixels[index] >> 16) & 0xff; // bitwise shifting
        int G = (pixels[index] >> 8) & 0xff;
        int B = pixels[index] & 0xff;
        total++;
        if ((G > R) && (G > B)) {
            counter++;
        }
    }
}
It crashes because the picture is too big; smaller pictures work.
So is there something to bypass this issue instead of using smaller images? :)
I tried some other things without success; let me explain what I tried.
I tried to cut the image in two and scan each half separately (didn't work),
and I tried to scan only half of it (1632 x 1224), then rotate the image 180 degrees and scan it again, but that didn't work out either.
Hit me :)
When playing around with huge images you really should use BitmapRegionDecoder to process them in chunks.
Edit - now with a simple example:
try {
    // Processes file in res/raw/huge.jpg or png
    BitmapRegionDecoder decoder = BitmapRegionDecoder.newInstance(
            getResources().openRawResource(R.raw.huge), false);
    try {
        final int width = decoder.getWidth();
        final int height = decoder.getHeight();
        // Divide the bitmap into 1024x768 sized chunks and process it.
        int wSteps = (int) Math.ceil(width / 1024.0);
        int hSteps = (int) Math.ceil(height / 768.0);
        Rect rect = new Rect();
        long total = 0L, counter = 0L;
        for (int h = 0; h < hSteps; h++) {
            for (int w = 0; w < wSteps; w++) {
                int w2 = Math.min(width, (w + 1) * 1024);
                int h2 = Math.min(height, (h + 1) * 768);
                rect.set(w * 1024, h * 768, w2, h2);
                Bitmap bitmap = decoder.decodeRegion(rect, null);
                try {
                    int bWidth = bitmap.getWidth();
                    int bHeight = bitmap.getHeight();
                    int[] pixels = new int[bWidth * bHeight];
                    bitmap.getPixels(pixels, 0, bWidth, 0, 0, bWidth, bHeight);
                    for (int y = 0; y < bHeight; y++) {
                        for (int x = 0; x < bWidth; x++) {
                            int index = y * bWidth + x;
                            int R = (pixels[index] >> 16) & 0xff; // bitwise shifting
                            int G = (pixels[index] >> 8) & 0xff;
                            int B = pixels[index] & 0xff;
                            total++;
                            if ((G > R) && (G > B)) {
                                counter++;
                            }
                        }
                    }
                } finally {
                    bitmap.recycle();
                }
            }
        }
    } finally {
        decoder.recycle();
    }
} catch (IOException e) {
    // newInstance() throws if the resource cannot be read
    e.printStackTrace();
}
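One hedged addition: decodeRegion() also accepts a BitmapFactory.Options argument, so if even 1024x768 chunks are too large for your heap, you could additionally subsample each region with inSampleSize.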
You can limit how much pixel data you load at once by using the offset, stride, width, and height parameters of the getPixels method. I did something very similar and read the image one row at a time, for example:
int[] pixels = new int[width]; // holds a single row
for (int y = 0; y < height; y++) {
    bitmap.getPixels(pixels, 0, width, 0, y, width, 1);
    // ... process this row ...
}
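For scale: one row of a 3264-pixel-wide frame is about 13 KB of pixel data (3264 ints), versus roughly 32 MB for an int array covering the whole 3264 x 2448 image, so this keeps you well away from the heap limit.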
I have a bitmap whose pixels contain only two ARGB values: pure black and pure transparent. When I scale the bitmap up in Android, the result has many ARGB values: pure black, pure transparent, and black at various levels of transparency (e.g. half-transparent black). This is due to the interpolation Android performs automatically. I would like the bitmap's pixels to contain only the original two ARGB values.
Currently I accomplish this with the following process:
my_bitmap = Bitmap.createScaledBitmap(
        BitmapFactory.decodeResource(context.getResources(), R.drawable.my_resource),
        new_width, new_height, false);
for (int i = 0; i < my_bitmap.getWidth(); i++) {
    for (int j = 0; j < my_bitmap.getHeight(); j++) {
        if (my_bitmap.getPixel(i, j) != Color.TRANSPARENT) {
            my_bitmap.setPixel(i, j, Color.BLACK);
        }
    }
}
This is achingly slow on a cheaper phone, even for a small bitmap. Does anyone know how to either A) do this much faster or B) scale a bitmap up with no new ARGB values appearing?
I think the answer here is an algorithmic one.
Bitmap operations are very expensive; perhaps you can cache the resulting bitmap somewhere and only redraw it when the scaled version is specifically requested?
My other idea would be to group some amount of pixels together and keep a "hasChanged" flag for each group, set to true when something in the group changes, so the system knows it has to redraw only that pixel group. This way you don't redraw things more often than necessary.
Hope this helps!
Do the scaling in code, since the built-in algorithm isn't what you want. You'll avoid the interpolation you don't want, and you won't have to undo it. (Excuse any coding errors -- I wrote this without access to an IDE or compiler.)
my_bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.my_resource);
int[] src = new int[my_bitmap.getWidth() * my_bitmap.getHeight()];
my_bitmap.getPixels(src, 0, my_bitmap.getWidth(), 0, 0,
        my_bitmap.getWidth(), my_bitmap.getHeight());
int[] dst = new int[new_width * new_height];
float scaleX = (float) my_bitmap.getWidth() / new_width;   // cast to avoid integer division
float scaleY = (float) my_bitmap.getHeight() / new_height;
for (int y = 0; y < new_height; y++) {
    for (int x = 0; x < new_width; x++) {
        int srcY = (int) (y * scaleY);
        int srcX = (int) (x * scaleX);
        // Pixels are laid out row by row, so the stride is the row width.
        dst[y * new_width + x] = src[srcY * my_bitmap.getWidth() + srcX];
    }
}
Bitmap newBitmap = Bitmap.createBitmap(dst, 0, new_width, new_width, new_height,
        Bitmap.Config.ARGB_8888);
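Two hedged side notes on the built-in paths: createScaledBitmap() with its filter argument set to false already samples nearest-neighbour, and BitmapFactory.decodeResource() may itself rescale the resource to match screen density before any explicit scaling happens. Decoding with BitmapFactory.Options.inScaled set to false rules that out as a hidden source of intermediate alpha values.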