RGB to XYZ and then XYZ to LAB in Android (Java)

I have calculated the R, G, and B values of an image and done some calculations to get the L, a, and b values for the LAB color space. Now how can I convert my RGB image into a LAB image using these L, a, and b values in Android Studio (except OpenCV's built-in function, because I want to first convert RGB into XYZ and then XYZ into LAB)?

Pass the RGB values to the method of this class, which will return an array containing the L, a, and b values:
public class CIELab {

    private static final String TAG = "RGB";

    public double[] rgbToLab(int R, int G, int B) {
        double r, g, b, X, Y, Z, xr, yr, zr;

        // D65/2° reference white
        final double Xr = 95.047;
        final double Yr = 100.0;
        final double Zr = 108.883;

        // --------- sRGB to XYZ --------- //
        r = R / 255.0;
        g = G / 255.0;
        b = B / 255.0;
        if (r > 0.04045)
            r = Math.pow((r + 0.055) / 1.055, 2.4);
        else
            r = r / 12.92;
        if (g > 0.04045)
            g = Math.pow((g + 0.055) / 1.055, 2.4);
        else
            g = g / 12.92;
        if (b > 0.04045)
            b = Math.pow((b + 0.055) / 1.055, 2.4);
        else
            b = b / 12.92;
        r *= 100;
        g *= 100;
        b *= 100;
        Log.d(TAG, "R:" + r + " G:" + g + " B:" + b);
        X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
        Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
        Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;

        // --------- XYZ to Lab --------- //
        xr = X / Xr;
        yr = Y / Yr;
        zr = Z / Zr;
        if (xr > 0.008856)
            xr = Math.pow(xr, 1 / 3.0);
        else
            xr = (7.787 * xr) + 16 / 116.0;
        if (yr > 0.008856)
            yr = Math.pow(yr, 1 / 3.0);
        else
            yr = (7.787 * yr) + 16 / 116.0;
        if (zr > 0.008856)
            zr = Math.pow(zr, 1 / 3.0);
        else
            zr = (7.787 * zr) + 16 / 116.0;

        double[] lab = new double[3];
        lab[0] = (116 * yr) - 16;
        lab[1] = 500 * (xr - yr);
        lab[2] = 200 * (yr - zr);
        return lab;
    }
}
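To sanity-check the math off-device, here is the same pipeline as a plain-Java sketch with the Android dependencies (Log, ColorUtils) stripped out. White (255, 255, 255) should come out at approximately L=100, a=0, b=0, and black at L=0:

```java
// Plain-Java sketch of the sRGB -> XYZ -> Lab pipeline above (no Android deps).
public class CieLabSketch {

    // D65/2° reference white
    static final double XR = 95.047, YR = 100.0, ZR = 108.883;

    static double pivotRgb(double c) {
        return (c > 0.04045) ? Math.pow((c + 0.055) / 1.055, 2.4) : c / 12.92;
    }

    static double pivotXyz(double t) {
        return (t > 0.008856) ? Math.cbrt(t) : (7.787 * t) + 16.0 / 116.0;
    }

    public static double[] rgbToLab(int R, int G, int B) {
        double r = pivotRgb(R / 255.0) * 100;
        double g = pivotRgb(G / 255.0) * 100;
        double b = pivotRgb(B / 255.0) * 100;
        double X = 0.4124 * r + 0.3576 * g + 0.1805 * b;
        double Y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
        double Z = 0.0193 * r + 0.1192 * g + 0.9505 * b;
        double xr = pivotXyz(X / XR), yr = pivotXyz(Y / YR), zr = pivotXyz(Z / ZR);
        return new double[] { 116 * yr - 16, 500 * (xr - yr), 200 * (yr - zr) };
    }

    public static void main(String[] args) {
        double[] lab = rgbToLab(255, 255, 255);
        System.out.printf("L=%.3f a=%.3f b=%.3f%n", lab[0], lab[1], lab[2]);
    }
}
```

The tiny non-zero a and b for white come from the rounded 4-digit matrix coefficients, not from a bug.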

Related

Create HSV histogram with RenderScript

I have to create an HSV histogram from an ARGB array using RenderScript on Android. This is the first time I am using RenderScript and I am not sure if I made a mistake, because the performance is not so good: creating an HSV histogram from a 1920x1080 bitmap takes between 100 and 150 ms.
The RenderScript code:
#pragma version(1)
#pragma rs java_package_name(com.test.renderscript)
#pragma rs_fp_relaxed

uchar3 bins;
rs_allocation histogramAllocation;

void __attribute__((kernel)) process(uchar4 in) {
    float r = in.r / 255.0;
    float g = in.g / 255.0;
    float b = in.b / 255.0;

    // convert rgb to hsv
    float minRGB = min( r, min( g, b ) );
    float maxRGB = max( r, max( g, b ) );
    float deltaRGB = maxRGB - minRGB;
    float h = 0.0;
    float s = maxRGB == 0 ? 0 : (maxRGB - minRGB) / maxRGB;
    float v = maxRGB;
    if (deltaRGB != 0) {
        if (r == maxRGB) {
            h = (g - b) / deltaRGB;
        } else {
            if (g == maxRGB) {
                h = 2 + (b - r) / deltaRGB;
            } else {
                h = 4 + (r - g) / deltaRGB;
            }
        }
        h *= 60;
        if (h < 0) { h += 360; }
        if (h == 360) { h = 0; }
    }

    // quantize hsv
    uint qh = h / (360.0 / bins.s0);
    uint qs = (s * 100) / (101.0 / bins.s1);
    uint qv = (v * 100) / (101.0 / bins.s2);

    // calculate the bin index and update the count at that index:
    // (qv * bin count H * bin count S) + (qs * bin count H) + qh
    uint binIndex = (qv * bins.s0 * bins.s1) + (qs * bins.s0) + qh;
    uint count = rsGetElementAt_uint(histogramAllocation, binIndex);
    rsSetElementAt_uint(histogramAllocation, (count + 1), binIndex);
}

void init() {
    uint histogramSize = bins.s0 * bins.s1 * bins.s2;
    for (int i = 0; i < histogramSize; i++) {
        rsSetElementAt_uint(histogramAllocation, 0, i);
    }
}
The Kotlin code:
class RsCreateHsvHistogram {

    fun execute(rs: RenderScript, src: ByteArray, bins: HsvHistogram.Bins = HsvHistogram.Bins()): HsvHistogram {
        val start = SystemClock.elapsedRealtimeNanos()
        val histogramSize = bins.h * bins.s * bins.v

        // create input allocation
        val typeIn = Type.Builder(rs, Element.U8_4(rs))
            .setX(src.size / 4)
            .create()
        val allocIn = Allocation.createTyped(rs, typeIn)
        allocIn.copyFrom(src)

        // create output allocation -> the histogram allocation
        val typeOut = Type.Builder(rs, Element.I32(rs))
            .setX(histogramSize)
            .create()
        val allocOut = Allocation.createTyped(rs, typeOut)

        // run the render script
        val script = ScriptC_create_hsv_histogram(rs)
        script._bins = Short3(bins.h, bins.s, bins.v)
        script._histogramAllocation = allocOut
        script.forEach_process(allocIn)

        // copy the output allocation to the histogram array
        val histogramData = IntArray(histogramSize)
        allocOut.copyTo(histogramData)

        val stop = SystemClock.elapsedRealtimeNanos()
        Timber.e("duration => ${(stop - start) / 1000000.0} ms")
        return HsvHistogram(histogramData, bins)
    }
}
I hope you can help me improve the performance. Do you think HSV histogram creation can be done in about 20ms? Is this realistic?
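Independently of the RenderScript performance question, the quantization and bin-index arithmetic in the kernel is easy to verify off-device. Here is a plain-Java transcription (the 8x4x4 bin counts are hypothetical, chosen just for the check):

```java
// Plain-Java transcription of the kernel's HSV quantization and bin indexing.
public class HsvBinIndex {

    // Quantize h (0..360) and s, v (0..1) into a binsH x binsS x binsV grid,
    // then flatten to a single index, exactly as the kernel does.
    public static int binIndex(float h, float s, float v,
                               int binsH, int binsS, int binsV) {
        int qh = (int) (h / (360.0 / binsH));
        int qs = (int) ((s * 100) / (101.0 / binsS));
        int qv = (int) ((v * 100) / (101.0 / binsV));
        // (qv * bin count H * bin count S) + (qs * bin count H) + qh
        return (qv * binsH * binsS) + (qs * binsH) + qh;
    }

    public static void main(String[] args) {
        // Pure red (h=0, s=1, v=1) with 8x4x4 bins lands in the last v/s slab.
        System.out.println(binIndex(0f, 1f, 1f, 8, 4, 4)); // 120
    }
}
```

The largest possible index with 8x4x4 bins is 127 = 8*4*4 - 1, so the flattened histogram allocation is never overrun.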

Inaccurate HSV conversion on Android

I am trying to create a wallpaper and am using the HSV conversion in the android.graphics.Color class. I was very surprised when I realized that converting a created HSV color with a specified hue (0..360) to an RGB color (an integer) and back to HSV does not give the same hue. This is my code:
int c = Color.HSVToColor(new float[] { 100f, 1, 1 });
float[] f = new float[3];
Color.colorToHSV(c, f);
alert(f[0]);
I am starting with a hue of 100 degrees and the result is 99.76471.
I wonder why there is that (in my opinion) relatively big inaccuracy.
A much bigger problem is that when you put that value into the code again, the result decreases again.
int c = Color.HSVToColor(new float[] { 99.76471f, 1, 1 });
float[] f = new float[3];
Color.colorToHSV(c, f);
alert(f[0]);
If I start with 99.76471, I get 99.52941. This is kind of a problem for me.
I did something similar in Java with the java.awt.Color class, where I did not have these problems. Unfortunately, I cannot use that class on Android.
This is an interesting problem. It's not avoidable with the android class because of low float precision. However, I found a similar solution written in javascript here.
If it's important enough for you to want to define your own method/class to do the conversions, here is a Java conversion which should give you better precision:
/** Does the same as {@link android.graphics.Color#colorToHSV(int, float[])} */
@Size(3)
public double[] colorToHSV(@ColorInt int color) {
    // this line copied verbatim
    return rgbToHsv((color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF);
}

@Size(3)
public double[] rgbToHsv(double r, double g, double b) {
    final double max = Math.max(r, Math.max(g, b));
    final double min = Math.min(r, Math.min(g, b));
    final double diff = max - min;
    final double h;
    final double s = ((max == 0d) ? 0d : diff / max);
    final double v = max / 255d;
    if (min == max) {
        h = 0d;
    } else if (r == max) {
        double tempH = (g - b) + diff * (g < b ? 6 : 0);
        tempH /= 6 * diff;
        h = tempH;
    } else if (g == max) {
        double tempH = (b - r) + diff * 2;
        tempH /= 6 * diff;
        h = tempH;
    } else {
        double tempH = (r - g) + diff * 4;
        tempH /= 6 * diff;
        h = tempH;
    }
    return new double[] { h, s, v };
}
I have to confess ignorance here - I've done a quick conversion and not had time to test properly. There might be a more optimal solution, but this should get you started at least. Note that h, s, and v all come back in the range [0, 1] here, not degrees.
Don't miss the mirrored procedure from the source link. Below is the translation to Kotlin.
fun hsvToRGB(hsv: DoubleArray): Int {
    val i = floor(hsv[0] * 6).toInt()
    val f = hsv[0] * 6 - i
    val p = hsv[2] * (1 - hsv[1])
    val q = hsv[2] * (1 - f * hsv[1])
    val t = hsv[2] * (1 - (1 - f) * hsv[1])
    val r: Double
    val g: Double
    val b: Double
    when (i % 6) {
        0 -> { r = hsv[2]; g = t; b = p }
        1 -> { r = q; g = hsv[2]; b = p }
        2 -> { r = p; g = hsv[2]; b = t }
        3 -> { r = p; g = q; b = hsv[2] }
        4 -> { r = t; g = p; b = hsv[2] }
        5 -> { r = hsv[2]; g = p; b = q }
        else -> { r = 0.0; g = 0.0; b = 0.0 }
    }
    return Color.rgb((r * 255).roundToInt(), (g * 255).roundToInt(), (b * 255).roundToInt())
}

fun rgbToHSV(color: Int, target: DoubleArray) {
    val r = Color.red(color).toDouble()
    val g = Color.green(color).toDouble()
    val b = Color.blue(color).toDouble()
    val max = kotlin.math.max(r, kotlin.math.max(g, b))
    val min = kotlin.math.min(r, kotlin.math.min(g, b))
    val diff = max - min
    target[1] = if (max == 0.0) { 0.0 } else { diff / max }
    target[2] = max / 255.0
    target[0] = if (min == max) {
        0.0
    } else if (r == max) {
        var tempH = (g - b) + diff * if (g < b) { 6 } else { 0 }
        tempH /= 6 * diff
        tempH
    } else if (g == max) {
        var tempH = (b - r) + diff * 2
        tempH /= 6 * diff
        tempH
    } else {
        var tempH = (r - g) + diff * 4
        tempH /= 6 * diff
        tempH
    }
}
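As a quick check that the double-based round trip is stable, here is the same pair of conversions condensed into plain Java. Bit-packing replaces android.graphics.Color so this runs off-device; the `rgb` helper is a stand-in, not part of the original answer:

```java
// Round-trip check for the double-based HSV conversions (h, s, v all in [0, 1]).
public class HsvRoundTrip {

    // Stand-in for Color.rgb, using plain ARGB bit packing.
    static int rgb(int r, int g, int b) {
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static int hsvToRgb(double[] hsv) {
        int i = (int) Math.floor(hsv[0] * 6);
        double f = hsv[0] * 6 - i;
        double p = hsv[2] * (1 - hsv[1]);
        double q = hsv[2] * (1 - f * hsv[1]);
        double t = hsv[2] * (1 - (1 - f) * hsv[1]);
        double r, g, b;
        switch (i % 6) {
            case 0:  r = hsv[2]; g = t; b = p; break;
            case 1:  r = q; g = hsv[2]; b = p; break;
            case 2:  r = p; g = hsv[2]; b = t; break;
            case 3:  r = p; g = q; b = hsv[2]; break;
            case 4:  r = t; g = p; b = hsv[2]; break;
            case 5:  r = hsv[2]; g = p; b = q; break;
            default: r = 0; g = 0; b = 0;
        }
        return rgb((int) Math.round(r * 255), (int) Math.round(g * 255),
                   (int) Math.round(b * 255));
    }

    public static double[] rgbToHsv(int color) {
        double r = (color >> 16) & 0xFF, g = (color >> 8) & 0xFF, b = color & 0xFF;
        double max = Math.max(r, Math.max(g, b));
        double min = Math.min(r, Math.min(g, b));
        double diff = max - min;
        double s = (max == 0) ? 0 : diff / max;
        double v = max / 255.0;
        double h;
        if (min == max)    h = 0;
        else if (r == max) h = ((g - b) + diff * (g < b ? 6 : 0)) / (6 * diff);
        else if (g == max) h = ((b - r) + diff * 2) / (6 * diff);
        else               h = ((r - g) + diff * 4) / (6 * diff);
        return new double[] { h, s, v };
    }

    public static void main(String[] args) {
        double h = 100.0 / 360.0;  // the 100-degree hue from the question
        double[] back = rgbToHsv(hsvToRgb(new double[] { h, 1, 1 }));
        System.out.println(back[0] * 360);  // stays at ~100, unlike the float version
    }
}
```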

android : editing images

In my app I want to edit images: brightness, contrast, etc. I found a tutorial and am trying this to change the contrast:
public static Bitmap createContrast(Bitmap src, double value) {
    // image size
    int width = src.getWidth();
    int height = src.getHeight();
    // create output bitmap
    Bitmap bmOut = Bitmap.createBitmap(width, height, src.getConfig());
    // color information
    int A, R, G, B;
    int pixel;
    // get contrast value
    double contrast = Math.pow((100 + value) / 100, 2);
    // scan through all pixels
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            // get pixel color
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            // apply contrast filter for every channel R, G, B
            R = Color.red(pixel);
            R = (int) (((((R / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (R < 0) { R = 0; }
            else if (R > 255) { R = 255; }
            G = Color.red(pixel);
            G = (int) (((((G / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (G < 0) { G = 0; }
            else if (G > 255) { G = 255; }
            B = Color.red(pixel);
            B = (int) (((((B / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if (B < 0) { B = 0; }
            else if (B > 255) { B = 255; }
            // set new pixel color in output bitmap
            bmOut.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
    // return final image
    return bmOut;
}
I am calling it as:
ImageView image = (ImageView)(findViewById(R.id.image));
//image.setImageBitmap(createContrast(bitmap));
But I don't see any effect happening to the image. Can you please help me find where I am going wrong?
I saw the EffectFactory from API 14. Is there something similar, or any tutorial, that can be used for image processing on older versions?
There are three basic problems with this approach. The first two are coding issues. First, you are always calling Color.red; there is no Color.green or Color.blue to be found in your code. Second, the calculation is too repetitive: since the channel values are in the range [0, 255], it is much faster to build an array of 256 positions with the contrast precalculated for each i in [0, 255].
The third issue is more problematic: why did you pick this algorithm to improve contrast? The results are meaningless for RGB; you might get something better in a different color system. Here are the results you should expect, with your parameter value at 0, 10, 20, and 30:
And here is a sample Python code to perform the operation:
import sys
from PIL import Image

img = Image.open(sys.argv[1])
width, height = img.size
cvalue = float(sys.argv[2])  # Your parameter "value".
contrast = ((100 + cvalue) / 100) ** 2

def apply_contrast(c):
    c = (((c / 255.) - 0.5) * contrast + 0.5) * 255.0
    return min(255, max(0, int(c)))

# Build the lookup table.
ltu = []
for i in range(256):
    ltu.append(apply_contrast(i))

# The following "point" method applies a function to each
# value in the image. It considers the image as a flat sequence
# of values.
img = img.point(lambda x: ltu[x])
img.save(sys.argv[3])
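Translated back to the asker's Java context, the lookup-table idea might look like the sketch below (Bitmap handling is omitted so it runs anywhere; the class and method names are made up for illustration):

```java
// Contrast via a 256-entry lookup table, precomputed once instead of per pixel.
public class ContrastLut {

    public static int[] buildLut(double value) {
        double contrast = Math.pow((100 + value) / 100.0, 2);
        int[] lut = new int[256];
        for (int i = 0; i < 256; i++) {
            double c = (((i / 255.0) - 0.5) * contrast + 0.5) * 255.0;
            lut[i] = Math.min(255, Math.max(0, (int) c));  // clamp to [0, 255]
        }
        return lut;
    }

    // Apply to one ARGB pixel: each channel becomes a single array lookup.
    public static int apply(int pixel, int[] lut) {
        int a = (pixel >>> 24);
        int r = lut[(pixel >> 16) & 0xFF];
        int g = lut[(pixel >> 8) & 0xFF];
        int b = lut[pixel & 0xFF];
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int[] lut = buildLut(0);  // value = 0 -> contrast factor 1 -> near-identity
        System.out.println(lut[0] + " " + lut[128] + " " + lut[255]);
    }
}
```

In the per-pixel loop you would then replace the three repeated formulas with three lookups, or better, use Bitmap.getPixels to fetch the whole row at once instead of getPixel per pixel.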

Android Histogram equalization algorithm gives me really bright or red image

I am doing histogram equalization on an image. I first get the RGB image and convert it to YUV. I run the histogram equalization algorithm on the Y' channel of YUV and then convert back to RGB. Is it me, or does the image look weird? Am I doing this correctly? This image is pretty bright; other images come out a little red.
Here are the before/after images:
The algorithm (the commented-out values are coefficients I used previously for the conversion; both sets yield pretty much the same results):
public static void createContrast(Bitmap src) {
    int width = src.getWidth();
    int height = src.getHeight();
    Bitmap processedImage = Bitmap.createBitmap(width, height, src.getConfig());
    int A = 0, R, G, B;
    int pixel;
    float[][] Y = new float[width][height];
    float[][] U = new float[width][height];
    float[][] V = new float[width][height];
    int[] histogram = new int[256];
    Arrays.fill(histogram, 0);
    int[] cdf = new int[256];
    Arrays.fill(cdf, 0);
    float min = 257;
    float max = 0;

    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            R = Color.red(pixel);
            G = Color.green(pixel);
            B = Color.blue(pixel);
            // convert to YUV
            /* previous coefficients:
            Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
            U[x][y] = 0.492f * (B - Y[x][y]);
            V[x][y] = 0.877f * (R - Y[x][y]); */
            Y[x][y] = 0.299f * R + 0.587f * G + 0.114f * B;
            U[x][y] = 0.565f * (B - Y[x][y]);
            V[x][y] = 0.713f * (R - Y[x][y]);
            // create a histogram
            histogram[(int) Y[x][y]] += 1;
            // get min and max values
            if (Y[x][y] < min) {
                min = Y[x][y];
            }
            if (Y[x][y] > max) {
                max = Y[x][y];
            }
        }
    }

    cdf[0] = histogram[0];
    for (int i = 1; i <= 255; i++) {
        cdf[i] = cdf[i - 1] + histogram[i];
    }

    float minCDF = cdf[(int) min];
    float denominator = width * height - minCDF;

    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            Y[x][y] = ((cdf[(int) Y[x][y]] - minCDF) / (denominator)) * 255;
            /* previous coefficients:
            R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
            G = minMaxCalc(Y[x][y] - 0.395f * U[x][y] - 0.581f * V[x][y]);
            B = minMaxCalc(Y[x][y] + 2.032f * U[x][y]); */
            R = minMaxCalc(Y[x][y] + 1.140f * V[x][y]);
            G = minMaxCalc(Y[x][y] - 0.344f * U[x][y] - 0.714f * V[x][y]);
            B = minMaxCalc(Y[x][y] + 1.77f * U[x][y]);
            processedImage.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
}
My next step is to graph the histograms before and after. I just want to get an opinion here.
The question is a little old, but let me answer.
The reason is the way histogram equalization works. The algorithm tries to use the entire 0-255 range instead of the given image's range.
So if you give it a dark image, it will shift relatively brighter pixels toward white and relatively darker ones toward black.
If you give it a bright image, it will be darkened for the same reason.
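A minimal, standalone sketch of that behavior (plain Java, operating on a grayscale array rather than a Bitmap) shows the stretch toward the full 0-255 range:

```java
// Histogram equalization on a grayscale array: values are remapped through the
// CDF so the output spans the whole 0-255 range.
public class HistEqSketch {

    public static int[] equalize(int[] pixels) {
        int[] hist = new int[256];
        for (int p : pixels) hist[p]++;

        // Cumulative distribution function.
        int[] cdf = new int[256];
        cdf[0] = hist[0];
        for (int i = 1; i < 256; i++) cdf[i] = cdf[i - 1] + hist[i];

        // CDF value at the darkest occupied bin (the "minCDF" of the question).
        int cdfMin = 0;
        for (int i = 0; i < 256; i++) {
            if (cdf[i] > 0) { cdfMin = cdf[i]; break; }
        }

        int[] out = new int[pixels.length];
        int denom = pixels.length - cdfMin;
        for (int i = 0; i < pixels.length; i++) {
            out[i] = (denom == 0) ? pixels[i]  // flat image: nothing to equalize
                   : Math.round(255f * (cdf[pixels[i]] - cdfMin) / denom);
        }
        return out;
    }

    public static void main(String[] args) {
        // A "dark" image: all values squeezed into 10..40.
        int[] dark = { 10, 10, 20, 20, 30, 30, 40, 40 };
        for (int v : equalize(dark)) System.out.print(v + " ");
        // The brightest input (40) is pushed all the way to 255.
    }
}
```

This is exactly why the dark sample image comes out bright: 40 was its brightest value, and equalization maps it to white.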

Android image processing algorithm performance

I have created a method which performs Sobel edge detection.
I use the camera's YUV byte array to perform the detection on.
My problem is that I only get about 5 fps, which is really low.
I know it can be done faster, because there are other apps on the market that manage good fps at good quality.
I pass images at 800x400 resolution.
Can anyone check whether my algorithm can be made shorter or more performant?
I have already moved the algorithm into native code, but there seems to be no difference in fps.
public void process() {
    progress = 0;
    index = 0;
    // calculate size
    // pixel index
    size = width * (height - 2) - 2;
    // pixel loop
    while (size > 0) {
        // get Y matrix values from YUV
        ay = input[index];
        by = input[index + 1];
        cy = input[index + 2];
        gy = input[index + doubleWidth];
        hy = input[index + doubleWidth + 1];
        iy = input[index + doubleWidth + 2];
        // get X matrix values from YUV
        ax = input[index];
        cx = input[index + 2];
        dx = input[index + width];
        fx = input[index + width + 2];
        gx = input[index + doubleWidth];
        ix = input[index + doubleWidth + 2];
        //  1  2  1
        //  0  0  0
        // -1 -2 -1
        sumy = ay + (by * 2) + cy - gy - (2 * hy) - iy;
        // -1  0  1
        // -2  0  2
        // -1  0  1
        sumx = -ax + cx - (2 * dx) + (2 * fx) - gx + ix;
        total[index] = (int) Math.sqrt(sumx * sumx + sumy * sumy);
        // Math.atan2(sumx, sumy);
        if (max < total[index])
            max = total[index];
        // sum = -a - (2*b) - c + g + (2*h) + i;
        if (total[index] < 0)
            total[index] = 0;
        // zero out values above 255
        if (total[index] > 255)
            total[index] = 0;
        sum = (int) (total[index]);
        output[index] = 0xff000000 | (sum << 16) | (sum << 8) | sum;
        size--;
        // next
        index++;
    }
    // ratio = max / 255;
}
Thanks in advance!
So I have two things:
I would consider losing the Math.sqrt() expression: if you are only interested in edge detection, I see no need for it, as the sqrt function is monotonic and really costly to calculate.
I would consider another algorithm; in particular, I have had good results with a separable convolution filter: http://www.songho.ca/dsp/convolution/convolution.html#separable_convolution This might bring down the number of arithmetic floating-point operations (which is probably your bottleneck).
I hope this helps, or at least sparks some inspiration. Good luck.
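To illustrate the second suggestion: the 3x3 Sobel kernels factor into two 1-D passes, which cuts the per-pixel arithmetic and improves cache behavior. A minimal sketch for the vertical-gradient kernel (the "sumy" one from the question; class and method names are made up for illustration):

```java
// Separable Sobel: the 3x3 vertical-gradient kernel
//   1  2  1
//   0  0  0
//  -1 -2 -1
// factors into a horizontal smoothing pass [1 2 1] followed by a
// vertical difference pass [1 0 -1].
public class SeparableSobel {

    public static int[][] sobelY(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] smooth = new int[h][w];
        int[][] out = new int[h][w];
        // Pass 1: horizontal [1, 2, 1]
        for (int y = 0; y < h; y++)
            for (int x = 1; x < w - 1; x++)
                smooth[y][x] = img[y][x - 1] + 2 * img[y][x] + img[y][x + 1];
        // Pass 2: vertical [1, 0, -1]
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++)
                out[y][x] = smooth[y - 1][x] - smooth[y + 1][x];
        return out;
    }

    public static void main(String[] args) {
        // A horizontal edge: bright rows on top, dark below.
        int[][] img = {
            { 9, 9, 9 },
            { 9, 9, 9 },
            { 0, 0, 0 },
        };
        System.out.println(sobelY(img)[1][1]); // strong response at the edge
    }
}
```

For a 3x3 kernel the saving is modest (two 3-tap passes instead of one 9-tap pass), but the intermediate `smooth` rows can be reused across output rows, which is where the real win comes from.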
If you are using your algorithm in real-time, call it less often, maybe every ~20 frames instead of every frame.
Do more work per iteration. At 800x400, your algorithm runs 318,398 iterations. Each iteration pulls from the input array in a (to the processor) random way, which hurts caching. Try pulling ay, ay2, by, by2, cy, cy2 and doing twice the calculation per loop; you'll notice that the variables of the next iteration relate to the previous one (ay becomes what cy was, and so on).
Here's a rewrite of your algorithm doing twice the work per iteration. It saves a bit in redundant memory access, and replaces the square root mentioned in the other answer with a faster integer-only version.
public void process() {
    progress = 0;
    index = 0;
    // calculate size
    // pixel index
    size = width * (height - 2) - 2;
    // do FIRST iteration outside of loop
    // grab input, avoiding redundant memory accesses
    ay = ax = input[index];
    by = ay2 = ax2 = input[index + 1];
    cy = by2 = cx = input[index + 2];
    cy2 = cx2 = input[index + 3];
    gy = gx = input[index + doubleWidth];
    hy = gy2 = gx2 = input[index + doubleWidth + 1];
    iy = hy2 = ix = input[index + doubleWidth + 2];
    iy2 = ix2 = input[index + doubleWidth + 3];
    dx = input[index + width];
    dx2 = input[index + width + 1];
    fx = input[index + width + 2];
    fx2 = input[index + width + 3];
    //
    sumy = ay + (by * 2) + cy - gy - (2 * hy) - iy;
    sumy2 = ay2 + (by2 * 2) + cy2 - gy2 - (2 * hy2) - iy2;
    sumx = -ax + cx - (2 * dx) + (2 * fx) - gx + ix;
    sumx2 = -ax2 + cx2 - (2 * dx2) + (2 * fx2) - gx2 + ix2;
    // use a fast integer-only square root instead of Math.sqrt
    total[index] = fastSqrt(sumx * sumx + sumy * sumy);
    total[index + 1] = fastSqrt(sumx2 * sumx2 + sumy2 * sumy2);
    max = Math.max(max, Math.max(total[index], total[index + 1]));
    // skip the test for negative values - they can never happen
    if (total[index] > 255) total[index] = 0;
    if (total[index + 1] > 255) total[index + 1] = 0;
    sum = (int) (total[index]);
    sum2 = (int) (total[index + 1]);
    output[index] = 0xff000000 | (sum << 16) | (sum << 8) | sum;
    output[index + 1] = 0xff000000 | (sum2 << 16) | (sum2 << 8) | sum2;
    size -= 2;
    index += 2;
    while (size > 0) {
        // grab input, reusing values from the previous iteration
        ay = ax = cy;
        by = ay2 = ax2 = cy2;
        cy = by2 = cx = input[index + 2];
        cy2 = cx2 = input[index + 3];
        gy = gx = iy;
        hy = gy2 = gx2 = iy2;
        iy = hy2 = ix = input[index + doubleWidth + 2];
        iy2 = ix2 = input[index + doubleWidth + 3];
        dx = fx;
        dx2 = fx2;
        fx = input[index + width + 2];
        fx2 = input[index + width + 3];
        //
        sumy = ay + (by * 2) + cy - gy - (2 * hy) - iy;
        sumy2 = ay2 + (by2 * 2) + cy2 - gy2 - (2 * hy2) - iy2;
        sumx = -ax + cx - (2 * dx) + (2 * fx) - gx + ix;
        sumx2 = -ax2 + cx2 - (2 * dx2) + (2 * fx2) - gx2 + ix2;
        // use a fast integer-only square root instead of Math.sqrt
        total[index] = fastSqrt(sumx * sumx + sumy * sumy);
        total[index + 1] = fastSqrt(sumx2 * sumx2 + sumy2 * sumy2);
        max = Math.max(max, Math.max(total[index], total[index + 1]));
        // skip the test for negative values - they can never happen
        if (total[index] > 255) total[index] = 0;
        if (total[index + 1] > 255) total[index + 1] = 0;
        sum = (int) (total[index]);
        sum2 = (int) (total[index + 1]);
        output[index] = 0xff000000 | (sum << 16) | (sum << 8) | sum;
        output[index + 1] = 0xff000000 | (sum2 << 16) | (sum2 << 8) | sum2;
        size -= 2;
        index += 2;
    }
}

// some faster integer-only implementation of square root
public static int fastSqrt(int x) {
    return (int) Math.sqrt(x); // placeholder - swap in an integer-only version (see link below)
}
Please note: the above code was not tested; it was written inside the browser window and may contain syntax errors.
EDIT: You could try using a fast integer-only square root function to avoid Math.sqrt:
http://atoms.alife.co.uk/sqrt/index.html
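One possible way to fill in the fastSqrt stub (my own sketch, not from the linked page) is the classic bit-by-bit integer square root, which uses no floating point at all:

```java
// Integer square root via the bit-by-bit ("abacus") method: no floating
// point, at most 16 iterations for a 32-bit input.
public class IntSqrt {

    public static int fastSqrt(int x) {
        int result = 0;
        int bit = 1 << 30;  // highest power of 4 that fits in an int
        while (bit > x) bit >>= 2;
        while (bit != 0) {
            if (x >= result + bit) {
                x -= result + bit;
                result = (result >> 1) + bit;
            } else {
                result >>= 1;
            }
            bit >>= 2;
        }
        return result;  // floor(sqrt(x)) for x >= 0
    }

    public static void main(String[] args) {
        // Sobel magnitudes peak around sqrt(2) * 1020, i.e. ~1443.
        System.out.println(fastSqrt(1443 * 1443)); // 1443
    }
}
```

Whether this actually beats a JIT-compiled Math.sqrt on a given device is worth benchmarking; on older Android hardware without fast FPUs it often did.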
