I'm trying to use Android's RenderScript to render a semi-transparent circle behind an image, but things go very wrong when returning a value from the RenderScript kernel.
This is my kernel:
#pragma version(1)
#pragma rs java_package_name(be.abyx.aurora)
// We don't need very high precision floating points
#pragma rs_fp_relaxed

// Center position of the circle
int centerX = 0;
int centerY = 0;
// Radius of the circle
int radius = 0;

// Destination colour of the background can be set here.
float destinationR;
float destinationG;
float destinationB;
float destinationA;

static int square(int input) {
    return input * input;
}

uchar4 RS_KERNEL circleRender(uchar4 in, uint32_t x, uint32_t y) {
    // Convert input uchar4 to float4
    float4 f4 = rsUnpackColor8888(in);

    // Check if the current coordinates fall inside the circle
    if (square(x - centerX) + square(y - centerY) < square(radius)) {
        // Check if current position is transparent, we then need to add the background!
        if (f4.a == 0) {
            uchar4 temp = rsPackColorTo8888(0.686f, 0.686f, 0.686f, 0.561f);
            return temp;
        }
    }
    return rsPackColorTo8888(f4);
}
Now, the rsPackColorTo8888() function takes 4 floats with values between 0.0 and 1.0. The resulting ARGB colour is found by multiplying each float value by 255. So the given floats correspond to the colour R = 0.686 * 255 = 175, G = 0.686 * 255 = 175, B = 0.686 * 255 = 175 and A = 0.561 * 255 = 143.
The rsPackColorTo8888() function itself works correctly, but when the resulting uchar4 value is returned from the kernel, something really weird happens. The R, G and B values change to Red * Alpha = 56, Green * Alpha = 56 and Blue * Alpha = 56 respectively, where Alpha is 0.561. This means that no value of R, G or B can ever be larger than A = 0.561 * 255.
Setting the output manually, instead of using rsPackColorTo8888(), yields exactly the same behaviour. The following code produces the exact same result, which in turn proves that rsPackColorTo8888() is not the problem:
if (square(x - centerX) + square(y - centerY) < square(radius)) {
    // Check if current position is transparent, we then need to add the background!
    if (f4.a == 0) {
        uchar4 temp;
        temp[0] = 175;
        temp[1] = 175;
        temp[2] = 175;
        temp[3] = 143;
        return temp;
    }
}
This is the Java-code from which the script is called:
@Override
public Bitmap renderParallel(Bitmap input, int backgroundColour, int padding) {
    ResizeUtility resizeUtility = new ResizeUtility();

    // We want to end up with a square Bitmap with some padding applied to it, so we use
    // the length of the largest dimension (width or height) as the width of our square.
    int dimension = resizeUtility.getLargestDimension(input.getWidth(), input.getHeight()) + 2 * padding;

    Bitmap output = resizeUtility.createSquareBitmapWithPadding(input, padding);
    output.setHasAlpha(true);

    RenderScript rs = RenderScript.create(this.context);

    Allocation inputAlloc = Allocation.createFromBitmap(rs, output);
    Type t = inputAlloc.getType();
    Allocation outputAlloc = Allocation.createTyped(rs, t);

    ScriptC_circle_render circleRenderer = new ScriptC_circle_render(rs);
    circleRenderer.set_centerX(dimension / 2);
    circleRenderer.set_centerY(dimension / 2);
    circleRenderer.set_radius(dimension / 2);
    circleRenderer.set_destinationA(((float) Color.alpha(backgroundColour)) / 255.0f);
    circleRenderer.set_destinationR(((float) Color.red(backgroundColour)) / 255.0f);
    circleRenderer.set_destinationG(((float) Color.green(backgroundColour)) / 255.0f);
    circleRenderer.set_destinationB(((float) Color.blue(backgroundColour)) / 255.0f);

    circleRenderer.forEach_circleRender(inputAlloc, outputAlloc);
    outputAlloc.copyTo(output);

    inputAlloc.destroy();
    outputAlloc.destroy();
    circleRenderer.destroy();
    rs.destroy();

    return output;
}
When alpha is set to 255 (or 1.0 as a float), the returned colour values (inside my application's Java code) are correct.
Am I doing something wrong, or is this really a bug somewhere in the RenderScript implementation?
Note: I've checked and verified this behaviour on a OnePlus 3T (Android 7.1.1), a Nexus 5 (Android 7.1.2), and Android emulators running 7.1.2 and 6.0.
Instead of passing the individual values:
uchar4 temp = rsPackColorTo8888(0.686f, 0.686f, 0.686f, 0.561f);
try creating a float4 and passing that:
float4 newFloat4 = { 0.686, 0.686, 0.686, 0.561 };
uchar4 temp = rsPackColorTo8888(newFloat4);
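If the intention is to use the background colour that is already passed in from Java, the same float4 pattern could reuse the destination* globals declared in the question's script (a small sketch, not taken from the original code):
// build the background colour from the globals set by the Java side
float4 bg = { destinationR, destinationG, destinationB, destinationA };
uchar4 temp = rsPackColorTo8888(bg);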
Related
Based on the discussion I had at Camera2 api Imageformat.yuv_420_888 results on rotated image, I want to know how to adjust the lookups done via the rsGetElementAt_uchar methods so that the YUV data is rotated by 90 degrees.
I also have a project like the HdrViewfinder provided by Google. The problem is that the output is in landscape, because the output surface used as the target is connected to the YUV allocation, which does not care whether the device is in landscape or portrait mode. I want to adjust the code so that the result is in portrait mode.
Therefore, I took a custom YUV-to-RGBA RenderScript, but I do not know what to change to rotate the output.
Can somebody help me adjust the following custom YUV-to-RGBA script by 90 degrees, because I want to use the output in portrait mode:
// Needed directive for RS to work
#pragma version(1)
// The java_package_name directive needs to use your Activity's package path
#pragma rs java_package_name(net.hydex11.cameracaptureexample)

rs_allocation inputAllocation;

int wIn, hIn;
int numTotalPixels;

// Function to invoke before applying conversion
void setInputImageSize(int _w, int _h)
{
    wIn = _w;
    hIn = _h;
    numTotalPixels = wIn * hIn;
}

// Kernel that converts a YUV element to a RGBA one
uchar4 __attribute__((kernel)) convert(uint32_t x, uint32_t y)
{
    // YUV 4:2:0 planar image, with 8 bit Y samples, followed by
    // interleaved V/U plane with 8bit 2x2 subsampled chroma samples
    int baseIdx = x + y * wIn;
    int baseUYIndex = numTotalPixels + (y >> 1) * wIn + (x & 0xfffffe);

    uchar _y = rsGetElementAt_uchar(inputAllocation, baseIdx);
    uchar _u = rsGetElementAt_uchar(inputAllocation, baseUYIndex);
    uchar _v = rsGetElementAt_uchar(inputAllocation, baseUYIndex + 1);

    _y = _y < 16 ? 16 : _y;

    short Y = ((short)_y) - 16;
    short U = ((short)_u) - 128;
    short V = ((short)_v) - 128;

    uchar4 out;
    out.r = (uchar) clamp((float)((Y * 298 + V * 409 + 128) >> 8), 0.f, 255.f);
    out.g = (uchar) clamp((float)((Y * 298 - U * 100 - V * 208 + 128) >> 8), 0.f, 255.f);
    out.b = (uchar) clamp((float)((Y * 298 + U * 516 + 128) >> 8), 0.f, 255.f);
    out.a = 255;

    return out;
}
I found that custom script at https://bitbucket.org/cmaster11/rsbookexamples/src/tip/CameraCaptureExample/app/src/main/rs/customYUVToRGBAConverter.fs.
Here someone has posted Java code to rotate YUV data, but I want to do it in RenderScript since that is faster.
Any help would be great.
I'm assuming you want the output to be in RGBA, as in your conversion script. You should be able to use an approach like that used in this answer; that is, simply modify the x and y coordinates as the first step in the convert kernel:
// Rotate 90 deg clockwise during the conversion
uchar4 __attribute__((kernel)) convert(uint32_t inX, uint32_t inY)
{
    uint32_t x = wIn - 1 - inY;
    uint32_t y = inX;

    // ...rest of the function
Note the changes to the parameter names.
This presumes you have set up the output dimensions correctly (see linked answer). A 270 degree rotation can be accomplished in a similar way.
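For reference, setting up the rotated output allocation on the Java side might look roughly like this. It is a sketch, not code from the linked answer; the names widthIn, heightIn, yuvAllocation and script are assumptions:
// Output dimensions are swapped relative to the input frame.
Type.Builder tb = new Type.Builder(rs, Element.RGBA_8888(rs));
tb.setX(heightIn);  // rotated width  = input height
tb.setY(widthIn);   // rotated height = input width
Allocation outAllocation = Allocation.createTyped(rs, tb.create());

// wIn/hIn inside the script still describe the *input* frame.
script.set_inputAllocation(yuvAllocation);
script.invoke_setInputImageSize(widthIn, heightIn);
script.forEach_convert(outAllocation);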
I am trying to flood fill a bitmap using RenderScript, and my RenderScript file progress.rs is:
#pragma version(1)
#pragma rs java_package_name(com.intel.sample.androidbasicrs)
rs_allocation input;
int width;
int height;
int xTouchApply;
int yTouchApply;
static int same(uchar4 pixel, uchar4 in);
uchar4 __attribute__((kernel)) root(const uchar4 in, uint32_t x, uint32_t y) {
    uchar4 out = in;
    rsDebug("Process.rs : image width: ", width);
    rsDebug("Process.rs : image height: ", height);
    rsDebug("Process.rs : image pointX: ", xTouchApply);
    rsDebug("Process.rs : image pointY: ", yTouchApply);

    if(xTouchApply >= 0 && xTouchApply < width && yTouchApply >= 0 && yTouchApply < height){
        // getting touched pixel
        uchar4 pixel = rsGetElementAt_uchar4(input, xTouchApply, yTouchApply);
        rsDebug("Process.rs : getting touched pixel", 0);

        // resets the pixel stack
        int topOfStackIndex = 0;
        // creating pixel stack
        int pixelStack[width*height];

        // Pushes the touched pixel onto the stack
        pixelStack[topOfStackIndex] = xTouchApply;
        pixelStack[topOfStackIndex+1] = yTouchApply;
        topOfStackIndex += 2;

        // four way stack floodfill algorithm
        while(topOfStackIndex > 0){
            rsDebug("Process.rs : looping while", 0);

            // Pops a pixel from the stack
            int x = pixelStack[topOfStackIndex - 2];
            int y1 = pixelStack[topOfStackIndex - 1];
            topOfStackIndex -= 2;

            while (y1 >= 0 && same(rsGetElementAt_uchar4(input, x, y1), pixel)) {
                y1--;
            }
            y1++;

            int spanLeft = 0;
            int spanRight = 0;

            while (y1 < height && same(rsGetElementAt_uchar4(input, x, y1), pixel)) {
                rsDebug("Process.rs : pointX: ", x);
                rsDebug("Process.rs : pointY: ", y1);
                // NOTE: neither f4 nor channelWeights is declared anywhere in this
                // script, so these two lines do not compile as posted
                float3 outPixel = dot(f4.rgb, channelWeights);
                out = rsPackColorTo8888(outPixel);
                // conditions to traverse skipPixels to check threshold color (similar color)
                if (!spanLeft && x > 0 && same(rsGetElementAt_uchar4(input, x - 1, y1), pixel)) {
                    // Pixel to the left must also be changed, pushes it to the stack
                    pixelStack[topOfStackIndex] = x - 1;
                    pixelStack[topOfStackIndex + 1] = y1;
                    topOfStackIndex += 2;
                    spanLeft = 1;
                } else if (spanLeft && !same(rsGetElementAt_uchar4(input, x - 1, y1), pixel)) {
                    // Pixel to the left has already been changed
                    spanLeft = 0;
                }

                // conditions to traverse skipPixels to check threshold color (similar color)
                if (!spanRight && x < width - 1 && same(rsGetElementAt_uchar4(input, x + 1, y1), pixel)) {
                    // Pixel to the right must also be changed, pushes it to the stack
                    pixelStack[topOfStackIndex] = x + 1;
                    pixelStack[topOfStackIndex + 1] = y1;
                    topOfStackIndex += 2;
                    spanRight = 1;
                } else if (spanRight && x < width - 1 && !same(rsGetElementAt_uchar4(input, x + 1, y1), pixel)) {
                    // Pixel to the right has already been changed
                    spanRight = 0;
                }

                y1++;
            }
        }
    }
    return out;
}
static int same(uchar4 px, uchar4 inPx){
    int isSame = 0;
    if((px.r == inPx.r) && (px.g == inPx.g) && (px.b == inPx.b) && (px.a == inPx.a)) {
        isSame = 1;
        // rsDebug("Process.rs : matching pixel: ", isSame);
    } else {
        isSame = 0;
    }
    // rsDebug("Process.rs : matching pixel: ", isSame);
    return isSame;
}
And my Activity's code is:
inputBitmap = Bitmap.createScaledBitmap(inputBitmap, displayWidth, displayHeight, false);
// Create an allocation (which is memory abstraction in the RenderScript)
// that corresponds to the inputBitmap.
allocationIn = Allocation.createFromBitmap(
        rs,
        inputBitmap,
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SCRIPT
);
allocationOut = Allocation.createTyped(rs, allocationIn.getType());
int imageWidth = inputBitmap.getWidth();
int imageHeight = inputBitmap.getHeight();
script.set_width(imageWidth);
script.set_height(imageHeight);
script.set_input(allocationIn);
//....
//....
// and my onTouchEvent Code is
script.set_xTouchApply(xTouchApply);
script.set_yTouchApply(yTouchApply);
// Run the script.
script.forEach_root(allocationIn, allocationOut);
allocationOut.copyTo(outputBitmap);
When I touch the bitmap, the app shows "Application Not Responding". This is because the root method is called for every pixel. How can I optimize this code? How can I compare two uchar4 variables in RenderScript, how can I improve my same() method, or how can I find similar neighbouring pixels using a threshold value? I'm stuck, please help.
I don't have much knowledge of C99 or RenderScript. Can you debug my RenderScript code and tell me what's wrong with it, or how I can improve it to flood fill the bitmap? Any help will be appreciated, and sorry for my poor English ;-). Thanks.
RenderScript is Android's front end to GPU-style data-parallel execution, and it is extremely good when you want to perform an operation on each pixel, because it uses the hardware's massive parallelism: you can run one operation per pixel. For this purpose, a RenderScript program essentially starts with something like "for all pixels, do the following".
The flood fill algorithm, however, cannot run in such a parallel environment, because you only know which pixel to paint after having painted another pixel before it. This is true not only for RenderScript but for all GPU compute frameworks, such as CUDA.
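As for the sub-question about comparing two uchar4 values with a tolerance rather than exact equality, a minimal sketch could look like the following; similar() and threshold are hypothetical names, not part of the original script:
// Returns 1 when every channel differs by at most "threshold", 0 otherwise.
static int similar(uchar4 px, uchar4 inPx, uint threshold) {
    // Promote to signed ints first so the subtraction cannot wrap around.
    uint4 diff = abs(convert_int4(px) - convert_int4(inPx));
    return diff.r <= threshold && diff.g <= threshold &&
           diff.b <= threshold && diff.a <= threshold;
}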
I use RenderScript to do a Gaussian blur on an image, but no matter what I did, ScriptIntrinsicBlur is much, much faster.
Why is this? Is ScriptIntrinsicBlur using another method?
This is my RS code:
#pragma version(1)
#pragma rs java_package_name(top.deepcolor.rsimage.utils)
// gaussian blur algorithm
// the max radius of the gaussian blur
static const int MAX_BLUR_RADIUS = 1024;
// the weights of the pixels when blurring
float blurRatio[(MAX_BLUR_RADIUS << 2) + 1];
// the default blur radius
int blurRadius = 0;
//the width and height of bitmap
uint32_t width;
uint32_t height;
//bind to the input bitmap
rs_allocation input;
//the temp alloction
rs_allocation temp;
//set the radius
void setBlurRadius(int radius)
{
    if(1 > radius)
        radius = 1;
    else if(MAX_BLUR_RADIUS < radius)
        radius = MAX_BLUR_RADIUS;
    blurRadius = radius;

    /**
        calculate the blurRatio with the Gaussian function;
        when a pixel is far away from the center it will not contribute to the center,
        so take sigma as blurRadius / 2.57
    */
    float sigma = 1.0f * blurRadius / 2.57f;
    float deno = 1.0f / (sigma * sqrt(2.0f * M_PI));
    float nume = -1.0 / (2.0f * sigma * sigma);

    // calculate the gaussian function
    float sum = 0.0f;
    for(int i = 0, r = -blurRadius; r <= blurRadius; ++i, ++r)
    {
        blurRatio[i] = deno * exp(nume * r * r);
        sum += blurRatio[i];
    }

    // normalization to 1
    int len = radius + radius + 1;
    for(int i = 0; i < len; ++i)
    {
        blurRatio[i] /= sum;
    }
}
/**
    the gaussian blur is decomposed into two steps:
    1. blur in the horizontal direction
    2. blur in the vertical direction
*/
uchar4 RS_KERNEL horizontal(uint32_t x, uint32_t y)
{
    float a = 0.0f, r = 0.0f, g = 0.0f, b = 0.0f;
    for(int k = -blurRadius; k <= blurRadius; ++k)
    {
        int horizontalIndex = x + k;
        if(0 > horizontalIndex) horizontalIndex = 0;
        if(width <= horizontalIndex) horizontalIndex = width - 1;

        uchar4 inputPixel = rsGetElementAt_uchar4(input, horizontalIndex, y);
        int blurRatioIndex = k + blurRadius;
        a += inputPixel.a * blurRatio[blurRatioIndex];
        r += inputPixel.r * blurRatio[blurRatioIndex];
        g += inputPixel.g * blurRatio[blurRatioIndex];
        b += inputPixel.b * blurRatio[blurRatioIndex];
    }

    uchar4 out;
    out.a = (uchar) a;
    out.r = (uchar) r;
    out.g = (uchar) g;
    out.b = (uchar) b;
    return out;
}
uchar4 RS_KERNEL vertical(uint32_t x, uint32_t y)
{
    float a = 0.0f, r = 0.0f, g = 0.0f, b = 0.0f;
    for(int k = -blurRadius; k <= blurRadius; ++k)
    {
        int verticalIndex = y + k;
        if(0 > verticalIndex) verticalIndex = 0;
        if(height <= verticalIndex) verticalIndex = height - 1;

        uchar4 inputPixel = rsGetElementAt_uchar4(temp, x, verticalIndex);
        int blurRatioIndex = k + blurRadius;
        a += inputPixel.a * blurRatio[blurRatioIndex];
        r += inputPixel.r * blurRatio[blurRatioIndex];
        g += inputPixel.g * blurRatio[blurRatioIndex];
        b += inputPixel.b * blurRatio[blurRatioIndex];
    }

    uchar4 out;
    out.a = (uchar) a;
    out.r = (uchar) r;
    out.g = (uchar) g;
    out.b = (uchar) b;
    return out;
}
Renderscript intrinsics are implemented very differently from what you can achieve with a script of your own. This is for several reasons, but mainly because they are built by the RS driver developer of individual devices in a way that makes the best possible use of that particular hardware/SoC configuration, and most likely makes low level calls to the hardware that is simply not available at the RS programming layer.
Android does provide a generic implementation of these intrinsics though, to sort of "fall back" in case no lower hardware implementation is available. Seeing how these generic ones are done will give you some better idea of how these intrinsics work. For example, you can see the source code of the generic implementation of the 3x3 convolution intrinsic here rsCpuIntrinsicConvolve3x3.cpp.
Take a very close look at the code starting from line 98 of that source file, and notice how they use no for loops whatsoever to do the convolution. This is known as loop unrolling, where you explicitly add and multiply the 9 corresponding memory locations in the code, thereby avoiding the need for a for loop structure. This is the first rule you must take into account when optimizing parallel code: you need to get rid of all branching in your kernel. Looking at your code, you have a lot of ifs and fors that cause branching -- this means the control flow of the program is not straight through from beginning to end.
If you unroll your for loops, you will immediately see a boost in performance. Note that by removing your for structures you will no longer be able to generalize your kernel for all possible radius amounts. In that case, you would have to create fixed kernels for different radii, and this is exactly why you see separate 3x3 and 5x5 convolution intrinsics, because this is just what they do. (See line 99 of the 5x5 intrinsic at rsCpuIntrinsicConvolve5x5.cpp).
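For illustration, a fixed-radius, fully unrolled horizontal pass (radius 2, i.e. 5 taps) might look like the sketch below. The names horizontal5 and w0..w4 are assumptions: the weights would be precomputed and normalised on the Java side, and clamping at the left/right image borders is omitted for brevity:
float w0, w1, w2, w3, w4;  // precomputed, normalised Gaussian weights

uchar4 RS_KERNEL horizontal5(uint32_t x, uint32_t y)
{
    // Five explicit taps instead of a loop over the radius: no branching.
    float4 sum = convert_float4(rsGetElementAt_uchar4(input, x - 2, y)) * w0
               + convert_float4(rsGetElementAt_uchar4(input, x - 1, y)) * w1
               + convert_float4(rsGetElementAt_uchar4(input, x,     y)) * w2
               + convert_float4(rsGetElementAt_uchar4(input, x + 1, y)) * w3
               + convert_float4(rsGetElementAt_uchar4(input, x + 2, y)) * w4;
    return convert_uchar4(clamp(sum, 0.0f, 255.0f));
}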
Furthermore, the fact that you have two separate kernels doesn't help. If you're doing a gaussian blur, the convolutional kernel is indeed separable and you can do 1xN + Nx1 convolutions as you've done there, but I would recommend putting both passes together in the same kernel.
Keep in mind though, that even doing these tricks will probably still not give you as fast results as the actual intrinsics, because those have probably been highly optimized for your specific device(s).
In my app I want to edit images (brightness, contrast, etc.). I found a tutorial and am trying this to change the contrast:
public static Bitmap createContrast(Bitmap src, double value) {
    // image size
    int width = src.getWidth();
    int height = src.getHeight();
    // create output bitmap
    Bitmap bmOut = Bitmap.createBitmap(width, height, src.getConfig());
    // color information
    int A, R, G, B;
    int pixel;
    // get contrast value
    double contrast = Math.pow((100 + value) / 100, 2);

    // scan through all pixels
    for(int x = 0; x < width; ++x) {
        for(int y = 0; y < height; ++y) {
            // get pixel color
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            // apply filter contrast for every channel R, G, B
            R = Color.red(pixel);
            R = (int)(((((R / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if(R < 0) { R = 0; }
            else if(R > 255) { R = 255; }

            G = Color.red(pixel);
            G = (int)(((((G / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if(G < 0) { G = 0; }
            else if(G > 255) { G = 255; }

            B = Color.red(pixel);
            B = (int)(((((B / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if(B < 0) { B = 0; }
            else if(B > 255) { B = 255; }

            // set new pixel color to output bitmap
            bmOut.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }

    // return final image
    return bmOut;
}
calling it as:
ImageView image = (ImageView)(findViewById(R.id.image));
//image.setImageBitmap(createContrast(bitmap));
But I don't see any effect on the image. Can you please help me figure out where I am going wrong?
I saw EffectFactory from API 14. Is there something similar, or any tutorial, that can be used for image processing on older versions?
There are three basic problems with this approach. The first two are coding issues. First, you are always calling Color.red; there is no Color.green or Color.blue to be found in your code. The second issue is that the calculation is too repetitive: you assume the colors are in the range [0, 255], so it is much faster to create an array of 256 entries with the contrast precomputed for each i in [0, 255].
The third issue is more problematic: why did you consider this algorithm to improve contrast? The results are meaningless in RGB; you might get something better in a different color system. Here are the results you should expect, with your parameter value at 0, 10, 20, and 30:
And here is a sample Python code to perform the operation:
import sys
from PIL import Image

img = Image.open(sys.argv[1])
width, height = img.size

cvalue = float(sys.argv[2]) # Your parameter "value".
contrast = ((100 + cvalue) / 100) ** 2

def apply_contrast(c):
    c = (((c / 255.) - 0.5) * contrast + 0.5) * 255.0
    return min(255, max(0, int(c)))

# Build the lookup table.
ltu = []
for i in range(256):
    ltu.append(apply_contrast(i))

# The following "point" method applies a function to each
# value in the image. It considers the image as a flat sequence
# of values.
img = img.point(lambda x: ltu[x])
img.save(sys.argv[3])
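On Android, the same lookup-table idea might look roughly like this in Java (a sketch with hypothetical names, not the tutorial's code):
public static Bitmap createContrastLut(Bitmap src, double value) {
    double contrast = Math.pow((100 + value) / 100, 2);

    // Precompute the mapping once for every possible channel value.
    int[] lut = new int[256];
    for (int i = 0; i < 256; i++) {
        int c = (int) ((((i / 255.0) - 0.5) * contrast + 0.5) * 255.0);
        lut[i] = Math.max(0, Math.min(255, c));
    }

    int width = src.getWidth();
    int height = src.getHeight();
    int[] pixels = new int[width * height];
    src.getPixels(pixels, 0, width, 0, 0, width, height);

    // Apply the lookup table to every pixel in one pass.
    for (int i = 0; i < pixels.length; i++) {
        int p = pixels[i];
        pixels[i] = Color.argb(Color.alpha(p),
                lut[Color.red(p)], lut[Color.green(p)], lut[Color.blue(p)]);
    }

    Bitmap out = Bitmap.createBitmap(width, height, src.getConfig());
    out.setPixels(pixels, 0, width, 0, 0, width, height);
    return out;
}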
I have hex RGB colors and a black-and-white mask. They are two integer arrays:
mColors = new int[] {
    0xFFFF0000, 0xFFFF00FF, 0xFF0000FF, 0xFF00FFFF, 0xFF00FF00,
    0xFFFFFF00, 0xFFFF0000
};
mColorsMask = new int[] {
    0xFFFFFFFF, 0xFF000000, 0xFFFFFFFF, 0xFF000000, 0xFFFFFFFF,
    0xFFFFFFFF, 0xFF000000
};
I need to convert my colors towards black depending on a contrast value. Contrast is an integer value in the range from 0 to 255:
With white everything is fine; I OR the contrast value into each byte:
int newHexColor = (contrast << 16) | (contrast << 8) | contrast | mColors[i];
newColorsArray[i] = newHexColor;
How do I convert towards black?
You might look into using the HSB color space. It seems much more suited to what you're trying to do. In particular, you see those angles that end up black in your "what i want" image? Those correspond to "hues" at 60, 180, and 300 degrees (1.0/6, 3.0/6, and 5.0/6 in Java). The white corresponds to 0, 120, and 240 degrees (0, 1.0/3, and 2.0/3 in Java) -- and not coincidentally, the colors at those angles are primary colors (that is, two of the three RGB components are zero).
What you'd do is find the difference between your color's hue and the nearest primary color. (Should be less than 1/6.) Scale it up (multiplying by 6 should do it), to give you a value between 0 and 1.0. That will give you an "impurity" value, which is basically the deviation from the nearest primary color. Of course, that number subtracted from 1.0 gives you the "purity", or the closeness to a primary color.
You can create a greyscale color based on the impurity or purity by using the respective value as the R, G, and B, with an alpha of 1.0f.
public Color getMaskColor(Color c) {
    float[] hsv = Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null);
    float hue = hsv[0];

    // 0, 1/3, and 2/3 are the primary colors. Find the closest one to c,
    // by rounding c to the nearest third.
    float nearestPrimaryHue = Math.round(hue * 3.0f) / 3.0f;

    // difference between hue and nearestPrimaryHue <= 1/6
    // Multiply by 6 to get a value between 0 and 1.0
    float impurity = Math.abs(hue - nearestPrimaryHue) * 6.0f;
    float purity = 1.0f - impurity;

    // return a greyscale color based on the "purity"
    // (for #FF0000, would return white)
    // using impurity would return black instead
    return new Color(purity, purity, purity, 1.0f);
}
You could either use a color component of the returned color as the "contrast" value, or change the function so that it returns the "purity" or "impurity" as needed.
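For example, a hypothetical usage that reuses the mColors array from the question:
// Any channel works here, since the returned colour is greyscale.
int contrast = getMaskColor(new Color(mColors[i], true)).getRed();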
Note, the math gets wonky with greyscale colors. (The way Java calculates HSB, pure greys are just reds (hue=0) with no tint (saturation=0). The only component that changes is the brightness.) But since your color wheel doesn't have greyscale colors...
You can make the image black and white using contrast.
See the code:
public static Bitmap createContrast(Bitmap src, double value) {
    // image size
    int width = src.getWidth();
    int height = src.getHeight();
    // create output bitmap
    Bitmap bmOut = Bitmap.createBitmap(width, height, src.getConfig());
    // color information
    int A, R, G, B;
    int pixel;
    // get contrast value
    double contrast = Math.pow((100 + value) / 100, 2);

    // scan through all pixels
    for(int x = 0; x < width; ++x) {
        for(int y = 0; y < height; ++y) {
            // get pixel color
            pixel = src.getPixel(x, y);
            A = Color.alpha(pixel);
            // apply filter contrast for every channel R, G, B
            R = Color.red(pixel);
            R = (int)(((((R / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if(R < 0) { R = 0; }
            else if(R > 255) { R = 255; }

            G = Color.red(pixel);
            G = (int)(((((G / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if(G < 0) { G = 0; }
            else if(G > 255) { G = 255; }

            B = Color.red(pixel);
            B = (int)(((((B / 255.0) - 0.5) * contrast) + 0.5) * 255.0);
            if(B < 0) { B = 0; }
            else if(B > 255) { B = 255; }

            // set new pixel color to output bitmap
            bmOut.setPixel(x, y, Color.argb(A, R, G, B));
        }
    }
    return bmOut;
}
Set the double value to 50 in the method call, for example: createContrast(src, 50).