I am working with an Android project in which I want to store the pixel values of an image in an array. I used the getPixels() function and stored the result in an array named pixels, but when I try to print it in a TextView, I get negative values, something like -1623534 and so on. Why is that?
Here is my code:
Bitmap result = BitmapFactory.decodeFile(filePath);
TextView resultText = (TextView) findViewById(R.id.txtResult);
try {
    int pich = result.getHeight();
    int picw = result.getWidth();
    int[] pixels = new int[pich * picw];
    result.getPixels(pixels, 0, picw, 0, 0, picw, pich);
    // Convert the first 100 pixel values into a String
    StringBuffer buff = new StringBuffer();
    for (int i = 0; i < 100; i++) {
        buff.append(pixels[i]).append(" ");
    }
    // Show the values in the TextView
    String newString = buff.toString();
    resultText.setText(newString);
} catch (Exception e) {
    e.printStackTrace();
}
And I found in some other post something like:
int R, G, B;
for (int y = 0; y < pich; y++) {
    for (int x = 0; x < picw; x++) {
        int index = y * picw + x;
        R = (pixels[index] >> 16) & 0xff; // bitwise shifting
        G = (pixels[index] >> 8) & 0xff;
        B = pixels[index] & 0xff;
        // R, G, B - Red, Green, Blue
        // to restore the values after RGB modification, use
        // the next statement
        pixels[index] = 0xff000000 | (R << 16) | (G << 8) | B;
    }
}
So I modified my code as follows:
Bitmap result = BitmapFactory.decodeFile(filePath);
TextView resultText = (TextView) findViewById(R.id.txtResult);
try {
    int pich = result.getHeight();
    int picw = result.getWidth();
    int[] pixels = new int[pich * picw];
    result.getPixels(pixels, 0, picw, 0, 0, picw, pich);
    int R, G, B;
    for (int y = 0; y < pich; y++) {
        for (int x = 0; x < picw; x++) {
            int index = y * picw + x;
            R = (pixels[index] >> 16) & 0xff; // bitwise shifting
            G = (pixels[index] >> 8) & 0xff;
            B = pixels[index] & 0xff;
            // to restore the values after RGB modification
            pixels[index] = 0xff000000 | (R << 16) | (G << 8) | B;
        }
    }
    // Convert the first 100 pixel values into a String
    StringBuffer buff = new StringBuffer();
    for (int i = 0; i < 100; i++) {
        buff.append(pixels[i]).append(" ");
    }
    String newString = buff.toString();
    resultText.setText(newString);
} catch (Exception e) {
    e.printStackTrace();
}
Is it correct? I am still getting the negative values even after the modification. Please help me. Thanks in advance.
The reason why you are getting negative values is the following:
Each pixel of an image consists of 4 values (red, green, blue, alpha). Each value has 8 bits (one byte). All 4 together are exactly 32 bits, which is the size of an int. But when you print a (signed) integer, the first bit is interpreted as a sign flag, so you get negative values whenever that bit is 1 - which happens whenever the alpha channel, stored in the highest byte, is >= 128. For a fully opaque image (alpha = 255) every pixel therefore prints as negative.
To get the RGB values from a pixel I normally use this:
int pixel = bmp.getPixel(x, y);
int red = Color.red(pixel);
int green = Color.green(pixel);
int blue = Color.blue(pixel);
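If you just want to inspect the raw values without the sign getting in the way, one option (my own sketch, not part of the original answer) is to print the pixel as unsigned hex, or mask it into a long:
int pixel = bmp.getPixel(x, y);
// "%08X" prints all 32 bits as unsigned hex in AARRGGBB order, e.g. FFE13B42
String hex = String.format("#%08X", pixel);
// or widen to long to get the unsigned decimal value
long unsigned = pixel & 0xFFFFFFFFL;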
I have two bitmaps, namely bm1 and bm2, and I'd like to create (as quickly as possible) another bitmap which is a fading mix between bm1 and bm2, where bm1 is weighted with weight and bm2 with 1-weight.
My current implementation is as follows:
private Bitmap Fade(Bitmap bm1, Bitmap bm2, double weight) {
    int width = bm1.getWidth();
    int height = bm1.getHeight();
    if (width != bm2.getWidth() || height != bm2.getHeight())
        return null;
    Bitmap bm = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int pix_s = bm1.getPixel(x, y);
            int pix_d = bm2.getPixel(x, y);
            int r_s = (pix_s >> 16) & 0xFF;
            int g_s = (pix_s >> 8) & 0xFF;
            int b_s = pix_s & 0xFF;
            int r_d = (pix_d >> 16) & 0xFF;
            int g_d = (pix_d >> 8) & 0xFF;
            int b_d = pix_d & 0xFF;
            int r = (int) ((1 - weight) * r_s + weight * r_d);
            int g = (int) ((1 - weight) * g_s + weight * g_d);
            int b = (int) ((1 - weight) * b_s + weight * b_d);
            int pix = 0xff000000 | (r << 16) | (g << 8) | b;
            bm.setPixel(x, y, pix);
        }
    }
    return bm;
}
As you can see, it simply sets the RGB components of each pixel of the generated image with an interpolation between the corresponding RGB components of pixels in bm1 and bm2.
However, this function is very slow as it scans all the pixels of the two input bitmaps.
Is there a more efficient way to do the same?
For instance, by somehow acting on the transparency attributes?
Here it shows you how to change the opacity of your bitmap; it changes the alpha channel of the color.
And here it shows you how to overlay two bitmaps in a canvas.
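A minimal sketch of that approach (my own illustration, not code from the linked posts, using android.graphics.Canvas and Paint): draw bm1 fully opaque onto a Canvas, then draw bm2 over it with its alpha derived from the weight, so the per-pixel blend is done by the 2D pipeline instead of getPixel()/setPixel(). With opaque inputs this reproduces the (1-weight)/weight mix of the Fade method above:
private Bitmap fadeFast(Bitmap bm1, Bitmap bm2, double weight) {
    Bitmap bm = Bitmap.createBitmap(bm1.getWidth(), bm1.getHeight(),
            Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(bm);
    // bm1 at full opacity
    canvas.drawBitmap(bm1, 0, 0, null);
    // bm2 on top, weighted via the paint's alpha (0..255)
    Paint paint = new Paint();
    paint.setAlpha((int) (weight * 255));
    canvas.drawBitmap(bm2, 0, 0, paint);
    return bm;
}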
I'm currently working on a program which applies edge detection to an area of the preview frame. I have used PreviewCallback to get my cropped bitmap, and have converted it to grayscale using the following method.
int height1 = 120;
int width2 = 120;
final Bitmap resizedBitmap = Bitmap.createBitmap(bmp, 260, 15, width2, height1);
try {
    int bWidth = resizedBitmap.getWidth();
    int bHeight = resizedBitmap.getHeight();
    int[] pixels = new int[bWidth * bHeight];
    resizedBitmap.getPixels(pixels, 0, bWidth, 0, 0, bWidth, bHeight);
    for (int y = 0; y < bHeight; y++) {
        for (int x = 0; x < bWidth; x++) {
            int index = y * bWidth + x;
            int R = (pixels[index] >> 16) & 0xff; // bitwise shifting
            int G = (pixels[index] >> 8) & 0xff;
            int B = pixels[index] & 0xff;
            int gray = (int) (.299 * R + .587 * G + .114 * B);
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
I am very new to this, and would like to know whether gray is a 2D array of 120x120 pixels, or whether the value of gray is just being overwritten for each loop.
Apologies if this is very basic
Well, maybe I'm missing something, but as far as I can see gray is indeed overwritten on each iteration. You'd need something like
int[][] gray = new int[bWidth][bHeight];
// ... and then, inside the loop:
gray[x][y] = (int) (.299 * R + .587 * G + .114 * B);
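Put together with the question's code, a complete sketch might look like this (variable names follow the question; the method name is my own):
private int[][] toGrayArray(Bitmap bmp) {
    int bWidth = bmp.getWidth();
    int bHeight = bmp.getHeight();
    int[] pixels = new int[bWidth * bHeight];
    bmp.getPixels(pixels, 0, bWidth, 0, 0, bWidth, bHeight);
    int[][] gray = new int[bWidth][bHeight];
    for (int y = 0; y < bHeight; y++) {
        for (int x = 0; x < bWidth; x++) {
            int p = pixels[y * bWidth + x];
            int R = (p >> 16) & 0xff;
            int G = (p >> 8) & 0xff;
            int B = p & 0xff;
            // store the value instead of overwriting a local
            gray[x][y] = (int) (.299 * R + .587 * G + .114 * B);
        }
    }
    return gray;
}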
I am capturing images using a SurfaceView and getting the YUV raw preview data in public void onPreviewFrame(byte[] data, Camera camera).
I have to perform some image preprocessing in onPreviewFrame, so I need to convert the YUV preview data to RGB data, do the image preprocessing, and then convert back to YUV data.
I have used the following functions for decoding and encoding the YUV data:
public void onPreviewFrame(byte[] data, Camera camera) {
    Point cameraResolution = configManager.getCameraResolution();
    if (data != null) {
        Log.i("DEBUG", "data Not Null");
        // Preprocessing
        Log.i("DEBUG", "Try For Image Processing");
        Camera.Parameters mParameters = camera.getParameters();
        Size mSize = mParameters.getPreviewSize();
        int mWidth = mSize.width;
        int mHeight = mSize.height;
        int[] mIntArray = new int[mWidth * mHeight];
        // Decode Yuv data to integer array
        decodeYUV420SP(mIntArray, data, mWidth, mHeight);
        // Converting int mIntArray to Bitmap and
        // then image preprocessing
        // and back to mIntArray.
        // Encode intArray to Yuv data
        encodeYUV420SP(data, mIntArray, mWidth, mHeight);
    }
}
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);
            if (r < 0)
                r = 0;
            else if (r > 262143)
                r = 262143;
            if (g < 0)
                g = 0;
            else if (g > 262143)
                g = 262143;
            if (b < 0)
                b = 0;
            else if (b > 262143)
                b = 262143;
            // rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) &
            // 0xff00) | ((b >> 10) & 0xff);
            // rgba, divide 2^10 ( >> 10)
            rgba[yp] = ((r << 14) & 0xff000000) | ((g << 6) & 0xff0000)
                    | ((b >> 2) | 0xff00);
        }
    }
}
static public void encodeYUV420SP_original(byte[] yuv420sp, int[] rgba,
        int width, int height) {
    final int frameSize = width * height;
    int[] U, V;
    U = new int[frameSize];
    V = new int[frameSize];
    final int uvwidth = width / 2;
    int r, g, b, y, u, v;
    for (int j = 0; j < height; j++) {
        int index = width * j;
        for (int i = 0; i < width; i++) {
            r = (rgba[index] & 0xff000000) >> 24;
            g = (rgba[index] & 0xff0000) >> 16;
            b = (rgba[index] & 0xff00) >> 8;
            // rgb to yuv
            y = (66 * r + 129 * g + 25 * b + 128) >> 8 + 16;
            u = (-38 * r - 74 * g + 112 * b + 128) >> 8 + 128;
            v = (112 * r - 94 * g - 18 * b + 128) >> 8 + 128;
            // clip y
            yuv420sp[index++] = (byte) ((y < 0) ? 0 : ((y > 255) ? 255 : y));
            U[index] = u;
            V[index++] = v;
        }
    }
}
The problem is that the encoding and decoding of the YUV data must have some mistake, because even if I skip the preprocessing step, the re-encoded YUV data differ from the original data from the PreviewCallback.
Please help me resolve this issue. I need to use this code in OCR scanning, so I need to implement this type of logic.
If there is any other way of doing the same thing, please let me know.
Thanks in advance. :)
Although the documentation suggests that you can set which format the image data should arrive in from the camera, in practice you often have a choice of one: NV21, a YUV format. For lots of information on this format see http://www.fourcc.org/yuv.php#NV21 and for information on the theory behind converting it to RGB see http://www.fourcc.org/fccyvrgb.php. There is a picture-based explanation at Extract black and white image from android camera's NV21 format. There is an Android-specific section on a Wikipedia page about the subject (thanks @AlexCohn): YUV#Y'UV420sp (NV21) to RGB conversion (Android).
However, once you've set up your onPreviewFrame routine, the mechanics of going from the byte array it sends you to useful data are somewhat, ummmm, unclear. From API 8 onwards, the following solution is available, to get to a ByteStream holding a JPEG of the image (compressToJpeg is the only conversion option offered by YuvImage):
// pWidth and pHeight define the size of the preview Frame
ByteArrayOutputStream out = new ByteArrayOutputStream();
// Alter the second parameter of this to the actual format you are receiving
YuvImage yuv = new YuvImage(data, ImageFormat.NV21, pWidth, pHeight, null);
// bWidth and bHeight define the size of the bitmap you wish the fill with the preview image
yuv.compressToJpeg(new Rect(0, 0, bWidth, bHeight), 50, out);
This JPEG may then need to be converted into the format you want. If you want a Bitmap:
byte[] bytes = out.toByteArray();
Bitmap bitmap= BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
If, for whatever reason, you are unable to do this, you can do the conversion manually. Some problems to be overcome in doing this:
The data arrives in a byte array. By definition, bytes are signed numbers, meaning that they go from -128 to 127. However, the data is actually unsigned bytes (0 to 255). If this isn't dealt with, the outcome is doomed to have some odd clipping effects.
The data is in a very specific order (as per the previously mentioned web pages) and each pixel needs to be extracted carefully.
Each pixel needs to be put into the right place on a bitmap, say. This also requires a rather messy (in my view) approach of building a buffer of the data and then filling a bitmap from it.
In principle, the values should be stored [16..240], but it appears that they are stored [0..255] in the data sent to onPreviewFrame
Just about every web page on the matter proposes different coefficients, even allowing for [16..240] vs [0..255] options.
If you've actually got NV12 (another variant on YUV420), then you will need to swap the reads for U and V.
I present a solution (which seems to work), with requests for corrections, improvements and ways of making the whole thing less costly to run. I have set it out to hopefully make clear what is happening, rather than to optimise it for speed. It creates a bitmap the size of the preview image:
The data variable is coming from the call to onPreviewFrame
// Define whether expecting [16..240] or [0..255]
boolean dataIs16To240 = false;

// the bitmap we want to fill with the image
Bitmap bitmap = Bitmap.createBitmap(imageWidth, imageHeight, Bitmap.Config.ARGB_8888);
int numPixels = imageWidth * imageHeight;

// the buffer we fill up which we then fill the bitmap with
IntBuffer intBuffer = IntBuffer.allocate(imageWidth * imageHeight);

// If you're reusing a buffer, the next line is imperative to refill from the start;
// if not, it's good practice
intBuffer.position(0);

// Set the alpha for the image: 0 is transparent, 255 fully opaque
final byte alpha = (byte) 255;

// Holding variables for the loop calculation
int R = 0;
int G = 0;
int B = 0;

// Get each pixel, one at a time
for (int y = 0; y < imageHeight; y++) {
    for (int x = 0; x < imageWidth; x++) {
        // Get the Y value, stored in the first block of data
        // The logical "AND 0xff" is needed to deal with the signed issue
        float Yf = (float) (data[y * imageWidth + x] & 0xff);
        // Get U and V values, stored after Y values, one per 2x2 block
        // of pixels, interleaved. Prepare them as floats with correct range
        // ready for calculation later.
        int xby2 = x / 2;
        int yby2 = y / 2;
        // make this V for NV12/420SP
        float U = (float) (data[numPixels + 2 * xby2 + yby2 * imageWidth] & 0xff) - 128.0f;
        // make this U for NV12/420SP
        float V = (float) (data[numPixels + 2 * xby2 + 1 + yby2 * imageWidth] & 0xff) - 128.0f;
        if (dataIs16To240) {
            // Correct Y to allow for the fact that it is [16..235] and not [0..255]
            Yf = 1.164f * (Yf - 16.0f);
            // Do the YUV -> RGB conversion
            // These seem to work, but other variations are quoted
            // out there.
            R = (int) (Yf + 1.596f * V);
            G = (int) (Yf - 0.813f * V - 0.391f * U);
            B = (int) (Yf + 2.018f * U);
        } else {
            // No need to correct Y
            // These are the coefficients proposed by @AlexCohn
            // for [0..255], as per the wikipedia page referenced above
            R = (int) (Yf + 1.370705f * V);
            G = (int) (Yf - 0.698001f * V - 0.337633f * U);
            B = (int) (Yf + 1.732446f * U);
        }
        // Clip rgb values to 0-255
        R = R < 0 ? 0 : R > 255 ? 255 : R;
        G = G < 0 ? 0 : G > 255 ? 255 : G;
        B = B < 0 ? 0 : B > 255 ? 255 : B;
        // Put that pixel in the buffer
        intBuffer.put(alpha * 16777216 + R * 65536 + G * 256 + B);
    }
}
// Get buffer ready to be read
intBuffer.flip();
// Push the pixel information from the buffer onto the bitmap.
bitmap.copyPixelsFromBuffer(intBuffer);
As @Timmmm points out below, you could do the conversion in int by multiplying the scaling factors by 1000 (i.e. 1.164 becomes 1164) and then dividing the end results by 1000.
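A sketch of that integer-only variant for the [0..255] branch (my transcription of the idea, not @Timmmm's exact code):
// inside the same loops as above, reading Y, U, V as ints instead of floats
int Yi = data[y * imageWidth + x] & 0xff;
int Ui = (data[numPixels + 2 * xby2 + yby2 * imageWidth] & 0xff) - 128;
int Vi = (data[numPixels + 2 * xby2 + 1 + yby2 * imageWidth] & 0xff) - 128;
// coefficients scaled by 1000, results divided back down
R = (1000 * Yi + 1371 * Vi) / 1000;
G = (1000 * Yi - 698 * Vi - 338 * Ui) / 1000;
B = (1000 * Yi + 1732 * Ui) / 1000;
// the 0-255 clipping above is still required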
Why not specify that camera preview should provide RGB images?
i.e. Camera.Parameters.setPreviewFormat(ImageFormat.RGB_565);
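Be aware that devices are not obliged to honour this; NV21 is the only preview format guaranteed on all devices, so it is worth checking what is actually supported first (a minimal sketch):
Camera.Parameters params = camera.getParameters();
List<Integer> formats = params.getSupportedPreviewFormats();
if (formats.contains(ImageFormat.RGB_565)) {
    params.setPreviewFormat(ImageFormat.RGB_565);
    camera.setParameters(params);
}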
You can use RenderScript -> ScriptIntrinsicYuvToRGB
Kotlin Sample
val rs = RenderScript.create(CONTEXT_HERE)
val yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs))
val yuvType = Type.Builder(rs, Element.U8(rs)).setX(byteArray.size)
val inData = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT)
val rgbaType = Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height)
val outData = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT)
inData.copyFrom(byteArray)
yuvToRgbIntrinsic.setInput(inData)
yuvToRgbIntrinsic.forEach(outData)
val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
outData.copyTo(bitmap)
After some tests on a Samsung S4 mini, the fastest code is (120% faster than Neil's [floats!] and 30% faster than the original Hitesh's):
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    // define variables before loops (+ 20-30% faster algorithm o0`)
    int r, g, b, y1192, y, i, uvp, u, v;
    for (int j = 0, yp = 0; j < height; j++) {
        uvp = frameSize + (j >> 1) * width;
        u = 0;
        v = 0;
        for (i = 0; i < width; i++, yp++) {
            y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }
            y1192 = 1192 * y;
            r = (y1192 + 1634 * v);
            g = (y1192 - 833 * v - 400 * u);
            b = (y1192 + 2066 * u);
            // Java's functions are faster than 'IFs'
            r = Math.max(0, Math.min(r, 262143));
            g = Math.max(0, Math.min(g, 262143));
            b = Math.max(0, Math.min(b, 262143));
            // rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) &
            // 0xff00) | ((b >> 10) & 0xff);
            // rgba, divide 2^10 ( >> 10)
            rgba[yp] = ((r << 14) & 0xff000000) | ((g << 6) & 0xff0000)
                    | ((b >> 2) | 0xff00);
        }
    }
}
Speed is comparable to YuvImage.compressToJpeg() with a ByteArrayOutputStream as output (30-50 ms for a 640x480 image).
Result: the Samsung S4 mini (2x1.7 GHz) can't compress to JPEG / convert YUV to RGB in real time (640x480 @ 30 fps).
The Java implementation is 10 times slower than the C version, so I suggest you use the GPUImage library or move this part of the code.
There is an Android version of GPUImage:
https://github.com/CyberAgent/android-gpuimage
You can include this library if you use Gradle, and call the method:
GPUImageNativeLibrary.YUVtoRBGA(inputArray, WIDTH, HEIGHT, outputArray);
I compared the times: for a 960x540 NV21 image, the Java code above takes 200 ms+, while the GPUImage version takes just 10-20 ms.
You can use the ColorHelper library for this:
using ColorHelper;
YUV yuv = new YUV(0.1, 0.1, 0.2);
RGB rgb = ColorConverter.YuvToRgb(yuv);
Links:
Github
Nuget
A fixup of the above code snippet:
static public void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width,
        int height) {
    final int frameSize = width * height;
    int r, g, b, y1192, y, i, uvp, u, v;
    for (int j = 0, yp = 0; j < height; j++) {
        uvp = frameSize + (j >> 1) * width;
        u = 0;
        v = 0;
        for (i = 0; i < width; i++, yp++) {
            y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                // the answer above is wrong at the following lines: just swap u and v
                u = (0xff & yuv420sp[uvp++]) - 128;
                v = (0xff & yuv420sp[uvp++]) - 128;
            }
            y1192 = 1192 * y;
            r = (y1192 + 1634 * v);
            g = (y1192 - 833 * v - 400 * u);
            b = (y1192 + 2066 * u);
            r = Math.max(0, Math.min(r, 262143));
            g = Math.max(0, Math.min(g, 262143));
            b = Math.max(0, Math.min(b, 262143));
            // combine ARGB: each channel is 18 bits wide here, so shift back down to 8
            rgba[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00)
                    | ((b >> 10) & 0xff);
        }
    }
}
Try RenderScript's ScriptIntrinsicYuvToRGB, which comes with Jelly Bean 4.2 (API 17+).
https://developer.android.com/reference/android/renderscript/ScriptIntrinsicYuvToRGB.html
On Nexus 7 (2013, JellyBean 4.3) a 1920x1080 image conversion (full HD camera preview) takes about 7 ms.
You can get the bitmap directly from the TextureView, which is really fast:
Bitmap bitmap = textureview.getBitmap();
After reading many suggested links, articles, etc. I found the following great Android example app which captures the YUV Image from the camera and converts it into RGB Bitmap:
https://github.com/android/camera-samples/tree/main/CameraXTfLite
Nice things about this:
It uses the aforementioned RenderScript framework and the code can be easily reused - check out the YuvToRgbConverter.kt class
according to their documentation, this code achieves "~30 FPS @ 640x480 on a Pixel 3 phone"
After switching to this code (especially the YUV to RGB conversion part) my framerate doubled! I am not quite reaching 30 FPS overall, since I do a few more things after capturing the image, but the speed-up is remarkable!
I have a black and white picture - RGB 565, 200x50.
How can I calculate the intensity (0..255) of each pixel?
That's what I meant, thanks. Maybe this can help someone: I get the intensity (0..255) of each pixel and take the average.
Bitmap cropped = Bitmap.createBitmap(myImage, 503, 270,
        myImage.getWidth() - 955, myImage.getHeight() - 550);
Bitmap cropped2 = Bitmap.createBitmap(cropped, 0, 0,
        cropped.getWidth(), cropped.getHeight() / 2);
final double GS_RED = 0.35;
final double GS_GREEN = 0.55;
final double GS_BLUE = 0.1;
int R, G, B;
int Pixel;
int result = 0;
int g = 0;
int ff;
for (int x = 0; x < cropped2.getWidth(); x++) {
    int ff_y = 0;
    for (int y = 0; y < cropped2.getHeight(); y++) {
        Pixel = cropped.getPixel(x, y);
        R = Color.red(Pixel);
        G = Color.green(Pixel);
        B = Color.blue(Pixel);
        ff = (int) (GS_RED * R + GS_GREEN * G + GS_BLUE * B);
        ff_y += ff;
    }
    result += ff_y;
    g = result / (cropped2.getWidth() * cropped2.getHeight());
}
Toast.makeText(this, "00" + g, Toast.LENGTH_LONG).show();
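For what it's worth, here is a sketch of the same average computed with one bulk getPixels() call instead of a getPixel() call per pixel (my own variation; the bulk copy is usually noticeably faster):
int w = cropped2.getWidth();
int h = cropped2.getHeight();
int[] px = new int[w * h];
cropped2.getPixels(px, 0, w, 0, 0, w, h);
long sum = 0;
for (int i = 0; i < px.length; i++) {
    sum += (int) (0.35 * Color.red(px[i])
            + 0.55 * Color.green(px[i])
            + 0.1 * Color.blue(px[i]));
}
int average = (int) (sum / px.length);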
You could try to convert it using a color model with a luminance and two chrominance components. The luminance component accounts for the brightness while the two chrominance components represent the colors. You might want to check out http://en.wikipedia.org/wiki/YUV.
Otherwise: if I'm correct, shades from white through gray to black have equal values in all channels of an RGB format that uses the same number of bits for each channel (e.g. from (0, 0, 0) to (255, 255, 255)). Assuming this is true, you could just take one of the channels to represent the intensity, as you could determine the other values from it. No guarantee that this works.
Edit:
I wrote a snippet demonstrating the idea described above. I used RGB888, but it should also work with RGB 565 after dropping the assertion and modifying the maximum intensity of a pixel as described in the comments. Mind that RGB 565 gives you only 2^5 different intensity levels per pixel, so you might want to use a scaled version of the average intensity.
I tested it using images from http://www.smashingmagazine.com/2008/06/09/beautiful-black-and-white-photography/. I hope it will work out porting this to android for you.
// 2^5 for RGB 565
private static final int MAX_INTENSITY = (int) Math.pow(2, 8) - 1;

public static int calculateIntensityAverage(final BufferedImage image) {
    long intensitySum = 0;
    final int[] colors = image.getRGB(0, 0, image.getWidth(),
            image.getHeight(), null, 0, image.getWidth());
    for (int i = 0; i < colors.length; i++) {
        intensitySum += intensityLevel(colors[i]);
    }
    final int intensityAverage = (int) (intensitySum / colors.length);
    return intensityAverage;
}

public static int intensityLevel(final int color) {
    // also see Color#getRed(), #getBlue() and #getGreen()
    final int red = (color >> 16) & 0xFF;
    final int blue = (color >> 0) & 0xFF;
    final int green = (color >> 8) & 0xFF;
    assert red == blue && green == blue; // doesn't hold for green in RGB 565
    return MAX_INTENSITY - blue;
}