Huge negative values returned by the getPixel() method - Android

I am developing an image processing app (newbie here) and I am trying to extract the value of specific pixels using the getPixel() method.
The problem is that the number I get from this method is a huge negative number, something like -1298383. Is this normal? How can I fix it?
Thanks.

I'm not an expert, but to me it looks like you are getting the packed ARGB colour int (all four channels stored in one 32-bit value). Perhaps you want something more understandable, like the value of each RGB channel.
To unpack a pixel into its RGB values you should do something like:
private short[][] red;
private short[][] green;
private short[][] blue;

/**
 * Map each intensity of an RGB colour into its respective colour channel
 */
private void unpackPixel(int pixel, int row, int col) {
    red[row][col] = (short) ((pixel >> 16) & 0xFF);
    green[row][col] = (short) ((pixel >> 8) & 0xFF);
    blue[row][col] = (short) ((pixel >> 0) & 0xFF);
}
And after changes in each channel you can pack the pixel back.
/**
 * Create an ARGB colour pixel. The alpha byte is set to fully opaque,
 * otherwise the resulting pixel would be transparent on Android.
 */
private int packPixel(int red, int green, int blue) {
    return (0xFF << 24) | (red << 16) | (green << 8) | blue;
}
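A possible way to wire those helpers up (just a sketch: it assumes a Bitmap called bitmap and that the three arrays were created as new short[bitmap.getHeight()][bitmap.getWidth()]):
// Hypothetical driver loop, not from the original answer.
for (int row = 0; row < bitmap.getHeight(); row++) {
    for (int col = 0; col < bitmap.getWidth(); col++) {
        // getPixel() takes (x, y), i.e. (column, row)
        unpackPixel(bitmap.getPixel(col, row), row, col);
    }
}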
Sorry if it is not what you are looking for.

You can get the pixel from the view like this:
ImageView imageView = ((ImageView)v);
Bitmap bitmap = ((BitmapDrawable)imageView.getDrawable()).getBitmap();
int pixel = bitmap.getPixel(x,y);
Now you can get each channel with:
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);

getPixel() returns the Color at the specified location. Throws an exception if x or y are out of bounds (negative or >= to the width or height respectively).
The returned color is a non-premultiplied ARGB value.
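That large negative number is expected: getPixel() returns the packed ARGB int, and whenever the alpha byte is 0xFF (fully opaque) the sign bit of the 32-bit int is set, so the decimal representation looks like a huge negative value. A quick sketch to see this, reusing the bitmap, x and y from the snippet above:
int pixel = bitmap.getPixel(x, y);        // e.g. Color.RED == 0xFFFF0000 == -65536
String hex = Integer.toHexString(pixel);  // "ffff0000" is much easier to read
int alpha = Color.alpha(pixel);           // 255 for a fully opaque pixel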

Related

Getting value of A in ARGB background color in android programmatically

Suppose I have a background color set as #AF000000 (AARRGGBB) in Android.
I want the value of the alpha channel (AA) in decimal (0-255), which is going to be 175.
How do I accomplish that programmatically?
Here is a pure Java solution that doesn't use the Android-specific getAlpha() method.
Do you have this value stored in a String or an int? If you have it in a String, first get rid of the # character then convert it to an int:
String hexString = "#05000000";
// Parse as a long and cast: Integer.parseInt() overflows for values
// above 0x7FFFFFFF, such as "AF000000".
int color = (int) Long.parseLong(hexString.replaceAll("#", ""), 16);
Then we need to do some bit manipulation. This hex color representation means (in ARGB mode) you have the values #AARRGGBB: one byte (two hex digits) for each channel, including the alpha one. To get just the alpha channel (the AA part of the hex value), we need to shift it three bytes to the right so we end up with something like #000000AA. Since each byte is 8 bits, we shift the value 3 * 8 = 24 bits to the right:
int alpha = (color >> 24) & 0xFF; // the mask undoes the sign extension for values like 0xAF000000
This process is called bit shifting. The RGB bits are discarded (and the mask clears the bits filled in by sign extension), so we end up with the alpha value stored in an int with a decimal value between 0 and 255.
EDIT: If you already have the alpha as the 0-1 float returned from getAlpha(), you can always multiply it by 255 and floor it (the cast is needed because Math.floor() returns a double):
int alpha = (int) Math.floor(myView.getAlpha() * 255);
You can convert your hex string to decimal simply with:
int i = Integer.parseInt(HEX_STR, 16);
or, if you need to handle a long hex value, use something like:
public static long hexToLong(String hex) {
    return Long.parseLong(hex, 16);
}
To get each individual int value:
int argb = Color.parseColor("#00112233");
int alpha = 0xFF & (argb >> 24);
int red = 0xFF & (argb >> 16);
int green = 0xFF & (argb >> 8);
int blue = 0xFF & (argb >> 0);
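Equivalently, the helpers in android.graphics.Color do the same masking for you:
int argb  = Color.parseColor("#00112233");
int alpha = Color.alpha(argb);  // 0x00 == 0
int red   = Color.red(argb);    // 0x11 == 17
int green = Color.green(argb);  // 0x22 == 34
int blue  = Color.blue(argb);   // 0x33 == 51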

VideoSurfaceView - render to file

I'm using VideoSurfaceView to render filtered video. I'm doing it by changing the fragment shader according to my needs. Now I would like to save/render the video after the changes to a file of the same format (e.g. mp4 - h264), but I couldn't find out how to do it.
PS - saving a texture as a bitmap and the bitmap to a file is easy, but I couldn't find how to do it with videos..
Any experts here?
As you already found out and said in the comments, OpenGL can't export multiple frames as a video.
Though if you simply want to filter/process each frame of a video, then you don't need OpenGL at all, and you don't need a fragment shader; you can simply loop through all the pixels yourself.
Now let's say that you process your video one frame at a time and each frame is a BufferedImage. You can of course use whatever you want or whatever you are provided with, as long as you have a way to get and set pixels.
I'm simply supplying you with a way of calculating and applying a filter, you will have to do the decoding and encoding of the video file yourself.
But back to the BufferedImage: first we want to get all the pixels of the image, which we can do using the following.
BufferedImage bi = ...; // Here you would get a frame from the video
int width = bi.getWidth();
int height = bi.getHeight();
int[] pixels = ((DataBufferInt) bi.getRaster().getDataBuffer()).getData();
Be aware that depending on the type of image, and whether the image contains transparency, the DataBuffer might be a DataBufferInt, a DataBufferByte, etc. You can read about the different DataBuffer types in the Oracle docs.
Now, simply by looping through the pixels of the image, we can apply any kind of effect or filter.
Let's say we want to create a grayscale effect, also called a black-and-white effect; you could do that as follows.
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        final int index = x + y * width;
        final int pixel = pixels[index];
        final int alpha = (pixel >> 24) & 0xFF;
        final int red = (pixel >> 16) & 0xFF;
        final int green = (pixel >> 8) & 0xFF;
        final int blue = pixel & 0xFF;
        final int gray = (red + green + blue) / 3;
        pixels[index] = alpha << 24 | gray << 16 | gray << 8 | gray;
    }
}
Now you can simply save the image again, or do whatever else you like with it. You can also keep drawing the BufferedImage, because writing to the pixel array returned by getData() modifies the BufferedImage directly.
Important: if you want to perform a blur effect, store each calculated pixel in a second array, because a blur needs the surrounding pixels. If you overwrite the old values while you are still looping, some pixels will be computed from already-blurred neighbours instead of the original values (see the sketch below).
The above grayscale code of course works for still images as well.
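For example, a minimal 3x3 box-blur sketch, assuming the same width, height and pixels variables as in the grayscale loop above:
// Box blur: read from pixels, write into a copy so that already-blurred
// values never feed back into the calculation.
int[] output = new int[pixels.length];
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int rSum = 0, gSum = 0, bSum = 0, count = 0;
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int p = pixels[nx + ny * width];
                rSum += (p >> 16) & 0xFF;
                gSum += (p >> 8) & 0xFF;
                bSum += p & 0xFF;
                count++;
            }
        }
        int alpha = (pixels[x + y * width] >> 24) & 0xFF; // keep the original alpha
        output[x + y * width] = alpha << 24 | (rSum / count) << 16
                | (gSum / count) << 8 | (bSum / count);
    }
}
System.arraycopy(output, 0, pixels, 0, pixels.length);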
Extra
If you want to get the RGBA values stored in a single int, you can do the following.
int pixel = 0xFFFF8040; // This is a random testing value
int alpha = (pixel >> 24) & 0xFF; // Would equal 255 using the testing value
int red = (pixel >> 16) & 0xFF; // ... 255 ...
int green = (pixel >> 8) & 0xFF; // ... 128 ...
int blue = pixel & 0xFF; // ... 64 ...
Then, if you have the RGBA values and want to combine them into a single int, you can do the following.
int alpha = 255;
int red = 255;
int green = 128;
int blue = 64;
int pixel = alpha << 24 | red << 16 | green << 8 | blue;
If you only have the RGB values, then you can do either red << 16 | green << 8 | blue (which leaves the alpha at 0) or 255 << 24 | red << 16 | green << 8 | blue (fully opaque).

Color detection in a static image - OpenCV Android

I have an image with four squares: red, green, blue and yellow. I need to get the RGB values of each square. I'm able to get the RGB of the whole image, but I want it for a specific section.
The image I am going to process will come from the camera and be stored on the SD card.
I don't know if I understand you exactly, but here it comes.
You need to create a BufferedImage object to get the RGB values:
File f = new File(yourFilePath);
BufferedImage img = ImageIO.read(f);
From then on you can get RGB Color values from the image. You have 4 squares; to check their RGB values, you can sample the corner pixels:
Color leftTop = new Color(img.getRGB(0, 0));
Color rightTop = new Color(img.getRGB(img.getWidth() - 1, 0));
Color leftBottom = new Color(img.getRGB(0, img.getHeight() - 1));
Color rightBottom = new Color(img.getRGB(img.getWidth() - 1, img.getHeight() - 1));
After that it's easy to get red, green and blue values individually:
int red = leftTop.getRed();
int green = leftTop.getGreen();
int blue = leftTop.getBlue();
EDIT:
I'm really sorry, I didn't see it's for Android. As you said, Android doesn't have the ImageIO class. To accomplish the task on Android, first initialize the image:
Bitmap img = BitmapFactory.decodeFile(yourFilePath);
From then the operation is pretty much the same:
int leftTop = img.getPixel(0, 0);
...
int red = Color.red(leftTop);
int blue = Color.blue(leftTop);
int green = Color.green(leftTop);
Use this to crop your image.
Now, to detect the color of the image, take a pixel from the square and detect its color with this.
After finding the RGB value, use a simple conditional statement to see if the square is red, blue or green.
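For example, a rough dominant-channel check (the helper name and the 200/100 thresholds are placeholder assumptions, tune them for your images):
// Hypothetical helper using android.graphics.Color: classify a pixel by its
// dominant channel; pixel is an int as returned by Bitmap.getPixel().
static String classify(int pixel) {
    int r = Color.red(pixel);
    int g = Color.green(pixel);
    int b = Color.blue(pixel);
    if (r > 200 && g > 200 && b < 100) return "yellow"; // red + green reads as yellow
    if (r > g && r > b) return "red";
    if (g > r && g > b) return "green";
    if (b > r && b > g) return "blue";
    return "unknown";
}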
I got it this way
int topLeftIndex = squareImage.getPixel(0, 0);
int R1 = (topLeftIndex >> 16) & 0xff;
int G1 = (topLeftIndex >> 8) & 0xff;
int B1 = topLeftIndex & 0xff;
and the same way with
int bottomLeftIndex = squareImage.getPixel(0, picHeight - 1);
int topRightIndex = squareImage.getPixel(picWidth - 1, 0);
int bottomRightIndex = squareImage.getPixel(picWidth - 1, picHeight - 1);

Android NDK set RGB bitmap pixels

I am trying to do some simple image filtering using Android's NDK and seem to be having some issues with getting and setting the RGB values of the bitmap.
I have stripped out all the actual processing and am just trying to set every pixel of the bitmap to red, but I end up with a blue image instead. I assume there is something simple that I have overlooked but any help is appreciated.
static void changeIt(AndroidBitmapInfo* info, void* pixels) {
    int x, y, red, green, blue;
    for (y = 0; y < info->height; y++) {
        uint32_t* line = (uint32_t*) pixels;
        for (x = 0; x < info->width; x++) {
            // get the values
            red = (int) ((line[x] & 0xFF0000) >> 16);
            green = (int) ((line[x] & 0x00FF00) >> 8);
            blue = (int) (line[x] & 0x0000FF);
            // just set it to all be red for testing
            red = 255;
            green = 0;
            blue = 0;
            // why is the image totally blue??
            line[x] =
                ((red << 16) & 0xFF0000) |
                ((green << 8) & 0x00FF00) |
                (blue & 0x0000FF);
        }
        pixels = (char*) pixels + info->stride;
    }
}
How should I both get and then set the rgb values for each pixel??
Update with answer
As pointed out in the answer below, little endian is used, so in my original code I just had to switch the red and blue variables:
static void changeIt(AndroidBitmapInfo* info, void* pixels) {
    int x, y, red, green, blue;
    for (y = 0; y < info->height; y++) {
        uint32_t* line = (uint32_t*) pixels;
        for (x = 0; x < info->width; x++) {
            // get the values: the channels sit in the opposite bytes
            // compared to what I originally assumed
            blue = (int) ((line[x] & 0xFF0000) >> 16);
            green = (int) ((line[x] & 0x00FF00) >> 8);
            red = (int) (line[x] & 0x0000FF);
            // just set it to all be red for testing
            red = 255;
            green = 0;
            blue = 0;
            // write the channels back in the same little-endian order
            // (note this leaves the alpha byte at 0; OR in (line[x] & 0xFF000000)
            // if you need to preserve it)
            line[x] =
                ((blue << 16) & 0xFF0000) |
                ((green << 8) & 0x00FF00) |
                (red & 0x0000FF);
        }
        pixels = (char*) pixels + info->stride;
    }
}
It depends on the pixel format. Presumably your bitmap is in RGBA (RGBA_8888). So the 32-bit value 0x00FF0000 corresponds to the byte sequence 0x00, 0x00, 0xFF, 0x00 in memory (little endian), i.e. R = 0, G = 0, B = 255, A = 0: blue, with an alpha of 0.
I am not an Android developer so I don't know if there are helper functions to get/set color components or if you have to do it yourself, based on the AndroidBitmapInfo.format field. You'll have to read the API documentation.

How to apply a color-to-grayscale conversion algorithm on Android?

I am trying to use one of these algorithms to convert an RGB image to grayscale:
The lightness method averages the most prominent and least prominent colors:
(max(R, G, B) + min(R, G, B)) / 2.
The average method simply averages the values: (R + G + B) / 3.
The formula for luminosity is 0.21 R + 0.71 G + 0.07 B.
But I get very weird results! I know there are other ways to achieve this, but is it possible to do it this way?
Here is the code:
for (int i = 0; i < eWidth * eHeight; i++) {
    int R = (pixels[i] >> 16); // bitwise shifting
    int G = (pixels[i] >> 8);
    int B = pixels[i];
    int gray = (R + G + B) / 3;
    pixels[i] = (gray << 16) | (gray << 8) | gray;
}
You need to strip off the bits that aren't part of the component you're getting, especially if there's any sign extension going on in the shifts.
int R = (pixels[i] >> 16) & 0xff; // bitwise shifting
int G = (pixels[i] >> 8) & 0xff;
int B = pixels[i] & 0xff;
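Putting it together, a corrected version of the loop from the question might look like this (a sketch that also keeps the alpha byte intact):
for (int i = 0; i < eWidth * eHeight; i++) {
    int A = (pixels[i] >> 24) & 0xff; // preserve the alpha channel
    int R = (pixels[i] >> 16) & 0xff;
    int G = (pixels[i] >> 8) & 0xff;
    int B = pixels[i] & 0xff;
    int gray = (R + G + B) / 3;
    pixels[i] = (A << 24) | (gray << 16) | (gray << 8) | gray;
}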
What you made looks all right to me.
I once did this in Java, in much the same way: getting the average of the 0-255 RGB values to get grayscale. It looks a lot like yours.
public int getGray(int row, int col) throws Exception
{
    checkInImage(row, col);
    int[] rgb = this.getRGB(row, col);
    return (int) (rgb[0] + rgb[1] + rgb[2]) / 3;
}
I understand you are not asking how to code this, but for an algorithm?
There is no "correct" algorithm as per http://www.dfanning.com/ip_tips/color2gray.html
They use
Y = 0.3*R + 0.59*G + 0.11*B
You can certainly modify each pixel in Java, but that's very inefficient. If you have the option, I would use a ColorMatrix. See the Android documentation for details: http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/graphics/ColorMatrixSample.html
You could set the matrix's saturation to 0 to make it grayscale.
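A rough sketch of that approach (the method and variable names below are placeholders, not from the original post; it uses android.graphics.Bitmap, Canvas, ColorMatrix, ColorMatrixColorFilter and Paint):
// Grayscale via ColorMatrix: the drawing pipeline does the per-pixel work.
Bitmap toGrayscale(Bitmap src) {
    Bitmap result = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(result);
    ColorMatrix cm = new ColorMatrix();
    cm.setSaturation(0); // zero saturation == grayscale
    Paint paint = new Paint();
    paint.setColorFilter(new ColorMatrixColorFilter(cm));
    canvas.drawBitmap(src, 0, 0, paint);
    return result;
}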
If you really want to do it in Java, you can do it the way you did, but you'll need to mask out each component first, i.e. apply & 0xff to it.
