I am new to the Android NDK. I have started learning through the image-processing example by ruckus and by the IBM blog, but I am not getting a few of the lines below. Can anyone please help me understand this code snippet?
static void brightness(AndroidBitmapInfo* info, void* pixels, float brightnessValue) {
    int xx, yy, red, green, blue;
    uint32_t* line;

    for (yy = 0; yy < info->height; yy++) {
        line = (uint32_t*) pixels;
        for (xx = 0; xx < info->width; xx++) {
            //extract the RGB values from the pixel
            red   = (int) ((line[xx] & 0x00FF0000) >> 16);
            green = (int) ((line[xx] & 0x0000FF00) >> 8);
            blue  = (int)  (line[xx] & 0x000000FF);

            //manipulate each value
            red   = rgb_clamp((int) (red   * brightnessValue));
            green = rgb_clamp((int) (green * brightnessValue));
            blue  = rgb_clamp((int) (blue  * brightnessValue));

            // set the new pixel back in
            line[xx] =
                ((red   << 16) & 0x00FF0000) |
                ((green <<  8) & 0x0000FF00) |
                 (blue         & 0x000000FF);
        }
        pixels = (char*) pixels + info->stride;
    }
}
static int rgb_clamp(int value) {
    if (value > 255) {
        return 255;
    }
    if (value < 0) {
        return 0;
    }
    return value;
}
How are the RGB values getting extracted, and what does rgb_clamp do? Why are we setting the new pixel back, and how does pixels = (char*)pixels + info->stride; work?
I am not a C/C++ guy and don't have much knowledge of image processing.
Thanks
At first let's talk about one pixel. As far as I can see, it is a composition of at least three channels: r, g and b, which are all stored in one uint32_t value in the format 0x00RRGGBB (32 bit / 4 channels = 8 bit per channel, and thus a value range of 0..255). So, to get the separate r, g and b values, you need to mask them out, which is done in the three lines below //extract the RGB values.
Take the red component as an example. With the mask 0x00FF0000 and the & operator, you set every bit to 0 except the bits of the red channel. But if you only mask it out, 0x00RRGGBB & 0x00FF0000 = 0x00RR0000, and you are left with a very big number. To get a value between 0 and 255 you also have to shift the bits to the right, which is what the >> operator does. After applying the mask you have 0x00RR0000, and shifting it 16 bits to the right (>> 16) gives you 0x000000RR, a number between 0 and 255. The same happens with the green channel, but with an 8-bit right shift; the blue value is already in the "right" bit position, so there is no need to shift it.
Second question: what rgb_clamp does is easy to explain. It ensures that your r, g or b value, after being multiplied by the brightness factor, never leaves the range 0..255.
After the multiplication with the brightness factor, the new values are written back into memory. This happens in the reverse order of the extraction described above: this time the values are shifted to the left, and the mask removes any bits we don't want.
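For illustration, the same extract / scale / clamp / repack steps can be written as a small self-contained Java method (a sketch only; the question's code is C, but the masks and shifts behave identically):

// Illustration only: the same extract / scale / clamp / repack steps in Java.
static int adjustBrightness(int pixel, float brightnessValue) {
    int red   = (pixel & 0x00FF0000) >> 16;   // isolate bits 23..16 and move them down
    int green = (pixel & 0x0000FF00) >> 8;    // isolate bits 15..8
    int blue  =  pixel & 0x000000FF;          // bits 7..0 are already in place

    red   = clamp((int) (red   * brightnessValue));
    green = clamp((int) (green * brightnessValue));
    blue  = clamp((int) (blue  * brightnessValue));

    // shift each channel back to its position and OR them together
    return ((red << 16) & 0x00FF0000) | ((green << 8) & 0x0000FF00) | (blue & 0x000000FF);
}

static int clamp(int value) {
    return Math.max(0, Math.min(255, value));
}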
After one line of your image has been processed, info->stride is added to the pointer. For optimization purposes the rows are usually padded so that each row starts on an aligned boundary, which means a single row can occupy more bytes than the image width alone; adding the stride rather than the width skips over that padding and lands on the start of the next row.
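In a Java rendition without pointer arithmetic, the same idea shows up as plain index math (again only a sketch, assuming the stride is given in bytes and each pixel is a 32-bit int, as in the C code above):

// Sketch: walking a padded pixel buffer row by row using the stride.
static void walkRows(int[] pixels, int width, int height, int strideBytes) {
    int strideInts = strideBytes / 4;      // 4 bytes per 32-bit pixel
    for (int y = 0; y < height; y++) {
        int rowStart = y * strideInts;     // same effect as "pixels = (char*)pixels + info->stride"
        for (int x = 0; x < width; x++) {
            int pixel = pixels[rowStart + x];
            // ...extract, adjust and repack the pixel here...
            pixels[rowStart + x] = pixel;
        }
        // anything between rowStart + width and rowStart + strideInts is row padding and is skipped
    }
}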
First and foremost I suggest you read the C book here: http://publications.gbdirect.co.uk/c_book/
Next I'll go through your questions.
How are the RGB values extracted
line is set to point to the pixels parameter:
line = (uint32_t*)pixels;
That is, pixels is treated as an array of 32-bit unsigned integers.
Then for the height and width of the bitmap the RGB values are extracted using a combination of bitwise ANDing (&) and bit shifting right (>>).
Let's see how you get red:
red = (int) ((line[xx] & 0x00FF0000) >> 16);
Here we take the current pixel from the line and AND it with the mask 0x00FF0000; this keeps only bits 23-16. Using the RGB code #123456 as an example, we are left with 0x00120000 in the red variable. But the value is still sitting in bits 23-16, so we shift right by 16 bits to bring it down to 0x00000012.
We do this for the green and blue values, adjusting the AND mask and number of bits to shift right.
More information on binary arithmetic can be found here: http://publications.gbdirect.co.uk/c_book/chapter2/expressions_and_arithmetic.html
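As a quick illustration of that worked example (Java syntax, but the operators behave exactly the same as in C):

int pixel = 0x00123456;                    // the #123456 example from above
int red   = (pixel & 0x00FF0000) >> 16;    // 0x00120000 >> 16  -> 0x12 (18)
int green = (pixel & 0x0000FF00) >> 8;     // 0x00003400 >> 8   -> 0x34 (52)
int blue  =  pixel & 0x000000FF;           //                      0x56 (86)
System.out.printf("r=%d g=%d b=%d%n", red, green, blue);  // r=18 g=52 b=86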
What does rgb_clamp do
This function simply clamps a red, green or blue value so that it is never below 0 or above 255.
If the parameter to rgb_clamp is -20, then it will return 0, which will be used to set the RGB value. If the parameter to rgb_clamp is 270, it will return 255.
The RGB value for each colour must not exceed 255 or fall below 0; here 255 is the brightest and 0 is the darkest value.
Why are we setting pixel back
It appears we are changing the brightness of the pixel, and setting the value back ready to be displayed.
How does pixels = (char*)pixels + info->stride; work?
Without knowing the structure of the info variable of type AndroidBitmapInfo, I would guess that info->stride is the width of one row of the bitmap in bytes (including any padding), so on the next loop iteration line points at the next row.
Hope that helps
I am kind of stuck with this problem, and I know there are many questions about it on Stack Overflow, but in my case nothing gives the expected result.
The Context:
I am using OpenCV on Android along with Tesseract so I can read the MRZ area of a passport. When the camera is started I pass the input frame to an AsyncTask, the frame is processed and the MRZ area is extracted successfully. I then pass the extracted MRZ area to a function prepareForOCR(inputImage) that takes the MRZ area as a gray Mat and outputs a bitmap with the thresholded image, which I pass to Tesseract.
The problem:
The problem is in thresholding the image. I use adaptive thresholding with blockSize = 13 and C = 15, but the result is not always the same; it depends on the lighting and on the general conditions under which the frame was taken.
What I have tried:
First I am resizing the image to a specific size (871, 108) so the input image is always the same and not dependent on which phone is used.
After resizing, I try different blockSize and C values:
// toOcr contains the extracted MRZ area
Bitmap toOCRBitmap = Bitmap.createBitmap(bitmap);
Mat inputFrame = new Mat();
Mat toOcr = new Mat();
Utils.bitmapToMat(toOCRBitmap, inputFrame);
Imgproc.cvtColor(inputFrame, inputFrame, Imgproc.COLOR_BGR2GRAY);

TesseractResult lastResult = null;
for (int B = 11; B < 70; B++) {
    for (int C = 11; C < 70; C++) {
        if (IsPrime(B) && IsPrime(C)) {
            Imgproc.adaptiveThreshold(inputFrame, toOcr, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY, B, C);
            Bitmap toOcrBitmap = OpenCVHelper.getBitmap(toOcr);
            TesseractResult result = TesseractInstance.extractFrame(toOcrBitmap, "ocrba");
            if (result.getMeanConfidence() > 70) {
                if (MrzParser.tryParse(result.getText())) {
                    Log.d("Main2Activity", "Best result with " + B + " : " + C);
                    return result;
                }
            }
        }
    }
}
Using the code above, the thresholded result is a black-on-white image which gives a confidence greater than 70. I can't really post the whole image for privacy reasons, but here's a clipped one and a dummy passport one.
Using the MrzParser.tryParse function, which adds checks for each character's position and validity within the MRZ, I am able to correct some occurrences (like a name containing an 8 instead of a B) and get a good result, but it takes a lot of time. That is normal, because I am thresholding almost 255 images in the loop, plus the Tesseract call for each.
I have already tried keeping a list of the C and B values that occur most often, but the results differ from image to image.
The question:
Is there a way to choose C and blockSize values that always give the same result, maybe by adding more OpenCV calls to normalize the input image (increasing contrast and so on)? I have searched the web for two weeks now and can't find a viable solution; this is the only approach that gives accurate results.
You can use a clustering algorithm to cluster the pixels based on color. The characters are dark and there is a good contrast in the MRZ region, so a clustering method will most probably give you a good segmentation if you apply it to the MRZ region.
Here I demonstrate it with MRZ regions obtained from sample images that can be found on the internet.
I use color images, apply some smoothing, convert to the Lab color space, then cluster the a, b channel data using kmeans (k=2). The code is in Python, but you can easily adapt it to Java. Due to the randomized nature of the kmeans algorithm, the segmented characters will have label 0 or 1. You can easily sort this out by inspecting the cluster centers: the cluster center corresponding to the characters should have a dark value in the color space you are using.
I just used the Lab color space here. You can use RGB, HSV or even GRAY and see which one is better for you.
After segmenting like this, I think you can even find good values for B and C of your adaptive-threshold using the properties of the stroke width of the characters (if you think the adaptive-threshold gives a better quality output).
import cv2
import numpy as np
im = cv2.imread('mrz1.png')
# convert to Lab
lab = cv2.cvtColor(cv2.GaussianBlur(im, (3, 3), 1), cv2.COLOR_BGR2Lab)
im32f = np.array(lab[:, :, 1:3], dtype=np.float32)  # cluster the a and b channels of the Lab image
k = 2 # 2 clusters
term_crit = (cv2.TERM_CRITERIA_EPS, 30, 0.1)
ret, labels, centers = cv2.kmeans(im32f.reshape([im.shape[0]*im.shape[1], -1]),
k, None, term_crit, 10, 0)
# segmented image
labels = labels.reshape([im.shape[0], im.shape[1]]) * 255
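For the asker's Java/Android setup, a rough equivalent through the OpenCV Java bindings might look like the sketch below. It is an untested translation of the Python above: mrzColor stands for the extracted MRZ region as an 8-bit BGR Mat, and the parameter values simply mirror the Python.

// Rough Java translation of the Python above (untested sketch).
// Uses org.opencv.core.* and org.opencv.imgproc.Imgproc.
Mat blurred = new Mat();
Imgproc.GaussianBlur(mrzColor, blurred, new Size(3, 3), 1);
Mat lab = new Mat();
Imgproc.cvtColor(blurred, lab, Imgproc.COLOR_BGR2Lab);

List<Mat> labPlanes = new ArrayList<>();
Core.split(lab, labPlanes);
Mat ab = new Mat();
Core.merge(labPlanes.subList(1, 3), ab);                 // keep only the a and b channels

Mat samples = ab.reshape(2, ab.rows() * ab.cols());      // one row per pixel, still 2 channels
samples = samples.reshape(1);                            // ...now 2 single-channel columns
samples.convertTo(samples, CvType.CV_32F);

Mat labels = new Mat();
Mat centers = new Mat();
TermCriteria crit = new TermCriteria(TermCriteria.EPS, 30, 0.1);
Core.kmeans(samples, 2, labels, crit, 10, Core.KMEANS_RANDOM_CENTERS, centers);

// labels holds 0/1 per pixel; reshape back to the image size and scale to a 0/255 mask.
// Inspect "centers" to decide which label is the (dark) character cluster.
Mat segmented = labels.reshape(1, ab.rows());
segmented.convertTo(segmented, CvType.CV_8U, 255);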
I am using the Android Philips Hue SDK and I am currently having an issue converting a light bulb's XY value to RGB.
I have looked at code provided in a forum on the Philips Hue website; it was posted by someone from Hue Support.
I have the following function using this code from the forum:
public static int[] convertXYToRGB(float[] xy, String lightModel)
{
    int color = PHUtilities.colorFromXY(xy, lightModel);
    int r = Color.red(color);
    int g = Color.green(color);
    int b = Color.blue(color);
    return new int[] { r, g, b };
}
And I am calling it like:
int hue = lightState.getHue();
float[] xy = PHUtilities.calculateXY(hue, item.light.getModelNumber());
int[] rgb = Utilities.convertXYToRGB(xy, item.light.getModelNumber());
Looking at the RGB value I get back it seems to be the wrong colour. For example, using the official app, I set one of my light bulbs to red. When I run my app, the RGB value that comes back is a pale yellow.
Has anyone else experienced this or know how to resolve this issue?
I had a similar issue while programming a desktop application using the same Java SDK (login required). Interestingly, a plain red turned into a faded yellow, exactly as you describe. A possible solution is to use the xy values directly instead of converting from hue values; that finally solved the problem for me. You can get the xy values from the PHLightState object using the methods .getX() and .getY(). After that, use colorFromXY as in your code to get the RGB values (as an Android color int).
PHLightState s = light.getLastKnownLightState();
float xy[] = new float[] {s.getX(), s.getY()};
int combRGB = PHUtilities.colorFromXY(xy, light.getModelNumber());
On Android, convert combRGB as you already do. Make sure to import android.graphics.Color. If you are testing on non-Android systems you can use the following code instead:
Color theColor = new Color(combRGB);
int[] sepRGB = {theColor.getRed(), theColor.getGreen(), theColor.getBlue()};
Note: the lights can only address a certain color gamut depending on the type. This is explained in detail here. The "normal" bulbs with color gamut B have quite a few limitations; for example, most greens turn into yellows and the blues contain a certain amount of red.
Example values: the following overall conversions were tested on my live system with LCT001 bulbs. I used PHUtilities.calculateXYFromRGB() to convert the input, then set the xy values of the new light state with .setX() and .setY() and finally sent it to the bridge. The values are then extracted from the light cache in the application as soon as it gets the next update (input RGB -> RGB read back):
255 0 0 -> 254 0 0
0 255 0 -> 237 254 0
0 0 255 -> 90 0 254
200 0 200 -> 254 0 210
255 153 0 -> 254 106 0
255 153 153 -> 254 99 125
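For reference, a rough sketch of that round trip follows. It is hedged: the exact signature of calculateXYFromRGB() is an assumption here (separate red, green and blue arguments plus the model number), and sending the state to the bridge is only indicated by a comment, so check the SDK before relying on it.

// Hedged sketch of the round trip; the calculateXYFromRGB signature is an assumption.
float[] xy = PHUtilities.calculateXYFromRGB(200, 0, 200, light.getModelNumber());

PHLightState newState = new PHLightState();
newState.setX(xy[0]);
newState.setY(xy[1]);
// ...send newState to the bridge here...

// Later, once the light cache has been updated, read the values back:
PHLightState s = light.getLastKnownLightState();
int combRGB = PHUtilities.colorFromXY(new float[] { s.getX(), s.getY() }, light.getModelNumber());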
I am working on an app that is expected to remove image backgrounds using OpenCV. At first I tried GrabCut, but it was too slow and the results were not always accurate. Then I tried thresholding; although the results are not yet close to GrabCut's, it is very fast and looks more promising. My code first looks at the image hue and analyses which portion of it appears most; that portion is taken as the background. The issue is that at times it takes the foreground as the background. Below is my code:
private Bitmap backGrndErase()
{
    Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.skirt);
    Log.d(TAG, "bitmap: " + bitmap.getWidth() + "x" + bitmap.getHeight());
    bitmap = ResizeImage.getResizedBitmap(bitmap, calculatePercentage(40, bitmap.getWidth()), calculatePercentage(40, bitmap.getHeight()));

    Mat frame = new Mat();
    Utils.bitmapToMat(bitmap, frame);

    Mat hsvImg = new Mat();
    List<Mat> hsvPlanes = new ArrayList<>();
    Mat thresholdImg = new Mat();

    // int thresh_type = Imgproc.THRESH_BINARY_INV;
    // if (this.inverse.isSelected())
    int thresh_type = Imgproc.THRESH_BINARY;

    // threshold the image with the average hue value
    hsvImg.create(frame.size(), CvType.CV_8U);
    Imgproc.cvtColor(frame, hsvImg, Imgproc.COLOR_BGR2HSV);
    Core.split(hsvImg, hsvPlanes);

    // get the average hue value of the image
    double threshValue = this.getHistAverage(hsvImg, hsvPlanes.get(0));

    Imgproc.threshold(hsvPlanes.get(0), thresholdImg, threshValue, mThresholdValue, thresh_type);
    // Imgproc.adaptiveThreshold(hsvPlanes.get(0), thresholdImg, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 11, 2);

    Imgproc.blur(thresholdImg, thresholdImg, new Size(5, 5));

    // dilate to fill gaps, erode to smooth edges
    Imgproc.dilate(thresholdImg, thresholdImg, new Mat(), new Point(-1, -1), 1);
    Imgproc.erode(thresholdImg, thresholdImg, new Mat(), new Point(-1, -1), 3);

    Imgproc.threshold(thresholdImg, thresholdImg, threshValue, mThresholdValue, Imgproc.THRESH_BINARY);
    // Imgproc.adaptiveThreshold(thresholdImg, thresholdImg, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 11, 2);

    // create the new image
    Mat foreground = new Mat(frame.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
    frame.copyTo(foreground, thresholdImg);

    Utils.matToBitmap(foreground, bitmap);
    // return foreground;
    alreadyRun = true;
    return bitmap;
}
The method responsible for the hue:
private double getHistAverage(Mat hsvImg, Mat hueValues)
{
    // init
    double average = 0.0;
    Mat hist_hue = new Mat();
    // 0-180: range of hue values
    MatOfInt histSize = new MatOfInt(180);
    List<Mat> hue = new ArrayList<>();
    hue.add(hueValues);

    // compute the histogram
    Imgproc.calcHist(hue, new MatOfInt(0), new Mat(), hist_hue, histSize, new MatOfFloat(0, 179));

    // get the average hue value of the image:
    // (sum(bin(h) * h)) / (image-height * image-width)
    // equivalent to getting the hue of each pixel, adding them up,
    // and dividing by the image size (height * width)
    for (int h = 0; h < 180; h++)
    {
        // for each bin, get its value and multiply it by the corresponding hue
        average += (hist_hue.get(h, 0)[0] * h);
    }

    // return the average hue of the image
    average = average / hsvImg.size().height / hsvImg.size().width;
    return average;
}
A sample of the inputs and outputs (Input Images 1-3 with their outputs) was attached here.
Indeed, as others have said you are unlikely to get good results just with a threshold on hue. You can use something similar to GrabCut, but faster.
Under the hood, GrabCut calculates foreground and background histograms, then calculates the probability of each pixel being FG/BG based on these histograms, and then optimizes the resulting probability map using graph cut to obtain a segmentation.
The last step is the most expensive, and it may be skipped depending on the application. Instead, you can apply a threshold to the probability map to obtain a segmentation. It may (and will) be worse than GrabCut, but it will be better than your current approach.
There are some points to consider for this approach. The choice of histogram model is very important here. You can either use two channels of some space like YUV or HSV, the three channels of RGB, or two channels of normalized RGB. You also have to select an appropriate bin size for those histograms: bins that are too small lead to "overtraining", while bins that are too large reduce precision. The trade-offs between those are a topic for a separate discussion; in brief, I would advise using RGB with 64 bins per channel for a start and then seeing what changes work better for your data.
Also, you can get better results with coarse binning if you use interpolation to get values between bins. In the past I have used trilinear interpolation and it was noticeably better than no interpolation at all.
But remember that there are no guarantees that your segmentation will be correct without prior knowledge of the object shape, whether with GrabCut, thresholding, or this approach. A sketch of the idea is shown below.
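A rough sketch of the thresholded-probability idea through the OpenCV Java bindings follows. It is a simplification: it builds only a foreground histogram from a sample patch and uses back-projection as the probability map, whereas the approach described above would also build a background histogram and combine the two. The patch rectangle, bin count and threshold are placeholders to tune.

// Sketch only. Uses org.opencv.core.* and org.opencv.imgproc.Imgproc.
// "frame" is an 8-bit BGR Mat; "fgSample" is a Rect known to contain the object.
Mat fgPatch = new Mat(frame, fgSample);

List<Mat> fgImages = new ArrayList<>();
fgImages.add(fgPatch);
Mat fgHist = new Mat();
MatOfInt channels = new MatOfInt(0, 1, 2);                   // all three colour channels
MatOfInt histSize = new MatOfInt(64, 64, 64);                // 64 bins per channel, as suggested above
MatOfFloat ranges = new MatOfFloat(0, 256, 0, 256, 0, 256);
Imgproc.calcHist(fgImages, channels, new Mat(), fgHist, histSize, ranges);
Core.normalize(fgHist, fgHist, 0, 255, Core.NORM_MINMAX);

List<Mat> images = new ArrayList<>();
images.add(frame);
Mat probability = new Mat();                                 // rough "foreground probability" per pixel
Imgproc.calcBackProject(images, channels, fgHist, probability, ranges, 1.0);

Mat mask = new Mat();
Imgproc.threshold(probability, mask, 50, 255, Imgproc.THRESH_BINARY);  // 50 is a placeholder threshold
mask.convertTo(mask, CvType.CV_8U);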
I would try GrabCut again; it is one of the best segmentation methods available. This is the result I get:
cv::Mat bgModel, fgModel;               // the models (internally used)
cv::grabCut(image,                      // input image
            object_mask,                // segmentation result
            rectang,                    // rectangle containing foreground
            bgModel, fgModel,           // models
            5,                          // number of iterations
            cv::GC_INIT_WITH_RECT);     // use rectangle
// Get the pixels marked as likely foreground
cv::compare(object_mask, cv::GC_PR_FGD, object_mask, cv::CMP_EQ);
cv::threshold(object_mask, object_mask, 0, 255, CV_THRESH_BINARY);  // ensure the mask is binary
The only problem with GrabCut is that you have to supply, as input, a rectangle containing the object you want to extract. Apart from that it works pretty well.
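Since the question is about Android, roughly the same call through the OpenCV Java bindings might look like the sketch below. The rectangle and iteration count are placeholders, and grabCut needs an 8-bit, 3-channel Mat, so a frame coming from bitmapToMat (which is RGBA) must be converted first.

// Sketch: GrabCut via the OpenCV Java bindings; "frame" stands for the 3-channel input Mat.
Mat mask = new Mat();
Mat bgModel = new Mat();
Mat fgModel = new Mat();
Rect rect = new Rect(10, 10, frame.cols() - 20, frame.rows() - 20);  // placeholder object rectangle

Imgproc.grabCut(frame, mask, rect, bgModel, fgModel, 5, Imgproc.GC_INIT_WITH_RECT);

// keep the pixels marked as "probably foreground"
Mat foregroundMask = new Mat();
Core.compare(mask, new Scalar(Imgproc.GC_PR_FGD), foregroundMask, Core.CMP_EQ);

Mat foreground = new Mat(frame.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
frame.copyTo(foreground, foregroundMask);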
Your method of finding the average hue is WRONG! As you most probably know, hue is expressed as an angle and takes values in the [0,360] range. Therefore, a pixel with hue 360 has essentially the same colour as a pixel with hue 0 (both are pure red). In the same way, a pixel with hue 350 is actually closer to a pixel with hue 10 than to a pixel with hue, say, 300.
As for OpenCV, the cvtColor function actually divides the calculated hue value by 2 to fit it into an 8-bit integer. Thus, in OpenCV, hue values wrap after 180. Now, consider two red(ish) pixels with hues 10 and 170. If we take their average, we get 90, the hue of pure cyan and the exact opposite of red, which is not the value we want.
Therefore, to correctly find the average hue, you first need to find the average pixel value in RGB colour space and then calculate the hue from that RGB value. You can create a 1x1 matrix containing the average RGB pixel and convert it to HSV/HSL.
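A minimal sketch of that in Java, assuming frame is the BGR Mat from the question:

// Average the image in BGR space first, then take the hue of that single average colour.
Scalar meanBgr = Core.mean(frame);                         // average B, G, R over the whole image
Mat onePixel = new Mat(1, 1, CvType.CV_8UC3, meanBgr);     // 1x1 matrix holding the average colour
Mat onePixelHsv = new Mat();
Imgproc.cvtColor(onePixel, onePixelHsv, Imgproc.COLOR_BGR2HSV);
double averageHue = onePixelHsv.get(0, 0)[0];              // 0..179 in OpenCV's 8-bit hue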
Following the same reasoning, applying a threshold to the hue image doesn't work flawlessly either, because it does not take the wrapping of hue values into account.
If I understand correctly, you want to find the pixels whose colour is similar to the background. Assuming we know the colour of the background, I would do this segmentation in RGB space. Introduce a tolerance variable, use the background pixel value as the centre and the tolerance as the radius, and you have defined a sphere in RGB colour space. The rest is inspecting each pixel value: if it falls inside this sphere, classify it as background; otherwise, regard it as a foreground pixel.
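That test can be expressed with whole-image OpenCV operations; below is a sketch in Java where the background colour and the tolerance are assumptions you would supply.

// Classify pixels by their squared distance from a known background colour in BGR space.
// "frame" is an 8-bit, 3-channel BGR Mat; the background colour and tolerance are placeholders.
double bgBlue = 200, bgGreen = 200, bgRed = 200;
double tolerance = 60.0;                                         // radius of the sphere, to be tuned

Mat diff = new Mat();
Core.absdiff(frame, new Scalar(bgBlue, bgGreen, bgRed), diff);   // per-channel |pixel - background|
diff.convertTo(diff, CvType.CV_32FC3);
Core.multiply(diff, diff, diff);                                 // squared differences per channel

List<Mat> channels = new ArrayList<>();
Core.split(diff, channels);
Mat distSq = new Mat();
Core.add(channels.get(0), channels.get(1), distSq);
Core.add(distSq, channels.get(2), distSq);                       // dB^2 + dG^2 + dR^2

// pixels farther than the tolerance from the background colour are foreground
Mat foregroundMask = new Mat();
Imgproc.threshold(distSq, foregroundMask, tolerance * tolerance, 255, Imgproc.THRESH_BINARY);
foregroundMask.convertTo(foregroundMask, CvType.CV_8U);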
I have been following this guide on how to use RenderScript on Android:
http://www.jayway.com/2014/02/11/renderscript-on-android-basics/
My code is this (I have a wrapper class for the script):
public class PixelCalcScriptWrapper {
    private Allocation inAllocation;
    private Allocation outAllocation;
    RenderScript rs;
    ScriptC_pixelsCalc script;

    public PixelCalcScriptWrapper(Context context) {
        rs = RenderScript.create(context);
        script = new ScriptC_pixelsCalc(rs, context.getResources(), R.raw.pixelscalc);
    }

    public void setInAllocation(Bitmap bmp) {
        inAllocation = Allocation.createFromBitmap(rs, bmp);
    }

    public void setOutAllocation(Bitmap bmp) {
        outAllocation = Allocation.createFromBitmap(rs, bmp);
    }

    public void forEach_root() {
        script.forEach_root(inAllocation, outAllocation);
    }
}
This method calls the script:
public Bitmap processBmp(Bitmap bmp, Bitmap bmpCopy) {
    pixelCalcScriptWrapper.setInAllocation(bmp);
    pixelCalcScriptWrapper.setOutAllocation(bmpCopy);
    pixelCalcScriptWrapper.forEach_root();
    return bmpCopy;
}
and here is my script:
#pragma version(1)
#pragma rs java_package_name(test.foo)

void root(const uchar4 *in, uchar4 *out, uint32_t x, uint32_t y) {
    float3 pixel = convert_float4(in[0]).rgb;

    if (pixel.z < 128) {
        pixel.z = 0;
    } else {
        pixel.z = 255;
    }
    if (pixel.y < 128) {
        pixel.y = 0;
    } else {
        pixel.y = 255;
    }
    if (pixel.x < 128) {
        pixel.x = 0;
    } else {
        pixel.x = 255;
    }

    out->xyz = convert_uchar3(pixel);
}
Now, where can I find some documentation about this?
For example, I have these questions:
1) What does convert_float4(in[0]) do?
2) What does the .rgb return here: convert_float4(in[0]).rgb?
3) What is float3?
4) I don't know where to start with this line: out->xyz = convert_uchar3(pixel);
5) Am I right in assuming that the in and out parameters are the Allocations passed in?
What are x and y?
http://developer.android.com/guide/topics/renderscript/reference/rs_convert.html#android_rs:convert
What does this convert_float4(in[0]) do?
convert_float4 will convert from a uchar4 to a float4;
.rgb turns it into a float3 of the first 3 elements.
What does the rgb return?
RenderScript vector types have .r .g .b .a or
.x .y .z .w representing the first, second, third and fourth element respectively. You can use any combination (e.g. .xy or .xwy).
What is float3?
float3 is a "vector type" sort of like a float but 3 of them.
There are float2, float3 and float4 vector types of float.
(there are uchar4, int4 etc.)
http://developer.android.com/guide/topics/renderscript/reference/overview.html might be helpful
I hope this helps.
1) In the kernel, the in pointer is a 4-element unsigned char; that is, it represents a pixel color with R, G, B and A values in the 0-255 range. So convert_float4 simply casts each of the four uchars to a float. In this particular code, it probably doesn't make much sense to work with floats, since you're doing a simple threshold and could just as well have worked with the uchar data directly. Floats are better suited to other types of image-processing algorithms where you need the extra precision (for example, blurring an image).
2) The .rgb suffix is a shorthand to return only the first three values of the float4, i.e. the R, G, and B values. If you had used only .r it would give you the first value as a regular float, if you had used .g it would give you the second value as a float, etc... These three values are then assigned to that float3 variable, which now represents the pixel with only three color channels (that is, no A alpha channel).
3) See #2.
4) Now convert_uchar3 is again another cast that converts the float3 pixel variable back to a uchar3 variable. You are assigning the three values to each of the x, y, and z elements in that order. This is probably a good time to mention that X, Y and Z are completely interchangeable with R, G and B. That statement could just as well have used out->rgb, and it would actually have been more readable that way. Note that out is a uchar4, and by doing this you are assigning only the first three "rgb"/"xyz" elements in that pointer; the fourth element is left undefined here.
5) Yes, in is the input pixel, out is the output pixel. Then x and y are the x and y coordinates of the pixel in the overall image. This kernel function is going to be called once for every pixel in the image/allocation you're working with, so it's usually good to know which coordinate you're at when processing an image. In this particular example, since it only thresholds all pixels in the same way, the coordinates are irrelevant.
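To tie point 5 back to the Java side: the kernel is launched over every cell of the input Allocation, and the processed pixels then have to be copied out of the output Allocation. The sketch below assumes the same generated ScriptC_pixelsCalc class and Bitmaps as in the question; the final copyTo step is an addition the question's wrapper does not show.

// Hedged sketch of the usual call sequence around the kernel above.
RenderScript rs = RenderScript.create(context);
ScriptC_pixelsCalc script = new ScriptC_pixelsCalc(rs, context.getResources(), R.raw.pixelscalc);

Allocation inAlloc  = Allocation.createFromBitmap(rs, bmp);
Allocation outAlloc = Allocation.createFromBitmap(rs, bmpCopy);

script.forEach_root(inAlloc, outAlloc);   // root() is invoked once for every (x, y) cell
outAlloc.copyTo(bmpCopy);                 // copy the processed pixels back into the output Bitmap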
Good documentation on RenderScript is very hard to find. I would greatly recommend you take a look at these two videos though, as they will give you a much better sense of how RenderScript works:
AnDevCon: A Deep Dive into RenderScript
Google I/O 2013 - High Performance Applications with RenderScript
Keep in mind that both videos are a couple years old, so some minor details may have changed on the recent APIs, but overall, those are probably the best sources of information for RS.
I am writing an Android application that must paint determined parts of a loaded bitmap image according to received events.
I need to paint (or change the current color) of a single part of a bitmap image, without changing the rest of the image.
Let's say I have a car, which is divided by many parts: door, windows, wheels, etc.
Each time an event (received from the network) arrives, I need to change the color of that particular part with the color specified by the event data.
What would be the best technique to achieve that?
I first thought of flood fill, as suggested in many threads on SO, but given that the messages arrive quite fast (several per second) I fear it would drag performance down, as it seems to be a very CPU-intensive algorithm.
I also thought about having multiple versions of the same image, each part colored with a different color, and showing the right one at the right time. But the car has at least 10 different parts and each one could be painted with 4-6 colors, so I would end up with dozens of images, which would be impractical to handle, not to mention the waste of memory.
So, is there any other approach?
The fastest way to do it is with a shader. You'll need to use OpenGL ES 2 for that (some Androids only support ES 1). You'll need a temporary bitmap the same size as the image you want to change. Set it as the target. In the shader, retrieve a pixel from the sampler which is bound to the image you want to change. If it's within a small tolerance of the colour you want to change, set gl_FragColor to the new colour, otherwise just set gl_FragColor to the colour you retrieved from the sampler. You'll need to pass the desired colour and the new colour into the shader as vec4s with al_set_shader_float_vector. The fastest way to do this is to keep 2 bitmaps and swap between them as the "main one" that you're using each time a colour changes.
If you can't use a shader, then you'll have to lock the bitmap and replace the colour. Use al_lock_bitmap to lock it, then you can use al_get_pixel and al_put_pixel to change colours. Then al_unlock_bitmap when you're done. You can also avoid using al_get_pixel/al_put_pixel and access the memory manually which will be faster. If you lock the bitmap with the format ALLEGRO_PIXEL_FORMAT_ABGR_8888_LE then the memory is laid out like so:
int w = al_get_bitmap_width(bitmap);
int h = al_get_bitmap_height(bitmap);

for (int y = 0; y < h; y++) {
    /* cast the void* data pointer before doing byte arithmetic on it */
    unsigned char *p = (unsigned char *) locked_region->data + locked_region->pitch * y;
    for (int x = 0; x < w; x++) {
        unsigned char r = p[0];
        unsigned char g = p[1];
        unsigned char b = p[2];
        unsigned char a = p[3];
        /* change r, g, b, a here if they match */
        p[0] = r;
        p[1] = g;
        p[2] = b;
        p[3] = a;
        p += 4;
    }
}
It's recommended that you lock the image in the format it was created in. That means picking an easy one like the one I mentioned, or else the inner part of the loop gets more complicated. The ABGR_8888 part of the pixel format describes the layout of the data: ABGR gives the order of the components. If you were to read a pixel into a single storage unit (an int in this case, but it works the same with a short) then the bit pattern would be AAAAAAAABBBBBBBBGGGGGGGGRRRRRRRR. However, when you're reading a byte at a time, most machines are little-endian, which means the small end comes first. That's why p[0] is red in my sample code. The 8888 part tells how many bits per component.