I want to plot the RGB intensity graph of an image, i.e. curves for the three colour channels, like sinusoidal waves. Can anyone suggest how to do this?
There are (in general) 256 levels (8-bits) for each color component. If the image has an alpha channel then the overall image bits per pixel will be 32, else for an RGB only image it will be 24.
I will describe this algorithmically to get the image histogram; the drawing code will be up to you.
// Arrays for the histogram data
int histoR[256]; // Array that will hold the counts of how many of each red VALUE the image had
int histoG[256]; // Array that will hold the counts of how many of each green VALUE the image had
int histoB[256]; // Array that will hold the counts of how many of each blue VALUE the image had
int histoA[256]; // Array that will hold the counts of how many of each alpha VALUE the image had
// Zeroize all histogram arrays
for(num = 0 through 255){
histoR[num] = 0;
histoG[num] = 0;
histoB[num] = 0;
histoA[num] = 0;
}
// Move through all image pixels counting up each time a pixel color value is used
for(x = 0 through image width){
for(y = 0 through image height){
histoR[image.pixel(x, y).red] += 1;
histoG[image.pixel(x, y).green] += 1;
histoB[image.pixel(x, y).blue] += 1;
histoA[image.pixel(x, y).alpha] += 1;
}
}
You now have the histogram data; it is up to you to plot it. PLEASE REMEMBER THE ABOVE IS ONLY AN ALGORITHMIC DESCRIPTION, NOT ACTUAL CODE
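The pseudocode above maps almost directly onto real code. Here is a minimal sketch in plain Java, assuming the image is available as an array of packed 32-bit 0xAARRGGBB pixels (e.g. what Android's Bitmap.getPixels gives you — the array layout and the method name are my assumptions, not part of the answer above):

```java
public class Histogram {
    // Compute per-channel histograms from packed 0xAARRGGBB pixels.
    // Returns {histoR, histoG, histoB, histoA}, each of length 256.
    public static int[][] rgbaHistograms(int[] pixels) {
        int[] histoR = new int[256], histoG = new int[256];
        int[] histoB = new int[256], histoA = new int[256];
        for (int p : pixels) {
            histoA[(p >>> 24) & 0xFF] += 1; // alpha
            histoR[(p >>> 16) & 0xFF] += 1; // red
            histoG[(p >>> 8) & 0xFF] += 1;  // green
            histoB[p & 0xFF] += 1;          // blue
        }
        return new int[][] { histoR, histoG, histoB, histoA };
    }
}
```

The shifts replace the `image.pixel(x, y).red` accessors of the pseudocode; with a packed-int API there is no need for the nested x/y loops, since a flat pass over the pixel array visits every pixel once.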
Related
I am currently developing an app where I need to apply dynamic thresholding to a grey scale image.
I am new to both thresholding and Flutter, so my current understanding is:
Get all the pixel values from the grey scale image which is going to be within the range (0-255)
Set a threshold, e.g. 110, within that range
Convert any value less than the threshold to 0 (BLACK)
Convert any value more than the threshold to 255 (WHITE)
Display the modified array of pixel values as an Image widget
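Those steps boil down to a single per-pixel comparison. As a language-neutral sketch in plain Java over an array of grey values (the method name and the int[] representation are my assumptions; in Flutter you would do the same test per pixel in Dart):

```java
public class Threshold {
    // Map each grey value (0-255) to 0 (black) or 255 (white).
    public static int[] binarize(int[] greyPixels, int threshold) {
        int[] out = new int[greyPixels.length];
        for (int i = 0; i < greyPixels.length; i++) {
            // Values below the threshold become black, the rest white.
            out[i] = greyPixels[i] < threshold ? 0 : 255;
        }
        return out;
    }
}
```

Wiring a slider to this is then just re-running the loop with the slider's current value as `threshold` and rebuilding the image widget.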
Now I have come across several packages:
Image
Bitmap
Extended Image
I am not sure exactly what approach I should take with Flutter to apply thresholding to a grey scale image where I can change the threshold value using a slider widget.
GREY SCALE IMAGE ----> Threshold: 110 ----> B/W IMAGE
Please do let me know, if I need to clarify any part of my question or if it is too vague. I will try my best to further your understanding.
HIGHLY APPRECIATE ANY HELP! Cheers!
Future<Widget> binarizeImage() async
{
var pickedFile = await _picker.pickImage(
source: ImageSource.gallery);
// Load the image into Flutter and decode it
img.Image? image = img.decodeImage(File(pickedFile!.path).readAsBytesSync());
// Set the threshold value (0-255, the same range as the grey values)
const threshold = 110;
// Iterate over each pixel and apply the threshold to its luminance,
// not to the packed 32-bit pixel value
for (var x = 0; x < image!.width; x++) {
for (var y = 0; y < image.height; y++) {
final pixel = image.getPixel(x, y);
if (img.getLuminance(pixel) < threshold) {
image.setPixel(x, y, 0xFF000000); // Black
} else {
image.setPixel(x, y, 0xFFFFFFFF); // White
}
}
}
return Image.memory(
Uint8List.fromList(img.encodePng(image)),
);
}
I am using the OpenCV 3.2.0 library for Java programming on Android, and I am trying to do adaptive gamma correction of a live image captured by the JavaCameraView camera object, but I am experiencing major lag.
Currently the code is using a nested for loop to access each pixel value and apply the gamma correction to it. Is there a method by which I can apply the same function to each pixel of an image in a more efficient way?
Research paper I am following to do the adaptive gamma correction- https://jivp-eurasipjournals.springeropen.com/articles/10.1186/s13640-016-0138-1
My code for the same-
List<Mat> channels = new ArrayList<Mat>();
MatOfDouble mean=new MatOfDouble();
MatOfDouble stdDev=new MatOfDouble();
//convert image to HSV format, going to work with the V channel
Imgproc.cvtColor(mRgba,mRgba,Imgproc.COLOR_BGR2HSV);
//split into channels
Core.split(mRgba,channels);
//find mean and standard deviation of values channel and map to 0-1 scale
Core.meanStdDev(channels.get(2),mean,stdDev);
double stdDevVal=stdDev.get(0,0)[0]/255.0;
double meanVal=mean.get(0,0)[0]/255.0;
/*parameter initialization for adaptive gamma filtering
gamma, heaviside, c, k and newPixelIntensity
c and gamma are parameters that control shape of transformation curve (gamma transformation)
k and heaviside are parameters that depict the value of c, newPixelIntensity is the final
intensity value of each separate pixel
Relation between c, k and heaviside.
c=1/(1+Heaviside(0.5-standard deviation)*(k-1))
Where k is defined by
k=(oldPixelIntensity)^gamma + (1-(oldPixelIntensity^gamma))*mean^gamma
newPixelIntensity=c*(oldPixelIntensity)^gamma
*/
double gamma, heaviside,c,newPixelIntensity,k=0;
//based on the standard deviation of the pixel intensities, classify the image as low/high contrast
if(4*stdDevVal<=1.0/3.0)
{
//since it is low contrast, gamma=-log{base 2} (std. dev) as we need a large increase of contrast through gamma
gamma=-Math.log10(stdDevVal)/Math.log10(2);
//get heaviside value
heaviside=heaviside(meanVal);
for(int i=0;i<channels.get(2).rows();i++)
{
for(int j=0;j<channels.get(2).cols();j++)
{
//access the pixel value and map it down to range 0-1
double pixelIntensity=channels.get(2).get(i,j)[0]/255.0;
if(heaviside==1)
{
k=Math.pow(pixelIntensity,gamma)+(1-Math.pow(pixelIntensity,gamma))*Math.pow(meanVal,gamma);
}
//according to k and heaviside, calculate c
c=1/(1+heaviside*(k-1));
//find new pixel intensity, map it to 0-255 scale and put it back in the Matrix
newPixelIntensity=255*c*Math.pow(pixelIntensity,gamma);
channels.get(2).put(i,j,newPixelIntensity);
}
}
}
//high contrast image
else
{
//for high contrast images gamma= exp((1-(mean+std dev))/2) rest remains the same
gamma=Math.exp((1-(meanVal+stdDevVal))/2);
heaviside=heaviside(meanVal);
for(int i=0;i<channels.get(2).rows();i++)
{
for(int j=0;j<channels.get(2).cols();j++)
{
//access the pixel value and map it down to the range 0-1
double pixelIntensity=channels.get(2).get(i,j)[0]/255.0;
if(heaviside==1)
{
k=Math.pow(pixelIntensity,gamma)+(1-Math.pow(pixelIntensity,gamma))*Math.pow(meanVal,gamma);
}
c=1/(1+heaviside*(k-1));
//find new pixel intensity, map it to 0-255 scale and put it back in the Matrix
newPixelIntensity=255*c*Math.pow(pixelIntensity,gamma);
channels.get(2).put(i,j,newPixelIntensity);
}
}
}
//Merge the updated channels to form the image and convert image back to RGB form
Core.merge(channels,mRgba);
Imgproc.cvtColor(mRgba,mRgba,Imgproc.COLOR_HSV2BGR);
Implementation of the heaviside function-
public static double heaviside(double sigma) {
if(0.5-sigma>0)
return 1;
else
return 0;
}
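Regarding the lag: since the V channel only has 256 possible values, and k and c depend only on the pixel value plus frame-level statistics (mean, gamma), the per-pixel `Math.pow` calls can be replaced by a 256-entry lookup table computed once per frame and then applied to every pixel (with OpenCV you could apply it via `Core.LUT` instead of nested loops). A minimal plain-Java sketch of the idea, using the same `c * v^gamma` transform with `c` taken as a precomputed constant for simplicity (the method names are mine, not from the paper):

```java
public class GammaLut {
    // Build a 256-entry lookup table for v' = 255 * c * (v/255)^gamma.
    public static int[] buildLut(double gamma, double c) {
        int[] lut = new int[256];
        for (int v = 0; v < 256; v++) {
            double out = 255.0 * c * Math.pow(v / 255.0, gamma);
            // Clamp to the valid 8-bit range.
            lut[v] = (int) Math.max(0, Math.min(255, Math.round(out)));
        }
        return lut;
    }

    // Apply the table: one array lookup per pixel instead of Math.pow.
    public static void applyLut(int[] pixels, int[] lut) {
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = lut[pixels[i]];
        }
    }
}
```

The pixel-dependent k term can be folded into the table the same way, since for a fixed frame it is a pure function of the pixel value.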
I'll be creating an Android application that determines the freshness of meat using its RGB values. The flow of the application would be:
Capture an image
Determine the freshness by clicking one button
Display the result based on its RGB values.
I have a problem in the RGB part. Hope somebody can help me with this matter.
If you are using Processing this could be useful.
Use Processing to scan an image pixel by pixel, so the pixels can be compared to find patterns in the image.
PImage sample;
void setup() {
size(300,300);
sample = loadImage("sample.jpg");
sample.loadPixels();
image(sample,0,0,300,300);
}
void draw() {
//Loop to scan the image pixel by pixel
for (int x=0; x < sample.width; x++){
for (int y=0; y < sample.height; y++){
int location = x + (y * sample.width);
// With the location you can get the current color
color currentColor = sample.pixels[location];
//Do something
}
}
}
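For the freshness-by-RGB idea, a common first step is to average each channel over the region of interest and compare the averages against reference values for fresh meat. A sketch in plain Java over packed 0xAARRGGBB pixels (the array layout, method name, and the idea of using channel averages are my assumptions, not something the answer above prescribes):

```java
public class AverageColor {
    // Return {avgR, avgG, avgB} over all pixels (packed 0xAARRGGBB).
    public static double[] averageRgb(int[] pixels) {
        long r = 0, g = 0, b = 0;
        for (int p : pixels) {
            r += (p >>> 16) & 0xFF; // red channel
            g += (p >>> 8) & 0xFF;  // green channel
            b += p & 0xFF;          // blue channel
        }
        double n = pixels.length;
        return new double[] { r / n, g / n, b / n };
    }
}
```

In Processing the same sums can be accumulated inside the pixel loop above using `red(currentColor)`, `green(currentColor)`, and `blue(currentColor)`.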
I'm writing a scratch card like app, and I use a SurfaceView for that.
I fill it with some kind of color and I draw some Path on it with PorterDuff.Mode.CLEAR PorterDuffXfermode. I have to identify when the user fully scratched it (the SurfaceView's canvas is fully transparent). Can anybody give me some advice, how to identify it?
I tried it with saving the coordinates of the paths, but because of the drawing stroke width I can't calculate the covered area well.
I tried to get a Bitmap from the SurfaceView's getDrawingCache method and iterate over its pixels with getPixel. It doesn't work, and I think it would not be an efficient way to examine the canvas anyway.
Assuming the canvas will not be large or scalable to an arbitrary size I think looping over the pixels would be effective.
Given a canvas of large or arbitrary size I would create an array representation of the canvas and mark pixels as you go, keeping a count of how many the user has hit at least once. Then test that number against a threshold value that determines how much of the ticket must be scratched for it to be considered "scratched off". Incoming pseudo-code
const int count = size_x * size_y; // total pixel count
const int threshold = 0.8 * count; // user must hit 80% of the pixels to uncover
const int finger_radius = 2; // the radius of our finger in pixels
int scratched_pixels = 0;
bit [size_x][size_y] pixels_hit; // array of pixels, all initialized to 0
void OnMouseDown(int pos_x, int pos_y)
{
// calculate the mouse position in the canvas
int canvas_pos_x, canvas_pos_y = MousePosToCanvasPos(pos_x, pos_y);
for(int x = canvas_pos_x - finger_radius; x < canvas_pos_x + finger_radius; ++x)
{
for(int y = canvas_pos_y - finger_radius; y < canvas_pos_y + finger_radius; ++y)
{
int dist_x = x - canvas_pos_x;
int dist_y = y - canvas_pos_y;
if((dist_x * dist_x + dist_y * dist_y) <= finger_radius * finger_radius
&& pixels_hit[x][y] == 0)
{
++scratched_pixels;
pixels_hit[x][y] = 1;
}
}
}
}
bool IsScratched()
{
return scratched_pixels > threshold;
}
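If you do end up checking the rendered pixels directly instead, the test is just counting how many still have a non-zero alpha after the CLEAR-mode strokes. A sketch in plain Java over a packed 0xAARRGGBB pixel array (e.g. what Bitmap.getPixels would hand you; the 95% cutoff is an arbitrary assumption, like the 80% above):

```java
public class ScratchCheck {
    // Fraction of pixels whose alpha channel is 0 (fully scratched away).
    public static double clearedFraction(int[] pixels) {
        int cleared = 0;
        for (int p : pixels) {
            if ((p >>> 24) == 0) cleared++; // top 8 bits are alpha
        }
        return cleared / (double) pixels.length;
    }

    public static boolean isScratched(int[] pixels) {
        return clearedFraction(pixels) >= 0.95;
    }
}
```

For performance you would run this on a downscaled copy of the bitmap, and only when the user lifts their finger, rather than on every frame.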
What is the standard way to access and modify individual elements of a Mat in OpenCV4Android? Also, what is the format of the data for BGR (which is the default, I think) and grayscale?
edit: Let's make this more specific. mat.get(row, col) returns a double array. What is in this array?
If you just want to access a few pixels, do it with double[] get(int row, int col) and write with put(int row, int col, double... data). If you want to access the whole image, or iterate over its data in a loop, the best thing to do is copy the Mat data into a Java primitive array, and when you're done with the operations copy the data back into a Mat.
Images use CV_8U: a grayscale image will be CV_8UC1, while an RGB image will be a Mat of type CV_8UC3 (3 channels of CV_8U). CV_8U is the equivalent of byte in Java. :)
I can give you an example of a method I use in Java (Android platform) to binarize a grayscale image:
private Mat featuresVectorBinarization(Mat fv){
int size = (int) fv.total() * fv.channels();
double[] buff = new double[size];
fv.get(0, 0, buff);
for(int i = 0; i < size; i++)
{
buff[i] = (buff[i] >= 0) ? 1 : 0;
}
Mat bv = new Mat(fv.size(), CvType.CV_8U);
bv.put(0, 0, buff);
return bv;
}
Hope that helps.
What is the standard way to access and modify individual elements of a Mat in OpenCV4Android?
A Mat is the thing we call a "matrix" in Mathematics - a rectangular array of quantities set out by rows and columns. Those "quantities" represent pixels in the case of an image Mat, (e.g. every element of a matrix can be the color of each pixel in an image Mat). From this tutorial:
in the above image you can see that the mirror of the car is nothing
more than a matrix containing all the intensity values of the pixel
points.
So how would you go about iterating through a matrix? How about this:
for (int row=0; row<mat.rows(); row++) {
for (int col=0; col<mat.cols(); col++) {
//...do what you want..
//e.g. read the element at row 2, column 3
//(0-based indices) with mat.get(2, 3);
}
}
What is the standard way to access and modify individual elements of a Mat in OpenCV4Android?
You get the value of an element of a Mat by using its get(row, col) method, where row is the first coordinate (row number) and col is the second coordinate (column number) of the element. For example, to get the 4th element of the 7th row of a BGR image Mat named bgrImageMat, use the get method of Mat, which returns an array of type double of size 3, each array element representing one of the Blue, Green, and Red channels of the BGR image format.
double[] bgrColor = bgrImageMat.get(6, 3); // 7th row, 4th column (0-based indices)
Also, what is the format of the data for BGR (which is the default, I think) and grayscale? edit: Let's make this more specific. mat.get(row, col) returns a double array. What is in this array?
You can read about BGR color format and grayscale from the web. e.g. BGR and Grayscale.
In short, BGR color format has 3 channels: Blue, Green and Red. So the double array returned by mat.get(row, col), when mat is a BGR image, is an array of size 3, and each of its elements contains the values of each of the Blue, green and red channels respectively.
Likewise, grayscale is a 1-channel format, so the double array returned will have a size of 1.
From what I understand about the OpenCV Mat object, a Mat can represent any image of W x H pixels.
Now let's say you want to access the center pixel of your image; then
X = W/2
Y = H/2
Then you can access pixel data as follows
double[] data = matObject.get(x,y);
Now, what does data represent, and what is the size of the data array? That depends on the image type.
If the image is grayscale, then data.length = 1, since there is only one channel, and data[0] represents the intensity of that pixel, i.e. 0 (black) to 255 (white).
If the image is a color (RGBA) image, then data.length = 4, since there are four channels, and data[0..3] represent the color values of that pixel.