I am currently developing an app where I need to apply dynamic thresholding to a greyscale image.
I am new to both thresholding and Flutter, so my current understanding is:
Get all the pixel values from the greyscale image, which will be in the range 0-255
Set a threshold within that range, e.g. 110
Convert any value below the threshold to 0 (black)
Convert any value at or above the threshold to 255 (white)
Display the modified array of pixel values as an Image widget
Now I have come across several packages:
Image
Bitmap
Extended Image
I am not sure exactly what approach I should take in Flutter to apply thresholding to a greyscale image in a way that lets me change the threshold value with a Slider widget.
GREY SCALE IMAGE ----> Threshold: 110 ----> B/W IMAGE
Please do let me know if I need to clarify any part of my question or if it is too vague; I will do my best to explain further.
HIGHLY APPRECIATE ANY HELP! Cheers!
Future<Widget> binarizeImage() async {
  // needs: dart:io, dart:typed_data, package:flutter/material.dart,
  // package:image_picker/image_picker.dart and package:image/image.dart as img
  final picked = await _picker.pickImage(source: ImageSource.gallery);
  // load and decode the image (package:image v3 API, where getPixel returns a packed int)
  final image = img.decodeImage(File(picked!.path).readAsBytesSync())!;
  // set the threshold value (0-255, compared against each pixel's brightness,
  // not against the packed 32-bit colour value)
  const threshold = 110;
  // iterate over each pixel in the image and apply the threshold
  for (var x = 0; x < image.width; x++) {
    for (var y = 0; y < image.height; y++) {
      // getLuminance maps the packed colour to a 0-255 grey value
      final luminance = img.getLuminance(image.getPixel(x, y));
      if (luminance < threshold) {
        image.setPixel(x, y, img.getColor(0, 0, 0)); // black
      } else {
        image.setPixel(x, y, img.getColor(255, 255, 255)); // white
      }
    }
  }
  return Image.memory(Uint8List.fromList(img.encodePng(image)));
}
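For the Slider part, here is a minimal sketch of the wiring I have in mind (ThresholdPage is a placeholder name, and it assumes binarizeImage from above is refactored to accept the threshold as a parameter):
import 'package:flutter/material.dart';

class ThresholdPage extends StatefulWidget {
  const ThresholdPage({super.key});
  @override
  State<ThresholdPage> createState() => _ThresholdPageState();
}

class _ThresholdPageState extends State<ThresholdPage> {
  double _threshold = 110;

  @override
  Widget build(BuildContext context) {
    return Column(children: [
      // re-runs the thresholding whenever setState changes _threshold
      FutureBuilder<Widget>(
        future: binarizeImage(_threshold.round()),
        builder: (context, snapshot) => snapshot.hasData
            ? snapshot.data!
            : const CircularProgressIndicator(),
      ),
      Slider(
        min: 0,
        max: 255,
        value: _threshold,
        onChanged: (v) => setState(() => _threshold = v),
      ),
    ]);
  }
}
For this to be usable, the picked image should presumably be decoded once and cached in the state, rather than re-picked from the gallery and re-decoded on every slider change.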
I am fairly new to OpenCV and I am trying to do real-time object detection for a school project in an Android app. I followed this tutorial (https://www.youtube.com/watch?v=bSeFrPrqZ2A) and I am able to detect objects by colour on my Android phone. Now I am trying to map out the trajectory of the object, just like in this video (https://www.youtube.com/watch?v=QTYSRZD4vyI).
Following is some of the source code provided in the first YouTube video.
void searchForMovement(int& x, int& y, Mat& mRgb1, Mat& threshold){
    morphOps(threshold);
    Mat temp;
    threshold.copyTo(temp);
    //these two vectors are needed for the output of findContours
    vector< vector<Point> > contours;
    vector<Vec4i> hierarchy;
    //find contours of the filtered image using the OpenCV findContours function
    //In OpenCV, finding contours is like finding a white object on a black background,
    //so remember: the object to be found should be white and the background black.
    //CV_CHAIN_APPROX_SIMPLE stores only the corner points of each contour
    findContours(temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
    double refArea = 0;
    bool objectFound = false;
    if (hierarchy.size() > 0) {
        int numObjects = hierarchy.size();
        //if the number of objects is greater than MAX_NUM_OBJECTS we have a noisy filter
        if(numObjects < MAX_NUM_OBJECTS){
            for (int index = 0; index >= 0; index = hierarchy[index][0]) {
                Moments moment = moments((cv::Mat)contours[index]);
                double area = moment.m00;
                //if the area is less than 20 px by 20 px it is probably just noise;
                //if the area is about 3/2 of the image size, it is probably a bad filter.
                //we only want the object with the largest area, so we save a reference
                //area each iteration and compare it to the area in the next iteration.
                if(area > MIN_OBJECT_AREA && area < MAX_OBJECT_AREA && area > refArea){
                    x = moment.m10 / area;
                    y = moment.m01 / area;
                    objectFound = true;
                    refArea = area;
                } else objectFound = false;
            }
            //let the user know you found an object
            if(objectFound == true){
                putText(mRgb1, "Tracking Object", Point(0,50), 2, 1, Scalar(0,255,0), 2);
                //draw the object's location on screen
                drawObject(x, y, mRgb1);
            }
        } else putText(mRgb1, "TOO MUCH NOISE! ADJUST FILTER", Point(0,50), 1, 2, Scalar(0,0,255), 2);
    }
}
void drawObject(int x, int y, Mat &frame){
    Mat traj;
    traj = frame;
    //use some of the OpenCV drawing functions to draw crosshairs
    //on your tracked image!
    //UPDATE: JUNE 18TH, 2013
    //added 'if' and 'else' statements to prevent
    //memory errors from writing off the screen (i.e. (-25,-25) is not within the window!)
    circle(frame, Point(x,y), 20, Scalar(0,255,0), 2);
    if(y-25 > 0)
        line(frame, Point(x,y), Point(x,y-25), Scalar(0,255,0), 2);
    else line(traj, Point(x,y), Point(x,0), Scalar(0,255,0), 2);
    if(y+25 < FRAME_HEIGHT)
        line(frame, Point(x,y), Point(x,y+25), Scalar(0,255,0), 2);
    else line(frame, Point(x,y), Point(x,FRAME_HEIGHT), Scalar(0,255,0), 2);
    if(x-25 > 0)
        line(traj, Point(x,y), Point(x-25,y), Scalar(0,255,0), 2);
    else line(frame, Point(x,y), Point(0,y), Scalar(0,255,0), 2);
    if(x+25 < FRAME_WIDTH)
        line(frame, Point(x,y), Point(x+25,y), Scalar(0,255,0), 2);
    else line(frame, Point(x,y), Point(FRAME_WIDTH,y), Scalar(0,255,0), 2);
    // add(traj, frame, frame);
    putText(frame, intToString(x) + "," + intToString(y), Point(x,y+30), 1, 1, Scalar(0,255,0), 2);
}
How can I add onto this code to get the trajectory of an object as shown in the second video? Any suggestion would be much appreciated. Thank you.
http://opencv-srf.blogspot.co.uk/2010/09/object-detection-using-color-seperation.html
Found it. When doing it on Android, you need to make sure that lastX and lastY are updated as well.
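In case it helps anyone else, the core of it is to keep a history of tracked positions between frames and redraw the accumulated path every frame. A minimal sketch of that idea (drawTrajectory, trackedPoints and MAX_TRAIL are illustrative names, not part of the tutorial code):
#include <opencv2/opencv.hpp>
#include <deque>

using namespace cv;

// persistent history of tracked centroids, kept across frames
static std::deque<Point> trackedPoints;
static const size_t MAX_TRAIL = 64; // how many past positions to keep

void drawTrajectory(int x, int y, Mat& frame) {
    trackedPoints.push_back(Point(x, y));
    if (trackedPoints.size() > MAX_TRAIL)
        trackedPoints.pop_front();
    // connect successive positions so the trail traces the object's path
    for (size_t i = 1; i < trackedPoints.size(); ++i)
        line(frame, trackedPoints[i - 1], trackedPoints[i], Scalar(0, 255, 0), 2);
}
You would call drawTrajectory(x, y, mRgb1) right after drawObject inside searchForMovement, and only when objectFound is true, so the trail grows only while the object is actually being tracked; updating lastX and lastY each frame is the same bookkeeping with a history of one point.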
I am using the OpenCV 3.2.0 library for Java programming on Android, and I am trying to do adaptive gamma correction on the live image captured by the JavaCameraView camera object, but I am experiencing major lag.
Currently the code uses a nested for loop to access each pixel value and apply the gamma correction to it. Is there a method by which I can apply the same function to each pixel of an image in a more efficient way?
Research paper I am following for the adaptive gamma correction: https://jivp-eurasipjournals.springeropen.com/articles/10.1186/s13640-016-0138-1
My code for the same:
List<Mat> channels = new ArrayList<Mat>();
MatOfDouble mean = new MatOfDouble();
MatOfDouble stdDev = new MatOfDouble();
//convert the image to HSV format; we are going to work with the V channel
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_BGR2HSV);
//split into channels
Core.split(mRgba, channels);
//find the mean and standard deviation of the value channel and map them to the 0-1 scale
Core.meanStdDev(channels.get(2), mean, stdDev);
double stdDevVal = stdDev.get(0, 0)[0] / 255.0;
double meanVal = mean.get(0, 0)[0] / 255.0;
/*parameter initialization for adaptive gamma filtering:
  gamma, heaviside, c, k and newPixelIntensity.
  c and gamma are parameters that control the shape of the transformation curve
  (gamma transformation); k and heaviside determine the value of c;
  newPixelIntensity is the final intensity value of each separate pixel.
  Relation between c, k and heaviside:
      c = 1 / (1 + Heaviside(0.5 - standard deviation) * (k - 1))
  where k is defined by
      k = oldPixelIntensity^gamma + (1 - oldPixelIntensity^gamma) * mean^gamma
  and
      newPixelIntensity = c * oldPixelIntensity^gamma
*/
double gamma, heaviside, c, newPixelIntensity, k = 0;
//based on the standard deviation of the pixel intensities, classify the image as low/high contrast
//(note: 1/3 must be written as a double literal, otherwise Java integer division makes it 0)
if (4 * stdDevVal <= 1.0 / 3.0)
{
    //since it is low contrast, gamma = -log{base 2}(std. dev), as we need a large increase of contrast through gamma
    gamma = -Math.log10(stdDevVal) / Math.log10(2);
    //get the heaviside value
    heaviside = heaviside(meanVal);
    for (int i = 0; i < channels.get(2).rows(); i++)
    {
        for (int j = 0; j < channels.get(2).cols(); j++)
        {
            //access the pixel value and map it down to the range 0-1
            double pixelIntensity = channels.get(2).get(i, j)[0] / 255.0;
            if (heaviside == 1)
            {
                k = Math.pow(pixelIntensity, gamma) + (1 - Math.pow(pixelIntensity, gamma)) * Math.pow(meanVal, gamma);
            }
            //according to k and heaviside, calculate c
            c = 1 / (1 + heaviside * (k - 1));
            //find the new pixel intensity, map it to the 0-255 scale and put it back in the Matrix
            newPixelIntensity = 255 * c * Math.pow(pixelIntensity, gamma);
            channels.get(2).put(i, j, newPixelIntensity);
        }
    }
}
//high contrast image
else
{
    //for high contrast images gamma = exp((1 - (mean + std dev)) / 2); the rest stays the same
    gamma = Math.exp((1 - (meanVal + stdDevVal)) / 2);
    heaviside = heaviside(meanVal);
    for (int i = 0; i < channels.get(2).rows(); i++)
    {
        for (int j = 0; j < channels.get(2).cols(); j++)
        {
            //access the pixel value and map it down to the range 0-1
            double pixelIntensity = channels.get(2).get(i, j)[0] / 255.0;
            if (heaviside == 1)
            {
                k = Math.pow(pixelIntensity, gamma) + (1 - Math.pow(pixelIntensity, gamma)) * Math.pow(meanVal, gamma);
            }
            c = 1 / (1 + heaviside * (k - 1));
            //find the new pixel intensity, map it to the 0-255 scale and put it back in the Matrix
            newPixelIntensity = 255 * c * Math.pow(pixelIntensity, gamma);
            channels.get(2).put(i, j, newPixelIntensity);
        }
    }
}
//merge the updated channels to form the image and convert it back to RGB form
Core.merge(channels, mRgba);
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_HSV2BGR);
Implementation of the Heaviside function:
public static double heaviside(double sigma) {
    if (0.5 - sigma > 0)
        return 1;
    else
        return 0;
}
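One way to get rid of the per-pixel loop: the new intensity depends only on each pixel's own value, since meanVal, stdDevVal, gamma and the Heaviside term are all computed once per frame. That means the whole transformation can be precomputed as a 256-entry lookup table and applied with a single Core.LUT call, which runs in native code instead of making a JNI round trip per pixel. A minimal sketch, assuming gamma, heaviside and meanVal have already been computed as above:
// precompute the adaptive gamma mapping once per frame, for all 256 possible V values
Mat lut = new Mat(1, 256, CvType.CV_8U);
for (int v = 0; v < 256; v++) {
    double intensity = v / 255.0;
    double k = Math.pow(intensity, gamma)
             + (1 - Math.pow(intensity, gamma)) * Math.pow(meanVal, gamma);
    double c = 1 / (1 + heaviside * (k - 1));
    double out = 255 * c * Math.pow(intensity, gamma);
    lut.put(0, v, Math.max(0, Math.min(255, out))); // clamp to the valid 8-bit range
}
// one native call replaces both nested loops over the V channel
Mat corrected = new Mat();
Core.LUT(channels.get(2), lut, corrected);
channels.set(2, corrected);
The per-pixel Mat.get and Mat.put calls in the loops are the main source of the lag; the lookup table removes them entirely.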
I want to plot the RGB intensity graph of an image, like sinusoidal waves for the three colours. Can anyone please suggest an idea for how to do this?
There are (in general) 256 levels (8 bits) for each color component. If the image has an alpha channel, the overall image will be 32 bits per pixel; for an RGB-only image it will be 24.
I will sketch this out algorithmically to get the image histogram; it will be up to you to write the drawing code.
// Arrays for the histogram data
int histoR[256]; // Array that will hold the counts of how many of each red VALUE the image had
int histoG[256]; // Array that will hold the counts of how many of each green VALUE the image had
int histoB[256]; // Array that will hold the counts of how many of each blue VALUE the image had
int histoA[256]; // Array that will hold the counts of how many of each alpha VALUE the image had
// Zeroize all histogram arrays
for(num = 0 through 255){
    histoR[num] = 0;
    histoG[num] = 0;
    histoB[num] = 0;
    histoA[num] = 0;
}
// Move through all image pixels, counting up each time a pixel color value is used
for(x = 0 through image width){
    for(y = 0 through image height){
        histoR[image.pixel(x, y).red] += 1;
        histoG[image.pixel(x, y).green] += 1;
        histoB[image.pixel(x, y).blue] += 1;
        histoA[image.pixel(x, y).alpha] += 1;
    }
}
You now have the histogram data; it is up to you to plot it. PLEASE REMEMBER THE ABOVE IS ONLY AN ALGORITHMIC DESCRIPTION, NOT ACTUAL CODE.
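If you happen to be using OpenCV, calcHist already does this counting for you. A minimal C++ sketch (the file name is illustrative):
#include <opencv2/opencv.hpp>
#include <vector>

using namespace cv;

int main() {
    Mat image = imread("input.png"); // loaded as BGR
    std::vector<Mat> planes;
    split(image, planes); // planes[0] = blue, planes[1] = green, planes[2] = red

    int histSize = 256;       // one bin per possible 8-bit value
    float range[] = {0, 256}; // upper bound is exclusive
    const float* histRange = range;

    Mat histB, histG, histR;
    calcHist(&planes[0], 1, 0, Mat(), histB, 1, &histSize, &histRange);
    calcHist(&planes[1], 1, 0, Mat(), histG, 1, &histSize, &histRange);
    calcHist(&planes[2], 1, 0, Mat(), histR, 1, &histSize, &histRange);

    // histB.at<float>(v) now holds the count of pixels whose blue value is v,
    // i.e. the same data as the histoB array above; plot it however you like
    return 0;
}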
I am actually working on an Android project in which I want to extract some specific pixel values: using the threshold method, I have set some pixels to certain values according to a condition, and all the others to zero. I want to extract all of these non-zero values into a vector, and I will use this vector of chosen pixels to do some operations (the mean value, for example). Is there a method in OpenCV that can help me do this?
Thank you :)
I don't know of such a function, but it is actually not hard to implement (C++):
//'in' should be CV_8UC1 (a single-channel image, e.g. your thresholded result;
//countNonZero requires a single channel, and each pixel is read as a uchar)
vector<int> getNonZero(const Mat& in)
{
    //get the size of the result vector up front:
    //this is the number of non-zero pixels
    int count = countNonZero(in);
    vector<int> result(count);
    int k = 0;
    for (int r = 0; r < in.rows; ++r)
    {
        for (int c = 0; c < in.cols; ++c)
        {
            if (in.at<uchar>(r, c))
            {
                result[k++] = in.at<uchar>(r, c);
            }
        }
    }
    return result;
}
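Using it for the mean you mentioned would then look something like this (thresholded stands in for your own Mat):
// mean of the selected (non-zero) pixel values; accumulate needs <numeric>
vector<int> vals = getNonZero(thresholded);
double meanVal = accumulate(vals.begin(), vals.end(), 0.0) / vals.size();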
Also, many OpenCV functions have an InputArray mask parameter, so use it! For example:
void meanStdDev(InputArray src, OutputArray mean, OutputArray stddev, InputArray mask=noArray())
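So for your mean example you could skip the intermediate vector entirely; something along these lines should work (thresholded again being your own single-channel Mat):
// statistics computed only where the mask is non-zero; using the thresholded
// image as its own mask selects exactly the pixels you kept
Scalar mean, stddev;
meanStdDev(thresholded, mean, stddev, thresholded != 0);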