What is the standard way to access and modify individual elements of a Mat in OpenCV4Android? Also, what is the format of the data for BGR (which is the default, I think) and grayscale?
edit: Let's make this more specific. mat.get(row, col) returns a double array. What is in this array?
If you just want to access a few pixels, read them with double[] get(int row, int col) and write them with put(int row, int col, double... data). If you plan to access the whole image, or to iterate over its data in a loop, the best approach is to copy the Mat data into a Java primitive array, do your operations on that array, and copy the data back into a Mat when you are done.
Images use CV_8U: a grayscale image is stored as CV_8UC1, while an RGB image uses a Mat of type CV_8UC3 (3 channels of CV_8U). CV_8U is the equivalent of byte in Java. :)
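One detail worth remembering when you copy a CV_8U Mat into a byte[]: Java's byte is signed, so any pixel value above 127 comes back negative and needs masking with 0xFF to recover the 0-255 value. A minimal sketch (plain Java, no OpenCV needed; the class name is mine):

```java
public class UnsignedByteDemo {
    // Recover the unsigned 0-255 value of a CV_8U sample stored in a Java byte
    static int toUnsigned(byte b) {
        return b & 0xFF;
    }

    public static void main(String[] args) {
        byte sample = (byte) 200;               // CV_8U value 200 overflows a signed byte
        System.out.println(sample);             // prints -56
        System.out.println(toUnsigned(sample)); // prints 200
    }
}
```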
I can give you an example of a method I use in Java (on the Android platform) to binarize a grayscale image:
private Mat featuresVectorBinarization(Mat fv) {
    int size = (int) fv.total() * fv.channels();
    double[] buff = new double[size];
    fv.get(0, 0, buff);
    for (int i = 0; i < size; i++) {
        buff[i] = (buff[i] >= 0) ? 1 : 0;
    }
    Mat bv = new Mat(fv.size(), CvType.CV_8U);
    bv.put(0, 0, buff);
    return bv;
}
Hope that helps.
What is the standard way to access and modify individual elements of a Mat in OpenCV4Android?
A Mat is the thing we call a "matrix" in mathematics: a rectangular array of quantities set out in rows and columns. In the case of an image Mat, those quantities represent pixels (e.g. each element of the matrix can be the color of one pixel in the image). From this tutorial:
in the above image you can see that the mirror of the car is nothing
more than a matrix containing all the intensity values of the pixel
points.
So how would you go about iterating through a matrix? How about this:
for (int row = 0; row < mat.rows(); row++) {
    for (int col = 0; col < mat.cols(); col++) {
        // ...do what you want...
        // e.g. mat.get(2, 3) reads the element at row 2, column 3
        // (the 4th element of the 3rd row, since indices are zero-based)
    }
}
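As the first answer above notes, calling get(row, col) once per element crosses the JNI boundary every iteration and is slow. The usual alternative is one bulk get into a primitive array and plain index arithmetic. A sketch with an ordinary byte array standing in for a continuous Mat buffer (the sizes and class name are illustrative, not from the original post):

```java
public class FlatIndexDemo {
    // Index of (row, col, channel) in a flattened, continuous image buffer
    static int flatIndex(int row, int col, int channel, int cols, int channels) {
        return (row * cols + col) * channels + channel;
    }

    public static void main(String[] args) {
        int rows = 2, cols = 3, channels = 3;         // tiny 2x3 BGR-style buffer
        byte[] buffer = new byte[rows * cols * channels];
        // set the blue channel of pixel (row 1, col 2)
        buffer[flatIndex(1, 2, 0, cols, channels)] = (byte) 255;
        System.out.println(buffer[15] & 0xFF); // index (1*3+2)*3+0 = 15; prints 255
    }
}
```

With a real Mat you would fill the buffer via mat.get(0, 0, buffer) and write it back via mat.put(0, 0, buffer).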
What is the standard way to access and modify individual elements of a Mat in OpenCV4Android?
You get the value of an element of a Mat by using its get(row, col) method, where row is the first coordinate (row number) and col is the second coordinate (column number) of the element. For example, to get the 4th element of the 7th row of a BGR image Mat named bgrImageMat, call get, which returns an array of type double with a size of 3, one element for each of the Blue, Green, and Red channels of the BGR image format.
double[] bgrColor = bgrImageMat.get(6, 3); // 7th row, 4th element, zero-based
Also, what is the format of the data for BGR (which is the default, I think) and grayscale? edit: Let's make this more specific. mat.get(row, col) returns a double array. What is in this array?
You can read about BGR color format and grayscale from the web. e.g. BGR and Grayscale.
In short, the BGR color format has 3 channels: Blue, Green and Red. So when mat is a BGR image, the double array returned by mat.get(row, col) has a size of 3, and its elements contain the values of the Blue, Green and Red channels respectively.
Likewise, grayscale is a 1-channel format, so the returned double array will have a size of 1.
What I understand from learning about the OpenCV Mat object is that a Mat can represent any image of W x H pixels.
Now let's say you want to access the center pixel of your image. Then
x = W/2
y = H/2
Then you can access the pixel data as follows:
double[] data = matObject.get(y, x); // Mat.get takes (row, col), so the y coordinate comes first
Now, what does data represent, and what is the size of the data array? That depends on the image type.
If the image is grayscale, then data.length == 1, since there is only one channel, and data[0] represents the intensity of that pixel, i.e. 0 (black) to 255 (white).
If the image is a color image with an alpha channel (RGBA, CV_8UC4), then data.length == 4, since there are four channels, and data[0]..data[3] hold the color values of that pixel (for a 3-channel BGR Mat it would be 3).
I am trying to process multiple RAW DNG images by stacking them to produce one stacked RAW DNG image. First I read the DNG pixel data into a byte array. Since a DNG is a digital negative, I then flip the byte values using "~" and convert them to unsigned integers, and compute the average. I flip the averaged result back with "~" and store it in the "newData" byte array.
Below is the snippet that averages 2 DNG images. The images were shot on a OnePlus 3 in RAW (16 MP DNG).
byte[] previousData; // from the previous DNG image
ByteBuffer rawByteBuffer = mImage.getPlanes()[0].getBuffer();
byte[] data = new byte[rawByteBuffer.remaining()];
rawByteBuffer.get(data); // from the current DNG image
for (int i = 0; i < data.length; i++) {
    int currentInt = Byte.toUnsignedInt((byte) ~data[i]);
    int previousInt = Byte.toUnsignedInt((byte) ~previousData[i]);
    int sumInt = currentInt + previousInt;
    int averageInt = sumInt / 2;
    newData[i] = (byte) (~averageInt); // store the average into newData
}
// save newData into storage in DNG format
However, the resulting image (link below) always shows a green tint on the white areas. Any ideas what went wrong in the process?
Here is the preview of the stacked DNG image
Here is the preview of original single DNG for reference
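One thing worth checking here (this is an assumption on my part, since the sensor format is not shown in the post): 16-bit RAW data stores each sample across two bytes, so averaging every byte independently corrupts any sample whose two halves interact, and the per-byte "~" inversion has the same problem. A sketch that instead averages whole little-endian 16-bit samples (buffer layout and endianness are assumptions to verify against the actual DNG):

```java
public class RawAverage {
    // Average two RAW buffers as little-endian 16-bit samples
    // rather than as independent bytes. Assumes a.length == b.length and even.
    static byte[] averageSamples(byte[] a, byte[] b) {
        byte[] out = new byte[a.length];
        for (int i = 0; i < a.length; i += 2) {
            int sampleA = (a[i] & 0xFF) | ((a[i + 1] & 0xFF) << 8);
            int sampleB = (b[i] & 0xFF) | ((b[i + 1] & 0xFF) << 8);
            int avg = (sampleA + sampleB) / 2;
            out[i] = (byte) (avg & 0xFF);            // low byte
            out[i + 1] = (byte) ((avg >> 8) & 0xFF); // high byte
        }
        return out;
    }
}
```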
Update 1
I have an idea of what the inRange function does. But I don't want to apply a mask and show a new image with the skin color. What I want is to know whether the image contains skin color and whether it covers a large enough area.
What I want to do
I want to capture a picture whenever finger is detected inside a boundary. Its dimensions are known.
Struggling points
Manipulate image data in native code.
Detecting skin in the live camera, so that whenever that particular area is in focus and skin is detected, a snapshot is taken.
What I have done
I am using the JNI layer to perform the operation. I am able to get a Mat from the image data using this tutorial, but I don't know how to manipulate poutPixels. The format is NV21 and I am not sure how to do operations on it.
I need to crop the image and then detect whether there is skin present in it. I have successfully cropped the image to the desired dimensions, but I have no clue how to move forward with detecting skin. I want this method to return true or false.
Here is the code:
jbyte * pNV21FrameData = env->GetByteArrayElements(NV21FrameData, 0);
jint * poutPixels = env->GetIntArrayElements(outPixels, 0);
Mat mNV(height, width, CV_8UC3, (unsigned char*)pNV21FrameData);
Mat finalImage(height, width, CV_8UC3, (unsigned char*) poutPixels);
jfloat wScale = (float) width/screenWidth;
jfloat hScale = (float) height/screenHeight;
float temp = rectX * wScale;
int x = (int) temp;
temp = rectY * hScale;
int y = (int) temp;
int cW = (int) (width * wScale);
int cH = (int) (height * hScale);
cH = cH/2;
Rect regionToCrop(x, y, cW, cH);
mNV = mNV(regionToCrop);
finalImage = finalImage(regionToCrop);
//detect skin and return true or false
I have read about inRange function, but I don't know how to check whether there's skin or not.
Questions
Am I on the right path to proceed further?
The image format I am getting is NV21. Is it 8UC1, or can it be 8UC3 too?
How to proceed from here to start detecting skin?
Any help is appreciated.
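(Editor's note on question 2, for reference: NV21 is a YUV 4:2:0 layout, a full-resolution Y plane followed by an interleaved V/U plane at quarter resolution, so a frame is width*height*3/2 bytes and is conventionally wrapped as a single-channel CV_8UC1 Mat with height*3/2 rows, not CV_8UC3. A sketch of the size arithmetic, under exactly those assumptions:)

```java
public class Nv21Layout {
    // Total byte count of an NV21 frame: Y plane + interleaved VU plane
    static int frameSize(int width, int height) {
        int ySize = width * height;      // 1 luma byte per pixel
        int vuSize = width * height / 2; // 2 chroma bytes per 2x2 pixel block
        return ySize + vuSize;
    }
}
```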
I have solved my problem by extracting the skin color range with inRange and thresholding on the resulting mask. Below are the steps.
Convert the image to HSV
First convert image to HSV.
Mat mHsv = new Mat(rows, cols, CvType.CV_8UC3);
Imgproc.cvtColor(mRgba, mHsv, Imgproc.COLOR_RGB2HSV);
Get range of skin color
Skin color range may vary, but this one is working fine for me.
Mat output = new Mat();
Core.inRange(mHsv, new Scalar(0, 0.18*255, 0), new Scalar(25, 0.68*255, 255), output);
Extract this Skin Range channel
Now extract this channel (the inRange output is already single-channel, so this mostly copies the mask):
Mat mExtracted = new Mat();
Core.extractChannel(output, mExtracted, 0);
Now you have the mExtracted matrix, in which skin-colored pixels are 255 and the rest are 0.
Get the count of non-zero pixels
Since the non-zero pixels are the skin-colored area, you can define a threshold that suits your need. In my case I want skin to cover more than half of the area, so I made my logic accordingly.
int n = Core.countNonZero(mExtracted);
int check = (mExtracted.rows() * mExtracted.cols()) / 2;
if (n >= check && isFocused) {
    // Take picture
}
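The final check above is just "do the in-range pixels cover at least half of the crop". The same threshold test can be sketched without OpenCV, with the mask as a plain array of 0/255 values (the 50% threshold is this answer's own choice; the class name is mine):

```java
public class SkinCoverage {
    // True when at least `fraction` of the mask values are non-zero
    static boolean covers(int[] mask, double fraction) {
        int nonZero = 0;
        for (int v : mask) {
            if (v != 0) nonZero++;
        }
        return nonZero >= mask.length * fraction;
    }
}
```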
The following code reads the pixel in the center and returns three values, which I assumed were H = data[0], S = data[1], V = data[2]. How do I get the upper and lower bound HSV values?
Note: The color pixel I'm reading is Green.
E/data: H:90.0 S:113.0 V:144.0
if (getIntent().hasExtra("byteArray")) {
    bitmap = BitmapFactory.decodeByteArray(getIntent().getByteArrayExtra("byteArray"), 0,
            getIntent().getByteArrayExtra("byteArray").length);
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int centerX = width / 2;
    int centerY = height / 2;
    srcMat = new Mat();
    Utils.bitmapToMat(bitmap, srcMat);
    Imgproc.cvtColor(srcMat, srcMat, Imgproc.COLOR_BGR2HSV);
    srcMat.convertTo(srcMat, CvType.CV_64FC3); // http://answers.opencv.org/question/14961/using-get-and-put-to-access-pixel-values-in-java/
    double[] data = srcMat.get(centerY, centerX); // Mat.get takes (row, col), so the y coordinate comes first
    Log.e("data", String.valueOf("H:" + data[0] + " S:" + data[1] + " V:" + data[2]));
    Log.e("dlength", String.valueOf(data.length));
    Mat matHSV = new Mat(0, 0, CvType.CV_64FC3);
Also, when I add the following three lines of code, I receive an error saying bitmap == null, so I'm not really sure whether the pixel reading worked or not.
matHSV.put(0,0,data);
Utils.matToBitmap(matHSV, bb);
imgDisplay.setImageBitmap(bb);
Image I'm Reading:
Just convert the Mat to the HSV model with the Imgproc.cvtColor() method:
Imgproc.cvtColor(hsvMat, hsvMat, Imgproc.COLOR_RGB2HSV);
Then find the min/max Hue value over all pixels (with the Mat.get(int, int) method) - those min and max values are the answer: the lower and upper bounds for the color in the HSV model.
NB! Hue for OpenCV 8-bit images is Hue/2, because the Hue value in the "normal" model is between 0 and 360, which is greater than 255, the max value for a byte. So if you need color bounds for the "normal" HSV model (not the 8-bit OpenCV one), you should multiply the Hue by 2.
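Converting a bound between the two conventions is then just a factor of two. A small sketch (plain Java; the class and method names are mine):

```java
public class HueScale {
    // OpenCV 8-bit hue (0-179) -> "normal" hue in degrees
    static int toDegrees(int openCvHue) {
        return openCvHue * 2;
    }

    // "normal" hue in degrees (0-359) -> OpenCV 8-bit hue
    static int toOpenCv(int degrees) {
        return degrees / 2;
    }
}
```

For example, the H:90 logged in the question above corresponds to 180 degrees in the standard HSV model.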
I am using the OpenCV 3.2.0 library for Java programming on Android, and am trying to do adaptive gamma correction of a live image being captured by the JavaCameraView camera object; as a result I am experiencing major lag issues.
Currently the code uses a nested for loop to access each pixel value and apply the gamma correction to it. Is there a method by which I can apply the same function to each pixel of an image more efficiently?
Research paper I am following to do the adaptive gamma correction- https://jivp-eurasipjournals.springeropen.com/articles/10.1186/s13640-016-0138-1
My code for the same-
List<Mat> channels = new ArrayList<Mat>();
MatOfDouble mean = new MatOfDouble();
MatOfDouble stdDev = new MatOfDouble();
// convert image to HSV format, going to work with the V channel
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_BGR2HSV);
// split into channels
Core.split(mRgba, channels);
// find mean and standard deviation of the value channel and map them to a 0-1 scale
Core.meanStdDev(channels.get(2), mean, stdDev);
double stdDevVal = stdDev.get(0, 0)[0] / 255.0;
double meanVal = mean.get(0, 0)[0] / 255.0;
/* parameter initialization for adaptive gamma filtering:
   gamma, heaviside, c, k and newPixelIntensity.
   c and gamma are parameters that control the shape of the transformation curve (gamma transformation);
   k and heaviside are parameters that determine the value of c; newPixelIntensity is the final
   intensity value of each separate pixel.
   Relation between c, k and heaviside:
   c = 1 / (1 + Heaviside(0.5 - standard deviation) * (k - 1))
   where k is defined by
   k = (oldPixelIntensity)^gamma + (1 - (oldPixelIntensity)^gamma) * mean^gamma
   newPixelIntensity = c * (oldPixelIntensity)^gamma
*/
double gamma, heaviside, c, newPixelIntensity, k = 0;
// based on the standard deviation of the pixel intensities, classify the image as low/high contrast
if (4 * stdDevVal <= 1.0 / 3) { // note: 1/3 would be integer division and always 0
    // since it is low contrast, gamma = -log{base 2}(std. dev) as we need a large contrast increase
    gamma = -Math.log10(stdDevVal) / Math.log10(2);
    // get heaviside value
    heaviside = heaviside(meanVal);
    for (int i = 0; i < channels.get(2).rows(); i++) {
        for (int j = 0; j < channels.get(2).cols(); j++) {
            // access the pixel value and map it down to the range 0-1
            double pixelIntensity = channels.get(2).get(i, j)[0] / 255.0;
            if (heaviside == 1) {
                k = Math.pow(pixelIntensity, gamma)
                        + (1 - Math.pow(pixelIntensity, gamma)) * Math.pow(meanVal, gamma);
            }
            // according to k and heaviside, calculate c
            c = 1 / (1 + heaviside * (k - 1));
            // find the new pixel intensity, map it to the 0-255 scale and put it back in the matrix
            newPixelIntensity = 255 * c * Math.pow(pixelIntensity, gamma);
            channels.get(2).put(i, j, newPixelIntensity);
        }
    }
} else { // high contrast image
    // for high contrast images gamma = exp((1 - (mean + std dev)) / 2); the rest stays the same
    gamma = Math.exp((1 - (meanVal + stdDevVal)) / 2);
    heaviside = heaviside(meanVal);
    for (int i = 0; i < channels.get(2).rows(); i++) {
        for (int j = 0; j < channels.get(2).cols(); j++) {
            // access the pixel value and map it down to the range 0-1
            double pixelIntensity = channels.get(2).get(i, j)[0] / 255.0;
            if (heaviside == 1) {
                k = Math.pow(pixelIntensity, gamma)
                        + (1 - Math.pow(pixelIntensity, gamma)) * Math.pow(meanVal, gamma);
            }
            c = 1 / (1 + heaviside * (k - 1));
            // find the new pixel intensity, map it to the 0-255 scale and put it back in the matrix
            newPixelIntensity = 255 * c * Math.pow(pixelIntensity, gamma);
            channels.get(2).put(i, j, newPixelIntensity);
        }
    }
}
// merge the updated channels to form the image and convert it back to RGB form
Core.merge(channels, mRgba);
Imgproc.cvtColor(mRgba, mRgba, Imgproc.COLOR_HSV2BGR);
Implementation of the heaviside function:
public static double heaviside(double sigma) {
    if (0.5 - sigma > 0)
        return 1;
    else
        return 0;
}
}
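Since gamma, the Heaviside value and the mean are fixed for the whole frame, the new intensity in the code above depends only on the 0-255 input value. So the nested per-pixel loops can be replaced by a 256-entry lookup table built once per frame and applied with Core.LUT (or with one bulk pass over a byte[]). A sketch of the table construction, using the same formulas as the question's code (the class name is mine):

```java
public class AdaptiveGammaLut {
    // Build a 256-entry lookup table for the adaptive gamma transform:
    //   newI = 255 * c * (I/255)^gamma
    //   c    = 1 / (1 + heaviside * (k - 1))
    //   k    = I^gamma + (1 - I^gamma) * mean^gamma  (intensities on a 0-1 scale)
    static int[] buildLut(double gamma, double heaviside, double meanVal) {
        int[] lut = new int[256];
        for (int v = 0; v < 256; v++) {
            double intensity = v / 255.0;
            double k = Math.pow(intensity, gamma)
                    + (1 - Math.pow(intensity, gamma)) * Math.pow(meanVal, gamma);
            double c = 1 / (1 + heaviside * (k - 1)); // when heaviside == 0, c == 1 and k is irrelevant
            int out = (int) Math.round(255 * c * Math.pow(intensity, gamma));
            lut[v] = Math.min(255, Math.max(0, out)); // clamp to valid range
        }
        return lut;
    }
}
```

The table can then be packed into a 1x256 CV_8U Mat and applied to the V channel in one call with Core.LUT, moving the per-pixel math out of Java loops entirely.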
I want to plot the RGB intensity graph of an image, like sinusoidal waves for the 3 colors. Can anyone please suggest an idea for how to do this?
There are (in general) 256 levels (8 bits) for each color component. If the image has an alpha channel, the overall image is 32 bits per pixel; for an RGB-only image it is 24.
I will sketch this out algorithmically to get the image histogram; it will be up to you to write the drawing code.
// Arrays for the histogram data
int histoR[256]; // Array that will hold the counts of how many of each red VALUE the image had
int histoG[256]; // Array that will hold the counts of how many of each green VALUE the image had
int histoB[256]; // Array that will hold the counts of how many of each blue VALUE the image had
int histoA[256]; // Array that will hold the counts of how many of each alpha VALUE the image had

// Zero all histogram arrays
for (num = 0 through 255) {
    histoR[num] = 0;
    histoG[num] = 0;
    histoB[num] = 0;
    histoA[num] = 0;
}

// Move through all image pixels, counting up each time a pixel color value is used
for (x = 0 through image width) {
    for (y = 0 through image height) {
        histoR[image.pixel(x, y).red] += 1;
        histoG[image.pixel(x, y).green] += 1;
        histoB[image.pixel(x, y).blue] += 1;
        histoA[image.pixel(x, y).alpha] += 1;
    }
}
You now have the histogram data; it is up to you to plot it. PLEASE REMEMBER: THE ABOVE IS ONLY AN ALGORITHMIC DESCRIPTION, NOT ACTUAL CODE.
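The counting step above can be made concrete in Java. This sketch assumes the pixels arrive as packed ARGB ints, the format Android's Bitmap.getPixels delivers (the drawing is still up to you):

```java
public class Histogram {
    // Count how many pixels use each 0-255 value per channel.
    // Pixels are packed ARGB ints: 0xAARRGGBB.
    static int[][] compute(int[] pixels) {
        int[][] histo = new int[4][256]; // [0]=alpha, [1]=red, [2]=green, [3]=blue
        for (int p : pixels) {
            histo[0][(p >>> 24) & 0xFF]++; // alpha
            histo[1][(p >>> 16) & 0xFF]++; // red
            histo[2][(p >>> 8) & 0xFF]++;  // green
            histo[3][p & 0xFF]++;          // blue
        }
        return histo;
    }
}
```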