The right method to obtain R-G-B intensities of an image? - android

I am using a simple camera app that shows the live camera preview on a SurfaceView. I turn the camera on/off using a button. In the button's onClick, after I call camera.startPreview(), I run:
camera.setPreviewCallback(new PreviewCallback() {
    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        sumRED = 0;
        sumGREEN = 0;
        sumBLUE = 0;
        // note: these are the *picture* dimensions, not the preview dimensions
        int frameHeight = camera.getParameters().getPictureSize().height;
        int frameWidth = camera.getParameters().getPictureSize().width;
        // treats the buffer as a 200x200 NV21 image, ignoring the real preview size
        int[] myPixels = convertYUV420_NV21toRGB8888(data, 200, 200);
        Bitmap bm = Bitmap.createBitmap(myPixels, 200, 200, Bitmap.Config.ARGB_8888);
        imageView.setImageBitmap(bm);
        // sum each channel over all pixels
        for (int i = 0; i < myPixels.length; i++) {
            sumRED = sumRED + Color.red(myPixels[i]);
            sumGREEN = sumGREEN + Color.green(myPixels[i]);
            sumBLUE = sumBLUE + Color.blue(myPixels[i]);
        }
        // average intensity per channel
        sumRED = sumRED / myPixels.length;
        sumGREEN = sumGREEN / myPixels.length;
        sumBLUE = sumBLUE / myPixels.length;
        String sRed = Float.toString(sumRED);
        String sGreen = Float.toString(sumGREEN);
        String sBlue = Float.toString(sumBLUE);
        rTextView.setText("RED: " + sRed);
        gTextView.setText("Green: " + sGreen);
        bTextView.setText("Blue: " + sBlue);
    }
});
I am using the offered code for NV21 to RGB conversion from here.
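For reference, here is a minimal sketch of what such a converter typically does, assuming standard BT.601 integer math and the NV21 layout (a full-resolution Y plane followed by interleaved V/U bytes); this is an illustration of the technique, not the exact code from the link:

// Sketch of an NV21 -> ARGB_8888 conversion (BT.601 integer approximation).
public static int[] convertYUV420_NV21toRGB8888(byte[] data, int width, int height) {
    int size = width * height;
    int[] pixels = new int[size];
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int yv = (data[y * width + x] & 0xff) - 16;
            if (yv < 0) yv = 0;
            // chroma is subsampled 2x2; in NV21 the V byte comes before U
            int uvIndex = size + (y >> 1) * width + (x & ~1);
            int v = (data[uvIndex] & 0xff) - 128;
            int u = (data[uvIndex + 1] & 0xff) - 128;
            int y1192 = 1192 * yv;
            int r = (y1192 + 1634 * v) >> 10;
            int g = (y1192 - 833 * v - 400 * u) >> 10;
            int b = (y1192 + 2066 * u) >> 10;
            r = Math.max(0, Math.min(255, r));
            g = Math.max(0, Math.min(255, g));
            b = Math.max(0, Math.min(255, b));
            pixels[y * width + x] = 0xff000000 | (r << 16) | (g << 8) | b;
        }
    }
    return pixels;
}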
I thought that by calling Color.red(myPixels[i]) (and likewise for green and blue) I would get the intensity of each channel for each pixel, and could then obtain the average color intensity of the picture.
But I have noticed that the TextViews show exactly the same values for Red and Green.
Is the code I am reusing not right for this purpose? Or is there a better, preferred, more efficient way to obtain the intensity of a specific color in a pixel (or in a live video preview)?
Thank you
Update:
At first I was not touching the preview output format, and the rebuilt bitmap looked really bad. When I added cameraParam.setPreviewFormat(ImageFormat.YV12);, the rebuilt bitmap was divided into 4 quadrants. When I covered the camera with my finger (shown mostly orange/red because the flash was on), the rebuilt image was purple in the top two quadrants and greenish in the bottom two. When I changed the format to NV21, the rebuilt image was a single image, showed pretty much the same as the live preview, and there was a significant difference between the R, G, and B values. So my question is: why am I getting results in the range of 150-160 when most of the screen is a single color and I expected values around 255? I thought the conversion algorithm I am using outputs on a 0-255 scale. What am I missing here?
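For completeness, a minimal sketch of the preview setup this relies on, assuming the standard android.hardware.Camera API: NV21 is the documented default preview format, and the conversion should use the preview size rather than the picture size:

Camera.Parameters params = camera.getParameters();
params.setPreviewFormat(ImageFormat.NV21); // NV21 is the default, guaranteed-supported preview format
camera.setParameters(params);

// inside onPreviewFrame(): convert using the *preview* dimensions
Camera.Size previewSize = camera.getParameters().getPreviewSize();
int[] pixels = convertYUV420_NV21toRGB8888(data, previewSize.width, previewSize.height);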

Related

Overlay an image over live frame - OpenCV4Android

I'm trying to overlay a little image, previously selected by the user, on the live frame. I already have the image's path from a previous activity. My problem is that I am not able to show the image on the frame.
I'm trying to detect a rectangle in the frame and display the selected image over the rectangle. I could detect the rectangle, but now I can't display the image on any part of the frame (I don't care about the rectangle right now).
I've been trying to do it with the explanations from Adding Image Overlay OpenCV for Android and add watermark small image to large image opencv4android, but it didn't work for me.
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
    Mat gray = inputFrame.gray();
    Mat dst = inputFrame.rgba();

    // edge detection on a downscaled/upscaled copy to suppress noise
    Imgproc.pyrDown(gray, dsIMG, new Size(gray.cols() / 2, gray.rows() / 2));
    Imgproc.pyrUp(dsIMG, usIMG, gray.size());
    Imgproc.Canny(usIMG, bwIMG, 0, threshold);
    Imgproc.dilate(bwIMG, bwIMG, new Mat(), new Point(-1, -1), 1);

    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    cIMG = bwIMG.clone();
    Imgproc.findContours(cIMG, contours, hovIMG, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    for (MatOfPoint cnt : contours) {
        MatOfPoint2f curve = new MatOfPoint2f(cnt.toArray());
        Imgproc.approxPolyDP(curve, approxCurve, 0.02 * Imgproc.arcLength(curve, true), true);
        int numberVertices = (int) approxCurve.total();
        double contourArea = Imgproc.contourArea(cnt);
        if (Math.abs(contourArea) < 100) {
            continue;
        }
        // Rectangle detected
        if (numberVertices >= 4 && numberVertices <= 6) {
            List<Double> cos = new ArrayList<>();
            for (int j = 2; j < numberVertices + 1; j++) {
                cos.add(angle(approxCurve.toArray()[j % numberVertices],
                        approxCurve.toArray()[j - 2], approxCurve.toArray()[j - 1]));
            }
            Collections.sort(cos);
            double mincos = cos.get(0);
            double maxcos = cos.get(cos.size() - 1);
            if (numberVertices == 4 && mincos >= -0.3 && maxcos <= 0.5) {
                // Small watermark image
                Mat a = imread(img_path);
                Mat bSubmat = dst.submat(0, dst.rows() - 1, 0, dst.cols() - 1);
                a.copyTo(bSubmat);
            }
        }
    }
    return dst;
}
NOTE: img_path is the path of the selected image I want to display over the frame. I got it from the previous activity.
For now, I just want to display the image over the frame. Later, I will try to display it in the same position where it found the rectangle.
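For reference, a minimal hedged sketch of what a size-matched copy could look like (assuming OpenCV 3.x's Imgcodecs and the RGBA frame from onCameraFrame): copyTo only writes into dst when the destination submat already has the watermark's exact size and type; otherwise it reallocates the submat and the frame stays untouched.

// Illustrative sketch, not a confirmed fix: load the watermark, match the
// frame's RGBA type, and copy it into an equally-sized region of interest.
Mat logo = Imgcodecs.imread(img_path);                 // loaded as BGR
Imgproc.cvtColor(logo, logo, Imgproc.COLOR_BGR2RGBA);  // match the RGBA frame
Mat roi = dst.submat(0, logo.rows(), 0, logo.cols());  // same size as the logo
logo.copyTo(roi);                                      // pixels now land in dst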
Please, any suggestion or recommendation is welcome, as I am new to OpenCV. I'm sorry for my English, but feel free to ask about anything I didn't explain correctly; I'll do my best to explain it better.
Thanks a lot!
If you just want to display the image as an overlay, and not save it as part of the video, you may find it easier to simply display it in a separate view above the video view. This will likely use less processing and battery as well.
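A minimal sketch of that separate-view approach, assuming you are inside an Activity and cameraView is your existing preview view (both names here are placeholders):

// Stack an ImageView over the camera view in a FrameLayout; nothing is ever
// drawn into the video frames themselves.
FrameLayout root = new FrameLayout(this);      // 'this' is the Activity
root.addView(cameraView);                      // the existing camera preview view
ImageView overlay = new ImageView(this);
overlay.setImageBitmap(BitmapFactory.decodeFile(img_path));
root.addView(overlay);                         // drawn on top of the preview
setContentView(root);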
If you want to draw onto the camera image bitmap, then the following will allow you to do that:
// 'bytes' is the camera image data, 'opt' the BitmapFactory options
Bitmap cameraBitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, opt);
Canvas camImgCanvas = new Canvas(cameraBitmap);
Drawable d = ContextCompat.getDrawable(getActivity(), R.drawable.myDrawable);

// Centre the drawing
int bitMapWidthCenter = cameraBitmap.getWidth() / 2;
int bitMapHeightCenter = cameraBitmap.getHeight() / 2;
d.setBounds(bitMapWidthCenter, bitMapHeightCenter,
        bitMapWidthCenter + d.getIntrinsicWidth(),
        bitMapHeightCenter + d.getIntrinsicHeight());

// And draw it...
d.draw(camImgCanvas);
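One gotcha with the snippet above: bytes and opt are assumed to come from the surrounding capture code, and the decoded bitmap must be mutable or new Canvas(cameraBitmap) will throw IllegalStateException. A minimal sketch of those assumed pieces:

// 'bytes' would be the JPEG/preview data delivered by a camera callback;
// the options must request a mutable bitmap so a Canvas can draw into it.
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inMutable = true; // Canvas requires a mutable bitmap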

How to add a cartoon face on Camera Preview using Android

I have a module that puts a cartoon face on the eyes, or anywhere else, on the live camera preview. I am using the Moodme SDK. I have implemented the camera preview, and I am getting the landmark x and y values, but I don't know where to apply those landmarks and how to put the image on the eyes using them. This is the code that runs while tracking a person's face on the live camera.
@Override
public void onImageAvailable(ImageReader reader) {
    Image image = reader.acquireLatestImage();
    if (image == null) {
        return;
    }

    // copy out the three YUV planes
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    byte[] yBytes = new byte[yBuffer.remaining()];
    yBuffer.get(yBytes);
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    byte[] uBytes = new byte[uBuffer.remaining()];
    uBuffer.get(uBytes);
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
    byte[] vBytes = new byte[vBuffer.remaining()];
    vBuffer.get(vBytes);

    // the tracker only needs the grayscale (luma) plane
    tracker.processImageBuffer(yBytes, WIDTH, HEIGHT, WIDTH, MDMTrackerManager.FrameFormat.GRAY);
    //renderer.updateTextureImage(yBytes, uBytes, vBytes, image.getPlanes()[1].getPixelStride());
    image.close();

    if (tracker.isFaceTracked()) {
        // renderer.updateVertices();
    }
    if (tracker.isFaceTracked()) {
        // translate to OpenGL coordinates; indices 17-26 and 36-47
        // presumably select the brows and eyes
        float[] landmarks = new float[66 * 2];
        for (int i = 0; i < 66; ++i) {
            if (i >= 17 && i < 27 || i >= 36 && i < 48) {
                landmarks[2 * i] = 1.0f - tracker.getLandmarks()[2 * i] / (HEIGHT / 2);
                landmarks[2 * i + 1] = 1.0f - tracker.getLandmarks()[2 * i + 1] / (WIDTH / 2);
            }
        }
        // renderer.updateLandmarks(landmarks);
    } else {
        // renderer.updateLandmarks(null);
    }

    // measure frames per second
    long currentTime = System.currentTimeMillis();
    double fps = 1000.0 / (currentTime - lastFrameTime);
    updater.update(fps);
    lastFrameTime = currentTime;
}
I have also used a face detection library, but it is not giving me accurate results. Is there any good library for face detection and putting an image or mask on the camera preview? Any help will be appreciated.
There are many libraries available that add a face mask on the camera preview. Almost all of them use OpenCV. Check out these libraries:
FaceFilter
Face Replace
FaceTracker
Android GPUimage
The Android GPUimage library seems to add an image on the camera preview. A similar question used this library to add a face mask on the camera preview; you can take a look at the answer posted on that question.
The FaceFilter library does the same work, but on a captured image. However, you can look at the tutorial for the library posted by the author and integrate it with face detection. There are several tutorials for face detection. This tutorial explains how to implement face detection while also overlaying graphics on it. Although there is not much about the overlaid graphics in the tutorial, it might solve your question.
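To make the overlay part concrete, here is a minimal hedged sketch, not tied to any of the libraries above (EyeOverlayView and setLandmark are hypothetical names): given a landmark already mapped into view coordinates, a custom View stacked over the preview can draw the cartoon bitmap centred on it.

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.view.View;

// Illustrative only: wire this view on top of the preview in a FrameLayout.
public class EyeOverlayView extends View {
    private Bitmap cartoon;          // the cartoon eye/face image
    private float lx = -1, ly = -1;  // latest landmark, in view coordinates

    public EyeOverlayView(Context context) {
        super(context);
    }

    // call this with each newly tracked landmark position
    public void setLandmark(float x, float y, Bitmap bitmap) {
        lx = x;
        ly = y;
        cartoon = bitmap;
        postInvalidate(); // safe to call from a non-UI (camera) thread
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (cartoon != null && lx >= 0) {
            // centre the bitmap on the landmark
            canvas.drawBitmap(cartoon,
                    lx - cartoon.getWidth() / 2f,
                    ly - cartoon.getHeight() / 2f, null);
        }
    }
}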

How to get color saturation or brightness from YUV420SP format in Eclipse

I have to make a simple app that will measure the saturation, brightness, etc. from the camera preview on Android. My code is currently receiving the image data in:
public void onPreviewFrame(byte[] data, Camera camera){
}
...and if I'm not wrong it is in YUV420SP format. I have tried to find some information about this but was unsuccessful. Can anyone tell me how to handle this format?
See the Android documentation: http://developer.android.com/reference/android/hardware/Camera.Parameters.html#setPreviewFormat(int); the image format is described here: http://www.fourcc.org/yuv.php#NV21.
In a nutshell, this byte[] contains two parts: luma and chroma. You can use the camera object to find the current parameters (don't do this in production code on every call to onPreviewFrame(), because these calls are a performance burden; reuse the values):
int w = camera.getParameters().getPreviewSize().width;
int h = camera.getParameters().getPreviewSize().height;

// NV21: w*h luma (Y) bytes first, then w*h/2 bytes of interleaved V/U
byte[] luma = new byte[w*h];
byte[] chroma = new byte[w*h/2];
System.arraycopy(data, 0, luma, 0, w*h);
System.arraycopy(data, w*h, chroma, 0, w*h/2);

// chroma is subsampled 2x2, so four neighbouring pixels share one V/U pair;
// mask with 0xff because Java bytes are signed
int Y_at_x_y = luma[x + y*w] & 0xff;               // or data[x + y*w] & 0xff
int V_at_x_y = chroma[(y/2)*w + (x/2)*2] & 0xff;   // V comes first in NV21
int U_at_x_y = chroma[(y/2)*w + (x/2)*2 + 1] & 0xff;
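Building on that layout, a hedged sketch of the saturation/brightness question itself (brightness and approxSaturation are illustrative helper names): Y is already the brightness, and a rough saturation estimate is the chroma's distance from the neutral value 128. This is an approximation, not the exact HSV saturation a full RGB conversion would give.

// brightness of pixel (x, y): just the luma byte, 0..255
static int brightness(byte[] data, int w, int x, int y) {
    return data[y * w + x] & 0xff;
}

// rough saturation of pixel (x, y), normalised to roughly 0..1
static double approxSaturation(byte[] data, int w, int h, int x, int y) {
    int base = w * h + (y / 2) * w + (x / 2) * 2; // NV21: interleaved V/U after luma
    int v = (data[base] & 0xff) - 128;
    int u = (data[base + 1] & 0xff) - 128;
    return Math.sqrt(u * u + v * v) / 128.0;
}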

Threshold face images in various light

I want to ask about some ideas / study materials connected to binarization. I am trying to create a system that detects human emotions. I am able to get areas such as brows, eyes, nose, and mouth, but then comes another stage -> processing...
My images are taken in various places, times of day, and weather conditions. This is problematic during binarization: with the same threshold value, some images come out fully black while others look fine and give me the information I want.
What I want to ask you about is:
1) Is there a known way to bring all images to the same level of brightness?
2) How can I create a dependency between the threshold value and the brightness of the image?
What I have tried so far is normalizing the image... but there is no effect; maybe I'm doing something wrong. I'm using OpenCV (for Android):
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY,0, 255, Core.NORM_MINMAX, CvType.CV_8U);
EDIT:
I tried adaptive thresholding and Otsu, but they didn't work for me. I have problems using CLAHE on Android, but I managed to implement the Niblack algorithm.
Core.normalize(cleanFaceMatGRAY, cleanFaceMatGRAY, 0, 255, Core.NORM_MINMAX, CvType.CV_8U);
niblackThresholding(cleanFaceMatGRAY, -0.2);

private void niblackThresholding(Mat image, double parameter) {
    // global mean and global mean of squared intensities
    Mat meanPowered = image.clone();
    Core.multiply(image, image, meanPowered);
    Scalar mean = Core.mean(image);
    Scalar stdmean = Core.mean(meanPowered);

    double thresholdValue = mean.val[0] + parameter * stdmean.val[0];

    int totalRows = image.rows();
    int totalCols = image.cols();
    for (int cols = 0; cols < totalCols; cols++) {
        for (int rows = 0; rows < totalRows; rows++) {
            if (image.get(rows, cols)[0] > thresholdValue) {
                image.put(rows, cols, 255);
            } else {
                image.put(rows, cols, 0);
            }
        }
    }
}
The results are really good, but still not good enough for some images. I paste links because the images are big and I don't want to take up too much screen space.
For example, this one is thresholded really well:
https://dl.dropboxusercontent.com/u/108321090/a1.png
https://dl.dropboxusercontent.com/u/108321090/a.png
But bad light sometimes produces shadows, which gives this effect:
https://dl.dropboxusercontent.com/u/108321090/b1.png
https://dl.dropboxusercontent.com/u/108321090/b.png
Do you have any idea what could help me improve the thresholding of images with large light differences (shadows)?
EDIT2:
I found that my previous algorithm was implemented in a wrong way: the standard deviation was calculated incorrectly, and in Niblack thresholding the mean is a local value, not a global one. I repaired it according to this reference: http://arxiv.org/ftp/arxiv/papers/1201/1201.5227.pdf
private void niblackThresholding2(Mat image, double parameter, int window) {
    int totalRows = image.rows();
    int totalCols = image.cols();
    int offset = (window - 1) / 2;
    double thresholdValue;
    double localMean;
    double meanDeviation;

    for (int row = offset + 1; row < totalRows - offset; row++) {
        for (int col = offset + 1; col < totalCols - offset; col++) {
            localMean = calculateLocalMean(row, col, image, window);
            meanDeviation = image.get(row, col)[0] - localMean;
            thresholdValue = localMean * (1 + parameter * ((meanDeviation / (1 - meanDeviation)) - 1));
            if (image.get(row, col)[0] > thresholdValue) {
                image.put(row, col, 255);
            } else {
                image.put(row, col, 0);
            }
        }
    }
}

private double calculateLocalMean(int row, int col, Mat image, int window) {
    int offset = (window - 1) / 2;
    // Point and Rect take (x, y) coordinates, i.e. (col, row)
    Point leftTop = new Point(col - (offset + 1), row - (offset + 1));
    Point bottomRight = new Point(col + offset, row + offset);
    Rect tempRect = new Rect(leftTop, bottomRight);
    Mat tempMat = new Mat(image, tempRect);
    return Core.mean(tempMat).val[0];
}
Results for a 7x7 window and the k parameter of 0.34 proposed in the reference (I still can't get rid of the shadow on the faces):
https://dl.dropboxusercontent.com/u/108321090/b2.png
https://dl.dropboxusercontent.com/u/108321090/b1.png
Things to look at (a short usage sketch follows the links):
http://docs.opencv.org/java/org/opencv/imgproc/CLAHE.html
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#adaptiveThreshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20int,%20int,%20int,%20double)
http://docs.opencv.org/java/org/opencv/imgproc/Imgproc.html#threshold(org.opencv.core.Mat,%20org.opencv.core.Mat,%20double,%20double,%20int) (THRESH_OTSU)
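A minimal sketch of how the first two could be combined on a grayscale face Mat; the parameter values here are illustrative assumptions, not tuned recommendations:

// Equalize local contrast first, then threshold adaptively so the threshold
// follows local brightness instead of being a single global value.
Mat gray = cleanFaceMatGRAY;                      // 8-bit single-channel input
CLAHE clahe = Imgproc.createCLAHE(2.0, new Size(8, 8));
clahe.apply(gray, gray);

Mat binary = new Mat();
Imgproc.adaptiveThreshold(gray, binary, 255,
        Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY,
        15,   // neighbourhood size (must be odd)
        5);   // constant subtracted from the local mean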

Camera preview processing on Android

I'm making a line follower for my robot on Android (to learn Java/Android programming). Currently I'm facing an image processing problem: the camera preview returns an image in a format called YUV, which I want to threshold in order to know where the line is. How would one do that?
As of now I've succeeded in getting something: I can definitely read data from the camera preview, and by some miracle I even know whether the light intensity is over or under a certain value at a certain area of the screen. My goal is to draw the robot's path on an overlay over the camera preview. That too works to some extent, but the problem is the YUV management.
As you can see, not only is the dark area drawn sideways, it also repeats itself 4 times, and the preview image is stretched. I cannot figure out how to fix these problems.
Here's the relevant part of the code:
public void surfaceCreated(SurfaceHolder arg0) {
    // camera setup
    mCamera = Camera.open();
    Camera.Parameters parameters = mCamera.getParameters();
    List<Camera.Size> sizes = parameters.getSupportedPreviewSizes();
    for (int i = 0; i < sizes.size(); i++) {
        Log.i("CS", i + " - width: " + sizes.get(i).width + " height: " + sizes.get(i).height
                + " size: " + (sizes.get(i).width * sizes.get(i).height));
    }

    // change preview size
    final Camera.Size cs = sizes.get(8);
    parameters.setPreviewSize(cs.width, cs.height);
    // initialize image data array
    imgData = new int[cs.width * cs.height];
    // make picture gray scale
    parameters.setColorEffect(Camera.Parameters.EFFECT_MONO);
    parameters.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
    mCamera.setParameters(parameters);

    // change display size
    LayoutParams params = (LayoutParams) mSurfaceView.getLayoutParams();
    params.height = (int) (mSurfaceView.getWidth() * cs.height / cs.width);
    mSurfaceView.setLayoutParams(params);
    LayoutParams overlayParams = (LayoutParams) swOverlay.getLayoutParams();
    overlayParams.width = mSurfaceView.getWidth();
    overlayParams.height = mSurfaceView.getHeight();
    swOverlay.setLayoutParams(overlayParams);

    try {
        mCamera.setPreviewDisplay(mSurfaceHolder);
        mCamera.setDisplayOrientation(90);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
        mCamera.stopPreview();
        mCamera.release();
    }

    // callback every time a new frame is available
    mCamera.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            // threshold the camera preview into a binary image
            int pixel, pixVal, frameSize = cs.width * cs.height;
            for (int i = 0; i < frameSize; i++) {
                pixel = (0xff & ((int) data[i])) - 16;
                if (pixel < threshold) {
                    pixVal = 0;
                } else {
                    pixVal = 1;
                }
                imgData[i] = pixVal;
            }

            int cp = imgData[(int) (cs.width * (0.5 + (cs.height / 2)))];
            //Log.i("CAMERA", "Center pixel RGB: " + cp);
            debug.setText("Center pixel: " + cp);

            // process preview image data
            Paint paint = new Paint();
            paint.setColor(Color.YELLOW);
            int start, finish, last;
            start = finish = last = -1;
            float x_ratio = mSurfaceView.getWidth() / cs.width;
            float y_ratio = mSurfaceView.getHeight() / cs.height;

            // display calculated path on overlay using canvas
            Canvas overlayCanvas = overlayHolder.lockCanvas();
            overlayCanvas.drawColor(0, Mode.CLEAR);
            // start by finding the tape from the bottom of the screen
            for (int y = cs.height; y > 0; y--) {
                for (int x = 0; x < cs.width; x++) {
                    pixel = imgData[y * cs.height + x];
                    if (pixel == 1 && last == 0 && start == -1) {
                        start = x;
                    } else if (pixel == 0 && last == 1 && finish == -1) {
                        finish = x;
                        break;
                    }
                    last = pixel;
                }
                //overlayCanvas.drawLine(start * x_ratio, y * y_ratio, finish * x_ratio, y * y_ratio, paint);
                //start = finish = last = -1;
            }
            overlayHolder.unlockCanvasAndPost(overlayCanvas);
        }
    });
}
This code sometimes generates an error when quitting the application, due to some method being called after release(), which is the least of my problems.
UPDATE:
Now that the orientation problem is fixed (it was the CCD sensor orientation), I'm still facing the repetition problem; this is probably related to my YUV data management...
Your surface and camera management looks correct, but I would double-check that the camera actually accepted the preview size settings (some camera implementations silently reject some settings).
As you are working in portrait mode, you have to keep in mind that the camera does not care about phone orientation: its coordinate origin is determined by the CCD chip, it sits at the right corner, and the scan direction is from top to bottom and right to left, quite different from your overlay canvas. (If you were in landscape mode, everything would be correct.) This is certainly a source of the odd drawing results.
Your thresholding is a bit naive and not very useful in real life; I would suggest adaptive thresholding. In our javaocr project (pure Java, with Android demos) we implemented efficient Sauvola binarisation (see the demos):
http://sourceforge.net/projects/javaocr/
Binarisation performance can be improved by working only on single image rows (patches welcome).
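For reference, a hedged sketch of Sauvola's formula itself, not the javaocr implementation: the threshold is T = m * (1 + k * (s/R - 1)), with local mean m and local standard deviation s over a small window, k typically around 0.2-0.5 and R = 128 for 8-bit images.

// Naive per-pixel Sauvola threshold; the caller must keep the window
// fully inside the image bounds.
static int sauvolaThreshold(int[][] gray, int cx, int cy, int window, double k) {
    int offset = window / 2;
    double sum = 0, sumSq = 0;
    int n = 0;
    for (int y = cy - offset; y <= cy + offset; y++) {
        for (int x = cx - offset; x <= cx + offset; x++) {
            int v = gray[y][x];
            sum += v;
            sumSq += (double) v * v;
            n++;
        }
    }
    double mean = sum / n;
    double std = Math.sqrt(sumSq / n - mean * mean);
    return (int) (mean * (1 + k * (std / 128.0 - 1)));
}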
The issue with the UV part of the image is easy: the default format is NV21, the luminance comes first, and it is just a byte stream; you do not need the UV part of the image at all (look into the demos above).
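To illustrate that last point, a minimal hedged sketch of thresholding only the luma plane of an NV21 preview frame (note that the row stride is the frame width):

// Binarise the luma plane of an NV21 frame: the first width*height bytes of
// 'data' are the Y values, one byte per pixel, row stride equal to the width.
static void binariseLuma(byte[] data, int width, int height, int[] out, int threshold) {
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int lum = data[y * width + x] & 0xff;  // stride is the width, not the height
            out[y * width + x] = (lum < threshold) ? 0 : 1;
        }
    }
}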
