I'm using OpenCV 3.0.0 in Android Studio, and I need to convert an RGB image to YIQ, which takes a few element-wise multiply, add, and subtract operations. I used OpenCV's Core.split to extract the red, green, and blue channels from the RGB image, then used those channels to compute the YIQ image with the standard conversion equations. After testing my app I got this error:
A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x19b in tid 27631 (orstackoverflow), pid 27631 (orstackoverflow)
Here is my code:
public class MainActivity extends AppCompatActivity {
private static final String TAG = "3:qinQctivity";
Button button;
ImageView imageView;
ArrayList<Mat> RGB = new ArrayList<Mat>(3);
ArrayList<Mat> YIQ = new ArrayList<Mat>(3);
Mat newImage;
Mat Blue,Green,Red,I,Y,Q,B,X,D,W;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
button = findViewById(R.id.button);
imageView = findViewById(R.id.image);
button.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View view) {
if (!OpenCVLoader.initDebug()) {
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_0_0, getApplicationContext(), baseLoaderCallback);
} else {
baseLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
}
}
});
}
BaseLoaderCallback baseLoaderCallback = new BaseLoaderCallback(this) {
@Override
public void onManagerConnected(int status) {
super.onManagerConnected(status);
if (status == LoaderCallbackInterface.SUCCESS) {
try {
Y = new Mat();
Q = new Mat();
I = new Mat();
X = new Mat();
newImage = Utils.loadResource(getApplicationContext(), R.drawable.retinalimage, CvType.CV_32FC3);
Core.split(newImage, RGB);
Blue = RGB.get(0);
Red = RGB.get(1);
Green = RGB.get(2);
B = new Mat(); // result
D = new Mat(); // result
W = new Mat(); // result
/*working on Y channel*/
Scalar alpha_Y = new Scalar(0.299); // the factor
Scalar alpha1_Y = new Scalar(0.587); // the factor
Scalar alpha2_Y = new Scalar(0.114); // the factor
Core.multiply(Red,alpha_Y, B);
Core.multiply(Green,alpha1_Y,D);
Core.multiply(Blue,alpha2_Y,W);
Core.add(B,D,Y);
Log.i(TAG, "onManagerConnected: "+ Y.toString());
Core.add(Y,W,Y);
/*I = 0.211 * Red - 0.523 * Green + 0.312 * Blue;*/
Mat Z = new Mat(); // result
Mat P = new Mat(); // result
Mat O = new Mat(); // result
/*working on I channel*/
Scalar alpha_I = new Scalar(0.211); // the factor
Scalar alpha1_I = new Scalar(0.523); // the factor
Scalar alpha2_I = new Scalar(0.312); // the factor
Core.multiply(Red,alpha_I,Z);
Core.multiply(Green,alpha1_I,P);
Core.multiply(Blue,alpha2_I,O);
Core.subtract(Z,P,I); // the green term is subtracted, per the equation above
Core.add(I,O,I);
/*working on Q channel*/
/*Q = 0.596 * Red - 0.274 * Green - 0.322 * Blue;*/
Mat V = new Mat();
Mat W = new Mat();
Mat N = new Mat();
Scalar alpha_Q = new Scalar(0.596); // the factor
Scalar alpha1_Q = new Scalar(0.274); // the factor
Scalar alpha2_Q = new Scalar(0.322); // the factor
Core.multiply(Red,alpha_Q,V);
Core.multiply(Green,alpha1_Q,W);
Core.multiply(Blue,alpha2_Q,N);
Core.subtract(V,W,Q);
Core.subtract(Q,N,Q);
YIQ.add(Y);
YIQ.add(I);
YIQ.add(Q);
Core.merge(YIQ,X);
showImage(X);
} catch(IOException e){
e.printStackTrace();
}
}
}
};
void showImage (Mat y){
Bitmap bm = Bitmap.createBitmap(y.width(), y.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(y, bm);
imageView.setImageBitmap(bm);
}
}
You are not allocating memory for the Mats B, D, W, Z, P, O, .... You need to get the size of the original RGB matrices and pass that size to the new Mat constructors. According to the OpenCV documentation for the multiply function:
Parameters:
src1 - First source array.
src2 - Second source array of the same size and the same type as src1.
dst - Destination array of the same size and type as src1.
(highlights mine)
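Separately from the allocation issue, the per-pixel arithmetic that those Core.multiply/add/subtract calls implement can be checked in plain Java. This is my own sketch (class and method names are mine, not from the post); it follows the coefficients used in the question:

```java
// Plain-Java sketch of the per-pixel YIQ math from the question, on
// normalized RGB values in [0, 1]. Names are illustrative, not from the post.
class YiqDemo {
    // Y = 0.299*R + 0.587*G + 0.114*B (luma)
    static double y(double r, double g, double b) {
        return 0.299 * r + 0.587 * g + 0.114 * b;
    }

    // I = 0.211*R - 0.523*G + 0.312*B (note the subtraction, per the
    // question's comment; the posted code adds all three terms)
    static double i(double r, double g, double b) {
        return 0.211 * r - 0.523 * g + 0.312 * b;
    }

    // Q = 0.596*R - 0.274*G - 0.322*B
    static double q(double r, double g, double b) {
        return 0.596 * r - 0.274 * g - 0.322 * b;
    }

    public static void main(String[] args) {
        // White (1,1,1) should give full luma and (near-)zero chroma.
        System.out.println(YiqDemo.y(1, 1, 1)); // ~1.0
        System.out.println(YiqDemo.i(1, 1, 1)); // ~0.0
        System.out.println(YiqDemo.q(1, 1, 1)); // ~0.0
    }
}
```

Because each coefficient triple sums to 1 (for Y) or 0 (for I and Q), a white pixel maps to pure luma, which is a quick sanity check on the Mat pipeline.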
Related
I am using OpenCV in my Android application to detect the RGB value of an object in front of the camera. I can detect the RGB value successfully (not perfectly, because my phone camera keeps adjusting the brightness), and I want to plot a graph of the red/green/blue values against time. I would also like to limit the video to 1 minute. I am using the GraphView library to plot the graph. The problem is that GraphView takes two parameters, x (time) and y (my R/G/B value), and since I could not find a way to set a timer, I can't plot the graph.
Can anyone help me understand how to set a timer so that the video is recorded for a set period of time?
public class VideoRecordingActivity extends AppCompatActivity implements CameraBridgeViewBase.CvCameraViewListener2{
//java camera view
JavaCameraView javaCameraView;
Mat mRgba, mHsv;
GraphView graph;
//callback loader
BaseLoaderCallback mCallBackLoader = new BaseLoaderCallback(this) {
@Override
public void onManagerConnected(int status) {
switch (status){
case BaseLoaderCallback.SUCCESS:
javaCameraView.enableView();
break;
default:
super.onManagerConnected(status);
break;
}
}
};
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_video_recording);
//connect the camera
javaCameraView = (JavaCameraView)findViewById(R.id.java_camera_view);
//set visibility
javaCameraView.setVisibility(SurfaceView.VISIBLE);
javaCameraView.setMaxFrameSize(320, 240);
javaCameraView.enableFpsMeter();
javaCameraView.clearFocus();
//set callback function
javaCameraView.setCvCameraViewListener(this);
/*Graph*/
graph = (GraphView)findViewById(R.id.graphView);
}
@Override
protected void onPause() {
super.onPause();
if(javaCameraView!=null){
javaCameraView.disableView();
}
}
@Override
protected void onDestroy() {
super.onDestroy();
if (javaCameraView!=null){
javaCameraView.disableView();
}
}
@Override
protected void onResume() {
super.onResume();
if (OpenCVLoader.initDebug()){
Log.d("openCV", "Connected");
//display when the activity resumed,, callback loader
mCallBackLoader.onManagerConnected(LoaderCallbackInterface.SUCCESS);
}else{
Log.d("openCV", "Not connected");
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_3_0, this, mCallBackLoader);
}
}
@Override
public void onCameraViewStarted(int width, int height) {
//4 channel
mRgba = new Mat(width, height, CvType.CV_8UC4);
mHsv = new Mat(width, height, CvType.CV_8UC3);
}
@Override
public void onCameraViewStopped() {
//release
mRgba.release();
}
@Override
public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
//get each frame from camera
mRgba = inputFrame.rgba();
/**********HSV conversion**************/
//convert mat rgb to mat hsv
Imgproc.cvtColor(mRgba, mHsv, Imgproc.COLOR_RGB2HSV);
//find scalar sum of hsv
Scalar mColorHsv = Core.sumElems(mHsv);
int pointCount = 320*240;
//convert each pixel
for (int i = 0; i < mColorHsv.val.length; i++) {
mColorHsv.val[i] /= pointCount;
}
//convert hsv scalar to rgb scalar
Scalar mColorRgb = convertScalarHsv2Rgba(mColorHsv);
//print scalar value
Log.d("intensity", "R:"+ String.valueOf(mColorRgb.val[0])+" G:"+String.valueOf(mColorRgb.val[1])+" B:"+String.valueOf(mColorRgb.val[2]));
int R = (int) mColorRgb.val[0];
int G = (int) mColorRgb.val[1];
int B = (int) mColorRgb.val[2];
// Log.d("intensity", "Y:" + Y + " U:" + U + " V:" + V); // Y/U/V are not computed in this snippet
/*>>>>>>>>GRAPH<<<<<<<<<<*/
LineGraphSeries<DataPoint> series = new LineGraphSeries<>(new DataPoint[]{
new DataPoint(0, R),
new DataPoint(1, R),
new DataPoint(2, R),
new DataPoint(3, R)
});
graph.addSeries(series);
return mRgba;
}
//convert Mat hsv to scalar
private Scalar convertScalarHsv2Rgba(Scalar hsvColor) {
Mat pointMatRgba = new Mat();
Mat pointMatHsv = new Mat(1, 1, CvType.CV_8UC3, hsvColor);
Imgproc.cvtColor(pointMatHsv, pointMatRgba, Imgproc.COLOR_HSV2RGB);
return new Scalar(pointMatRgba.get(0, 0));
}
}
The video runs at around 15-20 fps, and I want to plot the R/G/B value of each frame against time.
Any help will be very much appreciated.
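One way to approach the timing part (this is my own suggestion, not from the post) is to record a start time when the camera view starts and derive each frame's x-value from elapsed milliseconds, stopping once 60 seconds have passed. The timing arithmetic can be sketched in plain Java; FrameTimer is a hypothetical helper name, not an Android or OpenCV API:

```java
// Hedged sketch: map camera frames to elapsed seconds and stop appending
// graph points after a fixed limit (e.g. 60 000 ms for one minute).
class FrameTimer {
    private final long startMillis;
    private final long limitMillis;

    FrameTimer(long startMillis, long limitMillis) {
        this.startMillis = startMillis;
        this.limitMillis = limitMillis;
    }

    // Seconds since start; usable as the x-value of a GraphView DataPoint.
    double elapsedSeconds(long nowMillis) {
        return (nowMillis - startMillis) / 1000.0;
    }

    // True while we are still within the recording limit.
    boolean withinLimit(long nowMillis) {
        return nowMillis - startMillis < limitMillis;
    }
}
```

In onCameraFrame one could then append new DataPoint(timer.elapsedSeconds(now), R) only while timer.withinLimit(now) is true, and disable the camera view otherwise.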
I'm an OpenCV learner trying image comparison, using OpenCV 2.4.13.3.
I have these two images, 1.jpg and cam1.jpg.
When I run the following code in OpenCV:
File sdCard = Environment.getExternalStorageDirectory();
String path1, path2;
path1 = sdCard.getAbsolutePath() + "/1.jpg";
path2 = sdCard.getAbsolutePath() + "/cam1.jpg";
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.BRIEF);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
Mat img1 = Highgui.imread(path1);
Mat img2 = Highgui.imread(path2);
Mat descriptors1 = new Mat();
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
detector.detect(img1, keypoints1);
extractor.compute(img1, keypoints1, descriptors1);
//second image
// Mat img2 = Imgcodecs.imread(path2);
Mat descriptors2 = new Mat();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
detector.detect(img2, keypoints2);
extractor.compute(img2, keypoints2, descriptors2);
//matcher image descriptors
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1,descriptors2,matches);
// Filter matches by distance
MatOfDMatch filtered = filterMatchesByDistance(matches);
int total = (int) matches.size().height;
int Match= (int) filtered.size().height;
Log.d("LOG", "total:" + total + " Match:"+Match);
The filterMatchesByDistance method:
static MatOfDMatch filterMatchesByDistance(MatOfDMatch matches){
List<DMatch> matches_original = matches.toList();
List<DMatch> matches_filtered = new ArrayList<DMatch>();
int DIST_LIMIT = 30;
// Check all the matches distance and if it passes add to list of filtered matches
Log.d("DISTFILTER", "ORG SIZE:" + matches_original.size() + "");
for (int i = 0; i < matches_original.size(); i++) {
DMatch d = matches_original.get(i);
if (Math.abs(d.distance) <= DIST_LIMIT) {
matches_filtered.add(d);
}
}
Log.d("DISTFILTER", "FIL SIZE:" + matches_filtered.size() + "");
MatOfDMatch mat = new MatOfDMatch();
mat.fromList(matches_filtered);
return mat;
}
Log:
total:122 Match:30
As we can see from the log, Match is 30.
But as we can see, both images have the same visual element (in).
How can I get Match = 90 using OpenCV?
It would be great if somebody could help with a code snippet. If it is not possible using OpenCV, what other alternatives can we look at?
But as we can see both images have same visual element (in).
So we should compare not the whole images, but the "same visual element" within them. You can improve the Match value further if you compare not the "template" and "camera" images themselves, but both images processed the same way (converted to binary black/white, for example). For example, try to find the blue square (the background of the template logo) in both the "template" and "camera" images and compare those squares (regions of interest). The code may look something like this:
Bitmap bmImageTemplate = <get your template image Bitmap>;
Bitmap bmTemplate = findLogo(bmImageTemplate); // process template image
Bitmap bmImage = <get your camera image Bitmap>;
Bitmap bmLogo = findLogo(bmImage); // process camera image same way
compareBitmaps(bmTemplate, bmLogo);
where
private Bitmap findLogo(Bitmap sourceBitmap) {
Bitmap roiBitmap = null;
Mat sourceMat = new Mat(sourceBitmap.getWidth(), sourceBitmap.getHeight(), CvType.CV_8UC3);
Utils.bitmapToMat(sourceBitmap, sourceMat);
Mat roiTmp = sourceMat.clone();
final Mat hsvMat = new Mat();
sourceMat.copyTo(hsvMat);
// convert mat to HSV format for Core.inRange()
Imgproc.cvtColor(hsvMat, hsvMat, Imgproc.COLOR_RGB2HSV);
Scalar lowerb = new Scalar(85, 50, 40); // lower color border for BLUE
Scalar upperb = new Scalar(135, 255, 255); // upper color border for BLUE
Core.inRange(hsvMat, lowerb, upperb, roiTmp); // select only blue pixels
// find contours
List<MatOfPoint> contours = new ArrayList<>();
List<Rect> squares = new ArrayList<>();
Imgproc.findContours(roiTmp, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
// find appropriate bounding rectangles
for (MatOfPoint contour : contours) {
MatOfPoint2f areaPoints = new MatOfPoint2f(contour.toArray());
RotatedRect boundingRect = Imgproc.minAreaRect(areaPoints);
double rectangleArea = boundingRect.size.area();
// test min ROI area in pixels
if (rectangleArea > 400) {
Point rotated_rect_points[] = new Point[4];
boundingRect.points(rotated_rect_points);
Rect rect = Imgproc.boundingRect(new MatOfPoint(rotated_rect_points));
double aspectRatio = rect.width > rect.height ?
(double) rect.height / (double) rect.width : (double) rect.width / (double) rect.height;
if (aspectRatio >= 0.9) {
squares.add(rect);
}
}
}
Mat logoMat = extractSquareMat(roiTmp, getBiggestSquare(squares));
roiBitmap = Bitmap.createBitmap(logoMat.cols(), logoMat.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(logoMat, roiBitmap);
return roiBitmap;
}
The extractSquareMat() method just extracts the region of interest (the logo) from the whole image:
public static Mat extractSquareMat(Mat sourceMat, Rect rect) {
Mat squareMat = null;
int padding = 50;
if (rect != null) {
Rect truncatedRect = new Rect((int) rect.tl().x + padding, (int) rect.tl().y + padding,
rect.width - 2 * padding, rect.height - 2 * padding);
squareMat = new Mat(sourceMat, truncatedRect);
}
return squareMat ;
}
and compareBitmaps() is just a wrapper for your code:
private void compareBitmaps(Bitmap bitmap1, Bitmap bitmap2) {
Mat mat1 = new Mat(bitmap1.getWidth(), bitmap1.getHeight(), CvType.CV_8UC3);
Utils.bitmapToMat(bitmap1, mat1);
Mat mat2 = new Mat(bitmap2.getWidth(), bitmap2.getHeight(), CvType.CV_8UC3);
Utils.bitmapToMat(bitmap2, mat2);
compareMats(mat1, mat2);
}
Your code as a method:
private void compareMats(Mat img1, Mat img2) {
FeatureDetector detector = FeatureDetector.create(FeatureDetector.ORB);
DescriptorExtractor extractor = DescriptorExtractor.create(DescriptorExtractor.BRIEF);
DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
Mat descriptors1 = new Mat();
MatOfKeyPoint keypoints1 = new MatOfKeyPoint();
detector.detect(img1, keypoints1);
extractor.compute(img1, keypoints1, descriptors1);
//second image
// Mat img2 = Imgcodecs.imread(path2);
Mat descriptors2 = new Mat();
MatOfKeyPoint keypoints2 = new MatOfKeyPoint();
detector.detect(img2, keypoints2);
extractor.compute(img2, keypoints2, descriptors2);
//matcher image descriptors
MatOfDMatch matches = new MatOfDMatch();
matcher.match(descriptors1,descriptors2,matches);
// Filter matches by distance
MatOfDMatch filtered = filterMatchesByDistance(matches);
int total = (int) matches.size().height;
int Match= (int) filtered.size().height;
Log.d("LOG", "total:" + total + " Match:" + Match);
}
static MatOfDMatch filterMatchesByDistance(MatOfDMatch matches){
List<DMatch> matches_original = matches.toList();
List<DMatch> matches_filtered = new ArrayList<DMatch>();
int DIST_LIMIT = 30;
// Check all the matches distance and if it passes add to list of filtered matches
Log.d("DISTFILTER", "ORG SIZE:" + matches_original.size() + "");
for (int i = 0; i < matches_original.size(); i++) {
DMatch d = matches_original.get(i);
if (Math.abs(d.distance) <= DIST_LIMIT) {
matches_filtered.add(d);
}
}
Log.d("DISTFILTER", "FIL SIZE:" + matches_filtered.size() + "");
MatOfDMatch mat = new MatOfDMatch();
mat.fromList(matches_filtered);
return mat;
}
As a result, for the resized (scaled to 50%) images saved from your question, the result is:
D/DISTFILTER: ORG SIZE:237
D/DISTFILTER: FIL SIZE:230
D/LOG: total:237 Match:230
NB! This is a quick-and-dirty example that demonstrates the approach for the given template only.
P.S. getBiggestSquare() can look like this (based on comparison by area):
public static Rect getBiggestSquare(List<Rect> squares) {
Rect biggestSquare = null;
if (squares != null && squares.size() >= 1) {
Rect square;
double maxArea;
int ixMaxArea = 0;
square = squares.get(ixMaxArea);
maxArea = square.area();
for (int ix = 1; ix < squares.size(); ix++) {
square = squares.get(ix);
if (square.area() > maxArea) {
maxArea = square.area();
ixMaxArea = ix;
}
}
biggestSquare = squares.get(ixMaxArea);
}
return biggestSquare;
}
I am trying to implement paper detection with OpenCV. I understand the concept of how to get there:
Input -> Canny -> Blur -> Find contours -> Search for a (closed) quadrilateral -> Draw contour
but I am still new to OpenCV programming, so I am having issues implementing it. I found help in this answer:
Android OpenCV Paper Sheet detection
but it draws contours on every possible line. Here is the code I am trying to implement.
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
mRgba = inputFrame.rgba();
Imgproc.drawContours(mRgba,findContours(mRgba), 0, new Scalar(0 , 255, 0), 5);
return mRgba;
}
public static class Quadrilateral {
public MatOfPoint contour;
public Point[] points;
public Quadrilateral(MatOfPoint contour, Point[] points) {
this.contour = contour;
this.points = points;
}
}
public static Quadrilateral findDocument( Mat inputRgba ) {
ArrayList<MatOfPoint> contours = findContours(inputRgba);
Quadrilateral quad = getQuadrilateral(contours);
return quad;
}
private static ArrayList<MatOfPoint> findContours(Mat src) {
double ratio = src.size().height / 500;
int height = Double.valueOf(src.size().height / ratio).intValue();
int width = Double.valueOf(src.size().width / ratio).intValue();
Size size = new Size(width,height);
Mat resizedImage = new Mat(size, CvType.CV_8UC4);
Mat grayImage = new Mat(size, CvType.CV_8UC4);
Mat cannedImage = new Mat(size, CvType.CV_8UC1);
Imgproc.resize(src,resizedImage,size);
Imgproc.cvtColor(resizedImage, grayImage, Imgproc.COLOR_RGBA2GRAY, 4);
Imgproc.GaussianBlur(grayImage, grayImage, new Size(5, 5), 0);
Imgproc.Canny(grayImage, cannedImage, 75, 200);
ArrayList<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(cannedImage, contours, hierarchy, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
hierarchy.release();
Collections.sort(contours, new Comparator<MatOfPoint>() {
@Override
public int compare(MatOfPoint lhs, MatOfPoint rhs) {
return Double.valueOf(Imgproc.contourArea(rhs)).compareTo(Imgproc.contourArea(lhs));
}
});
resizedImage.release();
grayImage.release();
cannedImage.release();
return contours;
}
private static Quadrilateral getQuadrilateral(ArrayList<MatOfPoint> contours) {
for ( MatOfPoint c: contours ) {
MatOfPoint2f c2f = new MatOfPoint2f(c.toArray());
double peri = Imgproc.arcLength(c2f, true);
MatOfPoint2f approx = new MatOfPoint2f();
Imgproc.approxPolyDP(c2f, approx, 0.02 * peri, true);
Point[] points = approx.toArray();
// select biggest 4 angles polygon
if (points.length == 4) {
Point[] foundPoints = sortPoints(points);
return new Quadrilateral(c, foundPoints);
}
}
return null;
}
private static Point[] sortPoints(Point[] src) {
ArrayList<Point> srcPoints = new ArrayList<>(Arrays.asList(src));
Point[] result = { null , null , null , null };
Comparator<Point> sumComparator = new Comparator<Point>() {
@Override
public int compare(Point lhs, Point rhs) {
return Double.valueOf(lhs.y + lhs.x).compareTo(rhs.y + rhs.x);
}
};
Comparator<Point> diffComparator = new Comparator<Point>() {
@Override
public int compare(Point lhs, Point rhs) {
return Double.valueOf(lhs.y - lhs.x).compareTo(rhs.y - rhs.x);
}
};
// top-left corner = minimal sum
result[0] = Collections.min(srcPoints, sumComparator);
// bottom-right corner = maximal sum
result[2] = Collections.max(srcPoints, sumComparator);
// top-right corner = minimal diference
result[1] = Collections.min(srcPoints, diffComparator);
// bottom-left corner = maximal diference
result[3] = Collections.max(srcPoints, diffComparator);
return result;
}
The answer suggests that I should use the Quadrilateral object and call Imgproc.drawContours(), but that function takes a List as an argument, whereas the Quadrilateral object contains a MatOfPoint and a Point[]. Can someone help me through this? I am using OpenCV (3.3) and Android (1.5.1).
Here is a sample of what it should look like.
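For the list-argument question above, one hedged possibility (a sketch only, not tested against this code) is to wrap the single MatOfPoint held by the Quadrilateral in a one-element list before drawing:

```java
// Sketch only: Imgproc.drawContours takes a List<MatOfPoint>, so wrap the
// quadrilateral's single contour; index 0 selects that one contour.
Quadrilateral quad = findDocument(mRgba);
if (quad != null) {
    java.util.List<MatOfPoint> toDraw = java.util.Collections.singletonList(quad.contour);
    Imgproc.drawContours(mRgba, toDraw, 0, new Scalar(0, 255, 0), 5);
}
```

The null check matters because getQuadrilateral() returns null when no four-point contour is found in the frame.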
I have a piece of code that removes a bitmap's background. At first the code took about 40 s to remove each background, but I optimized it to about 12 s on a Huawei Y320-U30 (quad-core, 1.3 GHz) and 6 s on a Galaxy S3 (quad-core). Although this is much better than the first version, it is not good enough: I want it in the 2-3 s range on the Huawei, since a typical user will probably have a low-end device. I am using an AsyncTask on a fixed thread pool of 2, with thread priority Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND + Process.THREAD_PRIORITY_MORE_FAVORABLE). I know multithreading is the ability to run multiple tasks in a single time frame, but isn't there a way to run 2 or 3 threads on one task? In my case that would mean running several threads on the background-removal code itself. I am sure this could yield better performance, because no other thread in my app runs while this code runs, so it has all the resources at run time. I looked online but could not find anything related. Below is my code:
public class ImageBackgrndRemover extends AppCompatActivity {
private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) {
@Override
public void onManagerConnected(int status) {
switch (status) {
case LoaderCallbackInterface.SUCCESS:
{
Log.i(TAG, "OpenCV loaded successfully");
//instantia
if(!alreadyRun) {
BackgroundRemover remover = null;
if(remover == null)
{
remover = new BackgroundRemover();
remover.executeOnExecutor(AsyncTask.DUAL_THREAD_EXECUTOR);
}
}
} break;
default:
{
super.onManagerConnected(status);
} break;
}
}
};
private LoadingAnimation add;
ImageView iv;
Scalar color;
Mat dst;
private boolean alreadyRun;
public static final String TAG = "Grabcut demo";
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_image_backgrnd_remover);
iv = (ImageView) this.findViewById(R.id.imagePreview);
}
private int calculatePercentage(int percentage, int target)
{
int k = (int)(target*(percentage/100.0f));
return k;
}
private Bitmap backGrndErase()
{
color = new Scalar(255, 0, 0, 255);
dst = new Mat();
Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.myshirt);
Log.d(TAG, "bitmap: " + bitmap.getWidth() + "x" + bitmap.getHeight());
bitmap = ResizeImage.getResizedBitmap(bitmap, calculatePercentage(40, bitmap.getWidth()), calculatePercentage(40, bitmap.getHeight()));
// Bitmap bitmap2 = ImageCornerMoulder.getRoundedCornerBitmap(bmp, calculatePercentage(5, bmp.getHeight()));
bitmap = bitmap.copy(Bitmap.Config.ARGB_8888, true);
Log.d(TAG, "bitmap 8888: " + bitmap.getWidth() + "x" + bitmap.getHeight());
//GrabCut part
Mat img = new Mat();
Utils.bitmapToMat(bitmap, img);
Log.d(TAG, "img: " + img);
int r = img.rows();
int c = img.cols();
Point p1 = new Point(c/10, r/10);
Point p2 = new Point(c-c/10, r-r/10);
int border = 20;
int border2 = border + border;
Rect rect2 = new Rect(border,border,img.cols()-border2,img.rows()-border2);
Rect rect = new Rect(p1,p2);
Log.d(TAG, "rect: " + rect);
Mat mask = new Mat();
debugger(""+mask.type());
mask.setTo(new Scalar(125));
Mat fgdModel = new Mat();
fgdModel.setTo(new Scalar(255, 255, 255));
Mat bgdModel = new Mat();
bgdModel.setTo(new Scalar(255, 255, 255));
Mat imgC3 = new Mat();
Imgproc.cvtColor(img, imgC3, Imgproc.COLOR_RGBA2RGB);
Log.d(TAG, "imgC3: " + imgC3);
Log.d(TAG, "Grabcut begins");
Imgproc.grabCut(imgC3, mask, rect2, bgdModel, fgdModel, 2, Imgproc.GC_INIT_WITH_RECT);
Mat source = new Mat(1, 1, CvType.CV_8U, new Scalar(3.0));
Core.compare(mask, source, mask, Core.CMP_EQ);
Mat foreground = new Mat(img.size(), CvType.CV_8UC3, new Scalar(255, 255, 255));
img.copyTo(foreground, mask);
Imgproc.rectangle(img, p1, p2, color);
Mat background = new Mat();
try {
background = Utils.loadResource(getApplicationContext(),
R.drawable.blackcolor );
} catch (IOException e) {
e.printStackTrace();
}
Mat tmp = new Mat();
Imgproc.resize(background, tmp, img.size());
background = tmp;
Mat tempMask = new Mat(foreground.size(), CvType.CV_8UC1, new Scalar(255, 255, 255));
Imgproc.cvtColor(foreground, tempMask, 6/* COLOR_BGR2GRAY */);
//Imgproc.threshold(tempMask, tempMask, 254, 255, 1 /* THRESH_BINARY_INV */);
Mat vals = new Mat(1, 1, CvType.CV_8UC3, new Scalar(0.0));
dst = new Mat();
background.setTo(vals, tempMask);
Imgproc.resize(foreground, tmp, mask.size());
foreground = tmp;
Core.add(background, foreground, dst, tempMask);
Log.d(TAG, "Convert to Bitmap");
//removing blackbaground started
/***
Mat tmp2 = new Mat();
Mat alpha = new Mat();
Imgproc.cvtColor(dst, tmp2, Imgproc.COLOR_BGR2GRAY);
Imgproc.threshold(tmp2, alpha, 100, 255, Imgproc.THRESH_BINARY);
List<Mat> rgb = new ArrayList<Mat>(3);
Core.split(dst, rgb);
List<Mat> rgba = new ArrayList<Mat>(4);
rgba.add(rgb.get(0));
rgba.add(rgb.get(1));
rgba.add(rgb.get(2));
rgba.add(alpha);
Core.merge(rgba, dst);
Bitmap output = Bitmap.createBitmap(dst.width(), dst.height(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(dst, output);
***/
//removing back ended
Utils.matToBitmap(dst, bitmap);
//release MAT part
img.release();
imgC3.release();
mask.release();
fgdModel.release();
bgdModel.release();
alreadyRun = true;
return bitmap;
}
private void showLoadingIndicator()
{
android.support.v4.app.FragmentManager fm = getSupportFragmentManager();
add = LoadingAnimation.newInstance("b");
add.show(fm, "");
add.setCancelable(false);
}
private void dismissLoadingIndicator()
{
try {
add.dismiss();
}catch (Exception e)
{
e.printStackTrace();
}
}
private class BackgroundRemover extends AsyncTask<Void, Void, Bitmap>
{
@Override
protected void onPreExecute()
{
showLoadingIndicator();
}
@Override
protected Bitmap doInBackground(Void... voids) {
Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND +
Process.THREAD_PRIORITY_MORE_FAVORABLE);
try{
return backGrndErase();
}catch (Exception e)
{
System.err.println("Failed to remove background");
}
return null;
}
@Override
protected void onPostExecute(Bitmap bitmap) {
// iv.setBackgroundResource(R.drawable.blackcolor);
iv.setImageBitmap(bitmap);
dismissLoadingIndicator();
}
}
@Override
public void onResume()
{
super.onResume();
if (!OpenCVLoader.initDebug()) {
Log.d(TAG, "Internal OpenCV library not found. Using OpenCV Manager for initialization");
OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_0_0, this, mLoaderCallback);
} else {
Log.d(TAG, "OpenCV library found inside package. Using it!");
mLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
}
}
private static Bitmap makeBlackTransparent(Bitmap image) {
// convert image to matrix
Mat src = new Mat(image.getWidth(), image.getHeight(), CvType.CV_8UC4);
Utils.bitmapToMat(image, src);
// init new matrices
Mat dst = new Mat(image.getWidth(), image.getHeight(), CvType.CV_8UC4);
Mat tmp = new Mat(image.getWidth(), image.getHeight(), CvType.CV_8UC4);
Mat alpha = new Mat(image.getWidth(), image.getHeight(), CvType.CV_8UC4);
// convert image to grayscale
Imgproc.cvtColor(src, tmp, Imgproc.COLOR_BGR2GRAY);
// threshold the image to create alpha channel with complete transparency in black background region and zero transparency in foreground object region.
Imgproc.threshold(tmp, alpha, 100, 255, Imgproc.THRESH_BINARY);
// split the original image into three single channel.
List<Mat> rgb = new ArrayList<Mat>(3);
Core.split(src, rgb);
// Create the final result by merging three single channel and alpha(BGRA order)
List<Mat> rgba = new ArrayList<Mat>(4);
rgba.add(rgb.get(0));
rgba.add(rgb.get(1));
rgba.add(rgb.get(2));
rgba.add(alpha);
Core.merge(rgba, dst);
// convert matrix to output bitmap
Bitmap output = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(dst, output);
return output;
}
public void debugger(String s){
Log.v("", "########### " + s);
}
}
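Running several threads on one task, as asked above, is possible when the work decomposes by data region: for example, process the top and bottom halves of the pixel data on separate threads and join the partial results. A hedged plain-Java sketch (names are mine, not from the post; note that a single Imgproc.grabCut call cannot be split this way, since its iterations are internal to OpenCV):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hedged sketch: run one "task" (here, summing pixel values) on two threads
// by splitting the data in half, then joining the partial results.
class SplitWork {
    static long parallelSum(int[] pixels) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        int mid = pixels.length / 2;
        // Each future processes one half of the array concurrently.
        Future<Long> lo = pool.submit(() -> sum(pixels, 0, mid));
        Future<Long> hi = pool.submit(() -> sum(pixels, mid, pixels.length));
        long total = lo.get() + hi.get(); // get() blocks until each half is done
        pool.shutdown();
        return total;
    }

    static long sum(int[] a, int from, int to) {
        long s = 0;
        for (int i = from; i < to; i++) s += a[i];
        return s;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelSum(new int[]{1, 2, 3, 4, 5})); // prints 15
    }
}
```

For the background-removal case, the same pattern could apply to the per-pixel post-processing steps (thresholding, channel merging), which are independent per row, while the grabCut call itself remains single-threaded.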
I have a problem loading an image (picked from a list, on click) into a Mat on Android, processing it with OpenCV's DFT function, and returning the result.
This is my source.
MainActivity.java
public class MainActivity extends Activity {
static final String CAMERA_PIC_DIR = "/DCIM/Camera/";
ImageView iv;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
iv = (ImageView) findViewById(R.id.my_image);
String ImageDir = Environment.getExternalStorageDirectory()
.getAbsolutePath() + CAMERA_PIC_DIR;
Intent i = new Intent(this, ListFile.class);
i.putExtra( "directory", ImageDir);
startActivityForResult(i,0);
}
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if(requestCode == 0 && resultCode==RESULT_OK) {
String tmp = data.getExtras().getString("clickedFile");
//Bitmap ImageToChange= BitmapFactory.decodeFile(tmp);
//Bitmap bm = Bitmap.createScaledBitmap(ImageToChange, 480, 320, false);
Mat mRgba = new Mat();
mRgba = Highgui.imread(tmp);
getDFT(mRgba);
//iv.setImageBitmap(bm);
}
}
If I uncomment the two Bitmap lines and the iv.setImageBitmap call, the clicked image loads and displays successfully.
And this is the getDFT method, still in MainActivity.java:
private Mat getDFT(Mat singleChannel) {
singleChannel.convertTo(singleChannel, CvType.CV_64FC1);
int m = Core.getOptimalDFTSize(singleChannel.rows());
int n = Core.getOptimalDFTSize(singleChannel.cols()); // pad on the border with zero values
Mat padded = new Mat(new Size(n, m), CvType.CV_64FC1); // expand input image to optimal size
Imgproc.copyMakeBorder(singleChannel, padded, 0, m - singleChannel.rows(), 0,
n - singleChannel.cols(), Imgproc.BORDER_CONSTANT);
List<Mat> planes = new ArrayList<Mat>();
planes.add(padded);
planes.add(Mat.zeros(padded.rows(), padded.cols(), CvType.CV_64FC1));
Mat complexI = Mat.zeros(padded.rows(), padded.cols(), CvType.CV_64FC2);
Mat complexI2 = Mat
.zeros(padded.rows(), padded.cols(), CvType.CV_64FC2);
Core.merge(planes, complexI); // Add to the expanded another plane with
// zeros
Core.dft(complexI, complexI2); // this way the result may fit in the
// source matrix
// compute the magnitude and switch to logarithmic scale
// => log(1 + sqrt(Re(DFT(I))^2 + Im(DFT(I))^2))
Core.split(complexI2, planes); // planes[0] = Re(DFT(I)), planes[1] = Im(DFT(I))
Mat mag = new Mat(planes.get(0).size(), planes.get(0).type());
Core.magnitude(planes.get(0), planes.get(1), mag); // planes[0] = magnitude
Mat magI = mag;
Mat magI2 = new Mat(magI.size(), magI.type());
Mat magI3 = new Mat(magI.size(), magI.type());
Mat magI4 = new Mat(magI.size(), magI.type());
Mat magI5 = new Mat(magI.size(), magI.type());
Core.add(magI, Mat.ones(padded.rows(), padded.cols(), CvType.CV_64FC1),
magI2); // switch to logarithmic scale
Core.log(magI2, magI3);
Mat crop = new Mat(magI3, new Rect(0, 0, magI3.cols() & -2,
magI3.rows() & -2));
magI4 = crop.clone();
// rearrange the quadrants of Fourier image so that the origin is at the
// image center
int cx = magI4.cols() / 2;
int cy = magI4.rows() / 2;
Rect q0Rect = new Rect(0, 0, cx, cy);
Rect q1Rect = new Rect(cx, 0, cx, cy);
Rect q2Rect = new Rect(0, cy, cx, cy);
Rect q3Rect = new Rect(cx, cy, cx, cy);
Mat q0 = new Mat(magI4, q0Rect); // Top-Left - Create a ROI per quadrant
Mat q1 = new Mat(magI4, q1Rect); // Top-Right
Mat q2 = new Mat(magI4, q2Rect); // Bottom-Left
Mat q3 = new Mat(magI4, q3Rect); // Bottom-Right
Mat tmp = new Mat(); // swap quadrants (Top-Left with Bottom-Right)
q0.copyTo(tmp);
q3.copyTo(q0);
tmp.copyTo(q3);
q1.copyTo(tmp); // swap quadrant (Top-Right with Bottom-Left)
q2.copyTo(q1);
tmp.copyTo(q2);
Core.normalize(magI4, magI5, 0, 255, Core.NORM_MINMAX);
Mat realResult = new Mat(magI5.size(), CvType.CV_8UC1);
magI5.convertTo(realResult, CvType.CV_8UC1);
return realResult;
}
}
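The quadrant rearrangement in getDFT (moving the spectrum's origin to the image center) can be illustrated on a plain array. This is my own simplified sketch, not the OpenCV code; it assumes even dimensions, as the crop with `& -2` in getDFT guarantees:

```java
// Hedged sketch of the DFT quadrant swap: exchange top-left with bottom-right
// and top-right with bottom-left, as getDFT does with its four Mat ROIs.
class QuadrantSwap {
    static int[][] swapQuadrants(int[][] m) {
        int cy = m.length / 2, cx = m[0].length / 2;
        int[][] out = new int[m.length][m[0].length];
        for (int y = 0; y < 2 * cy; y++) {
            for (int x = 0; x < 2 * cx; x++) {
                // Adding half the size modulo the full size moves each pixel
                // into the diagonally or horizontally opposite quadrant.
                out[(y + cy) % (2 * cy)][(x + cx) % (2 * cx)] = m[y][x];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[][] swapped = swapQuadrants(new int[][]{{1, 2}, {3, 4}});
        System.out.println(swapped[0][0] + " " + swapped[0][1]); // prints "4 3"
    }
}
```

On a 2x2 input each value simply crosses to the opposite corner, which matches the four copyTo swaps in getDFT.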
And this is my ListFile.java:
public class ListFile extends ListActivity {
private List<String> directoryEntries = new ArrayList<String>();
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
Intent i = getIntent();
File directory = new File(i.getStringExtra("directory"));
if (directory.isDirectory()){
File[] files = directory.listFiles();
//sort in descending date order
Arrays.sort(files, new Comparator<File>(){
public int compare(File f1, File f2) {
return -Long.valueOf(f1.lastModified())
.compareTo(f2.lastModified());
}
});
//fill list with files
this.directoryEntries.clear();
for (File file : files){
this.directoryEntries.add(file.getPath());
}
ArrayAdapter<String> directoryList
= new ArrayAdapter<String>(this,
R.layout.file_row, this.directoryEntries);
//alphabetize entries
//directoryList.sort(null);
this.setListAdapter(directoryList);
}
}
@Override
protected void onListItemClick(ListView l, View v, int pos, long id) {
File clickedFile = new File(this.directoryEntries.get(pos));
Intent i = getIntent();
i.putExtra("clickedFile", clickedFile.toString());
setResult(RESULT_OK, i);
finish();
}
}
The app runs successfully, but after I click one of the images, it stops working.
This is the Logcat output:
W/dalvikvm(14245): No implementation found for native Lorg/opencv/core/Mat;.n_Mat:()J
D/AndroidRuntime(14245): Shutting down VM
W/dalvikvm(14245): threadid=1: thread exiting with uncaught exception (group=0x40cbb9c0)
E/AndroidRuntime(14245): FATAL EXCEPTION: main
E/AndroidRuntime(14245): java.lang.UnsatisfiedLinkError: Native method not found: org.opencv.core.Mat.n_Mat:()J
Link Reference:
android dft got hang
Convert Opencv DFT