I am working on an Android project using OpenCV, creating a histogram for a black-and-white image (0/1 values in the image file).
I am following some tutorials I found on the internet on how to create the histogram. I am doing something like this:
ArrayList<Mat> list = new ArrayList<Mat>();
list.add(mRgba);
MatOfInt channels = new MatOfInt(0);        // histogram over channel 0 only
Mat hist = new Mat();
MatOfInt histSize = new MatOfInt(25);       // 25 bins
MatOfFloat ranges = new MatOfFloat(0f, 1f); // value range [0, 1)
Imgproc.calcHist(list, channels, new Mat(), hist, histSize, ranges);
Then I want to view the data in the hist Mat. If I try something like this...
for (int i = 0; i < 25; i++) {
    Log.e(TAG, "data " + i + " " + hist.get(i, 0));
}
I get
> data 0 [D#2be05908
data 1 [D#2be0a138
data 2 [D#2bdf9f48
data 22 [D#2be06c70
That makes no sense to me. If I try a different approach, like
byte[] buff = new byte[hist.height() * hist.width() * hist.channels()];
hist.get(0, 0, buff);
I get an error about Mat compatibility with the Mat.get function.
Is there any way to directly access the data in the hist Mat? I am interested in getting back all the data, not only the min/max.
hist.get(i, 0) returns an array of doubles, so you can try this:
for (int i = 0; i < 25; i++) {
    double[] histValues = hist.get(i, 0);
    for (int j = 0; j < histValues.length; j++) {
        Log.d(TAG, "yourData=" + histValues[j]);
    }
}
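Since calcHist produces a CV_32F Mat, the byte[] attempt above fails Mat.get's type check; a float[] buffer matches the histogram's depth and reads every bin in one call. A minimal sketch along those lines, assuming the 25-bin hist from the question:
// hist is 25x1, CV_32FC1: one float per bin
float[] bins = new float[(int) hist.total() * hist.channels()];
hist.get(0, 0, bins); // bulk read; valid only because the buffer type matches CV_32F
for (int i = 0; i < bins.length; i++) {
    Log.d(TAG, "bin " + i + " = " + bins[i]);
}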
I am trying to port a TensorFlow model to TensorFlow Lite to use it in an Android application. The conversion is successful and everything runs, except for: Internal error: Failed to run on the given Interpreter: input must be 5-dimensional. The input in the original model was input_shape=(20, 320, 240, 1), i.e. 20 grayscale 320 x 240 images (hence the ..., 1). Here is the important code:
List<Mat> preprocessedFrames = preprocFrames(buf);
// has length of 20 -> no problem there (shouldn't affect dimensionality either...)
int[] output = new int[2];
float[][][] inputMatrices = new float[preprocessedFrames.size()][320][240];
for (int i = 0; i < preprocessedFrames.size(); i++) {
    Mat inpRaw = preprocessedFrames.get(i);
    Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(inpRaw, data);
    int[][] pixels = pixelsFromBitmap(data);
    float[][] inputMatrix = inputMatrixFromIntPixels(pixels); // returns float[][] with floats from 0 to 1
    inputMatrices[i] = inputMatrix;
}
try {
    detector.run(inputMatrices, output);
    Debug("results: " + output.toString());
}
The model gives me an output of 2 neurons, translating into 2 labels.
The model code is the following:
model = tf.keras.Sequential(name='detector')
model.add(tf.keras.layers.Conv3D(filters=(56), input_shape=(20, 320, 240, 1), strides=(2,2,2), kernel_size=(3,11,11), padding='same', activation="relu"))
model.add(tf.keras.layers.AveragePooling3D(pool_size=(1,4,4)))
model.add(tf.keras.layers.Conv3D(filters=(72), kernel_size=(4,7,7), strides=(1,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(81), kernel_size=(2,4,4), strides=(2,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(100), kernel_size=(1,2,2), strides=(3,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(128), kernel_size=(1,2,2), padding='same'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(768, activation='tanh', kernel_regularizer=tf.keras.regularizers.l2(0.011)))
model.add(tf.keras.layers.Dropout(rate=0.1))
model.add(tf.keras.layers.Dense(256, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(0.012)))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001), loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
EDIT: I printed out the shape of the first input tensor as follows:
int[] shape = detector.getInputTensor(0).shape();
for (int r = 0; r < shape.length; r++) {
    Log.d("********" + r, "*******: " + r + " : " + shape[r]);
}
With that I first get the output [1, 20, 320, 240, 1], and after that I only get [20, 320, 240]. I am really quite desperate now...
So, I figured it out by myself: I really only had to make the input 5-dimensional, by putting the content into a first dimension and every single pixel into a fifth dimension. The reason is that the interpreter's input tensor is [1, 20, 320, 240, 1]: the leading 1 is the batch dimension that sits in front of input_shape=(20, 320, 240, 1), and the trailing 1 is the single grayscale channel.
float[][] output = new float[1][2];
float[][][][][] inputMatrices = new float[1][preprocessedFrames.size()][320][240][1];
for (int i = 0; i < preprocessedFrames.size(); i++) {
    Mat inpRaw = preprocessedFrames.get(i);
    Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(inpRaw, data);
    int[][] pixels = pixelsFromBitmap(data);
    float[][] inputMatrix = inputMatrixFromIntPixels(pixels);
    // copy every pixel into the 5-D tensor (note the transposed j/k indices);
    // the original loops stopped at length - 1 and silently skipped the last row and column
    for (int j = 0; j < inputMatrix.length; j++) {
        for (int k = 0; k < inputMatrix[0].length; k++) {
            inputMatrices[0][i][k][j][0] = inputMatrix[j][k];
        }
    }
}
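As a side note, not from the original post: the TFLite Interpreter also accepts a direct ByteBuffer, which avoids allocating the nested float arrays. A minimal sketch, assuming the helpers above (preprocessedFrames, pixelsFromBitmap, inputMatrixFromIntPixels) behave as described; the write order must match the [1, 20, 320, 240, 1] tensor layout:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

ByteBuffer input = ByteBuffer.allocateDirect(1 * 20 * 320 * 240 * 1 * 4); // 4 bytes per float
input.order(ByteOrder.nativeOrder());
for (int i = 0; i < preprocessedFrames.size(); i++) {
    Mat inpRaw = preprocessedFrames.get(i);
    Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
    Utils.matToBitmap(inpRaw, data);
    float[][] m = inputMatrixFromIntPixels(pixelsFromBitmap(data));
    for (int r = 0; r < m.length; r++)        // mind the row/column order: it has to match
        for (int c = 0; c < m[0].length; c++) // the transpose used in the array version above
            input.putFloat(m[r][c]);
}
float[][] output = new float[1][2];
detector.run(input, output);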
I used the submat function to cut out part of a picture and did some processing. I now have two Mats to use.
Mat originalMat = new Mat();
Utils.bitmapToMat(originalBmp, originalMat);
Rect rect = new Rect(left, top, right - left, bottom - top);
Mat roi_img = originalMat.submat(rect);
Imgproc.cvtColor(roi_img, roi_img, Imgproc.COLOR_BGR2Lab);
// some processing to roi_img....
Imgproc.cvtColor(roi_img, roi_img, Imgproc.COLOR_Lab2BGR);
I found that the cvtColor function seems to change what roi_img references: the processing has no effect on originalMat, which is the same as before.
I want to merge originalMat and roi_img.
I tried the copyTo and clone functions, but it did not work:
Mat mat = new Mat();
Utils.bitmapToMat(originalBmp, mat);
Rect rect = new Rect(40, 40, 100, 100);
Mat roi_img = mat.submat(rect);
double[] value = new double[]{255, 255, 255};
Imgproc.cvtColor(roi_img, roi_img, Imgproc.COLOR_BGR2Lab);
for (int i = 0; i < roi_img.rows(); i++) {
    for (int j = 0; j < roi_img.cols(); j++) {
        roi_img.put(i, j, value);
    }
}
Imgproc.cvtColor(roi_img, roi_img, Imgproc.COLOR_Lab2BGR);
Mat roi_img2 = mat.submat(rect);
// roi_img2 = roi_img.clone();
roi_img.copyTo(roi_img2);
showMat(mat);
I made a mistake: cvtColor has to reallocate roi_img because the channel count changes (the Lab output cannot share the submat's buffer), so roi_img stops being a view into the original Mat. I should use the cvtColor function on the whole originalMat instead.
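For reference, a minimal sketch of that fix (same hard-coded ROI as above): convert the whole Mat to Lab first, then take the submat, so the ROI stays a view into the converted image and in-place edits survive. Note that bitmapToMat produces RGBA, so the alpha channel has to be dropped before the Lab conversion:
Mat mat = new Mat();
Utils.bitmapToMat(originalBmp, mat);
Imgproc.cvtColor(mat, mat, Imgproc.COLOR_RGBA2BGR); // drop alpha: the Lab conversion needs 3 channels
Imgproc.cvtColor(mat, mat, Imgproc.COLOR_BGR2Lab);  // convert the WHOLE image, not the submat
Mat roi = mat.submat(new Rect(40, 40, 100, 100));   // still a view into mat
roi.setTo(new Scalar(255, 255, 255));               // example processing, writes through to mat
Imgproc.cvtColor(mat, mat, Imgproc.COLOR_Lab2BGR);
showMat(mat);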
I use the following code to iterate through all the objects that I segmented from my image, which should be ordered in rows and columns as semi-circles (because of segmentation and morphological processing to reduce noise):
Imgproc.Canny(srcImg, srcImg, 50, 150);
Imgcodecs.imwrite("/mnt/sdcard/DCIM/cannyImg.jpg", srcImg); // check
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
Imgproc.findContours(srcImg, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));
// start at index 0 -- starting at 1, as the original code did, silently skips the first contour
for (int contourIdx = 0; contourIdx < contours.size(); contourIdx++) {
    double temp = Imgproc.contourArea(contours.get(contourIdx));
    if (temp > 100) { // condition to differentiate between noise and objects
        Mat drawing = Mat.zeros(srcImg.size(), CvType.CV_8UC1);
        Imgproc.drawContours(drawing, contours, contourIdx, new Scalar(255), -1);
        Mat resultMat = new Mat();
        maskImg.copyTo(resultMat, drawing);
        Imgcodecs.imwrite("/mnt/sdcard/DCIM/resultImg" + contourIdx + ".jpg", resultMat); // check
    }
}
However, the loop cannot reach important objects in my image, even though the Canny image is correct and identifies all the objects. My questions are: in what order does findContours return objects? Also, is there another way in OpenCV to iterate through all objects in the image other than findContours? Lastly, I used the contour size to differentiate between objects and noise; is this OK, or can you suggest other methods?
Any help is appreciated.
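For what it's worth, findContours makes no guarantee about spatial (row/column) ordering; contours come out in whatever order the border-following algorithm encounters them during the image scan. A common workaround, sketched here under the assumption that the objects sit in rough rows, is to filter by area and then sort by bounding-box position yourself:
import java.util.Collections;
import java.util.Comparator;

// keep only contours large enough to be objects (same threshold as above),
// then sort top-to-bottom, breaking ties left-to-right
List<MatOfPoint> objects = new ArrayList<MatOfPoint>();
for (MatOfPoint c : contours) {
    if (Imgproc.contourArea(c) > 100)
        objects.add(c);
}
Collections.sort(objects, new Comparator<MatOfPoint>() {
    @Override
    public int compare(MatOfPoint a, MatOfPoint b) {
        Rect ra = Imgproc.boundingRect(a);
        Rect rb = Imgproc.boundingRect(b);
        // the one-box-height row tolerance is a guess; tune it for your layout
        if (Math.abs(ra.y - rb.y) > ra.height) return Integer.compare(ra.y, rb.y);
        return Integer.compare(ra.x, rb.x);
    }
});
Filtering by contour area is a perfectly usual way to reject noise; if object and noise sizes overlap, shape measures such as circularity can help as well.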
I recently started developing an app in Android Studio and I just finished writing the code. The accuracy I get is more than satisfactory, but the time taken by the device is a lot. I followed some tutorials on how to monitor performance in Android Studio, and I saw that one small part of my code is taking 6 seconds, which is half the time my app takes to display the entire result. I have seen a lot of posts (Java OpenCV - extracting good matches from knnMatch, OpenCV filtering ORB matches on OpenCV/JavaCV) but haven't come across anyone asking about this problem. The OpenCV tutorial at http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html is good, but the RANSAC function in OpenCV takes different arguments for keypoints compared to C++.
Here is my code
public Mat ORB_detection(Mat Scene_image, Mat Object_image) {
    /* This function is used to find the reference card in the captured image with the help of
     * the reference card saved in the application.
     * Inputs - Captured image (Scene_image), Reference image (Object_image) */
    FeatureDetector orb = FeatureDetector.create(FeatureDetector.DYNAMIC_ORB);
    /* 1.a Keypoint detection for the scene image */
    // convert input to grayscale
    channels = new ArrayList<Mat>(3);
    Core.split(Scene_image, channels);
    Scene_image = channels.get(0);
    // sharpen the image
    Scene_image = unsharpMask(Scene_image);
    MatOfKeyPoint keypoint_scene = new MatOfKeyPoint();
    // convert image to eight-bit, unsigned char
    Scene_image.convertTo(Scene_image, CvType.CV_8UC1);
    orb.detect(Scene_image, keypoint_scene);
    channels.clear();
    /* 1.b Keypoint detection for the object image */
    // convert input to grayscale
    Core.split(Object_image, channels);
    Object_image = channels.get(0);
    channels.clear();
    MatOfKeyPoint keypoint_object = new MatOfKeyPoint();
    Object_image.convertTo(Object_image, CvType.CV_8UC1);
    orb.detect(Object_image, keypoint_object);
    // 2. Calculate the descriptors/feature vectors
    // initialize the ORB descriptor extractor
    DescriptorExtractor orb_descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    Mat Obj_descriptor = new Mat();
    Mat Scene_descriptor = new Mat();
    orb_descriptor.compute(Object_image, keypoint_object, Obj_descriptor);
    orb_descriptor.compute(Scene_image, keypoint_scene, Scene_descriptor);
    // 3. Matching the descriptors using brute force
    DescriptorMatcher brt_frc = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    brt_frc.match(Obj_descriptor, Scene_descriptor, matches);
    // 4. Calculating the max and min distance between keypoints
    float max_dist = 0, min_dist = 100, dist = 0;
    DMatch[] for_calculating = matches.toArray();
    for (int i = 0; i < Obj_descriptor.rows(); i++) {
        dist = for_calculating[i].distance;
        if (dist < min_dist) min_dist = dist;
        if (dist > max_dist) max_dist = dist;
    }
    System.out.print("\nInterval min_dist: " + min_dist + ", max_dist: " + max_dist);
    // -- use only "good" matches (i.e. whose distance is less than 2.5*min_dist)
    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    double ratio_dist = 2.5;
    ratio_dist = ratio_dist * min_dist;
    int i, iter = matches.toArray().length;
    matches.release();
    for (i = 0; i < iter; i++) {
        if (for_calculating[i].distance <= ratio_dist)
            good_matches.addLast(for_calculating[i]);
    }
    System.out.print("\n done Good Matches");
    /* Necessary type conversion for drawing matches
    MatOfDMatch goodMatches = new MatOfDMatch();
    goodMatches.fromList(good_matches);
    Mat matches_scn_obj = new Mat();
    Features2d.drawKeypoints(Object_image, keypoint_object, new Mat(Object_image.rows(), keypoint_object.cols(), keypoint_object.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawKeypoints(Scene_image, keypoint_scene, new Mat(Scene_image.rows(), Scene_image.cols(), Scene_image.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawMatches(Object_image, keypoint_object, Scene_image, keypoint_scene, goodMatches, matches_scn_obj);
    SaveImage(matches_scn_obj, "drawing_good_matches.jpg");
    */
    if (good_matches.size() <= 6) {
        ph_value = "7";
        System.out.println("Wrong Detection");
        return Scene_image;
    } else {
        // 5. RANSAC thresholding for finding the optimum homography
        Mat outputImg = new Mat();
        LinkedList<Point> objList = new LinkedList<Point>();
        LinkedList<Point> sceneList = new LinkedList<Point>();
        List<org.opencv.core.KeyPoint> keypoints_objectList = keypoint_object.toList();
        List<org.opencv.core.KeyPoint> keypoints_sceneList = keypoint_scene.toList();
        // getting the object and scene points from the good matches
        for (i = 0; i < good_matches.size(); i++) {
            objList.addLast(keypoints_objectList.get(good_matches.get(i).queryIdx).pt);
            sceneList.addLast(keypoints_sceneList.get(good_matches.get(i).trainIdx).pt);
        }
        good_matches.clear();
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);
        objList.clear();
        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);
        sceneList.clear();
        float RANSAC_dist = (float) 2.0;
        Mat hg = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, RANSAC_dist);
        for (i = 0; i < hg.cols(); i++) {
            String tmp = "";
            for (int j = 0; j < hg.rows(); j++) {
                Point val = new Point(hg.get(j, i));
                tmp = tmp + val.x + " ";
            }
        }
        Mat scene_image_transformed_color = new Mat();
        Imgproc.warpPerspective(original_image, scene_image_transformed_color, hg, Object_image.size(), Imgproc.WARP_INVERSE_MAP);
        processing(scene_image_transformed_color, template_match);
        return outputImg;
    }
}
and this part is what takes 6 seconds at runtime:
LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
double ratio_dist = 2.5;
ratio_dist = ratio_dist * min_dist;
int i, iter = matches.toArray().length;
matches.release();
for (i = 0; i < iter; i++) {
    if (for_calculating[i].distance <= ratio_dist)
        good_matches.addLast(for_calculating[i]);
}
System.out.print("\n done Good Matches");
I was thinking maybe I could write this part of the code in C++ using the NDK, but I just wanted to be sure that the language is the problem and not the code itself.
Please don't be strict, first question! Any criticism is much appreciated!
So the problem was that logcat was giving me false timing results. The lag was due to a huge Gaussian blur later in the code. Instead of System.out.print, I used System.currentTimeMillis, which showed me the real bug.
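For anyone else chasing phantom hot spots, a minimal sketch of that measurement: take wall-clock timestamps around the suspect block instead of trusting logcat timestamps:
long t0 = System.currentTimeMillis();
// ... the suspect block, e.g. the good-matches loop from above ...
long elapsed = System.currentTimeMillis() - t0;
Log.d(TAG, "good-matches loop took " + elapsed + " ms");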
It may be a simple/stupid question, but I have a conversion problem in OpenCV (Android).
My goal is to calculate the fundamental matrix from corresponding matches between two consecutive images.
I have programmed this so far (and it works):
detector.detect(actImg, actKP);
detector.detect(prevImg, prevKP);
descExtractor.compute(prevImg, prevKP, descriptorPrev);
descExtractor.compute(actImg, actKP, descriptorAct);
descMatcher.match(descriptorPrev, descriptorAct, matches);
Features2d.drawMatches(prevImg, prevKP, actImg, actKP, matches, mRgba);
matches is of type MatOfDMatch.
Now I would like to calculate the fundamental matrix from the points that matched against each other. Therefore I must know which of the keypoints in the first image (prevKP) were found in the second image (actKP).
Mat fundamental_matrix = Calib3d.findFundamentalMat(nextPts, prevPts, Calib3d.FM_RANSAC, 3, 0.99);
First question:
how can I extract/convert MatOfKeyPoint to MatOfPoint2f (so they can be passed to findFundamentalMat)?
Second question:
how do I pass only the matched keypoints to findFundamentalMat?
Is this a good way of doing it?
Thanks a lot in advance!
EDIT
Thanks a lot for your detailed response!
I wrote your code into two functions:
private MatOfPoint2f getMatOfPoint2fFromDMatchesTrain(MatOfDMatch matches2, MatOfKeyPoint prevKP2) {
    DMatch[] dm = matches2.toArray();
    List<Point> lp1 = new ArrayList<Point>(dm.length);
    KeyPoint[] tkp = prevKP2.toArray();
    for (int i = 0; i < dm.length; i++) {
        DMatch dmm = dm[i];
        if (dmm.trainIdx < tkp.length)
            lp1.add(tkp[dmm.trainIdx].pt);
    }
    return new MatOfPoint2f(lp1.toArray(new Point[0]));
}

private MatOfPoint2f getMatOfPoint2fFromDMatchesQuery(MatOfDMatch matches2, MatOfKeyPoint actKP2) {
    DMatch[] dm = matches2.toArray();
    List<Point> lp2 = new ArrayList<Point>(dm.length);
    KeyPoint[] qkp = actKP2.toArray();
    for (int i = 0; i < dm.length; i++) {
        DMatch dmm = dm[i];
        if (dmm.queryIdx < qkp.length)
            lp2.add(qkp[dmm.queryIdx].pt);
    }
    return new MatOfPoint2f(lp2.toArray(new Point[0]));
}
But when I call
prevPts = getMatOfPoint2fFromDMatchesTrain(matches, prevKP);
nextPts = getMatOfPoint2fFromDMatchesQuery(matches, actKP);
Mat fundamental_matrix = Calib3d.findFundamentalMat(nextPts, prevPts, Calib3d.FM_RANSAC, 3, 0.99);
the problem is that I get error -215:
error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function cv::Mat cv::findFundamentalMat(...
I verified that prevPts and nextPts aren't below 10 points (for RANSAC), so I would guess the problem is that the points aren't floating point. But I checked with the debugger that these points are floating point.
Your suggested code line:
return new MatOfPoint2f(lp2.toArray(new Point[0]));
should convert the points to floating point, or am I wrong?
Thanks again!
Unfortunately there is no better way (even in the C++ API) than to loop through all the matches and copy the values into a new Mat (or vector).
In Java you can do it as follows:
DMatch[] dm = matches.toArray();
List<Point> lp1 = new ArrayList<Point>(dm.length);
List<Point> lp2 = new ArrayList<Point>(dm.length);
KeyPoint[] tkp = prevKP.toArray();
KeyPoint[] qkp = actKP.toArray();
for (int i = 0; i < dm.length; i++) {
    DMatch dmm = dm[i]; // renamed from 'dm': the original shadowed the array and would not compile
    lp1.add(tkp[dmm.trainIdx].pt);
    lp2.add(qkp[dmm.queryIdx].pt);
}
MatOfPoint2f pointsPrev = new MatOfPoint2f(lp1.toArray(new Point[0]));
MatOfPoint2f pointsAct = new MatOfPoint2f(lp2.toArray(new Point[0]));
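Because lp1 and lp2 are filled inside the same loop, the two MatOfPoint2f objects are guaranteed to hold the same number of points; the -215 checkVector error in the EDIT above is exactly what you get when two separately filtered lists end up with different counts. With that, the call from the question becomes, for example:
Mat fundamental_matrix = Calib3d.findFundamentalMat(pointsAct, pointsPrev, Calib3d.FM_RANSAC, 3, 0.99);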