I want to use the getPerspectiveTransform function, but it only accepts Mat arguments and I have my coordinate data in an array. So I convert the coordinates into Points and then to a Mat as follows:
List<Point> l = new ArrayList<Point>();
for (int i = 0; i < pts_1.length; i++) {
Point pt = new Point(pts_1[0][i], pts_1[1][i]);
l.add(i,pt);
}
Mat mat1 = Converters.vector_Point2f_to_Mat(l);
for (int i = 0; i < pts_2.length; i++) {
Point pt = new Point(pts_2[0][i], pts_2[1][i]);
l.add(i,pt);
}
Mat mat2 = Converters.vector_Point2f_to_Mat(l);
Mat perspectiveTransform = Imgproc.getPerspectiveTransform(mat1,mat2);
But when I run my app, it gives me the error 'Something went wrong', and in logcat I get the following:
E/cv::error(): OpenCV Error: Assertion failed (src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4) in cv::Mat cv::getPerspectiveTransform(cv::InputArray, cv::InputArray), file /build/master_pack-android/opencv/modules/imgproc/src/imgwarp.cpp, line 6748
E/org.opencv.imgproc: imgproc::getPerspectiveTransform_10() caught cv::Exception: /build/master_pack-android/opencv/modules/imgproc/src/imgwarp.cpp:6748: error: (-215) src.checkVector(2, CV_32F) == 4 && dst.checkVector(2, CV_32F) == 4 in function cv::Mat cv::getPerspectiveTransform(cv::InputArray, cv::InputArray)
I am new to OpenCV4Android, so I don't understand why this is happening. How can I fix it? Is there a better way to do this?
Thanks for the help!
Note: I know that a similar procedure is followed here: Can't get OpenCV's warpPerspective to work on Android, but that asker is not getting this error, so I am posting it here.
As mentioned in the comment discussion on the OP's post (all credit to them), the problem lies with the list l:
List<Point> l = new ArrayList<Point>();
for (int i = 0; i < pts_1.length; i++) {
Point pt = new Point(pts_1[0][i], pts_1[1][i]);
l.add(i,pt);
}
Mat mat1 = Converters.vector_Point2f_to_Mat(l);
If we take a look at the contents of the List<Point> l:
for (Point pt : l)
System.out.println("(" + pt.x + ", " + p.ty + ")");
(x0, y0)
(x1, y1)
(x2, y2)
(x3, y3)
And moving on to the next matrix:
for (int i = 0; i < pts_2.length; i++) {
Point pt = new Point(pts_2[0][i], pts_2[1][i]);
l.add(i,pt);
}
Mat mat2 = Converters.vector_Point2f_to_Mat(l);
Taking another look at the contents of the List<Point> l:
for (Point pt : l)
System.out.println("(" + pt.x + ", " + p.ty + ")");
(x4, y4)
(x5, y5)
(x6, y6)
(x7, y7)
(x0, y0)
(x1, y1)
(x2, y2)
(x3, y3)
So this is the culprit; your second matrix will have 8 points in it.
From the Java docs for ArrayList:
Parameters: index - index at which the specified element is to be inserted
Using l.add(i, pt) inserts into the existing list rather than overwriting it, so your second call to vector_Point2f_to_Mat sees all 8 points. The solution is to build a new list for each transformation matrix, as sketched below.
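A minimal sketch of that fix, reusing the variable names from the question (and assuming, as the loop bounds there imply, that pts_1 and pts_2 each describe exactly four (x, y) pairs):
// Build the source points in their own list
List<Point> srcPoints = new ArrayList<Point>();
for (int i = 0; i < pts_1.length; i++) {
    srcPoints.add(new Point(pts_1[0][i], pts_1[1][i]));
}
Mat mat1 = Converters.vector_Point2f_to_Mat(srcPoints);

// Build the destination points in a fresh list instead of reusing the first one
List<Point> dstPoints = new ArrayList<Point>();
for (int i = 0; i < pts_2.length; i++) {
    dstPoints.add(new Point(pts_2[0][i], pts_2[1][i]));
}
Mat mat2 = Converters.vector_Point2f_to_Mat(dstPoints);

// Both Mats now hold exactly 4 points each, which is what getPerspectiveTransform asserts
Mat perspectiveTransform = Imgproc.getPerspectiveTransform(mat1, mat2);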
I am trying to port a TensorFlow model to TensorFlow Lite to use it in an Android application. The conversion is successful and everything runs except for Internal error: Failed to run on the given Interpreter: input must be 5-dimensional. The input in the original model was input_shape=(20, 320, 240, 1), which is 20 grayscale images of 320 x 240 (hence the ..., 1). Here is the important code:
List<Mat> preprocessedFrames = preprocFrames(buf);
//has length of 20 -> no problem there (shouldn't affect dimensionality either...)
int[] output = new int[2];
float[][][] inputMatrices = new float[preprocessedFrames.toArray().length][320][240];
for(int i = 0; i < preprocessedFrames.toArray().length; i++) {
Mat inpRaw = preprocessedFrames.get(i);
Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(inpRaw, data);
int[][] pixels = pixelsFromBitmap(data);
float[][] inputMatrix = inputMatrixFromIntPixels(pixels);
// returns float[][] with floats from 0 to 1
inputMatrices[i] = inputMatrix;
}
try{
detector.run(inputMatrices, output);
Debug("results: " + output.toString());
}
The model gives me an output of 2 neurons translating into 2 labels.
The model code is the following:
model = tf.keras.Sequential(name='detector')
model.add(tf.keras.layers.Conv3D(filters=(56), input_shape=(20, 320, 240, 1), strides=(2,2,2), kernel_size=(3,11,11), padding='same', activation="relu"))
model.add(tf.keras.layers.AveragePooling3D(pool_size=(1,4,4)))
model.add(tf.keras.layers.Conv3D(filters=(72), kernel_size=(4,7,7), strides=(1,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(81), kernel_size=(2,4,4), strides=(2,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(100), kernel_size=(1,2,2), strides=(3,2,2), padding='same'))
model.add(tf.keras.layers.Conv3D(filters=(128), kernel_size=(1,2,2), padding='same'))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(768, activation='tanh', kernel_regularizer=tf.keras.regularizers.l2(0.011)))
model.add(tf.keras.layers.Dropout(rate=0.1))
model.add(tf.keras.layers.Dense(256, activation='sigmoid', kernel_regularizer=tf.keras.regularizers.l2(0.012)))
model.add(tf.keras.layers.Dense(2, activation='softmax'))
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.00001), loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
EDIT: I printed out the shape of the first input tensor as follows:
int[] shape = detector.getInputTensor(0).shape();
for(int r = 0; r < shape.length; r++){
Log.d("********" + r, "*******: " + r + " : " + shape[r]);
}
With that I first get the output [1,20,320,240,1] and after that I only get [20,320,240]. I am really quite desperate now...
So, I figured it out by myself, and it seems I really only had to make the input 5-dimensional by wrapping the whole content in an extra first dimension and putting every single pixel into its own fifth dimension. I didn't understand why at first, but I will accept that xD. (Looking at the printed input tensor shape [1, 20, 320, 240, 1], it does make sense: Keras's input_shape=(20, 320, 240, 1) excludes the batch dimension, so the interpreter expects a leading batch dimension of size 1, and the trailing 1 is the single grayscale channel per pixel.)
float[][] output = new float[1][2];
float[][][][][] inputMatrices = new float[1][preprocessedFrames.toArray().length][320][240][1];
for(int i = 0; i < preprocessedFrames.toArray().length; i++) {
Mat inpRaw = preprocessedFrames.get(i);
Bitmap data = Bitmap.createBitmap(inpRaw.cols(), inpRaw.rows(), Bitmap.Config.ARGB_8888);
Utils.matToBitmap(inpRaw, data);
int[][] pixels = pixelsFromBitmap(data);
float[][] inputMatrix = inputMatrixFromIntPixels(pixels);
for (int j = 0; j < inputMatrix.length - 1; j++) {
for(int k = 0; k < inputMatrix[0].length - 1; k++) {
inputMatrices[0][i][k][j][0] = inputMatrix[j][k];
}
}
}
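For completeness, a small usage sketch (my addition, not from the original post) of reading the two scores out of that [1][2] output buffer after running the interpreter; detector, inputMatrices, output and the Debug helper are the ones from the snippet above:
detector.run(inputMatrices, output);

// output has shape [1][2]: one batch entry holding the two softmax scores
float score0 = output[0][0];
float score1 = output[0][1];
int predictedLabel = (score1 > score0) ? 1 : 0;
Debug("scores: " + score0 + ", " + score1 + " -> label " + predictedLabel);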
game.batch.begin();
for (Array obstacle_array123: obstacle_array) {
body = obstacle_array123;
for (Body bodies: body) {
if (bodies.getUserData() instanceof Array && bodies.isActive()) {
sprites_array = (Array)bodies.getUserData();
for (int fix_pos = 0; fix_pos < sprites_array.size; fix_pos++) {
sprite = sprites_array.get(fix_pos);
if (verts.size != 0) verts.removeRange(0, verts.size - 1);
f = bodies.getFixtureList().get(fix_pos);
s = (PolygonShape)f.getShape();
transform = bodies.getTransform();
for (int i = 0; i < s.getVertexCount(); i++)
{
s.getVertex(i, tmp);
transform.mul(tmp);
verts.add(new Vector2(tmp));
}
rotation_point.set((verts.get(0).x + verts.get(1).x + verts.get(2).x + verts.get(3).x) / 4, (verts.get(0).y + verts.get(1).y + verts.get(2).y + verts.get(3).y) / 4);
sprite.setPosition(rotation_point.x - sprite.getWidth() / 2, rotation_point.y - sprite.getHeight() / 2);
sprite.setRotation(bodies.getAngle() * MathUtils.radiansToDegrees);
sprite.draw(game.batch);
}
}
}
}
game.batch.end();
I have a game where my bodies are made from multiple square fixtures, so this is the code to render each square sprite on each square fixture.
Two problems:
1st --> it only renders the first sprite in the array.
2nd --> (SOLVED) if you look at the following loop
for (int i = 0; i < s.getVertexCount(); i++)
{
s.getVertex(i, tmp);
transform.mul(tmp);
verts.add(new Vector2(tmp));
}
well it is apparently different compared to
for (int i = 0; i < s.getVertexCount(); i++)
{
s.getVertex(i, tmp);
transform.mul(tmp);
verts.add(tmp);
}
The resulting coordinates in the 2nd example are off by half the width and half the height of the square.
When I read the coordinates from both examples the numbers are the same, but when setting the sprite position, the 2nd example goes off.
You should probably ask the two questions separately, but to answer your second question: yes, they ARE different.
In the first one you add a new Vector2 to verts each time through the loop, so verts ends up holding a load of different Vector2 objects.
In the second one you add the same Vector2 (tmp) to verts over and over again, so the list just holds many references to that single object, all showing whatever value it was last set to (remember that Java collections store object references, not copies).
Caveat - My answer assumes that verts is some sort of standard collection or a libgdx Array.
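To make that concrete, here is a small illustration of my own (assuming com.badlogic.gdx.math.Vector2 and com.badlogic.gdx.utils.Array, as in the question):
Vector2 tmp = new Vector2();
Array<Vector2> verts = new Array<Vector2>();

tmp.set(1, 2);
verts.add(tmp);              // verts[0] refers to tmp
tmp.set(3, 4);
verts.add(tmp);              // verts[1] refers to the same tmp
System.out.println(verts);   // [(3.0,4.0), (3.0,4.0)] - the first value was overwritten

verts.clear();
tmp.set(1, 2);
verts.add(new Vector2(tmp)); // verts[0] is an independent copy
tmp.set(3, 4);
verts.add(new Vector2(tmp)); // verts[1] is another independent copy
System.out.println(verts);   // [(1.0,2.0), (3.0,4.0)]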
I'm a beginner in OpenCV4Android and I would like to get some help if possible.
I'm trying to detect colored triangles, squares, or circles using my Android phone camera, but I don't know where to start.
I have been reading the O'Reilly Learning OpenCV book and I have gained some knowledge about OpenCV.
Here is what I want to do:
1- Get the tracking color (just the HSV color) of the object by touching the screen - I have already done this using the color blob example from the OpenCV4Android samples.
2- Find shapes like triangles, squares, or circles in the camera image based on the color chosen before.
I have only found examples of finding shapes within a static image. What I would like is to do this with the camera in real time.
Any help would be appreciated.
Best regards and have a nice day.
If you plan to use the NDK for your OpenCV work, then you can use the same idea they use in OpenCV tutorial 2 - Mixed Processing.
// on camera frames call your native method
public Mat onCameraFrame(CvCameraViewFrame inputFrame)
{
mRgba = inputFrame.rgba();
Nativecleshpdetect(mRgba.getNativeObjAddr()); // native method call to perform color and object detection
// the getNativeObjAddr method gets the address of the Mat object (the camera frame) and passes it to the native side as a long, so you don't have to create and destroy a Mat object on each frame
return mRgba; // the method must return the frame to be displayed
}
public native void Nativecleshpdetect(long matAddrRgba);
On the native side:
JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial2_Tutorial2Activity_Nativecleshpdetect(JNIEnv*, jobject,jlong addrRgba1)
{
Mat& mRgb1 = *(Mat*)addrRgba1;
// mRgb1 is a Mat object which points to the address of the input camera frame, so all the manipulations you do here will reflect on the live camera frame
// once you have your Mat object (i.e. mRgb1) you can implement all the colour and shape detection algorithms you have learnt from the OpenCV book
}
Since all manipulations are done through pointers you have to be a bit careful handling them. Hope this helps.
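One thing worth adding (my note, not part of the answer above): the native method is only available once the shared library containing it has been loaded, typically in a static block. The library name below is just a placeholder for whatever your ndk-build/CMake module is called:
static {
    // Load the .so that contains the JNI implementation of Nativecleshpdetect
    System.loadLibrary("mixed_sample"); // placeholder name - use your own native module's name
}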
Why don't you make use of JavaCV? I think it's a better alternative - you don't have to use the NDK at all for this.
try this:
http://code.google.com/p/javacv/
If you check OpenCV's Back Projection tutorial it does what you are looking for (and a bit more).
Back Projection:
"In terms of statistics, the values stored in the BackProjection
matrix represent the probability that a pixel in a image belongs to
the region with the selected color."
I have converted that tutorial to OpenCV4Android (2.4.8) like you were looking for, it does not use Android NDK. You can see all the code here at Github.
You can also check this answer for more details.
Though it's a bit late, I would like to make a contribution to the question.
1- Get the tracking color (just the HSV color) of the object by touching the screen - I have already done this using the color blob example from the OpenCV4Android samples.
Implement OnTouchListener in your activity; then, in the onTouch function:
int cols = mRgba.cols();
int rows = mRgba.rows();
int xOffset = (mOpenCvCameraView.getWidth() - cols) / 2;
int yOffset = (mOpenCvCameraView.getHeight() - rows) / 2;
int x = (int) event.getX() - xOffset;
int y = (int) event.getY() - yOffset;
Log.i(TAG, "Touch image coordinates: (" + x + ", " + y + ")");
if ((x < 0) || (y < 0) || (x > cols) || (y > rows)) return false;
Rect touchedRect = new Rect();
touchedRect.x = (x > 4) ? x - 4 : 0;
touchedRect.y = (y > 4) ? y - 4 : 0;
touchedRect.width = (x + 4 < cols) ? x + 4 - touchedRect.x : cols - touchedRect.x;
touchedRect.height = (y + 4 < rows) ? y + 4 - touchedRect.y : rows - touchedRect.y;
Mat touchedRegionRgba = mRgba.submat(touchedRect);
Mat touchedRegionHsv = new Mat();
Imgproc.cvtColor(touchedRegionRgba, touchedRegionHsv, Imgproc.COLOR_RGB2HSV_FULL);
// Calculate average color of touched region
mBlobColorHsv = Core.sumElems(touchedRegionHsv);
int pointCount = touchedRect.width * touchedRect.height;
for (int i = 0; i < mBlobColorHsv.val.length; i++)
mBlobColorHsv.val[i] /= pointCount;
mBlobColorRgba = converScalarHsv2Rgba(mBlobColorHsv);
mColor = mBlobColorRgba.val[0] + ", " + mBlobColorRgba.val[1] + ", " + mBlobColorRgba.val[2] + ", " + mBlobColorRgba.val[3];
Log.i(TAG, "Touched rgba color: (" + mBlobColorRgba.val[0] + ", " + mBlobColorRgba.val[1] +
", " + mBlobColorRgba.val[2] + ", " + mBlobColorRgba.val[3] + ")");
mRgba is a Mat object which was initialized in onCameraViewStarted as
mRgba = new Mat(height, width, CvType.CV_8UC4);
And for the 2nd part:
2- Find shapes like triangles, squares, or circles in the camera image based on the color chosen before.
I tried to find out the shape of the selected contour using approxPolyDP:
MatOfPoint2f approxCurve = new MatOfPoint2f();
MatOfPoint2f contour2f = new MatOfPoint2f(contours.get(0).toArray());
// approximate the contour with a polygon; contour2f is the MatOfPoint2f form required by approxPolyDP
double approxDistance = Imgproc.arcLength(contour2f, true) * 0.02;
Imgproc.approxPolyDP(contour2f, approxCurve, approxDistance, true);
//Convert back to MatOfPoint
MatOfPoint points = new MatOfPoint(approxCurve.toArray());
System.out.println("points length" + points.toArray().length);
if( points.toArray().length == 5)
{
System.out.println("Pentagon");
mShape = "Pentagon";
}
else if(points.toArray().length > 5)
{
System.out.println("Circle");
Imgproc.drawContours(mRgba, contours, 0, new Scalar(255, 255, 0, -1));
mShape = "Circle";
}
else if(points.toArray().length == 4)
{
System.out.println("Square");
mShape = "Square";
}
else if(points.toArray().length == 3)
{
System.out.println("Triangle");
mShape = "Triangle";
}
This was done in the onCameraFrame function, after I obtained the contour list.
For me, if the length of the point array was more than 5 it was usually a circle. But there are other algorithms to detect a circle and its attributes.
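For reference, here is a rough sketch of how such a contour list could be obtained in onCameraFrame before the classification above is applied. This follows the general color blob detection approach rather than the answerer's exact code, and the HSV tolerances are placeholders you would tune yourself:
public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
    mRgba = inputFrame.rgba();

    // Threshold the frame around the HSV color picked in onTouch
    // (tolerances are illustrative, and hue wrap-around is ignored here for brevity)
    Mat hsv = new Mat();
    Imgproc.cvtColor(mRgba, hsv, Imgproc.COLOR_RGB2HSV_FULL);
    Scalar lower = new Scalar(mBlobColorHsv.val[0] - 25, 50, 50);
    Scalar upper = new Scalar(mBlobColorHsv.val[0] + 25, 255, 255);
    Mat mask = new Mat();
    Core.inRange(hsv, lower, upper, mask);

    // Extract the outer contours of the matching blobs
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(mask, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    // ... run the approxPolyDP classification shown above on the contours ...

    hsv.release();
    mask.release();
    return mRgba;
}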
I wrote code to detect rectangles in OpenCV, and I am able to detect a few objects, but I am not able to detect a physical door or other big rectangles. Please check my code and correct me if I am wrong somewhere. Another problem is that this code does not detect rectangles consistently, so when I draw a rectangle it keeps appearing and disappearing, which looks bad. Is there any way to detect it reliably in every frame?
Mat output= getGray(inputFrame.rgba(),inputFrame.rgba());
Imgproc.medianBlur(output, output, 5);
Imgproc.erode(output, output, new Mat());
Imgproc.dilate(output, output, new Mat());
Mat edges = new Mat();
Imgproc.Canny(output, output, 5, 50);
// Vector<MatOfPoint> vector=new Vector<MatOfPoint>();
// Imgproc.findContours(output, points, output, Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
contours = new ArrayList<MatOfPoint>();
Mat hierarchy = new Mat();
contours.clear();
Imgproc.findContours(output, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
MatOfPoint2f approxCurve = new MatOfPoint2f();
rgbImage=inputFrame.rgba();
mDrawnContours.clear();
for(int i=0;i< contours.size();i++){
MatOfPoint tempContour=contours.get(i);
MatOfPoint2f newMat = new MatOfPoint2f( tempContour.toArray() );
int contourSize = (int)tempContour.total();
Imgproc.approxPolyDP(newMat, approxCurve, contourSize*0.15, true);
MatOfPoint points=new MatOfPoint(approxCurve.toArray());
if((Math.abs(Imgproc.contourArea(tempContour))<100) || !Imgproc.isContourConvex(points)){
Log.i(TAG, "::onCameraFrame:" + " too small");
appendLog("Too small");
continue;
}
else if(points.toArray().length >= 4 && points.toArray().length <= 6){
int vtc = points.toArray().length;
Vector<Double> cosList=new Vector<Double>();
for (int j = 2; j < vtc+1; j++){
cosList.add(angle(points.toArray()[j%vtc], points.toArray()[j-2], points.toArray()[j-1]));
}
double mincos = getMin(cosList);
double maxcos = getMax(cosList);
Log.i(TAG, "::onCameraFrame:" + "mincos:"+mincos+"maxcos:"+maxcos);
if (vtc == 4 && mincos >= -0.1 && maxcos <= 0.3)
{
mTotalSquare++;
Imgproc.drawContours(rgbImage, contours, i, new Scalar(0,0,255));
DrawnContours contours2=new DrawnContours();
contours2.setIndex(i);
mDrawnContours.add(contours2);
Log.i(TAG, "::onCameraFrame:" + "found");
appendLog("found");
}
else{
Log.i(TAG, "::onCameraFrame:" +" not found " +"mincos:"+mincos+"maxcos:"+maxcos);
appendLog("not found 1");
}
}
}
return rgbImage;
Let me know if you have any questions.
I suppose that large contours have more than 4 edges: their outline consists of a large number of short line segments, and how many depends on the approximation tolerance passed in the line
Imgproc.approxPolyDP(newMat, approxCurve, contourSize*0.15, true);
And you have a condition that checks the number of edges:
points.toArray().length <= 6
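If that is the cause, one thing to try (my suggestion, not part of the original answer) is to derive the approximation tolerance from the contour's perimeter rather than from its point count, which tends to collapse those short segments into fewer edges:
// epsilon proportional to the perimeter usually reduces a door-like outline to 4 vertices
MatOfPoint2f newMat = new MatOfPoint2f(tempContour.toArray());
double epsilon = 0.02 * Imgproc.arcLength(newMat, true);
Imgproc.approxPolyDP(newMat, approxCurve, epsilon, true);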
It may be a simple/stupid question, but I have a conversion problem in OpenCV (Android).
My goal is to calculate the fundamental matrix from corresponding matches between two consecutive images.
I have programmed this so far (and it is working):
detector.detect(actImg, actKP);
detector.detect(prevImg, prevKP);
descExtractor.compute(prevImg, prevKP, descriptorPrev);
descExtractor.compute(actImg, actKP, descriptorAct);
descMatcher.match(descriptorPrev, descriptorAct, matches);
Features2d.drawMatches(prevImg, prevKP, actImg, actKP,matches, mRgba);
matches is of type MatOfDMatch.
Now I want to calculate the fundamental matrix from the points that match each other; therefore I must know which of the keypoints in the first image (prevKP) were found in the second image (actKP).
Mat fundamental_matrix = Calib3d.findFundamentalMat(nextPts, prevPts, Calib3d.FM_RANSAC,3, 0.99);
First question:
How can I extract/convert MatOfKeyPoint to MatOfPoint2f (so that they can be passed to findFundamentalMat)?
Second question:
How do I pass only the matched keypoints to findFundamentalMat?
Is this a good way of doing it?
Thanks a lot in advance!
EDIT
Thanks a lot for your detailed response!
I wrote your code into two functions:
private MatOfPoint2f getMatOfPoint2fFromDMatchesTrain(MatOfDMatch matches2,
MatOfKeyPoint prevKP2) {
DMatch dm[] = matches2.toArray();
List<Point> lp1 = new ArrayList<Point>(dm.length);
KeyPoint tkp[] = prevKP2.toArray();
for (int i = 0; i < dm.length; i++) {
DMatch dmm = dm[i];
if (dmm.trainIdx < tkp.length)
lp1.add(tkp[dmm.trainIdx].pt);
}
return new MatOfPoint2f(lp1.toArray(new Point[0]));
}
private MatOfPoint2f getMatOfPoint2fFromDMatchesQuery(MatOfDMatch matches2,
MatOfKeyPoint actKP2) {
DMatch dm[] = matches2.toArray();
List<Point> lp2 = new ArrayList<Point>(dm.length);
KeyPoint qkp[] = actKP2.toArray();
for (int i = 0; i < dm.length; i++) {
DMatch dmm = dm[i];
if (dmm.queryIdx < qkp.length)
lp2.add(qkp[dmm.queryIdx].pt);
}
return new MatOfPoint2f(lp2.toArray(new Point[0]));
}
But when I am calling
prevPts = getMatOfPoint2fFromDMatchesTrain(matches, prevKP);
nextPts = getMatOfPoint2fFromDMatchesQuery(matches, actKP);
Mat fundamental_matrix = Calib3d.findFundamentalMat(
nextPts, prevPts, Calib3d.FM_RANSAC, 3, 0.99);
the problem is that I get error -215.
The error:
error: (-215) npoints >= 0 && points2.checkVector(2) == npoints && points1.type() == points2.type() in function cv::Mat cv::findFundamentalMat(...
I verified that prevPts and nextPts aren't below 10 points (for RANSAC).
So I would guess that the error means the points aren't floating point, but I checked with the debugger and these points are floating point.
Your suggested code line:
return new MatOfPoint2f(lp2.toArray(new Point[0]));
should convert the points to floating point, or am I wrong?
Thanks again!
Unfortunately there is no better way (even in the C++ API) than to loop through all matches and copy the values to a new Mat (or vector).
In Java you can do it as follows:
DMatch dm[] = matches.toArray();
List<Point> lp1 = new ArrayList<Point>(dm.length);
List<Point> lp2 = new ArrayList<Point>(dm.length);
KeyPoint tkp[] = prevKP.toArray();
KeyPoint qkp[] = actKP.toArray();
for (int i = 0; i < dm.length; i++) {
DMatch match = dm[i];
lp1.add(tkp[match.trainIdx].pt);
lp2.add(qkp[match.queryIdx].pt);
}
MatOfPoint2f pointsPrev = new MatOfPoint2f(lp1.toArray(new Point[0]));
MatOfPoint2f pointsAct = new MatOfPoint2f(lp2.toArray(new Point[0]));
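And a short usage sketch of feeding those two matrices into findFundamentalMat, with the parameters taken from the question's own call (note that both matrices must contain the same number of points, in matching order):
// pointsAct/pointsPrev correspond to the question's nextPts/prevPts
Mat fundamentalMatrix = Calib3d.findFundamentalMat(
        pointsAct, pointsPrev, Calib3d.FM_RANSAC, 3, 0.99);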