how to reduce wrong recognition using cascade classifier - android

Hello, I'm trying to recognize a car using a cascade classifier with Android and the OpenCV library. My problem is that my phone is marking almost everything as a car.
I've created my code based on:
https://www.youtube.com/watch?v=WEzm7L5zoZE
and the face detection sample. My app behaves very strangely because the marking looks random; I can't even tell whether a marked car is a correct detection or just random behaviour. At the moment it is even marking my keyboard as a car. I'm not sure what I can improve, and I don't see any progress between training up to 5 stages or 14 stages.
I've trained my file up to 14 stages
my code looks like this:
@Override
public Mat onCameraFrame(Mat aInputFrame) {
    // return FrameAnalyzer.analyzeFrame(aInputFrame);
    // Create a grayscale image
    Imgproc.cvtColor(aInputFrame, grayscaleImage, Imgproc.COLOR_RGBA2RGB);
    MatOfRect objects = new MatOfRect();
    // Use the classifier to detect faces
    if (cascadeClassifier != null) {
        cascadeClassifier.detectMultiScale(grayscaleImage, objects, 1.1, 1,
                2, new Size(absoluteObjectSize, absoluteObjectSize),
                new Size());
    }
    Rect[] dataArray = objects.toArray();
    for (int i = 0; i < dataArray.length; i++) {
        Core.rectangle(aInputFrame, dataArray[i].tl(), dataArray[i].br(),
                new Scalar(0, 255, 0, 255), 3);
    }
    return aInputFrame;
}

Try changing the following:
Using COLOR_RGBA2RGB with cvtColor, as in the sample code, will not give a grayscale image. Use COLOR_RGBA2GRAY instead.
Increase the minNeighbors parameter of detectMultiScale. In your call it is 1 (the 2 that follows it is the flags argument); more neighbours mean more confidence in each detection and fewer false positives. A sketch of both changes follows these suggestions.
Make sure there are enough samples to train with. A quick search and a read through the literature give the impression that thousands of images are needed for training: around 10,000 images are used for OCR Haar training, for example, and 3,000 to 5,000 samples for face training.
More importantly, decide whether you really want to go with Haar training for identifying a car. There may be better methods of vehicle identification; for a moving vehicle, for example, you could use optical-flow-based techniques.
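Below is a minimal sketch of the first two suggestions applied to the question's onCameraFrame (it reuses the question's grayscaleImage, cascadeClassifier and absoluteObjectSize fields; the minNeighbors value of 5 is only an illustrative starting point):
@Override
public Mat onCameraFrame(Mat aInputFrame) {
    // Convert to an actual grayscale image for the classifier
    Imgproc.cvtColor(aInputFrame, grayscaleImage, Imgproc.COLOR_RGBA2GRAY);

    MatOfRect objects = new MatOfRect();
    if (cascadeClassifier != null) {
        // scaleFactor = 1.1, minNeighbors raised to 5 to reject isolated hits
        cascadeClassifier.detectMultiScale(grayscaleImage, objects, 1.1, 5,
                0, new Size(absoluteObjectSize, absoluteObjectSize), new Size());
    }

    for (Rect r : objects.toArray()) {
        Core.rectangle(aInputFrame, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 3);
    }
    return aInputFrame;
}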

Related

OpenCV findContours on Android seems much slower than findContours in Python. Do you have any suggestions to improve the algorithm's speed?

This is the first time I'm asking for help here, so I will try to be as precise as possible in my question.
I am trying to develop a shape detection app for Android.
I first identified an algorithm that works for my case by experimenting in Python. Basically, for each frame I do this:
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_color, upper_color)
contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # here I filter my results
With this algorithm I am able to run the analysis in real time on videos with a frame rate of 120 fps.
So I tried to implement the same algorithm in Android Studio, doing the following for each frame:
Imgproc.cvtColor(frameInput, tempFrame, Imgproc.COLOR_BGR2HSV);
Core.inRange(tempFrame, lowColorRoi, highColorRoi, tempFrame);
List<MatOfPoint> contours1 = new ArrayList<MatOfPoint>();
Imgproc.findContours(tempFrame /*.clone()*/, contours1, new Mat(), Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);
for (MatOfPoint c : contours1) {
    // here I filter my results
}
I see that the findContours call alone takes 500-600 ms per iteration (and even more when using tempFrame.clone()), which allows the analysis to run at only about 2 fps.
This speed is of course not acceptable. Do you have any suggestions on how to improve it? 30-40 fps would already be a good target for me.
I would really appreciate any help. Many thanks in advance.
I would suggest doing your shape analysis on a lower-resolution version of the image, if that is acceptable. Timing is often directly proportional to the number of pixels and the number of channels in the image, so halving the width and height can give roughly a 4x performance improvement. If that works, the first operation should be a resize, so that all subsequent calls have a smaller burden.
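As an illustration (not part of the original answer), a sketch of doing the resize first, reusing the variable names from the question and assuming a fixed 0.5 scale factor:
// Shrink the frame before the HSV/contour work; coordinates found on the
// small image must later be scaled back up by the same factor.
Mat small = new Mat();
Imgproc.resize(frameInput, small, new Size(), 0.5, 0.5, Imgproc.INTER_AREA);

Imgproc.cvtColor(small, tempFrame, Imgproc.COLOR_BGR2HSV);
Core.inRange(tempFrame, lowColorRoi, highColorRoi, tempFrame);

List<MatOfPoint> contours1 = new ArrayList<MatOfPoint>();
Imgproc.findContours(tempFrame, contours1, new Mat(),
        Imgproc.RETR_TREE, Imgproc.CHAIN_APPROX_SIMPLE);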
Next, be careful using OpenCV from Java/Kotlin, because there is a definite cost to marshalling data over the JNI interface. You could write the majority of your code in native C++ and make just a single JNI call per frame to a C++ function that handles all of the shape analysis at once.
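A rough sketch of that structure, with hypothetical class, library and method names; only the Mat's native address crosses the JNI boundary once per frame:
public class NativeShapeAnalyzer {
    static {
        System.loadLibrary("shape_analyzer"); // name of your NDK-built library
    }

    // Implemented in C++; it would do the inRange/findContours/filtering natively
    // and return, for example, the number of shapes kept.
    public static native int analyzeFrame(long frameMatAddr);
}

// Per-frame usage from Java:
// int kept = NativeShapeAnalyzer.analyzeFrame(frameInput.getNativeObjAddr());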

Generate and export point cloud from Project Tango

After some weeks of waiting I finally have my Project Tango. My idea is to create an app that generates a point cloud of my room and exports this to .xyz data. I'll then use the .xyz file to show the point cloud in a browser! I started off by compiling and adjusting the point cloud example that's on Google's github.
Right now I use onXyzIjAvailable(TangoXyzIjData tangoXyzIjData) to get a frame of x, y and z values, i.e. the points. I then save these frames in a PCLManager in the form of Vector3. After I'm done scanning my room, I simply write all the Vector3 objects from the PCLManager to a .xyz file using:
OutputStream os = new FileOutputStream(file);
size = pointCloud.size();
for (int i = 0; i < size; i++) {
    String row = String.valueOf(pointCloud.get(i).x) + " "
            + String.valueOf(pointCloud.get(i).y) + " "
            + String.valueOf(pointCloud.get(i).z) + "\r\n";
    os.write(row.getBytes());
}
os.close();
Everything works fine, no compilation errors or crashes. The only thing that seems to be going wrong is the rotation or translation of the points in the cloud. When I view the point cloud everything is messed up; the area I scanned is not recognizable, though the number of points is the same as recorded.
Could this have something to do with the fact that I don't use PoseData together with the XyzIjData? I'm new to this subject and have a hard time understanding what PoseData actually does. Could someone explain it to me and help me fix my point cloud?
Yes, you have to use TangoPoseData.
I assume you are using TangoXyzIjData correctly, but the data you get that way is relative to where the device is and how it is tilted when you take the shot.
Here's how I solved this:
I started from java_point_to_point_example. In that example they get the coordinates of two different points in two different coordinate systems and then express those coordinates with respect to the base coordinate frame pair.
First of all you have to set up your extrinsics, so you'll be able to perform all the transformations you'll need. To do that I call mExtrinsics = setupExtrinsics(mTango) at the end of my setTangoListener() function. Here's the code (which you can also find in the example I linked above):
private DeviceExtrinsics setupExtrinsics(Tango mTango) {
    // camera-to-IMU transform
    TangoCoordinateFramePair framePair = new TangoCoordinateFramePair();
    framePair.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imu_T_rgb = mTango.getPoseAtTime(0.0, framePair);
    // IMU-to-device transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imu_T_device = mTango.getPoseAtTime(0.0, framePair);
    // IMU-to-depth transform
    framePair.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imu_T_depth = mTango.getPoseAtTime(0.0, framePair);
    return new DeviceExtrinsics(imu_T_device, imu_T_rgb, imu_T_depth);
}
Then when you get the point cloud you have to "normalize" it. Using your extrinsics this is pretty simple:
public ArrayList<Vector3> normalize(TangoXyzIjData cloud, TangoPoseData cameraPose, DeviceExtrinsics extrinsics) {
    ArrayList<Vector3> normalizedCloud = new ArrayList<>();
    TangoPoseData camera_T_imu = ScenePoseCalculator.matrixToTangoPose(extrinsics.getDeviceTDepthCamera());
    while (cloud.xyz.hasRemaining()) {
        Vector3 rotatedV = ScenePoseCalculator.getPointInEngineFrame(
                new Vector3(cloud.xyz.get(), cloud.xyz.get(), cloud.xyz.get()),
                camera_T_imu,
                cameraPose
        );
        normalizedCloud.add(rotatedV);
    }
    return normalizedCloud;
}
This should be enough; now you have a point cloud with respect to your base frame of reference.
If you superimpose two or more of these "normalized" clouds you can get a 3D representation of your room.
There is another way to do this with a rotation matrix, explained here.
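For illustration only (this is not the linked explanation), a sketch of the rotation-matrix route, assuming the pose quaternion is stored as [x, y, z, w] (as in TangoPoseData.rotation) and the translation as [x, y, z], so that a point is mapped as p' = R*p + t:
// Build a 3x3 rotation matrix from a unit quaternion [x, y, z, w] and apply
// it to a point, then add the translation.
public static double[] transformPoint(double[] p, double[] q, double[] t) {
    double x = q[0], y = q[1], z = q[2], w = q[3];
    double[][] r = {
            {1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w)},
            {2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w)},
            {2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y)}
    };
    double[] out = new double[3];
    for (int i = 0; i < 3; i++) {
        out[i] = r[i][0] * p[0] + r[i][1] * p[1] + r[i][2] * p[2] + t[i];
    }
    return out;
}

// e.g. transformPoint(point, pose.rotation, pose.translation)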
My solution is pretty slow (it takes the dev kit around 700 ms to normalize a cloud of ~3000 points), so it is not suitable for a real-time 3D reconstruction application.
At the moment I'm trying to use the Tango 3D Reconstruction Library in C using the NDK and JNI. The library is well documented, but it is very painful to set up the environment and start using JNI (in fact, that's where I'm stuck right now).
Drifting
There still is a problem when I turn around with the device. It seems that the point cloud spreads out a lot.
I guess you are experiencing some drifting.
Drifting happens when you use Motion Tracking alone: it consists of many very small errors in estimating your pose that together cause a big error in your pose relative to the world. For instance, if you take your Tango device and walk in a circle while tracking your TangoPoseData, and then plot your trajectory in a spreadsheet, you'll notice that the tablet never returns to its starting point, because it is drifting away.
The solution to that is Area Learning.
If you have no clear idea about this topic, I suggest watching this talk from Google I/O 2016. It covers a lot of points and gives you a nice introduction.
Using area learning is quite simple.
You just have to change your base frame of reference to TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION. This way you tell your Tango to estimate its pose not with respect to where it was when you launched the app, but with respect to a fixed point in the area.
Here's my code:
private static final ArrayList<TangoCoordinateFramePair> FRAME_PAIRS =
        new ArrayList<TangoCoordinateFramePair>();
static {
    FRAME_PAIRS.add(new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_AREA_DESCRIPTION,
            TangoPoseData.COORDINATE_FRAME_DEVICE
    ));
}
Now you can use this FRAME_PAIRS as usual.
Then you have to modify your TangoConfig to tell Tango to use Area Learning, via the key TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION. Remember that when using TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION you CAN'T use learning mode or load an ADF (area description file).
So you can't use:
TangoConfig.KEY_BOOLEAN_LEARNINGMODE
TangoConfig.KEY_STRING_AREADESCRIPTION
Here's how I initialize TangoConfig in my app:
TangoConfig config = tango.getConfig(TangoConfig.CONFIG_TYPE_DEFAULT);
//Turning depth sensor on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DEPTH, true);
//Turning motiontracking on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_MOTIONTRACKING,true);
//If Tango gets stuck, it tries to recover automatically.
config.putBoolean(TangoConfig.KEY_BOOLEAN_AUTORECOVERY,true);
//Tango tries to store and remember places and rooms,
//this is used to reduce drifting.
config.putBoolean(TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION,true);
//Turns the color camera on.
config.putBoolean(TangoConfig.KEY_BOOLEAN_COLORCAMERA, true);
Using this technique you'll get rid of that spreading.
PS
In the talk I linked above, at around 22:35, they show how to port your application to Area Learning. In their example they use TangoConfig.KEY_BOOLEAN_ENABLE_DRIFT_CORRECTION; this key no longer exists (at least in the Java API). Use TangoConfig.KEY_BOOLEAN_DRIFT_CORRECTION instead.

why detected object jumps using OpenCV

OK, I have a strange problem; I'll try to describe it as well as I can.
I've trained my app to detect a car when looking at it from the side:
Imgproc.cvtColor(aInputFrame, grayscaleImage, Imgproc.COLOR_RGBA2RGB);
MatOfRect objects = new MatOfRect();
// Use the classifier to detect cars
if (cascadeClassifier != null) {
    cascadeClassifier.detectMultiScale(grayscaleImage, objects, 1.1, 1,
            2, new Size(absoluteObjectSize, absoluteObjectSize),
            new Size());
}
Rect[] dataArray = objects.toArray();
for (int i = 0; i < dataArray.length; i++) {
    Core.rectangle(aInputFrame, dataArray[i].tl(), dataArray[i].br(),
            new Scalar(0, 255, 0, 255), 3);
    mRenderer.setCameraPosition(-5, 5, 60f);
}
Now, this code works nicely, in the sense that it detects cars and marks them with a green rectangle. The problem is that the marked rectangle jumps like hell: even when the phone is held still, the rectangle jumps from left to right to the middle, and there is never one steady rectangle. I hope I've described the problem properly. I would like to stabilize the marking because I want to draw an overlay based on it, and I can't have it jumping around like this.
Check the parameters of detectMultiScale: it expects an image of type CV_8U, so you will need to convert to a grayscale image with COLOR_RGBA2GRAY instead of COLOR_RGBA2RGB.
In detectMultiScale, increase the minNeighbors parameter to avoid false positives.
Suggestion: if the input is a video stream, don't run detectMultiScale on every frame. It is slow even if you use LBP cascades. Try detection in one frame, followed by tracking techniques (a sketch of this idea follows below).
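As an illustrative sketch of that last point (the field names, the every-5-frames interval and the 0.3 smoothing factor are only example choices): run the detector every few frames and exponentially smooth the rectangle in between, which also damps the jumping:
private static final int DETECT_EVERY_N_FRAMES = 5;
private int frameCount = 0;
private Rect smoothedRect = null;

private Rect updateDetection(Mat gray) {
    frameCount++;
    if (smoothedRect == null || frameCount % DETECT_EVERY_N_FRAMES == 0) {
        MatOfRect objects = new MatOfRect();
        cascadeClassifier.detectMultiScale(gray, objects, 1.1, 4, 0,
                new Size(absoluteObjectSize, absoluteObjectSize), new Size());
        Rect[] found = objects.toArray();
        if (found.length > 0) {
            Rect best = found[0];
            if (smoothedRect == null) {
                smoothedRect = best;
            } else {
                // Simple exponential smoothing of position and size
                double a = 0.3;
                smoothedRect = new Rect(
                        (int) (a * best.x + (1 - a) * smoothedRect.x),
                        (int) (a * best.y + (1 - a) * smoothedRect.y),
                        (int) (a * best.width + (1 - a) * smoothedRect.width),
                        (int) (a * best.height + (1 - a) * smoothedRect.height));
            }
        }
    }
    return smoothedRect; // may be null until the first detection
}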

How to improve OpenCV face detection performance in android?

I am working on an Android project in which I am using OpenCV to detect faces in all the images in the gallery. The face extraction runs in a service, which keeps working until all the images are processed. It stores the detected faces in internal storage and also shows them in a grid view if the activity is open.
My code is:
CascadeClassifier mJavaDetector = null;

public void getFaces()
{
    for (int i = 0; i < size; i++)
    {
        File file = new File(urls.get(i));
        imagepath = urls.get(i);
        defaultBitmap = BitmapFactory.decodeFile(imagepath, bitmapFactoryOptions);
        mJavaDetector = new CascadeClassifier(FaceDetector.class.getResource("lbpcascade_frontalface").getPath());
        Mat image = new Mat(defaultBitmap.getWidth(), defaultBitmap.getHeight(), CvType.CV_8UC1);
        Utils.bitmapToMat(defaultBitmap, image);
        MatOfRect faceDetections = new MatOfRect();
        try
        {
            mJavaDetector.detectMultiScale(image, faceDetections, 1.1, 10, 0, new Size(20, 20), new Size(image.width(), image.height()));
        }
        catch (Exception e)
        {
            e.printStackTrace();
        }
        if (faceDetections.toArray().length > 0)
        {
        }
    }
}
Everything is fine, but it is detecting faces very slowly; the performance is very poor. When I debugged the code, I found that the line taking the time is:
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
I have checked multiple posts about this problem but I didn't find a solution.
Please tell me what I should do to solve it.
Any help would be greatly appreciated. Thank you.
You should pay attention to the parameters of detectMultiScale():
scaleFactor – Parameter specifying how much the image size is reduced at each image scale. This parameter is used to create a scale pyramid. It is necessary because the model has a fixed size during training; without the pyramid the only detectable size would be this fixed one (which can also be read from the XML). However, face detection can be made scale-invariant by using a multi-scale representation, i.e. detecting large and small faces with the same detection window.
scaleFactor depends on the size of your trained detector, but in fact, you need to set it as high as possible while still getting "good" results, so this should be determined empirically.
Your value of 1.1 can be a good choice for this purpose: with a relatively small resize step (reducing the size by 10%), you increase the chance of finding a size that matches the model. If your trained detector has a size of 10x10, you can then detect faces of size 11x11, 12x12 and so on. But a factor of 1.1 requires roughly twice as many pyramid layers (and about 2x the computation time) as 1.2 does.
minNeighbors – Parameter specifying how many neighbours each candidate rectangle should have to retain it.
The cascade classifier works with a sliding-window approach: you slide a window over the image, then resize the image and search again until you cannot resize it further. In every iteration the positive outputs of the cascade classifier are stored, but unfortunately it also produces many false positives. To eliminate false positives and get the proper face rectangle out of the detections, a neighbourhood requirement is applied. A value of 3-6 is a good choice; if the value is too high you can lose true positives as well.
minSize – In terms of the sliding-window approach above, this is the smallest window the cascade will consider. Objects smaller than this are ignored. Usually cv::Size(20, 20) is enough for face detection.
maxSize – Maximum possible object size. Objects bigger than that are ignored.
Finally you can try different classifiers based on different features (such as Haar, LBP, HoG). Usually, LBP classifiers are a few times faster than Haar's, but also less accurate.
And it is also strongly recommended to look over these questions:
Recommended values for OpenCV detectMultiScale() parameters
OpenCV detectMultiScale() minNeighbors parameter
Instead of reading images as a Bitmap and then converting them to a Mat via Utils.bitmapToMat(defaultBitmap, image), you can use Mat image = Highgui.imread(imagepath); directly. You can check the imread() function here.
Also, the line below takes so much time partly because the detector is looking for faces of at least Size(20, 20), which is pretty small. Check this video for a visualization of face detection using OpenCV.
mJavaDetector.detectMultiScale(image,faceDetections,1.1, 10, 0, new Size(20,20), new Size(image.width(), image.height()));
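For illustration, a sketch combining the suggestions above for the OpenCV 2.4 Java API used in the question (the 1.2 scale factor and the 60x60 minimum size are placeholder values to tune):
// Read the image directly into a grayscale Mat instead of decoding a Bitmap first.
Mat image = Highgui.imread(imagepath, Highgui.CV_LOAD_IMAGE_GRAYSCALE);
MatOfRect faceDetections = new MatOfRect();
mJavaDetector.detectMultiScale(
        image, faceDetections,
        1.2,                  // coarser pyramid: fewer layers, faster
        4,                    // minNeighbors in the 3-6 range
        0,
        new Size(60, 60),     // ignore faces smaller than ~60x60 px
        new Size(image.width(), image.height()));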

digit detection from the image in android

I am looking for a way to extract digits from an image on Android. I have to take a picture and then get the numbers from that image. OpenCV is an option; can OpenCV be used on Android? Kindly suggest a proper approach. I will be grateful to you.
There are many OCR libraries for Android; check these links:
https://github.com/rmtheis/android-ocr
https://github.com/GautamGupta/Simple-Android-OCR
http://www.abbyy.com/mobileocr/android/
best OCR (Optical character recognition) example in android
OpenCV supports the Android platform. You have to set up OpenCV4Android; there are step-by-step instructions here:
http://docs.opencv.org/doc/tutorials/introduction/android_binary_package/O4A_SDK.html
However, OpenCV alone is not the solution but only one step. You then have to use a character recognition engine; the most popular one is Tesseract-ocr, but it is really not an easy task.
Also, these engines usually recognize all characters; once recognition works, extracting just the digits is the easiest part, in Java.
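As a hedged sketch (not from the answer above): with the tess-two wrapper around Tesseract, digit-only recognition looks roughly like this, assuming eng.traineddata has already been copied into a tessdata folder under dataPath:
import android.graphics.Bitmap;
import com.googlecode.tesseract.android.TessBaseAPI;

public class DigitReader {
    // Returns the digits Tesseract recognizes in the given bitmap region.
    public static String readDigits(String dataPath, Bitmap numberRegion) {
        TessBaseAPI tess = new TessBaseAPI();
        tess.init(dataPath, "eng");
        // Restrict recognition to digits so letters are not reported.
        tess.setVariable(TessBaseAPI.VAR_CHAR_WHITELIST, "0123456789");
        tess.setImage(numberRegion);
        String digits = tess.getUTF8Text();
        tess.end();
        return digits;
    }
}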
This works for me; you just need to specify the number size (copy, sort, OCR_verifySizes and preprocessChar below are helper methods not shown in the answer):
ArrayList output = new ArrayList<>();
cvtColor(input, input, COLOR_BGRA2GRAY);
Mat img_threshold = new Mat();
threshold(input, img_threshold, 60, 255, THRESH_BINARY_INV);
Mat img_contours = copy(img_threshold);
// Find contours of possible characters
List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
findContours(img_contours, contours, new Mat(), Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE); // all pixels of each contour
contours = sort(contours);
// Draw blue contours on a white image
Mat result = copy(img_threshold);
cvtColor(result, result, COLOR_GRAY2BGR);
drawContours(result, contours,
        -1,                    // draw all contours
        new Scalar(0, 0, 255), // in blue
        1);                    // with a thickness of 1
// Start iterating over each contour found
ListIterator<MatOfPoint> itc = contours.listIterator();
// Remove patches that fall outside the aspect-ratio and area limits
while (itc.hasNext())
{
    // Create bounding rect of object
    MatOfPoint mp = new MatOfPoint(itc.next().toArray());
    Rect mr = boundingRect(mp);
    rectangle(result, new Point(mr.x, mr.y), new Point(mr.x + mr.width, mr.y + mr.height), new Scalar(0, 255, 0));
    Mat auxRoi = new Mat(img_threshold, mr);
    if (OCR_verifySizes(auxRoi))
    {
        output.add(preprocessChar(auxRoi));
    }
}
return output;
