Android OpenCV optimization - line detection with HoughLines is slow

In my project I need to detect lines in an image - actually, rows and columns inside a rectangle. I use OpenCV to accomplish this. I have done it successfully, but it's kinda slow. I use many functions to preprocess the image, e.g. thresholding, Canny, dilation, Gaussian blur etc.
I use HoughLines like this
Imgproc.HoughLines(image, lines, 1, Math.PI/90 , threshold, minLineSize, maxGap);
My program, which is ~600 lines of code, takes about 2.5 s to complete, and 2.2 s of that is spent on this one call. As you can see, I use the Java interface for OpenCV. Is it possible to optimize this somehow? Would rewriting my program with the NDK make it faster? As I understand OpenCV4Android, it's just a wrapper around native functions, so I don't think it would be. Or is there a better, faster approach to detecting lines in an image? Thanks for any advice.

Can you count the number of lines returned from HoughLines? If there are thousands, then it will likely take that sort of time to generate them.
I'd recommend changing your Canny settings to reduce the number of edges that HoughLines needs to work on, if possible.
Also, you can try different parameters for HoughLines. Values of 80, 30 and 10 respectively for the last three arguments give manageable results for me. (Note that a threshold/minimum-length/maximum-gap parameter triple is actually the signature of the probabilistic variant, Imgproc.HoughLinesP.)

Related

Fast drawing lots of lines in Android Canvas: dealing with antialiasing and KitKat's peculiarities

I need to draw lots of lines fast (~10k per frame is a common quantity). Known ways to draw those lines (single-threaded, antialiased):
drawLines() with a hardware accelerated Canvas. Fast, but the lines get antialiasing blur (see the picture below). Also, on KitKat nothing is drawn if the points array contains NaN values
drawLines() with a software rendered Canvas. Moderate speed, no blur, KitKat problem remains
for (int i ...) drawLine() with a hardware accelerated Canvas. The slowest of the three listed, no blur, no KitKat problems
Something tells me there are a couple of simple tricks to avoid both the antialiasing and the KitKat issues while keeping performance high.
For the first one: is it possible, for example, to draw those lines non-antialiased and then apply antialiasing to the whole bitmap (a variation of the fastest option)?
For the second one: no ideas, other than some trimming method that deals with the NaNs. Anyway, KitKat is relatively new and popular; there should be solutions for its issues, otherwise the platform would be quite a headache to use.
UPD2:
This question contains two separate issues:
- Antialiasing when using drawLines() with hardware acceleration and arbitrary (yet valid) input
- drawLines() refusing to draw anything on KitKat if the input contains NaNs
drawLines() is in focus because it is way faster than, for example, drawing the lines one by one using drawLine()
Also, the pictures below are the results of applying drawLines() two times to the same array:
canvas.drawLines(toDraw, 0, qty, paint);
canvas.drawLines(toDraw, 2, qty-4, paint); //qty % 4 == 0
UPD:
That's how the blur looks; it appears at the ends of the lines.
There's a tangle of topics here; let's try to unravel them.
Line "blurring"
Based on our chat, this issue only shows up once you start drawing line segments whose length is < 1 px. It's worth noting that drawLines() tessellates all the line segments in one pass, and the tessellator is likely falling over due to the presence of sub-pixel lines in the set. This is why you don't see the "blurring" when using individual drawLine() calls: each drawLine() tessellates only the segment you've given it, so the errors are confined to those super-small segments (which are too small to see anyhow). The fix is to add some logic to remove the < 1 px lines from your set; this will resolve the issue and let you keep using drawLines(), which is faster than the other methods.
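A minimal sketch of that pre-filtering step, assuming the packed {x0, y0, x1, y1, ...} array layout that Canvas.drawLines() takes (the class and method names here are made up for illustration):

```java
// Filters out line segments shorter than 1 px from a packed
// {x0, y0, x1, y1, ...} array, as passed to Canvas.drawLines().
public final class SegmentFilter {
    public static float[] dropSubPixelSegments(float[] pts) {
        float[] out = new float[pts.length];
        int n = 0;
        for (int i = 0; i + 3 < pts.length; i += 4) {
            float dx = pts[i + 2] - pts[i];
            float dy = pts[i + 3] - pts[i + 1];
            // Keep the segment only if its squared length is at least 1 px.
            if (dx * dx + dy * dy >= 1f) {
                out[n++] = pts[i];
                out[n++] = pts[i + 1];
                out[n++] = pts[i + 2];
                out[n++] = pts[i + 3];
            }
        }
        return java.util.Arrays.copyOf(out, n);
    }
}
```

The filtered array can then be handed to canvas.drawLines(filtered, paint) exactly as before.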
NaNs issue
NaNs cause lots of problems on GPUs, so it makes sense that including them in your drawing list causes problems, much like the < 1 px segments that are causing the blurring. Again, this is why you don't see visual problems with individual drawLine() calls: the NaNs break the tessellator, but the damage is isolated to those single segments rather than the entire line list. Again, the solution points to filtering the list to remove the NaNs.
Performance
Given that the overhead of tessellating a line is significantly larger than that of a CPU check to discard a bad one, adding a preprocessing pass to remove the NaNs and < 1 px lines should be well within your performance budget, and it will remove the visual issues you're seeing.
Well, the solution is just the result of what was written in chat (see Colt's post):
1) The original input array is split into pieces containing no NaNs
2) Those pieces are split further so that the maximum ratio between line lengths within a sub-piece is less than 4x (an experimentally obtained value). Applying drawLines() to the final sub-arrays results in no blur and rather good performance.
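Step 1 can be sketched like this (the class and method names are illustrative, not from the original solution):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Splits a packed {x0, y0, x1, y1, ...} coordinate array into
// maximal NaN-free chunks, each with a length divisible by 4 so it
// can be passed straight to Canvas.drawLines().
public final class NanSplitter {
    public static List<float[]> splitOnNaN(float[] pts) {
        List<float[]> chunks = new ArrayList<>();
        int start = 0;
        for (int i = 0; i + 3 < pts.length; i += 4) {
            boolean bad = Float.isNaN(pts[i]) || Float.isNaN(pts[i + 1])
                    || Float.isNaN(pts[i + 2]) || Float.isNaN(pts[i + 3]);
            if (bad) {
                if (i > start) chunks.add(Arrays.copyOfRange(pts, start, i));
                start = i + 4; // skip the segment containing the NaN
            }
        }
        if (start < pts.length) chunks.add(Arrays.copyOfRange(pts, start, pts.length));
        return chunks;
    }
}
```

Each returned chunk is then drawn with its own drawLines() call, so a single NaN no longer poisons the whole batch.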

How to create a magicwand in android?

I want to create a magic wand tool in Android like the one implemented in Photoshop. Is there an open-source library that does this? And if not, can anyone point me in the right direction?
OpenCV has floodFill, which with a little work can give you the magic wand functionality.
Basically you need access to the pixels of the image. You can do that in numerous ways, e.g. via a Canvas/Bitmap. Then your algorithm is a bit like the A* pathfinding algorithm (but not really):
1) set a color-difference threshold
2) define a starting point
3) check every pixel around the current point; if it passes the threshold, save its coords
4) for every pixel that passed the threshold, repeat step 3
The pixel-color difference tested against the threshold is essentially the Euclidean distance between the starting point's color and the color of the pixel being compared: d = sqrt((r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2)
Of course Photoshop has a number of extremely efficient algorithms, but essentially it boils down to the above.
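The steps above can be sketched as a breadth-first flood fill over an ARGB pixel array. This is a toy implementation under the assumptions above (4-connectivity, RGB Euclidean distance against the seed color); a real magic wand does considerably more:

```java
import java.util.ArrayDeque;

// Toy magic-wand selection: BFS flood fill from a seed pixel,
// selecting 4-connected neighbors whose Euclidean RGB distance
// to the seed color is below the threshold.
public final class MagicWand {
    public static boolean[] select(int[] argb, int w, int h,
                                   int seedX, int seedY, double threshold) {
        boolean[] mask = new boolean[w * h];
        int seed = argb[seedY * w + seedX];
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{seedX, seedY});
        mask[seedY * w + seedX] = true;
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            for (int[] d : dirs) {
                int x = p[0] + d[0], y = p[1] + d[1];
                if (x < 0 || y < 0 || x >= w || y >= h || mask[y * w + x]) continue;
                if (colorDistance(seed, argb[y * w + x]) <= threshold) {
                    mask[y * w + x] = true;
                    queue.add(new int[]{x, y});
                }
            }
        }
        return mask;
    }

    // d = sqrt((r2-r1)^2 + (g2-g1)^2 + (b2-b1)^2) on packed ARGB ints
    static double colorDistance(int c1, int c2) {
        int dr = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
        int dg = ((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF);
        int db = (c1 & 0xFF) - (c2 & 0xFF);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }
}
```

On Android you would get the int[] from Bitmap.getPixels() and turn the resulting mask into a selection overlay.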

Find longest straight line in live camera(After edge detection filter) in android

I want to make an app that shows something above the longest straight line in an image.
I know I should convert the RGB image to grayscale.
I also know I should use an edge-detection algorithm (Sobel, Canny, ...):
Sobel Edge Detection in Android
But I don't know how to find the longest straight line in the image. The line may be part of a rectangle or any other shape; I just want to find the position of the longest line with no gradient (or only a small amount of it).
How can I implement this with no external library (or only lightweight ones)?
The Hough Transform is the most commonly used algorithm to find lines in an image. Once you run the transform and find lines, it's just a matter of sorting them by length and then crawling along the lines to check for the constraints your application might have.
RANSAC is also a very quick and reliable solution for finding lines once you have the edge image.
Both these algorithms are fairly easy to implement on your own if you don't want to use an external library.
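To illustrate how simple the core of the Hough Transform is to implement yourself, here is a minimal pure-Java sketch that takes a binary edge image and finds the strongest line. It is a toy version with 1-degree/1-pixel bins and no non-maximum suppression, so treat it as a starting point rather than a finished detector:

```java
// Minimal Hough transform for lines: each edge pixel votes for every
// (rho, theta) pair it could lie on, where rho = x*cos(theta) + y*sin(theta).
// Returns {bestRho, bestThetaDegrees, votes} for the strongest line.
public final class TinyHough {
    public static int[] strongestLine(boolean[][] edges) {
        int h = edges.length, w = edges[0].length;
        int maxRho = (int) Math.ceil(Math.hypot(w, h));
        int[][] acc = new int[180][2 * maxRho + 1]; // 1-degree, 1-px bins
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (!edges[y][x]) continue;
                for (int t = 0; t < 180; t++) {
                    double rad = Math.toRadians(t);
                    int rho = (int) Math.round(x * Math.cos(rad) + y * Math.sin(rad));
                    acc[t][rho + maxRho]++; // offset so negative rho fits
                }
            }
        // Pick the accumulator cell with the most votes (first one wins ties).
        int bestT = 0, bestR = 0, bestV = -1;
        for (int t = 0; t < 180; t++)
            for (int r = 0; r < acc[t].length; r++)
                if (acc[t][r] > bestV) { bestV = acc[t][r]; bestT = t; bestR = r - maxRho; }
        return new int[]{bestR, bestT, bestV};
    }
}
```

For the "longest line" requirement you would then walk along each strong (rho, theta) candidate in the edge image and measure the longest contiguous run of edge pixels, since the accumulator alone only counts votes, not contiguity.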

Android: How to detect these objects in images? (Image included.) Tried OpenCV and metaio SDK, but neither works well enough

I have been working on object detection/recognition in images captured from an Android device camera recently.
The objects I am trying to detect are all kinds of buttons that look like this:
Picture of buttons
So far I have tried OpenCV and also the metaio SDK. Results:
OpenCV was always detecting something, but gave lots of false hits, and it is too much work to collect all the pictures needed for what I have in mind. I tried three approaches with OpenCV:
Feature detection (SURF, ORB and so on): way too slow, and not enough features on my objects.
Template matching: seems to work only when the template is exactly a part of the scene image.
Training classifiers: this worked best so far, but is too much work for my goal and still gives too many false detections.
The metaio SDK worked OK when I took my reference images (the icon part of each button) out of a picture like the one shown above, printed the full image and pointed my Android device camera at the printout. But when I tried it with the real buttons (not a picture of them), almost nothing was detected anymore. The metaio documentation says that reference images need lots of features and color differences and should not consist only of white text. Well, as you can see, my reference images are exactly the opposite of what they should be. But that's just how the buttons look ;)
So, my question: does anyone have a suggestion for what else I could try in order to detect and recognize each of those buttons when I point my Android camera at them?
As a suggestion, you could try the following approach:
Class-Specific Hough Forest for Object Detection
They provide a C implementation. Compile and run it, look at the results, then replace the positive and negative training images with your own according to the following rules:
For your case you will need to define the following 3 areas:
target region (the image you provided is a good representation of a target region)
nearby working area (this area carries information about your target's relative location); an area 3-5 times the size of the target region, around the target, makes a good working area
everything outside the above can be used as negative images
Then:
Use "many" positive images (100-1000) at different viewing angles (-30 to +30 degrees) and various distances.
You will have to make assumptions about the viewing angles and distances at which your users will use the application; the stricter these are, the better performance you will get. A simple "hint" camera overlay can give people a good idea of the expected working area.
Use a negative image set a few times (3-5x) larger, including pictures of things that might appear in the camera but should not contribute any target-position information.
Do not use big images; somewhere around 100-300 px in width should be enough.
Assemble the database and modify the configuration file that the code comes with. Run the program and see whether the performance is OK for your needs.
The program will return a voting map for the object you are looking for. Apply a Gaussian blur to it, then threshold it (you will have to make another assumption for this threshold value).
The extracted mask defines the area you are looking for, and the size of the masked region gives a good estimate of the object's scale. Given this information, it is much easier to select a proper template and perform template matching.
(Also, some thoughts.) You could try a small trick: use the goodFeaturesToTrack function with the mask you obtained to get a set of locations and compare them with the corresponding locations on a template. Construct an SSD (sum of squared differences) error and minimize it for rotation, scale and translation parameters (though I'm not sure this approach will work).
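The blur-and-threshold post-processing of the voting map can be sketched like this; a simple 3x3 box blur stands in for the Gaussian here (the kernel choice and class name are my own, not from the paper's code):

```java
// Smooths a voting map with a 3x3 box blur (a stand-in for a small
// Gaussian kernel) and thresholds it into a binary mask.
public final class VoteMap {
    public static boolean[][] blurAndThreshold(double[][] votes, double threshold) {
        int h = votes.length, w = votes[0].length;
        double[][] blurred = new double[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                double sum = 0;
                int n = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < h && xx >= 0 && xx < w) {
                            sum += votes[yy][xx];
                            n++;
                        }
                    }
                blurred[y][x] = sum / n; // average over the valid neighborhood
            }
        boolean[][] mask = new boolean[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                mask[y][x] = blurred[y][x] >= threshold;
        return mask;
    }
}
```

The bounding box of the true cells in the mask then gives the location and scale estimate mentioned above.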

Image detection inside an image

I usually play a game called Burako.
It has some color playing pieces with numbers from 1-13.
After a match finishes you have to count your points.
For example:
1 == 15 points
2 == 20 points
I want to create an app that takes a picture and count the pieces for me.
So I need something that recognizes an image inside an image.
I was about to read up on OpenCV, since there is an Android port, but it feels like there should be something simpler for this.
What do you think?
I have not used the Android port, but I think it's doable under good lighting conditions.
I would obtain the minimal bounding box of each piece and rotate it accordingly so you can compare it with a model image.
Another way would be to get the contours of the numbers written on the pieces (which I guess are printed in color) and do contour matching against the numbers.
OpenCV is a big and complex framework, but it's also suitable for simple tasks like this.
