Assume that we have 9 points, each of which can be visited only once. A move such as from the upper-left corner to the bottom-right corner is also allowed. Can anyone provide an algorithm to calculate the longest path for a screen lock pattern?
You need to define a distance metric first.
Let's assume the following:
- A horizontal or vertical move has length 1 for one step or 2 for two steps.
- A diagonal move has length 1.41 for one step (square root of 2, by the Pythagorean theorem) or 2.83 for two steps (square root of 8).
- A knight's move, as in chess, has length 2.24 (square root of 5).
So now you just need to find the maximum sum of these possible steps.
If you go with the "best-first search" mentioned in the other answer, it will be troublesome, because the longest path does not always go through the locally best first option.
For the following graph:
123
456
789
One option is 519467382, which has a length of about 17.78.
So maybe it is safer to try calculating all options as mentioned, but keep in mind that, because of the symmetry, you only need to calculate the lengths for starting nodes 1, 2 and 5. The other nodes give the same results, so there is no need to calculate them.
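The brute force over starting nodes 1, 2 and 5 can be sketched in Python. This is just a sketch: it assumes the Android rule (mentioned in another answer) that a segment may not jump over a dot that has not been used yet, and numbers the dots as in the 123/456/789 grid above.

```python
import math
from itertools import permutations

# Dots 1..9 laid out as:  1 2 3 / 4 5 6 / 7 8 9
COORDS = {d: ((d - 1) % 3, (d - 1) // 3) for d in range(1, 10)}
DOT_AT = {xy: d for d, xy in COORDS.items()}

def valid(path):
    """Android rule: a segment may not jump over a dot that is still unused."""
    visited = set()
    for a, b in zip(path, path[1:]):
        visited.add(a)
        (x1, y1), (x2, y2) = COORDS[a], COORDS[b]
        if (x1 + x2) % 2 == 0 and (y1 + y2) % 2 == 0:  # a middle dot exists
            if DOT_AT[(x1 + x2) // 2, (y1 + y2) // 2] not in visited:
                return False
    return True

def length(path):
    return sum(math.dist(COORDS[a], COORDS[b]) for a, b in zip(path, path[1:]))

best, best_path = 0.0, None
for start in (1, 2, 5):  # by symmetry, every other start mirrors one of these
    for perm in permutations(d for d in range(1, 10) if d != start):
        path = (start,) + perm
        if valid(path) and length(path) > best:
            best, best_path = length(path), path

print(round(best, 4))  # 17.7793
```

The middle-dot test works because a segment passes exactly over a grid dot when both coordinate sums are even; knight's moves never do, so they are always allowed.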
It is similar to the travelling salesman problem (TSP), but you look for the longest path instead of the shortest one, and the path is not closed.
For the 9-point case I wouldn't be afraid of just trying all possible paths, since there are only 9! = 362880 of them. And this number can potentially be reduced, since the 3x3 regular grid is highly symmetrical.
Another approach (since the path is not closed) could be a best-first search from each node, with "best" meaning the longest path so far; remember the longest path found over all starting nodes. But this is just a quick thought and I have no proof it would actually work.
I calculated it manually and the longest I got is 561943728, which has a length of 17.76 (taking the distance between two nearest dots as 1 unit). If anyone can beat this, show your pattern!
Basically a TSP variant with a small modification: stepping over points that haven't been visited yet is disallowed.
The 3x3 case can easily be brute-forced. For slightly larger grids, the dynamic-programming algorithm for TSP, suitably modified, also works, in O(2^n * n^2) time.
https://repl.it/#farteryhr/AndroidLockscreen#index.js
3x3: 17.779271744364845
591643728
573461928
591827346
537281946
573829164
519283764
537649182
519467382
4x4: 45.679014611640504 (0123456789abcdef)
92d6c3875e1f4b0a
68793c2d5e1f4b0a
92d6c3875b4f1e0a
68793c2d5b4f1e0a
a1e5f0b46d2c7839
5b4a0f1e6d2c7839
a1e5f0b4687c2d39
5b4a0f1e687c2d39
a4b5f0e19783d2c6
5e1a0f4b9783d2c6
a4b5f0e192d387c6
5e1a0f4b92d387c6
9786c3d2a4b0e1f5
6d293c78a4b0e1f5
9786c3d2a1e0b4f5
6d293c78a1e0b4f5
5x5: 91.8712723085273 (0123456789abcdefghijklmno)
ci6o0j5dbea9f8g4k3m1n7l2h
cg8k4f9bdae5j6i0o1m3l7n2h
ci6o0n1h7m2l3g8k4fe5jb9ad
c8g4k3l7h2m1n6i0o5ef9bjad
cg8k4l3h7m2n1i6o0ja9fd5eb
c6i0o1n7h2m3l8g4k9aj5dfeb
c8g4k9fdbeaj5i6o0n2l3h1m7
c6i0o5jbdaef9g8k4l2n1h3m7
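The modified O(2^n * n^2) dynamic program can be sketched in Python for the 3x3 grid. The state is (set of used dots, last dot), with the no-jumping-over-unused-dots rule folded into the transitions; this is a sketch of the technique, not a transcription of the linked repl.

```python
import math

# Held-Karp-style longest-path DP for the 3x3 grid.
N = 9
coords = [(i % 3, i // 3) for i in range(N)]
dist = [[math.dist(coords[a], coords[b]) for b in range(N)] for a in range(N)]

def middle(a, b):
    """Index of the dot halfway between a and b, or -1 if there is none."""
    (x1, y1), (x2, y2) = coords[a], coords[b]
    if (x1 + x2) % 2 == 0 and (y1 + y2) % 2 == 0:
        return ((y1 + y2) // 2) * 3 + (x1 + x2) // 2
    return -1

NEG = float("-inf")
# dp[mask][last] = longest valid path visiting exactly `mask`, ending at `last`
dp = [[NEG] * N for _ in range(1 << N)]
for s in range(N):
    dp[1 << s][s] = 0.0

for mask in range(1 << N):
    for last in range(N):
        if dp[mask][last] == NEG:
            continue
        for nxt in range(N):
            if mask & (1 << nxt):
                continue
            mid = middle(last, nxt)
            if mid >= 0 and not mask & (1 << mid):
                continue  # would jump over a dot that is not used yet
            cand = dp[mask][last] + dist[last][nxt]
            if cand > dp[mask | (1 << nxt)][nxt]:
                dp[mask | (1 << nxt)][nxt] = cand

best = max(dp[(1 << N) - 1])
print(round(best, 4))  # 17.7793
```

For n = 9 this is only 512 x 9 x 9 transitions, and the same structure scales to the 4x4 and 5x5 grids where plain factorial enumeration becomes infeasible.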
The longest path would be 567 348 192,
which is about 18.428.
There are at least 8 such patterns; another is 567 381 932 (traversal length 18.428). Mirror those patterns and you get 4 patterns from each one.
I would like to detect some nutrition facts on a food package with an Android application, using OpenCV.
So far I have managed to do it with one image of a nutrition table, but of course it only works with that one.
The goal is to detect and retrieve the values of Energy, Proteins, and Glucides (carbohydrates) per 100g of product. This information is present in almost every table, which is why I focus only on those for the moment.
So I was wondering, is there a good method to do so? For the moment, I try to detect each block of text, recognize it with Tesseract, and if it matches a word I'm looking for, I get the corresponding column and line in the picture to finally get the value I want.
Is there any way to locate the words directly and get the value that fits best in the image (in terms of alignment with the "100g" column)?
Typical image: hpics.li/4231f79
Sorry if my problem is not well explained; just ask if something is not clear or if you want me to explain more of what I've done so far. Also sorry for my English.
Cheers
Just a few ideas:
1. Convert the image to HSV color space and look only for black and white regions (using the inRange function). Blobs which contain only those two colors will probably be your information (but unfortunately some other things too: the barcode, maybe some drawing or a logo).
2. Your regions should be rectangles, so if a blob is not a rectangle, discard it.
3. If a found rectangle is rotated, use an affine transform (warpAffine) to align it vertically - here I've explained how to do it. Note that the rectangle's width and height should stay the same.
4. After the affine transform your rectangle might still be rotated by 90, 180 or 270 degrees. In the example you provided there is a black region at the top - if that's true for all your images, then finding the top is quite easy: just find the black rectangle within your region. Otherwise finding the top might be harder. A quick idea that might be worth testing is to look for black pixels in each white rectangle: in most cases they are aligned to the center (not an interesting case for us) or to the left - if you find the left side of the rectangle, finding the top is obvious :) Alternatively you may look for characters which are always on the right side - %, g and mg.
If you have any problems, give us more examples and describe what you have done already - right now it's hard to say much more.
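A rough pure-NumPy stand-in for step 1 (the black-and-white filter). The thresholds and the synthetic images below are made up for illustration; in a real app you would run OpenCV's cvtColor/inRange on the camera frame instead.

```python
import numpy as np

def black_white_fraction(rgb, dark_thresh=60, light_thresh=200):
    """Fraction of pixels that are near-black or near-white
    (a stand-in for OpenCV's inRange on an HSV image)."""
    mx = rgb.max(axis=2).astype(int)
    mn = rgb.min(axis=2).astype(int)
    near_gray = (mx - mn) < 30            # low "saturation"
    dark = near_gray & (mx < dark_thresh)
    light = near_gray & (mn > light_thresh)
    return float((dark | light).mean())

def looks_like_table(rgb, min_fraction=0.9):
    """Step 1: keep a region only if it is almost entirely black-and-white."""
    return black_white_fraction(rgb) >= min_fraction

# Synthetic "nutrition table" region: black text rows on white background
region = np.full((40, 80, 3), 255, dtype=np.uint8)
region[10:12, 5:70] = 0
print(looks_like_table(region))  # True

# A saturated region (logo, drawing) is rejected
logo = np.zeros((40, 80, 3), dtype=np.uint8)
logo[..., 0] = 255  # pure red
print(looks_like_table(logo))  # False
```

The `min_fraction` threshold is a guess; tune it on real photos, since lighting and JPEG artifacts will push pixels away from pure black and white.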
I'd like to build a reverse guessing game. (The player has a number in mind, and the program tries to guess it. There are three buttons: one for a smaller tip, one for a bigger one, and one for correct.) My app generates the numbers on keypress, but the problem is that it doesn't remember which buttons I pressed. So, for example, the program guesses the number 50. I click the "Smaller" button and it generates a smaller number, for example 35. Then I click the "Bigger" button, and it can generate 80 or 90, even though I pressed "Smaller" at 50. How could I make the program "remember" the choices? Thank you :) Best regards! Sorry if I'm unclear, I'm a beginner.
This is my onClick:
public void lowerClick(View v) {
    tip = randomGenerator.nextInt((highest + 1) - lowest) + lowest;
    textTip.setText(Integer.toString(tip));
}
The only problem is how I am supposed to update the highest and lowest values, and whether I have to add anything else to the program. I hope it's clear now. :) And thank you for your cooperation and understanding.
Tip: In the future, you should post any code you already have to show effort and avoid downvotes. However, I'll interpret this post as a general algorithm question.
You'll want two variables:
High Range: The highest possible number the user can be thinking of.
Low Range: The lowest possible number the user can be thinking of.
If the program guesses 50 and the user clicks Smaller, set the high range to 50, as you know the number must be under 50 from that point on.
If the program guesses 35 next, and the user clicks Bigger, set the low range to 35.
Always guess only numbers between the low range and high range, updating them at each step. It's probably best to guess the point halfway between the two, to maximize your chances - essentially a binary search for the number.
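The question is about Android/Java, but the bookkeeping is the same in any language. Here is a minimal Python sketch of the two-bounds idea (the class and method names are made up for illustration):

```python
class Guesser:
    def __init__(self, lowest=1, highest=100):
        self.low = lowest      # lowest still-possible number
        self.high = highest    # highest still-possible number

    def guess(self):
        return (self.low + self.high) // 2  # halfway point

    def smaller(self):
        # the player's number is below the last guess
        self.high = self.guess() - 1

    def bigger(self):
        # the player's number is above the last guess
        self.low = self.guess() + 1

g = Guesser(1, 100)
print(g.guess())   # 50
g.smaller()
print(g.guess())   # 25  (never jumps back above 50)
g.bigger()
print(g.guess())   # 37
```

In the Android code this means `lowerClick` should set `highest = tip - 1` (and a `higherClick` should set `lowest = tip + 1`) before generating the next tip, so the fields persist between button presses.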
I need to measure the Euclidean distance between selected points on an image. To make this simpler, I broke the process down into steps:
Taking/Loading a photo (with a known pattern in it)
Recognizing the pattern to calibrate measure
Selecting points "from" and "to" in order to measure the distance between them
For my first iteration I will:
load a picture (half of step 1),
select the pattern manually (a rough approximation of step 2), and
select points to measure the distance between them.
I'm just beginning with OpenCV; do I need it for my first iteration?
For the steps described, OpenCV alone is too massive a library. But if you plan some kind of evolution for your project (recognition, detection, complicated image-processing tasks), I think OpenCV can speed up your development process.
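Indeed, for the first iteration plain math is enough. A minimal Python sketch of the calibrate-then-measure idea (all coordinates and the 50 mm reference size are made-up example values):

```python
import math

def pixel_distance(p, q):
    """Euclidean distance between two (x, y) points in pixels."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Calibration step: a pattern of known real-world size is located in the
# photo; here a 50 mm reference spanning 250 px (illustrative numbers).
known_mm = 50.0
measured_px = pixel_distance((100, 100), (100, 350))
mm_per_px = known_mm / measured_px

# Measuring step: distance between two user-selected points, in mm.
d_mm = pixel_distance((120, 80), (420, 480)) * mm_per_px
print(round(d_mm, 3))  # 100.0
```

This assumes the reference pattern and the measured points lie in the same plane at the same distance from the camera; correcting for perspective is where OpenCV would start to pay off later.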
Okay so I have these images:
Basically what I'm trying to do is to create a "mosaic" of about 5 to 12 hexagons, with most of them roughly centralised, and where all of the lines meet up.
For example:
I'm aware that I could probably just brute-force it, but as I'm developing for Android I need a faster, more efficient and less processor-intensive way of doing it.
Can anybody provide me with a solution, or even just point me in the right direction?
A random idea I had is to go with what Deepak said: define a class that tracks the state of each of its six edges (say, an int[] neighbor in which neighbor[0] states whether the top edge has a neighbor, neighbor[1] whether the top-right edge does, and so on going clockwise).
Then, for each hexagon on screen, convert its array to an integer via binary. Based on that integer, use a lookup table to determine which hexagon image to use and how it should be oriented/flipped, then assign that image to the hexagon object.
For instance, let's take the central hexagon with four neighbors in your first screenshot. Its array would be [1, 0, 1, 1, 0, 1] based on the scheme mentioned above. Take neighbor[0] to be the least-significant bit (2^0) and neighbor[5] to be the most-significant bit (2^5), and we have [1, 0, 1, 1, 0, 1] --> 45. Somewhere in a lookup table we would have already defined 45 to mean the 5th hexagon image, flipped horizontally*, among the seven base hexagon icons you've posted.
Yes, brute force is involved, but it's a "smarter" brute force, since you're not rotating images to see whether a hexagon will fit; it's a more efficient table look-up instead.
*or rotated 120 degrees clockwise if you prefer ;)
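A small Python sketch of the bit-packing scheme described above (the lookup-table entry is illustrative, not a real mapping of the posted images):

```python
def edge_code(neighbors):
    """Pack six edge flags into an integer, with neighbors[0] (top edge)
    as the least-significant bit, continuing clockwise."""
    code = 0
    for i, has_neighbor in enumerate(neighbors):
        if has_neighbor:
            code |= 1 << i
    return code

# Hypothetical lookup table: code -> (base image index, transform to apply)
HEX_LOOKUP = {
    45: (5, "flip_horizontal"),  # the worked example: [1, 0, 1, 1, 0, 1] -> 45
}

code = edge_code([1, 0, 1, 1, 0, 1])
print(code, HEX_LOOKUP[code])  # 45 (5, 'flip_horizontal')
```

With 6 edges there are only 64 possible codes, so the full table can be written out once by hand and every hexagon on screen resolved with a single dictionary lookup.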
Nice and tricky question. You can start by defining an object for each image with attributes that specify which edges have a line attached to them. Then, while adding the images to the layout, you can rotate each one so that an edge with a line in one image lies adjacent to an edge with a line in the neighboring image. It may be a little complicated, but I hope you can at least start with something like this.
My android application loads some markers on an overlay onto a MapView.
The markers are placed based on a dynamic list of GeoPoints.
I want to move the map center and zoom into the area with most items.
Naively, I can calculate the centroid of all the points, but I would like to exclude the points that are very far from the mass of points from the calculation.
Is there a known way to calculate this? (e.g. with probability or statistics?)
I once solved exactly the problem you describe for a real estate app I wrote a little while ago. What worked for me was:
1. Calculate a center point somehow (the centroid: average the lats and lons, or whatever).
2. Calculate the distances between this imaginary point and each of your real pins.
3. Compute the standard deviation of those distances and remove any pin whose distance is more than 2 standard deviations from the mean (or whatever threshold works for you).
4. Repeat steps 1-3 (using a new center point each time you loop) until there are no more outliers to remove at step 3.
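The iterative trimming described above can be sketched in Python. Plain (x, y) tuples stand in for the GeoPoints, and the 2-standard-deviations threshold follows the answer; only pins that are too far from the center are treated as outliers.

```python
import math

def trim_outliers(points, z_thresh=2.0):
    """Loop: centroid -> distances -> drop far outliers -> repeat."""
    pts = list(points)
    while True:
        cx = sum(x for x, _ in pts) / len(pts)  # centroid
        cy = sum(y for _, y in pts) / len(pts)
        dists = [math.hypot(x - cx, y - cy) for x, y in pts]
        mean = sum(dists) / len(dists)
        std = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
        if std == 0:
            return pts, (cx, cy)
        # keep pins within z_thresh standard deviations of the mean distance
        # (one-sided: only pins that are too FAR count as outliers)
        kept = [p for p, d in zip(pts, dists) if (d - mean) / std <= z_thresh]
        if len(kept) == len(pts):  # converged: nothing left to remove
            return pts, (cx, cy)
        pts = kept

cluster = [(x, y) for x in (0, 1, 2) for y in (0, 1, 2) if (x, y) != (2, 2)]
kept, center = trim_outliers(cluster + [(100, 100)])
print(len(kept), (100, 100) in kept)  # 8 False
```

For real map markers you would compute distances on lat/lon with a haversine formula instead of `math.hypot`, but the trimming loop is the same.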
This approach worked great for my needs, but I'm sure there are more interesting ways to solve the same problem if you look around. For example, I found this interesting CompSci paper:
http://people.scs.carleton.ca/~michiel/outliers.pdf
Good luck!