Set number of keypoints in Android OpenCV

I'm trying to run feature detection on an image using some of the built-in OpenCV feature detectors. However, I only want to detect the top/best n features present in the image (let's say 30 for this example). I already have code that will find features and then use them to identify that object in other images, but I can't work out how to restrict the number of keypoints found. I initialise the various detectors/extractors/matchers as below:
private final FeatureDetector mFeatureDetector = FeatureDetector.create(FeatureDetector.ORB);
private final DescriptorExtractor mDescriptorExtractor = DescriptorExtractor.create(DescriptorExtractor.ORB);
private final DescriptorMatcher mDescriptorMatcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMINGLUT);
I have already tried to find a solution on SO, but the only solutions I can find aren't for the Android version of OpenCV. Trying methods similar to those solutions also didn't work.
The only method I can think of that might work is just taking the first 30 features, but I don't think this will work well as they might all be clustered in one part of the image. So I was wondering if anyone knows how the top 30 features can be chosen (if indeed they can). I don't mind which feature detection algorithm the solution is for (MSER, ORB, SIFT, SURF, STAR, GFTT, etc.).
I also require that exactly 30 features be detected each time, so playing with the sensitivity until it's "about right" isn't an option.
EDIT: The reason for needing exactly 30 features is that I am going to use them to train my detector. The idea is that I will get 30 features from each of a set of training images and then use the result to find the object again in a scene. As the training images will be close-ups of the object, it doesn't matter if the features end up clustered in one part of the image.

Whilst I haven't been able to find a way of setting the number of keypoints to search for, I have been able to work out how to extract the correct number of keypoints afterwards. Doing this isn't computationally efficient, but from the comments I received I don't think doing it beforehand is possible.
My keypoints variable is:
private final MatOfKeyPoint mTargetKeypoints = new MatOfKeyPoint();
After it has been "filled" with features (it seems to stop at 500), the individual features can be extracted by converting it to an array, where each element of the array is a feature:
mTargetKeypoints.toArray()[0]; //NOTE: Not actual code, used in a print statement
When I print the above, the result is:
KeyPoint [pt={82.0, 232.0}, size=31.0, angle=267.77094, response=0.0041551706, octave=0, class_id=-1]
The individual fields can then be read with the built-in KeyPoint accessors, e.g.:
mTargetKeypoints.toArray()[0].pt.x //Printing outputs x location of feature.
mTargetKeypoints.toArray()[0].response // Printing outputs response of feature.
This SO question indicates that the response measures "how good" the keypoint is: higher values mean stronger keypoints. From here it is relatively simple to pick the best 30 features to use.
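A minimal sketch of that selection step (assuming the OpenCV 2.4 Android bindings, where KeyPoint lives in org.opencv.features2d; in OpenCV 3.x+ it moved to org.opencv.core):
import java.util.Arrays;
import java.util.Comparator;

import org.opencv.features2d.KeyPoint;

// Sort the detected keypoints by response (strongest first) and keep the top 30.
KeyPoint[] points = mTargetKeypoints.toArray();
Arrays.sort(points, new Comparator<KeyPoint>() {
    @Override
    public int compare(KeyPoint a, KeyPoint b) {
        return Float.compare(b.response, a.response); // descending by response
    }
});
mTargetKeypoints.fromArray(Arrays.copyOf(points, Math.min(30, points.length)));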

Related

Android ARCore Sceneform API. How to change textures at runtime?

The server has more than 3000 models, and each of them has several material colours. I need to load the models and textures separately and set the textures depending on the user's choice. How do I change baseColorMap, normalMap, metallicMap and roughnessMap at runtime?
After calling
modelRenderable.getMaterial().setTexture("normalMap", normalMap.get());
nothing happens.
I must be doing something wrong, but there is no information about this in the documentation.
Thank you for posting this question.
setTexture() appears not to work: unfortunately, this part of our API is still a little rough; it works, but is very easy to get wrong. We're working on a sample to illustrate how to modify material parameters (including textures) at runtime, and we will improve our error reporting in the next release.
Thousands of models with multiple permutations, how? The plan here has two parts:
The binaries used by the Android Studio plugin will be made available for use in build scripts on server platforms. This will allow you to do a server-side conversion of your assets to .sfb. We'll be releasing a blog post soon on how to do this.
The .sfa will get the ability to contain loose textures and materials not explicitly associated with geometry, and .sfa's will be able to declare data dependencies on other .sfa's. This will mean that you can author (and deliver) .sfb's that contain textures/materials (but no geometry) and .sfb's that contain geometry (but no textures/materials), and if they're both available at instantiation time it will just work.
Use this code:
CompletableFuture<Texture> futureTexture = Texture.builder()
        .setSource(this, R.drawable.shoes)
        .build();
and replace the original .thenAccept(...) callback with
/*.thenAccept(renderable -> andyRenderable = renderable)*/
.thenAcceptBoth(futureTexture, (renderable, texture) -> {
    andyRenderable = renderable;
    andyRenderable.getMaterial().setTexture("baseColor", texture);
})
and it should work.
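Pieced together, the whole loading sequence might look like the sketch below; R.raw.andy, R.drawable.shoes, the andyRenderable field and the "baseColor" parameter name are assumptions that must match your own assets and material:
// Sketch: load the model and the texture in parallel, then bind them together.
CompletableFuture<Texture> futureTexture = Texture.builder()
        .setSource(this, R.drawable.shoes)        // placeholder texture resource
        .build();

ModelRenderable.builder()
        .setSource(this, R.raw.andy)              // placeholder .sfb resource
        .build()
        .thenAcceptBoth(futureTexture, (renderable, texture) -> {
            andyRenderable = renderable;
            // The parameter name must match a texture parameter in the model's material.
            andyRenderable.getMaterial().setTexture("baseColor", texture);
        });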

Add a layer of paths on the whole map

I use the HERE Maps SDK. I have a DB file with 16,500(!) paths (lists of point coordinates). I need to draw all the paths on the map when the user activates the "show additional paths" function. But I think that if I try to fetch that many paths and add all the polyline objects to the HERE map, it will take a huge amount of time.
Please help me find the optimal solution.
I would filter your data based on the visible viewport and disable this functionality at zoom levels where it doesn't make much sense (continental or globe level).
So, let's assume your app shows the map at zoom level 16 or 17 (district level). You can retrieve the viewport as a GeoBoundingBox from the Map instance (e.g. via mapView.getMap()) with getBoundingBox().
The GeoBoundingBox makes it easy to check for collisions with your own objects, since it has several contains() methods.
So everything that collides with your viewport should be shown; everything else is ignored.
You can update whenever the map viewport changes, either by listening for OnTransformListener on the Map class or by registering for MapGesture events (get the MapGesture via getMapGesture() and listen for zooming events via addOnGestureListener()).
If the amount of data to filter is still too big, you can also think about preparing your data for more efficient filtering, e.g. partitioning it (region-based would be my first idea) so that only a subset of your data needs to be filtered.
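A minimal sketch of this viewport filter, assuming the classic HERE Android SDK (com.here.android.mpa.*); PathRecord is a hypothetical wrapper around one row of your path database:
import com.here.android.mpa.common.GeoBoundingBox;
import com.here.android.mpa.mapping.Map;

// Sketch: add only the polylines whose representative point lies in the viewport.
void showVisiblePaths(Map map, Iterable<PathRecord> allPaths) {
    GeoBoundingBox viewport = map.getBoundingBox();
    if (viewport == null) return;                     // box can be unavailable mid-transform
    for (PathRecord record : allPaths) {
        if (viewport.contains(record.getStartCoordinate())) {
            map.addMapObject(record.getPolyline());   // MapPolyline built from the DB row
        }
    }
}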
It seems that Custom Location Extension (https://developer.here.com/platform-extensions/documentation/custom-location/topics/what-is.html) can help with this case.
In short, it allows you to upload custom data to the HERE backend and query it later.

Need help to improve execution time for face detection in OpenCV 2.4.0

I'm using the cvHaarDetectObjects C function to detect faces in my Android application, but the execution time is not fast enough to process a certain number of video frames per second. So I'm thinking of commenting out code that is unnecessary for me; e.g. I've noticed a lot of branching conditions for the flags, and memory allocation statements, that can be commented out. The same can be done for the functions that are called from cvHaarDetectObjects.
Has anyone tried doing this sort of optimization before? Any help is much appreciated.
Code:
cascadeFile1 = (CvHaarClassifierCascade *) cvLoad(cascadeFace, 0, 0, 0);
CvSeq *face = cvHaarDetectObjects(img1, cascadeFile1, storage, 1.1, 3, CV_HAAR_DO_CANNY_PRUNING, cvSize(0, 0));
As a first step you should try to tune the input parameters, as these have a big impact on the performance of the classifier.
You could try to:
reduce the source image resolution to a reasonable value
increase the scaleFactor parameter in small steps (e.g. 0.1 at a time)
depending on your resolution, camera field of view and the distance of faces, define values for the min_size and max_size parameters. This can dramatically reduce the number of operations the algorithm needs to perform (see the sketch after this list).
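Since the question targets Android, here is a minimal sketch of that tuning expressed through OpenCV's Java wrapper (CascadeClassifier.detectMultiScale exposes the same parameters as cvHaarDetectObjects); grayFrame, cascadePath and all numeric values are assumptions to adapt:
import org.opencv.core.MatOfRect;
import org.opencv.core.Size;
import org.opencv.objdetect.CascadeClassifier;
import org.opencv.objdetect.Objdetect;

// Sketch: the suggested parameter tuning, via the Java wrapper.
CascadeClassifier detector = new CascadeClassifier(cascadePath); // path to the Haar cascade XML
MatOfRect faces = new MatOfRect();
detector.detectMultiScale(
        grayFrame,                              // downscaled grayscale frame
        faces,
        1.2,                                    // scaleFactor: bigger steps = fewer scales to scan
        3,                                      // minNeighbors
        Objdetect.CASCADE_DO_CANNY_PRUNING,     // same pruning flag as the C call
        new Size(40, 40),                       // minSize: ignore faces smaller than this
        new Size(200, 200));                    // maxSize: ignore faces larger than this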
Second, you could post your actual parameters and your profiling results, and people around here can surely give some more hints on what to improve.
As a side note: I don't think that commenting out branching conditions will make a noticeable difference in speed if you want to leave the algorithm functioning.

Get a (Android.Graphics) Path of a String

I cannot find any way to retrieve a Path object representing a string. Does one exist? A list of the necessary points would be enough, but I guess a path is used internally.
For example in GDI+ there is:
GraphicsPath p = new GraphicsPath();
p.AddString("string");
From there any point of the "drawn string" can be accessed and modified.
PS: I do not mean drawing a text along a path.
I've spent quite a long time solving this problem (to draw vectorized text in OpenGL) and had to dig deep into the libSkia sources. It turned out to be pretty simple:
Indeed, the canvas uses paths internally, and it converts text to vector paths using SkPaint's getTextPath() method. Luckily, this is exposed on the Java side in the public API as android.graphics.Paint.getTextPath(). After that, you can read the Path back with android.graphics.PathMeasure.
Unfortunately, you can't read back the exact drawing commands, but you can sample the curve pretty closely using something like bisection.
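A minimal sketch of that round trip (the text size, baseline and the 4 px sampling step are arbitrary example values):
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.PathMeasure;

// Convert a string to its vector outline, then sample points along that outline.
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
paint.setTextSize(96f);

Path textPath = new Path();
paint.getTextPath("string", 0, "string".length(), 0f, 96f, textPath);

PathMeasure measure = new PathMeasure(textPath, false);
float[] pos = new float[2];
do {
    float length = measure.getLength();
    for (float d = 0f; d < length; d += 4f) {
        measure.getPosTan(d, pos, null);     // pos[0], pos[1] is a point on the glyph outline
    }
} while (measure.nextContour());             // each glyph can have several contours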

Moving instances from one Set to another in Scala

In a game I need to keep tabs on which of my pooled sprites are in use. When "activating" multiple sprites at once, I want to transfer them from my passivePool to my activePool, both of which are immutable HashSets (OK, to be exact, I'll be creating new sets each time). So my basic idea is something along the lines of:
activePool ++= passivePool.take(5)
passivePool = passivePool.drop(5)
but reading the Scala documentation, I'm guessing that the 5 elements I take might be different from the 5 I then drop. That is definitely not what I want. I could also say something like:
val moved = passivePool.take(5)
activePool ++= moved
passivePool --= moved
but as this is something I need to do pretty much every frame, in real time, on a limited device (an Android phone), I guess this would be much slower, since each of the moved sprites has to be looked up in passivePool one by one.
Any clever solutions? Or am I missing something basic? Remember, efficiency is the primary concern here. And I can't use Lists instead of Sets, because I also need random-access removal of sprites from activePool when sprites are destroyed in the game.
There's nothing like benchmarking for getting answers to these questions. Let's take 100 sets of size 1000 and drop them 5 at a time until they're empty, and see how long it takes.
passivePool.take(5); passivePool.drop(5) // 2.5 s
passivePool.splitAt(5) // 2.4 s
val a = passivePool.take(5); passivePool --= a // 0.042 s
repeat(5){ val a = passivePool.head; passivePool -= a } // 0.020 s
What is going on?
The reason things work this way is that immutable.HashSet is built as a hash trie with optimized (effectively O(1)) add and remove operations, but many of the other methods are not re-implemented; instead, they are inherited from collections that don't support add/remove and therefore can't get the efficient methods for free. They therefore mostly rebuild the entire hash set from scratch. Unless your hash set has only a handful of elements in it, this is a bad idea. (In contrast to the 50-100x slowdown with sets of size 1000, a set of size 100 has "only" a 6-10x slowdown....)
So, bottom line: until the library is improved, do it the "inefficient" way. You'll be vastly faster.
I think there may be some mileage in using splitAt here, which will give you back both the five sprites to move and the trimmed pool in a single method invocation:
val (moved, newPassivePool) = passivePool.splitAt(5)
activePool ++= moved
passivePool = newPassivePool
Bonus points if you can assign directly back to passivePool on the first line, though I don't think it's possible in a short example where you're defining the new variable moved as well.
