I am using the Vuforia SDK to build an Android application and am curious how the marker tracking works. Does the app convert the video frame into byte codes and then compare these against the .dat file generated when creating the marker? Also, where is this code found in the Vuforia sample app? Is it in the C++? Thanks.
Well, you don't see the code for recognition and tracking because it is the intellectual property of Qualcomm and is not meant to be revealed. Vuforia is not an open-source library.
Vuforia first detects "feature points" in your target image (via the web-based target management system) and then uses that data to compare the features of the target image with those in the incoming camera frame.
Google "natural feature detection and tracking", which falls under the computer vision field, and you will find plenty of interesting material.
No, the detection and tracking code is located in libQCAR.so.
But the question "how does it work" is too complex to answer here. If you want to become familiar with object detection and tracking, start by learning methods such as MSER, SURF, SIFT, ferns and others.
Vuforia uses an edge-detection technique: the more vertices or lines a high-contrast image contains, the higher Vuforia rates it as a target. So we can say that their algorithm is somewhat similar to SIFT.
I have been trying to integrate ARToolkit Marker Object tracking into a Tango Application.
So far I have created a build so that a tango app can access and use the ARToolkit Native Library or the ARToolkit Unity wrappers.
However, they both seem to require exclusive access to the camera in their default configurations.
How could you feed the same Android video feed to both libraries?
Could you create a dummy camera device which doubles out the feed?
Could you take the tango feed as normal, and then resend it into ARToolkit with a special VideoConf
[edit]
ARToolkit uses the older Camera1 API: it takes an onPreviewFrame() callback and passes that byte[] data to its own native library call, which does the actual work.
Along the lines of the second bullet point, could Tango provide a copy of each frame's raw camera data using something like ITangoVideoOverlay?
(ARToolkit's NDK functionality seems to expect NV21, but can also accept other formats.)
If that data was extractable from tango, I believe the ARToolkit NDK functionality can be used without actually owning the camera.
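For reference, the Camera1-style hand-off described above looks roughly like the sketch below; the native method name is a placeholder standing in for ARToolkit's actual JNI entry point, not its real API.

```java
import android.graphics.ImageFormat;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import java.io.IOException;

public class PreviewForwarder {
    // Placeholder for the JNI call that would receive each frame (hypothetical name).
    private native void nativeProcessFrame(byte[] nv21, int width, int height);

    @SuppressWarnings("deprecation") // Camera1 API, as ARToolkit expects
    public Camera start() throws IOException {
        Camera camera = Camera.open();
        Camera.Parameters params = camera.getParameters();
        params.setPreviewFormat(ImageFormat.NV21); // NV21, the format mentioned above
        camera.setParameters(params);

        // Dummy texture so the preview can run; a real app would use its preview surface.
        camera.setPreviewTexture(new SurfaceTexture(0));

        camera.setPreviewCallback(new Camera.PreviewCallback() {
            @Override
            public void onPreviewFrame(byte[] data, Camera cam) {
                Camera.Size size = cam.getParameters().getPreviewSize();
                // Hand the raw NV21 bytes to native code for tracking.
                nativeProcessFrame(data, size.width, size.height);
            }
        });
        camera.startPreview();
        return camera;
    }
}
```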
I am afraid that neither of the methods you mentioned would work. Tango has exclusive access to the camera, and I believe ARToolkit also occupies the camera exclusively through the Camera2 API. With the current Tango SDK, I think the workaround would be to use ARToolkit for camera rendering and Tango for pose tracking.
However, this exposes a timing problem: Tango and ARToolkit have different timestamps. The solution is to take a timestamp offset once, when the application starts, and then constantly apply that offset when querying a pose from Tango by timestamp.
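A minimal sketch of that offset bookkeeping, assuming the Tango Java client API (Tango.getPoseAtTime with a TangoCoordinateFramePair); how you capture the two start timestamps depends on how ARToolkit exposes its frame times.

```java
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPoseData;

public class ClockAlignedPoses {
    private final Tango tango;
    private final double offset; // tangoStartTime - arToolkitStartTime, captured once at startup
    private final TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE);

    public ClockAlignedPoses(Tango tango, double tangoStartTime, double arToolkitStartTime) {
        this.tango = tango;
        this.offset = tangoStartTime - arToolkitStartTime;
    }

    // Translate an ARToolkit frame timestamp into Tango's clock and query the pose.
    public TangoPoseData poseForArFrame(double arFrameTimestamp) {
        return tango.getPoseAtTime(arFrameTimestamp + offset, framePair);
    }
}
```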
This blog shows an example integrating the two.
It also links to example source code, but I haven't tidied it up at all after testing - proceed with caution!
You cannot feed the same camera source to both libraries (first bullet point), but you can forward the camera feed from Tango (ITangoVideoOverlay) into ARToolkit (AcceptVideoImage) (second bullet point).
This is not ideal, because it is fairly inefficient to send the data to Java from C#. The Phab 2 Pro has to downsample the video by a factor of four to achieve a decent frame rate.
A better answer would replace the AndroidJavaClass calls with pipes/sockets.
Also there are many little problems - it's a pretty hacky workaround.
I am new to OpenCV, image processing and native C/C++, and I would like some guidance on where I should focus in order to complete my task. I am developing an Android application that can recognize bent pins and circle/square them, the way the face-detection sample in OpenCV draws a square around a human face once it is detected. The bent pins can be defective in various different forms. I am using Eclipse ADT. I have downloaded the face recognition sample for OpenCV on Android and am analyzing it; from what I have discovered so far, it relies on an XML file that was trained beforehand and is used by the system for detection. Now my questions are:
How can I train and generate the xml file?
What software should I use in order to train and generate the xml file?
What type of images do I need to retrieve in order to train the system/image requirements (eg, image of the bent pins from multiple angles)?
What is the best algorithm to achieve this?
According to my research, face detection uses Haar-like features. What is the difference between a Haar-like feature, a cascade classifier and an artificial neural network? I am confused about the difference. Are they the same thing?
Thank you
1) 2) The PC version of OpenCV comes with a tool named opencv_traincascade; it is used to generate the Haar/HOG/LBP XML cascades offline. (No, you don't run that kind of task on your smartphone.)
3) 4) You need multiple (hundreds of) images of your object, and even more negative (non-pin / background) images.
5) Haar cascades train on simple edge and line features.
So here's the bummer: I seriously doubt that your 'bent pins' have enough edge features for this to work.
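For completeness, if you do go the cascade route, here is a minimal sketch of how the XML produced by opencv_traincascade is consumed on Android with OpenCV's Java bindings. The file path and input Mats are placeholders; on OpenCV 2.4.x use Core.rectangle instead of Imgproc.rectangle.

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.core.Scalar;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class PinDetector {
    // Path to the cascade trained offline with opencv_traincascade (placeholder).
    private final CascadeClassifier cascade =
            new CascadeClassifier("/sdcard/bent_pin_cascade.xml");

    // gray: grayscale camera frame; rgba: the frame you draw on.
    public void detectAndMark(Mat gray, Mat rgba) {
        MatOfRect detections = new MatOfRect();
        cascade.detectMultiScale(gray, detections);
        for (Rect r : detections.toArray()) {
            // Draw a green box around each detection, like the face sample does.
            Imgproc.rectangle(rgba, r.tl(), r.br(), new Scalar(0, 255, 0, 255), 3);
        }
    }
}
```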
Hi, I'm trying to create an application related to augmented reality (AR) and was able to configure my application with the Metaio SDK and the OpenCV library successfully in two separate applications.
But the thing is, I want to use both the OpenCV and Metaio libraries together in one application, so can anyone help me with this integration?
In my single application I want to use OpenCV for markerless detection and Metaio for 3D model rendering.
Metaio: http://www.metaio.com/
OpenCV: http://opencv.org/
[edit]
I'm using OpenCV to detect shapes in a camera image and want to display 3D objects rendered by Metaio on those shapes, similar to marker tracking.
Metaio and OpenCV each have their own camera view; I have disabled OpenCV's camera view.
I want to convert an ImageStruct object received in the onNewCameraFrame() method into an OpenCV Mat on Android. For this, I have registered a MetaioSDKCallback to continuously receive camera frames.
But the onSDKReady() and onNewCameraFrame() methods of this callback are not being called, even though I have added metaioSDK.requestCameraImage().
This is where I'm stuck.
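For what it's worth, once a frame actually arrives, the conversion itself is usually just a buffer copy. This is only a sketch: the ImageStruct class/package and accessor names (getBuffer(), getWidth(), getHeight()) are assumptions about the Metaio API and may differ in your SDK version, and the frame is assumed to be tightly packed 3-channel data.

```java
import java.nio.ByteBuffer;
import org.opencv.core.CvType;
import org.opencv.core.Mat;

public class FrameConverter {
    // Class and accessor names are assumptions; check your Metaio SDK version.
    public static Mat toMat(com.metaio.sdk.jni.ImageStruct frame) {
        ByteBuffer buffer = frame.getBuffer();
        byte[] pixels = new byte[buffer.remaining()];
        buffer.get(pixels);                                   // copy raw bytes out of the SDK buffer

        // Assumes a tightly packed 3-channel (e.g. RGB) frame.
        Mat mat = new Mat(frame.getHeight(), frame.getWidth(), CvType.CV_8UC3);
        mat.put(0, 0, pixels);                                // fill the Mat row-major
        return mat;
    }
}
```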
I suggest you integrate the OpenCV4Android SDK and look at the samples that come with it; they are very good examples of how to use the camera easily.
For your objective, the face-detection example is probably a good one to check.
Here is the tutorial that helps you install and configure the OpenCV SDK.
For AR I can't help you much, but have a look at this discussion; it could be helpful.
Since I began programming, this forum has given me everything I have ever needed, and I want to thank you for that. Thanks for everything!
Now I'm here to ask about a problem I have not yet found addressed here.
I'm working on an Android application. In my app, I have to display an Android panorama that is stored on a remote server. I have two problems with this and hope you can help:
1. I take a panorama with my phone, and when I connect the phone to a PC to copy the panorama, it turns into a plain JPEG image. I don't know how or why!
2. I have no idea how to view a panorama on Android. I have searched Google and the Android forums and am still at my starting point, and I have to present my application next week!
So I'm counting on you to get me out of this hole.
Thanks.
There's a library that does that with spherical, cubic and cylindrical panoramic images: PanoramaGL.
And there's the utility in the Google Play Services libs, but only for spherical images; refer to this:
Android Support for Photo Sphere
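A minimal sketch of the Play Services route linked above, assuming you have already downloaded the panorama JPEG to local storage (the file name is a placeholder):

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import java.io.File;

import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.common.api.ResultCallback;
import com.google.android.gms.panorama.Panorama;
import com.google.android.gms.panorama.PanoramaApi;

public class PanoViewerActivity extends Activity implements GoogleApiClient.ConnectionCallbacks {
    private GoogleApiClient client;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        client = new GoogleApiClient.Builder(this)
                .addApi(Panorama.API)
                .addConnectionCallbacks(this)
                .build();
        client.connect();
    }

    @Override
    public void onConnected(Bundle connectionHint) {
        // Placeholder path: the panorama you downloaded from your server.
        Uri panoUri = Uri.fromFile(new File(getFilesDir(), "pano.jpg"));
        Panorama.PanoramaApi.loadPanoramaInfo(client, panoUri).setResultCallback(
                new ResultCallback<PanoramaApi.PanoramaResult>() {
                    @Override
                    public void onResult(PanoramaApi.PanoramaResult result) {
                        Intent viewer = result.getViewerIntent();
                        if (result.getStatus().isSuccess() && viewer != null) {
                            startActivity(viewer); // launches the built-in Photo Sphere viewer
                        }
                        // A null intent means the image carries no Photo Sphere metadata.
                    }
                });
    }

    @Override
    public void onConnectionSuspended(int cause) { }
}
```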
Is there an Android framework that can be used in an app to recognize a 3D image and send the user to a video? This should fall under augmented reality, but so far everything I have seen uses a 2D image and such to produce a 3D image on the screen... My situation is the reverse of that. I tried using Vuforia but I couldn't get the SDK to work, and Unity needs an Android license. DroidAr doesn't seem to fit the bill either. Are there any tutorials on this matter? Thanks.
I have not used the feature, but Metaio has a 3D object, "markerless" tracking feature as well as the ability to do video playback within the SDK. I am sure that if you would rather simply redirect to a video (YouTube or similar), this would not be exceptionally difficult.
http://www.metaio.com/software/mobile-sdk/features/
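For the "just redirect to a video" route, the hand-off itself is a one-liner once your tracking callback fires; the URL below is a placeholder.

```java
import android.app.Activity;
import android.content.Intent;
import android.net.Uri;

public final class VideoLauncher {
    // Call this from whatever recognition callback your AR SDK provides.
    public static void openVideo(Activity activity) {
        Intent intent = new Intent(Intent.ACTION_VIEW,
                Uri.parse("https://www.youtube.com/watch?v=VIDEO_ID")); // placeholder video URL
        activity.startActivity(intent); // hands off to the YouTube app or the browser
    }
}
```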
Metaio's mobile SDK is similar to Vuforia, so if you had trouble with that you might have difficulty getting it up and running. If your programming skills aren't up to it, you might consider looking into Junaio, an AR browser made by Metaio. With Junaio you simply create a content channel rather than having to build the app from scratch. Again, I have not actually tried this feature yet, but the documentation seems to indicate that 3D tracking is available in Junaio:
http://www.junaio.com/develop/quickstart/3d-tracking-and-junaio/
Good luck!