I have been trying to integrate ARToolkit marker object tracking into a Tango application.
So far I have created a build so that a Tango app can access and use the ARToolkit native library or the ARToolkit Unity wrappers.
However, they both seem to require exclusive access to the camera in their default configurations.
How could you feed the same Android video feed to both libraries?
- Could you create a dummy camera device which duplicates the feed out to both libraries?
- Could you take the Tango feed as normal, and then resend it into ARToolkit with a special video configuration?
[edit]
ARToolkit uses the older Camera1 API: it takes an onPreviewFrame() callback and passes that byte[] data to its own native library call, which does the actual work.
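For reference, the flow on the ARToolkit side looks roughly like the sketch below; the native method name is just a placeholder for illustration, not ARToolkit's actual entry point.

```java
import android.hardware.Camera;

// Rough sketch of ARToolkit's Java-side flow (Camera1 API). The native
// method below is a placeholder, not ARToolkit's real entry point.
public class MarkerFeed implements Camera.PreviewCallback {

    // Hypothetical native call that hands an NV21 frame to the tracking code.
    private static native void nativeProcessFrame(byte[] nv21, int width, int height);

    private Camera camera;

    public void start() {
        camera = Camera.open();              // exclusive access to the camera
        camera.setPreviewCallback(this);     // deliver NV21 frames to onPreviewFrame()
        camera.startPreview();
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera cam) {
        Camera.Size size = cam.getParameters().getPreviewSize();
        // The byte[] goes straight into native code, which does the real work.
        nativeProcessFrame(data, size.width, size.height);
    }
}
```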
Along the lines of the second bullet point, could Tango provide a copy of each frame's raw camera data using something like ITangoVideoOverlay?
(ARToolkit's NDK functionality seems to expect NV21, but can also accept other formats.)
If that data were extractable from Tango, I believe the ARToolkit NDK functionality could be used without actually owning the camera.
I am afraid that neither of the methods you mentioned would work. Tango has exclusive access to the camera, and I believe ARToolkit also occupies the camera exclusively through the Camera2 API. With the current Tango SDK, I think the workaround would be to use ARToolkit for camera rendering and Tango for pose tracking.
However, this exposes a time-stamping problem: Tango and ARToolkit have different timestamps. The solution is to take a timestamp offset once at the very beginning, when the application starts, and then constantly apply that offset when querying a pose from Tango by timestamp.
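As a minimal sketch of that offset idea (the clock sources here are placeholders, not actual Tango or ARToolkit calls):

```java
// Minimal sketch of the timestamp-offset idea. The two startup timestamps are
// whatever clocks Tango and ARToolkit expose in your setup; the names here
// are placeholders.
public class ClockSync {
    private final double offset;   // tangoTime - artoolkitTime, captured once at startup

    public ClockSync(double tangoTimeAtStart, double artoolkitTimeAtStart) {
        this.offset = tangoTimeAtStart - artoolkitTimeAtStart;
    }

    /** Convert an ARToolkit frame timestamp into the Tango time base
     *  before querying Tango for the pose at that moment. */
    public double toTangoTime(double artoolkitTimestamp) {
        return artoolkitTimestamp + offset;
    }
}
```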
This blog post shows an example of integrating the two.
It also links to example source code, but I haven't tidied it up at all after testing - proceed with caution!
You cannot feed the same camera source to both libraries (first bullet point), but you can forward the camera feed from Tango (ITangoVideoOverlay) into ARToolkit (AcceptVideoImage) (second bullet point).
This is not ideal, because it is fairly inefficient to send the data from C# to Java. On the Phab 2 Pro the video has to be downsampled by a factor of 4 to achieve a decent framerate.
A better answer would replace the AndroidJavaClass calls with pipes/sockets.
There are also many little problems; it's a pretty hacky workaround.
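For reference, a rough sketch of the Java side that the Unity C# script can call into; the ARToolkit entry point below is a placeholder for whatever frame-accepting call your ARToolkit build exposes.

```java
// Rough sketch of a Java-side bridge that a Unity C# script could invoke
// through AndroidJavaClass. The native call below is a placeholder; use
// whatever frame-accepting function your ARToolkit build exposes.
public final class TangoFrameBridge {

    // Hypothetical native hook into ARToolkit's tracking (e.g. via JNI).
    private static native void acceptVideoImage(byte[] nv21, int width, int height);

    /** Called from Unity with the (possibly downsampled) NV21 frame copied
     *  out of Tango's ITangoVideoOverlay callback. */
    public static void pushFrame(byte[] nv21, int width, int height) {
        acceptVideoImage(nv21, width, height);
    }
}
```

On the Unity side this gets invoked through AndroidJavaClass, which is exactly the C#-to-Java marshalling cost described above.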
I am a complete beginner with ARCore and I would like to ask if it is possible to display 3D objects using 3D coordinates with the user as the origin. For example, (x, y, z) values of (0, 3, 0) would display the 3D object to the right of the user.
First of all: there is no native implementation in ARCore for that!
But you're not completely lost. There are some very useful libraries for Android, like appoly. I've used this fork from Eric Neidhardt.
For iOS I only know this library based on ARKit.
You can also make a Unity3D app in C# and use AR Foundation and the Vuforia plug-in. That way is platform-independent, but IMHO testing is a pain.
As you can see, there are a few options out there, but be aware that none of them is really accurate!
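If you want to try it with the ARCore Java API directly anyway, the basic idea is to compose your offset with the current camera pose and create an anchor there. This is only a sketch, assuming session setup and rendering are handled elsewhere:

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.Pose;
import com.google.ar.core.Session;

// Sketch: anchor content at an offset given in the user's (camera's) local
// frame, e.g. 3 m to the right of the user.
public class RelativePlacement {
    public static Anchor placeRelativeToUser(Session session, Frame frame,
                                             float right, float up, float forward) {
        // Display-oriented camera pose: +X right, +Y up, -Z forward.
        Pose cameraPose = frame.getCamera().getDisplayOrientedPose();
        Pose offset = Pose.makeTranslation(right, up, -forward);
        Pose worldPose = cameraPose.compose(offset);
        return session.createAnchor(worldPose);
    }
}
```

Note that this places the object relative to where the user is at that moment; the anchor stays fixed in world space afterwards.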
I am using the Vuforia SDK to build an Android application and am curious how the marker tracking works. Does the app convert the video frame into byte codes and then compare these against the .dat file generated when creating the marker? Also, where is this code found in the Vuforia sample app? Is it in the C++? Thanks.
Well, you don't see the code for recognition and tracking because it is the intellectual property of Qualcomm and is not revealed. Vuforia is not an open-source library.
Vuforia first detects "feature points" in your target image (via the web-based target management system) and then uses that data to compare the features in the target image against those in the incoming camera frame.
Google "natural feature detection and tracking", which falls under the computer vision area, and you will find interesting material.
No, the detection and tracking code is located in libQCAR.so.
But the question of how it works is too complex to answer here. If you want to become familiar with object detection and tracking, start by learning about methods such as MSER, SURF, SIFT, ferns, and others.
Vuforia uses an edge-detection technique. If there are more vertices or lines in a high-contrast image, then it is rated as a good target by Vuforia. So we can say that their algorithm is somewhat similar to SIFT.
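None of this is Vuforia's actual code, but to illustrate what natural-feature detection and matching looks like in practice, here is a small sketch using OpenCV's Java bindings with ORB (a free alternative to SIFT/SURF):

```java
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;

// Illustration only: detect feature points in a target image and a camera
// frame, then match them. Vuforia's real pipeline is proprietary.
public class FeatureMatchDemo {
    public static MatOfDMatch matchTargetToFrame(Mat targetGray, Mat frameGray) {
        ORB orb = ORB.create();

        MatOfKeyPoint targetKp = new MatOfKeyPoint();
        MatOfKeyPoint frameKp = new MatOfKeyPoint();
        Mat targetDesc = new Mat();
        Mat frameDesc = new Mat();

        // "Feature points" roughly correspond to what the target manager extracts offline.
        orb.detectAndCompute(targetGray, new Mat(), targetKp, targetDesc);
        orb.detectAndCompute(frameGray, new Mat(), frameKp, frameDesc);

        // Compare the target's features against the current camera frame.
        DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(targetDesc, frameDesc, matches);
        return matches;
    }
}
```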
Is there an Android framework that can be used in an app to recognize a 3D image and send the user to a video? This should fall under augmented reality, but so far everything I have viewed uses 2D images and such to produce a 3D image on the screen... My situation is backwards from that. I tried using Vuforia but I couldn't get the SDK to work, and Unity needs an Android license. DroidAr doesn't seem to fit the bill either. Are there any tutorials on this matter? Thanks.
I have not used the feature myself, but Metaio has a "markerless" 3D object tracking feature as well as the ability to do video playback within the SDK. If you would rather simply redirect to a video (YouTube or similar), I am sure this would not be exceptionally difficult.
http://www.metaio.com/software/mobile-sdk/features/
Metaio's mobile SDK is similar to Vuforia, so if you had trouble with that, you might have difficulty getting it up and running. If your programming skills aren't up to that, you might consider looking into Junaio, an AR browser made by Metaio. With Junaio you simply create a content channel rather than having to build the app from scratch. Again, I have not actually tried this feature yet, but the documentation seems to indicate that 3D tracking is available in Junaio:
http://www.junaio.com/develop/quickstart/3d-tracking-and-junaio/
Good luck!
I am trying to create a small Android app with reasonably simple AR functionality - load a few known markers and render known 2D/3D objects on top of the video stream when those are detected. I would appreciate any pointers to a library for doing this, or at least a decent example of doing it right.
Here are some leads I have looked into:
AndAR - https://code.google.com/p/andar/ - This starts out great, and the AndAR app works well enough to render one cube on a single pattern over a real-time video stream, but it looks like the project is effectively abandoned, and to extend it I'll have to go heavily into OpenGL land - not impossible, but very undesirable. The follow-up AndAR Model Viewer project, which supposedly lets you load custom .obj files, doesn't seem to recognize the marker at all. Once again, this looks very much like abandonware, and it could have been so much more.
Processing - The previously mentioned NyARToolkit works great with Processing from a PC (see the example usage), and handles the 'here's a pattern, here's an object, just render it there' functionality perfectly, but it all breaks down on Android - GStreamer for Android is at a very early, hacky stage, and in general video functionality seems to be a rather low priority for the Android Processing project - right now import processing.video.*; just fails.
Layar, Wikitude, etc. all seem to focus more on interactivity, location and whatnot, which I absolutely don't need, and somehow miss this basic usage.
Where am I going wrong? I would be happy to code some part of the video capture/detection/rendering myself - I don't need a drag-and-drop library - but the sample code from AndAR just fills me with dread.
I suggest taking a look at the Vuforia SDK (formerly QCAR) by Qualcomm plus jPCT-AE as the 3D engine. They work very well together, and no raw OpenGL is needed. However, you need some C/C++ knowledge, since Vuforia relies on the NDK to some degree.
It basically boils down to getting the marker pose from Vuforia via a simple JNI function (the SDK contains fully functional and extensive sample code) and using that to place the 3D objects with jPCT (the easiest way is to set the pose as the rotation matrix of the object, which is a bit hacky but produces quick results).
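A rough sketch of the jPCT-AE side, assuming your JNI function returns the marker pose as a row-major float[16] (the actual native function name and matrix convention depend on your own Vuforia-side code):

```java
import com.threed.jpct.Matrix;
import com.threed.jpct.Object3D;
import com.threed.jpct.SimpleVector;

// Sketch: apply a marker pose obtained from Vuforia (via JNI) to a jPCT-AE object.
// getMarkerPose() is a placeholder for your own native function; the matrix
// layout and handedness must match whatever your JNI code returns.
public class MarkerPoseApplier {

    private static native float[] getMarkerPose();  // hypothetical JNI call, 4x4 pose as float[16]

    public void update(Object3D model) {
        float[] pose = getMarkerPose();
        if (pose == null) return;   // marker not visible this frame

        Matrix m = new Matrix();
        m.setDump(pose);            // feed the 4x4 pose into a jPCT matrix

        // The "hacky but quick" trick from above: use the pose as the object's
        // rotation matrix and translate it to the pose's position.
        model.clearTranslation();
        model.translate(new SimpleVector(pose[12], pose[13], pose[14]));
        model.setRotationMatrix(m);
    }
}
```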
jPCT-AE supports 3D-model loading for some common formats. The API docs are good, but you may need to consult the forums for sample code.
I'm looking to do basic eye tracking in Android using the OpenCV API. I've found that there seem to be two ways to use OpenCV on Android: either by using their C++ wrapper or by using the JavaCV API. I'm willing to do either, but I'm looking for some ideas or sample code for how I would track basic eye movement with either platform. I'm leaning toward the JavaCV API because it looks easier to use, but I could really use some sort of tutorial on the basics of using it with Android.
Assuming you have already looked into JNI (the Java Native Interface), JavaCV is essentially the same thing as OpenCV. As for eye tracking, you will need to get the live video feed from the camera and locate the participant's eyes in the frames using template matching and blink detection.
You will just have to make your View implement Camera.PreviewCallback in order to get hold of the camera feed.
The OpenCV site provides some sample code on eye tracking that will help you track the eyes.
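As a starting point, here is a rough sketch (OpenCV Java bindings) that converts each Camera1 preview frame to grayscale and runs a Haar eye cascade on it. The cascade file path is an assumption; you would normally copy haarcascade_eye.xml from the OpenCV distribution into your app and load it from there, and OpenCV must be initialized (e.g. via OpenCVLoader) before this runs.

```java
import android.hardware.Camera;

import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

// Sketch: detect eyes in each NV21 preview frame with a Haar cascade.
// Assumes OpenCV has already been initialized and the cascade file exists
// at the path below (an assumption for illustration).
public class EyePreview implements Camera.PreviewCallback {

    private final CascadeClassifier eyeCascade =
            new CascadeClassifier("/sdcard/haarcascade_eye.xml");

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        Camera.Size size = camera.getParameters().getPreviewSize();

        // Wrap the NV21 buffer and convert it to a grayscale Mat.
        Mat yuv = new Mat(size.height + size.height / 2, size.width, CvType.CV_8UC1);
        yuv.put(0, 0, data);
        Mat gray = new Mat();
        Imgproc.cvtColor(yuv, gray, Imgproc.COLOR_YUV2GRAY_NV21);

        // Each detected rectangle is a candidate eye region to track further.
        MatOfRect eyes = new MatOfRect();
        eyeCascade.detectMultiScale(gray, eyes);
        // ... track eyes.toArray() between frames, detect blinks, etc.
    }
}
```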
If you want to see an example of OpenCV on Android, take a look at this open-source code.
Hope it helps