Hi, I'm trying to create an augmented reality (AR) application. I was able to configure the Metaio SDK and the OpenCV library successfully in two separate applications, but now I want to use both libraries together in one application. Can anyone help me with the integration?
In this single application I want to use OpenCV for markerless detection and Metaio for 3D model rendering.
Metaio: http://www.metaio.com/
OpenCV: http://opencv.org/
I'm using OpenCV to detect shapes in the camera image, and I want to display 3D objects rendered by Metaio on those shapes, similar to marker tracking.
Metaio and OpenCV each have their own camera view; I have disabled OpenCV's camera view.
I want to convert the ImageStruct object received in the onNewCameraFrame() method into an OpenCV Mat on Android. For this, I have registered a MetaioSDKCallback to continuously receive camera frames.
But the onSDKReady() and onNewCameraFrame() methods of this callback are never called, even though I have added metaioSDK.requestCameraImage().
This is where I'm stuck.
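For what it's worth, here is a minimal sketch of how that conversion could look. The class and method names follow the (now discontinued) Metaio 5.x API as I remember it, so double-check them against your SDK version; it also assumes an RGBA camera image and that ImageStruct exposes getBuffer(), getWidth() and getHeight(). Note that requestCameraImage() typically delivers only a single frame, so it has to be re-requested from the callback to get a continuous stream:

```java
import java.nio.ByteBuffer;

import com.metaio.sdk.jni.IMetaioSDKAndroid;
import com.metaio.sdk.jni.IMetaioSDKCallback;
import com.metaio.sdk.jni.ImageStruct;

import org.opencv.core.CvType;
import org.opencv.core.Mat;

// Sketch only: API names follow the discontinued Metaio 5.x SDK.
public class CameraFrameCallback extends IMetaioSDKCallback {

    private final IMetaioSDKAndroid metaioSDK;

    public CameraFrameCallback(IMetaioSDKAndroid sdk) {
        this.metaioSDK = sdk;
    }

    @Override
    public void onSDKReady() {
        // Ask for the first camera frame; registering the callback alone
        // does not start frame delivery.
        metaioSDK.requestCameraImage();
    }

    @Override
    public void onNewCameraFrame(ImageStruct frame) {
        int width = frame.getWidth();
        int height = frame.getHeight();

        // Copy the raw pixel buffer into an OpenCV Mat (4 channels assumed
        // here; check frame.getColorFormat() for the real layout).
        ByteBuffer buffer = frame.getBuffer();
        byte[] pixels = new byte[width * height * 4];
        buffer.get(pixels);
        Mat mat = new Mat(height, width, CvType.CV_8UC4);
        mat.put(0, 0, pixels);

        // ... run the OpenCV shape detection on 'mat' here ...

        // requestCameraImage() delivers only a single frame, so request
        // the next one to keep the stream going.
        metaioSDK.requestCameraImage();
    }
}
```

Register it after the SDK has been created, e.g. metaioSDK.registerCallback(new CameraFrameCallback(metaioSDK)). If onSDKReady() still isn't called, it's worth checking that the registration happens after metaioSDK is initialised and on the same thread the SDK runs on.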
I suggest you integrate the OpenCV4Android SDK and look at the samples that come with it; they are very good examples of how to use the camera easily. A minimal sketch is below.
For your objective, the face-detection example is probably a good one to check.
Here is the tutorial that helps you install and configure the OpenCV SDK.
For AR I can't help you much, but have a look at this discussion; it could be helpful.
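As a starting point, a minimal OpenCV4Android camera activity looks roughly like this, based on the CameraBridgeViewBase API shipped with the samples (the layout and view ids are placeholders for a layout containing a JavaCameraView):

```java
import org.opencv.android.CameraBridgeViewBase;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewFrame;
import org.opencv.android.CameraBridgeViewBase.CvCameraViewListener2;
import org.opencv.android.OpenCVLoader;
import org.opencv.core.Mat;

import android.app.Activity;
import android.os.Bundle;

public class MainActivity extends Activity implements CvCameraViewListener2 {

    private CameraBridgeViewBase cameraView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);          // layout with a JavaCameraView
        cameraView = (CameraBridgeViewBase) findViewById(R.id.camera_view);
        cameraView.setCvCameraViewListener(this);
    }

    @Override
    protected void onResume() {
        super.onResume();
        if (OpenCVLoader.initDebug()) {                  // statically linked OpenCV
            cameraView.enableView();
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        if (cameraView != null) cameraView.disableView();
    }

    @Override
    public void onCameraViewStarted(int width, int height) { }

    @Override
    public void onCameraViewStopped() { }

    @Override
    public Mat onCameraFrame(CvCameraViewFrame inputFrame) {
        // Each preview frame arrives here as a Mat; process it and return
        // whatever should be drawn on screen.
        return inputFrame.rgba();
    }
}
```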
I am currently trying to make a drawing app for Android, and I already have the basic things up and running. Now I want to add 3D models so that users can select a model and paint on it, and even make models in the app itself. I found Sceneform, but it is primarily used for AR and I don't want to use AR. Is there a way to achieve this with Sceneform, or should I use some other framework? Any help would be welcome!
Sceneform supports 3D assets in the following formats:
OBJ
glTF (animations not supported)
FBX, with or without animations.
Please read:
https://developers.google.com/sceneform/develop/getting-started#import-sceneform-plugin
Sample Project
https://github.com/the3deers/android-3D-model-viewer
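Since you don't want AR, note that Sceneform also ships a plain SceneView that renders without ARCore. A rough sketch of loading a model into it, assuming the Sceneform 1.x API (the asset name, layout, and view id here are placeholders, and the .sfb file comes from the import plugin linked above):

```java
import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;

import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.SceneView;
import com.google.ar.sceneform.rendering.ModelRenderable;

public class ModelViewerActivity extends Activity {

    private SceneView sceneView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_model_viewer);   // layout containing a SceneView
        sceneView = (SceneView) findViewById(R.id.scene_view);

        // Load an asset converted to .sfb by the Sceneform import plugin.
        ModelRenderable.builder()
                .setSource(this, Uri.parse("model.sfb"))
                .build()
                .thenAccept(renderable -> {
                    Node node = new Node();
                    node.setRenderable(renderable);
                    sceneView.getScene().addChild(node);
                });
    }

    @Override
    protected void onResume() {
        super.onResume();
        try {
            sceneView.resume();   // SceneView must be resumed/paused with the activity
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    protected void onPause() {
        super.onPause();
        sceneView.pause();
    }
}
```

Painting on the model would still be up to you (e.g. by ray-casting taps onto the mesh), but this gets a non-AR 3D view on screen.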
I need to implement a location-based app with augmented reality. I draw a model with simple interaction (that's why I've chosen LibGDX), and I need to place this model at some point in the real world.
My small research gave me some solutions (like this or this), but all of them use marker-based drawing, while I need to draw the model on a surface.
Could anyone help me?
Someone from Google has made some effort with ARCore using LibGDX and has written a blog about it:
Investigating ARCore with LibGDX
Take a look at BaseARCoreActivity, which you can use to show a 3D model defined in LibGDX.
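The surface placement itself happens on the ARCore side, independently of LibGDX: you hit-test a screen tap against the detected planes and create an anchor there, then feed the anchor's pose into your LibGDX model's transform. A minimal sketch using the plain ARCore API (session setup and rendering omitted):

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Frame;
import com.google.ar.core.HitResult;
import com.google.ar.core.Plane;
import com.google.ar.core.Trackable;

public final class SurfacePlacement {

    /**
     * Hit-tests a screen tap against detected planes and returns an anchor
     * on the first plane that was actually hit, or null if none was.
     */
    public static Anchor placeOnSurface(Frame frame, float tapX, float tapY) {
        for (HitResult hit : frame.hitTest(tapX, tapY)) {
            Trackable trackable = hit.getTrackable();
            if (trackable instanceof Plane
                    && ((Plane) trackable).isPoseInPolygon(hit.getHitPose())) {
                // The anchor's pose is what you apply to the LibGDX
                // model instance every frame.
                return hit.createAnchor();
            }
        }
        return null;
    }
}
```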
I want to implement an Android app that recognizes buildings using Wikitude, but I cannot find any sample that recognizes a 3D scene. I tried modifying the internal rendering class in their example to load my .wtm file, but nothing happened.
Is there any open-source application I can look at, or any tutorials for 3D tracking using Wikitude?
After hours of trying, I found that all the Wikitude examples were for indoor use, while my app should work outdoors. So I moved to Vuforia and used their 2D image targets instead of a 3D model, storing multiple images of each building I want to recognize.
I am using the Vuforia SDK to build an Android application and am curious how the marker tracking works. Does the app convert the video frame into byte codes and then compare these against the .dat file generated when creating the marker? Also, where is this code found in the Vuforia sample app? Is it in the C++ part? Thanks.
Well, you don't see the code for recognition and tracking because it is the intellectual property of Qualcomm and is not revealed; Vuforia is not an open-source library.
Vuforia first detects "feature points" in your target image (via the web-based target management system) and then uses that data to compare the features in the target image with each frame received from the camera.
Google "natural feature detection and tracking", which falls under the computer vision area, and you will find interesting material.
No, the detection and tracking code is located in libQCAR.so.
But the question of "how it works" is too complex to answer here. If you want to become familiar with object detection and tracking, start by learning methods such as MSER, SURF, SIFT, and Ferns, among others.
Vuforia uses an edge-detection technique: the more vertices and lines a high-contrast image contains, the higher Vuforia rates it as a target. So we can say that their algorithm is somewhat similar to SIFT.
I'm looking to do basic eye tracking on Android using the OpenCV API. I've found that there seem to be two ways to use OpenCV on Android: either through their C++ wrapper or through the JavaCV API. I'm willing to do either, but I'm looking for some idea or sample code of how I would track basic eye movement with either approach. I'm leaning toward the JavaCV API because it looks easier to use, but I could really use a tutorial on the basics of using it with Android.
Assuming you have already looked into JNI (the Java Native Interface), note that JavaCV is a Java wrapper exposing the same OpenCV functionality. As for eye tracking, you will need to get the live video feed from the camera and locate the participant's eyes in the frames, using template matching and blink detection.
You just have to make your View implement Camera.PreviewCallback in order to get hold of the camera feed, as sketched below.
The OpenCV site on eye tracking provides some sample code that will help you track the eyes.
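A rough sketch of that pipeline with the OpenCV Java bindings and an eye Haar cascade (the NV21 preview format is the Android default, and haarcascade_eye.xml ships with the OpenCV distribution; the cascade path and preview size passed in are up to you):

```java
import android.hardware.Camera;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Rect;
import org.opencv.imgproc.Imgproc;
import org.opencv.objdetect.CascadeClassifier;

public class EyePreviewCallback implements Camera.PreviewCallback {

    private final CascadeClassifier eyeCascade;
    private final int width;
    private final int height;

    public EyePreviewCallback(String cascadePath, int previewWidth, int previewHeight) {
        // haarcascade_eye.xml copied from the OpenCV distribution to local storage.
        this.eyeCascade = new CascadeClassifier(cascadePath);
        this.width = previewWidth;
        this.height = previewHeight;
    }

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        // Android preview frames default to NV21; wrap the buffer in a Mat
        // and convert to grayscale for detection.
        Mat yuv = new Mat(height + height / 2, width, CvType.CV_8UC1);
        yuv.put(0, 0, data);
        Mat gray = new Mat();
        Imgproc.cvtColor(yuv, gray, Imgproc.COLOR_YUV2GRAY_NV21);

        MatOfRect eyes = new MatOfRect();
        eyeCascade.detectMultiScale(gray, eyes);
        for (Rect eye : eyes.toArray()) {
            // Track eye positions across frames here, e.g. compare centers
            // between consecutive frames, or flag a blink when eyes vanish.
        }
    }
}
```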
If you want to see an example of OpenCV on Android, have a look at this open-source code.
Hope it helps!