Augmented Reality Android App Using Wikitude (Android)

I want to implement an Android app that recognizes buildings using Wikitude, but I cannot find any sample that recognizes a 3D scene. I tried modifying the InternalRendering class in their example to load my .wtm file, but nothing happened.
Is there any open-source application I can look at, or any tutorials for 3D tracking with Wikitude?

After hours of trying, I found that all the Wikitude examples were for indoor use, while my app needs to work outdoors. So I moved to Vuforia and used its 2D image targets instead of a 3D model, storing multiple images of each building I want to recognize.

Related

How to add 3d models to Android app for drawing in Android Studio?

I am currently trying to make a Drawing app for Android. I have already got the basic things up and running. Now I want to add 3d models to it so that users can select the models and paint on them and they can even make models in the app itself. I found Sceneform but it is primarily used for AR and I don't want to use AR. I would like to know if there is a way through which I may be able to achieve this or should I use some other framework for doing it. Any help would be welcome!
Sceneform supports 3D assets in the following formats:
OBJ
glTF (animations not supported)
FBX, with or without animations.
Please read:
https://developers.google.com/sceneform/develop/getting-started#import-sceneform-plugin
Sample Project
https://github.com/the3deers/android-3D-model-viewer
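Per the getting-started link above, models in those formats are converted to Sceneform's .sfb binary at build time by the Sceneform Gradle plugin. A minimal sketch of that build configuration (the model name and paths are placeholders, and the plugin version shown is illustrative):

```groovy
// Project-level build.gradle: pull in the Sceneform Gradle plugin.
// (Version number is illustrative; use the current release.)
dependencies {
    classpath 'com.google.ar.sceneform:plugin:1.15.0'
}

// App-level build.gradle: apply the plugin and declare each asset.
apply plugin: 'com.google.ar.sceneform.plugin'

// Converts the .obj (plus its materials) into an .sfa/.sfb pair;
// the .sfb lands in assets/ and is what ModelRenderable loads.
sceneform.asset('sampledata/models/car.obj',  // source asset path
                'default',                    // material path
                'sampledata/models/car.sfa',  // human-readable description
                'src/main/assets/car.sfb')    // binary loaded at runtime
```

Note that Sceneform can also render the resulting model in a plain SceneView without any AR session, which fits the non-AR requirement in the question.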

Creating a panoramic spherical view in Android's Cardboard

I've been wanting to create an Android Cardboard app in which the user enters a spherical panoramic view of a panoramic picture I've taken, and in which I can gather data about where the user is looking.
I've seen the "Cardboard Demo" provided by Google; it has a feature called "Photo Sphere" in which the user can view photos exactly the way I want, but I want to implement it differently.
Can anyone give me some direction on how to build such a panoramic viewer with Cardboard?
I was also trying to build the same kind of app. You can do a skybox implementation in OpenGL (cube mapping).
As you said, if you already have a panorama image, you can map it onto a sphere.
You can refer to this blog.
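The sphere-mapping idea above can be sketched as plain vertex generation: build a UV sphere whose texture coordinates map latitude and longitude linearly, which is exactly how an equirectangular panorama wraps onto a sphere. The class, method name, and tessellation counts here are illustrative choices, not from any SDK:

```java
// Sketch: vertices and UVs for a UV sphere that an equirectangular
// panorama can be texture-mapped onto.
public class PanoSphere {
    // Returns interleaved x, y, z, u, v per vertex.
    public static float[] buildVertices(float radius, int stacks, int slices) {
        float[] verts = new float[(stacks + 1) * (slices + 1) * 5];
        int i = 0;
        for (int st = 0; st <= stacks; st++) {
            double phi = Math.PI * st / stacks;              // 0..pi, pole to pole
            for (int sl = 0; sl <= slices; sl++) {
                double theta = 2.0 * Math.PI * sl / slices;  // 0..2pi around
                verts[i++] = (float) (radius * Math.sin(phi) * Math.cos(theta)); // x
                verts[i++] = (float) (radius * Math.cos(phi));                   // y
                verts[i++] = (float) (radius * Math.sin(phi) * Math.sin(theta)); // z
                verts[i++] = (float) sl / slices;  // u: longitude maps linearly
                verts[i++] = (float) st / stacks;  // v: latitude maps linearly
            }
        }
        return verts;
    }
}
```

Render the resulting triangles from inside the sphere (camera at the origin) with the panorama bound as the texture.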
I ended up using Google's Cardboard SDK for Unity.
Basic idea :
Get Unity.
https://unity3d.com/get-unity/download
Get Google CardBoard
https://www.google.com/get/cardboard/get-cardboard/
Download the following SDK for Unity:
https://github.com/googlesamples/cardboard-unity
First try to get the Cardboard DemoScene (it's in the previous link) running in your app. Use this link for guidance:
https://developers.google.com/cardboard/unity/get-started
The DemoScene is basically a 3D game for Google Cardboard. You can start by loading that project into Unity and playing around with the different elements. From here it's pure Unity: take the Cardboard elements from the DemoScene project (they are the only important thing here; they define two offset cameras, synced with the movement of the phone, for stereoscopic viewing in the Cardboard) and create your own scene.
As for gathering data on where the user is looking, I extracted the angles of the Cardboard camera objects in Unity.
Good Luck!
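The angle-extraction step described above is just math on the head rotation; Cardboard's HeadTransform (or Unity's camera transform) exposes an equivalent rotation matrix. A sketch in plain Java, assuming a row-major 3x3 matrix and an OpenGL-style camera looking down -Z (both conventions are assumptions to check against your SDK):

```java
public class GazeAngles {
    // m is a row-major 3x3 rotation matrix; returns {yawDeg, pitchDeg}
    // for the camera's forward direction (the rotated -Z axis).
    public static double[] fromRotationMatrix(double[] m) {
        // Forward = R * (0, 0, -1) = negated third column of m.
        double fx = -m[2], fy = -m[5], fz = -m[8];
        double yaw = Math.toDegrees(Math.atan2(-fx, -fz)); // + when looking left
        double pitch = Math.toDegrees(Math.asin(fy));      // + when looking up
        return new double[] { yaw, pitch };
    }
}
```

Logging these two angles per frame gives a simple gaze-direction trace for the kind of data gathering the asker wants.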
Refer to this link: Open Photosphere from SD card in Android to view in Google Cardboard
You can use the Rajawali framework:
https://github.com/ejeinc/RajawaliCardboardExample

How to integrate Metaio + OpenCV in an Android application?

Hi, I'm trying to create an application related to Augmented Reality (AR), and I was able to configure the Metaio SDK and the OpenCV library successfully in two separate applications.
But I want to use both the OpenCV and Metaio libraries together in one application, so can anyone help me with this integration?
In my single application, I want to use OpenCV for markerless detection and Metaio for 3D model rendering.
Metaio: http://www.metaio.com/
OpenCV: http://opencv.org/
Update:
I'm using OpenCV to detect shapes in the camera image and want to display 3D objects rendered by Metaio on those shapes, similar to marker tracking.
Metaio and OpenCV each have their own camera view; I have disabled OpenCV's camera view.
I want to convert an ImageStruct object received in the onNewCameraFrame() method into an OpenCV Mat on Android. For this, I have registered a MetaioSDKCallback to continuously receive camera frames.
But the onSDKReady() and onNewCameraFrame() methods of this callback are not being called, even though I have added metaioSDK.requestCameraImage().
This is where I'm stuck.
I suggest you integrate the OpenCV4Android SDK and look at the samples that come with it; they are very good examples of how to use the camera easily.
For your objective, the face-detection example is probably a good one to check.
Here is the tutorial that helps you install and configure the OpenCV SDK.
For the AR part I can't help you much, but have a look at this discussion; it could be helpful.
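On the ImageStruct-to-Mat conversion mentioned in the question: once the raw frame bytes arrive, most of the work is extracting the luminance plane. Assuming an NV21 frame (the common Android camera format; verify this against ImageStruct's actual color format), the first width*height bytes are the grayscale Y plane, which is exactly what a CV_8UC1 Mat holds. A minimal, OpenCV-free sketch of that step (class and method names are illustrative):

```java
public class FrameConverter {
    // Returns the Y (grayscale) plane of an NV21 frame; the interleaved
    // VU bytes that follow it in the buffer are only needed for color.
    public static byte[] luminancePlane(byte[] nv21, int width, int height) {
        byte[] gray = new byte[width * height];
        System.arraycopy(nv21, 0, gray, 0, width * height);
        return gray;
    }
}
```

With OpenCV's Java bindings on Android, the result can then be copied into a Mat via new Mat(height, width, CvType.CV_8UC1) and Mat.put(0, 0, gray).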

How does Vuforia image recognition work?

I am using the Vuforia SDK to build an Android application and am curious how the marker tracking works. Does the app convert the video frame into byte codes and then compare these against the .dat file generated when creating the marker? Also, where is this code found in the Vuforia sample app? Is it in the C++? Thanks.
Well, you don't see the code for recognition and tracking because it is the intellectual property of Qualcomm and is not revealed; Vuforia is not an open-source library.
Vuforia first detects "feature points" in your target image [via the web-based target management] and then uses that data to compare the features in the target image against each frame received from the camera.
Google "natural feature detection and tracking", which falls under the computer vision area, and you will find interesting material.
No, the detection and tracking code is in libQCAR.so.
But the question "how it works" is too complex to answer here. If you want to become familiar with object detection and tracking, start by learning methods such as MSER, SURF, SIFT, and ferns, among others.
Vuforia uses an edge-detection technique. If there are more vertices or lines in a high-contrast image, it rates as a better target for Vuforia, so we can say their algorithm is somewhat similar to SIFT.
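Vuforia's matcher itself is closed source, but the "compare the features" step the answers describe is conventionally a nearest-neighbour search over feature descriptors. A purely illustrative sketch with ORB-style 256-bit binary descriptors and Hamming distance (this is not Vuforia's actual code):

```java
public class DescriptorMatcher {
    // Hamming distance between two 256-bit descriptors (four longs each).
    static int hamming(long[] a, long[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) d += Long.bitCount(a[i] ^ b[i]);
        return d;
    }

    // Index of the target descriptor closest to the query, or -1 if none
    // is within maxDistance (rejects weak matches).
    public static int bestMatch(long[] query, long[][] targets, int maxDistance) {
        int best = -1, bestDist = maxDistance + 1;
        for (int i = 0; i < targets.length; i++) {
            int d = hamming(query, targets[i]);
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }
}
```

A real pipeline would also apply a ratio test and geometric verification (e.g. a homography fit with RANSAC) over many such matches before declaring the target detected.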

Which framework to choose for 3D animation on Android?

I have started to develop an Android application, and I would like to run a 3D animation at launch showing a car in 3D. This is my first experience with this, so I would like to know which framework I should use. I've heard about min3D, but I can't find documentation to guide my development from A to Z.
Since you also asked about choosing a framework, I want to recommend one besides min3D: take a look at the Rajawali framework.
There is a big list of Rajawali tutorials here. You can also download an Android project with examples from GitHub.
I'm very impressed with this library!
How to load 3D models with min3D (a tutorial): see this link.
You can also look at this project; it seems to target the same goal, and the sources are attached, so feel free to study it and take what's useful: min3D car.
