Using ARToolKit with Android

A project has been assigned to me in which I need to use ARToolKit, but I am quite confused about how to use it and how to get started with it.
Is it the same as Metaio, Vuforia and Total Immersion? Please help me get started with it. I would be thankful for some startup tutorials and sample examples on ARToolKit.
Any kind of help would be appreciated.

Here is the ARToolKit demo provided on Google Code, which might guide you: Check Out
Also check out the documentation here.

For Android:
You can give droidAR a try:
droidAR
For iOS:
You can give VRToolkit a try. This app uses ARToolKitPlus to detect markers in the video frames and then overlays 3D objects that follow the movements of the marker.
You can scan all 4096 BCH markers, as well as thin-border markers, after setting the corresponding property to YES.
VRToolKit github

Related

Workout Movement Counting using google_ml_kit (Flutter)

I am creating an app using google_ml_kit for face recognition. I have successfully implemented face recognition using Flutter (front end), Node.js (back end) and a MongoDB database. Now I have to create a workout movement counting example (dumbbell count). Can anyone please let me know whether this is possible with the google_ml_kit package? If yes, please share some tips, which would help me a lot.
Thanks in advance!
The ML Kit Android vision quickstart app provides an example of rep counting with Pose Detection: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart
Please search for "Pose Detection" on the page linked above and see the instructions on how to enable classification and counting in the demo app.
Here is the most relevant code: https://github.com/googlesamples/mlkit/tree/master/android/vision-quickstart/app/src/main/java/com/google/mlkit/vision/demo/java/posedetector
The implementation is in Java, but it is just an algorithm, so you should be able to convert it to Dart, I guess (I am not familiar with Dart personally).
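For a sense of what the counting logic involves, here is a minimal sketch in Java on top of ML Kit's Pose Detection API. It is not the quickstart demo's code (the demo classifies "up"/"down" poses with a k-NN classifier before counting); the class name, landmark choice and angle thresholds below are illustrative assumptions only.

import android.graphics.PointF;
import com.google.mlkit.vision.common.InputImage;
import com.google.mlkit.vision.pose.Pose;
import com.google.mlkit.vision.pose.PoseDetection;
import com.google.mlkit.vision.pose.PoseDetector;
import com.google.mlkit.vision.pose.PoseLandmark;
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions;

public class CurlRepCounter {
    private final PoseDetector detector = PoseDetection.getClient(
            new PoseDetectorOptions.Builder()
                    .setDetectorMode(PoseDetectorOptions.STREAM_MODE) // video frames
                    .build());

    private boolean armDown = true; // simple two-state machine
    private int reps = 0;

    /** Feed each camera frame here (e.g. from CameraX's ImageAnalysis). */
    public void onFrame(InputImage image) {
        detector.process(image).addOnSuccessListener(this::countFromPose);
    }

    private void countFromPose(Pose pose) {
        PoseLandmark shoulder = pose.getPoseLandmark(PoseLandmark.LEFT_SHOULDER);
        PoseLandmark elbow = pose.getPoseLandmark(PoseLandmark.LEFT_ELBOW);
        PoseLandmark wrist = pose.getPoseLandmark(PoseLandmark.LEFT_WRIST);
        if (shoulder == null || elbow == null || wrist == null) return;

        double angle = angleDegrees(shoulder.getPosition(), elbow.getPosition(), wrist.getPosition());
        // Thresholds are illustrative; the quickstart app classifies poses instead.
        if (armDown && angle < 60) {          // arm curled up
            armDown = false;
        } else if (!armDown && angle > 150) { // arm extended again -> one rep
            armDown = true;
            reps++;
        }
    }

    /** Angle at point b formed by the segments b->a and b->c, in degrees. */
    private static double angleDegrees(PointF a, PointF b, PointF c) {
        double angle = Math.toDegrees(
                Math.atan2(c.y - b.y, c.x - b.x) - Math.atan2(a.y - b.y, a.x - b.x));
        angle = Math.abs(angle);
        return angle > 180 ? 360 - angle : angle;
    }

    public int getReps() { return reps; }
}

The same state-machine idea should be portable to Dart, since the google_ml_kit plugin exposes the same pose landmarks.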
I have implemented the dumbbell count using the tflite plugin. If you need the source code or support, comment below.

How can I display a video when a marker is detected in Unity Android ARToolKit?

I want to display a video when a marker is detected. I am working in Unity and coding for Android. Please help.
The problem here is that the current MovieTexture in Unity does not support mobile playback (upgrading this is on their roadmap).
But there are some options you can check: there are a lot of plugins that implement the functionality you want, although most of them are not free.
Or you can go the easy way and check the download section from Vuforia, because they have an example that implements exactly the functionality you want:
https://developer.vuforia.com/downloads/samples
I used the Easy Movie Texture plugin to play video in Unity. It works as I needed.

Geolocation-based augmented reality

My task is to develop an application for Android to be used by tourists. Basic use case: I am walking through the old part of some town, I start my app and point the camera at some place, and an old building that is already gone is shown in its place as it looked before.
The first direction I explored was location-based recognition. I tried some frameworks like Wikitude, Metaio and DroidAR. None of these fully met my needs because (in my opinion) none of them uses the newest tools that should make this task easier, such as the new Google Play Services Location API, for its robustness. I don't know if I could do better, but I would prefer not to write my own solution.
I am now thinking about exploring marker-based recognition, but it would require additional work to place markers at the desired spots, and I don't believe the user would be at the right angle and distance to those markers. I have seen a video that used some sort of edge detection, but none of the frameworks I tried had this feature.
Do you know of some direction, technology or idea that I could explore that may lead to a successful solution?
Augmented reality transforms a real-world coordinate system into the camera coordinate system. In location-based AR, the real-world coordinate system is the geographic coordinate system. We convert the GPS coordinates (latitude, longitude, altitude) to navigation coordinates (East, North, Up), then transform the navigation coordinates into camera coordinates and display them on the camera view.
I just created a demo for you, not using any SDK:
https://github.com/dat-ng/ar-location-based-android
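As a rough sketch of that first conversion step (not taken from the linked demo; the class and method names here are made up for illustration), the geodetic-to-ENU transform in Java looks like this. The resulting East/North/Up offset of a point of interest relative to the user can then be rotated into the camera frame using the device's orientation.

/** Minimal WGS-84 geodetic -> local East-North-Up conversion (illustrative only). */
public final class GeoToEnu {
    private static final double A = 6378137.0;          // WGS-84 semi-major axis (m)
    private static final double E2 = 6.69437999014e-3;  // first eccentricity squared

    /** Geodetic (degrees, metres) to Earth-Centered Earth-Fixed coordinates. */
    static double[] toEcef(double latDeg, double lonDeg, double alt) {
        double lat = Math.toRadians(latDeg), lon = Math.toRadians(lonDeg);
        double n = A / Math.sqrt(1 - E2 * Math.sin(lat) * Math.sin(lat));
        return new double[] {
                (n + alt) * Math.cos(lat) * Math.cos(lon),
                (n + alt) * Math.cos(lat) * Math.sin(lon),
                (n * (1 - E2) + alt) * Math.sin(lat)
        };
    }

    /** ENU offset of a point relative to the user's current (reference) position. */
    static double[] toEnu(double lat, double lon, double alt,
                          double refLat, double refLon, double refAlt) {
        double[] p = toEcef(lat, lon, alt);
        double[] r = toEcef(refLat, refLon, refAlt);
        double dx = p[0] - r[0], dy = p[1] - r[1], dz = p[2] - r[2];
        double sLat = Math.sin(Math.toRadians(refLat)), cLat = Math.cos(Math.toRadians(refLat));
        double sLon = Math.sin(Math.toRadians(refLon)), cLon = Math.cos(Math.toRadians(refLon));
        double east  = -sLon * dx + cLon * dy;
        double north = -sLat * cLon * dx - sLat * sLon * dy + cLat * dz;
        double up    =  cLat * cLon * dx + cLat * sLon * dy + sLat * dz;
        return new double[] { east, north, up };
    }
}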
I personally recommend you use Wikitude, because I have created an AR app for Android using the Wikitude SDK.
Here is a link to an app developed by Wikitude itself:
https://play.google.com/store/apps/details?id=com.wikitude&hl=en
This app will give you a good idea of how to explore place details using the Wikitude SDK. The SDK has free as well as paid libraries. It is well documented and very easy to implement, and they provide very good sample practices for beginners.
Refer to this link:
http://www.wikitude.com/products/wikitude-augmented-reality-sdk-mobile/wikitude-sdk-android/
I hope this puts you on track.
You already have some great ideas for your app. I guess these links will help you learn more:
http://net.educause.edu/ir/library/pdf/ERB1101.pdf
http://www.adristorical-lands.eu/index.php/sq/augmented-reality-app
Hope this helps you go further in your project. Thank you.

Android Panorama viewing

Since I began programming, this forum has provided me with everything I have ever needed, so thanks for all of it!
Now I am here to ask about a problem that I have not yet found covered here.
I am working on an Android application. In my app, I have to display an Android panorama that is stored on a remote server. I have two problems with this and hope you can help:
1. I take a panorama with my phone, and when I connect the phone to a PC to copy the panorama, it becomes a simple JPEG image. I don't know how or why.
2. I have no idea how to view a panorama on Android. I have searched on Google and on Android forums and am still at my starting point, and I have to present my application next week.
So I am counting on you to get me out of this.
Thanks.
There is a library that does this with spherical, cubic and cylindrical panoramic images: PanoramaGL
There is also a utility in the Google Play Services libraries, but only for spherical images; refer to this:
Android Support for Photo Sphere
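For the viewing part, here is a sketch of how the GoogleApiClient-based panorama API in Google Play Services is typically used; treat the exact class and method names as assumptions to verify against the current Play Services documentation. It also hints at problem 1: a Photo Sphere is an ordinary JPEG with extra XMP metadata, which is why it looks like a plain image when copied to a PC.

import android.app.Activity;
import android.content.Intent;
import android.net.Uri;
import android.os.Bundle;
import com.google.android.gms.common.api.GoogleApiClient;
import com.google.android.gms.panorama.Panorama;

public class PanoramaViewerActivity extends Activity
        implements GoogleApiClient.ConnectionCallbacks {

    private GoogleApiClient client;
    private Uri panoramaUri; // local file already downloaded from your server

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // "panorama_uri" is a hypothetical extra name used for this sketch.
        panoramaUri = Uri.parse(getIntent().getStringExtra("panorama_uri"));
        client = new GoogleApiClient.Builder(this)
                .addApi(Panorama.API)
                .addConnectionCallbacks(this)
                .build();
    }

    @Override protected void onStart() { super.onStart(); client.connect(); }
    @Override protected void onStop()  { client.disconnect(); super.onStop(); }
    @Override public void onConnectionSuspended(int cause) { }

    @Override
    public void onConnected(Bundle connectionHint) {
        // Ask Play Services whether the JPEG carries Photo Sphere metadata
        // and, if so, get an Intent for the built-in panorama viewer.
        Panorama.PanoramaApi.loadPanoramaInfo(client, panoramaUri)
                .setResultCallback(result -> {
                    Intent viewer = result.getViewerIntent();
                    if (result.getStatus().isSuccess() && viewer != null) {
                        startActivity(viewer);
                    }
                    // A null viewer Intent means the file is treated as a plain
                    // JPEG (its Photo Sphere XMP metadata is missing or stripped).
                });
    }
}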

How does Vuforia image recognition work?

I am using the Vuforia SDK to build an Android application and am curious how the marker tracking works. Does the app convert the video frame into byte codes and then compare these against the .dat file generated when creating the marker? Also, where is this code found in the Vuforia sample app; is it in the C++? Thanks.
Well, you don't see the code for recognition and tracking because it is the intellectual property of Qualcomm and usually is not revealed; Vuforia is not an open-source library.
Vuforia first detects "feature points" in your target image [web-based target management] and then uses that data to compare the features in the target image with those in the incoming camera frame.
Google "natural feature detection and tracking", which falls under the computer vision area, and you will find interesting material.
No, the detection and tracking code is located in libQCAR.so.
But the question of how it works is too complex to answer here. If you want to become familiar with object detection and tracking, start by learning methods such as MSER, SURF, SIFT, ferns and others.
Vuforia uses an edge-detection-like technique: if there are more corners or lines in a high-contrast image, Vuforia rates it more highly, so we can say that their algorithm is somewhat similar to SIFT.
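Vuforia's own detector is closed source, but the general natural-feature approach described above can be illustrated with OpenCV's Java bindings. The sketch below (hypothetical file names, ORB features instead of Vuforia's proprietary ones) detects feature points in a target image and matches them against a camera frame, which is conceptually the step a tracker performs before estimating the marker pose.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDMatch;
import org.opencv.core.MatOfKeyPoint;
import org.opencv.features2d.DescriptorMatcher;
import org.opencv.features2d.ORB;
import org.opencv.imgcodecs.Imgcodecs;

public class FeatureMatchDemo {
    public static void main(String[] args) {
        // Load the OpenCV native library (on Android you would use OpenCVLoader instead).
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Target image (the "marker") and a camera frame to search in (file names are placeholders).
        Mat target = Imgcodecs.imread("target.jpg", Imgcodecs.IMREAD_GRAYSCALE);
        Mat frame  = Imgcodecs.imread("frame.jpg", Imgcodecs.IMREAD_GRAYSCALE);

        // 1. Detect feature points and compute binary descriptors for both images.
        ORB orb = ORB.create();
        MatOfKeyPoint kpTarget = new MatOfKeyPoint(), kpFrame = new MatOfKeyPoint();
        Mat descTarget = new Mat(), descFrame = new Mat();
        orb.detectAndCompute(target, new Mat(), kpTarget, descTarget);
        orb.detectAndCompute(frame, new Mat(), kpFrame, descFrame);

        // 2. Match descriptors between the target and the frame.
        DescriptorMatcher matcher =
                DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
        MatOfDMatch matches = new MatOfDMatch();
        matcher.match(descTarget, descFrame, matches);

        // 3. A real tracker would now filter the matches and estimate a homography
        //    (e.g. Calib3d.findHomography with RANSAC) to recover the marker's pose.
        System.out.println("Keypoints in target: " + kpTarget.rows()
                + ", matches found: " + matches.rows());
    }
}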
