I want to develop an augmented reality application for Android that can use markers to generate 3D objects, and these 3D objects should be interactive via the phone's touch input.
I have browsed through the available SDKs such as Vuforia, Junaio, or Layar Player and found that they all support:
Marker detection with 3D virtual image overlay
Virtual buttons that trigger when you occlude them so they become invisible to the camera. (Vuforia)
Interactive video playback.
However, what I am looking for is:
A virtual object in AR that can be made interactive using the phone's touch input.
I am pretty sure it is possible, as there are virtual video overlays that start a video when clicked/tapped (similar to an interactive virtual element).
Q. Could someone suggest a library/toolkit best suited for the functionality I'm looking for?
or
Q. Is there something I missed during my search of the aforementioned toolkits that already supports the functionality I want?
According to your last description, what you need is supported by Vuforia, and there is a sample for pure Android (no Unity) as well.
You need to look at the Dominoes sample, where they show how to drag an OpenGL domino object on the screen.
Look here for a quick description:
https://developer.vuforia.com/forum/faq/android-how-do-i-project-screen-touch-target
If you run into problems while trying to implement it yourself, you can search the Vuforia forums for answers to common problems others have faced with this. But basically, it works well in their sample.
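To illustrate the idea behind that FAQ in plain Android Java: you unproject the touch point at the near and far clipping planes and intersect the resulting ray with the target plane (z = 0 in the image target's coordinate system). The sketch below is only a rough, untested illustration of that math; the modelview and projection matrices are assumed to come from the current Vuforia tracking result and camera calibration (as the native sample provides them), and the class and method names are placeholders, not Vuforia API calls.

import android.opengl.GLU;
import javax.microedition.khronos.opengles.GL10;

// Placeholder helper class; only GLU.gluUnProject is a real Android API here.
public class TouchProjection {

    // Returns the touch position on the target plane (z = 0 in target space), or null.
    public static float[] projectTouchToTarget(float touchX, float touchY,
                                               int viewWidth, int viewHeight,
                                               float[] modelViewMatrix,    // assumed: pose of the tracked target
                                               float[] projectionMatrix) { // assumed: GL projection matrix
        int[] viewport = {0, 0, viewWidth, viewHeight};
        // GL window coordinates have their origin at the bottom-left, so flip Y.
        float winY = viewHeight - touchY;

        float[] near = unproject(touchX, winY, 0.0f, modelViewMatrix, projectionMatrix, viewport);
        float[] far  = unproject(touchX, winY, 1.0f, modelViewMatrix, projectionMatrix, viewport);
        if (near == null || far == null) return null;

        // Intersect the ray (near -> far) with the target plane z = 0.
        float dz = far[2] - near[2];
        if (Math.abs(dz) < 1e-6f) return null;   // ray is parallel to the target plane
        float t = -near[2] / dz;
        if (t < 0) return null;                  // intersection lies behind the camera
        return new float[] {
                near[0] + t * (far[0] - near[0]),
                near[1] + t * (far[1] - near[1]),
                0.0f
        };
    }

    // gluUnProject wrapper: returns target-space coordinates, already divided by w.
    private static float[] unproject(float winX, float winY, float winZ,
                                     float[] modelView, float[] projection, int[] viewport) {
        float[] out = new float[4];   // Android's gluUnProject writes homogeneous coordinates
        int ok = GLU.gluUnProject(winX, winY, winZ,
                modelView, 0, projection, 0, viewport, 0, out, 0);
        if (ok != GL10.GL_TRUE || out[3] == 0.0f) return null;
        return new float[] { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
    }
}

The resulting x/y values are in the target's own units, so you can compare them against the size and position of your virtual object to decide whether it was touched.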
Well, this is for Unity 5.x
First, go through Vuforia's documentation to learn more about Image Targets and the AR Camera.
Import your 3D models into the scene so that all interactive objects are children of the image target.
Read touch input on the mobile phone (I used Android for my project):
if(Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
Convert the touch point into a ray from the screen into the 3D world:
Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
Create a plane in the scene (for the ray to hit)
Plane plane = new Plane(Vector3.up, Vector3.zero);
If the ray hits the plane, get the x, y, z position. The value of pos will hold the world position:
float distance;
if (plane.Raycast(ray, out distance)) {
    Vector3 pos = ray.GetPoint(distance);
}
Please modify the code according to your need. This is a very basic example.
Related
I can create a gazePointer in ARCore using Unity (as shown in: https://codelabs.developers.google.com/codelabs/arcore-intro/#10), but is it possible to do the same without Unity (just using ARCore / Sceneform)?
The functionality I need to replicate in augmented reality is to detect the user's gaze on a predefined 2D reference image, and if the gaze lasts more than (say) 5 seconds, then a particular action should be triggered. We can assume that the "gaze" is at the absolute center of the user's view.
Note: I'm planning to do this in stereoscopic mode of a headset like Google Cardboard (i.e. with the phone camera uncovered).
The ARCore Unity example you reference makes use of the Raycast function of the Frame class of the ARCore Unity API. The equivalent function in the ARCore Sceneform API appears to be the hitTest function of the Scene class.
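As a rough, untested sketch of how that could look with Sceneform (assuming you have an ArSceneView and a Node already anchored to the detected AugmentedImage; the class, field, and callback names below are placeholders): cast a ray from the centre of the screen each frame with Camera.screenPointToRay, feed it to Scene.hitTest, and accumulate a dwell timer while the hit node belongs to your image.

import com.google.ar.sceneform.ArSceneView;
import com.google.ar.sceneform.Camera;
import com.google.ar.sceneform.HitTestResult;
import com.google.ar.sceneform.Node;
import com.google.ar.sceneform.Scene;
import com.google.ar.sceneform.math.Ray;

// Placeholder class: imageNode is assumed to be the node you attached to the AugmentedImage anchor.
public class GazeDwellChecker {

    private static final float DWELL_SECONDS = 5f;
    private float gazeSeconds = 0f;

    public void attach(ArSceneView sceneView, Node imageNode, Runnable onDwell) {
        Scene scene = sceneView.getScene();
        scene.addOnUpdateListener(frameTime -> {
            // Cast a ray from the centre of the screen (our "gaze").
            Camera camera = scene.getCamera();
            Ray centerRay = camera.screenPointToRay(
                    sceneView.getWidth() / 2f, sceneView.getHeight() / 2f);

            // Scene.hitTest plays the role of the Raycast call in the Unity codelab.
            HitTestResult result = scene.hitTest(centerRay);
            Node hit = result.getNode();

            boolean gazingAtTarget = hit != null
                    && (hit == imageNode || hit.getParent() == imageNode);
            if (gazingAtTarget) {
                gazeSeconds += frameTime.getDeltaSeconds();
                if (gazeSeconds >= DWELL_SECONDS) {
                    onDwell.run();      // trigger the action after ~5 s of continuous gaze
                    gazeSeconds = 0f;
                }
            } else {
                gazeSeconds = 0f;       // gaze left the target, reset the timer
            }
        });
    }
}

Note that Scene.hitTest only reports Sceneform nodes, so this presumes you render at least an invisible node over the reference image; if you need hits against ARCore trackables themselves, the Frame-level hitTest is the one to use.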
I have been working on and exploring ARCore for the past few days. I saw this video from Scope AR. I noticed that they are freely rendering the arrows wherever they touch (e.g. on the engine). From what I have understood, you can only render at points or planes identified by ARCore. My question is: how are they rendering the arrows without even knowing whether that point (where the person taps on the screen) is actually identified by ARCore?
They are using a Microsoft HoloLens; it has nothing to do with ARCore.
I want to create a 3D view (360-degree view) of an object captured using the camera, like the apps Fyuse or Phogy do. I researched this but did not find anything useful to start with.
I have some questions like:
What tool should I use for this, e.g. Unity, or is Android Studio enough?
Should I use an SDK (like Rajawali for 3D) and some other tool to accomplish this, or can this be implemented without using any third-party SDK?
Can this be implemented by capturing a video of the object, extracting its frames, and then combining them to show a 360-degree view?
Can anyone please guide me on this? Any help is appreciated.
In fact, those apps are not really 3D.
You can get similar results by recording a video together with information from the motion/pose sensors, so you can assign a phone pose to every frame.
Then you can control the playback according to the actual phone rotation.
This project might help you: https://github.com/e-lab/VideoSensors
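As a minimal sketch of that approach on Android, assuming the frames have already been extracted from the capture video (the class name, the frames list, and the sweep angle are placeholders, not an API from those apps): listen to the rotation vector sensor and map the change in azimuth since the start to a frame index.

import android.content.Context;
import android.graphics.Bitmap;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.widget.ImageView;
import java.util.List;

// Placeholder class: shows one pre-extracted frame depending on how far the phone has rotated.
public class RotationPlayback implements SensorEventListener {

    private final SensorManager sensorManager;
    private final ImageView imageView;
    private final List<Bitmap> frames;           // assumed: frames extracted from the capture video
    private final float captureSweepRadians;     // assumed: how far the phone swept around the object
    private Float startAzimuth = null;

    public RotationPlayback(Context context, ImageView imageView,
                            List<Bitmap> frames, float captureSweepRadians) {
        this.sensorManager = (SensorManager) context.getSystemService(Context.SENSOR_SERVICE);
        this.imageView = imageView;
        this.frames = frames;
        this.captureSweepRadians = captureSweepRadians;
    }

    public void start() {
        Sensor rotation = sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);
        sensorManager.registerListener(this, rotation, SensorManager.SENSOR_DELAY_GAME);
    }

    public void stop() {
        sensorManager.unregisterListener(this);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        // Rotation vector -> rotation matrix -> azimuth/pitch/roll.
        float[] rotationMatrix = new float[9];
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
        float[] orientation = new float[3];
        SensorManager.getOrientation(rotationMatrix, orientation);
        float azimuth = orientation[0];          // yaw in radians, -pi..pi

        if (startAzimuth == null) startAzimuth = azimuth;

        // Map the rotation since start to a frame index (wrap-around at +/-pi is ignored here).
        float delta = azimuth - startAzimuth;
        float fraction = Math.max(0f, Math.min(1f, delta / captureSweepRadians));
        int index = Math.round(fraction * (frames.size() - 1));
        imageView.setImageBitmap(frames.get(index));
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}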
I am working on an augmented reality app that requires an image tracker placed at a distance. A target would be a billboard or scoreboard at a basketball game. I have tried Qualcomm's Vuforia SDK; it seems it only works when the marker is placed within 3 feet of the camera. When you move further away, I think it loses detail and the AR engine is no longer able to recognize the tracker.
In theory, if the marker is large and bright enough, with clearly defined details and border markings for tracking purposes, should it not work?
Also, is there any way for an AR app to recognize ANY flat surface, like a table or hardwood floor with a variety of colors and textures, as long as it's a flat surface? Typical applications would be a virtual keyboard or a chess board.
thanks,
Joe
AR is about recognizing markers, not shapes. The AR engine's input is the image from the camera and there is no way to determine shape from it, so the answer to your second question is: NO.
PS: In my case (iOS) the default marker is detected from about 1.5 m and can be tracked up to about 4 m. I think the resolution of the camera is an important factor and can affect tracking efficiency.
Our experience is that a marker about 20x20 cm in size is readable by the Vuforia SDK at a distance of about 5 meters. That seems to be the very limit.
I have implemented an augmented reality application for Android using Adobe AIR for Android, FLARManager, and Away3DLite.
The program works fine in Flash. However, when I publish it to my mobile phone (HTC Nexus One) or run it on the emulator, my camera doesn't activate and all I can see is the colour of my background and the framerate display.
I think the problem is the Camera3D I have used, which is the FLARCamera_Away3DLite from FLARManager.
This is how I set up my camera:
import com.transmote.flar.camera.FLARCamera_Away3DLite;
private var camera3D:FLARCamera_Away3DLite;
this.camera3D = new FLARCamera_Away3DLite(this.flarManager, new Rectangle(0, 0, this.stage.stageWidth, this.stage.stageHeight));
I will really appreciate any advice I can get from you.
Thank you George
I think you have the wrong idea of the camera class. The camera class you use is the camera in your "virtual" 3D world, and it is filming that 3D world. The "film" it then produces goes to the View class, which renders your 3D world in 2D. Your screen is a 2D screen and is not capable of showing 3D. The camera class in combination with the view converts your 3D scene to a 2D image that is shown on your screen.
But since you want to make an AR app, you mean the camera of the phone. You can't use the Away3D camera class for this. This tutorial shows how to use the camera of your Android phone in Flash.
The steps you want to take are: get your phone camera feed and display it on the screen; then use FLARToolkit to determine the position of your marker; then adjust the 3D model to the position of the marker; and last but not least, show the 3D model on the screen (using the Away3D/Papervision camera and view). So basically you have two layers in your Flash app: one background layer, which is the feed of your phone camera, and another layer (on top of it), which is your view from Away3D or Papervision.
I think if you combine those tutorials you can make your application:
Use your phone camera
Augmented Reality with FLARManager
AR basics