I have implemented an augmented reality application for Android using Adobe AIR for Android, FLARManager, and Away3DLite.
The program works fine in Flash; however, when I publish it to my mobile phone (HTC Nexus One) or run it on the emulator, the camera doesn't activate and all I can see is the colour of my background and the framerate display.
I think the problem is the Camera3D that I have used, which is the FLARCamera_Away3DLite from FLARManager.
This is how I set up my camera:
import com.transmote.flar.camera.FLARCamera_Away3DLite;
import flash.geom.Rectangle;

private var camera3D:FLARCamera_Away3DLite;

// Size the FLAR camera to cover the full stage.
this.camera3D = new FLARCamera_Away3DLite(this.flarManager, new Rectangle(0, 0, this.stage.stageWidth, this.stage.stageHeight));
I would really appreciate any advice I can get from you.
Thank you, George
I think you have misunderstood the camera class. The camera class you are using is the camera in your "virtual" 3D world: it films your 3D scene. The "film" it produces then goes to the View class, which displays your 3D world in 2D. Your screen is a 2D surface and is not capable of showing 3D; the camera class, in combination with the view, converts your 3D scene into the 2D image that is shown on your screen.
But since you want to make an AR app, what you mean is the camera of the phone. You can't use the Away3D camera class for this. This tutorial shows how to use the camera of your Android phone in Flash.
The steps you want to take:
1. Get your phone camera feed and draw it on the screen.
2. Use FLARToolkit to determine the position of your marker.
3. Adjust the 3D model to the position of the marker.
4. Last but not least, show the 3D model on the screen (using the Away3D/Papervision camera and view).
So basically you have two layers in your Flash app: a background layer, which is the feed of your phone camera, and another layer on top of it, which is your view from Away3D or Papervision.
I think if you combine those tutorials you can make your application:
Use your phone camera
Augmented Reality with FLARManager
AR basics
I'm making an Android app quite like Snapchat, but focusing on the background. Basically, I want to use a custom background; let's say it's a picture of snow like this.
Then I use Vuforia for Unity to display an augmented reality model: the ARCamera from Vuforia displays the 3D model, and I'm using the Lean Touch asset to resize and move it.
The problem is: how do I merge the custom background with the device camera feed, so that people can still be detected? In other words, how can I make the snow picture the background of the device camera view?
The result I want is to be able to take a picture of someone standing in the middle of the snow background.
I'm using Unity 5.4 and Vuforia 6.2.
I'm building an Android app that has to identify, in real time, a mark/pattern placed on the four corners of a visiting card. I'm using the preview stream of the phone's rear camera as input.
I want to overlay a small circle on the screen where the mark is present. This is similar to how reference dots are shown on screen by a QR reader at the corner points of the QR code preview.
I know how to get frames from the camera using the native Android SDK, but I have no clue about the processing that needs to be done, or how to optimize it for real-time detection. I tried messing around with OpenCV, and there seems to be a bit of lag in its preview frames.
So I'm trying to write a native algorithm using raw pixel values from the frame. Is this advisable? The mark/pattern will always be the same in my case. Please guide me on an algorithm to find the pattern.
The image below shows my pattern along with some details (ratios) about it (the same as the one used in QR codes, but I'm placing it at all 4 corners instead of 3).
I think one approach is to look for black and white pixel runs in the ratios mentioned below to detect the mark and find the coordinates of its center, but I have no idea how to code this on Android. I'm looking for an optimized approach for real-time recognition and display.
Any help is much appreciated! Thanks
Detecting patterns on four corners of a visiting card:
Assuming the background is white, you can simply try this method.
Processing which needs to be done and optimization for real-time detection:
Yes, you need OpenCV.
Here is an example of real-time marker detection on Google Glass using OpenCV.
In this example, the image shown on the tablet is delayed (Bluetooth); the Google Glass preview is much faster than the tablet's, but it still has some lag.
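Since your mark uses the same module ratios as a QR finder pattern, a row-scan ratio check is a reasonable starting point. Below is a minimal sketch of that idea in C#: it assumes an 8-bit grayscale row, a fixed threshold of 128, and the QR-style 1:1:3:1:1 dark/light ratio. The class and method names are placeholders I made up, and a real detector would scan many rows and columns and cross-check the candidates.

using System;
using System.Collections.Generic;

static class FinderPatternScan  // hypothetical helper, not a library class
{
    // Scans one row of 8-bit luminance values for dark/light runs in
    // the QR-style 1:1:3:1:1 ratio. Returns the x-coordinate of the
    // pattern centre, or -1 if no match is found in this row.
    public static int ScanRow(byte[] row, int threshold = 128)
    {
        // Collapse the row into alternating run lengths and start positions.
        var runs = new List<int>();
        var starts = new List<int>();
        bool firstDark = row[0] < threshold;
        bool dark = firstDark;
        int len = 1;
        for (int x = 1; x < row.Length; x++)
        {
            if ((row[x] < threshold) == dark) { len++; continue; }
            runs.Add(len);
            starts.Add(x - len);
            dark = !dark;
            len = 1;
        }
        runs.Add(len);
        starts.Add(row.Length - len);

        // Slide a five-run window over the row; it must start on a dark run.
        for (int i = 0; i + 5 <= runs.Count; i++)
        {
            bool startsDark = (i % 2 == 0) == firstDark;
            if (startsDark && Matches11311(runs, i))
                return starts[i + 2] + runs[i + 2] / 2;  // centre of the wide run
        }
        return -1;
    }

    // Checks runs[i..i+4] against 1:1:3:1:1 with 50% per-module tolerance.
    static bool Matches11311(List<int> runs, int i)
    {
        int total = runs[i] + runs[i + 1] + runs[i + 2] + runs[i + 3] + runs[i + 4];
        if (total < 7) return false;  // the whole pattern is 7 modules wide
        double module = total / 7.0;
        double tol = module / 2.0;
        return Math.Abs(runs[i] - module) < tol
            && Math.Abs(runs[i + 1] - module) < tol
            && Math.Abs(runs[i + 2] - 3 * module) < 3 * tol
            && Math.Abs(runs[i + 3] - module) < tol
            && Math.Abs(runs[i + 4] - module) < tol;
    }
}

This is essentially the same run-length test QR readers use for their finder patterns; cross-checking vertically at each horizontal candidate filters out most false positives.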
I'm trying to do a simple AR scene with an NFT image that I've created with genTextData. The result works fairly well in the Unity editor, but once compiled and run on an Android device, the camera resolution is very bad and there's no focus at all.
My marker is rather small (a 3 cm picture), and the camera is so blurred that the AR cannot identify the marker from far away. I have to put the phone right in front of it (still very blurred), and it will show my object, but with a lot of flickering and jittering.
I tried playing with the filter fields (sample rate/cutoff...); it helped a little with the flickering of the object, but it would never display it from far away. I always have to put my phone right in front of the marker. The result I want is to detect the small marker (with sharp resolution and/or good focus) from a fair distance away, roughly the distance from your computer screen to your eyes.
The problem could be camera resolution and focus, or it could be something else, but I'm pretty sure the AR cannot identify the marker points because of the blurriness.
Any ideas or solutions for this problem?
You can have a look here:
http://augmentmy.world/augmented-reality-unity-games-artoolkit-video-resolution-autofocus
I compiled the Java part of the Unity plugin and set it to use the highest resolution your phone supports. The auto-focus mode is also activated.
Tell me if that helps.
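The fix in the linked post lives in the plugin's Java side, but as a rough illustration of the same idea (requesting a higher preview resolution from the device camera), plain Unity's WebCamTexture works like this. This is a generic sketch with a made-up script name, not the ARToolKit plugin itself:

using UnityEngine;

public class HighResCameraFeed : MonoBehaviour  // hypothetical example script
{
    WebCamTexture cam;

    void Start()
    {
        // Request 1920x1080 at 30 fps; the device picks the closest
        // supported mode, so the actual size may differ.
        cam = new WebCamTexture(WebCamTexture.devices[0].name, 1920, 1080, 30);
        cam.Play();
        Debug.Log("Actual preview size: " + cam.width + "x" + cam.height);
    }
}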
I know about Google Cardboard, and I want to make, say, a campus tour of a company, handled by head movements in Google Cardboard with Unity. How can I build the campus buildings from the real images that I took with my camera? I am new to Unity but quite familiar with Android coding. Could you link any Unity tutorial?
And the second thing: is Unity a good approach for this idea, or should it be done directly in Android?
I want to make something like this YouTube link. Please suggest.
In theory you could position a great many images in 3D space and get a scene with a very modern-art-like look, but it's much easier to make a spherical image and drag it onto a sphere.
You can use a digital camera and Hugin to stitch photos manually for better quality, or just take any Android phone with a gyroscope and a semi-decent camera and shoot a photosphere.
After getting a spherical image, just drag it onto a sphere with reversed normals and put the VR camera at its center. Voilà, you've got a VR app.
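Here is a minimal sketch of the "reversed normals" step done from a script (you can also invert the sphere in a 3D modelling tool instead). The script name is a placeholder; attach it to the sphere that carries your panorama:

using UnityEngine;

[RequireComponent(typeof(MeshFilter))]
public class InsideOutSphere : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().mesh;

        // Point every normal inward so the texture faces the centre.
        Vector3[] normals = mesh.normals;
        for (int i = 0; i < normals.Length; i++)
            normals[i] = -normals[i];
        mesh.normals = normals;

        // Reverse triangle winding so the inside faces get rendered.
        int[] tris = mesh.triangles;
        for (int i = 0; i < tris.Length; i += 3)
        {
            int tmp = tris[i];
            tris[i] = tris[i + 1];
            tris[i + 1] = tmp;
        }
        mesh.triangles = tris;
    }
}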
A good idea would be to allow the user some interaction, such as moving between scenes. Usually there is a point you look at, and you either wait a bit or press the Cardboard button.
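For the look-and-wait interaction, a gaze timer is enough. A hedged sketch, assuming hotspots are colliders tagged "Hotspot" (a tag invented for this example) and each hotspot object is named after the scene it leads to; attach it to the Cardboard camera:

using UnityEngine;
using UnityEngine.SceneManagement;

public class GazeSceneSwitch : MonoBehaviour
{
    public float dwellTime = 2f;  // seconds to keep looking before switching
    float gazeTimer;

    void Update()
    {
        // Cast a ray straight ahead from the camera.
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit)
            && hit.collider.CompareTag("Hotspot"))
        {
            gazeTimer += Time.deltaTime;
            if (gazeTimer >= dwellTime)
                SceneManager.LoadScene(hit.collider.name);
        }
        else
        {
            gazeTimer = 0f;  // looked away: reset the dwell timer
        }
    }
}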
I want to develop an augmented reality application for Android that is capable of using markers to generate 3D objects, and these 3D objects should be interactive via the phone's touch input.
I have browsed through the available SDKs like Vuforia, Junaio, or Layar Player and found that they all support:
Marker detection with 3D virtual image overlay
Virtual buttons that become active when you occlude them (Vuforia)
Interactive video playback.
However, what I am looking for is:
A virtual object in AR that can be made interactive using the phone's touch input.
I am pretty sure this is possible, as there are virtual video overlays that start a video when clicked/tapped (similar to an interactive virtual element).
Q. Could someone suggest a library/toolkit best suited for the functionality I'm looking for?
or
Q. Is there something I apparently missed during my search of the aforementioned toolkits that already supports the functionality I want?
According to your description, what you need is supported by Vuforia, and there is a sample for pure Android (no Unity) as well.
You need to look at the Dominos sample, which shows how to drag an OpenGL domino object across the screen.
Look here for a quick description:
https://developer.vuforia.com/forum/faq/android-how-do-i-project-screen-touch-target
If you run into problems while trying to implement it yourself, you can search the Vuforia forums for answers to common problems others have faced with this. But basically, it works well in their sample.
Well, this is for Unity 5.x.
First, go through Vuforia's documentation to learn more about Image Targets and the ARCamera.
Import your 3D models into the scene so that all interactive objects are children of the Image Target.
Read the touch on the mobile phone (I used Android for my project):
if(Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
Convert the touch point into a ray from the screen into the 3D world:
Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
Create a plane in the scene (for the ray to hit):
Plane plane = new Plane(Vector3.up, Vector3.zero);
If the ray hits the plane, get the x, y, z position; the value of pos will hold the world position:
float distance;
if (plane.Raycast(ray, out distance)) {
    Vector3 pos = ray.GetPoint(distance);
}
Please modify the code according to your needs; this is a very basic example.
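Putting the steps above together, here is a minimal self-contained sketch. The class name and the target field are placeholders; assign the object to move (e.g. a child of your Image Target) in the Inspector:

using UnityEngine;

public class TouchMoveObject : MonoBehaviour
{
    public Transform target;  // object to move, e.g. a child of the Image Target

    // Ground plane at y = 0 for the touch ray to hit.
    Plane plane = new Plane(Vector3.up, Vector3.zero);

    void Update()
    {
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            // Turn the screen touch into a ray into the 3D world.
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);

            float distance;
            if (plane.Raycast(ray, out distance))
                target.position = ray.GetPoint(distance);  // world-space hit point
        }
    }
}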