I am working on an augmented reality app that requires an image tracker placed at a distance. A target would be a billboard or a scoreboard at a basketball game. I have tried Qualcomm's Vuforia SDK, but it seems to work only when the marker is within about 3 feet of the camera. When you move further away, I think it loses detail and the AR engine is no longer able to recognize the tracker.
In theory, if the marker is large and bright enough, with clearly defined details and border markings for tracking purposes, shouldn't it work?
Also, is there any way for an AR app to recognize ANY flat surface, like a table or hardwood floor with a variety of colors and textures, as long as it's flat? Typical applications would be a virtual keyboard or chess board.
Thanks,
Joe
AR is about recognizing markers, not shapes. The AR engine's input is the image from the camera, and there is no way to determine shape from that alone, so the answer to your second question is: no.
PS: In my case (iOS), the default marker is detected from about 1.5 m and can be tracked to about 4 m. I think the camera's resolution is an important factor and can affect tracking efficiency.
Our experience is that a marker of about 20x20 cm is readable by the Vuforia SDK at a distance of about 5 meters. That seems to be the very limit.
Using ARCore and/or Sceneform, would it be possible to place circles accurately on a real-life object? Let's say I had a real-world table and a known set of coordinates where small (10 mm) AR "stickers" need to be placed. They could be on the top, side, or underside of the table and need to be placed accurately to the millimetre. I am currently solving this problem with a number of fixed-mounted lasers. Would this be possible to accomplish using ARCore on a mobile device, either a phone or AR/smart glasses? Accuracy is critical, so how accurate could a solution using ARCore be?
I think you may find that current AR on mobile devices would struggle to meet your requirements.
Partly because, in my experience, there is a certain amount of drift or movement with anchors, especially when you move the view quickly or leave and come back to a view. Given the technologies used to create and locate anchors (motion sensors, camera, etc.), it is natural that this will not give consistent millimetre accuracy.
Possibly a bigger issue for you at this time is occlusion: currently, ARCore does not support it. This means that if you place your renderable behind a real object, it will still be drawn in front of, or on top of, that object as you move away or zoom out.
If you use multiple markers or AR "stickers", your solution will be fairly precise, since the locations of your circles will be calculated relative to those markers. Image- or marker-based tracking is quite impressive in any augmented reality SDK. However, markers as small as 10 mm can cause detection problems. I would recommend creating these markers using an AugmentedImageDatabase, which lets you specify the real-world size of the images; that helps with tracking. Then you can check whether ARCore can detect your images on the table. ARCore is not the fastest SDK at detecting images, but it can continue tracking even when markers are not in the frame. If you need fast marker detection, I would recommend the Vuforia SDK.
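For illustration, a minimal sketch (Java, against the ARCore API) of registering one sticker image together with its physical size. The asset name "sticker_01.png" and the 10 mm width are assumptions taken from the question, not part of any existing project:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import java.io.IOException;

import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

public class StickerTracking {
    // Registers one sticker image and its real-world width with ARCore.
    // The asset name and 10 mm width are hypothetical, from the question.
    public static void configureSession(Context context, Session session)
            throws IOException {
        Bitmap sticker = BitmapFactory.decodeStream(
                context.getAssets().open("sticker_01.png"));

        AugmentedImageDatabase db = new AugmentedImageDatabase(session);
        // Supplying the physical width in meters lets ARCore estimate the
        // image's pose sooner and more accurately.
        db.addImage("sticker_01", sticker, 0.010f); // 10 mm wide

        Config config = new Config(session);
        config.setAugmentedImageDatabase(db);
        session.configure(config);
    }
}
```

In the per-frame update you would then poll `frame.getUpdatedTrackables(AugmentedImage.class)` and check each image's tracking state before placing content on it.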
I know about Google Cardboard, and I want to make, say, a campus tour of a company that is navigated by head movements in Google Cardboard with Unity. How do I build the campus buildings from the real images I took with my camera? I am new to Unity but quite familiar with Android coding. Could you link a Unity tutorial?
And a second thing: is Unity a good approach for this idea, or should it be done with Android directly?
I want to make something like this YouTube link. Please suggest.
In theory you could position a great many images in 3D space and have a scene with a very modern-art-like look, but it's much easier to make a spherical image and drag it onto a sphere.
You can use a digital camera and Hugin to stitch photos manually for better quality, or just take any Android phone with a gyroscope and a semi-good camera and shoot a photosphere.
After getting a spherical image, just drag it onto a sphere with reversed normals and put the VR camera at its center; voilà, you've got a VR app.
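In Unity this is usually done with a small script or a pre-inverted sphere mesh; the core operation is just negating every normal and reversing each triangle's winding so the inside of the sphere is what gets rendered. A minimal, engine-agnostic sketch of that operation (Java, assuming the common packed-array mesh layout):

```java
public class InsideOutSphere {
    // Flip normals so they point inward.
    // Assumes normals are packed [x0, y0, z0, x1, y1, z1, ...].
    public static void flipNormals(float[] normals) {
        for (int i = 0; i < normals.length; i++) {
            normals[i] = -normals[i];
        }
    }

    // Reverse the winding of each triangle so that back-face culling
    // keeps the inside of the sphere visible. Indices are packed three
    // per triangle.
    public static void reverseWinding(int[] triangles) {
        for (int i = 0; i + 2 < triangles.length; i += 3) {
            int tmp = triangles[i];
            triangles[i] = triangles[i + 2];
            triangles[i + 2] = tmp;
        }
    }
}
```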
A good idea would be to allow the user some interaction, such as moving between scenes. Usually there is a point you look at and then either wait a bit or press the Cardboard button.
I'm new to augmented reality, but what is meant by the term "marker"? I have done a web search, and it says a marker is a place where content will be shown on the mobile device, but I'm still not clear. Here is what I have found so far:
Augmented reality is hidden content, most commonly hidden behind marker images, that can be included in printed and film media, as long as the marker is displayed for a suitable length of time, in a steady position, for an application to identify and analyze it. Depending on the content, the marker may have to remain visible.
There are a few types of markers in Vuforia: ones you define yourself after uploading them to the online CMS, ones you can create at run time, and preset markers that just carry information around their edges. They are where your content will appear. You can see a video here where my business card is the marker and the 3D content is rendered on top: http://youtu.be/MvlHXKOonjI
When the app sees the marker, it works out the marker's pose (position and rotation) and applies that to any 3D content you want to load; that way, as you move around the marker, the content stays in the same position relative to the marker.
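Conceptually, the tracker hands you a fresh rotation and translation for the marker in camera space every frame, and you transform your content by that same pose. A rough sketch of the idea in plain Java, where the row-major 3x3 rotation matrix and the translation vector are assumed to come from the tracking SDK's pose estimate:

```java
public class MarkerPose {
    // Transforms a point from marker space into camera space:
    //   p_camera = R * p_marker + t
    // where R (row-major 3x3) and t are the tracker's pose estimate
    // for the current frame.
    public static float[] markerToCamera(float[] r, float[] t, float[] p) {
        return new float[] {
            r[0] * p[0] + r[1] * p[1] + r[2] * p[2] + t[0],
            r[3] * p[0] + r[4] * p[1] + r[5] * p[2] + t[1],
            r[6] * p[0] + r[7] * p[1] + r[8] * p[2] + t[2]
        };
    }
}
```

Because the content's coordinates are expressed relative to the marker, re-applying the fresh pose each frame is exactly what keeps the content "stuck" to the marker as the camera moves.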
And one final heads-up: this is much easier in Unity 3D than using the iOS or Android native versions. I've done quite a lot of it, and it saves a lot of time.
A marker is simply a target for your augmented reality app. Whenever you see your marker through the AR camera, your app's models will be shown on top of it!
It will be easier to understand once you develop your first augmented reality app! :)
TL;DR: Augmented reality markers ~= Google Goggles + Activator
AR markers are real-world objects or locations that, when identified in view, trigger associated actions in your AR system, such as displaying annotations (à la Rap Genius, but for the objects around you).
Example:
Imagine you are gazing through your AR glasses. (You will see both what is displayed on the lenses, if anything, and the world around you.)
As you drive past a series of road cones closing the rightmost lane ahead, the AR software analyzes the scene and identifies several road cones in formation. This pattern is programmed to launch traffic-notification software which, in conjunction with your AR device's built-in GPS, obtains the latest information about what is going on there and what to expect.
In this way, the road cone formation is a marker: something specific and pre-defined that triggers an action, in this case obtaining and presenting information about your surroundings.
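One way to picture the "marker triggers action" idea is a simple registry mapping recognized marker IDs to handlers. Everything below (class names, the "TRAFFIC_CONES" key) is a hypothetical sketch of the concept, not any particular SDK's API:

```java
import java.util.HashMap;
import java.util.Map;

public class MarkerActions {
    // Maps a recognized marker ID to the action it should trigger.
    private final Map<String, Runnable> actions = new HashMap<>();

    public void register(String markerId, Runnable action) {
        actions.put(markerId, action);
    }

    // Called by the recognizer whenever a marker is identified in view.
    public void onMarkerRecognized(String markerId) {
        Runnable action = actions.get(markerId);
        if (action != null) {
            action.run();
        }
    }
}
```

In the road-cone scenario you would, say, call `register("TRAFFIC_CONES", () -> showTrafficInfo())` (both names hypothetical), and the recognizer would fire that handler whenever the cone formation is identified.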
I just need some guidance on how to detect a marker and produce output text. For example: given a marker with an image of a dog, when it is detected, I want the output text "DOG" to appear in a text field. Can someone help me with my idea? Also, which is more effective for this, NyARToolkit or AndAR? Thanks! :)
What you're looking for isn't augmented reality, it's object recognition. AR is chiefly concerned with presenting data overlaid on the real world, so computation is devoted each frame to determining the object's position relative to the camera. If you don't intend to use this data, AR libraries may be inefficient for your purpose. That said...
AR marker-tracking libraries usually find markers by prominent features such as corners, and can distinguish markers by binary patterns encoded inside the marker or in its border. If you're happy with having the "dog" part encoded in the border of a marker, there are libraries you can use, like Qualcomm's AR development kit. That library, as well as Metaio's Unifeye Mobile, can also do natural-feature tracking on pre-defined images. If you're happy with recognizing one specific image, or images of dogs that you have defined in advance, either of these should be fine. You might have to manipulate your dog images to get good features they can identify and track; natural objects can be problematic.
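To make the "binary pattern" idea concrete, here is a toy sketch of decoding a thresholded grid of marker cells into an ID. The grid size and bit order are assumptions; real libraries additionally try all four rotations and verify parity or checksum bits:

```java
public class MarkerDecoder {
    // Decodes a thresholded grid of marker cells (true = black cell) into
    // an ID by reading the cells row by row as bits. Real trackers also
    // test all four rotations and validate error-checking bits.
    public static long decode(boolean[][] cells) {
        long id = 0;
        for (boolean[] row : cells) {
            for (boolean cell : row) {
                id = (id << 1) | (cell ? 1 : 0);
            }
        }
        return id;
    }
}
```

Your app would then map the decoded ID to a label such as "DOG" and display it in the text field.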
General object recognition (being able to recognize a picture of any dog, not one known beforehand) is still a research topic. There are approaches, but they are mostly very computationally intensive, and most mobile solutions involve offloading the serious computation to a server. Recognition of simple outline sketches, however, is more tractable; there's a great paper called "Shape recognition and pose estimation for mobile augmented reality" (I can't find a copy online, but the IEEE link is here) that uses contours to identify objects. It is light enough to run on a mobile device (and it's pure genius).
I'm planning an AR application that will just use GPS to get a location and then use the compass/gyroscope to track 6DOF viewfinder movements. It's a personal project for my own development, but I'm looking for starting points; as this is a new field to me, it might be a slightly open-ended question with more than one right answer. By using GPS, I am hoping to simplify the development of my first AR application at the cost of accuracy.
The idea is not to use any vision processing (relying on GPS only) and to display 3D models on the screen at roughly correct distances (up to a point) from where the user is standing. It sounds simple, given that games work in a 3D world with a viewpoint and the locations of faces/objects/models to draw. My target platforms are mobile devices and tablets, potentially running WM6, Windows Phone 7, or Android.
Most of the applications I have seen use markers with ARToolKit or ARTag, and those that use GPS tend to just display a point of interest or a flat box on the screen to state that you're at a desired location.
I've done some very limited 3D graphics programming, but are there any libraries that could get me started, rather than building everything from the bottom up? Ignoring the low accuracy of GPS (as far as AR is concerned), I will have a defined point in 3D space (constantly moving as the GPS fix updates) and a defined point at which to render a 3D model in that same space.
I've seen some examples of similar applications, but nothing I can expand on, so can anyone suggest places to start or libraries that might be suitable for my project?
Sensor-based AR is doable from scratch without any libraries. All you're doing is estimating your camera's 6DOF pose from the sensors and GPS, and then performing a perspective projection that maps a known 3D point onto your camera's focal plane. You define your camera matrix from the sensor and GPS readings and perform the projection on each new camera frame. Once you get this up and running, it's plenty sufficient to begin projecting billboards, images, etc. into the camera frame.
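A minimal sketch of that pinhole projection in plain Java. The orientation matrix, the world-space offset, and the intrinsics (fx, fy, cx, cy) are assumed inputs, with the compass/gyro and GPS estimation already done:

```java
public class SensorArProjection {
    // Rotates a world-space offset (target position minus camera position,
    // e.g. derived from GPS coordinates) into camera coordinates using the
    // row-major 3x3 orientation matrix estimated from the compass/gyroscope.
    public static float[] worldToCamera(float[] r, float[] d) {
        return new float[] {
            r[0] * d[0] + r[1] * d[1] + r[2] * d[2],
            r[3] * d[0] + r[4] * d[1] + r[5] * d[2],
            r[6] * d[0] + r[7] * d[1] + r[8] * d[2]
        };
    }

    // Pinhole projection of a camera-space point (z forward) to pixels.
    // fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    // Returns null when the point is behind the camera.
    public static float[] project(float[] p, float fx, float fy,
                                  float cx, float cy) {
        if (p[2] <= 0f) {
            return null; // behind the focal plane, nothing to draw
        }
        return new float[] {
            fx * p[0] / p[2] + cx,
            fy * p[1] / p[2] + cy
        };
    }
}
```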
Once you have a pinhole camera model working, you can try to compensate for your camera's wide-angle lens, for lens distortion, and so on.
For calculating relative distances there's the haversine formula.
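For reference, a straightforward Java implementation of the haversine distance, using the usual 6,371 km mean Earth radius approximation:

```java
public class Geo {
    private static final double EARTH_RADIUS_M = 6_371_000.0; // mean radius

    // Great-circle distance in meters between two lat/lon points given in
    // degrees, computed with the haversine formula.
    public static double haversineMeters(double lat1, double lon1,
                                         double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
    }
}
```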
Moving to 3D models will probably be the most difficult part. It can be tricky to feed camera frames into OpenGL on mobile devices. I don't have any experience with Windows Mobile or Android, so I can't help there.
In any case, have fun; it's really nice to see your virtual elements in the world for the first time!