Augmented Reality on Android: Rendering stuff to the ground

As the rendering engine I want to use libGDX, and as the AR engine I think Vuforia could fit.
I want to place an object (2D or 3D) on the ground. What I want to achieve: when walking through the streets, I want to see augmented objects lying there, or some textual information. So how can I
measure the current camera position (based on the sensors)
get the real-world y compared to the current ground
get the angle / current camera position and pose
combine all of the gathered data into something I can use in libGDX to render the object correctly over the camera preview?
I want to use GPS coordinates or natural markers for the virtual objects, so any other kind of marker cannot be used. I have already found a sample that uses the device sensors, so that I can turn around and see the augmented objects. However, all of these "fly"; I want to place them on the floor and walk past them instead of "taking them with me".
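A minimal sketch of the sensor half of this, assuming the rotation-vector sensor and libGDX's PerspectiveCamera (the class name and the 1.6 m eye height are placeholders, not from the question): the rotation matrix maps device axes into an east/north/up world frame, so objects placed at z = 0 stay on the ground while you walk. GPS-derived east/north offsets would then move camera.position.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.PerspectiveCamera;

public class GroundCameraController implements SensorEventListener {
    private final PerspectiveCamera camera = new PerspectiveCamera(
            67f, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    private final float[] r = new float[9]; // device-to-world rotation

    public GroundCameraController() {
        // World frame: x = east, y = north, z = up; the ground plane is z = 0.
        camera.position.set(0f, 0f, 1.6f); // assumed eye height in metres
        camera.near = 0.1f;
        camera.far = 500f;
    }

    @Override public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ROTATION_VECTOR) return;
        SensorManager.getRotationMatrixFromVector(r, event.values);
        // Column i of r is device axis i expressed in world coordinates.
        // The back camera looks along the device's -z axis.
        camera.direction.set(-r[2], -r[5], -r[8]);
        camera.up.set(r[1], r[4], r[7]);
        camera.update();
    }

    @Override public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}
```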

Related

Is it possible to accurately place a circle relative to a real-life object using ARCore?

Using ARCore and/or Sceneform, would it be possible to place circles accurately on a real-life object? Let's say I had a real-world table and a known set of coordinates where small (10 mm) AR "stickers" need to be placed. They could be on the top/side/underside of the table and need to be placed accurately to the millimetre. I am currently solving this problem with a number of fixed, mounted lasers. Would this be possible to accomplish using ARCore on a mobile device, either a phone or AR/smart glasses? Accuracy is critical, so how accurate could such a solution using ARCore be?
I think you may find that current AR on mobile devices would struggle to meet your requirements.
Partly because, in my experience, there is a certain amount of drift or movement with anchors, especially when you move the view quickly or leave and come back to a view. Given the technologies available to create and locate anchors, i.e. motion sensors, camera, etc., it is natural that this will not give consistent millimetre accuracy.
Possibly a bigger issue for you at this time is occlusion: currently ARCore does not support it. This means that if you place your renderable behind a real object, it will still be drawn in front of, or on top of, that object as you move away or zoom out.
If you use multiple markers or AR "stickers", your solution will be fairly precise, since the locations of your circles will be calculated relative to those markers. Image- or marker-based tracking is quite impressive in any Augmented Reality SDK. However, markers as small as 10 mm can cause problems for detection. I would recommend creating these markers using an AugmentedImageDatabase, where you can specify the real-world size of each image, which helps with tracking; see the sketch below. Then you can check whether ARCore can detect your images on the table. ARCore is not the fastest SDK when it comes to detecting images, but it can continue tracking even when the markers are not in the frame. If you need fast detection of markers, I would recommend the Vuforia SDK.
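A minimal sketch of that setup, assuming ARCore's Java API; the asset name sticker_01.png and the class wiring are placeholders, and the 0.01 f width corresponds to the 10 mm stickers above:

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import com.google.ar.core.AugmentedImageDatabase;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

public final class StickerTracking {
    // Builds an ARCore session that tracks a 10 mm printed sticker.
    public static Session createSession(Context context) throws Exception {
        Session session = new Session(context);
        AugmentedImageDatabase database = new AugmentedImageDatabase(session);

        Bitmap sticker = BitmapFactory.decodeStream(
                context.getAssets().open("sticker_01.png")); // example asset
        // The third argument is the printed width in metres; supplying the
        // real size (10 mm here) lets ARCore estimate the pose much sooner.
        database.addImage("sticker_01", sticker, 0.01f);

        Config config = new Config(session);
        config.setAugmentedImageDatabase(database);
        session.configure(config);
        return session;
    }
}
```

Per frame, frame.getUpdatedTrackables(AugmentedImage.class) then yields the tracked images, and each image's centre pose can be used to anchor the circles relative to the sticker.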

How can I determine which model contains x, y coordinates in ARCore

I'm a noob in Android ARCore. I took the sample (hello_ar_java) from the ARCore SDK and ran it. This sample shows a model when the user taps on the screen (on a detected plane).
But I want to remove the model when the user taps on it. How can I do that?
Does ARCore have mechanisms for this, or do I need to transform OpenGL coordinates to screen x,y?
Thanks.
You have to do it with OpenGL. Take a look at "Intersection test for ray picking in OpenGL ES for Android" for how to test screen coordinates against 3D objects; a sketch of the idea follows.
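A sketch of such a ray-picking test, assuming you already have the column-major view and projection matrices you pass to your shaders, and that each model can be approximated by a bounding sphere (the sphere parameters are stand-ins for your model's bounds):

```java
import android.opengl.Matrix;

public final class RayPicking {
    // Unprojects a screen tap into a world-space ray and tests it against a
    // bounding sphere around a model. Returns true if the tap hits the model.
    public static boolean tapHitsModel(float tapX, float tapY,
                                       int screenW, int screenH,
                                       float[] viewMatrix, float[] projMatrix,
                                       float[] sphereCenter, float sphereRadius) {
        float[] vp = new float[16];
        float[] invVP = new float[16];
        Matrix.multiplyMM(vp, 0, projMatrix, 0, viewMatrix, 0);
        Matrix.invertM(invVP, 0, vp, 0);

        // Tap in normalised device coordinates (y is flipped on Android).
        float ndcX = 2f * tapX / screenW - 1f;
        float ndcY = 1f - 2f * tapY / screenH;

        float[] near = unproject(invVP, ndcX, ndcY, -1f);
        float[] far = unproject(invVP, ndcX, ndcY, 1f);

        // Normalised ray direction from the near plane towards the far plane.
        float dx = far[0] - near[0], dy = far[1] - near[1], dz = far[2] - near[2];
        float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
        dx /= len; dy /= len; dz /= len;

        // Closest-point test: distance from the sphere centre to the ray.
        float ox = sphereCenter[0] - near[0];
        float oy = sphereCenter[1] - near[1];
        float oz = sphereCenter[2] - near[2];
        float t = ox * dx + oy * dy + oz * dz;          // projection onto ray
        float cx = ox - t * dx, cy = oy - t * dy, cz = oz - t * dz;
        return t > 0 && cx * cx + cy * cy + cz * cz <= sphereRadius * sphereRadius;
    }

    private static float[] unproject(float[] invVP, float x, float y, float z) {
        float[] in = { x, y, z, 1f };
        float[] out = new float[4];
        Matrix.multiplyMV(out, 0, invVP, 0, in, 0);
        return new float[] { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
    }
}
```

In hello_ar_java, the sphere centre would come from the anchor's pose translation; if the test returns true, detach that anchor and stop drawing the model.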

Extended Tracking Vuforia Unity3d

I am trying to implement extended tracking. Everything works fine until the user moves the device quickly. Basically, I track an image and see a 3D model. It remains in place in the real world if I move my camera around slowly, but if I move my device quickly the 3D model sticks to the view of my screen, which is not right. I guess it's a bug in Vuforia.
Thanks,
Vanshika
It is not a bug. Extended tracking uses the visual information in the camera images from frame to frame to try to keep track of where the camera is relative to the trackable; there is no other way, since a camera has no position-tracking hardware. If the device is moved slowly, successive camera frames will partly contain 'the same' things, and the tracker can estimate its own movement from that information (although there will be some drift). When the camera moves too fast, no information is shared between frames that the tracker could use to determine its viewpoint change. I believe the 3D model will only 'stick' to your screen if you do not disable it, or its renderer, when tracking is lost, i.e. in an OnTrackingLost() type of method, as found in the DefaultTrackableEventHandler.

AR image tracker at distant?

I am working on an Augmented Reality app that requires an image tracker placed at a distance. A target would be a billboard or a scoreboard in a basketball game. I have tried Qualcomm's Vuforia SDK; it seems it only works when the marker is placed within 3 feet of the camera. When you move further away, I think it loses detail and the AR engine is no longer able to recognize the tracker.
In theory, if the marker is large and bright enough, with clearly defined details and border markings for tracking purposes, should it not work?
Also, is there any way for an AR app to recognize ANY flat surface, like a table or a hardwood floor with a variety of colors and textures, as long as it's a flat surface? Typical applications would be a virtual keyboard or a chess board.
thanks,
Joe
Marker-based AR is about recognizing markers, not shapes. The AR engine's input is the image from the camera, and there is no way to determine shape from that alone, so the answer to your second question is: no.
PS: In my case (iOS), the default marker is detected from about 1.5 m and can be tracked to about 4 m. I think the resolution of the camera is an important factor and can affect tracking efficiency.
The experience we have is that a marker of about 20x20 cm is readable by the Vuforia SDK at up to about 5 metres. That seems to be the very limit; a back-of-the-envelope check of why is sketched below.
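A rough way to see why distance kills detection is to estimate how many camera pixels the marker covers. The field of view and resolution below are assumed typical values, not measurements:

```java
// Estimates how many pixels a square marker covers in the camera image.
public final class MarkerFootprint {
    public static void main(String[] args) {
        double markerSize = 0.20;  // marker edge length in metres (20 cm)
        double distance = 5.0;     // camera-to-marker distance in metres
        double hfovDeg = 60.0;     // assumed horizontal field of view
        int imageWidth = 1920;     // assumed horizontal resolution in pixels

        // Angle subtended by the marker, then its share of the image width
        // (a linear angle-to-pixel mapping is fine as an approximation).
        double angleDeg = Math.toDegrees(2 * Math.atan(markerSize / 2 / distance));
        double pixels = imageWidth * angleDeg / hfovDeg;
        System.out.printf("Marker covers ~%.0f px across%n", pixels);
    }
}
```

That comes out to roughly 73 pixels across, which leaves the detector very few features to match and is consistent with 5 m being the practical limit.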

Augmented Reality - Using only GPS

I'm planning on doing an AR application that will just use GPS technology to get a location, and then use the compass/gyroscope for tracking 6DOF viewfinder movements. It's a personal project for my own development, but I'm looking for starting places, as it's a new field to me, so this might be a slightly open-ended question with more than one right answer. By using GPS, I am hoping to simplify the development of my first AR application at the cost of its accuracy.
The idea for this AR application is not to use any vision processing (relying on GPS only), and to display 3D models on the screen at roughly correct distances (up to a point) from where the user is standing. It sounds simple, given that games work in a 3D world with a viewpoint and the locations of faces/objects/models to draw. My target platforms will be mobile devices and tablets, potentially running WM6, Windows Phone 7, or Android.
Most of the applications I have seen use markers and AR-ToolKit or ARTag, and those that use GPS tend to just display a point of interest or a flat box on the screen to state that you're at a desired location.
I've done some very limited work with 3D graphics programming, but are there any libraries that you think could get me started, rather than building everything from the bottom up? Ignoring the low accuracy of GPS (with regard to AR), I will have a defined point in 3D space (constantly moving due to the GPS fix) and then a defined point at which to render a 3D model in the same 3D space.
I've seen some examples of similar applications, but nothing I can build on, so can anyone suggest places to start or libraries that might be suitable for my project?
Sensor-based AR is doable from scratch without using any libraries. All you're doing is estimating your camera's pose in 6DOF from the sensors and GPS, and then performing a perspective projection which projects a known 3D point onto your camera's focal plane. You define your camera matrix from the sensors and GPS, and perform the projection on each new camera frame. If you get this up and running, that's plenty sufficient to begin projecting billboards, images, etc. into the camera frame.
Once you have a pin-hole camera model working, you can try to compensate for your camera's wide-angle lens, for lens distortion, and so on; a sketch of the basic projection follows.
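A minimal sketch of that pin-hole projection, assuming a world-to-camera rotation obtained from the orientation sensors and a focal length expressed in pixels (all names here are illustrative):

```java
// Projects a world point (east/north/up metres, relative to the camera
// position) onto the image plane of an ideal pin-hole camera.
public final class PinholeProjection {
    // r: 3x3 row-major world-to-camera rotation matrix (from the sensors),
    //    mapping into camera axes x = right, y = down, z = forward.
    // p: point relative to the camera position, in metres.
    // fPx: focal length in pixels (assumed or calibrated).
    // cx, cy: principal point, usually the image centre.
    // Returns {u, v} pixel coordinates, or null if the point is behind the camera.
    public static float[] project(float[] r, float[] p,
                                  float fPx, float cx, float cy) {
        float xc = r[0] * p[0] + r[1] * p[1] + r[2] * p[2];
        float yc = r[3] * p[0] + r[4] * p[1] + r[5] * p[2];
        float zc = r[6] * p[0] + r[7] * p[1] + r[8] * p[2];
        if (zc <= 0f) return null; // behind the camera, don't draw

        return new float[] { fPx * xc / zc + cx, fPx * yc / zc + cy };
    }
}
```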
For calculating relative distances there's the haversine formula.
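For reference, a straightforward implementation of the haversine distance (the Earth is assumed spherical with a radius of 6,371 km); combined with the bearing between the two fixes, it yields the east/north offsets that feed the projection above:

```java
// Great-circle distance between two GPS fixes via the haversine formula.
public final class Haversine {
    private static final double EARTH_RADIUS_M = 6_371_000.0;

    public static double distanceMetres(double lat1, double lon1,
                                        double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return EARTH_RADIUS_M * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
    }
}
```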
Moving to 3D models will probably be the most difficult part. It can be tricky to introduce camera frames into OpenGL on mobile devices. I don't have any experience with Windows Mobile or Android, so I can't help there.
In any case have fun, it's really nice to see your virtual elements in the world for the first time!
