I am trying to implement extended tracking. Everything works fine until the user moves the device quickly. Basically, I track an image and see a 3D model. The model stays in place in the real world if I move my camera around slowly, but if I move the device quickly the 3D model sticks to my screen view, which is not right. I guess it's a bug in Vuforia.
Thanks,
Vanshika
It is not a bug. Extended tracking uses the visual information in the camera images from frame to frame to keep track of where the camera is relative to the trackable -- there is no other way, since a camera has no position-tracking hardware. If the device is moved slowly, successive camera frames will partly contain the same features, and the tracker can estimate the camera's own movement from that overlap (although there will be some drift). When the camera moves too fast, no information is shared from frame to frame, so the camera cannot determine its own viewpoint change. I believe the 3D model will only 'stick' to your screen if you do not disable it / its renderer when tracking is lost, i.e. in an OnTrackingLost() type of method, as found in the DefaultTrackableEventHandler.
I want to use the ARCore library to place objects at a real-world location and then view exactly the same scene in the future, even when I am not present at that location.
It sounds like you want to do either one of two things:
capture and record the view from your device display while you are using ARCore, and then replay that recording of the screen view you saw at the time
capture the entire 360-degree world you are in at the time so that you can recreate it and move around in it, with the ARCore renderables remaining at the places you put them during the capture, so you can walk up to them and around them, etc.
The first case is relatively straightforward, and ARCore provides instructions for it:
https://developers.google.com/ar/develop/java/sceneform/video-recording
The second case would require you to record the 360-degree world and also to have the ability to move around in that recording with what is usually called 'six degrees of freedom' (6DOF) -- similar to the way you can move around in a first-person video game.
Recording 6DOF video is a complex technology in itself, typically requiring multiple cameras and quite a bit of post-processing, and I am not aware of anything that integrates this with ARCore at this time. In many cases it is still at the research stage -- e.g.:
https://v-sense.scss.tcd.ie/research/6dof/
I want to apply offsets to both the translation and rotation of ARCore's virtual camera pose (displayOrientedCameraPose). Is there any way I can do that? ARCore's camera only lets me read the current pose, not edit or update it. Trying to create another virtual camera with the offsets applied doesn't work, since a frame can have only one camera.
Unlike many others, I started working with ARCore in Unity and am now moving to Android Studio. In Unity it was quite straightforward, since Unity supports rendering with multiple cameras. I am wondering if anything similar is possible with Android Studio.
At the moment ARCore allows you to use only one active ArSession, which contains only one ArCamera, i.e. the camera in your smartphone. Changing the ArCamera's pose is pointless, because 3D tracking depends heavily on it (every ArFrame stores the camera's position and rotation, as well as all of the scene's ArAnchors and feature points).
Instead of repositioning and reorienting your ArCamera, you can move/rotate the whole ArScene.
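To make the equivalence concrete: offsetting a camera pose C by an extra transform O changes view-space coordinates exactly as if you had left the camera alone and transformed the whole scene by C·O⁻¹·C⁻¹ instead. Below is a minimal sketch of that identity using plain 2D homogeneous transforms in pure Python; it is illustrative math only, not the ARCore API, and all function names are my own.

```python
import math

def pose(tx, ty, yaw):
    """3x3 homogeneous 2D rigid transform (camera-to-world style pose)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def mul(a, b):
    """3x3 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv(m):
    """Inverse of a rigid transform: transpose the rotation, rotate-negate the translation."""
    c, s, tx, ty = m[0][0], m[1][0], m[0][2], m[1][2]
    return [[c, s, -(c * tx + s * ty)],
            [-s, c, s * tx - c * ty],
            [0.0, 0.0, 1.0]]

def apply(m, p):
    """Transform a 2D point by a 3x3 homogeneous matrix."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

cam = pose(1.0, 2.0, 0.3)       # camera-to-world pose reported by tracking
offset = pose(0.5, 0.0, 0.1)    # desired extra camera offset
anchor = (4.0, 1.0)             # an anchor position in world space

# Option A (not possible with ARCore): offset the camera itself.
view_a = apply(inv(mul(cam, offset)), anchor)

# Option B (possible): keep the camera, transform the whole scene instead.
scene_fix = mul(cam, mul(inv(offset), inv(cam)))
view_b = apply(inv(cam), apply(scene_fix, anchor))

# view_a and view_b agree (up to floating-point rounding)
```

In practice you would apply the equivalent of scene_fix to your scene's root node (or to each anchor's pose) rather than touching the camera.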
Hope this helps.
Using ARCore and/or Sceneform, would it be possible to place circles accurately on a real-life object? Let's say I had a real-world table and a known set of coordinates where small (10 mm) AR "stickers" need to be placed. They could be on the top/side/underside of the table and need to be placed accurately to the millimetre. I am currently solving this problem with a number of fixed-mounted lasers. Would this be possible to accomplish using ARCore on a mobile device -- either a phone or AR/smart glasses? Accuracy is critical, so how accurate could a solution using ARCore be?
I think you may find that current AR on mobile devices would struggle to meet your requirements.
Partly because, in my experience, there is a certain amount of drift or movement with anchors, especially when you move the view quickly or leave and come back to a view. Given the technologies available to create and locate anchors, i.e. movement sensors, camera, etc., it is natural that this will not give consistent millimetre accuracy.
Possibly a bigger issue for you at this time is occlusion -- currently ARCore does not support it. This means that if you place your renderable behind an object, it will still be drawn in front of, or on top of, that object as you move away or zoom out.
If you use multiple markers or AR "stickers", your solution will be fairly precise, since the locations of your circles will be calculated relative to those markers. Image- or marker-based tracking is quite impressive in any augmented reality SDK. However, markers as small as 10 mm can cause detection problems. I would recommend creating these markers using an AugmentedImageDatabase, where you can specify the real-world size of the images, which helps tracking. Then you can check whether ARCore can detect your images on the table. ARCore is not the fastest SDK when it comes to detecting images, but it can continue tracking even when markers are not in the frame. If you need fast marker detection, I would recommend the Vuforia SDK.
Is there a way I could show what the rear camera captures full-screen, such that it creates the illusion of the screen being see-through? It doesn't need to be perfect, just convincing enough; a little lag won't make any difference.
Is it possible to create such an effect using phone camera? If yes, how can the effect be achieved? (as in what transformations to apply etc.)
(I already know how to create a simple Camera Preview)
Edit: Now I also know it has been done, http://gizmodo.com/5587749/the-samsung-galaxy-s-goes-see+through, but I still have no clue how to do this properly. I know trial and error is one way; the other is calculating what part of the scene the user would see if the phone weren't there.
I think there would be some factors involved, such as:
viewing distance,
viewing angle,
camera zoom range,
camera focus,
camera quality,
phone orientation,
camera position (where is camera located on phone) etc.
So I don't feel this problem has a simple solution; if it does, please clarify with an answer.
Thanks for help,
Shobhit,
You can use standard 3D projection math to project a portion of the rear camera image onto the display. You can manage this by assuming everything the camera sees is at a particular depth from the rear camera, and by assuming a particular viewpoint for the observer.
You can improve on this by looking for faces/eyes using the front camera. You can get a rough estimate of the viewing distance from the eye spacing, and assume a viewer position midway between the eyes. Of course, this only works for one viewer at a time (e.g., if your face tracker finds multiple faces, you can select one of them).
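As a sketch of that distance estimate, assuming a pinhole model for the front camera and an average adult interpupillary distance of about 63 mm (both assumptions; the function name and numbers are mine):

```python
def viewer_distance_m(eye_px, focal_px, ipd_m=0.063):
    """Pinhole estimate of viewer distance:
    distance = focal length (px) * real eye spacing (m) / eye spacing on the sensor (px)."""
    return focal_px * ipd_m / eye_px

# e.g. front camera with ~500 px focal length, eyes detected 90 px apart:
d = viewer_distance_m(eye_px=90.0, focal_px=500.0)
print(round(d, 2))  # 0.35
```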
Also, you can improve the illusion by calibrating the camera and screen so you can match the color and brightness from one to the other.
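A minimal one-dimensional sketch of the projection idea above, assuming a flat scene at a known depth behind the screen, a viewer centred in front of it, and the rear camera located at the screen centre -- all simplifications, with illustrative numbers:

```python
def see_through_pixel(x_screen_m, viewer_dist_m, plane_depth_m, cam_focal_px):
    """Horizontal rear-camera pixel offset (from the image centre) to display
    at a screen point x_screen_m metres from the screen centre."""
    # Ray from the eye through the screen point, intersected with the
    # assumed scene plane plane_depth_m behind the screen:
    x_world = x_screen_m * (plane_depth_m + viewer_dist_m) / viewer_dist_m
    # Pinhole projection of that plane point into the rear camera:
    return cam_focal_px * x_world / plane_depth_m

# Screen point 3 cm off-centre, viewer 35 cm away, scene assumed 2 m away,
# rear camera with an 800 px focal length:
offset_px = see_through_pixel(0.03, 0.35, 2.0, 800.0)
```

Doing this per screen pixel (or, equivalently, warping the camera image with the resulting homography) gives the see-through crop; the closer the assumed plane depth is to the real scene, the better the illusion.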
I'm planning an AR application that will just use GPS to get a location, and then use the compass/gyroscope to track 6DOF viewfinder movements. It's a personal project for my own development, but I'm looking for starting places, as it's a new field to me, so this might be a slightly open-ended question with more than one right answer. By using GPS I am hoping to simplify the development of my first AR application, at the cost of accuracy.
The idea for this AR app is not to use any vision processing (relying on GPS only), and to display 3D models on the screen at roughly correct distances (up to a point) from where the user is standing. It sounds simple, given that games work in a 3D world with a viewpoint and the locations of faces/objects/models to draw. My target platform will be mobile devices and tablets, potentially running one of WM6, Phone 7 or Android.
Most of the applications I have seen use markers with AR-ToolKit or ARTag, and those that use GPS tend to just display a point of interest or a flat box on the screen to state that you're at the desired location.
I've done some very limited work with 3D graphics programming, but are there any libraries that might get me started, rather than building everything from the bottom up? Ignoring the low accuracy of GPS (for AR purposes), I will have a defined point in 3D space (constantly moving as the GPS fix updates), and then a defined point at which to render a 3D model in the same 3D space.
I've seen some examples of similar applications, but nothing I can expand on, so can anyone suggest places to start or libraries that might be suitable for my project?
Sensor-based AR is doable from scratch, without any libraries. All you're doing is estimating your camera's pose in 6DOF using sensors and GPS, and then performing a perspective projection that projects a known 3D point onto your camera's focal plane. You build the camera matrix from the sensor and GPS readings and perform the projection on each new camera frame. Once you get this up and running, it is plenty sufficient to begin projecting billboards, images, etc. into the camera frame.
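A minimal sketch of that per-frame projection, assuming GPS positions have already been converted to a local east/north/up frame in metres, the camera is level (pitch and roll ignored), and a simple pinhole model with no lens distortion; all names and numbers are illustrative:

```python
import math

def project_point(p_world, cam_pos, yaw, f_px, cx, cy):
    """Project a world point (east, north, up, in metres) into a camera at
    cam_pos with compass heading yaw (radians, 0 = facing north)."""
    dx = p_world[0] - cam_pos[0]
    dy = p_world[1] - cam_pos[1]
    dz = p_world[2] - cam_pos[2]
    c, s = math.cos(yaw), math.sin(yaw)
    # Camera frame: x right, y up, z forward (forward = north when yaw == 0)
    x = c * dx - s * dy
    z = s * dx + c * dy
    y = dz
    if z <= 0:
        return None  # point is behind the camera
    # Pinhole projection onto the image plane (cx, cy = image centre)
    return (cx + f_px * x / z, cy - f_px * y / z)

# Example: camera at the origin facing north, target 100 m north and 10 m east,
# 800 px focal length, 1280x720 image centred at (640, 360):
u, v = project_point((10.0, 100.0, 0.0), (0.0, 0.0, 0.0), 0.0, 800.0, 640.0, 360.0)
# u, v == (720.0, 360.0)
```

Run this for every point of interest on each frame, and draw your billboard at (u, v) over the camera preview; adding pitch and roll just extends the rotation from a single yaw angle to a full 3x3 rotation built from all three sensor angles.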
Once you have a pin-hole camera model working you can try to compensate for your camera's wide-angle lens, for lens distortion etc.
For calculating relative distances there's the haversine formula.
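For reference, a minimal haversine implementation (a spherical Earth is assumed, so expect up to roughly 0.5% error versus the ellipsoid):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    """Great-circle distance in metres between two lat/lon points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

d = haversine_m(51.5074, -0.1278, 48.8566, 2.3522)  # London -> Paris, a few hundred km
```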
Moving to 3D models will probably be the most difficult part. It can be tricky to introduce camera frames into OpenGL on mobile devices. I don't have any experience on windows mobile or android, so I can't help there.
In any case have fun, it's really nice to see your virtual elements in the world for the first time!