I am using the Oculus VR toolkit to build a Unity3D application for Gear VR. I followed the given tutorial to set up my UI, but the Gaze Pointer Ring provided with the toolkit jitters when I move my head a bit faster.
I also tried adding a small cube as a child of the OVRCameraRig, offset along the z axis so it stays visible, and it shows the same behaviour. At first I thought the jitter in the Gaze Pointer Ring was due to non-optimised code, but a child of the camera object, which should move smoothly along with its parent, has the same problem.
Has anyone faced the same problem earlier?
I've been working on the following project:
I have an Android app made with Unity using the ARFoundation library + ARCore plugin. The goal is to scan a QR code (with ZXing) and spawn a shelf at its position.
To instantiate the shelf, I cast a ray at the center of the QR code I've scanned, and when it hits an AR point, the shelf is instantiated.
This actually works, but once the AR shelf is instantiated it tends to drift away instead of staying still, so the AR shelf is no longer superimposed on the real shelf.
After some research, I found that it drifts because ARFoundation can no longer build point clouds in the area, which means the algorithm doesn't know where it is and tries desperately to keep the AR shelf still.
This might be caused by lighting, camera quality, the environment (e.g. a person moving), the distance between the user and the AR object, or even a failure of ARFoundation's tracking algorithm.
By default there is no error message or anything similar (to my knowledge) when the AR object drifts, because it is "normal" for the algorithm to adjust its position continuously. But when no AR point cloud is detected, it goes haywire.
So I'm wondering if there's any way to detect those drifts, or even better, prevent them. Any help is appreciated. I hope my request is clear and might help other people with the same issue. Don't hesitate to ask me any questions, I'll be glad to answer them. Have a nice day!
Technical information:
Unity version: 2020.3.27f1
ARFoundation + ARCore XR Plugin version: 4.1.9
Android version: 11
Device model: Samsung Galaxy Tab A7 SM-T500
Shelf dimensions: h: 1.85 m, w: 0.80 m, d: 0.60 m
Average distance to the AR shelf: ~0.3 m
Assuming that you are using AR Foundation's image recognition to recognize the QR code: have you tried adding an anchor to your hologram?
I've successfully done similar image recognition without the problems you mention.
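For illustration, a minimal sketch of anchoring the instantiated shelf, assuming AR Foundation 4.x with an ARAnchorManager on the AR Session Origin; the class, field, and parameter names here are placeholders, not taken from your project:

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ShelfPlacer : MonoBehaviour
{
    public GameObject shelfPrefab;   // hypothetical prefab reference

    public void PlaceShelf(Pose hitPose)
    {
        // Instantiate the shelf at the raycast hit and anchor it, so that
        // AR Foundation keeps correcting its pose as tracking is refined.
        GameObject shelf = Instantiate(shelfPrefab, hitPose.position, hitPose.rotation);
        shelf.AddComponent<ARAnchor>();
    }
}

Anchored objects are updated individually against the tracked environment, which usually reduces (though it cannot fully eliminate) the drift you describe.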
I'm trying to build a simple AR scene with an NFT image that I created with genTextData. The result works fairly well in the Unity editor, but once compiled and run on an Android device, the camera resolution is very poor and there is no focus at all.
My marker is rather small (a 3 cm picture), and the camera is so blurred that the AR cannot identify the marker from far away. I have to hold the phone right in front of it (still very blurred) before it shows my object, and even then with a lot of flickering and jittering.
I tried playing with the filter fields (sample rate/cutoff, etc.); that helped a little with the flickering of the object, but it still never displays the object from far away; I always have to hold my phone right in front of the marker. The result I want is to detect the small marker (sharp resolution and/or good focus) from a fair distance, roughly the distance from your eyes to your computer screen.
The problem could be camera resolution and focus, or it could be something else, but I'm fairly sure the AR cannot identify the marker points because of the blurriness.
Any ideas or solutions for this problem?
You can have a look here:
http://augmentmy.world/augmented-reality-unity-games-artoolkit-video-resolution-autofocus
I compiled the Java part of the Unity plugin and set it to use the highest resolution your phone supports; the auto-focus mode is also activated.
Tell me if that helps.
I am trying to implement extended tracking. Everything works fine until the user moves the device quickly. Basically, I track an image and see a 3D model. The model stays in place in the real world if I move my camera around slowly, but if I move the device quickly the 3D model sticks to my screen view, which is not right. I guess it's a bug in Vuforia.
Thanks,
Vanshika
It is not a bug. Extended tracking uses the visual information in the camera images from frame to frame to keep track of where the camera is relative to the trackable -- there is no other way, since a camera has no position-tracking hardware. If the device moves slowly, successive camera frames partly contain 'the same' scene, and the tracker can estimate its own movement from that overlap (although there will be some drift). When the camera moves too fast, no information is shared between frames, so the tracker cannot determine its viewpoint change. The 3D model will only 'stick' to your screen if you do not disable it / its renderer when tracking of it is lost, i.e. in an OnTrackingLost() type of method, as found in the DefaultTrackableEventHandler.
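For reference, a minimal sketch of that pattern, modelled on the OnTrackingLost()/OnTrackingFound() handling in Vuforia's DefaultTrackableEventHandler sample; the class name is illustrative, and you would wire these methods up from your own trackable event handler:

using UnityEngine;

public class TrackableVisibilityHandler : MonoBehaviour
{
    // Called when tracking of the image target is lost. Hiding the renderers
    // stops the model from "sticking" to the screen while the target is gone.
    public void OnTrackingLost()
    {
        foreach (var rendererComponent in GetComponentsInChildren<Renderer>(true))
            rendererComponent.enabled = false;

        foreach (var colliderComponent in GetComponentsInChildren<Collider>(true))
            colliderComponent.enabled = false;
    }

    // Re-enable everything when the target is found again.
    public void OnTrackingFound()
    {
        foreach (var rendererComponent in GetComponentsInChildren<Renderer>(true))
            rendererComponent.enabled = true;

        foreach (var colliderComponent in GetComponentsInChildren<Collider>(true))
            colliderComponent.enabled = true;
    }
}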
So I am making a simple basketball shooting game that uses a swipe gesture (swipe from bottom to top to throw the ball).
When I use Unity Remote 4 on my Android device, everything works fine: when I swipe my finger, it throws a ball according to the swipe movement.
But when I build the app and run it on my Android device, the swipe gesture works, yet it throws the ball in a direction that doesn't match the swipe.
I ran the app on two different devices, and they throw in different directions for the same swipe movement.
How can I fix that? Do I need to set up something in the Unity player settings?
Thanks a lot :)
I am assuming that you are using a raycast to detect the swipe gesture. If so, please update your question with a code snippet; raycasts have to be used a bit carefully here.
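In the meantime, a minimal, hypothetical sketch of reading the swipe directly from touch positions, normalised by the screen size so the direction does not depend on device resolution; all names here are illustrative and not taken from your project:

using UnityEngine;

public class SwipeDetector : MonoBehaviour
{
    private Vector2 startPosition;

    void Update()
    {
        if (Input.touchCount == 0) return;

        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Began)
        {
            startPosition = touch.position;
        }
        else if (touch.phase == TouchPhase.Ended)
        {
            // Normalise the swipe by the screen dimensions so the same physical
            // gesture produces the same direction on every device.
            Vector2 delta = touch.position - startPosition;
            Vector2 swipe = new Vector2(delta.x / Screen.width, delta.y / Screen.height);
            Debug.Log("Swipe direction: " + swipe.normalized);
        }
    }
}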
I want to develop an augmented reality application for Android that uses markers to generate 3D objects, and these 3D objects should be interactive via the mobile's touch input.
I have browsed through the available SDKs like Vuforia, Junaio, or Layar Player and found that they all support:
Marker detection with 3D virtual image overlay
Virtual buttons that activate when you cover them (Vuforia).
Interactive video playback.
However, what I am looking for is:
A virtual object in AR that can be made interactive using the mobile's touch input.
I am pretty sure this is possible, as there are virtual video overlays that start a video on click/tap (similar to an interactive virtual element).
Q. Could someone suggest a library/toolkit best suited for this functionality that I'm looking for?
or
Q. Is there something that I apparently missed during my search with the aforementioned toolkits that already support the functionality I want?
According to your last description, what you need is supported by Vuforia, and there is a sample for pure Android (no Unity) as well.
You need to look at the Dominoes sample, where they show how to drag an OpenGL domino object on the screen.
Look here for a quick description:
https://developer.vuforia.com/forum/faq/android-how-do-i-project-screen-touch-target
In case you run into problems while implementing it yourself, you can search the Vuforia forums for answers to common issues others have faced with this. But basically, it works well in their sample.
Well, this is for Unity 5.x
First, go through Vuforia's documentation to learn more about Image Targets and the AR Camera.
Import your 3D models into the scene so that all interactive objects are children of the image target.
Read the touch on the mobile phone (I used Android for my project):
if(Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
Convert the touch point into a ray from the screen into the 3D world:
Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
Create a plane in the scene (for the ray to hit):
Plane plane = new Plane(Vector3.up, Vector3.zero);
If the ray hits the plane, get the x, y, z position; pos will hold the world position:
float distance;
if (plane.Raycast(ray, out distance)) {
Vector3 pos = ray.GetPoint(distance);
}
Please modify the code according to your needs; this is a very basic example.
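Put together, a self-contained version of the steps above might look like the sketch below, assuming the AR Camera is tagged MainCamera and the interactive model is a child of the Image Target; the class name is illustrative:

using UnityEngine;

public class TouchToWorldPosition : MonoBehaviour
{
    void Update()
    {
        // Step 1: read the first touch on the device.
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            // Step 2: convert the touch point into a ray into the 3D world.
            Ray ray = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);

            // Step 3: a horizontal plane through the origin for the ray to hit.
            Plane plane = new Plane(Vector3.up, Vector3.zero);

            // Step 4: if the ray hits the plane, read back the world position.
            float distance;
            if (plane.Raycast(ray, out distance))
            {
                Vector3 pos = ray.GetPoint(distance);
                Debug.Log("Touched world position: " + pos);
            }
        }
    }
}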