Unity ARCore Object Tracking And Shifting Issue - Android

I've been working on the following project:
I have an Android app made with Unity using the AR Foundation library + ARCore XR plugin. The goal is to scan a QR code (with ZXing) and spawn a shelf at its position.
To instantiate the shelf, I cast a ray at the center of the QR code I've scanned, and when it hits an AR trackable, the shelf is instantiated there.
This actually works, but once the AR shelf is instantiated, it tends to drift away instead of staying still, which means the AR shelf is no longer superimposed on the real shelf.
After some research, I found that it shifts because AR Foundation is no longer able to detect point clouds in the area, which means the algorithm doesn't know where it is and desperately tries to keep the AR shelf immobile.
This might be caused by: lighting, camera quality, the environment (like a person moving), the distance between the user and the AR object, or simply a tracking failure in AR Foundation.
By default there is no error message or anything similar (to my knowledge) when the AR object shifts, because it's "normal" for the algorithm to adjust its position all the time. But when no AR point clouds are detected, it goes crazy.
So I'm wondering if there's any way to detect those shifts, or even better, prevent them. Any help is appreciated. I hope my request is clear and might help other people who have the same issue. Don't hesitate to ask me any questions; I'll be glad to answer them. Have a nice day!
Technical information:
Unity version: 2020.3.27f1
ARFoundation + ARCore XR Plugin version: 4.1.9
Android version: 11
Device model: Samsung Galaxy Tab A7 SM-T500
Shelf dimensions: height 1.85 m, width 0.80 m, depth 0.60 m
Average distance to the AR shelf: ~0.3 m

I'm assuming you are using AR Foundation's image tracking to recognize the QR code. Have you tried adding an anchor to your hologram?
I've successfully done similar image recognition without the problems you mention.
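For reference, here is a minimal sketch of what that could look like with the AR Foundation 4.x API, assuming an AR Session Origin carrying an ARRaycastManager and an ARAnchorManager, and a hypothetical shelfPrefab field. Attaching the anchor to the plane the raycast hit ties the shelf to that plane's tracking instead of leaving it floating in raw world space:

```csharp
// Sketch only: spawn the shelf at the QR code's screen position and pin it
// to an anchor attached to the plane the raycast hit (AR Foundation 4.x).
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class ShelfSpawner : MonoBehaviour
{
    public ARRaycastManager raycastManager; // on the AR Session Origin
    public ARAnchorManager anchorManager;   // on the AR Session Origin
    public GameObject shelfPrefab;          // hypothetical prefab

    static readonly List<ARRaycastHit> s_Hits = new List<ARRaycastHit>();

    // screenPoint = the QR code center you already get from ZXing.
    public void SpawnShelf(Vector2 screenPoint)
    {
        if (!raycastManager.Raycast(screenPoint, s_Hits, TrackableType.PlaneWithinPolygon))
            return;

        ARRaycastHit hit = s_Hits[0];
        var plane = hit.trackable as ARPlane;

        // Attach the anchor to the plane so the shelf follows that plane's
        // tracking updates rather than sitting at a fixed world position.
        ARAnchor anchor = plane != null ? anchorManager.AttachAnchor(plane, hit.pose) : null;

        GameObject shelf = Instantiate(shelfPrefab, hit.pose.position, hit.pose.rotation);
        if (anchor != null)
            shelf.transform.SetParent(anchor.transform, worldPositionStays: true);
    }
}
```

And since the question also asks about detecting the shifts: AR Foundation exposes the session's tracking state, so a small watcher (again only a sketch) can at least tell you when ARCore has lost tracking (the moment drift becomes likely) and why:

```csharp
// Sketch only: log when the session stops tracking and why, so you can
// e.g. hide the shelf or warn the user instead of letting it drift.
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class TrackingWatcher : MonoBehaviour
{
    void Update()
    {
        if (ARSession.state == ARSessionState.SessionTracking)
            return;

        // Reasons include InsufficientFeatures, InsufficientLight, and
        // ExcessiveMotion, which match the causes listed in the question.
        Debug.LogWarning($"Tracking degraded: {ARSession.notTrackingReason}");
    }
}
```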

Related

ARCore: Emulator and Unity

I'd like to test ARCore using Unity/C# before buying an Android device. Can I use Unity with an ARCore emulator to put together an AR app without having a device, just using a camera on my PC? And does the camera require a specific spec?
I read that Android Studio Beta now supports ARCore in the Emulator to test an app in a virtual environment right from the desktop, but I can't tell if that update is integrated into Unity.
https://developers.googleblog.com/2018/02/announcing-arcore-10-and-new-updates-to.html
Any tips on how people may be interacting with the app using a PC camera would be really helpful.
Thank you for your help!
Sergio
ARCore uses a combination of the device's IMU and camera. The camera tracks feature points in space and uses clusters of those points to create planes for your models. The IMU generates 3D sensor data, which is passed to ARCore to track the device's movements.
Judging from those requirements, a webcam just isn't going to work, since it lacks the IMU that ARCore needs. The camera alone won't be able to track the device's position, which would lead to objects drifting all over the place (if you managed to get it working at all). Google's documentation and Reddit threads also indicate that it just won't work.
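On a related note, once you do have hardware, AR Foundation can tell you at runtime whether the device supports ARCore before you start a session. A minimal sketch, assuming a scene with an ARSession component wired into the field below:

```csharp
// Sketch only: check ARCore availability before enabling the session.
using System.Collections;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class AvailabilityCheck : MonoBehaviour
{
    public ARSession session; // the scene's ARSession, assigned in the Inspector

    IEnumerator Start()
    {
        // Asynchronously queries the platform for AR support.
        yield return ARSession.CheckAvailability();

        if (ARSession.state == ARSessionState.Unsupported)
            Debug.LogWarning("ARCore is not supported on this device.");
        else
            session.enabled = true; // safe to start the AR session
    }
}
```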

Facial expression identification like Snapchat

I am working on an app that detects the user's eye blinks. I have been searching the web for 2 days but still don't have a clear vision of how this can be done.
As far as I know, the system supports face detection, i.e. detecting whether there is a face in a picture and locating it.
But this works only with still images and detects only faces, which is not what I need. I need to open a camera activity, detect the user's face directly, locate the eyes and other facial parts, and wait until the user blinks, like when you long-press the screen on Snapchat.
I have seen a lot about OpenCV but am still not sure what it is, how to use it, or whether it suits my goals.
Note: Snapchat has not released an API for the technology used, and it doesn't even let anyone talk to the engineers behind it.
I know that OpenCV can do image processing on the device's camera feed (as opposed to only being able to process still images).
Here is an introductory tutorial on eye detection using OpenCV:
http://romanhosek.cz/android-eye-detection-and-tracking-with-opencv/
If you can't find eye-blink detection tutorials in a Google search, I think you'll have to write the eye-blink detection code on your own, but OpenCV will be a helpful tool in doing so. There are lots of beginner OpenCV tutorials to help you get started.
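To give a flavor of the approach, here is a rough sketch using the OpenCvSharp C# bindings (an assumption; the linked tutorial uses OpenCV's Android Java API, but the idea is the same, and the cascade file paths below are placeholders): detect a face with a Haar cascade, search for eyes inside the face region, and treat a couple of consecutive frames with a face but no detected eyes as a probable blink:

```csharp
// Sketch only: Haar-cascade face/eye detection plus a crude blink heuristic.
using OpenCvSharp;

class BlinkDetectorSketch
{
    static void Main()
    {
        // Haar cascade files ship with OpenCV; these paths are placeholders.
        var faceCascade = new CascadeClassifier("haarcascade_frontalface_default.xml");
        var eyeCascade  = new CascadeClassifier("haarcascade_eye.xml");

        using var capture = new VideoCapture(0); // default camera
        using var frame = new Mat();

        int framesWithoutEyes = 0;
        while (capture.Read(frame) && !frame.Empty())
        {
            using var gray = frame.CvtColor(ColorConversionCodes.BGR2GRAY);
            foreach (var face in faceCascade.DetectMultiScale(gray))
            {
                // Only search for eyes inside the detected face region.
                using var roi = new Mat(gray, face);
                var eyes = eyeCascade.DetectMultiScale(roi);

                // Face visible but eyes missing for a few consecutive
                // frames -> probable blink.
                framesWithoutEyes = eyes.Length > 0 ? 0 : framesWithoutEyes + 1;
                if (framesWithoutEyes == 2)
                    System.Console.WriteLine("Blink detected (heuristic).");
            }
        }
    }
}
```

This frame-counting heuristic is crude; in practice you would tune the threshold and smooth the detections, but it shows where OpenCV fits into the problem.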

Unity3D ARToolKit 5 blurred camera on Android mobile

I'm trying to build a simple AR scene with an NFT image target that I've created with genTextData. The result works fairly well in the Unity editor, but once compiled and run on an Android device, the camera resolution is very bad and there's no focus at all.
My marker is rather small (a 3 cm picture), and the camera is so blurred that the AR engine cannot identify the marker from far away. I have to put the phone right in front of it (still very blurred), and it will show my object, but with a lot of flickering and jittering.
I tried playing with the filter fields (sample rate/cutoff...). It helped a little with the flickering of the object, but it would never display from far away; I always have to put my phone right in front of the marker. The result I want: detecting the small marker (sharp resolution and/or good focus) from a fair distance, like the distance from your computer screen to your eyes.
The problem could be camera resolution and focus, or it could be something else. But I'm pretty sure the AR engine cannot identify the marker points because of the blurriness.
Any ideas or solutions for this problem?
You can have a look here:
http://augmentmy.world/augmented-reality-unity-games-artoolkit-video-resolution-autofocus
I compiled the Java part of the Unity plugin and set it to use your phone's highest camera resolution; auto-focus mode is also activated.
Tell me if that helps.
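As a side note, if you ever migrate from ARToolKit to AR Foundation, the same autofocus fix can be requested from C# without recompiling a Java plugin. A minimal sketch (AR Foundation 4.x, attached to the AR camera object):

```csharp
// Sketch only: request continuous autofocus from the camera subsystem.
// Fixed-focus hardware may ignore the request.
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class EnableAutofocus : MonoBehaviour
{
    void Start()
    {
        var cameraManager = GetComponent<ARCameraManager>();
        cameraManager.autoFocusRequested = true;
    }
}
```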

AR image tracker at a distance?

I am working on an Augmented Reality app that requires an image tracker placed at a distance. A target would be a billboard, or a scoreboard in a basketball game. I have tried Qualcomm's Vuforia SDK; it seems to work only when the marker is placed within 3 feet of the camera. When you move further away, I think the image loses detail and the AR engine is no longer able to recognize the tracker.
In theory, if the marker is large and bright enough, with clearly defined details and border markings for tracking purposes, shouldn't it work?
Also, is there any way for an AR app to recognize ANY flat surface, like a table or a hardwood floor with a variety of colors and textures, as long as it's flat? Typical applications would be a virtual keyboard or a chessboard.
Thanks,
Joe
Marker-based AR is about recognizing markers, not shapes. The AR engine's input is the image from the camera, and there is no way to determine shape from it, so the answer to your second question is: no.
PS: In my case (iOS), the default marker is detected from about 1.5 m and can be tracked out to about 4 m. I think the camera's resolution is an important factor and can affect tracking efficiency.
In our experience, a marker of about 20×20 cm is readable by the Vuforia SDK at up to about 5 meters. That seems to be the very limit.

Vuforia & Unity 1.5 not rendering object on the scene on Android

I am very frustrated with this problem, and the Unity3D community isn't being very helpful; no one there is answering my question. I have done a ton of searching for what the problem could be, but I didn't succeed. I installed Qualcomm Vuforia 1.5 and Unity3D 1.5.0.f and use the Unity extension. I imported their demo app called vuforia-imagetargets-android-1-5-10.unitypackage, put their wood chips image target and their AR camera in the scene, and added a box object on top of the image target. Then I built it and deployed it to my Samsung Galaxy tablet. However, when I open the app on the tablet and point it at the image target, nothing shows up; the box isn't there, as if I hadn't added any objects to the scene. I just see what the device camera sees.
Has anybody experienced this before? Do you have any idea what could be wrong? No one online seems to be complaining about it.
Thank you!
Make sure you have your ImageTarget dataset activated and loaded for your ARCamera (in the Inspector, under Data Set Load Behaviour) and that the checkbox next to the Camera subheading in the Inspector is ticked.
Also ensure that the cube (or other 3D object) is a child of the ImageTarget in the hierarchy, and that your ImageTarget object has its Image Target Behaviour set to the intended image.
You may also have to point your ARCamera in such a way that your scene is not visible to it.
Okay, I got the solution. I asked on Qualcomm's forum, and one gentleman was nice enough to explain that I had missed setting up the Data Set Load Behaviour on Unity's AR camera. I had to activate the image target data set and load the corresponding image target. Once I set those two things up, built, and deployed, everything worked well.
Good luck!
