I've been working on an AR application using Unity3D with Vuforia.
Whenever an image marker is detected, the app should show a specific 3D model associated with that image.
It works perfectly in the Unity editor, but when I export the app as an .apk and run it on my Android device, the models don't appear, even though the markers are detected; I verified that by logging a distinct message for each marker.
I also tried loading some models from the Unity assets before, and those worked fine and appeared on my Android device.
I'm using .3ds models, each about 23 MB in size.
I thought it might be because of the size of the app, so I tried placing only one model, but that didn't work either.
Any solutions for this?
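Since the markers are detected (the log lines fire) but nothing renders, the first thing worth checking is whether the renderers under each ImageTarget actually get re-enabled when tracking is found. Below is a minimal sketch of such a handler against the legacy (pre-10) Vuforia Unity API; ModelTrackableHandler is a placeholder name, and the exact class and method names may differ in your SDK version:

```csharp
using UnityEngine;
using Vuforia;

// Sketch: attach to the ImageTarget that parents the 3D model.
public class ModelTrackableHandler : MonoBehaviour, ITrackableEventHandler
{
    TrackableBehaviour trackable;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        Debug.Log(trackable.TrackableName + " found: " + found);

        // Toggle the child renderers rather than deactivating the GameObject,
        // so this handler keeps receiving state-change events.
        foreach (var r in GetComponentsInChildren<Renderer>(true))
            r.enabled = found;
    }
}
```

Note that Unity converts .3ds files at import time, so the meshes themselves should already be baked into the .apk; if the renderers are enabled but the models are still invisible, checking adb logcat for shader or material errors on the device would be a reasonable next step.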
I downloaded the AR Foundation samples from Unity (link: https://github.com/Unity-Technologies/arfoundation-samples/tree/4.1), made a build of one of the scenes, called ImageTrackingWithMultiplePrefabs, and put it on my Android phone.
The build works fine when scanning markers, but as soon as the camera is covered, the object stays glued to the camera, and I have to either look at the marker again or restart the app to fix it.
This is going to be an issue because we're making an app where this could happen a lot, for example when the user puts their phone down.
I made a video to better explain the issue: https://youtu.be/BgCHbZeSWd4
More info:
Unity version: 2020.3.20f1
AR Foundation version: 4.1.9
Phone: Google Pixel 5 (but it also happened on my Samsung Galaxy S10)
I put a log in the PrefabPairImageManager.cs script and noticed that when we look away, the tracking state changes to Limited and stays there, even when the camera is covered. So there is no way for me to detect in code, based on the tracking info, when the camera is covered.
What am I doing wrong?
Thanks a lot
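In case it helps others hitting the same symptom: a workaround that addresses the "glued to the camera" behaviour (though not the underlying tracking-state issue) is to hide the spawned content whenever the image's tracking state is anything other than Tracking. A minimal sketch against AR Foundation 4.x; ImageContentVisibility is a placeholder name, and it assumes the prefab is parented to the ARTrackedImage as in the sample:

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: attach next to the ARTrackedImageManager on the AR Session Origin.
[RequireComponent(typeof(ARTrackedImageManager))]
public class ImageContentVisibility : MonoBehaviour
{
    ARTrackedImageManager manager;

    void Awake()     { manager = GetComponent<ARTrackedImageManager>(); }
    void OnEnable()  { manager.trackedImagesChanged += OnChanged; }
    void OnDisable() { manager.trackedImagesChanged -= OnChanged; }

    void OnChanged(ARTrackedImagesChangedEventArgs args)
    {
        foreach (var image in args.updated)
        {
            // In the Limited state the pose can go stale (the reported
            // "stuck to the camera" behaviour), so only show the content
            // while the state is Tracking.
            bool visible = image.trackingState == TrackingState.Tracking;
            foreach (var r in image.GetComponentsInChildren<Renderer>(true))
                r.enabled = visible;
        }
    }
}
```

The trade-off is that the model also disappears during brief Limited phases (for example during fast camera motion), so this hides content somewhat more aggressively than strictly necessary.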
I just started working with ML5 and machine learning in general. I began by creating an app that classifies images from my webcam using the 'MobileNet' image classifier, and I then created my own classifier using Teachable Machine, which also worked great. I built the app with Cordova and used the browser platform while getting started, and everything worked fine.
However, once I switched to running the app on the Android platform, whatever image I try to classify (whether it is taken with my phone camera or even hard-coded into the HTML page) always returns the same result with the exact same confidence. If I switch back to the browser, it works just fine again. I can't seem to find anyone else reporting this kind of problem. Does anyone have any idea what it might be?
I once had the same problem, and it turned out that the Android version was hitting out-of-memory exceptions that weren't being caught or reported.
After debugging with the Chrome Developer Tools (by following this), it turned out the problem was that the images I was trying to classify were too large to be processed on Android (error: WebGL: INVALID_VALUE: texImage2D: width or height out of range).
Reducing the size/quality of the images before classifying them therefore resolved the problem.
I am using the libGDX framework to develop my first Android game. The problem is that when I run the game on the desktop, the images look big and sit in the correct location, but when I run it on the device they look a little smaller and appear in a different location. I am using ScreenViewport, and my main concern is the location of the images.
I am developing with Unity 5.3.5f1 (64-bit) and have a problem with a black screen on my Nexus 7 (Android 4.4.3) every time I try to launch the app.
I am trying to implement the Vuforia Object Recognition Unity Sample (https://developer.vuforia.com/library//articles/Training/Vuforia-Object-Recognition-Unity-Sample-Guide), so it should work, since I followed the instructions.
I didn't modify the code; I just followed the instructions to add the 3D target.
I followed the hints from the developer forum on the player settings for building (https://developer.vuforia.com/forum/issues-and-bugs/camera-not-working-when-app-installed-mobile), but none of the following changed the situation:
setting Rendering Path to "Legacy Deferred"
changing the Minimum API Level to Android 4.2
enabling "GPU Skinning"
setting Graphics APIs to "OpenGLES2"
I have noticed a lot of problems between Unity and mobile devices when building. There seems to be some residual metadata left on the device from Unity (the metadata holds the information about which objects a script is attached to), so not all changes end up taking effect properly.

I had an instance where I deleted EVERYTHING in my scene, and some of those items had a debug script attached that would output text to the Xcode console from the script's Start() method. After cleaning out the entire scene and redeploying, that debug script was still outputting to the console. I deleted the .meta file associated with that script and redeployed, and this deletion stopped the script from running.

I have noticed that restarting the device can also fix this issue. Deleting the .meta file, deploying, then undoing the delete and running again also fixes some of these issues. You will NOT notice this problem when developing in Unity, and breakpoints will never be hit, but it will still happen on the mobile device, which will even run scripts that you completely disabled.
I am currently using the JPCT-AE framework to build a 3D game for Android. Because the framework does not yet have a class for displaying fonts, a class called AGLFont has been created for drawing text on the Android screen. Here is the link to the forum thread with this class:
http://www.jpct.net/forum2/index.php/topic,1563.0.html
This class works great on the Android emulator, but on an actual Android phone it displays no text (without causing the game to break). We have tried using this class on two Android phones (Droid X and Nexus S) with no success.
Does anyone know how to configure this code to work on a physical phone, or why there might be a difference between the emulator and an actual phone?