I am about 3 months into the AR scene and I feel somewhat familiar with building an AR app with Vuforia in Unity. I want to add a DLC system to my AR app so that I can build and publish once, then update the content after the build. I will be doing future updates with new content for the app because it is a serialized book series with AR functions, and the file size would be huge with all the content bundled in.
My workflow is as follows:
Prefabs with Image Targets and AR content
1) I have prefabs with an ImageTarget => AR content to display (3D models, animation, graphics, and audio). I have them marked as Addressable and labeled "ARPages".
Files Exported as Addressables
2) I also have my Vuforia Database (images, .xml, and .dat) marked as Addressable. I want to be able to update and add to the database post build, then sync it with new prefabs that I make Addressable.
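A sketch of how I plan to get the database files from Addressables onto the device (assuming the .xml and .dat import as TextAssets; the .dat may need to be renamed to .bytes for Unity to treat it as a TextAsset, and the addresses below are placeholders):
using System.IO;
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.AddressableAssets;

// Sketch: copy the Addressables-delivered Vuforia database files to
// persistentDataPath so they can later be loaded from an absolute path.
// The addresses "VuforiaDB.xml" / "VuforiaDB.dat" are placeholders, and the
// .dat is assumed to have been made loadable as a TextAsset (e.g. via .bytes).
public static class VuforiaDatabaseInstaller
{
    public static async Task CopyDatabaseToDisk()
    {
        foreach (string fileName in new[] { "VuforiaDB.xml", "VuforiaDB.dat" })
        {
            TextAsset asset = await Addressables.LoadAssetAsync<TextAsset>(fileName).Task;
            string targetPath = Path.Combine(Application.persistentDataPath, fileName);
            File.WriteAllBytes(targetPath, asset.bytes); // .bytes keeps binary content intact
            Addressables.Release(asset);
        }
    }
}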
ARCamera Scene with Empty GameObject
3) I build the lightweight AR app without the AR content prefabs, but I have an empty GameObject in my AR scene (ARCamera => GameObject) with a script that references an AssetLabelReference labeled "ARPages".
using UnityEngine.SceneManagement;
using UnityEngine;
using System;
using System.Collections.Generic;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement;
using Vuforia;

public class GameManager1 : MonoBehaviour
{
    public AssetLabelReference comicbookName;

    private bool mLoaded = false;
    private DataSet mDataset = null;

    void Awake()
    {
        // Pre-download all remote content tagged with this label.
        Addressables.DownloadDependenciesAsync(comicbookName);
    }

    void Start()
    {
        // Instantiate the Addressable prefabs for this label.
        Addressables.InstantiateAsync(comicbookName);
    }

    void Update()
    {
        if (VuforiaRuntimeUtilities.IsVuforiaEnabled() && !mLoaded)
        {
            string externalPath = Application.persistentDataPath;
            if (mDataset == null)
            {
                // First, create the dataset
                ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
                mDataset = tracker.CreateDataSet();
            }
            if (mDataset.Load(externalPath, VuforiaUnity.StorageType.STORAGE_ABSOLUTE))
            {
                mLoaded = true;
            }
            else
            {
                Debug.LogError("Failed to load dataset!");
            }
        }
    }
}
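One thing I am not sure about in the script above: I point DataSet.Load at persistentDataPath itself rather than at the .xml file, and I never activate the dataset on the ObjectTracker. Here is a minimal sketch of how I understand loading and activating is supposed to look once the files are on disk ("ARBook.xml" is a placeholder name):
using UnityEngine;
using Vuforia;

// Sketch: load and activate a Vuforia dataset from persistentDataPath.
// "ARBook.xml" stands in for whatever the downloaded database is called;
// the matching .dat must sit next to it.
public class DataSetActivator : MonoBehaviour
{
    public void LoadAndActivateDataSet()
    {
        string dataSetPath = Application.persistentDataPath + "/ARBook.xml";

        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        DataSet dataSet = tracker.CreateDataSet();

        if (dataSet.Load(dataSetPath, VuforiaUnity.StorageType.STORAGE_ABSOLUTE))
        {
            tracker.Stop(); // stop tracking before swapping datasets
            if (!tracker.ActivateDataSet(dataSet))
                Debug.LogError("Could not activate the dataset.");
            if (!tracker.Start())
                Debug.LogError("Could not restart the ObjectTracker.");
        }
        else
        {
            Debug.LogError("Failed to load dataset from " + dataSetPath);
        }
    }
}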
Firebase Storage Files
4) Build the Addressables packages and upload them to Firebase Storage, from where they are downloaded to the device on startup in the Menu scene using a script:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Threading.Tasks;
using UnityEngine.AddressableAssets;
using Vuforia;

public class LoadAssetsFromRemote : MonoBehaviour
{
    [SerializeField] private AssetLabelReference _label;

    // Start is called before the first frame update
    private void Start()
    {
        Get(_label);
    }

    private async Task Get(AssetLabelReference label)
    {
        var locations = await Addressables.LoadResourceLocationsAsync(label).Task;
        foreach (var location in locations)
        {
            await Addressables.InstantiateAsync(location).Task;
        }
    }

    // Update is called once per frame
    void Update()
    {
    }
}
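For completeness, a sketch of what I plan to add so the Menu scene only downloads when new content is available (assuming the Addressables profile's RemoteLoadPath points at the Firebase Storage download URL; the names are placeholders):
using System.Threading.Tasks;
using UnityEngine;
using UnityEngine.AddressableAssets;

// Sketch: check the remote download size for a label and download the bundles
// with progress before the AR scene opens. Assumes the Addressables profile's
// RemoteLoadPath points at the Firebase Storage (or any HTTP) location where
// the built bundles were uploaded.
public class AddressableDownloader : MonoBehaviour
{
    [SerializeField] private AssetLabelReference label;

    private async void Start()
    {
        long bytes = await Addressables.GetDownloadSizeAsync(label.labelString).Task;
        if (bytes == 0)
        {
            Debug.Log("Remote content already cached, nothing to download.");
            return;
        }

        var handle = Addressables.DownloadDependenciesAsync(label.labelString);
        while (!handle.IsDone)
        {
            Debug.Log($"Downloading {label.labelString}: {handle.PercentComplete:P0}");
            await Task.Yield();
        }
        Addressables.Release(handle);
        Debug.Log("Download finished.");
    }
}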
5) When I click and open the AR scene, I use a script to load and initialize Vuforia and the database, but nothing happens when I point my phone at the image target.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class MultiRecoScript : MonoBehaviour, ITrackableEventHandler
{
    public AudioClip musicFx;
    public Animator[] myAnimator;

    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        // Fetch the Animators from this GameObject's children
        myAnimator = GetComponentsInChildren<Animator>();

        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour)
        {
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
        }
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED ||
            newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED)
        {
            // Start the music
            Debug.Log("sound");
            AudioManagerScript.current.PlaySound(musicFx);

            foreach (Animator animator in myAnimator)
            {
                animator.speed = 1f;
                animator.gameObject.SetActive(true);
            }
        }
        else
        {
            // Stop the music
            AudioManagerScript.current.StopSound();
            // Debug.Log("stop");

            foreach (Animator animator in myAnimator)
            {
                animator.gameObject.SetActive(false);
            }
        }
    }
}
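The actual initialization part of step 5 is not in the script above; roughly, it looks like this in my project (a sketch, assuming "Delayed Initialization" is ticked in the VuforiaConfiguration asset):
using UnityEngine;
using Vuforia;

// Sketch: initialize Vuforia manually when the AR scene opens. Assumes
// "Delayed Initialization" is enabled in the VuforiaConfiguration.
public class VuforiaInitializer : MonoBehaviour
{
    void Start()
    {
        VuforiaARController.Instance.RegisterVuforiaStartedCallback(OnVuforiaStarted);
        VuforiaRuntime.Instance.InitVuforia();
    }

    private void OnVuforiaStarted()
    {
        // Vuforia is running now; this is where the downloaded dataset would be
        // loaded and activated (see the DataSetActivator sketch above).
        Debug.Log("Vuforia started, dataset can be loaded now.");
    }
}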
I am stuck and could use help and feedback. I want to be able to upload new content to Firebase using the Addressables system in Unity, and have the app download and use the new content without my having to rebuild the app every time I want to update the content.
Related
I need some suggestions or a way of doing this.
Scenario: I want to scan a QR code in the AR scene, and whatever content is encoded in the QR code should be placed in the AR scene. I don't want to use Google Vision; instead I want to use the package below, but that package opens its own camera, whereas I want it to work inside the AR scene itself.
I used this package for QR scanning: https://github.com/zxing/zxing
Below is my AR code:
public class MainActivity extends AppCompatActivity {

    private ArFragment arFragment;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.arFragment);
        arFragment.setOnTapArPlaneListener((hitResult, plane, motionEvent) -> {
            Anchor anchor = hitResult.createAnchor();
            ModelRenderable.builder()
                    .setSource(this, Uri.parse("anchor.sfb"))
                    .build()
                    .thenAccept(modelRenderable -> addModelToScene(anchor, modelRenderable))
                    .exceptionally(throwable -> {
                        AlertDialog.Builder builder = new AlertDialog.Builder(this);
                        builder.setMessage(throwable.getMessage()).show();
                        return null;
                    });
        });
    }

    private void addModelToScene(Anchor anchor, ModelRenderable modelRenderable) {
        AnchorNode anchorNode = new AnchorNode(anchor);
        TransformableNode transformableNode = new TransformableNode(arFragment.getTransformationSystem());
        transformableNode.setParent(anchorNode);
        transformableNode.setRenderable(modelRenderable);
        arFragment.getArSceneView().getScene().addChild(anchorNode);
        transformableNode.select();
    }
}
I recommend trying the existing Augmented Images feature in ARCore
What you think of as a QR code, the AR software sees as a fiducial marker. These markers need to be known beforehand. For example, in the video on the ARCore page, the painting is a fiducial marker which allows the 3D image to be overlaid.
The ARCore feature I linked to supports up to 1000 reference images/markers per marker database and you can create and use new predefined marker databases.
As long as you know what QR codes will have 3D effects, you can prepare them in a marker database.
If you want or need dynamic QR codes with ARCore, I would suggest trying to create a fiducial around or next to the QR code, so that you can scan it and then hand off to ARCore to generate the 3D image. This may not work, though, because the QR code may be mixed in with the fiducial, and both need white space around them to work.
If you can't use ARCore, then you are in the world of OpenCV and various scene engines (3D renderers) like Ogre or you can draw the AR scene in OpenGL ES.
ARCore gives you the frame, or you can use a wrapper like Sceneform. In Sceneform you can get your frame from the ArFragment.
First get your fragment:
val arFragment = (supportFragmentManager.findFragmentById(R.id.your_ar_fragment) as ArFragment?)

arFragment?.arSceneView?.scene?.addOnUpdateListener {
    onUpdateFrame()
}
Then analyze your frames, for example to read QR codes:
private var frame: Frame? = null

fun onUpdateFrame() = runBlocking {
    launch {
        analyze()
    }
}

private fun analyze() {
    frame = arFragment.arSceneView.arFrame
    // Do fancy stuff with your frame, e.g. reading QR codes.
    // acquireCameraImage() throws NotYetAvailableException while the camera is still starting up.
    val image = try {
        frame?.acquireCameraImage()
    } catch (e: NotYetAvailableException) {
        null
    } ?: return
    readingQRCodes(image)
}

private fun readingQRCodes(image: Image) {
    val mediaImage = InputImage.fromMediaImage(image, 0)
    val scanner = BarcodeScanning.getClient()
    scanner.process(mediaImage)
        .addOnSuccessListener { qrCodes ->
            //...
        }
        .addOnFailureListener {
            //...
        }
        .addOnCompleteListener {
            image.close()
        }
}
Of course you can also use the ARCore library to get the camera frames.
I have a requirement to take a photograph of a user in Xamarin Forms without them having to press the shutter button. For example, when the app launches it should show a preview and count down from 5 seconds (to give the user chance to get in position) then take a picture automatically.
I have tried the Xamarin Media Plugin library; however, this Stack Overflow post and this GitHub issue state that this feature is not supported.
I have seen a number of dead discussions such as this one, with people asking similar questions without resolution.
I tried the LeadTools AutoCapture sample but this only seems to work for documents/text and not people (unless I am missing something??).
I am now working my way through the Camera2Basic sample which is quite old and only targets Android via android.hardware.camera2.
Are there any samples out there (or 3rd party libraries) that can achieve this requirement? Ideally I would like it to be cross-platform (iOS and Android), but currently the main focus is Android.
You can create a custom view renderer on Android to achieve that.
Basing it on this official sample is more convenient; just modify the code as follows to achieve what you want. This official sample can preview the camera view in a Xamarin Forms app; we just need to add a timer that grabs the current frame from the camera after 5 seconds. The modified renderer code is as follows:
public class CameraPreviewRenderer : ViewRenderer<CustomRenderer.CameraPreview, CustomRenderer.Droid.CameraPreview>, Camera.IPreviewCallback
{
    CameraPreview cameraPreview;
    byte[] tmpData;

    public CameraPreviewRenderer(Context context) : base(context)
    {
    }

    protected override void OnElementChanged(ElementChangedEventArgs<CustomRenderer.CameraPreview> e)
    {
        base.OnElementChanged(e);

        if (e.OldElement != null)
        {
            // Unsubscribe
            cameraPreview.Click -= OnCameraPreviewClicked;
        }
        if (e.NewElement != null)
        {
            if (Control == null)
            {
                cameraPreview = new CameraPreview(Context);
                SetNativeControl(cameraPreview);
            }
            Control.Preview = Camera.Open((int)e.NewElement.Camera);

            // Subscribe
            cameraPreview.Click += OnCameraPreviewClicked;
        }
    }

    protected override void OnAttachedToWindow()
    {
        base.OnAttachedToWindow();

        // Start a timer to grab the current frame after 5 seconds.
        Device.StartTimer(new TimeSpan(0, 0, 5), () =>
        {
            Device.BeginInvokeOnMainThread(() =>
            {
                Console.WriteLine("get data" + tmpData);
                // use MessagingCenter to pass the data to the Forms project
                MessagingCenter.Send<object, byte[]>(this, "CameraData", tmpData);
                cameraPreview.Preview.StopPreview();
                cameraPreview.IsPreviewing = false;
                // interact with UI elements
            });
            return false; // return true to keep the timer running, false to stop it
        });
    }

    void OnCameraPreviewClicked(object sender, EventArgs e)
    {
        if (cameraPreview.IsPreviewing)
        {
            cameraPreview.Preview.StopPreview();
            cameraPreview.IsPreviewing = false;
        }
        else
        {
            cameraPreview.Preview.SetPreviewCallback(this);
            cameraPreview.Preview.StartPreview();
            cameraPreview.IsPreviewing = true;
        }
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            Control.Preview.Release();
        }
        base.Dispose(disposing);
    }

    // Called for every preview frame; keep the latest frame data.
    public void OnPreviewFrame(byte[] data, Camera camera)
    {
        tmpData = data;
    }
}
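One caveat (not part of the original sample): OnPreviewFrame hands you raw NV21 preview bytes, not an encoded image, so the Forms side may not be able to display them directly. A minimal sketch of converting a frame to JPEG before sending it, assuming the default NV21 preview format and that the preview size is read from the camera parameters:
// Sketch: convert a raw NV21 preview frame to a JPEG byte array so the Forms
// side can use it with ImageSource.FromStream. Assumes the camera delivers the
// default NV21 format and that previewWidth/previewHeight were read from
// camera.GetParameters().PreviewSize.
byte[] ConvertPreviewFrameToJpeg(byte[] nv21Data, int previewWidth, int previewHeight)
{
    using (var yuv = new Android.Graphics.YuvImage(
        nv21Data, Android.Graphics.ImageFormatType.Nv21, previewWidth, previewHeight, null))
    using (var jpegStream = new System.IO.MemoryStream())
    {
        var frameRect = new Android.Graphics.Rect(0, 0, previewWidth, previewHeight);
        yuv.CompressToJpeg(frameRect, 90, jpegStream); // 90 = JPEG quality
        return jpegStream.ToArray();
    }
}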
Now, Xamarin Forms can receive the data from MessagingCenter:
MessagingCenter.Subscribe<object, byte[]>(this, "CameraData", async (sender, arg) =>
{
    MemoryStream stream = new MemoryStream(arg);
    if (stream != null)
    {
        // image is defined in XAML
        image.Source = ImageSource.FromStream(() => stream);
    }
});
image is defined in XAML: <Image x:Name="image" WidthRequest="200" HeightRequest="200"/>
I followed the example code in the AugmentedImageExampleController for ARCore Unity on GitHub: https://github.com/google-ar/arcore-unity-sdk/blob/master/Assets/GoogleARCore/Examples/AugmentedImage/Scripts/AugmentedImageExampleController.cs. Even after following that example, it doesn't play the video from the video player, as shown in the AugmentedImageVisualizer code below:
The video plays if I drag and drop the AugmentedImageVisualizer onto the scene and enable playOnAwake. However, it doesn't play when I turn playOnAwake off, deploy the app to my phone, and then point the camera at the augmented image (in my case an empty milk bottle label). I want an object such as a ghost to appear coming out of the milk bottle.
using GoogleARCore;
using UnityEngine;
using UnityEngine.Video;

public class AugmentedImageVisualizer : MonoBehaviour
{
    private VideoPlayer vidPlayer;

    public VideoClip[] vidClips;
    public AugmentedImage Image;

    // Use this for initialization
    void Start()
    {
        vidPlayer = GetComponent<VideoPlayer>();
        vidPlayer.loopPointReached += OnStop;
    }

    private void OnStop(VideoPlayer source)
    {
        gameObject.SetActive(false);
    }

    // Update is called once per frame
    void Update()
    {
        if (Image == null || Image.TrackingState != TrackingState.Tracking)
        {
            return;
        }

        if (!vidPlayer.isPlaying)
        {
            vidPlayer.clip = vidClips[Image.DatabaseIndex];
            vidPlayer.Play();
        }

        transform.localScale = new Vector3(Image.ExtentX, Image.ExtentZ, 1f);
    }
}
There are no console errors showing, but no videos show either.
Be sure that your camera's position is at the origin.
Is your video player GameObject active? (video.SetActive(true))
My working solution:
public class AugmentedImageVisualizer : MonoBehaviour
{
    private const string TAG = "AugmentedImageVisualizer";

    public AugmentedImage image;
    public GameObject video;
    public VideoClip[] videoClips;

    private VideoPlayer videoPlayer;

    private void Start()
    {
        videoPlayer = video.GetComponent<VideoPlayer>();
    }

    private void Update()
    {
        if (Session.Status != SessionStatus.Tracking) return;

        if (image == null || image.TrackingState != TrackingState.Tracking)
        {
            video.SetActive(false);
            return;
        }

        UpdateVideo();
    }

    private void UpdateVideo()
    {
        if (!videoPlayer.isPlaying)
        {
            videoPlayer.clip = videoClips[image.DatabaseIndex];
            videoPlayer.Play();
            video.SetActive(true);
        }

        transform.localScale = new Vector3(image.ExtentX, 1, image.ExtentZ);
    }
}
Don't forget to add a VideoPlayer component to your GameObject.
EDIT: I had to edit the answer to give more explanation:
I used the Augmented Image examples that come with GoogleARCore to create the AR. The controller needs an augmented image visualizer prefab. This prefab is a Quad (right click in the hierarchy area, then 3D Object > Quad) moved into the Prefabs folder (this creates a prefab from the quad). This quad/prefab has a VideoPlayer (added in the inspector). The quad (prefab) also has a script (AugmentedImageVisualizer) which contains your code snippet. So in the inspector (with the AugmentedImageVisualizer script) the quad already has the Video Clips array where you can set your videos.
In the hierarchy there is an ARCore Device, and inside it is the camera. This camera has a Tracked Pose Driver (set it in the inspector) with Pose Source: Color Camera, plus the ARCore Background Renderer script.
I found 2 similar videos on YouTube.
This one shows 1 video: https://www.youtube.com/watch?v=yjj5dV2v9Fs
This one shows multiple videos on multiple reference images: https://www.youtube.com/watch?v=GkzMFNmvums
The 2nd video does the same thing and contains your code as well, so it is very descriptive regarding your question. It was tricky for me: the guy in the video created a quad that he renamed to AugmentedImageVisualizer and then placed it in Prefabs. Once I realized this, my videos appeared on the reference images.
I used Unity 2019.3.15 and arcore-unity-sdk-1.17.0.unitypackage
I first encountered this in my app, but found it is reproducible in minimalist projects as well. I have a public database (read & write: true) and in Unity I use the following script to add a listener to a value in my database and to update some text on the screen when that value changes. I have a button in my scene which calls "ToggleValue()" to switch between existing and null values.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;
using Firebase.Database;

public class DataTest : MonoBehaviour {

    public Text displayText;

    private bool toggleState = false;

    void Start () {
        FirebaseDatabase.DefaultInstance.GetReference("test").ValueChanged += HandleTestChanged;
    }

    void HandleTestChanged(object sender, ValueChangedEventArgs args) {
        if (args.DatabaseError != null) {
            displayText.text = args.DatabaseError.Message;
            Debug.LogError(args.DatabaseError.Message);
            return;
        } else {
            displayText.text = args.Snapshot.Value.ToString();
        }
    }

    public void ToggleValue() {
        toggleState = !toggleState;
        Dictionary<string, object> testChanges = new Dictionary<string, object>();
        if (toggleState) {
            testChanges.Add("test", true);
        } else {
            testChanges.Add("test", null);
        }
        FirebaseDatabase.DefaultInstance.RootReference.UpdateChildrenAsync(new Dictionary<string, object>(testChanges));
    }
}
This works perfectly fine in the editor. When clicking the button the text on screen displays "True" and "null" at the appropriate times. However, when I build to Android, I get different results. The first time the value is set to True, the text updates to read "True". However, on the second press, when the listener should trigger with a value of "null", nothing happens. The "HandleTestChanged" function is not called, no new value is passed in. Having the Firebase Realtime Database console window open on my desktop I can see that the value is still being set correctly, it's just not updating the clients.
Is anyone else able to reproduce this? Does anyone have a workaround for this?
I am using Unity 2018.2.1f1, Firebase Unity SDK 5.2.0, Android 7.1.1.
I am using the Vuforia 6.2 AR SDK in Unity. But when I test the application on an Android phone, the camera seems blurry. I searched the Vuforia developer website and found a camera focus mode, but I can't implement it because that guideline was for an older Vuforia SDK, and I can't find the script they mentioned on their website. Here is their code sample, but it's not working. I created a different script and ran this line in the Start() function, but it's still not working:
CameraDevice.Instance.SetFocusMode(
    CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
Try this:
void Start()
{
    VuforiaARController.Instance.RegisterVuforiaStartedCallback(OnVuforiaStarted);
    VuforiaARController.Instance.RegisterOnPauseCallback(OnPaused);
}

private void OnVuforiaStarted()
{
    CameraDevice.Instance.SetFocusMode(
        CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
}

private void OnPaused(bool paused)
{
    if (!paused) // resumed
    {
        // Set autofocus mode again when the app is resumed
        CameraDevice.Instance.SetFocusMode(
            CameraDevice.FocusMode.FOCUS_MODE_CONTINUOUSAUTO);
    }
}
This is the right code:
bool cameramode = false;

public void OnCameraChangeMode()
{
    Vuforia.CameraDevice.CameraDirection currentDir = Vuforia.CameraDevice.Instance.GetCameraDirection();
    if (!cameramode) {
        RestartCamera(Vuforia.CameraDevice.CameraDirection.CAMERA_FRONT);
        camBtnTxt.text = "Back Camera";
    } else {
        RestartCamera(Vuforia.CameraDevice.CameraDirection.CAMERA_BACK);
        camBtnTxt.text = "Front Camera";
    }
    cameramode = !cameramode; // remember which camera is active for the next press
}

private void RestartCamera(Vuforia.CameraDevice.CameraDirection newDir)
{
    Vuforia.CameraDevice.Instance.Stop();
    Vuforia.CameraDevice.Instance.Deinit();
    Vuforia.CameraDevice.Instance.Init(newDir);
    Vuforia.CameraDevice.Instance.Start();
}