Problems with playing videos in ARCore - Android

I followed the example code in the AugmentedImageExampleController for ARCore Unity on GitHub: https://github.com/google-ar/arcore-unity-sdk/blob/master/Assets/GoogleARCore/Examples/AugmentedImage/Scripts/AugmentedImageExampleController.cs. Even after following this example, the video from the VideoPlayer does not play.
The video plays if I drag and drop the AugmentedImageVisualizer onto the scene and enable playOnAwake. However, it doesn't play when I turn playOnAwake off, deploy the app to my phone, and point the camera at the augmented image (in my case, an empty milk bottle label). I want an object such as a ghost to appear coming out of the milk bottle. Here is my AugmentedImageVisualizer code:
using GoogleARCore;
using UnityEngine;
using UnityEngine.Video;

public class AugmentedImageVisualizer : MonoBehaviour {

    private VideoPlayer vidPlayer;

    public VideoClip[] vidClips;
    public AugmentedImage Image;

    // Use this for initialization
    void Start () {
        vidPlayer = GetComponent<VideoPlayer>();
        vidPlayer.loopPointReached += OnStop;
    }

    private void OnStop(VideoPlayer source)
    {
        gameObject.SetActive(false);
    }

    // Update is called once per frame
    void Update () {
        if (Image == null || Image.TrackingState != TrackingState.Tracking)
        {
            return;
        }

        if (!vidPlayer.isPlaying)
        {
            vidPlayer.clip = vidClips[Image.DatabaseIndex];
            vidPlayer.Play();
        }

        transform.localScale = new Vector3(Image.ExtentX, Image.ExtentZ, 1f);
    }
}
No console errors are showing, but no videos are showing either.

Be sure that your camera's position is at the origin.
Is your video player's GameObject active? (Call SetActive(true) on it.)
My working solution:
using GoogleARCore;
using UnityEngine;
using UnityEngine.Video;

public class AugmentedImageVisualizer : MonoBehaviour
{
    private const string TAG = "AugmentedImageVisualizer";

    public AugmentedImage image;
    public GameObject video;
    public VideoClip[] videoClips;

    private VideoPlayer videoPlayer;

    private void Start() {
        videoPlayer = video.GetComponent<VideoPlayer>();
    }

    private void Update() {
        if (Session.Status != SessionStatus.Tracking) return;

        if (image == null || image.TrackingState != TrackingState.Tracking) {
            // Hide the GameObject that carries the VideoPlayer while the
            // image is not tracked.
            video.SetActive(false);
            return;
        }

        UpdateVideo();
    }

    private void UpdateVideo() {
        if (!videoPlayer.isPlaying) {
            videoPlayer.clip = videoClips[image.DatabaseIndex];
            videoPlayer.Play();
            video.SetActive(true);
        }

        transform.localScale = new Vector3(image.ExtentX, 1, image.ExtentZ);
    }
}
Don't forget to add a VideoPlayer component to your GameObject.
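If the VideoPlayer sits on the same GameObject as the visualizer script (as in the question's version of the code), Unity's standard RequireComponent attribute can make the component impossible to forget; a minimal sketch, not part of the original answer:

using UnityEngine;
using UnityEngine.Video;

// Unity adds a VideoPlayer automatically when this script is attached,
// so GetComponent<VideoPlayer>() can never return null.
[RequireComponent(typeof(VideoPlayer))]
public class AugmentedImageVisualizer : MonoBehaviour {
    // ...
}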

EDIT: I had to edit the answer to give more explanation.
I used the Augmented Image example that comes with GoogleARCore to create the AR. The controller there needs an augmented image visualizer prefab. This prefab is a Quad (right-click in the Hierarchy area, then 3D Object > Quad) that I moved into the Prefabs folder (which turns the Quad into a prefab). This Quad prefab has a VideoPlayer (added in the Inspector) and also the AugmentedImageVisualizer script containing your code snippet. So in the Inspector (via the AugmentedImageVisualizer script), the Quad already has the video clip slots where you can set your videos.
In the Hierarchy there is an ARCore Device, and inside it is the camera. This camera has a Tracked Pose Driver (set it in the Inspector) with Pose Source: Color Camera, and the AR Core Background Renderer script.
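For completeness, here is a condensed sketch of the controller side, based on the AugmentedImageExampleController linked in the question (the field and dictionary names here are mine); it instantiates one visualizer prefab per tracked image and hands the detected image over to it:

using System.Collections.Generic;
using GoogleARCore;
using UnityEngine;

// Condensed from the linked example controller: one visualizer per
// detected augmented image, anchored at the image's center pose.
public class AugmentedImageController : MonoBehaviour
{
    public AugmentedImageVisualizer visualizerPrefab;

    private readonly List<AugmentedImage> tempImages = new List<AugmentedImage>();
    private readonly Dictionary<int, AugmentedImageVisualizer> visualizers =
        new Dictionary<int, AugmentedImageVisualizer>();

    void Update()
    {
        // Get the augmented images that were updated this frame.
        Session.GetTrackables<AugmentedImage>(tempImages, TrackableQueryFilter.Updated);

        foreach (AugmentedImage image in tempImages)
        {
            visualizers.TryGetValue(image.DatabaseIndex, out AugmentedImageVisualizer visualizer);

            if (image.TrackingState == TrackingState.Tracking && visualizer == null)
            {
                // Anchor the visualizer to the center of the detected image.
                Anchor anchor = image.CreateAnchor(image.CenterPose);
                visualizer = Instantiate(visualizerPrefab, anchor.transform);
                visualizer.Image = image;
                visualizers.Add(image.DatabaseIndex, visualizer);
            }
            else if (image.TrackingState == TrackingState.Stopped && visualizer != null)
            {
                visualizers.Remove(image.DatabaseIndex);
                Destroy(visualizer.gameObject);
            }
        }
    }
}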
I found two similar videos on YouTube.
The first shows one video: https://www.youtube.com/watch?v=yjj5dV2v9Fs
The second shows multiple videos on multiple reference images: https://www.youtube.com/watch?v=GkzMFNmvums
The second one does the same setup and contains your code as well, so that video is very descriptive regarding your question. It was tricky for me: the guy in the video created a Quad, renamed it to AugmentedImageVisualizer, and then placed it in Prefabs. Once I realized this, my videos appeared on the reference images.
I used Unity 2019.3.15 and arcore-unity-sdk-1.17.0.unitypackage

Related

How can I scan a QR code from the AR scene camera using ARCore?

I need some suggestions on a way of doing this.
Scenario: I want to scan a QR code in the AR scene, and whatever content is in the QR code, I will place in the AR scene. I don't want to use Google Vision here; instead I want to use the package below, but that package opens its own camera, whereas I want to scan within the AR scene itself.
I used this package for QR scanning: https://github.com/zxing/zxing
Below is my AR code:
public class MainActivity extends AppCompatActivity {

    private ArFragment arFragment;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        arFragment = (ArFragment) getSupportFragmentManager().findFragmentById(R.id.arFragment);
        arFragment.setOnTapArPlaneListener((hitResult, plane, motionEvent) -> {
            Anchor anchor = hitResult.createAnchor();
            ModelRenderable.builder()
                    .setSource(this, Uri.parse("anchor.sfb"))
                    .build()
                    .thenAccept(modelRenderable -> addModelToScene(anchor, modelRenderable))
                    .exceptionally(throwable -> {
                        AlertDialog.Builder builder = new AlertDialog.Builder(this);
                        builder.setMessage(throwable.getMessage()).show();
                        return null;
                    });
        });
    }

    private void addModelToScene(Anchor anchor, ModelRenderable modelRenderable) {
        AnchorNode anchorNode = new AnchorNode(anchor);
        TransformableNode transformableNode = new TransformableNode(arFragment.getTransformationSystem());
        transformableNode.setParent(anchorNode);
        transformableNode.setRenderable(modelRenderable);
        arFragment.getArSceneView().getScene().addChild(anchorNode);
        transformableNode.select();
    }
}
I recommend trying the existing Augmented Images feature in ARCore.
What you think of as a QR code, the AR software sees as a fiducial marker. These markers need to be known beforehand. For example, in the video on the ARCore page, the painting is a fiducial marker which allows the 3D image to be overlaid.
The ARCore feature I linked to supports up to 1000 reference images/markers per marker database, and you can create and use new predefined marker databases.
As long as you know in advance which QR codes will have 3D effects, you can prepare them in a marker database.
If you want or need dynamic QR codes with ARCore, I would suggest placing a fiducial around or next to the QR code, so that you can scan the code and then hand off to ARCore to generate the 3D image. This may not work, though, since the QR code may blend into the fiducial; both need white space to work.
If you can't use ARCore, then you are in the world of OpenCV and various scene engines (3D renderers) like Ogre, or you can draw the AR scene yourself in OpenGL ES.
ARCore gives you the frame, or you can use a wrapper like Sceneform. In Sceneform you are able to get your frame from the ArFragment.
First, get your fragment and register an update listener:
val arFragment = supportFragmentManager.findFragmentById(R.id.your_ar_fragment) as ArFragment?

arFragment?.arSceneView?.scene?.addOnUpdateListener {
    onUpdateFrame()
}
Then analyze your frames, for example to read QR codes:
fun onUpdateFrame() = runBlocking {
    launch {
        analyze()
    }
}

private fun analyze() {
    val frame = arFragment?.arSceneView?.arFrame ?: return
    // acquireCameraImage() throws NotYetAvailableException until the first
    // camera frame is available, so guard it.
    val image = try {
        frame.acquireCameraImage()
    } catch (e: NotYetAvailableException) {
        return
    }
    // Do fancy stuff with your frame, e.g. reading QR codes
    readingQRCodes(image)
}

private fun readingQRCodes(image: Image) {
    val mediaImage = InputImage.fromMediaImage(image, 0)
    val scanner = BarcodeScanning.getClient()
    scanner.process(mediaImage)
        .addOnSuccessListener { qrCodes ->
            //...
        }
        .addOnFailureListener {
            //...
        }
        .addOnCompleteListener {
            image.close()
        }
}
Of course you can also use the ARCore library to get the camera frames.

DLC system for an AR app with Unity Addressables, Vuforia, and Firebase

I am about 3 months into the AR scene and feel somewhat familiar with building an AR app with Vuforia in Unity. I want to add a DLC system to my AR app so that I can build and publish once, then update the content after the build. I will be doing future updates with new content for the app, because it is a serialized book series with AR functions, and the file size would be huge with all the content in it.
My workflow is as follows:
Prefabs with Image Targets and AR content
1) I have prefabs with an ImageTarget => the AR content to display (3D models, animation, graphics, and audio). I have marked them as Addressable and labeled them "ARPages".
Files exported as Addressables
2) I also have my Vuforia database (images, .xml, and .dat) marked as Addressable. I want to be able to update and add to the database post-build, then sync it with new prefabs that I make Addressable.
ARCamera scene with empty GameObject
3) I build out the lightweight AR app without the AR content prefabs, but I have an empty GameObject in my AR scene (ARCamera => GameObject) with a script that calls an AssetLabelReference labeled "ARPages":
using UnityEngine.SceneManagement;
using UnityEngine;
using System;
using System.Collections.Generic;
using UnityEngine.AddressableAssets;
using UnityEngine.ResourceManagement;
using Vuforia;

public class GameManager1 : MonoBehaviour
{
    public AssetLabelReference comicbookName;

    private bool mLoaded = false;
    private DataSet mDataset = null;

    void Awake()
    {
        Addressables.DownloadDependenciesAsync(comicbookName);
    }

    void Start()
    {
        Addressables.InstantiateAsync(comicbookName);
    }

    void Update()
    {
        if (VuforiaRuntimeUtilities.IsVuforiaEnabled() && !mLoaded)
        {
            string externalPath = Application.persistentDataPath;

            if (mDataset == null)
            {
                // First, create the dataset
                ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
                mDataset = tracker.CreateDataSet();
            }

            if (mDataset.Load(externalPath, VuforiaUnity.StorageType.STORAGE_ABSOLUTE))
            {
                mLoaded = true;
            }
            else
            {
                Debug.LogError("Failed to load dataset!");
            }
        }
    }
}
Firebase Storage Files
4) I build the Addressables packages and upload them to Firebase Storage, from where they are downloaded to my device in the startup menu scene using this script:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Threading.Tasks;
using UnityEngine.AddressableAssets;
using Vuforia;

public class LoadAssetsFromRemote : MonoBehaviour
{
    [SerializeField] private AssetLabelReference _label;

    // Start is called before the first frame update
    private void Start()
    {
        Get(_label);
    }

    private async Task Get(AssetLabelReference label)
    {
        var locations = await Addressables.LoadResourceLocationsAsync(label).Task;
        foreach (var location in locations)
        {
            await Addressables.InstantiateAsync(location).Task;
        }
    }

    // Update is called once per frame
    void Update()
    {
    }
}
5) When I open the AR scene, I use a script to load and initialize Vuforia and the database, but nothing happens when I point my phone at the image target:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class MultiRecoScript : MonoBehaviour, ITrackableEventHandler
{
    public AudioClip musicFx;
    public Animator[] myAnimator;

    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        // Fetch the Animators from your GameObject
        myAnimator = GetComponentsInChildren<Animator>();

        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour)
        {
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
        }
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        if (newStatus == TrackableBehaviour.Status.DETECTED ||
            newStatus == TrackableBehaviour.Status.TRACKED ||
            newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED)
        {
            // start the music
            Debug.Log("sound");
            AudioManagerScript.current.PlaySound(musicFx);

            foreach (Animator animator in myAnimator)
            {
                animator.speed = 1f;
                animator.gameObject.SetActive(true);
            }
        }
        else
        {
            // stop the music
            AudioManagerScript.current.StopSound();

            foreach (Animator animator in myAnimator)
            {
                animator.gameObject.SetActive(false);
            }
        }
    }
}
I am stuck and could use help and feedback. I want to be able to upload new content to Firebase using the Addressables system in Unity, and have the app download and integrate the new content without my having to rebuild the app every time I want to update the content.

Photon objects not syncing - Unity

I am working on a multiplayer third-person game. I am using Motion Controller for animations and Photon as the network manager. I have a problem: when I connect and join the room, the other players don't move on each other's screens; they move only on their own devices. Here is what I deactivated:
using UnityEngine;
using com.ootii.Input;
using com.ootii.Actors;
using com.ootii.Actors.AnimationControllers;

public class netView : Photon.MonoBehaviour {

    public Camera cam;
    public UnityInputSource uis;
    public GameObject canvas;
    public ActorController ac;
    public MotionController mc;

    // Use this for initialization
    void Start () {
        if (photonView.isMine) {
            cam.enabled = true;
            uis._IsEnabled = true;
            canvas.SetActive(true);
            ac.enabled = true;
            mc.enabled = true;
        } else {
            cam.enabled = false;
            uis._IsEnabled = false;
            canvas.SetActive(false);
            ac.enabled = false;
            mc.enabled = false;
        }
    }
}
Here is a video: https://youtu.be/mOaAejsVX04 . In it I am playing in the editor and on my phone. On my device I move around, but the editor player does not move; likewise, in the editor, the player from the device just stands there while it is moving around on the phone.
For input I am using the CrossPlatformManager class. How can I fix it?
In your case I think the problem is that you don't synchronize the transform to begin with. You need either a PhotonTransformView component attached to your network object, with a PhotonView observing that PhotonTransformView, or to manually write and read the transform to that network object's stream inside your network behaviour.
I strongly encourage you to go through the basics tutorial, which shows all of the above techniques step by step:
https://doc.photonengine.com/en-us/pun/current/demos-and-tutorials/pun-basics-tutorial/player-networking#trans_sync
https://doc.photonengine.com/en-us/pun/current/demos-and-tutorials/pun-basics-tutorial/player-networking#beams
It doesn't matter which input technique you use; what matters is the synchronization of the transform.
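If you go the manual route instead of PhotonTransformView, a minimal sketch could look like this (PUN classic API, matching the Photon.MonoBehaviour base class used in the question; remember to drag this component into the PhotonView's Observed Components list so OnPhotonSerializeView actually gets called):

using UnityEngine;

// Manual transform synchronization: the owner writes its transform to the
// stream, remote copies read it back and smooth towards it.
public class NetTransformSync : Photon.MonoBehaviour
{
    private Vector3 targetPosition;
    private Quaternion targetRotation;

    void Update()
    {
        if (!photonView.isMine)
        {
            // Smooth towards the last values received from the owner.
            transform.position = Vector3.Lerp(transform.position, targetPosition, Time.deltaTime * 10f);
            transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, Time.deltaTime * 10f);
        }
    }

    void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.isWriting)
        {
            // Owner sends its current transform.
            stream.SendNext(transform.position);
            stream.SendNext(transform.rotation);
        }
        else
        {
            // Remote copies receive and buffer it.
            targetPosition = (Vector3)stream.ReceiveNext();
            targetRotation = (Quaternion)stream.ReceiveNext();
        }
    }
}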

Unity AR: multiple image targets, but not simultaneously

I am building an app that will recognize paintings and display info about them with the help of AR.
I need multiple image targets, but not simultaneously: an image target should only be activated when it is detected by the AR camera.
I've tried creating many scenes with an image target in each, but I can't call the different image targets; it keeps reverting to only one image target.
This is what you can see in the menu:
Main menu
Start AR camera (this part should have many image targets, but not detected simultaneously)
Help (how to use the app)
Exit
I'm using Vuforia for the AR.
Thanks in advance to those who will help me.
This is the image target and its database (screenshot posted on imgur).
Run the multi-target scene sample. There are three targets (stone, wood, and road).
Each contains the TrackableBehaviour component.
Grab it and disable it in Start. If you do it in Awake, it will be set back to active, most likely in the Awake of the component itself or via some other manager.
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class TrackerController : MonoBehaviour
{
    private IDictionary<string, TrackableBehaviour> trackers = null;

    private void Start()
    {
        this.trackers = new Dictionary<string, TrackableBehaviour>();
        var found = FindObjectsOfType<TrackableBehaviour>();
        foreach (TrackableBehaviour tb in found)
        {
            this.trackers.Add(tb.TrackableName, tb);
            tb.enabled = false;
        }
    }

    public bool SetTracker(string name, bool value)
    {
        if (string.IsNullOrEmpty(name) == true) { return false; }
        if (this.trackers.ContainsKey(name) == false) { return false; }
        this.trackers[name].enabled = value;
        return true;
    }
}
The Start method finds all TrackableBehaviour components and places them in a dictionary for easy access. The setter method returns a boolean; you can change it to throw an exception or whatever else fits.
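A possible usage sketch (the "stone" target name comes from the sample mentioned above; the calling context is hypothetical):

// Hypothetical caller: enable only the "stone" target, e.g. from a menu button.
var controller = FindObjectOfType<TrackerController>();
if (!controller.SetTracker("stone", true))
{
    Debug.LogWarning("No trackable named 'stone' registered.");
}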

CharacterController.SimpleMove not working on Daydream View

I am trying to use a LookToWalk script in my Unity VR app that should run on my Daydream View. In "Game" mode, previewing the changes, everything works as expected (I configured the script to walk forward once the user's camera faces 30.0 degrees downwards or more).
However, when I build the Daydream app and install it on my Google Pixel, CharacterController.SimpleMove doesn't seem to work any more.
The logs show that the 30.0-degree condition is triggered as expected, but no movement is seen on the Daydream.
Do you know why this could be happening? It seems really strange that it runs in the "emulator" but not on the real device.
using UnityEngine;
using System.Collections;

public class GVRLookWalk : MonoBehaviour {

    public Transform vrCamera;
    public float toggleAngle = 30.0f;
    public float speed = 3.0f;

    private bool shouldWalk;
    private CharacterController cc;

    // Use this for initialization
    void Start () {
        cc = GetComponent<CharacterController>();
    }

    // Update is called once per frame
    void Update () {
        if (vrCamera.eulerAngles.x >= toggleAngle && vrCamera.eulerAngles.x < 90.0f) {
            shouldWalk = true;
        } else {
            shouldWalk = false;
        }

        if (shouldWalk) {
            Vector3 forward = vrCamera.TransformDirection(Vector3.forward);
            cc.SimpleMove(forward * speed);
        }
    }
}
Is the camera a child of another transform? You cannot move the camera directly in Unity: "Instead, the camera must be a child of another GameObject, and changes to the position and rotation must be applied to the parent's Transform."
https://unity3d.com/learn/tutorials/topics/virtual-reality/movement-vr
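As a sketch of that setup (assuming a parent rig object carrying the CharacterController, with the VR camera as its child; the names here are illustrative, not from the question):

using UnityEngine;

// Lives on the parent rig together with the CharacterController.
// The child VR camera is only read from, never moved directly.
public class RigLookWalk : MonoBehaviour {

    public Transform head;          // child VR camera, assigned in the Inspector
    public float toggleAngle = 30.0f;
    public float speed = 3.0f;

    private CharacterController cc;

    void Start () {
        cc = GetComponent<CharacterController>();
    }

    void Update () {
        float pitch = head.eulerAngles.x;
        if (pitch >= toggleAngle && pitch < 90.0f) {
            // Move the rig, not the camera. SimpleMove ignores the y component
            // anyway; flattening and normalizing keeps the walking speed
            // constant no matter how far down the user looks.
            Vector3 forward = Vector3.ProjectOnPlane(head.forward, Vector3.up).normalized;
            cc.SimpleMove(forward * speed);
        }
    }
}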
