CharacterController.SimpleMove not working on Daydream View - android

I am trying to use a LookToWalk script in my Unity VR app that should run on my Daydream View. In "Game" mode, when previewing the changes, everything works as expected (I configured the script to walk forward once the user's camera faces 30.0 degrees downwards or more).
However, when I build the Daydream app and install it on my Google Pixel, CharacterController.SimpleMove doesn't seem to work anymore.
The logs show that the 30.0 degree threshold is triggered as expected, but no movement is visible on the Daydream.
Do you know why this could be happening? It seems really strange that it works in the "emulator" but not on the real device.
using UnityEngine;
using System.Collections;

public class GVRLookWalk : MonoBehaviour {

    public Transform vrCamera;
    public float toggleAngle = 30.0f;
    public float speed = 3.0f;

    private bool shouldWalk;
    private CharacterController cc;

    // Use this for initialization
    void Start () {
        cc = GetComponent<CharacterController>();
    }

    // Update is called once per frame
    void Update () {
        // Walk while the camera pitch is between the toggle angle and straight down.
        if (vrCamera.eulerAngles.x >= toggleAngle && vrCamera.eulerAngles.x < 90.0f) {
            shouldWalk = true;
        } else {
            shouldWalk = false;
        }

        if (shouldWalk) {
            Vector3 forward = vrCamera.TransformDirection(Vector3.forward);
            cc.SimpleMove(forward * speed);
        }
    }
}

Is the camera a child of another transform? You cannot move the camera directly: "you cannot move the camera directly in Unity. Instead, the camera must be a child of another GameObject, and changes to the position and rotation must be applied to the parent's Transform."
https://unity3d.com/learn/tutorials/topics/virtual-reality/movement-vr
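A minimal sketch of that setup, assuming the CharacterController and the script live on a parent "player" rig and vrCamera references the child camera (the names are illustrative, not from your project):

using UnityEngine;

// Attach to the parent rig that carries the CharacterController.
// The VR camera stays a child and is only read, never moved directly.
public class LookWalkRig : MonoBehaviour {

    public Transform vrCamera;        // child camera, assigned in the Inspector
    public float toggleAngle = 30.0f;
    public float speed = 3.0f;

    private CharacterController cc;

    void Start () {
        cc = GetComponent<CharacterController>();
    }

    void Update () {
        float pitch = vrCamera.eulerAngles.x;
        if (pitch >= toggleAngle && pitch < 90.0f) {
            // Move the parent rig; the child camera follows automatically.
            Vector3 forward = vrCamera.TransformDirection(Vector3.forward);
            cc.SimpleMove(forward * speed);
        }
    }
}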


Problems with playing videos ARCore

I followed the example code listed in the AugmentedImageExampleController for ARCore Unity on GitHub: https://github.com/google-ar/arcore-unity-sdk/blob/master/Assets/GoogleARCore/Examples/AugmentedImage/Scripts/AugmentedImageExampleController.cs. Even though I followed that example, it doesn't play the video from the VideoPlayer, as shown in the AugmentedImageVisualizer code below:
The video plays if I drag and drop the AugmentedImageVisualizer into the scene and enable playOnAwake. However, it doesn't play when I turn playOnAwake off, deploy the app to my phone, and point the camera at the augmented image (in my case an empty milk bottle label). I want an object such as a ghost to appear coming out of the milk bottle.
using GoogleARCore;
using UnityEngine;
using UnityEngine.Video;

public class AugmentedImageVisualizer : MonoBehaviour {

    private VideoPlayer vidPlayer;

    public VideoClip[] vidClips;
    public AugmentedImage Image;

    // Use this for initialization
    void Start () {
        vidPlayer = GetComponent<VideoPlayer>();
        vidPlayer.loopPointReached += OnStop;
    }

    private void OnStop(VideoPlayer source)
    {
        gameObject.SetActive(false);
    }

    // Update is called once per frame
    void Update () {
        if (Image == null || Image.TrackingState != TrackingState.Tracking)
        {
            return;
        }

        if (!vidPlayer.isPlaying)
        {
            vidPlayer.clip = vidClips[Image.DatabaseIndex];
            vidPlayer.Play();
        }

        transform.localScale = new Vector3(Image.ExtentX, Image.ExtentZ, 1f);
    }
}
No console errors are shown, but no videos appear.
Be sure that your camera's position is at the origin.
Is your video player's GameObject active? (video.SetActive(true))
My working solution:
using GoogleARCore;
using UnityEngine;
using UnityEngine.Video;

public class AugmentedImageVisualizer : MonoBehaviour
{
    private const string TAG = "AugmentedImageVisualizer";

    public AugmentedImage image;
    public GameObject video;
    public VideoClip[] videoClips;

    private VideoPlayer videoPlayer;

    private void Start() {
        videoPlayer = video.GetComponent<VideoPlayer>();
    }

    private void Update() {
        if (Session.Status != SessionStatus.Tracking) return;

        if (image == null || image.TrackingState != TrackingState.Tracking) {
            // Hide the video object while the image is not tracked.
            video.SetActive(false);
            return;
        }

        UpdateVideo();
    }

    private void UpdateVideo() {
        if (!videoPlayer.isPlaying) {
            videoPlayer.clip = videoClips[image.DatabaseIndex];
            videoPlayer.Play();
            video.SetActive(true);
        }

        transform.localScale = new Vector3(image.ExtentX, 1, image.ExtentZ);
    }
}
Don't forget to add a VideoPlayer component to your GameObject.
EDIT: I have edited the answer to give more explanation:
I used the augmented image examples that come with GoogleARCore to create the AR. Here the controller needs an augmented image visualizer prefab. This prefab is a Quad (right-click in the Hierarchy, then 3D Object > Quad) that I moved into the Prefabs folder (this creates a prefab from the quad). This quad/prefab has a VideoPlayer (added in the Inspector). The quad (prefab) also has a script (AugmentedImageVisualizer) which contains your code snippet. So in the Inspector (with the AugmentedImageVisualizer script) the quad already exposes the video clips where you can set your videos.
In the Hierarchy there is an ARCore Device, and inside it is the camera. This camera has a Tracked Pose Driver (set it in the Inspector) with Pose Source: Color Camera, and an ARCore Background Renderer script.
I found two similar videos on YouTube.
This one shows a single video: https://www.youtube.com/watch?v=yjj5dV2v9Fs
This one shows multiple videos on multiple reference images:
https://www.youtube.com/watch?v=GkzMFNmvums
The second video does the same thing and contains your code as well, so it is very descriptive regarding your question.
The second one was tricky for me: the guy in the video created a quad that he renamed to AugmentedImageVisualizer and then placed it in the Prefabs folder. Once I realized this, my videos appeared on the reference images.
I used Unity 2019.3.15 and arcore-unity-sdk-1.17.0.unitypackage

Photon objects not syncing - Unity

I am working on a multiplayer third-person game. I am using Motion Controller for animations and Photon as the network manager. I have a problem: when I connect and join the room, the other players don't move on the other players' screens; they move only on their own devices. Here is what I enable and deactivate:
using UnityEngine;
using com.ootii.Input;
using com.ootii.Actors;
using com.ootii.Actors.AnimationControllers;

public class netView : Photon.MonoBehaviour {

    public Camera cam;
    public UnityInputSource uis;
    public GameObject canvas;
    public ActorController ac;
    public MotionController mc;

    // Use this for initialization
    void Start () {
        if (photonView.isMine) {
            cam.enabled = true;
            uis._IsEnabled = true;
            canvas.SetActive(true);
            ac.enabled = true;
            mc.enabled = true;
        } else {
            cam.enabled = false;
            uis._IsEnabled = false;
            canvas.SetActive(false);
            ac.enabled = false;
            mc.enabled = false;
        }
    }
}
Here is a video: https://youtu.be/mOaAejsVX04. In it I am playing in the Editor and on my phone. On my device I move around, but the Editor player does not move. Likewise in the Editor, the player from the device just stays there and doesn't move, while on the phone it is moving around.
For input I am using the CrossPlatformManager class. How can I fix it?
In your case I think the problem is that you don't synchronize the transform to begin with. You need either a PhotonTransformView component attached to your network object, with the PhotonView observing that PhotonTransformView, or to manually write and read the transform to that network object's stream inside your network behaviour.
I strongly encourage you to go through the basics tutorial, which shows all of the above techniques step by step:
https://doc.photonengine.com/en-us/pun/current/demos-and-tutorials/pun-basics-tutorial/player-networking#trans_sync
https://doc.photonengine.com/en-us/pun/current/demos-and-tutorials/pun-basics-tutorial/player-networking#beams
It doesn't matter which input technique you use; what matters is the synchronization of the transform.
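For the manual route, here is a minimal sketch of a stream-synced transform in PUN 1 style (matching the Photon.MonoBehaviour above); the component name and the interpolation factor are illustrative, and the script must be added to the PhotonView's observed components:

using UnityEngine;

// Add this to the networked player prefab and drag it into the
// PhotonView's "Observed Components" list so it gets serialized.
public class NetTransformSync : Photon.MonoBehaviour {

    private Vector3 targetPosition;
    private Quaternion targetRotation;

    void Awake () {
        targetPosition = transform.position;
        targetRotation = transform.rotation;
    }

    void Update () {
        // Remote copies interpolate toward the last received state.
        if (!photonView.isMine) {
            transform.position = Vector3.Lerp(transform.position, targetPosition, Time.deltaTime * 10f);
            transform.rotation = Quaternion.Slerp(transform.rotation, targetRotation, Time.deltaTime * 10f);
        }
    }

    void OnPhotonSerializeView (PhotonStream stream, PhotonMessageInfo info) {
        if (stream.isWriting) {
            // The owner writes its current transform to the stream.
            stream.SendNext(transform.position);
            stream.SendNext(transform.rotation);
        } else {
            // Remote copies read the owner's transform from the stream.
            targetPosition = (Vector3)stream.ReceiveNext();
            targetRotation = (Quaternion)stream.ReceiveNext();
        }
    }
}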

Android, Dronekit

I am developing an application to manage a Rover with a joystick. I need to create a remote control that sets the rover's direction of movement (left, right, forward, back). I'm trying to do it with the following code:
ControlApi.getApi(mDrone).enableManualControl(true, new ControlApi.ManualControlStateListener() {
    @Override
    public void onManualControlToggled(boolean b) {
    }
});

ControlApi.getApi(mDrone).manualControl(0, -0.5f, 0, new AbstractCommandListener() {
    @Override
    public void onSuccess() {
    }

    @Override
    public void onError(int i) {
    }

    @Override
    public void onTimeout() {
    }
});
But I'm getting error 3 in the onError block. Also, before that, I can't enable manual control; it always returns false. Can someone tell me what I'm doing wrong, or maybe guide me in the right direction?
I would really appreciate it if someone could help me. Regards!
UPDATED
Still no result
Now I'm trying to use
MavLinkCommands.sendManualControl(drone, x, y, z, r, button, new ICommandListener(){...});
But I can't run this method because drone is null.
I create it like this:
MavLinkDroneManager mavLinkDroneManager =
new MavLinkDroneManager(mContext, mConnectionParameter, mHandler);
MavLinkDrone drone = mavLinkDroneManager.getDrone();
and the getDrone method always returns null.
SOLVED
If someone encounters a similar problem, the solution is quite simple. In short, you need to read the Python documentation in more detail :)
All you need to do is override the channels using the MAVLink command msg_rc_channels_override(), and the solution code looks like this:
VehicleApi.getApi(mDrone).setVehicleMode(VehicleMode.ROVER_MANUAL); // be sure the vehicle is in manual mode
VehicleApi.getApi(mDrone).arm(true); // and the vehicle must be armed
By default, channel 1 is left-right and channel 3 is forward-back.
If it does not work, check on the drone which channels the controls are actually connected to.
To move the vehicle left, set channel 1 to 2000; right - 1000.
To move forward, set channel 3 to 2000; back - 1000.
I tested it on ArduRover but I think it should work with ArduPilot and ArduCopter.
msg_rc_channels_override rc_override = new msg_rc_channels_override();
rc_override.chan1_raw = 1000; // right; 2000 = left
rc_override.chan3_raw = 1000; // back; 2000 = forward
rc_override.target_system = 0;
rc_override.target_component = 0;
ExperimentalApi.getApi(mDrone).sendMavlinkMessage(new MavlinkMessageWrapper(rc_override));
For more instructions, check this link.
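As a usage sketch, the whole flow can be wrapped in a small helper. The helper name is hypothetical, and the ~1500 "neutral" value is my assumption; verify it against your vehicle's RC calibration:

// Hypothetical helper: put the rover in manual mode, arm it, and drive forward
// by overriding RC channel 3 (throttle) while keeping channel 1 (steering) centered.
// In practice you would set the mode and arm only once, not on every call.
private void driveForward(Drone drone) {
    VehicleApi.getApi(drone).setVehicleMode(VehicleMode.ROVER_MANUAL); // manual mode required
    VehicleApi.getApi(drone).arm(true);                                // vehicle must be armed

    msg_rc_channels_override rcOverride = new msg_rc_channels_override();
    rcOverride.chan1_raw = 1500; // steering: 2000 = left, 1000 = right, ~1500 = centered (assumed)
    rcOverride.chan3_raw = 2000; // throttle: 2000 = forward, 1000 = back
    rcOverride.target_system = 0;
    rcOverride.target_component = 0;

    ExperimentalApi.getApi(drone).sendMavlinkMessage(new MavlinkMessageWrapper(rcOverride));
}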

Unity AR Multiple Image Target but not simultaneously

I am building an app that will recognize paintings and display info about them with the help of AR.
I need to use multiple image targets, but not simultaneously; an image target should only be activated when it is detected by the AR camera.
* I've tried creating many scenes with an image target in each, but I can't use different image targets; it keeps reverting to only one image target.
This is what you can see in the menu:
Main menu
Start AR camera (this part should have many image targets, but not detect them simultaneously)
Help (how to use the app)
Exit
* I'm using Vuforia to create the AR.
Thanks in advance to those who will help me.
This is the image target and its database:
Run the multi-target scene sample. There are three targets (stone, wood and road).
Each contains the TrackableBehaviour component.
Grab it and disable it in Start. If you do it in Awake, it will most likely be set back to active in the Awake of the component itself or via some other manager.
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class TrackerController : MonoBehaviour
{
    private IDictionary<string, TrackableBehaviour> trackers = null;

    private void Start()
    {
        this.trackers = new Dictionary<string, TrackableBehaviour>();
        var found = FindObjectsOfType<TrackableBehaviour>();
        foreach (TrackableBehaviour tb in found)
        {
            this.trackers.Add(tb.TrackableName, tb);
            tb.enabled = false;
        }
    }

    public bool SetTracker(string name, bool value)
    {
        if (string.IsNullOrEmpty(name) == true) { return false; }
        if (this.trackers.ContainsKey(name) == false) { return false; }
        this.trackers[name].enabled = value;
        return true;
    }
}
The Start method finds all TrackableBehaviour instances and places them in a dictionary for easy access. The setter method returns a boolean; you can change it to throw an exception or whatever else suits you.
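A quick usage sketch, assuming the trackable names match the targets in your Vuforia database (the names below are just the sample's):

// Somewhere in your own code, e.g. when the user picks a painting or a target is detected:
TrackerController controller = FindObjectOfType<TrackerController>();

// Enable only the target you want right now and keep the others off.
controller.SetTracker("stone", true);
controller.SetTracker("wood", false);
controller.SetTracker("road", false);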

"Auto-Focus" feature for the "Camera" module of my Android application

I have been working with the camera module for my application for a few days.
I have customized the complete camera module instead of invoking the device's built-in camera through an intent. I have used the callbacks for shutter, picture, etc.
Now I am trying to add the ZOOM and AUTO-FOCUS features to this customized camera. Can anybody please let me know how to add the ZOOM and AUTO-FOCUS features, along with the required permissions that should be declared in the manifest file? I hope to get help as soon as possible.
A couple of observations from my end:
1) Camera.autoFocus is a one-time call, applicable when Camera.getParameters().getFocusMode() is either FOCUS_MODE_AUTO or FOCUS_MODE_MACRO; in other cases you don't need to invoke the autoFocus method. See the API docs and follow them devotedly.
2) By "one-time call" I mean that this method does not register the AutoFocusCallback instance to receive notifications continuously.
3) Moreover, FOCUS_MODE_AUTO isn't even a dynamic, continuous focus constant. Instead, you might want to use FOCUS_MODE_EDOF or FOCUS_MODE_CONTINUOUS_PICTURE, depending on the API level and the SDK version that you are using and building for.
4) There is every possibility that the actual device camera may not support some FOCUS_MODE constants, such as EDOF or INFINITY. Always make sure, when you are creating the camera parameters, that you check getSupportedFocusModes() and use only the applicable constants (see the sketch after this list).
5) Calling camera.autoFocus just before camera.takePicture can bloat the resulting JPEG byte array in the PictureCallback to at least 50% more than its original size. Not calling autoFocus() explicitly may sometimes cause the previous autoFocus() to end at a very low resolution, which may produce a JPEG byte array of only about 10K bytes, resulting in a null image bitmap from BitmapFactory.
6) Regarding auto-focus permissions, see the API docs.
7) Regarding zoom, it is not as complicated as implementing the auto-focus feature. Depending on screen interaction such as a slider, or hardware keys such as the volume keys, you could implement an OnZoomChangeListener that you register with the Camera as soon as the Camera instance is received from open(int cameraId).
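A minimal sketch of the focus-mode check from point 4, using the old android.hardware.Camera API that this question is about (the helper class and method names are illustrative):

import android.hardware.Camera;

import java.util.List;

public class FocusHelper {

    // Pick a continuous focus mode only if the device actually supports it,
    // falling back to plain auto-focus otherwise.
    public static void applyBestFocusMode(Camera camera) {
        Camera.Parameters params = camera.getParameters();
        List<String> supported = params.getSupportedFocusModes();

        if (supported.contains(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
        } else if (supported.contains(Camera.Parameters.FOCUS_MODE_AUTO)) {
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_AUTO);
        }

        camera.setParameters(params);
    }
}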
For zoom (2x):
Camera.Parameters parameters = camera.getParameters();
parameters.set("zoom", "2.0");
parameters.set("taking-picture-zoom", "20");
For API level > 5, use the APIs such as setZoom(), etc.
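A minimal sketch of that route, with the 2x target treated as illustrative:

// Zoom using the indexed zoom API on devices that support it.
Camera.Parameters params = camera.getParameters();
if (params.isZoomSupported()) {
    // setZoom() takes an index between 0 and getMaxZoom(); halfway is used here
    // as a rough stand-in for 2x; use getZoomRatios() to find the exact index.
    params.setZoom(params.getMaxZoom() / 2);
    camera.setParameters(params);
}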
For auto-focusing (taken from ZXing):
public final boolean onKeyDown(int keyCode, KeyEvent event) {
    synchronized (this) {
        if (!bIsPictureTaking) {
            if (keyCode == KeyEvent.KEYCODE_DPAD_CENTER || keyCode == KeyEvent.KEYCODE_CAMERA) {
                if (!bIsPictureTaking && !bIsAutoFocusStarted) {
                    YourAutoFocusCallback autoFocusCallBack = new YourAutoFocusCallback();
                    camera.autoFocus(autoFocusCallBack);
                    ...
final class YourAutoFocusCallback implements Camera.AutoFocusCallback {

    private static final long AUTOFOCUS_INTERVAL_MS = 1500L;

    private final CameraConfigurationManager configManager;
    private boolean reinitCamera;
    private Handler autoFocusHandler;
    private int autoFocusMessage;

    YourAutoFocusCallback(CameraConfigurationManager configManager) {
        this.configManager = configManager;
    }

    void setHandler(Handler autoFocusHandler, int autoFocusMessage) {
        this.autoFocusHandler = autoFocusHandler;
        this.autoFocusMessage = autoFocusMessage;
    }

    public void onAutoFocus(boolean success, Camera camera) {
        if (autoFocusHandler != null) {
            // Notify the caller after a delay so it can request focus again.
            Message message = autoFocusHandler.obtainMessage(autoFocusMessage, success);
            autoFocusHandler.sendMessageDelayed(message, AUTOFOCUS_INTERVAL_MS);
            autoFocusHandler = null;
            configManager.setDesiredCameraParameters(camera);
        }
    }
}
