I am working on a multiplayer third person game. I am using Motion Controller for animations and Photon as the network manager. I have a problem: when I connect and join a room, the other players don't move on each other's screens; each player only moves on their own device. Here is the script where I enable components for the local player and disable them for everyone else:
using UnityEngine;
using com.ootii.Input;
using com.ootii.Actors;
using com.ootii.Actors.AnimationControllers;

public class netView : Photon.MonoBehaviour {

    public Camera cam;
    public UnityInputSource uis;
    public GameObject canvas;
    public ActorController ac;
    public MotionController mc;

    // Enable camera, input and controllers only on the owning client;
    // disable them on every remote copy of the player.
    void Start () {
        if (photonView.isMine) {
            cam.enabled = true;
            uis._IsEnabled = true;
            canvas.SetActive (true);
            ac.enabled = true;
            mc.enabled = true;
        } else {
            cam.enabled = false;
            uis._IsEnabled = false;
            canvas.SetActive (false);
            ac.enabled = false;
            mc.enabled = false;
        }
    }
}
Here is a video: https://youtu.be/mOaAejsVX04 . In it I am playing in the editor and on my phone at the same time. On my device I move around, but the editor player never sees that movement; likewise, in the editor the player from the device just stands still, even though it is moving around on the phone.
For input I am using the CrossPlatformManager class. How can I fix this?
In your case I think the problem is that you don't synchronize the transform to begin with. You need either a PhotonTransformView component attached to your networked object, with the PhotonView observing that PhotonTransformView, or you need to manually write to and read from that networked object's stream inside your own network behaviour.
I strongly encourage you to go through the basics tutorial, which shows all of the above techniques step by step:
https://doc.photonengine.com/en-us/pun/current/demos-and-tutorials/pun-basics-tutorial/player-networking#trans_sync
https://doc.photonengine.com/en-us/pun/current/demos-and-tutorials/pun-basics-tutorial/player-networking#beams
It doesn't matter which input technique you use; what matters is the synchronization of the transform.
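If you go the manual route, a minimal sketch could look like the following (assuming PUN 1, the same API style as the Photon.MonoBehaviour script above; the class name and the 10f smoothing factor are just placeholders):

using UnityEngine;

public class NetTransformSync : Photon.MonoBehaviour
{
    private Vector3 networkPosition;
    private Quaternion networkRotation;

    // Called by PUN only if the PhotonView observes this component.
    void OnPhotonSerializeView(PhotonStream stream, PhotonMessageInfo info)
    {
        if (stream.isWriting)
        {
            // Owning client: send our transform to everyone else.
            stream.SendNext(transform.position);
            stream.SendNext(transform.rotation);
        }
        else
        {
            // Remote copy: receive and store the latest values.
            networkPosition = (Vector3)stream.ReceiveNext();
            networkRotation = (Quaternion)stream.ReceiveNext();
        }
    }

    void Update()
    {
        if (!photonView.isMine)
        {
            // Smoothly move the remote copy toward the last received state.
            transform.position = Vector3.Lerp(transform.position, networkPosition, Time.deltaTime * 10f);
            transform.rotation = Quaternion.Slerp(transform.rotation, networkRotation, Time.deltaTime * 10f);
        }
    }
}

Attach it to the player prefab and add it to the PhotonView's Observed Components list; otherwise OnPhotonSerializeView is never called.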
I am using Flutter and have blocked normal apps from recording my app's screen.
Here is the code:
getWindow().setFlags(WindowManager.LayoutParams.FLAG_SECURE,
WindowManager.LayoutParams.FLAG_SECURE);
The problem is that some phones come with pre-installed screen recording apps, and the code above can't stop them from recording the screen.
So is there any other way to stop these apps from recording the screen?
In other answers I saw that this was not possible, but there are some apps on the Play Store which successfully achieve it, so there must be a way.
I was thinking: since screen recording apps are drawn over other apps, they might be detectable through code, and we could then show a pop-up while a screen recording app is drawn over ours.
Is it possible? If yes, how can we detect that another app is drawn over our app?
Thanks.
As far as I'm aware, there is no official way to universally prevent screen grabs/recordings.
This is because FLAG_SECURE just prevents capturing on non-secure displays:
Window flag: treat the content of the window as secure, preventing it from appearing in screenshots or from being viewed on non-secure displays.
But apps that have elevated permissions can create a secure virtual display and use screen mirroring to record your screen, which does not respect the secure flag.
Read this article for more info:
That would mean that an Android device casting to a DRM-protected display like a TV would always display sensitive screens, since the concept of secure really means “copyrighted”. For apps, Google forestalled this issue by preventing apps not signed by the system key from creating virtual “secure” displays
Regarding how some apps still manage to do it, you could try these:
Check whether any external/virtual displays are connected, and hide/show your content based on that (see this; a rough sketch follows below).
Don't show your content on rooted devices.
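For the first point, something along these lines might work (a hypothetical helper of my own, not part of any Flutter plugin; it only relies on the standard DisplayManager API):

import android.content.Context;
import android.hardware.display.DisplayManager;
import android.view.Display;

public final class DisplayCheck {

    // Returns true if any display other than the built-in one is attached,
    // e.g. a virtual display created by a mirroring/recording app.
    public static boolean hasNonDefaultDisplay(Context context) {
        DisplayManager dm = (DisplayManager) context.getSystemService(Context.DISPLAY_SERVICE);
        for (Display display : dm.getDisplays()) {
            if (display.getDisplayId() != Display.DEFAULT_DISPLAY) {
                return true;
            }
        }
        return false;
    }
}

You could poll this (or register a DisplayManager.DisplayListener) and hide sensitive content whenever it returns true.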
Adding this code to my MainActivity.java solved the problem:
@Override
protected void onCreate(@Nullable Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    if (!this.setSecureSurfaceView()) {
        Log.e("MainActivity", "Could not secure the MainActivity!");
    }
}

// Walks down the Flutter view hierarchy (content -> splash view -> flutter view
// -> surface view), marks the SurfaceView as secure, and sets FLAG_SECURE
// (0x2000 = 8192) on the window.
private boolean setSecureSurfaceView() {
    ViewGroup content = (ViewGroup) this.findViewById(android.R.id.content);
    if (!this.isNonEmptyContainer((View) content)) {
        return false;
    }
    View splashView = content.getChildAt(0);
    if (!this.isNonEmptyContainer(splashView)) {
        return false;
    }
    View flutterView = ((ViewGroup) splashView).getChildAt(0);
    if (!this.isNonEmptyContainer(flutterView)) {
        return false;
    }
    View surfaceView = ((ViewGroup) flutterView).getChildAt(0);
    if (!(surfaceView instanceof SurfaceView)) {
        return false;
    }
    ((SurfaceView) surfaceView).setSecure(true);
    this.getWindow().setFlags(8192, 8192);
    return true;
}

private boolean isNonEmptyContainer(View view) {
    if (!(view instanceof ViewGroup)) {
        return false;
    }
    return ((ViewGroup) view).getChildCount() >= 1;
}
Import the required classes.
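For reference, these are the imports the snippet above relies on (assuming AndroidX; on older support-library projects the @Nullable annotation comes from android.support.annotation instead):

import android.os.Bundle;
import android.util.Log;
import android.view.SurfaceView;
import android.view.View;
import android.view.ViewGroup;
import androidx.annotation.Nullable;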
I have an application that occasionally speaks via the system's text-to-speech (TTS) engine, but if a background service (like an audiobook or music stream) is playing at the same time, the two overlap.
I would like to pause the media, play my TTS, then unpause the media. I've looked, but can't find any solutions.
I believe that if I were to play actual audio from my app, it would pause the other media until my playback was complete (if I understand what I've found correctly), but TTS doesn't seem to have the same effect. The speech is totally dynamic, so I can't just record all the options.
Using the latest Xamarin.Forms, I've looked into all the media NuGet packages I could find, and they all seem pretty centered on controlling media from files.
My only potential thought (which I don't like) is to play an empty audio file while the TTS is running, but I would like a more elegant solution if one exists.
(I don't care about iOS at the moment, so an Android-only solution is okay. And if it's native (Java/Kotlin), I can convert/incorporate it.)
I agree with what rbonestell said: you can use DependencyService and audio focus to achieve it. First, create an interface in the PCL:
public interface IControl
{
void StopBackgroundMusic();
}
Before you play the audio, execute the DependencyService call with the following code:
private void Button_Clicked(object sender, EventArgs e)
{
DependencyService.Get<IControl>().StopBackgroundMusic();
//record the audio
}
In the Android project, create a StopMusicService to implement it:
// Usings assumed: Android.Content, Android.Media, Android.Runtime and
// Xamarin.Forms (plus the namespace that declares IControl in your PCL).
[assembly: Dependency(typeof(StopMusicService))]
namespace TTSDemo.Droid
{
    public class StopMusicService : IControl
    {
        AudioManager audioMan;
        AudioManager.IOnAudioFocusChangeListener listener;

        public void StopBackgroundMusic()
        {
            // Requesting audio focus asks the current media app to pause (or duck).
            audioMan = (AudioManager)Android.App.Application.Context.GetSystemService(Context.AudioService);
            listener = new MyAudioListener(this);
            var ret = audioMan.RequestAudioFocus(listener, Stream.Music, AudioFocus.Gain);
        }
    }

    internal class MyAudioListener : Java.Lang.Object, AudioManager.IOnAudioFocusChangeListener
    {
        private StopMusicService stopMusicService;

        public MyAudioListener(StopMusicService stopMusicService)
        {
            this.stopMusicService = stopMusicService;
        }

        public void OnAudioFocusChange([GeneratedEnum] AudioFocus focusChange)
        {
            // Nothing to do here for this simple case.
        }
    }
}
Thanks to Leon Lu - MSFT, I was able to head in the right direction. I took his implementation (which had some deprecated calls to the Android API) and updated it for what I needed.
I'll be doing a little more work to make sure it's stable and functional, and I'll see if I can clean it up a little too. But here's what works on my first test:
// Usings assumed: System, System.Threading.Tasks, Android.Content,
// Android.Media and Xamarin.Forms.
[assembly: Dependency(typeof(MediaService))]
namespace ...Droid.Services
{
    public class MediaService : IMediaService
    {
        // Requests transient audio focus (other apps duck or pause), runs the
        // supplied task, then abandons focus so the background media resumes.
        public async Task PauseBackgroundMusicForTask(Func<Task> onFocusGranted)
        {
            var manager = (AudioManager)Android.App.Application.Context.GetSystemService(Context.AudioService);
            var builder = new AudioFocusRequestClass.Builder(AudioFocus.GainTransientMayDuck);
            var focusRequest = builder.Build();
            var ret = manager.RequestAudioFocus(focusRequest);
            if (ret == AudioFocusRequest.Granted)
            {
                await onFocusGranted?.Invoke();
                manager.AbandonAudioFocusRequest(focusRequest);
            }
        }
    }
}
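A call site from shared code might then look roughly like this (IMediaService is the interface MediaService is assumed to implement, and TextToSpeech here is the Xamarin.Essentials helper):

// Hypothetical usage: duck/pause background audio, speak, then let the
// MediaService abandon focus so the music resumes.
await DependencyService.Get<IMediaService>().PauseBackgroundMusicForTask(async () =>
{
    await TextToSpeech.SpeakAsync("Turn left in 200 meters.");
});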
I am building an app that will recognize paintings and display information about them with the help of AR.
I need multiple image targets, but not active simultaneously; a target's content should only be shown when that image target is detected by the AR camera.
I've tried creating many scenes with an image target in each, but I can't call different image targets; it keeps reverting to only one image target.
This is what you can see in the menu:
Main menu
Start AR camera (this part should have many image targets, but not detected simultaneously)
Help (how to use the app)
Exit
I'm using Vuforia for the AR.
Thanks in advance to those who will help me.
(A screenshot of the image target and its database was attached on imgur.)
Run the multi-target scene sample. There are three targets (stone, wood and road).
Each contains the TrackableBehaviour component.
Grab it and disable it in Start. If you do it in Awake, it will be set back to active, most likely in the Awake of the component itself or via some other manager.
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class TrackerController : MonoBehaviour
{
    private IDictionary<string, TrackableBehaviour> trackers = null;

    private void Start()
    {
        this.trackers = new Dictionary<string, TrackableBehaviour>();
        // Collect every TrackableBehaviour in the scene and disable it,
        // so no target is active until SetTracker is called.
        foreach (TrackableBehaviour tb in FindObjectsOfType<TrackableBehaviour>())
        {
            this.trackers.Add(tb.TrackableName, tb);
            tb.enabled = false;
        }
    }

    public bool SetTracker(string name, bool value)
    {
        if (string.IsNullOrEmpty(name)) { return false; }
        if (this.trackers.ContainsKey(name) == false) { return false; }
        this.trackers[name].enabled = value;
        return true;
    }
}
Start finds all the TrackableBehaviour components and places them in a dictionary for easy access. The setter method returns a boolean; you can change it to throw an exception or whatever else you prefer.
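For example, a menu handler could enable just the painting the user picked (the lookup via FindObjectOfType and the target name below are placeholders for your own scene setup):

// Hypothetical usage: activate a single image target by its Trackable name.
var controller = FindObjectOfType<TrackerController>();
if (!controller.SetTracker("MonaLisa_Target", true))
{
    Debug.LogWarning("No trackable named MonaLisa_Target was found.");
}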
I am trying to use a LookToWalk script in my Unity VR app that should run on my Daydream View. In "Game" mode, previewing the changes, everything works as expected (I configured the script to walk forward once the user's camera faces 30.0 degrees downwards or more).
However, when I build the Daydream app and install it on my Google Pixel, CharacterController.SimpleMove doesn't seem to work any more.
The logs show that the 30.0 degree threshold is triggered as expected, but no movement happens on the Daydream.
Do you know why this could be happening? It seems really strange that it runs in the "emulator" but not on the real device.
using UnityEngine;
using System.Collections;

public class GVRLookWalk : MonoBehaviour {

    public Transform vrCamera;
    public float toggleAngle = 30.0f;
    public float speed = 3.0f;

    private bool shouldWalk;
    private CharacterController cc;

    void Start () {
        cc = GetComponent<CharacterController>();
    }

    // Walk forward while the camera pitches down past the toggle angle.
    void Update () {
        if (vrCamera.eulerAngles.x >= toggleAngle && vrCamera.eulerAngles.x < 90.0f) {
            shouldWalk = true;
        } else {
            shouldWalk = false;
        }
        if (shouldWalk) {
            Vector3 forward = vrCamera.TransformDirection (Vector3.forward);
            cc.SimpleMove (forward * speed);
        }
    }
}
Is the camera a child of another transform? You cannot move the camera directly: "you cannot move the camera directly in Unity. Instead, the camera must be a child of another GameObject, and changes to the position and rotation must be applied to the parent's Transform." In practice that means the CharacterController (and this script) should sit on a parent GameObject, with the VR camera as a child that only supplies the look direction.
https://unity3d.com/learn/tutorials/topics/virtual-reality/movement-vr
I need to detect whether the phone has a front-facing camera, and if so, calculate its megapixels. The same goes for a rear-facing camera.
I know how to get the megapixels of a Camera object, but I don't know how to check for the other things.
P.S.: It would also be nice if you know a way to check whether the camera has a flash, and other useful statistics about the camera.
I always try to create helpers.
Check if you have a front camera:
public static boolean checkCameraFront(Context context) {
    // FEATURE_CAMERA_FRONT is reported by devices with a front-facing camera.
    return context.getPackageManager().hasSystemFeature(PackageManager.FEATURE_CAMERA_FRONT);
}
Check if you have any camera on your device:
public static boolean checkCameraRear() {
    // Any camera at all; combine with checkCameraFront to distinguish front from rear.
    int numCamera = Camera.getNumberOfCameras();
    return numCamera > 0;
}
http://developer.android.com/reference/android/hardware/Camera.html#getNumberOfCameras() , introduced in API level 9, gets you the number of cameras.
http://developer.android.com/reference/android/hardware/Camera.CameraInfo.html contains information about a camera's facing direction.
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#getPictureSize() gives the picture size, from which the megapixel count can be computed.
http://developer.android.com/reference/android/hardware/Camera.Parameters.html#getFlashMode() returns null if there is no flash.
Many other parameters can be read from the Camera object too.
http://developer.android.com/reference/android/hardware/Camera.html has step-by-step instructions for using the camera; you can follow them if you understand any object-oriented language.
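Putting the picture-size link above to use, a small helper along these lines (my own sketch using the legacy android.hardware.Camera API; getMegapixels is not a framework method) can approximate the megapixel count of a given camera:

// Hypothetical helper: opens the camera, finds the largest supported picture
// size, and returns its pixel count in megapixels.
public static double getMegapixels(int cameraId) {
    Camera camera = Camera.open(cameraId);
    try {
        double maxPixels = 0;
        for (Camera.Size size : camera.getParameters().getSupportedPictureSizes()) {
            maxPixels = Math.max(maxPixels, (double) size.width * size.height);
        }
        return maxPixels / 1000000.0;
    } finally {
        camera.release();
    }
}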