I'm having a bit of a problem with OnTriggerEnter when I'm using my mobile as a test device.
I have some touch code that successfully lets me drag objects around the screen.
The objects then collide with other objects on the screen.
This was working perfectly until I turned the objects into prefabs. (I need to do this because the objects are randomly generated at runtime.)
Now I can still move the objects around the screen, but they no longer collide with the other objects, which are also prefabs. It does, however, still work fine when I run it in the Unity editor on my laptop.
All my objects have colliders with Is Trigger checked, and the moving objects have rigidbodies.
OnTriggerEnter code:
public void OnTriggerEnter(Collider other)
{
    Debug.Log("here");
    Debug.Log(this.gameObject.tag + " is this");
    Debug.Log(other.gameObject.tag + " is other");
    if (this.gameObject.tag == other.gameObject.tag)
    {
        Debug.Log("here2");
        Reftomanager.miniGameScore++;
        Reftomanager.updateScore();
        Destroy(this.gameObject);
    }
}
Touch code:
if (Input.touchCount > 0)
{
    Touch touch = Input.GetTouch(0);
    switch (touch.phase)
    {
        case TouchPhase.Began:
            Ray ray = Camera.main.ScreenPointToRay(touch.position);
            if (Physics.Raycast(ray, out hit))
            {
                thisObject = hit.collider.gameObject;
                touchPos = Camera.main.ScreenToWorldPoint(touch.position);
                if (thisObject.name != "circle")
                {
                    draggingMode = true;
                }
            }
            break;
        case TouchPhase.Moved:
            if (draggingMode)
            {
                touchPos = Camera.main.ScreenToWorldPoint(touch.position);
                newCentre = touchPos;
                thisObject.transform.position = touchPos;
            }
            break;
        case TouchPhase.Ended:
            draggingMode = false;
            break;
    }
}
I'm completely stumped, so any help would be amazing.
Thanks
I just got this same error recently. I suggest using
if (other.gameObject.CompareTag("YourTagName"))
Also, if you recently added or edited any tags: I found that Unity has a bug where your tags will not register in your Android build unless you restart Unity.
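For example, here is a minimal sketch of the trigger handler using CompareTag (it keeps the question's Reftomanager calls; "YourTagName" is a placeholder):

public void OnTriggerEnter(Collider other)
{
    // CompareTag logs an error if the tag isn't defined in the Tag Manager,
    // which also helps surface the missing-tag build issue described above
    if (other.gameObject.CompareTag("YourTagName"))
    {
        Reftomanager.miniGameScore++;
        Reftomanager.updateScore();
        Destroy(gameObject);
    }
}

If you need the question's matching-tags check, other.CompareTag(gameObject.tag) keeps that behaviour.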
GL.
Since you're using 3D colliders, is it possible that the position you are assigning them is different? Touch.position is a Vector2, which means ScreenToWorldPoint would be using 0 for z. If you are using a Vector3 with a non-zero z value to get the world point in the editor (standalone input), it could give you a different value even if x and y are the same.
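To rule that out, you could pin the z component to the camera-to-object distance before converting. A small sketch for the question's Moved case, where dragDepth is a placeholder for however far the draggable objects sit in front of the camera:

Vector3 screenPos = touch.position;   // Vector2 converts implicitly, leaving z = 0
screenPos.z = dragDepth;              // e.g. distance from the camera to the drag plane
touchPos = Camera.main.ScreenToWorldPoint(screenPos);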
Another possibility is that a platform-specific error is happening somewhere else in the code, upon object instantiation. Your movement code would still work fine if it isn't in the same MonoBehaviour.
If you have an Android, you can use Android Monitor with the Unity tag to check for error messages.
I'm using Unity 3D to develop a 3D defense game.
I made a monster that can take aggro from intruders on the map, keeping them occupied while it is vulnerable to other attacks. I wanted the monster to become movable once it's touched.
Process: touch the monster → touch the destination → the monster moves to the destination
I wanted my monster to be able to move through the entire map, but to take the intruders' aggro
only when they're on the road. I'm currently using a raycast to make this happen, and I'm stuck.
It works just fine in the Unity editor, but in a build played on the phone, the monster can't recognize the touch, or can't recognize the point where I touched.
if (IsClickMonster)
{
    if (Input.GetMouseButtonDown(0))
    {
        Ray MoveClick = cam.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(MoveClick, out ClickedPosition, float.MaxValue, whatCanbeClickedOn)
            && ClickedPosition.collider.gameObject.CompareTag("monster") == false)
        {
            Debug.Log("Monster is Moving!");
            clickEffect.transform.position = new Vector3(ClickedPosition.point.x, ClickedPosition.point.y + 1.0f, ClickedPosition.point.z);
            StopCoroutine("MonsterMove");
            IsMoving = true;
            StartCoroutine("MonsterMove");
        }
        else
        {
            clickEffect.SetActive(false);
            StopCoroutine("MonsterMove");
            IsClickMonster = false;
            return;
        }
    }
}
else
{
    return;
}
↑ Picture of moving the Skull Soldier. The white dot (clickEffect) is where the player touched (ClickedPosition).
Physics.Raycast(MoveClick, out ClickedPosition, float.MaxValue, whatCanbeClickedOn)
Even though I'm using the "whatCanbeClickedOn" LayerMask, I wonder what I can do to keep this from happening in the mobile build.
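One thing that may be worth trying (a sketch, not a confirmed fix): on the device, read the touch directly instead of relying on Unity's mouse emulation, so the raycast uses the actual touch position. cam, ClickedPosition, and whatCanbeClickedOn are the same fields as in the snippet above:

if (IsClickMonster && Input.touchCount > 0)
{
    Touch touch = Input.GetTouch(0);
    if (touch.phase == TouchPhase.Began)
    {
        // raycast from the real touch position rather than the emulated mouse position
        Ray moveClick = cam.ScreenPointToRay(touch.position);
        if (Physics.Raycast(moveClick, out ClickedPosition, float.MaxValue, whatCanbeClickedOn)
            && !ClickedPosition.collider.gameObject.CompareTag("monster"))
        {
            // ... same move logic as in the mouse branch
        }
    }
}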
In my experience with Android, when something doesn't work well on Android, it could be either that you have some runtime error or warning that just didn't happen to trigger in the Unity editor, or that the phone can't handle the game (too many polygons or lights), so it starts doing odd things. Try checking the warnings in Unity, and of course see if any errors appear too. That's my best bet.
I'm stretching my very limited ARCore knowledge.
My question is similar (but different) to this question.
I want to work out whether my device's camera node intersects/overlaps with any of my other nodes, but I've not had any luck so far.
I'm trying something like this (the camera is another node):
scene.setOnUpdateListener(frameTime -> {
    Node x = scene.overlapTest(scene.getCamera());
    if (x != null) {
        Log.i(TAG, "setUpArComponents: CAMERA HIT DETECTED at: " + x.getName());
        logNodeStatus(x);
    }
});
Firstly, does this make sense?
I can detect all node collisions in my scene using:
for (Node node : nodes) {
    ...
    ArrayList<Node> results = scene.overlapTestAll(node);
    ...
}
Assuming that there isn't a renderable for the Camera node (so no default collision shape), I tried to set my own collision shape, but this was actually catching all the tap events I was trying to perform, so I figured I must be doing this wrong.
I'm thinking about things like fixing a deactivated node in front of the camera.
I may be asking for too much of ARCore, but has anyone found a way to detect a collision between the "user" (i.e. camera node) and another node? Or should I be doing this "collision detection" via indoor positioning instead?
Thanks in advance :)
UPDATE: it's really hacky and performance-heavy, but you can actually compare the camera's and the node's world-space positions from within onUpdate inside a node. You'll probably have to manage some tolerance and other things to smooth out the interactions.
One idea to do the same thing is to use a raycast to hit the objects and, if they are close, do something. You could use something like this in the onUpdateListener:
Camera camera = arSceneView.getScene().getCamera();
Ray ray = new Ray(camera.getWorldPosition(), camera.getForward());
HitTestResult result = arSceneView.getScene().hitTest(ray);
if (result.getNode() != null && result.getDistance() <= SOME_THRESHOLD) {
    // Hit something
    doSomething(result.getNode());
}
I have been working with Google ARCore and got stuck on how to move a game object with input coming from the Android device.
The canvas I created has four buttons with the AxisTouchButton script from CrossPlatformInput, covering the vertical and horizontal axes. I tried Lean Touch for scaling, translating, and rotating, and it works perfectly. But when I apply force or velocity to the game object, it moves correctly the first time; when I use the axis buttons again, it starts to drift in that direction until another button is pressed.
The code below moves the game object attached to the Andy prefab in the HelloAR scene from the examples:
Vector3 offset = Vector3.zero;
offset.x = CrossPlatformInputManager.GetAxis("Horizontal");
offset.z = CrossPlatformInputManager.GetAxis("Vertical");
rb.velocity = offset * speed;
I'm not sure why your prefab is drifting with the code snippet you've provided.
Try resetting the velocity to zero after you are done with the movement of the prefab:
rb.velocity = new Vector3(0,0,0);
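One way to apply that suggestion, assuming rb and speed are your existing fields and this runs every frame, is to clear the velocity whenever the axes return to zero (a sketch, not a confirmed fix for the drift):

Vector3 offset = Vector3.zero;
offset.x = CrossPlatformInputManager.GetAxis("Horizontal");
offset.z = CrossPlatformInputManager.GetAxis("Vertical");

if (offset.sqrMagnitude > 0.0001f)
    rb.velocity = offset * speed;   // move while a button is held
else
    rb.velocity = Vector3.zero;     // stop drifting once input returns to zero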
Or maybe it is because you are moving the prefab too far away from its parent anchor, or away from the plane detected by ARCore.
But I have another tested way to move a prefab using touch input on the planes detected by ARCore; since it only lets you move the prefab on the detected planes, you can easily reset its anchor after you are done repositioning it.
I modified the HelloARController.cs script in the following way:
bool move = false; // handle move with some button calls

void Update()
{
    // call MoveObject() from Update while the move flag is set by your buttons
    if (move)
    {
        MoveObject();
    }
}

void MoveObject()
{
    if (Input.touchCount == 1)
    {
        Touch touch = Input.GetTouch(0);
        TrackableHit hit;
        TrackableHitFlags raycastFilter = TrackableHitFlags.PlaneWithinPolygon | TrackableHitFlags.FeaturePointWithSurfaceNormal;
        if (Frame.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit))
        {
            if ((hit.Trackable is DetectedPlane) && Vector3.Dot(firstPersonCamera.transform.position - hit.Pose.position, hit.Pose.rotation * Vector3.up) < 0)
            {
                Debug.Log("Hit at back of the current detected plane");
            }
            else
            {
                // KEY CODE SNIPPET: moves selectedObject to the touch location on the detected planes
                selectedObject.transform.position = hit.Pose.position;
            }
        }
        else
        {
            Debug.Log("Not moving");
        }
    }
}
Here selectedObject is your Andy prefab or whatever you are instantiating.
Make sure that you are instantiating only one prefab at a time and assign it to selectedObject.
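If it helps, one way to wire up selectedObject and the move flag is to raycast against the already-placed prefab on touch (a sketch; it assumes the prefab has a collider and firstPersonCamera is the HelloAR camera):

void TrySelectObject(Touch touch)
{
    Ray ray = firstPersonCamera.ScreenPointToRay(touch.position);
    RaycastHit hitInfo;
    if (Physics.Raycast(ray, out hitInfo))
    {
        // remember the tapped prefab and start moving it on subsequent touches
        selectedObject = hitInfo.collider.gameObject;
        move = true;
    }
}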
Try out the new ARCore Manipulation System. Working like a charm (for newbies).
They forgot to add a collider on the prefab, so don't forget to add it before running the example.
ARCore Unity SDK v1.13.0
I am an Android programmer. Because of the many objects in the scene, my game is lagging.
I have a theory for removing the lag in my game:
if I can control rendering in Unity, I can remove the lag.
using UnityEngine;
using System.Collections;

public class Enemy : MonoBehaviour {

    GameObject object2;

    void Start() {
        // start hidden
        GetComponent<Renderer>().enabled = false;
        object2 = GameObject.Find("TR");
    }

    void Update() {
        // show the renderer once "TR" is within 80 units
        var distance = Vector3.Distance(gameObject.transform.position, object2.transform.position);
        print(distance);
        if (distance <= 80) {
            GetComponent<Renderer>().enabled = true;
        }
    }
}
It doesn't work. How can I have a render boolean so that an object is rendered when there is a collision,
and not rendered otherwise?
I want a zone where every object inside the zone is rendered and everything outside it is not.
void OnTriggerEnter(Collider collision)
{
    if (collision.gameObject.tag == "zone")
    {
        GetComponent<Renderer>().enabled = true;
    }
    else
    {
        GetComponent<Renderer>().enabled = false;
    }
}
This doesn't work either:
void OnTriggerEnter(Collider collision)
{
    if (collision.gameObject.tag == "zone")
    {
        gameObject.SetActive(false);
    }
    else
    {
        gameObject.SetActive(true);
    }
}
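For what it's worth, the zone idea needs both enter and exit events; OnTriggerEnter alone never hides an object again once it leaves. A minimal sketch, assuming the zone is a large trigger collider tagged "zone" and this script sits on each object together with a Rigidbody:

void OnTriggerEnter(Collider other)
{
    if (other.CompareTag("zone"))
    {
        GetComponent<Renderer>().enabled = true;  // entered the zone: show
    }
}

void OnTriggerExit(Collider other)
{
    if (other.CompareTag("zone"))
    {
        GetComponent<Renderer>().enabled = false; // left the zone: hide
    }
}

Note that deactivating the whole GameObject (SetActive(false)) also disables its colliders, so it can never receive the trigger event that would re-activate it; toggling only the Renderer avoids that trap.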
This is either already implemented in Unity, or implementing it is a bad idea, because raycasts are expensive and you would need a lot of them. Try finding the other problems that cause lag in your game: disable features one by one and note how many frames you get; this will give you the best overview of the problem. Look up online which methods are expensive (Instantiate, Destroy, FindGameObjectByName (or by tag...)), and try merging your models, using fewer and faster shaders, and loading fewer textures.
Here you will find a great document about optimization. It's prepared for mobile devices, but I hope you will find what you need: Unity Optimization Guide for x86 Android
I would recommend keeping your blue blobs in an object pool and disabling the ones that leave your screen.
You know your own position and the positions of the objects in the pool, so you can compute the distance in one direction, for instance behind you, and disable objects past some amount.
Raycasting or collision checks are unnecessary for this.
In your terrain generation scripts, check for disabled pool objects; if one exists, it should be moved ahead in the level and repositioned, or whatever logic you have there. A sketch of this follows below.
Don't instantiate and destroy unless you really need to; do it on level load instead of on the fly.
(It's expensive.)
There are some really good tutorials on the Unity page; have a look there.
They cover things like endless runners.
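A minimal sketch of that kind of pool, assuming a runner moving along z (BlobPool, behindDistance, and the other names are made up for illustration):

using System.Collections.Generic;
using UnityEngine;

public class BlobPool : MonoBehaviour
{
    public GameObject blobPrefab;
    public int poolSize = 20;
    public Transform player;
    public float behindDistance = 30f;

    private readonly List<GameObject> pool = new List<GameObject>();

    void Start()
    {
        // instantiate everything once at level load, not on the fly
        for (int i = 0; i < poolSize; i++)
        {
            GameObject blob = Instantiate(blobPrefab);
            blob.SetActive(false);
            pool.Add(blob);
        }
    }

    void Update()
    {
        // disable blobs that have fallen far enough behind the player
        foreach (GameObject blob in pool)
        {
            if (blob.activeSelf && player.position.z - blob.transform.position.z > behindDistance)
            {
                blob.SetActive(false);
            }
        }
    }

    // terrain generation asks the pool for a free blob instead of instantiating
    public GameObject GetBlob(Vector3 position)
    {
        foreach (GameObject blob in pool)
        {
            if (!blob.activeSelf)
            {
                blob.transform.position = position;
                blob.SetActive(true);
                return blob;
            }
        }
        return null; // pool exhausted
    }
}

The terrain script then calls GetBlob(spawnPosition) instead of Instantiate.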
I wrote a very simple Android application that lets me draw on the pad. Touch the screen with a finger and you will see a green ball; move your finger and you will see a red line.
But I found a very strange thing: if I touch the screen with two fingers one after the other very quickly, it draws a line between them! (Imagine quickly alternating two keys on the keyboard: jkjkjkjkjk.)
The key code is pretty simple:
public boolean onTouch(View v, MotionEvent event) {
    int action = event.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
        case MotionEvent.ACTION_DOWN:
            multiTouch = false;
            id = event.getPointerId(0);
            PointF p = getPoint(event, 0);
            path = new Path();
            path.moveTo(p.x, p.y);
            paths.add(path);
            points.add(copy(p));
            break;
        case MotionEvent.ACTION_POINTER_DOWN:
            multiTouch = true;
            for (int i = 0; i < event.getPointerCount(); i++) {
                int tId = event.getPointerId(i);
                if (tId != id) {
                    points.add(getPoint(event, i));
                }
            }
            break;
        case MotionEvent.ACTION_MOVE:
            if (!multiTouch) {
                p = getPoint(event, 0);
                path.lineTo(p.x, p.y);
            }
            break;
    }
    invalidate();
    return true;
}
The full source is here: https://github.com/freewind/TouchTest/blob/master/src/com/example/MyImageView.java
And it's a working demo: https://github.com/freewind/TouchTest
Or you can just download the signed apk on your android device, and test it yourself: https://github.com/freewind/TouchTest/blob/master/TouchTest.apk?raw=true
You can see in my code that I check for multi-touch and disable drawing in that case.
My Android version is 4.0, and my build target is 2.3.3.
Here is a picture from my Android pad:
You can see there are some lines, but there shouldn't be; there should be a green ball to the left of the red line instead.
I'm not sure why Android treats a fast single touch as a move. I considered three possible reasons:
My code has something wrong
The Android SDK has something wrong
My Android pad has something wrong, e.g. it misses an ACTION_DOWN event
How can I find out the real reason?
UPDATE
One of my friends tested this app on his Android phone (Android 2.1) and found there were no red lines; another used Android 2.3.5 and found the red lines appear.
Please review my code: I detect multi-touch via ACTION_POINTER_DOWN, and do nothing on ACTION_MOVE if there is more than one pointer. So the id of the pointer is not needed. (Actually, in the first version of this code I used the id, but had the same issue.)
And I don't think this is expected behavior, because it makes developing touch-based apps hard. I found this issue because in another application of mine (where the user can drag/zoom/rotate an image with their fingers), the image sometimes "jumps" on screen.
I even tried a popular game (Fruit Ninja) on my Android pad and an iPod touch, and found the Android version has the issue but the iPod touch doesn't.
Now I'm sure there is something wrong (a missing ACTION_UP event when the first finger lifts), but I still don't know what causes it. My Android pad? Or the Android SDK?
That is the way multi-touch works. When you press quickly, Android handles it as a gesture, and you end up with two pressed pointers. To avoid it, try handling ACTION_UP as well, or use ACTION_POINTER_DOWN instead.
You can check the id of the touch, so that you handle only the first touch alone.
Alternatively, you can monitor all touches and handle them together.