I added a UI Button object and attached a C# script with a public function.
To the button I added an Event Trigger component, set up events (Pointer Click and Pointer Down) and pointed them at my function public void onClick().
On PC the code works, but when I build the game for Android and touch the object, the code does not work.
How do I handle an onTouch event?
I think OnMouseDown checks every frame whether there is mouse input, much like Update, so you have to check for touches in Update(). With touch you also get more control, for example the touch phase lets you detect whether the touch began, moved, or was lifted, etc.
You need to check
if (Input.touchCount > 0)
void Update() {
    if (Input.touchCount > 0) {
        print("a touch exists");
        if (Input.GetTouch(0).phase == TouchPhase.Began) {
            print("Touch began");
        }
        if (Input.GetTouch(0).phase == TouchPhase.Ended) {
            print("Touch ended");
        }
    }
}
and inside this you can check for the touch phase.
Touches aren't Clicks
In order to handle touch input you need to check Input.touchCount and then query each touch with Input.GetTouch. Note that each Touch has an ID that is unique per finger and consistent across frames.
There are no easy OnClick-like methods for touches, as touches can be a lot more complex (tap, long tap, drag, etc.), so you will have to check inside Update() and handle the conversion from touch data to mouse analogues yourself.
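For illustration only (this sketch is mine, not part of the original answer), a hand-rolled tap check in Update() might look roughly like this; the time and movement thresholds are arbitrary example values:

using UnityEngine;

public class SimpleTapDetector : MonoBehaviour
{
    // Arbitrary thresholds for this sketch: a "tap" is a short, mostly stationary touch.
    const float MaxTapTime = 0.3f;       // seconds
    const float MaxTapMovement = 20f;    // pixels

    float touchStartTime;
    Vector2 touchStartPos;

    void Update()
    {
        if (Input.touchCount == 0)
            return;

        Touch touch = Input.GetTouch(0);

        if (touch.phase == TouchPhase.Began)
        {
            touchStartTime = Time.time;
            touchStartPos = touch.position;
        }
        else if (touch.phase == TouchPhase.Ended)
        {
            bool quickEnough = Time.time - touchStartTime <= MaxTapTime;
            bool stillEnough = (touch.position - touchStartPos).magnitude <= MaxTapMovement;
            if (quickEnough && stillEnough)
            {
                Debug.Log("Tap detected - treat it like a click");
            }
        }
    }
}

Anything longer or with more movement would then be treated as a drag or long press instead, which is exactly the kind of conversion the answer says you have to handle yourself.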
Related
We have pointerInput for detecting tap, drag and pan events, and it also provides a handy awaitPointerEventScope (the pointer being the finger on mobile devices). Now, we do have an awaitFirstDown() for detecting when the finger first makes contact with the screen, but I can't seem to find an equivalent for when the finger is lifted.
I have a little widget that I wish to detect taps on, but the app is made for a use-case where the user might be in awkward positions while using it, so I want it to trigger the required action on just a touch and lift of the finger. The worry is that the user might accidentally 'drag' their finger (even by a millimetre, Android still picks it up), and I do not want that to break the gesture. I could implement both a tap and a drag listener, but neither of them offers finger-lift detection, as far as I know.
What solution, if there is one as of now, is suitable for the use-case while adhering to and leveraging the declarative nature of Compose while keeping the codebase to a minimum?
A better way, and what the Android documentation suggests if you are not interoperating with existing View code, is Modifier.pointerInput(). The documentation for pointerInteropFilter describes it as:
A special PointerInputModifier that provides access to the underlying
MotionEvents originally dispatched to Compose. Prefer pointerInput and
use this only for interoperation with existing code that consumes
MotionEvents. While the main intent of this Modifier is to allow
arbitrary code to access the original MotionEvent dispatched to
Compose, for completeness, analogs are provided to allow arbitrary
code to interact with the system as if it were an Android View.
val pointerModifier = Modifier
    .pointerInput(Unit) {
        forEachGesture {
            awaitPointerEventScope {
                awaitFirstDown()
                // ACTION_DOWN here
                do {
                    // This PointerEvent contains details including
                    // event, id, position and more
                    val event: PointerEvent = awaitPointerEvent()
                    // ACTION_MOVE loop
                    // Consuming the event prevents other gestures or scroll from intercepting it
                    event.changes.forEach { pointerInputChange: PointerInputChange ->
                        pointerInputChange.consumePositionChange()
                    }
                } while (event.changes.any { it.pressed })
                // ACTION_UP is here
            }
        }
    }
This answer explains in detail how it works, its internals, and key points to consider when creating your own gestures.
Also, this is a gesture library you can check out: it provides an onTouchEvent counterpart and a detectTransformGestures variant with an onGestureEnd callback, returning the number of pointers down or the list of PointerInputChange objects in the onGesture event. It can be used as:
Modifier.pointerMotionEvents(
    onDown = {
        // When down is consumed
        it.consumeDownChange()
    },
    onMove = {
        // Consuming the move prevents scroll and other events from getting this move event
        it.consumePositionChange()
    },
    onUp = {},
    delayAfterDownInMillis = 20
)
Edit
As of 1.2.0-beta01, partial consume functions such as PointerInputChange.consumePositionChange() and PointerInputChange.consumeDownChange(), and the one for consuming all changes, PointerInputChange.consumeAllChanges(), are deprecated.
PointerInputChange.consume()
is now the only function to use for preventing other gestures/events from receiving the change.
pointerInteropFilter is the way to go
Item(
    Modifier.pointerInteropFilter {
        if (it.action == MotionEvent.ACTION_UP) {
            triggerAction()
        }
        true // Consume touch, return false if consumption is not required here
    }
)
I have created a button in my 2D game using Unity3D and added a Box Collider 2D named PADBASE to detect touch events this way:
if (Input.touchCount > 0)
{
    for (int i = 0; i < Input.touchCount; i++)
    {
        Vector3 mouseWorldPos3D = Camera.main.ScreenToWorldPoint(Input.GetTouch(i).position);
        Vector2 mousePos2D = new Vector2(mouseWorldPos3D.x, mouseWorldPos3D.y);
        Vector2 dir = Vector2.zero;
        RaycastHit2D hit = Physics2D.Raycast(mousePos2D, dir);
        Touch t = Input.GetTouch(i);
        if (hit.transform != null)
        {
            if (Physics2D.Raycast(hit.transform.position, hit.transform.forward))
            {
                GameObject recipient = hit.transform.gameObject;
                if (t.phase == TouchPhase.Began) // beginning of touch
                {
                    // button clicked
                    // change colour to red to visually show that the button is being pressed
                }
                else if (t.phase == TouchPhase.Ended)
                {
                    // change colour back to its default to visually show that it is no longer pressed
                }
            }
        }
    }
}
And let's assume I will change the button colour to red when the player touches the button, and back to its default colour when they release their finger (for example).
Now it will obviously only work as long as the finger is actually inside the bounds of the box collider. What I am trying to do is "bind" the touch so that I still catch its events (moved or ended) even if the player slides their finger outside of the collider without releasing it (accidentally, for example).
I am looking forward to some suggestions, thanks.
In my game I will have multiple buttons, so multi-touch is necessary.
Solved; it actually turns out to be really easy: you need to store and compare the touch finger ID.
Solution:
- on touch began:
check if the touch is within the game object's box collider, like in the code attached above.
get the touch finger ID and pass it to your button, or keep a reference to it.
- on touch ended:
compare the touch finger ID with your stored ID; if they are equal, set your stored finger ID to -1 and run the code that should execute when the touch ends.
- on touch moved or (||) stationary:
do the same comparison of the touch ID as in touch ended, and run the code that should execute when the player slides/moves their finger, but do not modify your finger ID variable.
That's it; it works well with multiple buttons and multi-touch (a rough sketch of the idea follows below).
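A rough sketch of that finger-ID approach (the class, method names, and colour comments here are mine, purely illustrative):

using UnityEngine;

public class TouchButton : MonoBehaviour
{
    // -1 means "no finger currently owns this button", as described above.
    int trackedFingerId = -1;

    void Update()
    {
        for (int i = 0; i < Input.touchCount; i++)
        {
            Touch t = Input.GetTouch(i);

            if (t.phase == TouchPhase.Began && TouchHitsThisButton(t))
            {
                trackedFingerId = t.fingerId;   // remember which finger pressed the button
                // e.g. change colour to red here
            }
            else if (t.fingerId == trackedFingerId &&
                     (t.phase == TouchPhase.Moved || t.phase == TouchPhase.Stationary))
            {
                // the same finger is still down (possibly outside the collider) - keep the pressed state
            }
            else if (t.fingerId == trackedFingerId &&
                     (t.phase == TouchPhase.Ended || t.phase == TouchPhase.Canceled))
            {
                trackedFingerId = -1;           // released, wherever the finger ended up
                // e.g. restore the default colour here
            }
        }
    }

    bool TouchHitsThisButton(Touch t)
    {
        Vector3 worldPos = Camera.main.ScreenToWorldPoint(t.position);
        RaycastHit2D hit = Physics2D.Raycast(new Vector2(worldPos.x, worldPos.y), Vector2.zero);
        return hit.transform != null && hit.transform.gameObject == gameObject;
    }
}

Each button instance tracks its own finger, so several buttons can be held down at once.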
I am developing a game that needs the user to make a single touch and draw over multiple objects to link them. Currently I am using the mouse button as touch input, with this code:
if(Input.GetButton("Fire1"))
{
mouseIsDown = true;
...//other codes
}
It works fine with a mouse, but it gives me problems when I compile it for a multi-touch device. On a multi-touch device, if you have two fingers down, the middle point between them is taken as the input. So if I am already touching an object and a second finger comes down on the screen, it messes up my game input and wreaks havoc on the design.
What I want is to limit it to one touch only. If a second finger comes down, ignore it. How can I achieve that?
Below is all you need:
if ((Input.touchCount == 1) && (Input.GetTouch(0).phase == TouchPhase.Began)) {
    mouseIsDown = true;
    ... // other code
}
It will only fire when one finger is on the screen because of Input.touchCount == 1
You are using Input.GetButton. A button is not a touch. You probably want to look at Input.GetTouch
If you need to support multiple input-type devices, you may want to consider scripting your own manager to abstract this out somewhat.
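One possible shape for such a manager (purely illustrative; the class and property names are made up for this sketch) is a small wrapper that answers "did the pointer go down?" and "where is it?" regardless of whether the input came from a touch or the mouse:

using UnityEngine;

// Illustrative wrapper: prefers touch input when available, falls back to the mouse.
public static class PointerInput
{
    // True on the frame the pointer first goes down.
    public static bool WentDown
    {
        get
        {
            if (Input.touchCount > 0)
                return Input.GetTouch(0).phase == TouchPhase.Began;
            return Input.GetMouseButtonDown(0);
        }
    }

    // True while the pointer is held down.
    public static bool IsHeld
    {
        get
        {
            if (Input.touchCount > 0)
            {
                TouchPhase phase = Input.GetTouch(0).phase;
                return phase != TouchPhase.Ended && phase != TouchPhase.Canceled;
            }
            return Input.GetMouseButton(0);
        }
    }

    // Screen position of the pointer.
    public static Vector2 Position
    {
        get
        {
            if (Input.touchCount > 0)
                return Input.GetTouch(0).position;
            return Input.mousePosition;
        }
    }
}

The game code then only talks to this wrapper (e.g. if (PointerInput.WentDown) { mouseIsDown = true; }), so adding or swapping input devices later only touches one file.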
I would go with a bit of polling!
So, in your Update(), I typically do something like this:
foreach (Touch touch in Input.touches)
{
    // Get the touch id of the current touch if you're doing multi-touch.
    int pointerID = touch.fingerId;
    if (touch.phase == TouchPhase.Began)
    {
        // Do something off of a touch.
    }
}
If you're looking for more info check this out:
TouchPhases in Unity
In an AndEngine game, I have implemented onAreaTouched() so that when a sprite is tapped, I detect it and respond accordingly. I have also implemented onSceneTouchEvent() so that when the user taps the screen, I move the character. But I am having an issue: when the user taps on a sprite, onAreaTouched() is called, but after that onSceneTouchEvent() is also called, resulting in two actions for one tap. I want only onAreaTouched() to be called when a sprite is tapped, and vice versa. I also tried returning true/false from onAreaTouched(), but it did not work. Any idea what I am missing?
If the onAreaTouched() function is called before the onSceneTouchEvent() function, you could just have a boolean member to control whether or not you want to run the code in onSceneTouchEvent():
onAreaTouched()
{
    mIsSpriteTouched = true;
    // then run all relevant code when the sprite is touched
}

onSceneTouchEvent()
{
    if (!mIsSpriteTouched)
    {
        // execute code that should run when the sprite isn't touched
    }
    mIsSpriteTouched = false; // reset for the next event
}
What is the difference between the onFling() and onScroll() events of android.view.GestureDetector.OnGestureListener?
onScroll() happens after the user puts a finger down and slides it across the screen without lifting it. onFling() happens if the user scrolls and then lifts the finger. A fling is triggered only if the motion was fast enough.
Actually, onFling() has nothing to do with the speed at which the movement occurred. It's up to you, via the velocityX and velocityY parameters (or the distance, via the MotionEvent parameters), to decide whether the motion was good enough for your purposes.
onScroll() is called continuously while the user is moving their finger, whereas onFling() is called only after the user lifts their finger.
You can see the code of framework/base/core/java/android/view/GestureDetector.java, at the onTouchEvent() method. onFling() is called in case of MotionEvent.ACTION_UP and velocityY > mMinimumFlingVelocity or velocityX > mMinimumFlingVelocity. onScroll() is called in case of MotionEvent.ACTION_MOVE.
You can differentiate between the two after the onFling() happens. First, in onDown() store the current coordinates of the image as class variables. The onScroll() will work as expected but if the onFling() determines that this is a fling event, just restore the original coordinates that were stored in onDown(). I found this works very well.
@Override
public boolean onDown(MotionEvent e) {
    // remember current coordinates in case this turns out to be a fling
    mdX = imageView.dX;
    mdY = imageView.dY;
    return false;
}