Touch out from app window - AIR for Android

I am creating an app for Android in Adobe Flash Professional. Here is a fragment of the code:
stage.addEventListener( TouchEvent.TOUCH_OUT, _out );
function _out( e:TouchEvent):void
{
trace( "OUT!" );
}
When I move my finger off a display object, I get the message. But when I move my finger across the screen and then off the edge of the screen, I don't receive any message. What can I do?

Just to be sure: you are trying to trigger a function whenever the cursor is rolled out of the stage. In that case, a naive option is to check the mouse coordinates and see whether they are still on the stage. Whenever the cursor crosses the stage bounds, the function can be triggered.
Another way is to use a transparent object on the stage and check for collision of the mouse with it. Whenever the collision detection returns false, the function is triggered.
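A minimal sketch of the first (coordinate-check) suggestion, assuming an ENTER_FRAME handler is acceptable (the handler name is illustrative and untested):
stage.addEventListener( Event.ENTER_FRAME, _checkBounds );
function _checkBounds( e:Event ):void
{
    // stage.mouseX / stage.mouseY hold the last known pointer position
    if ( stage.mouseX < 0 || stage.mouseX > stage.stageWidth ||
         stage.mouseY < 0 || stage.mouseY > stage.stageHeight )
    {
        trace( "OUT!" );
    }
}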

TOUCH_OUT will not work on Windows debugging sessions, but it will work on your Android. Don't worry.
To avoid the event being triggered by on-stage objects, just set the mouseChildren property of all your MovieClips to false.
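For example (the clip name is illustrative):
myMovieClip.mouseChildren = false; // touches on nested children now register on myMovieClip itself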

Related

react-native: TouchableOpacity's onPress() doesn't get called in modals on slower Android devices

On older and slower Android devices, the onPress() method on TouchableOpacity might not be called even though the button is pressed and you can see the opacity effect on the view.
What's weird is that it's statistical: on a Samsung Galaxy A8 the press worked around 20% of the time, while on my Pixel 6 Pro it works 100% of the time.
onPressIn() is always called, but onPress() is unreliable.
tl;dr Don't use react-native-modals, it's buggy, find an alternative (I use react-native-modal instead)
I spent a while trying to figure out why onPressIn() was called successfully while onPress() wasn't, so I read the logic in react-native's code, which uses the Gesture Responder System to determine whether a callback should be called.
When the press works, those are the signals I see from the touch events system:
RESPONDER_GRANT // (touch detected on View, causes onPressIn() to be called)
DELAY // (can determine whether we want onPress() or onPressLong())
RESPONDER_RELEASE // (finger lifted, depending on DELAY now onPress() / onPressOut() are called)
and when the press doesn't work:
RESPONDER_RELEASE
RESPONDER_TERMINATED // (nothing happens)
RESPONDER_TERMINATED means that someone else took control of the gesture responder system. Why does it happen? I'm not sure; I couldn't figure out why react-native-modals caused it, but eventually I tried using react-native-modal instead and it behaved correctly: nothing hijacked my presses!
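For illustration, a minimal sketch of the swap, assuming a simple confirmation dialog (the ConfirmDialog component and its props are hypothetical):
import React from 'react';
import { TouchableOpacity, Text } from 'react-native';
import Modal from 'react-native-modal'; // swapped in for react-native-modals

// Hypothetical dialog used only to illustrate the library swap
export function ConfirmDialog({ isVisible, onConfirm }) {
  return (
    <Modal isVisible={isVisible}>
      <TouchableOpacity onPress={onConfirm}>
        <Text>Confirm</Text>
      </TouchableOpacity>
    </Modal>
  );
}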

Unity canvas scaler hiding animations in high resolution display

I instantiate the following gameObject, which contains an Animator with its mode set to "Always Animate". The animation runs for 340 ms, and after that time I destroy the gameObject.
The gameObject Inspector properties:
I instantiate it using the following code:
instancia = (Instantiate(cardAnimation, new Vector3(0, 0, 0), Quaternion.identity) as GameObject).GetComponent<Image>();
instancia.rectTransform.SetParent(transform);
StartCoroutine(KillOnAnimationEnd());
Here is the Coroutine:
private IEnumerator KillOnAnimationEnd()
{
    yield return new WaitForSeconds(0.34f);
    DestroyImmediate(instancia);
}
Here is how the animation looks when simulated in Unity (PC, Windows):
But on Android, after I open the chest it waits 340 ms with nothing happening and then shows the information above. Does this have something to do with the platform, or is it some Unity or perhaps code-related issue?
NOTE: I also have another animation in another scene that is just an already-instantiated gameObject in the Hierarchy with "Always Animate" on, and it works on Android.
--EDIT--
So I have run the newest version of the app in an emulator at roughly 1080x480, and the animation showed just as on the PC; running on a 720p smartphone also did the job. The only problem I'm still having is with the QuadHD resolution of my Galaxy S6: everything else shows except the animation. I have even tried making the animation run in a loop without any script, but it still doesn't show up on the Galaxy's screen.
Given this news about the issue, I think it might change the perspective of the answers a little and perhaps help someone else solve the same problem in the future.
Okay, I figured out the problem: it's something to do with "rotation" in animations when using Unity3D in 2D mode. I'm going to report it to Unity so it gets fixed.
The solution: animate your UI using only scale and position; if you use rotation, the animation will not show on high-resolution displays.
I am pretty sure your WaitForSeconds(0.34f) is not working properly, because there is no such thing as a yield keyword in Java. I recommend using an Invoke call instead to run the method that destroys your GameObject.
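For reference, a minimal sketch of that Invoke-based variant, reusing names from the question (the SpawnAnimation wrapper and instanciaGO field are illustrative):
private GameObject instanciaGO; // holds the spawned animation object

void SpawnAnimation()
{
    instanciaGO = Instantiate(cardAnimation, Vector3.zero, Quaternion.identity) as GameObject;
    instanciaGO.transform.SetParent(transform, false);
    Invoke("KillAnimation", 0.34f); // schedule destruction instead of yielding in a coroutine
}

void KillAnimation()
{
    Destroy(instanciaGO);
}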

Take a screenshot during dragging in Appium

In Android, I need to drag an object in my app and take a screenshot WHILE STILL HOLDING the object.
I know that there are two ways of using touch actions (I'm not even considering the higher-level methods such as swipe() as they give me much less control over my touch actions):
new TouchAction(driver).press(element).moveTo(x,y).release().perform();
and
driver.performTouchAction(new TouchAction(driver).press(element).moveTo(x,y).release());
When I try to divide my touch action into two parts, inserting a screenshot capture in between as in the code below:
new TouchAction(driver)
.press(x,y)
.moveTo(newX,newY)
.perform();
takeScreenshot(); // My own implementation for readability
new TouchAction(driver)
.release()
.perform();
I get the following error:
org.openqa.selenium.WebDriverException: ERROR running Appium command: Cannot read property 'x' of null
Command duration or timeout: 14 milliseconds
The program fails during the second touch action, i.e., the screenshot is being successfully taken, but I have no way of releasing the object after grabbing it in this manner.
Any ideas?
Looking at the second touch action in your question:
new TouchAction(driver)
.release()
.perform();
try providing an x and y location for the release; it might work.

Manually setting the SCAN_WIDTH and SCAN_HEIGHT causes ZXing to crash

I'm using the popular ZXing project to enable barcode scanning on my Android application.
I want to manually set the width and height of my viewfinder, so I used the following:
intent.putExtra("SCAN_WIDTH", 400);
intent.putExtra("SCAN_HEIGHT", 300);
before sending my intent. However, the app crashes due to a NullPointerException at line 279 in CameraManager.java. I did some debugging, and it looks like the screenResolution member of configManager is never initialized. I debugged some more and found that surfaceCreated() is not called in time (this is supposed to happen through a callback); at least, that is what it looks like to me, since surfaceCreated() in CaptureActivity.java is responsible for initializing those members of configManager. I searched on here and on Google, but it doesn't seem like people use the SCAN_WIDTH and SCAN_HEIGHT intent extras; they manually set the MIN and MAX width/height values within the ZXing code, which I am trying to avoid. Any help would be appreciated.
The scanner works fine when I am not setting those width/height values via intent.
EDIT: After updating my version of the ZXing library, this is no longer an issue. It also fixed the front camera issue I was having with the 2012 Nexus 7.
screenResolution is definitely set, in initFromCameraParameters. It happens when the driver opens. It's OK if surfaceCreated happens a bit later since the onResume method registers a callback to initialize the camera after the surface is created, if it's not already available.
onResume calls setManualFramingRect even if it's not initialized, but in that case it just saves the request in requestedFramingRectWidth and requestedFramingRectHeight and sets it later.
I think this case is handled correctly, but as ever I can't be 100% sure there's not an oversight. Maybe you can say more about where you think the problem is given this info.

Recenter or Reorient view with Cardboard SDK on Unity

With Unity, the CardboardHead script is added to the main camera and handles everything quite nicely, but I need to be able to "recenter" the view on demand. The only option I see so far is to rotate the entire scene, which seems like something they would address first-hand, yet I can't find anything in the docs.
With the Oculus Mobile SDK (GearVR), it would be OVRCamera.ResetCameraPositionOrientation(Vector3.one, Vector3.zero, Vector3.up, Vector3.zero); though they handle it nicely each time the viewer is put on, so it's rarely needed there.
There's a "target" parameter on the CardboardHead that lets you use to another gameobject as a reference for rotation. Or you can use a dummy parent gameobject. Either way, when you want to recenter, you set this reference object's rotation so that the CardboardHead is now pointing forward. Add this function to an script on the CardboardHead (or just add it into that script):
public void Recenter() {
    Transform reference = target != null ? target : transform.parent;
    if (reference != null) {
        reference.rotation = Quaternion.Inverse(transform.rotation) * reference.rotation;
        // next line is optional -- try it with and without
        reference.rotation = Quaternion.FromToRotation(reference.up, Vector3.up) * reference.rotation;
    }
}
Cardboard.SDK.Recenter(); should do the trick.
From the release notes: "Recenter orientation: Added Recenter() function to Cardboard.SDK, which resets the head tracker so the phone's current heading becomes the forward direction (+Z axis)."
I couldn't find it in the API/SDK docs, but it's in the release notes for the v0.4.5 update.
You can rotate the Cardboard Main to point in a certain direction.
This is what worked for me when I wanted the app to start up pointing a certain way. Since the CardboardHead points at Vector3.zero on startup if no target is assigned, I ran a function during Start() on the CardboardMain that points it in the direction I wanted.
Of course, if you're already rotating CardboardMain for some other reason, it may be possible to use this same method by creating a parent of the CardboardHead (a child of CardboardMain) and doing the same thing.
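A minimal sketch of that startup rotation, assuming the script is attached to CardboardMain (the startYaw value is illustrative):
using UnityEngine;

public class PointForwardOnStart : MonoBehaviour {
    // Heading in degrees around the Y axis that the rig should face at startup
    public float startYaw = 90f;

    void Start() {
        // Rotate CardboardMain so the CardboardHead's forward direction matches the desired heading
        transform.rotation = Quaternion.Euler(0f, startYaw, 0f);
    }
}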
This question is a bit old, but for Google VR SDK 1.50+ you can do:
transform.eulerAngles = new Vector3(newRot.x, newRot.y, newRot.z);
UnityEngine.VR.InputTracking.Recenter();
Also, if you don't want to get confused, you need to get the GvrEditorEmulator instance and Recenter it as well:
#if UNITY_EDITOR
gvrEditorEmulator.Recenter();
#endif
Recentering GvrEditorEmulator doesn't seem to work very well at the moment, though; but if you disable it, you'll see that the recentering works for the main camera.
