I have this:
int MainWindow::messageBox( QString button, QMessageBox::ButtonRole buttons, QString info, QMessageBox::Icon icon )
{
    QFont f;
    f.setPointSize(6);
    QMessageBox *message = new QMessageBox(this);
    message->setWindowModality(Qt::WindowModal);
    message->setFont(f);
    message->setText(info);
    message->addButton( button, buttons );
    message->setWindowTitle("MainWindow");
    message->setIcon(icon);
    message->move( this->width() / 2, this->height() / 2 );
    return message->exec();
}
But I can't make the QMessageBox go to the center of the screen. I also tried using setGeometry(), but it doesn't work. Any ideas on this?
I solved it by calling show() before moving the box. This is the code:
int MainWindow::messageBox( QString button, QMessageBox::ButtonRole buttons, QString info, QMessageBox::Icon icon )
{
    QFont f;
    QMessageBox *message = new QMessageBox(this);
    QDesktopWidget *win = new QDesktopWidget();
    f.setPointSize(6);
    message->setWindowModality(Qt::WindowModal);
    message->setFont(f);
    message->setText(info);
    message->addButton( button, buttons );
    message->setWindowTitle("MainWindow");
    message->setIcon(icon);
    message->show();
    message->move( win->width() / 2 - message->width() / 2, win->height() / 2 - message->height() / 2 );
    return message->exec();
}
A QMessageBox is created with the window flag Qt::Dialog (and indirectly, Qt::Window). This means that it will be treated like a system window even though it has a parent assigned. When you call move() on it, it will be positioned in desktop coordinates.
When you move the message box in your code above, you are telling it to appear at desktop coordinates equal to half the width and height of your main application window, offset from the origin (the top left corner of your desktop).
If your main application window has a size of 400x200, then your message box will appear at desktop coordinates 200,100 no matter where your main application window is located.
If you make your application window full screen and then display the message box, the message box should appear (roughly) at the center of your desktop display. I say roughly because you are specifying the position of the top left corner of the message box, not where the center of the message box will appear to be.
If you want the message box to always appear at the center of the screen, then you need to use the information provided by QDesktopWidget to determine what the correct screen coordinates should be.
TL;DR: I need a way to disable Android 10 gesture navigation programmatically so that users don't accidentally go back when they swipe from the sides.
The backstory: Android 10 introduced gesture navigation as opposed to the buttons at the bottom. So now on Android 10 devices that have it enabled, they can swipe from either side of the screen to go back and swipe from the bottom to navigate home or between apps. However, I am working on an implementation in AR and want to lock the screen to portrait but allow users to go landscape.
If a user turns their phone to landscape but the activity is locked to portrait, the back gesture is now a swipe from the top. In a full screen app (which this one is), that is also a common way to access the status bar, so users who are used to Android's navigation will inadvertently go back and leave the experience.
Does anybody know how to either a) disable the gesture navigation (but then how does the user go back/to home?) for Android 10 programmatically or b) know how to just change the orientation for the gestures without needing your activity to support landscape?
It's easy to block the gestures programmatically, but you can't do it for the entire edge on both sides; the system caps how much of each edge an app may exclude (200dp per edge on Android 10). So you have to decide how much of the screen you want to exclude from gestures.
Here is the code :
Define this code in your Utils class.
static List<Rect> exclusionRects = new ArrayList<>();

public static void updateGestureExclusion(AppCompatActivity activity) {
    if (Build.VERSION.SDK_INT < 29) return;
    exclusionRects.clear();
    Rect rect = new Rect(0, 0, SystemUtil.dpToPx(activity, 16), getScreenHeight(activity));
    exclusionRects.add(rect);
    activity.findViewById(android.R.id.content).setSystemGestureExclusionRects(exclusionRects);
}

public static int getScreenHeight(AppCompatActivity activity) {
    DisplayMetrics displayMetrics = new DisplayMetrics();
    activity.getWindowManager().getDefaultDisplay().getMetrics(displayMetrics);
    return displayMetrics.heightPixels;
}

public static int dpToPx(Context context, int dp) {
    return (int) (dp * context.getResources().getDisplayMetrics().density);
}
Make sure your layout is set in the activity where you want to exclude the edge gestures, then apply this code:
// 'content' is the root view of your layout xml.
ViewTreeObserver treeObserver = content.getViewTreeObserver();
treeObserver.addOnGlobalLayoutListener(new ViewTreeObserver.OnGlobalLayoutListener() {
    @Override
    public void onGlobalLayout() {
        content.getViewTreeObserver().removeOnGlobalLayoutListener(this);
        SystemUtil.updateGestureExclusion(MainHomeActivity.this);
    }
});
We set the exclusion rectangle's width to 16dp to capture the back gesture; you can change this according to your preferences.
Here are some things to note:
Don't block the gestures on both sides. If you do, it makes for a very poor user experience.
getScreenHeight(activity) is the height of the rectangle, so if you want to block the gesture on the left side for only the top half of the screen, simply replace it with getScreenHeight(activity) / 2.
The first argument to new Rect() is 0 because we want to exclude gestures on the left side. If you want the right side instead, use getScreenWidth(activity) - SystemUtil.dpToPx(activity, 16).
Hope this will solve your problem permanently. :)
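To make the left-vs-right arithmetic concrete, here is a standalone sketch of the rectangle bounds (plain Java, using int arrays in (left, top, right, bottom) order to mirror android.graphics.Rect; the 1080x2280 screen size and 3.0 density are hypothetical examples, not values from the code above):

```java
import java.util.Arrays;

public class ExclusionRectMath {
    // Mirror of dpToPx above: dp * density, truncated to int.
    static int dpToPx(float density, int dp) {
        return (int) (dp * density);
    }

    // Left-edge strip, full height: (left, top, right, bottom).
    static int[] leftStrip(float density, int screenW, int screenH) {
        return new int[] {0, 0, dpToPx(density, 16), screenH};
    }

    // Right-edge strip, full height: starts 16dp in from the right edge.
    static int[] rightStrip(float density, int screenW, int screenH) {
        return new int[] {screenW - dpToPx(density, 16), 0, screenW, screenH};
    }

    public static void main(String[] args) {
        // Hypothetical 1080x2280 screen at 3.0 density: 16dp == 48px.
        System.out.println(Arrays.toString(leftStrip(3.0f, 1080, 2280)));  // [0, 0, 48, 2280]
        System.out.println(Arrays.toString(rightStrip(3.0f, 1080, 2280))); // [1032, 0, 1080, 2280]
    }
}
```

The same bounds, fed into a Rect and passed to setSystemGestureExclusionRects(), are what the updateGestureExclusion() method above installs.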
Remember:
setSystemGestureExclusionRects() must be called in doOnLayout() for your view
My implementation:
binding.root.apply { // changing gesture rects for root view
    doOnLayout {
        // updating exclusion rect
        val rects = mutableListOf<Rect>()
        rects.add(Rect(0, 0, width, (150 * resources.displayMetrics.density).toInt()))
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
            systemGestureExclusionRects = rects
        }
    }
}
I excluded gestures for 150dp from the top, across the entire width (just to test).
While @Dev4Life's answer was helpful, I had no success until visiting the documentation:
Android Docs: setSystemGestureExclusionRects
I've seen countless topics about this, but so far I cannot find anything that works. I have a portrait app with a form containing mostly input fields.
I've created a ScrollView and added all the necessary fields inside its content. I've added a Vertical Layout Group and a Content Size Fitter. Now, when I run the app, the content moves nicely, but when I open the keyboard it overlaps the content and I cannot see the lower input fields while editing them. It doesn't look good. So I'm looking for a script/plugin to enable this feature for both Android and iOS. I've only seen iOS-specific solutions and universal ones, but none worked for me. I could link all the code I've found, but I think it would just create an unnecessary mess. Most of these topics are old and may have worked before, but don't work now.
Ideally, I'd prefer a global solution, which just shrinks whole app, when keyboard is opened and expands it back, when keyboard is closed.
Btw. I'm using Unity 2019.2.15f1 and running on Pixel 3XL with Android 10, if that matters.
edit:
I've created small demo project, where you can test Keyboard size script:
https://drive.google.com/file/d/1vj2WG2JA1OHPc3uI4PNyAeYHtuHTLUXh/view?usp=sharing
It contains 3 scripts:
ScrollContent.cs - attaches the InputH.cs script to every input field programmatically.
InputH.cs - handles starting (OnPointerClick) and ending (onEndEdit) the editing of a single input field by calling OpenKeyboard/CloseKeyboard from the KeyboardSize script.
KeyboardSize.cs - script from @Remy_rm, slightly modified (added some logs, an IsKeyboardOpened method, and my attempt at adjusting the scroll position for the keyboard).
The idea looks fine, but there are a few issues:
1) The "added" height seems to be working; however, the scrolled content should also be moved (my attempt to fix this is in the last line of the GetKeyboardHeight method, but it doesn't work). If you scroll to the bottom input field and tap it, that field should sit just above the keyboard once it opens.
2) When I tap on a second input field while the first one is being edited, onEndEdit is called and the keyboard closes.
Layout hierarchy looks like this:
This answer is made assuming you are using the native TouchScreenKeyboard.
What you can do is add an image as the last entry of your Vertical Layout Group (let's call it the "buffer"), that has its alpha set to 0 (making it invisible) and its height set to 0. This will make your Layout Group look unchanged.
Then when you open the keyboard you set the height of this "buffer" image to the height of the keyboard. Due to the content size fitter and the Vertical Layout Group the input fields of your form will be pushed to above your keyboard, and "behind" your keyboard will be the empty image.
Getting the height of the keyboard is easy on iOS: TouchScreenKeyboard.area.height should do the trick. On Android, however, this returns an empty rect. This answer explains how to get the height of an Android keyboard.
Fully implemented, this would look something like the following (only tested on Android, but it should work for iOS as well). Note I'm using the method from the answer linked above to get the Android keyboard height.
using System.Collections;
using UnityEngine;
using UnityEngine.UI;

public class KeyboardSize : MonoBehaviour
{
    [SerializeField] private RectTransform bufferImage;
    [SerializeField] private RectTransform contentParent;
    [SerializeField] private ScrollRect scrollRect;

    private float height = -1;

    /// <summary>
    /// Open the keyboard and start a coroutine that gets the height of the keyboard
    /// </summary>
    public void OpenKeyboard()
    {
        TouchScreenKeyboard.Open("");
        StartCoroutine(GetKeyboardHeight());
    }

    /// <summary>
    /// Set the height of the "buffer" image back to zero when the keyboard closes so that the content size fitter shrinks to its original size
    /// </summary>
    public void CloseKeyboard()
    {
        bufferImage.sizeDelta = Vector2.zero;
    }

    /// <summary>
    /// Get the height of the keyboard depending on the platform
    /// </summary>
    public IEnumerator GetKeyboardHeight()
    {
        //Wait half a second to ensure the keyboard is fully opened
        yield return new WaitForSeconds(0.5f);
#if UNITY_IOS
        //On iOS we can use the native TouchScreenKeyboard.area.height
        height = TouchScreenKeyboard.area.height;
#elif UNITY_ANDROID
        //On Android TouchScreenKeyboard.area.height returns 0, so we get it from an AndroidJavaObject instead.
        using (AndroidJavaClass UnityClass = new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
        {
            AndroidJavaObject View = UnityClass.GetStatic<AndroidJavaObject>("currentActivity").Get<AndroidJavaObject>("mUnityPlayer").Call<AndroidJavaObject>("getView");
            using (AndroidJavaObject Rct = new AndroidJavaObject("android.graphics.Rect"))
            {
                View.Call("getWindowVisibleDisplayFrame", Rct);
                height = Screen.height - Rct.Call<int>("height");
            }
        }
#endif
        //Set the height of our "buffer" image to the height of the keyboard, pushing the content up.
        bufferImage.sizeDelta = new Vector2(1, height);
        //Scroll the selected input field back into view now that the layout has changed.
        StartCoroutine(CalculateNormalizedPosition());
    }

    private IEnumerator CalculateNormalizedPosition()
    {
        yield return new WaitForEndOfFrame();
        //Get the new total height of the content gameobject
        var newContentHeight = contentParent.sizeDelta.y;
        //Get the local y position of the selected input
        var selectedInputHeight = InputH.lastSelectedInput.transform.localPosition.y;
        //Get the normalized position of the selected input
        var selectedInputfieldHeightNormalized = 1 - selectedInputHeight / -newContentHeight;
        //Assign the input field's normalized position to the scroll rect's normalized position
        scrollRect.verticalNormalizedPosition = selectedInputfieldHeightNormalized;
    }
}
private IEnumerator CalculateNormalizedPosition()
{
yield return new WaitForEndOfFrame();
//Get the new total height of the content gameobject
var newContentHeight = contentParent.sizeDelta.y;
//Get the local y position of the selected input
var selectedInputHeight = InputH.lastSelectedInput.transform.localPosition.y;
//Get the normalized position of the selected input
var selectedInputfieldHeightNormalized = 1 - selectedInputHeight / -newContentHeight;
//Assign the button's normalized position to the scroll rect's normalized position
scrollRect.verticalNormalizedPosition = selectedInputfieldHeightNormalized;
}
I've edited your InputH to keep track of the last selected input by adding a static GameObject that is assigned to when an inputfield is clicked like so:
using UnityEngine;
using UnityEngine.EventSystems;

public class InputH : MonoBehaviour, IPointerClickHandler
{
    private GameObject canvas;
    //We can use a static GameObject to track the last selected input
    public static GameObject lastSelectedInput;

    // Start is called before the first frame update
    void Start()
    {
        // Your start is unaltered
    }

    public void OnPointerClick(PointerEventData eventData)
    {
        KeyboardSize ks = canvas.GetComponent<KeyboardSize>();
        if (!ks.IsKeyboardOpened())
        {
            ks.OpenKeyboard();
            //Assign the clicked input field to lastSelectedInput to be used inside KeyboardSize.cs
            lastSelectedInput = gameObject;
        }
        else
        {
            print("Keyboard is already opened");
        }
    }
}
One more thing that had to be done for (at least my) solution to work: I had to change the anchor on your "Content" GameObject to top center, instead of the stretch you had it on.
I'm using the Corona SDK to create a simple Android game. I have it almost completely done and am just trying to polish it now. This is something I'm having trouble with:
This is my game's setup page. Players are able to tap the up/down arrows to increase/decrease game attributes. I want the arrows to align with the numbers. My current code does not account for this:
local numPlayerstxt = display.newText("Number of players: "..NUMPLAYERS, w/2,7*h/20,0,0,"Quattrocento-Regular",24)
numPlayerstxt:setFillColor(0,0,0)
sceneGroup:insert(numPlayerstxt)
local numMafiatxt = display.newText("Number of Mafia: "..MAFIA, w/2,12*h/20,0,0,"Quattrocento-Regular",24)
numMafiatxt:setFillColor(0,0,0)
sceneGroup:insert(numMafiatxt)
local upTotal = display.newImage( "arrow1.png")
upTotal:translate(33*w/40, 6*h/20)
upTotal:scale(0.08, 0.08)
sceneGroup:insert(upTotal)
upTotal:addEventListener("tap", increasePlayers)
local downTotal = display.newImage( "arrow1.png")
downTotal:translate(33*w/40, 8*h/20)
downTotal:scale(0.08, 0.08)
downTotal.rotation = 180
sceneGroup:insert(downTotal)
downTotal:addEventListener("tap", decreasePlayers)
local upMafia = display.newImage( "arrow1.png")
upMafia:translate(32*w/40, 11*h/20)
upMafia:scale(0.08, 0.08)
sceneGroup:insert(upMafia)
upMafia:addEventListener("tap", increaseMafia)
local downMafia = display.newImage( "arrow1.png")
downMafia:translate(32*w/40, 13*h/20)
downMafia:scale(0.08, 0.08)
downMafia.rotation = 180
sceneGroup:insert(downMafia)
downMafia:addEventListener("tap", decreaseMafia)
As you can see, my code currently only uses screen width and height values to roughly estimate the positions. Inevitably this fails to create any illusion of polish when the app runs on different device screens.
Any suggestions on what I might do to align the arrows with the numbers? For instance, is there a way in Corona to find the end point of a textview, and align it in that way?
Thanks for reading.
Try changing the anchorX of each arrow, like this:
-- suppose one of your arrows is upArrow
upArrow.anchorX = 0.4 -- even try each float between 0 - 1
This moves your object's anchor along the object's x axis.
I have recently started learning uiautomator for the UI testing of various Android devices. Currently I am testing on Galaxy S4.
I am looking for any class or method which can be used to automate the unlock pattern that user draws to unlock the phone. For example, I have letter N as a "draw pattern" to unlock the phone. How can I automate this unlock pattern in uiautomator?
Suppose you have the letter "N" as your unlock pattern. First you have to find the coordinates of each point of that N shape on your device. As you mentioned, the entire pattern lock has 9 dots; you need the (x,y) coordinates of 4 of them. To get the coordinates, you can use the same method mentioned earlier in one of the answers:
Go to 'Settings' -> 'Developer Options'.
Under the 'INPUT' section, you will find an option 'Pointer Location' -> enable it.
Once you have your 4 dots' coordinates, use the swipe(Point[] segments, int segmentSteps) method of the UiAutomator framework.
The input for this method is the four sets of coordinates you got from your device screen, as a Point array. This gives a continuous swipe through the points.
I have given a sample script below for your understanding.
import android.graphics.Point;

public void unlockPatternLock() throws UiObjectNotFoundException, Exception {
    Point[] coordinates = new Point[4];
    coordinates[0] = new Point(248, 1520);
    coordinates[1] = new Point(248, 929);
    coordinates[2] = new Point(796, 1520);
    coordinates[3] = new Point(796, 929);
    getUiDevice().wakeUp();
    getUiDevice().swipe(coordinates, 10);
}
The script above draws the N shape smoothly. Remember to enter the coordinates according to your own device screen.
This is the only way I know to do it, but it can be tedious trying to find your x and y coordinates.
UiDevice.getInstance().swipe(int startX, int startY, int endX, int endY, int steps)
The only problem I see is that to draw an "N" you would need 3 of these swipes, while to unlock the screen it needs to be one continuous swipe.
Give it a shot. Finding your x and y will be tough. I would go to my "apps home" page and look at apps (with uiautomatorviewer) that are in relatively the same spot, find their coords, then go from there.
NOTE: The int steps controls how fast and "smooth" the swipe is. I like to use 5 or 10; it seems pretty natural.
To find out the coordinates on the screen, you can do this:
[1] Go to 'Settings' -> 'Developer Options'.
[2] Under the 'INPUT' section, you will find an option 'Pointer Location' -> enable it.
After that, if you touch anywhere on the screen, you can view the exact screen coordinates of that point at the top of your device's screen.
And after you get the coordinates, you can use the swipe method, for example:
UiDevice.getInstance().swipe(390, 1138, 719, 1128, 40);
passing the exact coordinates of where to drag from and to.
I have already used this and it works!
I would like to know if there is some way, maybe using views or something, to have a background service detect whether the foreground app is in full screen or not, meaning whether the status bar is hidden.
I was thinking maybe of using constant strings to detect if a view is shown or not, but I'm not sure exactly. Root is an option if needed.
Thanks guys!!
A broadcast receiver or something like that would be ideal; however, there is no such broadcast for a full screen request.
Here is what I did do however.
I created an invisible overlay, this invisible overlay is 0 x 0 in size :)
I then use View.getLocationOnScreen(int[]); this fills the int array with the view's x and y coordinates.
I then test these coordinates (really only the y value): if it equals 0, the current visible activity is in full screen, because the view is at the topmost area of the screen. If the status bar is showing, the view reports as 50 pixels (on my device) from the top edge of the screen.
I place all this in a service which has a timer and when the timer expires, it gets the location, runs the tests, and does what I need to do. The timer is then cancelled when the screen shuts off. Upon screen on, timer is restarted.
I checked how much CPU it uses, and it says 0.0% in System Tuner.
I like Seth's idea, so I made this code snippet:
/**
 * Check if fullscreen is active based on the position of a top-left View
 * @param topLeftView View whose position will be compared with (0,0)
 * @return true if the view sits at the top-left corner of the screen
 */
public static boolean isFullscreen(View topLeftView) {
    int[] location = new int[2];
    topLeftView.getLocationOnScreen(location);
    return location[0] == 0 && location[1] == 0;
}
Have you tried getWindow().getAttributes().flags?

boolean fullScreen = (getWindow().getAttributes().flags & WindowManager.LayoutParams.FLAG_FULLSCREEN) != 0;
boolean forceNotFullScreen = (getWindow().getAttributes().flags & WindowManager.LayoutParams.FLAG_FORCE_NOT_FULLSCREEN) != 0;
boolean actionbarVisible = getActionBar().isShowing();

Note that getWindow() is an instance method of Activity, so these must be initialized inside your Activity rather than as static initializers. Usage:

if (fullScreen) {
    // full screen
}
else {
    // not full screen
}
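The flag tests are plain bit masking. Here is a standalone sketch with the flag values copied from WindowManager.LayoutParams (FLAG_FULLSCREEN is 0x00000400 and FLAG_FORCE_NOT_FULLSCREEN is 0x00000800), so it runs without the Android framework:

```java
public class FlagCheck {
    // Constant values copied from WindowManager.LayoutParams.
    static final int FLAG_FULLSCREEN = 0x00000400;
    static final int FLAG_FORCE_NOT_FULLSCREEN = 0x00000800;

    // True if the full-screen bit is set in the window's flags.
    static boolean isFullScreen(int windowFlags) {
        return (windowFlags & FLAG_FULLSCREEN) != 0;
    }

    public static void main(String[] args) {
        // As if the activity had requested full screen:
        System.out.println(isFullScreen(FLAG_FULLSCREEN));           // true
        // A window with only the force-not-full-screen bit set:
        System.out.println(isFullScreen(FLAG_FORCE_NOT_FULLSCREEN)); // false
    }
}
```

Inside an Activity you would pass getWindow().getAttributes().flags as windowFlags; the sketch just isolates the bitwise test.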