Game maker pause menu clicking - android

I have a pause controller object. It works when using the up/down arrow keys and then Enter to select something in the pause menu. But how exactly do I get it to use left mouse clicks (touch screen) instead of the Enter key? This is the code for my Enter key:
if (interest == "resume")
{
    instance_destroy();
}
else if (interest == "levels")
{
    room_goto(worlds);
}
else if (interest == "main_menu")
{
    room_goto(main);
}
And this is in my Draw GUI event:
draw_sprite(background, 0, 640, 360);
draw_sprite(spr_border, 0, 640, 360);
if (interest == "resume")
{
    draw_sprite(spr_resume, 0, 640, 360 - 100);
    draw_sprite(spr_levels, 0, 640, 360);
    draw_sprite(spr_main_menu, 0, 640, 360 + 100);
}
else if (interest == "levels")
{
    draw_sprite(spr_resume, 0, 640, 360 - 100);
    draw_sprite(spr_levels, 0, 640, 360);
    draw_sprite(spr_main_menu, 0, 640, 360 + 100);
}
else if (interest == "main_menu")
{
    draw_sprite(spr_resume, 0, 640, 360 - 100);
    draw_sprite(spr_levels, 0, 640, 360);
    draw_sprite(spr_main_menu, 0, 640, 360 + 100);
}
I tried using this in my Step event to get the clicking (touching) to work, but it's not working at all (nothing activates when touching or clicking). Is this even right?
if(device_mouse_check_button_released(0, mb_left)){
    if (device_mouse_x(0) > 640 && device_mouse_x(0) < 640 + sprite_get_width(spr_resume)
        && device_mouse_y(0) > 260 && device_mouse_y(0) < 260 + sprite_get_height(spr_resume)){
        //RESUME IS TOUCHED
    }
}
if(device_mouse_check_button_released(0, mb_left)){
    if (device_mouse_x(0) > 640 && device_mouse_x(0) < 640 + sprite_get_width(spr_levels)
        && device_mouse_y(0) > 260 && device_mouse_y(0) < 260 + sprite_get_height(spr_levels)){
    }
}
if(device_mouse_check_button_released(0, mb_left)){
    if (device_mouse_x(0) > 640 && device_mouse_x(0) < 640 + sprite_get_width(spr_main_menu)
        && device_mouse_y(0) > 260 && device_mouse_y(0) < 260 + sprite_get_height(spr_main_menu)){
    }
}

I see a number of problems with your code, all of which you will need to address for anything to work:
Your interest variable is checked, but it is never actually changed anywhere (I would expect it to be changed in your Step event, but that code doesn't do anything).
Your Draw GUI event draws exactly the same thing for all three values of interest, so there is no visible change whatsoever.
You jump to a new room and display the selected interest at the same time (when releasing the selection), which means you'll never see the selection. You should a) draw the selected interest on press and b) jump to the room on release.
Furthermore, keep in mind that more than one finger could touch the screen simultaneously: you're only checking the first touch device, while there can be up to five.
A last suggestion in the GameMaker context: use objects with mouse press and release events (mouse events translate to touch events on touch screens) rather than scanning screen areas for presses. This is what makes GameMaker so much easier to use, as it automatically checks the instance's sprite area for collisions as built-in behavior.
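To make those points concrete, here is a rough, untested GML sketch of the step-event approach. It assumes the menu is drawn in the GUI layer, that the sprites have centred origins (hence the half-width/half-height offsets), and it reuses the names from the question:

/// Step event of the pause controller object (sketch only)
var tx = device_mouse_x_to_gui(0);
var ty = device_mouse_y_to_gui(0);

// 1) change interest when a touch starts over one of the menu items
if (device_mouse_check_button_pressed(0, mb_left))
{
    if (point_in_rectangle(tx, ty, 640 - sprite_get_width(spr_resume) / 2, 260 - sprite_get_height(spr_resume) / 2,
                                   640 + sprite_get_width(spr_resume) / 2, 260 + sprite_get_height(spr_resume) / 2))
    {
        interest = "resume";
    }
    else if (point_in_rectangle(tx, ty, 640 - sprite_get_width(spr_levels) / 2, 360 - sprite_get_height(spr_levels) / 2,
                                        640 + sprite_get_width(spr_levels) / 2, 360 + sprite_get_height(spr_levels) / 2))
    {
        interest = "levels";
    }
    else if (point_in_rectangle(tx, ty, 640 - sprite_get_width(spr_main_menu) / 2, 460 - sprite_get_height(spr_main_menu) / 2,
                                        640 + sprite_get_width(spr_main_menu) / 2, 460 + sprite_get_height(spr_main_menu) / 2))
    {
        interest = "main_menu";
    }
}

// 2) only act on the selection when the touch is released
if (device_mouse_check_button_released(0, mb_left))
{
    if (interest == "resume") instance_destroy();
    else if (interest == "levels") room_goto(worlds);
    else if (interest == "main_menu") room_goto(main);
}

The GUI variants of the mouse functions matter here because the menu is drawn in the Draw GUI event, so plain device_mouse_x/device_mouse_y (which return room coordinates) will not necessarily line up with it.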

Related

QML performance problems when moving widgets affect each other's movements

Here is a minimal version of the code that reveals the problem, which is:
When playing the game with the Desktop kit (Windows), moving the racket doesn't affect the speed of the ball's movement, but when run on an Android device, moving the racket does affect the speed of the ball's movement, as though their movements were tied together.
What solution is there for that, please?
main.qml:
import QtQuick 2.9
import QtQuick.Window 2.2

Window {
    visible: true
    width: 720
    height: 620
    title: qsTr("Movement Test")

    Rectangle {
        id: table
        anchors.fill: parent
        color: "gray"

        Rectangle {
            id: ball
            property double xincrement: Math.random() + 0.5
            property double yincrement: Math.random() + 0.5
            width: 15
            height: width
            radius: width / 2
            color: "white"
            x: 300; y: 300
        }

        Racket {
            id: myRacket
            x: table.width - 50
            y: table.height/3
            color: "blue"
        }

        Timer {
            interval: 5; repeat: true; running: true

            function collision() {
                if((ball.x + ball.width >= myRacket.x &&
                    ball.x < myRacket.x + myRacket.width) &&
                   (ball.y + ball.height >= myRacket.y &&
                    ball.y <= myRacket.y + myRacket.height))
                    return true
                return false
            }

            onTriggered: {
                if(ball.x + ball.width >= table.width)
                    running = false
                else if(ball.x <= 0)
                    ball.xincrement *= -1
                else if (collision())
                    ball.xincrement *= -1

                ball.x = ball.x + (ball.xincrement * 1.5);
                ball.y = ball.y + (ball.yincrement * 1.5);

                if(ball.y <= 0 || ball.y + ball.height >= table.height)
                    ball.yincrement *= -1
            }
        }
    }
}
Racket.qml:
import QtQuick 2.9

Rectangle {
    id: root
    width: 15; height: 65

    property int oldY: y
    property bool yUwards: false
    property bool yDwards: false

    onYChanged: {
        if(y > oldY) yDwards = true
        else if (y < oldY) yUwards = true
        oldY = y
    }

    MouseArea {
        anchors.fill: parent
        anchors.margins: -root.height
        drag.target: root
        focus: true
        hoverEnabled: true
        pressAndHoldInterval: 0
        drag.axis: Drag.YAxis
        drag.minimumY: table.y
        drag.maximumY: table.height - root.height - 10
    }
}
Qt tries to render every ~16-17 ms. If you set your timer to 5 ms, it will try to trigger 3-4 times per frame.
Other things that are going on might keep it from maintaining that pace, and if the device is less powerful this effect may be more visible than on other devices.
To see whether the Timer achieves the set rate, you can print the millisecond part of the current time with:
console.log(Qt.formatTime(new Date(), "zzz"))
The logged values should be 5 apart if the Timer achieves full speed, and something else if it doesn't.
The easiest way would be to set a target location for the ball to move to and animate that movement (using an Animation type). The Animation takes care that the movement speed is kept, even in cases where the frame rate drops.
If you want to do it manually, then instead of using the timer you should use the onAfterRendering signal (of Window, I think). It is emitted whenever something has moved on the screen and triggered rendering, which is also the ideal moment to check collisions.
You then need to calculate the new position from the velocity, the current position and the elapsed time. The latter you can get either from the JS Date() object, or by exposing it somehow from C++ using a QElapsedTimer.
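For illustration only, here is a minimal sketch of the time-based variant. It keeps the Timer from the question but scales the movement by the actually elapsed time; the 300 px/s speed is an arbitrary assumption:

Timer {
    interval: 16; repeat: true; running: true
    property double lastTick: new Date().getTime()
    onTriggered: {
        var now = new Date().getTime()
        var dt = (now - lastTick) / 1000   // seconds since the last trigger
        lastTick = now
        // 300 px/s: a late or skipped trigger simply moves the ball further,
        // so the perceived speed no longer depends on how often the Timer fires
        ball.x += ball.xincrement * 300 * dt
        ball.y += ball.yincrement * 300 * dt
    }
}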

Compass - Track number of full 360 degree rotations

Suppose a person is using this compass, and beginning from 90 degrees they start rotating either clockwise or counterclockwise. What's the best way to keep count of how many full 360 degree rotations they complete? Assuming they'll be rotating either only clockwise or only counterclockwise from beginning to end.
I kept coming up with solutions where, if the beginning bearing is for example 90 degrees, I keep checking the next bearing whenever the sensor data changes, and if it's consistently moving in one direction I know they're rotating. If they keep rotating in that direction and make it back to 90 degrees, that counts as one rotation. My way seems very convoluted and inefficient, and I'm having a hard time coming up with a better one.
In this scenario, I'd be expecting multiple full rotations.
I'd appreciate any help. Thank you!
I found this related answer and am trying to put together a code sample for that. If someone has already done something similar, please post it!
@Override
public void onSensorChanged(SensorEvent event)
{
    switch(event.sensor.getType())
    {
        case Sensor.TYPE_GRAVITY:
        {
            mValuesAccelerometer = lowPass(event.values.clone(), mValuesAccelerometer);
            break;
        }
        case Sensor.TYPE_MAGNETIC_FIELD:
        {
            mValuesMagneticField = lowPass(event.values.clone(), mValuesMagneticField);
            break;
        }
    }

    boolean success = SensorManager.getRotationMatrix(
            mMatrixR,
            mMatrixI,
            mValuesAccelerometer,
            mValuesMagneticField);

    if (success)
    {
        SensorManager.getOrientation(mMatrixR, mMatrixValues);

        float azimuth = toDegrees(mMatrixValues[0]);
        float pitch = toDegrees(mMatrixValues[1]);
        float roll = toDegrees(mMatrixValues[2]);

        if (azimuth < 0.0d)
        {
            //The bearing in degrees
            azimuth += 360.0d;
        }
    }
}
If you're sure that they'll be moving in only one direction, you can optimize your code by using checkpoint degrees instead of continuously monitoring whether they're still moving in the right direction.
Here's a rough algorithm to do that:
// You noted 90 degrees as the starting point.
// Checkpoint 1 will be 180; keep it as a boolean.
// Once they've reached 180, wait for the next checkpoint, which is 270;
// if the reading goes back towards 90 before reaching 270, set the 180 flag
// to false again: it means they turned back.
// If they make it to 270, then wait for 0 degrees and do the same.
// If they make it back to 90 like that, you've got a rotation, and hopefully
// a bit of complexity is reduced since you're only checking four checkpoints.
I don't have any code handy at the moment.
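Still, a rough, untested Java sketch of the checkpoint idea might look something like this (it assumes clockwise-only rotation starting at 90 degrees, and onAzimuthChanged is a hypothetical hook fed from wherever you already compute the azimuth in the 0-360 range):

// Checkpoints after a 90-degree start: 180, 270, 0, then 90 again completes a turn.
private final float[] checkpoints = {180f, 270f, 0f, 90f};
private int nextCheckpoint = 0;   // index into checkpoints
private int rotationCount = 0;

private void onAzimuthChanged(float azimuth) {
    // distance to the next checkpoint, accounting for the 0/360 wrap-around
    float diff = Math.abs(azimuth - checkpoints[nextCheckpoint]);
    if (diff > 180f) diff = 360f - diff;
    if (diff < 10f) {                     // 10-degree tolerance, tune as needed
        nextCheckpoint++;
        if (nextCheckpoint == checkpoints.length) {
            rotationCount++;              // passed 180, 270, 0 and 90 again
            nextCheckpoint = 0;
        }
    }
}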
This is a tracking problem with a reading that wraps around. You need to keep track of the last reading and hope the user doesn't do more than a half turn between readings... (because of the Nyquist theorem).
Here is the basic pseudocode.
var totalChange = 0;
var lastAzimuth = -1000;

function CountTurns(az)
{
    if (az > 180) az -= 360; // do this if your azimuth is always positive i.e. 0-360.

    if (lastAzimuth == -1000)
    {
        lastAzimuth = az;
    }

    diff = az - lastAzimuth;
    if (diff > 180)
        diff -= 360;
    if (diff < -180)
        diff += 360;

    lastAzimuth = az;
    totalChange += diff;
    return totalChange / 360;
}
Create 3 integers
int rotationCount=0
int currentDegrees=0
int previousDegrees=89
I'm not a Java programmer, so I don't know how you handle the onSensorChanged event, but basically you perform a check within a while loop:
while (currentDegrees + 90 < 360)
{
    if (currentDegrees + 90 == 0)
    {
        if (previousDegrees == 359)
        {
            rotationCount = rotationCount + 1
        }
    }
    else if (currentDegrees + 90 == 359)
    {
        if (previousDegrees == 0)
        {
            rotationCount = rotationCount - 1
        }
    }
    previousDegrees = currentDegrees + 90
}
Sorry about the syntax; this is just an example of how to do it.
Visualize what I will say and you'll definitely hit your goal in no time.
You don't need to think in terms of the full 360 degrees; you can take half of that and use the sign differences to your advantage.
Take a look at this figure:
We have a circle that is divided into two sides (left and right).
The left side takes the negative 180 degrees (the West side).
The right side takes the positive 180 degrees (the East side).
The current position will always be 0 (North), with positive 180 as (South).
IF the compass goes positive (meaning it turns to the right),
THEN add +1 on each step.
IF the compass goes negative (meaning it turns to the left),
THEN subtract 1 on each step.
IF the compass hits or is at 0, then it's at the current position (North).
IF the compass hits or is at 90, then it's (East).
IF the compass hits or is at 180, then it's (South).
IF the compass hits or is at -90, then it's (West).
The result is that whenever the person turns East, the counter adds +1 until it reaches 180; then the value flips from positive to negative and counts down on each step until it reaches 0 again. That would be one full 360-degree rotation.
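One untested way to turn that signed-half-circle idea into Java (onAzimuthSigned is a hypothetical hook fed an azimuth already converted to the [-180, 180] range, and clockwise-only rotation is assumed):

// A full clockwise turn: leave 0 heading East (positive values), cross the
// +180/-180 seam in the South, then come back up through 0 (North) again.
private float previous = 0f;
private boolean crossedSouth = false;
private int rotations = 0;

private void onAzimuthSigned(float azimuth) {
    // seam crossing: the previous reading was far into the positive (East) half,
    // the new one is far into the negative (West) half
    if (previous > 90f && azimuth < -90f) {
        crossedSouth = true;
    }
    // back through North after having crossed the South seam: one full rotation
    if (crossedSouth && previous < 0f && azimuth >= 0f) {
        rotations++;
        crossedSouth = false;
    }
    previous = azimuth;
}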

Zoom in/out limits on AS3 for iOS programming

I'm having trouble with my code. I've made a basic two-finger pinch zoom with AS3, but I have a problem:
I need the zoom-in to stop at 2, for example (the normal size is 1), and the zoom-out to stop at 1. Here is my code, but if I zoom fast, the zoom goes past 2.
I need to limit the zoom to between 1 and 2.
Multitouch.inputMode = MultitouchInputMode.GESTURE;
escenario.addEventListener(TransformGestureEvent.GESTURE_PAN, fl_PanHandler);
stage.addEventListener(TransformGestureEvent.GESTURE_ZOOM, fl_ZoomHandler);
function fl_PanHandler(event:TransformGestureEvent):void
{
event.currentTarget.x += event.offsetX;
event.currentTarget.y += event.offsetY;
}
function fl_ZoomHandler(event:TransformGestureEvent):void
{
if (event.scaleX && event.scaleY >= 1 && escenario.scaleX && escenario.scaleY <= 2)
{
escenario.scaleX *= event.scaleX;
escenario.scaleY *= event.scaleY;
trace(escenario.scaleX);
}
}
Since you're doing a times-equals (*=), your value can easily go above the threshold of 2 from your if statement, because you multiply the value after the if check. You could just do this:
function fl_ZoomHandler(event:TransformGestureEvent):void {
var scale:Number = escenario.scaleX * event.scaleX; //the proposed new scale amount
//you set both the scaleX and scaleY in one like below:
escenario.scaleY = escenario.scaleX = Math.min(Math.max(1,scale), 2);
//^^^^ inside the line above,
//Math.max(1, scale) will return whatever is bigger, 1 or the proposed new scale.
//Then Math.min(..., 2) will then take whatever is smaller, 2 or the result of the previous Math.max
trace(escenario.scaleX);
}

OpenCV crop function fatal signal 11

Hello, I am doing an Android app which uses OpenCV to detect rectangles/squares; to detect them I am using functions (modified a bit) from squares.cpp. The points of every square found are stored in vector<vector<Point> > squares, which I then pass to a function that chooses the biggest one and stores it in vector<Point> theBiggestSq. The problem is with the cropping function, whose code I will paste below (I will post the link to a YouTube video showing the problem too). If the actual square is far enough from the camera it works OK, but if I bring it a bit closer, at some point it hangs. I will post the screenshot of the problem from LogCat, with the points printed out (the boundary points taken from the theBiggestSq vector; maybe it will help find the solution).
void cutAndSave(vector<Point> theBiggestSq, Mat image){
    RotatedRect box = minAreaRect(Mat(theBiggestSq));

    // Draw bounding box in the original image (debug purposes)
    //cv::Point2f vertices[4];
    //box.points(vertices);
    //for (int i = 0; i < 4; ++i)
    //{
    //    cv::line(img, vertices[i], vertices[(i + 1) % 4], cv::Scalar(0, 255, 0), 1, CV_AA);
    //}
    //cv::imshow("box", img);
    //cv::imwrite("box.png", img);

    // Set Region of Interest to the area defined by the box
    Rect roi;
    roi.x = box.center.x - (box.size.width / 2);
    roi.y = box.center.y - (box.size.height / 2);
    roi.width = box.size.width;
    roi.height = box.size.height;

    // Crop the original image to the defined ROI
    //bmp = Bitmap.createBitmap(box.size.width / 2, box.size.height / 2, Bitmap.Config.ARGB_8888);
    Mat crop = image(roi);
    //Mat crop = image(Rect(roi.x, roi.y, roi.width, roi.height)).clone();
    //Utils.matToBitmap(crop.clone(), bmp);

    imwrite("/sdcard/OpenCVTest/1.png", crop);
    imshow("crop", crop);
}
video of my app and its problems
The coords printed are, respectively: roi.x, roi.y, roi.width, roi.height.
Another problem is that the drawn boundaries should be green, but as you can see in the video they are distorted (bent, as if the boundaries were made of glass?).
Thank you for any help. I am new to OpenCV (I have only been using it for a month), so please be tolerant.
EDIT:
drawing code:
//draw//
for( size_t i = 0; i < squares.size(); i++ )
{
const Point* p = &squares[i][0];
int n = (int)squares[i].size();
polylines(mBgra, &p, &n, 1, true, Scalar(255,255,0), 5, 10);
//Rect rect = boundingRect(cv::Mat(squares[i]));
//rectangle(mBgra, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);
}
This error basically tells you the cause: your ROI exceeds the image dimensions. This means that when you extract Rect roi from the RotatedRect box, either x or y is smaller than zero, or the width/height pushes the rectangle outside the image. You should check this using something like:
// Propose rectangle from data
int proposedX = box.center.x - (box.size.width / 2);
int proposedY = box.center.y - (box.size.height / 2);
int proposedW = box.size.width;
int proposedH = box.size.height;
// Ensure top-left edge is within image
roi.x = proposedX < 0 ? 0 : proposedX;
roi.y = proposedY < 0 ? 0 : proposedY;
// Ensure bottom-right edge is within image
roi.width =
(roi.x - 1 + proposedW) > image.cols ? // Will this roi exceed image?
(image.cols - 1 - roi.x) // YES: make roi go to image edge
: proposedW; // NO: continue as proposed
// Similar for height
roi.height = (roi.y - 1 + proposedH) > image.rows ? (image.rows - 1 - roi.y) : proposedH;

detecting the side of a cube that is facing the camera in android opengl es

So I started creating an app to learn OpenGL ES on Android. First I went through a chapter that explained how to construct a cube and get it to rotate using the system timer. I then mapped each side to a different segment of one image; for development purposes each side is textured with a number. I then implemented the drag feature to allow the user to rotate the cube up/down or left/right depending on how they swiped.
First here is some background on my problem:
I want to keep track of which side is facing the camera, because each face is being rotated on the axis it started on. For example, given a cube that has the following unfolded layout:
  2
  4
3 1 5
  6
Here 1 is the side facing the screen, 2 is the opposite (or back) face, 4 is up, 5 is right, 6 is down, and 3 is left. This means 3/5 are on the x-axis, 4/6 on the y-axis, and 1/2 on the z-axis.
Here is my issue:
The cube rotates correctly if I only rotate around one axis (i.e. I only go left/right or up/down until I reach 360), but if I only go to 90, 180, or 270, then the axes I should be rotating around have switched. This happens because of what is stated above about each side of the cube being stuck to the axis it started on.
If you rotate right once so that 5 is facing, then the z-axis from the user's perspective is the x-axis of the cube. This gets even more convoluted when you start going 90 degrees left/right and then 90 degrees up/down, etc.
I tried to keep track of the faces with an array of numbers listed clockwise from the top number, but depending on which number you came from, the directions of the surrounding numbers have changed.
For Example:
I mapped out the numbers surrounding each number as it faces the screen, if it was rotated to from 1, so:
  4       2       4       1       4
3 1 5   3 4 5   1 5 2   3 6 5   2 3 1
  6       1       6       2       6
and 2 is a wild card because it can be reached from any direction, so there is no single initial layout coming from 1:
  6
3 2 5
  4
So my arrays would be
sidesOfFace1[] = {4,5,6,3}
sidesOfFace2[] = {6,5,4,3}
sidesOfFace3[] = {4,1,6,2}
sidesOfFace4[] = {2,5,1,3}
sidesOfFace5[] = {4,2,6,1}
sidesOfFace6[] = {1,5,2,3}
And MOVE can have the values
UP = 1 RIGHT = 2 DOWN = 3 LEFT = 4
Then, by keeping track of the previous face, the current face, and the last move, I tried to figure out a formula for an offset that would help me select what the next face would be, given a direction to move to a new face. Roughly, I came up with this interpretation:
prevface MOVE currFace
1 -> UP(1) -> 4 -> RIGHT(2) -> 5
offset = opposite direction of MOVE - (sidesOfFace.indexOf(prevFace)+1)
So first I get
1) DOWN - sidesOfFace4.indexOf(1)+1 => 3 - 3 = 0
2) LEFT - sidesOfFace5.indexOf(4)+1 => 4 - 1 = 3
1) This tells me that the sides around 4 are in the same order as the array sidesOfFace, in clockwise order starting at the top. So when the user swipes again I can know which side we are going to. This is imperative in being able to set up the right rotations of the cube, since they change for the viewer as the cube gets turned.
2) This shows that there is an offset: if we look at the array, the index of 4 should be 0, but the cube has been rotated such that the UP side is now at index 3, RIGHT is 0, DOWN is 1, and LEFT is 2.
Besides needing to know which side is facing the screen for other functionality in my app, I also have to know because, depending on which side is facing the screen, I have to rotate the cube along the correct axis. I am keeping track of the xRot and yRot, but these rotations have to happen according to the camera/user's view, not the cube's axes.
For Example I found that:
face   axis for front face   up/down axis   right/left axis   (as seen by the camera)
1      +z                    +x             +y
2      -z                    -x             -y
4      +y                    +x             +z
6      -y                    -x             -z
5      +x                    +z             +y
3      -x                    -z             -y
This means that depending on which side is facing the screen I have to do the xRotations around the up/down axis and the yRotations around the right/left axis.
A friend said something about possibly checking the four vertices that are closest to the camera, but since I am using the glRotate functions I wasn't sure where I could get that information from. I still need to know which numbers are on which side of the front face so I can automate the rotations of the cube.
If you actually sat down and read all this, I truly do appreciate it. If you could steer me in the right direction, maybe with a link, or better yet a known solution to this problem, that would be amazing. I have been struggling with this for a few days now, and I was just wondering if this is already a problem that has a solution.
Thanks all,
Alan
I'll be honest, I didn't completely read the last 3/4 of your post; it's looking way more complicated than it needs to be.
But if you just want to detect which side of the cube is nearest the camera, you should just have to do the following:
With your unrotated cube, create a vector for each direction:
vec3 left(-1, 0, 0)
vec3 right (1, 0, 0)
vec3 up(0, 1, 0)
etc...
Then acquire the current modelview matrix of the cube. If you transform each of these vectors by the cube's modelview (the rotation part only, i.e. with w = 0), you will get the resulting directions in eye space.
vec3 leftInEyeSpace = modelView * left;
vec3 upInEyeSpace = modelView * up;
...
This will be the direction of each vector relative to your eye.
Then define a vector from the center of the cube pointing into the camera:
vec3 cubeToCamera= -normalize((modelView * vec4(0,0,0,1)).xyz);
Then you want to take the dot product of each vector with your 'cubeToCamera' vector. Because the dot product decreases as the angle between the vectors increases, the vector with the greatest (most positive) dot product is the one most facing the camera.
float leftDot = dot(cubeToCamera, leftInEyeSpace)
float rightDot = dot(cubeToCamera, rightInEyeSpace)
...
A bit convoluted, but could something like this work? I left out a few methods, but hopefully you get the general idea: you should be able to use the boolean variables to work out which side is facing the user.
private Boolean right, left, spin;
private Boolean up, down, upsideDown;
private Boolean rightSide, leftSide;
// All false by default.
public void swipe(int swipeDirection)
{
// Swipe Direction - 0 = Right, 1 = Left, 2 = Up, 3 = Down
switch (swipeDirection)
{
case 0:
if (upsideDown)
{
swipeLeft();
}
else if (rightSide)
{
swipeDown();
}
else if (leftSide)
{
swipeUp();
}
else
{
swipeRight();
}
break;
case 1:
if (upsideDown)
{
swipeRight();
}
else if (rightSide)
{
swipeUp();
}
else if (leftSide)
{
swipeDown();
}
else
{
swipeLeft();
}
break;
case 2:
if (upsideDown)
{
swipeDown();
}
else if (rightSide)
{
swipeRight();
}
else if (leftSide)
{
swipeLeft();
}
else
{
swipeUp();
}
break;
case 3:
if (upsideDown)
{
swipeUp();
}
else if (rightSide)
{
swipeLeft();
}
else if (leftSide)
{
swipeRight();
}
else
{
swipeDown();
}
break;
}
}
private void swipeRight()
{
if (right)
{
right = false;
spin = true;
}
else if (left)
{
left = false;
}
else if (spin)
{
spin = false;
left = true;
}
else if (up)
{
up = false;
if (rightSide)
{
rightSide = false;
upsideDown = true;
}
else if (leftSide)
{
leftSide = false;
}
else
{
rightSide = true;
}
}
else if (down)
{
down = false;
if (leftSide)
{
leftSide = false;
upsideDown = true;
}
else if (rightSide)
{
rightSide = false;
}
else
{
leftSide = true;
}
}
else
{
right = true;
}
}
private void swipeUp()
{
if (down)
{
down = false;
}
else if (up)
{
upsideDown = !upsideDown;
}
else if (upsideDown)
{
upsideDown = false;
up = true;
}
}
