I am new to AndEngine and to Android game development in general. I have created a sprite as a box, and it is draggable using the code below; that part works fine.
Now I want to add multitouch support so that the box can be rotated with two fingers while it remains draggable. I have been trying for many days but have no idea how to do it. Can someone please help?
final float centerX = (CAMERA_WIDTH - this.mBox.getWidth()) / 2;
final float centerY = (CAMERA_HEIGHT - this.mBox.getHeight()) / 2;
Box = new Sprite(centerX, centerY, this.mBox,
        this.getVertexBufferObjectManager()) {
    @Override
    public boolean onAreaTouched(TouchEvent pSceneTouchEvent,
            float pTouchAreaLocalX, float pTouchAreaLocalY) {
        // Drag: centre the sprite on the touch point.
        this.setPosition(pSceneTouchEvent.getX() - this.getWidth() / 2,
                pSceneTouchEvent.getY() - this.getHeight() / 2);
        float pValueX = pSceneTouchEvent.getX();
        float pValueY = CAMERA_HEIGHT - pSceneTouchEvent.getY();
        float dx = pValueX - gun.getX();
        float dy = pValueY - gun.getY();
        // Math.atan2 returns radians; convert to degrees only once.
        double angleRadians = Math.atan2(dy, dx);
        Box.setRotation((float) Math.toDegrees(angleRadians));
        return true;
    }
};
Make sure you have enabled multitouch in your game. You can use the same code used in the MultiTouchExample in the onLoadEngine method.
The algorithm is quite simple, similar to what you've posted here.
Keep track of up to 2 pointer IDs you get in onAreaTouched method. (You can get the pointer ID by calling pSceneTouchEvent.getPointerID()).
Keep track of the pointers' state (Currently touching/not touching) and location (pTouchAreaLocalX and pTouchAreaLocalY).
Whenever 2 pointers are touching (you received ACTION_DOWN for both), save the initial angle: Math.atan2(pointer1Y - pointer2Y, pointer1X - pointer2X).
As long as ACTION_UP has not been received for either pointer, update the current angle on every ACTION_MOVE event, compute the angle delta (delta = currentAngle - initialAngle), and call setRotation(Math.toDegrees(delta)).
To make the sprite draggable with 2 pointers, move it by the lesser of the distances each pointer has moved (see the sketch after the example below). For example, if:
pointer1.dX = 50;
pointer1.dY = -20;
pointer2.dX = 40;
pointer2.dY = -10;
the sprite should move +40 units in the X axis, and -10 units in the Y axis.
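A minimal sketch of that bookkeeping, assuming an AndEngine-style TouchEvent that exposes getPointerID(), isActionDown(), isActionMove() and isActionUp(); the field and method names here are illustrative, not taken from the original code:

// Sketch of the two-pointer rotation tracking described above.
// Assumes AndEngine-style TouchEvent methods; adapt names to your setup.
private final int[] pointerIds = { -1, -1 };
private final float[] pointerX = new float[2];
private final float[] pointerY = new float[2];
private float initialAngle;
private float initialRotation;

@Override
public boolean onAreaTouched(TouchEvent e, float localX, float localY) {
    int id = e.getPointerID();
    if (e.isActionDown()) {
        // Claim a free slot for this pointer.
        int slot = pointerIds[0] == -1 ? 0 : (pointerIds[1] == -1 ? 1 : -1);
        if (slot == -1) return true;
        pointerIds[slot] = id;
        pointerX[slot] = e.getX();
        pointerY[slot] = e.getY();
        if (pointerIds[0] != -1 && pointerIds[1] != -1) {
            // Both fingers down: remember the starting angle and rotation.
            initialAngle = (float) Math.atan2(pointerY[0] - pointerY[1],
                                              pointerX[0] - pointerX[1]);
            initialRotation = getRotation();
        }
    } else if (e.isActionMove()) {
        int slot = id == pointerIds[0] ? 0 : (id == pointerIds[1] ? 1 : -1);
        if (slot == -1) return true;
        pointerX[slot] = e.getX();
        pointerY[slot] = e.getY();
        if (pointerIds[0] != -1 && pointerIds[1] != -1) {
            float currentAngle = (float) Math.atan2(pointerY[0] - pointerY[1],
                                                    pointerX[0] - pointerX[1]);
            float delta = currentAngle - initialAngle;
            setRotation(initialRotation + (float) Math.toDegrees(delta));
        }
    } else if (e.isActionUp()) {
        // Forget this pointer so a new gesture can start cleanly.
        if (id == pointerIds[0]) pointerIds[0] = -1;
        if (id == pointerIds[1]) pointerIds[1] = -1;
    }
    return true;
}

The two-pointer drag can be handled in the same isActionMove() branch: track each pointer's movement since the previous event and apply the smaller per-axis delta with setPosition(), as described above.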
Related
I'm creating an AR game with AR Foundation where the player can swipe a ball in any direction.
The problem I'm encountering is that once the first ball has spawned, all subsequent balls load at the same position as the first ball, no matter which direction I'm aiming the camera.
I want the ball to always spawn in front of the camera, regardless of the location, position and direction of my phone's camera. I tried using tags to get the locations of the camera and of the cup the ball needs to be thrown into, but the ball still doesn't load in front of the camera. I guess this is not the right way to achieve this, and I was wondering what is?
The code I'm using for spawning the ball at a certain location:
[SerializeField]
GameObject ball;
public float distanceX;
public float distanceY;
public float distanceZ;
public float ballX;
public float ballY;
public float ballZ;
public void Spawn()
{
    distanceX = GameObject.FindGameObjectWithTag("Cup").transform.position.x - GameObject.FindGameObjectWithTag("MainCamera").transform.position.x;
    distanceY = GameObject.FindGameObjectWithTag("Cup").transform.position.y - GameObject.FindGameObjectWithTag("MainCamera").transform.position.y;
    distanceZ = GameObject.FindGameObjectWithTag("Cup").transform.position.z - GameObject.FindGameObjectWithTag("MainCamera").transform.position.z;

    ballX = distanceX / 4;
    ballY = distanceY / 4;
    ballZ = distanceZ / 4;

    Instantiate(ball, new Vector3(ballX, ballY, 10f), Quaternion.identity);
}
It's a bit hard to tell how your camera and this Cup object are related and where exactly your object should be spawned.
But in general, for spawning something in front of your main camera you can do e.g.
public void Spawn()
{
    // Camera.main basically does FindGameObjectWithTag("MainCamera")
    var camera = Camera.main.transform;

    Instantiate(ball, camera.position + camera.forward * 10f, Quaternion.identity);
}
If you need a different distance than 10, e.g. based on that Cup object, you just replace it or add another offset vector.
Either way, you should not store the values in individual floats just to build a new vector again ;)
E.g. instead of
distanceX = GameObject.FindGameObjectWithTag("Cup").transform.position.x - GameObject.FindGameObjectWithTag("MainCamera").transform.position.x;
distanceY = GameObject.FindGameObjectWithTag("Cup").transform.position.y - GameObject.FindGameObjectWithTag("MainCamera").transform.position.y;
distanceZ = GameObject.FindGameObjectWithTag("Cup").transform.position.z - GameObject.FindGameObjectWithTag("MainCamera").transform.position.z;
ballX = distanceX / 4;
ballY = distanceY / 4;
ballZ = distanceZ / 4;
you would rather do
Vector3 distance = GameObject.FindGameObjectWithTag("Cup").transform.position - GameObject.FindGameObjectWithTag("MainCamera").transform.position;
Vector3 ballDelta = distance / 4f;
...
I am facing a problem getting the touch point of a circle for the game I am developing.
I tried to solve this by checking the points as below:
public Actor hit(float x, float y, boolean touchable) {
    if (!this.isVisible() || this.getTouchable() == Touchable.disabled)
        return null;

    // Get center-point of bounding circle, also known as the center of the Rect
    float centerX = _texture.getRegionWidth() / 2;
    float centerY = _texture.getRegionHeight() / 2;

    // Calculate radius of circle
    float radius = (float) (Math.sqrt(centerX * centerX + centerY * centerY)) - 5f;

    // And distance of point from the center of the circle
    float distance = (float) Math.sqrt(((centerX - x) * (centerX - x))
            + ((centerY - y) * (centerY - y)));

    // If the distance is less than the circle radius, it's a hit
    if (distance <= radius) return this;

    // Otherwise, it isn't
    return null;
}
I am getting hit positions inside the circle, but also for points around it near the black spots; I only need touches that land on the circle itself.
Could somebody suggest an approach for achieving this?
I'm guessing that you are comparing local rect coordinates (i.e. centerX, centerY) with the screen-coordinate x, y parameters that you are feeding into the function.
So you probably want to subtract the rect's x, y position from the x, y parameters so that they are in local coordinates.
So:
float lLocalX = x - rectX; // assuming rectX is the rect's x position on the screen
float lLocalY = y - rectY; // assuming rectY is the rect's y position on the screen
now you can compare them!
float distance = (float) Math.sqrt(((centerX - lLocalX ) * (centerX - lLocalX ))
+ ((centerY - lLocalY ) * (centerY - lLocalY )));
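Putting the two together, the adjusted hit() would look roughly like this (a sketch; rectX and rectY stand for however you obtain the rect's on-screen position, e.g. getX()/getY()):

// Sketch: the original hit() with the screen-to-local conversion applied first.
// rectX/rectY are assumed to be the rect's position on screen.
public Actor hit(float x, float y, boolean touchable) {
    if (!this.isVisible() || this.getTouchable() == Touchable.disabled)
        return null;

    float lLocalX = x - rectX;
    float lLocalY = y - rectY;

    float centerX = _texture.getRegionWidth() / 2;
    float centerY = _texture.getRegionHeight() / 2;
    float radius = (float) Math.sqrt(centerX * centerX + centerY * centerY) - 5f;

    float distance = (float) Math.sqrt((centerX - lLocalX) * (centerX - lLocalX)
            + (centerY - lLocalY) * (centerY - lLocalY));

    return distance <= radius ? this : null;
}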
You can have a Circle object in your Actor: http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/math/Circle.html
Then check if the circle contains that point using the circle.contains(float x, float y) function.
Basically it'll look something like this:
public Actor hit(float x, float y, boolean touchable) {
    if (!this.isVisible() || this.getTouchable() == Touchable.disabled)
        return null;
    if (circle.contains(x, y)) return this;
    return null;
}
Of course, the downside is that if this is a dynamic object that moves around a lot, you'd have to constantly update the circle's position, as in the sketch below. Hope this helps :)
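One minimal way to do that is to recentre the circle every frame, e.g. in act() (a sketch; it assumes the circle is meant to be centred on the actor, in the same coordinate space used for the hit test):

// Keep the bounding circle centred on the actor as it moves.
// 'circle' is the com.badlogic.gdx.math.Circle field used in hit() above.
@Override
public void act(float delta) {
    super.act(delta);
    circle.x = getX() + getWidth() / 2f;
    circle.y = getY() + getHeight() / 2f;
}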
I am trying to take a touch event and move a shape to wherever the touch event moves.
public boolean onTouchEvent(MotionEvent e) {
    mRenderer.setPosition(e.getX(), e.getY());
    return true;
}
The problem is the coordinates I get from the MotionEvent are the screen location in pixels, not the normalized coordinates [-1, 1]. How do I translate screen coordinates to the normalized coordinates? Thanks in advance!
float x = e.getX();
float y = e.getY();

// set these to the width/height of the view the touch coordinates are relative to
float screenWidth;
float screenHeight;

float sceneX = (x / screenWidth) * 2.0f - 1.0f;
float sceneY = (y / screenHeight) * -2.0f + 1.0f; // if bottom is at -1; otherwise same as X
To add a bit more general code:
/*
Source and target represent the 2 coordinate systems you want to translate points between.
For this question, the source is some UI view in which the top-left corner is at (0,0) and the bottom-right is at (screenWidth, screenHeight),
and the target is an OpenGL buffer whose bounds are the same as those passed to "glOrtho", commonly (-1,1) and (1,-1).
*/
float sourceTopLeftX;
float sourceTopLeftY;
float sourceBottomRightX;
float sourceBottomRightY;
float targetTopLeftX;
float targetTopLeftY;
float targetBottomRightX;
float targetBottomRightY;
//the point you want to translate to another system
float inputX;
float inputY;
//result
float outputX;
float outputY;
outputX = targetTopLeftX + ((inputX - sourceTopLeftX) / (sourceBottomRightX-sourceTopLeftX))*(targetBottomRightX-targetTopLeftX);
outputY = targetTopLeftY + ((inputY - sourceTopLeftY) / (sourceBottomRightY-sourceTopLeftY))*(targetBottomRightY-targetTopLeftY);
With this method you can translate any point between any N-dimensional orthogonal systems (for 3D just add the same for Z as for X and Y). In this example I used the border coordinates of the view, but you can use ANY 2 points in the scene; for instance, this method works just the same using the screen centre and the top-right corner. The only limitation is that
sourceTopLeftX != sourceBottomRightX (and likewise for every dimension).
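Wrapped into a small helper, the same formula can be reused per axis (a sketch; the method name and the screenWidth/screenHeight variables are only illustrative):

// Linearly maps 'input' from the source range [sourceA, sourceB]
// to the target range [targetA, targetB]; call it once per axis.
static float mapAxis(float input, float sourceA, float sourceB,
                     float targetA, float targetB) {
    return targetA + ((input - sourceA) / (sourceB - sourceA)) * (targetB - targetA);
}

// Example: screen coordinates (0,0)-(screenWidth,screenHeight)
// to normalized device coordinates (-1,1) top-left, (1,-1) bottom-right.
float sceneX = mapAxis(e.getX(), 0f, screenWidth, -1f, 1f);
float sceneY = mapAxis(e.getY(), 0f, screenHeight, 1f, -1f);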
Please see the sections below for my problem described in three separate ways; hopefully that helps people to answer.
Problem: How do you find a coordinate pair expressed in canvas/user space when you only have the coordinate expressed in terms of a zoomed image, given the original scale point and scale factor?
Problem in practice:
I'm currently trying to replicate the zoom functionality used in apps such as the gallery or maps, where you can pinch to zoom in/out with the zoom moving towards the midpoint of the pinch.
On touch-down I save the centre point of the zoom (in X, Y coordinates based on the current screen). I then have this function act when a "scale" gesture is detected:
class ImageScaleGestureDetector extends SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        if (mScaleAllowed)
            mCustomImageView.scale(detector.getScaleFactor(), mCenterX, mCenterY);
        return true;
    }
}
The scale function of the CustomImageView looks like this:
public boolean scale(float scaleFactor, float focusX, float focusY) {
    mScaleFactor *= scaleFactor;

    // Don't let the object get too small or too large.
    mScaleFactor = Math.max(MINIMUM_SCALE_VALUE, Math.min(mScaleFactor, 5.0f));

    mCenterScaleX = focusX;
    mCenterScaleY = focusY;

    invalidate();
    return true;
}
The drawing of the scaled image is achieved by overriding the onDraw method, which scales the canvas around the centre and draws the image to it.
@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    canvas.save();
    canvas.translate(mCenterScaleX, mCenterScaleY);
    canvas.scale(mScaleFactor, mScaleFactor);
    canvas.translate(-mCenterScaleX, -mCenterScaleY);
    mIcon.draw(canvas);
    canvas.restore();
}
This all works fine when scaling from a scale factor of 1, because the initial mCenterX and mCenterY are coordinates based on the device screen: (10, 10) on the device is (10, 10) on the canvas.
After you have already zoomed, however, the next time you click position (10, 10) it will no longer correspond to (10, 10) on the canvas, because of the scaling and translating that has already been performed.
Problem in abstraction:
The image below is an example of a zoom operation around centre point A. Each box represents the position and size of the view when at that scale factor (1, 2, 3, 4, 5).
In the example, if you scaled by a factor of 2 around A and then clicked on position B, the X, Y reported for B would be based on the screen position, not on the position relative to (0, 0) of the initial canvas.
I need to find a way of getting the absolute position of B.
So, after redrawing the problem I've found the solution I was looking for. It has gone through a few iterations, but here's how I worked it out:
B - Point, Center of the scale operation
A1, A2, A3 - Points, equal in user-space but different in canvas-space.
You know the values for Bx and By because they are always constant no matter what the scale factor (You know this value in both canvas-space and in user-space).
You know Ax & Ay in user-space so you can find the distance between Ax to Bx and Ay to By. This measurement is in user-space, to convert it to a canvas-space measurement simply divide it by the scale factor. (Once converted to canvas-space, you can see these lines in red, orange and yellow).
As point B is constant, the distance between it and the edges are constant (These are represented by Blue Lines). This value is equal in user-space and canvas-space.
You know the width of the Canvas in canvas-space so by subtracting these two canvas space measurements (Ax to Bx and Bx to Edge) from the total width you are left with the coordinates for point A in canvas-space:
public float[] getAbsolutePosition(float Ax, float Ay) {
    // B (the scale centre) is (mCenterScaleX, mCenterScaleY).
    float fromAxToBxInCanvasSpace = (mCenterScaleX - Ax) / mScaleFactor;
    float fromBxToCanvasEdge = mCanvasWidth - mCenterScaleX;
    float x = mCanvasWidth - fromAxToBxInCanvasSpace - fromBxToCanvasEdge;

    float fromAyToByInCanvasSpace = (mCenterScaleY - Ay) / mScaleFactor;
    float fromByToCanvasEdge = mCanvasHeight - mCenterScaleY;
    float y = mCanvasHeight - fromAyToByInCanvasSpace - fromByToCanvasEdge;

    return new float[] { x, y };
}
The above code and image describe the case where you're clicking to the top left of the original centre. I used the same logic to find A no matter which quadrant it was located in and refactored to the following:
public float[] getAbsolutePosition(float Ax, float Ay) {
    float x = getAbsolutePosition(mBx, Ax, mScaleFactor);
    float y = getAbsolutePosition(mBy, Ay, mScaleFactor);
    return new float[] { x, y };
}

private float getAbsolutePosition(float oldCenter, float newCenter, float scaleFactor) {
    if (newCenter > oldCenter) {
        return oldCenter + ((newCenter - oldCenter) / scaleFactor);
    } else {
        return oldCenter - ((oldCenter - newCenter) / scaleFactor);
    }
}
Here is my solution based on Graeme's answer:
public float[] getAbsolutePosition(float Ax, float Ay) {
    MatrixContext.drawMatrix.getValues(mMatrixValues);

    float x = mWidth - ((mMatrixValues[Matrix.MTRANS_X] - Ax) / mMatrixValues[Matrix.MSCALE_X])
            - (mWidth - getTranslationX());
    float y = mHeight - ((mMatrixValues[Matrix.MTRANS_Y] - Ay) / mMatrixValues[Matrix.MSCALE_X])
            - (mHeight - getTranslationY());

    return new float[] { x, y };
}
The parameters Ax and Ay are the points the user touches via onTouch(). I keep a static matrix instance in my MatrixContext class to hold the previously scaled/translated values.
Really sorry this is a brief answer, in a rush. But I've been looking at this recently too - I found http://code.google.com/p/android-multitouch-controller/ to do what you want (I think - I had to skim read your post). Hope this helps. I'll have a proper look tonight if this doesn't help and see if I can help further.
So I have an ImageView using a Matrix to scale the Bitmap I'm displaying. I can double-tap to zoom to full size, and my ScaleAnimation handles animating the zoom-in; it all works fine.
Now I want to double-tap again to zoom out, but when I animate this with ScaleAnimation, the ImageView does not draw the newly exposed areas of the image (as the current viewport shrinks); instead you see the visible portion of the image shrinking in. I have tried using ViewGroup.setClipChildren(false), but this only leaves the last-drawn artifacts from the previous frame, leading to a trippy telescoping effect, which is not quite what I was after.
I know there are many zoom-related questions, but none cover my situation, specifically animating the zoom-out operation. I do have the mechanics working, i.e. aside from the zoom-out animation, double-tapping to zoom in and out works fine.
Any suggestions?
In the end I decided to stop using the Animation classes offered by Android, because the ScaleAnimation applies a scale to the ImageView as a whole which then combines with the scale of the ImageView's image Matrix, making it complicated to work with (aside from the clipping issues I was having).
Since all I really need is to animate the changes made to the ImageView's Matrix, I implemented the OnDoubleTapListener (at the end of this post - I leave it as an "exercise to the reader" to add the missing fields and methods - I use a few PointF and Matrix fields to avoid excess garbage creation). Basically the animation itself is implemented by using View.post to keep posting a Runnable that incrementally changes the ImageView's image Matrix:
public boolean onDoubleTap(MotionEvent e) {
    final float x = e.getX();
    final float y = e.getY();

    matrix.reset();
    matrix.set(imageView.getImageMatrix());
    matrix.getValues(matrixValues);
    matrix.invert(inverseMatrix);

    doubleTapImagePoint[0] = x;
    doubleTapImagePoint[1] = y;
    inverseMatrix.mapPoints(doubleTapImagePoint);

    final float scale = matrixValues[Matrix.MSCALE_X];
    final float targetScale = scale < 1.0f ? 1.0f : calculateFitToScreenScale();
    final float finalX;
    final float finalY;

    // assumption: if targetScale is less than 1, we're zooming out to fit the screen
    if (targetScale < 1.0f) {
        // scaling the image to fit the screen, we want the resulting image to be centred.
        // We need to take into account the shift that is applied to zoom on the tapped
        // point; the easiest way is to reuse the transformation matrix.
        RectF imageBounds = new RectF(imageView.getDrawable().getBounds());

        // set up matrix for target
        matrix.reset();
        matrix.postTranslate(-doubleTapImagePoint[0], -doubleTapImagePoint[1]);
        matrix.postScale(targetScale, targetScale);
        matrix.mapRect(imageBounds);

        finalX = ((imageView.getWidth() - imageBounds.width()) / 2.0f) - imageBounds.left;
        finalY = ((imageView.getHeight() - imageBounds.height()) / 2.0f) - imageBounds.top;
    }
    // else zoom around the double-tap point
    else {
        finalX = x;
        finalY = y;
    }

    final Interpolator interpolator = new AccelerateDecelerateInterpolator();
    final long startTime = System.currentTimeMillis();
    final long duration = 800;

    imageView.post(new Runnable() {
        @Override
        public void run() {
            float t = (float) (System.currentTimeMillis() - startTime) / duration;
            t = t > 1.0f ? 1.0f : t;
            float interpolatedRatio = interpolator.getInterpolation(t);
            float tempScale = scale + interpolatedRatio * (targetScale - scale);
            float tempX = x + interpolatedRatio * (finalX - x);
            float tempY = y + interpolatedRatio * (finalY - y);

            matrix.reset();
            // translate initialPoint to 0,0 before applying zoom
            matrix.postTranslate(-doubleTapImagePoint[0], -doubleTapImagePoint[1]);
            // zoom
            matrix.postScale(tempScale, tempScale);
            // translate back to equivalent point
            matrix.postTranslate(tempX, tempY);

            imageView.setImageMatrix(matrix);

            if (t < 1f) {
                imageView.post(this);
            }
        }
    });

    return false;
}
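For completeness, here is a sketch of how such an OnDoubleTapListener is typically wired up via a GestureDetector; the listener class name ZoomDoubleTapListener is just a placeholder for the implementation above, not part of the original post:

// Hypothetical wiring: forward the ImageView's touch events to a GestureDetector
// that dispatches double-taps to the listener implemented above.
final GestureDetector gestureDetector =
        new GestureDetector(context, new GestureDetector.SimpleOnGestureListener());
gestureDetector.setOnDoubleTapListener(new ZoomDoubleTapListener(imageView));

imageView.setOnTouchListener(new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return gestureDetector.onTouchEvent(event);
    }
});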