Drag and Drop Libgdx, without a Target - android

I'm trying to implement the DragAndDrop class in LibGDX, but what's different for me is that I don't have a "Target" to drop my actor on. I simply want to drag it around all over the screen. I haven't really got any further than this:
dd = new DragAndDrop();
dd.addSource(new Source(stoneImage) {
    @Override
    public Payload dragStart(InputEvent event, float x, float y, int pointer) {
        Payload payload = new Payload();
        payload.setObject("some payload");
        payload.setDragActor(stoneImage);
        return payload;
    }
});
Here I add the Image to the stage as an actor:
stageMove.addActor(stoneImage);
At the moment it won't move when I try to drag it.

I suggest two easy ways to solve your problem.
1) Add a DragListener to your actor:
actor.addListener(new DragListener() {
    @Override
    public void drag(InputEvent event, float x, float y, int pointer) {
        //x and y are local to the actor, so this keeps it centered under the pointer
        actor.moveBy(x - actor.getWidth() / 2, y - actor.getHeight() / 2);
    }
});
2) Create a generic actor whose size fills the whole stage, and use it as the target actor, as in the sketch below.
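A minimal sketch of option 2 (untested, reusing the dd and stageMove objects from the question):
Actor dropArea = new Actor();
//Size it to the stage (on newer LibGDX versions, use the viewport's world size instead)
dropArea.setBounds(0, 0, stageMove.getWidth(), stageMove.getHeight());
stageMove.addActor(dropArea);
dropArea.toBack(); //keep it behind the draggable actors

dd.addTarget(new DragAndDrop.Target(dropArea) {
    @Override
    public boolean drag(Source source, Payload payload, float x, float y, int pointer) {
        return true; //accept the drag anywhere on the stage
    }

    @Override
    public void drop(Source source, Payload payload, float x, float y, int pointer) {
        //Position the real actor where the drag was released
        source.getActor().setPosition(x, y);
    }
});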

Android GDX game detect the right shape?

I am developing a LibGDX game but have got stuck on one part, which I will explain briefly:
I have 9 balls arranged in a 3*3 grid, and I need to detect the ball that I'm touching, as shown in the image below:
[image: a 3*3 grid of numbered balls]
and I typed this code:
for (int i : listBalls) {
    touchPoint = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
    rectangle = new Rectangle(sprite[i].getX(), sprite[i].getY(), spriteSize, spriteSize);
    if (rectangle.contains(touchPoint.x, touchPoint.y)) {
        Gdx.app.log("Test", "Touched dragged " + String.valueOf(i));
        pointer = i;
    }
}
When I touch any ball in the top row, it detects the opposite ball in the bottom row. For example, in the image above, if I touch ball no. 2 at the top it will point to ball no. 8, and the same happens when touching any of the bottom balls.
When touching any ball in the middle row, it gives the right one.
I hope I have explained my issue clearly. Please help.
As explained here: LibGDX input y-axis flipped, the coordinate system of the input is inverted on the y-axis. Try subtracting the input y from the height of your screen or camera:
int y = screenHeight - Gdx.input.getY();
Keep in mind that using a Camera and unprojecting the input coordinates is the most recommended way to go about detecting input in LibGDX. Example:
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    Vector3 unprojected = camera.unproject(new Vector3(screenX, screenY, 0));
    return true; //true marks the event as handled
}
You don't even have to invert the y-coordinate; unproject() does this automatically for you. To use the correct x and y coordinates you could then use:
float x = unprojected.x;
float y = unprojected.y;
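A minimal sketch applying unproject() to the loop from the question (camera, listBalls, sprite, spriteSize, and pointer are assumed from the question's code):
Vector3 touchPoint = camera.unproject(new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0));
for (int i : listBalls) {
    Rectangle bounds = new Rectangle(sprite[i].getX(), sprite[i].getY(), spriteSize, spriteSize);
    if (bounds.contains(touchPoint.x, touchPoint.y)) {
        pointer = i; //the ball actually under the finger
    }
}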

MPAndroid - snapping x position when scrolling

After a few hours of trying, I'm looking for some hints on how to add a snap-scroll mechanism to MPAndroidChart. Basically I want the 5 visible bars to align so they are fully visible and centered. I have now imported the library source code because it looks like there's no other way to change the code in computeScroll (BarLineChartTouchListener).
Edit:
To clarify: I'm showing around 20 bars, but the chart is zoomed so the user can scroll horizontally. What bothers me is that the chart does not get aligned automatically, so the first visible bar might be clipped in half. I'm looking for a snapping effect that rounds the scroll position to the nearest multiple of the bar width, leaving 5 fully visible bars.
I ended up adding the following function to BarLineChartBase.java. I know it's far from elegant, but it seems to do the job. It's limited to API level 11 and above because of ValueAnimator. For lower API levels (which I don't cater for) you might need to look at NineOldAndroids or some other animation-loop technique.
@TargetApi(Build.VERSION_CODES.HONEYCOMB)
public void alignX() {
    int count = this.getValueCount();
    int xIndex = this.getLowestVisibleXIndex()
            + Math.round((this.getHighestVisibleXIndex() - this.getLowestVisibleXIndex()) / 2.0f);
    float xsInView = this.getXAxis().getValues().size() / this.getViewPortHandler().getScaleX();
    Transformer mTrans = this.getTransformer(YAxis.AxisDependency.LEFT);
    float[] pts = new float[] { xIndex - xsInView / 2f, 0 };
    mTrans.pointValuesToPixel(pts);
    final Matrix save = new Matrix();
    save.set(this.getViewPortHandler().getMatrixTouch());
    final float x = pts[0] - this.getViewPortHandler().offsetLeft();
    final int frames = 20;
    ValueAnimator valueAnimator = ValueAnimator.ofInt(0, frames); //ofInt() is a static factory
    valueAnimator.setDuration(500);
    valueAnimator.addUpdateListener(new ValueAnimator.AnimatorUpdateListener() {
        int prev = -1;

        @Override
        public void onAnimationUpdate(ValueAnimator animation) {
            if ((int) animation.getAnimatedValue() > prev) {
                save.postTranslate(-x / (float) frames, 0);
                BarLineChartBase.this.getViewPortHandler().refresh(save, BarLineChartBase.this, true);
            }
            prev = (int) animation.getAnimatedValue();
        }
    });
    valueAnimator.start();
}
I trigger it at the end of the computeScroll function in BarLineChartTouchListener.
I kept the variable names as I copied code from classes like MoveViewJob, ViewPortHandler, etc. Since it only aligns on the x-axis, I removed the y-axis calculations and used zeros instead. Any optimizations are welcome, especially from the author @PhilippJahoda.
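For reference, a hypothetical sketch of that trigger (the mChart field name is an assumption about the listener's internals; adjust to however the listener references its chart):
public void computeScroll() {
    //... existing fling/deceleration handling ...
    mChart.alignX(); //snap to the nearest bar boundary once scrolling settles
}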

Android - How to overlay one path on top of another

I currently have the following code:
private void drawGreen(Canvas canvas) {
    greenPaint.setColor(0xFF00AA00);
    if (start) {
        greenPath = new Path();
        greenPath.reset();
        greenPath.moveTo(pathArrayX.get(0), pathArrayY.get(0));
        start = false;
    }
    if (isInsideCircle(pathArrayX.get(pathIndex), pathArrayY.get(pathIndex), curX, curY,
            TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 25, getResources().getDisplayMetrics()))) {
        greenPath.lineTo(pathArrayX.get(pathIndex), pathArrayY.get(pathIndex));
        canvas.drawPath(greenPath, greenPaint);
        pathIndex++;
    }
}

private boolean isInsideCircle(float x, float y, float centerX, float centerY, float radius) {
    return Math.pow(x - centerX, 2) + Math.pow(y - centerY, 2) < Math.pow(radius, 2);
}
In my app, I first draw a red path, with its coordinates stored in the ArrayLists pathArrayX and pathArrayY. I am tracking the X and Y coordinates of a circular ImageView being moved under the mouse, and would like to overlay the red path with a green path as the user hovers over the path from beginning to end. As the user hovers over the red path, the portion they have already completed would be overlaid by a green path along the same segment. The X and Y coordinates of the ImageView (curX and curY) are calculated from a running thread.
However, my app doesn't appear to be drawing the green path at all. Is there anything I am doing wrong here?
Is this function even being called?
Assuming it's being called inside onDraw(Canvas), it looks like it might be missing the outer code for a loop. Seeing that you're doing pathIndex++ at the end, were you using a while loop? If you're just going to loop through the points, use a for loop, as in the sketch below; a while loop is more prone to becoming an endless loop if you forget to increment the counter, increment it incorrectly, or do it in multiple places.
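A sketch of such a for loop, using the names from the question (untested, and assuming greenPath has already been initialised):
float radius = TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, 25, getResources().getDisplayMetrics());
for (int i = pathIndex; i < pathArrayX.size(); i++) {
    //Stop extending the green path once the cursor leaves the next point's circle
    if (!isInsideCircle(pathArrayX.get(i), pathArrayY.get(i), curX, curY, radius)) {
        break;
    }
    greenPath.lineTo(pathArrayX.get(i), pathArrayY.get(i));
    pathIndex = i + 1;
}
canvas.drawPath(greenPath, greenPaint);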
Side note: if the boolean start flag is only being used to lazily initialise greenPath, you should scrap it and just use if (greenPath == null) { as a general practice. Prefer state that you can infer directly from objects over separate flags when you can; it makes the code cleaner.
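For instance, a minimal sketch of that lazy initialisation:
if (greenPath == null) {
    greenPath = new Path();
    greenPath.moveTo(pathArrayX.get(0), pathArrayY.get(0));
}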

Mapping A "Touch Region" to a Bitmap

I am trying to gain some more familiarity with the Android SurfaceView class, and in doing so am attempting to create a simple application that allows a user to move a Bitmap around the screen. The troublesome part of this implementation is that I am also including the ability for the user to drag the image again after it has been placed. In order to do this, I am mapping the bitmap to a simple set of coordinates that define the Bitmap's current location. The region I am mapping the image to, however, does not match up with the image.
The Problem
After placing an image on the SurfaceView using canvas.drawBitmap(), and recording the coordinates of the placed image, the mapping system that I have set up misinterprets the Bitmap's coordinates somehow and does not display correctly. As you can see in this image, I have simply used canvas.drawLine() to draw lines representing the bounds of my touch region, and the image is always offset down and to the right:
[image: the bitmap drawn offset from the outlined touch region]
The Code
Here, I shall provide the relevant code excerpts to help answer my question.
CustomSurface.java
This method encapsulates the drawing of the objects onto the canvas. The comments clarify each element:
public void onDraw(Canvas c) {
    //Simple black paint
    Paint paint = new Paint();
    //Draw a white background
    c.drawColor(Color.WHITE);
    //Draw the bitmap at the coordinates
    c.drawBitmap(g.getResource(), g.getCenterX(), g.getCenterY(), null);
    //Draw the actual surface that is receiving touch input
    c.drawLine(g.left, g.top, g.right, g.top, paint);
    c.drawLine(g.right, g.top, g.right, g.bottom, paint);
    c.drawLine(g.right, g.bottom, g.left, g.bottom, paint);
    c.drawLine(g.left, g.bottom, g.left, g.top, paint);
}
This method encapsulates how I capture touch events:
public boolean onTouchEvent(MotionEvent e) {
    switch (e.getAction()) {
        case MotionEvent.ACTION_DOWN: {
            if (g.contains((int) e.getX(), (int) e.getY()))
                item_selected = true;
            break;
        }
        case MotionEvent.ACTION_MOVE: {
            if (item_selected)
                g.move((int) e.getX(), (int) e.getY());
            break;
        }
        case MotionEvent.ACTION_UP: {
            item_selected = false;
            break;
        }
        default: {
            //Do nothing
            break;
        }
    }
    return true;
}
Graphic.java
This method is used to construct the Graphic:
//Initializes the graphic assuming the coordinate is in the upper left corner
public Graphic(Bitmap image, int start_x, int start_y) {
    resource = image;
    left = start_x;
    top = start_y;
    right = start_x + image.getWidth();
    bottom = start_y + image.getHeight();
}
This method detects if a user is clicking inside the image:
public boolean contains(int x, int y) {
    if (x >= left && x <= right) {
        if (y >= top && y <= bottom) {
            return true;
        }
    }
    return false;
}
This method is used to move the graphic:
public void move(int x, int y) {
    left = x;
    top = y;
    right = x + resource.getWidth();
    bottom = y + resource.getHeight();
}
I also have 2 methods that determine the center of the region (used for redrawing):
public int getCenterX() {
    return (right - left) / 2 + left;
}

public int getCenterY() {
    return (bottom - top) / 2 + top;
}
Any help would be greatly appreciated, I feel as though many other StackOverflow users could really benefit from a solution to this issue.
There's a very nice and thorough explanation of touch/multitouch/gestures on the Android Developers blog, which includes a free and open-source code example on Google Code.
Please take a look. If you don't need gestures, just skip that part and read about touch events only.
This issue ended up being much simpler than I had thought, and after some tweaking I realized that this was an issue of image width compensation.
This line in the above code is where the error stems from:
c.drawBitmap(g.getResource(), g.getCenterX(), g.getCenterY(), null);
As you can tell, I manipulated the coordinates from within the Graphic class to produce the center of the bitmap, and then called canvas.drawBitmap() assuming that it would draw from the center outward.
Obviously, this would not work, because the canvas always draws from the top left of an image downwards and to the right, so the solution was simple.
The Solution
Create the touch region with regard to the touch location, but draw the bitmap at the center location minus half the image width in x and half the image height in y. I basically changed the architecture of the Graphic class to implement getDrawX() and getDrawY() methods that return the modified x and y coordinates of where the bitmap should be drawn, so that the center_x and center_y values (determined in the constructor) actually appear at the center of the region.
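The method names above come from the answer; the bodies below are a sketch of what they would plausibly look like inside the Graphic class:
public int getDrawX() {
    //Shift left by half the width so the bitmap is centered on center_x
    return getCenterX() - resource.getWidth() / 2;
}

public int getDrawY() {
    //Shift up by half the height so the bitmap is centered on center_y
    return getCenterY() - resource.getHeight() / 2;
}
onDraw() would then call c.drawBitmap(g.getResource(), g.getDrawX(), g.getDrawY(), null) instead of drawing at the center coordinates directly.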
It all comes down to the fact that in an attempt to compensate for the way the canvas draws bitmaps, I unfortunately incorporated some bad behaviors and in the end had to handle the offset in a completely different way.

Get MotionEvent.getRawX/getRawY of other pointers

Can I get the value of MotionEvent.getRawX()/getRawY() for the other pointers?
MotionEvent.getRawX() api reference
The API says to use getRawX()/getRawY() to get the original raw X/Y coordinates, but they only work for one pointer (the last touched pointer). Is it possible to get the other pointers' raw X/Y coordinates?
Indeed, the API doesn't allow you to do this, but you can compute it. Try this:
public boolean onTouch(final View v, final MotionEvent event) {
    //Index of the pointer that triggered this event (replaces the manual
    //event.getAction() >> ACTION_POINTER_ID_SHIFT arithmetic)
    final int actionIndex = event.getActionIndex();
    final int[] location = new int[2];
    v.getLocationOnScreen(location);
    final int rawX = (int) event.getX(actionIndex) + location[0];
    final int rawY = (int) event.getY(actionIndex) + location[1];
    return true;
}
A solution worth trying for most use cases is to add this as the first line of onTouchEvent(). It simply finds the difference between the raw and processed coordinates and shifts the location of the MotionEvent by that amount, so that all the getX(int) values become raw values. You can then use the getX()/getY() results as raw values.
event.offsetLocation(event.getRawX()-event.getX(),event.getRawY()-event.getY());
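In context, a sketch of where that line would sit inside a View subclass:
@Override
public boolean onTouchEvent(MotionEvent event) {
    //Shift the whole event so every getX(i)/getY(i) reports screen (raw) coordinates
    event.offsetLocation(event.getRawX() - event.getX(), event.getRawY() - event.getY());
    //... handle the event using raw coordinates ...
    return super.onTouchEvent(event);
}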
While Ivan's point is valid, it's simply the case that applying a matrix directly to the view itself sucks so badly that you likely shouldn't do it. It's weird and inconsistent between devices, causes the touch events to fall outside the view and get declined, etc. If you are moving a view around like that, you are better off simply overriding onDraw() and applying that matrix to the canvas, then applying the inverse matrix to the MotionEvent so everything meshes up right. Then you can react to the events with proper, fine-grained control. And if you do that, my solution here wouldn't be subject to Ivan's objection.
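A rough sketch of that approach, assuming a matrix field holding the transform you would otherwise have applied to the view (MotionEvent.transform() requires API 11+):
@Override
protected void onDraw(Canvas canvas) {
    canvas.save();
    canvas.concat(matrix); //draw the content transformed
    super.onDraw(canvas);
    canvas.restore();
}

@Override
public boolean onTouchEvent(MotionEvent event) {
    Matrix inverse = new Matrix();
    if (matrix.invert(inverse)) {
        event.transform(inverse); //map screen touches back into content coordinates
    }
    return super.onTouchEvent(event);
}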
It might not be enough to just shift local coordinates by the view's location if the view is rotated. In that case you need something like this:
void getRawPoint(MotionEvent ev, int index, PointF point) {
    final int[] location = { 0, 0 };
    getLocationOnScreen(location);
    float x = ev.getX(index);
    float y = ev.getY(index);
    double angle = Math.toDegrees(Math.atan2(y, x));
    angle += getRotation();
    final float length = PointF.length(x, y);
    x = (float) (length * Math.cos(Math.toRadians(angle))) + location[0];
    y = (float) (length * Math.sin(Math.toRadians(angle))) + location[1];
    point.set(x, y);
}
The getLocationOnScreen answer works most of the time, but I was seeing it return incorrect values sometimes (when I was repositioning and re-parenting the view while the touch event was taking place), so I found an alternate approach that works more reliably.
If you look at the implementation of getRawX, it calls a private native function that accepts a pointerIndex, but the MotionEvent class only ever calls it with index 0:
public final float getRawX() {
    return nativeGetRawAxisValue(mNativePtr, AXIS_X, 0, HISTORY_CURRENT);
}
Unfortunately, nativeGetRawAxisValue is private, but you can hack around that by using reflection to give yourself access to everything you need. Here's what the code looks like:
private Point getRawCoords(MotionEvent event, int pointerIndex) {
    try {
        Method getRawAxisValueMethod = MotionEvent.class.getDeclaredMethod(
                "nativeGetRawAxisValue", long.class, int.class, int.class, int.class);
        Field nativePtrField = MotionEvent.class.getDeclaredField("mNativePtr");
        Field historyCurrentField = MotionEvent.class.getDeclaredField("HISTORY_CURRENT");
        getRawAxisValueMethod.setAccessible(true);
        nativePtrField.setAccessible(true);
        historyCurrentField.setAccessible(true);
        float x = (float) getRawAxisValueMethod.invoke(null, nativePtrField.get(event),
                MotionEvent.AXIS_X, pointerIndex, historyCurrentField.get(null));
        float y = (float) getRawAxisValueMethod.invoke(null, nativePtrField.get(event),
                MotionEvent.AXIS_Y, pointerIndex, historyCurrentField.get(null));
        return new Point((int) x, (int) y);
    } catch (NoSuchMethodException | IllegalAccessException | InvocationTargetException |
            NoSuchFieldException e) {
        throw new RuntimeException(e);
    }
}
Of course, the MotionEvent internals aren't documented, so this approach might crash on past or future versions of the SDK, but it seems to be working for me.
Edit: It looks like the type of mNativePtr and the nativePtr param changed from int to long in API level 20, so if you're targeting API level 19 or earlier, the above code will crash because getDeclaredMethod won't find anything. To fix this in my code, I just fetched the method by name instead of full type signature, which happens to work in this case. There isn't a way to directly look up methods with a given name, so I looped through the declared methods at static init time and saved the matching one to a static field. Here's the code:
private static final Method NATIVE_GET_RAW_AXIS_VALUE = getNativeGetRawAxisValue();

private static Method getNativeGetRawAxisValue() {
    for (Method method : MotionEvent.class.getDeclaredMethods()) {
        if (method.getName().equals("nativeGetRawAxisValue")) {
            method.setAccessible(true);
            return method;
        }
    }
    throw new RuntimeException("nativeGetRawAxisValue method not found.");
}
Then I used NATIVE_GET_RAW_AXIS_VALUE in place of the getRawAxisValueMethod in the above code.
There is no API to get a specific pointer's rawX and rawY.
But you can calculate similar values using the View's position relative to its parent and its rotation.
If the parent view occupies the entire touch area you are trying to handle, using a Matrix will help you solve your problem:
private float[] calcRawCoords(MotionEvent event, int pointerIndex) {
    Matrix screenMatrix = new Matrix();
    screenMatrix.postRotate(getRotation(), mPivotX, mPivotY);
    screenMatrix.postTranslate(getLeft(), getTop());
    float[] viewToScreenCoords = { event.getX(pointerIndex), event.getY(pointerIndex) };
    screenMatrix.mapPoints(viewToScreenCoords);
    return viewToScreenCoords;
}
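A hypothetical usage from the same view's touch handling:
float[] raw = calcRawCoords(event, event.getActionIndex());
float rawX = raw[0];
float rawY = raw[1];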
