I am trying to find world coordinates from screen coordinates by intersecting a pick ray with a Plane, where mCamera is a PerspectiveCamera.
public Vector3 getWorldCoordinates(float x, float y) {
    // Use an imaginary plane at z=0 to intersect the pick ray
    Plane plane = new Plane();
    plane.set(0, 0, 1, 0);
    Ray ray = mCamera.getPickRay(x, y);
    Vector3 pos = new Vector3();
    Intersector.intersectRayPlane(ray, plane, pos);
    return pos;
}
How can I modify this routine to rotate the Plane x degrees on the X axis before finding the coordinates?
Is there a simple built-in libGDX routine to rotate the plane, like Camera.rotate(Vector3.X, degrees) does for the Camera?
Set a Vector3 to the plane normal you want and use that to set the Plane. So in your case:
vector3.set(0, 0, 1);
vector3.rotate(Vector3.X, degrees); // rotate the normal about the X axis by the desired degrees
plane.set(vector3.x, vector3.y, vector3.z, 0);
If you are doing this on every frame, you might want to consider instantiating your vector and plane one time in the class constructor and reusing them so you don't occasionally trigger the GC, which can cause stutters.
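Putting that together, a minimal sketch of the modified routine (assuming a tiltDegrees parameter for the X-axis rotation, and reusing the Plane and Vector3 instances as suggested above):

private final Plane plane = new Plane();
private final Vector3 planeNormal = new Vector3();
private final Vector3 intersection = new Vector3();

public Vector3 getWorldCoordinates(float x, float y, float tiltDegrees) {
    // Rotate the z=0 plane's normal (0,0,1) about the X axis before building the plane
    planeNormal.set(0, 0, 1).rotate(Vector3.X, tiltDegrees);
    plane.set(planeNormal.x, planeNormal.y, planeNormal.z, 0);
    Ray ray = mCamera.getPickRay(x, y);
    // intersectRayPlane returns false if the pick ray never hits the plane
    if (!Intersector.intersectRayPlane(ray, plane, intersection)) {
        return null;
    }
    return intersection;
}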
I am developing a libGDX game but I have got stuck on one part, which I will explain briefly:
I have 9 balls arranged in a 3×3 grid and I need to detect which ball I am touching, as shown in the image below:
(image: the 3×3 grid of numbered balls)
and I typed this code:
for (int i : listBalls) {
    touchPoint = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
    rectangle = new Rectangle(sprite[i].getX(), sprite[i].getY(), spriteSize, spriteSize);
    if (rectangle.contains(touchPoint.x, touchPoint.y)) {
        Gdx.app.log("Test", "Touched dragged " + String.valueOf(i));
        pointer = i;
    }
}
When I touch any ball in the top row, it detects the opposite ball in the bottom row. For example, in the image above, if I touch ball no. 2 at the top it reports ball no. 8, and the same happens when touching any of the bottom balls.
When I touch a ball in the middle row, it gives the right one.
I hope I have explained my issue clearly. Please help.
As explained here: LibGDX input y-axis flipped, the coordinate system of the input is inverted on the y-axis; try subtracting Gdx.input.getY() from the screen height of your device or camera:
int y = screenHeight - Gdx.input.getY();
Keep in mind that using a Camera and unprojecting the input coordinates is the recommended way to detect input in LibGDX. Example:
@Override
public boolean touchDown(int screenX, int screenY, int pointer, int button) {
    Vector3 unprojected = camera.unproject(new Vector3(screenX, screenY, 0));
    return true;
}
You don't even have to invert the y-coordinate; unproject() does this automatically for you. To use the correct x and y coordinates you can then do:
float x = unprojected.x;
float y = unprojected.y;
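Applied to the ball-detection loop from the question, a rough sketch (assuming an OrthographicCamera named camera, and reusing a single Vector3 instead of allocating one per ball) could look like this:

Vector3 touchPoint = new Vector3();

for (int i : listBalls) {
    // unproject converts screen coordinates (y-down) into world coordinates (y-up)
    touchPoint.set(Gdx.input.getX(), Gdx.input.getY(), 0);
    camera.unproject(touchPoint);
    Rectangle rectangle = new Rectangle(sprite[i].getX(), sprite[i].getY(), spriteSize, spriteSize);
    if (rectangle.contains(touchPoint.x, touchPoint.y)) {
        Gdx.app.log("Test", "Touched dragged " + i);
        pointer = i;
    }
}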
I have a server function that detects an ArUco marker in an image and estimates its pose.
Using the function estimatePoseSingleMarkers I obtain the rotation and translation vectors.
I need to use these values in an Android app with ARCore to create a Pose.
The documentation says that Pose needs two float array (rotation and translation): https://developers.google.com/ar/reference/java/arcore/reference/com/google/ar/core/Pose.
float[] newT = new float[] { t[0], t[1], t[2] };
Quaternion q = Quaternion.axisAngle(new Vector3(r[0], r[1], r[2]), 90);
float[] newR = new float[]{ q.x, q.y, q.z, q.w };
Pose pose = new Pose(newT, newR);
The position of the 3D object placed in this pose is totally random.
What am I doing wrong?
This is a snapshot of the server image after estimating the pose and drawing the axes. The image I receive is rotated by 90°; I am not sure whether that is related to anything.
cv::aruco::estimatePoseSingleMarkers (link) returns the rotation vector in Rodrigues format. Following the doc:
w = norm( r ) // angle of rotation in radians
r = r/w // unit axis of rotation
thus
float w = (float) Math.sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
// handle w == 0.0 separately (no rotation)
// Get a new Quaternion using an axis/angle (degrees) to define the rotation
Quaternion q = Quaternion.axisAngle(new Vector3(r[0]/w, r[1]/w, r[2]/w), (float) Math.toDegrees(w));
This should work, except for the 90° rotation mentioned in the question, provided the lens parameters fed to estimatePoseSingleMarkers are correct or at least reasonably accurate.
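Putting both steps together, a hedged sketch of a conversion helper (assuming Sceneform-style Quaternion/Vector3 classes as used in the question, and that r and t are the float arrays received from the server):

// Converts a Rodrigues rotation vector r and a translation vector t into an ARCore Pose
Pose rodriguesToPose(float[] r, float[] t) {
    float w = (float) Math.sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
    float[] rotation;
    if (w == 0f) {
        // No rotation: use the identity quaternion
        rotation = new float[] { 0f, 0f, 0f, 1f };
    } else {
        Quaternion q = Quaternion.axisAngle(
                new Vector3(r[0] / w, r[1] / w, r[2] / w),
                (float) Math.toDegrees(w));
        rotation = new float[] { q.x, q.y, q.z, q.w };
    }
    float[] translation = new float[] { t[0], t[1], t[2] };
    return new Pose(translation, rotation);
}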
I am currently working on a Worms-like game. I generate random levels, each of which holds an array of points giving a y value for each x. I also have two arrays of x values where I place trees and huts. I use an OrthographicCamera and its translate method to move the camera when the user touches the screen.

In order to support big levels, I decided to render only the part of the map that is currently visible. For that I have a BackgroundActor which reads the current position of the camera and uses it to fetch the corresponding part of my map (the surface array) from the level class. I render that information with a ShapeRenderer, and then render the props (trees and huts) with the Batch.

The problem is that the props become unaligned with the surface when I drag the screen. For example: when I move the map to the left, the surface moves to the left faster than the props. I already tried setting the projection matrix on both the SpriteBatch and the ShapeRenderer, but it did not help.
Code:
@Override
public void draw(Batch batch, float parentAlpha) {
    setBounds(); // gets the index for my map array from the camera
    ShapeRenderer shapeRenderer = AndroidLauncher.gameScreen.shapeRenderer;
    batch.end(); // needed, because otherwise the props do not render
    shapeRenderer.begin(ShapeRenderer.ShapeType.Filled);
    for (int x = 0; x < ScreenValues.screenWidth; x++) {
        int y = level.getYForX(x + leftBound);
        shapeRenderer.setColor(level.getUndergroundColor());
        shapeRenderer.rectLine(x, 0, x, y - level.getSurfaceThickness(), 1);
        shapeRenderer.setColor(level.getSurfaceColor());
        shapeRenderer.rectLine(x, y - level.getSurfaceThickness(), x, y, 1);
    }
    shapeRenderer.end();
    batch.begin();
    for (int x = 0; x < ScreenValues.screenWidth; x++) {
        int y = level.getYForX(x + leftBound);
        if (level.getPropForX(x) != Level.PROP_NONE) {
            if (level.getPropForX(x) == Level.PROP_TREE) y -= 10;
            Image imageToDraw = getImageFromPropId(level.getPropForX(x)); // Images are set up in the create method of my listener
            imageToDraw.setPosition(x, y);
            imageToDraw.draw(batch, parentAlpha);
        }
    }
}
I fixed the issue myself. In the props loop I needed to run x from leftBound to ScreenValues.screenWidth + leftBound. This still gives me texture popping when a prop reaches the left side of the screen, because the prop's x position goes off screen, but that will be a small fix.
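Based on that fix, the props loop would look roughly like this (a sketch; it assumes the batch draws in the same coordinate space that leftBound is expressed in, i.e. the SpriteBatch projection matrix comes from the camera):

for (int x = leftBound; x < ScreenValues.screenWidth + leftBound; x++) {
    // x is now a map/world coordinate, so no leftBound offset is needed for the lookups
    int y = level.getYForX(x);
    if (level.getPropForX(x) != Level.PROP_NONE) {
        if (level.getPropForX(x) == Level.PROP_TREE) y -= 10;
        Image imageToDraw = getImageFromPropId(level.getPropForX(x));
        imageToDraw.setPosition(x, y);
        imageToDraw.draw(batch, parentAlpha);
    }
}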
Basically, I have an application for Android 1.5 with a GLSurfaceView class that shows a simple square polygon on the screen. I want to learn how to add a new piece of functionality: moving the square by touching it with a finger. I mean that when the user touches the square and moves the finger, the square should move with the finger until the finger releases the screen.
I'm trying to use gluUnProject to obtain the OpenGL coordinates that match the exact position of the finger; then I will call glTranslatef on the polygon, and the polygon will be moved to that position (I hope).
The problem is that something is going wrong with gluUnProject: it is giving me this exception: java.lang.IllegalArgumentException: length - offset < n on the call to gluUnProject.
First of all, I'm passing 0 as the window Z coordinate because I don't know what I have to pass there, since the window doesn't have a Z coordinate, only X and Y. I tested passing 1 as the Z coordinate and I get the same exception.
float[] outputCoords = getOpenGLCoords(event.getX(), event.getY(), 0);
x = outputCoords[0];
y = outputCoords[1];
z = outputCoords[2];
.
.
.
public float[] getOpenGLCoords(float xWin, float yWin, float zWin)
{
    int screenW = SectionManager.instance.getDisplayWidth();
    int screenH = SectionManager.instance.getDisplayHeight();
    // Code for translating from screen coordinates to OpenGL coordinates
    mg.getCurrentProjection(MyGl);
    mg.getCurrentModelView(MyGl);
    float[] modelMatrix = mg.mModelView;
    float[] projMatrix = mg.mProjection;
    int[] mView = new int[4];
    mView[0] = 0;
    mView[1] = 0;
    mView[2] = screenW; // width
    mView[3] = screenH; // height
    float[] outputCoords = new float[3];
    GLU.gluUnProject(xWin, yWin, zWin, modelMatrix, 0, projMatrix, 0, mView, 0, outputCoords, 0);
    return outputCoords;
}
I answered the same question here; basically, gluUnProject expects your outputCoords array to have size 4 instead of 3, which is what the length - offset < n exception is complaining about. Note that these are homogeneous coordinates, so you still have to divide the first three components by the fourth one if you're doing a perspective projection.
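A minimal sketch of the corrected call (same variables as in the question; only the output array size and the final divide change):

float[] outputCoords = new float[4]; // gluUnProject writes x, y, z and w
GLU.gluUnProject(xWin, yWin, zWin, modelMatrix, 0, projMatrix, 0, mView, 0, outputCoords, 0);

// Homogeneous coordinates: divide by w to get the actual 3D point
float objX = outputCoords[0] / outputCoords[3];
float objY = outputCoords[1] / outputCoords[3];
float objZ = outputCoords[2] / outputCoords[3];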
I have a map application using an in-house map engine on Android. I'm working on a rotating map view that rotates the map based on the phone's orientation using the sensor service. Everything works fine except dragging the map when the phone is pointing somewhere other than North. For example, if the phone is facing West, dragging the map up still moves the map to the South rather than East as would be expected. I'm assuming translating the canvas is one possible solution, but I'm honestly not sure of the correct way to do this.
Here is the code I'm using to rotate the Canvas:
public void dispatchDraw(Canvas canvas)
{
    canvas.save(Canvas.MATRIX_SAVE_FLAG);
    // mHeading is the orientation from the Sensor
    canvas.rotate(-mHeading, origin[X], origin[Y]);
    mCanvas.delegate = canvas;
    super.dispatchDraw(mCanvas);
    canvas.restore();
}
What is the best approach to make dragging the map consistent regardless of the phone's orientation? The SensorManager has a remapCoordinateSystem() method, but it's not clear that this will resolve my problem.
You can trivially get the delta x and delta y between two consecutive move events. To correct these values for your canvas rotation you can use some simple trigonometry:
void correctPointForRotate(PointF delta, float rotation) {
    // Get the angle of movement (0=up, 90=right, 180=down, 270=left)
    double a = Math.atan2(-delta.x, delta.y);
    a = Math.toDegrees(a); // a now ranges -180 to +180
    a += 180;
    // Adjust the angle by the amount the map is rotated around the center point
    a += rotation;
    a = Math.toRadians(a);
    // Calculate new corrected panning deltas
    double hyp = Math.sqrt(delta.x * delta.x + delta.y * delta.y);
    delta.x = (float) (hyp * Math.sin(a));
    delta.y = -(float) (hyp * Math.cos(a));
}
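For example, in a touch handler the correction could be applied roughly like this (a sketch; mHeading is the same heading used to rotate the canvas, mLastX/mLastY are hypothetical fields holding the previous touch position, and panMap() stands in for however your map engine pans):

@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_MOVE) {
        PointF delta = new PointF(event.getX() - mLastX, event.getY() - mLastY);
        // Rotate the raw screen delta by the current map heading so the pan follows the finger
        correctPointForRotate(delta, mHeading);
        panMap(delta.x, delta.y); // placeholder: apply the corrected delta to the map position
    }
    mLastX = event.getX();
    mLastY = event.getY();
    return true;
}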