How to get coordinates of touch screen points in OpenGL? - android

I am drawing a map using OpenGL. The map is drawn after reading XML files and filling the corresponding buffers, and it contains streets, highways and a boundary. What I want is that whenever I touch the map, the color of the touched layer changes.
The issue I am facing is that whenever I touch the screen I only get the screen pixel of the point I touched. I want to convert this point into OpenGL coordinates so that I can match it against the drawn map and highlight the selected feature.
How do I convert this point into OpenGL coordinates?

You need to unproject the screen point into OpenGL world space:
vec3 UnProjectPoint( const vec3& Point, const mat4& Projection, const mat4& ModelView )
{
    // Point.x and Point.y are the touch coordinates normalized to [0..1]
    vec4 R( Point, 1.0f );

    // Convert to normalized device coordinates [-1..1] and flip Y
    // (screen Y grows downwards, NDC Y grows upwards)
    R.x = 2.0f * R.x - 1.0f;
    R.y = 2.0f * R.y - 1.0f;
    R.y = -R.y;
    R.z = 1.0f;   // pick a point on the far clip plane

    // Undo the projection and the model-view transforms
    R = Projection.GetInversed() * R;
    R = ModelView.GetInversed() * R;

    // For a perspective projection, divide by R.w before dropping it
    return R.ToVec3();
}

You can transform screen coordinates to OpenGL coordinates using a transform matrix and your camera position.
See: https://stackoverflow.com/a/11716990/1369222

Better to override onTouchEvent(MotionEvent e) in the GLSurfaceView class and use the code below in the Renderer class, in the onSurfaceChanged(GL10 gl, int width, int height) method:
GLU.gluOrtho2D(gl, 0, width, 0, height);
This maps the screen coordinates onto the OpenGL SurfaceView, so you can easily place points on the screen. Note that this only works for a 2D view.
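A minimal sketch of that approach (MyRenderer and its highlightAt() method are hypothetical placeholders; the renderer is assumed to apply the gluOrtho2D projection above). Since screen coordinates grow downwards while the ortho projection above grows upwards, only the Y axis needs to be flipped:
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;

public class MapSurfaceView extends GLSurfaceView {
    private final MyRenderer renderer;   // hypothetical renderer using the gluOrtho2D setup

    public MapSurfaceView(Context context, MyRenderer renderer) {
        super(context);
        this.renderer = renderer;
        setRenderer(renderer);
    }

    @Override
    public boolean onTouchEvent(MotionEvent e) {
        if (e.getAction() == MotionEvent.ACTION_DOWN) {
            final float glX = e.getX();                // X already matches the ortho range 0..width
            final float glY = getHeight() - e.getY();  // flip Y: screen is top-down, GL is bottom-up
            queueEvent(new Runnable() {                // touch GL state only on the GL thread
                @Override
                public void run() {
                    renderer.highlightAt(glX, glY);    // hypothetical hit-test / highlight method
                }
            });
            return true;
        }
        return super.onTouchEvent(e);
    }
}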

Related

ARCore Pose and Aruco estimatePoseSingleMarkers

I have a server function that detects an ArUco marker in an image and estimates its pose.
Using the function estimatePoseSingleMarkers I obtain the rotation and translation vectors.
I need to use these values in an Android app with ARCore to create a Pose.
The documentation says that Pose needs two float arrays (rotation and translation): https://developers.google.com/ar/reference/java/arcore/reference/com/google/ar/core/Pose.
float[] newT = new float[] { t[0], t[1], t[2] };
Quaternion q = Quaternion.axisAngle(new Vector3(r[0], r[1], r[2]), 90);
float[] newR = new float[]{ q.x, q.y, q.z, q.w };
Pose pose = new Pose(newT, newR);
The position of the 3D object placed at this pose is completely random.
What am I doing wrong?
This is a snapshot from the server image after estimating the pose and drawing the axes. The image I receive is rotated by 90°; I am not sure whether that is related.
cv::aruco::estimatePoseSingleMarkers (link) returns the rotation vector in Rodrigues format. Following the docs:
w = norm( r )   // angle of rotation in radians
r = r / w       // unit axis of rotation
thus
float w = sqrt( r[0]*r[0] + r[1]*r[1] + r[2]*r[2] );
// handle w==0.0 separately
// Get a new Quaternion using an axis/angle (degrees) to define the rotation
Quaternion q = Quaternion.axisAngle(new Vector3(r[0]/w, r[1]/w, r[2]/w), w * 180.0/3.14159 );
This should work, except for the 90° rotation mentioned above, and provided the lens parameters fed to estimatePoseSingleMarkers are correct, or at least reasonably accurate.
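If you prefer not to depend on Sceneform's Quaternion helper, a minimal sketch of the same conversion in plain Java could look like the following (r and t are assumed to be the float[3] arrays received from the server; note also that OpenCV's camera axes, Y down and Z forward, differ from ARCore's, Y up and Z towards the viewer, which may be related to the extra 90° rotation):
// Convert an OpenCV Rodrigues rotation vector into an (x, y, z, w) quaternion.
static float[] rodriguesToQuaternion(float[] r) {
    float angle = (float) Math.sqrt(r[0] * r[0] + r[1] * r[1] + r[2] * r[2]);
    if (angle < 1e-6f) {
        return new float[] { 0f, 0f, 0f, 1f };         // no rotation: identity quaternion
    }
    float s = (float) Math.sin(angle / 2.0) / angle;   // sin(angle/2) applied to the unit axis r/angle
    float w = (float) Math.cos(angle / 2.0);
    return new float[] { r[0] * s, r[1] * s, r[2] * s, w };
}

// Usage:
// Pose pose = new Pose(new float[] { t[0], t[1], t[2] }, rodriguesToQuaternion(r));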

How to add a static map marker in the HERE maps Android SDK?

I would like to create markers that don't move when the map rotates, exactly like polylines do. My goal is to give the marker a single orientation that never changes, even when gestures occur.
I have tried every marker type but I can't get the desired effect.
Any kind of help would be appreciated, since I have been stuck on this for hours.
You can draw a simple rectangle with both its front and back faces textured, as follows:
// Four corner vertices shared by the two triangles; delta is half the marker's
// width/height in the local model's units
FloatBuffer buff = FloatBuffer.allocate(12);
buff.put(0 - delta);
buff.put(0 - delta);
buff.put(1.f);
buff.put(0 + delta);
buff.put(0 - delta);
buff.put(1.f);
buff.put(0 - delta);
buff.put(0 + delta);
buff.put(1.f);
buff.put(0 + delta);
buff.put(0 + delta);
buff.put(1.f);
// Two triangles to generate the rectangle. Both front and back face
IntBuffer vertIndicieBuffer = IntBuffer.allocate(12);
vertIndicieBuffer.put(0);
vertIndicieBuffer.put(2);
vertIndicieBuffer.put(1);
vertIndicieBuffer.put(2);
vertIndicieBuffer.put(3);
vertIndicieBuffer.put(1);
vertIndicieBuffer.put(0);
vertIndicieBuffer.put(1);
vertIndicieBuffer.put(2);
vertIndicieBuffer.put(1);
vertIndicieBuffer.put(3);
vertIndicieBuffer.put(2);
// Texture coordinates
FloatBuffer textCoordBuffer = FloatBuffer.allocate(8);
textCoordBuffer.put(0.f);
textCoordBuffer.put(0.f);
textCoordBuffer.put(1.f);
textCoordBuffer.put(0.f);
textCoordBuffer.put(0.f);
textCoordBuffer.put(1.f);
textCoordBuffer.put(1.f);
textCoordBuffer.put(1.f);
// The LocalMesh itself.
LocalMesh mesh = new LocalMesh();
mesh.setVertices(buff);
mesh.setVertexIndices(vertIndicieBuffer);
mesh.setTextureCoordinates(textCoordBuffer);
MapLocalModel model = new MapLocalModel();
model.setMesh(mesh);
model.setDynamicScalingEnabled(true);
model.setAnchor(new GeoCoordinate(LATITUDE, LONGITUDE, 0.0));
Attach an image to it as the texture, and use OnMapRenderListener#onPreDraw() to change the pitch and yaw of the local model so that it follows the camera.
The MapMarker object may be what you are looking for: it is anchored to the position you give it and is always drawn in 2D screen space, regardless of the tilt and rotation applied to the map.
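A minimal sketch of using MapMarker for this, assuming the premium-edition MapMarker(GeoCoordinate, Image) constructor, a map object attached to your map fragment, and an Image icon you have already loaded:
import com.here.android.mpa.common.GeoCoordinate;
import com.here.android.mpa.common.Image;
import com.here.android.mpa.mapping.MapMarker;

// `map` and `icon` are assumed to come from the surrounding setup.
MapMarker marker = new MapMarker(new GeoCoordinate(LATITUDE, LONGITUDE), icon);
map.addMapObject(marker);

// The marker keeps its screen-space orientation no matter how the map is rotated
// or tilted; remove it later with map.removeMapObject(marker).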
Hope this helps.

Converting Camera Coordinates to Custom View Coordinates

I am trying to make a simple face detection app consisting of a SurfaceView (essentially a camera preview) and a custom View (for drawing purposes) stacked on top. The two views are essentially the same size, stacked on one another in a RelativeLayout. When a person's face is detected, I want to draw a white rectangle on the custom View around their face.
The Camera.Face.rect object returns the face bound coordinates using the coordinate system explained here, while the custom View uses the coordinate system described in the answer to this question. Some sort of conversion is needed before I can use it to draw on the canvas.
Therefore, I wrote an additional method, ScaleFacetoView(), in my custom view class (below), and I redraw the custom view every time a face is detected by overriding the onFaceDetection() method. The result is that the white box appears correctly when a face is in the center, but it does not correctly track my face when it moves to other parts of the screen.
Namely, if I move my face:
Up - the box goes left
Down - the box goes right
Right - the box goes upwards
Left - the box goes down
I seem to have incorrectly mapped the values when scaling the coordinates. The Android docs provide a matrix-based way of converting them, but it is rather confusing and I have no idea what it is doing. Can anyone provide some code showing the correct way to convert Camera.Face coordinates to View coordinates?
Here's the code for my ScaleFacetoView() method.
public void ScaleFacetoView(Face[] data, int width, int height, TextView a){
    //Extract data from the face object and accounts for the 1000 value offset
    mLeft = data[0].rect.left + 1000;
    mRight = data[0].rect.right + 1000;
    mTop = data[0].rect.top + 1000;
    mBottom = data[0].rect.bottom + 1000;

    //Compute the scale factors
    float xScaleFactor = 1;
    float yScaleFactor = 1;
    if (height > width){
        xScaleFactor = (float) width/2000.0f;
        yScaleFactor = (float) height/2000.0f;
    }
    else if (height < width){
        xScaleFactor = (float) height/2000.0f;
        yScaleFactor = (float) width/2000.0f;
    }

    //Scale the face parameters
    mLeft = mLeft * xScaleFactor;     //X-coordinate
    mRight = mRight * xScaleFactor;   //X-coordinate
    mTop = mTop * yScaleFactor;       //Y-coordinate
    mBottom = mBottom * yScaleFactor; //Y-coordinate
}
As mentioned above, I call the custom view like so:
@Override
public void onFaceDetection(Face[] arg0, Camera arg1) {
    if(arg0.length == 1){
        //Get aspect ratio of the screen
        View parent = (View) mRectangleView.getParent();
        int width = parent.getWidth();
        int height = parent.getHeight();

        //Modify xy values in the view object
        mRectangleView.ScaleFacetoView(arg0, width, height);
        mRectangleView.setInvalidate();
        //Toast.makeText( cc ,"Redrew the face.", Toast.LENGTH_SHORT).show();
        mRectangleView.setVisibility(View.VISIBLE);
        //rest of code
Using the explanation Kenny gave, I managed to do the following.
This example works using the front facing camera.
RectF rectF = new RectF(face.rect);
Matrix matrix = new Matrix();
matrix.setScale(1, 1);
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
matrix.mapRect(rectF);
The rectangle returned by the matrix has all the right coordinates to draw onto the canvas.
If you are using the back camera, I think it is just a matter of changing the scale to:
matrix.setScale(-1, 1);
But I haven't tried that.
The Camera.Face class returns the face bound coordinates relative to the image frame the phone would save to internal storage, rather than the image displayed in the camera preview. In my case the saved images were oriented differently from the preview, resulting in an incorrect mapping. I had to manually account for the discrepancy by taking the coordinates, rotating them counter-clockwise by 90 degrees and flipping them on the y-axis, before scaling them to the canvas used for the custom view.
EDIT:
It also appears that you cannot change the way the face bound coordinates are returned by modifying the camera capture orientation with the Camera.Parameters.setRotation(int) method.
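For reference, a sketch of the matrix-based conversion the question alludes to (essentially what the Camera.Face documentation describes), which folds the front-camera mirroring, the display rotation and the -1000..1000 driver coordinate range into a single Matrix; face, view, cameraId and displayOrientation are assumed to come from the surrounding camera/preview setup:
import android.graphics.Matrix;
import android.graphics.RectF;
import android.hardware.Camera;

// Map a Camera.Face rect (driver coords: -1000..1000 on both axes) into view coordinates.
RectF faceRect = new RectF(face.rect);
Matrix matrix = new Matrix();
boolean mirror = (cameraId == Camera.CameraInfo.CAMERA_FACING_FRONT);
matrix.setScale(mirror ? -1 : 1, 1);       // mirror horizontally for the front camera
matrix.postRotate(displayOrientation);     // same value passed to setDisplayOrientation()
matrix.postScale(view.getWidth() / 2000f, view.getHeight() / 2000f);
matrix.postTranslate(view.getWidth() / 2f, view.getHeight() / 2f);
matrix.mapRect(faceRect);                  // faceRect is now in view coordinates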

Imprecise Box2d coordinates using LibGDX

I am using LibGDX and Box2d to build my first Android game. Yay!
But I am having some serious problems with Box2d.
I have a simple stage with a rectangular Box2d body at the bottom representing the ground, and two other rectangular Box2d bodies both at the left and right representing the walls.
A Screenshot
Another Screenshot
I also have a box. This box can be touched and it moves using applyLinearImpulse, like if it was kicked. It is a DynamicBody.
What happens is that in the draw() code of my Box object, the Box2d body gives me a wrong value for the X axis, while the value for the Y axis is fine.
Those blue "dots" on the screenshots are small textures that I draw at the edges of the box that body.getPosition() gives me. Note how in one screenshot the dots are aligned with the actual DebugRenderer rectangle, and in the other they are not.
This is what is happening: when the box moves, the alignment is lost during the movement.
Collisions between the box, the ground and the walls occur precisely within the area that the DebugRenderer renders, but body.getPosition() and fixture.testPoint() consider the area inside the blue dots.
So, somehow, Box2d is "maintaining" two different areas for the same body.
I thought this could be some kind of loss of precision in my conversion between pixels and meters (I am scaling by a factor of 100), but the Y axis uses the same conversion and is fine.
So, I thought that I might be missing something.
Edit 1
I am converting from Box coordinates to World coordinates. If you see the blue debug sprites in the screenshots, they form the box almost perfectly.
public static final float WORLD_TO_BOX = 0.01f;
public static final float BOX_TO_WORLD = 100f;
The box render code:
public void draw(Batch batch, float alpha) {
    x = (body.getPosition().x - width/2) * TheBox.BOX_TO_WORLD;
    y = (body.getPosition().y - height/2) * TheBox.BOX_TO_WORLD;
    float xend = (body.getPosition().x + width/2) * TheBox.BOX_TO_WORLD;
    float yend = (body.getPosition().y + height/2) * TheBox.BOX_TO_WORLD;
    batch.draw(texture, x, y);
    batch.draw(texture, x, yend);
    batch.draw(texture, xend, yend);
    batch.draw(texture, xend, y);
}
Edit 2
I am starting to suspect the camera. I have the DebugRenderer and a scene2d Stage; here is the code:
My screen resolution (Nexus 5, and it's portrait):
public static final int SCREEN_WIDTH = 1080;
public static final int SCREEN_HEIGHT = 1920;
At the startup:
// ...
stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();
// ...
Now, the render() code:
public void render() {
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    camera.update();
    world.step(1/45f, 6, 6);
    world.clearForces();
    stage.act(Gdx.graphics.getDeltaTime());
    stage.draw();
    debugRenderer.render(world, debugMatrix);
}
Looks like the answer to this one was fairly simple:
stage.setCamera(camera);
I was not setting the OrthographicCamera on the stage, so the stage was using some kind of default camera that wasn't aligned with the rest of my setup.
It had nothing to do with Box2d in the end. Box2d was returning healthy values, but those values corresponded to the wrong places on my screen because of the wrong stage camera.
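Put together, a sketch of the corrected startup code (same fields and constants as in the question; the only change is wiring the camera into the stage):
// Share one OrthographicCamera between the stage and the Box2d debug renderer
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);

stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
stage.setCamera(camera);   // without this, the stage falls back to its own default camera

// Scale the debug matrix so Box2d meters line up with screen pixels
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();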

Rotating the Canvas impacts TouchEvents

I have a map application using an in-house map engine on Android. I'm working on a rotating Map view that rotates the map based on the phone's orientation using the Sensor Service. All works fine with the exception of dragging the map when the phone is pointing other than North. For example, if the phone is facing West, dragging the Map up still moves the Map to the South versus East as would be expected. I'm assuming translating the canvas is one possible solution but I'm honestly not sure the correct way to do this.
Here is the code I'm using to rotate the Canvas:
public void dispatchDraw(Canvas canvas)
{
    canvas.save(Canvas.MATRIX_SAVE_FLAG);
    // mHeading is the orientation from the Sensor
    canvas.rotate(-mHeading, origin[X], origin[Y]);
    mCanvas.delegate = canvas;
    super.dispatchDraw(mCanvas);
    canvas.restore();
}
What is the best approach to make dragging the map consistent regardless of the phone's orientation? SensorManager has a remapCoordinateSystem() method, but it's not clear that it would resolve my problem.
You can trivially get the delta x and delta y between two consecutive move events. To correct these values for your canvas rotation you can use some simple trigonometry:
void correctPointForRotate(PointF delta, float rotation) {

    // Get the angle of movement (0=up, 90=right, 180=down, 270=left)
    double a = Math.atan2(-delta.x, delta.y);
    a = Math.toDegrees(a); // a now ranges -180 to +180
    a += 180;

    // Adjust the angle by the amount the map is rotated around the center point
    a += rotation;
    a = Math.toRadians(a);

    // Calculate new corrected panning deltas
    double hyp = Math.sqrt(delta.x * delta.x + delta.y * delta.y);
    delta.x = (float) (hyp * Math.sin(a));
    delta.y = -(float) (hyp * Math.cos(a));
}
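A hypothetical usage sketch of that helper inside the map view's touch handling (mLastX, mLastY, mHeading and panMapBy() are assumed names, not part of the original code; PointF is android.graphics.PointF and MotionEvent is android.view.MotionEvent):
@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getAction()) {
        case MotionEvent.ACTION_DOWN:
            mLastX = event.getX();
            mLastY = event.getY();
            return true;
        case MotionEvent.ACTION_MOVE:
            // Raw drag delta in screen coordinates
            PointF delta = new PointF(event.getX() - mLastX, event.getY() - mLastY);
            mLastX = event.getX();
            mLastY = event.getY();
            // Undo the canvas rotation before panning the map
            correctPointForRotate(delta, mHeading);
            panMapBy(delta.x, delta.y);   // hypothetical pan method of the map engine
            return true;
    }
    return super.onTouchEvent(event);
}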

Categories

Resources