Right now, I am trying to create a simple app where a square follows your finger as you move it along the screen. I would like to do this using OpenGL ES. I am fairly new to it, and I am stuck on mapping the window coordinates reported by the touch event to OpenGL ES's Cartesian coordinate system. Right now, to get the x and y translation, I am using:
final float x = (e.getX() - getWidth() / 2f) / (getWidth() / 2f) * SOME_SCALING_FACTOR;
final float y = (getHeight() / 2f - e.getY()) / (getHeight() / 2f) * SOME_SCALING_FACTOR;
The logic behind this is that I think the Cartesian plane is centered on the screen, so I am trying to re-map my touch coordinates accordingly. However, my results are very inaccurate.
Is there another way I should be doing this?
I wouldn't use SOME_SCALING_FACTOR there. It's better to configure OpenGL for it.
You can use the code below to set up the camera, then do your drawing.
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
GLU.gluOrtho2D(gl, 0, WINDOWx, 0, WINDOWy);
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
The lower-left corner of the screen will be the origin, the upper-left corner will be (0, WINDOWy), and so on.
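With that projection, world units coincide with pixels, so the only remaining conversion is flipping the y axis: touch events report y from the top of the screen, while gluOrtho2D puts the origin at the bottom. A minimal sketch of the mapping (touchToWorld is a hypothetical helper; it assumes WINDOWx/WINDOWy equal the view's width and height):

```java
// Maps a touch event position into the world space set up by
// GLU.gluOrtho2D(gl, 0, windowW, 0, windowH).
static float[] touchToWorld(float touchX, float touchY,
                            float windowW, float windowH) {
    float worldX = touchX;            // x axes already agree
    float worldY = windowH - touchY;  // flip: touch y grows downward
    return new float[] { worldX, worldY };
}
```

For example, a touch at (10, 0) on a 480x800 view maps to world position (10, 800), the top edge of the GL viewport.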
I would like to convert 2d Screen Coordinates to 3d World Coordinates.
I put together a hack, but I would like a solid solution. If possible, please apply your equation/algorithm to the example below (if there is already a link that covers it, please use it to show the solution to the problem below).
My Environment is Java/C++ for Android and iOS using OpenGL 2.0
Problem to Solve
Where screen (0,0) is the top left and the bottom right is (screenWidth, screenHeight).
screenWidth=667; screenHeight=375;
//Camera's position,where camera is looking at, and camera direction
CamPos (0.0f,0.0f,11.0f); CamLookAt (0.0f, 0.0f, 10.5f); CamDir(0.0f,1.0f,0.0f);
//The Frustum
left =1.3f; right=1.3f; bottom=1.0f; top=1.0f; near=3.0f; far=1000.0f;
Obj
//object's position in 3d space x,y,z and scaling
objPos(1.0f, -.5f, 5.0f); objScale(.1f,.1f,0.0f);
The problem is how to convert the (600, 200) screen coordinates (with scaling of (.1f, .1f, 0.0f)) into world coordinates, to check whether they collide with the object at (1.0f, -.5f, 5.0f). I am doing simple box collision, so I just need the 2D screen coordinates converted to world space. I will put the hack I have below, but I am sure there is a better answer. Thank you in advance for any help. Please use the numbers above to show how your algorithm works.
Hack
// This is (0,0) in screen coordinates. Cam_Offset is hardcoded to the camera's position.
float convertedZeroY = (mCam->right) * Cam_Offset; // 2
float convertedZeroX = -(Cam_Offset + convertedZeroY);
convertedZeroY = convertedZeroY / 2;
convertedZeroX = convertedZeroX / 2;
// This is the length from top to bottom in game coordinates.
float normX = convertedZeroX * 2;
float normY = convertedZeroY * 2;
// The screen coordinates converted to 3D world coordinates.
mInput->mUserInputConverted->mPos.x = -((mInput->x / mCam->mScreenW) * normX) + convertedZeroX;
mInput->mUserInputConverted->mPos.y = -((mInput->y / mCam->mScreenH) * normY) + convertedZeroY;
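For comparison, the standard approach is to invert the projection: convert the screen point to normalized device coordinates, then scale by the frustum's half-extents at the target depth. The sketch below assumes a symmetric frustum (i.e. left = -1.3, bottom = -1.0, which the listing above probably intends) and the camera at z = 11 looking down -z, so the object plane z = 5 lies at depth 11 - 5 = 6; screenToWorld is a hypothetical helper name:

```java
// Converts a screen point to world x/y on the plane at the given depth
// in front of the camera. Assumes a symmetric frustum whose half-width
// is `right` and half-height is `top` at the near plane, and a camera
// looking down the -z axis.
static float[] screenToWorld(float sx, float sy,
                             float screenW, float screenH,
                             float right, float top, float near,
                             float camX, float camY, float depth) {
    float ndcX = 2f * sx / screenW - 1f;  // -1 (left edge) .. +1 (right edge)
    float ndcY = 1f - 2f * sy / screenH;  // -1 (bottom edge) .. +1 (top edge)
    // The frustum's half-extents grow linearly with depth.
    float halfW = right * depth / near;
    float halfH = top * depth / near;
    return new float[] { camX + ndcX * halfW, camY + ndcY * halfH };
}
```

With the numbers above, screenToWorld(600, 200, 667, 375, 1.3f, 1.0f, 3.0f, 0, 0, 6) gives roughly (2.08, -0.13) in world space, which can then be tested against the object's box around (1.0, -0.5).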
On my game screen I want a swipe to be detected only if it is longer than 100px, because the user taps a lot on the game screen and taps tend to be detected as swipes, which changes the screen back to the title. How can I make a swipe register only if it is longer than 100px?
There are two ways to achieve this.
The first one is to save the starting point of the touch and measure the distance at the end of the touch event, just like Paul mentioned.
The second is to enlarge the tap square size if you use the GestureDetector of libgdx. It defaults to 40px, which means that if your finger moves more than 20px to any side, it is no longer a tap event but a pan event. I'd recommend using the GestureListener/Detector, as it gives you the basic mobile gestures out of the box rather than recoding them.
On a side note: determining the distance in pixels is error-prone because pixel density varies between mobile devices, especially on Android! 100px on one device may cover only half the physical distance it does on another. Take pixel density into consideration when doing this, or switch to relative measurements like one third of the screen size!
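One way to make the threshold density-independent is to express it in density-independent pixels (dp) and convert with the display's density factor (on Android that factor comes from getResources().getDisplayMetrics().density; the conversion itself is just a multiplication). A minimal sketch, with dpToPx as a hypothetical helper:

```java
// Converts a threshold given in density-independent pixels (dp) to raw
// pixels for a device with the given density factor.
// density is 1.0 on a 160 dpi baseline screen, 2.0 on a 320 dpi screen, etc.
static float dpToPx(float dp, float density) {
    return dp * density;
}
```

So a 100dp swipe threshold becomes 100px on a baseline screen but 200px on a high-density one, covering roughly the same physical distance on both.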
Save the position on touch down and touch up:
private Vector2 downPos = new Vector2(), upPos = new Vector2();
private Vector3 tmp = new Vector3();
public void touchDown(float x, float y /* , ... */) {
    tmp.set(x, y, 0);
    camera.unproject(tmp); // screen coordinates -> world coordinates
    downPos.set(tmp.x, tmp.y);
}

public void touchUp(float x, float y /* , ... */) {
    tmp.set(x, y, 0);
    camera.unproject(tmp);
    upPos.set(tmp.x, tmp.y);
    float distance = downPos.dst(upPos); // the distance between those vectors
    if (distance > 100) {
        // There was a swipe longer than 100 units (equal to pixels
        // if the camera viewport matches the screen size).
    }
}
If you don't want to do that only on touch up, put the same code in the touchDragged method.
I am building an Android application similar to an x-ray scanner (Play Store link), which moves images smoothly on screen as the device is moved left, right, up, and down.
I am using the accelerometer for this, but the problem is that the image does not move smoothly.
My code is below
int x = (int) (sensorEvent.values[0] * (screenW / 10));
int y = (int) (sensorEvent.values[1] * (screenH / 14));
and then in onDraw:
canvas.drawBitmap(bmp, x, y, mPaint);
This is not how you use accelerometer values. You should take the current value and ADD it to the current position instead of setting the position from the value directly. The more you tilt, the bigger the values you get, and hence the faster the image will appear to move.
You can then also apply some linear interpolation to the movement so that it appears smoother.
Here is a link to learn more about lerp (linear interpolation) in code: http://en.wikipedia.org/wiki/Linear_interpolation#Programming_language_support
I have an OpenGL ES 1.x Android 1.5 app that shows a square, with a perspective projection, at the center of the screen.
I need to move the camera (NOT THE SQUARE) when the user moves a finger on the screen. For example, if the user moves the finger to the right, the camera must move to the left, so that it looks as if the user is moving the square.
I need to do it without translating the square. The square must always stay at the OpenGL position (0, 0, -1).
I DON'T WANT to rotate the camera around the square; what I want is to move the camera from side to side. Code examples are welcome, my OpenGL skills are very low, and I can't find good examples for this on Google.
I know that I must use this function: public static void gluLookAt(GL10 gl, float eyeX, float eyeY, float eyeZ, float centerX, float centerY, float centerZ, float upX, float upY, float upZ), but I don't understand where and how to get the values for the parameters. Because of this, I would appreciate code examples.
For example: I have a cube at the position (0, 0, -1), and I want my camera to point at the cube. I tried this: GLU.gluLookAt(gl, 0, 0, 2, 0, 0, 0, 0, 0, 1);, but the cube is not on the screen. I just don't understand what I'm doing wrong.
First of all, you have to understand that in OpenGL there are not distinct model and view matrices. There is only a combined modelview matrix. So OpenGL doesn't care (or even know) if you translate the camera (what is a camera anyway?) or the object, so your requirement not to move the square is entirely artificial. Though it may be that this is a valid requirement and the distinction between model and view transformation often is very practical, just don't think that translating the square is any different from translating the camera from OpenGL's point of view.
Likewise, you don't necessarily need to use gluLookAt. Like glOrtho, glFrustum or gluPerspective, this function just modifies the currently selected matrix (usually the modelview matrix), no different from the glTranslate, glRotate or glScale functions. The gluLookAt function comes in handy when you want to position a classical camera, but its functionality can also be achieved by calls to glTranslate and glRotate without problems, and sometimes (depending on your requirements) this is even easier than artificially mapping your view parameters to gluLookAt parameters.
Now to your problem, which is indeed solvable quite easily without gluLookAt: What you want to do is move the camera in a direction parallel to the screen plane and this in turn is equivalent to moving the camera in the x-y-plane in view space (or camera space, if you want). And this in turn is equivalent to moving the scene in opposite direction in the x-y-plane in view space.
So all that needs to be done is
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x, y, 0.0f);
//camera setup...
Where (x, y) is the movement vector determined from the touch events, appropriately scaled (try dividing the touch coords you get by the screen dimensions or something similar for example). After this glTranslate comes whatever other camera or scene transformations you already have (be it gluLookAt or just some glTranslate/glRotate/glScale calls). Just make sure that the glTranslate(x, y, ...) is the first transformation you do on the modelview matrix after setting it to identity, since we want to move in view space.
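The scaling step mentioned above can be sketched like this; touchDeltaToTranslation and worldPerScreen are hypothetical names, and worldPerScreen is an assumed tuning constant roughly equal to the visible world width at the square's depth:

```java
// Scales a raw touch-drag delta (in pixels) into a view-space
// translation. Translating the scene the same way the finger moves is
// equivalent to moving the camera in the opposite direction, which
// makes it look as if the user drags the square.
static float[] touchDeltaToTranslation(float dxPixels, float dyPixels,
                                       float screenW, float screenH,
                                       float worldPerScreen) {
    float x =  (dxPixels / screenW) * worldPerScreen;
    float y = -(dyPixels / screenH) * worldPerScreen; // touch y grows downward, GL y grows upward
    return new float[] { x, y };
}
```

The returned pair is what you would feed into the glTranslatef(x, y, 0.0f) call above, accumulating it across drag events.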
So you don't even need gluLookAt. From your other questions I know your code already looks something like
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(x, y, z);
glRotatef(...);
...
So all you need to do is plug the x and y values determined from the touch movement into that first glTranslate call (or add them to the already existing x and y values), since multiple translations are perfectly commutative.
For more insight into OpenGL's transformation pipeline (which is definitely needed before progressing further), you may also look at the answers to this question.
EDIT: If you indeed want to use gluLookAt (be it instead of or after the above-mentioned translation), here is a quick rundown of how it works. It defines a camera using three 3D vectors (passed in as three consecutive values each). First comes the camera's position (in your case (0, 0, 2)); then the point at which the camera looks (in your case (0, 0, 0), though (0, 0, 1) or (0, 0, -42) would result in the same camera, since only the direction matters); and last comes an up-vector, defining the approximate up direction of the camera (which gluLookAt further orthogonalizes to build a proper orthogonal camera frame).
But since the up-vector in your case is the z-axis, which is also the negative viewing direction, this results in a singular matrix. You probably want the y-axis as up-direction, which would mean a call to
gluLookAt(0,0,2, 0,0,0, 0,1,0);
which is in turn equivalent to a simple
glTranslate(0, 0, -2);
since you use the negative z-axis as viewing direction, which is also OpenGL's default.
How do I set the screen coordinate system of the Android screen to behave like the first quadrant of the XY plane? I want the (0,0) position to be at the bottom left, and I want to know whether I can use trigonometric equations on the Android screen, since the Android XY plane is not like the usual XY plane.
I don't think there's a way to do it that would affect the entire system, such as the XML layout files. But if you just want to draw with a Canvas, you can use translate() and scale().
First use translate() to slide the canvas down so 0,0 is at the bottom. Now the top of the screen would be a negative number, so call scale() to flip it around. Now 0,0 is still at the bottom, and the top of the screen is a positive number.
I'm working with information from this answer and its comments. Use something like:
canvas.save(); // need to restore after drawing
canvas.translate(0, canvas.getHeight()); // reset where 0,0 is located
canvas.scale(1, -1); // invert
... // draw to canvas here
canvas.restore(); // restore to normal
And yes, you can use normal 2D trigonometric functions with the XY coords. You can do it even if they're not translated, you just have to think it through more carefully.
I don't know that you're going to have much luck changing where (0,0) is located, but you could set a constant that accounts for it: myY = screenHeight - y flips y so that it is measured from the bottom of the screen, and you work from there.
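That adjustment is just a one-line flip; a sketch (toMathY is a hypothetical helper name):

```java
// Flips Android's top-down y coordinate into a bottom-up,
// first-quadrant y coordinate.
static float toMathY(float y, float screenHeight) {
    return screenHeight - y;
}
```

So toMathY(0, 800) gives 800 (the top of an 800px-tall screen) and toMathY(800, 800) gives 0 (the bottom), which is exactly the first-quadrant convention.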
Look up canvas.scale(sx, sy, px, py).
px and py give the pivot point about which the scaling is applied, so scaling by (1, -1) around a pivot with py = screenHeight / 2 flips the y axis while keeping the content on screen.