I've created a hex tile map and I have a problem combining two features:
1) scrolling the map by dragging with the mouse/finger
2) highlighting a tile on mouse over position
My world contains a map, which consists of custom Tile() objects. Each Tile has an x and y coordinate saved. In my game class I use an OrthographicCamera and a TileSelector, a class that gets the current mouse position and highlights the appropriate tile underneath those coordinates.
Furthermore, my render() method checks whether the mouse/finger is down; if so, each tile's position is updated by the distance moved while the finger/mouse button is down.
The following code shows the relevant parts. As long as I don't move the map, the highlighting works. But as soon as I move the tiles, the highlighting is off (by the distance moved).
//initializing the orthographic camera
camera = new OrthographicCamera(800, 480);
camera.setToOrtho(false, 800, 480);
//draw screen black
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
camera.update();
game.batch.setProjectionMatrix(camera.combined);
//drawing to batch
game.batch.begin();
//drawing the tiles, simply getting x,y coords and putting on screen
world.render(game.batch);
//highlighting tile under the current mouse/finger coords and drawing highlight on screen
selector.select(Gdx.input.getX(), Gdx.graphics.getHeight() - Gdx.input.getY(), game.batch, world);
game.batch.end();
//scrolling routine
if (Gdx.input.isTouched()) {
    Vector3 touchPos = new Vector3();
    touchPos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
    camera.unproject(touchPos);
    deltaX = (float) Gdx.input.getDeltaX();
    deltaY = (float) Gdx.input.getDeltaY();
    //moving each tile by the distance moved
    for (int i = 0; i < world.getMap().getWidth(); i++) {
        for (int j = 0; j < world.getMap().getHeight(); j++) {
            tile = world.getMap().getTile(i, j);
            tile.x += deltaX;
            tile.y -= deltaY;
            world.getMap().setTile(tile, i, j);
        }
    }
}
See screenshots for visualization:
1) Map not moved, mouse (red arrow) over a tile and it is highlighted:
http://i.stack.imgur.com/3d7At.png
2) Map moved but mouse at the old position; no tile should be highlighted, but a tile is highlighted as if the map had not moved:
http://i.stack.imgur.com/dSibc.png
Now I understand that the problem seems to arise from not mapping/updating screen and world coordinates correctly. But due to my lack of knowledge about LibGdx/OpenGL/projecting/translating I can't pinpoint my mistake.
So can this rendering algorithm be fixed as it is? Or should I switch from moving my tiles to moving my camera (and if so, how)? Please let me know if you need more of the code. Thank you.
Alright, I was able to find a solution on my own, since the problem was in my logic rather than in the LibGdx API. In case somebody else has the same issue in the future, I'll provide the solution here.
First, I followed Angel-Angel's tip and started to move the camera instead of the map/world. Second, the main problem was that I used the screen coordinates of my mouse/finger for the tile highlighting instead of the world coordinates.
The new code:
//scrolling routine
if (Gdx.input.isTouched()) {
    //get mouse/finger position in screen coordinates and save it in a Vector3
    touchPos.set(Gdx.input.getX(), Gdx.input.getY(), 0);
    //get the distance moved by the mouse/finger since the last frame and translate the camera by it
    //(I have no idea yet why the x-axis is flipped)
    deltaX = (float) Gdx.input.getDeltaX();
    deltaY = (float) Gdx.input.getDeltaY();
    camera.translate(-deltaX, deltaY, 0);
    //this is the solution: the coordinates used for the tile highlighting
    //are translated to world coordinates
    touchPos = camera.unproject(touchPos);
}
//update the camera position and the math that is involved
camera.update();
game.batch.setProjectionMatrix(camera.combined);
game.batch.begin();
world.render(game.batch);
//now the highlighting routine is working with world coordinates
selector.select(touchPos.x, touchPos.y, game.batch, world);
game.batch.end();
I hope this will help someone in the future. If anyone is interested in how I get the right tile for highlighting, I use the method described in the highest-rated answer in this thread (a small sketch of it follows the link):
Hexagonal Grids, how do you find which hexagon a point is in?
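For reference, here is a minimal sketch of the cube-rounding approach from that answer, the way I adapted it. It assumes pointy-top hexes with outer radius hexSize and the grid origin at world (0, 0); the method name and the axial/cube convention are mine, so adjust it to your own Tile layout:

int[] worldToHex(float worldX, float worldY, float hexSize) {
    // convert the world point to fractional axial coordinates (pointy-top layout)
    float q = (float) ((Math.sqrt(3) / 3 * worldX - 1f / 3 * worldY) / hexSize);
    float r = (float) ((2f / 3 * worldY) / hexSize);
    // cube rounding: round each cube coordinate, then fix the one with the largest error
    float cx = q, cz = r, cy = -cx - cz;
    int rx = Math.round(cx), ry = Math.round(cy), rz = Math.round(cz);
    float dx = Math.abs(rx - cx), dy = Math.abs(ry - cy), dz = Math.abs(rz - cz);
    if (dx > dy && dx > dz) {
        rx = -ry - rz;
    } else if (dy > dz) {
        ry = -rx - rz;
    } else {
        rz = -rx - ry;
    }
    return new int[] { rx, rz }; // axial (column, row) of the hex under the world point
}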
Good luck and have fun!
I want to pinch zoom around a specific coordinate on a tiled 15f x 15f 3D board, i.e. I don't want to zoom around the origin. Thus I need to pan the board accordingly.
I am using a PerspectiveCamera (near=0.1f, far=100f). For a perfect fit of the screen, the camera is located at approx. z=13.4 before zooming.
Now what (I think) I want to do is to:
Unproject the screen coordinates (GestureDetector.pinch method) done once for each pinch zoom:
float icx = (initialPointer1.x + initialPointer2.x) / 2;
float icy = (initialPointer1.y + initialPointer2.y) / 2;
pointToMaintain = new Vector3(icx, icy, 0f);
mCamera.unproject(pointToMaintain);
Now for each zoom cycle (as I adjust the mCamera.z accordingly and do mCamera.update()) I project the point back to screen coordinates:
Vector3 pointNewPos = new Vector3(pointToMaintain);
mCamera.project(pointNewPos);
Then calculate the delta and pan accordingly:
int dX = (int) (pointNewPos.x - icx);
int dY = (int) (pointNewPos.y - icy);
pan(...); /* I.e. mCamera.translate(...) */
My problem is that the mCamera.z is initially above pointToMaintain.z and then goes below as the user moves the fingers:
step   cam.z   ptm.z    dX     dY
0      13.40
1      13.32   13.30    12    134
2      13.24   13.30    12   -188
...
(0) is the original value of mCamera.z before zooming starts. (1) is not valid? However, (2) should be OK.
My questions are:
(1) How can I get a "valid" pointToMaintain when unprojecting the screen coordinates on the camera, i.e. a point that is not less than cam.z? (The reason I get the point at 13.30 is, I guess, because near=0.1f. But as seen above, this results in weird screen coordinates.)
(2) Is this a good strategy for moving the tile board closer to the coordinates the user pinch zoomed on?
To maintain the focus point, I used this code:
Note: this code relies on overloaded operators; you need to replace the vector operators with the corresponding methods (addMe, subtract, etc.).
void zoomAt(float changeAmmount, Vector2D focus) {
    float newZoom = thisZoom + changeAmmount;
    offset = focus - ((focus - offset) * newZoom / thisZoom);
    thisZoom = newZoom;
}
Where:
focus = the current center point to maintain
offset = the distance from (0, 0)
thisZoom = the current zoom amount (starts at 1)
changeAmmount = the value by which to increase or decrease the zoom
It took me four attempts over three years to get this right, and it turned out to be pretty easy once I drew it out on paper: it's just two triangles.
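For anyone who prefers it without operator overloading, here is a rough Java transcription of the same idea. It is only a sketch; the class and field names are mine, and the offset is kept as two plain floats:

class FocusZoom {
    float thisZoom = 1f;              // current zoom amount (starts at 1)
    float offsetX = 0f, offsetY = 0f; // distance of the view from (0, 0)

    void zoomAt(float changeAmount, float focusX, float focusY) {
        float newZoom = thisZoom + changeAmount;
        // rescale the focus-to-offset distance so the focus point stays put on screen
        offsetX = focusX - (focusX - offsetX) * newZoom / thisZoom;
        offsetY = focusY - (focusY - offsetY) * newZoom / thisZoom;
        thisZoom = newZoom;
    }
}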
I know this question has been asked many many times, but with all the knowledge out there I still can't get it to work for myself in the specific setting I now find myself in: Processing for Android.
The coordinate systems involved are (1) the real-world coordinate system as per Android's view: y is tangential to the ground and pointing north, z goes up into the sky, and x goes to your right, if you're standing on the ground and looking north; and (2) the device coordinate system as per Processing's view: x points to the right of the screen, y down, and z comes out of the screen.
The goal is simply to draw a cube on the screen and have it rotate on device rotation such that it seems that it is stable in actual space. That is: I want a map between the two coordinate systems so that I can draw in terms of the real-world coordinates instead of the screen coordinates.
In the code I'm using the Ketai sensor library, and subscribe to the onRotationVectorEvent(float x, float y, float z) event. Also, I have a simple quaternion class lying around that I got from https://github.com/kynd/PQuaternion. So far I have the following code, in which I have two different ways of trying to map, that coincide, but nevertheless don't work as I want them to:
import ketai.sensors.*;
KetaiSensor sensor;
PVector rotationAngle = new PVector(0, 0, 0);
Quaternion rot = new Quaternion();
void setup() {
  fullScreen(P3D);
  sensor = new KetaiSensor(this);
  sensor.start();
}

void draw() {
  background(#333333);
  translate(width/2, height/2);
  lights();

  // method 1: draw lines for real-world axes in terms of processing's coordinates
  PVector rot_x_axis = rot.mult(new PVector(400, 0, 0));
  PVector rot_y_axis = rot.mult(new PVector(0, 0, -400));
  PVector rot_z_axis = rot.mult(new PVector(0, 400, 4));
  stroke(#ffffff);
  strokeWeight(8); line(0, 0, 0, rot_x_axis.x, rot_x_axis.y, rot_x_axis.z);
  strokeWeight(5); line(0, 0, 0, rot_y_axis.x, rot_y_axis.y, rot_y_axis.z);
  strokeWeight(2); line(0, 0, 0, rot_z_axis.x, rot_z_axis.y, rot_z_axis.z);

  // method 2: first rotate appropriately
  fill(#f4f7d2);
  rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, rotationAngle.y, rotationAngle.z);
  box(200, 200, 200);
}

void onRotationVectorEvent(float x, float y, float z) {
  rotationAngle = new PVector(x, y, z);
  // I believe these two do the same thing.
  rot.set(x, y, z, cos(asin(rotationAngle.mag())));
  //rot.setAngleAxis(asin(rotationAngle.mag())*2, rotationAngle);
}
The above works well enough that the real-world axis lines coincide with the drawn cube, and both rotate in an interesting way. But there still seems to be some "gimbal stuff" going on: when I rotate my device up and down while standing one way, the cube also rotates up and down, but standing another way, the cube rotates sideways, as if I'm applying the rotations in the wrong order. However, I'm trying to avoid gimbal problems precisely by working with quaternions this way; how does the issue still apply?
I've solved it now with a simple "click to test next configuration" UI that cycles through all 6 * 8 possible configurations of rotate(asin(rotationAngle.mag()) * 2, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>, <SIGN> * rotationAngle.<DIM>); the solution turned out to be 0, -1, 2, i.e.:
rotate(asin(rotationAngle.mag()) * 2, rotationAngle.x, -rotationAngle.y, rotationAngle.z);
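In case it helps anyone reproduce the brute-force search, here is roughly what the "click to test next configuration" part looked like. This is a reconstruction of the idea rather than the exact code I used, and the array layout is only illustrative:

// cycles through the 6 axis orders * 8 sign combinations on each tap
int config = 0;

void mousePressed() {
  config = (config + 1) % 48;
  println("testing configuration " + config);
}

// call this from draw() instead of the fixed rotate(...) line above
void applyConfiguredRotation(PVector v) {
  int[][] orders = { {0, 1, 2}, {0, 2, 1}, {1, 0, 2}, {1, 2, 0}, {2, 0, 1}, {2, 1, 0} };
  float[] comps = { v.x, v.y, v.z };
  int[] o = orders[config / 8];
  int signs = config % 8;
  float sx = ((signs & 1) == 0) ? 1 : -1;
  float sy = ((signs & 2) == 0) ? 1 : -1;
  float sz = ((signs & 4) == 0) ? 1 : -1;
  rotate(asin(v.mag()) * 2, sx * comps[o[0]], sy * comps[o[1]], sz * comps[o[2]]);
}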
Good evening, quick question.
I'm developing a top-down 2D platformer game in Unity3D. Here is a picture of the game.
I have pretty much everything worked out on desktop, but when attempting to set up the controls for mobile, I can't seem to get it to work the way it should. All I need is for the player to move in the direction of wherever the user touches the screen. With the current code I'm using, the player just rotates in 4 directions: up, down, left and right. He also moves a little, but never goes far from his spawn point.
Please take a look at my revised code:
public Camera camera;
public float movespeed = 0;

// Use this for initialization
void Start () {
    movespeed = 2.75F;
}

// Update is called once per frame
void Update () {
    if (Input.touchCount > 0) {
        // The screen has been touched so store the touch
        Touch touch = Input.GetTouch(0);

        if (touch.phase == TouchPhase.Stationary || touch.phase == TouchPhase.Moved) {
            // If the finger is on the screen, move the object smoothly to the touch position
            Vector3 touchPosition = camera.ScreenToWorldPoint(new Vector3(touch.position.x, touch.position.y, -13));
            Quaternion rot = Quaternion.LookRotation(transform.position - touchPosition, Vector3.back);
            transform.rotation = rot;
            transform.eulerAngles = new Vector3 (0, 0, transform.eulerAngles.z);
            rigidbody2D.angularVelocity = 0;

            //float input = Input.GetAxis ("Vertical");
            transform.position = Vector3.Lerp(transform.position, touchPosition, Time.deltaTime);
        }
    }
}
Any ideas on how I can get my player to move to the touched area of the screen? Any help would be much appreciated. Thanks in advance.
If I have understood correctly, you want your player game object to move towards the point on the screen that is being touched. I think it's probably best to describe the behavior of your code so that you can hopefully better understand what might be happening.
From the code posted, I can see one possible issue. Look again at this line:
Quaternion rot = Quaternion.LookRotation(transform.position - touchPosition, Vector3.back);
Here, you are asking Unity to calculate the unit quaternion that represents a rotation from Vector3.forward to the direction pointing from the touch position to the player game object. This probably isn't what you want. From the description of the problem, you want the game object to rotate to face the point on the screen being touched (rather than the opposite direction). You can either swap the subtraction operands or, preferably, use the Transform.LookAt method instead.
After this, you update the transform's rotation:
transform.rotation = rot;
That's fine, but note that you wouldn't need to do this when using Transform.LookAt.
You then set the transform's rotation again in this line:
transform.eulerAngles = new Vector3 (0, 0, transform.eulerAngles.z);
I'm not entirely sure why you are doing this. If you only want one axis of rotation, you can use, for example:
transform.LookAt(new Vector3(touchPosition.x, touchPosition.y, transform.position.z))
This should rotate the player's transform around the z-axis to look in the direction of the point being touched.
Finally, you linearly interpolate the transform's position from its current position to the point being touched:
transform.position = Vector3.Lerp(transform.position, touchPosition, Time.deltaTime);
This isn't necessary. Instead, you should just move the player's transform forward. The player should be looking in the direction of the touched screen point. Hence, translating the player forward will move the player towards said screen point:
transform.position += transform.forward * speed * Time.deltaTime;
When the player is very close to the touched screen point it will overshoot and immediately rotate to look in the opposite direction. This will occur repeatedly. You should include some distance that specifies when the player is assumed to have reached the target point.
I am writing my own game engine (a very basic one); I want to learn how physics works in game development rather than use an already built game engine. I am writing my code in Java for Android devices (using SurfaceView).
The problem is that I don't know how to calculate the position for my object after a collision. I have created my own collision detection and it is working perfectly.
As you can see, the red rectangle is the area where my ball should move. The arrows show where the ball should move after a collision happens. The ball has different positions, marked 1 - 11 (note: while rendering the "world" you see only one ball!).
The balls are actually rectangles! But you can not see the edges.
I have created my own Game Object class, where I'm keeping data about the object position, velocity, origin, etc.:
public abstract class GameObject
{
    public Vector2 dimension;
    public Vector2 position;
    public Vector2 velocity;
    public Vector2 origin;
    public Rectangle rectangle;

    public GameObject(Resources resources)
    {
        this.dimension = new Vector2();
        this.position = new Vector2();
        this.velocity = new Vector2();
        this.origin = new Vector2();
        this.rectangle = new Rectangle();
    }

    public void update(float deltaTime)
    {
        position.x += velocity.x;
        position.y += velocity.y;
        rectangle.set(position.x, position.y, dimension.x, dimension.y);
        origin.x = position.x + dimension.x / 2;
        origin.y = position.y + dimension.y / 2;
    }
}
This method is called if a ball collides with one of the red rectangle's margins:
protected void onBallCollideWithLevelEdge(Ball ball)
{
    // Calculate next position:
    ??????????
}
My ball has a velocity and a position. Should I save the previous position of the ball?
1) Why do you represent balls as rectangles? A circle is far easier to handle, especially if you want to extend collisions to ball-ball.
2) If you are trying to make a physics engine, you must have the coordinates of all objects and their respective first and second derivatives, i.e. velocity and acceleration. Moreover, you need a mass for each object and maybe some other parameters, for example material (for friction) and elasticity, but these are not needed at the beginning.
3) When a collision with a wall happens, you have the current ball position and velocity. Given this data you have to calculate the normal force. The normal force is such that the ball cannot pass through the wall, so I'd calculate its magnitude using something like this (see the sketch after this list):
Nx = DELTAx*k;
Ny = DELTAy*k;
where k is some elasticity constant that you can tune and DELTA is the measure of how much the ball has penetrated into the wall. Note that this is good only for "slow" objects; if you deal with bullets, you'd better use something like rays instead.
Another way could be to calculate kinetic energy at the time of collision and transfer it to elastic energy. Once you have elastic energy you can release it in a direction normal to the wall, transforming it to a force.
Once you have the force, you turn it into an acceleration by dividing it by the mass.
4) At each simulation iteration you add velocity to position and acceleration to velocity, remembering to multiply by the integration time step (dt). This time step can be set arbitrarily and is a constant: if you want a 100 Hz simulation, set dt to 10 ms. You also have to recalculate the acceleration as the sum of forces divided by the mass.
5) As you may have noticed, I never talked about direction, because you don't need it if you decompose everything along each axis (x and y in a 2D engine). If the normal force points "right", as in your example, the Ny component will be zero, while Nx will be transformed into an acceleration and will change the horizontal velocity. This will also change the ball's direction.
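Here is a minimal sketch of points 3 and 4 for the left/right level edges only, using the fields of the GameObject class above. The method name, the wall coordinates, the elasticity constant k and the mass are made-up parameters for illustration, not something from the asker's engine:

// per-axis penalty force for the left/right level edges; call once per simulation step
void resolveHorizontalWalls(GameObject ball, float leftEdge, float rightEdge,
                            float k, float mass, float dt) {
    float penetration = 0f;
    if (ball.position.x < leftEdge) {
        penetration = leftEdge - ball.position.x;                       // went past the left wall
    } else if (ball.position.x + ball.dimension.x > rightEdge) {
        penetration = rightEdge - (ball.position.x + ball.dimension.x); // went past the right wall (negative)
    }
    float nx = penetration * k;   // normal force along x (point 3): pushes the ball back out
    float ax = nx / mass;         // force -> acceleration
    ball.velocity.x += ax * dt;   // acceleration -> velocity (point 4); update() then moves the ball
}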
This is only general advice; you should really start by studying kinematics and later some rational mechanics.
I had a function that looked something like this:
Position calculateValidPosition(Position start, Position end)
    Position middlePoint = (start + end) / 2

    if (middlePoint == start || middlePoint == end)
        return start

    if (isColliding(middlePoint))
        return calculateValidPosition(start, middlePoint)
    else
        return calculateValidPosition(middlePoint, end)
I just wrote this code on the fly, so there is a lot of room for improvement... starting with not making it recursive (see the sketch below).
This function would be called when a collision is detected, passing as parameters the last valid position of the object and the current invalid position. On each iteration, the first parameter is always valid (no collision) and the second one is invalid (there is a collision).
But I think this can give you an idea of a possible solution, so you can adapt it to your needs.
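For completeness, here is roughly what a non-recursive Java version of that idea could look like. It is only a sketch: it assumes a Position class with public float x/y fields and a two-argument constructor, plus your own isColliding() test, none of which come from the original code:

Position calculateValidPosition(Position start, Position end) {
    Position valid = new Position(start.x, start.y);   // last known collision-free position
    Position invalid = new Position(end.x, end.y);     // current colliding position
    // bisect until the two points are (nearly) identical
    while (Math.abs(invalid.x - valid.x) > 0.001f || Math.abs(invalid.y - valid.y) > 0.001f) {
        Position middle = new Position((valid.x + invalid.x) / 2f, (valid.y + invalid.y) / 2f);
        if (isColliding(middle)) {
            invalid = middle;   // still colliding: search closer to the valid side
        } else {
            valid = middle;     // no collision: the object can move at least this far
        }
    }
    return valid;
}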
I am developing an application which uses OpenGL for rendering the images.
Now I want to determine the touch event on the OpenGL sphere objects which I have drawn.
Here I draw 4 objects on the screen. How should I find out which object has been touched? I have used the onTouchEvent() method, but it only gives me x & y coordinates, while my objects are drawn in 3D.
Please help, since I am new to OpenGL.
Best Regards,
~Anup
At Google IO there was a session on how OpenGL was used for Google Body on Android. The selection of body parts was done by rendering each of them with a solid color into a hidden buffer; then, based on the color at the touched x,y position, the corresponding object could be found. For performance purposes, only a small cropped area of 20x20 pixels around the touch point was rendered that way.
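If you want to try that color-picking approach on Android (GL ES 2.0), the read-back part could look roughly like this. It assumes you have just rendered each object in a unique flat color into the current framebuffer; the id encoding and the method name are only illustrative:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.opengl.GLES20;

// returns the id encoded in the pixel color under the touch point
int pickObjectAt(int touchX, int touchY, int viewportHeight) {
    ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
    // GL window coordinates start at the bottom-left, so flip the y coordinate
    GLES20.glReadPixels(touchX, viewportHeight - touchY, 1, 1,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixel);
    int r = pixel.get(0) & 0xFF;
    int g = pixel.get(1) & 0xFF;
    int b = pixel.get(2) & 0xFF;
    return (r << 16) | (g << 8) | b;  // decode the flat color back into an object id
}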
Both approaches (1. hidden color buffer and 2. intersection test) have their own merits.
1. Hidden color buffer: pixel read-out is a very slow operation, and certainly overkill for a simple ray-sphere intersection test.
2. Ray-sphere intersection test: this is not that difficult. Here is a simplified version of an implementation in Ogre3d.
std::pair<bool, m_real> Ray::intersects(const Sphere& sphere) const
{
    const Ray& ray = *this;
    const vector3& raydir = ray.direction();
    // Adjust ray origin relative to sphere center
    const vector3& rayorig = ray.origin() - sphere.center;
    m_real radius = sphere.radius;

    // Mmm, quadratics
    // Build coeffs which can be used with std quadratic solver
    // ie t = (-b +/- sqrt(b*b - 4ac)) / 2a
    m_real a = raydir % raydir;
    m_real b = 2 * rayorig % raydir;
    m_real c = rayorig % rayorig - radius * radius;

    // Calc determinant
    m_real d = (b * b) - (4 * a * c);
    if (d < 0)
    {
        // No intersection
        return std::pair<bool, m_real>(false, 0);
    }
    else
    {
        // BTW, if d=0 there is one intersection, if d > 0 there are 2
        // But we only want the closest one, so that's ok, just use the
        // '-' version of the solver
        m_real t = (-b - sqrt(d)) / (2 * a);
        if (t < 0)
            t = (-b + sqrt(d)) / (2 * a);
        return std::pair<bool, m_real>(true, t);
    }
}
A ray that corresponds to the cursor position probably also needs to be calculated. Again you can refer to Ogre3d's source code: search for getCameraToViewportRay. Basically, you need the view and projection matrices to calculate a Ray (a 3D position and a 3D direction) from a 2D position.
In my project, the solution I chose was:
Unproject your 2D screen coordinates to a virtual 3D line going through your scene.
Detect possible intersections of that line and your scene objects.
This is quite a complex task.
I have only done this in Direct3D rather than OpenGL ES, but these are the steps:
Find your modelview and projection matrices. It seems that OpenGL ES has removed the ability to retrieve the matrices set by gluProject() etc. But you can use android.opengl.Matrix member functions to create these matrices instead, then set with glLoadMatrix().
Call gluUnproject() twice, once with winZ=0, then with winZ=1. Pass the matrices you calculated earlier.
This will output a 3d position from each call. This pair of positions define a ray in OpenGL "world space".
Perform a ray - sphere intersection test on each of your spheres in order. (Closest to camera first, otherwise you may select a sphere that is hidden behind another.) If you detect an intersection, you've touched the sphere.
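A rough sketch of those unproject steps on Android follows. It assumes you already keep your model-view and projection matrices as float[16] arrays and the viewport as {0, 0, width, height}; note that android.opengl.GLU.gluUnProject writes a homogeneous 4-component result, so you divide by w yourself:

import android.opengl.GLU;

// unprojects the touch point at the given window depth (0 = near plane, 1 = far plane)
float[] unprojectTouch(float touchX, float touchY, float winZ,
                       float[] modelView, float[] projection, int[] viewport) {
    float[] obj = new float[4];
    float glY = viewport[3] - touchY;   // screen y grows downwards, GL window y grows upwards
    GLU.gluUnProject(touchX, glY, winZ, modelView, 0, projection, 0, viewport, 0, obj, 0);
    return new float[] { obj[0] / obj[3], obj[1] / obj[3], obj[2] / obj[3] };
}

Then build the ray from the two results and feed it into a ray-sphere test like the one above:

float[] near = unprojectTouch(touchX, touchY, 0f, modelView, projection, viewport);
float[] far  = unprojectTouch(touchX, touchY, 1f, modelView, projection, viewport);
// ray origin = near point; ray direction = (far - near), normalized before use
float dirX = far[0] - near[0], dirY = far[1] - near[1], dirZ = far[2] - near[2];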
To find whether the touch point is inside the circle or not:
public boolean checkInsideCircle(float x, float y, float centerX, float centerY, float Radius)
{
    // the point is inside if its squared distance to the center is less than the squared radius
    return ((x - centerX) * (x - centerX)) + ((y - centerY) * (y - centerY)) < (Radius * Radius);
}
where
1) centerX, centerY is the center point of the circle,
2) Radius is the radius of the circle,
3) x, y is the touch point.