I'm pretty much looking for exactly what the question suggests - a DecelerateAccelerateInterpolator. What I want to do is have an animation decelerate for its first half, and then accelerate after that. (I'm using it to mimic a gravity-like effect on a Bézier curve).
EDIT:
Basically, what I'm looking for is this: as the object moves upward on the screen along a Bézier curve, it decelerates until it reaches the top (at which point it momentarily stops, i.e. has zero speed), and then it starts to accelerate as it travels back down the other side.
If you're using Android API 21 or greater, you can use PathInterpolator with a cubic Bézier curve:
if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.LOLLIPOP) {
    // Control points (0.0, 0.97) and (1.0, 0.03) give a curve that
    // decelerates over the first half and accelerates over the second
    TimeInterpolator lInter = new PathInterpolator(0.0f, 0.97f, 1.0f, 0.03f);
    animation.setInterpolator(lInter);
}
And here's a great website for calculating your desired Bézier curve values:
Try my custom Interpolator; the formula (1 - (1 - 2x)^3) / 2 has zero slope at the midpoint, which gives you the momentary stop you describe. And you're welcome :)
mAnimation.setInterpolator(new Interpolator() {
    @Override
    public float getInterpolation(float pInput) {
        // (1 - (1 - 2x)^3) / 2: steep at both ends, flat (speed 0) at x = 0.5
        return (float) (1 - Math.pow(1 - (2 * pInput), 3)) / 2;
    }
});
I'm thinking you could actually chain two Interpolators: a DecelerateInterpolator over the first half of the animation, then an AccelerateInterpolator over the second half. Basically, split the time in half, as in the sketch below.
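This sketch combines the two (the wrapper class and the 0.5f split point are mine, not a standard API):
public class DecelerateAccelerateInterpolator implements Interpolator {
    private final Interpolator decelerate = new DecelerateInterpolator();
    private final Interpolator accelerate = new AccelerateInterpolator();

    @Override
    public float getInterpolation(float input) {
        if (input < 0.5f) {
            // First half: run the full decelerate curve, scaled into [0, 0.5)
            return decelerate.getInterpolation(input * 2f) * 0.5f;
        }
        // Second half: run the full accelerate curve, scaled into [0.5, 1]
        return 0.5f + accelerate.getInterpolation((input - 0.5f) * 2f) * 0.5f;
    }
}
Both branches meet at 0.5 when input is 0.5, so the curve is continuous at the midpoint.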
Related
I'm using OpenGL with Android. I am just playing around and trying to learn some stuff, and I've decided to make a simple game where there are falling triangles and you have to tap them to "collect" them (Don't steal my idea! xD).
I am using an Android Timer like this:
Timer t = new Timer();
t.scheduleAtFixedRate(new TimerTask() {
    @Override
    public void run() {
        float[] p1Temp = mTriangle.getP1();
        float[] p2Temp = mTriangle.getP2();
        float[] p3Temp = mTriangle.getP3();
        mTriangle.changeCoords(new float[] {p1Temp[0], p1Temp[1] - 0.01f, p1Temp[2],
                                            p2Temp[0], p2Temp[1] - 0.01f, p2Temp[2],
                                            p3Temp[0], p3Temp[1] - 0.01f, p3Temp[2]});
        if (mTriangle.getP1()[1] <= -1.0f ||
                mTriangle.getP2()[1] <= -1.0f ||
                mTriangle.getP3()[1] <= -1.0f) {
            t.cancel();
        }
    }
}, 0, 40);
So basically what this code does is this: every 40 milliseconds, the y coordinate of every vertex of the falling triangle is decremented. This process stops when the triangle hits the bottom of the screen (i.e. hits the ground).
My question is this: I'm new to using OpenGL on Android. Is this the correct way to handle "movement" of objects, or are there methods I'm supposed to use to implement animation/movement?
The most common approach I have seen is somewhat different. It's more typical to update the animation while preparing to render each frame, and base the update on the amount of time that has passed since the last frame.
Since distance is velocity multiplied by time, you do this by assigning a velocity vector to each of your objects. Then when it's time to update the animation, you take the time difference since the last update, and the increment you apply to your positions is the time difference multiplied by the velocity. The velocity is constant as long as you just use a linear motion, but can also change over time for more complex animations, e.g. due to gravity, collision with other objects, etc.
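As a minimal sketch of that update step (the field names and the GRAVITY constant here are illustrative, not from any particular API):
// Called once per frame with the elapsed time in seconds
void update(float dt) {
    velocityY += GRAVITY * dt;   // e.g. GRAVITY = -9.8f world-units/s^2
    positionX += velocityX * dt; // distance = velocity * time
    positionY += velocityY * dt;
}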
If you're using OpenGL on Android, you're probably using a GLSurfaceView for your rendering. By default, the GLSurfaceView will already invoke your rendering function continuously, up to 60 fps if your rendering can keep up with the display refresh rate.
What you roughly do is keep the time when the last frame was rendered as a member variable in your GLSurfaceView.Renderer implementation. Then each time onDrawFrame() is called, you get the new time and subtract the last frame time from it to get the time increment. Then you store the new time in your member variable, multiply the time increment by the velocity, and add the result to your positions.
After the positions are updated, you render your objects as you normally would.
To give you the outline, the following is a slightly adapted piece of (pseudo-)code I copied from my answer to a similar question (Android timing in OpenGL ES thread is not monotonic):
public void onDrawFrame(GL10 gl) {
    long currentTime = SystemClock.elapsedRealtime();
    float deltaTime = (currentTime - lastFrameTime) / 1000.0f; // in seconds
    lastFrameTime = currentTime;
    // update animation based on deltaTime
    // draw frame
}
Where lastFrameTime is a member variable, currentTime a local variable.
I am writing my own game engine (a very basic one); I want to learn how physics works in game development rather than use an already built game engine. I am writing my code in Java for Android devices (using SurfaceView).
The problem is that I don't know how to calculate the position of my object after a collision. I have created my own collision detection and it is working perfectly.
As you can see, the red rectangle is the area where my ball should move. The arrows show where the ball should move after a collision happens. The ball's successive positions are marked 1-11 (note: while rendering the "world" you see only one ball!).
The balls are actually rectangles, but you cannot see the edges.
I have created my own GameObject class, where I keep data about the object's position, velocity, origin, etc.:
public abstract class GameObject {
    public Vector2 dimension;
    public Vector2 position;
    public Vector2 velocity;
    public Vector2 origin;
    public Rectangle rectangle;

    public GameObject(Resources resources) {
        this.dimension = new Vector2();
        this.position = new Vector2();
        this.velocity = new Vector2();
        this.origin = new Vector2();
        this.rectangle = new Rectangle();
    }

    public void update(float deltaTime) {
        position.x += velocity.x;
        position.y += velocity.y;
        rectangle.set(position.x, position.y, dimension.x, dimension.y);
        origin.x = position.x + dimension.x / 2;
        origin.y = position.y + dimension.y / 2;
    }
}
This method is called when a ball collides with one of the red rectangle's margins:
protected void onBallCollideWithLevelEdge(Ball ball) {
    // Calculate next position:
    ??????????
}
My ball has a velocity and a position. Should I save the previous position of the ball?
1) Why do you represent balls as rectangles? A circle is far easier to handle, especially if you want to extend collisions to ball-ball.
2) If you are trying to make a physics engine, you must have the coordinates of all objects and their respective first and second derivatives, a.k.a. velocity and acceleration. Moreover, you need a mass for each object and maybe some other parameters, for example material (for friction) and elasticity, but these are not needed at the beginning.
3) When a collision with a wall happens, you have the current ball position and velocity. Given this data, you have to calculate the normal force. The normal force is such that the ball cannot pass through the wall, so I'd calculate its magnitude using something like this:
Nx = DELTAx * k;
Ny = DELTAy * k;
where k is an elasticity constant that you can tune, and DELTA is the measure of how far the ball has penetrated into the wall along each axis. Note that this is good only for "slow" objects; if you deal with bullets, you'd better use something like rays (continuous collision detection).
Another way could be to calculate the kinetic energy at the time of collision and transfer it into elastic energy. Once you have the elastic energy, you can release it in the direction normal to the wall, transforming it into a force.
Once you have the force, it becomes an acceleration by dividing it by the mass.
4) At each simulation iteration you add velocity to position, and acceleration to velocity, remembering to multiply each by the integration time step (dt). This step can be set arbitrarily and is a constant: if you want a 100 Hz simulation, you set dt to 10 ms. You also have to recalculate the acceleration as the sum of forces divided by the mass.
5) As you'll note, I never talked about direction, because you don't need it if you decompose coordinates along each axis (x, y in a 2D engine). If the normal force points "right", as in your example, the Ny component will be zero, while Nx will be transformed into acceleration and will change the horizontal speed, which also changes the ball's direction (a simplified bounce sketch follows after the next paragraph).
These are only pointers; you should really start by studying kinematics and later some rational mechanics.
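If you just want a working bounce before building the full force model, a common shortcut is to reflect the velocity component normal to the wall. A minimal sketch, assuming the level bounds live in a hypothetical level object and using a made-up restitution constant:
protected void onBallCollideWithLevelEdge(Ball ball) {
    final float restitution = 0.9f; // 1.0 = perfectly elastic bounce
    // Left or right edge: flip the horizontal velocity component
    if (ball.position.x <= level.left || ball.position.x + ball.dimension.x >= level.right) {
        ball.velocity.x = -ball.velocity.x * restitution;
    }
    // Top or bottom edge: flip the vertical velocity component
    if (ball.position.y <= level.top || ball.position.y + ball.dimension.y >= level.bottom) {
        ball.velocity.y = -ball.velocity.y * restitution;
    }
}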
I had a function that looked something like this:
Position calculateValidPosition(Position start, Position end)
    Position middlePoint = (start + end) / 2
    if (middlePoint == start || middlePoint == end)
        return start
    if (isColliding(middlePoint))
        return calculateValidPosition(start, middlePoint)
    else
        return calculateValidPosition(middlePoint, end)
I just made this code up on the fly, so there's a lot of room for improvement, starting with making it iterative instead of recursive.
This function would be called when a collision is detected, passing as parameters the last valid position of the object and the current invalid position. On each iteration, the first parameter is always valid (no collision) and the second one is invalid (there is a collision).
But I think this can give you an idea of a possible solution, so you can adapt it to your needs.
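For illustration, the call site might look roughly like this (the variable names are hypothetical):
// Inside the update step, after moving the object:
if (isColliding(newPosition)) {
    // Binary-search back toward the last known good position
    newPosition = calculateValidPosition(lastValidPosition, newPosition);
}
lastValidPosition = newPosition;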
I am making a 2D game. The phone is held horizontally and a character moves up/down and left/right to avoid obstacles. The character is controlled by the accelerometer on the phone. Everything works fine if the player doesn't mind that (0,0) (the point where the character stands still) is where the phone is held perfectly flat. In this scenario it's possible to just read the Y and X values directly and use them to control the character. The accelerometer values are between -10 and 10 (they get multiplied by an acceleration constant to decide the movement speed of the character); libgdx is the framework used.
The problem is that holding the phone perfectly flat isn't very comfortable, so the idea is to calibrate it so that (0,0) is set to the phone's orientation at a specific point in time.
Which brings me to my question: how would I do this? I tried just reading the current X and Y values and then subtracting them. The problem with that is that when the phone is held at a 90-degree angle, the X offset value is 10 (which is the max value), so it ends up becoming impossible to move in that direction because the value will never go over 10 (10 - 10 = 0). The Z axis has to come into play here somehow; I'm just not sure how.
Thanks for the help. I tried explaining as best as I can; I did try searching for a solution, but I don't even know what the proper term is for what I'm looking for.
An old question, but I am providing the answer here as I couldn't find a good answer for Android or LibGDX anywhere. The code below is based on a solution someone posted for iOS (sorry, I have lost the reference).
You can do this in three parts:
Capture a vector representing the neutral direction:
Vector3 tiltCalibration = new Vector3(
        Gdx.input.getAccelerometerX(),
        Gdx.input.getAccelerometerY(),
        Gdx.input.getAccelerometerZ() );
Transform this vector into a rotation matrix:
public void initTiltControls( Vector3 tiltCalibration ) {
    Vector3.tmp.set( 0, 0, 1 );
    Vector3.tmp2.set( tiltCalibration ).nor();
    Quaternion rotateQuaternion = new Quaternion().setFromCross( Vector3.tmp, Vector3.tmp2 );
    Matrix4 m = new Matrix4( Vector3.Zero, rotateQuaternion, new Vector3( 1f, 1f, 1f ) );
    this.calibrationMatrix = m.inv();
}
Whenever you need inputs from the accelerometer, first run them through the rotation matrix:
public void handleAccelerometerInputs( float x, float y, float z ) {
    Vector3.tmp.set( x, y, z );
    Vector3.tmp.mul( this.calibrationMatrix );
    x = Vector3.tmp.x;
    y = Vector3.tmp.y;
    z = Vector3.tmp.z;
    // use x, y and z here
    ...
}
For a simple solution you can look at the methods:
Gdx.input.getAzimuth(), Gdx.input.getPitch(), Gdx.input.getRoll()
The downside is that these use the internal compass to give your device's rotation relative to North/South/East/West. I only tested this briefly, so I'm not 100% sure about it, but it might be worth a look.
The more complex method involves some trigonometry: basically, you have to calculate the angle the phone is held at from Gdx.input.getAccelerometerX/Y/Z(). It must be something like this (for rotation along the longer side of the phone):
Math.atan(Gdx.input.getAccelerometerX() / Gdx.input.getAccelerometerZ());
For both approaches you then store the initial angle and subtract it later on. You have to watch out for the ranges, though: Math.atan(...) returns values between -Pi/2 and Pi/2, while Math.atan2(...) covers the full -Pi to Pi range and avoids a division by zero when the phone is flat.
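A rough sketch of that angle-offset idea (the libgdx calls are real; the axis choice and the switch to Math.atan2 are my own):
// Capture once, when the player picks their neutral grip:
float calibrationAngle = (float) Math.atan2(
        Gdx.input.getAccelerometerX(),
        Gdx.input.getAccelerometerZ());

// Then, every frame:
float currentAngle = (float) Math.atan2(
        Gdx.input.getAccelerometerX(),
        Gdx.input.getAccelerometerZ());
float tilt = currentAngle - calibrationAngle; // zero at the calibrated orientation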
Hopefully that'll get you started. You might also search for "accelerometer to pitch/roll/rotation" and similar terms.
So I'm just starting to learn how to create live wallpapers in Eclipse, and I'm having trouble getting a simple line to move randomly across the screen after a random amount of time, sort of like a shooting star. I think my stop and start values are wrong too; I was trying to set a length limit for the line.
I'm using the CubeLiveWallpaper sample as a template.
/*
 * Draw a line
 */
void drawCube(Canvas c) {
    c.save();
    c.drawColor(0xff000000);
    drawLine(c);
    c.restore();
}
/*
 * Line path
 */
void drawLine(Canvas c) {
    // Move line across screen randomly
    float startX = 0;
    float startY = 0;
    float stopX = 100;
    float stopY = 100;
    c.drawLine(startX, startY, stopX, stopY, mPaint);
}
This is a pretty open-ended question. I'll try to give you some pointers. :-)
First of all, with all due respect to our good buddies at Google, the Cube example does not always present "best practice." Most notably, you should "never" use hard-coded constants in your wallpaper; always use a proportion of your screen size. In most cases, it's "good enough" to save the width and height variables from onSurfaceChanged() into class variables. My point is, instead of "100," you should be using something like "mScreenWidth / 4" to indicate one quarter of the width of your device (be it a teeny-tiny phone or a ginormous tablet).
To get random numbers, you can use java.util.Random: http://developer.android.com/reference/java/util/Random.html
As for the animation itself, well, you can randomize the rate by randomizing the delay you use to reschedule your runnable in postDelayed().
By now, you're probably wondering about the "tricky" part: drawing the line itself. :-) I suggest starting with something very simple and adding complexity as you eyeball things. Let's say, for instance, you generate random start and finish points, so that your final stroke will be
c.drawLine(startX, startY, stopX, stopY, mPaint);
Presumably, you will want to draw a straight line, which means maintaining a constant slope. You could set up a floating-point "percentage" variable, initialized to zero, and each time through the runnable increment it by a random amount, so that at each pass it indicates the percentage of the line you wish to draw. So each call in your runnable would look like
c.drawLine(startX, startY, startX + percentage * deltaX, startY + percentage * deltaX * slope, mPaint);
(where deltaX = stopX - startX)
Obviously, you want to stop when you hit 100 percent.
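Putting those pieces together, a rough sketch of the runnable (mHandler and drawFrame() mirror the Cube sample; mPercentage and mRandom, a java.util.Random, are fields you'd add yourself):
private float mPercentage = 0f;

private final Runnable mDrawLine = new Runnable() {
    @Override
    public void run() {
        // Advance by a random amount each pass, capped at 100 percent
        mPercentage = Math.min(1f, mPercentage + 0.02f + mRandom.nextFloat() * 0.05f);
        drawFrame(); // locks the canvas and ends up calling drawLine()
        if (mPercentage < 1f) {
            // A random delay gives the randomized animation rate
            mHandler.postDelayed(this, 20 + mRandom.nextInt(60));
        }
    }
};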
This is really just a start. You can get as heavy-duty with your animation as you wish (easing, etc.), for instance using a library like this one: http://code.google.com/p/java-universal-tween-engine/
Another option, depending on the effect you're trying to achieve, would be to work with a game engine, like AndEngine. Again, pretty heavy duty. :-)
http://code.google.com/p/andenginelivewallpaperextensionexample/source/browse/
Good luck!
I am developing an application which uses OpenGL for rendering images. Now I want to determine touch events on the OpenGL sphere object I have drawn. I draw 4 objects on the screen; how should I find out which object has been touched? I have used the onTouchEvent() method, but it gives me only x and y coordinates, while my object is drawn in 3D.
Please help, since I am new to OpenGL.
Best Regards,
~Anup
At Google I/O there was a session on how OpenGL was used for Google Body on Android. The selecting of body parts was done by rendering each of them with a solid color into a hidden buffer; then, based on the color found at the touch's x,y position, the corresponding object could be found. For performance purposes, only a small cropped area of 20x20 pixels around the touch point was rendered that way.
Both approaches (1. hidden color buffer and 2. intersection test) have their own merits.
1. Hidden color buffer: pixel read-out is a very slow operation, and certainly overkill for a simple ray-sphere intersection test.
2. Ray-sphere intersection test: this is not that difficult.
Here is a simplified version of an implementation in Ogre3d.
std::pair<bool, m_real> Ray::intersects(const Sphere& sphere) const
{
    const Ray& ray = *this;
    const vector3& raydir = ray.direction();
    // Adjust ray origin relative to sphere center
    const vector3& rayorig = ray.origin() - sphere.center;
    m_real radius = sphere.radius;

    // Mmm, quadratics
    // Build coeffs which can be used with std quadratic solver
    // ie t = (-b +/- sqrt(b*b - 4ac)) / 2a
    m_real a = raydir % raydir;   // '%' is the dot product in this vector class
    m_real b = 2 * rayorig % raydir;
    m_real c = rayorig % rayorig - radius * radius;

    // Calc determinant
    m_real d = (b * b) - (4 * a * c);
    if (d < 0)
    {
        // No intersection
        return std::pair<bool, m_real>(false, 0);
    }
    else
    {
        // BTW, if d == 0 there is one intersection, if d > 0 there are 2.
        // But we only want the closest one, so that's ok: just use the
        // '-' version of the solver
        m_real t = (-b - sqrt(d)) / (2 * a);
        if (t < 0)
            t = (-b + sqrt(d)) / (2 * a);
        return std::pair<bool, m_real>(true, t);
    }
}
Probably, a ray that corresponds to the cursor position also needs to be calculated. Again you can refer to Ogre3d's source code: search for getCameraToViewportRay. Basically, you need the view and projection matrices to calculate a Ray (a 3D position and a 3D direction) from a 2D position.
In my project, the solution I chose was:
Unproject your 2D screen coordinates to a virtual 3D line going through your scene.
Detect possible intersections of that line and your scene objects.
This is quite a complex task.
I have only done this in Direct3D rather than OpenGL ES, but these are the steps:
Find your modelview and projection matrices. It seems that OpenGL ES has removed the ability to query the current matrices, but you can use the android.opengl.Matrix helper functions to create these matrices yourself, then set them with glLoadMatrixf().
Call gluUnProject() twice, once with winZ=0 and once with winZ=1. Pass the matrices you calculated earlier.
Each call will output a 3D position; this pair of positions defines a ray in OpenGL "world space".
Perform a ray-sphere intersection test on each of your spheres in order (closest to the camera first, otherwise you may select a sphere that is hidden behind another). If you detect an intersection, you've touched the sphere. A sketch of the unprojection step follows.
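A minimal sketch of the unprojection step using Android's GLU class (it assumes mModelView, mProjection, and viewport hold the matrices and {x, y, width, height} viewport you tracked yourself, per the first step):
// Screen y is flipped relative to GL window coordinates
float winY = viewport[3] - touchY;

float[] near = new float[4];
float[] far = new float[4];
GLU.gluUnProject(touchX, winY, 0f, mModelView, 0, mProjection, 0, viewport, 0, near, 0);
GLU.gluUnProject(touchX, winY, 1f, mModelView, 0, mProjection, 0, viewport, 0, far, 0);

// Divide by w: Android's gluUnProject returns homogeneous coordinates
for (int i = 0; i < 3; i++) {
    near[i] /= near[3];
    far[i] /= far[3];
}
// Ray origin = near; ray direction = far - near (normalize it before the sphere test)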
To check whether the touch point is inside a circle or not:
public boolean checkInsideCircle(float x, float y, float centerX, float centerY, float radius) {
    // Compare squared distance to squared radius (avoids a square root)
    float dx = x - centerX;
    float dy = y - centerY;
    return dx * dx + dy * dy < radius * radius;
}
where
1) centerX, centerY is the center point of the circle,
2) radius is the radius of the circle,
3) x, y is the point of the touch.
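For example, from a touch handler (mCenterX, mCenterY, and mRadius stand in for the circle's screen-space center and radius, which you'd compute from your projection):
@Override
public boolean onTouchEvent(MotionEvent event) {
    if (event.getAction() == MotionEvent.ACTION_DOWN) {
        if (checkInsideCircle(event.getX(), event.getY(), mCenterX, mCenterY, mRadius)) {
            // The touch landed inside the circle's screen bounds
        }
    }
    return true;
}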