Spine 2D coordinate system libGDX - Android

I don't know what I'm doing wrong... I think I'm having a brain freeze. I am really struggling with converting my Spine object's pixel coordinates to world coordinates. I recently converted all my code to work with the Ashley ECS, and now I can't get my Spine object to display in the correct position.
I have a system that handles the rendering and positioning of my Spine object, but it keeps drawing in the wrong place.
I'm hoping someone can point me in the right direction!
I have included my code for the Spine rendering system below... hope you can help!
I want to place the Spine object at the same position as my Box2D body, which uses world coordinates, while Spine uses pixel coordinates. I have also included an image to show you what is happening. (The grey square near the middle right of the screen is where I want my Spine object to be!)
Amarino
in game image
public class SpineRenderSystem extends IteratingSystem {
    private static final String TAG = com.chaingang.freshstart.systems.SpineRenderSystem.class.getName();
    private PolygonSpriteBatch pBatch;
    SkeletonMeshRenderer skeletonMeshRenderer;
    private boolean process = true;

    BodyComponent bodyComp;
    Spine2DComponent spineComp;

    public SpineRenderSystem(PolygonSpriteBatch pBatch) {
        super(Family.all(RenderableComponent.class, Spine2DComponent.class, PositionComponent.class).get());
        this.pBatch = pBatch;
        skeletonMeshRenderer = new SkeletonMeshRenderer();
        skeletonMeshRenderer.setPremultipliedAlpha(true);
    }

    @Override
    protected void processEntity(Entity entity, float deltaTime) {
        bodyComp = Mappers.body.get(entity);
        spineComp = Mappers.spine2D.get(entity);

        float offsetX = 100.00f / Gdx.graphics.getWidth();  // 100 equals world width
        float offsetY = 50.00f / Gdx.graphics.getHeight();  // 50 equals world height

        pBatch.begin();
        spineComp.skeleton.setX(bodyComp.body.getPosition().x / offsetX);
        spineComp.skeleton.setY(bodyComp.body.getPosition().y / offsetY);
        skeletonMeshRenderer.draw(pBatch, spineComp.skeleton);
        //spineComp.get(entity).skeleton.setFlipX(player.dir == -1);
        spineComp.animationState.apply(spineComp.skeleton);
        spineComp.skeleton.updateWorldTransform();
        pBatch.end();
    }
}

What I do for my Spine renders is look at the bounding-box size, in pixels, in Spine. This is usually on the order of hundreds of pixels. But if you are working with Box2D scales, it is recommended that you treat 1 unit as 1 meter.
With this in mind, I will scale a human Spine animation with a hip y-coordinate of 200 pixels by dividing by 200, or thereabouts.
Once you have this ratio, you can apply it when you build your Spine Skeleton (sorry, I do all my libGDX stuff in Kotlin now):
val atlasLoader = AtlasAttachmentLoader(atlas)
val skeletonJson = SkeletonJson(atlasLoader)
skeletonJson.scale = 1/200f
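If you are working in Java like the code in the question, the equivalent is something like this (a sketch assuming the same atlas variable):
// Apply the pixel-to-meter ratio while loading the skeleton data.
AtlasAttachmentLoader atlasLoader = new AtlasAttachmentLoader(atlas);
SkeletonJson skeletonJson = new SkeletonJson(atlasLoader);
skeletonJson.setScale(1 / 200f); // hip is ~200 px up in the Spine project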
Then you might also want to handle an offset when rendering your Spine object, as I see you are trying to do, because your root bone is possibly at the center of your Spine object (a hip, for example). However, you are doing a division, which I guess is exploratory; an offset should be an addition or subtraction. Here is how I do it, using the Spine pixel coordinates (again, sorry for the Kotlin, but I like it):
// In some object or global state we have this stuff
var skeleton: Skeleton
var skeletonRenderer = SkeletonRenderer<PolygonSpriteBatch>()

// Then in the rendering code
val offset = Vector2(0f, -200f)
val position = physicsRoot.position().add(offset)
skeleton.setPosition(position.x, position.y)
skeleton.updateWorldTransform()
skeletonRenderer.draw(batch, skeleton)
That should get your spine stuff working as you expect.

Have you heard of the method camera.project(worldCoordinates)? It might do what you are looking for: it takes world coordinates and turns them into screen coordinates. For the opposite, you can use camera.unproject(screenCoordinates).
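For example, a minimal sketch of both directions with a libGDX camera (the camera size and positions are placeholder assumptions, not values from the question):
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.math.Vector3;

OrthographicCamera camera = new OrthographicCamera(100, 50); // world is 100 x 50 units
camera.update();

// World -> screen: project() converts the vector, in place, to screen coordinates.
Vector3 worldPos = new Vector3(70f, 25f, 0f);
camera.project(worldPos);

// Screen -> world: unproject() goes the other way (note that screen y runs top-down).
Vector3 screenPos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0f);
camera.unproject(screenPos);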

Related

Combine image with video stream on Android

I am investigating augmented reality on Android.
I am using ARCore and Sceneform within an Android application.
I have tried out the sample projects and now would like to develop my own application.
One effect I would like to achieve is to combine/overlay an image (say a .jpeg or .png) with a live feed from the device's onboard camera.
The image will have a transparent background that allows the user to see the live feed and the image simultaneously.
However, I do not want the overlaid image to be a fixed/static watermark: when the user zooms in, out, or pans, the overlaid image must also zoom in, out, and pan, etc.
I do not wish the overlaid image to become 3D or anything of that nature.
Is this effect possible with Sceneform? Or will I need to use other third-party libraries and/or tools to achieve the desired result?
UPDATE
The user is drawing on a blank sheet of white paper. The sheet is oriented so that the user can draw comfortably (either left- or right-handed). The user is free to move the sheet of paper while they complete their image.
An Android device is held above the sheet of paper, filming the user drawing their selected image.
The live camera feed is being cast to a large TV or monitor screen.
To aid the user, they have selected a static image to "trace" or "copy".
This image is chosen on the Android device and is combined with the live camera stream within the Android application.
The user can zoom in and out on their drawing, and the combined live stream and selected static image will zoom in and out as well; this enables the user to make an accurate copy of the selected static image by drawing freehand.
When the user looks directly at the sheet of paper, they see only their drawing.
When the user views the cast live stream of themselves drawing on the TV or monitor, they see their drawing with the chosen static image superimposed. The user can control the transparency of the static image to assist them in making an accurate copy of it.
I think what you are looking for is to use AR to display an image so that it stays in place, for example over a sheet of paper, in order to act as a guide for drawing a copy of the image on the paper.
There are two parts to this: first, locating the sheet of paper; second, placing the image over the paper and keeping it there as the phone moves around.
Locating the sheet of paper can be done just by detecting the plane containing the paper (having some contrast, pattern, or texture versus a plain white sheet of paper will help), then tapping where the center of the page should be. This is done in the HelloSceneform sample.
If you want a more accurate bounding of the paper, you could tap the four corners of the paper and create anchors there. To do this, register a plane-tap listener in onCreate():
arFragment.setOnTapArPlaneListener(this::onPlaneTapped);
Then, in onPlaneTapped, create the four AnchorNodes. Once you have four, initialize the drawing to be displayed:
private void onPlaneTapped(HitResult hitResult, Plane plane, MotionEvent event) {
    if (cornerAnchors.size() != 4) {
        AnchorNode corner = createCornerNode(hitResult.createAnchor());
        arFragment.getArSceneView().getScene().addChild(corner);
        cornerAnchors.add(corner);
    }
    if (cornerAnchors.size() == 4 && drawingNode == null) {
        initializeDrawing();
    }
}
To initialize the drawing, create a Sceneform Texture from the bitmap or drawable. This can come from a resource or a file URL. You want the texture to show the whole image and to scale as the model holding it is resized:
private void initializeDrawing() {
    Texture.Sampler sampler = Texture.Sampler.builder()
            .setWrapMode(Texture.Sampler.WrapMode.CLAMP_TO_EDGE)
            .setMagFilter(Texture.Sampler.MagFilter.NEAREST)
            .setMinFilter(Texture.Sampler.MinFilter.LINEAR_MIPMAP_LINEAR)
            .build();
    Texture.builder()
            .setSource(this, R.drawable.logo_google_developers)
            .setSampler(sampler)
            .build()
            .thenAccept(texture -> {
                MaterialFactory.makeTransparentWithTexture(this, texture)
                        .thenAccept(this::buildDrawingRenderable);
            });
}
The model to hold the texture is just a flat quad sized to the smallest dimension between the corners. This is the same logic as laying out a quad using OpenGL.
private void buildDrawingRenderable(Material material) {
    Integer[] indices = {
            0, 1, 3, 3, 1, 2
    };

    // Calculate the bounds of the corners.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    float width = Math.abs(max_x - min_x);
    float height = Math.abs(max_z - min_z);
    float extent = Math.min(width / 2, height / 2);

    Vertex[] vertices = {
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 1)) // top left
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 1)) // top right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(1, 0)) // bottom right
                    .build(),
            Vertex.builder()
                    .setPosition(new Vector3(-extent, 0, -extent))
                    .setUvCoordinate(new Vertex.UvCoordinate(0, 0)) // bottom left
                    .build()
    };

    RenderableDefinition.Submesh[] submeshes = {
            RenderableDefinition.Submesh.builder()
                    .setMaterial(material)
                    .setTriangleIndices(Arrays.asList(indices))
                    .build()
    };
    RenderableDefinition def = RenderableDefinition.builder()
            .setSubmeshes(Arrays.asList(submeshes))
            .setVertices(Arrays.asList(vertices))
            .build();
    ModelRenderable.builder().setSource(def)
            .setRegistryId("drawing").build()
            .thenAccept(this::positionDrawing);
}
The last part is to position the quad in the center of the corners and create a TransformableNode so the image can be nudged into position, rotated, or scaled to be the perfect size:
private void positionDrawing(ModelRenderable drawingRenderable) {
    // Calculate the center of the corners.
    float min_x = Float.MAX_VALUE;
    float max_x = Float.MIN_VALUE;
    float min_z = Float.MAX_VALUE;
    float max_z = Float.MIN_VALUE;
    for (AnchorNode node : cornerAnchors) {
        float x = node.getWorldPosition().x;
        float z = node.getWorldPosition().z;
        min_x = Float.min(min_x, x);
        max_x = Float.max(max_x, x);
        min_z = Float.min(min_z, z);
        max_z = Float.max(max_z, z);
    }
    Vector3 center = new Vector3((min_x + max_x) / 2f,
            cornerAnchors.get(0).getWorldPosition().y, (min_z + max_z) / 2f);

    Anchor centerAnchor = null;
    Vector3 screenPt = arFragment.getArSceneView().getScene().getCamera().worldToScreenPoint(center);
    List<HitResult> hits = arFragment.getArSceneView().getArFrame().hitTest(screenPt.x, screenPt.y);
    for (HitResult hit : hits) {
        if (hit.getTrackable() instanceof Plane) {
            centerAnchor = hit.createAnchor();
            break;
        }
    }

    AnchorNode centerNode = new AnchorNode(centerAnchor);
    centerNode.setParent(arFragment.getArSceneView().getScene());

    drawingNode = new TransformableNode(arFragment.getTransformationSystem());
    drawingNode.setParent(centerNode);
    drawingNode.setRenderable(drawingRenderable);
}
The intended AR reference image can be scaled, using AR objects as reference points, to size the template for the user.
More complex AR reference images will not work as easily, since the AR image is overlaid on top of the user's tracing and will obstruct the tip of their pen/pencil.
My solution is to chroma-key the white paper. This replaces the white paper with the chosen image or live feed. Moving the paper around as you specified would be an issue unless you have a means of tracking the paper's position.
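As a rough illustration of the idea (a naive per-pixel sketch on an Android Bitmap; src standing in for a camera frame and the near-white threshold of 230 are assumptions, not part of the original answer):
import android.graphics.Bitmap;
import android.graphics.Color;

// Make near-white pixels transparent so the background layer
// (the chosen image or live feed) shows through where the paper was.
Bitmap keyed = src.copy(Bitmap.Config.ARGB_8888, true);
for (int y = 0; y < keyed.getHeight(); y++) {
    for (int x = 0; x < keyed.getWidth(); x++) {
        int c = keyed.getPixel(x, y);
        if (Color.red(c) > 230 && Color.green(c) > 230 && Color.blue(c) > 230) {
            keyed.setPixel(x, y, Color.TRANSPARENT);
        }
    }
}
In practice you would do this in a fragment shader rather than per pixel on the CPU.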
As you can see in this example, AR objects are in front, while the chroma-key is the background. The tracing surface (paper) would be in the center.
Reference to this example is on the link below.
RJ
YouTube - AR tracked environment

How to find out if game sprite is moving smoothly?

I'm making a simple jumping game for Android using libGDX and Box2D, and I cannot figure out how to make sprites move really smoothly. I have checked several articles on fixing the timestep and synchronizing the renderer with the physics simulation, but none of the suggested approaches really helped (http://gafferongames.com/game-physics/fix-your-timestep/).
Finally, I decided to run the simplest possible test, setting the Box2D world step equal to the frame delta (which, given stable FPS, should give the best result), but the movement is still not totally smooth. I have tested on PC and on an Android device, with a stable 60-61 FPS. Here is pseudocode:
In render:
world.step(Gdx.graphics.getDeltaTime(), 6, 2);
stage.act();
stage.draw();
The stage basically has just one actor, with act and draw overridden:
@Override
public void draw(Batch batch, float arg1) {
    float x = this.getX() - width / 2;
    float y = this.getY() - height / 2;
    batch.draw(sprite, x, y, width, height);
}

@Override
public void act(float delta) {
    ...
    // get body position
    position = body.getPosition();
    this.setPosition(position.x, position.y);
}
The actor has a Box2D body attached to it; there is no gravity, and the body's velocity is set to a constant:
BodyDef bodyDef = new BodyDef();
bodyDef.type = BodyType.DynamicBody;
bodyDef.position.set(world_position);
bodyDef.linearDamping = 0f;
bodyDef.angularDamping = 0f;
bodyDef.fixedRotation = true;
bodyDef.gravityScale = 0f;
// ... fixture added to the body
body.setLinearVelocity(0, -2f);
The camera is not moving, and the case seems dead simple, yet the sprite does not move perfectly. (Though it still looks smoother than when using a time accumulator and interpolation.)
Is it possible to achieve absolutely smooth movement at all? Is there some mistake in my approach?
I have checked some similar games on the same Android device; it seems that objects move absolutely smoothly, but maybe it just seems so because too many things happen on the screen and I don't have time to notice.
Any advice would be appreciated.
After further testing and research I figured out the problem: it was related not to FPS but to pixel rounding. Box2D bodies have float coordinates; after converting them to rounded pixel values, the animation became much smoother.
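For instance, a minimal sketch of the idea inside the actor's act() method (PPM, the pixels-per-meter scale, is an assumed constant, not something from the code above):
static final float PPM = 32f; // assumed pixels-per-meter scale

// Snap the Box2D position (meters) to the pixel grid:
// convert to pixels, round, then convert back to world units.
Vector2 p = body.getPosition();
float snappedX = Math.round(p.x * PPM) / PPM;
float snappedY = Math.round(p.y * PPM) / PPM;
this.setPosition(snappedX, snappedY);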
How about using CCPhysicsSprite instead of changing the sprite's position over time? You can use a batch too. Just:
sprite = [CCPhysicsSprite spriteWithTexture:batch.texture];
[batch addChild:sprite];
CCPhysicsSprite class
Example:
#import "CCPhysicsSprite.h"
CCPhysicsSprite *sprite = [CCPhysicsSprite spriteWithFile:@"sprite.png"];
[self addChild:sprite];
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.position.Set(300/PTM_RATIO, 200/PTM_RATIO);
body = world->CreateBody(&bodyDef);
b2CircleShape circleShape;
circleShape.m_radius = 0.3;
b2FixtureDef fixtureDef;
fixtureDef.shape = &circleShape;
fixtureDef.density = 1;
fixtureDef.friction = 0.3f;
body->CreateFixture(&fixtureDef);
[sprite setPTMRatio:PTM_RATIO];
[sprite setB2Body:body];
[sprite setPosition: ccp(300, 200)];

Augmented Reality + Bullet Physics - trouble with rayTest/Ray picking

I am trying to pick objects in the Bullet physics world, but all I seem to be able to pick is the floor/ground plane! I am using the Vuforia SDK and have altered the ImageTargets demo code. I use the following code to project my touched screen points into the 3D world:
void projectTouchPointsForBullet(QCAR::Vec2F point, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd, QCAR::Matrix44F &modelViewMatrix)
{
    QCAR::Vec4F normalisedVector((2 * point.data[0] / screenWidth - 1),
                                 (2 * (screenHeight - point.data[1]) / screenHeight - 1),
                                 -1,
                                 1);

    QCAR::Matrix44F modelViewProjection;
    SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
    QCAR::Matrix44F inversedMatrix = SampleMath::Matrix44FInverse(modelViewProjection);

    QCAR::Vec4F near_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
    near_point.data[3] = 1.0 / near_point.data[3];
    near_point = QCAR::Vec4F(near_point.data[0] * near_point.data[3], near_point.data[1] * near_point.data[3], near_point.data[2] * near_point.data[3], 1);

    normalisedVector.data[2] = 1.0; // z coordinate now 1
    QCAR::Vec4F far_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
    far_point.data[3] = 1.0 / far_point.data[3];
    far_point = QCAR::Vec4F(far_point.data[0] * far_point.data[3], far_point.data[1] * far_point.data[3], far_point.data[2] * far_point.data[3], 1);

    lineStart = QCAR::Vec3F(near_point.data[0], near_point.data[1], near_point.data[2]);
    lineEnd = QCAR::Vec3F(far_point.data[0], far_point.data[1], far_point.data[2]);
}
When I try a ray test in my physics world, I only seem to hit the ground plane! Here is the code for the ray-test call:
QCAR::Vec3F intersection, lineStart, lineEnd;
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, inverseProjMatrix, modelViewMatrix);

btVector3 btRayFrom = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
btVector3 btRayTo = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);

btCollisionWorld::ClosestRayResultCallback rayCallback(btRayFrom, btRayTo);
dynamicsWorld->rayTest(btRayFrom, btRayTo, rayCallback);
if (rayCallback.hasHit())
{
    // My bodies have char* messages attached to them to determine what has been touched.
    char* pPhysicsData = reinterpret_cast<char*>(rayCallback.m_collisionObject->getUserPointer());
    btRigidBody* pBody = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (pBody && pPhysicsData)
    {
        LOG("handleTouches:: notifyOnTouchEvent from physics world!!!");
        notifyOnTouchEvent(env, obj, 0, 0, pPhysicsData);
    }
}
I know I am predominantly looking top-down, so I am bound to hit the ground plane; at least I know my touch is being correctly projected into the world. But I have objects lying on the ground plane and I can't seem to touch them! Any pointers would be greatly appreciated :)
I found out why I wasn't able to touch the objects: I am scaling the objects up when they are drawn, so I had to scale the view matrix by the same value before projecting my touch point into the 3D world (EDIT: I also had the btRayFrom and btRayTo input coordinates reversed; that is now fixed):
// top of code
const float kObjectScale = 100.0f;
....
...
// inside touch handler method
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, inverseProjMatrix, modelViewMatrix);

btVector3 btRayFrom = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);
btVector3 btRayTo = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
My touches are projected correctly now :)

MouseJoint not properly working

I have been trying to use a MouseJoint to move a piece to wherever the user touches. But the piece, being affected by the joint, behaves strangely and never reaches the point. This is the code (x and y are already converted to 'physical' units):
MouseJointDef mj_def;
MouseJoint mj = null;
Body mj_gbody;

public void move(float x, float y)
{
    if (mj == null)
    {
        BodyDef mgbd = new BodyDef();
        mj_gbody = wrld.createBody(mgbd);

        mj_def = new MouseJointDef();
        mj_def.bodyA = mj_gbody;
        mj_def.bodyB = body;
        mj_def.collideConnected = true;
        mj_def.maxForce = 20.0f * body.getMass();
        //mj_def.target.set(x, y);
        mj = (MouseJoint) wrld.createJoint(mj_def);
        body.setAwake(true);
    }
    mj.setTarget(new Vector2(x, y));
}
I was looking for some way to set the anchor point on bodyB, since the 'strange behaviour' I mentioned makes the body gravitate around the established point (an orbit twice the width of the object), as if the anchor point were outside of the body (which is hexagon-shaped, by the way). But I don't see any way of doing so in libGDX.
Does anybody know what I am doing wrong? Thank you in advance!
Well, MouseJoint was working properly; I had just misunderstood how it works.
As can be clearly seen in the Box2D testbed, MouseJoint is meant for dragging an object after selecting it. The anchor is therefore assigned by the first target.set.
Since I wanted to move the center of the object to the place the user touched, calling mj_def.target.set(body.getPosition().x + 2.0f, body.getPosition().y + 1.0f); (the object is 4.0f by 2.0f) in the initialization solved the problem. Also, it may not be the best joint for my purpose (moving a specific object to a given place on the screen).
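Put as code, a minimal sketch of that fix against the question's move() method (the +2.0f/+1.0f offsets come from the half-extents mentioned above):
// The first target defines the anchor point on bodyB (here, its center),
// so later setTarget() calls pull the body by that point instead of orbiting it.
mj_def.target.set(body.getPosition().x + 2.0f, body.getPosition().y + 1.0f);
mj = (MouseJoint) wrld.createJoint(mj_def);
body.setAwake(true);

// Subsequent calls move that anchor toward the touch point.
mj.setTarget(new Vector2(x, y));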

How is Animation implemented in Android

I have a small question. If I want to make a man run in Android, one way of doing this is to get images of the man in different positions and display them one after another. But often this does not work very well, and it looks as if two different images are being drawn. Is there any other way to implement custom animation? (Like creating a custom image and telling one of the parts of this image to move.)
The way I do it is to use sprite sheets, for example (not my graphics!):
You can then use a class like this to handle your animation:
public class AnimSpriteClass {
    private Bitmap mAnimation;
    private int mXPos;
    private int mYPos;
    private Rect mSRectangle;
    private int mFPS;
    private int mNoOfFrames;
    private int mCurrentFrame;
    private long mFrameTimer;
    private int mSpriteHeight;
    private int mSpriteWidth;

    public AnimSpriteClass() {
        mSRectangle = new Rect(0, 0, 0, 0);
        mFrameTimer = 0;
        mCurrentFrame = 0;
        mXPos = 80;
        mYPos = 200;
    }

    public void Initialise(Bitmap theBitmap, int Height, int Width, int theFPS, int theFrameCount) {
        mAnimation = theBitmap;
        mSpriteHeight = Height;
        mSpriteWidth = Width;
        mSRectangle.top = 0;
        mSRectangle.bottom = mSpriteHeight;
        mSRectangle.left = 0;
        mSRectangle.right = mSpriteWidth;
        mFPS = 1000 / theFPS;
        mNoOfFrames = theFrameCount;
    }

    public void Update(long GameTime) {
        if (GameTime > mFrameTimer + mFPS) {
            mFrameTimer = GameTime;
            mCurrentFrame += 1;
            if (mCurrentFrame >= mNoOfFrames) {
                mCurrentFrame = 0;
            }
        }
        mSRectangle.left = mCurrentFrame * mSpriteWidth;
        mSRectangle.right = mSRectangle.left + mSpriteWidth;
    }

    public void draw(Canvas canvas) {
        Rect dest = new Rect(getXPos(), getYPos(), getXPos() + mSpriteWidth,
                getYPos() + mSpriteHeight);
        canvas.drawBitmap(mAnimation, mSRectangle, dest, null);
    }
}
mAnimation - This will hold the actual bitmap containing the animation.
mXPos/mYPos - These hold the X and Y screen coordinates for where we want the sprite to be on the screen. They refer to the top left-hand corner of the image.
mSRectangle - This is the source rectangle variable; it controls which part of the image we are rendering for each frame.
mFPS - This is the number of frames we wish to show per second. 15-20 FPS is enough to fool the human eye into thinking that a still image is moving. However, on a mobile platform it's unlikely you will have the memory for that; 3-10 FPS is fine for most needs.
mNoOfFrames - This is simply the number of frames in the sprite sheet we are animating.
mCurrentFrame - We need to keep track of the current frame we are rendering so we can move to the next one in order.
mFrameTimer - This controls how long to wait between frames.
mSpriteHeight/mSpriteWidth - These contain the height and width of an individual frame, not the entire bitmap, and are used to calculate the size of the source rectangle.
Now, in order to use this class, you have to add a few things to your graphics thread. First declare a new variable of your class; it can then be initialised in the constructor, as below.
AnimSpriteClass Animation = new AnimSpriteClass();
Animation.Initialise(BitmapFactory.decodeResource(res, R.drawable.stick_man), 62, 39, 20, 20);
To pass in the bitmap, you first have to use the BitmapFactory class to decode the resource. It decodes a bitmap from your resources folder and allows it to be passed as a variable. The rest of the values depend on your bitmap image.
In order to time the frames correctly, you first need to add a game timer to the game code. Do this by first adding a variable to store the time, as shown below.
private long mTimer;
We now need this timer to be updated with the correct time every frame, so add a line to the run function:
public void run() {
    while (mRun) {
        Canvas c = null;
        mTimer = System.currentTimeMillis(); // This line updates the timer
        try {
            c = mSurfaceHolder.lockCanvas(null);
            synchronized (mSurfaceHolder) {
                Animation.Update(mTimer);
                doDraw(c);
            }
            ...
Then you just have to add Animation.draw(canvas); to your draw function, and the animation will draw the current frame in the right place.
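For completeness, a minimal sketch of what that hook might look like (doDraw is the drawing callback assumed by the run() loop above, not code from the original answer):
private void doDraw(Canvas canvas) {
    canvas.drawColor(Color.BLACK); // clear the previous frame
    Animation.draw(canvas);        // render the current animation frame
}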
When you describe "one way of doing this is to get images of the man in different positions and display them one after another", this is indeed not only a programming technique for rendering animation but the general principle applied in every form of animation: it applies to making movies, making comics, computer gaming, etc.
Our eyes perceive around 24 images per second; above roughly 12 frames per second, your brain gets the feeling of real, fluid movement.
So yes, this is the way. If you get the feeling the movement is not fluid, you have to increase the frame rate. But it works.
Moving only one part of an image is not appropriate for a small sprite representing a running man. Nevertheless, keep this idea in mind for later: when you are more at ease with animation programming, you will see that it applies to bigger areas that are not entirely redrawn at every frame in order to decrease the number of computations needed to "make a frame". Some parts of the whole screen are not recomputed every time; this technique is called double buffering, and you should soon be introduced to it when making games.
But for now, you should start by making your man run, quickly replacing one picture with another. If the movement is not fluid, either increase the frame rate (optimize your program) or choose images that are closer to each other.
Regards,
Stéphane
