I am trying to pick objects in the Bullet physics world, but all I seem to be able to pick is the floor/ground plane. I am using the Vuforia SDK and have altered the ImageTargets demo code. I use the following code to project my touched screen points into the 3D world:
void projectTouchPointsForBullet(QCAR::Vec2F point, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd, QCAR::Matrix44F &modelViewMatrix)
{
    // Convert the touch point to normalised device coordinates, starting on the near plane (z = -1).
    QCAR::Vec4F normalisedVector((2 * point.data[0] / screenWidth - 1),
                                 (2 * (screenHeight - point.data[1]) / screenHeight - 1),
                                 -1,
                                 1);

    // Invert the combined model-view-projection matrix to unproject from NDC back into world space.
    QCAR::Matrix44F modelViewProjection;
    SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
    QCAR::Matrix44F inversedMatrix = SampleMath::Matrix44FInverse(modelViewProjection);

    // Unproject the near-plane point and apply the perspective divide.
    QCAR::Vec4F near_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
    near_point.data[3] = 1.0 / near_point.data[3];
    near_point = QCAR::Vec4F(near_point.data[0] * near_point.data[3], near_point.data[1] * near_point.data[3], near_point.data[2] * near_point.data[3], 1);

    // Repeat for the far plane (z = 1).
    normalisedVector.data[2] = 1.0;
    QCAR::Vec4F far_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
    far_point.data[3] = 1.0 / far_point.data[3];
    far_point = QCAR::Vec4F(far_point.data[0] * far_point.data[3], far_point.data[1] * far_point.data[3], far_point.data[2] * far_point.data[3], 1);

    lineStart = QCAR::Vec3F(near_point.data[0], near_point.data[1], near_point.data[2]);
    lineEnd = QCAR::Vec3F(far_point.data[0], far_point.data[1], far_point.data[2]);
}
When I do a ray test in my physics world, I only seem to hit the ground plane. Here is the code for the ray test call:
QCAR::Vec3F lineStart, lineEnd;
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, modelViewMatrix);
btVector3 btRayFrom = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
btVector3 btRayTo = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);
btCollisionWorld::ClosestRayResultCallback rayCallback(btRayFrom, btRayTo);
dynamicsWorld->rayTest(btRayFrom, btRayTo, rayCallback);
if (rayCallback.hasHit())
{
    // My bodies have char* messages attached to them to determine what has been touched.
    char* pPhysicsData = reinterpret_cast<char*>(rayCallback.m_collisionObject->getUserPointer());
    btRigidBody* pBody = btRigidBody::upcast(rayCallback.m_collisionObject);
    if (pBody && pPhysicsData)
    {
        LOG("handleTouches:: notifyOnTouchEvent from physics world!!!");
        notifyOnTouchEvent(env, obj, 0, 0, pPhysicsData);
    }
}
Since I am looking predominantly top-down, I am bound to hit the ground plane, so at least I know my touch is being projected into the world correctly. But I have objects lying on the ground plane and I can't seem to touch them. Any pointers would be greatly appreciated. :)
I found out why I wasn't able to touch the objects: I scale the objects up when they are drawn, so I had to scale the model-view matrix by the same value before projecting my touch point into the 3D world. (EDIT: I also had the btRayFrom and btRayTo input coordinates reversed; that is now fixed.)
// top of file
const float kObjectScale = 100.0f;
...

// inside the touch handler method
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, modelViewMatrix);
btVector3 btRayFrom = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);
btVector3 btRayTo = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
My touches are projected correctly now :)
Related
I have been searching for how to control a revolute joint in libgdx with Box2D based on user touch, and the revolute joint stops after it reaches its upper angle. Is there any way to control the revolute joint?
jd = new RevoluteJointDef();
jd.initialize(bodyPivot, boxBody, anchor);
jd.lowerAngle = 0.75f * (float)3.14; // 0.75 * pi, about 135 degrees
jd.upperAngle = 0.75f * (float)3.14; // same as the lower angle
jd.collideConnected = false;
jd.enableLimit = true;
jd.maxMotorTorque = 1000.0f;
jd.enableMotor = false;
jd.motorSpeed = 0f * (float)3.14;
rj = (RevoluteJoint) world.createJoint(jd);
I tried using rj.enableMotor(true), but it didn't work.
When you create a joint, the current relative angle between the bodies is taken to be zero for the purpose of specifying the limits.
If the joint always keeps rotating in the same direction, the limits don't actually need to change, because the starting point is zero as far as the limits are concerned:
jointDef.upperAngle = MathUtils.PI;
jointDef.lowerAngle = 0;//the position when joint was created
But if the joint is supposed to rotate back to the original position before coming down, it would be something like:
jointDef.upperAngle = atTop ? 0 : MathUtils.PI;
jointDef.lowerAngle = atTop ? -MathUtils.PI : 0;
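This is also why rj.enableMotor(true) alone does nothing: in the joint definition above, the motor speed was left at 0. A minimal sketch of driving the joint with its motor, using the standard libgdx RevoluteJoint setters (the speed and torque values are illustrative):
// Turn the motor on and give it a speed; the limit will stop the swing.
rj.enableMotor(true);
rj.setMaxMotorTorque(1000.0f);
rj.setMotorSpeed(2.0f); // radians per second, toward the upper limit

// Later (e.g. each frame or on a touch event), reverse the direction at the limits:
if (rj.getJointAngle() >= rj.getUpperLimit()) {
    rj.setMotorSpeed(-2.0f);
} else if (rj.getJointAngle() <= rj.getLowerLimit()) {
    rj.setMotorSpeed(2.0f);
}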
I found the answer here.
I'm making a simple jumping game for Android using libgdx and Box2D, and I cannot figure out how to make sprites move really smoothly. I have checked several articles on fixing the timestep and synchronizing the renderer with the physics simulation, but none of the suggested approaches really helped (http://gafferongames.com/game-physics/fix-your-timestep/).
Finally I decided to run the simplest possible test, setting the Box2D world step equal to the frame time (which, given a stable FPS, should give the best result), but the movement is still not totally smooth. I have tested on a PC and on an Android device, both with a stable 60-61 FPS. Here is pseudocode:
In render:
world.step(Gdx.graphics.getDeltaTime(), 6, 2);
stage.act();
stage.draw();
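For contrast, the fixed-timestep pattern from the linked article looks roughly like this (a sketch; the step size, the frame-time clamp, and the iteration counts are assumptions, and world is the Box2D world from above):
private static final float STEP_TIME = 1 / 60f;
private float accumulator = 0;

private void stepWorld(float delta) {
    // Clamp huge frame times so the simulation cannot spiral out of control.
    accumulator += Math.min(delta, 0.25f);
    // Advance physics in fixed increments, carrying the remainder over.
    while (accumulator >= STEP_TIME) {
        world.step(STEP_TIME, 6, 2);
        accumulator -= STEP_TIME;
    }
}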
The stage basically has just one actor, with act and draw overridden:
@Override
public void draw(Batch batch, float arg1) {
    float x = this.getX() - width / 2;
    float y = this.getY() - height / 2;
    batch.draw(sprite, x, y, width, height);
}

@Override
public void act(float delta) {
    ...
    // get body position
    position = body.getPosition();
    this.setPosition(position.x, position.y);
}
The actor has a Box2D body attached to it; there is no gravity, and the body's velocity is set constant:
BodyDef bodyDef = new BodyDef();
bodyDef.type = BodyType.DynamicBody;
bodyDef.position.set(world_position);
bodyDef.linearDamping = 0f;
bodyDef.angularDamping = 0f;
bodyDef.fixedRotation = true;
bodyDef.gravityScale = 0f;
... fixture added to the body
body.setLinearVelocity(0, -2f);
The camera is not moving; the case seems dead simple, and yet the sprite does not move perfectly. (Though it still looks smoother than when using a time accumulator and interpolation.)
Is it possible to achieve absolutely smooth movement at all? Is there some mistake in my approach?
I have checked some similar games on the same Android device; objects seem to move absolutely smoothly, but maybe it only seems so because too many things happen on the screen for me to notice.
Any advice would be appreciated.
After further testing and research I figured out the problem: it was related not to FPS but to pixel rounding. Box2D bodies have float coordinates; after converting them to rounded pixel values, the animation became much smoother.
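For illustration, the rounding can go where the body position is copied to the actor. A minimal sketch, assuming the stage works in pixel units and a hypothetical PIXELS_PER_METER scale factor:
@Override
public void act(float delta) {
    super.act(delta);
    Vector2 position = body.getPosition();
    // Convert metres to pixels, then snap to whole pixels so the sprite
    // is never drawn at a sub-pixel offset.
    this.setPosition(Math.round(position.x * PIXELS_PER_METER),
                     Math.round(position.y * PIXELS_PER_METER));
}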
How about using CCPhysicsSprite instead of changing the sprite's position over time? You can use a batch, too. Just:
sprite = [CCPhysicsSprite spriteWithTexture:batch.texture];
[batch addChild:sprite];
CCPhysicsSprite class
Example:
#import "CCPhysicsSprite.h"
CCPhysicsSprite *sprite = [CCPhysicsSprite spriteWithFile:@"sprite.png"];
[self addChild:sprite];
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.position.Set(300/PTM_RATIO, 200/PTM_RATIO);
body = world->CreateBody(&bodyDef);
b2CircleShape circleShape;
circleShape.m_radius = 0.3;
b2FixtureDef fixtureDef;
fixtureDef.shape = &circleShape;
fixtureDef.density = 1;
fixtureDef.friction = 0.3f;
body->CreateFixture(&fixtureDef);
[sprite setPTMRatio:PTM_RATIO];
[sprite setB2Body:body];
[sprite setPosition: ccp(300, 200)];
I am using LibGDX and Box2d to build my first Android game. Yay!
But I am having some serious problems with Box2d.
I have a simple stage with a rectangular Box2D body at the bottom representing the ground, and two more rectangular Box2D bodies at the left and right representing the walls.
A Screenshot
Another Screenshot
I also have a box. This box can be touched, and it moves using applyLinearImpulse, as if it were kicked. It is a DynamicBody.
What happens is that in the draw() code of the Box object, the Box2D body gives me a wrong value for the X axis. The value for the Y axis is fine.
Those blue "dots" on the screenshots are small textures that I printed on the box edges that body.getPosition() give me. Note how in one screenshot the dots are aligned with the actual DebugRenderer rectangle and in the other they are not.
This is what is happening: when the box moves, the alignment is lost in the movement.
The collisions between the box, the ground, and the walls occur precisely within the area that the DebugRenderer renders. But body.getPosition() and fixture.testPoint() consider the area inside those blue dots.
So, somehow, Box2d is "maintaining" these two areas for the same body.
I thought this could be some kind of "loss of precision" in my conversions between pixels and meters (I am scaling by a factor of 100), but the Y axis uses the same technique and it's fine.
So, I thought that I might be missing something.
Edit 1
I am converting from Box2D coordinates to world coordinates. As you can see from the blue debug sprites in the screenshots, they form the box almost perfectly.
public static final float WORLD_TO_BOX = 0.01f;
public static final float BOX_TO_WORLD = 100f;
The box render code:
public void draw(Batch batch, float alpha) {
    x = (body.getPosition().x - width / 2) * TheBox.BOX_TO_WORLD;
    y = (body.getPosition().y - height / 2) * TheBox.BOX_TO_WORLD;
    float xend = (body.getPosition().x + width / 2) * TheBox.BOX_TO_WORLD;
    float yend = (body.getPosition().y + height / 2) * TheBox.BOX_TO_WORLD;
    batch.draw(texture, x, y);
    batch.draw(texture, x, yend);
    batch.draw(texture, xend, yend);
    batch.draw(texture, xend, y);
}
Edit 2
I am starting to suspect the camera. I have the DebugRenderer and a scene2d Stage. Here is the code:
My screen resolution (Nexus 5, and it's portrait):
public static final int SCREEN_WIDTH = 1080;
public static final int SCREEN_HEIGHT = 1920;
At the startup:
// ...
stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();
// ...
Now, the render() code:
public void render() {
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    camera.update();
    world.step(1/45f, 6, 6);
    world.clearForces();
    stage.act(Gdx.graphics.getDeltaTime());
    stage.draw();
    debugRenderer.render(world, debugMatrix);
}
Looks like the answer to that one was fairly simple:
stage.setCamera(camera);
I was not setting the OrthographicCamera to the stage, so the stage was using some kind of default camera that wasn't aligned with my stuff.
It had nothing to do with Box2D in the end. Box2D was returning healthy values, but these values corresponded to the wrong places on my screen because of the stage's default camera.
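For completeness, the corrected startup then looks roughly like this (a sketch against the same older Stage API used above):
stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
stage.setCamera(camera); // the stage now shares the camera the debug matrix is built from

debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();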
I know that what I am going to ask has already been discussed a few times, but after going through all of those discussions I still could not find a complete answer, so I am asking a new question.
When I tried integrating JPCT-AE with QCAR, everything went well as expected: I got my model-view matrix from renderFrame in JNI and successfully transferred it to Java, and the model was shown perfectly as expected. But when I tried to pass this matrix to the JPCT world camera, my model disappeared.
My code, in onSurfaceChanged:
world = new World();
world.setAmbientLight(20, 20, 20);
sun = new Light(world);
sun.setIntensity(250, 250, 250);
cube = Primitives.getCube(1);
cube.calcTextureWrapSpherical();
cube.strip();
cube.build();
world.addObject(cube);
cam = world.getCamera();
cam.moveCamera(Camera.CAMERA_MOVEOUT, 10);
cam.lookAt(cube.getTransformedCenter());
SimpleVector sv = new SimpleVector();
sv.set(cube.getTransformedCenter());
sv.y -= 100;
sv.z -= 100;
sun.setPosition(sv);
MemoryHelper.compact();
And in onDraw:
com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
mResult.setDump(modelviewMatrix ); //modelviewMatrix i get from Qcar
cube.setRotationMatrix(mResult);
cam.setBack(mResult);
fb.clear(back);
world.renderScene(fb);
world.draw(fb);
fb.display();
After some research I found that QCAR uses a right-handed coordinate system, meaning that positive X goes right, positive Y goes up, and positive Z comes out of the screen, while in the JPCT coordinate system positive X goes right, positive Y goes down, and positive Z goes into the screen.
QCAR coordinate system:
I know that the matrix QCAR gives me is a 4x4 matrix containing 3x3 rotation values and a translation vector.
I am posting the matrices to be more clear:
modelViewMatrix:
1.512537 -159.66255 -10.275316 0.0
-89.86529 -1.1592013 4.7839375 0.0
-8.619186 10.179538 -159.44305 0.0
59.182976 93.205956 437.2832 1.0
modelViewMatrix after inversion, using cam.setBack(modelviewMatrix.invert(modelviewMatrix)):
5.9083453E-5 -0.01109448 -3.3668696E-4 0.0
0.0040540528 -3.8752193E-4 0.0047518034 0.0
-0.004756433 -4.6811014E-4 0.0040459237 0.0
0.7533285 0.4116795 2.7063704 0.9999999
If I remove matrix elements 13, 14, and 15 (assuming a 3x3 rotation matrix), the model is rotated properly, but the translation (the in-and-out movement of the image) is gone.
In short, I don't know what change to the translation vector is needed.
So please suggest what I am missing here.
The fix is to invert and then transpose the QCAR model-view matrix on the native side before handing it to JPCT:
QCAR::Matrix44F inverseMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);
QCAR::Matrix44F invTransposeMatrix = SampleMath::Matrix44FTranspose(inverseMatrix);
Then pass the invTransposeMatrix value to Java:
env->SetFloatArrayRegion(modelviewArray, 0, 16, invTransposeMatrix.data);
env->CallVoidMethod(obj, method, modelviewArray);
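On the Java side, the received float[16] can then be applied to the JPCT camera just as in the onDraw code above (a sketch, assuming modelviewMatrix is the array filled by the native call):
com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
mResult.setDump(modelviewMatrix); // the inverted, transposed matrix from JNI
cam.setBack(mResult);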
I have been trying to use a MouseJoint to move a piece to wherever the user touches. But the piece, being affected by the joint, behaves strangely and never reaches the point. This is the code (x and y are already converted to 'physical' units):
MouseJointDef mj_def;
MouseJoint mj = null;
Body mj_gbody;

public void move(float x, float y)
{
    if (mj == null)
    {
        BodyDef mgbd = new BodyDef();
        mj_gbody = wrld.createBody(mgbd);

        mj_def = new MouseJointDef();
        mj_def.bodyA = mj_gbody;
        mj_def.bodyB = body;
        mj_def.collideConnected = true;
        mj_def.maxForce = 20.0f * body.getMass();
        //mj_def.target.set(x,y);
        mj = (MouseJoint) wrld.createJoint(mj_def);
        body.setAwake(true);
    }
    mj.setTarget(new Vector2(x, y));
}
I was looking for some way to set the anchor point on bodyB, since the 'strange behaviour' I mentioned seems to make the body orbit around the established point (an orbit twice the width of the object), as if the anchor point were outside the body (which is hexagon shaped, by the way). But I don't see any way of doing that in libgdx.
Does anybody know what I am doing wrong? Thank you in advance!
Well, the MouseJoint was working properly; I had just misunderstood how it works.
As can be clearly seen in the Box2D testbed, a MouseJoint is meant for dragging an object around after selecting it, so the anchor on the body is assigned by the first target.set.
Since I wanted to move the center of the object to the place the mouse was (or the user touched), calling mj_def.target.set(body.getPosition().x + 2.0f, body.getPosition().y + 1.0f); (the object is 4.0f by 2.0f) in the initialization solved the problem. It also may not be the best joint for my purpose (moving a specific object to a given place on the screen).
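Put together, the corrected initialization looks roughly like this (a sketch; the 2.0f and 1.0f half-size offsets assume the body origin sits at a corner of the 4.0f by 2.0f piece):
mj_def = new MouseJointDef();
mj_def.bodyA = mj_gbody;
mj_def.bodyB = body;
mj_def.collideConnected = true;
mj_def.maxForce = 20.0f * body.getMass();
// The first target fixes the joint's anchor on bodyB, so place it at the
// point the joint should drag: the center of the piece.
mj_def.target.set(body.getPosition().x + 2.0f, body.getPosition().y + 1.0f);
mj = (MouseJoint) wrld.createJoint(mj_def);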