I know that what I am about to ask has been discussed before, but after going through those threads I still couldn't find a complete answer, so I am asking a new question.
When I integrated jPCT-AE with QCAR, everything went as expected at first: I get my modelview matrix from renderFrame via JNI and transfer it to Java, and when I apply it to the jPCT model, the model is shown perfectly. But when I try to pass this matrix to the jPCT world camera, my model disappears.
My code, in onSurfaceChanged:
world = new World();
world.setAmbientLight(20, 20, 20);
sun = new Light(world);
sun.setIntensity(250, 250, 250);
cube = Primitives.getCube(1);
cube.calcTextureWrapSpherical();
cube.strip();
cube.build();
world.addObject(cube);
cam = world.getCamera();
cam.moveCamera(Camera.CAMERA_MOVEOUT, 10);
cam.lookAt(cube.getTransformedCenter());
SimpleVector sv = new SimpleVector();
sv.set(cube.getTransformedCenter());
sv.y -= 100;
sv.z -= 100;
sun.setPosition(sv);
MemoryHelper.compact();
And in onDraw:
com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
mResult.setDump(modelviewMatrix); // modelviewMatrix is the matrix I get from QCAR
cube.setRotationMatrix(mResult);
cam.setBack(mResult);
fb.clear(back);
world.renderScene(fb);
world.draw(fb);
fb.display();
After some research I found that QCAR uses a right-handed coordinate system, meaning that positive X goes right, positive Y goes up and positive Z comes out of the screen, while in jPCT's coordinate system positive X goes right, positive Y goes down and positive Z goes into the screen.
[image: QCAR coordinate system]
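Given that difference, one common fix is to rotate the matrix 180° around the X axis (which flips both Y and Z) before applying it to the camera. A minimal sketch of onDraw with that flip added, assuming mResult already holds the QCAR modelview matrix after the inverse/transpose step described below:

com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
mResult.setDump(modelviewMatrix);  // inverted + transposed QCAR modelview
mResult.rotateX((float) Math.PI);  // 180° about X: Y-up/Z-out becomes Y-down/Z-in
cam.setBack(mResult);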
I know that the matrix QCAR gives me is a 4×4 matrix holding a 3×3 rotation block and a translation vector.
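For reference, with the OpenGL-style column-major storage QCAR uses, the 16 floats are laid out like this, with the rotation in the upper-left 3×3 block and the translation in entries 12, 13 and 14 (counting from 0):

m[0]  m[4]  m[8]   m[12]        r  r  r  tx
m[1]  m[5]  m[9]   m[13]   ==   r  r  r  ty
m[2]  m[6]  m[10]  m[14]        r  r  r  tz
m[3]  m[7]  m[11]  m[15]        0  0  0  1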
I am posting the matrices to make this clearer:
modelviewMatrix:
1.512537 -159.66255 -10.275316 0.0
-89.86529 -1.1592013 4.7839375 0.0
-8.619186 10.179538 -159.44305 0.0
59.182976 93.205956 437.2832 1.0
modelviewMatrix after inverting it with cam.setBack(modelviewMatrix.invert(modelviewMatrix)):
5.9083453E-5 -0.01109448 -3.3668696E-4 0.0
0.0040540528 -3.8752193E-4 0.0047518034 0.0
-0.004756433 -4.6811014E-4 0.0040459237 0.0
0.7533285 0.4116795 2.7063704 0.9999999
If I zero out matrix elements 13, 14 and 15 (counting from 1: the translation entries), treating the rest as the 3×3 rotation, the model is rotated properly, but the translation (the in-and-out movement of the image) is gone.
So I simply don't know what conversion the translation vector needs.
Please suggest what I am missing here.
The solution is to invert and transpose the modelview matrix on the native side before handing it over (the inverse turns the model transform into a camera transform, and the transpose converts QCAR's column-major layout into the row-major layout that jPCT's setDump expects):
QCAR::Matrix44F inverseMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);
QCAR::Matrix44F invTransposeMatrix = SampleMath::Matrix44FTranspose(inverseMatrix);
Then pass the invTransposeMatrix values to Java:
env->SetFloatArrayRegion(modelviewArray, 0, 16, invTransposeMatrix.data);
env->CallVoidMethod(obj, method, modelviewArray);
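For reference, the Java method that CallVoidMethod invokes might look like this minimal sketch (the method name and field are my assumptions; the signature has to match whatever GetMethodID lookup produced method):

private final float[] modelviewMatrix = new float[16];

// Java target of env->CallVoidMethod(obj, method, modelviewArray)
public void setModelViewMatrix(float[] matrix) {
    // copy it out, since the native side refills the array every frame
    System.arraycopy(matrix, 0, modelviewMatrix, 0, 16);
}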
I have a server-side function that detects an ArUco marker in an image and estimates its pose.
Using the function estimatePoseSingleMarkers I obtain the rotation and translation vectors.
I need to use these values in an Android app with ARCore to create a Pose.
The documentation says that Pose needs two float arrays (rotation and translation): https://developers.google.com/ar/reference/java/arcore/reference/com/google/ar/core/Pose. This is what I tried:
float[] newT = new float[] { t[0], t[1], t[2] };
Quaternion q = Quaternion.axisAngle(new Vector3(r[0], r[1], r[2]), 90);
float[] newR = new float[]{ q.x, q.y, q.z, q.w };
Pose pose = new Pose(newT, newR);
The position of the 3D object placed at this pose is completely random.
What am I doing wrong?
This is a snapshot of the server image after estimating the pose and drawing the axes. The image I receive is rotated by 90°; I'm not sure if that's related.
cv::aruco::estimatePoseSingleMarkers (link) returns the rotation vector in Rodrigues format. Following the docs:
w = norm( r ) // angle of rotation in radians
r = r/w // unit axis of rotation
thus
float w = (float) Math.sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]);
// handle w == 0.0 separately (no rotation)
// get a new Quaternion from an axis/angle (in degrees) defining the rotation
Quaternion q = Quaternion.axisAngle(new Vector3(r[0]/w, r[1]/w, r[2]/w), (float) Math.toDegrees(w));
This should work, apart from the 90° rotation already mentioned, provided the lens parameters fed to estimatePoseSingleMarkers are correct, or at least reasonably accurate.
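Putting the answer together, a minimal sketch of the full conversion (r and t are the vectors received from the server; the zero-angle guard covers the special case noted above):

// r[] = Rodrigues rotation vector, t[] = translation, both from the server
float w = (float) Math.sqrt(r[0]*r[0] + r[1]*r[1] + r[2]*r[2]); // angle in radians
Quaternion q = new Quaternion(); // default constructor is the identity rotation (w == 0 case)
if (w > 1e-6f) {
    // axisAngle wants a unit axis and the angle in degrees
    q = Quaternion.axisAngle(new Vector3(r[0]/w, r[1]/w, r[2]/w),
                             (float) Math.toDegrees(w));
}
Pose pose = new Pose(new float[] { t[0], t[1], t[2] },
                     new float[] { q.x, q.y, q.z, q.w });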
I'm making a simple jumping game for Android using libgdx and box2d, and I cannot figure out how to make sprites move really smoothly. I have checked several articles on fixing the timestep and synchronizing the renderer with the physics simulation, but none of the suggested approaches really helped (http://gafferongames.com/game-physics/fix-your-timestep/).
Finally I decided to run the simplest possible test, setting the box2d world step equal to the frame time (which, given a stable FPS, should give the best result), but the movement is still not totally smooth. I have tested on a PC and on an Android device, both with a stable 60-61 FPS. Here is the pseudocode:
In render:
world.step(Gdx.graphics.getDeltaTime(), 6, 2);
stage.act();
stage.draw();
The stage basically has just one actor, with act and draw overridden:
@Override
public void draw(Batch batch, float arg1) {
float x = this.getX() - width/2;
float y = this.getY() - height/2;
batch.draw(sprite, x, y, width, height);
}
@Override
public void act (float delta) {
...
//get body position
position = body.getPosition();
this.setPosition(position.x, position.y);
}
The actor has a box2d body attached to it; there is no gravity, and the body's velocity is set constant:
BodyDef bodyDef = new BodyDef();
bodyDef.type = BodyType.DynamicBody;
bodyDef.position.set(world_position);
bodyDef.linearDamping = 0f;
bodyDef.angularDamping = 0f;
bodyDef.fixedRotation = true;
bodyDef.gravityScale = 0f;
// ... fixture added to the body
body.setLinearVelocity(0, -2f);
The camera is not moving and the case seems dead simple, yet the sprite does not move perfectly smoothly. (Though it still looks smoother than when using a time accumulator and interpolation.)
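For reference, the accumulator scheme from the linked article looks roughly like this in libgdx terms (a sketch; TIME_STEP and the 0.25 s frame-time cap are conventional choices, not values from the question):

private static final float TIME_STEP = 1 / 60f;
private float accumulator = 0f;

private void doPhysicsStep(float deltaTime) {
    // cap the frame time so one long frame cannot trigger a spiral of catch-up steps
    float frameTime = Math.min(deltaTime, 0.25f);
    accumulator += frameTime;
    while (accumulator >= TIME_STEP) {
        world.step(TIME_STEP, 6, 2);
        accumulator -= TIME_STEP;
    }
}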
Is it possible to achieve absolutely smooth movement at all? Is there some mistake in my approach?
I have checked some similar games on the same Android device; their objects seem to move absolutely smoothly, but maybe it only seems so because too much happens on screen for me to notice.
Any advice would be appreciated.
After further testing and research I figured out the problem: it was related not to FPS but to pixel rounding. Box2d bodies have float coordinates; after converting them to rounded pixel values, the animation became much smoother.
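In terms of the actor code above, that amounts to rounding in act() after converting to pixel units. A minimal sketch (PPM, the pixels-per-meter scale, is an assumed constant):

@Override
public void act(float delta) {
    super.act(delta);
    Vector2 position = body.getPosition();
    // snap the converted coordinates to whole pixels to avoid sub-pixel shimmer
    setPosition(Math.round(position.x * PPM), Math.round(position.y * PPM));
}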
How about using CCPhysicsSprite (from cocos2d) instead of changing the sprite's position by time? You can use a batch too. Just:
sprite = [CCPhysicsSprite spriteWithTexture:batch.texture];
[batch addChild:sprite];
See the CCPhysicsSprite class.
Example:
#import "CCPhysicsSprite.h"
CCPhysicsSprite *sprite = [CCPhysicsSprite spriteWithFile:@"sprite.png"];
[self addChild:sprite];
b2BodyDef bodyDef;
bodyDef.type = b2_dynamicBody;
bodyDef.position.Set(300/PTM_RATIO, 200/PTM_RATIO);
body = world->CreateBody(&bodyDef);
b2CircleShape circleShape;
circleShape.m_radius = 0.3;
b2FixtureDef fixtureDef;
fixtureDef.shape = &circleShape;
fixtureDef.density = 1;
fixtureDef.friction = 0.3f;
body->CreateFixture(&fixtureDef);
[sprite setPTMRatio:PTM_RATIO];
[sprite setB2Body:body];
[sprite setPosition: ccp(300, 200)];
I am using LibGDX and Box2d to build my first Android game. Yay!
But I am having some serious problems with Box2d.
I have a simple stage with a rectangular Box2d body at the bottom representing the ground, and two more rectangular Box2d bodies at the left and right representing the walls.
[Screenshot 1]
[Screenshot 2]
I also have a box. This box can be touched, and when touched it moves using applyLinearImpulse, as if it had been kicked. It is a DynamicBody.
What happens is that in the Box object's draw() code, the Box2d body gives me a wrong value for the X axis. The value for the Y axis is fine.
Those blue "dots" on the screenshots are small textures that I printed on the box edges that body.getPosition() give me. Note how in one screenshot the dots are aligned with the actual DebugRenderer rectangle and in the other they are not.
This is what is happening: when the box moves, the alignment is lost during the movement.
The collisions between the box, the ground and the walls occur precisely within the area that the DebugRenderer renders. But body.getPosition() and fixture.testPoint() consider the area inside those blue dots instead.
So, somehow, Box2d is "maintaining" these two areas for the same body.
I thought this could be some loss of precision in my conversions between pixels and meters (I am scaling by a factor of 100), but the Y axis uses the same technique and it's fine.
So, I thought that I might be missing something.
Edit 1
I am converting from Box coordinates to World coordinates. If you see the blue debug sprites in the screenshots, they form the box almost perfectly.
public static final float WORLD_TO_BOX = 0.01f;
public static final float BOX_TO_WORLD = 100f;
The box render code:
public void draw(Batch batch, float alpha) {
x = (body.getPosition().x - width/2) * TheBox.BOX_TO_WORLD;
y = (body.getPosition().y - height/2) * TheBox.BOX_TO_WORLD;
float xend = (body.getPosition().x + width/2) * TheBox.BOX_TO_WORLD;
float yend = (body.getPosition().y + height/2) * TheBox.BOX_TO_WORLD;
batch.draw(texture, x, y);
batch.draw(texture, x, yend);
batch.draw(texture, xend, yend);
batch.draw(texture, xend, y);
}
Edit 2
I am starting to suspect the camera. I have the DebugRenderer and a scene2d Stage; here is the code:
My screen resolution (Nexus 5, and it's portrait):
public static final int SCREEN_WIDTH = 1080;
public static final int SCREEN_HEIGHT = 1920;
At the startup:
// ...
stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();
// ...
Now, the render() code:
public void render() {
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
camera.update();
world.step(1/45f, 6, 6);
world.clearForces();
stage.act(Gdx.graphics.getDeltaTime());
stage.draw();
debugRenderer.render(world, debugMatrix);
}
Looks like the answer to that one was fairly simple:
stage.setCamera(camera);
I was not setting the OrthographicCamera to the stage, so the stage was using some kind of default camera that wasn't aligned with my stuff.
It had nothing to do with Box2d in the end. Box2d was returning healthy values, but those values corresponded to the wrong places on my screen because the stage was rendering through the wrong camera.
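For clarity, a minimal sketch of the corrected startup code, with the missing line added:

stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
stage.setCamera(camera); // the fix: stage and debug renderer now share one camera
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);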
I am trying to pick objects in the Bullet physics world, but all I seem to be able to pick is the floor/ground plane! I am using the Vuforia SDK and have altered the ImageTargets demo code. I used the following code to project the touched screen points into the 3D world:
void projectTouchPointsForBullet(QCAR::Vec2F point, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd, QCAR::Matrix44F &modelViewMatrix)
{
// convert the tap point to normalised device coordinates; Y is flipped because
// screen coordinates run top-down while NDC runs bottom-up
QCAR::Vec4F normalisedVector((2 * point.data[0] / screenWidth - 1),
(2 * (screenHeight - point.data[1]) / screenHeight - 1),
-1,
1);
QCAR::Matrix44F modelViewProjection;
SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0] , &modelViewProjection.data[0]);
QCAR::Matrix44F inversedMatrix = SampleMath::Matrix44FInverse(modelViewProjection);
// unproject the tap on the near plane and apply the perspective divide
QCAR::Vec4F near_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
near_point.data[3] = 1.0 / near_point.data[3];
near_point = QCAR::Vec4F(near_point.data[0]*near_point.data[3], near_point.data[1]*near_point.data[3], near_point.data[2]*near_point.data[3], 1);
normalisedVector.data[2] = 1.0; // z coordinate now 1: the same tap on the far plane
// unproject the tap on the far plane the same way
QCAR::Vec4F far_point = SampleMath::Vec4FTransform(normalisedVector, inversedMatrix);
far_point.data[3] = 1.0 / far_point.data[3];
far_point = QCAR::Vec4F(far_point.data[0]*far_point.data[3], far_point.data[1]*far_point.data[3], far_point.data[2]*far_point.data[3], 1);
// the pick ray runs from the near point to the far point
lineStart = QCAR::Vec3F(near_point.data[0], near_point.data[1], near_point.data[2]);
lineEnd = QCAR::Vec3F(far_point.data[0], far_point.data[1], far_point.data[2]);
}
When I run a ray test in my physics world, I only ever seem to hit the ground plane. Here is the code for the ray test call:
QCAR::Vec3F lineStart, lineEnd;
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, modelViewMatrix);
btVector3 btRayFrom = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]); // NB: from/to are swapped here; see the edit at the end
btVector3 btRayTo = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);
btCollisionWorld::ClosestRayResultCallback rayCallback(btRayFrom,btRayTo);
dynamicsWorld->rayTest(btRayFrom, btRayTo, rayCallback);
if(rayCallback.hasHit())
{
// my bodies have char* tags attached to identify what has been touched
char* pPhysicsData = reinterpret_cast<char*>(rayCallback.m_collisionObject->getUserPointer());
btRigidBody* pBody = btRigidBody::upcast(rayCallback.m_collisionObject);
if (pBody && pPhysicsData)
{
LOG("handleTouches:: notifyOnTouchEvent from physics world!!!");
notifyOnTouchEvent(env, obj,0,0, pPhysicsData);
}
}
I know I am looking predominantly top-down, so I am bound to hit the ground plane, and at least I know my touch is being projected into the world correctly. But I have objects lying on the ground plane and I can't seem to touch them! Any pointers would be greatly appreciated :)
I found out why I wasn't able to touch the objects: I scale the objects up when they are drawn, so I had to scale the pose (modelview) matrix by the same value before projecting my touch point into the 3D world. (EDIT: I also had the btRayFrom and btRayTo input coordinates reversed; that is fixed below.)
//top of code
const float kObjectScale = 100.0f;
....
...
//inside touch handler method
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,&modelViewMatrix.data[0]);
projectTouchPointsForBullet(QCAR::Vec2F(touch1.tapX, touch1.tapY), lineStart, lineEnd, modelViewMatrix);
btVector3 btRayFrom = btVector3(lineStart.data[0], lineStart.data[1], lineStart.data[2]);
btVector3 btRayTo = btVector3(lineEnd.data[0], lineEnd.data[1], lineEnd.data[2]);
My touches are projected correctly now :)
Basically, I have an Android 1.5 application with a GLSurfaceView class that shows a simple square polygon on the screen. I want to add a new piece of functionality: moving the square by touching it with a finger. That is, when the user touches the square and moves the finger, the square should follow the finger until the finger leaves the screen.
I'm trying to use gluUnProject to obtain the OpenGL coordinates that match the exact position of the finger; then I will apply a glTranslatef to the polygon to move it to that position (I hope).
The problem is that something goes wrong with gluUnProject: it throws java.lang.IllegalArgumentException: length - offset < n on the call to gluUnProject.
First of all, I'm passing 0 as the Z window coordinate because I don't know what to pass there: a window position has only X and Y coordinates. I tested passing 1 as the Z coordinate and got the same exception.
float [] outputCoords=getOpenGLCoords(event.getX(), event.getY(), 0);
x=outputCoords[0];
y=outputCoords[1];
z=outputCoords[2];
.
.
.
public float[] getOpenGLCoords(float xWin,float yWin,float zWin)
{
int screenW=SectionManager.instance.getDisplayWidth();
int screenH=SectionManager.instance.getDisplayHeight();
//CODE FOR TRANSLATING FROM SCREEN COORDINATES TO OPENGL COORDINATES
mg.getCurrentProjection(MyGl);
mg.getCurrentModelView(MyGl);
float[] modelMatrix = mg.mModelView;
float[] projMatrix = mg.mProjection;
int [] mView = new int[4];
mView[0] = 0;
mView[1] = 0;
mView[2] = screenW; //width
mView[3] = screenH; //height
float [] outputCoords = new float[3];
GLU.gluUnProject(xWin, yWin, zWin, modelMatrix, 0, projMatrix, 0, mView, 0, outputCoords, 0);
return outputCoords;
}
I answered the same question here; basically, gluUnProject expects your outputCoords array to have size 4 instead of 3. Note that these are homogeneous coordinates, so you still have to divide the first three components by the fourth one if you're doing a perspective projection.
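Applied to the helper above, a minimal sketch of the fix. Only the output array size and the final divide are required by the exception; the Y flip (screenH - yWin) is a common extra step on Android, where touch Y runs top-down while GL runs bottom-up:

public float[] getOpenGLCoords(float xWin, float yWin, float zWin)
{
    int screenW = SectionManager.instance.getDisplayWidth();
    int screenH = SectionManager.instance.getDisplayHeight();
    mg.getCurrentProjection(MyGl);
    mg.getCurrentModelView(MyGl);
    int[] view = { 0, 0, screenW, screenH };
    float[] out = new float[4]; // gluUnProject writes 4 homogeneous components
    GLU.gluUnProject(xWin, screenH - yWin, zWin, mg.mModelView, 0,
                     mg.mProjection, 0, view, 0, out, 0);
    // perspective divide: homogeneous -> cartesian
    return new float[] { out[0] / out[3], out[1] / out[3], out[2] / out[3] };
}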