By default, the ModelInstance is rotated around its centre (0,0,0); I want it to rotate around (0,2,2). I know that in other game engines there is a method like model.setRotationPivot(float). Is there any similar method in libGDX?
// how to set rotation pivot?
modelInstance.transform.set(position, rotation, scale);
Thanks!
Late answer:
As far as I know there is no method to set the pivot, so I use a workaround:
Vector3 vec3 = new Vector3(0, 2, 2);  // pivot offset
vec3.rotate(Vector3.Y, rotation);     // rotate the offset around the Y axis
modelInstance.transform.setToTranslation(vec3);
modelInstance.transform.rotate(Vector3.Y, rotation);
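The same idea can also be expressed directly on the Matrix4 as translate(pivot), then rotate, then translate back. As a plain-Java sanity check of the underlying math (class and method names are mine; no libGDX dependency), rotating a point p around a pivot c is p' = R·(p − c) + c:

```java
public class PivotRotate {
    /** Rotate point (px, py, pz) around the Y axis about pivot (cx, cy, cz) by degrees. */
    public static float[] rotateAroundPivotY(float px, float py, float pz,
                                             float cx, float cy, float cz, float degrees) {
        double r = Math.toRadians(degrees);
        float dx = px - cx, dz = pz - cz;               // move pivot to origin
        float rx = (float) (dx * Math.cos(r) + dz * Math.sin(r));  // rotate around Y
        float rz = (float) (-dx * Math.sin(r) + dz * Math.cos(r));
        return new float[] { rx + cx, py, rz + cz };    // move pivot back
    }
}
```

A useful property to check in any such workaround: the pivot itself must map to itself for every angle.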
Related
I'm working on some app using Android's Camera2 API. So far I've been able to get a preview displayed within a TextureView. The app is by default in landscape mode. When using the emulator the preview will appear upside-down. On my physical Nexus 5 the preview is usually displayed correctly (landscape, not upside-down), but occasionally it is rotated by 90 degrees, yet stretched to the dimensions of the screen.
I thought that should be easy and thought the following code would return the necessary information on the current orientation:
// display rotation
getActivity().getWindowManager().getDefaultDisplay().getRotation();
// sensor orientation
mManager.getCameraCharacteristics(mCameraId).get(CameraCharacteristics.SENSOR_ORIENTATION);
... I was pretty surprised when I saw that the code above always returned 1 for the display rotation and 90 for the sensor orientation, regardless of whether the preview was rotated by 90 degrees or not. (Within the emulator the sensor orientation is always 270, which kind of makes sense if I assume 90 to be the correct orientation.)
I also checked the width and height within onMeasure in AutoMeasureTextureView (adopted from Android's Camera2 example), which I'm using to create my TextureView. No luck there either: the width and height reported from within onMeasure are always the same regardless of the preview rotation.
So I'm clueless about how to tackle this issue. Does anyone have an idea what could be the reason for the occasional hiccups in my preview orientation?
[Edit]
A detail I just found out: whenever the preview appears rotated, onSurfaceTextureSizeChanged in the TextureView.SurfaceTextureListener seems not to get called. The documentation for onSurfaceTextureSizeChanged says this method is called whenever the SurfaceTexture's buffer size changes. I have a method createCameraPreviewSession (copied from Android's Camera2 example) in which I set the default buffer size of my texture like
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
From my logging output I can tell that onSurfaceTextureSizeChanged is called exactly after that - however, not always... (or setting the default buffer size sometimes silently fails?).
I think I can answer my own question: I built my Camera2 fragment after Android's Camera2 example. However, I didn't consider the method configureTransform to be important because, unlike the example code, my application is forced to landscape mode anyway. That assumption turned out to be wrong. Since reintegrating configureTransform into my code I haven't experienced any more hiccups.
Update: The original example within the Android documentation pages doesn't seem to exist anymore. I've updated the link which is now pointing to the code on Github.
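What configureTransform compensates for is the offset between the display rotation and the camera sensor orientation. A minimal sketch of that offset calculation (the class and method names are mine, and the formula assumes a back-facing camera; front-facing cameras additionally need mirroring):

```java
public class PreviewRotation {
    /**
     * Degrees the preview must be rotated so it appears upright.
     * displayRotation is the Surface.ROTATION_0..ROTATION_270 constant (0..3);
     * sensorOrientation comes from CameraCharacteristics.SENSOR_ORIENTATION.
     */
    public static int requiredRotation(int displayRotation, int sensorOrientation) {
        int displayDegrees = displayRotation * 90; // ROTATION_0 -> 0, ROTATION_1 -> 90, ...
        return (sensorOrientation - displayDegrees + 360) % 360;
    }
}
```

With the values from the question (display rotation 1, sensor orientation 90) this yields 0, i.e. no extra rotation needed; with the emulator's sensor orientation of 270 it yields 180, matching the upside-down preview observed there.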
I followed the whole textureView.setTransform(matrix) approach listed above, and it worked. However, I was also able to set the rotation manually using the much simpler textureView.setRotation(270), without needing to create a Matrix.
I had also faced a similar issue on a Nexus device. The code below works for me.
Call this function before opening the camera and also in onResume().
private void transformImage(int width, int height) {
    if (textureView == null) {
        return;
    }
    try {
        Matrix matrix = new Matrix();
        int rotation = getWindowManager().getDefaultDisplay().getRotation();
        RectF textureRectF = new RectF(0, 0, width, height);
        RectF previewRectF = new RectF(0, 0, textureView.getHeight(), textureView.getWidth());
        float centerX = textureRectF.centerX();
        float centerY = textureRectF.centerY();
        if (rotation == Surface.ROTATION_90 || rotation == Surface.ROTATION_270) {
            previewRectF.offset(centerX - previewRectF.centerX(), centerY - previewRectF.centerY());
            matrix.setRectToRect(textureRectF, previewRectF, Matrix.ScaleToFit.FILL);
            // Note: the first term below is always 1; the official Camera2 sample
            // uses viewHeight / previewHeight and viewWidth / previewWidth here.
            float scale = Math.max((float) width / width, (float) height / width);
            matrix.postScale(scale, scale, centerX, centerY);
            matrix.postRotate(90 * (rotation - 2), centerX, centerY);
        }
        textureView.setTransform(matrix);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
Current state:
I create the frustum, saved in mViewMatrix. I also have a quaternion q, a float array of three angles (yaw, pitch, roll) and a rotation matrix mRotationMatrix, all representing the same rotation.
What I want to achieve is some sort of an augmented reality effect. I'm currently applying the mRotationMatrix to the mViewMatrix:
Matrix.setLookAtM(mTmpMatrix, 0, // mViewMatrix
mCameraPosition[0], mCameraPosition[1], mCameraPosition[2], // eye
mTargetRotPosition[0], mTargetRotPosition[1], mTargetRotPosition[2],
0, 1, 0); // up
Matrix.setIdentityM(mViewMatrix, 0);
Matrix.multiplyMM(mViewMatrix, 0, mRotationMatrix, 0, mTmpMatrix, 0);
This handles the whole rotation, up vector included, so the rotation itself works fine. But since the rotation matrix comes from the device's sensors, it is effectively a rotation around the wrong axes.
As a reference, this image should help:
Scenario #1:
Yaw: pointing towards north, it's 0.
Pitch: 0
Roll: 0
The camera is looking to the right, but y is correct.
If I now increase the pitch, i.e. tilt the device up, the camera moves to the right instead of looking up.
If I increase the yaw, the camera moves up instead of to the right.
If I increase the roll, weird transformations happen.
In the video, I'm executing the movements in this order. The compass is also showing correct movements, just the transformations of the OpenGL camera are screwed.
Video: Sample screenrecord video
Currently, I'm using the following code to get the rotation matrix, as well as pitch/roll/yaw:
switch (rotation) {
case Surface.ROTATION_0:
mRemappedXAxis = SensorManager.AXIS_MINUS_Y;
mRemappedYAxis = SensorManager.AXIS_Z;
break;
case Surface.ROTATION_90:
mRemappedXAxis = SensorManager.AXIS_X;
mRemappedYAxis = SensorManager.AXIS_Y;
break;
case Surface.ROTATION_180:
mRemappedXAxis = SensorManager.AXIS_Y;
mRemappedYAxis = SensorManager.AXIS_MINUS_Z;
break;
case Surface.ROTATION_270:
mRemappedXAxis = SensorManager.AXIS_MINUS_X;
mRemappedYAxis = SensorManager.AXIS_MINUS_Y;
break;
}
float[] rotationMatrix = new float[16];
float[] correctedRotationMatrix = new float[16];
float[] rotationVector = new float[]{x, y, z}; // from sensor fusion
float[] orientationVals = new float[3];
SensorManager.getRotationMatrixFromVector(rotationMatrix, rotationVector);
SensorManager.remapCoordinateSystem(rotationMatrix, mRemappedXAxis, mRemappedYAxis, correctedRotationMatrix);
SensorManager.getOrientation(correctedRotationMatrix, orientationVals);
I've already tried some other remap combinations, but none of them seemed to change anything in how the movements are translated.
My other thought was to rotate the vectors I pass to setLookAtM myself, but I don't know how I'm supposed to handle the up vector.
If someone could show me, or point me in the right direction, how to handle the rotation so that the movements I execute are interpreted correctly, or how I'm supposed to do this with the bare angles in OpenGL, I'd be thankful.
In my case, I calculated an incremental rotation delta (angle) every time the finger moved on the screen. From this incremental angle I created a temporary rotation matrix. I then post-multiplied my overall rotation matrix (the historical matrix holding all previous incremental rotations) with it, and finally used this overall rotation matrix in my draw method.
The problem was that I was POST-multiplying the incremental rotation with my overall rotation, which meant my latest rotation was applied to the object first and the oldest (first) rotation was applied last.
This is what messed up everything.
The solution was simple: instead of post-multiplying, I pre-multiplied the incremental rotation with my overall rotation matrix. My rotations were now in the right order and everything worked fine.
Hope this helps.
Here is where I learnt it from; check this question:
"9.070 How do I transform my objects around a fixed coordinate system rather than the object's local coordinate system?"
http://www.opengl.org/archives/resources/faq/technical/transformations.htm#tran0162
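The pre- vs post-multiply distinction matters because rotations about different axes do not commute, so the side you multiply on decides whether the new rotation acts in the fixed (world) frame or the object's local frame. A plain-Java sketch with row-major 3×3 matrices (all names here are mine):

```java
public class RotOrder {
    // Row-major 3x3 matrix product c = a * b.
    public static float[] mul(float[] a, float[] b) {
        float[] c = new float[9];
        for (int i = 0; i < 3; i++)
            for (int j = 0; j < 3; j++)
                for (int k = 0; k < 3; k++)
                    c[3 * i + j] += a[3 * i + k] * b[3 * k + j];
        return c;
    }

    // Rotation about the X axis by deg degrees.
    public static float[] rotX(double deg) {
        double r = Math.toRadians(deg);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[] { 1, 0, 0,   0, c, -s,   0, s, c };
    }

    // Rotation about the Y axis by deg degrees.
    public static float[] rotY(double deg) {
        double r = Math.toRadians(deg);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        return new float[] { c, 0, s,   0, 1, 0,   -s, 0, c };
    }
}
```

mul(rotX(90), rotY(90)) and mul(rotY(90), rotX(90)) produce different matrices, which is exactly why swapping post-multiplication for pre-multiplication changed the behaviour: pre-multiplying each incremental rotation makes it act in the fixed coordinate system, as the OpenGL FAQ entry above describes.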
I want to implement zoom in/out of my plane object.
Right now I try scaling:
Matrix.translateM(mModelMatrix, 0, mFocalPoint.x, mFocalPoint.y, 0f);
Matrix.scaleM(mModelMatrix, 0, mCurrentScaleFactor, mCurrentScaleFactor, 1f);
Matrix.translateM(mModelMatrix, 0, -mFocalPoint.x, -mFocalPoint.y, 0f);
On the first zoom I get the correct result, but on the next zoom there is a problem: it looks like the focal point is calculated based on the old matrix.
Here is how I calculate the focal point:
float glX = detector.getFocusX() * mScaleCoefX - mGLSceneWidth/2;
float glY = mGLSceneHeight - detector.getFocusY() * mScaleCoefY - mGLSceneHeight/2;
mFocalPoint = new PointF(glX, glY);
I also save my model matrix after each zoom and restore it before each draw.
So my question is: why doesn't my zoom work if I save the matrix after each zoom and start scaling from the new matrix?
Also, maybe I should recalculate mFocalPoint?
Each time you use a matrix, be sure to initialize it with the identity matrix before you apply Translate, Scale or Rotate; I don't see that in your code. The resulting matrix should then be multiplied with the projection matrix, either beforehand or in the vertex shader.
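The translate/scale/translate sequence in the question is equivalent to p' = s·(p − f) + f for focal point f. A plain-Java sketch (names mine) of why rebuilding from identity with the cumulative scale factor keeps the focal point anchored, while composing onto last frame's matrix requires f to be re-expressed in that matrix's coordinates:

```java
public class ZoomAboutPoint {
    /** Scale point (px, py) by s about focal point (fx, fy):
     *  equivalent to translate(f) -> scale(s) -> translate(-f). */
    public static float[] apply(float px, float py, float fx, float fy, float s) {
        return new float[] { s * (px - fx) + fx, s * (py - fy) + fy };
    }
}
```

The focal point is invariant: apply(fx, fy, fx, fy, s) returns (fx, fy) for any s, so starting from identity each frame with the total scale keeps the pinch point fixed on screen.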
I need some help with matrix operations. What I'm trying to achieve is:
Scale down
Move to a specific position
Rotate by some degrees (around the center of the bitmap)
My code currently looks like this:
Matrix matrix = new Matrix();
matrix.preRotate(mShip.getRotation(), mShip.getX() + mShip.getCurrentBitmap().getWidth()/2f, mShip.getY() + mShip.getCurrentBitmap().getHeight()/2f);
matrix.setScale((1.0f * mShip.getWidth() / mShip.getCurrentBitmap().getWidth()), (1.0f * mShip.getHeight() / mShip.getCurrentBitmap().getHeight()));
matrix.postTranslate(mShip.getX(), mShip.getY());
mCanvas.drawBitmap(mShip.getCurrentBitmap(), matrix, mBasicPaint);
But the rotation has the wrong center, and I can't figure out how to solve this. I've already looked around on SO but only found similar problems, no solutions to this.
I think I might have to apply one of the operations to another one's values, since they are executed in sequence, but I can't figure out how.
Try this code. (Note that in your version setScale discards the earlier preRotate: the set* methods on Matrix replace the whole matrix, while pre*/post* compose with it.)
Matrix matrix = new Matrix();
matrix.setTranslate(-mShip.getCurrentBitmap().getWidth()/2f, -mShip.getCurrentBitmap().getHeight()/2f);
matrix.postRotate(mShip.getRotation());
matrix.postTranslate(mShip.getX(), mShip.getY());
matrix.postScale((1.0f * mShip.getWidth() / mShip.getCurrentBitmap().getWidth()), (1.0f * mShip.getHeight() / mShip.getCurrentBitmap().getHeight()), mShip.getX(), mShip.getY());
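As a plain-Java check (names mine; sign convention of the rotation aside) that this sequence puts the rotation center where intended, tracking a bitmap point through translate(-w/2, -h/2), rotate, translate(x, y) and scale about (x, y) shows the bitmap's center always lands on (x, y), whatever the angle and scale:

```java
public class ShipTransform {
    /** Where bitmap point (px, py) ends up under the answer's sequence:
     *  translate(-w/2, -h/2), rotate(deg), translate(x, y), scale(s) about (x, y). */
    public static float[] mapPoint(float px, float py, float w, float h,
                                   float deg, float x, float y, float s) {
        float tx = px - w / 2f, ty = py - h / 2f;                   // setTranslate
        double r = Math.toRadians(deg);
        float rx = (float) (tx * Math.cos(r) - ty * Math.sin(r));   // postRotate
        float ry = (float) (tx * Math.sin(r) + ty * Math.cos(r));
        rx += x; ry += y;                                           // postTranslate
        return new float[] { s * (rx - x) + x, s * (ry - y) + y };  // postScale about (x, y)
    }
}
```

For the center point (w/2, h/2) the first translate cancels everything up to the final (x, y), which is exactly the "rotate around the bitmap center, then place at the ship position" behaviour the question asks for.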
I have written a first-person camera class for Android.
The class is really simple: the camera object has its three axes X, Y and Z, and there are functions to create the ModelView matrix (i.e. calculateModelViewMatrix()), rotate the camera around its X and Y axes, and translate the camera along its Z axis.
I think my ModelView matrix calculation is correct, and I can translate the camera along the Z axis.
Rotation around the X axis seems to work, but around the Y axis it gives strange results.
Another problem with the rotation is that instead of the camera rotating, my 3D model starts to rotate around its own axis.
I have written another implementation based on a look-at point, using OpenGL ES's GLU.gluLookAt() function to obtain the ModelView matrix, but it suffers from exactly the same problems.
EDIT
First of all thanks for your reply.
I have actually made a second implementation of the Camera class, this time using the rotation functions provided in android.opengl.Matrix class as you said.
I have provided the code below, which is much simpler.
To my surprise, the results are exactly the same.
This means that my rotation functions and Android's rotation functions produce the same results.
I did a simple test and looked at my data.
I rotated the look-at point 1 degree at a time around the Y axis and looked at the coordinates. It seems that my look-at point lags behind the exact rotation angle, e.g. at 20 degrees it has only rotated 10 to 12 degrees.
And after 45 degrees it starts reversing back.
There is a class android.opengl.Matrix, which is a collection of static methods that do everything you need on a float[16] you pass in. I highly recommend using those functions instead of rolling your own. You'd probably want setLookAtM, with the look-at point calculated from your camera angles (using sin and cos as you are doing in your code; I assume you know how to do this).
-- edit in response to new answer --
(you should probably have edited your original question, by the way; posting your answer as a new question confused me for a bit)
Ok, so here's one way of doing it. This is uncompiled and untested. I decided to build the matrix manually instead; perhaps that'll give a bit more information about what's going on...
class TomCamera {
    // These are our inputs - eye position, and the orientation of the camera.
    public float mEyeX, mEyeY, mEyeZ; // position
    public float mYaw, mPitch, mRoll; // Euler angles

    // This is the output matrix to pass to OpenGL.
    public float mCameraMatrix[] = new float[16];

    // Convert inputs to outputs.
    public void createMatrix() {
        // Create a camera matrix (YXZ order is pretty standard).
        // You may want to negate some of these constant 1s to match expectations.
        Matrix.setRotateM(mCameraMatrix, 0, mYaw, 0, 1, 0);
        Matrix.rotateM(mCameraMatrix, 0, mPitch, 1, 0, 0);
        Matrix.rotateM(mCameraMatrix, 0, mRoll, 0, 0, 1);
        Matrix.translateM(mCameraMatrix, 0, -mEyeX, -mEyeY, -mEyeZ);
    }
}
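One quick way to sanity-check a matrix built in this yaw/pitch/roll-then-translate order is the invariant that a view matrix must map the eye position to the origin. A self-contained row-major plain-Java sketch (android.opengl.Matrix is column-major, but the composition R · T(-eye) is the same idea; all names here are mine):

```java
public class CameraCheck {
    // Row-major 4x4 identity.
    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    // Row-major 4x4 product c = a * b.
    static float[] mul(float[] a, float[] b) {
        float[] c = new float[16];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[4 * i + j] += a[4 * i + k] * b[4 * k + j];
        return c;
    }

    // Rotation about axis 0=X, 1=Y, 2=Z by deg degrees.
    static float[] rot(double deg, int axis) {
        double r = Math.toRadians(deg);
        float c = (float) Math.cos(r), s = (float) Math.sin(r);
        float[] m = identity();
        switch (axis) {
            case 0: m[5] = c; m[6] = -s; m[9] = s; m[10] = c; break;
            case 1: m[0] = c; m[2] = s; m[8] = -s; m[10] = c; break;
            case 2: m[0] = c; m[1] = -s; m[4] = s; m[5] = c; break;
        }
        return m;
    }

    static float[] translation(float x, float y, float z) {
        float[] m = identity();
        m[3] = x; m[7] = y; m[11] = z;
        return m;
    }

    // Same order as createMatrix(): yaw (Y), pitch (X), roll (Z), then translate(-eye).
    public static float[] viewMatrix(float yaw, float pitch, float roll,
                                     float ex, float ey, float ez) {
        float[] r = mul(mul(rot(yaw, 1), rot(pitch, 0)), rot(roll, 2));
        return mul(r, translation(-ex, -ey, -ez));
    }

    // Transform point (x, y, z) by m, assuming w = 1.
    public static float[] apply(float[] m, float x, float y, float z) {
        return new float[] {
            m[0] * x + m[1] * y + m[2] * z + m[3],
            m[4] * x + m[5] * y + m[6] * z + m[7],
            m[8] * x + m[9] * y + m[10] * z + m[11]
        };
    }
}
```

Whatever the angles, apply(viewMatrix(..., ex, ey, ez), ex, ey, ez) should come out (0, 0, 0); if it doesn't in your port, the translate was applied on the wrong side of the rotations.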
}