Hello, I am developing a game using AndEngine and now I want my sprite to be rotated with an OnScreenAnalogController. I've initialized it, but I can't figure out how to do the rest of the job. Any example code would be much appreciated.
Thanks in advance.
P.S. The rotation should be around the sprite's own axis, and when I let go of the controller I want the sprite to keep facing the direction it has rotated to, not return to the initial one.
For sprite rotations we override the applyRotation() method:
Sprite sprite = new Sprite(0, 0, textureRegionForMySprite, getVertexBufferObjectManager()) {
    @Override
    protected void applyRotation(GLState pGLState) {
        pGLState.rotateModelViewGLMatrixf(this.mRotation, 0, 1, 0);
    }
};
I kind of found a solution using this code:
mAnalogController = new AnalogOnScreenControl(90, cameraHeight - 130, mCamera, mControllerTextureRegion, mKnobTextureRegion, 0.01f, getVertexBufferObjectManager(), new IAnalogOnScreenControlListener() {
    @Override
    public void onControlChange(BaseOnScreenControl pBaseOnScreenControl, float pValueX, float pValueY) {
        //rect.registerEntityModifier(new RotationByModifier(0.5f, MathUtils.radToDeg((float) Math.atan2(-pValueX, pValueY))));
        rect.registerEntityModifier(new RotationModifier(0.1f, rect.getRotation(), MathUtils.radToDeg((float) Math.atan2(pValueX, -pValueY))));
    }

    @Override
    public void onControlClick(AnalogOnScreenControl pAnalogOnScreenControl) {
        // TODO Auto-generated method stub
    }
});
The only thing is that when I release the controller, the sprite goes back to its initial position and rotation, which is not what I want. Any ideas on how I can fix that?
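One way to keep the last heading is to skip the rotation update when the knob is released. This is a framework-free sketch of just the angle logic, assuming (as with AndEngine's AnalogOnScreenControl) that the control reports (0, 0) when the knob snaps back to center; the headingFor helper is a hypothetical name for illustration:

```java
public class HeadingSketch {
    // Map analog knob values to a heading in degrees. When the knob snaps
    // back to (0, 0) on release, return the previous heading instead of
    // recomputing, so the sprite keeps facing the way it was rotated.
    static float headingFor(float pValueX, float pValueY, float previous) {
        if (pValueX == 0f && pValueY == 0f) {
            return previous; // knob released: do not register a new modifier
        }
        return (float) Math.toDegrees(Math.atan2(pValueX, -pValueY));
    }

    public static void main(String[] args) {
        float h = headingFor(1f, 0f, 0f);          // pushing right: ~90 degrees
        System.out.println(h);
        System.out.println(headingFor(0f, 0f, h)); // released: heading unchanged
    }
}
```

In the onControlChange callback you would call this guard first and only register a RotationModifier when it returns a new value.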
I have two textures drawn in my 2D scene, and I want to drag each of them separately. When a texture is touched and dragged, I want it to follow the pointer to its own position. How do I drag them individually, or is there another method to implement this? I am a beginner with libgdx.
My code:
public class MyGdxGame implements ApplicationListener {
    OrthographicCamera camera;
    ShapeRenderer shapeRenderer;
    float screenOffset = 10, circleRadius = 30;
    Texture firstTexture;
    Texture secondTexture;
    float firstTextureX;
    float firstTextureY;
    float secondTextureX;
    float secondTextureY;
    float touchX;
    float touchY;
    SpriteBatch batch;

    @Override
    public void create() {
        firstTexture = new Texture("b1.jpg");
        firstTextureX = 50;
        firstTextureY = 50;
        secondTexture = new Texture("b2.jpg");
        secondTextureX = 250;
        secondTextureY = 250;
        batch = new SpriteBatch();
        camera = new OrthographicCamera();
        shapeRenderer = new ShapeRenderer();
        shapeRenderer.setAutoShapeType(true);
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        shapeRenderer.setProjectionMatrix(camera.combined);
        shapeRenderer.begin();
        shapeRenderer.setColor(Color.RED);
        shapeRenderer.circle(camera.viewportWidth / 2, camera.viewportHeight / 2, circleRadius);
        shapeRenderer.rect(screenOffset, screenOffset, camera.viewportWidth - 2 * screenOffset, camera.viewportHeight - 2 * screenOffset);
        shapeRenderer.line(screenOffset, screenOffset, camera.viewportWidth - screenOffset, camera.viewportHeight - screenOffset);
        shapeRenderer.line(screenOffset, camera.viewportHeight - screenOffset, camera.viewportWidth - screenOffset, screenOffset);
        shapeRenderer.line(screenOffset, camera.viewportHeight / 2, camera.viewportWidth - screenOffset, camera.viewportHeight / 2);
        shapeRenderer.line(camera.viewportWidth / 2, screenOffset, camera.viewportWidth / 2, camera.viewportHeight - screenOffset);
        shapeRenderer.end();
        batch.begin();
        batch.draw(firstTexture, firstTextureX, firstTextureY);
        batch.draw(secondTexture, secondTextureX, secondTextureY);
        batch.end();
    }
}
Well, you should save the current position of the texture and, on each update, move the texture by checking the movement of the mouse while the button is pressed. This is the general idea, and it is valid for any framework or code.
I recommend moving the texture into its own class, with a getter and a setter for the position and the texture, so it is easier to manage.
Using libgdx you can check whether a touch has just started with
Gdx.input.justTouched();
On every update you can check the last position of the mouse and calculate the difference from its new position using
Gdx.input.getX()
Gdx.input.getY()
To synchronize the screen position with the position in your camera's world, you should use unproject on your camera, for example:
Vector3 mousePos = new Vector3();
mousePos.x = Gdx.input.getX();
mousePos.y = Gdx.input.getY();
mousePos.z = 0;
camera.unproject(mousePos); //this will convert the screen position to your camera position
TL;DR: you need to remember the position of the mouse when it was last clicked, calculate the difference on the next update, and then apply that difference to the position of the texture.
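The per-frame bookkeeping described above can be sketched in plain Java without any framework (the field and method names here are made up for illustration; in libgdx the coordinates would come from camera.unproject as shown):

```java
// Sketch of the drag bookkeeping: remember the pointer position from the
// previous frame and move the texture by the frame-to-frame difference.
public class DragSketch {
    float textureX = 50f, textureY = 50f; // current texture position
    float lastX, lastY;                   // pointer position on the last frame
    boolean dragging;

    void touchDown(float x, float y) {
        dragging = true;
        lastX = x;
        lastY = y;
    }

    void touchMoved(float x, float y) {
        if (!dragging) return;
        textureX += x - lastX; // apply the difference since the last frame
        textureY += y - lastY;
        lastX = x;             // the new position becomes the "last" one
        lastY = y;
    }

    void touchUp() {
        dragging = false;
    }
}
```

Dragging from (10, 10) to (20, 15) moves a texture that started at (50, 50) to (60, 55), and further moves accumulate the same way.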
By the way, although this is for when you get more experience, you can create a class that implements InputProcessor and register it with libgdx, for example:
public class CameraControllerDesktop implements InputProcessor, ControllerListener {

    public CameraControllerDesktop() {
        Gdx.input.setInputProcessor(this);
    }
and then you could override the method
@Override
public boolean touchDragged(int screenX, int screenY, int pointer) {
    return false;
}
to let libgdx report the position of whatever you are dragging.
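To drag each texture separately, as the question asks, you also need to decide which texture (if any) is under the pointer when the touch starts. A minimal point-in-rectangle hit test covers that; the 64x64 sizes below are assumptions (in libgdx you would use texture.getWidth() and texture.getHeight()):

```java
public class HitTest {
    // Returns true if the point (px, py) lies inside the rectangle whose
    // bottom-left corner is (tx, ty) with the given width and height.
    static boolean hit(float px, float py, float tx, float ty, float w, float h) {
        return px >= tx && px <= tx + w && py >= ty && py <= ty + h;
    }

    public static void main(String[] args) {
        // First texture at (50, 50), second at (250, 250), both 64x64 (assumed).
        System.out.println(hit(60f, 60f, 50f, 50f, 64f, 64f));   // true: inside the first
        System.out.println(hit(60f, 60f, 250f, 250f, 64f, 64f)); // false: not in the second
    }
}
```

On touchDown, run the unprojected pointer through this test for each texture and remember which one matched; then apply the drag deltas only to that texture.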
Sorry for not writing out the entire solution, but I'm pretty sure that with this info you will sooner or later be able to make the game you want.
Objective: to rotate an image at the center of the screen by an amount equal to the left or right movement of a touchDragged event.
Right now I have a basic Stage that adds an actor (centerMass.png). The stage is created and rendered like this:
public class Application extends ApplicationAdapter {
    Stage stageGamePlay;

    @Override
    public void create() {
        // set up game stage variables
        stageGamePlay = new Stage(new ScreenViewport());
        stageGamePlay.addActor(new CenterMass(new Texture(Gdx.files.internal("centerMass.png"))));
        Gdx.input.setInputProcessor(stageGamePlay);
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(255f / 255, 249f / 255, 236f / 255, 1f);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        // before drawing, update actions that have changed
        stageGamePlay.act(Gdx.graphics.getDeltaTime());
        stageGamePlay.draw();
    }
}
I then have a separate class file that contains the CenterMass class, extending Image. I am familiar enough to know I could extend Actor instead, but I am not sure what benefit I would gain from using Actor over Image.
In the CenterMass class I create the texture, set bounds, set touchable and center it on the screen.
Inside CenterMass class I also have an InputListener listening for events. I have an override set for touchDragged where I am trying to get the X and Y of the drag, and use that to set the rotate actions accordingly. That class looks like this:
// extend Image vs Actor classes
public class CenterMass extends Image {

    public CenterMass(Texture centerMassSprite) {
        // let the parent be aware
        super(centerMassSprite);
        setBounds(getX(), getY(), getWidth(), getHeight());
        setTouchable(Touchable.enabled);
        setPosition(Gdx.graphics.getWidth() / 2, Gdx.graphics.getHeight() / 2);
        setRotation(90f);
        addListener(new InputListener() {
            private int dragX, dragY;
            private float duration;
            private float rotateBy = 30f;

            @Override
            public void touchDragged(InputEvent event, float x, float y, int pointer) {
                float dX = (float) (x - dragX) / (float) Gdx.graphics.getWidth();
                float dY = (float) (dragY - y) / (float) Gdx.graphics.getHeight();
                duration = 1.0f; // 1 second
                Actions.sequence(
                        Actions.parallel(
                                Actions.rotateBy(rotateBy, duration),
                                Actions.moveBy(dX, dY, duration)
                        )
                );
            }
        });
    }

    @Override
    protected void positionChanged() {
        //super.positionChanged();
    }

    @Override
    public void draw(Batch batch, float parentAlpha) {
        // draw needs to be available for changing color and rotation, I think
        batch.setColor(this.getColor());
        // cast back to texture because we use Image vs Actor and want to rotate and change color safely
        ((TextureRegionDrawable) getDrawable()).draw(batch, getX(), getY(),
                getOriginX(), getOriginY(),
                getWidth(), getHeight(),
                getScaleX(), getScaleY(),
                getRotation());
    }

    @Override
    public void act(float delta) {
        super.act(delta);
    }
}
The Problem:
I have not been able to get it to rotate the way I would like. I have been able to get it to shift around in unpredictable ways. Any guidance would be much appreciated.
From your code everything looks fine, except that you never set the origin of the image. Without setting it, the origin defaults to (0, 0), the bottom left of your image.
So if you want to rotate the image around its centre, you have to set the origin to (imageWidth/2, imageHeight/2):
setOrigin(imageWidth / 2, imageHeight / 2); // something like this
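To see why the origin matters, here is the underlying rotation math as a framework-free sketch (the 100x100 image size is an assumption): the same 90-degree rotation sends a corner of the image to very different places depending on the pivot.

```java
public class OriginDemo {
    // Rotate the point (px, py) by deg degrees around the origin (ox, oy).
    static float[] rotate(float px, float py, float ox, float oy, float deg) {
        double r = Math.toRadians(deg);
        float dx = px - ox, dy = py - oy;
        return new float[] {
            ox + (float) (dx * Math.cos(r) - dy * Math.sin(r)),
            oy + (float) (dx * Math.sin(r) + dy * Math.cos(r))
        };
    }

    public static void main(String[] args) {
        // Bottom-right corner (100, 0) of a 100x100 image, rotated 90 degrees:
        float[] aboutCorner = rotate(100f, 0f, 0f, 0f, 90f);   // pivot at (0, 0)
        float[] aboutCenter = rotate(100f, 0f, 50f, 50f, 90f); // pivot at (50, 50)
        // About (0, 0) the corner swings to roughly (0, 100): the whole image shifts.
        // About the center it goes to roughly (100, 100): the image spins in place.
        System.out.println(aboutCorner[0] + ", " + aboutCorner[1]);
        System.out.println(aboutCenter[0] + ", " + aboutCenter[1]);
    }
}
```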
I have a Sprite in AndEngine. When I use the first code below, it works: it is displayed on the scene and it rotates. But when I use the sprite's onManagedUpdate method to detect collisions or anything else, the sprite stops rotating...
circleBox = new CircleBox(x, y, resourcesManager.circleBoxRegion, 2, vbom);
There is a rotate function in the CircleBox class, and it rotates with the code above. When I use the code below it does not rotate. Why?
circleBox = new CircleBox(x, y, resourcesManager.circleBoxRegion, 2, vbom) {
    @Override
    protected void onManagedUpdate(float pSecondsElapsed) {
        if (player.collidesWith(this)) {
            player.setCurrentTileIndex(8); // on death, switch to the standing-man pose
            player.getBody().setTransform(new Vector2(100 / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT,
                    400 / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT), 0); // 1 = 32
        }
    }
};
I think you should call super.onManagedUpdate(pSecondsElapsed) in your onManagedUpdate() method; otherwise the parent class's update logic, which drives the rotation, never runs.
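A minimal, framework-free illustration of why the super call matters (the class names here are hypothetical stand-ins for AndEngine's): the parent's onManagedUpdate is what advances the rotation, and an override that does not forward to it silently disables that behavior.

```java
public class SuperCallDemo {
    static class Entity {
        float rotation;
        void onManagedUpdate(float pSecondsElapsed) {
            rotation += 90f * pSecondsElapsed; // the parent rotates the entity
        }
    }

    static class CircleBoxSketch extends Entity {
        @Override
        void onManagedUpdate(float pSecondsElapsed) {
            super.onManagedUpdate(pSecondsElapsed); // without this line the rotation stops
            // ... collision checks go here ...
        }
    }

    public static void main(String[] args) {
        CircleBoxSketch box = new CircleBoxSketch();
        box.onManagedUpdate(1f);
        System.out.println(box.rotation); // 90.0: the parent logic still ran
    }
}
```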
I have customized BoundCamera and have overridden the update method as:
@Override
public void onUpdate(float pSecondsElapsed) {
    super.onUpdate(pSecondsElapsed);
    if (chaseEntity != null) {
        tempHeight = (chaseEntity.getY() * PIXEL_TO_METER_RATIO_DEFAULT) + PlayLevelActivity.CAMERA_HEIGHT / 2;
        if (tempHeight < heightCovered) {
            setBounds(0, 0, PlayLevelActivity.CAMERA_WIDTH, tempHeight);
            heightCovered = tempHeight;
        }
    }
}
and have initialized the camera as:
mCamera = new MyBoundCamera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT, 0, CAMERA_WIDTH, 0, CAMERA_HEIGHT);
I want to keep the chase entity in the center at all times. The problem I am facing is that at the start the camera chases the entity, but as the entity goes higher and higher, it moves beyond the screen bounds in the y direction. I am updating the camera bounds in the onUpdate method to keep the entity always centered, but it is not working. chaseEntity.getY() returns the physics body's y position. Does anyone know where I am going wrong?
If you use
this.mBoundChaseCamera.setChaseEntity(sprite);
setBoundsEnabled(false);
then the sprite will still be in the center of the screen all the time. The downside is that the sprite can go beyond the bounds. You would have to implement your own method to keep the sprite within the bounds.
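Keeping the sprite inside the bounds yourself can be as simple as clamping its position on every update. This is a sketch of just the clamp; the level and sprite dimensions are assumptions:

```java
public class ClampSketch {
    // Clamp v into the closed interval [min, max].
    static float clamp(float v, float min, float max) {
        return Math.max(min, Math.min(max, v));
    }

    public static void main(String[] args) {
        float levelWidth = 480f, spriteWidth = 32f; // assumed dimensions
        // A sprite trying to move to x = 470 would stick out past the edge;
        // clamp pulls it back to the last valid position.
        System.out.println(clamp(470f, 0f, levelWidth - spriteWidth)); // 448.0
        System.out.println(clamp(-10f, 0f, levelWidth - spriteWidth)); // 0.0
    }
}
```

Apply the same clamp to the y coordinate with the level's height to keep the sprite fully inside the bounds in both axes.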
In the comments you mentioned that at some point you want the sprite to fall but don't want it to remain in the center while it is falling. You could just use
this.mBoundChaseCamera.setChaseEntity(null);
then drop the sprite to the bottom of the screen. That should provide an effect similar to papi jump.
I started working with libgdx a day ago. I wanted to create a triangle whose points are placed so that two corners are at the bottom left and bottom right and one point is at the top middle of the screen. I am using a perspective camera. My example code is:
public class Test1 implements ApplicationListener {
    PerspectiveCamera camera;
    Mesh triangle;

    @Override
    public void create() {
        camera = new PerspectiveCamera(67, 45, 45 / (Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight()));
        camera.near = 1;
        camera.far = 200;
        triangle = createTriangle();
    }

    @Override
    public void resize(int width, int height) {
    }

    @Override
    public void render() {
        GL10 gl = Gdx.gl10;
        gl.glClearColor(0, 0, 0, 1);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glEnable(GL10.GL_DEPTH_TEST);
        camera.update();
        camera.apply(gl);
        triangle.render(Gdx.gl10.GL_TRIANGLES);
    }

    public Mesh createTriangle() {
        float[] vertices = {-45f, -27f, -67,
                45f, -27f, -67,
                0, 27f, -67
        };
        short[] indices = {0, 1, 2};
        Mesh mesh = new Mesh(true, 3, 3, new VertexAttribute(Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE));
        mesh.setVertices(vertices);
        mesh.setIndices(indices);
        return mesh;
    }

    @Override
    public void pause() {
    }

    @Override
    public void resume() {
    }

    @Override
    public void dispose() {
    }
}
I read that OpenGL is unitless, so I decided to make the view 45 units wide and set its height accordingly. When I run the application, the triangle is not as I expected: it is smaller than the width and height of the screen. I have no prior experience with 3D. Could you point out where I am going wrong?
Here is the screen shot:
You say you wanted to create a rectangle, but you have only specified 3 vertices. Did you mean a triangle, and if so, what seems to be the problem with your result?
Cheers
EDIT 1:
Having reread your question I apologise for the answer I gave.
You need to implement the resize function and create your camera there, as resize is called once when the window is created, before render is called. Something along the lines of
@Override
public void resize(int width, int height) {
    float aspectRatio = (float) width / (float) height;
    camera = new PerspectiveCamera(67, 2f * aspectRatio, 2f);
}
should be what you're looking for. Read more about it in the camera section of the libgdx documentation. It may be about an orthographic camera, but the basic principle still applies.
Edit 2:
Units in OpenGL are application specific. You have to decide what the units you use mean.
There are, however, conventions. For instance, by default the camera "looks" down the negative Z axis, with positive X to the right and positive Y up. This is called a right-handed system.
You have set up your 3D camera with a view width of 45 (so an object of width 45 right at the camera would fill the screen) and a height of 45 divided by the aspect ratio. Something we must remember is that objects in 3D appear smaller when they are far away than when they are up close. You may have been expecting the triangle to fill the screen, but the Z coordinates of the triangle's points are far away (67 units from the camera), which makes the triangle look smaller.
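The effect can be quantified. Assuming libgdx's PerspectiveCamera, where the first constructor argument is the vertical field of view in degrees, the visible height at distance d is 2 * d * tan(fov / 2). A quick check shows why the 54-unit-tall triangle (spanning y = -27 to 27) at z = -67 cannot fill the screen:

```java
public class FovCheck {
    // Visible world height at the given distance, for a camera with the
    // given vertical field of view in degrees.
    static float visibleHeight(float fovDegrees, float distance) {
        return 2f * distance * (float) Math.tan(Math.toRadians(fovDegrees / 2f));
    }

    public static void main(String[] args) {
        // FOV 67 degrees, triangle plane at distance 67 from the camera.
        float visible = visibleHeight(67f, 67f);
        System.out.println(visible);       // roughly 88.7 units are visible
        System.out.println(54f / visible); // the triangle covers only ~61% of the height
    }
}
```

To make the triangle exactly fill the screen vertically, you would either move it closer or enlarge it until its height matches the visible height at its distance.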
If you are only interested in 2D, use an OrthographicCamera instead, which keeps what you draw the same size regardless of its distance from the camera (it has no perspective).