I started working with libgdx a day ago. I want to create a triangle whose corners are at the bottom left, bottom right, and top middle of the screen. I am using a perspective camera. My code is:
public class Test1 implements ApplicationListener{
PerspectiveCamera camera;
Mesh triangle;
@Override
public void create() {
// TODO Auto-generated method stub
camera = new PerspectiveCamera(67, 45, 45 / (Gdx.graphics.getWidth() / (float)Gdx.graphics.getHeight()));
camera.near = 1;
camera.far = 200;
triangle = createTriangle();
}
@Override
public void resize(int width, int height) {
// TODO Auto-generated method stub
}
@Override
public void render() {
// TODO Auto-generated method stub
GL10 gl = Gdx.gl10;
gl.glClearColor(0, 0, 0, 1);
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
gl.glEnable(GL10.GL_DEPTH_TEST);
camera.update();
camera.apply(gl);
triangle.render(GL10.GL_TRIANGLES);
}
public Mesh createTriangle() {
float[] vertices = {-45f, -27f, -67,
45f, -27f, -67,
0, 27f, -67
};
short[] indices = {0,1,2};
Mesh mesh = new Mesh(true, 3, 3, new VertexAttribute(Usage.Position, 3, ShaderProgram.POSITION_ATTRIBUTE));
mesh.setVertices(vertices);
mesh.setIndices(indices);
return mesh;
}
@Override
public void pause() {
// TODO Auto-generated method stub
}
@Override
public void resume() {
// TODO Auto-generated method stub
}
@Override
public void dispose() {
// TODO Auto-generated method stub
}
}
I read that OpenGL is unitless, so I decided to make the view 45 units wide and set its height accordingly. When I run the application, the triangle is not what I expected: it is smaller than the width and height of the screen. I have no prior experience in 3D. Can you guide me on where I am going wrong?
Here is the screen shot:
You say you wanted to create a rectangle, but you have only specified 3 vertices. Did you mean a triangle, and if so, what seems to be the problem with your result?
Cheers
EDIT 1:
Having reread your question, I apologise for the answer I gave.
You need to implement the resize function and create your camera there, since resize is called once when the window is created, before render is called. Something along the lines of
@Override
public void resize(int width, int height) {
float aspectRatio = (float) width / (float) height;
camera = new PerspectiveCamera(67, 2f * aspectRatio, 2f);
}
should be what you're looking for. Read more about it here in the camera section. It may be about an orthographic camera, but the basic principle still applies.
Edit 2:
Units in OpenGL are application specific. You have to set up the units you use yourself.
There are, however, conventions: for instance, by default the camera "looks" down the negative Z axis, with positive X to the right and positive Y up. This is called a right-handed system.
You have set up your 3D camera with a view width of 45 (so an object at the camera with a width of 45 would fill the screen) and a height of 45 divided by the aspect ratio. Remember that in 3D, objects that are far away appear smaller than when they are up close. You may have expected the triangle to fill the screen, but the Z coordinate of its points places it 67 units from the camera, which makes it look smaller. With a 67-degree vertical field of view, the visible height at a distance of 67 units is roughly 2 * 67 * tan(33.5°) ≈ 89 units, so your 54-unit-tall triangle only covers part of the view.
If you are only interested in 2D, use an OrthographicCamera instead: with it, what you draw does not change size with distance from the camera (it has no perspective).
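For example, a minimal 2D setup (just a sketch, assuming the same GL10 code path as in your snippet, and reusing your triangle mesh) could look like this:
// Sketch: an OrthographicCamera with a 45-unit-wide view. With an
// orthographic projection, the Z coordinate of the vertices no longer
// changes their on-screen size.
OrthographicCamera orthoCamera;
@Override
public void create() {
    float aspectRatio = Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();
    orthoCamera = new OrthographicCamera(45, 45 / aspectRatio);
    triangle = createTriangle();
}
@Override
public void render() {
    GL10 gl = Gdx.gl10;
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    orthoCamera.update();
    orthoCamera.apply(gl);              // same apply call as with the perspective camera
    triangle.render(GL10.GL_TRIANGLES);
}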
Related
I'm working on a simple game and I have a problem with setting the map size. I am using an OrthographicCamera and I want the whole map to be visible. How should I set the proper size of the viewport in new OrthographicCamera(viewport_width, viewport_height)?
Currently I'm passing some value, and when I draw elements on the map I don't see them because they are out of bounds; if I make the width and height enormous, I can see them. I want the whole map to be visible so everything is drawn in sight of the user. I'm not sure where I'm making a mistake.
Your map is square and most devices have rectangular screens, so you need to keep your map centered on the screen, whether you're using portrait or landscape mode.
public class TileTest extends ApplicationAdapter {
ExtendViewport extendViewport;
OrthogonalTiledMapRenderer mapRenderer;
OrthographicCamera camera;
float worldWidth,worldHeight;
@Override
public void create() {
float tileWidth=64,tileHeight=64;
float mapWidth=20,mapHeight=20;
worldWidth=tileWidth*mapWidth;
worldHeight=tileHeight*mapHeight;
camera=new OrthographicCamera();
extendViewport =new ExtendViewport(worldWidth,worldHeight,camera);
TmxMapLoader mapLoader=new TmxMapLoader();
TiledMap map=mapLoader.load("square.tmx");
mapRenderer=new OrthogonalTiledMapRenderer(map);
}
@Override
public void render() {
Gdx.gl.glClearColor(0, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
mapRenderer.setView(camera);
mapRenderer.render();
}
@Override
public void dispose() {
mapRenderer.dispose();
}
@Override
public void resize(int width, int height) {
extendViewport.update(width,height,false);
extendViewport.getCamera().position.set(worldWidth/2,worldHeight/2,0);
extendViewport.getCamera().update();
}
}
Output is:
In my game I am drawing a scene2d Stage using a custom world coordinate system.
I then want to draw a debug UI with some text like FPS on top of that, but simply using screen coordinates, i.e. where the text is positioned in the upper right corner of the screen. My main render method looks something like this:
@Override
public void render(float delta) {
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
gameStage.act(delta);
gameStage.draw();
debugInfo.render(delta);
}
The Stage sets a custom viewport:
public GameStage(GameDimensions dimensions, OrthographicCamera camera) {
// mapWidth will be smaller than screen width
super(new FitViewport(dimensions.mapWidth, dimensions.mapHeight, camera));
...
}
The DebugInfo render code looks like this:
public DebugInfo() {
Matrix4 mat = new Matrix4();
mat.setToOrtho2D(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setProjectionMatrix(mat);
}
@Override
public void render(float delta) {
batch.begin();
font.setScale(2f);
drawText("FPS " + Gdx.graphics.getFramesPerSecond());
batch.end();
}
private void drawText(String text) {
final BitmapFont.TextBounds textBounds = font.getBounds(text);
textPos.x = Gdx.graphics.getWidth() - textBounds.width - MARGIN;
textPos.y = Gdx.graphics.getHeight() - MARGIN;
textPos.z = 0;
font.draw(batch, text, textPos.x, textPos.y);
}
The problem is that even though I make no reference whatsoever to the stage's world system or its camera, and strictly use pixel values with a matching projection matrix, the text appears in the upper right corner of the stage's viewport, which is smaller than the screen. I want it to appear in the screen's corner, detached from the stage.
I could pin the problem down to the creation of the Stage instance itself; simply not drawing it is not enough. I actually have to remove the call to super(new FitViewport(...)) to prevent this from happening.
This leads me to believe that I need to somehow reset the viewport of the rendering pipeline before I render the UI overlay? How would I do that?
My first answer was wrong. Now that I've looked into the Viewport source code, I can see that it uses OpenGL's glViewport, which indeed affects all subsequent draw calls. So you have to call glViewport with the appropriate dimensions before rendering your stuff:
@Override
public void render(float delta) {
Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.begin();
font.setScale(2f);
drawText("FPS " + Gdx.graphics.getFramesPerSecond());
batch.end();
}
Hello, I am developing a game using AndEngine and I want my sprite to be rotated with an OnScreenAnalogController. I've initialized it, but I can't figure out how to do the rest. Any example code or guidance would be much appreciated.
Thanks in advance.
P.S. The rotation should be around the sprite's own axis, and when I let go of the controller I want the sprite to keep facing the direction it has rotated to, not the initial one.
For sprite rotations we override the applyRotation() method:
Sprite sprite = new Sprite(0, 0, textureRegionForMySprite, getVertexBufferObjectManager()){
@Override
protected void applyRotation(GLState pGLState) {
pGLState.rotateModelViewGLMatrixf(this.mRotation, 0, 1, 0);
}
};
I kinda found a solution using this code:
mAnalogController = new AnalogOnScreenControl(90, cameraHeight - 130, mCamera, mControllerTextureRegion, mKnobTextureRegion, 0.01f, getVertexBufferObjectManager(), new IAnalogOnScreenControlListener(){
@Override
public void onControlChange(
BaseOnScreenControl pBaseOnScreenControl, float pValueX,
float pValueY) {
//rect.registerEntityModifier(new RotationByModifier(0.5f, MathUtils.radToDeg((float) Math.atan2(-pValueX, pValueY))));
rect.registerEntityModifier(new RotationModifier(0.1f, rect.getRotation(), MathUtils.radToDeg((float) Math.atan2(pValueX, -pValueY))));
}
@Override
public void onControlClick(
AnalogOnScreenControl pAnalogOnScreenControl) {
// TODO Auto-generated method stub
}
});
The only thing is that when I release the controller, the sprite goes back to its initial position and rotation, which is not what I want. Any ideas on how I can prevent that?
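One way to fix that (a sketch, assuming onControlChange reports pValueX = pValueY = 0 once the knob springs back to centre) is to ignore that neutral position, so the sprite keeps its last rotation:
@Override
public void onControlChange(BaseOnScreenControl pBaseOnScreenControl,
        float pValueX, float pValueY) {
    // The knob reports (0, 0) after being released; skipping that case keeps
    // the sprite facing the direction it was last rotated to.
    if (pValueX == 0 && pValueY == 0) {
        return;
    }
    rect.registerEntityModifier(new RotationModifier(0.1f, rect.getRotation(),
            MathUtils.radToDeg((float) Math.atan2(pValueX, -pValueY))));
}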
I have customized BoundCamera and have overridden the update method as follows:
@Override
public void onUpdate(float pSecondsElapsed) {
// TODO Auto-generated method stub
super.onUpdate(pSecondsElapsed);
if(chaseEntity != null) {
tempHeight = (chaseEntity.getY() * PIXEL_TO_METER_RATIO_DEFAULT) + PlayLevelActivity.CAMERA_HEIGHT/2;
if(tempHeight < heightCovered) {
setBounds(0, 0, PlayLevelActivity.CAMERA_WIDTH, tempHeight);
heightCovered = tempHeight;
}
}
}
and have initialized the camera as:
mCamera = new MyBoundCamera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT, 0, CAMERA_WIDTH, 0, CAMERA_HEIGHT);
I want to keep the chase entity in the center at all times. The problem I am facing is that the camera chases the entity at the start, but as the entity goes higher, it moves beyond the screen bounds in the y direction. I am updating the camera bounds in the onUpdate method to keep the entity always in the center, but it is not working. chaseEntity.getY() returns the physics body's y position. Does anyone know where I am going wrong?
If you use
this.mBoundChaseCamera.setChaseEntity(sprite);
setBoundsEnabled(false);
then the sprite will still be in the center of the screen all the time. The downside is that the sprite can go beyond the bounds. You would have to implement your own method to keep the sprite within the bounds.
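A rough sketch of such a method, placed in a Camera subclass like your MyBoundCamera and using AndEngine's getCenterX()/getCenterY()/setCenter(); WORLD_WIDTH and WORLD_HEIGHT are hypothetical constants for your level size:
// Sketch: clamp the camera centre so the chased entity cannot drag the view
// outside the world. WORLD_WIDTH / WORLD_HEIGHT are hypothetical constants.
@Override
public void onUpdate(float pSecondsElapsed) {
    super.onUpdate(pSecondsElapsed);
    float halfWidth = getWidth() / 2;
    float halfHeight = getHeight() / 2;
    float centerX = Math.max(halfWidth, Math.min(WORLD_WIDTH - halfWidth, getCenterX()));
    float centerY = Math.max(halfHeight, Math.min(WORLD_HEIGHT - halfHeight, getCenterY()));
    setCenter(centerX, centerY);
}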
In the comments you mentioned that at some point you want the sprite to fall, but you don't want it to remain in the center while it is falling. You could just use
this.mBoundChaseCamera.setChaseEntity(null);
and then let the sprite drop to the bottom of the screen. That should give an effect similar to PapiJump.
It is possible to render a view at a low resolution and then scale it up to fit the actual size of your view using the setFixedSize() method of a SurfaceHolder. However, the scaling is done with some kind of interpolation, which blurs everything.
Is there any method for changing the method of interpolation to nearest neighbour or just turning it off?
Here is an example of what I mean, made with a 4x4 surface in a fullscreen view:
Left image: This is how I want the result to look (here achieved by drawing a nonfiltered bitmap)
Right image: This is what happens when the 4x4 canvas is scaled to fullscreen.
Sample code for generating the right image if anyone's interested:
public class ScaleView extends SurfaceView implements SurfaceHolder.Callback {
private final static float[] points = {0,0, 2,0, 4,0, 1,1, 3,1, 0,2, 2,2, 4,2, 1,3, 3,3};
private Paint white;
public ScaleView(Context context) {
super(context);
white = new Paint();
white.setColor(0xFFFFFFFF);
getHolder().addCallback(this);
}
@Override
public void surfaceChanged(SurfaceHolder holder, int format, int width, int height){
Canvas c = holder.lockCanvas();
try{
c.drawPoints(points, white);
}finally{
holder.unlockCanvasAndPost(c);
}
}
@Override
public void surfaceCreated(SurfaceHolder holder){
holder.setFixedSize(4, 4);
}
@Override
public void surfaceDestroyed(SurfaceHolder holder){}
}
Note: The square pattern above is just an example of the effect I want to avoid. I am looking for a general solution that can be applied to any view (with a surfaceholder) with any content.
Sorry, you can't control this. It is not defined what kind of scaling will be done on these surfaces; it depends on the hardware and how it is configured. Also, this kind of extreme scaling really should be avoided, since in some cases the hardware can't do it and you will end up on slower paths. (For example, if surfaces are being put into overlays, many hardware overlay engines can't handle that kind of extreme scaling.)
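If the blocky look is all you need, the workaround the question already hints at (drawing an unfiltered bitmap yourself) is the usual route. A minimal sketch, assuming your low-resolution content lives in a small Bitmap called lowResBitmap (a hypothetical name):
// Sketch: scale a small bitmap up to the full canvas without filtering,
// giving a nearest-neighbour (blocky) result instead of a blurred one.
Paint scalePaint = new Paint();
scalePaint.setFilterBitmap(false); // no bilinear filtering when scaling
Canvas c = holder.lockCanvas();
try {
    Rect src = new Rect(0, 0, lowResBitmap.getWidth(), lowResBitmap.getHeight());
    Rect dst = new Rect(0, 0, c.getWidth(), c.getHeight());
    c.drawBitmap(lowResBitmap, src, dst, scalePaint);
} finally {
    holder.unlockCanvasAndPost(c);
}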