My camera is set up with a normalized space (1 unit in height and 1.5 units in width), but it seems the circle algorithm of the ShapeRenderer only works properly in integer space. Is there a workaround?
public void create() {
    camera = new OrthographicCamera();
    camera.setToOrtho(false, 1.5f, 1f);
    shapes = new ShapeRenderer();
}
public void drawScene() {
    Gdx.gl.glClearColor(1, 1, 1, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    shapes.setProjectionMatrix(camera.combined);
    shapes.begin(ShapeType.Circle);
    shapes.setColor(1, 0, 0, 1);
    shapes.circle(0.75f, 0.5f, 0.5f);
    shapes.end();
}
ShapeRenderer estimates how many segments it needs to draw a circle as 6 times the cube root of the radius.
In your case that works out to 4 segments (the cube root of 0.5 is about 0.793, and 6 * 0.793 is roughly 4.76, truncated to 4), which is what you see.
According to the documentation, ShapeRenderer assumes units are screen pixels, so for pixel-sized radii these are reasonable estimates.
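The workaround is to pass the segment count yourself: circle() has an overload that takes an explicit number of segments, which bypasses the radius-based estimate. A minimal sketch based on the code above (64 segments is just an illustrative choice):

shapes.setProjectionMatrix(camera.combined);
shapes.begin(ShapeType.Circle);
shapes.setColor(1, 0, 0, 1);
// Explicit segment count instead of the 6 * cbrt(radius) estimate.
shapes.circle(0.75f, 0.5f, 0.5f, 64);
shapes.end();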
I would like to render spheres, like in the image below, attached to anchors.
Unfortunately all the examples are based on Sceneform, which I don't want to use. The spheres should float freely in the air without being bound to a flat surface.
With the Hello_AR example from Google I was able to render a 3D sphere into the space and fix it by attaching it to an anchor.
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    ...
    backgroundRenderer.createOnGlThread(this);
    virtualObject.createOnGlThread(this, "models/sphere.obj", "models/sphere.png");
    virtualObject.setMaterialProperties(0.0f, 0.0f, 0.0f, 0.0f);
    ...
}
@Override
public void onDrawFrame(GL10 gl) {
    ...
    // Get projection matrix.
    float[] projmtx = new float[16];
    camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

    // Get camera matrix and draw.
    float[] viewmtx = new float[16];
    camera.getViewMatrix(viewmtx, 0);

    // Compute lighting from average intensity of the image.
    // The first three components are color scaling factors.
    // The last one is the average pixel intensity in gamma space.
    final float[] colorCorrectionRgba = new float[] {255f, 0, 0, 255f};
    frame.getLightEstimate().getColorCorrection(colorCorrectionRgba, 0);

    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (Anchor anchor : anchors) {
        if (anchor.getTrackingState() != TrackingState.TRACKING) {
            continue;
        }
        anchor.getPose().toMatrix(anchorMatrix, 0);
        virtualObject.updateModelMatrix(anchorMatrix, scaleFactor);
        float[] objColor = new float[] {255f, 255f, 255f, 0};
        virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba, objColor);
    }
}
With that I am able to create a black sphere 1 meter away from the camera in the air.
My questions:
Is this a good / correct way to do it?
How do I change the color of the sphere? The color values seem to have no effect on the object.
How do I make it transparent?
Thank you very much.
You need to attach it to an anchor, but you don't need to use Sceneform; Sceneform is only one of the two available approaches.
As for color and transparency, it depends on how you render your object. In your code I can see that you're using a material with a texture, so it's hard to change the color.
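For transparency, one common OpenGL ES approach (a hedged sketch, not taken from the HelloAR sample, and assuming the object's shader actually applies the alpha from objColor) is to enable alpha blending around the draw call:

// Standard alpha blending: a fragment alpha below 1 makes the sphere translucent.
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba, objColor);
GLES20.glDisable(GLES20.GL_BLEND);

Transparent objects should also be drawn after the camera background and any opaque geometry, otherwise there is nothing behind them to blend with.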
I have simple libgdx code for a game which contains 1 texture and a particle effect with 4 emitters.
Whenever I resume the game screen or lock and unlock the phone, I get a delay of about 3 seconds.
How do I reduce this delay?
One thing I have tried is reducing the texture image size.
Before, I had a texture image of 300 KB and used to get a delay of 5 seconds; now I have reduced it to 60 KB
and I get a delay of 3 seconds.
Is there any way I can reduce the delay programmatically? I don't want to show a splash screen.
Code:
@Override
public void show() {
    SpriteBatch batch = new SpriteBatch();
    Texture tex = new Texture(Gdx.files.internal("data/bg1.jpg"));
    Sprite sprite = new Sprite(tex);
    ParticleEffect pe = new ParticleEffect();
    pe.load(Gdx.files.internal("data/pe1.p"), Gdx.files.internal("data"));
    pe.start();
}
@Override
public void render(float delta) {
    Gdx.gl.glClearColor(0f, 0f, 0f, 1f);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    Gdx.gl.glViewport(0, 0, (int) Width, (int) Height);
    camera.update();
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    sprite.draw(batch);
    pe.draw(batch, delta);
    batch.end();
}
Why don't you try making the local variables in show() into fields and initializing them in the constructor rather than in the show() method? That way the texture and particle effect are loaded once instead of every time the screen is shown. I would still recommend measuring which part takes the most time, as one of the comments suggests.
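A minimal sketch of that idea, assuming a Screen-based setup (the class name and structure are illustrative; the asset paths are taken from the question):

public class GameScreen extends ScreenAdapter {
    private final SpriteBatch batch;
    private final Texture tex;
    private final Sprite sprite;
    private final ParticleEffect pe;

    public GameScreen() {
        // Heavy loading happens once, when the screen object is created.
        batch = new SpriteBatch();
        tex = new Texture(Gdx.files.internal("data/bg1.jpg"));
        sprite = new Sprite(tex);
        pe = new ParticleEffect();
        pe.load(Gdx.files.internal("data/pe1.p"), Gdx.files.internal("data"));
    }

    @Override
    public void show() {
        // Cheap: nothing is reloaded here, the effect is only restarted.
        pe.start();
    }
}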
I'm working on a word game and I was dynamically creating the textures for the letter tiles when the game loads, consisting of a background image and a font.
To do this I was drawing pixmaps onto pixmaps. This was all fine until I started working on scaling: the font scaling on the pixmaps was terrible, even with bilinear filtering turned on (left image below), even though my scaled fonts looked pretty good elsewhere.
So to get round this I decided to use a frame buffer: render everything to that, then copy it out to a pixmap and create a texture from that. That way I could use the GPU filtering and it should look exactly the same as my other fonts (middle image below), but it still didn't look quite as nice: there is a slight dark line around the outside, as if the alpha blending isn't working properly.
I then tried drawing the font straight over the tiles at runtime to make sure it wasn't my imagination, and this definitely looks better, with smooth blending into the image (right image below), but it impacts my frame rate quite a lot.
So my question is, why is drawing to the frame buffer not producing the same result as when I draw to the screen? Code below.
Texture tx = Assets.loadTexture("bubbles/BubbleBlue.png");
tx.setFilter(TextureFilter.Linear, TextureFilter.Linear);

SpriteBatch sb = new SpriteBatch();
FrameBuffer fb = new FrameBuffer(Format.RGBA8888,
        LayoutManager.getWidth(), LayoutManager.getHeight(), false);

fb.begin();
sb.begin();

sb.draw(tx, 0, 0, LetterGrid.blockWidth, LetterGrid.blockHeight);

Assets.candara80.font.getRegion().getTexture()
        .setFilter(TextureFilter.Linear, TextureFilter.Linear);
Assets.candara80.setSize(0.15f);
TextBounds textBounds = Assets.candara80.getBounds(letter);
Assets.candara80.drawText(sb, letter,
        (LetterGrid.blockWidth - textBounds.width) / 2,
        (LetterGrid.blockHeight + textBounds.height) / 2);

sb.end();

Pixmap pm = ScreenUtils.getFrameBufferPixmap(0, 0,
        (int) LetterGrid.blockWidth, (int) LetterGrid.blockHeight);
Pixmap flipped = flipPixmap(pm);
result = new Texture(flipped);

fb.end();

pm.dispose();
flipped.dispose();
tx.dispose();
fb.dispose();
sb.dispose();
Not setting the PROJECTION matrix is the problem: the SpriteBatch needs an orthographic projection that matches the frame buffer you are drawing into.
EXAMPLE
public Texture texture(Color fg_color, Color bg_color) {
    Pixmap pm = render(fg_color, bg_color);
    texture = new Texture(pm); // ***here's your new dynamic texture***
    disposables.add(texture);  // store the texture
    return texture;
}

//---------------------------

public Pixmap render(Color fg_color, Color bg_color) {
    int width = Gdx.graphics.getWidth();
    int height = Gdx.graphics.getHeight();

    SpriteBatch spriteBatch = new SpriteBatch();
    m_fbo = new FrameBuffer(Format.RGB565, (int) (width * m_fboScaler), (int) (height * m_fboScaler), false);
    m_fbo.begin();

    Gdx.gl.glClearColor(bg_color.r, bg_color.g, bg_color.b, bg_color.a);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    /** set PROJECTION **/
    Matrix4 normalProjection = new Matrix4().setToOrtho2D(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    spriteBatch.setProjectionMatrix(normalProjection);

    spriteBatch.begin();
    spriteBatch.setColor(fg_color);
    // do some drawing ***here's where you draw your dynamic texture***
    ...
    spriteBatch.end(); // finish writing to the buffer

    Pixmap pm = ScreenUtils.getFrameBufferPixmap(0, 0, width, height); // copy the frame buffer to a Pixmap
    m_fbo.end();

    m_fbo.dispose();
    m_fbo = null;
    spriteBatch.dispose();

    return pm;
}
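Applied to the question's code, the equivalent fix (a hedged sketch; the variable names are taken from the question, and it assumes the FrameBuffer was created with LayoutManager.getWidth()/getHeight()) is to give the SpriteBatch a projection matching those pixel dimensions before drawing into the frame buffer:

fb.begin();
// Projection in the same pixel dimensions as the frame buffer.
sb.setProjectionMatrix(new Matrix4().setToOrtho2D(0, 0,
        LayoutManager.getWidth(), LayoutManager.getHeight()));
sb.begin();
// ... draw the tile background and the font as before ...
sb.end();
fb.end();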
I'm trying to migrate graphics in my game to OpenGL for performance reasons.
I need to draw an object using exact screen coordinates. Say a box of 100x100 pixels in the center of a 240x320 screen.
I need to rotate it around Z axis, preserving its size.
I need to rotate it around X axis, with perspective effect, preserving (or close to) its size.
I need to rotate it around Y axis, with perspective effect, preserving (or close to) its size.
Here's a picture.
So far I have managed to achieve the first two tasks:
public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glLoadIdentity();
    gl.glTranslatef(120, 160, 0);           // move rotation point
    gl.glRotatef(angle, 0.0f, 0.0f, 1.0f);  // rotate
    gl.glTranslatef(-120, -160, 0);         // restore rotation point
    mesh.draw(gl); // draws a 100x100 px rectangle with coordinates (70, 110, 170, 210)
}
public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0f, (float) width, (float) height, 0f, -1f, 1f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}
But when I try to rotate my box around the X or Y axis, nasty things happen to it and there is no perspective effect. I tried using other functions instead of (or alongside) glRotate (glFrustum, gluPerspective, gluLookAt, applying a "skewing" matrix), but I couldn't make them work properly.
I'm trying to migrate graphics in my game to OpenGL for performance reasons.
I need to draw an object using exact screen coordinates. Say a box of 100x100 pixels in the center of a 240x320 screen.
For a perspective you also need some length for the lens, which determines the FOV. The FOV is the ratio of the visible extents to the viewing plane distance. In the case of the near plane it thus becomes {left,right,top,bottom}/near. For the sake of simplicity we assume a horizontal FOV and a symmetric projection, i.e.
FOV = 2*|left|/near = 2*|right|/near = extent/distance
or if you're more into angles
FOV = 2*tan(angular FOV / 2)
For a 90° FOV the length of the lens is half the width of the focal plane. Your focal plane is 240x320 pixels, so 120 to the left and right and 160 to the top and bottom. OpenGL does not really have a focus, but we can say that the middle plane between near and far is the "focal".
So let's say the objects will on average have an extent on the order of the visible plane limits, i.e. for a visible plane of 240x320 an object will on average be ~200 px in size. It thus makes sense for the distance from near to far clipping to be 200, i.e. ±100 about the focal plane. So for a FOV of 90° the focal plane is at distance
2*tan(90°/2) = extent/distance
2*tan(45°) = 2 = 240/distance
2*distance = 240
distance = 120
The focal plane is thus at distance 120, and the near and far clipping distances are 20 and 220.
Last but not least the near clip plane limits must be scaled by near_distance/focal_distance = 20/120
So
left = -120 * 20/120 = -20
right = 120 * 20/120 = 20
bottom = -160 * 20/120 ≈ -26.7
top = 160 * 20/120 ≈ 26.7
So this gives us the glFrustum parameters:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(-20, 20, -26.7, 26.7, 20, 220);
And last but not least we must move the world origin into the "focal" plane
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -120);
I need to rotate it around Z axis, preserving its size.
done.
I need to rotate it around X axis, with perspective effect, preserving (or close to) its size.
I need to rotate it around Y axis, with perspective effect, preserving (or close to) its size.
Perspective does not preserve size; that's what makes it a perspective. You can use a very long lens, i.e. a small FOV, to minimize the effect.
Code Update
As a general pro-tip: Do all OpenGL operations in the drawing handler. Don't set the projection in the reshape handler. It's ugly and as soon as you want to have some HUD or other kind of overlay you'll have to discard it anyway. So here's how to change it:
public void onDrawFrame(GL10 gl) {
    // fov (angular FOV in radians), extent, width and height are fields set elsewhere
    // 2*tan(fov/2) = width/distance  =>
    float distance = (float) (width / (2.0 * Math.tan(fov / 2.0)));
    float near = distance - extent / 2;
    float far  = distance + extent / 2;
    if (near < 1.f) {
        near = 1.f;
    }
    float left   = (-width / 2)  * near / distance;
    float right  = ( width / 2)  * near / distance;
    float bottom = (-height / 2) * near / distance;
    float top    = ( height / 2) * near / distance;

    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);

    gl.glViewport(0, 0, width, height);

    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glFrustumf(left, right, bottom, top, near, far);

    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glTranslatef(0, 0, -distance);       // move the world origin into the "focal" plane

    gl.glTranslatef(120, 160, 0);           // move rotation point
    gl.glRotatef(angle, 0.0f, 0.0f, 1.0f);  // rotate
    gl.glTranslatef(-120, -160, 0);         // restore rotation point

    mesh.draw(gl); // draws a 100x100 px rectangle with coordinates (70, 110, 170, 210)
}

public void onSurfaceChanged(GL10 gl, int new_width, int new_height) {
    width = new_width;
    height = new_height;
}
You need to use a perspective projection matrix and then use your model-view matrix to get the position and scaling right.
You're using an orthogonal projection (glOrthof()), which explicitly disables perspective.
Its counterpart is glFrustum(), often wrapped by gluPerspective(), which is easier to use but requires the GLU library.
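A minimal sketch of such a perspective setup, assuming Android's bundled android.opengl.GLU helper and the GL10 interface from the question; the field of view and clip distances are illustrative values, and the projection is set in onSurfaceChanged here only to mirror the question's original structure:

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    // gluPerspective builds the same matrix a symmetric glFrustum call would:
    // 45 degree vertical FOV, the screen's aspect ratio, near and far planes.
    GLU.gluPerspective(gl, 45.0f, (float) width / height, 1.0f, 1000.0f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}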
I'm creating a game with libgdx that I want to run at a higher resolution on the desktop, but I want it to scale everything down correctly when I run it on Android at smaller resolutions. I've read that the best way to do this is to not use a pixel-perfect camera and instead use world coordinates, but I'm not sure how to do that correctly.
This is the code I have right now:
@Override
public void create() {
    characterTexture = new Texture(Gdx.files.internal("character.png"));
    characterTextureRegion = new TextureRegion(characterTexture, 0, 0, 100, 150);
    batch = new SpriteBatch();
    Gdx.gl10.glClearColor(0.4f, 0.6f, 0.9f, 1);
    float aspectRatio = (float) Gdx.graphics.getWidth() / (float) Gdx.graphics.getHeight();
    camera = new OrthographicCamera(aspectRatio, 1.0f);
}
@Override
public void render() {
    GL10 gl = Gdx.graphics.getGL10();
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    camera.update();
    camera.apply(gl);
    batch.setProjectionMatrix(camera.combined);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
    draw();
}
private void draw() {
    // batch.getProjectionMatrix().set(camera.combined);
    batch.begin();
    batch.draw(characterTextureRegion,
            0, 0,                   // the bottom left corner of the box, unrotated
            1f, 1f,                 // the rotation center relative to the bottom left corner of the box
            0.390625f, 0.5859375f,  // the width and height of the box
            1, 1,                   // the scale on the x- and y-axis
            0);                     // the rotation angle
    batch.end();
}
The texture I'm using is 256x256, with the actual image in it being 100x150.
This is the result I get when I run the game: http://i.imgur.com/HV9Bi.png
The sprite that gets rendered is massive, considering this is the original image: http://i.imgur.com/q1cZT.png
What's the best way to go about making it so that the sprites get rendered at their original size while still keeping the ability to have the game scale correctly when played in different resolutions?
I've only found two solutions, and I don't like either of them.
The image showed up how it was supposed to if I used pixel coordinates for the camera, but then that didn't scale at all when I put it on my phone with a different resolution.
I can scale the texture region down when I draw it, but it seems like there is a better way because it is extremely tedious trying to figure out the correct number to scale it by.
Have you ever used the Libgdx setup tool? When you create a project with it, it has a sample image that is displayed. It seems to keep its ratio correct no matter what size you change the screen to.
public class RotationTest implements ApplicationListener {
    private OrthographicCamera camera;
    private SpriteBatch batch;
    private Texture texture;
    private Sprite sprite;
    Stage stage;
    public boolean leonAiming = true;

    @Override
    public void create() {
        float w = Gdx.graphics.getWidth();
        float h = Gdx.graphics.getHeight();

        camera = new OrthographicCamera(1, h / w);
        batch = new SpriteBatch();

        texture = new Texture(Gdx.files.internal("data/libgdx.png"));
        texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);

        TextureRegion region = new TextureRegion(texture, 0, 0, 512, 275);

        sprite = new Sprite(region);
        sprite.setSize(0.9f, 0.9f * sprite.getHeight() / sprite.getWidth());
        sprite.setOrigin(sprite.getWidth() / 2, sprite.getHeight() / 2);
        sprite.setPosition(-sprite.getWidth() / 2, -sprite.getHeight() / 2);
    }
    // ...

    @Override
    public void render() {
        Gdx.gl.glClearColor(1, 1, 1, 1);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        sprite.draw(batch);
        batch.end();
    }
}
First of all you need to fix boundaries for the world (I mean for your game). Your actors (game characters) should only move within that world. If something crosses the boundaries, manage it with the camera, e.g. by panning up, down, left, or right. A sketch of this idea follows below.
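A minimal sketch of the "world coordinates" approach, assuming a fixed world height and a width derived from the screen's aspect ratio; WORLD_HEIGHT, the class name, and the sprite size are illustrative values, not from the question:

public class WorldCameraExample extends ApplicationAdapter {
    static final float WORLD_HEIGHT = 10f; // game units, chosen for illustration

    OrthographicCamera camera;
    SpriteBatch batch;
    Texture texture;

    @Override
    public void create() {
        batch = new SpriteBatch();
        texture = new Texture(Gdx.files.internal("character.png"));
        resize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    }

    @Override
    public void resize(int width, int height) {
        // Keep the world height fixed and derive the width from the aspect ratio,
        // so sprites keep the same relative size on every resolution.
        float aspectRatio = (float) width / height;
        camera = new OrthographicCamera(WORLD_HEIGHT * aspectRatio, WORLD_HEIGHT);
        camera.position.set(camera.viewportWidth / 2f, camera.viewportHeight / 2f, 0);
        camera.update();
    }

    @Override
    public void render() {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        // Draw the character 1 x 1.5 world units in size, anchored at the world origin.
        batch.draw(texture, 0, 0, 1f, 1.5f);
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
        texture.dispose();
    }
}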