I'm trying to create a children's game using the LibGDX framework. What I want to accomplish is to tilt an image of a balloon that will be used to collect points. So far I have added code to move the balloon up/down, but I'm unable to figure out how to tilt it to the right or left. Here is the code I have so far. Can someone please help?
public class balloongame extends ApplicationAdapter {
    SpriteBatch batch;
    Texture background;
    Texture balloon;
    private float renderX;

    @Override
    public void create() {
        batch = new SpriteBatch();
        background = new Texture("bg.png");
        balloon = new Texture("final.png");
        renderX = 100;
    }

    @Override
    public void render() {
        renderX += Gdx.input.getAccelerometerX();
        if (renderX < 0) renderX = 0;
        if (renderX > Gdx.graphics.getWidth() - 200) renderX = Gdx.graphics.getWidth() - 200;
        batch.begin();
        batch.draw(background, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch.draw(balloon, renderX, Gdx.graphics.getWidth());
        batch.end();
    }

    @Override
    public void dispose() {
        batch.dispose();
    }
}
I know two easy ways to do it.
The Affine2 class has a shear() method, along with similar transformation methods:
Affine2.shear()
After you set up the affine matrix, you can draw with it using
draw(TextureRegion region, float width, float height, Affine2 transform)
I prefer changing the vertex attributes passed to the draw method:
draw(Texture texture, float[] spriteVertices, int offset, int count)
There must be 4 vertices, each made up of 5 elements in this order: x, y, color, u, v.
batch.draw(textures[0], new float[] {
        0, 0, Color.RED.toFloatBits(), 0f, 1f,
        textures[0].getWidth(), 50, Color.BLUE.toFloatBits(), 1f, 1f,
        textures[0].getWidth(), 50 + textures[1].getHeight(), Color.GREEN.toFloatBits(), 1f, 0f,
        0, textures[0].getHeight(), Color.GOLD.toFloatBits(), 0f, 0f}, 0, 20);
You can set individual colors for each vertex, or simply set the color to white. You can change it however you want.
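As a minimal sketch of the vertex approach (the helper name and parameters are mine, not part of the answer above): shifting only the two top vertices sideways produces a tilted/sheared quad. The array follows the same vertex order as the example above (bottom-left, bottom-right, top-right, top-left), with the packed color passed in as a single float.

```java
public class ShearedQuad {
    // Builds the 4 * 5 = 20 floats that draw(Texture, float[], int, int)
    // expects: x, y, packedColor, u, v per vertex.
    // topShift moves the two top vertices sideways, tilting the quad.
    public static float[] vertices(float x, float y, float w, float h,
                                   float topShift, float packedColor) {
        return new float[] {
            x,                y,     packedColor, 0f, 1f, // bottom-left
            x + w,            y,     packedColor, 1f, 1f, // bottom-right
            x + w + topShift, y + h, packedColor, 1f, 0f, // top-right (shifted)
            x + topShift,     y + h, packedColor, 0f, 0f  // top-left (shifted)
        };
    }
}
```

For the balloon question, a positive topShift tilts the sprite to the right and a negative one to the left; the array would be passed to batch.draw(balloon, vertices, 0, 20).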
I would like to render spheres like in the image below attached to anchors.
Unfortunately, all the examples are based on Sceneform, which I don't want to use. The spheres should be free in the air, not bound to a flat surface.
With the Hello_AR example from Google I was able to render a 3D sphere into the space and fix it by attaching it to an anchor.
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    ...
    backgroundRenderer.createOnGlThread(this);
    virtualObject.createOnGlThread(this, "models/sphere.obj", "models/sphere.png");
    virtualObject.setMaterialProperties(0.0f, 0.0f, 0.0f, 0.0f);
    ...
}
@Override
public void onDrawFrame(GL10 gl) {
    ...
    // Get projection matrix.
    float[] projmtx = new float[16];
    camera.getProjectionMatrix(projmtx, 0, 0.1f, 100.0f);

    // Get camera matrix and draw.
    float[] viewmtx = new float[16];
    camera.getViewMatrix(viewmtx, 0);

    // Compute lighting from average intensity of the image.
    // The first three components are color scaling factors.
    // The last one is the average pixel intensity in gamma space.
    final float[] colorCorrectionRgba = new float[] {255f, 0, 0, 255f};
    frame.getLightEstimate().getColorCorrection(colorCorrectionRgba, 0);

    // Visualize anchors created by touch.
    float scaleFactor = 1.0f;
    for (Anchor anchor : anchors) {
        if (anchor.getTrackingState() != TrackingState.TRACKING) {
            continue;
        }
        anchor.getPose().toMatrix(anchorMatrix, 0);
        virtualObject.updateModelMatrix(anchorMatrix, scaleFactor);
        float[] objColor = new float[] { 255f, 255f, 255f, 0 };
        virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba, objColor);
    }
}
With that I am able to create a black sphere 1 meter away from the camera in the air.
My questions:
Is this a good / correct way to do it?
How do I change the color of the sphere, since the color values have no effect on the object?
How do I make it transparent?
Thank you very much.
You need to attach it to an anchor. You don't need to use Sceneform; Sceneform is only one of two methods.
In terms of color and transparency, it depends on how you set up your object. In your code I see that you're using a material, so it's hard to change the color.
I have an OpenGL scene with a sphere having a radius of 1, and the camera being at the center of the sphere (it's a 360° picture viewer). The user can rotate the sphere by panning.
Now I need to display 2D pins "attached" to some parts of the picture. To do so, I want to convert the 3D coordinates of my pins into 2D screen coordinates, to add the pin image at that screen coordinates.
I'm using GLU.glProject and the following classes from android-apidemo:
MatrixGrabber
MatrixStack
MatrixTrackingGL
I save the projection matrix in the onSurfaceChanged method and the model-view matrix in the onDraw method (after having drawn my sphere). Then I feed GLU.glProject with them when the user rotates the sphere to update the pins position.
When I pan horizontally, the pins pan correctly, but when I pan vertically the texture pans "faster" than the pin image (as if the pin were closer to the camera than the sphere).
Here are some relevant parts of my code:
public class CustomRenderer implements GLSurfaceView.Renderer {
    MatrixGrabber mMatrixGrabber = new MatrixGrabber();
    private float[] mModelView = null;
    private float[] mProjection = null;

    [...]

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Get the sizes:
        float side = Math.max(width, height);
        int x = (int) (width - side) / 2;
        int y = (int) (height - side) / 2;

        // Set the viewport:
        gl.glViewport(x, y, (int) side, (int) side);

        // Set the perspective:
        gl.glMatrixMode(GL10.GL_PROJECTION);
        gl.glLoadIdentity();
        GLU.gluPerspective(gl, FIELD_OF_VIEW_Y, 1, Z_NEAR, Z_FAR);

        // Grab the projection matrix:
        mMatrixGrabber.getCurrentProjection(gl);
        mProjection = mMatrixGrabber.mProjection;

        // Set to MODELVIEW mode:
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Load the texture if needed:
        if (mTextureToLoad != null) {
            mSphere.loadGLTexture(gl, mTextureToLoad);
            mTextureToLoad = null;
        }

        // Clear:
        gl.glClearColor(0.5f, 0.5f, 0.5f, 0.0f);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glLoadIdentity();

        // Rotate the scene:
        gl.glRotatef((1 - mRotationY + 0.25f) * 360, 1, 0, 0); // 0.25 is used to adjust the texture position
        gl.glRotatef((1 - mRotationX + 0.25f) * 360, 0, 1, 0); // 0.25 is used to adjust the texture position

        // Draw the sphere:
        mSphere.draw(gl);

        // Grab the model-view matrix:
        mMatrixGrabber.getCurrentModelView(gl);
        mModelView = mMatrixGrabber.mModelView;
    }

    public float[] getScreenCoords(float x, float y, float z) {
        if (mModelView == null || mProjection == null) return null;
        float[] result = new float[3];
        int[] view = new int[] {0, 0, (int) mSurfaceViewSize.getWidth(), (int) mSurfaceViewSize.getHeight()};
        GLU.gluProject(x, y, z,
                mModelView, 0,
                mProjection, 0,
                view, 0,
                result, 0);
        result[1] = mSurfaceViewSize.getHeight() - result[1];
        return result;
    }
}
I use the result of the getScreenCoords method to display my pins. The y value is wrong.
What am I doing wrong?
Firstly, I'm really new to OpenGL on Android and I'm still looking through courses online.
I'm trying to make a simple app that has a square in the middle of the screen,
and on a touch event it moves to the touch coordinates.
This is my surface view:
public class MyGLSurfaceView extends GLSurfaceView {
    private final MyGLRenderer mRenderer;

    public MyGLSurfaceView(Context context) {
        super(context);
        // Set the Renderer for drawing on the GLSurfaceView
        mRenderer = new MyGLRenderer();
        setRenderer(mRenderer);
        // Render the view only when there is a change in the drawing data
        setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);
    }

    @Override
    public boolean onTouchEvent(MotionEvent e) {
        float x = e.getX();
        float y = e.getY();
        if (e.getAction() == MotionEvent.ACTION_MOVE) {
            mRenderer.setLoc(x, y);
            requestRender();
        }
        return true;
    }
}
and this is my renderer class:
public class MyGLRenderer implements GLSurfaceView.Renderer {
    private float mAngle;
    public PVector pos;
    public Rectangle r;
    public Square sq;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        // Set the background frame color
        gl.glClearColor(1f, 1f, 1f, 1.0f);
        pos = new PVector();
        r = new Rectangle(0.5f, 0.4f);
        sq = new Square(0.3f);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        // Draw background color
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        // Set GL_MODELVIEW transformation mode
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity(); // reset the matrix to its default state
        // When using GL_MODELVIEW, you must set the view point
        GLU.gluLookAt(gl, 0, 0, -3, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
        gl.glTranslatef(pos.x, pos.y, 0f);
        //r.draw(gl);
        sq.draw(gl);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        // Adjust the viewport based on geometry changes
        // such as screen rotations
        gl.glViewport(0, 0, width, height);
        // make adjustments for screen ratio
        float ratio = (float) width / height;
        gl.glMatrixMode(GL10.GL_PROJECTION); // set matrix to projection mode
        gl.glLoadIdentity(); // reset the matrix to its default state
        gl.glFrustumf(-ratio, ratio, -1, 1, 3, 7); // apply the projection matrix
    }

    /**
     * Returns the rotation angle of the triangle shape (mTriangle).
     *
     * @return A float representing the rotation angle.
     */
    public float getAngle() {
        return mAngle;
    }

    /**
     * Sets the rotation angle of the triangle shape (mTriangle).
     */
    public void setAngle(float angle) {
        mAngle = angle;
    }

    public void setLoc(float x, float y) {
        pos.x = x;
        pos.y = y;
    }
}
When asking a question you should state what your current result is and what your expected result is.
From a short glance at your code, I would expect the square to be drawn correctly until you touch the screen, after which it disappears completely (unless you press near the top-left part of the screen).
If this is the case, your problem is only that you are not transforming the touch coordinates into the OpenGL coordinate system. Your OpenGL coordinate system is by default in the range [-1, 1] on all axes, but you may change it (as you do) with matrices. The two most common calls are glFrustum and glOrtho; both accept four border coordinates (left, right, bottom, top) which define the value at the corresponding border of the view.
So to compute x from the touch, for instance, you would first normalize it to see what part of the screen you pressed: relativeX = touch.x / view.size.width. Then map it into OpenGL coordinates: glX = left + (right - left) * relativeX. The vertical coordinate works the same way.
But in your case it would be better to work in 2D and use glOrtho with view coordinates. This means replacing the frustum call with an ortho call and setting left = 0, right = viewWidth, top = 0, bottom = viewHeight. Now the OpenGL coordinate system matches the view's. You will need to increase the square size to see it, since it is very small at that point. You should also remove the lookAt call and just use identity + translate.
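The mapping described above can be written as a small plain-Java helper (the class and parameter names are mine, for illustration only). It works for any border values, including the view-coordinate ortho setup suggested above, because the flip between view-space y (down) and GL y (up) falls out of which border you pass as top and which as bottom.

```java
public class TouchMapper {
    // Maps a view-space touch point (origin top-left, y grows downward)
    // into the coordinate system defined by the glFrustum/glOrtho borders
    // (left, right, bottom, top).
    public static float[] toGl(float touchX, float touchY,
                               float viewWidth, float viewHeight,
                               float left, float right,
                               float bottom, float top) {
        float relativeX = touchX / viewWidth;   // 0 at left edge, 1 at right edge
        float relativeY = touchY / viewHeight;  // 0 at top edge, 1 at bottom edge
        float glX = left + (right - left) * relativeX;
        // touchY = 0 maps to the top border, touchY = viewHeight to the bottom:
        float glY = top + (bottom - top) * relativeY;
        return new float[] { glX, glY };
    }
}
```

With the default [-1, 1] system, a touch in the screen center maps to (0, 0); with the view-coordinate ortho setup (left = 0, right = viewWidth, top = 0, bottom = viewHeight) the touch coordinates pass through unchanged.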
I have a simple LibGdx application with two sprites. One is a simple texture which is repeated to fill the background. This one works. The other is a texture which is alpha blended so the corners look darker than the center. It is stretched to cover the entire screen. For some reason, this one appears in the wrong location and is just a big white box.
Here is my code:
public class TestGame extends ApplicationAdapter {
    SpriteBatch batch;
    boolean showingMenu;
    Texture background;
    Sprite edgeBlur;
    Texture edgeBlurTex;

    @Override
    public void create() {
        showingMenu = true;
        batch = new SpriteBatch();
        background = new Texture(Gdx.files.internal("blue1.png"));
        edgeBlurTex = new Texture(Gdx.files.internal("edge_blur.png"));
        edgeBlur = new Sprite(edgeBlurTex);
        edgeBlur.setSize(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    }

    @Override
    public void resize(int width, int height) {
        super.resize(width, height);
        edgeBlur.setSize(width, height);
    }

    @Override
    public void dispose() {
        background.dispose();
        edgeBlurTex.dispose();
        super.dispose();
    }

    @Override
    public void render() {
        Gdx.gl.glClearColor(0, 0, 0, 1);
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
        batch.begin();
        drawBackground();
        batch.end();
    }

    private void drawBackground() {
        for (float x = 0; x < Gdx.graphics.getWidth(); x += background.getWidth()) {
            for (float y = 0; y < Gdx.graphics.getHeight(); y += background.getHeight()) {
                batch.draw(background, x, y);
            }
        }
        edgeBlur.draw(batch);
    }
}
Edit:
I fixed it by changing the draw command to:
batch.draw(edgeBlurTex, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
However, if I attempt to do any drawing after drawing these textures, such as:
ShapeRenderer shapeRenderer = new ShapeRenderer();
shapeRenderer.begin(ShapeRenderer.ShapeType.Line);
shapeRenderer.setColor(0, 0, 0, 1);
float unitHeight = Gdx.graphics.getHeight() / 9;
float indent = Gdx.graphics.getWidth() / 20;
shapeRenderer.rect(indent, unitHeight, Gdx.graphics.getWidth() - indent * 2, unitHeight);
shapeRenderer.rect(indent, unitHeight * 3, Gdx.graphics.getWidth() - indent * 2, unitHeight);
shapeRenderer.rect(indent, unitHeight * 5, Gdx.graphics.getWidth() - indent * 2, unitHeight);
shapeRenderer.rect(indent, unitHeight * 7, Gdx.graphics.getWidth() - indent * 2, unitHeight);
shapeRenderer.end();
it just stops working and goes back to drawing a white box. It seems very random, as if something is seriously misconfigured in LibGDX. Is there any way to debug this and work out what is wrong?
You should enable blending before drawing your blended texture:
batch.enableBlending();
edgeBlur.draw(batch);
batch.disableBlending();
You can also try setting the batch's blend function with the setBlendFunction method after the batch is created.
Update (regarding the edit):
The SpriteBatch must be ended (batch.end()) before starting the ShapeRenderer.
Hi, I have a 512x512 texture that I would like to display within my GLSurfaceView at 100% scale, with a 1:1 pixel mapping.
I'm having trouble achieving this and need some assistance.
Every combination of settings in onSurfaceChanged and onDrawFrame results in a scaled image.
Can someone please direct me to an example where this is possible?
private float[] mProjectionMatrix = new float[16];

// where mWidth and mHeight are set to 512
public void onSurfaceChanged(GL10 gl, int mWidth, int mHeight) {
    GLES20.glViewport(0, 0, mWidth, mHeight);
    float left = -1.0f / (1 / ScreenRatio);
    float right = 1.0f / (1 / ScreenRatio);
    float bottom = -1.0f;
    float top = 1.0f;
    final float near = 1.0f;
    final float far = 10.0f;
    Matrix.frustumM(mProjectionMatrix, 0, left, right, bottom, top, near, far);
}

@Override
public void onDrawFrame(GL10 glUnused) {
    // ... stuff here
    Matrix.setIdentityM(mModelMatrix, 0);
    Matrix.translateM(mModelMatrix, 0, 0, 0, 1);
    Matrix.rotateM(mModelMatrix, 0, 0.0f, 1.0f, 1.0f, 0.0f);
    drawCube();
}
Many thanks.
There are various options. The simplest, IMHO, is to not apply any view/projection transformations at all. Then draw a textured quad with a range of (-1.0, 1.0) for both the x- and y-coordinates. That would get your texture to fill the entire view. Since you want it displayed in a 512x512 part of the view, you can set the viewport to cover only that area:
glViewport(0, 0, 512, 512);
Another possibility is to reduce the range of your input coordinates so that they map to a 512x512 area of the screen, or to scale the coordinates in the vertex shader.
You didn't specify which version of OpenGL ES you use. In ES 3.0, you could also use glBlitFramebuffer() to copy the texture to your view.
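The second suggestion reduces to plain arithmetic: normalized device coordinates span 2 units across the viewport width in pixels, so a centered quad covering exactly texW screen pixels needs a half-extent of texW / viewW in NDC. A small helper (class and method names are mine, for illustration):

```java
public class PixelPerfect {
    // Returns {-halfW, -halfH, halfW, halfH}: the NDC corners of a centered
    // quad that covers exactly texW x texH screen pixels inside a
    // viewW x viewH viewport. Draw this quad with identity projection and
    // model-view matrices for a 1:1 pixel mapping.
    public static float[] quadNdc(int texW, int texH, int viewW, int viewH) {
        float halfW = (float) texW / viewW; // 2 * texW / (2 * viewW)
        float halfH = (float) texH / viewH;
        return new float[] { -halfW, -halfH, halfW, halfH };
    }
}
```

When the viewport is itself 512x512 (the first suggestion), this degenerates to the full-screen quad at (-1, 1); for a larger viewport, the returned corners shrink proportionally so the texture still lands on exactly 512x512 pixels.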