Disappearing 3D texture with libgdx - android

This is a bug that has been troubling and demotivating me for several days now. Perhaps someone has some insight.
I build a terrain mesh programmatically and set up a renderable for it, with a material that has a texture attribute using a repeating texture created from a PNG file. All of this appears to work fine until the camera is moved for a while along, say, the x or z axis, at which point the texture initially flickers to black and eventually stays black if the camera moves far enough away.
I clear the screen to blue, so I can tell the mesh is still being rendered; it just loses its texture and turns solid black. This happens at the same spots with regard to camera placement, and once the camera is far enough away, the problem persists as it continues to move away from the world origin.
The problem does not appear on desktop, only android. I am using an HTC EVO 4G to test. I realize that this phone's early Adreno GPU has some serious 3D issues, but I've surely seen games, or at least demos, with large textured landscapes work fine. My mesh has around 5000 triangles and usually renders around 30-40 fps.
Some things I have experimented with (a rough sketch of these checks follows the list):
Changed the PNG compression to 0 in the GIMP export options
Used JPG instead of PNG
Explicitly specified no mipmapping
Changed filtering modes
Tried PNG files of different sizes and resolutions
Checked glGetError
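For reference, a rough sketch of how those checks looked in libGDX code (the file name and log tag are placeholders):
// Load without mipmaps, then try the various filter/wrap combinations.
Texture texture = new Texture(Gdx.files.internal("terrain.png"), false); // false = no mipmapping
texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);           // also tried Nearest
texture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);

// Poll for GL errors after setup and again after rendering.
int error = Gdx.gl.glGetError();
if (error != GL10.GL_NO_ERROR) {
    Gdx.app.log("terrain", "glGetError returned " + error);
}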
Here is the relevant code:
private void createMesh() {
    mesh = new Mesh(true, vertices.length, indices.length,
            new VertexAttribute(Usage.Position, 3, "a_position"),
            new VertexAttribute(Usage.Normal, 3, "a_normal"),
            new VertexAttribute(Usage.TextureCoordinates, 2, "a_texCoord0"));

    // Interleave position, normal and texture coordinates: 8 floats per vertex.
    float[] vertexData = new float[vertices.length * 8];
    for (int x = 0; x < vertices.length; x++) {
        vertexData[x * 8]     = vertices[x].position.x;
        vertexData[x * 8 + 1] = vertices[x].position.y;
        vertexData[x * 8 + 2] = vertices[x].position.z;
        vertexData[x * 8 + 3] = vertices[x].normal.x;
        vertexData[x * 8 + 4] = vertices[x].normal.y;
        vertexData[x * 8 + 5] = vertices[x].normal.z;
        vertexData[x * 8 + 6] = vertices[x].texCoord.x;
        vertexData[x * 8 + 7] = vertices[x].texCoord.y;
    }
    mesh.setVertices(vertexData);
    mesh.setIndices(indices);
}
The texture repeats across the terrain, but I've commented out the setWrap line before and the problem persists.
private void createRenderable(Lights lights) {
    texture.setWrap(TextureWrap.Repeat, TextureWrap.Repeat);
    TextureAttribute textureAttribute1 = new TextureAttribute(TextureAttribute.Diffuse, texture);

    terrain = new Renderable();
    terrain.mesh = mesh;
    terrain.worldTransform.trn(-xTerrainSize / 2, (float) -heightRange, -zTerrainSize / 2);
    terrain.meshPartOffset = 0;
    terrain.meshPartSize = mesh.getNumIndices();
    terrain.primitiveType = GL10.GL_TRIANGLE_STRIP;
    terrain.material = new Material(textureAttribute1);
    terrain.lights = lights;
}
Note that decreasing the far clipping plane does not make a difference.
...
lights = new Lights();
lights.ambientLight.set(0.2f, 0.2f, 0.2f, 1f);
lights.add(new DirectionalLight().set(0.8f, 0.8f, 0.8f, -1f, -0.8f, -0.2f));
cam = new PerspectiveCamera(90, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
cam.position.set(0, 0, 0);
cam.lookAt(0,0,-1);
cam.near = 0.1f;
cam.far = 1000f;
cam.update();
...
@Override
public void render() {
    ...
    Gdx.gl.glViewport(0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    Gdx.graphics.getGL10().glClearColor(0, 0, 1, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    modelBatch.begin(cam);
    modelBatch.render(terrain);
    modelBatch.end();
    ...
}
Any insight into this problem would be much appreciated. Unfortunately, it is difficult to screen capture on an EVO 4G, or else I would post a picture.

Related

What might be causing only half of textures to be drawn in an Android game?

I've had reports from a couple of users (out of 85,000 downloads) of the problem shown in the image below. No doubt it has occurred a few more times than this, but it's certainly rare.
I'm unable to reproduce the problem, and I don't believe it's device-specific, as some users appear to be playing the game perfectly happily on the same device models that have reported the problem.
The letters are drawn onto a frame buffer first to build them up from the circular background with the character drawn on top. They are then copied off and a new texture is created.
The background is also put together from multiple components on a frame buffer and copied off to create a texture, so I'm not too sure why that appears to work perfectly well when the letters don't. The background is drawn using the same FrameBuffer and the same SpriteBatch instance.
[Screenshots: what it does look like vs. what it should look like]
The method that creates the images:
private static Texture getTextureUsingGpu(String letter, Bubble.BubbleType bubbleType) {
    if (!enabled)
        return null;

    StrokeFontHelper font = Assets.strokeFont;
    font.setSettings(Fonts.BUBBLE_TEXT_SETTINGS);

    TextureRegion tx = getBlockImage(letter, bubbleType);
    FrameBuffer fb = TextureLoader.getFrameBuffer();
    fb.begin();
    Gdx.gl20.glClearColor(0.0f, 0.0f, 0.0f, 1);
    // Make sure everything is really really clear! Trying to fix graphics glitches on some phones
    Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT | GL20.GL_STENCIL_BUFFER_BIT | GL20.GL_SUBPIXEL_BITS);

    float width = tx.getRegionWidth();
    float height = tx.getRegionHeight();

    TextureLoader.sb.begin();
    tx.flip(false, !tx.isFlipY());
    TextureLoader.sb.disableBlending();
    TextureLoader.sb.draw(tx, 0, 0);
    TextureLoader.sb.enableBlending();
    // Removed character drawing code to make it more readable
    TextureLoader.sb.end();

    Pixmap pm = ScreenUtils.getFrameBufferPixmap(0, 0, (int) width, (int) height);
    PixmapTextureData data = new PixmapTextureData(pm, Format.RGBA8888, false, false, true);
    Texture result = new Texture(data);
    result.setFilter(TextureFilter.Linear, TextureFilter.Linear);
    cacheTexture(letter, result, pm);
    fb.end();
    return result;
}
public static FrameBuffer getFrameBuffer() {
    if (frameBuffer == null || frameBuffer.getWidth() != Gdx.graphics.getWidth()
            || frameBuffer.getHeight() != Gdx.graphics.getHeight()) {
        if (frameBuffer != null)
            frameBuffer.dispose();

        // Create the highest quality frame buffer we can get away with
        try {
            frameBuffer = new FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
        } catch (Exception e) {
            try {
                frameBuffer = new FrameBuffer(Pixmap.Format.RGB888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
            } catch (Exception e2) {
                frameBuffer = new FrameBuffer(Pixmap.Format.RGB565, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
            }
        }

        // Set up the camera correctly for the frame buffer
        camera = new OrthographicCamera(frameBuffer.getWidth(), frameBuffer.getHeight());
        camera.position.set(frameBuffer.getWidth() * 0.5f, frameBuffer.getHeight() * 0.5f, 0);
        camera.update();
        sb.setProjectionMatrix(camera.combined);
    }
    return frameBuffer;
}
Edit
I've done some fiddling with this and have a very helpful user who has been testing versions for me. Here's what I've established.
If I use a ShapeRenderer rather than a SpriteBatch then I can draw over the whole area as expected.
It is almost certainly at the point where I draw textures to the FrameBuffer using the SpriteBatch that the problem occurs: it just doesn't draw to the bottom half of the textures. What's on the FrameBuffer is copied to the pixmap correctly.
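Not a confirmed fix, but one thing that might be worth ruling out is the SpriteBatch projection matrix: if it no longer matches the FrameBuffer dimensions when the letters are drawn, only part of the buffer gets covered. A minimal sketch, assuming the sb and frameBuffer fields from the code above:
// Re-assert a projection that exactly matches the frame buffer right before drawing,
// in case something else has modified sb's matrices in the meantime.
Matrix4 fbProjection = new Matrix4().setToOrtho2D(0, 0, frameBuffer.getWidth(), frameBuffer.getHeight());
TextureLoader.sb.setProjectionMatrix(fbProjection);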
Another edit: I've got a new visual glitch reported by a user. I don't know if this might shed some more light on the problem.

Imprecise Box2d coordinates using LibGDX

I am using LibGDX and Box2d to build my first Android game. Yay!
But I am having some serious problems with Box2d.
I have a simple stage with a rectangular Box2d body at the bottom representing the ground, and two other rectangular Box2d bodies both at the left and right representing the walls.
[Screenshots: the box with the blue debug dots, aligned in one and misaligned in the other]
I also have a box. This box can be touched, and it moves using applyLinearImpulse, as if it were kicked. It is a DynamicBody.
What happens is that in the Box object's draw() code, the Box2d body is giving me a wrong value for the X axis; the value for the Y axis is fine.
Those blue "dots" in the screenshots are small textures that I draw at the box edges, at the positions body.getPosition() gives me. Note how in one screenshot the dots are aligned with the actual DebugRenderer rectangle and in the other they are not.
This is what is happening: when the box moves, the alignment is lost in the movement.
The collision between the box, the ground and the walls occurs precisely within the area that the DebugRenderer renders. But body.getPosition() and fixture.testPoint() consider that area to be inside those blue dots.
So, somehow, Box2d is "maintaining" these two areas for the same body.
I thought that this could be some kind of "loss of precision" between my conversions of pixels and meters (I am scaling by 100 times) but the Y axis uses the same technique and it's fine.
So, I thought that I might be missing something.
Edit 1
I am converting from Box coordinates to World coordinates. If you see the blue debug sprites in the screenshots, they form the box almost perfectly.
public static final float WORLD_TO_BOX = 0.01f;
public static final float BOX_TO_WORLD = 100f;
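For what it's worth, a minimal sketch of the conversion helpers these constants imply (the method names are mine):
// Convert between screen/world pixels and Box2D meters.
public static float toBox(float worldUnits) {
    return worldUnits * WORLD_TO_BOX;   // pixels -> meters
}

public static float toWorld(float boxUnits) {
    return boxUnits * BOX_TO_WORLD;     // meters -> pixels
}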
The box render code:
public void draw(Batch batch, float alpha) {
    x = (body.getPosition().x - width / 2) * TheBox.BOX_TO_WORLD;
    y = (body.getPosition().y - height / 2) * TheBox.BOX_TO_WORLD;
    float xend = (body.getPosition().x + width / 2) * TheBox.BOX_TO_WORLD;
    float yend = (body.getPosition().y + height / 2) * TheBox.BOX_TO_WORLD;

    // Draw a debug dot at each corner of the body's bounding box.
    batch.draw(texture, x, y);
    batch.draw(texture, x, yend);
    batch.draw(texture, xend, yend);
    batch.draw(texture, xend, y);
}
Edit 2
I am starting to suspect the camera. I have the DebugRenderer and a scene2d Stage. Here is the code:
My screen resolution (Nexus 5, and it's portrait):
public static final int SCREEN_WIDTH = 1080;
public static final int SCREEN_HEIGHT = 1920;
At the startup:
// ...
stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();
// ...
Now, the render() code:
public void render() {
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);

    camera.update();
    world.step(1 / 45f, 6, 6);
    world.clearForces();

    stage.act(Gdx.graphics.getDeltaTime());
    stage.draw();
    debugRenderer.render(world, debugMatrix);
}
Looks like the answer to that one was fairly simple:
stage.setCamera(camera);
I was not setting the OrthographicCamera to the stage, so the stage was using some kind of default camera that wasn't aligned with my stuff.
It had nothing to do with Box2d in the end. Box2d was returning healthy values, but these values corresponded to wrong places on my screen because of the wrong stage resolution.
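For completeness, a sketch of the corrected startup code, combining the snippets above with the missing line (my arrangement, same API calls):
stage = new Stage(SCREEN_WIDTH, SCREEN_HEIGHT, true);
camera = new OrthographicCamera();
camera.setToOrtho(false, SCREEN_WIDTH, SCREEN_HEIGHT);
stage.setCamera(camera); // the missing line: make the stage draw through the same camera
debugMatrix = camera.combined.cpy();
debugMatrix.scale(BOX_TO_WORLD, BOX_TO_WORLD, 1.0f);
debugRenderer = new Box2DDebugRenderer();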

Chase player with camera in AndEngine and limit world's bounds

Using AndEngine for Android, I would like to have my scene look like this:
The red box is the world which must be limited to a given size, say 2000px*450px.
The blue box is the Camera, which is limited as well (as usual), for example to 750px*450px.
For the whole scene, I have a background image that is exactly 450px high. So my Camera can be scaled to whatever size is appropriate, but the background must exactly fit the height. The width of the Camera may be variable.
The player (circle) must always be in the center (horizontally) but may not leave the world's boundaries.
To achieve this, I've defined two sizes:
camera size (CAMERA_WIDTH, CAMERA_HEIGHT)
world size (WORLD_WIDTH, WORLD_HEIGHT)
And this function was meant to add boundaries to the world so that the physics engine prevents the player from leaving them:
private void createWorldBoundaries() {
    Body body;
    final Rectangle wall_top = new Rectangle(0, WORLD_HEIGHT - 5, WORLD_WIDTH, 10, mVertexManager);
    final Rectangle wall_bottom = new Rectangle(0, 5, WORLD_WIDTH, 10, mVertexManager);
    final Rectangle wall_left = new Rectangle(5, 0, 10, WORLD_HEIGHT, mVertexManager);
    final Rectangle wall_right = new Rectangle(WORLD_WIDTH - 5, 0, 10, WORLD_HEIGHT, mVertexManager);

    // Note: createFixtureDef is a static factory method, not a constructor.
    body = PhysicsFactory.createBoxBody(mPhysicsWorld, wall_top, BodyType.StaticBody, PhysicsFactory.createFixtureDef(0.0f, 0.5f, 0.5f));
    wall_top.setUserData(body);
    body = PhysicsFactory.createBoxBody(mPhysicsWorld, wall_bottom, BodyType.StaticBody, PhysicsFactory.createFixtureDef(0.0f, 0.5f, 0.5f));
    wall_bottom.setUserData(body);
    body = PhysicsFactory.createBoxBody(mPhysicsWorld, wall_left, BodyType.StaticBody, PhysicsFactory.createFixtureDef(0.0f, 0.5f, 0.5f));
    wall_left.setUserData(body);
    body = PhysicsFactory.createBoxBody(mPhysicsWorld, wall_right, BodyType.StaticBody, PhysicsFactory.createFixtureDef(0.0f, 0.5f, 0.5f));
    wall_right.setUserData(body);

    attachChild(wall_top);
    attachChild(wall_bottom);
    attachChild(wall_left);
    attachChild(wall_right);
}
But this is not working, unfortunately (see the edit below).
Setting the camera to chase the player has the wrong result for me: the player does stay in the center of the screen at all times, but I want the player to stay centered only horizontally, not vertically.
What am I doing wrong and what can I change? The basic question is: how can I make the world wider than the camera view while its height equals the camera view? The result should be that you can walk horizontally through the world (the camera moving with you) while always seeing its full height.
Edit:
As you define the coordinates of the Rectangle's center and not its top-left corner, you have to do it like this, it seems:
final Rectangle wall_top = new Rectangle(WORLD_WIDTH/2, WORLD_HEIGHT-1, WORLD_WIDTH, 2, mVertexManager);
final Rectangle wall_bottom = new Rectangle(WORLD_WIDTH/2, FIELD_BASELINE_Y+1, WORLD_WIDTH, 2, mVertexManager);
final Rectangle wall_left = new Rectangle(1, WORLD_HEIGHT/2, 2, WORLD_HEIGHT, mVertexManager);
final Rectangle wall_right = new Rectangle(WORLD_WIDTH-1, WORLD_HEIGHT/2, 2, WORLD_HEIGHT, mVertexManager);
However, I had found the other solution in several tutorials. Are these authors not testing their code before writing the tutorials, or did the behaviour change from GLES1 to GLES2 or with a recent version?
I think your question about the world boundaries is self-answered, isn't it?
PhysicsWorld Boundaries
For further research you can download Nicolas' AndEngine Examples app from the Play Store and look up the different examples here (GLES_2 branch; I didn't look for AnchorCenter yet): https://github.com/nicolasgramlich/AndEngineExamples/tree/GLES2/src/org/andengine/examples
Taken from the PhysicsExample, the code for the rectangles should look like this if the bounds are set to the camera bounds. In your case, you can extend the width as you like (3 times CAMERA_WIDTH?):
final Rectangle ground = new Rectangle(0, CAMERA_HEIGHT - 2, WORLD_WIDTH, 2, vertexBufferObjectManager);
final Rectangle roof = new Rectangle(0, 0, WORLD_WIDTH, 2, vertexBufferObjectManager);
final Rectangle left = new Rectangle(0, 0, 2, CAMERA_HEIGHT, vertexBufferObjectManager);
final Rectangle right = new Rectangle(WORLD_WIDTH - 2, 0, 2, CAMERA_HEIGHT, vertexBufferObjectManager);
Camera following player
For the camera to follow your player, you can look up the code of the BoundCameraExample: https://github.com/nicolasgramlich/AndEngineExamples/blob/GLES2/src/org/andengine/examples/BoundCameraExample.java
The interesting part for you should be the addFace method at the bottom:
private void addFace(final float pX, final float pY) {
    final FixtureDef objectFixtureDef = PhysicsFactory.createFixtureDef(1, 0.5f, 0.5f);
    final AnimatedSprite face = new AnimatedSprite(pX, pY, this.mBoxFaceTextureRegion, this.getVertexBufferObjectManager()).animate(100);
    final Body body = PhysicsFactory.createBoxBody(this.mPhysicsWorld, face, BodyType.DynamicBody, objectFixtureDef);
    this.mScene.attachChild(face);
    this.mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(face, body, true, true));
    this.mBoundChaseCamera.setChaseEntity(face);
}
This method creates a physics body plus a sprite for "your player" (in this case, a boxed face) and sets the sprite as a chase entity for the camera to follow. Since the camera has bounds that it can't exceed, and your camera will have the height of your PhysicsWorld boundaries, you can use this to let the camera follow the player in the x direction but not in the y direction.
If you don't want to use these boundaries (though I don't know why you wouldn't), you can override the onUpdate method of your sprite and re-position the camera only in the x direction instead of both coordinates:
face.registerUpdateHandler(new IUpdateHandler() {
    @Override
    public void onUpdate(final float pSecondsElapsed) {
        final float[] coord = face.getSceneCenterCoordinates();
        mBoundChaseCamera.setCenter(coord[0], CAMERA_Y_POSITION);
    }

    @Override
    public void reset() {
    }
});
where CAMERA_Y_POSITION is a static final field holding the y position.
I hope this answers your question(s). :-)
Edit: Oops, I forgot to mention how to make the camera bound (I have edited the world width above):
this.mBoundChaseCamera.setBounds(0, 0, WORLD_WIDTH, CAMERA_HEIGHT);
All settings match your image (except the exact position of the face, which has to be passed to addFace(pX, pY)).
Edit: Difference between scene boundaries in Andengine GLES2 vs GLES2-AnchorCenter
As far as I understood the question, I thought you were using GLES2, so I thought of the (older) default GLES2 branch of AndEngine and posted those boundaries. As you found out yourself and stated in the comments, you use another approach to set the rectangles, where you need to set the rectangle's center as pX and pY. The reason is that with the AnchorCenter branch you no longer set the upper-left position of an entity; you use its center position instead.
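A side-by-side sketch of the same top wall in both branches (constants as above) makes the difference concrete:
// GLES2 branch: pX/pY specify the rectangle's top-left corner.
final Rectangle topGles2 = new Rectangle(0, WORLD_HEIGHT - 2, WORLD_WIDTH, 2, vertexBufferObjectManager);

// GLES2-AnchorCenter branch: pX/pY specify the rectangle's center.
final Rectangle topAnchorCenter = new Rectangle(WORLD_WIDTH / 2, WORLD_HEIGHT - 1, WORLD_WIDTH, 2, vertexBufferObjectManager);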

Load 3d .obj file and add background image to live wallpaper in android?

I want to load a 3D .obj file in a live wallpaper and animate it. I use the Rajawali library to load it. The problem is that the 3D .obj file is loaded and animated successfully, but the background image fails to load on some Android devices. The code follows:
protected void initScene() {
    DirectionalLight localDirectionalLight = new DirectionalLight(0.0F, 0.6F, 0.4F);
    localDirectionalLight.setPower(3.0F);
    PointLight mLight = new PointLight();
    mLight.setPosition(0, 0, 0);
    mLight.setPower(13);

    Plane plane = new Plane(70, 70, 1, 1, 1);
    plane.addLight(mLight);
    plane.setMaterial(new SimpleMaterial());
    plane.addTexture(mTextureManager.addTexture(BitmapFactory.decodeResource(mContext.getResources(), R.drawable.pozadina)));
    addChild(plane);
    plane.setZ(10);

    try {
        ObjParser localObjParser = new ObjParser(this.mContext.getResources(), this.mTextureManager, R.raw.allah);
        localObjParser.parse();
        this.strela = localObjParser.getParsedObject();
        this.strela.addLight(localDirectionalLight);
        if (this.tekstura == null) {
            BitmapFactory.Options localOptions2 = new BitmapFactory.Options();
            localOptions2.inPurgeable = true;
            this.strela.setColor(Color.WHITE);
        }
        this.strela.setScale(2.0F);
        addChild(this.strela);

        this.mCamera.setPosition(0.0F, 0.0F, -50.0F);
        this.mCamera.setFarPlane(1000.0F);

        Number3D localNumber3D = new Number3D(0.0F, 1.0F, 0.0F);
        localNumber3D.normalize();
        this.mAnim = new RotateAnimation3D(localNumber3D, 360.0F);
        this.mAnim.setDuration(10000L);
        this.mAnim.setRepeatCount(-1);
        this.mAnim.setTransformable3D(this.strela);
    } catch (Exception localException) {
        localException.printStackTrace();
    }
}
It gives this problem on a Samsung Nexus with Android v4.1.
Please help: how can I fix it?
This is tested and working on a Galaxy S3, so it should work for you as well. I use it in my wallpapers. Try this:
SimpleMaterial localSimpleMaterial = new SimpleMaterial();
Bitmap localBitmap = BitmapFactory.decodeResource(this.mContext.getResources(), R.drawable.image);
localSimpleMaterial.addTexture(this.mTextureManager.addTexture(localBitmap));
Plane localPlane = new Plane(10.0F, 10.0F, 1, 1);
localPlane.setRotZ(-90.0F);
localPlane.setScale(3.7F);
localPlane.setPosition(0.0F, 0.0F, 10.0F);
localPlane.setMaterial(localSimpleMaterial);
addChild(localPlane);
Just make sure that your image is a power of two, e.g., 256x256, 512x512, etc. Let me know how it works for you.
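If you are not sure whether an image qualifies, here is a hedged sketch of forcing a bitmap to power-of-two dimensions before handing it to the texture manager (plain Android API; the helper name is mine):
private static Bitmap toPowerOfTwo(Bitmap src) {
    // Round each dimension up to the next power of two.
    int w = Integer.highestOneBit(src.getWidth());
    if (w < src.getWidth()) w <<= 1;
    int h = Integer.highestOneBit(src.getHeight());
    if (h < src.getHeight()) h <<= 1;
    if (w == src.getWidth() && h == src.getHeight())
        return src; // already power of two
    return Bitmap.createScaledBitmap(src, w, h, true);
}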
Edit: Also make sure you import the Plane from rajawali.primitives, not rajawali.math.
Is the issue solved?
What are the devices on which the plane is not rendered? Are they Samsung?
Also, please tell me whether it is rendered in grey or not rendered at all.
I am asking because I ran into the same problem, where the solution was to adjust the mip-mapping. It took time, as I was a beginner then, but there was a very simple solution:
Textures are not displayed on the sphere for Samsung Galaxy SL

Integrating JPCT-AE with QCAR (Vuforia)

I know what I am going to ask has already been discussed a few times, but after going through all of those threads I couldn't find a complete answer, so I am asking a new question.
When I tried integrating JPCT-AE with QCAR, all went well as expected: I got my model-view matrix from renderFrame in JNI and successfully transferred it to Java, and the model is shown perfectly. But when I tried to pass this matrix to the JPCT world camera, my model disappeared.
My code, in onSurfaceChanged:
world = new World();
world.setAmbientLight(20, 20, 20);
sun = new Light(world);
sun.setIntensity(250, 250, 250);
cube = Primitives.getCube(1);
cube.calcTextureWrapSpherical();
cube.strip();
cube.build();
world.addObject(cube);
cam = world.getCamera();
cam.moveCamera(Camera.CAMERA_MOVEOUT, 10);
cam.lookAt(cube.getTransformedCenter());
SimpleVector sv = new SimpleVector();
sv.set(cube.getTransformedCenter());
sv.y -= 100;
sv.z -= 100;
sun.setPosition(sv);
MemoryHelper.compact();
And in onDraw:
com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
mResult.setDump(modelviewMatrix); // modelviewMatrix is what I get from QCAR
cube.setRotationMatrix(mResult);
cam.setBack(mResult);
fb.clear(back);
world.renderScene(fb);
world.draw(fb);
fb.display();
After some research I found that QCAR uses a right-handed coordinate system, meaning that positive X goes right, positive Y goes up and positive Z comes out of the screen, whereas in the JPCT coordinate system positive X goes right, positive Y goes down and positive Z goes into the screen.
[Image: QCAR coordinate system]
I know that the matrix QCAR gives is a 4x4 matrix containing 3x3 rotation values and a translation vector.
I am posting the matrices to be more clear:
modelviewmatrix:
1.512537 -159.66255 -10.275316 0.0
-89.86529 -1.1592013 4.7839375 0.0
-8.619186 10.179538 -159.44305 0.0
59.182976 93.205956 437.2832 1.0
The model-view matrix after inverting it with cam.setBack(modelviewMatrix.invert(modelviewMatrix)):
5.9083453E-5 -0.01109448 -3.3668696E-4 0.0
0.0040540528 -3.8752193E-4 0.0047518034 0.0
-0.004756433 -4.6811014E-4 0.0040459237 0.0
0.7533285 0.4116795 2.7063704 0.9999999
If I remove the 13th, 14th and 15th matrix elements, assuming a 3x3 rotation matrix, the model is rotated properly, but the translation (the in-and-out movement of the image) is not there.
Finally, I don't know what change to the translation vector is needed.
So please suggest what I am missing here.
The answer was to invert and then transpose the model-view matrix on the native side:
QCAR::Matrix44F inverseMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);
QCAR::Matrix44F invTransposeMatrix = SampleMath::Matrix44FTranspose(inverseMatrix);
Then pass the invTransposeMatrix value to Java:
env->SetFloatArrayRegion(modelviewArray, 0, 16, invTransposeMatrix.data);
env->CallVoidMethod(obj, method, modelviewArray);
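On the Java side, the received array can then be fed to the jPCT camera just as in the onDraw code from the question (the callback name is an assumption; setDump, setRotationMatrix and setBack are the calls already shown above):
// Called from native code with the inverted-and-transposed model-view matrix.
public void updateModelViewMatrix(float[] modelviewArray) {
    com.threed.jpct.Matrix mResult = new com.threed.jpct.Matrix();
    mResult.setDump(modelviewArray); // expects 16 floats, row by row
    cube.setRotationMatrix(mResult);
    cam.setBack(mResult);
}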
