This time I'm having an issue with the actual rendering of my model. I can load it through the libgdx loadObj() function and render it with GL10.GL_TRIANGLES, but I keep getting missing triangles in my model (it looks like only half of the model is being rendered). I've tried the old ModelLoaderOld.loadObj function (commented out) and also different render styles, but nothing seems to work.
And yes, I have checked the model in Blender: it is complete, with no missing faces.
See the screenshot below, and the code below that.
Any help would be fantastic; it's very frustrating because I'm so close to getting this to work.
And here's the code:
public class LifeCycle implements ApplicationListener {
    Mesh model;
    private PerspectiveCamera camera;

    public void create() {
        InputStream stream = null;
        camera = new PerspectiveCamera(45, 4, 4);
        try {
            stream = Gdx.files.internal("Hammer/test_hammer.obj").read();
            //model = ModelLoaderOld.loadObj(stream);
            model = ObjLoader.loadObj(stream, true);
            stream.close();
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }
        Gdx.gl.glEnable(GL10.GL_DEPTH_TEST);
        Gdx.gl10.glTranslatef(0.0f, 0.0f, -3.0f);
    }
    protected float rotateZ = 0.1f;
    protected float increment = 0.1f;

    public void render() {
        Gdx.app.log("LifeCycle", "render()");
        Gdx.gl.glClearColor(0.0f, 0.0f, 0.5f, 1.0f);
        Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        camera.update();
        camera.apply(Gdx.gl10);
        Gdx.gl10.glTranslatef(0.0f, 0.0f, -3.0f);
        Gdx.gl10.glRotatef(rotateZ, rotateZ, 5.0f, rotateZ);
        model.render(GL10.GL_TRIANGLES);
        rotateZ += increment;
        System.out.println("" + rotateZ);
    }
}
This looks like the OBJ file stores quads instead of triangles, while your loading routine reads each face as a triangle (it only reads the first 3 index groups of a face). Blender is (and should be) smart enough to handle quads, but your loader routine isn't. So either write a better OBJ loader (though I guess that isn't your class), configure your OBJ loader to treat quads correctly (if possible), or export the model as triangles instead of quads (if possible).
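For illustration, here is a minimal sketch of what fan-triangulating an OBJ face looks like, in case you end up patching the loader yourself. The token layout and the parseVertexIndex helper are assumptions for this sketch, not the actual internals of the libgdx loader:
import java.util.ArrayList;
import java.util.List;

// Fan-triangulate one OBJ face line that has already been split on whitespace,
// e.g. {"f", "1/1/1", "2/2/2", "3/3/3", "4/4/4"}. A face with n vertices
// yields n - 2 triangles: (0,1,2), (0,2,3), ...
static List<int[]> triangulateFace(String[] faceTokens) {
    List<int[]> triangles = new ArrayList<int[]>();
    for (int i = 2; i < faceTokens.length - 1; i++) {
        triangles.add(new int[] {
                parseVertexIndex(faceTokens[1]),
                parseVertexIndex(faceTokens[i]),
                parseVertexIndex(faceTokens[i + 1])
        });
    }
    return triangles;
}

// Hypothetical helper: extracts the position index from a group like "12/5/7"
static int parseVertexIndex(String group) {
    return Integer.parseInt(group.split("/")[0]) - 1; // OBJ indices are 1-based
}
Alternatively, triangulating the mesh in Blender before export (Ctrl+T in Edit Mode in older versions) sidesteps the loader limitation entirely.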
Related
I'm currently making an Android player plugin for Unity. The basic idea is that I will play the video with MediaPlayer on Android, which provides a setSurface API that takes a Surface (constructed from a SurfaceTexture), which in turn is bound to an OpenGL-ES texture. In most other cases, like showing an image, we can just send this texture in the form of a pointer/id to Unity, call Texture2D.CreateExternalTexture there to generate a Texture2D object, and set that on a UI GameObject to render the picture. However, when it comes to displaying video frames, it's a bit different, since video playback on Android requires a texture of type GL_TEXTURE_EXTERNAL_OES while Unity only supports the universal type GL_TEXTURE_2D.
To solve the problem, I googled for a while and learned that I should adopt a technique called "render to texture". More precisely, I should generate two textures: one for the MediaPlayer and SurfaceTexture on Android to receive video frames, and another for Unity that should also hold the picture data. The first one should be of type GL_TEXTURE_EXTERNAL_OES (let's call it the OES texture for short) and the second of type GL_TEXTURE_2D (let's call it the 2D texture). Both generated textures are empty at the beginning. When bound to the MediaPlayer, the OES texture is updated during video playback; we can then use a FrameBuffer to draw the content of the OES texture onto the 2D texture.
I've written a pure-Android version of this process, and it works well when I finally draw the 2D texture on the screen. However, when I package it as a Unity Android plugin and run the same code in Unity, no picture shows up. Instead, it displays only the preset color from glClearColor, which means two things:
1. The transfer process OES texture -> FrameBuffer -> 2D texture is complete, and Unity does receive the final 2D texture, because glClearColor is called only when we draw the content of the OES texture into the FrameBuffer.
2. Something goes wrong in the drawing that happens after glClearColor, because we don't see the video frames. In fact, I also call glReadPixels after drawing and before unbinding the FrameBuffer, which reads data back from the FrameBuffer we bound, and it returns the single color value matching the one we set in glClearColor.
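For reference, the readback I use looks roughly like this (a sketch; it assumes the FBO is still bound and reads a single RGBA pixel from the center), and it only ever reports the clear color:
// Debug readback while the FBO is still bound: read one pixel from the center.
// If the triangle were drawn, this should not always be the clear color.
ByteBuffer pixel = ByteBuffer.allocateDirect(4).order(ByteOrder.nativeOrder());
GLES30.glReadPixels(mTextureWidth / 2, mTextureHeight / 2, 1, 1,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, pixel);
Log.i(TAG, "center pixel RGBA: " + (pixel.get(0) & 0xFF) + ", " + (pixel.get(1) & 0xFF)
        + ", " + (pixel.get(2) & 0xFF) + ", " + (pixel.get(3) & 0xFF));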
To simplify the code I provide here, I'm going to draw a triangle into a 2D texture through a FrameBuffer. If we can figure out which part is wrong, we can then easily solve the similar problem of drawing video frames.
This function is called from Unity:
public int displayTriangle() {
    Texture2D texture = new Texture2D(UnityPlayer.currentActivity);
    texture.init();
    Triangle triangle = new Triangle(UnityPlayer.currentActivity);
    triangle.init();
    TextureTransfer textureTransfer = new TextureTransfer();
    textureTransfer.tryToCreateFBO();
    mTextureWidth = 960;
    mTextureHeight = 960;
    textureTransfer.tryToInitTempTexture2D(texture.getTextureID(), mTextureWidth, mTextureHeight);
    textureTransfer.fboStart();
    triangle.draw();
    textureTransfer.fboEnd();
    // Unity needs a native texture id to create its own Texture2D object
    return texture.getTextureID();
}
Initialization of 2D texture:
protected void initTexture() {
    int[] idContainer = new int[1];
    GLES30.glGenTextures(1, idContainer, 0);
    textureId = idContainer[0];
    Log.i(TAG, "texture2D generated: " + textureId);
    // texture.getTextureID() will return this textureId
    bindTexture();
    GLES30.glTexParameterf(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_NEAREST);
    GLES30.glTexParameterf(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_WRAP_S, GLES30.GL_CLAMP_TO_EDGE);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D,
            GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE);
    unbindTexture();
}

public void bindTexture() {
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId);
}

public void unbindTexture() {
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
}
draw() of Triangle:
public void draw() {
    float[] vertexData = new float[] {
            0.0f, 0.0f, 0.0f,
            1.0f, -1.0f, 0.0f,
            1.0f, 1.0f, 0.0f
    };
    vertexBuffer = ByteBuffer.allocateDirect(vertexData.length * 4)
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer()
            .put(vertexData);
    vertexBuffer.position(0);
    GLES30.glClearColor(0.0f, 0.0f, 0.9f, 1.0f);
    GLES30.glClear(GLES30.GL_DEPTH_BUFFER_BIT | GLES30.GL_COLOR_BUFFER_BIT);
    GLES30.glUseProgram(mProgramId);
    vertexBuffer.position(0);
    GLES30.glEnableVertexAttribArray(aPosHandle);
    GLES30.glVertexAttribPointer(
            aPosHandle, 3, GLES30.GL_FLOAT, false, 12, vertexBuffer);
    GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 3);
}
vertex shader of Triangle:
attribute vec4 aPosition;

void main() {
    gl_Position = aPosition;
}
fragment shader of Triangle:
precision mediump float;

void main() {
    gl_FragColor = vec4(0.9, 0.0, 0.0, 1.0);
}
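The setup of mProgramId and aPosHandle isn't shown above; it is the standard GLES30 compile-and-link boilerplate, roughly along these lines (a sketch with error checking omitted; the field names match the ones used in draw()):
// Compile one shader stage from source
private int compileShader(int type, String source) {
    int shader = GLES30.glCreateShader(type);
    GLES30.glShaderSource(shader, source);
    GLES30.glCompileShader(shader);
    return shader;
}

// Link the program and look up the vertex attribute used by draw()
private void initProgram(String vertexSource, String fragmentSource) {
    int vertexShader = compileShader(GLES30.GL_VERTEX_SHADER, vertexSource);
    int fragmentShader = compileShader(GLES30.GL_FRAGMENT_SHADER, fragmentSource);
    mProgramId = GLES30.glCreateProgram();
    GLES30.glAttachShader(mProgramId, vertexShader);
    GLES30.glAttachShader(mProgramId, fragmentShader);
    GLES30.glLinkProgram(mProgramId);
    // the name must match "aPosition" in the vertex shader above
    aPosHandle = GLES30.glGetAttribLocation(mProgramId, "aPosition");
}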
Key code of TextureTransfer:
public void tryToInitTempTexture2D(int texture2DId, int textureWidth, int textureHeight) {
    if (mTexture2DId != -1) {
        return;
    }
    mTexture2DId = texture2DId;
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, mTexture2DId);
    Log.i(TAG, "glBindTexture " + mTexture2DId + " to init for FBO");
    // make 2D texture empty
    GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA, textureWidth, textureHeight, 0,
            GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, null);
    Log.i(TAG, "glTexImage2D, textureWidth: " + textureWidth + ", textureHeight: " + textureHeight);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);
    fboStart();
    GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER, GLES30.GL_COLOR_ATTACHMENT0,
            GLES30.GL_TEXTURE_2D, mTexture2DId, 0);
    Log.i(TAG, "glFramebufferTexture2D");
    int fboStatus = GLES30.glCheckFramebufferStatus(GLES30.GL_FRAMEBUFFER);
    Log.i(TAG, "fbo status: " + fboStatus);
    if (fboStatus != GLES30.GL_FRAMEBUFFER_COMPLETE) {
        throw new RuntimeException("framebuffer " + mFBOId + " incomplete!");
    }
    fboEnd();
}

public void fboStart() {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, mFBOId);
}

public void fboEnd() {
    GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, 0);
}
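tryToCreateFBO() is not shown above either; it is assumed to be a plain glGenFramebuffers call along these lines (a sketch, assuming mFBOId is initialized to -1):
public void tryToCreateFBO() {
    if (mFBOId != -1) {
        return; // FBO already created
    }
    int[] idContainer = new int[1];
    GLES30.glGenFramebuffers(1, idContainer, 0);
    mFBOId = idContainer[0];
    Log.i(TAG, "FBO generated: " + mFBOId);
}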
And finally, some code on the Unity side:
int textureId = plugin.Call<int>("displayTriangle");
Debug.Log("native textureId: " + textureId);
Texture2D triangleTexture = Texture2D.CreateExternalTexture(
        960, 960, TextureFormat.RGBA32, false, true, (IntPtr) textureId);
triangleTexture.UpdateExternalTexture(triangleTexture.GetNativeTexturePtr());
rawImage.texture = triangleTexture;
rawImage.color = Color.white;
Well, the code above does not display the expected triangle, only a blue background. I added glGetError after nearly every OpenGL function call, and no errors are reported.
My Unity version is 2017.2.1. For the Android build, I turned off experimental multithreaded rendering; the other settings are all default (no texture compression, not a development build, and so on). My app's minimum API level is 5.0 Lollipop and the target API level is 9.0 Pie.
I really need some help, thanks in advance!
Now I have found the answer: if you want to do any drawing in your plugin, you should do it at the native layer. So if you want to make an Android plugin, you should call OpenGL-ES APIs from JNI instead of from the Java side. The reason is that Unity only allows drawing graphics on its rendering thread. If you simply call OpenGL-ES APIs from the Java side, as I did in the question, they will actually run on the Unity main thread instead of the rendering thread. Unity provides a method, GL.IssuePluginEvent, to call your own functions on the rendering thread, but it requires native coding, since this function takes a function pointer as its callback. Here is a simple example of how to use it:
On the JNI side:
// you can copy these headers from https://github.com/googlevr/gvr-unity-sdk/tree/master/native_libs/video_plugin/src/main/jni/Unity
#include "IUnityInterface.h"
#include "UnityGraphics.h"

static void on_render_event(int event_type) {
    // do all of your jobs related to rendering here, including initializing the context,
    // linking shaders, creating the program, finding handles, drawing and so on
}

// UnityRenderingEvent is an alias of void(*)(int) defined in UnityGraphics.h
UnityRenderingEvent get_render_event_function() {
    UnityRenderingEvent ptr = on_render_event;
    return ptr;
}

// notice you should return a jlong value to the Java side
extern "C" JNIEXPORT jlong JNICALL
Java_com_abc_xyz_YourPluginClass_getNativeRenderFunctionPointer(JNIEnv *env, jobject instance) {
    UnityRenderingEvent ptr = get_render_event_function();
    return (jlong) ptr;
}
On the Android Java side:
class YourPluginClass {
    ...
    public native long getNativeRenderFunctionPointer();
    ...
}
On the Unity side:
private void IssuePluginEvent(int pluginEventType) {
    long nativeRenderFuncPtr = Call_getNativeRenderFunctionPointer(); // call through the plugin class
    IntPtr ptr = (IntPtr) nativeRenderFuncPtr;
    GL.IssuePluginEvent(ptr, pluginEventType); // pluginEventType maps to the native function's event_type parameter
}

void Start() {
    IssuePluginEvent(1); // let's assume 1 stands for initializing everything
    // get your texture2D id from the plugin, create a Texture2D object from it,
    // attach that to a GameObject, and start playing for the first time
}

void Update() {
    // call SurfaceTexture.updateTexImage in the plugin
    IssuePluginEvent(2); // let's assume 2 stands for transferring TEXTURE_EXTERNAL_OES to TEXTURE_2D through the FrameBuffer
    // call Texture2D.UpdateExternalTexture to update the GameObject's appearance
}
You still need to do the texture transfer, and everything about it should happen at the JNI layer. But don't worry: the steps are nearly the same as in the question, only in a different language than Java, and there is plenty of material about this process, so you can surely manage it.
Finally, let me state the key to solving this problem again: do your native work at the native layer, and don't cling to pure Java. I'm genuinely surprised that there is no blog/answer/wiki that tells us to just write our code in C++. There are open-source implementations like Google's gvr-unity-sdk that give a complete reference, but you may still doubt whether you could finish the task without writing any C++ code. Now we know that we can't. To be honest, though, I think Unity has the ability to make this process even easier.
How can I create a reticle using the Cardboard SDK for Android Studio and RajawaliVR 3D Renderer?
I searched through many websites and the Rajawali wiki on GitHub to try to find a solution that would keep a 3D object in the center of the user's view with correct orientation. I did stumble upon a link in the wiki, but it did not offer a solution that worked with RajawaliVR.
After some trial and error, I came up with this solution.
First, create a 3D object in the renderer:
Sphere RETICLE = new Sphere(1, 50, 32);
Material sphereMaterial = new Material();
sphereMaterial.enableLighting(true);
sphereMaterial.setDiffuseMethod(new DiffuseMethod.Lambert());
RETICLE.setMaterial(sphereMaterial);
try {
    sphereMaterial.addTexture(tExture);
} catch (ATexture.TextureException error) {
    Log.d("DEBUG", "TEXTURE ERROR");
}
Next, you will need the following code in your renderer class:
public void centerObject(Object3D obj) {
    float[] newPosition4 = new float[4];
    float[] posVec4 = {0, 0, -3, 1.0f};
    float[] HeadViewMatrix_inv = new float[16];
    Matrix4 HeadViewMatrix4 = new Matrix4();
    HeadViewMatrix4.setAll(mHeadViewMatrix);
    HeadViewMatrix4 = HeadViewMatrix4.inverse();
    HeadViewMatrix4.toFloatArray(HeadViewMatrix_inv);
    Matrix.multiplyMV(newPosition4, 0, HeadViewMatrix_inv, 0, posVec4, 0);
    obj.setPosition(newPosition4[0], newPosition4[1], newPosition4[2]);
    obj.setLookAt(getCurrentCamera().getPosition());
}
You can then call the centerObject method from the onRender method or from a handler. (The onRender method gets called every time a new frame is drawn.)
@Override
public void onRender(long elapsedTime, double deltaTime) {
    super.onRender(elapsedTime, deltaTime);
    if (RETICLE != null) centerObject(RETICLE);
}
Placing a null check before calling centerObject() is important because the scene may render before the object is created.
Rajawali WIKI Link
The Rajawali wiki link above offered some great clues, but the issue has been closed. I commented there with my solution as well, to provide a turnkey solution for the next person facing the same problem.
I hope this helps some people. Please post better, improved, or alternative solutions so others may benefit.
When I draw around 100-200 textures, all on the same screen, the device becomes very slow and the app crashes without any exceptions. Could you please let me know the best way to draw 100 textures without compromising performance?
I am using the TextureRegion from TextureAtlas.
MainGame
public void render(SpriteBatch sb) {
    // TODO Auto-generated method stub
    // System.out.println("BallPoolGame Screen - render");
    batch = sb;
    sb.setProjectionMatrix(camera.combined);
    sb.begin();
    sb.draw(BACKGROUND_BALL_POOL, 0, 0, SCREEN_WIDTH, SCREEN_HEIGHT);
    cellManager.draw(sb);
    ballManager.draw(sb);
    sb.end();
}

private void setGameTextures() {
    gameScreenAtlas = new TextureAtlas("data/texturetutorialpack.pack");
    RED_BALL = gameScreenAtlas.findRegion("redball");
    // RED_BALL.getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);
    BLUE_BALL = gameScreenAtlas.findRegion("blueball");
    // BLUE_BALL.getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);
    GREEN_BALL = gameScreenAtlas.findRegion("greenball");
    // GREEN_BALL.getTexture().setFilter(TextureFilter.Linear, TextureFilter.Linear);
}
CellManager
public void draw(SpriteBatch sb) {
    batch = sb;
    showImageTexture(MODEL1, 207, 1);
    if (showSelectedCell) {
        if (allPossiblePathSize > 0)
            setupBoardCellTexture();
        showImage(CELL_SELECTED, rowCoordinate[cellRow], colCoordinate[cellCol]);
    }
}

private void setupBoardCellTexture() {
    for (CellGrid c : masterGrid) {
        if (cellTextureIndicator[c.getRow()][c.getCol()] == 1) {
            showImage(CELL_ALL_PATH_TEXTURE, c.getRowCoordinate(), c.getColCoordinate());
        }
    }
}

private void showImage(TextureRegion tr, float rowCoordinate, float colCoordinate) {
    batch.draw(tr, colCoordinate, rowCoordinate);
}
BallManager
public void draw(SpriteBatch sb) {
    batch = sb;
    setupBoardBallTexture();
    if (moveTheBall) {
        updateBallPosition();
        showImage(ball.getTextureRegion(), moveRow + 6, moveCol + 6);
    }
    squeezeBalls.draw(sb);
}
Some essential data about your app is missing before that question can be answered:
How big is one texture on average (size: width x height)?
On which devices does this error occur (some devices may have less fill rate than others)?
Which texture filter does the TextureAtlas use (LINEAR, NEAREST, ...)?
My guess is that you are trying to draw many textures even when they are out of sight. If that is the case, you have to check whether each cell is visible to the camera before drawing it, as in the sketch below.
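A minimal sketch of such a visibility check with libgdx's frustum (the camera reference and the cellRadius bounding radius are placeholders for whatever your classes actually expose):
// Draw only the cells the camera can actually see.
// sphereInFrustum is a cheap, conservative test; cellRadius is a
// hypothetical bounding radius for a single cell.
private void drawVisibleCells() {
    for (CellGrid c : masterGrid) {
        if (camera.frustum.sphereInFrustum(c.getColCoordinate(), c.getRowCoordinate(), 0f, cellRadius)) {
            showImage(CELL_ALL_PATH_TEXTURE, c.getRowCoordinate(), c.getColCoordinate());
        }
    }
}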
Another guess would be that you are trying to draw too many elements with the LINEAR texture filter. With linear filtering, the GPU needs to sample far more points than with nearest (I think it was 4 times the samples, so in theory your GPU draws the equivalent of 400-800 textures; depending on image size, that is too much for mobile GPU fill rates). You can test this by switching to Nearest filtering, as sketched below.
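A quick way to test that theory on your gameScreenAtlas (a sketch; exact atlas accessors may vary by libgdx version):
// Switch every page texture of the atlas to the cheaper Nearest filter
for (Texture t : gameScreenAtlas.getTextures()) {
    t.setFilter(TextureFilter.Nearest, TextureFilter.Nearest);
}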
If you describe the circumstances in more detail, I can probably give more insight into your problem.
I've had reports from a couple of users (out of 85,000 downloads) of the problem shown in the image below. No doubt it has occurred a few more times than that, but it's certainly rare.
I'm unable to reproduce the problem and don't believe it's device-specific, as some users appear to be playing the game perfectly happily on the same device models that reported the problem.
The letters are drawn onto a frame buffer first, to build them up from the circular background with the character drawn on top. They are then copied off, and a new texture is created.
The background is also put together from multiple components on a frame buffer and copied off to create a texture, so I'm not sure why that appears to work perfectly well when the letters don't. The background is drawn using the same FrameBuffer and the same SpriteBatch instance.
What it does look like
What it should look like
Method that creates the images:
private static Texture getTextureUsingGpu(String letter, Bubble.BubbleType bubbleType) {
    if (!enabled)
        return null;
    StrokeFontHelper font = Assets.strokeFont;
    font.setSettings(Fonts.BUBBLE_TEXT_SETTINGS);
    TextureRegion tx = getBlockImage(letter, bubbleType);
    FrameBuffer fb = TextureLoader.getFrameBuffer();
    fb.begin();
    Gdx.gl20.glClearColor(0.0f, 0.0f, 0.0f, 1);
    // Make sure everything is really really clear! Trying to fix graphics glitches on some phones
    Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT | GL20.GL_STENCIL_BUFFER_BIT | GL20.GL_SUBPIXEL_BITS);
    float width = tx.getRegionWidth();
    float height = tx.getRegionHeight();
    TextureLoader.sb.begin();
    tx.flip(false, !tx.isFlipY());
    TextureLoader.sb.disableBlending();
    TextureLoader.sb.draw(tx, 0, 0);
    TextureLoader.sb.enableBlending();
    // Removed character drawing code to make it more readable
    TextureLoader.sb.end();
    Pixmap pm = ScreenUtils.getFrameBufferPixmap(0, 0, (int) width, (int) height);
    PixmapTextureData data = new PixmapTextureData(pm, Format.RGBA8888, false, false, true);
    Texture result = new Texture(data);
    result.setFilter(TextureFilter.Linear, TextureFilter.Linear);
    cacheTexture(letter, result, pm);
    fb.end();
    return result;
}
public static FrameBuffer getFrameBuffer() {
    if (frameBuffer == null || frameBuffer.getWidth() != Gdx.graphics.getWidth() || frameBuffer.getHeight() != Gdx.graphics.getHeight()) {
        if (frameBuffer != null)
            frameBuffer.dispose();
        // Create the highest quality frame buffer we can get away with
        try {
            frameBuffer = new FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
        } catch (Exception e) {
            try {
                frameBuffer = new FrameBuffer(Pixmap.Format.RGB888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
            } catch (Exception e2) {
                frameBuffer = new FrameBuffer(Pixmap.Format.RGB565, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
            }
        }
        // Set up the camera correctly for the frame buffer
        camera = new OrthographicCamera(frameBuffer.getWidth(), frameBuffer.getHeight());
        camera.position.set(frameBuffer.getWidth() * 0.5f, frameBuffer.getHeight() * 0.5f, 0);
        camera.update();
        sb.setProjectionMatrix(camera.combined);
    }
    return frameBuffer;
}
Edit
I've done some fiddling with this and have a very helpful user who has been testing versions for me. Here's what I've established:
If I use a ShapeRenderer rather than a SpriteBatch, I can draw over the whole area as expected.
It is almost certainly at the point where I draw textures to the FrameBuffer using the SpriteBatch that the problem occurs: it just doesn't draw to the bottom half of the textures. What's on the FrameBuffer is copied to the pixmap correctly.
Another Edit: I've had a new visual glitch reported by a user. I don't know whether this sheds more light on the problem.
I've been trying out AndEngine (GLES2) for the last couple of days.
I'm having a problem with the SpriteExample from the Examples project.
In the SpriteExample in the example project, the face_box.png sprite looks nice and sharp.
However, when I copy the same code and the face_box.png file to my own separate project, the sprite looks pixelated.
Since the code is exactly the same, I guess the problem is in some configuration setting, but I could not figure it out.
I'm running on a Galaxy S2 with ICS.
Does anyone have any idea what might cause the problem?
This is the code, if anyone is wondering:
public class AndEngineMapActivity extends SimpleBaseGameActivity implements OnClickListener {
    // ===========================================================
    // Constants
    // ===========================================================

    private static final int CAMERA_WIDTH = 800;
    private static final int CAMERA_HEIGHT = 480;

    // ===========================================================
    // Fields
    // ===========================================================

    private ITexture mTexture;
    private ITextureRegion mFaceTextureRegion;

    // ===========================================================
    // Constructors
    // ===========================================================

    // ===========================================================
    // Getter & Setter
    // ===========================================================

    // ===========================================================
    // Methods for/from SuperClass/Interfaces
    // ===========================================================

    @Override
    public EngineOptions onCreateEngineOptions() {
        final Camera camera = new Camera(0, 0, CAMERA_WIDTH, CAMERA_HEIGHT);
        return new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR, new RatioResolutionPolicy(CAMERA_WIDTH, CAMERA_HEIGHT), camera);
    }

    @Override
    public void onCreateResources() {
        try {
            this.mTexture = new BitmapTexture(this.getTextureManager(), new IInputStreamOpener() {
                @Override
                public InputStream open() throws IOException {
                    return getAssets().open("gfx/face_box.png");
                }
            });
            this.mTexture.load();
            this.mFaceTextureRegion = TextureRegionFactory.extractFromTexture(this.mTexture);
        } catch (IOException e) {
            Debug.e(e);
        }
    }

    @Override
    public Scene onCreateScene() {
        this.mEngine.registerUpdateHandler(new FPSLogger());
        final Scene scene = new Scene();
        scene.setBackground(new Background(0.09804f, 0.6274f, 0.8784f));
        /* Calculate the coordinates for the face, so it's centered on the camera. */
        final float centerX = (CAMERA_WIDTH - this.mFaceTextureRegion.getWidth()) / 2;
        final float centerY = (CAMERA_HEIGHT - this.mFaceTextureRegion.getHeight()) / 2;
        /* Create the face and add it to the scene. */
        final Sprite face = new Sprite(centerX, centerY, this.mFaceTextureRegion, this.getVertexBufferObjectManager());
        scene.attachChild(face);
        return scene;
    }

    @Override
    public void onClick(final ButtonSprite pButtonSprite, final float pTouchAreaLocalX, final float pTouchAreaLocalY) {
        runOnUiThread(new Runnable() {
            @Override
            public void run() {
                Toast.makeText(AndEngineMapActivity.this, "Clicked", Toast.LENGTH_LONG).show();
            }
        });
    }

    // ===========================================================
    // Methods
    // ===========================================================

    // ===========================================================
    // Inner and Anonymous Classes
    // ===========================================================
}
**Update:** Following JoenEye's advice, I tried loading the texture differently with:
@Override
public void onCreateResources() {
    try {
        this.mTexture = new BitmapTexture(this.getTextureManager(), new IInputStreamOpener() {
            @Override
            public InputStream open() throws IOException {
                return getAssets().open("gfx/face_box.png");
            }
        }, TextureOptions.BILINEAR_PREMULTIPLYALPHA);
        this.mTexture.load();
        this.mFaceTextureRegion = TextureRegionFactory.extractFromTexture(this.mTexture);
    } catch (IOException e) {
        Debug.e(e);
    }
}
The results have improved and the smiley face looks a bit better, but it's still not as sharp as in the example project.
**Another update:**
These are the images of the results I get.
This is the one from the original example project (best result).
This is the one from my project without TextureOptions.BILINEAR_PREMULTIPLYALPHA.
This is the one from my project with TextureOptions.BILINEAR_PREMULTIPLYALPHA (current result).
By the way, an interesting result: once I created another empty project with only this class in it, it worked flawlessly and looked good.
So I guess it must be some kind of configuration problem in my own project.
I'd be glad to get any more ideas!
Thanks!
I'll guess that the difference between the example project and the one with the bad result is either the subpixel location of the sprite or the sprite being stretched/skewed slightly. Are you careful to draw the sprite exactly on a pixel boundary?
The issue is that when a fragment is drawn, if it is not perfectly aligned with the texture it is sampling, the GPU must generate a color that is not exactly what is in the texture. For example, if you have a 10x10 pixel texture and you draw it at screen coordinates (0.5, 0.5), then each pixel either picks the nearest texel (NEAREST sampling) or blends together nearby texels (LINEAR or BILINEAR sampling). NEAREST can have rounding problems when two texels are equidistant from the sample point (which might explain the slightly ugly image in the first bad result). LINEAR sampling just blends the nearest texels, which may give the slightly blurred image you see in the second result.
So to remedy this for small images with tiny features, you always want to draw the image so that it lines up exactly with a pixel boundary. Can you check whether you are doing this in your application?
For more information, do some Google searches for 'pixel perfect sampling/rendering', which will give you more background.
EDIT: I also noticed this, which may explain it:
final float centerX = (CAMERA_WIDTH - this.mFaceTextureRegion.getWidth()) / 2;
final float centerY = (CAMERA_HEIGHT - this.mFaceTextureRegion.getHeight()) / 2;
You're doing a division by 2 here that can leave the 'center' value off by half a pixel when the image width/height is an odd number. When you then pass this value to AndEngine, your center is incorrect by half a pixel, which would explain the subpixel problems.
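A possible fix is to force float arithmetic and snap the result to a whole pixel (a sketch):
// Divide as floats, then round so the sprite starts exactly on a pixel boundary
final float centerX = Math.round((CAMERA_WIDTH - this.mFaceTextureRegion.getWidth()) / 2f);
final float centerY = Math.round((CAMERA_HEIGHT - this.mFaceTextureRegion.getHeight()) / 2f);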
Try using the standard way of loading images unless the method you use is necessary for some reason.
mTexture = new BitmapTexture(1024, 512, TextureOptions.BILINEAR_PREMULTIPLYALPHA);
BitmapTextureRegionFactory.createFromAsset(mTexture, this, "face_box.png", 0, 0);
You will probably end up using buildable textures that place the images on the texture automatically, but this could help you determine where the problem is.
Eventually I decided to just open a new project and move all my code into it.
It works.
I know it's a crude solution, but it solved the problem.
If I ever run into this problem again and find a real solution, I'll update it here.