Using mipmaps results in garbage textures - android

I'm creating my own mipmaps when creating textures, and as soon as I enable mipmapping via GL_LINEAR_MIPMAP_NEAREST (or any MIPMAP min filter), the textures become a very dark, blurry mess. If I switch to GL_LINEAR, they're fine.
Here's how I'm creating the texture:
glGenTextures(1, &m_TextureId);
glBindTexture(GL_TEXTURE_2D, m_TextureId);
int level = 0;
jobject mippedBitmap = srcBitmap;
while (width >= 2 && height >= 2) {
    jniEnv->CallStaticVoidMethod(s_GLUtilsClass, s_texImage2DMethodID,
                                 GL_TEXTURE_2D, level, mippedBitmap, 0);
    width >>= 1;
    height >>= 1;
    level++;
    mippedBitmap = createScaledBitmap(jniEnv, srcBitmap, width, height, true);
}
I've omitted all Bitmap.recycle()/NewGlobalRef() calls for brevity. createScaledBitmap is obviously a JNI call to Bitmap.createScaledBitmap().
I also tried the other version of texImage2D that takes the format and type of the bitmap. I've verified that it's always RGBA.
EDIT: To elaborate on the textures - they're really almost black. I tried eraseColor() on the mips with bright colors, and they're still extremely dark.

glGenerateMipmap will do this task faster and much more conveniently for you.
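For example, on OpenGL ES 2.0 you can upload only the base level and let the GL build the rest. A minimal sketch, assuming a GLES20 context and an already-created textureId and bitmap:

// Sketch (OpenGL ES 2.0): upload level 0 only and let the GL build the mip chain.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
        GLES20.GL_LINEAR_MIPMAP_NEAREST);

On ES 1.1 there is no glGenerateMipmap, but setting the GL_GENERATE_MIPMAP texture parameter to GL_TRUE before uploading level 0 achieves the same thing.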

The code you're showing will not generate the lower mip levels (e.g. the 1x1 level will be missing), so your texture will be incomplete. With an incomplete texture, rendering behaves as if texturing were disabled entirely. It's unclear from your description whether this is what you're observing.
In all cases, you should provide the mipmaps all the way down to 1x1. For non-square textures, this requires clamping the computed size at 1 for the smaller dimension, as in 4x1 -> 2x1 -> 1x1.
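A minimal sketch of a loop that produces the complete chain (Java-side equivalent of the question's JNI code, for clarity; recycle()/global-ref handling omitted as in the question):

// Sketch: halve each dimension but clamp at 1, so non-square textures still
// reach 1x1 (4x1 -> 2x1 -> 1x1) and the texture is mipmap-complete.
int w = srcBitmap.getWidth(), h = srcBitmap.getHeight();
int level = 0;
Bitmap mip = srcBitmap;
while (true) {
    GLUtils.texImage2D(GL10.GL_TEXTURE_2D, level, mip, 0);
    if (w == 1 && h == 1) break;
    w = Math.max(w / 2, 1);
    h = Math.max(h / 2, 1);
    level++;
    mip = Bitmap.createScaledBitmap(srcBitmap, w, h, true);
}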

It may be the case that Bitmap.createScaledBitmap does not correct for gamma. Have a look at this thread for code to create gamma-corrected mipmaps.
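The idea is to average in linear light rather than on gamma-encoded values. A minimal sketch for one channel of a 2x2 box filter, assuming a display gamma of 2.2:

// Sketch: gamma-correct 2x2 box filter for one 8-bit channel.
// Decode the four source samples to linear light, average, re-encode.
static int downsampleGammaCorrect(int p00, int p01, int p10, int p11) {
    final double gamma = 2.2;
    double sum = Math.pow(p00 / 255.0, gamma) + Math.pow(p01 / 255.0, gamma)
               + Math.pow(p10 / 255.0, gamma) + Math.pow(p11 / 255.0, gamma);
    return (int) Math.round(255.0 * Math.pow(sum / 4.0, 1.0 / gamma));
}

Averaging the encoded values directly, as a naive scaler does, biases every mip level darker.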

Related

Internal Texture Format

On Android, using OpenGL ES 2.0, I am trying to run performance tests with different internal texture formats.
Initially I have a lot of RGBA textures (PNG) which I want to load and store internally in a different OpenGL format (for example RGB or LUMINANCE). I load my textures using glTexImage2D like this:
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId);
...
int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer b = ByteBuffer.allocate(size);
bitmap.copyPixelsToBuffer(b);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, b);
This works fine. However, if I change the first GLES20.GL_RGBA (the internalFormat parameter) to anything else (GLES20.GL_RGB or GLES20.GL_LUMINANCE), my texture appears all black. Changing the second GLES20.GL_RGBA to the same value will display something, but obviously not correctly, since the original data is RGBA.
I thought maybe it had something to do with the shader code; that maybe texture2D(..) returns a different value because the internal format of the texture is different. My shader code is simply:
gl_FragColor = texture2D(texture, fragment_texture_coordinate);
I tried changing this around too, but no luck yet. So I thought maybe glTexImage2D does not work the way I think it does (I am not an expert in this area whatsoever).
What am I doing wrong?
Edit:
I overlooked this little detail on texImage2D. It appears that:
internalformat must match format. No conversion between formats is supported during texture image processing. type may be used as a hint to specify how much precision is desired, but a GL implementation may choose to store the texture array at any internal resolution it chooses.
What I gather from this is that if you want to store your textures differently from their original format, you'll have to convert them yourself.
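A minimal sketch of such a conversion, repacking the RGBA buffer b from the code above into a LUMINANCE buffer (width and height standing in for bitmap.getWidth() and bitmap.getHeight()):

// Sketch: one luminance byte per pixel, then upload with matching
// internalformat and format (GL_LUMINANCE).
ByteBuffer lum = ByteBuffer.allocateDirect(width * height);
for (int i = 0; i < width * height; i++) {
    int r  = b.get(i * 4)     & 0xFF;
    int g  = b.get(i * 4 + 1) & 0xFF;
    int bl = b.get(i * 4 + 2) & 0xFF;
    lum.put((byte) ((r * 77 + g * 151 + bl * 28) >> 8)); // integer Rec.601 weights
}
lum.position(0);
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1); // rows are no longer 4-byte aligned
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, width, height,
        0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, lum);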
Your fragment shader must be written to agree with the format you are giving to glTexImage2D(). For GL_RGB, it should force the alpha to 1.0, like this:
vec3 Color_RGB = texture2D(sampler2d, texCoordinate).rgb;
gl_FragColor = vec4(Color_RGB, 1.0);
But, for GL_RGBA, it should look like this:
vec4 Color_RGBA = texture2D(sampler2d, texCoordinate);
gl_FragColor = Color_RGBA;
And, as has been discussed, you can only use the Android Bitmap class for textures if your PNG files have no transparency. This article explains that:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1

Android: FBO to Bitmap. Scrambled result with NPOT image sizes on some devices

Honestly, I don't like to ask things this way, but I have no clue about this one!
Have you seen this before??
It can be seen that the image is scrambled following some defined pattern. This happens only in some (low end) devices, with Non Power of two images (FBO). It works well on other devices.
What I do is load an Android Bitmap into an FBO (that works OK, as it displays fine on screen). I do some editing (I paste a sticker, which in the image appears in the right place), and finally save the FBO back into a Bitmap. It works fine for a 512x512 FBO (the FBO has the image's size), but not for this one (507x800).
Any ideas? I don't post code because I have no clue where the problem is; tell me what's relevant and I'll add it.
This is the GL call to retrieve the pixels from the FBO:
public Buffer toPixelBuffer() {
    final int w = this.getWidth();  // colorTexture width
    final int h = this.getHeight();
    final ByteBuffer pixels = BufferUtils.newByteBuffer(w * h * 4);
    Gdx.gl.glPixelStorei(GL10.GL_PACK_ALIGNMENT, 1);
    Gdx.gl.glReadPixels(0, 0, w, h, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixels);
    pixels.clear(); // rewind so the caller reads from the start
    return pixels;
}
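The buffer is then wrapped into a Bitmap, roughly like this (a sketch of the step described above, not the exact code; note that glReadPixels returns rows bottom-up, so the result still needs a vertical flip):

// Sketch: wrap the RGBA pixel buffer in a Bitmap (still upside down here).
Bitmap bmp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(pixels);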
I also don't have a buggy device with me to test right now :(
Thank you!
I had the exact same problem. I experienced it on the Galaxy Ace, the Galaxy Y, and some other devices.
After a lot of testing I found out that POT textures weren't even required; rounding the texture size up to the next multiple of 64 pixels did the trick. So, say I have a 122x53 texture: I need to convert it to 128x64, and so on.
Below is the function I use to get a valid texture dimension. Call it for both width and height.
/**
 * Some GPUs, such as the "VideoCore IV HW" on the Samsung Galaxy Ace,
 * require texture (FBO) sizes to be in 64-pixel increments (WTF!!!!)
 *
 * @param dimension Base dimension to calculate
 * @return Dimension rounded up to the next multiple of 64
 */
public static int calculate64Dimension(final int dimension)
{
    return (((dimension - 1) >> 6) << 6) + 64;
}
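For example, for the question's 507x800 FBO:

int fboWidth  = calculate64Dimension(507); // -> 512
int fboHeight = calculate64Dimension(800); // -> 832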

Android OpenGL texture corruption

I am having problems with texture corruption on the Android emulator (it runs fine on most Android devices).
The picture above is a reference rendering produced by an emulator running Android 4.1 Jelly Bean; everything looks as it should.
The second picture is captured in an emulator running Android 1.6. Note the corruption of some of the disabled toolbar buttons (they are rendered with color 1f, 1f, 1f, 0.5f).
The third picture is captured in the same emulator. The difference is that now a score is rendered in the upper-right corner. The score is a bitmap font whose texture is an alpha mask. Everything rendered after the score loses its texture. Note that the previous screenshot also contained a bitmap font rendered the same way (but using a different texture).
A similar problem was present on one of the Samsung devices (I don't remember the model). When the floor texture was rendered, everything rendered after it lost its texture. The problem did not manifest itself when I either a) did not bind the texture, b) bound the texture but drew no triangles with it, or c) recreated the PNG asset from scratch.
Opengl settings:
gl.glDisable(GL10.GL_LIGHTING);
gl.glDisable(GL10.GL_CULL_FACE);
gl.glDisable(GL10.GL_DEPTH_TEST);
gl.glDisable(GL10.GL_DITHER);
gl.glDepthMask(false);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glBlendFunc(GL10.GL_ONE,GL10.GL_ONE_MINUS_SRC_ALPHA);
gl.glShadeModel(GL10.GL_FLAT);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
How textures are loaded:
public void doGLLoading(Engine renderer) {
    GL10 gl = renderer.getGl();
    int[] ids = new int[1];
    gl.glGenTextures(1, ids, 0);
    id = ids[0];
    gl.glBindTexture(GL10.GL_TEXTURE_2D, id);
    Log.d("SpriteDraw", String.format("Texture %s has format %s", getPath(),
            bitmap.getConfig().toString()));
    buildMipmap(gl, bitmap);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, minFilter);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, magFilter);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, textureWrapS);
    gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, textureWrapT);
}
private void buildMipmap(GL10 gl, Bitmap bitmap) {
    int level = 0;
    int height = bitmap.getHeight();
    int width = bitmap.getWidth();
    while (height >= 1 || width >= 1) {
        // Generate the texture from our bitmap and set it to the according level
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, level, bitmap, 0);
        if (height == 1 || width == 1) {
            break;
        }
        // Increase the mipmap level
        level++;
        height /= 2;
        width /= 2;
        Bitmap bitmap2 = Bitmap.createScaledBitmap(bitmap, width, height, true);
        // Clean up
        bitmap.recycle();
        bitmap = bitmap2;
    }
}
Notes: the font is rendered using gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA) and GL10.glDrawArrays. The corruption affects not only the 1.6 emulator but also the 2.x series of Android, although it is not as prominent (the alpha masks are still rendered incorrectly). All assets are correctly loaded as power-of-two bitmaps.
I suggest using 32-bit textures, not grayscale and not "alphamasks" (what's that?).
Check the size of your texture: it should not exceed the maximum size (query it with glGetIntegerv(GL_MAX_TEXTURE_SIZE, ...)). Also make sure your textures are powers of 2. Yes, you mentioned before that they are, but if they live in density-scaled asset folders (drawable-*dpi), Android will scale them. To avoid scaling, put them in the "raw" folder.
Just for a test, try disabling all filtering, including mipmaps: set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_NEAREST.
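A quick sketch of both checks, in the GL10 style of the question's code:

// Sketch: query the maximum texture size, then force unfiltered sampling
// (no mipmaps) while debugging.
int[] maxSize = new int[1];
gl.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE, maxSize, 0);
Log.d("SpriteDraw", "GL_MAX_TEXTURE_SIZE = " + maxSize[0]);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);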

Cutting a spritesheet programmatically: best practices

I have a big spritesheet (3808x1632) composed of 42 frames.
I want to present an animation with these frames, and I use a thread to load a bitmap array with all the frames, with a splash screen shown while it finishes.
I'm not using a SurfaceView (and a canvas draw function); I just load the animation frame by frame into an ImageView in my main layout.
My approach is similar to Loading a large number of images from a spritesheet.
Loading currently takes almost 15 seconds, which is not acceptable.
I use this kind of function:
for (int i = 0; i < TotalFramesTeapotBG; i++) {
    xStartTeapotBG = (i % framesInRowsTeapotBG) * frameWidthTeapotBG;
    yStartTeapotBG = (i / framesInRowsTeapotBG) * frameHeightTeapotBG;
    mVectorTeapotBG.add(Bitmap.createBitmap(framesBitmapTeapotBG, xStartTeapotBG,
            yStartTeapotBG, frameWidthTeapotBG, frameHeightTeapotBG));
}
framesBitmapTeapotBG is the big spritesheet.
Looking more deeply, I read in the logcat that the createBitmap call takes a lot of time, maybe because the spritesheet is too big.
I found somewhere that I could open a window onto the big spritesheet using a Rect and a canvas, creating the small bitmaps to be loaded into the array, but it was not really clear. I'm talking about this post: cut the portion of bitmap.
My question is: how can I speed up the spritesheet cutting?
Edit:
I'm trying to use this approach but I cannot see the final animation:
for (int i = 0; i < TotalFramesTeapotBG; i++) {
    xStartTeapotBG = (i % framesInRowsTeapotBG) * frameWidthTeapotBG;
    yStartTeapotBG = (i / framesInRowsTeapotBG) * frameHeightTeapotBG;
    Bitmap bmFrame = Bitmap.createBitmap(frameWidthTeapotBG, frameHeightTeapotBG,
            Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(bmFrame);
    Rect src = new Rect(xStartTeapotBG, yStartTeapotBG, frameWidthTeapotBG,
            frameHeightTeapotBG);
    Rect dst = new Rect(0, 0, frameWidthTeapotBG, frameHeightTeapotBG);
    c.drawBitmap(framesBitmapTeapotBG, src, dst, null);
    mVectorTeapotBG.add(bmFrame);
}
Probably, the Bitmap bmFrame is not correctly managed.
The short answer is better memory management.
The sprite sheet you're loading is huge, and then you're copying it into a bunch of little bitmaps. Supposing the sprite sheet can't be any smaller, I'd suggest one of two approaches:
Use individual bitmaps. This reduces the memory copies as well as the number of times Dalvik has to grow the heap. However, these benefits may be limited by the need to load many images off the filesystem instead of just one. That would be the deciding factor on a normal computer, but Android devices may behave differently since they run off flash memory.
Blit directly from your sprite sheet. When drawing, draw straight from the sprite sheet using something like Canvas.drawBitmap(Bitmap bitmap, Rect src, Rect dst, Paint paint). This reduces your file loads to one large allocation that probably only needs to happen once in the lifetime of your activity; a sketch follows below.
I think the second option is probably the better of the two since it will be easier on the memory system and be less work for the GC.
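A minimal sketch of option 2, using the question's variable names inside a custom View's onDraw (canvas and frameIndex are assumptions):

// Sketch: draw frame i straight from the sheet; no per-frame Bitmap copies.
int col = frameIndex % framesInRowsTeapotBG;
int row = frameIndex / framesInRowsTeapotBG;
Rect src = new Rect(col * frameWidthTeapotBG, row * frameHeightTeapotBG,
        (col + 1) * frameWidthTeapotBG, (row + 1) * frameHeightTeapotBG);
Rect dst = new Rect(0, 0, frameWidthTeapotBG, frameHeightTeapotBG);
canvas.drawBitmap(framesBitmapTeapotBG, src, dst, null);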
Thanks to stevehb for the suggestion, I finally got it:
for (int i = 0; i < TotalFramesTeapotBG; i++) {
    xStartTeapotBG = (i % framesInRowsTeapotBG) * frameWidthTeapotBG;
    yStartTeapotBG = (i / framesInRowsTeapotBG) * frameHeightTeapotBG;
    Bitmap bmFrame = Bitmap.createBitmap(frameWidthTeapotBG, frameHeightTeapotBG,
            Bitmap.Config.ARGB_8888);
    Canvas c = new Canvas(bmFrame);
    Rect src = new Rect(xStartTeapotBG, yStartTeapotBG,
            xStartTeapotBG + frameWidthTeapotBG, yStartTeapotBG + frameHeightTeapotBG);
    Rect dst = new Rect(0, 0, frameWidthTeapotBG, frameHeightTeapotBG);
    c.drawBitmap(framesBitmapTeapotBG, src, dst, null);
    mVectorTeapotBG.add(bmFrame);
}
The computation time falls incredibly! :)
Use a LevelListDrawable. Cut the sprites into individual frames and drop them into your drawable resource directory. Create the drawable either programmatically or through an XML level-list drawable, then use ImageView.setImageLevel() to pick your frame.
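A minimal sketch of the programmatic route (frame_0, frame_1, ... are hypothetical drawable resources, imageView is your ImageView):

// Sketch: one level per frame; setImageLevel() selects the visible frame.
LevelListDrawable frames = new LevelListDrawable();
frames.addLevel(0, 0, getResources().getDrawable(R.drawable.frame_0));
frames.addLevel(1, 1, getResources().getDrawable(R.drawable.frame_1));
// ... one addLevel() call per remaining frame ...
imageView.setImageDrawable(frames);
imageView.setImageLevel(0); // show the first frame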
I use a method of slicing based on rows and columns. Bear in mind, though, that your sprite sheet is rather huge, and you are putting the whole sheet into memory: 3808 x 1632 x 4 bytes, roughly 25 MB.
Anyway, what I do is take an image (let's say 128x128) and tell a Sprite(bitmap, 4, 2) constructor that there are 4 columns and 2 rows. Then you can slice and dice based on that: bitmap.getWidth() / 4, and so on. Pretty simple stuff. However, if you want to do some real work, use OpenGL and textures.
Oh, I also forgot to mention there is some onDraw work that needs to happen: basically you keep an index counter, slice a rectangle from the bitmap, and draw that from a source rectangle to a destination rectangle on the canvas.

OpenGL ES 1.1, texture compression ETC1 and mipmapping (complete set of mipmaps error)

When I activate mipmapping on an uncompressed texture, everything works perfectly.
When I do it on an ETC1 texture, the texture is blank, most likely because the complete set of mipmaps was not given.
The code is very simple and works on iPhone (with PVR compression, of course).
It doesn't work on Android. The mipmaps were built with an external tool and pasted together.
I stop generating mipmaps at a size of 4, because glCompressedTexImage2D returns an OpenGL error if I try to use smaller levels.
for (u32 i = 0; i <= levels; i++)
{
    size = KC_TexByte(pagex, pagey, tex_type);
    glCompressedTexImage2D(GL_TEXTURE_2D, i, type, pagex, pagey, 0, size, ptr);
    pagex = MAX(pagex / 2, 4);
    pagey = MAX(pagey / 2, 4);
    ptr += size;
    KC_Error(); // test for an OpenGL error
}
The reason your texture is blank is that the mipmap chain is required to go all the way down to 1x1.
I would imagine that the error you're getting with small compressed textures is because the texture format you're attempting to use (ETC1?) doesn't support those sizes. You'd have to use non-compressed images at those small sizes...
Thanks, but your solution is not the right one; I found another solution.
You're right when you explain that the complete mipmap chain is required, down to 1x1.
You're wrong about mixing formats, though: the format can't differ between mipmap levels.
The right way is:
- provide the levels all the way down to 1x1
- keep in mind the data is compressed in blocks, so the size in bytes doesn't divide by 4 at each step; after the 8x8 level the size stays the same.
With sx and sy the level size in pixels, the byte size of an ETC1 level (8 bytes per 4x4 block, i.e. 4 bits per pixel) is:
bytes = ((sx + 3) / 4) * ((sy + 3) / 4) * 8;
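A quick sketch of the resulting level sizes (Java for illustration; android.opengl.ETC1.getEncodedDataSize() computes the same value):

// Sketch: byte sizes of the full ETC1 mip chain for a 32x32 texture.
// 512, 128, 32, then 8 bytes each for 4x4, 2x2 and 1x1.
int w = 32, h = 32;
for (int level = 0; ; level++) {
    int bytes = ((w + 3) / 4) * ((h + 3) / 4) * 8;
    Log.d("ETC1", "level " + level + ": " + w + "x" + h + " -> " + bytes + " bytes");
    if (w == 1 && h == 1) break;
    w = Math.max(w / 2, 1);
    h = Math.max(h / 2, 1);
}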
Regarding for (u32 i = 0; i <= levels; i++): seems you'd want i < levels instead of <=.
