My OpenGL ES scene requires several high-resolution textures, but they are grayscale, since I am using them only as masks. I need to reduce my memory use.
I have tried loading these textures with Bitmap.Config.ALPHA_8 and as RGB_565. ALPHA_8 actually seems to increase memory use.
Is there some way to get a texture loaded into OpenGL and have it use less than 16 bits per pixel?
glCompressedTexImage2D looks like it might be promising, but from what I can tell, different phones offer different texture compression methods. Also, I don't know if the compression actually reduces memory use at runtime. Is the solution to store my textures in both ATITC and PVRTC formats? If so, how do I detect which format is supported by the device?
Thanks!
GPU-native compressed texture formats such as PVRTC, ATITC, and S3TC should reduce memory usage and improve rendering performance.
For example (sorry, this is in C; you can implement the same thing in Java using GL11.glGetString):
const char *extensions = glGetString(GL_EXTENSIONS);
int isPVRTCsupported = strstr(extensions, "GL_IMG_texture_compression_pvrtc") != 0;
int isATITCsupported = strstr(extensions, "GL_ATI_texture_compression_atitc") != 0;
int isS3TCsupported = strstr(extensions, "GL_EXT_texture_compression_s3tc") != 0;
if (isPVRTCsupported) {
/* load PVRTC texture using glCompressedTexImage2D */
} else if (isATITCsupported) {
...
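The same check can be done from Java, for example with GLES10.glGetString. A minimal sketch (call it from the GL thread once a context is current; the helper name is just an example):

import android.opengl.GLES10;

// Returns true if the given extension name appears in the driver's extension string.
public static boolean isExtensionSupported(String name) {
    String extensions = GLES10.glGetString(GLES10.GL_EXTENSIONS);
    return extensions != null && extensions.contains(name);
}

// Usage:
// if (isExtensionSupported("GL_IMG_texture_compression_pvrtc")) {
//     // load the PVRTC variant with glCompressedTexImage2D
// } else if (isExtensionSupported("GL_ATI_texture_compression_atitc")) {
//     // load the ATITC variant
// }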
Besides that, you can restrict which devices can install your app based on texture format support, using <supports-gl-texture> in AndroidManifest.xml.
The AndroidManifest.xml File - supports-gl-texture
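For example, an app that ships ETC1 and PVRTC assets might declare the following in its manifest (a sketch; list only the formats you actually bundle):

<supports-gl-texture android:name="GL_OES_compressed_ETC1_RGB8_texture" />
<supports-gl-texture android:name="GL_IMG_texture_compression_pvrtc" />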
EDIT:
MOTODEV - Understanding Texture Compression
With Imagination Technologies-based (aka PowerVR) systems, you should be able to use PVRTC 4bpp and (depending on the texture and quality requirements) maybe even the 2bpp PVRTC variant.
Also, though I'm not sure what is exposed on Android systems, PVRTexTool lists I8 (i.e. greyscale 8bpp) as a target texture format, which would give you a lossless option.
ETC1 texture compression is supported on all Android devices with Android 2.2 and up.
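If ETC1 works for your case (it is 4 bpp and RGB only, so a grayscale mask would simply go into the RGB channels), the framework's ETC1Util can upload a pre-compressed .pkm file directly. A minimal sketch, assuming a texture is already bound; the asset name is just a placeholder:

import android.content.res.AssetManager;
import android.opengl.ETC1Util;
import android.opengl.GLES10;
import java.io.IOException;
import java.io.InputStream;

// Uploads an ETC1-compressed .pkm asset into the currently bound 2D texture.
// On devices without ETC1 support it falls back to an uncompressed RGB565 upload.
static void loadEtc1Mask(AssetManager assets) throws IOException {
    InputStream in = assets.open("mask.pkm"); // placeholder asset name
    try {
        ETC1Util.loadTexture(GLES10.GL_TEXTURE_2D, 0, 0,
                GLES10.GL_RGB, GLES10.GL_UNSIGNED_SHORT_5_6_5, in);
    } finally {
        in.close();
    }
}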
Related
I have a sequence of images/textures to display with three.js.
First I loaded a lot of raw RGB data (Uint8Array) and fed it into THREE.DataTexture. It was fine on PC and Android, but it seemed to hit the memory limit and caused a page reload in iOS Safari.
var texture = new THREE.DataTexture(data, w, h, THREE.RGBFormat);
What is the best way to keep a lot of textures or texture data in memory for three.js to use?
It must be compatible with PC/Android/iOS.
I changed all the data to compressed texture formats and used THREE.CompressedTexture (for example DXT1) to reduce memory usage. But DXT1 only works on PC.
// given texture_data, texture_width, texture_height
var texture = new THREE.CompressedTexture(null, texture_width, texture_height,
THREE.RGB_S3TC_DXT1_Format, THREE.UnsignedByteType, THREE.UVMapping,
THREE.ClampToEdgeWrapping, THREE.ClampToEdgeWrapping,
THREE.LinearFilter, THREE.LinearFilter);
var material = new THREE.MeshBasicMaterial({ map: texture });
// create mipmaps with data
var mipmap = {
"data": texture_data,
"width": texture_width,
"height": texture_height
};
var mipmaps = [];
mipmaps.push(mipmap);
// set data to texture and update
texture.mipmaps = mipmaps;
texture.needsUpdate = true;
material.needsUpdate = true;
Moving forward, I found that the compatibility of compressed texture formats across platforms was an issue. Each format requires a specific WebGL extension; for example, DXT1 requires the S3TC extension.
if (renderer.extensions.get('WEBGL_compressed_texture_s3tc'))
alert('WEBGL_compressed_texture_s3tc is supported');
else
alert('WEBGL_compressed_texture_s3tc is NOT supported');
There was no single format that worked on all three platforms (PC/iOS/Android). Details below:
ASTC/ETC/ETC1: only works on Android
ATC: doesn't work on any platform
PVRTC: only works on iOS
S3TC/S3TC_SRGB: only works on PC
Another approach was to encode all the textures to MP4, but decoding them back to raw textures was not easy. I tried Broadway; it works, but the CPU load was heavy.
In the end, I encoded all the data to the Basis format and used BasisTextureLoader to load it in the browser. The compression ratio was similar to H.264 baseline, far better than DXT1.
BasisTextureLoader detects compatibility at runtime and transcodes to a CompressedTexture format that three.js can use.
Hope it helps.
Can someone explain to me why this line:
GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_R16F, width, height, 0, GLES30.GL_RED, GLES30.GL_HALF_FLOAT, myBuffer);
works on Tegra 4 but doesn't work on an ARM Mali-T628 MP6?
I am not attaching this to a framebuffer, by the way; I am using it as a read-only texture. The error code returned on the Mali is 1280, whereas the Tegra 'doesn't complain' at all.
Also, I know that Tegra 4 has an extension for half-float textures and that this specific Mali doesn't, but since it's OpenGL ES 3.0, shouldn't it support such textures anyway?
That call looks completely valid to me. Error 1280 is GL_INVALID_ENUM, which suggests that one of the 3 enum type arguments is invalid. But each one by itself, as well as the combination of them, is spec compliant.
The most likely explanation is a driver bug. I found that several ES 3.0 drivers have numerous issues, so it's not a big surprise to discover problems.
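If you want to confirm that it is really this call raising the error, one option is to drain stale errors first and check immediately after the upload. A minimal sketch, assuming an ES 3.0 context on the GL thread:

import android.opengl.GLES30;
import android.util.Log;

// Upload the R16F texture and report the error code produced by this call only.
static void uploadR16F(int width, int height, java.nio.Buffer myBuffer) {
    while (GLES30.glGetError() != GLES30.GL_NO_ERROR) {
        // drain errors left over from earlier calls
    }
    GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_R16F, width, height, 0,
            GLES30.GL_RED, GLES30.GL_HALF_FLOAT, myBuffer);
    int err = GLES30.glGetError();
    if (err != GLES30.GL_NO_ERROR) {
        Log.w("GL", "glTexImage2D(R16F) failed: 0x" + Integer.toHexString(err));
    }
}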
The section below was written under the assumption that the texture would be used as a render target (FBO attachment). Please ignore if you are looking for a direct answer to the question.
GL_R16F is not color-renderable in standard ES 3.0.
If you pull up the spec document, which can be found on www.khronos.org, table 3.13 on pages 130-132 lists all texture formats and their properties. R16F does not have a checkmark in the "Color-renderable" column, which means it cannot be used as a render target.
Correspondingly, R16F is also listed under "Texture-only color formats" in section "Required Texture Formats" on pages 129-130.
This means that the device needs the EXT_color_buffer_half_float extension to support rendering to R16F. This is still the case in ES 3.1 as well.
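If you do end up needing R16F as a render target, a runtime check along these lines would work (the fallback to GL_R8, which is color-renderable in core ES 3.0, is just an example choice):

import android.opengl.GLES30;

// Pick an internal format for an FBO color attachment based on what the driver exposes.
static int chooseRedInternalFormat() {
    String ext = GLES30.glGetString(GLES30.GL_EXTENSIONS);
    boolean halfFloatRenderable =
            ext != null && ext.contains("GL_EXT_color_buffer_half_float");
    return halfFloatRenderable ? GLES30.GL_R16F : GLES30.GL_R8;
}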
I'm currently working on a project where I have to use RenderScript, so I started learning about it. It's a great technology because, much like OpenGL, it lets you run computational code at the native level, bypassing the Dalvik VM, so that part of the code is processed much faster than regular Android code would be.
I started working with image processing, and what I was wondering is:
Is it possible to resize a bitmap using RenderScript? This should be much faster than resizing a bitmap using Android code. Plus, RenderScript can process data larger than 48 MB (the per-process limit on some phones).
While you could use RenderScript to do the bitmap resize, I'm not sure it's the best choice. A quick look at the Android code base shows that the Java API already drops into native code to do a bitmap resize, although if that resize algorithm doesn't fit your needs, you'll have to implement your own.
There are a number of answers on SO for scaling a bitmap efficiently. My recommendation is to try those first, and if they still aren't doing what you want, either in speed or in visual quality, then look into writing your own. If you do write your own, use the available performance tools to check whether you really are faster or just reinventing the wheel.
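As a baseline to measure against, the framework scaler (itself backed by native code) is a one-liner. A sketch, with the target size left as parameters:

import android.graphics.Bitmap;

// Scale with the platform's native resizer; filter=true enables bilinear filtering.
static Bitmap scaleBaseline(Bitmap src, int dstWidth, int dstHeight) {
    return Bitmap.createScaledBitmap(src, dstWidth, dstHeight, true);
}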
You can use the below function to resize the image.
private Bitmap resize(Bitmap inBmp) {
    RenderScript mRs = RenderScript.create(getApplication());
    // Keep the aspect ratio: scale the height by the same factor as the width.
    Bitmap outBmp = Bitmap.createBitmap(OUTPUT_IMAGE_WIDTH,
            inBmp.getHeight() * OUTPUT_IMAGE_WIDTH / inBmp.getWidth(), inBmp.getConfig());
    ScriptIntrinsicResize siResize = ScriptIntrinsicResize.create(mRs);
    Allocation inAlloc = Allocation.createFromBitmap(mRs, inBmp);
    Allocation outAlloc = Allocation.createFromBitmap(mRs, outBmp);
    siResize.setInput(inAlloc);
    siResize.forEach_bicubic(outAlloc); // bicubic resample into the output allocation
    outAlloc.copyTo(outBmp);
    inAlloc.destroy();
    outAlloc.destroy();
    siResize.destroy();
    return outBmp;
}
OUTPUT_IMAGE_WIDTH is an integer constant specifying the width of the output image.
NOTE: be careful with RenderScript Allocations; if you don't destroy them, they can lead to memory leaks.
In order to minimize the memory usage of bitmaps, yet still try to maximize the quality of them, I would like to ask a simple question:
Is there a way for me to check if a given image file (.png file) has transparency using the API, without checking every pixel in it?
If the image doesn't have any transparency, it would be the best to use a different bitmap format that uses only the RGB values.
The problem is that Android also doesn't have a format for just the 3 color channels, only RGB_565, which they say degrades image quality and should have dithering enabled.
Is there also a way to read only the RGB values and be able to show them?
For me, bitmap.hasAlpha() works fine as a first check for whether the bitmap has alpha values. After that, I would suggest running through the pixels and creating a second bitmap with no alpha.
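For the opaque case you don't have to copy pixel by pixel; redrawing into an RGB_565 bitmap does it in one pass. A sketch (since Android has no 24-bit config, 565 is the smallest opaque option):

import android.graphics.Bitmap;
import android.graphics.Canvas;

// Re-render an opaque image into a 16-bit bitmap to halve its memory footprint.
static Bitmap toOpaque565(Bitmap src) {
    Bitmap dst = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.RGB_565);
    new Canvas(dst).drawBitmap(src, 0, 0, null);
    return dst;
}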
Let's start a bit off-topic:
the problem is that Android also doesn't have a format for just the 3 color channels, only RGB_565, which they say degrades image quality and should have dithering enabled.
The reason for that problem is not really Android-specific. It's about performance while drawing images: you get the best performance if the pixel data fits exactly into one 32-bit memory cell.
So the most obvious good pixel format is ARGB_8888, which uses exactly 32 bits (24 for the color, 8 for alpha). While drawing, you don't need to do anything but loop over the image data, and each cell you read can be drawn directly. The only downside is the memory required to work with such images, both while they just sit in memory and while displaying them, since the graphics hardware has to transfer more data.
The second-best option is a format where several pixels fit into one cell. With 2 pixels per 32 bits you have 16 bits per pixel, and one of the 16-bit formats is 565: 5 bits red, 6 bits green, 5 bits blue. While drawing you can still work on memory cells separately; all you have to do is split one cell into parts. Because of the smaller memory footprint, drawing can sometimes even be faster than with 32-bit colors. Since memory was a much bigger problem in the early days of Android, they chose this format as the default.
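To make the 565 packing concrete, here is how one 8-bit-per-channel color maps into a single 16-bit value (a small illustrative sketch):

// Keep the top 5/6/5 bits of red/green/blue and pack them into 16 bits.
static short packRgb565(int r, int g, int b) {
    return (short) (((r & 0xF8) << 8) | ((g & 0xFC) << 3) | ((b & 0xFF) >> 3));
}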
The worst category of formats is those where pixels don't fit into those cells. Taking just the 3 colors gives 24 bits, which have to be spread over 2 cells in 3 out of 4 cases; for example, the second pixel would use the remaining 8 bits of the first cell and the first 16 bits of the next cell. The extra work required for 24-bit colors is so large that the format isn't used. And when drawing images you usually have alpha at some point anyway, and if not, you simply use 32 bits but ignore the alpha bits.
So the 16-bit approach looks ugly and the 24-bit approach doesn't make sense. Since Android's memory limitations are no longer as tight as they were and the hardware got faster, Android has switched its default to 32 bits (explained in even more detail at http://www.curious-creature.org/2010/12/08/bitmap-quality-banding-and-dithering/).
Back to your real question:
is there a way for me to check if a given image file (.png file) has transparency using the API, without checking every pixel in it?
I don't know. But JPEG images don't support alpha and PNG images usually do, so you could simply go by the file extension and be right in most cases.
But I would suggest you don't bother with all that and simply use ARGB_8888 and apply the nice image loading techniques detailed in the Android Training documentation about Displaying Bitmaps Efficiently.
The reason people run into memory problems is usually either that they keep far more images in memory than they currently display, or that they use giant images that can't be displayed on the small screen of a phone anyway. And in my opinion it makes more sense to add good memory management than to complicate your code just to downgrade the image quality.
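The core of those techniques is choosing an inSampleSize before decoding; a condensed sketch of the pattern from that guide:

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

// Read only the image bounds first, then decode subsampled to roughly the requested size.
static Bitmap decodeSampled(Resources res, int resId, int reqWidth, int reqHeight) {
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inJustDecodeBounds = true;
    BitmapFactory.decodeResource(res, resId, opts);

    int sample = 1;
    while (opts.outWidth / (sample * 2) >= reqWidth
            && opts.outHeight / (sample * 2) >= reqHeight) {
        sample *= 2;
    }

    opts.inSampleSize = sample;
    opts.inJustDecodeBounds = false;
    return BitmapFactory.decodeResource(res, resId, opts);
}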
There is a way to check if a PNG file has transparency, or at least if it supports it:
public final static int COLOR_GREY = 0;
public final static int COLOR_TRUE = 2;
public final static int COLOR_INDEX = 3;
public final static int COLOR_GREY_ALPHA = 4;
public final static int COLOR_TRUE_ALPHA = 6;
private final static int DECODE_BUFFER_SIZE = 16 * 1024;
private final static int HEADER_DECODE_BUFFER_SIZE = 1024;
/** Given an InputStream of a PNG file, returns true iff its header indicates it supports transparency. */
private static boolean isPngInputStreamContainTransparency(final InputStream pngInputStream) {
    try {
        // skip: PNG signature, header chunk declaration, width, height, bit depth:
        pngInputStream.skip(12 + 4 + 4 + 4 + 1);
        final byte colorType = (byte) pngInputStream.read();
        switch (colorType) {
            case COLOR_GREY_ALPHA:
            case COLOR_TRUE_ALPHA:
                return true;
            case COLOR_INDEX:
            case COLOR_GREY:
            case COLOR_TRUE:
                return false;
        }
        return true;
    } catch (final Exception e) {
    }
    return false;
}
Other than that, I don't know if such a thing is possible.
I've found the following links, which could be helpful for checking whether a PNG file has transparency. Sadly, it's a solution only for PNG files; other formats (like WebP, BMP, ...) need a different parser.
links:
http://www.java2s.com/Code/Java/2D-Graphics-GUI/PNGDecoder.htm
http://hg.l33tlabs.org/twl/file/tip/src/de/matthiasmann/twl/utils/PNGDecoder.java
http://www.java-gaming.org/index.php/topic,24202
I have played for a while with OpenGL on Android on various devices. And unless I'm wrong, the default rendering is always performed with the RGB565 pixel format.
I would however like to render more accurate colors using RGB888.
The GLSurfaceView documentation mentions two methods which relate to pixel formats:
the setFormat() method exposed by SurfaceHolder, as returned by SurfaceView.getHolder()
the GLSurfaceView.setEGLConfigChooser() family of methods
Unless I'm wrong, I think I only need to use the latter. Or is using SurfaceHolder.setFormat() relevant here?
The documentation of the EGLConfigChooser class mentions EGL10.eglChooseConfig(), to discover which configurations are available.
In my case it is ok to fallback to RGB565 if RGB888 isn't available, but I would prefer this to be quite rare.
So, is it possible to use RGB888 on most devices?
Are there any compatibility problems or weird bugs with this?
Do you have an example of a correct and reliable way to setup the GLSurfaceView for rendering RGB888?
On newer devices, most should support RGBA8888 as a native format. One way to force an RGBA color format is to set the translucency of the surface; you'd still want to pick the EGLConfig to best match the channel configuration, in addition to the depth and stencil buffers.
setEGLConfigChooser(8, 8, 8, 8, 0, 0);
getHolder().setFormat(PixelFormat.RGBA_8888);
However, if I read your question correctly, you're asking for RGB888 support (alpha don't care), in other words RGBX8888, which might not be supported by all devices (a driver/vendor limitation).
Something to keep in mind about performance: since RGBA8888 is the color format natively supported by most GPU hardware, it's best to avoid any other (non-natively supported) color format, since that usually translates into a color conversion underneath, adding unnecessary workload to the GPU.
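Putting the two calls together, a minimal sketch of a GLSurfaceView subclass set up for a 32-bit surface (the renderer class name is only a placeholder):

import android.content.Context;
import android.graphics.PixelFormat;
import android.opengl.GLSurfaceView;

public class Rgba8888SurfaceView extends GLSurfaceView {
    public Rgba8888SurfaceView(Context context) {
        super(context);
        // Request 8 bits per color channel, no depth or stencil buffer.
        setEGLConfigChooser(8, 8, 8, 8, 0, 0);
        // Make the window surface 32-bit as well, instead of the RGB565 default.
        getHolder().setFormat(PixelFormat.RGBA_8888);
        setRenderer(new MyRenderer()); // placeholder Renderer implementation
    }
}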
This is how I do it:
{
window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// cocos2d will inherit these values
[window setUserInteractionEnabled:YES];
[window setMultipleTouchEnabled:NO];
// must be called before any other call to the director
[Director useFastDirector];
[[Director sharedDirector] setDisplayFPS:YES];
// create an openGL view inside a window
[[Director sharedDirector] attachInView:window];
// Default texture format for PNG/BMP/TIFF/JPEG/GIF images
// It can be RGBA8888, RGBA4444, RGB5_A1, RGB565
// You can change it at any time.
[Texture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888]; // pick the format you need from the list above
glClearColor(0.7f,0.7f,0.6f,1.0f);
//glClearColor(1.0f,1.0f,1.0f,1.0f);
[window makeKeyAndVisible];
[[Director sharedDirector] runWithScene:[GameLayer node]];
}
I hope that helps!