Internal Texture Format - android

On Android, using OpenGL ES 2.0, I am trying to run performance tests with different internal texture formats.
Initially I have a lot of RGBA textures (PNG) which I want to load and store internally in a different format with OpenGL (for example RGB or LUMINANCE). I load my textures using glTexImage2D like this:
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),resourceId);
...
int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer b = ByteBuffer.allocate(size);
bitmap.copyPixelsToBuffer(b);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
bitmap.getHeight(), 0, GLES20.GL_RGBA,
GLES20.GL_UNSIGNED_BYTE, b);
This works fine; however, if I change the first GLES20.GL_RGBA (the internalFormat parameter) to anything else (GLES20.GL_RGB or GLES20.GL_LUMINANCE), my texture appears all black. Changing the second GLES20.GL_RGBA to the same value will display something, but obviously not correctly, since the original data is RGBA.
I thought maybe it had something to do with the shader code: that maybe texture2D(..) returns a different value because the internal format of the texture is different. My shader code is simply:
gl_FragColor = texture2D(texture, fragment_texture_coordinate);
I tried changing this around too, but no luck yet. So I thought maybe glTexImage2D is not working the way I think it does (I am not an expert on this area whatsoever).
What am I doing wrong?
Edit:
I overlooked this little detail in the texImage2D documentation. It appears that:
internalformat must match format. No conversion between formats is supported during texture image processing. type may be used as a hint to specify how much precision is desired, but a GL implementation may choose to store the texture array at any internal resolution it chooses.
What I gather from this, is that if you want to store your textures different from their original format you'll have to convert it yourself.
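For example, a sketch of such a CPU-side conversion from RGBA8888 to a single-channel GL_LUMINANCE buffer (the helper name is made up here, and the Rec. 601 weights are one common choice for the grayscale conversion):

```java
import java.nio.ByteBuffer;

public class TextureConvert {
    // Convert tightly packed RGBA8888 pixels to a single-channel
    // luminance buffer (Rec. 601 weights), suitable for uploading with
    // internalFormat == format == GL_LUMINANCE.
    static ByteBuffer rgbaToLuminance(ByteBuffer rgba, int width, int height) {
        ByteBuffer lum = ByteBuffer.allocate(width * height);
        rgba.rewind();
        for (int i = 0; i < width * height; i++) {
            int r = rgba.get() & 0xFF;
            int g = rgba.get() & 0xFF;
            int b = rgba.get() & 0xFF;
            rgba.get(); // alpha is discarded
            lum.put((byte) ((299 * r + 587 * g + 114 * b) / 1000));
        }
        lum.rewind();
        return lum;
    }
}
```

The resulting buffer can then be passed to glTexImage2D with both internalFormat and format set to GL_LUMINANCE (and the unpack alignment set appropriately, since rows are no longer 4-byte multiples).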

Your fragment shader must be written to agree with the format you are giving to glTexImage2D(). For GL_RGB, it should force the alpha to 1.0, like this:
vec3 Color_RGB = texture2D(sampler2d, texCoordinate).rgb; // texture2D returns a vec4; take .rgb
gl_FragColor = vec4(Color_RGB, 1.0);
But, for GL_RGBA, it should look like this:
vec4 Color_RGBA = texture2D(sampler2d, texCoordinate);
gl_FragColor = Color_RGBA;
And, as has been discussed, you can only use the Android Bitmap class for textures if your PNG files have no transparency. This article explains why:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1

Related

Rendering a float buffer on screen with OpenGL ES

I have a FloatBuffer as an output from the neural network, where the RGB channels are encoded with [-1 .. +1] values. I would like to render them on-screen, using GLSurfaceView. What is the best way to handle it?
I can dump the buffer into SSBO and write a compute shader, which maps it to ByteBuffer of [0 .. 255] range, then somehow bind it to regular texture. Or maybe I can set up my compute shader to output directly to some texture buffer? Or maybe I am supposed to read my SSBO directly from the fragment shader (and implement my own linear interpolation)?
So, which is the best way to render stuff via OpenGL ES? Please, help.
You can try loading it that way, but it depends on how many updates you need per second; you will have to test that on your machine.
First bind your texture (you must create one first); then, when your input buffer is ready, use:
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height, GLES20.GL_RGB, GLES20.GL_FLOAT, inputFloatBuffer);
It works well with a ByteBuffer; I did not try it with floats, but there is no signed-float texture format.
Use a kernel to convert the signed floats to bytes.
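The mapping itself is simple arithmetic; here is a sketch of the per-channel conversion as a plain Java method standing in for the kernel (the helper name is made up):

```java
public class FloatToByte {
    // Map a [-1, +1] float channel value to an unsigned byte in [0, 255].
    static int toUnsignedByte(float v) {
        // Clamp first so out-of-range network outputs don't wrap around.
        float clamped = Math.max(-1f, Math.min(1f, v));
        return Math.round((clamped + 1f) * 0.5f * 255f);
    }
}
```

The same expression, `(v + 1.0) * 0.5`, can also be done in the fragment shader if you upload the floats directly (e.g. via OES_texture_float), which avoids the CPU pass entirely.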

How to get 24bit color information by glReadPixels() on android?

To save an image processed by OpenGL ES, I wrote the following code, and it works well.
ByteBuffer bb = ByteBuffer.allocate(mWidth * mHeight * 4);
mGL.glReadPixels(0, 0, mWidth, mHeight, GL_RGBA, GL_UNSIGNED_BYTE, bb);
try {
    TJCompressor tjCompressor = new TJCompressor(bb.array(), 0, 0, mWidth, 0, mHeight, TJ.PF_RGB);
    tjCompressor.setJPEGQuality(100);
    tjCompressor.setSubsamp(TJ.SAMP_444);
    return tjCompressor.compress(0);
} catch (Exception e) {
    e.printStackTrace();
}
After that, to get 24-bit color information without the alpha channel (to save memory and processing time), I changed the first two lines of the code as follows.
ByteBuffer bb = ByteBuffer.allocate(mWidth * mHeight * 3);
mGL.glReadPixels(0, 0, mWidth, mHeight, GL_RGB, GL_UNSIGNED_BYTE, bb);
Additionally, I removed EGL_ALPHA_SIZE from the EGLConfig of mGL (a GL10 instance), and I passed GLES20.GL_RGB as the internal format parameter when calling GLUtils.texImage2D().
However, the result indicates something is wrong. The resulting image is entirely black, and when I checked the data in the bb buffer after the glReadPixels() call, all of it was zero. I need advice. Please help.
In core GLES2, the only valid format/type combos for glReadPixels are:
- GL_RGBA/GL_UNSIGNED_BYTE
- an optional implementation-specific format/type pair, queried via glGetIntegerv with GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE respectively
In GLES2 without extensions, if GL_IMPLEMENTATION_COLOR_READ_FORMAT/GL_IMPLEMENTATION_COLOR_READ_TYPE don't yield something useful, you're stuck with GL_RGBA/GL_UNSIGNED_BYTE, unfortunately.
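If you do end up reading GL_RGBA anyway, stripping the alpha on the CPU before JPEG compression is cheap; a sketch that repacks the pixels in place (the helper name is made up):

```java
public class PixelRepack {
    // Compact RGBA8888 data to RGB888 within the same array; returns the
    // number of valid bytes (pixelCount * 3) now at the front of the array.
    static int stripAlphaInPlace(byte[] data, int pixelCount) {
        int dst = 0;
        for (int src = 0; src < pixelCount * 4; src += 4) {
            data[dst++] = data[src];     // R
            data[dst++] = data[src + 1]; // G
            data[dst++] = data[src + 2]; // B
        }
        return dst;
    }
}
```

Since the write index always trails the read index, no temporary buffer is needed, and the front of the array can be handed to the compressor as RGB data.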
With GLES3, you can glReadPixels into a buffer bound to GL_PIXEL_PACK_BUFFER and then glMapBufferRange it, though again you're limited by format.
I'll note that drivers are prone to emulating tightly-packed rgb8 24-bit formats, instead implementing only the better aligned formats like rgba8, rgb565, and rgba4. A renderable format exposed as "rgb8" is potentially just rgbx8 behind the scenes.
This is highly driver-dependent, but if you don't care about keeping the contents of the framebuffer around, you might be able to win back some memory bandwidth with EXT_discard_framebuffer (or glInvalidateFramebuffer in GLES3).
You might also look into EGL_KHR_lock_surface3.

Handling large bitmaps on Android - int[] larger than max heap size

I'm using very large bitmaps and I store data in a big int[]. The images can be really large and I can't downsample them (I'm getting the bitmaps over the wire and rendering them).
The problem I'm hitting is on very large bitmaps (bitmap size = 64MB), where I try to allocate int array with size 16384000. I'm testing this on Samsung Galaxy SII, which should have enough memory to handle this, but it seems there is a "cap" on heap size. The method Runtime.getRuntime().maxMemory() returns 64MB, so this is the max heap size for this particular device.
The API level is set to 10, so I can't use android:largeHeap attribute suggested elsewhere (and I don't even know if that would help).
Is there any way to allocate more than 64MB? I tried allocating the array natively (using the JNI NewIntArray function), but that fails as well; it seems to be bound by the same limit as the JVM heap.
I could, however, allocate memory on the native side using NewDirectByteBuffer, but since this byte buffer is not backed by an array, I cannot access it as an int[] (via asIntBuffer().array() in Java), which I need in order to display the image using setPixels or createBitmap. I guess OpenGL would be a way to go, but I have (so far) zero experience with OpenGL.
Is there a way to somehow access allocated memory as int[] that I am missing?
So, the only way I've found so far is to allocate the image using the NDK, since Bitmap cannot use an existing Buffer as pixel storage (copyPixelsFromBuffer is bound by the same memory limits and, judging by the method name, it copies the data anyway).
The solution (I've only prototyped it roughly) is to malloc a buffer of whatever size the image is, fill it in C/C++, and then use a ByteBuffer in Java with OpenGL ES.
The current prototype creates a simple plane and applies this image as a texture to it (luckily, OpenGLES methods take Buffer as input, which seems to work as expected). I'm using glTexImage2D to apply this buffer as a texture to a plane. Here is a sample, where mImageData is ByteBuffer allocated (and filled) on the native side.
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureId = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureId);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, 4000, 4096, 0, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, mImageData);
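As an aside on the int[] access problem from the question: even though a direct buffer is not backed by a Java array, it can still be read and written as ints through an IntBuffer view without copying. A sketch (the helper name `intViewOf` is made up):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

public class DirectIntView {
    static IntBuffer intViewOf(int numInts) {
        // Direct buffers live in native memory, outside the Java heap,
        // so they are not counted against the dalvik heap limit.
        ByteBuffer bb = ByteBuffer.allocateDirect(numInts * 4)
                                  .order(ByteOrder.nativeOrder());
        // asIntBuffer() is a view, not a copy: writes through the view
        // land in the same native memory that OpenGL can consume.
        return bb.asIntBuffer();
    }
}
```

The view has no backing array (hasArray() is false, so array() throws), but get/put work normally, which is enough for filling pixel data that is then handed to glTexImage2D.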
I assume the OP has already solved this, but if you have a stream, you can write the bitmap to a file and then read that file back using inSampleSize.

How to load a mesh in Android OpenGL

How do I load a mesh (3ds Max) in Android OpenGL into a ByteBuffer?
example:
float triangleCoords[] = {
    -0.5f, -0.25f, 0,        // 0
     0.5f, -0.25f, 0,        // 1
     0.0f,  0.559016994f, 0  // 2
};
ByteBuffer vbb = ByteBuffer.allocateDirect(triangleCoords.length * _4_FLOAT_LENGTH);
vbb.order(ByteOrder.nativeOrder());
triangleVB = vbb.asFloatBuffer();
triangleVB.put(triangleCoords);
triangleVB.position(0);
How can I load a mesh into an array of coordinates like this? Or do I need a library for this job?
It basically boils down to loading the data from the file into your data structures (like the vertex array from your code snippet) and providing these to OpenGL for rendering (in your case glDrawArrays, for example). This is generally the same procedure, no matter if working with Android, Windows, Plan9, OpenGL or Direct3D. Only the implementation details matter, therefore this question is a bit broad.
You can look at this description of the 3DS file format (hinted in your question), which also has links to tutorials for loading these files. Although the 3DS format is a quite easy to read binary format, for a start you might also look into the Wavefront OBJ format, a simple ASCII file format that nearly every modeling software can export to (but this should also hold for 3DS). Loading these formats into a bunch of simple vertex arrays should boil down to some few lines of code.
These should get you started. If you encounter any specific problems implementing these mesh loading functionalities on your particular platform, feel free to ask a more specific question.
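To make the OBJ route concrete, here is a sketch that pulls just the vertex positions ("v" lines) out of OBJ text into a flat float array, ready to be wrapped in a FloatBuffer as in the question's snippet (the helper name is made up; faces, normals, and texture coordinates are ignored):

```java
import java.util.ArrayList;
import java.util.List;

public class ObjLoader {
    // Parse only "v x y z" lines from Wavefront OBJ text into a flat
    // float array, ready for a FloatBuffer and glDrawArrays.
    static float[] parseVertices(String objText) {
        List<Float> coords = new ArrayList<>();
        for (String line : objText.split("\n")) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length >= 4 && parts[0].equals("v")) {
                coords.add(Float.parseFloat(parts[1]));
                coords.add(Float.parseFloat(parts[2]));
                coords.add(Float.parseFloat(parts[3]));
            }
        }
        float[] out = new float[coords.size()];
        for (int i = 0; i < out.length; i++) out[i] = coords.get(i);
        return out;
    }
}
```

A full loader would also parse "f" lines into an index buffer for glDrawElements, but this shows how little code the basic case needs.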

Using mipmaps results in garbage textures

I'm creating my own mipmaps when creating textures, and as soon as I enable mipmaps via GL_LINEAR_MIPMAP_NEAREST (or anything MIPMAP), the textures are just a very dark, blurry mess. If I switch to GL_LINEAR, they're fine.
Here's how I'm creating the texture:
glGenTextures(1, &m_TextureId);
glBindTexture(GL_TEXTURE_2D, m_TextureId);

int level = 0;
jobject mippedBitmap = srcBitmap;
while (width >= 2 && height >= 2) {
    jniEnv->CallStaticVoidMethod(s_GLUtilsClass, s_texImage2DMethodID,
                                 GL_TEXTURE_2D, level, mippedBitmap, 0);
    width >>= 1;
    height >>= 1;
    level++;
    mippedBitmap = createScaledBitmap(jniEnv, srcBitmap, width, height, true);
}
I've omitted all Bitmap.recycle()/NewGlobalRef() calls for brevity. createScaledBitmap is obviously a JNI call to Bitmap.createScaledBitmap().
I also tried the other version of texImage2D that takes the format and type of the bitmap. I've verified that it's always RGBA.
EDIT: To elaborate on the textures - they're really almost black. I tried eraseColor() on the mips with bright colors, and they're still extremely dark.
glGenerateMipmap will do this task faster and much more conveniently for you.
The code you're showing will not generate the lowest mip levels (e.g., 1x1 will be missing), so your texture will be incomplete. That should make the rendering behave as if texturing were not present at all. It's unclear from your description whether this is what you're observing.
In all cases, you should provide mipmaps all the way down to 1x1 (and for non-square textures, this requires a change in the computation of the next texture size so that a dimension that reaches 1 stays at 1, as in 4x1 -> 2x1 -> 1x1).
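The size step can be sketched like this, clamping each dimension at 1 so the chain always reaches 1x1 (helper names are made up):

```java
public class MipChain {
    // Next mip level size: halve, but never drop below 1, so a 4x1
    // texture yields 4x1 -> 2x1 -> 1x1 and the chain is complete.
    static int[] nextLevel(int w, int h) {
        return new int[]{ Math.max(1, w / 2), Math.max(1, h / 2) };
    }

    // Number of mip levels for a w x h base texture, including level 0.
    static int levelCount(int w, int h) {
        int levels = 1;
        while (w > 1 || h > 1) {
            w = Math.max(1, w / 2);
            h = Math.max(1, h / 2);
            levels++;
        }
        return levels;
    }
}
```

With this stepping, the upload loop's condition becomes "while the current level is not 1x1" rather than "while both dimensions are >= 2".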
It may be that Bitmap.createScaledBitmap does not correct for gamma. Have a look at this thread for code that creates gamma-corrected mipmaps.
