glReadPixels + FBO is not working - Android

On Android, OpenGL ES 2.0. I'm reading from the framebuffer by calling glReadPixels with an FBO bound. However, the byte array comes back all zeros. Interestingly enough, when I remove the FBO binding code (but leave the glReadPixels) it works.
That made me wonder whether I bound the buffer incorrectly, although when I check the framebuffer status with glCheckFramebufferStatus I get GLES20.GL_FRAMEBUFFER_COMPLETE.
Any idea what I've done wrong?
int frameIdIndex=0,renderIdIndex=1,textureIdIndex=2;
int[] bufferId=new int[3];
Bitmap takeOne(Context cntxt) {
DisplayMetrics dm = cntxt.getResources().getDisplayMetrics();
int width = dm.widthPixels;
int height = dm.heightPixels;
//id index 0 frameId, 1 renderId 2 textureId;
GLES20.glGenFramebuffers(1,bufferId,frameIdIndex);
GLES20.glGenRenderbuffers(1, bufferId, renderIdIndex);
GLES20.glGenTextures(1, bufferId, textureIdIndex);
// bind texture and load the texture mip-level 0
// texels are RGB565
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D,bufferId[textureIdIndex]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D,0,GLES20.GL_RGBA,width,height,0,GLES20.GL_RGBA,GLES20.GL_UNSIGNED_SHORT_5_6_5,null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
// bind renderbuffer and create a 16-bit depth buffer
// width and height of renderbuffer = width and height of
// the texture
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, bufferId[renderIdIndex]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER,GLES20.GL_DEPTH_COMPONENT16,width,height);
//bind the frameBuffer;
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER,bufferId[frameIdIndex]);
//specify texture as color attachment
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER,GLES20.GL_COLOR_ATTACHMENT0,GLES20.GL_TEXTURE_2D,bufferId[textureIdIndex],0);
//specify renderbuffer as depth_attachment
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER,GLES20.GL_DEPTH_ATTACHMENT,GLES20.GL_RENDERBUFFER,bufferId[renderIdIndex]);
//check for framebuffer complete
int status= GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
if(status !=GLES20.GL_FRAMEBUFFER_COMPLETE) {
throw new RuntimeException("status:"+status+", hex:"+Integer.toHexString(status));
}
int screenshotSize = width * height;
ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
bb.order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA,
GL10.GL_UNSIGNED_BYTE, bb);
int pixelsBuffer[] = new int[screenshotSize];
bb.asIntBuffer().get(pixelsBuffer);
final Bitmap bitmap = Bitmap.createBitmap(width, height,
Bitmap.Config.RGB_565);
bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width,
0, 0, width, height);
pixelsBuffer = null;
short sBuffer[] = new short[screenshotSize];
ShortBuffer sb = ShortBuffer.wrap(sBuffer);
bitmap.copyPixelsToBuffer(sb);
// Making created bitmap (from OpenGL points) compatible with
// Android
// bitmap
for (int i = 0; i < screenshotSize; ++i) {
short v = sBuffer[i];
sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
}
sb.rewind();
bitmap.copyPixelsFromBuffer(sb);
// cleanup
GLES20.glDeleteRenderbuffers(1, bufferId,renderIdIndex);
GLES20.glDeleteFramebuffers(1, bufferId ,frameIdIndex);
GLES20.glDeleteTextures(1, bufferId,textureIdIndex);
return bitmap;
}

Your formats and types are somewhat mixed up. This glTexImage2D() call should already give you a GL_INVALID_OPERATION error if you check with glGetError():
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0,
GLES20.GL_RGBA, width, height, 0,
GLES20.GL_RGBA, GLES20.GL_UNSIGNED_SHORT_5_6_5, null);
GL_UNSIGNED_SHORT_5_6_5 can only be used with a format of GL_RGB. From the documentation:
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_5_6_5 and format is not GL_RGB.
To avoid this error condition, the call needs to be:
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0,
GLES20.GL_RGB, width, height, 0,
GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, null);
The glReadPixels() call itself looks fine to me, so I believe it should work once you have a valid texture to render to.
The bitmap.setPixels() call might be problematic. The documentation says that it expects ARGB colors, while you will have RGBA here. But that's beyond the main scope of your question.
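If you want to keep the setPixels() route, one way to make it consistent is sketched below. It assumes the width, height and bb from the code above and switches the bitmap to ARGB_8888 so that no RGB565 repacking is needed; the RGBA bytes are converted to ARGB ints first:
bb.rewind();
int[] argb = new int[width * height];
for (int i = 0; i < argb.length; i++) {
    int r = bb.get() & 0xff;
    int g = bb.get() & 0xff;
    int b = bb.get() & 0xff;
    int a = bb.get() & 0xff;
    argb[i] = (a << 24) | (r << 16) | (g << 8) | b;   // pack as ARGB
}
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
// the negative stride, as in the question, flips the image vertically while copying
bitmap.setPixels(argb, width * (height - 1), -width, 0, 0, width, height);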

Related

OpenGL ES texture wrap giving weird results sometimes

I am using the following method to create an OpenGL ES texture on Android.
private int createTexture(int width, int height, int i) {
int[] mTextureHandles = new int[1];
GLES20.glGenTextures(1, mTextureHandles, 0);
int textureID = mTextureHandles[0];
GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + i);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureID);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
return textureID;
}
Then I upload a bitmap to this texture and simply render it using GLSurfaceView's Renderer.
Most of the time it works as expected, but randomly the texture is displayed like this (here the GL_TEXTURE_WRAP_S mode is GLES20.GL_CLAMP_TO_EDGE).
After changing the wrap mode:
If GL_TEXTURE_WRAP_S = GLES20.GL_REPEAT, then the texture is (again randomly) displayed like this (notice the color change).
I have already tried using power-of-two textures.
Code for vertex buffer
private FloatBuffer createVertexBuffer(RectF glCoords) {
// RectF glCoords contains gl vertices.
// current Rect read from Logcat.
// left = -0.5833333, top = 0.5, right = 0.5833334, bottom = -0.5
float[] vertices = new float[]{glCoords.left, glCoords.top, 0, 1, // V1 - top left
glCoords.left, glCoords.bottom, 0, 1, // V2 - bottom left
glCoords.right, glCoords.bottom, 0, 1, // V3 - bottom right
glCoords.right, glCoords.top, 0, 1 // V4 - top right
};
FloatBuffer vBuffer = ByteBuffer.allocateDirect(vertices.length * bytesPerFloat)
.order(ByteOrder.nativeOrder()).asFloatBuffer();
vBuffer.put(vertices);
vBuffer.position(0);
return vBuffer;
}
An index buffer is not used, as I am using a triangle fan to render the triangles.
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 6);
It would be really helpful if someone could point out what could be causing this.
Your geometry seems to be a 4-vertex rectangle (quad), but the code wants to draw a fan with 6 vertices: GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 6);
Those extra 2 vertices aren't initialized and aren't specified in the vertex array, so OpenGL will read beyond the defined part of the array. The results will be random, and I'd even expect crashes.
Replace 6 with 4 in the glDrawArrays() call; this should fix the problem.
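For reference, with the 4-component vertices above the corrected call is simply:
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_FAN, 0, 4);
With GL_TRIANGLE_FAN the first vertex is shared by every triangle, so 4 vertices produce exactly the two triangles of the quad.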

Can't enable depth test in Android OpenGL

I am new to OpenGL and work on Android. I found a good code example on the web with the possibility to record everything. But I found one big problem: the depth test doesn't work, so in my models all faces that are opposite to the camera are lit even if they are behind other faces.
I think the mistake could be in this part of the code:
/**
* Prepares the off-screen framebuffer.
*/
private void prepareFramebuffer(int width, int height) {
GlUtil.checkGlError("prepareFramebuffer start");
int[] values = new int[1];
// Create a texture object and bind it. This will be the color buffer.
GLES20.glGenTextures(1, values, 0);
GlUtil.checkGlError("glGenTextures");
mOffscreenTexture = values[0]; // expected > 0
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mOffscreenTexture);
GlUtil.checkGlError("glBindTexture " + mOffscreenTexture);
// Create texture storage.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
// Set parameters. We're probably using non-power-of-two dimensions, so
// some values may not be available for use.
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
GLES20.GL_CLAMP_TO_EDGE);
GlUtil.checkGlError("glTexParameter");
// Create framebuffer object and bind it.
GLES20.glGenFramebuffers(1, values, 0);
GlUtil.checkGlError("glGenFramebuffers");
mFramebuffer = values[0]; // expected > 0
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFramebuffer);
GlUtil.checkGlError("glBindFramebuffer " + mFramebuffer);
// Create a depth buffer and bind it.
GLES20.glGenRenderbuffers(1, values, 0);
GlUtil.checkGlError("glGenRenderbuffers");
mDepthBuffer = values[0]; // expected > 0
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, mDepthBuffer);
GlUtil.checkGlError("glBindRenderbuffer " + mDepthBuffer);
// Allocate storage for the depth buffer.
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_DEPTH_COMPONENT16,
width, height);
GlUtil.checkGlError("glRenderbufferStorage");
// Attach the depth buffer and the texture (color buffer) to the framebuffer object.
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
GLES20.GL_RENDERBUFFER, mDepthBuffer);
GlUtil.checkGlError("glFramebufferRenderbuffer");
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
GLES20.GL_TEXTURE_2D, mOffscreenTexture, 0);
GlUtil.checkGlError("glFramebufferTexture2D");
// See if GLES is happy with all this.
int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
if (status != GLES20.GL_FRAMEBUFFER_COMPLETE) {
// throw new RuntimeException("Framebuffer not complete, status=" + status);
}
// Switch back to the default framebuffer.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GlUtil.checkGlError("prepareFramebuffer done");
}
Another important thing that I have already done:
/**
* Prepares window surface and GL state.
*/
private void prepareGl(Surface surface) {
Log.d(TAG, "prepareGl");
EglCore.logCurrent("before creating");
GlUtil.checkGlError("debug");
if (mWindowSurface == null)
mWindowSurface = new WindowSurface(mEglCore, surface, false);
mWindowSurface.makeCurrent();
EglCore.logCurrent("after creation");
glClearColor(0f, 0f, 0f, 1f); //glClearColor(0.15f, 0.15f, 0.15f, 1f);
GlUtil.checkGlError("debug");
glEnable(GL_DEPTH_TEST);
GlUtil.checkGlError("debug");
glEnable(GL_CULL_FACE);
GlUtil.checkGlError("debug");
glCullFace(GL_BACK);
GlUtil.checkGlError("debug");
glShadeModel(GL_SMOOTH);
GlUtil.checkGlError("debug");
glEnable(GL_BLEND);
GlUtil.checkGlError("debug");
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
What am I doing wrong? Where could the mistake be?
Adding the draw() code:
private void draw() {
GlUtil.checkGlError("draw start");
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDepthFunc(GL_LESS);
if (leftEye != null) {
resetMatrices();
setModelViewMatrix();
translateMEyes();
eyeTextureProgram.useProgram();
eyeTextureProgram.setViewAndTexture(modelViewProjectionMatrix, textureEye);
setProgramParams(eyeTextureProgram, textureEye);
leftEye.bindData(eyeTextureProgram);
leftEye.draw();
}
if (rightEye != null) {
rightEye.bindData(eyeTextureProgram);
rightEye.draw();
}
glDepthFunc(GL_LEQUAL);
resetMatrices();
setModelViewMatrix();
setProgramParams(faceTextureProgram, textureFace);
headBase.bindData(faceTextureProgram);
headBase.draw();
GlUtil.checkGlError("draw done");
}
Set the clear value of the depth buffer to 1.0. The clear value of the depth buffer should be the maximum value of the depth range, which by default is 0.0 - 1.0. In ES 2.0 that is the float variant:
glClearDepthf(1.0f)
Also, ensure that you clear both the depth and the color buffer before drawing:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
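A minimal sketch of how the start of a frame could look with both pieces in place:
GLES20.glClearDepthf(1.0f);   // depth clear value = far end of the default 0.0 - 1.0 range
GLES20.glClearColor(0f, 0f, 0f, 1f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LESS);
Also make sure the surface you render to actually has a depth buffer: for the off-screen FBO that is the GL_DEPTH_ATTACHMENT renderbuffer created in prepareFramebuffer(), and for the window surface it depends on the EGL config requesting depth bits.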

Loading compressed textures in Android OpenglES 2.0

I'm trying to optimize my app by having compressed textures. Since a lot of my textures require alpha I can't use ETC1.
I've successfully compressed textures to other formats using hints from this post: Android OpenGL Texture Compression.
The problem is that I can't seem to adapt my code to read these textures. The only thing I get with this code is black (a solid colour instead of the texture).
Here is the method that loads the textures:
public static int loadCompressedTexture(Context context, int resourceId){
final int[] textureObjectIds = new int[1];
glGenTextures(1, textureObjectIds, 0);
if(textureObjectIds[0] == 0){
logTextureHelper(Log.WARN, "Could not generate a new OpenGL texture object");
return 0;
}
final InputStream bitmap = context.getResources().openRawResource(resourceId);
byte[] buffer;
ByteBuffer bf;
try {
buffer = new byte[bitmap.available()];
bitmap.read(buffer);
bf = ByteBuffer.wrap(buffer);
glBindTexture(GL_TEXTURE_2D, textureObjectIds[0]);//binds texture to texture object
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);//minification filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//texImage2D(GL_TEXTURE_2D, 0, bitmap, 0);//send texture data to OpenGL to the CURRENTLY BOUND object
GLES20.glCompressedTexImage2D(GLES20.GL_TEXTURE_2D, 0, GL10.GL_PALETTE4_RGBA8_OES, 512, 512, 0, bf.capacity(), bf);
//glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0); //unbind texture
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return textureObjectIds[0];
}
I am currently trying to load a resource that is 512x512 px, compressed with PVRTexTool using PVRTC 4bpp RGBA encoding.
Can anyone see what the problem is? Or, better yet, point me to an example that works? From what I found, there are only examples for ETC1, which use ETC1Util to load the textures.
EDIT2: OK, I've solved it.
There were two problems. First, you need to use these texture filters:
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
Secondly, I was not aware that the PVR file has a separate header. The correct offset is thus 67 bytes.
PVRTC format specification
This code now works correctly:
buffer = new byte[bitmap.available()];
bitmap.read(buffer);
int offset = 67; // 52 bytes = header, 15 bytes = metadata
bf = ByteBuffer.wrap(buffer, offset, buffer.length-offset);
bf.order(ByteOrder.LITTLE_ENDIAN);
long version = bf.getInt(0) & 0xFFFFFFFFL;
long flags = bf.getInt(4);
long pixelFormat = bf.getLong(8);
long colorF = bf.getInt(16);
long chanel = bf.getInt(20);
int height = bf.getInt(24);
int width = bf.getInt(28);
long depth = bf.getInt(32);
long nsurf = bf.getInt(36);
long nface = bf.getInt(40);
long mipC = bf.getInt(44);
long mSize = bf.getInt(48);
long fourCC = bf.getInt(52)& 0xFFFFFFFFL;
long key = bf.getInt(56)& 0xFFFFFFFFL;
long dataSize = bf.getInt(60)& 0xFFFFFFFFL;
// Log.d("TextureHelper","Buffer size: "+version+" "+flags+" "+pixelFormat+" "+colorF+" "+chanel+" "+height+" w: "+width+" h: "+height+" "+depth+" "+nsurf+" "+nface+" "+mipC+" "+mSize);
// Log.d("TextureHelper","Buffer size: "+fourCC+" "+key+" "+dataSize);
glBindTexture(GL_TEXTURE_2D, textureObjectIds[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glCompressedTexImage2D(GL_TEXTURE_2D, 0,GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG, width, height, 0, bf.capacity()-offset, bf);
Log.d("TextureHelper","Buffer size: "+bf.capacity()+" : "+buffer.length+" error:"+GLES20.glGetError());
glBindTexture(GL_TEXTURE_2D, 0); //unbind texture
You want to load a PVRTC 4bpp RGBA texture via glCompressedTexImage2D? Then you should use GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG instead of GL10.GL_PALETTE4_RGBA8_OES.
IMG_texture_compression_pvrtc
https://www.khronos.org/registry/gles/extensions/IMG/IMG_texture_compression_pvrtc.txt
I'm not sure, but it seems Android's GLES20 class doesn't have those constants, so you need to define them yourself:
// PowerVR Texture compression constants
public static final int GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG = 0x8C00;
public static final int GL_COMPRESSED_RGB_PVRTC_2BPPV1_IMG = 0x8C01;
public static final int GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG = 0x8C02;
public static final int GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG = 0x8C03;
And you should check glGetError() after calling glCompressedTexImage2D(). The value returned by glGetError() should be GL_NO_ERROR.
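A sketch of both checks, reusing the constant defined above and the width, height, offset and bf variables from the question's code (only the extension/error handling is new here):
String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
if (extensions == null || !extensions.contains("GL_IMG_texture_compression_pvrtc")) {
    Log.w("TextureHelper", "PVRTC not supported on this GPU, fall back to another format");
}
GLES20.glCompressedTexImage2D(GLES20.GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
        width, height, 0, bf.capacity() - offset, bf);
int error = GLES20.glGetError();
if (error != GLES20.GL_NO_ERROR) {
    Log.e("TextureHelper", "glCompressedTexImage2D failed: 0x" + Integer.toHexString(error));
}
PVRTC is a PowerVR-specific extension, so the extension check matters if the app is also expected to run on non-PowerVR GPUs.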

Problems reading ATI compressed textures

I'm trying to use compressed textures in my Android application. I have a problem loading the textures: they appear to be cut off on the right side of the object.
For compressing textures I'm using ATI's "TheCompressonator".
For testing I'm using a Nexus 5.
I suspect the problem is with my calculated size of the texture, but I can't find any references / specification for this compression format.
Does anyone know how to properly read this file format?
Here is a screenshot from the Nexus:
Here is how it should have looked (don't mind the black object textures; the image for those was missing).
Here is my code snippet.
final int[] textureObjectIds = new int[1];
glGenTextures(1, textureObjectIds, 0);
if(textureObjectIds[0] == 0){
logTextureHelper(Log.WARN, "Could not generate a new OpenGL texture object");
return 0;
}
final InputStream bitmap = context.getResources().openRawResource(resourceId);
byte[] buffer;
ByteBuffer bf;
try {
buffer = new byte[bitmap.available()];
bitmap.read(buffer);
int offset = 0; // 64 bit = header, 15 bit = metadata
bf = ByteBuffer.wrap(buffer, offset, buffer.length-offset);
bf.order(ByteOrder.LITTLE_ENDIAN);
int height = bf.getInt(16);
int width = bf.getInt(12);
int size = ((height + 3) / 4) * ((width + 3) / 4) * 16;
Log.d("TextureHelper","Buffer size: "+width+" "+height+" "+size);
glBindTexture(GL_TEXTURE_2D, textureObjectIds[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glCompressedTexImage2D(GL_TEXTURE_2D, 0,ATC_RGBA_EXPLICIT_ALPHA_AMD, width, height, 0, size, bf);
Log.d("TextureHelper","Buffer size: "+bf.capacity()+" : "+buffer.length+" error:"+GLES20.glGetError());
glBindTexture(GL_TEXTURE_2D, 0); //unbind texture
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return textureObjectIds[0];
EDIT: Solution
buffer = new byte[bitmap.available()];
bitmap.read(buffer);
int offset = 128; // 64 bit = header, 15 bit = metadata
bf = ByteBuffer.wrap(buffer, offset, buffer.length-offset);
bf.order(ByteOrder.LITTLE_ENDIAN);
int height = bf.getInt(16);
int width = bf.getInt(12);
int size = ((height + 3) / 4) * ((width + 3) / 4) * 16;
Log.d("TextureHelper","Buffer size: "+width+" "+height+" "+size);
bf.compact();///////SOLUTION!
bf.position(0);
glBindTexture(GL_TEXTURE_2D, textureObjectIds[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glCompressedTexImage2D(GL_TEXTURE_2D, 0,ATC_RGBA_EXPLICIT_ALPHA_AMD, width, height, 0, size, bf);
Log.d("TextureHelper","Buffer size: "+bf.capacity()+" : "+buffer.length+" error:"+GLES20.glGetError());
glBindTexture(GL_TEXTURE_2D, 0); //unbind texture
Note: you need to have the data at position 0; if you just set position(128) on the wrapped buffer, it will throw an invalid pointer exception.
Your size computation looks correct, but you're uploading the header as image data. After you extract the width and height from the header, offset your byte buffer by the appropriate amount to skip the header and start pointing at the image data.
There are a few ways you could do this, but something like this may work as an example (it could be optimized to remove the second ByteBuffer). Also, I'm not sure what texture format you're using, but let's assume your header is, oh, 124 bytes long.
// assumption that buffer[] is the fully loaded contents of the file
byte[] buffer = ... // load entire file from input stream
// read the header
ByteBuffer bfHeader = ByteBuffer.wrap(buffer);
bfHeader.order(ByteOrder.LITTLE_ENDIAN);
int height = bfHeader.getInt(16);
int width = bfHeader.getInt(12);
// offset to image data
int headerSize = 124; // (replace with correct header size)
ByteBuffer bfData = ByteBuffer.wrap(buffer, headerSize, buffer.length-headerSize);
bfData.order(ByteOrder.LITTLE_ENDIAN);
// load image data
int size = ((height + 3) / 4) * ((width + 3) / 4) * 16;
glBindTexture(GL_TEXTURE_2D, textureObjectIds[0]);
GLES20.glCompressedTexImage2D(GL_TEXTURE_2D, 0,ATC_RGBA_EXPLICIT_ALPHA_AMD, width, height, 0, size, bfData);
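If the extra copy through a second ByteBuffer (or compact()) is a concern, another option is to copy just the compressed payload into a direct buffer, which is generally the safest kind of buffer to pass to GL calls. This is only a sketch under the same assumptions as above (the 128-byte header offset from the asker's solution and the ATC_RGBA_EXPLICIT_ALPHA_AMD constant):
int headerSize = 128;   // header length taken from the solution above
int size = ((height + 3) / 4) * ((width + 3) / 4) * 16;
ByteBuffer bfData = ByteBuffer.allocateDirect(size).order(ByteOrder.nativeOrder());
bfData.put(buffer, headerSize, size);   // copy only the image data, skipping the header
bfData.position(0);
GLES20.glCompressedTexImage2D(GL_TEXTURE_2D, 0, ATC_RGBA_EXPLICIT_ALPHA_AMD, width, height, 0, size, bfData);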

Convert OpenGL ES 2.0 rendered texture to bitmap and back

I'd like to blur the rendered texture with RenderScript; for that I need to convert it to a bitmap, and to use it afterwards I need to convert it back to an OpenGL texture.
The render-to-texture part is working. The problem has to be somewhere in the code below, but I don't understand why it doesn't work; I'm getting a black screen.
public void renderToTexture(){
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
// specify texture as color attachment
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, renderTex[0], 0);
// attach render buffer as depth buffer
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER, depthRb[0]);
// check status
int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
drawRender();
Bitmap bitmap = SavePixels(0,0,texW,texH);
//blur bitmap and get back a bluredBitmap not yet implemented
texture = TextureHelper.loadTexture(bluredBitmap, 128);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
GLES20.glClear( GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
drawRender2();
}
To create the bitmap I read the pixels back from the framebuffer, because I didn't find any other way to do it, but I'm open to other methods:
public static Bitmap SavePixels(int x, int y, int w, int h)
{
int b[]=new int[w*(y+h)];
int bt[]=new int[w*h];
IntBuffer ib=IntBuffer.wrap(b);
ib.position(0);
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, ib);
for(int i=0, k=0; i<h; i++, k++)
{
for(int j=0; j<w; j++)
{
int pix=b[i*w+j];
int pb=(pix>>16)&0xff;
int pr=(pix<<16)&0x00ff0000;
int pix1=(pix&0xff00ff00) | pr | pb;
bt[(h-k-1)*w+j]=pix1;
}
}
Bitmap sb=Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
return sb;
}
Here is the bitmap to texture code:
public static int loadTexture(final Bitmap pics, int size)
{
final int[] textureHandle = new int[1];
GLES20.glGenTextures(1, textureHandle, 0);
if (textureHandle[0] != 0)
{
// Read in the resource
final Bitmap bitmap = pics;
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glEnable(GLES20.GL_BLEND);
// Bind to the texture in OpenGL
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
// Set filtering
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
// Load the bitmap into the bound texture.
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
// Recycle the bitmap, since its data has been loaded into OpenGL.
bitmap.recycle();
}
if (textureHandle[0] == 0)
{
throw new RuntimeException("Error loading texture.");
}
return textureHandle[0];
}
You can look at the Android MediaCodec material, in particular ExtractMpegFramesTest_egl14.java; the relevant code snippet is here:
/**
 * Saves the current frame to disk as a PNG image.
 */
public void saveFrame(String filename) throws IOException {
// glReadPixels gives us a ByteBuffer filled with what is essentially big-endian RGBA
// data (i.e. a byte of red, followed by a byte of green...). To use the Bitmap
// constructor that takes an int[] array with pixel data, we need an int[] filled
// with little-endian ARGB data.
//
// If we implement this as a series of buf.get() calls, we can spend 2.5 seconds just
// copying data around for a 720p frame. It's better to do a bulk get() and then
// rearrange the data in memory. (For comparison, the PNG compress takes about 500ms
// for a trivial frame.)
//
// So... we set the ByteBuffer to little-endian, which should turn the bulk IntBuffer
// get() into a straight memcpy on most Android devices. Our ints will hold ABGR data.
// Swapping B and R gives us ARGB. We need about 30ms for the bulk get(), and another
// 270ms for the color swap.
//
// We can avoid the costly B/R swap here if we do it in the fragment shader (see
// http://stackoverflow.com/questions/21634450/ ).
//
// Having said all that... it turns out that the Bitmap#copyPixelsFromBuffer()
// method wants RGBA pixels, not ARGB, so if we create an empty bitmap and then
// copy pixel data in we can avoid the swap issue entirely, and just copy straight
// into the Bitmap from the ByteBuffer.
//
// Making this even more interesting is the upside-down nature of GL, which means
// our output will look upside-down relative to what appears on screen if the
// typical GL conventions are used. (For ExtractMpegFrameTest, we avoid the issue
// by inverting the frame when we render it.)
//
// Allocating large buffers is expensive, so we really want mPixelBuf to be
// allocated ahead of time if possible. We still get some allocations from the
// Bitmap / PNG creation.
mPixelBuf.rewind();
GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
mPixelBuf);
BufferedOutputStream bos = null;
try {
bos = new BufferedOutputStream(new FileOutputStream(filename));
Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
mPixelBuf.rewind();
bmp.copyPixelsFromBuffer(mPixelBuf);
bmp.compress(Bitmap.CompressFormat.PNG, 90, bos);
bmp.recycle();
} finally {
if (bos != null) bos.close();
}
if (VERBOSE) {
Log.d(TAG, "Saved " + mWidth + "x" + mHeight + " frame as '" + filename + "'");
}
}
You should have used:
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);
Your for loop is then supposed to convert the RGBA values to ARGB_8888.
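And, as the quoted saveFrame() comments point out, the per-pixel swap can be avoided altogether by reading GL_RGBA into a direct ByteBuffer and handing it to Bitmap.copyPixelsFromBuffer() on an ARGB_8888 bitmap, which stores its pixels in exactly that byte order. A short sketch, with w and h standing in for the question's texW/texH:
ByteBuffer pixelBuf = ByteBuffer.allocateDirect(w * h * 4).order(ByteOrder.nativeOrder());
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuf);
pixelBuf.rewind();
Bitmap bmp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
bmp.copyPixelsFromBuffer(pixelBuf);   // no channel swizzle needed
// the result is still vertically flipped relative to the screen; flip it afterwards or render the frame inverted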
