I'd like to blur the rendered texture with RenderScript, and for that I need to convert it to a bitmap; to use the result I then need to convert it back to an OpenGL texture. The render-to-texture part works. The problem has to be somewhere in the code below, but I don't understand why it doesn't work: I'm getting a black screen.
public void renderToTexture(){
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
    GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
    // specify texture as color attachment
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, renderTex[0], 0);
    // attach render buffer as depth buffer
    GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER, depthRb[0]);
    // check status
    int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
    drawRender();
    Bitmap bitmap = SavePixels(0, 0, texW, texH);
    // blur bitmap and get back a bluredBitmap; the blur itself is not yet implemented
    Bitmap bluredBitmap = bitmap; // placeholder so this compiles until the blur exists
    texture = TextureHelper.loadTexture(bluredBitmap, 128);
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    GLES20.glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT | GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
    drawRender2();
}
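For the blur step that is still marked "not yet implemented", a minimal sketch using RenderScript's ScriptIntrinsicBlur could look like the following (assumptions: a Context is available, and the 25f radius, the maximum the intrinsic supports, is an arbitrary choice). The placeholder above would then become bluredBitmap = blurBitmap(context, bitmap);.

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.Allocation;
import android.renderscript.Element;
import android.renderscript.RenderScript;
import android.renderscript.ScriptIntrinsicBlur;

public static Bitmap blurBitmap(Context context, Bitmap bitmap) {
    RenderScript rs = RenderScript.create(context);
    // wrap the input and output bitmaps in Allocations the intrinsic can work on
    Allocation input = Allocation.createFromBitmap(rs, bitmap);
    Allocation output = Allocation.createTyped(rs, input.getType());
    ScriptIntrinsicBlur blur = ScriptIntrinsicBlur.create(rs, Element.U8_4(rs));
    blur.setRadius(25f); // radius must be in (0, 25]
    blur.setInput(input);
    blur.forEach(output);
    Bitmap bluredBitmap = Bitmap.createBitmap(bitmap.getWidth(), bitmap.getHeight(), bitmap.getConfig());
    output.copyTo(bluredBitmap);
    rs.destroy(); // release the RenderScript context when done
    return bluredBitmap;
}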
To create the bitmap I read the pixels back from the framebuffer, because I didn't find any other way to do it, but I'm open to other methods:
public static Bitmap SavePixels(int x, int y, int w, int h)
{
    int b[] = new int[w * (y + h)];
    int bt[] = new int[w * h];
    IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, ib);
    for(int i = 0, k = 0; i < h; i++, k++)
    {
        for(int j = 0; j < w; j++)
        {
            int pix = b[i * w + j];
            int pb = (pix >> 16) & 0xff;
            int pr = (pix << 16) & 0x00ff0000;
            int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - k - 1) * w + j] = pix1;
        }
    }
    Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
    return sb;
}
Here is the bitmap to texture code:
public static int loadTexture(final Bitmap pics, int size)
{
    final int[] textureHandle = new int[1];
    GLES20.glGenTextures(1, textureHandle, 0);
    if (textureHandle[0] != 0)
    {
        // Read in the resource
        final Bitmap bitmap = pics;
        GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
        GLES20.glEnable(GLES20.GL_BLEND);
        // Bind to the texture in OpenGL
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
        // Set filtering
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
        // Load the bitmap into the bound texture.
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
        // Recycle the bitmap, since its data has been loaded into OpenGL.
        bitmap.recycle();
    }
    if (textureHandle[0] == 0)
    {
        throw new RuntimeException("Error loading texture.");
    }
    return textureHandle[0];
}
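One thing to watch with this helper: if renderToTexture() runs every frame, each call generates a brand-new texture name, and unless the previous texture is deleted the app leaks GPU memory steadily. A minimal guard (a sketch; it assumes texture starts out as 0):

// delete last frame's texture before replacing it
if (texture != 0) {
    GLES20.glDeleteTextures(1, new int[] { texture }, 0);
}
texture = TextureHelper.loadTexture(bluredBitmap, 128);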
You can look at the Android MediaCodec examples, in particular ExtractMpegFramesTest_egl14.java; the relevant snippet is this:
/**
 * Saves the current frame to disk as a PNG image.
 */
public void saveFrame(String filename) throws IOException {
    // glReadPixels gives us a ByteBuffer filled with what is essentially big-endian RGBA
    // data (i.e. a byte of red, followed by a byte of green...). To use the Bitmap
    // constructor that takes an int[] array with pixel data, we need an int[] filled
    // with little-endian ARGB data.
    //
    // If we implement this as a series of buf.get() calls, we can spend 2.5 seconds just
    // copying data around for a 720p frame. It's better to do a bulk get() and then
    // rearrange the data in memory. (For comparison, the PNG compress takes about 500ms
    // for a trivial frame.)
    //
    // So... we set the ByteBuffer to little-endian, which should turn the bulk IntBuffer
    // get() into a straight memcpy on most Android devices. Our ints will hold ABGR data.
    // Swapping B and R gives us ARGB. We need about 30ms for the bulk get(), and another
    // 270ms for the color swap.
    //
    // We can avoid the costly B/R swap here if we do it in the fragment shader (see
    // http://stackoverflow.com/questions/21634450/ ).
    //
    // Having said all that... it turns out that the Bitmap#copyPixelsFromBuffer()
    // method wants RGBA pixels, not ARGB, so if we create an empty bitmap and then
    // copy pixel data in we can avoid the swap issue entirely, and just copy straight
    // into the Bitmap from the ByteBuffer.
    //
    // Making this even more interesting is the upside-down nature of GL, which means
    // our output will look upside-down relative to what appears on screen if the
    // typical GL conventions are used. (For ExtractMpegFrameTest, we avoid the issue
    // by inverting the frame when we render it.)
    //
    // Allocating large buffers is expensive, so we really want mPixelBuf to be
    // allocated ahead of time if possible. We still get some allocations from the
    // Bitmap / PNG creation.
    mPixelBuf.rewind();
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE,
            mPixelBuf);
    BufferedOutputStream bos = null;
    try {
        bos = new BufferedOutputStream(new FileOutputStream(filename));
        Bitmap bmp = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
        mPixelBuf.rewind();
        bmp.copyPixelsFromBuffer(mPixelBuf);
        bmp.compress(Bitmap.CompressFormat.PNG, 90, bos);
        bmp.recycle();
    } finally {
        if (bos != null) bos.close();
    }
    if (VERBOSE) {
        Log.d(TAG, "Saved " + mWidth + "x" + mHeight + " frame as '" + filename + "'");
    }
}
You should have used:
GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, ib);
Your for loop is written to convert RGBA to ARGB_8888, but since you read the pixels as GL_RGB you only get three bytes per pixel, so the int-per-pixel channel swap operates on misaligned data.
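Alternatively, following the Bitmap#copyPixelsFromBuffer() route described in the comments above avoids both the format mismatch and the manual channel swap. A minimal sketch (the result is still vertically flipped, as those comments explain):

public static Bitmap savePixels(int w, int h) {
    // read RGBA bytes; copyPixelsFromBuffer() expects exactly this order for ARGB_8888
    ByteBuffer buf = ByteBuffer.allocateDirect(w * h * 4).order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, w, h, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, buf);
    buf.rewind();
    Bitmap bmp = Bitmap.createBitmap(w, h, Bitmap.Config.ARGB_8888);
    bmp.copyPixelsFromBuffer(buf);
    return bmp; // still upside-down relative to the screen; flip with a Matrix if needed
}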
Related
I am looking for an example of how to render a bitmap to the surface provided by MediaCodec so that I can encode the frames and then mux them into an mp4 video.
The closest well-known example I've found is EncodeAndMuxTest. Unfortunately, with my limited OpenGL knowledge, I have not been able to convert the example to use bitmaps instead of the raw OpenGL frames it currently generates. Here is the example's frame-generation method:
private void generateSurfaceFrame(int frameIndex) {
    frameIndex %= 8;
    int startX, startY;
    if (frameIndex < 4) {
        // (0,0) is bottom-left in GL
        startX = frameIndex * (mWidth / 4);
        startY = mHeight / 2;
    } else {
        startX = (7 - frameIndex) * (mWidth / 4);
        startY = 0;
    }
    GLES20.glClearColor(TEST_R0 / 255.0f, TEST_G0 / 255.0f, TEST_B0 / 255.0f, 1.0f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glEnable(GLES20.GL_SCISSOR_TEST);
    GLES20.glScissor(startX, startY, mWidth / 4, mHeight / 2);
    GLES20.glClearColor(TEST_R1 / 255.0f, TEST_G1 / 255.0f, TEST_B1 / 255.0f, 1.0f);
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glDisable(GLES20.GL_SCISSOR_TEST);
}
Can someone please show me how to modify this to render a bitmap instead, or point me to an example that does this? I'm assuming that this is all I need to do to get bitmaps to render to the surface (but I may be wrong).
Edit: this is the method I replaced generateSurfaceFrame with; so far it produces no input to the encoder surface:
private int generatebitmapframe()
{
    final int[] textureHandle = new int[1];
    try {
        int id = _context.getResources().getIdentifier("drawable/other", null, _context.getPackageName());
        // Temporary create a bitmap
        Bitmap bmp = BitmapFactory.decodeResource(_context.getResources(), id);
        GLES20.glGenTextures(1, textureHandle, 0);
        if (textureHandle[0] != 0)
        {
            // Bind to the texture in OpenGL
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
            // Set filtering
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
            // Load the bitmap into the bound texture.
            GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bmp, 0);
        }
        if (textureHandle[0] == 0)
        {
            throw new RuntimeException("Error loading texture.");
        }
        //Utils.testSavebitmap(bmp, new File(OUTPUT_DIR, "testers.bmp").getAbsolutePath());
    }
    catch (Exception e) { e.printStackTrace(); }
    return textureHandle[0];
}
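Note that generatebitmapframe() only uploads the bitmap into a texture; nothing reaches the encoder surface until a draw call actually rasterizes something, followed by the usual eglSwapBuffers() from EncodeAndMuxTest. A minimal sketch of drawing that texture as a full-screen quad in GLES 2.0 (shader compilation/linking is omitted; program is assumed to be a program object built from these two shaders):

private static final String VERTEX_SHADER =
        "attribute vec4 aPosition;\n" +
        "attribute vec2 aTexCoord;\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "    gl_Position = aPosition;\n" +
        "    vTexCoord = aTexCoord;\n" +
        "}\n";

private static final String FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "varying vec2 vTexCoord;\n" +
        "uniform sampler2D uTexture;\n" +
        "void main() {\n" +
        "    gl_FragColor = texture2D(uTexture, vTexCoord);\n" +
        "}\n";

// full-screen quad in normalized device coordinates, interleaved x, y, u, v
private static final float[] QUAD = {
        -1f, -1f, 0f, 1f,
         1f, -1f, 1f, 1f,
        -1f,  1f, 0f, 0f,
         1f,  1f, 1f, 0f,
};

private void drawTexturedQuad(int program, int textureId) {
    FloatBuffer vb = ByteBuffer.allocateDirect(QUAD.length * 4)
            .order(ByteOrder.nativeOrder()).asFloatBuffer();
    vb.put(QUAD);
    GLES20.glUseProgram(program);
    int aPosition = GLES20.glGetAttribLocation(program, "aPosition");
    int aTexCoord = GLES20.glGetAttribLocation(program, "aTexCoord");
    int uTexture = GLES20.glGetUniformLocation(program, "uTexture");
    vb.position(0); // x,y live at float offset 0; stride is 4 floats = 16 bytes
    GLES20.glVertexAttribPointer(aPosition, 2, GLES20.GL_FLOAT, false, 16, vb);
    GLES20.glEnableVertexAttribArray(aPosition);
    vb.position(2); // u,v live at float offset 2
    GLES20.glVertexAttribPointer(aTexCoord, 2, GLES20.GL_FLOAT, false, 16, vb);
    GLES20.glEnableVertexAttribArray(aTexCoord);
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glUniform1i(uTexture, 0);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}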
In my Android app I have a frame buffer object that gives me the rendered scene as a texture.
The app is an origami game in which the user can fold a paper freely:
on every fold, the currently rendered scene is saved to a texture using the FBO, and I then redraw the paper with new coordinates and the new texture attached, so that it looks like folded paper. This way the user can fold the paper as many times as he wants.
On every frame I want to check the rendered scene to determine whether the user has reached the final shape (assume I have the final shape as a 2D array filled with 0s and 1s, 0 for transparent and 1 for colored pixels).
What I want is to somehow convert this texture to a 2D array filled with 0s and 1s:
0 for a transparent pixel, and 1 for a colored pixel of the texture.
I need this so I can compare the result with a previously known 2D array to determine whether the texture is the shape I want.
Is it possible to save the texture data to an array?
I can't use glReadPixels because it is too heavy to call every frame.
Here is the FBO class (I want to have renderTex[0] as an array):
public class FBO {
    int[] fb, renderTex;
    int texW;
    int texH;

    public FBO(int width, int height) {
        texW = width;
        texH = height;
        fb = new int[1];
        renderTex = new int[1];
    }

    public void setup(GL10 gl) {
        // generate
        ((GL11ExtensionPack)gl).glGenFramebuffersOES(1, fb, 0);
        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glGenTextures(1, renderTex, 0); // generate texture
        gl.glBindTexture(GL10.GL_TEXTURE_2D, renderTex[0]);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D,
                GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_NEAREST);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
                GL10.GL_CLAMP_TO_EDGE);
        gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
                GL10.GL_CLAMP_TO_EDGE);
        //texBuffer = ByteBuffer.allocateDirect(buf.length*4).order(ByteOrder.nativeOrder()).asIntBuffer();
        //gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE, GL10.GL_MODULATE);
        gl.glTexImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, texW, texH, 0, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, null);
        gl.glDisable(GL10.GL_TEXTURE_2D);
    }

    public boolean RenderStart(GL10 gl) {
        Log.d("TextureAndFBO", "" + renderTex[0] + " And " + fb[0]);
        // Bind the framebuffer
        ((GL11ExtensionPack)gl).glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, fb[0]);
        // specify texture as color attachment
        ((GL11ExtensionPack)gl).glFramebufferTexture2DOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, GL11ExtensionPack.GL_COLOR_ATTACHMENT0_OES, GL10.GL_TEXTURE_2D, renderTex[0], 0);
        int error = gl.glGetError();
        if (error != GL10.GL_NO_ERROR) {
            Log.d("err", "FIRST Background Load GLError: " + error + " ");
        }
        int status = ((GL11ExtensionPack)gl).glCheckFramebufferStatusOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES);
        if (status != GL11ExtensionPack.GL_FRAMEBUFFER_COMPLETE_OES)
        {
            Log.d("err", "SECOND Background Load GLError: " + status + " ");
            return true;
        }
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
        return true;
    }

    public void RenderEnd(GL10 gl) {
        ((GL11ExtensionPack)gl).glBindFramebufferOES(GL11ExtensionPack.GL_FRAMEBUFFER_OES, 0);
        gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
        gl.glEnable(GL10.GL_TEXTURE_2D);
        gl.glBindTexture(GL10.GL_TEXTURE_2D, 0);
        gl.glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
        gl.glDisable(GL10.GL_TEXTURE_2D);
    }

    public int getTexture() {
        return renderTex[0];
    }

    public int getFBO() {
        return fb[0];
    }
}
If you are using OpenGL ES 3.0 or later, then a PBO would be a good solution. But I think you can use an EGLImage, because that only needs OpenGL ES 1.1 or 2.0.
The function to create an EGLImageKHR is:
EGLImageKHR eglCreateImageKHR(EGLDisplay dpy,
                              EGLContext ctx,
                              EGLenum target,
                              EGLClientBuffer buffer,
                              const EGLint *attrib_list)
To allocate an ANativeWindowBuffer, Android has a simple wrapper called GraphicBuffer:
GraphicBuffer *window = new GraphicBuffer(width, height, PIXEL_FORMAT_RGBA_8888,
        GraphicBuffer::USAGE_SW_READ_OFTEN | GraphicBuffer::USAGE_HW_TEXTURE);
struct ANativeWindowBuffer *buffer = window->getNativeBuffer();
EGLImageKHR image = eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT,
        EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)buffer, attribs);
To read pixels from an FBO, attach the EGLImage using one of the two methods below:
void EGLImageTargetTexture2DOES(enum target, eglImageOES image)
void EGLImageTargetRenderbufferStorageOES(enum target, eglImageOES image)
These two methods will establish all the properties of the target GL_TEXTURE_2D or GL_RENDERBUFFER.
uint8_t *ptr;
glBindTexture(GL_TEXTURE_2D, texture_id);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, image);
window->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &ptr);
memcpy(pixels, ptr, width * height * 4);
window->unlock();
To accomplish what you want, you need to use a PBO (Pixel Buffer Object): you can map it and then read it as if it were a regular array.
OpenGL ARB_pixel_buffer_object extension is very close to
ARB_vertex_buffer_object. It simply expands ARB_vertex_buffer_object
extension in order to store not only vertex data but also pixel data
into the buffer objects. This buffer object storing pixel data is
called Pixel Buffer Object (PBO). ARB_pixel_buffer_object extension
borrows all VBO framework and APIs, plus, adds 2 additional "target"
tokens. These tokens assist the PBO memory manger (OpenGL driver) to
determine the best location of the buffer object; system memory,
shared memory or video memory. Also, the target tokens clearly specify
that the bound PBO will be used in one of 2 different operations;
GL_PIXEL_PACK_BUFFER_ARB to transfer pixel data to a PBO, or
GL_PIXEL_UNPACK_BUFFER_ARB to transfer pixel data from PBO.
It can be created similarly to other buffer objects:
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, size, 0, GL_DYNAMIC_READ);
Then you can read from an FBO (or a texture) easily:
glReadBuffer(GL_COLOR_ATTACHMENT0);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, 0);
GLubyte *array = (GLubyte*)glMapBufferRange(GL_PIXEL_PACK_BUFFER, 0, size, GL_MAP_READ_BIT);
// TODO: Do your checking of the shape inside of this 'array' pointer or copy it somewhere using memcpy()
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
Here GL_COLOR_ATTACHMENT0 is used as the input - see the specification of glReadBuffer for details on how to specify the front or back buffer to be used.
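Since the other snippets in this document are Android Java, the same PBO readback with the GLES30 bindings might look like this (a sketch, assuming an ES 3.0 context; size stands for width * height * 4):

int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, size, null, GLES30.GL_DYNAMIC_READ);

GLES30.glReadBuffer(GLES30.GL_COLOR_ATTACHMENT0);
// with a PBO bound, the last argument is a byte offset into the buffer, not a client pointer
GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
ByteBuffer mapped = (ByteBuffer) GLES30.glMapBufferRange(
        GLES30.GL_PIXEL_PACK_BUFFER, 0, size, GLES30.GL_MAP_READ_BIT);
// ... e.g. test the alpha byte of each texel here to build the 0/1 array ...
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);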
I'm trying to implement dynamic environment reflections on Android using OpenGL ES 2.0.
For this I set my camera at the position of the reflective object and render in 6 different directions (two per axis) into an off-screen renderbuffer to build a cube map, but that's very slow, so my idea is to give the cube map a lower resolution to speed things up. I thought this would be simple, but I don't understand what I'm observing.
I want to see the results of those 6 renders to check whether they are as expected, so I export each one to disk as a PNG file before rendering the next. I rendered once with a 1024x1024 framebuffer and once with 256x256. Looking at the exported files, the 256.png contains only a fraction of the content of the bigger one. I expected them to show the same content (the same field of view, if you like) at different resolutions ("bigger pixels"), but that's not what happens.
I have static constants REFLECTION_TEX_WIDTH and REFLECTION_TEX_HEIGHT for the width and height of the created textures and renderbuffers, and I use those constants both for creation and for exporting. But the exported files never cover as much area as I expect: when I set those dimensions really large, like 2000 each, the rendered area covers only about 1080x1550 pixels and the rest of the file stays black. Can someone tell me what's going on here?
I'm not sure whether the problem lies in my understanding of how the framebuffer works, or whether the rendering is correct and the problem is introduced in my file export... the export methods are copied from the internet and I don't really understand them.
I want to render the same area/FOV, just at a coarser resolution. Is that too much to ask?
Some code then.
Initializations:
// create 6 textures for the dynamic environment reflection
final int skyboxFaces = 6;
final int[] textureId = new int[1];
GLES20.glGenTextures(1, textureId, 0);
skyboxTexture = textureId[0];
ShaderFactory.checkGLError("initRendering::createSkyboxTextures");
GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, skyboxTexture);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_CUBE_MAP, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
for(int i = 0; i < skyboxFaces; i++)
{
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GLES20.GL_RGBA,
            REFLECTION_TEX_WIDTH, REFLECTION_TEX_HEIGHT, 0, GLES20.GL_RGBA,
            GLES20.GL_UNSIGNED_BYTE, null);
    ShaderFactory.checkGLError("initRendering::setSkyboxTexture(" + i + ")");
}
// create renderbuffer and bind 16-bit depth buffer
renderBuffers = new int[1];
GLES20.glGenRenderbuffers(1, renderBuffers, 0);
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, renderBuffers[0]);
GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_DEPTH_COMPONENT16,
        REFLECTION_TEX_WIDTH, REFLECTION_TEX_HEIGHT);
ShaderFactory.checkGLError("initRendering::createRenderbuffer");
frameBuffers = new int[1];
GLES20.glGenFramebuffers(1, frameBuffers, 0);
// GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
//         GLES20.GL_RENDERBUFFER, frameBuffers[0]);
ShaderFactory.checkGLError("initRendering::createFrameBuffer");
Then in the render loop I do the following:
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffers[0]);
GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, renderBuffers[0]);
// assign the cubemap texture to the framebuffer
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
GLES20.GL_TEXTURE_CUBE_MAP_POSITIVE_X + bufferId, skyboxTexture, 0);
// assign the depth renderbuffer to the framebuffer
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
GLES20.GL_RENDERBUFFER, frameBuffers[0]);
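// note: frameBuffers[0] is passed here where a renderbuffer name is expected;
// renderBuffers[0] was presumably intended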
// clear the current framebuffer (color and depth)
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
And to export the results as files I do this:
public void savePNG(final int x, final int y, final int w, final int h, final String name)
{
    final Bitmap bmp = savePixels(x, y, w, h);
    try
    {
        final File file = new File(Environment.getExternalStoragePublicDirectory(
                Environment.DIRECTORY_PICTURES), name);
        final File parent = file.getParentFile();
        // create parent directories if necessary
        if(null != parent && !parent.isDirectory())
            parent.mkdirs();
        // delete existing file to avoid mixing old data with new
        if(file.exists())
            file.delete();
        final FileOutputStream fos = new FileOutputStream(file);
        bmp.compress(CompressFormat.PNG, 100, fos);
        fos.flush();
        fos.close();
        context.sendBroadcast(new Intent(Intent.ACTION_MEDIA_SCANNER_SCAN_FILE, Uri.fromFile(file)));
    }
    catch(final FileNotFoundException e)
    {
        // TODO Auto-generated catch block
        LOG.error("problem " + e);
    }
    catch(final IOException e)
    {
        // TODO Auto-generated catch block
        LOG.error("problem " + e);
    }
}
// TODO: magic imported code
public Bitmap savePixels(final int x, final int y, final int w, final int h)
{
    final int b[] = new int[w * (y + h)];
    final int bt[] = new int[w * h];
    final IntBuffer ib = IntBuffer.wrap(b);
    ib.position(0);
    GLES20.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA, GL10.GL_UNSIGNED_BYTE, ib);
    for(int i = 0, k = 0; i < h; i++, k++)
    {
        // OpenGL bitmap is incompatible with Android bitmap and needs some correction.
        for(int j = 0; j < w; j++)
        {
            final int pix = b[i * w + j];
            final int pb = (pix >> 16) & 0xff;
            final int pr = (pix << 16) & 0x00ff0000;
            final int pix1 = (pix & 0xff00ff00) | pr | pb;
            bt[(h - k - 1) * w + j] = pix1;
        }
    }
    final Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888);
    return sb;
}
It looks like you're missing the viewport setup before rendering to the FBO. During the setup of FBO rendering, add this call:
glViewport(0, 0, REFLECTION_TEX_WIDTH, REFLECTION_TEX_HEIGHT);
You can place it around where you have the glClear(). Don't forget to set it back to the size of the default framebuffer after you're done with the FBO rendering, and before rendering to the default framebuffer.
The viewport size is global state, and defaults to the initial size of the default framebuffer. So anytime you use an FBO with a size different from the default draw surface, you need to set the viewport accordingly.
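Concretely, the render-loop code from the question could be bracketed like this (a sketch; screenWidth and screenHeight stand for the size of the default surface and are assumptions):

GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffers[0]);
GLES20.glViewport(0, 0, REFLECTION_TEX_WIDTH, REFLECTION_TEX_HEIGHT); // match the FBO size
// ... attach the cube map face, clear, draw the scene ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glViewport(0, 0, screenWidth, screenHeight); // restore for on-screen rendering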
On Android with ES 2.0 I'm reading from the framebuffer by calling glReadPixels with an FBO bound. However, the byte array stays 0. Interestingly, when I remove the binding code (but leave the glReadPixels) it works.
That made me wonder whether I bound the buffer incorrectly, although when I check the framebuffer status (glCheckFramebufferStatus) I get GLES20.GL_FRAMEBUFFER_COMPLETE.
Any idea what I've done wrong?
int frameIdIndex = 0, renderIdIndex = 1, textureIdIndex = 2;
int[] bufferId = new int[3];

Bitmap takeOne(Context cntxt) {
    DisplayMetrics dm = cntxt.getResources().getDisplayMetrics();
    int width = dm.widthPixels;
    int height = dm.heightPixels;
    // id index 0 frameId, 1 renderId, 2 textureId
    GLES20.glGenFramebuffers(1, bufferId, frameIdIndex);
    GLES20.glGenRenderbuffers(1, bufferId, renderIdIndex);
    GLES20.glGenTextures(1, bufferId, textureIdIndex);
    // bind texture and load the texture mip-level 0
    // texels are RGB565
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, bufferId[textureIdIndex]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_SHORT_5_6_5, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    // bind renderbuffer and create a 16-bit depth buffer
    // width and height of renderbuffer = width and height of
    // the texture
    GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, bufferId[renderIdIndex]);
    GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_DEPTH_COMPONENT16, width, height);
    // bind the frameBuffer
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, bufferId[frameIdIndex]);
    // specify texture as color attachment
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, bufferId[textureIdIndex], 0);
    // specify renderbuffer as depth_attachment
    GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER, bufferId[renderIdIndex]);
    // check for framebuffer complete
    int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);
    if(status != GLES20.GL_FRAMEBUFFER_COMPLETE) {
        throw new RuntimeException("status:" + status + ", hex:" + Integer.toHexString(status));
    }
    int screenshotSize = width * height;
    ByteBuffer bb = ByteBuffer.allocateDirect(screenshotSize * 4);
    bb.order(ByteOrder.nativeOrder());
    GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA,
            GL10.GL_UNSIGNED_BYTE, bb);
    int pixelsBuffer[] = new int[screenshotSize];
    bb.asIntBuffer().get(pixelsBuffer);
    final Bitmap bitmap = Bitmap.createBitmap(width, height,
            Bitmap.Config.RGB_565);
    bitmap.setPixels(pixelsBuffer, screenshotSize - width, -width,
            0, 0, width, height);
    pixelsBuffer = null;
    short sBuffer[] = new short[screenshotSize];
    ShortBuffer sb = ShortBuffer.wrap(sBuffer);
    bitmap.copyPixelsToBuffer(sb);
    // Making created bitmap (from OpenGL points) compatible with
    // Android bitmap
    for (int i = 0; i < screenshotSize; ++i) {
        short v = sBuffer[i];
        sBuffer[i] = (short) (((v & 0x1f) << 11) | (v & 0x7e0) | ((v & 0xf800) >> 11));
    }
    sb.rewind();
    bitmap.copyPixelsFromBuffer(sb);
    // cleanup
    GLES20.glDeleteRenderbuffers(1, bufferId, renderIdIndex);
    GLES20.glDeleteFramebuffers(1, bufferId, frameIdIndex);
    GLES20.glDeleteTextures(1, bufferId, textureIdIndex);
    return bitmap;
}
Your formats and types are somewhat mixed up. This glTexImage2D() should already give you a GL_INVALID_OPERATION error if you check with glGetError():
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0,
GLES20.GL_RGBA, width, height, 0,
GLES20.GL_RGBA, GLES20.GL_UNSIGNED_SHORT_5_6_5, null);
GL_UNSIGNED_SHORT_5_6_5 can only be used with a format of GL_RGB. From the documentation:
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_5_6_5 and format is not GL_RGB.
To avoid this error condition, the call needs to be:
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0,
GLES20.GL_RGB, width, height, 0,
GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, null);
The glReadPixels() call itself looks fine to me, so I believe that should work once you got a valid texture to render to.
The bitmap.setPixels() call might be problematic. The documentation says that it expects ARGB colors, and you will have RGBA here. But that's beyond the main scope of your question.
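As an aside, a small helper along these lines makes such error conditions surface immediately instead of silently producing empty output (a sketch; the log tag and the choice to throw are arbitrary):

public static void checkGlError(String op) {
    int error;
    // loop: several error flags may be queued up
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        Log.e("GL", op + ": glError 0x" + Integer.toHexString(error));
        throw new RuntimeException(op + ": glError 0x" + Integer.toHexString(error));
    }
}

Calling checkGlError("glTexImage2D") right after the texture allocation would have flagged the invalid format/type combination straight away.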
Okay, let me preface this by saying that I am very new to OpenGL as it pertains to Android, and while I've been reading up on it for some time, I cannot get past this roadblock in my coding.
Currently I am trying to write a class that loads a texture from a .png file in my drawables folder onto an .obj model I made in Blender. I did a UV unwrap on the Blender model and then used the unwrap as a guide for the .png file.
The issue is that I am able to load the texture onto the model, but it renders as one solid color, which seems to come from the texture file. Clearly I don't understand enough about UV texturing in Blender, and there are so many different OpenGL libraries, and so much variation between PC and Android, that it's hard to wrap my head around what works where.
I would be extremely grateful if somebody could help me with this. Here's some of the relevant code, I'll post more as necessary:
from TextureLoader:
public Texture getTexture(GL10 gl, final int ref) throws IOException {
    Texture tex = (Texture) table.get(ref);
    if (tex != null) {
        return tex;
    }
    Log.i("Textures:", "Loading texture: " + ref);
    tex = getTexture(gl, ref,
            GL10.GL_TEXTURE_2D, // target
            GL10.GL_RGBA,       // dst pixel format
            GL10.GL_LINEAR,     // min filter (unused)
            GL10.GL_NEAREST);
    table.put(ref, tex);
    return tex;
}
public Texture getTexture(GL10 gl, final int ref,
        int target,
        int dstPixelFormat,
        int minFilter,
        int magFilter) throws IOException {
    if (!sReady) {
        throw new RuntimeException("Texture Loader not prepared");
    }
    int srcPixelFormat = 0;
    // create the texture ID for this texture
    int id = createID(gl);
    Texture texture = new Texture(target, id);
    // bind this texture
    gl.glBindTexture(target, id);
    Bitmap bitmap = loadImage(ref);
    texture.setWidth(bitmap.getWidth());
    texture.setHeight(bitmap.getHeight());
    if (bitmap.hasAlpha()) {
        srcPixelFormat = GL10.GL_RGBA;
    } else {
        srcPixelFormat = GL10.GL_RGB;
    }
    // convert that image into a byte buffer of texture data
    ByteBuffer textureBuffer = convertImageData(bitmap);
    if (target == GL10.GL_TEXTURE_2D) {
        gl.glTexParameterf(target, GL10.GL_TEXTURE_MIN_FILTER, minFilter);
        gl.glTexParameterf(target, GL10.GL_TEXTURE_MAG_FILTER, magFilter);
    }
    GLUtils.texImage2D(target, 0, bitmap, 0);
    /*gl.glTexImage2D(target,
            0,
            dstPixelFormat,
            get2Fold(bitmap.getWidth()),
            get2Fold(bitmap.getHeight()),
            0,
            srcPixelFormat,
            GL10.GL_UNSIGNED_BYTE,
            textureBuffer);*/
    bitmap.recycle();
    return texture;
}
/**
 * Get the closest greater power of 2 to the fold number
 *
 * @param fold The target number
 * @return The power of 2
 */
private int get2Fold(int fold) {
    int ret = 2;
    while (ret < fold) {
        ret *= 2;
    }
    return ret;
}
/**
 * Convert the bitmap to a texture
 *
 * @param bitmap The image to convert to a texture
 * @return A buffer containing the data
 */
private ByteBuffer convertImageData(Bitmap bitmap) {
    // NOTE: compress() produces PNG *file* bytes, not raw RGBA texels, so this
    // buffer is not suitable for glTexImage2D(); that is why the glTexImage2D()
    // call above is commented out and GLUtils.texImage2D(bitmap) is used instead.
    ByteBuffer imageBuffer = null;
    ByteArrayOutputStream stream = new ByteArrayOutputStream();
    bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream);
    byte[] data = stream.toByteArray();
    imageBuffer = ByteBuffer.allocateDirect(data.length);
    imageBuffer.order(ByteOrder.nativeOrder());
    imageBuffer.put(data, 0, data.length);
    imageBuffer.flip();
    return imageBuffer;
}
/**
 * Creates an integer buffer to hold specified ints
 * - strictly a utility method
 *
 * @param size how many ints to contain
 * @return created IntBuffer
 */
protected IntBuffer createIntBuffer(int size) {
    ByteBuffer temp = ByteBuffer.allocateDirect(4 * size);
    temp.order(ByteOrder.nativeOrder());
    return temp.asIntBuffer();
}

private Bitmap loadImage(int ref) {
    Bitmap bitmap = null;
    Matrix flip = new Matrix();
    flip.postScale(1f, -1f);
    // This will tell the BitmapFactory to not scale based on the device's pixel density:
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inScaled = false;
    Bitmap temp = BitmapFactory.decodeResource(sContext.getResources(), ref, opts);
    bitmap = Bitmap.createBitmap(temp, 0, 0, temp.getWidth(), temp.getHeight(), flip, true);
    temp.recycle();
    return bitmap;
}
from Texture:
public void bind(GL10 gl) {
    gl.glBindTexture(target, textureID);
    gl.glEnable(GL10.GL_TEXTURE_2D);
}
as it is called:
public void render() {
    // Clear Screen And Depth Buffer
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glEnable(GL10.GL_LIGHTING);
    gl.glPushMatrix();
    gl.glTranslatef(0.0f, -1.2f, z); // Move down 1.2 Unit And Into The Screen 6.0
    gl.glRotatef(xrot, 1.0f, 0.0f, 0.0f); // X
    gl.glRotatef(yrot, 0.0f, 1.0f, 0.0f); // Y
    texture.bind(gl);
    model.draw(gl);
    gl.glPopMatrix();
}
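For what it's worth, a common cause of a model rendering in a single flat color is that the draw call never submits texture coordinates, so every vertex samples the same texel. In GL10, model.draw(gl) would need something along these lines (a sketch; texCoordBuffer is a hypothetical FloatBuffer holding the UVs from the Blender export):

gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, texCoordBuffer);
// ... glVertexPointer / glDrawElements as before ...
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);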