Multisampled FBOs in OpenGL ES 3.0 - Android

How is it possible to have a multisampled texture as part of an FBO in OpenGL ES 3.0 on Android?
The method glTexImage2DMultisample does not seem to exist.
I also want to call glReadPixels on this texture later in the code,
so the multisampled texture needs to be readable as well.
Is there some kind of extension or utility I would need to use for this?

You want glTexStorage2DMultisample, which was added in OpenGL ES 3.1. In general, writing multisampled data back to memory is expensive and needs a resolve using glBlitFramebuffer to consolidate to a single sample.
Consider using this extension to get a "free" resolve on most tile-based architectures.
https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_multisampled_render_to_texture.txt
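A minimal sketch of the texture path, assuming an OpenGL ES 3.1 context (the sample count, format, and the width/height variables are illustrative):
// ES 3.1+: allocate 4x multisampled RGBA8 storage and attach it as color.
GLuint msTex, msFbo;
glGenTextures(1, &msTex);
glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msTex);
glTexStorage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                          width, height, GL_TRUE);
glGenFramebuffers(1, &msFbo);
glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D_MULTISAMPLE, msTex, 0);
// glReadPixels cannot read multisampled storage directly; resolve into a
// single-sampled FBO with glBlitFramebuffer first.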

Actually, OpenGL ES 3.0 does not support texture multisampling, so glTexStorage2DMultisample will not work there (it requires ES 3.1). OpenGL ES 3.0 only supports multisampling through renderbuffers; in my case I solved it by creating a multisampled renderbuffer, which works like a charm.
Here is how I did it:
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Create a 4x multisampled RGBA8 renderbuffer and attach it as color
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    LOGI("ERROR::FRAMEBUFFER:: Framebuffer is not complete!");
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Then, at render time, resolve the multisampled FBO by blitting it to the default framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, mScreenWidth, mScreenHeight, 0, 0, mScreenWidth, mScreenHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
This works on OpenGL ES 3.2 on the Android platform.
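To actually call glReadPixels on the result (the original question's goal), blit into a second, single-sampled FBO instead of the default framebuffer and read from that. A sketch, where resolveFbo is an illustrative single-sampled FBO backed by a GL_RGBA8 attachment:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);        // multisampled source
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo); // single-sampled destination
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);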

Related

Using a depth and stencil renderbuffer attachment for framebuffer in OpenGL ES

I want to create a framebuffer in OpenGL ES 2 with a color texture and depth and stencil renderbuffers. However, OpenGL ES doesn't seem to have GL_DEPTH24_STENCIL8 or GL_DEPTH_STENCIL_ATTACHMENT. Using two separate renderbuffers gives the error "Stencil and z buffer surfaces have different formats! Returning GL_FRAMEBUFFER_UNSUPPORTED!"
Is this not possible in OpenGL ES?
My FBO creation code:
private int width, height;
private int framebufferID,
            colorTextureID,
            depthRenderBufferID,
            stencilRenderBufferID;

public FBO(int w, int h) {
    width = w;
    height = h;
    int[] array = new int[1];

    // Create the framebuffer and bind it
    glGenFramebuffers(1, array, 0);
    framebufferID = array[0];
    glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);

    // Create the texture for color, so it can be rendered to the screen
    glGenTextures(1, array, 0);
    colorTextureID = array[0];
    glBindTexture(GL_TEXTURE_2D, colorTextureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // Attach the texture to the framebuffer
    glFramebufferTexture2D(GL_FRAMEBUFFER,  // must be GL_FRAMEBUFFER
            GL_COLOR_ATTACHMENT0,           // color attachment point
            GL_TEXTURE_2D,                  // texture type
            colorTextureID,                 // texture ID
            0);                             // mipmap level
    glBindTexture(GL_TEXTURE_2D, 0);

    // Is the color texture okay? Hang in there, buddy
    FBOUtils.checkCompleteness(framebufferID);

    // Create the depth renderbuffer
    glGenRenderbuffers(1, array, 0);
    depthRenderBufferID = array[0];
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBufferID);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

    // Create the stencil renderbuffer
    glGenRenderbuffers(1, array, 0);
    stencilRenderBufferID = array[0];
    glBindRenderbuffer(GL_RENDERBUFFER, stencilRenderBufferID);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);

    // Bind renderbuffers to the framebuffer object
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferID);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilRenderBufferID);

    // Make sure nothing screwy happened
    FBOUtils.checkCompleteness(framebufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Packed depth/stencil surfaces are not a standard part of OpenGL ES 2.0, but are added via this extension:
https://www.khronos.org/registry/gles/extensions/OES/OES_packed_depth_stencil.txt
If the extension is supported on your platform (it usually is), the token names from OpenGL will generally work, but note that most have an _OES postfix because it is an OES extension, e.g. the internal format token is GL_DEPTH24_STENCIL8_OES.
The extension doesn't define a single combined attachment point such as GL_DEPTH_STENCIL_ATTACHMENT (that is added in OpenGL ES 3.0), but you can attach the same renderbuffer to one or both of the single attachment points. Note that it is not allowed to attach two different depth or stencil surfaces to the depth and stencil attachment points if you have attached a packed depth/stencil surface to the other (i.e. if you attach a packed depth/stencil to one attachment point, the other can either be attached to the same packed surface or unused).
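A sketch of the packed approach, assuming OES_packed_depth_stencil is advertised in the GL_EXTENSIONS string (width/height as in the question's code):
GLuint depthStencilRb;
glGenRenderbuffers(1, &depthStencilRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, width, height);
// ES 2.0 has no combined attachment point, so attach the same
// renderbuffer to both single attachment points:
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthStencilRb);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencilRb);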
In short, this is implementation dependent. What you are trying in the posted code, using a separate renderbuffer each for depth and stencil, is basically legal in ES 2.0. But then there's this paragraph in the spec:
[..] some implementations may not support rendering to particular combinations of internal formats. If the combination of formats of the images attached to a framebuffer object are not supported by the implementation, then the framebuffer is not complete under the clause labeled FRAMEBUFFER_UNSUPPORTED.
That's exactly the GL_FRAMEBUFFER_UNSUPPORTED error you are seeing. Your implementation apparently does not like the combination of depth and stencil buffer, and is at liberty to refuse supporting it while still being spec compliant.
There's one other aspect that makes your code device dependent. The combination of format and type you're using for your texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
basically corresponds to an RGBA8 internal format (even though that naming is not used in the ES 2.0 spec). In base ES 2.0, this is not a color-renderable format. If you want something that is supported across the board, you'll have to use GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_4_4_4_4, or GL_UNSIGNED_SHORT_5_5_5_1 for the type. Well, theoretically a device can refuse to support almost any format. The only strict requirement is that it supports at least one format combination.
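For example, this allocation corresponds to RGB565, one of the color-renderable formats in base ES 2.0 (width/height as in the question's code):
// GL_RGB + GL_UNSIGNED_SHORT_5_6_5 maps to the color-renderable RGB565 format.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);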
Rendering to RGBA8 formats is available on many devices as part of the OES_rgb8_rgba8 extension.
As already pointed out in another answer, combined depth/stencil formats are not part of base ES 2.0, and only available with the OES_packed_depth_stencil extension.

Android: read pixels from GL_TEXTURE_EXTERNAL_OES

I am trying to read pixels/data from an OpenGL texture which is bound to GL_TEXTURE_EXTERNAL_OES.
The reason for binding the texture to that target is that, in order to get a live camera feed on Android, a SurfaceTexture needs to be created from an OpenGL texture which is bound to GL_TEXTURE_EXTERNAL_OES.
Since Android uses OpenGL ES, I can't use glGetTexImage() to read the image data.
Therefore I am attaching the texture to an FBO and then reading it using glReadPixels(). This is my code:
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
//Attach 2D texture to this FBO
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, cameraTexture, 0);
status("glFramebufferTexture2D() returned error %d", glGetError());
However, I am getting error 1282 (GL_INVALID_OPERATION) for some reason.
I think this might be the problem:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_EXTERNAL_OES, cameraTexture, 0);
You should not attach cameraTexture to the framebuffer; instead, you should generate a new texture with the GL_TEXTURE_2D target:
glGenTextures(1, mTextureHandle, 0);
glBindTexture(GL_TEXTURE_2D, mTextureHandle[0]);
...
cameraTexture is the one you get from SurfaceTexture, and it is the source used for rendering. This new texture is the one you should render to (and it can be used later in the rendering pipeline). Then do this:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, mTextureHandle[0], 0);
cameraTexture is then drawn into the framebuffer's attached texture using a simple shader program; bind cameraTexture while that shader program is in use:
glBindTexture(GL_TEXTURE_EXTERNAL_OES, cameraTexture);
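As a sketch, a minimal fragment shader for that copy pass, held as a C string; uCameraTex and vTexCoord are illustrative names, and the extension directive is required to declare a samplerExternalOES:
// Samples the external (camera) texture and writes plain RGBA.
static const char* kExternalCopyFrag =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES uCameraTex;\n"
    "varying vec2 vTexCoord;\n"
    "void main() { gl_FragColor = texture2D(uCameraTex, vTexCoord); }\n";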
A texture bound to the GL_TEXTURE_EXTERNAL_OES target usually holds data in a YUV color space, while glReadPixels() requires the source to be RGB; the read probably does not do the color-space conversion automatically. However, you can do the conversion in your own fragment shader, render the RGB result into another texture, and then use glReadPixels() to read that.
See also: texture for YUV420 to RGB conversion in OpenGL ES

Android - Can't read pixels from GraphicBuffer on Adreno GPU with Karthik's method (hacky alternative to glReadPixels)

Since July, I have been developing an Android application to edit video files like .avi, .flv, etc. I use FFMPEG and OpenGL ES 2.0 to implement this application.
Because executing a filter effect like "Blur" on the CPU requires too many calculations, I decided to use OpenGL ES 2.0 to apply filter effects to video frames with the GPU and shaders.
What I am trying to do is use a shader to apply a filter effect to a video frame and then get the pixels stored in the framebuffer.
So I have to use glReadPixels, the only OpenGL ES 2.0 method that can be used to get pixels from a framebuffer. But according to many GPU development guides, using glReadPixels is not recommended, and the guides warn of its potential risks. Also, the performance of glReadPixels differs depending on GPU version and vendor. I could not firmly decide to use glReadPixels, so I tried to find another method for reading back the results of the GPU calculation.
After a few days, I found a hacky method for getting the pixel data by using an Android GraphicBuffer.
Here is the link.
From this link, I tried Karthik's method in my code.
The only difference is:
// Render method I made.
void renderFrame() {
    /* some codes to init */
    glBindFramebuffer(GL_FRAMEBUFFER, iFBO);

    /* Set the viewport according to the FBO's texture. */
    glViewport(0, 0, mTexWidth, mTexHeight);

    /* Clear screen on FBO. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Different code compared to Karthik's.
    contents->setTexture();
    contents->draw(mPositionVarIndex, mTextrueCoIndex);
    contents->releaseData();

    /* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    LOGI("Read Graphic Buffer");
    // Just in case the buffer was not created yet
    void* vaddr;
    // Lock the buffer and retrieve a pointer where we are going to write the data
    buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
    if (vaddr == NULL) {
        LOGE("lock error");
        buffer->unlock();
        return;
    }
    /* some codes that use the pixels from GraphicBuffer... */
}
void setTexture() {
    glGenTextures(1, mTexture);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}

void releaseData() {
    glDeleteTextures(1, mTexture);
    glDeleteBuffers(1, mVbo);
}

void draw(int positionIndex, int textureIndex) {
    mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
    glEnableVertexAttribArray(positionIndex);
    // checkGlError("glEnableVertexAttribArray");
    glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    // checkGlError("glVertexAttribPointer");
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
    glEnableVertexAttribArray(textureIndex);
    glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
    checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones: a Samsung Galaxy S2, whose renderer is a Mali-400MP, and an LG Optimus G Pro with an Adreno 320 renderer. The Galaxy S2 works well with the above code and Karthik's method, but the LG phone has some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link:
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
Since the only errors I could see were related to genlock, I guessed there was some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp and the *.h files to find the cause of those errors, but failed.
My questions are:
Is this the right approach to getting a filter effect out of a GPU calculation? If not, how can I get a filter effect like "Blur", which requires so many calculations?
Does Karthik's method not work on Qualcomm GPUs? I want to know why those errors occurred only on the Qualcomm Adreno GPU.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
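GraphicBuffer is a private platform class, so treat the following only as a sketch; the constructor signature reflects the AOSP sources of that era and may differ between Android versions:
// Illustrative allocation; the key point is OR-ing in SW_READ_OFTEN so
// the CPU is later allowed to lock and read the buffer.
android::sp<android::GraphicBuffer> buffer = new android::GraphicBuffer(
        width, height, HAL_PIXEL_FORMAT_RGBA_8888,
        GRALLOC_USAGE_SW_READ_OFTEN |  // CPU reads after rendering
        GRALLOC_USAGE_HW_RENDER);      // GPU renders into the buffer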
Unrelated but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there's now an example of applying filters with the GPU in Grafika. You can see a screenrecorded demo here.

How to render to a specific mip level in OpenGL ES?

Does anyone know how to render to a specific mip level of a texture?
Currently I am attaching the mip level to the framebuffer like this:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, textID, mip-level);
Then later in my code, I will do something like this:
glBindFramebuffer(GL_FRAMEBUFFER, FBO_ID);
drawArrays(...);
But the shader is not executed!
If textID is anything other than 0, this should be generating a GL_INVALID_VALUE error.
GL_INVALID_VALUE is generated if level is not 0 and texture is not 0.
I suggest you take a look at the glFramebufferTexture2D reference page for OpenGL ES. It is valid to do what you want in desktop OpenGL, but not in OpenGL ES 2.0; OpenGL ES 3.0 lifts the restriction and allows rendering to any mip level. :-\
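For completeness, a sketch of attaching a non-zero mip level under OpenGL ES 3.0, where this is allowed; the sizes and level count are illustrative, and the level must be below the number of levels actually allocated:
glBindTexture(GL_TEXTURE_2D, textID);
glTexStorage2D(GL_TEXTURE_2D, 4, GL_RGBA8, 64, 64);  // allocates levels 0..3
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, textID, 2);    // render into mip level 2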

glReadPixels(GL_DEPTH_COMPONENT) not working in Android OpenGL ES 2.0

I use the following code to retrieve the depth buffer:
FloatBuffer pixels = ByteBuffer
.allocateDirect(4).order(ByteOrder.nativeOrder()).asFloatBuffer();
GLES20.glReadPixels(pointx, pointy, 1, 1,
GLES20.GL_DEPTH_COMPONENT16, GLES20.GL_FLOAT, pixels);
The problem is that whichever point I request, pixels gives me 0.0.
I have enabled the following in onSurfaceCreated:
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glDepthFunc(GLES20.GL_LEQUAL);
GLES20.glDepthMask(true);
GLES20.glClearColor(1, 1, 1, 1);
I've been struggling with this issue for days! Please help.
According to the OpenGL ES 2.0 docs, glReadPixels() doesn't support reading the depth buffer. What does glGetError() return?
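For reference, a color read-back sketch using GL_RGBA / GL_UNSIGNED_BYTE, a combination every ES 2.0 implementation must support for glReadPixels (x and y as in the question):
GLubyte pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
// Expect GL_NO_ERROR here; a depth read-back would have raised an error.
GLenum err = glGetError();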
