How is it possible to have a multisampled texture as part of an FBO in OpenGL ES 3.0 (Android)?
The method glTexImage2DMultisample does not seem to exist.
I also want to call glReadPixels on this texture later on in this code,
so the multisampled texture should also be readable.
Is there some kind of extension or utility I would need to use for this?
You want glTexStorage2DMultisample. In general, writing multisampled data back to memory is expensive and needs a resolve using glBlitFramebuffer to consolidate it down to a single sample per pixel.
Consider using this extension to get a "free" resolve on most tile-based architectures.
https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_multisampled_render_to_texture.txt
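For reference, a minimal sketch of the extension path, assuming GL_EXT_multisampled_render_to_texture is advertised in the GL_EXTENSIONS string and its entry points have been resolved via eglGetProcAddress (tex, fbo, width and height are placeholder names):

GLuint tex = 0, fbo = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// 4x MSAA rendering into a single-sampled texture; the resolve happens when
// the tile is written out, so no explicit glBlitFramebuffer is needed.
glFramebufferTexture2DMultisampleEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     GL_TEXTURE_2D, tex, 0, 4);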
In my case, texture multisampling did not work in OpenGL ES: glTexStorage2DMultisample didn't do the job. OpenGL ES does support multisampling for renderbuffers, though, so I solved it by creating a multisampled renderbuffer instead, which works like a charm.
Here is how I did it:
GLuint fbo = 0, rbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
// Allocate 4x multisampled storage for the colour attachment.
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    LOGI("ERROR::FRAMEBUFFER:: Framebuffer is not complete!");
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
Then, at render time, resolve the multisampled FBO with a blit to the default framebuffer:
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, mScreenWidth, mScreenHeight, 0, 0, mScreenWidth, mScreenHeight,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
This works with OpenGL ES 3.2 on the Android platform.
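glReadPixels cannot read from a multisampled buffer directly, so to get at the pixels (as asked in the original question) you can blit into a second, single-sampled FBO instead of the default framebuffer and read from that. A rough sketch, reusing fbo from above and assuming the same width and height (needs #include <vector>):

GLuint resolveFbo = 0, resolveRbo = 0;
glGenRenderbuffers(1, &resolveRbo);
glBindRenderbuffer(GL_RENDERBUFFER, resolveRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height); // single-sampled
glGenFramebuffers(1, &resolveFbo);
glBindFramebuffer(GL_FRAMEBUFFER, resolveFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, resolveRbo);

// Resolve the multisampled FBO into the single-sampled one.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);

// Read the resolved pixels back to the CPU.
std::vector<GLubyte> pixels(width * height * 4);
glBindFramebuffer(GL_READ_FRAMEBUFFER, resolveFbo);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());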
I'm in the process of porting a game, written in C++ (with SDL), to Android. I've done it before with an older project on a wing and a prayer (with some previous help from stackoverflow!), but although the result was a bit sloppy, it worked on pretty much every Android device I threw it at.
At the time I was too tired to refactor and simplify what I had. This time I'm creating an engine that in theory I won't have to touch again, and I'm bringing my old code across. While I've got everything compiling and running, it's not rendering anything.
Specifically, OpenGL ES 2.0 is throwing up a GL_FRAMEBUFFER_UNSUPPORTED message. I draw everything to a texture (it was 320x240 - that'll change eventually), and then draw that texture so that it covers the screen. It's refusing to draw to that texture - I can draw directly to the screen (which at least gives me hope the shaders are fine), but that's not very helpful.
My framebuffer code looks a bit like this:
_superDuperFrameBuffer = 0;
_depthRenderBuffer = 0;
glGenFramebuffers(1, &_superDuperFrameBuffer);
//-----
glGenTextures(1, &_screenTexture);
glBindTexture(GL_TEXTURE_2D, _screenTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SCREENWIDTH2, SCREENHEIGHT2, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); //float?
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
if(_currentWidth < SCREENWIDTH2*2 || _currentHeight < SCREENHEIGHT2*2) {
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
else {
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindTexture(GL_TEXTURE_2D, 0);
//-----
glGenRenderbuffers(1, &_depthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, SCREENWIDTH2, SCREENHEIGHT2);
glGenRenderbuffers(1, &_stencilRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _stencilRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, SCREENWIDTH2, SCREENHEIGHT2);
glBindFramebuffer(GL_FRAMEBUFFER, _superDuperFrameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _screenTexture, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthRenderBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, _stencilRenderBuffer);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
__android_log_print(ANDROID_LOG_VERBOSE, "LOG", "FRAMEBUFFER BORK %x", status);
}
Is there something obvious I'm doing wrong here? It's a simple 2D game - nothing fancy, and it all seemingly worked before. I don't know what the best settings are for Android - there's not really anything out there to explain how to do things properly.
From what I'm reading, GL_FRAMEBUFFER_UNSUPPORTED is supposed to turn up when one of the settings disagrees with another, or if the device doesn't support framebuffers (which I know it does, because my last game worked well enough!)
I'm also concerned that the code might not be getting cleaned up properly - once or twice earlier versions have shown signs of life, but after restarting the game, they haven't. I haven't ruled out the idea that there's an outside influence messing with things, but I figured I'd check whether the above code is correct first.
Testing on a 2013 Nexus 7.
Thanks in advance!
As stated in a comment, the framebuffer works without the depth and stencil renderbuffers bound. This is a common problem with GL[ES], because there is no method of querying supported combinations and/or formats. You've just got to try it and see if it works - and if it doesn't, the actual problem is not always clear. From the GLES 2.0 spec:
4.1 Per-Fragment Operations
All OpenGL 2.0 per-fragment operations are supported, except for occlusion queries, logic-ops, alpha test
and color index related operations. Depth and stencil operations are supported, but a selected config is not
required to include a depth or stencil buffer with the caveat that an OpenGL ES 2.0 implementation must
support at least one config with a depth bit depth of 16 or higher and a stencil bit depth of 8 or higher.
However, it does not require any specific depth and/or stencil format or combination thereof. The glCheckFramebufferStatus manpage states:
GL_DEPTH_COMPONENT16 is the only depth-renderable format. GL_STENCIL_INDEX8 is the only stencil-renderable format.
This statement can be (and is) modified by extensions. It also doesn't explicitly state that the two in combination necessarily form a valid combination which will cause glCheckFramebufferStatus to report success, and on some drivers they do not (the Adreno 320 may be one of those).
In fact, many (most) Android GPUs support the GL_OES_packed_depth_stencil extension, which provides the DEPTH24_STENCIL8_OES format. If this extension is available, DEPTH24_STENCIL8_OES should be preferred over the GL_DEPTH_COMPONENT16 plus GL_STENCIL_INDEX8 combination, since it is definitely supported by that driver. A renderbuffer created with this format should be bound to both GL_DEPTH_ATTACHMENT and GL_STENCIL_ATTACHMENT simultaneously. This extension became core in GLES 3.0, so every GLES 3.0 capable device has it.
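A short sketch of that preference, assuming GLES2/gl2ext.h and <string.h> are included and that the framebuffer is already bound (width and height are placeholders):

const char* ext = (const char*)glGetString(GL_EXTENSIONS);
if (ext && strstr(ext, "GL_OES_packed_depth_stencil")) {
    GLuint depthStencilRbo = 0;
    glGenRenderbuffers(1, &depthStencilRbo);
    glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, width, height);
    // The same renderbuffer backs both the depth and the stencil attachment.
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthStencilRbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencilRbo);
}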
Of course, if you don't need the depth/stencil attachments, they are just a waste of memory and you shouldn't use them. If you are porting arbitrary rendering code, though, it's probably safer to keep them bound.
I am trying to read pixels/data from an OpenGL texture which is bound to GL_TEXTURE_EXTERNAL_OES.
The reason for binding the texture to that target is that, in order to get a live camera feed on Android, a SurfaceTexture needs to be created from an OpenGL texture which is bound to GL_TEXTURE_EXTERNAL_OES.
Since android uses OpenGL ES I can't use glGetTexImage() to read the image data.
Therefore I am attaching the texture to an FBO and then reading it using glReadPixels(). This is my code:
GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
//Attach 2D texture to this FBO
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_EXTERNAL_OES, cameraTexture, 0);
status("glFramebufferTexture2D() returned error %d", glGetError());
However I am getting error 1282 (GL_INVALID_OPERATION) for some reason.
I think this might be the problem:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_EXTERNAL_OES, cameraTexture, 0);
You should not attach the cameraTexture to the framebuffer; instead, you should generate a new texture with the GL_TEXTURE_2D target:
glGenTextures(1, mTextureHandle, 0);
glBindTexture(GL_TEXTURE_2D, mTextureHandle[0]);
...
cameraTexture is the one you get from SurfaceTexture, and it is the source used for rendering. This new texture is the one you should render to (which could be used later in the rendering pipeline). Then do this:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, mTextureHandle[0], 0);
cameraTexture is then drawn into the framebuffer's attached texture using a simple shader program; bind cameraTexture while that shader program is in use:
glBindTexture(GL_TEXTURE_EXTERNAL_OES, cameraTexture);
The GL_TEXTURE_EXTERNAL_OES texture target is usually in a YUV color space and glReadPixels() requires the target to be RGB. It probably does not do the color space conversion automatically. However, you could do the conversion in your own fragment shader which renders RGB into another texture and then use glReadPixels() to read that.
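For illustration, that conversion pass can be as simple as sampling the external texture in a fragment shader that writes RGBA into the FBO-attached GL_TEXTURE_2D. This is only a sketch; uTexture, vTexCoord, width, height and pixelBuffer are placeholder names:

// Fragment shader: the driver handles the external (typically YUV) image when
// it is sampled, so a plain texture2D() is enough here. uTexture is bound to
// cameraTexture while drawing.
static const char* kExternalFragmentShader =
    "#extension GL_OES_EGL_image_external : require\n"
    "precision mediump float;\n"
    "uniform samplerExternalOES uTexture;\n"
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(uTexture, vTexCoord);\n"
    "}\n";

// After drawing a full-screen quad with this shader into the FBO:
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);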
texture for YUV420 to RGB conversion in OpenGL ES
Background:
The Android native camera app uses an OpenGL_1.0 context to display the camera preview and gallery pictures. Now I want to add a live filter on the native camera preview.
Adding a live filter to my own camera app's preview is simple: just use OpenGL_2.0 to do the image processing and display. But OpenGL_1.0 doesn't support image processing, and it is what the Android native camera app uses for display. I now want to create a new GL context based on OpenGL_2.0 for image processing and pass the processed image to the other GL context, based on OpenGL_1.0, for display.
Problem:
The problem is how to transfer the processed image from the GL-context-process (based on OpenGL_2.0) to the GL-context-display (based on OpenGL_1.0). I have tried using an FBO: first copy the image pixels out of the texture in GL-context-process and then write them back into another texture in GL-context-display. But copying pixels out of a texture is quite slow, typically taking hundreds of milliseconds. That is far too slow for a camera preview.
Is there a better way to transfer textures from one GL context to another, especially when one GL context is based on OpenGL_2.0 and the other on OpenGL_1.0?
I have found a solution using EGLImage. Just in case someone finds it useful:
Thread #1 that loads a texture:
EGLContext eglContext1 = eglCreateContext(eglDisplay, eglConfig, EGL_NO_CONTEXT, contextAttributes);
EGLSurface eglSurface1 = eglCreatePbufferSurface(eglDisplay, eglConfig, NULL); // pbuffer surface is enough, we're not going to use it anyway
eglMakeCurrent(eglDisplay, eglSurface1, eglSurface1, eglContext1);
int textureId; // texture to be used on thread #2
// ... OpenGL calls skipped: create and specify texture
//(glGenTextures, glBindTexture, glTexImage2D, etc.)
glBindTexture(GL_TEXTURE_2D, 0);
EGLint imageAttributes[] = {
EGL_GL_TEXTURE_LEVEL_KHR, 0, // mip map level to reference
EGL_IMAGE_PRESERVED_KHR, EGL_FALSE,
EGL_NONE
};
EGLImageKHR eglImage = eglCreateImageKHR(eglDisplay, eglContext1, EGL_GL_TEXTURE_2D_KHR, reinterpret_cast<EGLClientBuffer>(textureId), imageAttributes);
Thread #2 that displays 3D scene:
// it will use eglImage created on thread #1 so make sure it has access to it + proper synchronization etc.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// texture parameters are not stored in EGLImage so don't forget to specify them (especially when no additional mip map levels will be used)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);
// texture state is now like if you called glTexImage2D on it
Reference:
http://software.intel.com/en-us/articles/using-opengl-es-to-accelerate-apps-with-legacy-2d-guis
https://groups.google.com/forum/#!topic/android-platform/qZMe9hpWSMU
Since July, I have been developing an Android application to edit video files like .avi and .flv. I use FFMPEG and OpenGL ES 2.0 to implement this application.
Because a filter effect like "Blur" requires too many calculations to execute on the CPU, I decided to use OpenGL ES 2.0 to apply filter effects to a frame of video using the GPU and shaders.
What I'm trying to do is use a shader to apply a filter effect to a frame of video and then get the pixels that are stored in the framebuffer.
So I have to use glReadPixels, the only OpenGL ES 2.0 call that can read pixels back from a framebuffer. But according to many GPU development guides, using glReadPixels is not recommended, and they warn about the potential performance risk. Also, the performance of glReadPixels differs depending on GPU version and vendor. I couldn't commit to glReadPixels, so I tried to find another method for getting the pixels that result from the GPU calculation.
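For context, the straightforward readback (the one those guides warn about) is just a couple of calls. A sketch, using the iFBO/mTexWidth/mTexHeight names from the code below and assuming <stdlib.h> is included:

// glReadPixels stalls the pipeline: it blocks until all rendering into the
// bound framebuffer has finished, which is the main cost on mobile GPUs.
unsigned char* pixels = (unsigned char*)malloc(mTexWidth * mTexHeight * 4);
glBindFramebuffer(GL_FRAMEBUFFER, iFBO);
glReadPixels(0, 0, mTexWidth, mTexHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// ... process pixels, then free(pixels);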
After a few days, I found a hacky method for getting the pixel data by using an Android GraphicBuffer.
Here is the link.
From this link, I tried applying Karthik's method to my code.
The only difference is:
//render method I made.
void renderFrame(){
/* some codes to init */
glBindFramebuffer(GL_FRAMEBUFFER, iFBO);
/* Set the viewport according to the FBO's texture. */
glViewport(0, 0, mTexWidth , mTexHeight);
/* Clear screen on FBO. */
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Different code compared to Karthik's.
contents->setTexture();
contents->draw(mPositionVarIndex, mTextrueCoIndex);
contents->releaseData();
/* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
glBindFramebuffer(GL_FRAMEBUFFER,0);
LOGI("Read Graphic Buffer");
// Just in case the buffer was not created yet
void* vaddr;
// Lock the buffer and retrieve a pointer where we are going to write the data
buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
if (vaddr == NULL) {
LOGE("lock error");
buffer->unlock();
return;
}
/* some codes that use the pixels from GraphicBuffer...*/
}
void setTexture(){
glGenTextures(1, mTexture);
glBindTexture(GL_TEXTURE_2D, mTexture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
}
void releaseData(){
glDeleteTextures(1, mTexture);
glDeleteBuffers(1, mVbo);
}
void draw(int positionIndex, int textureIndex){
mVbo[0] = create_vbo(lengthOfArray*sizeOfFloat*2, NULL, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray*sizeOfFloat, this->vertexData);
glEnableVertexAttribArray(positionIndex);
// checkGlError("glEnableVertexAttribArray");
glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
// checkGlError("glVertexAttribPointer");
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray*sizeOfFloat, lengthOfArray*sizeOfFloat, this->mImgTextureData);
glEnableVertexAttribArray(textureIndex);
glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray*sizeOfFloat));
glBindBuffer(GL_ARRAY_BUFFER, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mTexture[0]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones: a Samsung Galaxy S2, whose renderer is a Mali-400MP, and an LG Optimus G Pro, whose renderer is an Adreno (TM) 320. The Galaxy S2 works well with the above code and Karthik's method, but on the LG smartphone there are some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link,
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
The only errors I could see were related to Genlock, so I cautiously guessed there was some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp and the related *.h files to find the reasons for those errors, but failed.
My questions are:
Is this the right approach to get a filter effect from GPU calculation? If not, how can I get a filter effect like "Blur", which requires so many calculations?
Does Karthik's method not work on Qualcomm GPUs? I want to know why those errors occurred only on the Qualcomm (Adreno) GPU.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
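For example, a hypothetical allocation might look like the following. GraphicBuffer is a private platform class, so the exact constructor and usage constants may differ between Android releases; width and height are placeholders:

// Hypothetical sketch: GRALLOC_USAGE_SW_READ_OFTEN permits locking the buffer
// for CPU reads, alongside the GPU usage bits needed for rendering/sampling.
android::GraphicBuffer* buffer = new android::GraphicBuffer(
        width, height, PIXEL_FORMAT_RGBA_8888,
        GRALLOC_USAGE_SW_READ_OFTEN |
        GRALLOC_USAGE_HW_RENDER |
        GRALLOC_USAGE_HW_TEXTURE);

void* vaddr = NULL;
// Lock for reading on the CPU side once the GPU has finished rendering.
buffer->lock(GRALLOC_USAGE_SW_READ_OFTEN, &vaddr);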
Unrelated but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there's now an example of applying filters with the GPU in Grafika. You can see a screen-recorded demo here.