I am currently working on a platform in the native Android framework where I use GraphicBuffer to allocate memory and then create an EGLImage from it. This is then used as a texture in OpenGL which I render to (with a simple fullscreen quad).
The problem is that when I read the rendered pixel data back from the GraphicBuffer, I expect it to be in a linear RGBA format in memory, but the result is a texture that contains three parallel, smaller, partially overlapping clones of the image. Maybe that description doesn't say much, but the point is that the actual pixel data makes sense while the memory layout seems to be something other than linear RGBA. I assume this is because the graphics driver stores the pixels in an internal format other than linear RGBA.
If I render to a standard OpenGL texture and read with glReadPixels everything works fine, so I assume the problem lies with my custom memory allocation with GraphicBuffer.
If the reason is the drivers' internal memory layout, is there any way of forcing the layout to linear RGBA? I have tried most of the usage flags supplied to the GraphicBuffer constructor with no success. If not, is there a way to output the data differently in the shader to "cancel out" the memory layout?
I am building Android 4.4.3 for Nexus 5.
//Allocate graphicbuffer
outputBuffer = new GraphicBuffer(outputFormat.width, outputFormat.height, outputFormat.bufferFormat,
                                 GraphicBuffer::USAGE_SW_READ_OFTEN |
                                 GraphicBuffer::USAGE_HW_RENDER |
                                 GraphicBuffer::USAGE_HW_TEXTURE);
/* ... */
//Create EGLImage from graphicbuffer
EGLint eglImageAttributes[] = {EGL_WIDTH, outputFormat.width, EGL_HEIGHT, outputFormat.height, EGL_MATCH_FORMAT_KHR,
                               outputFormat.eglFormat, EGL_IMAGE_PRESERVED_KHR, EGL_FALSE, EGL_NONE};
EGLClientBuffer nativeBuffer = outputBuffer->getNativeBuffer();
eglImage = _eglCreateImageKHR(display, EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, nativeBuffer, eglImageAttributes);
/* ... */
//Create output texture
glGenTextures(1, &outputTexture);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
_glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);
/* ... */
//Create target fbo
glGenFramebuffers(1, &targetFBO);
glBindFramebuffer(GL_FRAMEBUFFER, targetFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* ... */
//Read from graphicbuffer
const Rect lockBoundsOutput(quadRenderer->outputFormat.width, quadRenderer->outputFormat.height);
status_t statusgb = quadRenderer->getOutputBuffer()->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &result);
I managed to find the answer myself, and I was wrong all along. The simple reason was that although I was rendering a 480x1080 texture, the allocated buffer was padded to a 640-pixel row stride, so I just needed to strip the padding from the end of each row and the output texture made sense.
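To make that concrete, here is a minimal sketch of stripping the row padding, assuming a 4-byte-per-pixel RGBA buffer and using the stride reported by the GraphicBuffer; `result` is the pointer filled in by lock() above, and the copy loop itself is illustrative rather than my exact code:

```cpp
// Illustrative only: copy the meaningful part of each padded row into a tight buffer.
// Needs <vector> and <cstring>. getStride() returns the row stride in pixels.
const uint32_t bpp    = 4;                                              // RGBA8888
const uint32_t stride = quadRenderer->getOutputBuffer()->getStride();   // e.g. 640
const uint32_t width  = quadRenderer->outputFormat.width;               // e.g. 480
const uint32_t height = quadRenderer->outputFormat.height;              // e.g. 1080

const uint8_t* src = static_cast<const uint8_t*>(result);
std::vector<uint8_t> packed(width * height * bpp);
for (uint32_t row = 0; row < height; ++row) {
    memcpy(&packed[row * width * bpp], src + row * stride * bpp, width * bpp);
}
quadRenderer->getOutputBuffer()->unlock();
```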
I'm working on displaying YUV pictures with OpenGL ES 3.0 on Android. I convert YUV pixels to RGB in a fragment shader. First, I need to pass the YUV pixel data to OpenGL as a texture.
When the YUV data has 8-bit depth, the code below works:
GLenum glError;
GLuint tex_y;
glGenTextures(1, &tex_y);
glBindTexture(GL_TEXTURE_2D, tex_y);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// pix_y is y component data.
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, y_width, y_height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, pix_y);
glGenerateMipmap(GL_TEXTURE_2D);
glError = glGetError();
if (glError != GL_NO_ERROR) {
    LOGE(TAG, "y, glError: 0x%x", glError);
}
However, there are some YUV formats with more depth, like YUV420P10LE. I don't want to lose the benefit of the extra depth, so I convert YUV data that has more than 8-bit depth to 16 bit by shifting it (for example, for yuv420p10le: y_new = y_old << 6). Now I want to create a 16-bit-depth texture, but it always fails with GL_INVALID_OPERATION. Below is the code to create a 16-bit texture:
// rest part is the same with 8 bit texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, y_width, y_height, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, pix_y);
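(For reference, the 10-bit-to-16-bit conversion mentioned above is just a per-sample shift. A minimal sketch, assuming the decoder hands me the Y plane as little-endian 16-bit words holding 10 significant bits; the function and variable names here are placeholders, not my actual code:)

```cpp
#include <cstddef>
#include <cstdint>

// Widen 10-bit samples stored in uint16_t to the full 16-bit range (y_new = y_old << 6).
// count is the number of Y samples, i.e. y_width * y_height; src and dst may be the same buffer.
void widen10to16(const uint16_t* src, uint16_t* dst, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        dst[i] = static_cast<uint16_t>(src[i] << 6);
    }
}
```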
I've tried many of the format combinations listed in https://registry.khronos.org/OpenGL-Refpages/es3.0/html/glTexImage2D.xhtml, but none of them succeed.
By the way, I also tested on macOS with OpenGL 3.3 and it succeeded; I just need to pass the data as one channel of an RGB texture.
// Code on macOS OpenGL 3.3. The data format depends on the Y depth: GL_UNSIGNED_BYTE or
// GL_UNSIGNED_SHORT. Using this config, I can access the data in the RED channel of the texture,
// normalized to [0.0f, 1.0f]. However, this config doesn't work on OpenGL ES 3.0.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, y_width, y_height, 0, GL_RED, dataFormat, y);
I want to create a framebuffer in OpenGL ES 2 with a color texture and depth and stencil renderbuffers. However, OpenGL ES doesn't seem to have GL_DEPTH24_STENCIL8 or GL_DEPTH_STENCIL_ATTACHMENT. Using two separate renderbuffers gives the error "Stencil and z buffer surfaces have different formats! Returning GL_FRAMEBUFFER_UNSUPPORTED!"
Is this not possible in OpenGL ES?
My FBO creation code:
private int width, height;
private int framebufferID,
            colorTextureID,
            depthRenderBufferID,
            stencilRenderBufferID;

public FBO(int w, int h) {
    width = w;
    height = h;
    int[] array = new int[1];

    //Create the FrameBuffer and bind it
    glGenFramebuffers(1, array, 0);
    framebufferID = array[0];
    glBindFramebuffer(GL_FRAMEBUFFER, framebufferID);

    //Create the texture for color, so it can be rendered to the screen
    glGenTextures(1, array, 0);
    colorTextureID = array[0];
    glBindTexture(GL_TEXTURE_2D, colorTextureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    // attach the texture to the framebuffer
    glFramebufferTexture2D(GL_FRAMEBUFFER,       // must be GL_FRAMEBUFFER
                           GL_COLOR_ATTACHMENT0, // color attachment point
                           GL_TEXTURE_2D,        // texture type
                           colorTextureID,       // texture ID
                           0);                   // mipmap level
    glBindTexture(GL_TEXTURE_2D, 0);

    // is the color texture okay? hang in there buddy
    FBOUtils.checkCompleteness(framebufferID);

    //Create the depth RenderBuffer
    glGenRenderbuffers(1, array, 0);
    depthRenderBufferID = array[0];
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBufferID);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);

    //Create stencil RenderBuffer
    glGenRenderbuffers(1, array, 0);
    stencilRenderBufferID = array[0];
    glBindRenderbuffer(GL_RENDERBUFFER, stencilRenderBufferID);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);

    // bind renderbuffers to framebuffer object
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBufferID);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilRenderBufferID);

    // make sure nothing screwy happened
    FBOUtils.checkCompleteness(framebufferID);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
Packed depth/stencil surfaces are not a standard part of OpenGL ES 2.0, but are added via this extension:
https://www.khronos.org/registry/gles/extensions/OES/OES_packed_depth_stencil.txt
If the extension is supported on your platform (it usually is), the token names from OpenGL will generally work, but note that most have an _OES postfix because it is an OES extension, e.g. the internal format token is GL_DEPTH24_STENCIL8_OES.
The extension doesn't define a single combined attachment point such as GL_DEPTH_STENCIL_ATTACHMENT (that is added in OpenGL ES 3.0), but you can attach the same renderbuffer to one or both of the single attachment points. Note that it is not allowed to attach two different depth or stencil surfaces to the depth and stencil attachment points if you have attached a packed depth/stencil surface to the other (i.e. if you attach a packed depth/stencil to one attachment point, the other can either be attached to the same packed surface or unused).
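If the extension is present, the setup is a small variation on the code in the question; a minimal sketch with the C API (the Java GLES20 bindings are equivalent), where GL_DEPTH24_STENCIL8_OES is the token from the extension:

```cpp
// Assumes GL_OES_packed_depth_stencil is reported in glGetString(GL_EXTENSIONS).
GLuint depthStencilRB;
glGenRenderbuffers(1, &depthStencilRB);
glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRB);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, width, height);

// Attach the same renderbuffer to both single attachment points.
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,   GL_RENDERBUFFER, depthStencilRB);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencilRB);
```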
In short, this is implementation dependent. What you try in the posted code with using a separate renderbuffer for depth and stencil is basically legal in ES 2.0. But then there's this paragraph in the spec:
[..] some implementations may not support rendering to particular combinations of internal formats. If the combination of formats of the images attached to a framebuffer object are not supported by the implementation, then the framebuffer is not complete under the clause labeled FRAMEBUFFER_UNSUPPORTED.
That's exactly the GL_FRAMEBUFFER_UNSUPPORTED error you are seeing. Your implementation apparently does not like the combination of depth and stencil buffer, and is at liberty to refuse supporting it while still being spec compliant.
There's one other aspect that makes your code device dependent. The combination of format and type you're using for your texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
basically corresponds to an RGBA8 internal format (even though that naming is not used in the ES 2.0 spec). In base ES 2.0, this is not a color-renderable format. If you want something that is supported across the board, you'll have to use GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_4_4_4_4, or GL_UNSIGNED_SHORT_5_5_5_1 for the type. Well, theoretically a device can refuse to support almost any format. The only strict requirement is that it supports at least one format combination.
Rendering to RGBA8 formats is available on many devices as part of the OES_rgb8_rgba8 extension.
As already pointed out in another answer, combined depth/stencil formats are not part of base ES 2.0, and only available with the OES_packed_depth_stencil extension.
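If you want a color attachment that unextended ES 2.0 is guaranteed to accept, here is a sketch using RGB565 for the texture from the question (C API shown; the Java bindings are equivalent). Whether a given depth/stencil combination is then accepted on top of it is still up to the implementation:

```cpp
// RGB565 is a color-renderable format in base OpenGL ES 2.0.
glBindTexture(GL_TEXTURE_2D, colorTextureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_SHORT_5_6_5, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTextureID, 0);
```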
In my Android 4.3 application, I would like to load a texture from a local PNG onto a TextureView. I do not know OpenGL, and I am using the code from the GLTextureActivity hardware acceleration test. I am also pasting the texture-loading part here:
private int loadTexture(int resource) {
    int[] textures = new int[1];

    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, textures, 0);
    checkGlError();

    int texture = textures[0];
    glBindTexture(GL_TEXTURE_2D, texture);
    checkGlError();

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    Bitmap bitmap = BitmapFactory.decodeResource(mResources, resource);
    GLUtils.texImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bitmap, GL_UNSIGNED_BYTE, 0);
    checkGlError();

    bitmap.recycle();
    return texture;
}
I am running the code on two devices, a Nexus 7 and a Galaxy Nexus phone, and I notice a huge speed difference between the two. On the Nexus 7 the drawing part takes about 170 ms, while on the Galaxy Nexus it takes 459 ms. The most time-consuming operation is the loading of the texture, and especially the texImage2D call. I have read that there are devices with chips that are slow on texImage2D/texSubImage2D calls, but how can someone tell which those devices are, and how can I avoid using those functions while achieving the same result?
Thank you in advance.
EDIT: the glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) call also seems to be significantly slower on the phone. Why is this happening? How could I avoid it?
Are you loading textures on each frame redraw? That's just not right: you should load textures only once, before the main rendering loop. You won't get instant texture loading even on the fastest possible device, because you load the resource, decode a bitmap from it, and then upload it to the GPU. That takes some time.
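As a sketch of the structure being suggested (illustrative names only, shown with the C API rather than the Java bindings): do the decode and upload once during setup, and only bind the texture when drawing:

```cpp
// Hypothetical init/draw split; loadTexture() stands in for the decode-and-upload
// routine from the question (BitmapFactory.decodeResource + texImage2D).
static GLuint gTexture = 0;

void onSurfaceCreated() {
    gTexture = loadTexture(PICTURE_RESOURCE_ID);  // runs once: decode bitmap + upload to GPU
}

void onDrawFrame() {
    glBindTexture(GL_TEXTURE_2D, gTexture);       // per frame: bind only, no re-upload
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
```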
Background:
The Android native camera app uses an OpenGL ES 1.0 context to display the camera preview and gallery pictures. Now I want to add a live filter to the native camera preview.
Adding a live filter to my own camera app's preview is simple: just use OpenGL ES 2.0 to do the image processing and display. But OpenGL ES 1.0 doesn't support that kind of image processing, and it is what the Android native camera app uses for display. *I now want to create a new GL context based on OpenGL ES 2.0 for image processing and pass the processed image to the other GL context, based on OpenGL ES 1.0, for display.*
Problem:
The problem is how to transfer the processed image from the GL context used for processing (based on OpenGL ES 2.0) to the GL context used for display (based on OpenGL ES 1.0). I have tried using an FBO: first copy the image pixels out of the texture in the processing context and then load them back into another texture in the display context. But copying pixels out of a texture is quite slow, typically taking hundreds of milliseconds. That is far too slow for a camera preview.
*Is there a better way to transfer textures from one GL context to another, especially when one context is based on OpenGL ES 2.0 while the other is based on OpenGL ES 1.0?*
I have found a solution using EGLImage. Just in case someone finds it useful:
Thread #1 that loads a texture:
EGLContext eglContext1 = eglCreateContext(eglDisplay, eglConfig, EGL_NO_CONTEXT, contextAttributes);
EGLSurface eglSurface1 = eglCreatePbufferSurface(eglDisplay, eglConfig, NULL); // pbuffer surface is enough, we're not going to use it anyway
eglMakeCurrent(eglDisplay, eglSurface1, eglSurface1, eglContext1);
int textureId; // texture to be used on thread #2
// ... OpenGL calls skipped: create and specify texture
//(glGenTextures, glBindTexture, glTexImage2D, etc.)
glBindTexture(GL_TEXTURE_2D, 0);
EGLint imageAttributes[] = {
EGL_GL_TEXTURE_LEVEL_KHR, 0, // mip map level to reference
EGL_IMAGE_PRESERVED_KHR, EGL_FALSE,
EGL_NONE
};
EGLImageKHR eglImage = eglCreateImageKHR(eglDisplay, eglContext1, EGL_GL_TEXTURE_2D_KHR, reinterpret_cast<EGLClientBuffer>(textureId), imageAttributes);
Thread #2 that displays 3D scene:
// it will use eglImage created on thread #1 so make sure it has access to it + proper synchronization etc.
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
// texture parameters are not stored in EGLImage so don't forget to specify them (especially when no additional mip map levels will be used)
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImage);
// texture state is now like if you called glTexImage2D on it
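One caveat the comments above only hint at: thread #1 has to make sure its texture writes are actually submitted before thread #2 samples the EGLImage. A minimal sketch, assuming the EGL_KHR_fence_sync extension is available (otherwise a blunt glFinish() on thread #1 also works):

```cpp
// Thread #1, after specifying the texture and creating the EGLImage:
EGLSyncKHR sync = eglCreateSyncKHR(eglDisplay, EGL_SYNC_FENCE_KHR, NULL);
glFlush();  // ensure the fence and preceding commands are submitted

// Thread #2, before sampling the texture bound to the EGLImage:
eglClientWaitSyncKHR(eglDisplay, sync, EGL_SYNC_FLUSH_COMMANDS_BIT_KHR, EGL_FOREVER_KHR);
eglDestroySyncKHR(eglDisplay, sync);
```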
Reference:
http://software.intel.com/en-us/articles/using-opengl-es-to-accelerate-apps-with-legacy-2d-guis
https://groups.google.com/forum/#!topic/android-platform/qZMe9hpWSMU
I'm trying to render AV frames, grabbed and converted from an MPEG-4 video using GStreamer, to an Android (2.2) OpenGL texture. I've pretty much exhausted Google and not found an answer.
Basically, I am using GStreamer's uridecodebin to decode a frame, converting the frame to RGB, and then using glTexSubImage2D() to create an OpenGL texture from it, but I can't seem to get anything to work. The texture does get colored when I receive the decoded RGB data from GStreamer.
The video size is 320x256 and my texture size is 512x256, and I am using glDrawTexiOES(0, 0, videowidth, videoheight). I am not getting any OpenGL-related errors, but the texture is blank (just frames of different colors), though the audio works fine.
Here is my code (native onDraw):
if (theGStPixelBuffer != 0) {
    glBindTexture(GL_TEXTURE_2D, s_texture);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, theTexWidth,
                    theTexHeight, GL_RGB, GL_UNSIGNED_BYTE,
                    GST_BUFFER_DATA(theGStPixelBuffer));
    check_gl_error("glTexSubImage2D");
    theGStPixelBuffer = 0;
}
glDrawTexiOES(0, 0, 0, theTexWidth, theTexHeight);
check_gl_error("glDrawTexiOES");
I have encountered the same problem; you can get the bitmap and use the Matrix class to resize the bitmap.