EGL ERROR: "texture resource is NULL, no level was specified" - android

I get the following EGL error: type = 0x824c, severity = 0x9146, message = "texture resource is NULL, no level was specified"
The error appears when executing glTexSubImage2D for texId1 in the first three lines of code below; there are no errors for texId2. Does anyone have an idea what could cause this?
The error is visible in the debug message callback, and the associated glGetError() is GL_INVALID_OPERATION.
//render loop
glBindTexture(GL_TEXTURE_2D, (GLuint)texId1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, g_textureWidth, g_textureHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixelsdata1);
glBindTexture(GL_TEXTURE_2D, 0); //unbind tex
glBindTexture(GL_TEXTURE_2D, (GLuint)texId2);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, g_textureWidth, g_textureHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixelsdata2);
glBindTexture(GL_TEXTURE_2D, 0); //unbind tex
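For context: glTexSubImage2D can only update a texture level that already has storage allocated (via glTexImage2D or glTexStorage2D), and the update region must fit within that storage; otherwise the call fails with GL_INVALID_OPERATION. A minimal sketch of the usual setup in Android Java (the question's code is native, but the GLES calls are identical; texture size and buffer names here are illustrative):
// one-time setup: allocate mutable storage for mip level 0 (data may be null)
int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, texWidth, texHeight, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);

// per frame: update within the allocated bounds
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, texWidth, texHeight,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);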

Okay, here's what happened, hence picking this as the self-answer to this post. In the Unity code where I created the texture, I also tried to default it to a white color. I did so by assigning m_tex = Texture2D.whiteTexture. The intent was to do something like m_tex.whiteTexture, which is not allowed, so I ended up where I did :-)
Under the hood it looks like Unity uses a glTexStorage2D() call for the Texture2D.whiteTexture I accidentally created, making it immutable. The very fact that I had to resize my original texture should have made it obvious that I had gotten a new texture instead of a color change, but so much for my focus on interpreting GL calls and error codes in my native code. After commenting out the whiteTexture and Resize lines, there are no more GL errors:
m_tex = new Texture2D(1920, 1080, TextureFormat.RGBA32, false);
m_tex.filterMode = FilterMode.Bilinear;
//m_tex = Texture2D.whiteTexture;
//m_tex.Resize(1920, 1080);
m_tex.Apply();
One more note: I continue to use just the glTexSubImage2D call, with the pixel data passed in that same call as in the original post above; I don't use glTexImage2D. I guess Unity uses the texture storage extension (ARB_texture_storage, or its GLES equivalent) on all texture creation, thus making these textures immutable. Here's the note from the extension:
When using this extension, it is no longer possible to supply texture
data using TexImage*. Instead, data can be uploaded using TexSubImage*,
or produced by other means (such as render-to-texture, mipmap generation,
or rendering to a sibling EGLImage).
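To illustrate the distinction in GL terms: a texture allocated with glTexStorage2D has immutable storage, so its contents can still be updated with glTexSubImage2D, but the level itself can no longer be redefined (or resized) with glTexImage2D. A minimal Java sketch against GLES 3.0, with sizes and buffer names illustrative:
int[] tex = new int[1];
GLES30.glGenTextures(1, tex, 0);
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, tex[0]);
// allocate immutable storage: one mip level, RGBA8, fixed at 1920x1080
GLES30.glTexStorage2D(GLES30.GL_TEXTURE_2D, 1, GLES30.GL_RGBA8, 1920, 1080);
// updating the contents is allowed...
GLES30.glTexSubImage2D(GLES30.GL_TEXTURE_2D, 0, 0, 0, 1920, 1080,
        GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, pixelBuffer);
// ...but redefining the level is not; this would raise GL_INVALID_OPERATION:
// GLES30.glTexImage2D(GLES30.GL_TEXTURE_2D, 0, GLES30.GL_RGBA, 1920, 1080, 0,
//         GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, pixelBuffer);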
Thank you @Rabbid76 and @solidpixel. I am sure all the comments will be useful for others. Cheers :-)

Related

How to get 24bit color information by glReadPixels() on android?

To save an image processed by OpenGL ES, I wrote the following code, and it works well.
ByteBuffer bb = ByteBuffer.allocate(mWidth * mHeight * 4);
mGL.glReadPixels(0, 0, mWidth, mHeight, GL_RGBA, GL_UNSIGNED_BYTE, bb);
try {
    TJCompressor tjCompressor = new TJCompressor(bb.array(), 0, 0, mWidth, 0, mHeight, TJ.PF_RGB);
    tjCompressor.setJPEGQuality(100);
    tjCompressor.setSubsamp(TJ.SAMP_444);
    return tjCompressor.compress(0);
} catch (Exception e) {
    e.printStackTrace();
}
After that, to get 24-bit color information without the alpha channel (to save memory and processing time), I changed the first two lines of the code as follows.
ByteBuffer bb = ByteBuffer.allocate(mWidth * mHeight * 3);
mGL.glReadPixels(0, 0, mWidth, mHeight, GL_RGB, GL_UNSIGNED_BYTE, bb);
Additionally, I removed EGL_ALPHA_SIZE from the EGLConfig of mGL (a GL10 instance), and I passed GLES20.GL_RGB as the internal format parameter when calling GLUtils.texImage2D().
However, the result indicates that something is wrong: the resulting image is entirely black, and when I checked the bb buffer after the glReadPixels() call, all of its data was zero. Any advice would be appreciated.
In core GLES2, the only valid format/type combos for glReadPixels are:
- GL_RGBA/GL_UNSIGNED_BYTE
- an optional framebuffer-specific format/type pair, queried via glGetIntegerv with GL_IMPLEMENTATION_COLOR_READ_FORMAT and GL_IMPLEMENTATION_COLOR_READ_TYPE respectively
In GLES2 without extensions, if GL_IMPLEMENTATION_COLOR_READ_FORMAT/GL_IMPLEMENTATION_COLOR_READ_TYPE don't yield something useful, you're stuck with GL_RGBA/GL_UNSIGNED_BYTE, unfortunately.
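A minimal sketch of that query in Android Java (mWidth/mHeight as in the question; whether GL_RGB/GL_UNSIGNED_BYTE comes back is entirely up to the driver):
int[] readFormat = new int[1];
int[] readType = new int[1];
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_FORMAT, readFormat, 0);
GLES20.glGetIntegerv(GLES20.GL_IMPLEMENTATION_COLOR_READ_TYPE, readType, 0);
if (readFormat[0] == GLES20.GL_RGB && readType[0] == GLES20.GL_UNSIGNED_BYTE) {
    // tightly-packed rows are 3 bytes per pixel, so relax the pack alignment
    GLES20.glPixelStorei(GLES20.GL_PACK_ALIGNMENT, 1);
    ByteBuffer bb = ByteBuffer.allocateDirect(mWidth * mHeight * 3);
    GLES20.glReadPixels(0, 0, mWidth, mHeight, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, bb);
}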
With GLES3, you can glReadPixels into a bound GL_PIXEL_PACK_BUFFER and glMapBufferRange that, though again you're limited by format.
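A minimal sketch of that GLES3 path in Java (buffer names illustrative; the readback here is still GL_RGBA):
int[] pbo = new int[1];
GLES30.glGenBuffers(1, pbo, 0);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, pbo[0]);
GLES30.glBufferData(GLES30.GL_PIXEL_PACK_BUFFER, mWidth * mHeight * 4, null, GLES30.GL_STREAM_READ);
// with a pack buffer bound, the last argument is a byte offset into the buffer
GLES30.glReadPixels(0, 0, mWidth, mHeight, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, 0);
// ideally map a frame or two later so the GPU has finished writing
ByteBuffer pixels = (ByteBuffer) GLES30.glMapBufferRange(GLES30.GL_PIXEL_PACK_BUFFER, 0,
        mWidth * mHeight * 4, GLES30.GL_MAP_READ_BIT);
// ... use pixels ...
GLES30.glUnmapBuffer(GLES30.GL_PIXEL_PACK_BUFFER);
GLES30.glBindBuffer(GLES30.GL_PIXEL_PACK_BUFFER, 0);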
I'll note that drivers rarely implement tightly-packed 24-bit rgb8 formats natively, instead supporting only the better-aligned formats like rgba8, rgb565, and rgba4. A renderable format exposed as "rgb8" is potentially just rgbx8 behind the scenes.
Highly driver-dependent, but if you don't need to keep the contents of the framebuffer around, you might be able to win back some memory bandwidth with EXT_discard_framebuffer (or glInvalidateFramebuffer in GLES3).
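For instance, a one-call sketch of the GLES3 variant, issued once you're done reading, to tell the driver the FBO's color contents need not be preserved (attachment list illustrative):
int[] attachments = { GLES30.GL_COLOR_ATTACHMENT0 };
// hint that the color attachment's contents may be discarded from here on
GLES30.glInvalidateFramebuffer(GLES30.GL_FRAMEBUFFER, 1, attachments, 0);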
You might also look into EGL_KHR_lock_surface3.
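Separately, if the goal is just a 24-bit JPEG, you can sidestep the format restriction entirely: keep the GL_RGBA readback and hand it to TurboJPEG as a 4-component format, which drops the alpha during compression. A sketch against the TJCompressor API shown in the question (assuming libjpeg-turbo's TJ.PF_RGBA):
ByteBuffer bb = ByteBuffer.allocate(mWidth * mHeight * 4);
mGL.glReadPixels(0, 0, mWidth, mHeight, GL_RGBA, GL_UNSIGNED_BYTE, bb);
try {
    // pitch is mWidth * 4 bytes per RGBA row; alpha is ignored when compressing
    TJCompressor tjCompressor = new TJCompressor(bb.array(), 0, 0, mWidth, mWidth * 4, mHeight, TJ.PF_RGBA);
    tjCompressor.setJPEGQuality(100);
    tjCompressor.setSubsamp(TJ.SAMP_444);
    return tjCompressor.compress(0);
} catch (Exception e) {
    e.printStackTrace();
}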

android - Can't read pixels from GraphicBuffer on Adreno GPUs with Karthik's method (hacky alternatives to glReadPixels)

Since July, I have been developing an Android application to edit video files such as .avi and .flv. I use FFMPEG and OpenGL ES 2.0 to implement this application.
Because a filter effect like "Blur" requires too many calculations to execute on the CPU, I decided to use OpenGL ES 2.0 to apply filter effects to video frames using the GPU and shaders.
What I am trying to do is use a shader to apply a filter effect to a frame of video and then read back the pixels stored in the framebuffer.
So I have to use glReadPixels, the only OpenGL ES 2.0 method that can be used to get pixels from a framebuffer. But according to many GPU development guides, using glReadPixels is not recommended, and the guides warn of its potential risks. Also, the performance of glReadPixels differs depending on GPU version and vendor. I could not firmly commit to using glReadPixels, so I tried to find another method for reading back the result of the GPU calculation.
After a few days, I found a hacky method for getting the pixel data by using an Android GraphicBuffer.
Here is the link.
From this link, I applied Karthik's method to my code.
The only difference is:
// the render method I made
void renderFrame() {
    /* some code to init */
    glBindFramebuffer(GL_FRAMEBUFFER, iFBO);
    /* Set the viewport according to the FBO's texture. */
    glViewport(0, 0, mTexWidth, mTexHeight);
    /* Clear screen on FBO. */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Different code compared to Karthik's.
    contents->setTexture();
    contents->draw(mPositionVarIndex, mTextrueCoIndex);
    contents->releaseData();
    /* Unbind the FBO so subsequent drawing calls go to the EGL window surface. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);

    LOGI("Read Graphic Buffer");
    // Just in case the buffer was not created yet
    void* vaddr;
    // Lock the buffer and retrieve a pointer where we are going to write the data
    buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
    if (vaddr == NULL) {
        LOGE("lock error");
        buffer->unlock();
        return;
    }
    /* some code that uses the pixels from the GraphicBuffer... */
}
void setTexture() {
    glGenTextures(1, mTexture);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, mWidth, mHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, mData);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glGenerateMipmap(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, 0);
}

void releaseData() {
    glDeleteTextures(1, mTexture);
    glDeleteBuffers(1, mVbo);
}

void draw(int positionIndex, int textureIndex) {
    mVbo[0] = create_vbo(lengthOfArray * sizeOfFloat * 2, NULL, GL_STATIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, lengthOfArray * sizeOfFloat, this->vertexData);
    glEnableVertexAttribArray(positionIndex);
    // checkGlError("glEnableVertexAttribArray");
    glVertexAttribPointer(positionIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    // checkGlError("glVertexAttribPointer");
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, mVbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, lengthOfArray * sizeOfFloat, lengthOfArray * sizeOfFloat, this->mImgTextureData);
    glEnableVertexAttribArray(textureIndex);
    glVertexAttribPointer(textureIndex, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(lengthOfArray * sizeOfFloat));
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, mTexture[0]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 6);
    checkGlError("glDrawArrays");
}
I use a texture and render a frame to fill the buffer. I have two test phones: a Samsung Galaxy S2, whose renderer is the Mali-400MP, and an LG Optimus G Pro, whose renderer is the Adreno (TM) 320. The Galaxy S2 works well with the above code and Karthik's method. But on the LG smartphone, there are some problems:
E/libgenlock(17491): perform_lock_unlock_operation: GENLOCK_IOC_DREADLOCK failed (lockType0x1,err=Connection timed out fd=47)
E/gralloc(17491): gralloc_lock: genlock_lock_buffer (lockType=0x2) failed
W/GraphicBufferMapper(17491): lock(...) failed -22 (Invalid argument)
According to this link,
On Qualcomm hardware pre-Android-4.2, a Qualcomm-specific mechanism,
named Genlock, is used.
Since the only errors I could see were related to Genlock, I cautiously guessed at some problem between GraphicBuffer and the Qualcomm GPU. After that, I searched and read the code of Gralloc.cpp, GraphicBufferMapper.cpp, GraphicBuffer.cpp, and the related headers to find the cause of those errors, but failed.
My questions are:
1. Is this the right approach for getting the result of a GPU calculation? If not, how can I achieve a filter effect like "Blur", which requires so many calculations?
2. Does Karthik's method not work on Qualcomm GPUs? I want to know why those errors occurred only on the Qualcomm GPU, Adreno.
Make sure your GraphicBuffer allocation has GRALLOC_USAGE_SW_READ_OFTEN specified. Without it you may not be able to lock the buffer from code running on the CPU.
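For comparison, here is the same idea expressed with the public Java HardwareBuffer API (API 26+), which exposes usage flags equivalent to the gralloc ones; this is a sketch of the modern replacement, not the native GraphicBuffer code the question uses:
import android.hardware.HardwareBuffer;

// request a buffer the GPU can render into and the CPU can map for reading often
HardwareBuffer buffer = HardwareBuffer.create(
        width, height,
        HardwareBuffer.RGBA_8888,
        1, // layer count
        HardwareBuffer.USAGE_GPU_COLOR_OUTPUT | HardwareBuffer.USAGE_CPU_READ_OFTEN);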
Unrelated but possibly suggestive of a better approach: see the CameraToMpegTest example, which does a trivial edit to live camera input using a GLES 2.0 shader.
Update: there's now an example of applying filters with the GPU in Grafika. You can see a screen-recorded demo here.

Internal Texture Format

On Android, using OpenGL ES 2.0, I am trying to run performance tests with different internal texture formats.
Initially I have a lot of RGBA textures (PNG) which I want to load and store internally in a different format with OpenGL (for example RGB or LUMINANCE). I load my textures using glTexImage2D like this:
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),resourceId);
...
int size = bitmap.getRowBytes() * bitmap.getHeight();
ByteBuffer b = ByteBuffer.allocate(size);
bitmap.copyPixelsToBuffer(b);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, b);
This works fine; however, if I change the first GLES20.GL_RGBA (the internalFormat parameter) to anything else (GLES20.GL_RGB or GLES20.GL_LUMINANCE), my texture appears all black. Changing the second GLES20.GL_RGBA to the same value will display something, but obviously not correctly, since the original data is RGBA.
I thought maybe it has something to with the shader code - that maybe texture2D(..) returns a different value because the internal format of the texture is different. My shader code is simply:
gl_FragColor = texture2D(texture, fragment_texture_coordinate);
I tried changing this around too, but no luck yet. So I thought maybe glTexImage2D does not work the way I think it does (I am not an expert in this area whatsoever).
What am I doing wrong?
Edit:
I overlooked this little detail in the texImage2D documentation. It states that:
internalformat must match format. No conversion between formats is supported during texture image processing. type may be used as a hint to specify how much precision is desired, but a GL implementation may choose to store the texture array at any internal resolution it chooses.
What I gather from this is that if you want to store your textures differently from their original format, you'll have to convert them yourself.
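A minimal sketch of such a conversion on the CPU, repacking the Bitmap's RGBA bytes into a tightly-packed single-channel buffer before uploading as LUMINANCE (assumes an ARGB_8888 Bitmap with no row padding; the luma weights are the usual integer Rec. 601 approximation):
ByteBuffer rgba = ByteBuffer.allocate(bitmap.getRowBytes() * bitmap.getHeight());
bitmap.copyPixelsToBuffer(rgba);
rgba.rewind();

int pixelCount = bitmap.getWidth() * bitmap.getHeight();
ByteBuffer lum = ByteBuffer.allocate(pixelCount);
for (int i = 0; i < pixelCount; i++) {
    int r = rgba.get() & 0xFF;
    int g = rgba.get() & 0xFF;
    int b = rgba.get() & 0xFF;
    rgba.get(); // skip alpha
    lum.put((byte) ((r * 77 + g * 151 + b * 28) >> 8));
}
lum.rewind();

GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1); // rows are no longer 4-byte aligned
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, bitmap.getWidth(),
        bitmap.getHeight(), 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, lum);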
Your fragment shader must be written to agree with the format you are giving to glTexImage2D(). For GL_RGB, it should force the alpha to 1.0, like this:
vec3 Color_RGB = texture2D(sampler2d, texCoordinate).rgb;
gl_FragColor = vec4(Color_RGB, 1.0);
But, for GL_RGBA, it should look like this:
vec4 Color_RGBA = texture2D(sampler2d, texCoordinate);
gl_FragColor = Color_RGBA;
And, as has been discussed, you can only use the Android Bitmap class for textures if your PNG files have no transparency. This article explains that:
http://software.intel.com/en-us/articles/porting-opengl-games-to-android-on-intel-atom-processors-part-1
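For reference, the GLUtils helper mentioned above derives the format and type from the Bitmap itself; note that Android decodes PNGs with premultiplied alpha, which is one reason transparent textures uploaded this way can look wrong. A minimal sketch (textureId and resourceId are illustrative):
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
// GLUtils picks an internalformat/format/type matching the Bitmap's config
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();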

Android OpenGL ES 2.0 -- glReadPixels() and glTexImage2D() drawing a black texture?

I'm working on some Android code for caching and redrawing a framebuffer object's color buffer between the loss and recreation of EGL contexts. Development is primarily happening on a Xoom tablet running Honeycomb. Anyway, what I'm trying to do is store the result of calling glReadPixels() on the FBO in a direct ByteBuffer, then use that buffer with glTexImage2D() and draw it back into the (now cleared) framebuffer. All of this seems to work fine: the ByteBuffer contains the right values ([-1, 0, 0, -1] etc. for a pixel, given Java's inability to understand unsigned bytes), no GL errors seem to be thrown, and the quad is drawn to the right part of the screen (currently the top-left quarter of the framebuffer, for testing purposes).
However, no matter what I try, glTexImage2D() always outputs a plain black texture. I've had some issues with this before: when displaying Bitmaps, I eventually gave up trying to use the basic GLES20.glTexImage2D() with Buffers and skipped to using GLUtils.texImage2D(), which processes the Bitmap for you. Unfortunately, that's less of an option here (I did actually try converting the ByteBuffer to a Bitmap so I could use GLUtils, without much success), so I've really run out of ideas.
Can anyone think of anything that could be causing glTexImage2D() to not correctly process a perfectly good ByteBuffer? Any and all suggestions would be welcome.
ByteBuffer pixelBuffer;

void storePixels() {
    try {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbuf);
        pixelBuffer = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        gfx.checkGlError("store Pixels");
    } catch (OutOfMemoryError e) {
        pixelBuffer = null;
    }
}

void redrawPixels() {
    GLES20.glBindFramebuffer(GL20.GL_FRAMEBUFFER, fbuf);
    int[] texId = new int[1];
    GLES20.glGenTextures(1, texId, 0);
    int bufferTex = texId[0];
    GLES20.glBindTexture(GL20.GL_TEXTURE_2D, bufferTex);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S,
            repeatX ? GL20.GL_REPEAT : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T,
            repeatY ? GL20.GL_REPEAT : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_RGBA, width, height, 0,
            GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
    gfx.drawTexture(bufferTex, width, height, Transform.IDENTITY, width / 2, height / 2, false, false, 1);
    GLES20.glDeleteTextures(1, IntBuffer.wrap(new int[] { bufferTex }));
    pixelBuffer = null;
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}
gfx.drawTexture() builds a quad and draws it to the currently bound framebuffer, by the way. That code has been well-tested in other parts of my project — it shouldn't be the issue here.
For those of you playing along at home, this code is in fact totally valid. Remember when I swore blind that "gfx.drawTexture() has been well-tested and shouldn't be the issue here"? Yeah, it was totally the issue there. I was buffering vertices to draw without actually flushing them through a glDrawElements() call. Whoops.

How to convert an AVFrame to a texture used by glTexImage2D?

As you know, AVFrame has two properties: pFrame->data and pFrame->linesize. After I read a frame from the video /sdcard/test.mp4 (Android platform), I convert it to an RGB AVFrame like this:
img_convert_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,
                                 pCodecCtx->pix_fmt,
                                 target_width, target_height, PIX_FMT_RGB24, SWS_BICUBIC,
                                 NULL, NULL, NULL);
if (img_convert_ctx == NULL) {
    LOGE("could not initialize conversion context\n");
    return;
}
sws_scale(img_convert_ctx, (const uint8_t* const*)pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
I get pFrameRGB after the conversion. Now I need to use it as a texture in OpenGL via glTexImage2D:
// Create The Texture
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 256, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
I don't know whether data here should be pFrameRGB->data or not; if not, how do I convert it to the appropriate format for glTexImage2D, so I can display video in OpenGL using AVFrame and glTexImage2D? Any help is much appreciated.
The actual pixel data for pFrameRGB is in pFrameRGB->data[0]. Also, in your glTexImage2D call, use GL_RGBA instead of GL_RGB; that's what I did to get it working, anyway. Assuming you have your texture and vertex coordinates set up correctly, that should get it working. I can post some code examples if that doesn't help.
Two things: (1) are target_width and target_height powers of two? If not, you'll get a blank screen; you need to scale the picture to a power of two. (2) Where it says "data" in the glTexImage2D call, you need to use pFrameRGB->data[0], as Kieran pointed out.
One more thing: you don't need to use RGBA (unless it's faster; I haven't tested that). If you do, change the ffmpeg target format to PIX_FMT_RGBA and then change both instances of GL_RGB to GL_RGBA.
Other than that, I don't see anything wrong with your code. I have almost exactly the same code and it runs just fine.
