Problem: Passing a ByteBuffer that wraps a byte[] of image data into GLES20.glTexImage2D(...) works for me if and only if the ByteBuffer has no offset. If an offset is set, it crashes unhelpfully with something like:
Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x61616162636575 in tid 32016 (GLThread 4912), pid 31806
Background: I have a single byte[] of YUV (I420) data for a 720p image - and I need to read this out as 3 separate ByteBuffers to pass to an OpenGL shader which has 3 different sampler2D channels (1 channel per buffer).
Ideally, since all the information is there in memory, I would create three ByteBuffers wrapping different parts of the byte[] like this:
ByteBuffer yBuffer = ByteBuffer.wrap(rawYUVData, 0, yBufferSize);
ByteBuffer uBuffer = ByteBuffer.wrap(rawYUVData, yBufferSize, uBufferSize);
ByteBuffer vBuffer = ByteBuffer.wrap(rawYUVData, yBufferSize+uBufferSize, uBufferSize);
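For reference, the plane sizes follow the standard I420 layout; for 720p they would be roughly this (a sketch with hypothetical dimensions, names as above):
int imgWidth = 1280, imgHeight = 720;                // hypothetical 720p frame size
int yBufferSize = imgWidth * imgHeight;              // full-resolution Y plane
int uWidth = imgWidth / 2, uHeight = imgHeight / 2;  // U and V are subsampled 2x2
int uBufferSize = uWidth * uHeight;                  // V plane has the same size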
Then, in my renderer code, I call glTexImage2D three times, once per plane:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexDataHandle);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, imgWidth, imgHeight, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uTexDataHandle);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, uWidth, uHeight, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, uBuffer);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vTexDataHandle);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, uWidth, uHeight, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, vBuffer);
This is crashing. I can fix the crash just fine by copying the contents of the above uBuffer and vBuffer into new byte[]s, then rewrapping the ByteBuffers over the new arrays.
But I'm copying data needlessly, and 30x per second.
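For clarity, the working but wasteful version looks roughly like this (a sketch of the copy-based workaround; names match the snippets above):
byte[] uCopy = new byte[uBufferSize];
byte[] vCopy = new byte[uBufferSize];
System.arraycopy(rawYUVData, yBufferSize, uCopy, 0, uBufferSize);
System.arraycopy(rawYUVData, yBufferSize + uBufferSize, vCopy, 0, uBufferSize);
ByteBuffer uBuffer = ByteBuffer.wrap(uCopy); // offset 0, so glTexImage2D accepts it
ByteBuffer vBuffer = ByteBuffer.wrap(vCopy);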
Can I not wrap ByteBuffers over the contiguous byte[], with offsets, and use these in OpenGL for some reason? Do I really have to copy all the time?
I've checked it's not an issue in my shader file, by removing the calls to texture2D() that involve the offending samplers.
Even adding an offset of just 1 when constructing yBuffer makes it crash, whereas with an offset of 0 it works fine. (rawYUVData.length is always > yBufferSize + 1.)
Related
I want to compress already-compressed images and pass the final, doubly compressed data to glCompressedTexImage2D. For this I followed the steps below:
glGenTextures(1, textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexImage2D(GL_TEXTURE_2D, level, internalformat, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &compressedSize);
if (compressedSize > 0) {
    /* call glCompressedTexImage2D to render this double compressed image. */
}
But I am getting only 0 in compressedSize; it seems the data is not being compressed.
GL_TEXTURE_COMPRESSED_IMAGE_SIZE isn't part of the OpenGL ES 2.0 spec (or even OpenGL ES 3.0 or 3.1).
My Android app passes an OpenGL 2D texture to my OpenCL kernel; however, when I use clCreateFromGLTexture2D() with mipmap level 1, I get CL_INVALID_MIPLEVEL.
I am creating my OpenGL texture and generating mipmaps like this:
GLES20.glGenTextures(2, targetTex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, targetTex[0]);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, image_width, image_height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 1, GLES20.GL_RGBA, image_width, image_height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
The texture is then rendered to by binding it to an FBO:
targetFramebuffer = IntBuffer.allocate(1);
GLES20.glGenFramebuffers(1, targetFramebuffer);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, targetFramebuffer.get(0));
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, targetTex[0], 0);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
I then create a CL memory object like so:
mem_images[0] = clCreateFromGLTexture2D(m_clContext, CL_MEM_READ_ONLY, GL_TEXTURE_2D, 1, in_tex, &err);
The memory object is created successfully when the miplevel is set to 0, and all the kernels work fine. It's only when the miplevel is set to 1 that the above function call gives me the CL_INVALID_MIPLEVEL error.
The OpenCL specification docs (http://www.khronos.org/registry/cl/sdk/1.1/docs/man/xhtml/clCreateFromGLTexture2D.html) say that a possible reason could be that the OpenGL implementation does not support creating from non-zero mipmap levels. However, I don't know whether this is actually the case, and I'm not sure how to find out.
EDIT:
After Andon's reply I changed my OpenGL texture generation to the following:
GLES20.glGenTextures(2, targetTex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, targetTex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, image_width, image_height, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
and after drawing into the FBO that is bound to the texture, this is how I generate mipmaps:
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, targetTex[0]);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 0);
GLES20.glFinish();
I still get the same error when trying to create a CL memory object from the GL texture with a mipmap level other than 0.
There is something very unusual about the dimensions of your second-level mipmap in this code. The dimensions of mipmap LOD N+1 should be half the width and height of LOD N. You can have LODs with the same resolution in GL, but those LODs will not be mipmaps. To make your 2D texture mipmap complete, you need log2(max(width, height)) LODs in addition to the base image (LOD 0), each 1/4 the resolution (1/2 in each dimension) of the one before it; for example, a 256x256 base image needs 8 additional LODs (128, 64, 32, 16, 8, 4, 2, 1). I have to agree with CL on this one: LOD 1 in your code is not really a mipmap.
Moreover, you are calling glGenerateMipmap (...) on a newly created texture that has no data store. That is a meaningless thing to do: glGenerateMipmap (...) builds your mip-chain for you starting from the base LOD image, and at the time you called it, there was no base LOD.
Your call to GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D) really ought to be moved to a point after you draw into your FBO. Done properly, you do not have to manually allocate storage for the lower-detail mipmap LODs; this function will take care of that for you. Thus, you should actually remove the code that allocates LOD 1 in the first place.
I'm struggling with OpenGL and can't find a solution for the following problem.
I'm using OpenGL with the NDK and trying to generate a ByteBuffer of pixels from my texture and send it to native C.
The texture image is a black-and-white image, meaning byte-wise:
black: expected result - 0(R) 0(G) 0(B) 1(A) - 0001
white: expected result - 1(R) 1(G) 1(B) 1(A) - 1111
The sign is not relevant here since both numbers are positive.
This is relevant because in Java the primitive types are always signed.
Complication: we are getting a buffer of unsigned bytes
so the expectations are:
black: 0(R) 0(G) 0(B) 1(A) - 0001 stays the same, because the leftmost bit of every byte is 0.
white: 1(R) 1(G) 1(B) 1(A) - the sign bit is always on (meaning a negative number), so -127, -127, -127, -127
BUT the strange output I'm getting is: -10 -10 -10 -1 (RGBA), -10 -10 -10 -1, -10 -10 -10 -1, -10 -10 -10 -1
Why is there a sign if I asked for unsigned bytes?
Edit:
A partial explanation is that Java types are still signed, so in the debugger/logs/prints the value will appear negative depending on the leftmost (most significant) bit. I still don't know why this bit is 1.
Why are the values -10 and -1? At least the alpha byte should always be 1.
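As far as I understand, the signed Java byte can be reinterpreted as its unsigned value by masking; a minimal sketch (not from my actual code):
byte b = buffer.get(0);   // e.g. -10 as a signed Java byte
int unsigned = b & 0xFF;  // 246 when reinterpreted as an unsigned value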
I'm setting up the ByteBuffer as follows:
int[] frame = new int[1];
GLES20.glGenFramebuffers(1, frame, 0);
RendererUtils.checkGlError("glGenFramebuffers");
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frame[0]);
RendererUtils.checkGlError("glBindFramebuffer");
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, texture, 0);
RendererUtils.checkGlError("glFramebufferTexture2D");
ByteBuffer buffer = ByteBuffer.allocate(width * height * 4);
//GLES20.glPixelStorei(pname, param);
GLES20.glReadPixels(0, 0, width, height, GLES20.GL_RGBA, arrayNativeType, buffer); // todo take unsigned int
RendererUtils.checkGlError("glReadPixels");
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
RendererUtils.checkGlError("glBindFramebuffer");
GLES20.glDeleteFramebuffers(1, frame, 0);
RendererUtils.checkGlError("glDeleteFramebuffer");
Unbelievable answer: I checked everything memory- and NDK-wise, and in the end the problem was in my black & white shader.
Explanation: the previous, wrong shader took the highest RGB value and set r, g, and b according to it, and that is how we got the black:
float greyValue = max(fragColor.r, max(fragColor.g, fragColor.b));
gl_FragColor = vec4(greyValue, greyValue, greyValue, 1.0);
The error is that this way I don't know the precise output color and its byte representation.
The fix was to put 1 instead of greyValue, and to put 0 for r, g, and b when greyValue is 0.
I hope that's clear.
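In other words, the fixed shader ends up doing something like this (just a sketch of the fix, not my exact code):
float greyValue = max(fragColor.r, max(fragColor.g, fragColor.b));
float bw = (greyValue > 0.0) ? 1.0 : 0.0;   // force every pixel to pure black or pure white
gl_FragColor = vec4(bw, bw, bw, 1.0);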
I get the YUV data from the camera and send it to OpenGL, then I use a fragment shader to convert the data to RGBA format and show it on the screen. Everything goes well, but when I use glReadPixels to get the RGBA data from the framebuffer into an array, I get wrong data.
// I use VBO to draw
glBindBuffer(GL_ARRAY_BUFFER, squareVerticesBufferID);
glVertexAttribPointer(gvPositionHandle, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(gvPositionHandle);
glBindBuffer(GL_ARRAY_BUFFER, textureVerticesBuferID);
glVertexAttribPointer(gvTextureHandle, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(gvTextureHandle);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, squareVerticesIndexBufferID);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_INT, 0);
// Then I use glReadPixels to read the RGBA data
unsigned char *returnDataPointer = (unsigned char*) malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, returnDataPointer);
Unfortunately I get wrong data: the last few thousand elements in the array are 0s. The same code works well on iOS. Did I miss something?
I work on Android 4.0.3 and use OpenGL ES 2 from the NDK.
I'm working on some Android code for caching and redrawing a framebuffer object's color buffer between the loss and recreation of EGL contexts. Development is primarily happening on a Xoom tablet running Honeycomb. Anyway, what I'm trying to do is store the result of calling glReadPixels() on the FBO in a direct ByteBuffer, then use that buffer with glTexImage2D() and draw it back into the (now cleared) framebuffer. All of this seems to work fine: the ByteBuffer contains the right values ([-1, 0, 0, -1] etc. for a pixel, owing to Java's inability to understand unsigned bytes), no GL errors seem to be thrown, and the quad is drawn to the right part of the screen (currently the top-left quarter of the framebuffer for testing purposes).
However, no matter what I try, glTexImage2D() always outputs a plain black texture. I've had some issues with this before: when displaying Bitmaps, I eventually gave up trying to use the basic GLES20.glTexImage2D() with Buffers and skipped to using GLUtils.texImage2D(), which processes the Bitmap for you. Unfortunately, that's less of an option here (I did actually try converting the ByteBuffer to a Bitmap so I could use GLUtils, without much success), so I've really run out of ideas.
Can anyone think of anything that could be causing glTexImage2D() to not correctly process a perfectly good ByteBuffer? Any and all suggestions would be welcome.
ByteBuffer pixelBuffer;
void storePixels() {
    try {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbuf);
        pixelBuffer = ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
        gfx.checkGlError("store Pixels");
    } catch (OutOfMemoryError e) {
        pixelBuffer = null;
    }
}
void redrawPixels() {
    GLES20.glBindFramebuffer(GL20.GL_FRAMEBUFFER, fbuf);
    int[] texId = new int[1];
    GLES20.glGenTextures(1, texId, 0);
    int bufferTex = texId[0];
    GLES20.glBindTexture(GL20.GL_TEXTURE_2D, bufferTex);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_LINEAR);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_S, repeatX ? GL20.GL_REPEAT : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameterf(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_WRAP_T, repeatY ? GL20.GL_REPEAT : GL20.GL_CLAMP_TO_EDGE);
    GLES20.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL20.GL_RGBA, width, height, 0, GL20.GL_RGBA, GL20.GL_UNSIGNED_BYTE, pixelBuffer);
    gfx.drawTexture(bufferTex, width, height, Transform.IDENTITY, width/2, height/2, false, false, 1);
    GLES20.glDeleteTextures(1, IntBuffer.wrap(new int[] {bufferTex}));
    pixelBuffer = null;
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
}
gfx.drawTexture() builds a quad and draws it to the currently bound framebuffer, by the way. That code has been well-tested in other parts of my project — it shouldn't be the issue here.
For those of you playing along at home, this code is in fact totally valid. Remember when I swore blind that gfx.drawTexture() "has been well-tested and shouldn't be the issue here"? Yeah, it was totally the issue there. I was buffering vertices to draw without actually flushing them through a glDrawElements() call. Whoops.
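The fix, roughly speaking (gfx.flush() is a hypothetical stand-in for whatever call actually issues the buffered glDrawElements() in my own batching layer):
gfx.drawTexture(bufferTex, width, height, Transform.IDENTITY, width/2, height/2, false, false, 1);
gfx.flush(); // hypothetical: push the buffered quad through glDrawElements() before deleting the texture
GLES20.glDeleteTextures(1, IntBuffer.wrap(new int[] {bufferTex}));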