Why Black Screen on native Android

I am trying to "software render" a pixel array to the screen on Android 12, using C and OpenGL ES.
Relevant code, where I am expecting a 0xff0000ff (red) or 0xff00ffff (violet) rectangle:
GLbitfield *rectangle;
GLbitfield *picture;

void Renderer::render() {
    for (int i = 0; i < 1024 * 768; i++)
        *(picture + i) = 0xff00ffff;
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 768, GL_RGB_INTEGER, GL_UNSIGNED_INT, picture);
    // necessary IDK
    glBlitFramebuffer(0, 0, 1024, 768, 0, 0, 1024, 768, GL_COLOR_BUFFER_BIT, GL_LINEAR);
    eglSwapBuffers(display_, surface_);
}
GLuint texid;

void Renderer::initRenderer() {
    rectangle = (GLbitfield*)malloc(1024 * 768 * sizeof(GLbitfield));
    picture   = (GLbitfield*)malloc(1024 * 768 * sizeof(GLbitfield));
    for (int i = 0; i < 1024 * 768; i++)
        *(rectangle + i) = 0xff0000ff;
    glGenTextures(1, &texid);
    glBindTexture(GL_TEXTURE_2D, texid);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 768, 0, GL_RGBA, GL_UNSIGNED_INT, rectangle);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texid, 0);
    [...]
}
So yeah, this compiles and runs, but with a black screen. I just want GL to "let me through" with my red/purple pixels, so that I can later change the *picture array in my render loop to something cool.
Or is this approach totally wrong, or even unfeasible? Performance-wise, the ARM in the phone should be more than capable of some software trickery, IMHO.
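Note: with an unsized GL_RGBA texture, GLES 3 accepts only matching transfer format/type pairs such as GL_RGBA + GL_UNSIGNED_BYTE; both GL_RGBA + GL_UNSIGNED_INT and GL_RGB_INTEGER + GL_UNSIGNED_INT fail with GL_INVALID_OPERATION and leave the texture untouched. A minimal sketch of a combination that is valid for this setup (an editor's sketch, not the asker's code; it assumes a little-endian device, so the low byte of each word lands in the R channel):

uint32_t *pixels = (uint32_t*)malloc(1024 * 768 * sizeof(uint32_t));
for (int i = 0; i < 1024 * 768; i++)
    pixels[i] = 0xff0000ffu;   // bytes ff,00,00,ff => R=255, G=0, B=0, A=255: red
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1024, 768, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// and per frame:
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1024, 768,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);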
By the way, I really "thank" Khronos for removing glDrawPixels() from GLES, as that would have seemed the ideal solution to me. No need for some stupid texture workaround!
Thanks in advance for your expertise.

Related

Use glEGLImageTargetTexture2DOES to replace glReadPixels on Android

Given a textureId, I need to extract the pixel data from the texture.
glReadPixels works, but it is extremely slow on some devices, even with FBO/PBO (on a Xiaomi MI 5 it takes 65 ms, and it is even slower with a PBO). So I decided to use AHardwareBuffer and EGLImageKHR, which should be much faster. However, I cannot get it to work: the screen goes black, and nothing is read into the data.
// attach inputTexture to FBO
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);

// allocate hardware buffer
AHardwareBuffer* buffer = nullptr;
AHardwareBuffer_Desc desc = {};
desc.width = width;
desc.height = height;
desc.layers = 1;
desc.usage = AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN | AHARDWAREBUFFER_USAGE_CPU_WRITE_NEVER
           | AHARDWAREBUFFER_USAGE_GPU_COLOR_OUTPUT;
desc.format = AHARDWAREBUFFER_FORMAT_R8G8B8A8_UNORM;
AHardwareBuffer_allocate(&desc, &buffer); // without this call, buffer stays null
// create EGLImageKHR
EGLint eglAttributes[] = {EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE};
EGLClientBuffer clientBuffer = eglGetNativeClientBufferANDROID(buffer);
EGLImageKHR eglImageKhr = eglCreateImageKHR(eglDisplay, eglContext,
                                            EGL_NATIVE_BUFFER_ANDROID, clientBuffer, eglAttributes);
// read pixels to hardware buffer ????
glBindTexture(GL_TEXTURE_2D, textureId);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImageKhr);

// copy pixels into memory
void *readPtr;
AHardwareBuffer_lock(buffer, AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN, -1, nullptr, (void**)&readPtr);
memcpy(writePtr, readPtr, width * 4); // writePtr: destination buffer, declared elsewhere
AHardwareBuffer_unlock(buffer, nullptr);
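Note that rows in a locked AHardwareBuffer may be padded: AHardwareBuffer_describe reports the actual row stride (in pixels for R8G8B8A8). A stride-aware copy sketch, where dst is a hypothetical tightly packed width * height * 4 destination:

AHardwareBuffer_Desc outDesc;
AHardwareBuffer_describe(buffer, &outDesc);           // outDesc.stride is in pixels
for (int row = 0; row < height; row++)
    memcpy((char*)dst + row * width * 4,              // dst: hypothetical packed destination
           (char*)readPtr + row * outDesc.stride * 4,
           width * 4);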
For comparison, this is my code with glReadPixels; it does get the pixels after attaching the texture to the framebuffer.
GLES30.glBindFramebuffer(GLES30.GL_FRAMEBUFFER, fboBuffer!![0])
GLES30.glFramebufferTexture2D(GLES30.GL_FRAMEBUFFER,
                              GLES30.GL_COLOR_ATTACHMENT0,
                              GLES30.GL_TEXTURE_2D,
                              inputTextureId,
                              0)
GLES30.glReadPixels(0, 0, width, height, GLES30.GL_RGBA, GLES30.GL_UNSIGNED_BYTE, byteBuffer)
Please tell me where I went wrong :(
//read pixels to hardware buffer ????
glBindTexture(GL_TEXTURE_2D, textureId);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImageKhr);
This doesn't do any reading of data; it just sets up the texture to refer to the EGLImage. If you want data to be copied into it, you need to either do glCopyTexImage2D() or bind it to a framebuffer and render into it.
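For the framebuffer route, a sketch of what that could look like (eglImageTexture is an illustrative second texture; binding the EGLImage to textureId itself replaces that texture's storage, which may explain the black screen):

// bind the EGLImage to its own texture, not to the source textureId
GLuint eglImageTexture;
glGenTextures(1, &eglImageTexture);
glBindTexture(GL_TEXTURE_2D, eglImageTexture);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, eglImageKhr);
// with the source texture attached to fbo, copy its pixels into the EGLImage texture
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);
glFinish(); // make sure the GPU copy has landed before the CPU locks the buffer
void *readPtr = nullptr;
AHardwareBuffer_lock(buffer, AHARDWAREBUFFER_USAGE_CPU_READ_OFTEN, -1, nullptr, &readPtr);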

Why would an OpenGL ES render on iOS be flipped vertically?

I've got some C code to render some OpenGL stuff and it's running on both Android and iOS. On Android it looks fine. But on iOS it is flipped vertically.
Here's some simple code to demonstrate (only copied the relevant parts because OpenGL C code is long-winded):
GLfloat vVertices[] = {
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f
};

glViewport(0, 0, context->width, context->height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(data->programObject);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vVertices);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);
On Android it looks like this:
But on iOS it looks like this:
The only thing that differs between the two platforms is the initialization code for OpenGL ES, since all the OpenGL code is shared C code. However, I can't spot anything obviously wrong with the init code.
Here's the init code (I removed most error handling because there are no errors being triggered apart from the one I left in):
- (void)initGL {
    _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    [EAGLContext setCurrentContext:_context];
    [self createCVBufferWithSize:_renderSize withRenderTarget:&_target withTextureOut:&_texture];
    glBindTexture(CVOpenGLESTextureGetTarget(_texture), CVOpenGLESTextureGetName(_texture));
    glTexImage2D(GL_TEXTURE_2D,      // target
                 0,                  // level
                 GL_RGBA,            // internalformat
                 _renderSize.width,  // width
                 _renderSize.height, // height
                 0,                  // border
                 GL_RGBA,            // format
                 GL_UNSIGNED_BYTE,   // type
                 NULL);              // data
    // HACK: we always get an "error" here (GL_INVALID_OPERATION) despite everything working. See https://stackoverflow.com/questions/57104033/why-is-glteximage2d-returning-gl-invalid-operation-on-ios
    glGetError();
    glGenRenderbuffers(1, &_depthBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _depthBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _renderSize.width, _renderSize.height);
    glGenFramebuffers(1, &_frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(_texture), 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthBuffer);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
    } else {
        NSLog(@"Successfully initialized GL");
        char* glRendererName = getGlRendererName();
        char* glVersion = getGlVersion();
        char* glShadingLanguageVersion = getGlShadingLanguageVersion();
        NSLog(@"OpenGL renderer name: %s, version: %s, shading language version: %s", glRendererName, glVersion, glShadingLanguageVersion);
    }
}
And here's the code that creates the actual texture (using EAGL):
- (void)createCVBufferWithSize:(CGSize)size
withRenderTarget:(CVPixelBufferRef *)target
withTextureOut:(CVOpenGLESTextureRef *)texture {
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_textureCache);
if (err) return;
CFDictionaryRef empty;
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault,
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);
CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
kCVPixelFormatType_32BGRA, attrs, target);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_textureCache,
*target,
NULL, // texture attributes
GL_TEXTURE_2D,
GL_RGBA, // opengl format
size.width,
size.height,
GL_BGRA, // native iOS format
GL_UNSIGNED_BYTE,
0,
texture);
CFRelease(empty);
CFRelease(attrs);
}
Can anyone tell me why iOS is flipped like this? I've since noticed other people with the same problem, such as here, but I haven't found a solution yet.
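A common explanation: Core Video pixel buffers have a top-left origin, while OpenGL framebuffers use a bottom-left origin, so rendering into a CVPixelBuffer-backed texture comes out upside down. One workaround is to negate Y only on iOS, e.g. through a projection uniform; a sketch, where "uProjection" is a hypothetical uniform added to the shared shader:

const GLfloat flipY[16] = {
    1.0f,  0.0f, 0.0f, 0.0f,
    0.0f, -1.0f, 0.0f, 0.0f,   // negate Y
    0.0f,  0.0f, 1.0f, 0.0f,
    0.0f,  0.0f, 0.0f, 1.0f
};
glUniformMatrix4fv(glGetUniformLocation(data->programObject, "uProjection"),
                   1, GL_FALSE, flipY);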

Freetype Library gives triple textures and weird symbols

Currently working on an Android NDK/OpenGL project, and I'm trying to use FreeType as my font rendering library, but I keep getting a weird error when I render text to the screen. Here is what it shows for a few sample texts (note: the bottom one is supposed to say "This is"):
Setup:
void TextRenderer::SetupGlyphs(std::string fontPath, int size) {
    __android_log_print(ANDROID_LOG_INFO, "SetupGlyphs", "Font location: %s", fontPath.c_str());
    if (shadersInitialized == 0)
        CreateShader();
    glUseProgram(this->shader);

    // FreeType
    FT_Library ft;
    if (FT_Init_FreeType(&ft))
        __android_log_print(ANDROID_LOG_INFO, "SetupGlyphs", "ERROR::FREETYPE: Could not init FreeType Library.");
    FT_Face face;
    if (FT_New_Face(ft, fontPath.c_str(), 0, &face))
        __android_log_print(ANDROID_LOG_INFO, "SetupGlyphs", "ERROR::FREETYPE: Failed to load font: %s", fontPath.c_str());
    FT_Set_Pixel_Sizes(face, 0, size);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    for (GLubyte c = 0; c < 128; c++) {
        if (FT_Load_Char(face, c, FT_LOAD_RENDER)) {
            __android_log_print(ANDROID_LOG_INFO, "SetupGlyphs", "ERROR::FREETYPE: Failed to load Glyph");
            continue;
        }
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(
            GL_TEXTURE_2D,
            0,
            GL_RGB,
            face->glyph->bitmap.width,
            face->glyph->bitmap.rows,
            0,
            GL_RGB,
            GL_UNSIGNED_BYTE,
            face->glyph->bitmap.buffer
        );
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        Character character = {
            texture,
            ivec2(face->glyph->bitmap.width, face->glyph->bitmap.rows),
            ivec2(face->glyph->bitmap_left, face->glyph->bitmap_top),
            static_cast<GLuint>(face->glyph->advance.x)
        };
        characters.insert(std::pair<GLchar, Character>(c, character));
    }
    glBindTexture(GL_TEXTURE_2D, 0);
    FT_Done_Face(face);
    FT_Done_FreeType(ft);
}
Rendering:
void TextRenderer::RenderTexts()
{
    if (shadersInitialized == 0)
        CreateShader();
    // Activate corresponding render state
    glUseProgram(this->shader);
    GLuint projectionLocation = glGetUniformLocation(this->shader, "projection");
    glUniformMatrix4fv(projectionLocation, 1, GL_FALSE, projectionMatrix);
    for (int i = 0; i < projects.size(); i++) {
        ProjectLabel project = projects.at(i);
        glUniform3f(glGetUniformLocation(this->shader, "textColor"), project.textColor.x, project.textColor.y, project.textColor.z);
        glActiveTexture(GL_TEXTURE0);
        GLuint vertexBuffer;
        glGenBuffers(1, &vertexBuffer);
        // Set up the VBO for our vertex data
        glEnableVertexAttribArray(0);
        glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
        glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, 0);
        // Iterate through all characters
        std::string::const_iterator c;
        GLuint x = project.x;
        for (c = project.text.begin(); c != project.text.end(); c++)
        {
            Character ch = characters[*c];
            GLfloat xpos = x + ch.Bearing.x;
            GLfloat ypos = project.y - (ch.Size.y - ch.Bearing.y);
            GLfloat w = ch.Size.x;
            GLfloat h = ch.Size.y;
            // Update VBO for each character
            GLfloat vertices[6 * 4] = {
                xpos,     ypos + h, 0.0, 0.0,
                xpos,     ypos,     0.0, 1.0,
                xpos + w, ypos,     1.0, 1.0,
                xpos,     ypos + h, 0.0, 0.0,
                xpos + w, ypos,     1.0, 1.0,
                xpos + w, ypos + h, 1.0, 0.0
            };
            glBindTexture(GL_TEXTURE_2D, ch.TextureID);
            glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_DYNAMIC_DRAW);
            glDrawArrays(GL_TRIANGLES, 0, 6);
            x += (ch.Advance / 64); // Advance is in 1/64-pixel units, so divide by 64 (i.e. shift right by 6) to get pixels
        }
        glDisableVertexAttribArray(0);
    }
    glBindTexture(GL_TEXTURE_2D, 0);
}
So, to anyone who finds this post after scouring the web for hours on end trying to figure out why everything looks funky: I found the answer. FreeType glyph bitmaps are not laid out as GL_RGB (at least not in my project); they are single-channel, which corresponds to GL_LUMINANCE. Changing this in glTexImage2D solved all of the above issues, as well as the SIGABRT errors I was also getting.
TLDR;
glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_RGB,  // => GL_LUMINANCE
    face->glyph->bitmap.width,
    face->glyph->bitmap.rows,
    0,
    GL_RGB,  // => GL_LUMINANCE
    GL_UNSIGNED_BYTE,
    face->glyph->bitmap.buffer
);
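A side note: on an OpenGL ES 3.x context, the sized single-channel alternative to GL_LUMINANCE is GL_R8 with transfer format GL_RED (a sketch; the glyph coverage then arrives in the shader as the texture's .r channel instead of .rgb):

glTexImage2D(
    GL_TEXTURE_2D,
    0,
    GL_R8,                       // sized one-channel internal format
    face->glyph->bitmap.width,
    face->glyph->bitmap.rows,
    0,
    GL_RED,                      // matching transfer format
    GL_UNSIGNED_BYTE,
    face->glyph->bitmap.buffer
);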

glTexSubImage2D is slower with Pixel Buffer Objects

I am using PBOs with OpenGL ES 3 on Android, but after switching to them the performance of data unpacking has degraded. Please help me find the fault in the code.
Background: My app consists of a module which generates images at a very fast rate. These images are uploaded to textures which are finally drawn on a TextureView. The bottleneck in my app is the time taken to upload images to textures.
I decided to try PBOs to improve this perf. Below is my code:
One-time initialization:
CheckGLErr( glGenTextures( 1, &texName ) );
if( usePBO )
{
    CheckGLErr( glGenBuffers( 1, &pboId ) );
    CheckGLErr( glBindBuffer( GL_PIXEL_UNPACK_BUFFER, pboId ) );
    CheckGLErr( glBufferData( GL_PIXEL_UNPACK_BUFFER, textureSize, 0, GL_STREAM_DRAW ) );
    CheckGLErr( glBindBuffer( GL_PIXEL_UNPACK_BUFFER, 0 ) );
}
glBindTexture( GL_TEXTURE_2D, texName );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA, windowWidth, windowHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0 );
glBindTexture( GL_TEXTURE_2D, 0 );
Render loop:
CheckGLErr(glClearColor(0, 1.0f, 0, 1));
CheckGLErr(glClear(GL_COLOR_BUFFER_BIT));
glBindTexture( GL_TEXTURE_2D, texName );
if( usePBO )
{
    CheckGLErr(glBindBuffer( GL_PIXEL_UNPACK_BUFFER, pboId ));
    void* pBuff = ::glMapBufferRange( GL_PIXEL_UNPACK_BUFFER, 0, textureSize, GL_MAP_WRITE_BIT );
    memcpy( pBuff, buffer.get(), textureSize );
    CheckGLErr(glUnmapBuffer( GL_PIXEL_UNPACK_BUFFER ));
}
//std::this_thread::sleep_for( std::chrono::milliseconds( 100 ) );
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, windowWidth, windowHeight, GL_RGBA, GL_UNSIGNED_BYTE, usePBO ? 0 : buffer.get() );

// Setup texture properties.
CheckGLErr(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST));
CheckGLErr(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST));
CheckGLErr(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE));
CheckGLErr(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE));

CheckGLErr( glUseProgram( programObject ) );

// Enable position vertex attribute. Its size is 3 floats corresponding to co-ordinates in three directions.
CheckGLErr(glEnableVertexAttribArray(0));
CheckGLErr(glVertexAttribPointer(0, 3, GL_FLOAT, false, 5 * sizeof(float)/*vertexStride*/, (const GLvoid *)0));
// Enable UV vertex attribute which starts after position. Hence offset = 3 floats.
CheckGLErr(glEnableVertexAttribArray(1));
CheckGLErr(glVertexAttribPointer(1, 2, GL_FLOAT, false, 5 * sizeof(float)/*vertexStride*/, (const GLvoid *)(3 * sizeof(float))/*offset worth position*/));

// Setup viewport
CheckGLErr(glViewport(0, 0, windowWidth, windowHeight));

if( usePBO )
    CheckGLErr( glBindBuffer( GL_PIXEL_UNPACK_BUFFER, 0 ) );

CheckGLErr(glDrawElements(GL_TRIANGLE_STRIP, _countof(indices), GL_UNSIGNED_SHORT, (const GLvoid *)0x00000000));
eglSwapBuffers(eglDisplay, eglWindowSurface);
Measurements done:

On Nexus 7 (4.4.4), Adreno GPU, screen size = 1624 x 1200 pixels:
Without PBO: 17 ms
With PBO: 18 ms (~5 ms for buffer population and the rest in the glTexSubImage2D call)

On Nexus 6 (5.0), screen size = 2042 x 1440 pixels:
Without PBO: 6 ms
With PBO: 20 ms
My assumption was that, with PBOs, glTexSubImage2D should return almost instantaneously (or at least in < 5 ms) because the image data is already in VRAM, but in fact it takes much longer, especially on the Nexus 6.
As an experiment I put std::this_thread::sleep_for( std::chrono::milliseconds( 100 ) ); between glUnmapBuffer and glTexSubImage2D to see whether it makes any difference in the time taken by glTexSubImage2D alone, but it remained the same.
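One technique that often removes this stall (a sketch reusing the names from the code above; pbos and frame are new): double-buffer the PBOs, so glMapBufferRange never has to wait for a transfer the driver has not yet finished, and orphan the buffer before mapping:

GLuint pbos[2]; // created like pboId above, both sized to textureSize
int frame = 0;

// per frame:
int upload = frame % 2;       // PBO whose contents go to the texture
int fill   = (frame + 1) % 2; // PBO refilled for the next frame

glBindBuffer( GL_PIXEL_UNPACK_BUFFER, pbos[upload] );
glTexSubImage2D( GL_TEXTURE_2D, 0, 0, 0, windowWidth, windowHeight,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0 );

glBindBuffer( GL_PIXEL_UNPACK_BUFFER, pbos[fill] );
glBufferData( GL_PIXEL_UNPACK_BUFFER, textureSize, 0, GL_STREAM_DRAW ); // orphan old storage
void* pBuff = glMapBufferRange( GL_PIXEL_UNPACK_BUFFER, 0, textureSize,
                                GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_BUFFER_BIT );
memcpy( pBuff, buffer.get(), textureSize );
glUnmapBuffer( GL_PIXEL_UNPACK_BUFFER );
glBindBuffer( GL_PIXEL_UNPACK_BUFFER, 0 );
++frame;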

GL_INVALID_FRAMEBUFFER_OPERATION Android NDK GL FrameBuffer and glReadPixels returns 0 0 0 0

My C++ code was designed for iOS and now I ported it to NDK with minimal modifications.
I bind the frame buffer and call
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
then I bind the main frame buffer like this:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glGetError() returns GL_INVALID_FRAMEBUFFER_OPERATION.
I can draw into my framebuffer and use its texture to draw into the main framebuffer, but when I call glReadPixels I get zeros.
That code worked on iOS, and most of it works on Android, except glReadPixels.
glCheckFramebufferStatus(GL_FRAMEBUFFER) returns GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT (0x8CD7).
--
I will consider sample code that gives me pixel data from a framebuffer or texture, which I can save to a file, as an answer.
Right now I can draw to a buffer with an attached texture, and I can use that texture to draw on the main buffer. But I can't get the pixels from the framebuffer/texture to save to a file or post to Facebook.
I have been able to do a glReadPixels from a framebuffer by modifying the NDK example "GL2JNIActivity".
I just changed gl_code.cpp, where I added an initialization function:
char *pixel = NULL;
GLuint fbuffer, rbuffers[2];
int width, height;

bool setupMyStuff(int w, int h) {
    // Allocate the memory
    if (pixel != NULL) free(pixel);
    pixel = (char *) calloc(w * h * 4, sizeof(char)); // 4 components
    if (pixel == NULL) {
        LOGE("memory not allocated!");
        return false;
    }

    // Create and bind the framebuffer
    glGenFramebuffers(1, &fbuffer);
    checkGlError("glGenFramebuffers");
    glBindFramebuffer(GL_FRAMEBUFFER, fbuffer);
    checkGlError("glBindFramebuffer");

    glGenRenderbuffers(2, rbuffers);
    checkGlError("glGenRenderbuffers");
    glBindRenderbuffer(GL_RENDERBUFFER, rbuffers[0]);
    checkGlError("glBindRenderbuffer[color]");
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, w, h);
    checkGlError("glRenderbufferStorage[color]");
    glBindRenderbuffer(GL_RENDERBUFFER, rbuffers[1]);
    checkGlError("glBindRenderbuffer[depth]");
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);
    checkGlError("glRenderbufferStorage[depth]");
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbuffers[0]);
    checkGlError("glFramebufferRenderbuffer[color]");
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbuffers[1]);
    checkGlError("glFramebufferRenderbuffer[depth]");

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        LOGE("Framebuffer not complete");
        return false;
    }

    // Turn this on to compare the results in pixel when the fb is active with the ones obtained from the screen
    if (false) {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        checkGlError("glBindFramebuffer");
    }

    width = w;
    height = h;
    return true;
}
which does nothing more than initialize the framebuffer and allocate space for the output, and which I call at the very end of
bool setupGraphics(int w, int h);
and a block to save the output in my pixel array, which I execute after the
checkGlError("glDrawArrays");
statement in renderFrame():
{
    // save the output of glReadPixels somewhere... I'm a noob about JNI however, so I leave this part as an exercise for the reader ;-)
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    checkGlError("glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixel)");
    int end = width * height * 4 - 1;
    // This function returns the same results regardless of where the data comes from (screen or fbo)
    LOGI("glReadPixel => (%hhu,%hhu,%hhu,%hhu),(%hhu,%hhu,%hhu,%hhu),(%hhu,%hhu,%hhu,%hhu),..., (%hhu, %hhu, %hhu, %hhu)",
         pixel[0], pixel[1], pixel[2], pixel[3],
         pixel[4], pixel[5], pixel[6], pixel[7],
         pixel[8], pixel[9], pixel[10], pixel[11],
         pixel[end - 3], pixel[end - 2], pixel[end - 1], pixel[end]);
}
The results in logcat are identical, whether you draw to the framebuffer or to the screen:
I/libgl2jni( 5246): glReadPixel => (0,4,0,255),(8,4,8,255),(0,4,0,255),..., (0, 4, 0, 255)
I/libgl2jni( 5246): glReadPixel => (8,4,8,255),(8,8,8,255),(0,4,0,255),..., (8, 8, 8, 255)
I/libgl2jni( 5246): glReadPixel => (8,8,8,255),(8,12,8,255),(8,8,8,255),..., (8, 8, 8, 255)
I/libgl2jni( 5246): glReadPixel => (8,12,8,255),(16,12,16,255),(8,12,8,255),..., (8, 12, 8, 255)
I/libgl2jni( 5246): glReadPixel => (16,12,16,255),(16,16,16,255),(8,12,8,255),..., (16, 16, 16, 255)
[...]
Tested on Froyo (phone) and on a 4.0.3 tablet.
You can find all other details in the original NDK example (GL2JNIActivity).
Hope this helps
EDIT: also make sure to check:
Android NDK glReadPixels() from offscreen buffer
where the poster seemed to have the same symptoms (and solved the problem by calling glReadPixels from the right thread).
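Since the question asks for code that saves the pixel data to a file, here is a minimal sketch (not part of the original example; savePpm is an illustrative helper) that dumps the RGBA buffer filled by glReadPixels as a binary PPM, dropping alpha:

#include <stdio.h>

static void savePpm(const char *path, const unsigned char *px, int w, int h) {
    FILE *f = fopen(path, "wb");
    if (f == NULL) return;
    fprintf(f, "P6\n%d %d\n255\n", w, h);
    for (int row = h - 1; row >= 0; row--)             // glReadPixels rows are bottom-up; flip for PPM
        for (int col = 0; col < w; col++)
            fwrite(px + (row * w + col) * 4, 1, 3, f); // keep R,G,B of each RGBA pixel
    fclose(f);
}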
