Why would an OpenGL ES render on iOS be flipped vertically?

I've got some C code to render some OpenGL stuff and it's running on both Android and iOS. On Android it looks fine. But on iOS it is flipped vertically.
Here's some simple code to demonstrate (only copied the relevant parts because OpenGL C code is long-winded):
GLfloat vVertices[] = {
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f
};

glViewport(0, 0, context->width, context->height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(data->programObject);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vVertices);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);
On Android the triangle renders point-up, as the vertex data intends, but on iOS the same frame comes out upside down. (Screenshots omitted.)
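The apex of that triangle sits at NDC y = +0.5; whether it lands near the top or the bottom of the final image depends entirely on which row the consumer of the buffer treats as row 0. OpenGL's window space puts row 0 at the bottom, while most image and display APIs count rows from the top. A small sketch of the two conventions (the helper names are hypothetical, not from the question):

```c
#include <stdio.h>

/* Map a normalized-device y in [-1, 1] to a pixel row. OpenGL's window
   space puts row 0 at the BOTTOM; most image/display APIs put row 0 at
   the TOP, which is where a vertical flip can sneak in. */
static int rowBottomUp(float ndcY, int height) {
    return (int)((ndcY + 1.0f) * 0.5f * (float)(height - 1));
}

static int rowTopDown(float ndcY, int height) {
    return (height - 1) - rowBottomUp(ndcY, height);
}
```

For the apex at y = 0.5 in a 100-row image, `rowBottomUp(0.5f, 100)` is 74 and `rowTopDown(0.5f, 100)` is 25: the very same memory row sits near the top under one convention and near the bottom under the other, which is exactly the symptom described.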
The only thing that differs between the two platforms is the initialization code for OpenGL ES, since all the OpenGL code is shared C code. However, I can't spot anything obviously wrong with the init code.
Here's the init code (I removed most error handling because there are no errors being triggered apart from the one I left in):
- (void)initGL {
    _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    [EAGLContext setCurrentContext:_context];

    [self createCVBufferWithSize:_renderSize withRenderTarget:&_target withTextureOut:&_texture];
    glBindTexture(CVOpenGLESTextureGetTarget(_texture), CVOpenGLESTextureGetName(_texture));
    glTexImage2D(GL_TEXTURE_2D,        // target
                 0,                    // level
                 GL_RGBA,              // internalformat
                 _renderSize.width,    // width
                 _renderSize.height,   // height
                 0,                    // border
                 GL_RGBA,              // format
                 GL_UNSIGNED_BYTE,     // type
                 NULL);                // data
    // HACK: we always get an "error" here (GL_INVALID_OPERATION) despite everything working.
    // See https://stackoverflow.com/questions/57104033/why-is-glteximage2d-returning-gl-invalid-operation-on-ios
    glGetError();

    glGenRenderbuffers(1, &_depthBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _depthBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _renderSize.width, _renderSize.height);

    glGenFramebuffers(1, &_frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(_texture), 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthBuffer);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
    } else {
        NSLog(@"Successfully initialized GL");
        char* glRendererName = getGlRendererName();
        char* glVersion = getGlVersion();
        char* glShadingLanguageVersion = getGlShadingLanguageVersion();
        NSLog(@"OpenGL renderer name: %s, version: %s, shading language version: %s", glRendererName, glVersion, glShadingLanguageVersion);
    }
}
And here's the code that creates the actual texture (using EAGL):
- (void)createCVBufferWithSize:(CGSize)size
              withRenderTarget:(CVPixelBufferRef *)target
                withTextureOut:(CVOpenGLESTextureRef *)texture {
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_textureCache);
    if (err) return;

    CFDictionaryRef empty;
    CFMutableDictionaryRef attrs;
    empty = CFDictionaryCreate(kCFAllocatorDefault,
                               NULL,
                               NULL,
                               0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                      &kCFTypeDictionaryKeyCallBacks,
                                      &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

    CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
                        kCVPixelFormatType_32BGRA, attrs, target);
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                 _textureCache,
                                                 *target,
                                                 NULL,             // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA,          // opengl format
                                                 size.width,
                                                 size.height,
                                                 GL_BGRA,          // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 texture);

    CFRelease(empty);
    CFRelease(attrs);
}
Can anyone tell me why iOS is flipped like this? I've since noticed other people with the same problem (for example here), but I haven't found a solution yet.
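One thing that fits the symptom: the render target here is a CVPixelBuffer-backed texture, and CoreVideo/IOSurface buffers are addressed top-down, while OpenGL writes row 0 at the bottom. Whatever consumes the pixel buffer afterwards will therefore see the frame upside down. Short of flipping Y in the projection, a CPU-side workaround is to reverse the rows after rendering. A minimal sketch, assuming a tightly packed RGBA8 buffer (`flip_rows_rgba` is a hypothetical helper, not part of the question's code):

```c
#include <stdlib.h>
#include <string.h>

/* Flip a tightly packed RGBA8 image in place by swapping
   row y with row (height - 1 - y). */
static void flip_rows_rgba(unsigned char *pixels, int width, int height) {
    size_t stride = (size_t)width * 4;          /* bytes per row */
    unsigned char *tmp = malloc(stride);
    if (tmp == NULL) return;
    for (int y = 0; y < height / 2; y++) {
        unsigned char *top    = pixels + (size_t)y * stride;
        unsigned char *bottom = pixels + (size_t)(height - 1 - y) * stride;
        memcpy(tmp, top, stride);
        memcpy(top, bottom, stride);
        memcpy(bottom, tmp, stride);
    }
    free(tmp);
}
```

Note that a real CVPixelBuffer row may be padded, so `CVPixelBufferGetBytesPerRow` would replace the `width * 4` stride in practice; the per-frame copy also costs memory bandwidth, which is why flipping in the projection matrix is usually preferred.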

Related

opengl es multi-pass blurring the second pass overrides the first pass

I am not very good with OpenGL ES, so forgive any beginner mistakes in the code. I have implemented a draw function that runs the same fragment shader with a uniform called blurringDirection. If I run the horizontal blur alone (blurringDirection = 0) it works fine, and likewise for the vertical blur on its own, but when I try to combine them the second pass overrides the first. Here is my draw() function:
fun draw(
    mvpMatrix: FloatArray?,
    vertexBuffer: FloatBuffer?,
    firstVertex: Int,
    vertexCount: Int,
    coordsPerVertex: Int,
    vertexStride: Int,
    texMatrix: FloatArray?,
    texBuffer: FloatBuffer?,
    textureId: Int,
    texStride: Int,
) {
    // Create an intermediate texture.
    val intermediateTexIdArr = IntArray(1)
    GLES20.glGenTextures(1, intermediateTexIdArr, 0)
    glBindTexture(GL_TEXTURE_2D, intermediateTexIdArr[0])
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 640, 360, 0, GL_RGBA, GL_UNSIGNED_BYTE, null)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)
    glBindTexture(GL_TEXTURE_2D, 0)

    // Create a framebuffer and attach the newly created texture to it.
    val frameBufferIdArr = IntArray(1)
    GLES20.glGenFramebuffers(1, frameBufferIdArr, 0)
    glBindFramebuffer(GL_FRAMEBUFFER, frameBufferIdArr[0])
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, intermediateTexIdArr[0], 0)
    val status = glCheckFramebufferStatus(GL_FRAMEBUFFER)
    Timber.d("KingArmstring: Framebuffer status is complete? ${status == GL_FRAMEBUFFER_COMPLETE}") // this prints true
    if (status != GL_FRAMEBUFFER_COMPLETE) throw Exception("Framebuffer is not setup correctly")

    // Bind to the framebuffer instead of to the screen.
    glBindFramebuffer(GL_FRAMEBUFFER, frameBufferIdArr[0])
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
    glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT)
    glEnable(GL_DEPTH_TEST)
    glDepthFunc(GL_LEQUAL)

    // Select the program.
    glUseProgram(programHandle)
    checkGlError("glUseProgram")

    // Set the texture.
    glActiveTexture(GL_TEXTURE0)
    glBindTexture(textureTarget, textureId) // textureId holds the original image

    // Copy the model / view / projection matrix over.
    glUniformMatrix4fv(mvpMatrixLoc, 1, false, mvpMatrix, 0)
    checkGlError("glUniformMatrix4fv")

    // Copy the texture transformation matrix over.
    glUniformMatrix4fv(texMatrixLoc, 1, false, texMatrix, 0)
    checkGlError("glUniformMatrix4fv")

    // Enable the "aPosition" vertex attribute.
    glEnableVertexAttribArray(positionLoc)
    checkGlError("glEnableVertexAttribArray")

    // Connect vertexBuffer to "aPosition".
    glVertexAttribPointer(
        positionLoc, coordsPerVertex,
        GL_FLOAT, false, vertexStride, vertexBuffer
    )
    checkGlError("glVertexAttribPointer")

    // Enable the "aTextureCoord" vertex attribute.
    glEnableVertexAttribArray(textureCoordLoc)
    checkGlError("glEnableVertexAttribArray")

    // Connect texBuffer to "aTextureCoord".
    glVertexAttribPointer(
        textureCoordLoc, 2,
        GL_FLOAT, false, texStride, texBuffer
    )
    checkGlError("glVertexAttribPointer")

    if (programType == VIDEO_STREAM) {
        glUniform1f(brightnessLoc, brightness)
        glUniform1f(contrastLoc, contrast)
        glUniform1f(saturationLoc, saturation)
        glUniform1f(gammaLoc, gamma)
        glUniform1f(whiteBalanceTempLoc, whiteBalanceTemp)
        glUniform1f(whiteBalanceTintLoc, whiteBalanceTint)
        glUniform1f(whiteBalanceEnabledLoc, whiteBalanceEnabled)
        glUniform1f(hueAdjLoc, hueAdj)
        glUniform1f(hueAdjEnabledLoc, hueAdjEnabled)
        glUniform1f(sharpnessLoc, sharpness)
        glUniform1f(imageWidthFactorLoc, imageWidthFactor)
        glUniform1f(imageHeightFactorLoc, imageHeightFactor)
    }
    glUniform1i(blurringDirectionLoc, 0)

    // First pass: render into the framebuffer with blurringDirection = 0.
    glDrawArrays(GL_TRIANGLE_STRIP, firstVertex, vertexCount)
    checkGlError("glDrawArrays")

    // Second pass: draw on the screen with blurringDirection = 1,
    // sampling the intermediate texture.
    glBindFramebuffer(GL_FRAMEBUFFER, 0)
    glActiveTexture(GL_TEXTURE0)
    glBindTexture(GL_TEXTURE_2D, intermediateTexIdArr[0])
    glUniform1i(blurringDirectionLoc, 1)
    glDrawArrays(GL_TRIANGLE_STRIP, firstVertex, vertexCount)

    glDisableVertexAttribArray(positionLoc)
    glDisableVertexAttribArray(textureCoordLoc)
    GLES20.glDeleteFramebuffers(1, frameBufferIdArr, 0)
    GLES20.glDeleteTextures(1, intermediateTexIdArr, 0)
}

OpenGL ES 2.0 in Android NDK: Nothing being drawn

I have a 2D game project that I'm porting to Android that utilizes OpenGL ES 2.0. I am having trouble getting anything drawn on the screen (except for a solid color from clearing the screen). Everything renders just fine when running in my Windows environment, but of course the environment is set up differently for the different version of OpenGL.
I followed the native-activity sample and took advice from several other OpenGL ES 2.0 resources to compose what I currently have.
I have checked everything I know how to with no anomalous results. As mentioned, glClear works, and displays the color set by glClearColor. I also know that every frame is being rendered, as changing glClearColor frame-by-frame displays the different colors. Of course, the application properly compiles. My textures are loaded from the proper location in the app's cache. glGetError is returning GL_NO_ERROR at every step in the process, so what I am doing appears to be accepted by OpenGL. My shaders are loaded without error. I have also tested this on both a few emulators and my physical android device, so it isn't localized to a specific device configuration.
I speculate that it must be some mistake in how I initialize and set up OpenGL. I am hoping someone more versed in OpenGL ES than I am will be able to help root out my problem. I am pasting the different relevant sections of my code below. engine is a global struct I am presently using out of laziness.
Initializing the display
static int AND_InitDisplay() {
    // Desired display attributes
    const EGLint attribs[] = {
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_BLUE_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_RED_SIZE, 8,
        EGL_ALPHA_SIZE, 8,
        EGL_DEPTH_SIZE, 16,
        EGL_NONE
    };
    EGLint w, h, dummy, format;
    EGLint numConfigs;
    EGLConfig config;
    EGLSurface surface;
    EGLContext context;

    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(display, 0, 0);
    eglChooseConfig(display, attribs, &config, 1, &numConfigs);
    eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
    ANativeWindow_setBuffersGeometry(engine->app->window, 0, 0, format);
    surface = eglCreateWindowSurface(display, config, engine->app->window, NULL);

    EGLint const attrib_list[3] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
    context = eglCreateContext(display, config, NULL, attrib_list);

    if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE) {
        LOGW("Unable to eglMakeCurrent");
        return -1;
    }

    eglQuerySurface(display, surface, EGL_WIDTH, &w);
    eglQuerySurface(display, surface, EGL_HEIGHT, &h);

    engine->display = display;
    engine->context = context;
    engine->surface = surface;
    engine->width = w;
    engine->height = h;

    // Initialize GL state.
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
    return 0;
}
Drawing a frame
static void AND_drawFrame() {
    if (engine->display == NULL) {
        LOGW("DB E: DISPLAY IS NULL");
        // No display.
        return;
    }

    // Clearing with red color. This displays properly.
    glClearColor(1.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // eglSwapBuffers results in no visible change
    eglSwapBuffers(engine->display, engine->surface);
}
Example of preparing VBO data
I understand many wouldn't like the idea of using multiple VBOs for the same geometry. I would love to hear if this code is unorthodox or incorrect, but I am not focused on that unless it is the root of my problem.
GLfloat charPosVerts[] = {
    p0.x, p0.y, 0.f,
    p1.x, p0.y, 0.f,
    p1.x, p1.y, 0.f,
    p0.x, p0.y, 0.f,
    p1.x, p1.y, 0.f,
    p0.x, p1.y, 0.f
};
GLfloat charTexVerts[] = {
    0.0, 0.0,
    textures[texid].w, 0.0,
    textures[texid].w, textures[texid].h,
    0.0, 0.0,
    textures[texid].w, textures[texid].h,
    0.0, textures[texid].h
};
GLfloat charColorVerts[] = {
    e->color.r, e->color.g, e->color.b, e->color.a,
    e->color.r, e->color.g, e->color.b, e->color.a,
    e->color.r, e->color.g, e->color.b, e->color.a,
    e->color.r, e->color.g, e->color.b, e->color.a,
    e->color.r, e->color.g, e->color.b, e->color.a,
    e->color.r, e->color.g, e->color.b, e->color.a
};

glGenBuffers(1, &(e->vboPos));
glGenBuffers(1, &(e->vboTex));
glGenBuffers(1, &(e->vboColor));

glBindBuffer(GL_ARRAY_BUFFER, e->vboPos);
glBufferData(GL_ARRAY_BUFFER, sizeof(charPosVerts), charPosVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(shaderIDs.attribPosition, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribPosition);

glBindBuffer(GL_ARRAY_BUFFER, e->vboTex);
glBufferData(GL_ARRAY_BUFFER, sizeof(charTexVerts), charTexVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(shaderIDs.attribTexCoord, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribTexCoord);

glBindBuffer(GL_ARRAY_BUFFER, e->vboColor);
glBufferData(GL_ARRAY_BUFFER, sizeof(charColorVerts), charColorVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(shaderIDs.attribColors, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribColors);
Example of drawing VBO
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, CORE_GetBmpOpenGLTex(texix));
glUniform1i(shaderIDs.uniTexture, 0);

// Draw the sprite
glBindBuffer(GL_ARRAY_BUFFER, e->vboPos);
glVertexAttribPointer(shaderIDs.attribPosition, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribPosition);
glBindBuffer(GL_ARRAY_BUFFER, e->vboTex);
glVertexAttribPointer(shaderIDs.attribTexCoord, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribTexCoord);
glBindBuffer(GL_ARRAY_BUFFER, e->vboColor);
glVertexAttribPointer(shaderIDs.attribColors, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribColors);
glDrawArrays(GL_TRIANGLES, 0, 18);
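One thing worth double-checking in the draw call above: the count argument to glDrawArrays is a number of vertices, not floats. The position array holds 18 floats but only 6 vertices (3 components each), so `glDrawArrays(GL_TRIANGLES, 0, 18)` asks GL to read 18 vertices and walks past the end of the attribute arrays. A small sketch of the arithmetic (placeholder data, not the poster's):

```c
#include <stddef.h>

typedef float GLfloat;

/* Same shape as charPosVerts above: 6 vertices, 3 components each
   (placeholder coordinates). */
static const GLfloat charPosVerts[] = {
    0.f, 0.f, 0.f,   1.f, 0.f, 0.f,   1.f, 1.f, 0.f,
    0.f, 0.f, 0.f,   1.f, 1.f, 0.f,   0.f, 1.f, 0.f
};

/* glDrawArrays takes a vertex count, so divide the float count by the
   number of components per vertex. */
static int vertexCountOf(size_t totalFloats, int componentsPerVertex) {
    return (int)(totalFloats / (size_t)componentsPerVertex);
}
```

Here `vertexCountOf(sizeof(charPosVerts) / sizeof(charPosVerts[0]), 3)` evaluates to 6, suggesting the draw call should be `glDrawArrays(GL_TRIANGLES, 0, 6)`.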
Vertex Shader
The shaders are very simple.
attribute vec3 position;
attribute vec2 texCoord;
attribute vec4 colors;

varying vec2 texCoordVar;
varying vec4 colorsVar;

void main() {
    gl_Position = vec4(position, 1.0);
    texCoordVar = texCoord;
    colorsVar = colors;
}
Fragment Shader
uniform sampler2D texture;

varying vec2 texCoordVar;
varying vec4 colorsVar;

void main() {
    gl_FragColor = texture2D(texture, texCoordVar) * colorsVar;
}
Thanks for looking at this long post. Help is very much appreciated.
The posted code is not drawing anything. From the AND_drawFrame() function:
// Clearing with red color. This displays properly.
glClearColor(1.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// eglSwapBuffers results in no visible change
eglSwapBuffers(engine->display, engine->surface);
Based on this, the draw code is either never invoked, or the window is cleared after drawing, which would wipe out everything that was drawn before.

GL_INVALID_FRAMEBUFFER_OPERATION Android NDK GL FrameBuffer and glReadPixels returns 0 0 0 0

My C++ code was designed for iOS, and I have now ported it to the NDK with minimal modifications.
I bind my framebuffer and call
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
then I bind the main framebuffer like this:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
After that, glGetError() returns GL_INVALID_FRAMEBUFFER_OPERATION.
I can draw into my framebuffer and use its texture to draw into the main framebuffer, but when I call glReadPixels I get zeros.
This code worked on iOS and mostly works on Android, except for glReadPixels.
glCheckFramebufferStatus(GL_FRAMEBUFFER) returns GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT (0x8CD7).
--
I will consider sample code that gives me pixel data from a framebuffer or texture, which I can then save to a file, as an answer.
Right now I can draw to a buffer with an attached texture and use that texture to draw to the main buffer, but I can't get the pixels out of the framebuffer/texture to save to a file or post to Facebook.
I have been able to do a glReadPixels from a framebuffer modifying the NDK example "GL2JNIActivity".
I just changed gl_code.cpp, where I have added an initialization function:
char *pixel = NULL;
GLuint fbuffer, rbuffers[2];
int width, height;

bool setupMyStuff(int w, int h) {
    // Allocate the memory
    if (pixel != NULL) free(pixel);
    pixel = (char *) calloc(w * h * 4, sizeof(char)); // 4 components
    if (pixel == NULL) {
        LOGE("memory not allocated!");
        return false;
    }

    // Create and bind the framebuffer
    glGenFramebuffers(1, &fbuffer);
    checkGlError("glGenFramebuffer");
    glBindFramebuffer(GL_FRAMEBUFFER, fbuffer);
    checkGlError("glBindFramebuffer");

    glGenRenderbuffers(2, rbuffers);
    checkGlError("glGenRenderbuffers");
    glBindRenderbuffer(GL_RENDERBUFFER, rbuffers[0]);
    checkGlError("glGenFramebuffer[color]");
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGB565, w, h);
    checkGlError("glGenFramebuffer[color]");
    glBindRenderbuffer(GL_RENDERBUFFER, rbuffers[1]);
    checkGlError("glGenFramebuffer[depth]");
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);
    checkGlError("glGenFramebuffer[depth]");

    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbuffers[0]);
    checkGlError("glGenFramebuffer[color]");
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbuffers[1]);
    checkGlError("glGenFramebuffer[depth]");

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        LOGE("Framebuffer not complete");
        return false;
    }

    // Turn this on to compare the results in pixel when the fb is active
    // with the ones obtained from the screen
    if (false) {
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        checkGlError("glBindFramebuffer");
    }

    width = w;
    height = h;
    return true;
}
which does nothing more than initialize the framebuffer and allocate space for the output, and which I call at the very end of
bool setupGraphics(int w, int h);
and a block to save the output into my pixel array, which I execute right after the
checkGlError("glDrawArrays");
statement in renderFrame():
{
    // save the output of glReadPixels somewhere... I'm a noob about JNI however,
    // so I leave this part as an exercise for the reader ;-)
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
    checkGlError("glReadPixel(0, 0, w, h, GL_RGB, GL_UNSIGNED_BYTE, pixel)");
    int end = width * height * 4 - 1;

    // This function returns the same results regardless of where the data comes from (screen or fbo)
    LOGI("glReadPixel => (%hhu,%hhu,%hhu,%hhu),(%hhu,%hhu,%hhu,%hhu),(%hhu,%hhu,%hhu,%hhu),..., (%hhu, %hhu, %hhu, %hhu)",
         pixel[0], pixel[1], pixel[2], pixel[3],
         pixel[4], pixel[5], pixel[6], pixel[7],
         pixel[8], pixel[9], pixel[10], pixel[11],
         pixel[end - 3], pixel[end - 2], pixel[end - 1], pixel[end]);
}
The results in logcat are identical whether you draw to the framebuffer or to the screen:
I/libgl2jni( 5246): glReadPixel => (0,4,0,255),(8,4,8,255),(0,4,0,255),..., (0, 4, 0, 255)
I/libgl2jni( 5246): glReadPixel => (8,4,8,255),(8,8,8,255),(0,4,0,255),..., (8, 8, 8, 255)
I/libgl2jni( 5246): glReadPixel => (8,8,8,255),(8,12,8,255),(8,8,8,255),..., (8, 8, 8, 255)
I/libgl2jni( 5246): glReadPixel => (8,12,8,255),(16,12,16,255),(8,12,8,255),..., (8, 12, 8, 255)
I/libgl2jni( 5246): glReadPixel => (16,12,16,255),(16,16,16,255),(8,12,8,255),..., (16, 16, 16, 255)
[...]
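As a side note on those values: the color renderbuffer above is GL_RGB565, which would explain why the logged channels land on multiples of 8 for red/blue (5 bits each) and 4 for green (6 bits) once expanded back to 8 bits. A hedged sketch of the round trip, using the common bit-replication expansion (the GPU's exact rounding may differ):

```c
#include <stdint.h>

/* Pack an 8-bit RGB triple into RGB565 by truncation. */
static uint16_t pack565(uint8_t r, uint8_t g, uint8_t b) {
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

/* Expand RGB565 back to 8 bits per channel via bit replication. */
static void unpack565(uint16_t p, uint8_t *r, uint8_t *g, uint8_t *b) {
    uint8_t r5 = (p >> 11) & 0x1F;
    uint8_t g6 = (p >> 5) & 0x3F;
    uint8_t b5 = p & 0x1F;
    *r = (uint8_t)((r5 << 3) | (r5 >> 2));
    *g = (uint8_t)((g6 << 2) | (g6 >> 4));
    *b = (uint8_t)((b5 << 3) | (b5 >> 2));
}
```

For example, an input of (10, 6, 12) round-trips to (8, 4, 8), matching tuples like (8,4,8,255) in the log: the data really is being read back, just at 565 precision.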
Tested on Froyo (a phone) and on a 4.0.3 tablet.
You can find all other details in the original NDK example (GL2JNIActivity).
Hope this helps.
EDIT: also make sure to check:
Android NDK glReadPixels() from offscreen buffer
where the poster seemed to have your same symptoms (and solved the problem calling glReadPixels from the right thread).

Texture won't appear using native code to load it with opengl es on android

I'm trying to apply a texture to an object in OpenGL ES from the native side, and I have no idea why it isn't showing up. I have a couple of random objects drawn on the screen, and they're all visible and everything. I applied color to some shapes using glColor4f and that works fine. I'm trying to use a texture on the last object that gets drawn, but it ends up the same color as the previous one.
I was originally loading the texture from a PNG, but I decided to simplify things by loading it from a file that contains raw RGB data. It's 16 x 16 pixels, and I've tried sizes up to 512 x 512 with the same result.
Here's how I'm initializing everything:
bool Activity::_initGL () {
    const EGLint attribs[] = {
        EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
        EGL_BLUE_SIZE, 8,
        EGL_GREEN_SIZE, 8,
        EGL_RED_SIZE, 8,
        EGL_NONE
    };
    EGLint dummy, format;
    EGLint numConfigs;
    EGLConfig config;

    display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(display, 0, 0);
    eglChooseConfig(display, attribs, &config, 1, &numConfigs);
    eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
    ANativeWindow_setBuffersGeometry(app->window, 0, 0, format);
    surface = eglCreateWindowSurface(display, config, app->window, NULL);
    context = eglCreateContext(display, config, NULL, NULL);

    if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE) {
        LOGE("Unable to eglMakeCurrent");
        return false;
    }

    eglQuerySurface(display, surface, EGL_WIDTH, &width);
    eglQuerySurface(display, surface, EGL_HEIGHT, &height);
    glViewport(0, 0, width, height);
    return true; // success
}
And then I enable the necessary things and try to create the texture:
void postInit () {
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
    glDisable( GL_BLEND );
    glDisable( GL_LIGHTING );
    // glEnable(GL_CULL_FACE);
    glEnable( GL_TEXTURE_2D );
    glShadeModel(GL_SMOOTH);
    glDisable(GL_DEPTH_TEST);
    glClearColor(0, 0, 0, 1);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    // glTexEnvx( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
    glMatrixMode( GL_MODELVIEW );

    GLuint texIDarray[1];
    glGenTextures( 1, texIDarray );
    glActiveTexture( GL_TEXTURE0 );
    glBindTexture( GL_TEXTURE_2D, texIDarray[0] );
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, (GLsizei)16, (GLsizei)16, 0, GL_RGB, GL_UNSIGNED_BYTE, protData);
}
And here's where the texture gets drawn, someday:
void drawImpl () {
    glClear(GL_COLOR_BUFFER_BIT);
    glLoadIdentity();
    // glDisable(GL_TEXTURE_2D);
    // glTexEnvx( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
    // glEnableClientState(GL_VERTEX_ARRAY);
    // glViewport(0, 0, wid, hei);

#define fX(x) ((int)(x * (1 << 16)))

    static int verts[6] = {
        0, 0,
        65536, 0,
        0, 30000
    };
    glVertexPointer(2, GL_FIXED, 0, verts);
    // glColor4f(1,0,1,1);
    glDrawArrays(GL_TRIANGLES, 0, 3);

    static int poo[12] = {
        40000, -5000,
        40000, -30000,
        60000, -5000,
        60000, -5000,
        40000, -30000,
        60000, -30000
    };
    glVertexPointer(2, GL_FIXED, 0, poo);
    // glColor4f(1,1,1,1);
    glDrawArrays(GL_TRIANGLES, 0, 6);

    static int pee[12] = {
        40000,  5000,
        60000,  5000,
        60000, 30000,
        40000,  5000,
        60000, 30000,
        40000, 30000
    };
    glVertexPointer(2, GL_FIXED, 0, pee);
    // glColor4f(1,0,1,1);
    glDrawArrays(GL_TRIANGLES, 0, 6);

    glEnable(GL_TEXTURE_2D);
    static int squareVerts[12] = {
        0, 0,
        fX(1), 0,
        0, fX(1),
        0, fX(1),
        fX(1), 0,
        fX(1), fX(1)
    };
    static int texCoords[12] = {
        0, 0,
        fX(1), 0,
        0, fX(1),
        0, fX(1),
        fX(1), 0,
        fX(1), fX(1)
    };
    // glTranslatef( (float)-.25, (float)-.5, (float)0);
    // glColor4f(0,0,0,0);
    // glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glActiveTexture( GL_TEXTURE0 );
    // glBindTexture(GL_TEXTURE_2D, texID);
    glTexEnvx( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
    glTexParameterx( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT );
    glTexParameterx( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT );
    glVertexPointer(2, GL_FIXED, 0, squareVerts);
    glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
    // glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable( GL_TEXTURE_2D );
    glDrawArrays(GL_TRIANGLES, 0, 6);
    // glDisableClientState(GL_VERTEX_ARRAY);
    // glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I deliberately left some of the commented out things to show the other things I have tried doing.
I am totally at a dead end with this right now. If anyone has any suggestions or anything it would make me super happy, and that is good.
It seems like you messed up your texture coordinates: they should be between 0 and 1, not between 0 and 1 << 16 (the array is filled with fixed-point values but passed to glTexCoordPointer as GL_FLOAT). Another thing: glColor4f also affects your texture by modulating it; for a plain texture draw it needs to be set to (1, 1, 1, 1).
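To see concretely why passing those fixed-point ints as GL_FLOAT goes wrong: GL just reinterprets the bytes, and the bit pattern of the integer 65536 read as an IEEE-754 float is a tiny denormal, nowhere near 1.0. A standalone sketch (not the poster's code):

```c
#include <string.h>

/* Reinterpret the bytes of a 16.16 fixed-point int as an IEEE-754 float,
   which is effectively what happens when an int array is declared GL_FLOAT. */
static float reinterpretFixedAsFloat(int fixed) {
    float f;
    memcpy(&f, &fixed, sizeof f);
    return f;
}
```

`reinterpretFixedAsFloat(1 << 16)` is on the order of 1e-40 (the bit pattern 0x00010000 has an all-zero exponent, so it decodes as a denormal), so all of the texture coordinates collapse to essentially zero instead of spanning 0 to 1.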

OpenGL ES texture is duplicating in 4 columns and rows

I am trying to display a texture on a square using OpenGL ES 1 with the NDK.
I am using this hack to load a PNG from the APK: http://www.anddev.org/ndk_opengl_-_loading_resources_and_assets_from_native_code-t11978.html
This seems to work fine.
When I apply the texture to my quad, though, it appears duplicated.
After some research I think the problem is coming from my rendering code:
//the order is correct even if it is not in the numeric order
GLfloat vertexBuffer[] = {
_vertices[0].x, _vertices[0].y,
_vertices[3].x, _vertices[3].y,
_vertices[1].x, _vertices[1].y,
_vertices[2].x, _vertices[2].y,
};
GLfloat texCoords[] = {
0.0, 1.0, // left-bottom
1.0, 1.0, // right-bottom
0.0, 0.0, // left-top
1.0, 0.0 // right-top
};
glBindTexture(GL_TEXTURE_2D, _texture->getTexture());
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, 0);
The problem was definitely in the PNG loading function.
I added a test that checks whether the image contains an alpha channel, using libpng:
bool hasAlpha;
switch (info_ptr->color_type) {
    case PNG_COLOR_TYPE_RGBA:
        hasAlpha = true;
        break;
    case PNG_COLOR_TYPE_RGB:
        hasAlpha = false;
        break;
    default:
        png_destroy_read_struct(&png_ptr, &info_ptr, NULL);
        zip_fclose(file);
        return TEXTURE_LOAD_ERROR;
}
And I changed the glTexImage2D parameters internalformat and format:
glTexImage2D(GL_TEXTURE_2D, 0, hasAlpha ? GL_RGBA : GL_RGB, width, height, 0, hasAlpha ? GL_RGBA : GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) image_data);
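Matching internalformat/format to the data is one half of the fix; the other half is that glTexImage2D reads exactly as many bytes as the format implies: RGBA rows are width * 4 bytes, RGB rows width * 3, rounded up to the GL_UNPACK_ALIGNMENT (default 4). A hedged sketch of that sizing, with a hypothetical helper (it pads every row, a slight simplification of the spec, which leaves the final row unpadded):

```c
#include <stddef.h>
#include <stdbool.h>

/* Approximate byte size of an image as glTexImage2D will read it:
   3 or 4 bytes per pixel, each row rounded up to the unpack alignment. */
static size_t imageUploadSize(int width, int height, bool hasAlpha, int unpackAlignment) {
    size_t bpp = hasAlpha ? 4 : 3;                 /* GL_RGBA vs GL_RGB */
    size_t row = (size_t)width * bpp;
    size_t pad = (size_t)unpackAlignment;
    row = (row + pad - 1) / pad * pad;             /* round the row up to the alignment */
    return row * (size_t)height;
}
```

With the default alignment of 4, a 16x16 GL_RGB image is read as 48-byte rows (16 * 3, already a multiple of 4), 768 bytes total; supplying a buffer sized for the wrong format shifts every row and produces exactly the kind of skewed, repeating output described above.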
