OpenGL ES texture is duplicating in 4 columns and rows - android

I am trying to display a texture on a square with OpenGL ES 1 using the NDK.
I am using this hack to load a PNG from the APK: http://www.anddev.org/ndk_opengl_-_loading_resources_and_assets_from_native_code-t11978.html
That part seems to work fine.
But when I apply the texture to my quad, it comes out duplicated (tiled into 4 columns and rows).
After some research I think the problem is in my rendering code:
//the order is correct even if it is not in the numeric order
GLfloat vertexBuffer[] = {
_vertices[0].x, _vertices[0].y,
_vertices[3].x, _vertices[3].y,
_vertices[1].x, _vertices[1].y,
_vertices[2].x, _vertices[2].y,
};
GLfloat texCoords[] = {
0.0, 1.0, // left-bottom
1.0, 1.0, // right-bottom
0.0, 0.0, // left-top
1.0, 0.0 // right-top
};
glBindTexture(GL_TEXTURE_2D, _texture->getTexture());
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glVertexPointer(2, GL_FLOAT, 0, vertexBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, 0);

The problem was definitely in the PNG loading function.
I added a test with libpng to check whether the image contains an alpha channel:
bool hasAlpha;
switch (info_ptr->color_type) {
case PNG_COLOR_TYPE_RGBA:
hasAlpha = true;
break;
case PNG_COLOR_TYPE_RGB:
hasAlpha = false;
break;
default:
png_destroy_read_struct(&png_ptr, &info_ptr, NULL);
zip_fclose(file);
return TEXTURE_LOAD_ERROR;
}
And I changed the glTexImage2D parameters "internalformat" and "format":
glTexImage2D(GL_TEXTURE_2D, 0, hasAlpha ? GL_RGBA : GL_RGB, width, height, 0, hasAlpha ? GL_RGBA : GL_RGB, GL_UNSIGNED_BYTE, (GLvoid*) image_data);
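For reference, here is a minimal sketch of what the corrected upload path might look like (textureId is a placeholder name; image_data, width, height and hasAlpha come from the loader above). In OpenGL ES the unsized internalformat must match format, and setting the unpack alignment to 1 is an extra precaution against row-padding surprises with tightly packed RGB rows:

GLenum fmt = hasAlpha ? GL_RGBA : GL_RGB;
glBindTexture(GL_TEXTURE_2D, textureId);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* libpng rows are tightly packed */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, fmt, width, height, 0,
             fmt, GL_UNSIGNED_BYTE, (GLvoid*) image_data);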

Related

Why would an OpenGL ES render on iOS be flipped vertically?

I've got some C code to render some OpenGL stuff and it's running on both Android and iOS. On Android it looks fine. But on iOS it is flipped vertically.
Here's some simple code to demonstrate (only copied the relevant parts because OpenGL C code is long-winded):
GLfloat vVertices[] = {
0.0f, 0.5f,
-0.5f, -0.5f,
0.5f, -0.5f
};
glViewport(0, 0, context->width, context->height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(data->programObject);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vVertices);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);
On Android it looks like this:
But on iOS it looks like this:
The only thing that differs between the two platforms is the initialization code for OpenGL ES, since all the OpenGL code is shared C code. However, I can't spot anything obviously wrong with the init code.
Here's the init code (I removed most error handling because there are no errors being triggered apart from the one I left in):
- (void)initGL {
_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
[EAGLContext setCurrentContext:_context];
[self createCVBufferWithSize:_renderSize withRenderTarget:&_target withTextureOut:&_texture];
glBindTexture(CVOpenGLESTextureGetTarget(_texture), CVOpenGLESTextureGetName(_texture));
glTexImage2D(GL_TEXTURE_2D, // target
0, // level
GL_RGBA, // internalformat
_renderSize.width, // width
_renderSize.height, // height
0, // border
GL_RGBA, // format
GL_UNSIGNED_BYTE, // type
NULL); // data
// HACK: we always get an "error" here (GL_INVALID_OPERATION) despite everything working. See https://stackoverflow.com/questions/57104033/why-is-glteximage2d-returning-gl-invalid-operation-on-ios
glGetError();
glGenRenderbuffers(1, &_depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _renderSize.width, _renderSize.height);
glGenFramebuffers(1, &_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(_texture), 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthBuffer);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
} else {
NSLog(#"Successfully initialized GL");
char* glRendererName = getGlRendererName();
char* glVersion = getGlVersion();
char* glShadingLanguageVersion = getGlShadingLanguageVersion();
NSLog(#"OpenGL renderer name: %s, version: %s, shading language version: %s", glRendererName, glVersion, glShadingLanguageVersion);
}
}
And here's the code that creates the actual texture (using EAGL):
- (void)createCVBufferWithSize:(CGSize)size
withRenderTarget:(CVPixelBufferRef *)target
withTextureOut:(CVOpenGLESTextureRef *)texture {
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_textureCache);
if (err) return;
CFDictionaryRef empty;
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault,
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);
CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
kCVPixelFormatType_32BGRA, attrs, target);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_textureCache,
*target,
NULL, // texture attributes
GL_TEXTURE_2D,
GL_RGBA, // opengl format
size.width,
size.height,
GL_BGRA, // native iOS format
GL_UNSIGNED_BYTE,
0,
texture);
CFRelease(empty);
CFRelease(attrs);
}
Can anyone tell me why iOS is flipped like this? I've since noticed other people with the same problem, such as here but haven't found a solution yet.
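No definitive answer is recorded here, but one plausible explanation is the origin mismatch: OpenGL's coordinate system has Y pointing up, while CVPixelBuffer/UIKit image data is stored top-down, so anything rendered into the CVPixelBuffer-backed texture appears upside down once the buffer is consumed by something expecting top-down rows. As a hedged sketch of one possible workaround in the shared C code, flip Y only on the pixel-buffer path; the renderingToPixelBuffer flag and the uFlipY uniform are assumptions, not part of the original code:

GLint uFlipY = glGetUniformLocation(data->programObject, "uFlipY");
glUseProgram(data->programObject);
/* assumed vertex shader line: gl_Position = vec4(aPos.x, aPos.y * uFlipY, 0.0, 1.0); */
glUniform1f(uFlipY, renderingToPixelBuffer ? -1.0f : 1.0f);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vVertices);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);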

C++,Android | SDL and OpenGL ES (nothing drawn)

I want to use both SDL and GLES 1 on Android using a native activity.
SDL offers a function to create an OpenGL context (SDL_GL_CreateContext).
Clearing the screen and swapping the buffers works, but every draw attempt fails.
To make sure I wasn't doing anything wrong, I made the drawing attempt as small as possible.
Here is the minimal sample:
// creating OpengGL-context [...]
while (true) // basic mainloop
{
// set viewport and projectionmatrix
glViewport(0, 0, engine->getWidth(), engine->getHeight()); // width and height in pixels
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrthof(-1, 1, 1, -1, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
//draw shape
GLfloat vertices[] = {1,0,0, 0,1,0, -1,0,0};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);
}
// free memory [...]
The problem is that nothing is drawn, even though my OpenGL context is valid.
Any help would be appreciated :)
Thanks
Edit:
I actually forgot to force OpenGL 1, using
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 1);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
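For anyone hitting the same thing, a minimal sketch of the setup order this implies, assuming SDL2 (window title, size and flags are placeholders); the attribute calls have to happen before the window and context are created:

SDL_Init(SDL_INIT_VIDEO);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 1);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 1);
SDL_Window *window = SDL_CreateWindow("demo",
    SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
    0, 0, SDL_WINDOW_OPENGL | SDL_WINDOW_FULLSCREEN);
SDL_GLContext context = SDL_GL_CreateContext(window);
/* ... main loop: clear, draw, then SDL_GL_SwapWindow(window) ... */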

OPENGL ES 2.0. Android. Strange behaviour of depth buffer

My Android application shows unexpected behaviour on a PowerVR SGX 544MP.
When the render mode is set to RENDERMODE_WHEN_DIRTY it seems that the depth buffer does not work; however, when the mode is set to RENDERMODE_CONTINUOUSLY the drawing comes out right:
Wrong result:
Proper result:
The emulator draws correctly in both cases.
The device's default depth buffer is 24 bit; setting it to the same range as the emulator (16 bit) did not change the drawing. I also tried varying the near and far values of the projection matrix, without success.
Only one of my matrices has a modified near plane. That matrix may put bad data into the depth buffer, so I turn off writing to the depth buffer before drawing with it: I set GLES20.glDepthMask(false) before calling glDrawElements.
OpenGL ES initialisation and working with VBOs are new to me, so perhaps my misunderstanding runs deeper than it seems.
I send different matrix values to the uniforms and draw with the same VBOs.
I enable the vertex attributes globally only once and never disable them later.
//MyGLSurfaceView
public MyGLSurfaceView(Context context) {
super(context);
setEGLContextClientVersion(2);
// super.setEGLConfigChooser(8,8,8,8,16,0); // same result
mRenderer = new MyGLRenderer(context);
setRenderer(mRenderer);
setRenderMode(GLSurfaceView.RENDERMODE_CONTINUOUSLY);
}
//MyGLRenderer
@Override
public void onSurfaceCreated(GL10 unused, EGLConfig config) {
GLES20.glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthRangef(0.f, 1.f);
GLES20.glClearDepthf(1.f);
GLES20.glEnable(GLES20.GL_CULL_FACE);
GLES20.glFrontFace(GLES20.GL_CCW);
GLES20.glDepthFunc(GLES20.GL_LEQUAL);
}
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
// Adjust the viewport based on geometry changes,
// such as screen rotation
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
perspectiveFieldOfViewRH(mProjectionMatrix, 0, 28.4f, ratio, 0.4f, 28.f);
}
@Override
public void onDrawFrame(GL10 unused) {
GLES20.glDepthMask( true );
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
int i,j;
// turn off the writing. Only read
GLES20.glDepthMask( false );
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ZERO);
GLES20.glUseProgram(prg_shaderCube);
// draw with modified projection matrix:
for (i = 0; i < 4; i++){
for (j = 0; j < 6; j++){
System.arraycopy(arrFacesMatrices[i][j], 0, mModelMatrix, 0, 16);
mModelMatrix[14] = translations[i];
Matrix.multiplyMM(mMirrorFlankWithClippingMVP, 0, mMirrorFlankViewProjectionWithClippingMatrix, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(u_changematrixCube, 1, false, mMirrorFlankWithClippingMVP, 0);
GLES20.glUniformMatrix4fv(u_modelmatrixCube, 1, false, mModelMatrix, 0);
GLES20.glCullFace(GLES20.GL_BACK);
switch(pattern[i][j]){
case 0:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glVertexAttribPointer(attr_position_cube, 3, GLES20.GL_FLOAT, false, STRIDE_IN_FLAT, 0);
GLES20.glVertexAttribPointer(attr_color_cube, 3, GLES20.GL_FLOAT, false, STRIDE_IN_FLAT, 12);
GLES20.glVertexAttribPointer(attr_normal_cube, 3, GLES20.GL_FLOAT, false, STRIDE_IN_FLAT, 24);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo[0]);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, capacityFlat1, GLES20.GL_UNSIGNED_SHORT, 0);
break;
case 1:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[1]);
....
break;
....
....
}
}
}
// others
GLES20.glDepthMask( true );
for (i = 3; i >= 0; i--){
for (j = 0; j < 6; j++){
System.arraycopy(arrFacesMatrices[i][j], 0, mModelMatrix, 0, 16);
mModelMatrix[14] = translations[i];
Matrix.multiplyMM(mMirrorFlankMVP, 0, mMirrorFlankViewProjectionMatrix, 0, mModelMatrix, 0);
Matrix.multiplyMM(mMirrorDownMVP, 0, mMirrorDownViewProjectionMatrix, 0, mModelMatrix, 0);
Matrix.multiplyMM(mMVP, 0, mViewMatrix, 0, mModelMatrix, 0);
GLES20.glUniformMatrix4fv(u_modelmatrixCube, 1, false, mModelMatrix, 0);
switch(pattern[i][j]){
case 0:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[0]);
GLES20.glVertexAttribPointer(attr_position_cube, 3, GLES20.GL_FLOAT, false, STRIDE_IN_FLAT, 0);
GLES20.glVertexAttribPointer(attr_color_cube, 3, GLES20.GL_FLOAT, false, STRIDE_IN_FLAT, 12);
GLES20.glVertexAttribPointer(attr_normal_cube, 3, GLES20.GL_FLOAT, false, STRIDE_IN_FLAT, 24);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, ibo[0]);
GLES20.glCullFace(GLES20.GL_FRONT);
GLES20.glUniformMatrix4fv(u_changematrixCube, 1, false, mMirrorFlankMVP, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, capacityFlat1, GLES20.GL_UNSIGNED_SHORT, 0);
GLES20.glUniformMatrix4fv(u_changematrixCube, 1, false, mMirrorDownMVP, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, capacityFlat1, GLES20.GL_UNSIGNED_SHORT, 0);
GLES20.glCullFace(GLES20.GL_BACK);
GLES20.glUniformMatrix4fv(u_changematrixCube, 1, false, mMVP, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES, capacityFlat1, GLES20.GL_UNSIGNED_SHORT, 0);
break;
case 1:
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo[1]);
....
break;
....
....
}
}
}
}
I would prefer to use RENDERMODE_WHEN_DIRTY, and I would like to understand what is happening with my depth buffer.
The following is not as conclusive as I normally like answers to be. Particularly, I have no explanation why this would behave differently between RENDERMODE_WHEN_DIRTY and RENDERMODE_CONTINUOUSLY. But there is one point in your question that is worth explaining anyway.
Only one of my matrices has a modified near plane. That matrix may put bad data into the depth buffer.
You'll have to be very careful here. The range between near and far plane gets mapped to the range of the depth buffer. So if you use a standard projection matrix, and change the near plane, this mapping will change.
In other words, say you use a vertex at a given z-value (in eye coordinates) for your rendering while your projection matrix was set up with a near value of near1. Now you set the projection matrix with near value near2, and use a vertex with the same z-value. This vertex will now be mapped to a different depth buffer value. So depending on your projection, the same vertex will be mapped to different depth buffer values. Or a vertex that is farther away from the camera can end up with a smaller (closer) depth buffer value because you changed your projection matrix.
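To make this concrete: with a standard glFrustum/gluPerspective-style projection and the default depth range, a point at eye-space distance d (near plane n, far plane f) is stored in the depth buffer as

depth(d) = f * (d - n) / (d * (f - n))

so the stored value depends directly on n: render the same vertex with two different near planes and it writes two different depth values.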
You could try to compensate for this by setting the depth range accordingly. But even that seems tricky if you use a perspective projection, because the mapping of eye-space depth to depth buffer values is not linear.
If you need to clip away close parts of some of your geometry, you're probably better off keeping the projection matrix unchanged and clipping explicitly. OpenGL ES does not support arbitrary clip planes, so the easiest approach is to pass the distance to the fragment shader and discard the clipped fragments there. Or, if it's at all possible, have logic in your app code to avoid rendering the geometry that would be clipped.
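As a rough sketch of the discard approach (shader sources shown as C string literals; the uniform and varying names are placeholders, and real shading replaces the constant colour): pass the eye-space position from the vertex shader and drop fragments that lie closer than the clip distance:

static const char *vertexSrc =
    "uniform mat4 u_MVP;\n"
    "uniform mat4 u_ModelView;\n"
    "attribute vec4 a_Position;\n"
    "varying vec3 v_EyePos;\n"
    "void main() {\n"
    "  v_EyePos = vec3(u_ModelView * a_Position);\n"   /* eye-space position */
    "  gl_Position = u_MVP * a_Position;\n"
    "}\n";

static const char *fragmentSrc =
    "precision mediump float;\n"
    "uniform float u_ClipDist;\n"                       /* clip distance in eye space */
    "varying vec3 v_EyePos;\n"
    "void main() {\n"
    "  if (-v_EyePos.z < u_ClipDist) discard;\n"        /* drop fragments closer than the plane */
    "  gl_FragColor = vec4(1.0);\n"                     /* real shading goes here */
    "}\n";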
Adding a call to glSurfaceView.requestRender() improved the behaviour. My focus on the depth buffer had dragged me away from the real cause of the problem.

Setting Background in OpenGL android

I am new to OpenGL programming. I have made a rotating cube with a different image on each face, and I want to set a background for the screen. Any help will be appreciated.
Draw a textured quad covering the whole viewport. To do this, switch the projection and modelview matrices to identity and disable depth testing. With projection and modelview set to identity, vertex coordinates in [-1 … 1] will cover the whole viewport. In code:
glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
GLfloat tex_quad[16] = {
/* x, y, s, t */
-1, -1, 0, 0,
1, -1, 1, 0,
1, 1, 1, 1,
-1, 1, 0, 1
};
glVertexPointer(2, GL_FLOAT, sizeof(GLfloat)*4, &tex_quad[0]);
glTexCoordPointer(2, GL_FLOAT, sizeof(GLfloat)*4, &tex_quad[2]);
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, background_image_texture_ID);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDisable(GL_TEXTURE_2D);
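If the cube already sets up its own projection and modelview matrices, it may be safer to bracket the background pass with push/pop so those matrices survive the identity switch. A sketch, using the same fixed-function calls as above:

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
/* ... draw the background quad exactly as above ... */
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glPopMatrix();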
In my project, the GLSurfaceView creation code looks like this:
glSurfaceView = ...
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);
glSurfaceView.getHolder().setFormat(PixelFormat.TRANSLUCENT);
glSurfaceView.setBackgroundResource(R.drawable.my_background);
glSurfaceView.setZOrderOnTop(true);
glSurfaceView.setRenderer(...);
glSurfaceView.setRenderMode(...);
NOTE: Do not use _glSurfaceView.setBackgroundDrawable(this.getResources().getDrawable(R.drawable.my_background)); I wasted a few days on that. And do not call gl.glClearColor(...) in Renderer.onDrawFrame.
I think the OP wants to turn his code into an Android live wallpaper.
@Sumit: if I'm right, you should do your due diligence: http://developer.android.com/resources/articles/live-wallpapers.html
If I'm wrong, then please be more precise in your question.

glTranslatef/2D viewport setup issue

When I try glTranslatef(1,-1,0); it pushes my quad's left-hand corner to the center of the screen instead of what I'm trying to do, which is moving it 1 pixel to the left and 1 down. I'm pretty sure this is because my viewport isn't set up correctly, but I'm unsure why. Pic, view setup code and drawing code below.
setupView:
-(void)setupView:(GLView*)view
{
printf("setup view");
glClearColor(0,1,1, 1);
// Enable Smooth Shading, default not really needed.
glShadeModel(GL_SMOOTH);
// Depth buffer setup.
glClearDepthf(1.0f);
//enable textures.
glEnable(GL_TEXTURE_2D);
glHint(GL_PERSPECTIVE_CORRECTION_HINT,GL_FASTEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
CGRect rect = view.bounds;
glOrthof( 0,rect.size.width,-rect.size.height, 0, -1, 1 ) ;
glViewport(0, 0,rect.size.width,rect.size.height);
glMatrixMode(GL_PROJECTION);
// Bind the number of textures we need, in this case one.
glGenTextures(1, &texture[0]);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_GENERATE_MIPMAP,GL_TRUE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glLoadIdentity();
NSString *path = [[NSBundle mainBundle] pathForResource:@"cm2" ofType:@"jpg"];
NSData *texData = [[NSData alloc] initWithContentsOfFile:path];
UIImage *image = [[UIImage alloc] initWithData:texData];
if (image == nil)
NSLog(#"Do real error checking here");
GLuint width = CGImageGetWidth(image.CGImage);
GLuint height = CGImageGetHeight(image.CGImage);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
void *imageData = malloc( height * width * 4 );
CGContextRef context = CGBitmapContextCreate( imageData, width, height, 8, 4 * width, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big );
// Flip the Y-axis
CGContextTranslateCTM (context, 0, height);
CGContextScaleCTM (context, 1.0, -1.0);
CGColorSpaceRelease( colorSpace );
CGContextClearRect( context, CGRectMake( 0, 0, width, height ) );
CGContextDrawImage( context, CGRectMake( 0, 0, width, height ), image.CGImage );
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
CGContextRelease(context);
free(imageData);
[image release];
[texData release];
}
drawView:
- (void)drawView:(GLView*)view
{
//draw calls
glColor4f(1,1,1,1);
glClear(GL_COLOR_BUFFER_BIT);
glLoadIdentity();
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
static const Vertex3D vertices[] = {
{0, 0, 1}, //TL
{ 1024.0f,0, 1}, //TR
{0, -1024.0f, 1}, //BL
{ 1024.0f, -1024.0f, 1} //BR
};
static const GLfloat texCoords[] = {
0.0, 1.0,
1.0, 1.0,
0.0, 0.0,
1.0, 0.0
};
glTranslatef(1,-1, 1);
glScalef(scale,scale,1);
glBindTexture(GL_TEXTURE_2D, texture[0]);
glVertexPointer(3, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
You need to first set up your viewport, then set the matrix mode to projection, then call glOrtho, like so:
glViewport(0, 0, width, height);
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glOrthof(0, width, 0, height, -1, 1); // Usually this is -width/2,width/2,-height/2,height/2
Also, you probably want to set the matrix mode to ModelView after that to draw your model.
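Putting it together, a sketch of the intended ordering for a pixel-based 2D view on OpenGL ES 1.x, assuming width and height are the view size in pixels (as in the question's rect.size):

glViewport(0, 0, width, height);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrthof(0, width, -height, 0, -1, 1); // matches the y-down layout used in drawView
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(1, -1, 0); // now moves the quad by one pixel instead of half the screen
/* ... bind texture, set pointers, draw ... */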
