I have a 2D game project that I'm porting to Android, using OpenGL ES 2.0. I am having trouble getting anything drawn on the screen (except for a solid color from clearing the screen). Everything renders just fine in my Windows environment, but of course the setup differs for the different version of OpenGL.
I followed the native-activity sample and took advice from several other OpenGL ES 2.0 resources to compose what I currently have.
I have checked everything I know how to with no anomalous results. As mentioned, glClear works, and displays the color set by glClearColor. I also know that every frame is being rendered, as changing glClearColor frame-by-frame displays the different colors. The application compiles properly, and my textures are loaded from the proper location in the app's cache. glGetError returns GL_NO_ERROR at every step in the process, so what I am doing appears to be accepted by OpenGL. My shaders load without error. I have also tested this on a few emulators as well as my physical Android device, so it isn't localized to a specific device configuration.
I speculate that it must be some mistake in how I initialize and set up OpenGL. I am hoping someone more versed in OpenGL ES than I am will be able to help root out my problem. I am pasting the different relevant sections of my code below. engine is a global struct I am presently using out of laziness.
Initializing the display
static int AND_InitDisplay() {
// Desired display attributes
const EGLint attribs[] = {
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_ALPHA_SIZE, 8,
EGL_DEPTH_SIZE, 16,
EGL_NONE
};
EGLint w, h, dummy, format;
EGLint numConfigs;
EGLConfig config;
EGLSurface surface;
EGLContext context;
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, 0, 0);
eglChooseConfig(display, attribs, &config, 1, &numConfigs);
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
ANativeWindow_setBuffersGeometry(engine->app->window, 0, 0, format);
surface = eglCreateWindowSurface(display, config, engine->app->window, NULL);
EGLint const attrib_list[3] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
context = eglCreateContext(display, config, NULL, attrib_list);
if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE) {
LOGW("Unable to eglMakeCurrent");
return -1;
}
eglQuerySurface(display, surface, EGL_WIDTH, &w);
eglQuerySurface(display, surface, EGL_HEIGHT, &h);
engine->display = display;
engine->context = context;
engine->surface = surface;
engine->width = w;
engine->height = h;
// Initialize GL state.
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
return 0;
}
Drawing a frame
static void AND_drawFrame() {
if (engine->display == NULL) {
LOGW("DB E: DISPLAY IS NULL");
// No display.
return;
}
// Clearing with red color. This displays properly.
glClearColor(1.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// eglSwapBuffers results in no visible change
eglSwapBuffers(engine->display, engine->surface);
}
Example of preparing VBO data
I understand many wouldn't like the idea of using multiple VBOs for the same geometry. I would love to hear if this code is unorthodox or incorrect, but I am not focused on it unless it is the root of my problem.
GLfloat charPosVerts[] = {
p0.x, p0.y, 0.f,
p1.x, p0.y, 0.f,
p1.x, p1.y, 0.f,
p0.x, p0.y, 0.f,
p1.x, p1.y, 0.f,
p0.x, p1.y, 0.f
};
GLfloat charTexVerts[] = {
0.0, 0.0,
textures[texid].w, 0.0,
textures[texid].w, textures[texid].h,
0.0, 0.0,
textures[texid].w, textures[texid].h,
0.0, textures[texid].h
};
GLfloat charColorVerts[] = {
e->color.r, e->color.g, e->color.b, e->color.a,
e->color.r, e->color.g, e->color.b, e->color.a,
e->color.r, e->color.g, e->color.b, e->color.a,
e->color.r, e->color.g, e->color.b, e->color.a,
e->color.r, e->color.g, e->color.b, e->color.a,
e->color.r, e->color.g, e->color.b, e->color.a
};
glGenBuffers(1, &(e->vboPos));
glGenBuffers(1, &(e->vboTex));
glGenBuffers(1, &(e->vboColor));
glBindBuffer(GL_ARRAY_BUFFER, e->vboPos);
glBufferData(GL_ARRAY_BUFFER, sizeof(charPosVerts), charPosVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(shaderIDs.attribPosition, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribPosition);
glBindBuffer(GL_ARRAY_BUFFER, e->vboTex);
glBufferData(GL_ARRAY_BUFFER, sizeof(charTexVerts), charTexVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(shaderIDs.attribTexCoord, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribTexCoord);
glBindBuffer(GL_ARRAY_BUFFER, e->vboColor);
glBufferData(GL_ARRAY_BUFFER, sizeof(charColorVerts), charColorVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(shaderIDs.attribColors, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribColors);
Example of drawing the VBOs
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, CORE_GetBmpOpenGLTex(texix));
glUniform1i(shaderIDs.uniTexture, 0);
// Draw the sprite
glBindBuffer(GL_ARRAY_BUFFER, e->vboPos);
glVertexAttribPointer(shaderIDs.attribPosition, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribPosition);
glBindBuffer(GL_ARRAY_BUFFER, e->vboTex);
glVertexAttribPointer(shaderIDs.attribTexCoord, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribTexCoord);
glBindBuffer(GL_ARRAY_BUFFER, e->vboColor);
glVertexAttribPointer(shaderIDs.attribColors, 4, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(shaderIDs.attribColors);
glDrawArrays(GL_TRIANGLES, 0, 18);
Vertex Shader
The shaders are very simple.
attribute vec3 position;
attribute vec2 texCoord;
attribute vec4 colors;
varying vec2 texCoordVar;
varying vec4 colorsVar;
void main() {
gl_Position = vec4(position, 1.0);
texCoordVar = texCoord;
colorsVar = colors;
}
Fragment Shader
uniform sampler2D texture;
varying vec2 texCoordVar;
varying vec4 colorsVar;
void main()
{
gl_FragColor = texture2D(texture, texCoordVar) * colorsVar;
}
Thanks for looking at this long post. Help is very much appreciated.
The posted code is not drawing anything. From the AND_drawFrame() function:
// Clearing with red color. This displays properly.
glClearColor(1.f, 0.f, 0.f, 1.f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// eglSwapBuffers results in no visible change
eglSwapBuffers(engine->display, engine->surface);
Based on this, the draw code is either never invoked, or the window is cleared after drawing, which would wipe out everything that was drawn before.
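To make the ordering concrete, here is a minimal sketch of what the frame function would need to look like, assuming a hypothetical AND_drawSprites() that wraps the glDrawArrays calls shown in the question:

static void AND_drawFrame() {
    if (engine->display == NULL) {
        return; // No display.
    }
    // Clear first...
    glClearColor(1.f, 0.f, 0.f, 1.f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ...then issue every draw call for this frame...
    AND_drawSprites(); // hypothetical: the VBO draw code from the question
    // ...and swap only after everything has been drawn.
    eglSwapBuffers(engine->display, engine->surface);
}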
Related
I am working on an Android app that uses the NDK Camera2 API with OpenGL.
When I launch the application on the device, a black rectangle appears at the top, although the application should run in full screen.
On the Java side, the app's architecture uses the navigation graph.
For fullscreen mode, I use this:
class MainActivity : AppCompatActivity() {
...
...
companion object {
/** Combination of all flags required to put activity into immersive mode */
const val FLAGS_FULLSCREEN=
View.SYSTEM_UI_FLAG_LOW_PROFILE or
View.SYSTEM_UI_FLAG_FULLSCREEN or
View.SYSTEM_UI_FLAG_LAYOUT_STABLE or
View.SYSTEM_UI_FLAG_IMMERSIVE_STICKY
/** Milliseconds used for UI animations */
const val ANIMATION_FAST_MILLIS = 50L
const val ANIMATION_SLOW_MILLIS = 100L
private const val IMMERSIVE_FLAG_TIMEOUT = 100L
}
On the Android side I create the texture for use in C++:
GLES30.glGenTextures(1, textures, 0)
GLES30.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[0])
surfaceTexture = SurfaceTexture(textures[0])
Shaders:
static const char* vertex_shader_src = R"(
attribute vec3 vertexPosition;
attribute vec2 uvs;
uniform mat4 texMatrix; // this from surfaceTexture getTransformMatrix
varying vec2 varUvs;
void main()
{
varUvs = (texMatrix * vec4(uvs.x, uvs.y, 0, 1.0)).xy;
gl_Position = vec4(vertexPosition, 1.0);
}
)";
static const char* fragment_shader_src = R"(
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES texSampler;
varying vec2 varUvs;
void main()
{
gl_FragColor = texture2D(texSampler, varUvs);
}
)";
Vertex and index data
static float vertices[] {
// x, y, z, u, v
-1, -1, 0, 0, 0,
-1, 1, 0, 0, 1,
1, 1, 0, 1, 1,
1, -1, 0, 1, 0
};
static GLuint indices[] { 2, 1, 0, 0, 3, 2 };
This is the render code:
void ogl::draw_frame(const float texMat[]) {
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
glClearColor(0,0,0,1);
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_EXTERNAL_OES, texture_id);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUniform1i(tex_sampler, 0);
glUniformMatrix4fv(tex_matrix, 1, false, texMat);
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glEnableVertexAttribArray(vertex_position);
glVertexAttribPointer(vertex_position, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 5, 0);
glEnableVertexAttribArray(uvs);
glVertexAttribPointer(uvs, 2, GL_FLOAT, GL_FALSE, sizeof(float) * 5, (void *)(3 * sizeof(float)));
glViewport(0, 0, width, height);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
}
Closed.
The problem was that I used the dimensions that the camera used (for example, the camera resolution is 480x640, while the actual window size is 480x752, a difference of 112 pixels).
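In other words, the viewport has to be derived from the real window size rather than the camera resolution. A rough sketch of aspect-preserving letterboxing, where winW/winH are queried from the EGL surface and camW/camH are the camera resolution (names here are illustrative):

// Query the actual surface size instead of assuming the camera resolution.
EGLint winW, winH;
eglQuerySurface(display, surface, EGL_WIDTH, &winW);
eglQuerySurface(display, surface, EGL_HEIGHT, &winH);

// Scale the camera image to fit the window while preserving its aspect ratio.
float sx = (float)winW / (float)camW;
float sy = (float)winH / (float)camH;
float scale = sx < sy ? sx : sy;
int vpW = (int)(camW * scale);
int vpH = (int)(camH * scale);

// Center the viewport; the leftover area stays as the clear color.
glViewport((winW - vpW) / 2, (winH - vpH) / 2, vpW, vpH);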
I've got some C code to render some OpenGL stuff and it's running on both Android and iOS. On Android it looks fine. But on iOS it is flipped vertically.
Here's some simple code to demonstrate (only copied the relevant parts because OpenGL C code is long-winded):
GLfloat vVertices[] = {
0.0f, 0.5f,
-0.5f, -0.5f,
0.5f, -0.5f
};
glViewport(0, 0, context->width, context->height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(data->programObject);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, vVertices);
glEnableVertexAttribArray(0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableVertexAttribArray(0);
On Android it looks like this:
But on iOS it looks like this:
The only thing that differs between the two platforms is the initialization code for OpenGL ES, since all the OpenGL code is shared C code. However, I can't spot anything obviously wrong with the init code.
Here's the init code (I removed most error handling because there are no errors being triggered apart from the one I left in):
- (void)initGL {
_context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
[EAGLContext setCurrentContext:_context];
[self createCVBufferWithSize:_renderSize withRenderTarget:&_target withTextureOut:&_texture];
glBindTexture(CVOpenGLESTextureGetTarget(_texture), CVOpenGLESTextureGetName(_texture));
glTexImage2D(GL_TEXTURE_2D, // target
0, // level
GL_RGBA, // internalformat
_renderSize.width, // width
_renderSize.height, // height
0, // border
GL_RGBA, // format
GL_UNSIGNED_BYTE, // type
NULL); // data
// HACK: we always get an "error" here (GL_INVALID_OPERATION) despite everything working. See https://stackoverflow.com/questions/57104033/why-is-glteximage2d-returning-gl-invalid-operation-on-ios
glGetError();
glGenRenderbuffers(1, &_depthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, _renderSize.width, _renderSize.height);
glGenFramebuffers(1, &_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(_texture), 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthBuffer);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
} else {
NSLog(#"Successfully initialized GL");
char* glRendererName = getGlRendererName();
char* glVersion = getGlVersion();
char* glShadingLanguageVersion = getGlShadingLanguageVersion();
NSLog(#"OpenGL renderer name: %s, version: %s, shading language version: %s", glRendererName, glVersion, glShadingLanguageVersion);
}
}
And here's the code that creates the actual texture (using EAGL):
- (void)createCVBufferWithSize:(CGSize)size
withRenderTarget:(CVPixelBufferRef *)target
withTextureOut:(CVOpenGLESTextureRef *)texture {
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _context, NULL, &_textureCache);
if (err) return;
CFDictionaryRef empty;
CFMutableDictionaryRef attrs;
empty = CFDictionaryCreate(kCFAllocatorDefault,
NULL,
NULL,
0,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
&kCFTypeDictionaryKeyCallBacks,
&kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);
CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
kCVPixelFormatType_32BGRA, attrs, target);
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
_textureCache,
*target,
NULL, // texture attributes
GL_TEXTURE_2D,
GL_RGBA, // opengl format
size.width,
size.height,
GL_BGRA, // native iOS format
GL_UNSIGNED_BYTE,
0,
texture);
CFRelease(empty);
CFRelease(attrs);
}
Can anyone tell me why iOS is flipped like this? I've since noticed other people with the same problem, such as here but haven't found a solution yet.
I just started programming OpenGL ES 2.0 and I'm currently struggling to track down an issue with setting the color of a Wavefront object I'm drawing (https://pastebin.com/cEvpj8rt). The drawing works just fine until I start to manipulate the color, at which point I'm confronted with OpenGL error 1281 (GL_INVALID_VALUE), and I'm unable to pinpoint the cause in my code. I've broken the shader code down to what I believe is the bare minimum required for the fragment shader to work:
void main() {
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
To eliminate any additional source of error, I am setting the color with a constant value, as can be seen above. I doubt the error lies with the simple code above; it is more likely connected to my adapted renderer implementation. (It is based on the renderer that came with a sample from the ARCore GitHub repo. The full code of the original renderer can be found here: https://github.com/google-ar/arcore-android-sdk/blob/master/samples/java_arcore_hello_ar/app/src/main/java/com/google/ar/core/examples/java/helloar/rendering/ObjectRenderer.java while the adapted version can be seen here: https://pastebin.com/9cmKVnLV) Below you can find an excerpt of the code responsible for setting up and drawing the object. I suspected the issue was connected to the texturing, which is why I removed that code.
I know it's a bit much to ask for help given my lack of understanding of the matter at hand, but I'd be glad for any hint or advice at this point. The error occurs after the first draw in the following method:
public void draw(float[] cameraView, float[] cameraPerspective) {
multiplyMM(mModelViewMatrix, 0, cameraView, 0, mModelMatrix, 0);
multiplyMM(mModelViewProjectionMatrix, 0, cameraPerspective, 0, mModelViewMatrix, 0);
glUseProgram(mProgram);
glBindBuffer(GL_ARRAY_BUFFER, mVertexBufferId);
glVertexAttribPointer(mPositionAttribute, COORDS_PER_VERTEX,
GL_FLOAT, false, 0, mVerticesBaseAddress);
glVertexAttribPointer(mNormalAttribute, 3,
GL_FLOAT, false, 0, mNormalsBaseAddress);
glBindBuffer(GL_ARRAY_BUFFER, 0);
// Set the ModelViewProjection matrix in the shader.
glUniformMatrix4fv(mModelViewUniform, 1,
false, mModelViewMatrix, 0);
glUniformMatrix4fv(mModelViewProjectionUniform, 1,
false, mModelViewProjectionMatrix, 0);
glEnableVertexAttribArray(mPositionAttribute);
glEnableVertexAttribArray(mNormalAttribute);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBufferId);
glDrawElements(GL_TRIANGLES, mIndexCount, GL_UNSIGNED_SHORT, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableVertexAttribArray(mPositionAttribute);
glDisableVertexAttribArray(mNormalAttribute);
// this is where the error is detected
OpenGlHelper.checkGLError(TAG, "After draw");
}
Here the method which is used for initialization:
public void createOnGlThread(Context context) throws IOException {
InputStream objInputStream = context.getAssets()
.open(OBJ_ASSET_NAME);
Obj obj = ObjReader.read(objInputStream);
obj = ObjUtils.convertToRenderable(obj);
IntBuffer wideIndices = ObjData.getFaceVertexIndices(obj, 3);
FloatBuffer vertices = ObjData.getVertices(obj);
FloatBuffer texCoords = ObjData.getTexCoords(obj, 2);
FloatBuffer normals = ObjData.getNormals(obj);
ShortBuffer indices = ByteBuffer.allocateDirect(2 * wideIndices.limit())
.order(ByteOrder.nativeOrder()).asShortBuffer();
while (wideIndices.hasRemaining()) {
indices.put((short) wideIndices.get());
}
indices.rewind();
int[] buffers = new int[2];
glGenBuffers(2, buffers, 0);
mVertexBufferId = buffers[0];
mIndexBufferId = buffers[1];
// Load vertex buffer
mVerticesBaseAddress = 0;
mTexCoordsBaseAddress = mVerticesBaseAddress + 4 * vertices.limit();
mNormalsBaseAddress = mTexCoordsBaseAddress + 4 * texCoords.limit();
final int totalBytes = mNormalsBaseAddress + 4 * normals.limit();
glBindBuffer(GL_ARRAY_BUFFER, mVertexBufferId);
glBufferData(GL_ARRAY_BUFFER, totalBytes, null, GL_STATIC_DRAW);
glBufferSubData(GL_ARRAY_BUFFER, mVerticesBaseAddress,
4 * vertices.limit(), vertices);
glBufferSubData(GL_ARRAY_BUFFER, mTexCoordsBaseAddress,
4 * texCoords.limit(), texCoords);
glBufferSubData(GL_ARRAY_BUFFER, mNormalsBaseAddress,
4 * normals.limit(), normals);
glBindBuffer(GL_ARRAY_BUFFER, 0);
// Load index buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, mIndexBufferId);
mIndexCount = indices.limit();
glBufferData(GL_ELEMENT_ARRAY_BUFFER, 2 * mIndexCount,
indices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
OpenGlHelper.checkGLError(TAG, "OBJ buffer load");
mProgram = glCreateProgram();
glAttachShader(mProgram, OpenGlHelper.loadGLShader(TAG, context,
GL_VERTEX_SHADER, R.raw.sphere_vertex));
glAttachShader(mProgram, OpenGlHelper.loadGLShader(TAG, context,
GL_FRAGMENT_SHADER, R.raw.sphere_fragment));
glLinkProgram(mProgram);
glUseProgram(mProgram);
OpenGlHelper.checkGLError(TAG, "Program creation");
mModelViewUniform = glGetUniformLocation(mProgram, "u_ModelView");
mModelViewProjectionUniform =
glGetUniformLocation(mProgram, "u_ModelViewProjection");
mPositionAttribute = glGetAttribLocation(mProgram, "a_Position");
mNormalAttribute = glGetAttribLocation(mProgram, "a_Normal");
OpenGlHelper.checkGLError(TAG, "Program parameters");
setIdentityM(mModelMatrix, 0);
}
Riddle me this,
I recently published a game for Android (play.google.com/store/apps/details?id=com.quackers if you want to witness the problems first hand), and initial feedback suggests that the thing doesn't run properly on some devices. I've since got ahold of one of the offending tablets (a Samsung Galaxy Tab 2 7.0) and it turns out it does run, it just wasn't rendering things properly.
A few digs later and I've discovered that it's a texturing issue. Textures are being loaded okay, but they're not being rendered - not the usual black squares you often get with OpenGL when something goes wrong - nothing at all.
This is OpenGL ES 2.0, doing an SDL/C++/ndk thing. While there are similar problems on the net, much of it involves ES 1.0 and regards a different issue - texture dimensions not being powers of two (e.g. 64x64, 128x128, 256x256 etc.) or some wacky compression stuff which doesn't apply here.
I've stripped out all of my rendering code and have gone back to basics - rendering a textured square (in a not particularly optimised manner).
Pre-loop code:
SDL_Init(SDL_INIT_VIDEO);
SDL_LogSetAllPriority(SDL_LOG_PRIORITY_VERBOSE);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_ES);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 2);
SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 0);
SDL_DisplayMode mode;
SDL_GetDisplayMode(0,0, &mode);
_currentWidth = mode.w;
_currentHeight = mode.h;
SDL_GL_SetAttribute(SDL_GL_ACCELERATED_VISUAL, 1);
SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
SDL_GL_SetAttribute(SDL_GL_DEPTH_SIZE, 24);
_screen = SDL_CreateWindow("window", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, _currentWidth, _currentHeight, SDL_WINDOW_OPENGL | SDL_WINDOW_FULLSCREEN | SDL_WINDOW_RESIZABLE);
SDL_GLContext context = SDL_GL_CreateContext(_screen);
SDL_GL_MakeCurrent(_screen, context);
glViewport(0, 0, _currentWidth, _currentHeight);
//---
GLuint vs = glCreateShader(GL_VERTEX_SHADER);
const char *vs_source = "attribute highp vec2 coord2d; "
"attribute highp vec2 texcoord;"
"varying highp vec2 f_texcoord;"
"void main(void) { "
"gl_Position = vec4(coord2d, 0.0, 1.0); "
"f_texcoord = texcoord;"
"}";
glShaderSource(vs, 1, &vs_source, NULL);
glCompileShader(vs);
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
const char *fs_source = "varying highp vec2 f_texcoord;"
"uniform sampler2D texture;"
"void main(void) { "
"vec2 flipped_texcoord = vec2(f_texcoord.x, 1.0 - f_texcoord.y);"
"gl_FragColor = texture2D(texture, flipped_texcoord);"
"}";
glShaderSource(fs, 1, &fs_source, NULL);
glCompileShader(fs);
_program = glCreateProgram();
glAttachShader(_program, vs);
glAttachShader(_program, fs);
glLinkProgram(_program);
//---
GLuint vs2 = glCreateShader(GL_VERTEX_SHADER);
const char *vs_source2 = "attribute vec2 coord2d; "
"void main(void) { "
"gl_Position = vec4(coord2d, 0.0, 1.0); "
"}";
glShaderSource(vs2, 1, &vs_source2, NULL);
glCompileShader(vs2);
GLuint fs2 = glCreateShader(GL_FRAGMENT_SHADER);
const char *fs_source2 = "uniform lowp vec4 u_colour;"
"void main(void) { "
"gl_FragColor = u_colour;"
"}";
glShaderSource(fs2, 1, &fs_source2, NULL);
glCompileShader(fs2);
_flatProgram = glCreateProgram();
glAttachShader(_flatProgram, vs2);
glAttachShader(_flatProgram, fs2);
glLinkProgram(_flatProgram);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
//---------------------------------------
_screenRect.x = -1.0;
_screenRect.y = -1.0;
_screenRect.w = 2.0;
_screenRect.h = 2.0;
_superDuperFrameBuffer = 0;
_depthRenderBuffer = 0;
glGenTextures(1, &_screenTexture);
glBindTexture(GL_TEXTURE_2D, _screenTexture);
if(_currentWidth < SCREENWIDTH*2 || _currentHeight < SCREENHEIGHT*2) {
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
else {
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
}
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, SCREENWIDTH, SCREENHEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &_depthRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _depthRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, SCREENWIDTH, SCREENHEIGHT);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
// create a framebuffer object
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glGenFramebuffers(1, &_superDuperFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _superDuperFrameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _screenTexture, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, _depthRenderBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, _depthRenderBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
_defaultFrameBuffer = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &_defaultFrameBuffer);
glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
glEnable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
SDL_Surface* testSurface = IMG_Load("graphics/bg_01_0.png");
uint32_t rmask;
uint32_t gmask;
uint32_t bmask;
uint32_t amask;
#if SDL_BYTEORDER == SDL_BIG_ENDIAN
rmask = 0xff000000;
gmask = 0x00ff0000;
bmask = 0x0000ff00;
amask = 0x000000ff;
#else
rmask = 0x000000ff;
gmask = 0x0000ff00;
bmask = 0x00ff0000;
amask = 0xff000000;
#endif
SDL_Surface *tempSurface = SDL_CreateRGBSurface(0, testSurface->w, testSurface->h, 32, rmask, gmask, bmask, amask);
SDL_SetSurfaceBlendMode(tempSurface, SDL_BLENDMODE_BLEND);
SDL_BlitSurface(testSurface, NULL, tempSurface, NULL);
testSurface = tempSurface;
SDL_FreeSurface(tempSurface);
GLint uniformTexture = glGetUniformLocation(_program, "texture");
_testTexture = 0;
glGenTextures(1, &_testTexture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _testTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
//glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUniform1i(uniformTexture, /*GL_TEXTURE*/0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, testSurface->pixels);
_vboTest = 0;
_vbo_cube_texcoords = 0;
glGenBuffers(1, &_vboTest);
glGenBuffers(1, &_vbo_cube_texcoords);
loop:
...
_quadColour[0] = 0.0f;
_quadColour[1] = 255.0f;
_quadColour[2] = 0.0f;
_quadColour[3] = 1.0f;
drawSquare(0, 0, 20, 20);
glViewport(0, 0, SCREENWIDTH, SCREENHEIGHT);
GLfloat x1 = 0, x2 = 8, y1 = 0, y2 = 8;
glUseProgram(_program);
GLint attributeCoord2d = glGetAttribLocation(_program, "coord2d");
GLint attributeTexcoord = glGetAttribLocation(_program, "texcoord");
glEnableVertexAttribArray(attributeTexcoord);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _testTexture);
GLfloat cube_texcoords[] = {
0.0, 1.0,
0.0, 0.0,
1.0, 0.0,
1.0, 1.0,
};
glBindBuffer(GL_ARRAY_BUFFER, _vbo_cube_texcoords);
glVertexAttribPointer(attributeTexcoord, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBufferData(GL_ARRAY_BUFFER, sizeof(cube_texcoords), cube_texcoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(attributeCoord2d);
GLfloat triangle_vertices[] = {
x1, y2,
x1, y1,
x2, y1,
x2, y2,
};
glBindBuffer(GL_ARRAY_BUFFER, _vboTest);
glVertexAttribPointer(attributeCoord2d, 2, GL_FLOAT, GL_FALSE, 0, 0);
glBufferData(GL_ARRAY_BUFFER, sizeof(triangle_vertices), triangle_vertices, GL_STATIC_DRAW);
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
SDL_GL_SwapWindow(_screen);
...
Obviously there's some stuff I've only added for testing purposes, like converting textures to RGBA and whatnot. It draws a little green square, then a textured square.
Code might be messy but the point is this - two different results:
Galaxy Tab 2 7.0 (bork)
http://i.imgur.com/ht6LvFV.png
Nexus 7 (correct)
http://i.imgur.com/p4acmIq.png
How do I fix this?
Power-of-two textures are still required on many GLES 2.0 devices. From section 3.8.2 of the GLES 2.0 spec (https://www.khronos.org/registry/gles/specs/2.0/es_full_spec_2.0.25.pdf):
"Calling a sampler from a fragment shader will return (R; G;B;A) =
(0; 0; 0; 1) if any of the following conditions are true: ... A two-dimensional sampler is called, the corresponding texture image is a non-power-of-two image (as described in the Mipmapping discussion of section 3.7.7), and either the texture wrap mode is not CLAMP_TO_EDGE, or the minification filter is neither NEAREST nor LINEAR."
Assuming SCREENHEIGHT/SCREENWIDTH are the dimensions of your device, you're violating this restriction. You can ignore this restriction if your device supports some NPOT extension, for instance GL_OES_texture_npot (https://www.khronos.org/registry/gles/extensions/OES/OES_texture_npot.txt), although in my experience, some devices that report this extension still sample textures as black when the npot texture is the color target of a framebuffer. The best course is to just always use POT render targets in ES 2.0.
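If you do stick to POT render targets, one approach (a sketch, not code from the original post) is to round the allocation up to the next power of two and sample only the sub-rectangle you actually rendered:

// Round v up to the next power of two (assumes v > 0).
static GLsizei nextPow2(GLsizei v) {
    GLsizei p = 1;
    while (p < v) p <<= 1;
    return p;
}

// Allocate the render target with POT dimensions...
GLsizei texW = nextPow2(SCREENWIDTH);
GLsizei texH = nextPow2(SCREENHEIGHT);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

// ...and shrink the texture coordinates so only the rendered region is sampled.
GLfloat uMax = (GLfloat)SCREENWIDTH / (GLfloat)texW;
GLfloat vMax = (GLfloat)SCREENHEIGHT / (GLfloat)texH;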
There are a few errors and possible issues in this code:
Your fragment shader should not compile if the GLSL compiler is strict about error checking. Since you do not specify a default precision, and fragment shaders have no default precision for float/vector/matrix types, you need an explicit precision for every declaration. It is missing for this variable:
vec2 flipped_texcoord = vec2(f_texcoord.x, 1.0 - f_texcoord.y);
If you want to stick with highp, this should be:
highp vec2 flipped_texcoord = vec2(f_texcoord.x, 1.0 - f_texcoord.y);
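Alternatively, a single default precision statement at the top of the fragment shader (e.g. precision mediump float;) covers every float declaration at once; note that highp support in fragment shaders is optional in ES 2.0, so mediump is the portable default.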
This call has a bad argument:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _screenTexture, 0);
Since you're attaching a texture, the 3rd argument must be GL_TEXTURE_2D (you would need to use glFramebufferRenderbuffer for attaching a renderbuffer):
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _screenTexture, 0);
Make sure that the OES_packed_depth_stencil extension is supported on the device, since you're using it here:
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, SCREENWIDTH, SCREENHEIGHT);
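A crude but workable runtime check is to search the extension string before relying on it (sketch; needs <string.h>):

const char* ext = (const char*)glGetString(GL_EXTENSIONS);
if (ext == NULL || strstr(ext, "GL_OES_packed_depth_stencil") == NULL) {
    // Not available: fall back to a plain GL_DEPTH_COMPONENT16 renderbuffer
    // (and skip the stencil attachment, or use a separate GL_STENCIL_INDEX8 one).
}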
This code sequence does not make much sense:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
_defaultFrameBuffer = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &_defaultFrameBuffer);
You just bound framebuffer 0, so the current framebuffer binding will always be 0 here. If you're concerned that the default framebuffer might not be 0, you have to query the value before the first time you change the binding.
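Something like this, with the query moved ahead of any glBindFramebuffer call (sketch):

// Capture the externally supplied framebuffer before touching the binding.
GLint defaultFrameBuffer = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFrameBuffer);

// ... create and render into _superDuperFrameBuffer ...

// Restore whatever was bound originally instead of assuming it was 0.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);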
This is not a valid call in ES 2.0:
glEnable(GL_TEXTURE_2D);
Enabling textures was only needed in fixed-function OpenGL. Once you use shaders, textures are used whenever the shader samples them; there is no need to explicitly enable anything.
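With shaders, the complete texturing setup is just binding the texture to a unit and pointing the sampler uniform at that unit, e.g. (a sketch using the names already in your code):

glUseProgram(_program);                       // uniforms apply to the program in use
glActiveTexture(GL_TEXTURE0);                 // select texture unit 0
glBindTexture(GL_TEXTURE_2D, _testTexture);   // bind the texture to that unit
glUniform1i(uniformTexture, 0);               // the sampler reads from unit 0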
I'm new to the Android NDK and Native Activity. I'd like to create a triangle in the middle of the screen, but no matter what I tried, it wouldn't show up!
Here is my initialize method:
void Engine::initialize() {
LOGI("Engine::initialize fired!");
const EGLint attribs[] = {
EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
EGL_BLUE_SIZE, 8,
EGL_GREEN_SIZE, 8,
EGL_RED_SIZE, 8,
EGL_NONE
};
EGLint w, h, dummy, format;
EGLint numConfigs;
EGLConfig config;
EGLSurface surface;
EGLContext context;
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, 0, 0);
eglChooseConfig(display, attribs, &config, 1, &numConfigs);
eglGetConfigAttrib(display, config, EGL_NATIVE_VISUAL_ID, &format);
ANativeWindow_setBuffersGeometry(this->app->window, 0, 0, format);
surface = eglCreateWindowSurface(display, config, this->app->window, NULL);
context = eglCreateContext(display, config, NULL, NULL);
if (eglMakeCurrent(display, surface, surface, context) == EGL_FALSE) {
LOGW("Unable to eglMakeCurrent");
return;
}
eglQuerySurface(display, surface, EGL_WIDTH, &w);
eglQuerySurface(display, surface, EGL_HEIGHT, &h);
this->display = display;
this->context = context;
this->surface = surface;
this->width = w;
this->height = h;
// Initialize GL state.
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_FASTEST);
glEnable(GL_CULL_FACE);
glShadeModel(GL_SMOOTH);
glDisable(GL_DEPTH_TEST);
this->animating = true;
}
And here is my render method:
void Engine::onRender() {
glClearColor(0.7, 0.1, 0.5, 1);
glClear(GL_COLOR_BUFFER_BIT);
glViewport(0, 0, this->width, this->height);
//glMatrixMode(GL_PROJECTION);
//glLoadIdentity();
//glFrustumf(-this->width / 2, this->width / 2, -this->height / 2, this->height / 2, 1, 3);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, 0);
GLfloat triangle[] = {
0, 0, 0,
0, 100, 0,
100, -100, 0
};
glPushMatrix();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glTranslatef(0, 0, 0);
glColor4f(1.0f, 0.3f, 0.0f, .5f);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, triangle);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -10);
eglSwapBuffers(this->display, this->surface);
}
Can anyone help?
All I can see is the pink/purple background, but no other pixels :| No errors in the console.
You probably have OpenGL errors, but they are not logged automatically. You can use this function to check for them:
static void checkGlError(const char* op) {
for (GLint error = glGetError(); error; error = glGetError()) {
LOGI("after %s() glError (0x%x)\n", op, error);
}
}
and call it after each OpenGL call. Then you will see exactly where things go wrong.
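For example, bracketing the suspect calls from your render method (the argument is just a label for the log):

glVertexPointer(3, GL_FLOAT, 0, triangle);
checkGlError("glVertexPointer");
glDrawArrays(GL_TRIANGLE_STRIP, 0, 3);
checkGlError("glDrawArrays");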
Apart from that, I would recommend using OpenGL ES 2.0. I'm not sure right now whether all the calls you are using work with ES 1.1 (maybe someone else can confirm).
In addition, there is an NDK sample implementing exactly the same thing as you, but using ES 2.0 instead. You can find it here:
http://code.google.com/p/android-cmake/source/browse/samples/hello-gl2/jni/gl_code.cpp?r=787b14cf9ed13299cb4c729d9a67d06e300fd52e
It uses a simple shader to paint the triangle and renders it using a VBO.
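For reference, the core of the ES 2.0 approach replaces glVertexPointer/glDrawArrays with a vertex attribute; a minimal sketch, assuming a program object linked from a vertex shader that declares attribute vec4 vPosition:

GLfloat triangle[] = {
     0.0f,  0.5f, 0.0f,   // clip-space coordinates, not pixels
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
};
GLint pos = glGetAttribLocation(program, "vPosition");
glUseProgram(program);
glVertexAttribPointer((GLuint)pos, 3, GL_FLOAT, GL_FALSE, 0, triangle);
glEnableVertexAttribArray((GLuint)pos);
glDrawArrays(GL_TRIANGLES, 0, 3);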
I was having the same problem for many hours today, and finally found an answer. It is not an error in your code, but most likely in the testing device.
First: are you working with an Android Virtual Device or with a physical phone?
In the first case, you need to use an AVD with at least API 15 and GPU emulation set to yes. Or, from the command line, you can pass this option when you run your AVD:
emulator -avd <avd_name> -gpu on
If it is set up correctly, you will find these lines in the logcat:
D/libEGL ( 595): loaded /system/lib/egl/libGLES_android.so
D/libEGL ( 595): loaded /system/lib/egl/libEGL_emulation.so
D/libEGL ( 595): loaded /system/lib/egl/libGLESv1_CM_emulation.so
D/libEGL ( 595): loaded /system/lib/egl/libGLESv2_emulation.so
Otherwise, you might find only the first line, and an error like "egl.cnf not found, falling back to default" (I found this help at: https://developer.amazon.com/sdk/fire/enable-features.html#GPU).
Now, if you are using a physical phone, I just read that some phones seem not to support EGL properly, mainly some with CyanogenMod (they display a similar error in the logcat). In this case, you should test on another phone, or on an AVD with the specifications above.