Android OpenGL - loaded textures change color

I'm trying to render a simple 3D scene. It's basically a surface wrapped in a texture. I've tested it on 4 devices and only one (Galaxy Note 4) renders the texture properly. The other three phones (HTC Evo 3D, LG G2, Sony Xperia Z1) render the texture with all texels in one color, which seems to be an average color of the texture. E.g.: original image and rendered texture.
My first guess was that there's something wrong with my fragment shader. But it's very basic and copied from the book "OpenGL ES 2 for Android":
private final String vertexShaderCode =
        "uniform mat4 uMVPMatrix;" +
        "attribute vec4 vPosition;" +
        "attribute vec2 a_TextureCoordinates;" +
        "varying vec2 v_TextureCoordinates;" +
        "void main()" +
        "{" +
        "    v_TextureCoordinates = a_TextureCoordinates;" +
        "    gl_Position = uMVPMatrix * vPosition;" +
        "}";

private final String fragmentShaderCode =
        "precision mediump float;" +
        "uniform sampler2D u_TextureUnit;" +
        "varying vec2 v_TextureCoordinates;" +
        "void main()" +
        "{" +
        "    gl_FragColor = texture2D(u_TextureUnit, v_TextureCoordinates);" +
        "}";
Loading texture:
texCoordHandle = glGetAttribLocation(program, "a_TextureCoordinates");
texUnitHandle = glGetUniformLocation(program, "u_TextureUnit");
texHandle = new int[1];
glGenTextures(1, texHandle, 0);
BitmapFactory.Options opt = new BitmapFactory.Options();
opt.inScaled = false;
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(),
R.drawable.way_texture_pink_square_512, opt);
glBindTexture(GL_TEXTURE_2D, texHandle[0]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
texImage2D(GL_TEXTURE_2D, 0, bitmap, 0);
glGenerateMipmap(GL_TEXTURE_2D);
bitmap.recycle();
glBindTexture(GL_TEXTURE_2D, 0);
I hold the vertices and texture coords in different float buffers, and draw them in a loop:
glUseProgram(program);
glUniformMatrix4fv(mvpMatrixHandle, 1, false, vpMatrix, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texHandle[0]);
glUniform1i(texUnitHandle, 0);
glEnableVertexAttribArray(positionHandle);

WaySection[] ws = waySects.get(currWaySect);
for (int i = 0; i < ws.length; i++) {
    glVertexAttribPointer(
            positionHandle, COORDS_PER_VERTEX,
            GL_FLOAT, false,
            vertexStride, ws[i].vertexBuffer);
    glVertexAttribPointer(texCoordHandle, COORDS_PER_TEX_VER,
            GL_FLOAT, false,
            texStride, ws[i].texBuffer);
    glDrawArrays(GL_TRIANGLE_FAN, 0,
            ws[i].vertices.length / COORDS_PER_VERTEX);
}
glDisableVertexAttribArray(positionHandle);
What I've done:
Changed textures; tried different sizes (power of two and not) and different alpha values.
Tried switching different options on and off: GL_CULL_FACE, GL_BLEND, GL_TEXTURE_MIN(MAG)_FILTER, GL_TEXTURE_WRAP...
Checked for OpenGL errors in every possible place (with a helper of the kind sketched below); glGetError() returns no error.
Stepped through the code in the debugger; the textures converted to Bitmaps looked fine before and after loading them into OpenGL.
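For reference, a typical glGetError helper of the kind used for that check might look like this (a sketch, not the project's actual code; assumes android.opengl.GLES20, android.opengl.GLU and android.util.Log):

private static void checkGlError(String op) {
    int error;
    // Drain every pending error so none is silently left for a later call.
    while ((error = GLES20.glGetError()) != GLES20.GL_NO_ERROR) {
        Log.e("GLDebug", op + ": glError 0x" + Integer.toHexString(error)
                + " (" + GLU.gluErrorString(error) + ")");
        throw new RuntimeException(op + ": glError " + error);
    }
}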
Help, please :)

Problem solved. The erratic behavior was caused by a missing glEnableVertexAttribArray(texCoordHandle); call in onDraw(). The Note 4 was fine with that; it only required positionHandle (the vertices) to be enabled. The other phones apparently weren't too happy. Maybe because the Note 4 supports GL ES 3.1?
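For reference, the corrected draw code looks like this (same names as above; only the two texCoordHandle enable/disable calls are new relative to the loop shown earlier):

glUseProgram(program);
glUniformMatrix4fv(mvpMatrixHandle, 1, false, vpMatrix, 0);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texHandle[0]);
glUniform1i(texUnitHandle, 0);

glEnableVertexAttribArray(positionHandle);
glEnableVertexAttribArray(texCoordHandle);   // the call that was missing

WaySection[] ws = waySects.get(currWaySect);
for (int i = 0; i < ws.length; i++) {
    glVertexAttribPointer(positionHandle, COORDS_PER_VERTEX, GL_FLOAT, false,
            vertexStride, ws[i].vertexBuffer);
    glVertexAttribPointer(texCoordHandle, COORDS_PER_TEX_VER, GL_FLOAT, false,
            texStride, ws[i].texBuffer);
    glDrawArrays(GL_TRIANGLE_FAN, 0, ws[i].vertices.length / COORDS_PER_VERTEX);
}

glDisableVertexAttribArray(positionHandle);
glDisableVertexAttribArray(texCoordHandle);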
Thanks for your time.

Related

OpenGL-ES: is it possible to use an integer surface as a color attachment?

I am writing a game for Android devices which uses the Android NDK and OpenGL-ES. I am rendering an image to a framebuffer and then using that information in the CPU. More precision would be better, so I used:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, width, height, 0, GL_RGBA_INTEGER,
GL_UNSIGNED_INT, nullptr);
to create the surface for the (only) color attachment. I selected it because it was the only 32-bits-per-channel surface type usable as a color attachment listed on the OpenGL ES reference page for glTexImage2D.
This works fine on some devices, but on an Android 6 HTC phone, I get the following errors output from the phone:
E/Adreno-ES20: <core_glClear:62>: WARNING: glClear called on an integer buffer. Buffer contents will be undefined
<oxili_check_sp_rb_fmt_mismatch:86>: WARNING : Rendertarget does not match shader output type.
E/Adreno-ES20: <core_glClear:62>: WARNING: glClear called on an integer buffer. Buffer contents will be undefined
E/Adreno-ES20: <oxili_check_sp_rb_fmt_mismatch:86>: WARNING : Rendertarget does not match shader output type.
Note: These messages are in a log file, no OpenGL errors were returned with glGetError.
Am I getting this error just because it is a buggy ancient phone, or is there a problem with what I am doing?
The OpenGL-ES page on glTexImage2D states that the surface can be used as a color attachment:
Khronos glTexImage2D reference page
The output from the fragment shader is a mediump vec4 (gl_FragColor), but that cannot be changed, right?
Note: the result I get from the code is just the clear color on the phone with the error in the log file (and one other phone which is a later model of the same brand). There are no errors returned from glGetError. And glCheckFramebufferStatus returned that the framebuffer was complete.
Code for creating the framebuffer:
glGenTextures(1, &m_depthMap);
checkGraphicsError();
glBindTexture(GL_TEXTURE_2D, m_depthMap);
checkGraphicsError();
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
checkGraphicsError();
glBindTexture(GL_TEXTURE_2D, 0);
checkGraphicsError();
glGenFramebuffers(1, &m_depthMapFBO);
checkGraphicsError();
glBindFramebuffer(GL_FRAMEBUFFER, m_depthMapFBO);
checkGraphicsError();
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthMap, 0);
checkGraphicsError();
glGenTextures(1, &m_colorImage);
checkGraphicsError();
glActiveTexture(activeTextureIndicator);
checkGraphicsError();
glBindTexture(GL_TEXTURE_2D, m_colorImage);
checkGraphicsError();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, width, height, 0, GL_RGBA_INTEGER,
GL_UNSIGNED_INT, nullptr);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
checkGraphicsError();
glBindTexture(GL_TEXTURE_2D, 0);
checkGraphicsError();
glFramebufferTexture2D(GL_FRAMEBUFFER, attachmentIndicator, GL_TEXTURE_2D, m_colorImage, 0);
checkGraphicsError();
GLenum rc = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (rc != GL_FRAMEBUFFER_COMPLETE) {
    std::string c;
    switch (rc) {
        case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
            c = "GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT";
            break;
        case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
            c = "GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT";
            break;
        case GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
            c = "GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS";
            break;
        case GL_FRAMEBUFFER_UNSUPPORTED:
            c = "GL_FRAMEBUFFER_UNSUPPORTED";
            break;
        default:
            c = "Unknown return code.";
    }
    throw std::runtime_error(std::string("Framebuffer is not complete, returned: ") + c);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
checkGraphicsError();
Update: It turns out that with OpenGL ES GLSL version 1.00 you cannot change the fragment output type. I was using GLSL 1.00 to support lower-end and older phones. I changed the code so that it uses GLSL 3.00 if the call to eglCreateContext with EGL_CONTEXT_CLIENT_VERSION set to 3 succeeds; otherwise it falls back to GLSL 1.00 and does not use the integer surface. I am reading the result of the render with glReadPixels:
glReadPixels(0, 0, imageWidth, imageHeight, GL_RGBA_INTEGER, GL_UNSIGNED_INT, data.data());
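For reference, a minimal sketch of that version fallback using Android's Java EGL14 API (the project itself uses the NDK, but the eglCreateContext attribute list is the same; display and config are assumed to come from the usual eglGetDisplay/eglChooseConfig setup):

import android.opengl.EGL14;
import android.opengl.EGLConfig;
import android.opengl.EGLContext;
import android.opengl.EGLDisplay;

// Try to create an ES 3.x context first; fall back to ES 2.0 if that fails.
static EGLContext createContextPreferEs3(EGLDisplay display, EGLConfig config) {
    int[] es3Attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 3, EGL14.EGL_NONE };
    EGLContext context = EGL14.eglCreateContext(
            display, config, EGL14.EGL_NO_CONTEXT, es3Attribs, 0);
    if (context == null || context == EGL14.EGL_NO_CONTEXT) {
        int[] es2Attribs = { EGL14.EGL_CONTEXT_CLIENT_VERSION, 2, EGL14.EGL_NONE };
        context = EGL14.eglCreateContext(
                display, config, EGL14.EGL_NO_CONTEXT, es2Attribs, 0);
    }
    return context;
}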
I changed from calling glClear to calling glClearBufferuiv/glClearBufferfv, if OpenGL ES 3.0 is being used:
if (m_surfaceDetails->useIntTexture) {
    std::array<GLuint, 4> color = {0, 0, 0, 4294967295};
    glClearBufferuiv(GL_COLOR, 0, color.data());
    checkGraphicsError();

    GLfloat depthBufferClearValue = 1.0f;
    glClearBufferfv(GL_DEPTH, 0, &depthBufferClearValue);
    checkGraphicsError();
} else {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    checkGraphicsError();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    checkGraphicsError();
}
This works well for all my test devices except that old HTC phone with Android 6.0. On that phone I get no GL errors, either programmatically or in the debug log (i.e. the previously stated warnings about calling glClear on an integer buffer are gone). However, I get rgb = 1073741824, a = 4294967295. The result I was looking for in my test was rgb = 2147483647 and a = 4294967295. So I didn't get the clear color of rgb = 0 and a = 4294967295, I got a different (weird) value for rgb. Any other ideas, or is the phone just buggy?
Listed below are my new vertex and fragment shaders using OpenGL ES GLSL 3.00.
My vertex shader:
#version 300 es
precision highp float;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
uniform float nearestDepth;
uniform float farthestDepth;

layout(location = 0) in vec3 inPosition;

out vec3 fragColor;

void main() {
    gl_Position = proj * view * model * vec4(inPosition, 1.0);
    gl_Position.z = -gl_Position.z;

    vec4 pos = model * vec4(inPosition, 1.0);
    float z = (pos.z/pos.w - farthestDepth)/(nearestDepth - farthestDepth);
    if (z > 1.0) {
        z = 1.0;
    } else if (z < 0.0) {
        z = 0.0;
    }
    fragColor = vec3(z, z, z);
}
My fragment shader:
#version 300 es
precision highp float;
precision highp int;

in vec3 fragColor;

layout(location = 0) out uvec4 fragColorOut;

void main() {
    float maxUint = 4294967295.0;
    fragColorOut = uvec4(
        uint(fragColor.r * maxUint),
        uint(fragColor.g * maxUint),
        uint(fragColor.b * maxUint),
        uint(maxUint));
}
Update 2
Thanks for all the comments. I ran some tests and changed my shaders in response.
First, I checked the precision of highp and mediump floats and ints with glGetShaderPrecisionFormat; here's what I got:
GLint range[2];
GLint precision;
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT, range, &precision);
// range[0] = 127
// range[1] = 127
// precision = 23
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_INT, range, &precision);
// range[0] = 31
// range[1] = 31
// precision = 0
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_MEDIUM_FLOAT, range, &precision);
// range[0] = 15
// range[1] = 15
// precision = 10
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_MEDIUM_INT, range, &precision);
// range[0] = 15
// range[1] = 15
// precision = 0
A couple of things to note:
highp floats and ints are supported by this phone (or so it says).
Most of these values match the values stated in the OpenGL ES 3 reference card: https://www.khronos.org/files/opengles3-quick-reference-card.pdf - except the medium-precision float, which is supposed to have range 14 but claims range 15.
Still, using mediump in the fragment shader is more correct, since all phones are required to support it. So I switched to mediump floats and ints and a GL_RGBA16UI surface:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16UI, width, height, 0, GL_RGBA_INTEGER,
GL_UNSIGNED_SHORT, nullptr);
The new shaders are below:
The vertex shader:
#version 300 es
precision highp float;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
uniform float nearestDepth;
uniform float farthestDepth;

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inTexCoord;
layout(location = 3) in vec3 inNormal;

out mediump vec3 fragColor;

void main() {
    gl_Position = proj * view * model * vec4(inPosition, 1.0);
    gl_Position.z = -gl_Position.z;

    vec4 pos = model * vec4(inPosition, 1.0);
    float z = (pos.z/pos.w - farthestDepth)/(nearestDepth - farthestDepth);
    if (z > 1.0) {
        z = 1.0;
    } else if (z < 0.0) {
        z = 0.0;
    }
    fragColor = vec3(z, z, z);
}
The fragment shader:
#version 300 es
precision mediump int;
precision mediump float;

in vec3 fragColor;

layout(location = 0) out uvec4 fragColorOut;

void main() {
    // 16383 = 2^14 - 1: 2^14 is the top of the mediump float range, minus 1 so the
    // shift below stays within a mediump uint (which only goes up to 2^16 - 1).
    float maxUint = 16383.0;
    fragColorOut = uvec4(
        uint(fragColor.r * maxUint),
        uint(fragColor.g * maxUint),
        uint(fragColor.b * maxUint),
        16383u);
    // mediump uint goes from 0 to 2^16 - 1, so shift up into that range.
    fragColorOut = fragColorOut << 2;
}
This works for all devices except that Android 6 HTC phone, which returns all 0's for this value. Again, if I clear the depth surface to 0.8f or so, then I get the clear color.
The reason I am using an integer surface is that the GL_RGBA32F and GL_RGBA16F internal formats are not color-renderable in OpenGL ES 3.0. GL_RGBA8 is color-renderable but only gives 8 bits per channel.
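As an aside, on devices that expose the EXT_color_buffer_float / EXT_color_buffer_half_float extensions the float formats do become color-renderable; a minimal sketch of that check in Java (the project itself uses the NDK, where the same glGetString(GL_EXTENSIONS) query applies):

import android.opengl.GLES20;

// Must be called with a current GL context. Returns true if float (or half-float)
// color attachments are renderable via EXT_color_buffer_(half_)float.
static boolean floatColorAttachmentsSupported() {
    String extensions = GLES20.glGetString(GLES20.GL_EXTENSIONS);
    return extensions != null
            && (extensions.contains("GL_EXT_color_buffer_float")
                || extensions.contains("GL_EXT_color_buffer_half_float"));
}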
Update 3
My clear code and read code are below. There is code that tests whether the integer surface works; if it does not, useIntTexture is set to false and the float surface is used instead. So the branch to examine is the one taken when useIntTexture is true.
The only difference between clearing the depth buffer to 0.8f and to 1.0f is the value of the variable depthBufferClearValue. The code below has it set to 1.0f (as it should be; 0.8f was just an experiment).
ref.renderDetails->overrideClearColor(clearColor);
if (m_surfaceDetails->useIntTexture) {
    auto convert = [](float color) -> GLuint {
        return static_cast<GLuint>(std::round(color * std::numeric_limits<uint16_t>::max()));
    };
    std::array<GLuint, 4> color = {convert(clearColor.r), convert(clearColor.g),
                                   convert(clearColor.b), convert(clearColor.a)};
    glClearBufferuiv(GL_COLOR, 0, color.data());
    checkGraphicsError();

    // the only difference between the clear 0.8f case and the clear
    // 1.0f case is the line below. Right now it is clearing to 1.0f...
    GLfloat depthBufferClearValue = 1.0f;
    glClearBufferfv(GL_DEPTH, 0, &depthBufferClearValue);
    checkGraphicsError();
} else {
    glClearColor(clearColor.r, clearColor.g, clearColor.b, clearColor.a);
    checkGraphicsError();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    checkGraphicsError();
}

ref.renderDetails->draw(0, ref.commonObjectData, drawObjTable,
        drawObjTable->zValueReferences().begin(), drawObjTable->zValueReferences().end());
glFinish();
checkGraphicsError();

renderDetails::PostprocessingDataInputGL dataVariant;
if (m_surfaceDetails->useIntTexture) {
    /* width * height * 4 color values, each a uint16_t in size. */
    std::vector<uint16_t> data(static_cast<size_t>(imageWidth * imageHeight * 4), 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    checkGraphicsError();
    glReadPixels(0, 0, imageWidth, imageHeight, colorImageFormat.format, colorImageFormat.type, data.data());
    checkGraphicsError();
    dataVariant = std::move(data);
} else {
    /* width * height * 4 color values, each a char in size. */
    std::vector<uint8_t> data(static_cast<size_t>(imageWidth * imageHeight * 4), 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    checkGraphicsError();
    glReadPixels(0, 0, imageWidth, imageHeight, colorImageFormat.format, colorImageFormat.type, data.data());
    checkGraphicsError();
    dataVariant = std::move(data);
}
In case anyone else has the above problem, like I did, making two changes solved it for me:
I made the relevant "out" variable in the fragment shader a uvec4, as opposed to just a vec4.
I made sure that the "format" argument of my glReadPixels() call was also an "_INTEGER" variant, e.g. GL_RED_INTEGER/GL_UNSIGNED_BYTE instead of GL_RED/GL_UNSIGNED_BYTE in my case. While the original question does not explicitly mention glReadPixels(), I imagine it is still relevant, since getting the data to the CPU was mentioned.
Hopefully, if anyone else comes across this problem, one/both of the above changes will fix it.
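As a concrete illustration of the second change, a sketch of an integer-format read-back using the Java GLES30 API (the question's code is NDK C++, but the constants are the same; width and height are hypothetical parameters, and GL_RGBA_INTEGER with GL_UNSIGNED_INT is the combination implementations are required to accept for unsigned-integer color buffers):

import android.opengl.GLES30;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.IntBuffer;

// Reads an RGBA unsigned-integer color attachment back to the CPU.
static IntBuffer readIntegerAttachment(int width, int height) {
    IntBuffer pixels = ByteBuffer.allocateDirect(width * height * 4 * 4)
            .order(ByteOrder.nativeOrder())
            .asIntBuffer();
    GLES30.glReadBuffer(GLES30.GL_COLOR_ATTACHMENT0);
    // The format/type must be the *_INTEGER variants for integer attachments.
    GLES30.glReadPixels(0, 0, width, height,
            GLES30.GL_RGBA_INTEGER, GLES30.GL_UNSIGNED_INT, pixels);
    return pixels;
}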

OpenGL ES 2.0 dynamic texture not working

I'm very new to Android, Eclipse, and OpenGL ES 2.0 (and object-oriented programming in general), but a few days ago I got a program working (based on a few examples I found on the Internet) that shows a cube with each face displaying a different 512 x 512 bitmap image (a PNG loaded and converted once during the initialization phase).
My system is a 32-bit Vista PC, and I've been using Bluestacks as my emulator (version 0.8.11, released June 22, 2014, Android version 4.0.4). I believe its API level is 15. I have no physical Android devices to test my programs on at this time.
I now want to try a technique to use a dynamic texture.
I've looked for complete Android app examples using a dynamic texture (in Java), but I found only one and when I run it, it doesn't work on my Bluestacks.
Here's the page that has a link to the complete project containing the program:
http://blog.shayanjaved.com/2011/05/13/android-opengl-es-2-0-render-to-texture/
When I run it on my Bluestacks, all I see is a black screen.
One thing I noticed is that the texture size is huge. I changed the size to 512 x 512 (which I know works because my cube program uses 512 x 512 textures), but I still see only a black screen.
(It would be nice if somebody could try to run this program on his/her physical Android device/s and/or Bluestacks and post the result.)
The following is the method in which a dynamic texture is created and prepared for its use:
/**
 * Sets up the framebuffer and renderbuffer to render to texture
 */
private void setupRenderToTexture() {
    fb = new int[1];
    depthRb = new int[1];
    renderTex = new int[1];

    // generate
    GLES20.glGenFramebuffers(1, fb, 0);
    GLES20.glGenRenderbuffers(1, depthRb, 0);
    GLES20.glGenTextures(1, renderTex, 0);

    // generate color texture
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[0]);

    // parameters
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S,
            GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T,
            GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
            GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
            GLES20.GL_LINEAR);

    // create it
    // create an empty intbuffer first?
    int[] buf = new int[texW * texH];
    texBuffer = ByteBuffer.allocateDirect(buf.length
            * FLOAT_SIZE_BYTES).order(ByteOrder.nativeOrder()).asIntBuffer();
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, texW, texH, 0,
            GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, texBuffer);

    // create render buffer and bind 16-bit depth buffer
    GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, depthRb[0]);
    GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_DEPTH_COMPONENT16, texW, texH);

    /*** PART BELOW SHOULD BE DONE IN onDrawFrame ***/
    // bind framebuffer
    /*GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
    // specify texture as color attachment
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0, GLES20.GL_TEXTURE_2D, renderTex[0], 0);
    // attach render buffer as depth buffer
    GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT, GLES20.GL_RENDERBUFFER, depthRb[0]);
    // check status
    int status = GLES20.glCheckFramebufferStatus(GLES20.GL_FRAMEBUFFER);*/
}
Does the above piece of code seem sound?
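For comparison, a minimal sketch of how that commented-out part is typically used per frame in onDrawFrame (same field names as above; drawScene(), drawCubeWithTexture(), screenW and screenH are hypothetical placeholders for your own code):

// Pass 1: render into the offscreen texture.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fb[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, renderTex[0], 0);
GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
        GLES20.GL_RENDERBUFFER, depthRb[0]);
GLES20.glViewport(0, 0, texW, texH);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
drawScene();                        // placeholder: whatever should end up in the texture

// Pass 2: render to the default framebuffer, sampling the texture just filled.
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
GLES20.glViewport(0, 0, screenW, screenH);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, renderTex[0]);
drawCubeWithTexture();              // placeholder: the existing textured-cube drawing code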
Oh, one more thing:
I've been using the following as my vertex and fragment shaders. They've been working fine for showing a cube with static textures. (Normal data is read, but I'm not using any lights right now.)
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 a_Position;" +
"attribute vec4 a_Color;" +
"attribute vec3 a_Normal;" +
"attribute vec2 a_TexCoordinate;" + // Per-vertex texture coordinate information we will pass in.
"varying vec4 v_Color;" +
"varying vec2 v_TexCoordinate;" + // This will be passed into the fragment shader.
"void main() {" +
// The matrix must be included as a modifier of gl_Position.
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * a_Position;" +
" v_Color = a_Color;" +
" vec3 difuse = a_Normal;" +
" v_TexCoordinate = a_TexCoordinate;" + // pass through
"}";
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform sampler2D u_Texture;" + // The input texture.
"varying vec4 v_Color;" +
"varying vec2 v_TexCoordinate;" + // Interpolated texture coordinate per fragment.
"void main() {" +
" gl_FragColor = (v_Color * texture2D(u_Texture, v_TexCoordinate));" +
"}";
Should they work when I render to a texture? Or do I have to have separate vertex and fragment shaders when I'm rendering to a texture?
Thank you.

Can't achieve 60fps rendering simple quad, Android, Opengl ES 2.0

I'm working on a simple Pong-type game to get to grips with OpenGL and Android, and I seem to have hit a brick wall in terms of performance.
My game logic runs on a separate thread, with draw commands sent to the GL thread through a blocking queue. The problem is that I'm stuck at around 40 fps, and nothing I've tried seems to improve the framerate.
To keep things simple I set up OpenGL with:
GLES20.glDisable(GLES20.GL_CULL_FACE);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDisable(GLES20.GL_BLEND);
Set up of the opengl program and drawing is handled by the following class:
class GLRectangle {

    private final String vertexShaderCode =
            "precision lowp float;" +
            // This matrix member variable provides a hook to manipulate
            // the coordinates of the objects that use this vertex shader
            "uniform lowp mat4 uMVPMatrix;" +
            "attribute lowp vec4 vPosition;" +
            "void main() {" +
            // the matrix must be included as a modifier of gl_Position
            "  gl_Position = vPosition * uMVPMatrix;" +
            "}";

    private final String fragmentShaderCode =
            "precision lowp float;" +
            "uniform lowp vec4 vColor;" +
            "void main() {" +
            "  gl_FragColor = vColor;" +
            "}";

    protected static int mProgram = 0;
    private static ShortBuffer drawListBuffer;
    private static short drawOrder[] = { 0, 1, 2, 0, 2, 3 };//, 4, 5, 6, 4, 6, 7 }; // order to draw vertices

    // number of coordinates per vertex in this array
    private static final int COORDS_PER_VERTEX = 3;
    private static final int vertexStride = COORDS_PER_VERTEX * 4; // bytes per vertex

    GLRectangle() {
        int vertexShader = GameRenderer.loadShader(GLES20.GL_VERTEX_SHADER, vertexShaderCode);
        int fragmentShader = GameRenderer.loadShader(GLES20.GL_FRAGMENT_SHADER, fragmentShaderCode);

        mProgram = GLES20.glCreateProgram();             // create empty OpenGL ES Program
        GLES20.glAttachShader(mProgram, vertexShader);   // add the vertex shader to program
        GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
        GLES20.glLinkProgram(mProgram);                  // creates OpenGL ES program executables

        // initialize byte buffer for the index list
        ByteBuffer dlb = ByteBuffer.allocateDirect(
                // (# of coordinate values * 2 bytes per short)
                drawOrder.length * 2);
        dlb.order(ByteOrder.nativeOrder());
        drawListBuffer = dlb.asShortBuffer();
        drawListBuffer.put(drawOrder);
        drawListBuffer.position(0);
    }

    protected static void Draw(Drawable dObj, float mvpMatrix[]) {
        FloatBuffer vertexBuffer = dObj.vertexBuffer;

        GLES20.glUseProgram(mProgram);
        //GameRenderer.checkGlError("glUseProgram");

        // get handle to fragment shader's vColor member
        int mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
        //GameRenderer.checkGlError("glGetUniformLocation");

        // get handle to shape's transformation matrix
        int mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
        //GameRenderer.checkGlError("glGetUniformLocation");

        // get handle to vertex shader's vPosition member
        int mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
        //GameRenderer.checkGlError("glGetAttribLocation");

        // Apply the projection and view transformation
        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0);
        //GameRenderer.checkGlError("glUniformMatrix4fv");

        // Set color for drawing the quad
        GLES20.glUniform4fv(mColorHandle, 1, dObj.color, 0);
        //GameRenderer.checkGlError("glUniform4fv");

        // Enable a handle to the square vertices
        GLES20.glEnableVertexAttribArray(mPositionHandle);
        //GameRenderer.checkGlError("glEnableVertexAttribArray");

        // Prepare the square coordinate data
        GLES20.glVertexAttribPointer(mPositionHandle, COORDS_PER_VERTEX,
                GLES20.GL_FLOAT, false,
                vertexStride, vertexBuffer);
        //GameRenderer.checkGlError("glVertexAttribPointer");

        // Draw the square
        GLES20.glDrawElements(GLES20.GL_TRIANGLES, drawOrder.length,
                GLES20.GL_UNSIGNED_SHORT, drawListBuffer);
        //GameRenderer.checkGlError("glDrawElements");

        // Disable vertex array
        GLES20.glDisableVertexAttribArray(mPositionHandle);
        //GameRenderer.checkGlError("glDisableVertexAttribArray");
    }
}
I've done plenty of profiling and googling, but I can't find anything to make this run faster. I've included a screenshot of the DDMS output:
To me, it looks like glClear is causing GLThread to sleep for 23 ms... though I doubt that's really the case.
I have absolutely no idea how to make this more efficient; there's nothing fancy going on. In my quest for better rendering performance I have switched to the multi-threaded approach described above, turned off alpha blending and depth testing, changed to a batched drawing approach (not applicable for this simple example), and switched everything to lowp in the shaders.
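(One more micro-optimisation in the same spirit, though as the edit below shows it was not the real bottleneck, would be to look the shader handles up once after linking instead of on every Draw() call; a sketch against the class above, with hypothetical field names:)

// Cached once, e.g. at the end of the GLRectangle() constructor, after glLinkProgram.
private static int sColorHandle;
private static int sMVPMatrixHandle;
private static int sPositionHandle;

private static void cacheHandles() {
    sColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor");
    sMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix");
    sPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition");
}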
Any assistance with getting this to 60fps would be greatly appreciated!
Bruce
Edit: Well, talk about overthinking a problem. It turns out that I've had power-saving mode switched on for the past week... it seems to lock rendering to 40 fps.
This behavior occurs when Power Saving mode is switched on, using a Galaxy S3: it appears that power-saving mode locks the framerate to 40 fps. Switching it off easily achieved the desired 60 fps.
Not sure if you have control over the EGL setup for the device surface, but if you do, there is a way to set the update to run in "non-sync" mode:
egl.eglSwapInterval( dpy, 0 )
This isn't available on all devices but allows some control of your rendering.
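For code that uses the EGL14 API rather than the EGL10 interface shown above, the equivalent would be roughly the following (a sketch; it must run on the thread that owns the GL context, and not every device honours a swap interval of 0):

// 0 = do not wait for vsync before swapping buffers.
EGL14.eglSwapInterval(EGL14.eglGetCurrentDisplay(), 0);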

OpenglES 2.0 PNG alpha texture overlap

I'm trying to draw multiple hexagons on the screen that have an alpha channel. The image is this:
So, I load the texture into the program and that's OK. When it runs, the alpha channel is blended with the background color and that's OK, but when two hexagons overlap, the overlapped part becomes the color of the background! Picture below:
Of course, this is not the effect that I expected. I want them to overlap without the background being drawn over the other texture. Here is my code for drawing:
GLES20.glUseProgram(Program);

hVertex = GLES20.glGetAttribLocation(Program, "vPosition");
hColor = GLES20.glGetUniformLocation(Program, "vColor");
uTexture = GLES20.glGetUniformLocation(Program, "u_Texture");
hTexture = GLES20.glGetAttribLocation(Program, "a_TexCoordinate");
hMatrix = GLES20.glGetUniformLocation(Program, "uMVPMatrix");

GLES20.glVertexAttribPointer(hVertex, 3, GLES20.GL_FLOAT, false, 0, bVertex);
GLES20.glEnableVertexAttribArray(hVertex);
GLES20.glUniform4fv(hColor, 1, Color, 0);

GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, hTexture);
GLES20.glUniform1i(uTexture, 0);
GLES20.glVertexAttribPointer(hTexture, 2, GLES20.GL_FLOAT, false, 0, bTexture);
GLES20.glEnableVertexAttribArray(hTexture);

GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glEnable(GLES20.GL_BLEND);

x = -1; y = 0; z = 0;
for (int i = 0; i < 10; i++) {
    Matrix.setIdentityM(ModelMatrix, 0);
    Matrix.translateM(ModelMatrix, 0, x, y, z);
    x += 0.6f;
    Matrix.multiplyMM(ModelMatrix, 0, ModelMatrix, 0, ProjectionMatrix, 0);
    GLES20.glUniformMatrix4fv(hMatrix, 1, false, ModelMatrix, 0);
    GLES20.glDrawElements(GLES20.GL_TRIANGLES, DrawOrder.length, GLES20.GL_UNSIGNED_SHORT, bDrawOrder);
}

GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glDisableVertexAttribArray(hVertex);
}
And my fragment shader:
public final String fragmentShaderCode =
"precision mediump float;" +
"uniform vec4 vColor;" +
"uniform sampler2D u_Texture;" +
"varying vec2 v_TexCoordinate;" +
"void main() {" +
" gl_FragColor = vColor * texture2D(u_Texture, v_TexCoordinate);" +
"}";
And my renderer code:
super(context);
setEGLContextClientVersion(2);
getHolder().setFormat(PixelFormat.TRANSLUCENT);
setEGLConfigChooser(8, 8, 8, 8, 8, 8);
renderer = new GLRenderer(context);
setRenderer(renderer);
I already tried different blend functions with glBlendFunc, but nothing seems to work. Does anyone know what the problem is? I'm really lost. If you need any more code, just ask!
Thank you!
My guess is that you need to disable the depth test when drawing these. Since they all appear at the same depth, when you draw your leftmost ring, it writes into the depth buffer for every pixel in the quad, even the transparent ones.
Then when you draw the next quad to the right, the pixels which overlap don't get drawn because they fail the depth test, so you just get a blank area where it intersects with the first quad.
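A minimal sketch of that suggestion, using the same GLES20 calls as the drawing code above (either option should avoid the blank overlap area):

// Option 1: skip depth testing entirely while the blended hexagons are drawn.
GLES20.glDisable(GLES20.GL_DEPTH_TEST);

// Option 2: keep the depth test but stop the blended hexagons from writing depth.
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthMask(false);   // draw the blended hexagons here
GLES20.glDepthMask(true);    // restore before drawing opaque geometry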

vertex shader doesn't run on galaxy tab10 (tegra 2)

I created an app that uses GLES 2.0 on an HTC Desire S.
It works on the HTC, but not on a Samsung Galaxy Tab 10.1.
The program cannot be linked (GLES20.glGetProgramiv(mProgram, GLES20.GL_LINK_STATUS, linOk, 0) gives -1) and glGetError() gives me error 1282 (Invalid Operation).
When I replace this line (in the shader):
graph_coord.z = (texture2D(mytexture, graph_coord.xy / 2.0 + 0.5).r);
by
graph_coord.z = 0.2;
it also works on the Galaxy Tab.
My shader looks like this:
private final String vertexShaderCode =
"attribute vec2 coord2d;" +
"varying vec4 graph_coord;" +
"uniform mat4 texture_transform;" +
"uniform mat4 vertex_transform;" +
"uniform sampler2D mytexture;" +
"void main(void) {" +
" graph_coord = texture_transform * vec4(coord2d, 0, 1);" +
" graph_coord.z = (texture2D(mytexture, graph_coord.xy / 2.0 + 0.5).r);" +
" gl_Position = vertex_transform * vec4(coord2d, graph_coord.z, 1);" +
"}";
That's where the shaders are attached:
mProgram = GLES20.glCreateProgram(); // create empty OpenGL Program
GLES20.glAttachShader(mProgram, vertexShader); // add the vertex shader to program
GLES20.glAttachShader(mProgram, fragmentShader); // add the fragment shader to program
GLES20.glLinkProgram(mProgram); // create OpenGL program executables
int linOk[] = new int[1];
GLES20.glGetProgramiv(mProgram, GLES20.GL_LINK_STATUS, linOk,0);
And the texture is loaded here:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture_id[0]);
GLES20.glTexImage2D(
GLES20.GL_TEXTURE_2D, // target
0, // level, 0 = base, no mipmap,
GLES20.GL_LUMINANCE, // internalformat
size, // width
size, // height
0, // border, always 0 in OpenGL ES
GLES20.GL_LUMINANCE, // format
GLES20.GL_UNSIGNED_BYTE, // type
values
);
This seems to be a limitation of the Nvidia Tegra GPU. I was able to reproduce the error on a Tegra 3 GPU. Even though texture lookups in the vertex shader are in theory part of OpenGL ES 2.0, according to Nvidia the number of vertex shader texture units (GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS) for Tegra is 0 (PDF: OpenGL ES 2.0 Development for the Tegra Platform).
You have to use texture2DLod() instead of texture2D() to make texture lookups in vertex shader.
GLSL specs, section 8.7 Texture Lookup Functions: http://www.khronos.org/files/opengles_shading_language.pdf
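Concretely, the lookup line in the vertex shader string would become something like the following (a sketch: texture2DLod takes an explicit LOD as its third argument, 0.0 meaning the base level; note this still cannot help on a GPU that reports zero vertex texture image units):

private final String vertexShaderCode =
        "attribute vec2 coord2d;" +
        "varying vec4 graph_coord;" +
        "uniform mat4 texture_transform;" +
        "uniform mat4 vertex_transform;" +
        "uniform sampler2D mytexture;" +
        "void main(void) {" +
        "  graph_coord = texture_transform * vec4(coord2d, 0, 1);" +
        // texture2DLod instead of texture2D: explicit LOD, usable in the vertex stage.
        "  graph_coord.z = texture2DLod(mytexture, graph_coord.xy / 2.0 + 0.5, 0.0).r;" +
        "  gl_Position = vertex_transform * vec4(coord2d, graph_coord.z, 1);" +
        "}";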
