GLSL - Wrong data in vertex attribute - Android

I have already lost two days trying to figure out this issue, with no luck.
I have written a COLLADA animation renderer using OpenGL ES 2.0 for Android, with skinning done in the vertex shader. The code is almost complete and runs just fine on my HTC Desire S.
But when I run the same code on a Trident set-top box with a PowerVR chipset, my geometry is not displayed. After a day of debugging, I found that the bone matrix indices arriving in the shader are not -1 where they should be.
I verified that they are -1 on my phone, but not on the set-top box.
What could possibly be wrong?
Please save me from this big trouble.
Sorry for not putting up the code initially; here it is.
Below is the vertex shader. I expect vec2(boneIndices) to hold -1 in [0] as well as [1], but that is not the case on the PowerVR.
attribute vec4 vPosition;
attribute vec2 vTexCoord;
attribute vec2 boneIndices;
attribute vec2 boneWeights;

uniform mat4 boneMatrices[BNCNT];
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

varying mediump vec2 fTexCoord;
varying mediump vec3 col;

void main() {
    vec4 tempPosition = vPosition;
    int index = int(boneIndices.x);
    col = vec3(1.0, 0.0, 0.0);
    if (index >= 0) {
        col.y = 1.0;
        tempPosition = (boneMatrices[index] * vPosition) * boneWeights.x;
    }
    index = int(boneIndices.y);
    if (index >= 0) {
        col.z = 1.0;
        tempPosition = (boneMatrices[index] * vPosition) * boneWeights.y + tempPosition;
    }
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * tempPosition;
    fTexCoord = vTexCoord;
}
Setting up the attribute pointers:
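// Stride is 13 floats per vertex: 3 position, 2 texcoord, 4 floats that
// appear to be a color (no pointer is set for them), 2 bone indices,
// and 2 bone weights.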
glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 13*sizeof(GLfloat), 0);
glVertexAttribPointer(texCoord, 2, GL_FLOAT, GL_FALSE, 13*sizeof(GLfloat), (GLvoid*)(3*sizeof(GLfloat)));
glVertexAttribPointer(boneIndices, 2, GL_FLOAT, GL_FALSE, 13*sizeof(GLfloat), (GLvoid*)(9*sizeof(GLfloat)));
glVertexAttribPointer(boneWeights, 2, GL_FLOAT, GL_FALSE, 13*sizeof(GLfloat), (GLvoid*)(11*sizeof(GLfloat)));
glEnableVertexAttribArray(position);
glEnableVertexAttribArray(texCoord);
glEnableVertexAttribArray(boneIndices);
glEnableVertexAttribArray(boneWeights);
My vertex and index buffers:
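// Layout per vertex: x, y, z, s, t, r, g, b, a, boneIndex0, boneIndex1,
// boneWeight0, boneWeight1 (a bone index of -1 means "no bone").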
GLfloat vertices[13*6] =
{-0.5*size, -0.5*size, 0, 0,1, 1,1,1,1, -1,-1, 0,0,
-0.5*size, 0.5*size, 0, 0,0, 1,1,1,1, -1,-1, 0,0,
0.5*size, 0.5*size, 0, 1,0, 1,1,1,1, -1,-1, 0,0,
-0.5*size, -0.5*size, 0, 0,1, 1,1,1,1, -1,-1, 0,0,
0.5*size, 0.5*size, 0, 1,0, 1,1,1,1, -1,-1, 0,0,
0.5*size, -0.5*size, 0, 1,1, 1,1,1,1, -1,-1, 0,0 };
GLushort indices[]= {0,1,2, 3,4,5};
I expect the bone indices to be -1 in the shader, but they are not.

After days of frustration, I finally found the problem myself.
The culprit was the int() conversion, which returns 0 even when I pass in -1.
The observed behavior is that it returns:
0 for -1
-1 for -2
-2 for -3, and so on.
I am not sure if this is a driver/hardware bug, or if it is a consequence of the floating-point representation, where -1 is actually stored as something like -0.9999999.
Can anybody shed a little more light on this?
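If that is the cause (int() in GLSL truncates toward zero, so -0.9999999 becomes 0), rounding to the nearest integer before converting should be safe. A minimal sketch for the shader above:
// Round to nearest before converting, so an attribute stored as
// -0.9999999 still maps to -1 instead of truncating to 0.
int index = int(floor(boneIndices.x + 0.5));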

Related

OpenGL-ES: is it possible to use an integer surface as a color attachment?

I am writing a game for Android devices which uses the Android NDK and OpenGL-ES. I am rendering an image to a framebuffer and then using that information on the CPU. More precision would be better, so I used:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, width, height, 0, GL_RGBA_INTEGER,
GL_UNSIGNED_INT, nullptr);
to create the surface for the (only) color attachment. I selected it because it was the only 32-bits-per-channel surface type listed as usable for a color attachment on the OpenGL ES reference page for glTexImage2D.
This works fine on some devices, but on an Android 6 HTC phone, I get the following errors output from the phone:
E/Adreno-ES20: <core_glClear:62>: WARNING: glClear called on an integer buffer. Buffer contents will be undefined
<oxili_check_sp_rb_fmt_mismatch:86>: WARNING : Rendertarget does not match shader output type.
E/Adreno-ES20: <core_glClear:62>: WARNING: glClear called on an integer buffer. Buffer contents will be undefined
E/Adreno-ES20: <oxili_check_sp_rb_fmt_mismatch:86>: WARNING : Rendertarget does not match shader output type.
Note: these messages appear in the log; no OpenGL errors were returned by glGetError.
Am I getting this error just because it is a buggy ancient phone, or is there a problem with what I am doing?
The OpenGL-ES page on glTexImage2D states that the surface can be used as a color attachment:
Khronos glTexImage2D reference page
The output from the fragment shader is a mediump vec4 (gl_FragColor), but that cannot be changed, right?
Note: on the phone with the errors in the log (and on one other phone, a later model of the same brand), the result I get from the code is just the clear color. There are no errors returned from glGetError, and glCheckFramebufferStatus reports that the framebuffer is complete.
Code for creating the framebuffer:
glGenTextures(1, &m_depthMap);
checkGraphicsError();
glBindTexture(GL_TEXTURE_2D, m_depthMap);
checkGraphicsError();
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
checkGraphicsError();
glBindTexture(GL_TEXTURE_2D, 0);
checkGraphicsError();
glGenFramebuffers(1, &m_depthMapFBO);
checkGraphicsError();
glBindFramebuffer(GL_FRAMEBUFFER, m_depthMapFBO);
checkGraphicsError();
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_depthMap, 0);
checkGraphicsError();
glGenTextures(1, &m_colorImage);
checkGraphicsError();
glActiveTexture(activeTextureIndicator);
checkGraphicsError();
glBindTexture(GL_TEXTURE_2D, m_colorImage);
checkGraphicsError();
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32UI, width, height, 0, GL_RGBA_INTEGER,
GL_UNSIGNED_INT, nullptr);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
checkGraphicsError();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
checkGraphicsError();
glBindTexture(GL_TEXTURE_2D, 0);
checkGraphicsError();
glFramebufferTexture2D(GL_FRAMEBUFFER, attachmentIndicator, GL_TEXTURE_2D, m_colorImage, 0);
checkGraphicsError();
GLenum rc = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (rc != GL_FRAMEBUFFER_COMPLETE) {
    std::string c;
    switch (rc) {
        case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
            c = "GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT";
            break;
        case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
            c = "GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT";
            break;
        case GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
            c = "GL_FRAMEBUFFER_INCOMPLETE_DIMENSIONS";
            break;
        case GL_FRAMEBUFFER_UNSUPPORTED:
            c = "GL_FRAMEBUFFER_UNSUPPORTED";
            break;
        default:
            c = "Unknown return code.";
    }
    throw std::runtime_error(std::string("Framebuffer is not complete, returned: ") + c);
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
checkGraphicsError();
Update: It turns out that with OpenGL ES GLSL version 1.00 you cannot change the output types. I was using GLSL 1.00 to be able to support lower-end and older phones. I changed the code so that it uses GLSL 3.00 if the call to eglCreateContext with EGL_CONTEXT_CLIENT_VERSION set to 3 succeeds; otherwise it uses GLSL version 1.00 and does not use the integer surface. I am reading the result of the render with glReadPixels:
glReadPixels(0, 0, imageWidth, imageHeight, GL_RGBA_INTEGER, GL_UNSIGNED_INT, data.data());
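For reference, a minimal sketch of the version fallback described above, assuming an EGLDisplay display and an EGLConfig config have already been chosen:
#include <EGL/egl.h>

// Ask for an ES 3.0 context first; fall back to ES 2.0 (GLSL 1.00,
// no integer surface) if that fails.
const EGLint attribs3[] = { EGL_CONTEXT_CLIENT_VERSION, 3, EGL_NONE };
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, attribs3);
bool useIntTexture = (context != EGL_NO_CONTEXT);
if (!useIntTexture) {
    const EGLint attribs2[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    context = eglCreateContext(display, config, EGL_NO_CONTEXT, attribs2);
}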
I changed from calling glClear to calling glClearBufferuiv/glClearBufferfv when OpenGL ES 3.0 is being used:
if (m_surfaceDetails->useIntTexture) {
    std::array<GLuint, 4> color = {0, 0, 0, 4294967295};
    glClearBufferuiv(GL_COLOR, 0, color.data());
    checkGraphicsError();
    GLfloat depthBufferClearValue = 1.0f;
    glClearBufferfv(GL_DEPTH, 0, &depthBufferClearValue);
    checkGraphicsError();
} else {
    glClearColor(0.0, 0.0, 0.0, 1.0);
    checkGraphicsError();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    checkGraphicsError();
}
This works well for all my test devices except that old HTC phone with Android 6.0. On that phone, I now get no GL errors, either programmatically or in the debug log (i.e. the previously stated warnings about calling glClear on an integer surface are gone). However, I get rgb = 1073741824, a = 4294967295. The result I was looking for in my test was rgb = 2147483647 and a = 4294967295. So I didn't get the clear color of rgb = 0 and a = 4294967295; I got a different (weird) value for rgb. Any other ideas, or is the phone just buggy?
Listed below are my new vertex and fragment shaders using OpenGL ES GLSL 3.00.
My vertex shader:
#version 300 es
precision highp float;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
uniform float nearestDepth;
uniform float farthestDepth;

layout(location = 0) in vec3 inPosition;

out vec3 fragColor;

void main() {
    gl_Position = proj * view * model * vec4(inPosition, 1.0);
    gl_Position.z = -gl_Position.z;
    vec4 pos = model * vec4(inPosition, 1.0);
    float z = (pos.z/pos.w - farthestDepth)/(nearestDepth - farthestDepth);
    if (z > 1.0) {
        z = 1.0;
    } else if (z < 0.0) {
        z = 0.0;
    }
    fragColor = vec3(z, z, z);
}
My fragment shader:
#version 300 es
precision highp float;
precision highp int;

in vec3 fragColor;

layout(location = 0) out uvec4 fragColorOut;

void main() {
    float maxUint = 4294967295.0;
    fragColorOut = uvec4(
        uint(fragColor.r * maxUint),
        uint(fragColor.g * maxUint),
        uint(fragColor.b * maxUint),
        uint(maxUint));
}
Update 2
Thanks for all the comments. I ran some tests and changed my shaders in response to the comments.
I checked the precision of highp and mediump floats and ints with glGetShaderPrecisionFormat, and here's what I got:
GLint range[2];
GLint precision;
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_FLOAT, range, &precision);
// range[0] = 127
// range[1] = 127
// precision = 23
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_HIGH_INT, range, &precision);
// range[0] = 31
// range[1] = 31
// precision = 0
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_MEDIUM_FLOAT, range, &precision);
// range[0] = 15
// range[1] = 15
// precision = 10
glGetShaderPrecisionFormat(GL_FRAGMENT_SHADER, GL_MEDIUM_INT, range, &precision);
// range[0] = 15
// range[1] = 15
// precision = 0
A couple of things to note:
highp floats and ints are supported by this phone (or so it says).
Most of these values match the ones stated in the OpenGL ES 3 reference card (https://www.khronos.org/files/opengles3-quick-reference-card.pdf), except the medium-precision float, which is supposed to have range 14 but claims range 15.
But using mediump in the fragment shader is more correct, since all phones are required to support it. So I switched to using mediump floats and ints and a GL_RGBA16UI surface:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16UI, width, height, 0, GL_RGBA_INTEGER,
GL_UNSIGNED_SHORT, nullptr);
The new shaders are below:
The vertex shader:
#version 300 es
precision highp float;

uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;
uniform float nearestDepth;
uniform float farthestDepth;

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inColor;
layout(location = 2) in vec2 inTexCoord;
layout(location = 3) in vec3 inNormal;

out mediump vec3 fragColor;

void main() {
    gl_Position = proj * view * model * vec4(inPosition, 1.0);
    gl_Position.z = -gl_Position.z;
    vec4 pos = model * vec4(inPosition, 1.0);
    float z = (pos.z/pos.w - farthestDepth)/(nearestDepth - farthestDepth);
    if (z > 1.0) {
        z = 1.0;
    } else if (z < 0.0) {
        z = 0.0;
    }
    fragColor = vec3(z, z, z);
}
The fragment shader:
#version 300 es
precision mediump int;
precision mediump float;

in vec3 fragColor;

layout(location = 0) out uvec4 fragColorOut;

void main() {
    // 2^14 the highest value for mediump float. -1 because uint only goes to 2^16-1, see below
    float maxUint = 16383.0;
    fragColorOut = uvec4(
        uint(fragColor.r * maxUint),
        uint(fragColor.g * maxUint),
        uint(fragColor.b * maxUint),
        16383u);
    // mediump uint goes from 0 to 2^16-1
    fragColorOut = fragColorOut << 2;
}
This works for all devices except that Android 6 HTC phone, which returns all 0's for this value. Again, if I clear the depth buffer to 0.8f or so, then I get the clear color.
The reason I am using the integer surface is that GL_RGBA32F and GL_RGBA16F internal formats do not support color rendering in OpenGL ES 3.0. GL_RGBA8 is supported but is only 8 bits per channel.
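(As an aside, devices that export the GL_EXT_color_buffer_float extension do make those float formats color-renderable on ES 3.0; a hedged sketch of checking for it at runtime:)
#include <cstring>

// If GL_EXT_color_buffer_float is advertised, GL_RGBA16F/GL_RGBA32F become
// color-renderable and could replace the integer surface on such devices.
const char *exts = reinterpret_cast<const char *>(glGetString(GL_EXTENSIONS));
bool hasFloatColorBuffer =
        exts != nullptr && std::strstr(exts, "GL_EXT_color_buffer_float") != nullptr;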
Update 3
My clear and read code is below. I have code that tests whether the integer surface works; if it does not, useIntTexture is set to false and the float surface is used instead. So the branch to examine is the one where useIntTexture is true.
The only difference between clearing the depth buffer to 0.8f and to 1.0f is the value of the variable depthBufferClearValue. The code below has it set to 1.0f (as it should be; 0.8f was just an experiment).
ref.renderDetails->overrideClearColor(clearColor);
if (m_surfaceDetails->useIntTexture) {
    auto convert = [](float color) -> GLuint {
        return static_cast<GLuint>(std::round(color * std::numeric_limits<uint16_t>::max()));
    };
    std::array<GLuint, 4> color = {convert(clearColor.r), convert(clearColor.g),
                                   convert(clearColor.b), convert(clearColor.a)};
    glClearBufferuiv(GL_COLOR, 0, color.data());
    checkGraphicsError();
    // the only difference between the clear 0.8f case and the clear
    // 1.0f case is the below line. Right now it is clearing 1.0f...
    GLfloat depthBufferClearValue = 1.0f;
    glClearBufferfv(GL_DEPTH, 0, &depthBufferClearValue);
    checkGraphicsError();
} else {
    glClearColor(clearColor.r, clearColor.g, clearColor.b, clearColor.a);
    checkGraphicsError();
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    checkGraphicsError();
}
ref.renderDetails->draw(0, ref.commonObjectData, drawObjTable,
    drawObjTable->zValueReferences().begin(), drawObjTable->zValueReferences().end());
glFinish();
checkGraphicsError();
renderDetails::PostprocessingDataInputGL dataVariant;
if (m_surfaceDetails->useIntTexture) {
    /* width * height * 4 color values, each a uint16_t in size. */
    std::vector<uint16_t> data(static_cast<size_t>(imageWidth * imageHeight * 4), 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    checkGraphicsError();
    glReadPixels(0, 0, imageWidth, imageHeight, colorImageFormat.format, colorImageFormat.type, data.data());
    checkGraphicsError();
    dataVariant = std::move(data);
} else {
    /* width * height * 4 color values, each a char in size. */
    std::vector<uint8_t> data(static_cast<size_t>(imageWidth * imageHeight * 4), 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    checkGraphicsError();
    glReadPixels(0, 0, imageWidth, imageHeight, colorImageFormat.format, colorImageFormat.type, data.data());
    checkGraphicsError();
    dataVariant = std::move(data);
}
In case anyone else has the above problem, like I did, two changes solved it for me:
I made the relevant "out" variable in the fragment shader a uvec4, as opposed to just a vec4.
I made sure that the "format" argument to my glReadPixels() call was also an "_INTEGER" variant: in my case, GL_RED_INTEGER/GL_UNSIGNED_BYTE instead of GL_RED/GL_UNSIGNED_BYTE. While the original question does not explicitly mention glReadPixels(), I imagine it is still relevant, as getting the data to the CPU was mentioned.
Hopefully, if anyone else comes across this problem, one/both of the above changes will fix it.
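For illustration, a minimal sketch of that matched pair for a GL_RGBA32UI attachment like the one in the question (width/height assumed):
#include <vector>

// Integer color buffers must be read with an *_INTEGER external format;
// for GL_RGBA32UI the required pairing is GL_RGBA_INTEGER / GL_UNSIGNED_INT.
std::vector<GLuint> pixels(static_cast<size_t>(width) * height * 4);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, width, height, GL_RGBA_INTEGER, GL_UNSIGNED_INT, pixels.data());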

OpenGL ES 2.0 Failing to correctly assign the color attribute

I'm struggling a bit to apply color to my geometry. When I specify it directly in the vertex shader ("varColor = vec4(1.0, 0.5, 0.4, 1.0);"), everything is OK. But if I use color values from the "vColor" attribute, everything gets messed up.
(Added some screenshots to show what I mean)
Can someone help me to figure out what am I doing wrong, or point me in the right direction? Thanks.
Using "varColor = vec4(1.0, 0.5, 0.4, 1.0);"
Using "varColor = vColor"
Vertex shader:
precision mediump float;

uniform mat4 modelViewProjectionMatrix;

attribute vec4 vPosition;
attribute vec2 vTexCoord;
attribute vec4 vColor;

varying lowp vec4 varColor;
varying mediump vec2 varTexCoord;

void main()
{
    gl_Position = modelViewProjectionMatrix * vPosition;
    varTexCoord = vTexCoord;
    // varColor = vColor;
    varColor = vec4(1.0, 0.5, 0.4, 1.0);
}
Fragment shader:
precision mediump float;

uniform sampler2D Texture0;

varying vec4 varColor;
varying vec2 varTexCoord;

void main()
{
    gl_FragColor = texture2D(Texture0, varTexCoord) * varColor;
}
After the shader is linked, I'm binding my attributes like this:
mMatrixMVP = glGetUniformLocation(mProgramId, "modelViewProjectionMatrix");
glBindAttribLocation(mProgramId, 0, "vPosition");
glBindAttribLocation(mProgramId, 1, "vTexCoord");
glBindAttribLocation(mProgramId, 2, "vColor");
mTexture = glGetUniformLocation(mProgramId, "Texture0");
glUniform1i(mTexture, 0);
Structure that holds my vertex information:
struct Vertex
{
    float xyz[3];
    float st[2];
    unsigned char color[4]; // All assigned to value of 255
};
When rendering, after the vertex buffer is bound, I set the vertex attributes like this:
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) offsetof(Vertex, xyz));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) offsetof(Vertex, st));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(Vertex), (GLvoid*) offsetof(Vertex, color));
glActiveTexture(GL_TEXTURE0);
pTexture->Bind(); // Just a "glBindTexture(GL_TEXTURE_2D, mTextureId);"
glUniform1i(mpCurrentShader->GetTexture(), 0);
After this I bind the index buffer and call glDrawElements.
Then I call glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0), disable all attributes with glDisableVertexAttribArray, and finally call glBindBuffer(GL_ARRAY_BUFFER, 0).
You need to make your glBindAttribLocation() calls before linking the shader program.
If you link a program without specifying the location of attributes with either glBindAttribLocation(), or with layout qualifiers in the shader code, the attribute locations will be assigned automatically when the shader program is linked. If you call glBindAttribLocation() after linking the program, the new locations will only take effect if you link the shader program again.
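A minimal sketch of that order (shader objects assumed to be compiled already):
// Bind attribute locations BEFORE glLinkProgram; bindings made afterwards
// only take effect on the next link.
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glBindAttribLocation(program, 0, "vPosition");
glBindAttribLocation(program, 1, "vTexCoord");
glBindAttribLocation(program, 2, "vColor");
glLinkProgram(program);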

LibGDX ShaderProgram is not compiled on Android device

I am using LibGDX and have a simple fragment shader:
#ifdef GL_ES
#define mediump lowp
precision mediump float;
#else
#define lowp
#endif

uniform sampler2D u_texture;
uniform vec2 iResolution;
uniform float iGlobalTime;
uniform float glow;

void main(void)
{
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    //uv.y = 1-uv.y; // THIS LINE EVOKES AN ERROR!!!
    uv.y += (sin((uv.x + (iGlobalTime * 0.2)) * 10.0) * 0.02) + 0.02;
    vec4 color = vec4(glow, glow, glow, 1);
    vec4 texColor = texture2D(u_texture, uv) * color;
    gl_FragColor = texColor;
}
It works well on PC, but on an Android device this shader does not compile. If I comment out that line (the commented one in the example), everything goes OK.
How I use it:
mesh = new Mesh(true, 4, 6, VertexAttribute.Position(), VertexAttribute.ColorUnpacked(), VertexAttribute.TexCoords(0));
mesh.setVertices(new float[]
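// 9 floats per vertex: x, y, z (position), r, g, b, a (color), u, v (texcoord)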
{0.5f, 0.5f, 0, 1, 1, 1, 1, 0, 1,
0.5f, -0.5f, 0, 1, 1, 1, 1, 1, 1,
-0.5f, -0.5f, 0, 1, 1, 1, 1, 1, 0,
-0.5f, 0.5f, 0, 1, 1, 1, 1, 0, 0});
mesh.setIndices(new short[] {0, 1, 2, 2, 3, 0});
this.texture = texture;
shader = new ShaderProgram(Gdx.files.internal("shaders/default.vertex"),
Gdx.files.internal("shaders/glowwave.fragment"));
Render method:
texture.bind();
shader.begin();
shader.setUniformMatrix("u_worldView", localCam.projection);
shader.setUniformi("u_texture", 0);
shader.setUniformf("iGlobalTime", time);
shader.setUniformf("iResolution", new Vector2(Gdx.graphics.getWidth(),Gdx.graphics.getHeight()+15));
shader.setUniformf("glow", glow);
mesh.render(shader, GL20.GL_TRIANGLES);
shader.end();
What can the reason be? It seems to me the problem is in the preprocessor, although I could be wrong. The compile errors are:
ERROR: 0:17: '-' : Wrong operand types. No operation '-' exists that takes a left-hand operand of type 'const int' and a right operand of type 'float' (and there is no acceptable conversion)
ERROR: 0:17: 'assign' : cannot convert from 'int' to 'float'
ERROR: 2 compilation errors. No code generated.
Try using 1.0 rather than 1. GLSL ES does not implicitly convert int to float, so "1 - uv.y" subtracts a float from a const int and fails on the device, while desktop GLSL allows the conversion; that is why the shader compiles on PC.
uv.y = 1.0 - uv.y; // THIS LINE was EVOKing AN ERROR!!!

Android GL ES 2.0 Ortho Matrices

EDIT: Right, fixed it :D The issue was that I was trying to set the Projection matrix before calling glUseProgram().
I'm starting out with GL ES 2.0 on Android, and am trying to migrate some of my code over from 1.1. I've defined vertex and frag shaders as per the official docs, and after some googling I understand how the Model/Projection matrices work together, yet I can't seem to get anything but a blank screen.
I'm passing in a model view matrix to my vert shader, and am multiplying it with the ortho projection before multiplying the resulting mvp matrix with the vertex position. Here are my shaders to clarify:
Vertex Shader
attribute vec3 Position;

uniform mat4 Projection;
uniform mat4 ModelView;

void main() {
    mat4 mvp = Projection * ModelView;
    gl_Position = mvp * vec4(Position.xyz, 1);
}
Fragment Shader
precision mediump float;

uniform vec4 Color;

void main() {
    gl_FragColor = Color;
}
I'm building the projection matrix in my renderer's onSurfaceChanged():
int projectionHandle = GLES20.glGetUniformLocation(shaderProg, "Projection");
Matrix.orthoM(projection, 0, -width / 2, width / 2, -height / 2, height / 2, -10, 10);
GLES20.glUniformMatrix4fv(projectionHandle, 1, false, projection, 0);
Then in my onDrawFrame(), I call each actor's draw routine, which looks like:
int positionHandle = GLES20.glGetAttribLocation(Renderer.getShaderProg(), "Position");
int colorHandle = GLES20.glGetAttribLocation(Renderer.getShaderProg(), "Color");
int modelHandle = GLES20.glGetUniformLocation(Renderer.getShaderProg(), "ModelView");
float[] modelView = new float[16];
Matrix.setIdentityM(modelView, 0);
Matrix.rotateM(modelView, 0, rotation, 0, 0, 1.0f);
Matrix.translateM(modelView, 0, position.x, position.y, 1.0f);
GLES20.glUniformMatrix4fv(modelHandle, 1, false, modelView, 0);
GLES20.glUniform4fv(colorHandle, 1, color.toFloatArray(), 0);
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 0, vertBuffer);
GLES20.glEnableVertexAttribArray(positionHandle);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
GLES20.glDisableVertexAttribArray(positionHandle);
I realize that I can optimize this a bit, but I just want to get it to work first. The vertices are in a FloatBuffer, and centered around the origin. Any thoughts on what I am doing wrong? I've been checking my code against various tutorials and SO questions/answers, and can't see what I'm doing wrong.
Right, fixed it :D The issue was that I was trying to set the Projection matrix before calling glUseProgram().
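For anyone hitting the same thing: glUniform* calls apply to the program currently bound with glUseProgram(), so the program must be made current first. A minimal sketch in C-style GL (the GLES20 calls mirror it one-to-one):
// glUniformMatrix4fv writes into the *current* program; bind it first.
glUseProgram(shaderProg);
GLint projectionHandle = glGetUniformLocation(shaderProg, "Projection");
glUniformMatrix4fv(projectionHandle, 1, GL_FALSE, projection);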

OpenGL ES 2.0: Filling in a Sphere instead of it being wireframe

So, by following some other posts etc., I have finally created a sphere in OpenGL ES 2.0 (Android).
However, it currently renders as a wireframe instead of being filled in.
Draw code
public void draw()
{
    // Set our per-vertex lighting program.
    GLES20.glUseProgram(mProgramHandle);

    // Set program handles for drawing.
    mRenderer.mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_MVPMatrix");
    mRenderer.mMVMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_MVMatrix");
    mNormalHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_Normal");
    mPositionHandle = GLES20.glGetAttribLocation(mProgramHandle, "a_Position");
    mColorHandle = GLES20.glGetUniformLocation(mProgramHandle, "v_Color");

    // Translate the cube into the screen.
    Matrix.setIdentityM(mRenderer.mModelMatrix, 0);
    Matrix.translateM(mRenderer.mModelMatrix, 0, position.x, position.y, position.z);
    Matrix.scaleM(mRenderer.mModelMatrix, 0, scale.x, scale.y, scale.z);
    Matrix.rotateM(mRenderer.mModelMatrix, 0, rotation.x, 1, 0, 0);
    Matrix.rotateM(mRenderer.mModelMatrix, 0, rotation.y, 0, 1, 0);
    Matrix.rotateM(mRenderer.mModelMatrix, 0, rotation.z, 0, 0, 1);

    float color[] = { 0.0f, 0.0f, 1.0f, 1.0f };

    // Set color for drawing the triangle
    GLES20.glUniform4fv(mColorHandle, 1, color, 0);

    GLES20.glEnableVertexAttribArray(mPositionHandle);
    GLES20.glVertexAttribPointer(mPositionHandle, 3, GLES20.GL_FLOAT, false, BYTES_PER_VERTEX, vertexBuffer);
    GLES20.glEnableVertexAttribArray(mNormalHandle);

    // This multiplies the view matrix by the model matrix, and stores the result in the MVP matrix
    // (which currently contains model * view).
    Matrix.multiplyMM(mRenderer.mMVPMatrix, 0, mRenderer.mViewMatrix, 0, mRenderer.mModelMatrix, 0);

    // Pass in the modelview matrix.
    GLES20.glUniformMatrix4fv(mRenderer.mMVMatrixHandle, 1, false, mRenderer.mMVPMatrix, 0);

    // This multiplies the modelview matrix by the projection matrix, and stores the result in the MVP matrix
    // (which now contains model * view * projection).
    Matrix.multiplyMM(mTemporaryMatrix, 0, mRenderer.mProjectionMatrix, 0, mRenderer.mMVPMatrix, 0);
    System.arraycopy(mTemporaryMatrix, 0, mRenderer.mMVPMatrix, 0, 16);

    // Pass in the combined matrix.
    GLES20.glUniformMatrix4fv(mRenderer.mMVPMatrixHandle, 1, false, mRenderer.mMVPMatrix, 0);

    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, vertexCount);
    //GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
}
Sphere creation code
private void generateSphereCoords(float radius, int stacks, int slices)
{
    for (int stackNumber = 0; stackNumber <= stacks; ++stackNumber)
    {
        for (int sliceNumber = 0; sliceNumber < slices; ++sliceNumber)
        {
            float theta = (float) (stackNumber * Math.PI / stacks);
            float phi = (float) (sliceNumber * 2 * Math.PI / slices);
            float sinTheta = FloatMath.sin(theta);
            float sinPhi = FloatMath.sin(phi);
            float cosTheta = FloatMath.cos(theta);
            float cosPhi = FloatMath.cos(phi);
            vertexBuffer.put(new float[]{radius * cosPhi * sinTheta, radius * sinPhi * sinTheta, radius * cosTheta});
        }
    }
    for (int stackNumber = 0; stackNumber < stacks; ++stackNumber)
    {
        for (int sliceNumber = 0; sliceNumber <= slices; ++sliceNumber)
        {
            indexBuffer.put((short) ((stackNumber * slices) + (sliceNumber % slices)));
            indexBuffer.put((short) (((stackNumber + 1) * slices) + (sliceNumber % slices)));
        }
    }
}
Any ideas how I can make this one full colour instead of a wireframe?
In the fragment shader I have:
precision mediump float;        // Set the default precision to medium. We don't need as high of a
                                // precision in the fragment shader.
uniform vec3 u_LightPos;        // The position of the light in eye space.
uniform sampler2D u_Texture;    // The input texture.
uniform vec4 v_Color;

varying vec3 v_Position;        // Interpolated position for this fragment.
varying vec3 v_Normal;          // Interpolated normal for this fragment.

// The entry point for our fragment shader.
void main()
{
    // Will be used for attenuation.
    float distance = length(u_LightPos - v_Position);

    // Get a lighting direction vector from the light to the vertex.
    vec3 lightVector = normalize(u_LightPos - v_Position);

    // Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
    // pointing in the same direction then it will get max illumination.
    float diffuse = max(dot(v_Normal, lightVector), 0.0);

    // Add attenuation.
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance)));

    // Add ambient lighting
    diffuse = diffuse + 0.7;

    gl_FragColor = v_Color;
}
The vertex shader:
uniform mat4 u_MVPMatrix;   // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix;    // A constant representing the combined model/view matrix.

attribute vec4 a_Position;  // Per-vertex position information we will pass in.
attribute vec3 a_Normal;    // Per-vertex normal information we will pass in.

varying vec3 v_Position;    // This will be passed into the fragment shader.
varying vec3 v_Normal;      // This will be passed into the fragment shader.

// The entry point for our vertex shader.
void main()
{
    // Transform the vertex into eye space.
    v_Position = vec3(u_MVMatrix * a_Position);

    // Transform the normal's orientation into eye space.
    v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));

    // gl_Position is a special variable used to store the final position.
    // Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
    gl_Position = u_MVPMatrix * a_Position;
}
It looks like you are calculating vertex indices in your setup code, but not using them. Consider using glDrawElements instead of glDrawArrays.
It also looks like you are half way between using GL_TRIANGLE_STRIP (one new index per triangle after the first triangle) and GL_TRIANGLES (three indices per triangle). You will probably find it easier to use GL_TRIANGLES.
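For illustration, a hedged sketch of that draw call in C-style GL (GLES20.glDrawElements mirrors it), assuming the index buffer has been regenerated to hold three indices per triangle and indexCount is the number of indices written:
// Draw the sphere from the index buffer instead of raw vertex order, so
// adjacent stacks/slices share vertices and form solid triangles.
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indexData);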
