Shaders not causing any effect - OpenGL - Android

I'm trying to render onto a texture that resides in another class whose code I can't look at. I only have access to its texture id, which was created with glGenTextures.
I'm creating my own shaders and linking them into a program, but they don't affect anything; I just see a white screen on my phone. I've put an OpenGL error check after every statement, and everything passes without any error.
I wanted to ask: could there be previously attached shaders, or some other state tied to that texture, that is interfering with my own shaders? (I don't even know if that question makes sense.)
I'm using the utility class from grafika for program and shader creation - https://github.com/google/grafika/blob/master/src/com/android/grafika/gles/GlUtil.java - so I'm fairly sure my shader-related code is okay.
This is my main drawing loop -
log("Receiving frame");
GLES20.glUseProgram(programHandle);
GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mainTexture.glName);
//
// Copy the model / view / projection matrix over.
GLES20.glUniformMatrix4fv(muMVPMatrixLoc, 1, false, GlUtil.IDENTITY_MATRIX, 0);
GlUtil.checkGlError("glUniformMatrix4fv");
// Copy the texture transformation matrix over.
/*
GLES20.glUniformMatrix4fv(muTexMatrixLoc, 1, false, texMatrix, 0);
GlUtil.checkGlError("glUniformMatrix4fv"); */
// Enable the "aPosition" vertex attribute.
GLES20.glEnableVertexAttribArray(maPositionLoc);
GlUtil.checkGlError("glEnableVertexAttribArray");
// Connect vertexBuffer to "aPosition".
GLES20.glVertexAttribPointer(maPositionLoc, 2,
GLES20.GL_FLOAT, false, 2 * GlUtil.SIZEOF_FLOAT, GlUtil.FULL_RECTANGLE_BUF);
GlUtil.checkGlError("glVertexAttribPointer");
// Enable the "aTextureCoord" vertex attribute.
GLES20.glEnableVertexAttribArray(maTextureCoordLoc);
GlUtil.checkGlError("glEnableVertexAttribArray");
// Connect texBuffer to "aTextureCoord".
GLES20.glVertexAttribPointer(maTextureCoordLoc, 2,
GLES20.GL_FLOAT, false, 2 * GlUtil.SIZEOF_FLOAT, GlUtil.FULL_RECTANGLE_TEX_BUF);
GlUtil.checkGlError("glVertexAttribPointer");
GLES20.glViewport(0, 0, parameters.getPreviewSize().width, parameters.getPreviewSize().height);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
This is my fragment shader code -
public static final String FRAGMENT_SHADER_2D =
"precision mediump float;\n" +
"varying vec2 vTextureCoord;\n" +
"uniform sampler2D sTexture;\n" +
"void main() {\n" +
" gl_FragColor[0] = 1.0;gl_FragColor[1] = 0.0;gl_FragColor[2] = 0.0;gl_FragColor[3] = 1.0;\n" +
"}\n";
These are the coordinate arrays -
private static final float FULL_RECTANGLE_COORDS[] = {
-1.0f, -1.0f, // 0 bottom left
1.0f, -1.0f, // 1 bottom right
-1.0f, 1.0f, // 2 top left
1.0f, 1.0f, // 3 top right
};
private static final float FULL_RECTANGLE_TEX_COORDS[] = {
0.0f, 0.0f, // 0 bottom left
45.0f, 0.0f, // 1 bottom right
0.0f, 1.0f, // 2 top left
56.0f, 344.0f // 3 top right
};

There doesn't seem to be any code for connecting your texture to "sTexture". You are using texture slot 2 for your texture, so you need to tell the shader this:
GLES20.glActiveTexture(GLES20.GL_TEXTURE2); // using texture slot 2
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mainTexture.glName);
int mSamplerLoc = GLES20.glGetUniformLocation(programHandle, "sTexture");
GLES20.glUniform1i(mSamplerLoc, 2); // connect sTexture to texture slot 2
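For completeness, here is a hedged sketch of where that hookup sits in the draw loop from the question (programHandle, mainTexture and GlUtil are the question's own names; mSamplerLoc is introduced only for illustration). Note that the second argument to glUniform1i is the unit index 2, not the GLES20.GL_TEXTURE2 constant:
GLES20.glUseProgram(programHandle);
GLES20.glActiveTexture(GLES20.GL_TEXTURE2); // select texture unit 2
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mainTexture.glName);
int mSamplerLoc = GLES20.glGetUniformLocation(programHandle, "sTexture");
GLES20.glUniform1i(mSamplerLoc, 2); // the unit index, not GL_TEXTURE2
GlUtil.checkGlError("glUniform1i");
// ... then set up aPosition / aTextureCoord and call glDrawArrays as before.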

Related

Vertex shader does not apply second attribute array

Consider a simple game with two classes of objects (ball and wall). Tutorials I've found suggest using a single vertex data array, in a manner like this:
[... initializing ...]
vdata = new float[ENOUGH_FOR_ALL];
vertexData = ByteBuffer.allocateDirect(vdata.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
aPositionLocation = glGetAttribLocation(programId, "a_Position");
glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 0, vertexData);
glEnableVertexAttribArray(aPositionLocation);
[...drawing...]
vertexData.position(0);
vertexData.put(vdata);
glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f); // ball is red
glDrawArrays(GL_TRIANGLE_FAN, 0, BALL_VERTICES + 2);
glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f); // wall is green
glDrawArrays(GL_LINES, BALL_VERTICES + 2, 2); // wall as a single line
and the vertex shader is trivial:
attribute vec4 a_Position;
void main() {
gl_Position = a_Position;
}
This works, but it requires cumbersome calculation of offsets in a single buffer, which all have to move whenever some object's size changes...
Consider the following vertex shader
attribute vec4 a_Ball;
attribute vec4 a_Wall;
uniform float u_WhatDraw;
void main() {
if (u_WhatDraw == 1.0) {
gl_Position = a_Ball;
}
else if (u_WhatDraw == 2.0) {
gl_Position = a_Wall;
}
}
data prepared as:
ball_data = new float[BALL_VERTICES + 2];
wall_data = new float[4]; // a single line
ballVertexData = ByteBuffer.allocateDirect(ball_data.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
wallVertexData = ByteBuffer.allocateDirect(wall_data.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
aBallLocation = glGetAttribLocation(programId, "a_Ball");
glVertexAttribPointer(aBallLocation, 2, GL_FLOAT, false, 0, ballVertexData);
glEnableVertexAttribArray(aBallLocation);
aWallLocation = glGetAttribLocation(programId, "a_Wall");
glVertexAttribPointer(aWallLocation, 2, GL_FLOAT, false, 0, wallVertexData);
glEnableVertexAttribArray(aWallLocation);
[...generation of static wall_data skipped...]
[...drawing...]
ballVertexData.position(0);
ballVertexData.put(ball_data);
glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f); // ball is red
glUniform1f(uWhatDrawLocation, 1.0f);
glDrawArrays(GL_TRIANGLE_FAN, 0, BALL_VERTICES + 2);
glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f); // wall is green
glUniform1f(uWhatDrawLocation, 2.0f);
glDrawArrays(GL_LINES, 0, 2); // wall as a single line
This does not draw the wall at all, even though the ball is drawn.
Please suggest how to fix this with the two-array approach, or explain the limitation that means I should stick with a single array for everything.
This works, but it requires cumbersome calculation of offsets in a single buffer, which all have to move whenever some object's size changes...
If an object's size changes you need to re-upload new mesh data anyway, so computing the offsets seems like the least of the problems.
or explain the limitation that means I should stick with a single array for everything.
What happens when you add a third object, or a fourth, or a fifth?
You're optimizing the wrong problem.
Any time you find yourself writing shaders with if (<insert uniform>) ... you're doing it wrong - never make the GPU make a control-flow decision for every vertex when the application can just make it once.
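For illustration, a minimal sketch of the plainer alternative being recommended: keep the original one-attribute vertex shader and simply re-point a_Position at a different buffer before each draw call (the buffer and location names follow the question's code; this is a sketch, not tested code):
// Ball: point a_Position at the ball buffer, then draw.
ballVertexData.position(0);
glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 0, ballVertexData);
glEnableVertexAttribArray(aPositionLocation);
glUniform4f(uColorLocation, 1.0f, 0.0f, 0.0f, 1.0f); // ball is red
glDrawArrays(GL_TRIANGLE_FAN, 0, BALL_VERTICES + 2);
// Wall: re-point the same attribute at the wall buffer, then draw.
wallVertexData.position(0);
glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 0, wallVertexData);
glUniform4f(uColorLocation, 0.0f, 1.0f, 0.0f, 1.0f); // wall is green
glDrawArrays(GL_LINES, 0, 2);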

OpenGL ES 3.0, cannot draw textured quad

I am trying to modify a sample from the OpenGL ES 3.0 Programming Guide book, but I cannot figure out why I can't draw a textured quad.
Here is the main drawing loop (extracted from the book, with minor modifications):
void DrawTexturedQuad(ESContext *esContext)
{
UserData *userData = static_cast<UserData *>(esContext->userData);
GLfloat vVertices[] =
{
-0.5f, 0.5f, 0.0f, // Position 0
0.0f, 0.0f, // TexCoord 0
-0.5f, -0.5f, 0.0f, // Position 1
0.0f, 1.0f, // TexCoord 1
0.5f, -0.5f, 0.0f, // Position 2
1.0f, 1.0f, // TexCoord 2
0.5f, 0.5f, 0.0f, // Position 3
1.0f, 0.0f // TexCoord 3
};
GLushort indices[] = { 0, 1, 2, 0, 2, 3 };
// Use the program object
glUseProgram ( userData->programObjectMultiTexture );
// Load the MVP matrix
glUniformMatrix4fv(userData->mvpLoc, 1, GL_FALSE,(GLfloat *) &userData->grid.mvpMatrix.m[0][0]);
#define LOCATION_VERT (7)
#define LOCATION_TEXT (8)
// Load the vertex position
glVertexAttribPointer ( LOCATION_VERT, 3, GL_FLOAT,
GL_FALSE, 5 * sizeof ( GLfloat ), vVertices );
// Load the texture coordinate
glVertexAttribPointer ( LOCATION_TEXT, 2, GL_FLOAT,
GL_FALSE, 5 * sizeof ( GLfloat ), &vVertices[3] );
glEnableVertexAttribArray ( LOCATION_VERT );
glEnableVertexAttribArray ( LOCATION_TEXT );
// Bind the base map
glActiveTexture ( GL_TEXTURE0 );
glBindTexture ( GL_TEXTURE_2D, userData->baseMapTexId );
// Set the base map sampler to texture unit 0
glUniform1i ( userData->baseMapLoc, 0 );
// Bind the light map
glActiveTexture ( GL_TEXTURE1 );
glBindTexture ( GL_TEXTURE_2D, userData->lightMapTexId );
// Set the light map sampler to texture unit 1
glUniform1i ( userData->lightMapLoc, 1 );
glDrawElements ( GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices );
glDisableVertexAttribArray ( LOCATION_VERT );
glDisableVertexAttribArray ( LOCATION_TEXT );
}
The vertex shader:
#version 300 es
uniform mat4 u_mvpMatrix;
layout(location = 7) in vec3 a_position;
layout(location = 8) in vec2 a_texCoord;
out vec2 v_texCoord;
void main()
{
//gl_Position = vec4(a_position, 1.f);
gl_Position = u_mvpMatrix * vec4(a_position, 1.f);
v_texCoord = a_texCoord;
}
the Fragment shader:
#version 300 es
precision mediump float;
in vec2 v_texCoord;
layout(location = 0) out vec4 outColor;
uniform sampler2D s_baseMap;
uniform sampler2D s_lightMap;
void main()
{
vec4 baseColor;
vec4 lightColor;
baseColor = texture( s_baseMap, v_texCoord );
lightColor = texture( s_lightMap, v_texCoord );
outColor = baseColor * (lightColor + 0.25);
//outColor = vec4(1.f, 1.f, 1.f, 1.f);
}
I did three tests, but they all look incorrect although the setup and shaders look right to me.
1st test (top window of the picture). The code is as shown above.
2nd test (middle window of the picture). I bypassed the samplers and hard-coded the color in the fragment shader, like so:
outColor = vec4(1.f, 1.f, 1.f, 1.f);
This proved that my matrices are correct, since I see a white quad properly transformed on the screen.
3rd test (bottom window). I bypassed the matrix transform to test whether the samplers are correct. I do see the textured quad rendered, but with no transformation, like so:
// this is so odd, I can see the textures!!!!
// why does the transform mess things up?
gl_Position = vec4(a_position, 1.f);
v_texCoord = a_texCoord;
Anyway, if anyone can spot the issue, I'd really appreciate it. I am using the ARM Mali emulator and there is no way to debug the shader, so I have no idea what is going on inside. The setup looks correct to me, but the pieces don't seem to work together for some reason.

FBO texture copy not working on Android - rendered texture filled with whatever is at texture coord 0, 0

The problem is that the result of the FBO copy is filled with whatever pixel is at texture coordinate 0,0 of the source texture.
If I edit the shader to render a gradient based on texture coordinate position, the fragment shader fills the whole result as if it had texture coordinate 0, 0 fed into it.
If I edit the triangle strip vertices, things behave as expected, so I think the camera and geometry are set up right. It's just that the two-triangle quad is all the same color, when it should reflect either my input texture or at least my position-gradient shader!
I've ported this code nearly line for line from a working iOS example.
This is running alongside Unity3D, so don't assume any GL settings are default, as the engine is likely fiddling with them before my code starts.
Here's the FBO copy operation
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBuffer);
checkGlError("glBindFramebuffer");
GLES20.glViewport(0, 0, TEXTURE_WIDTH*4, TEXTURE_HEIGHT*4);
checkGlError("glViewport");
GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthMask(false);
GLES20.glDisable(GLES20.GL_CULL_FACE);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
GLES20.glPolygonOffset(0.0f, 0.0f);
GLES20.glDisable(GLES20.GL_POLYGON_OFFSET_FILL);
checkGlError("fbo setup");
// Load the shaders if we have not done so
if (mProgram <= 0) {
createProgram();
Log.i(TAG, "InitializeTexture created program with ID: " + mProgram);
if (mProgram <= 0)
Log.e(TAG, "Failed to initialize shaders!");
}
// Set up the program
GLES20.glUseProgram(mProgram);
checkGlError("glUseProgram");
GLES20.glUniform1i(mUniforms[UNIFORM_TEXTURE], 0);
checkGlError("glUniform1i");
// clear the scene
GLES20.glClearColor(0.0f,0.0f, 0.1f, 1.0f);
checkGlError("glClearColor");
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Bind our source texture
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
checkGlError("glActiveTexture");
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mSourceTexture);
checkGlError("glBindTexture");
GLES20.glFrontFace( GLES20.GL_CW );
// Our object to render
ByteBuffer imageVerticesBB = ByteBuffer.allocateDirect(8 * 4);
imageVerticesBB.order(ByteOrder.nativeOrder());
FloatBuffer imageVertices = imageVerticesBB.asFloatBuffer();
imageVertices.put(new float[]{
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f}
);
imageVertices.position(0);
// The object's texture coordinates
ByteBuffer textureCoordinatesBB = ByteBuffer.allocateDirect(8 * 4);
imageVerticesBB.order(ByteOrder.nativeOrder());
FloatBuffer textureCoordinates = textureCoordinatesBB.asFloatBuffer();
textureCoordinates.put(new float[]{
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f}
);
textureCoordinates.position(0);
// Update attribute values.
GLES20.glEnableVertexAttribArray(ATTRIB_VERTEX);
GLES20.glVertexAttribPointer(ATTRIB_VERTEX, 2, GLES20.GL_FLOAT, false, 0, imageVertices);
GLES20.glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
GLES20.glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GLES20.GL_FLOAT, false, 0, textureCoordinates);
// Draw the quad
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
If you want to dive in, I've put up a nice gist with the update loop, setup and shaders here: https://gist.github.com/acgourley/7783624
I'm working on this as an Android port of UnityFBO (MIT license), so all help is appreciated and the result will be shared more broadly.
The declarations of your vertex shader output and fragment shader input do not match for the texture coordinate varying (different precision qualifiers). Ordinarily this would not be an issue, but for reasons discussed below, using highp in your fragment shader may come back to bite you.
Vertex shader:
attribute vec4 position;
attribute mediump vec4 textureCoordinate;
varying mediump vec2 coordinate;
void main()
{
gl_Position = position;
coordinate = textureCoordinate.xy;
}
Fragment shader:
varying highp vec2 coordinate;
uniform sampler2D texture;
void main()
{
gl_FragColor = texture2D(texture, coordinate);
}
In OpenGL ES 2.0 highp is an optional feature in fragment shaders. You should not declare anything highp in a fragment shader unless GL_FRAGMENT_PRECISION_HIGH is defined by the pre-processor.
GLSL ES 1.0 Specification - 4.5.4: Available Precision Qualifiers - pp. 36
The built-in macro GL_FRAGMENT_PRECISION_HIGH is defined to one on systems supporting highp precision in the fragment language
#define GL_FRAGMENT_PRECISION_HIGH 1
and is not defined on systems not supporting highp precision in the fragment language. When defined, this macro is available in both the vertex and fragment languages. The highp qualifier is an optional feature in the fragment language and is not enabled by #extension.
The bottom line is that you need to check whether the fragment language supports highp before declaring anything highp, or rewrite the declaration in your fragment shader to use mediump. I cannot see much reason for arbitrarily increasing the precision of the texture coordinates in the fragment shader; I would expect them to be written as highp in both the vertex and fragment shaders, or kept mediump in both.
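In shader source, that check is usually expressed with the standard preprocessor guard. Written as a Java string constant in the style used elsewhere on this page (the constant name is only illustrative), it might look like:
public static final String FRAGMENT_COORD_DECL =
    "#ifdef GL_FRAGMENT_PRECISION_HIGH\n" +
    "varying highp vec2 coordinate;\n" + // device supports highp in the fragment language
    "#else\n" +
    "varying mediump vec2 coordinate;\n" + // fall back to mediump otherwise
    "#endif\n";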

Scrolling in OpenGL ES 1.1

I'm developing a 2D side-scrolling game. I have a tiled background bitmap of size 2400x480. How can I scroll this bitmap?
I know that I can do it with the following code:
for(int i=0;i<100;i++)
draw(bitmap,2400*i,480);
That would scroll the bitmap across 240000 pixels.
But I don't want to draw parts of the image that are off-screen (the screen is 800x480).
How can I render a scrolling tile? How can I give it a velocity? (I want parallax scrolling once normal scrolling works.)
The following code does not use two quads, but one quad, and adapts the texture coordinates accordingly. Please note that I stripped all texture handling code. The projection matrix solely contains glOrthof(0.0f, 1.0f, 0.0f, 1.0f, -1.0f, 1.0f);. Quad2D defines a quad whose tex coords can be animated using updateTexCoords:
static class Quad2D {
public Quad2D() {
/* SNIP create buffers */
float[] coords = { 1.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 0.0f, 1.0f, };
mFVertexBuffer.put(coords);
float[] texs = { 1.0f, 1.0f, 0.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f, };
mTexBuffer.put(texs);
/* SNIP reposition buffer indices */
}
public void updateTexCoords(float t) {
t = t % 1.0f;
mTexBuffer.put(0, 0.33f+t);
mTexBuffer.put(2, 0.0f+t);
mTexBuffer.put(4, 0.33f+t);
mTexBuffer.put(6, 0.0f+t);
}
public void draw(GL10 gl) {
glVertexPointer(2, GL_FLOAT, 0, mFVertexBuffer);
glTexCoordPointer(2, GL_FLOAT, 0, mTexBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
/* SNIP members */
}
Now how to call updateTexCoords? Inside onDrawFrame, put the following code:
long time = SystemClock.uptimeMillis() % 3000L;
float velocity = 1.0f / 3000.0f;
float offset = time * velocity;
mQuad.updateTexCoords(offset);
Since the texture coordinates go beyond 1.0f, the texture wrap for the S coordinate must be set to GL_REPEAT.
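Setting that wrap mode is a one-time texture setup call; a minimal sketch in the same GL10 style (textureId stands in for whatever texture name the background bitmap was uploaded to):
gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
// Let S coordinates outside [0, 1] wrap around instead of clamping.
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);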
If you are OK working with shaders, there is a cleaner and possibly more efficient way of doing this.
Create a VBO with a quad covering the whole screen. The width and height should be 800 and 480 respectively.
Then create your vertex shader as:
attribute vec4 vPos;
uniform float shift;
varying mediump vec2 fTexCoord;
void main(void){
gl_Position = vPos;
fTexCoord = vec2((vPos.x - shift) / 800.0, vPos.y / 480.0); // this logic might change slightly based on which quadrant you choose
}
The fragment shader would be something like:
precision mediump float;
uniform sampler2D texture;
varying mediump vec2 fTexCoord;
void main(void){
gl_FragColor = texture2D(texture, fTexCoord);
}
and you are done.
To scroll, just update the shift uniform each frame using the glUniform*() APIs.
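For example, assuming the float shift uniform from the shader above (uShiftLoc and scrollOffset are illustrative names, not part of the answer's code):
int uShiftLoc = GLES20.glGetUniformLocation(program, "shift"); // look up once after linking
GLES20.glUseProgram(program);
GLES20.glUniform1f(uShiftLoc, scrollOffset); // e.g. advance scrollOffset by velocity * dt each frame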

In Android OpenGL ES 1.1, how to draw a texture on both sides of a triangle?

I am trying to draw a texture on both sides of a triangle in Android. I have the code below:
float textureCoordinates[] = { 0.0f, 0.0f, //
0.0f, 1.0f, //
1.0f, 0.0f
};
short[] indices = new short[] { 0, 1, 2 };
float[] vertices = new float[] { -0.5f, 0.5f, 0.0f, // p0
-0.5f, -0.5f, 0.0f, // p1
0.5f, 0.5f, 0.0f, // p2
};
With this I am able to get the triangle with the texture. But when I rotate it around the y axis, after a 180-degree rotation I see the same image on the other side of the triangle. I would like to use a different texture on the other side. Please let me know how to do it.
This is not achievable with just a single function call. When using fragment shaders (ES 2) you can query the built-in fragment shader input variable gl_FrontFacing to determine whether the current fragment belongs to the front or the back of its triangle. So you would write something like this:
uniform sampler2D frontTexture;
uniform sampler2D backTexture;
varying vec2 texCoord;
void main()
{
if(gl_FrontFacing)
gl_FragColor = texture2D(frontTexture, texCoord);
else
gl_FragColor = texture2D(backTexture, texCoord);
}
When using the fixed-function pipeline (ES 1) you can't avoid rendering two versions of your object, one with the front texture and the other with the back texture. You can use face culling to prevent depth-fighting problems. So it would be something like this:
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK); //render only front faces
glBindTexture(GL_TEXTURE_2D, frontTexture);
<draw object>
glCullFace(GL_FRONT); //render only back faces
glBindTexture(GL_TEXTURE_2D, backTexture);
<draw object>
As I'm not sure how well modern ES devices handle branching in the shader (although in this case all fragments of a triangle should take the same path), it may be that the second approach, together with a simple one-texture fragment shader, is the preferable solution for ES 2 as well, performance-wise.
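In Android ES 1.1 terms, the two-pass approach might look roughly like this (drawTriangle() and the two texture ids are placeholders for your own draw code):
gl.glEnable(GL10.GL_CULL_FACE);
gl.glCullFace(GL10.GL_BACK); // pass 1: front faces only
gl.glBindTexture(GL10.GL_TEXTURE_2D, frontTextureId);
drawTriangle(gl);
gl.glCullFace(GL10.GL_FRONT); // pass 2: back faces only
gl.glBindTexture(GL10.GL_TEXTURE_2D, backTextureId);
drawTriangle(gl);
gl.glDisable(GL10.GL_CULL_FACE);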
