OpenGL ES shader with two textures - Android

I'm doing some experiments with OpenGL ES on Android.
I'm trying to write a shader that has two uniform sampler variables pointing at two textures:
one containing the current frame, and the other containing what was drawn the frame before.
They're created on the Java side like this:
texturenames = new int[2];
GLES20.glGenTextures(2, texturenames, 0);
// Bind texture to texturename
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texturenames[0]);
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texturenames[1]);
They are then passed to the shader as uniforms like this:
int location = GLES20.glGetUniformLocation (ShaderTools.program, "currentTexture" );
GLES20.glUniform1i ( location, 0 );
location = GLES20.glGetUniformLocation (ShaderTools.program, "prevFrameTexture" );
GLES20.glUniform1i ( location, 1 );
This is the content of the fragment shader:
precision mediump float;
varying vec2 v_TexCoordinate;
uniform sampler2D currentTexture;
uniform sampler2D prevFrameTexture;
void main() {
    gl_FragColor = (texture2D(currentTexture, v_TexCoordinate) +
                    texture2D(prevFrameTexture, v_TexCoordinate)) / 2.0;
}
What I want to achieve is a sort of blur effect that is the average of the current and previous frames.
Is it possible to update prevFrameTexture directly in shader code? I didn't find any way to do this.
As an alternative, how should I tackle this problem?
Should I copy the content of currentTexture into prevFrameTexture on the Java side?
I tried to draw TEXTURE0 and TEXTURE1 alternately in onDrawFrame, but it doesn't work: using glActiveTexture to swap from one to the other doesn't work inside that callback.

Yes, it is possible. Use render-to-texture (RTT).
You can attach a texture to an FBO and render into it, so for your case you should make two FBOs.
An example of creating an RTT setup is below:
glGenFramebuffers(1, &fbo[object_id]);
glBindFramebuffer(GL_FRAMEBUFFER, fbo[object_id]);
glGenRenderbuffers(1, &rboColor[object_id]);
glBindRenderbuffer(GL_RENDERBUFFER, rboColor[object_id]);
Right after that, create a texture and attach it to the FBO's color attachment, as below:
glGenTextures(1, &texture[texture_id].texture_id);
glBindTexture(GL_TEXTURE_2D, texture[texture_id].texture_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texture_width, texture_height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture[texture_id].texture_id, 0);
Once you have the RTT textures, you can update them each frame by rendering into the corresponding framebuffer (ping-ponging between the two).
https://github.com/sunglab/StarEngine/blob/master/renderer/StarFBO.cpp
void StarFBO::createFBO(...)
https://github.com/sunglab/StarEngine/blob/master/renderer/StarTexture.cpp
void StarTexture::createTEXTURE_RTT(...)
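Since the question is written against the Java GLES20 bindings, here is a rough sketch of the same setup on Android; treat it as an outline (texSize and the fboIds/fboTextures names are placeholders, not from the question):
int texSize = 512; // assumed render target size; in practice use your surface size
int[] fboIds = new int[2];
int[] fboTextures = new int[2];
GLES20.glGenFramebuffers(2, fboIds, 0);
GLES20.glGenTextures(2, fboTextures, 0);
for (int i = 0; i < 2; i++) {
    // allocate an empty RGBA texture that the FBO will render into
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTextures[i]);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, texSize, texSize,
            0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    // attach the texture as the FBO's color attachment
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboIds[i]);
    GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
            GLES20.GL_TEXTURE_2D, fboTextures[i], 0);
}
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0); // back to the default framebuffer
Each frame, render into one FBO while the shader samples the other FBO's texture as prevFrameTexture, then swap the two roles; to show the result on screen, also draw the freshly rendered texture to the default framebuffer.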

Related

Random disappearing part of rendered image

I am creating a simple triangle strip to cover the whole viewport with a single rectangle. Then I am applying a 100x100 texture to this surface, which changes with every frame.
I set up the viewport and initialise my vertexBuffer etc. in the onSurfaceChanged method of my GLSurfaceView class, then call my rendering function from onDrawFrame.
This setup works as it should, but on random occasions only the lower-right quarter of my rectangle gets rendered; the other three quarters of the canvas are filled with the background color! It doesn't happen every time, and the anomaly disappears after rotating the device back and forth (I guess because everything gets reset in onSurfaceChanged).
I have tried re-uploading all vertices on every frame update with GLES20.glBufferData, which seems to get rid of this bug, although it might be that I just wasn't patient enough to observe it happening (as it is quite unpredictable). It's a very simple triangle strip, so I don't think it consumes a lot of time, but it just feels like bad practice to upload data 60 times per second when it isn't changing at all!
//called from onSurfaceChanged
private fun initGL (side: Int) {
    /*======== Defining and storing the geometry ===========*/
    //vertices for TRIANGLE STRIP
    val verticesData = floatArrayOf(
        -1.0f, 1.0f,  //LU
        -1.0f,-1.0f,  //LL
         1.0f, 1.0f,  //RU
         1.0f,-1.0f   //RL
    )
    //float : 32 bit -> 4 bytes
    val vertexBuffer : FloatBuffer = ByteBuffer.allocateDirect(verticesData.size * 4)
        .order(ByteOrder.nativeOrder()).asFloatBuffer()
    vertexBuffer.put(verticesData).position(0)
    val buffers = IntArray(1)
    GLES20.glGenBuffers(1, buffers, 0)
    vertexBufferId = buffers[0]
    //upload vertices to GPU
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferId)
    GLES20.glBufferData(GLES20.GL_ARRAY_BUFFER,
        vertexBuffer.capacity() * 4, // 4 = bytes per float
        vertexBuffer,
        GLES20.GL_STATIC_DRAW)
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0)
    /*================ Shaders ====================*/
    // Vertex shader source code
    val vertCode =
        """
        attribute vec4 aPosition;
        void main(void) {
            gl_Position = aPosition;
        }
        """
    val fragCode =
        """
        precision mediump float;
        varying vec2 vCoord;
        uniform sampler2D u_tex;
        void main(void) {
            //1-Y, because we need to flip the Y-axis!!
            vec4 color = texture2D(u_tex, vec2(gl_FragCoord.x/$side.0, 1.0-(gl_FragCoord.y/$side.0)));
            gl_FragColor = color;
        }
        """
    // Create a shader program object to store
    // the combined shader program
    val shaderProgram = createProgram(vertCode, fragCode)
    // Use the combined shader program object
    GLES20.glUseProgram(shaderProgram)
    val vertexCoordLocation = GLES20.glGetAttribLocation(shaderProgram, "aPosition")
    GLES20.glVertexAttribPointer(vertexCoordLocation, 2, GLES20.GL_FLOAT, false, 0, vertexBuffer)
    GLES20.glEnableVertexAttribArray(vertexCoordLocation)
    //set ClearColor
    GLES20.glClearColor(1f, 0.5f, 0.5f, 0.9f)
    //setup a texture buffer array
    val texArray = IntArray(1)
    GLES20.glGenTextures(1, texArray, 0)
    textureId = texArray[0]
    if (texArray[0] == 0) Log.e(TAG, "Error with Texture!")
    else Log.e(TAG, "Texture id $textureId created!")
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0)
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST)
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST)
    GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1)
}
//called from onDrawFrame
private fun updateGLCanvas (matrix: ByteArray, side: Int) {
    //create ByteBuffer from updated texture matrix
    val textureImageBuffer : ByteBuffer = ByteBuffer.allocateDirect(matrix.size * 1)//Byte = 1 Byte
        .order(ByteOrder.nativeOrder())//.asFloatBuffer()
    textureImageBuffer.put(matrix).position(0)
    //do I need to bind the texture in every frame?? I am desperate XD
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId)
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, side, side, 0, GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, textureImageBuffer)
    //bind vertex buffer
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferId)
    // Clear the color buffer bit
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
    //draw from vertex buffer
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4)
    //unbind vertex buffer
    GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0)
}
There are no error messages and most of the time the code is behaving as it should ... which makes this a bit difficult to track ...
If you want to use a vertex buffer object, then that buffer has to be currently bound to the target GLES20.GL_ARRAY_BUFFER when the array of generic vertex attribute data is specified by glVertexAttribPointer. The vertex attribute specification then refers to this buffer.
In this case the last parameter of glVertexAttribPointer is treated as a byte offset into the buffer object's data store.
In your case this means the last parameter has to be 0.
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferId)
GLES20.glVertexAttribPointer(vertexCoordLocation, 2, GLES20.GL_FLOAT, false, 0, 0)
Note: if no named buffer object is bound (i.e. zero is bound), then the last parameter is a pointer to client-side buffer memory, and that buffer is read every time a draw call is performed.
In your implementation, the data which was uploaded to the GPU is never used, because it isn't referenced by the vertex array specification.
See also Vertex Specification.
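Applied to the initGL code from the question, this means keeping the VBO bound while the attribute is specified; a sketch of just the relevant lines (the surrounding names are taken from the question):
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vertexBufferId)
val vertexCoordLocation = GLES20.glGetAttribLocation(shaderProgram, "aPosition")
// the last argument is now a byte offset into the bound VBO, not a client-side buffer
GLES20.glVertexAttribPointer(vertexCoordLocation, 2, GLES20.GL_FLOAT, false, 0, 0)
GLES20.glEnableVertexAttribArray(vertexCoordLocation)
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0)
The attribute keeps referring to vertexBufferId after the unbind, so onDrawFrame does not need to re-upload the vertex data; the existing glDrawArrays call reads it from the GPU-side buffer.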

Multiple texture units GLES 2.0 Android

Trying to get 2 texture units working.
@Override
public void onDrawFrame(GL10 gl) {
    //Clear the Rendering Surface
    glClear(GL_COLOR_BUFFER_BIT);
    multiplyMM(viewProjectionMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
    positionTableInScene();
    textureProgram.useProgram();
    textureProgram.setUniforms(modelViewProjectionMatrix, texture_b, texture_r);
    table.bindData(textureProgram);
    table.draw();
    ...
}
Texture Program:
public class TextureShaderProgram extends ShaderProgram {
    //Uniform locations
    private final int uMatrixLocation;
    private final int uTextureUnit0Location;
    private final int uTextureUnit1Location;

    //Attribute locations
    private final int aPositionLocation;
    private final int aTextureCoordinatesLocation;

    public TextureShaderProgram(Context context){
        super(context, R.raw.texture_vertex_shader, R.raw.texture_fragment_shader);
        //Retrieve uniform locations for the shader program.
        uMatrixLocation = glGetUniformLocation(program, U_MATRIX);
        uTextureUnit0Location = glGetUniformLocation(program, U_TEXTURE_UNIT_0);
        uTextureUnit1Location = glGetUniformLocation(program, U_TEXTURE_UNIT_1);
        //Retrieve attribute locations for the shader program.
        aPositionLocation = glGetAttribLocation(program, A_POSITION);
        aTextureCoordinatesLocation = glGetAttribLocation(program, A_TEXTURE_COORDINATES);
    }

    public void setUniforms(float[] matrix, int textureId, int textureId2){
        //Pass the matrix into the shader program
        glUniformMatrix4fv(uMatrixLocation, 1, false, matrix, 0);
        //Set the active texture unit to texture unit 0.
        glActiveTexture(GL_TEXTURE0);
        //Bind the texture to this unit.
        glBindTexture(GL_TEXTURE_2D, textureId);
        //Tell the texture uniform sampler to use this texture in the shader by
        //telling it to read from texture unit 0.
        glUniform1f(uTextureUnit0Location, 0);
        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, textureId2);
        glUniform1f(uTextureUnit1Location, 0);
    }

    public int getPositionAttributeLocation(){
        return aPositionLocation;
    }

    public int getTextureCoordinatesAttributeLocation(){
        return aTextureCoordinatesLocation;
    }
}
Fragment Shader:
precision mediump float;
uniform sampler2D u_TextureUnit0;
uniform sampler2D u_TextureUnit1;
varying vec2 v_TextureCoordinates;
void main()
{
    //gl_FragColor = texture2D(u_TextureUnit0, v_TextureCoordinates);
    gl_FragColor = (v_TextureCoordinates.y > 0.5)
        ? texture2D(u_TextureUnit1, v_TextureCoordinates)
        : texture2D(u_TextureUnit0, v_TextureCoordinates);
}
This did not give me the desired outcome of half of one texture and half of the other. I found out this was because both of my texture units held the same image - always the image loaded into texture unit 0. Changing which texture is loaded first changes the image, so I know both textures work.
I think it's the way I'm informing OpenGL of where my texture units are, but I'm not sure how to change it.
Two problems. First of all, you're setting both uniform variables to the same value:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
glUniform1f(uTextureUnit0Location, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureId2);
glUniform1f(uTextureUnit1Location, 0);
The value you're setting for the uniform variable must match the index of the texture unit the corresponding texture is bound to. Since the second texture is bound to GL_TEXTURE1, the second uniform value must be 1, not 0.
Also, uniform values for samplers must be set with glUniform1i(), not glUniform1f().
So the correct code looks like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureId);
glUniform1i(uTextureUnit0Location, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, textureId2);
glUniform1i(uTextureUnit1Location, 1);
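A usage note beyond the original answer: the value passed to glUniform1i is the texture unit index - the same number you add to GL_TEXTURE0 when activating the unit - and not the texture object id returned by glGenTextures. A small hypothetical helper makes the pattern explicit:
//Hypothetical helper: bind textureId to the given unit and point the sampler at that unit.
private static void bindTextureToUnit(int samplerLocation, int unit, int textureId) {
    glActiveTexture(GL_TEXTURE0 + unit);    //GL_TEXTURE0, GL_TEXTURE1, ...
    glBindTexture(GL_TEXTURE_2D, textureId);
    glUniform1i(samplerLocation, unit);     //unit index, not the texture object id
}
With that, setUniforms reduces to bindTextureToUnit(uTextureUnit0Location, 0, textureId) and bindTextureToUnit(uTextureUnit1Location, 1, textureId2).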

FBO texture copy not working on Android - rendered texture filled with whatever is at texture coord 0, 0

The problem is that the result of the FBO copy is filled with whatever pixel is at texture coordinate 0,0 of the source texture.
If I edit the shader to render a gradient based on texture coordinate position, the fragment shader fills the whole result as if it had texture coordinate 0, 0 fed into it.
If I edit the triangle strip vertices, things behave as expected, so I think the camera and geometry is setup right. It's just that the 2-tri quad is all the same color when it should reflect either my input texture or at least my position-gradient shaders!
I've ported this code nearly line for line from a working iOS example.
This is running alongside Unity3D, so don't assume any GL settings are default, as the engine is likely fiddling with them before my code starts.
Here's the FBO copy operation
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBuffer);
checkGlError("glBindFramebuffer");
GLES20.glViewport(0, 0, TEXTURE_WIDTH*4, TEXTURE_HEIGHT*4);
checkGlError("glViewport");
GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthMask(false);
GLES20.glDisable(GLES20.GL_CULL_FACE);
GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, 0);
GLES20.glBindBuffer(GLES20.GL_ELEMENT_ARRAY_BUFFER, 0);
GLES20.glPolygonOffset(0.0f, 0.0f);
GLES20.glDisable(GLES20.GL_POLYGON_OFFSET_FILL);
checkGlError("fbo setup");
// Load the shaders if we have not done so
if (mProgram <= 0) {
    createProgram();
    Log.i(TAG, "InitializeTexture created program with ID: " + mProgram);
    if (mProgram <= 0)
        Log.e(TAG, "Failed to initialize shaders!");
}
// Set up the program
GLES20.glUseProgram(mProgram);
checkGlError("glUseProgram");
GLES20.glUniform1i(mUniforms[UNIFORM_TEXTURE], 0);
checkGlError("glUniform1i");
// clear the scene
GLES20.glClearColor(0.0f,0.0f, 0.1f, 1.0f);
checkGlError("glClearColor");
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
// Bind our source texture
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
checkGlError("glActiveTexture");
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mSourceTexture);
checkGlError("glBindTexture");
GLES20.glFrontFace( GLES20.GL_CW );
// Our object to render
ByteBuffer imageVerticesBB = ByteBuffer.allocateDirect(8 * 4);
imageVerticesBB.order(ByteOrder.nativeOrder());
FloatBuffer imageVertices = imageVerticesBB.asFloatBuffer();
imageVertices.put(new float[]{
-1.0f, -1.0f,
1.0f, -1.0f,
-1.0f, 1.0f,
1.0f, 1.0f}
);
imageVertices.position(0);
// The object's texture coordinates
ByteBuffer textureCoordinatesBB = ByteBuffer.allocateDirect(8 * 4);
imageVerticesBB.order(ByteOrder.nativeOrder());
FloatBuffer textureCoordinates = textureCoordinatesBB.asFloatBuffer();
textureCoordinates.put(new float[]{
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f}
);
textureCoordinates.position(0);
// Update attribute values.
GLES20.glEnableVertexAttribArray(ATTRIB_VERTEX);
GLES20.glVertexAttribPointer(ATTRIB_VERTEX, 2, GLES20.GL_FLOAT, false, 0, imageVertices);
GLES20.glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
GLES20.glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GLES20.GL_FLOAT, false, 0, textureCoordinates);
// Draw the quad
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
If you want to dive in, I've put up a nice gist with the update loop, setup and shaders here: https://gist.github.com/acgourley/7783624
I'm working on this as an Android port of UnityFBO (MIT license), so all help is both appreciated and will be shared more broadly.
The declarations of your vertex shader output and fragment shader input do not match for the texture coordinate varying (different precision qualifiers). Ordinarily this would not be an issue, but for reasons I will discuss below, using highp in your fragment shader may come back to bite you.
Vertex shader:
attribute vec4 position;
attribute mediump vec4 textureCoordinate;
varying mediump vec2 coordinate;
void main()
{
    gl_Position = position;
    coordinate = textureCoordinate.xy;
}
Fragment shader:
varying highp vec2 coordinate;
uniform sampler2D texture;
void main()
{
    gl_FragColor = texture2D(texture, coordinate);
}
In OpenGL ES 2.0 highp is an optional feature in fragment shaders. You should not declare anything highp in a fragment shader unless GL_FRAGMENT_PRECISION_HIGH is defined by the pre-processor.
GLSL ES 1.0 Specification - 4.5.4: Available Precision Qualifiers - pp. 36
The built-in macro GL_FRAGMENT_PRECISION_HIGH is defined to one on systems supporting highp precision in the fragment language
#define GL_FRAGMENT_PRECISION_HIGH 1
and is not defined on systems not supporting highp precision in the fragment language. When defined, this macro is available in both the vertex and fragment languages. The highp qualifier is an optional feature in the fragment language and is not enabled by #extension.
The bottom line is that you need to check whether the fragment shader supports highp precision before declaring something highp, or re-write the declaration in the fragment shader to use mediump. I cannot see much reason for arbitrarily increasing the precision of the vertex shader coordinates in the fragment shader; I would honestly expect it to be written as highp in both the vertex and fragment shaders, or kept mediump in both.
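One way to act on that advice (a sketch, not taken from the original answer) is to guard the declaration with the macro, so the varying degrades to mediump on devices without highp support in the fragment language:
#ifdef GL_FRAGMENT_PRECISION_HIGH
varying highp vec2 coordinate;
#else
varying mediump vec2 coordinate;
#endif
uniform sampler2D texture;
void main()
{
    gl_FragColor = texture2D(texture, coordinate);
}
Alternatively, simply declare the varying mediump in both shaders, matching the vertex shader above.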

Andengine set alpha for each pixel in sprite

I have one big Sprite on the Scene - for example 200x200 - and in the app I have an array[200][200] in which I store 0 or 1 for each pixel of the big sprite.
I want to draw one more textured sprite (for example 10x10) above the existing one, but for each pixel of the new sprite I want to decide whether it needs to be drawn, depending on the provided array (if the corresponding position of the pixel in the array is '1' I need to draw the pixel; if it is '0' I don't want to draw it (maybe set alpha = 0)).
I think I can use a fragment shader for each of the new sprites, but I can't understand how to provide the array data to the shader to calculate the color of each pixel.
I think I could also use a fragment shader for the whole scene (if I render to a texture).
I am quite new to OpenGL and can't figure out which way to go.
When I create resources for the scene, I try to create my mask:
mask = new float[512*512*4];
for (int i = 0; i < mask.length; i++)
{
    mask[i] = 2f;
}
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 1029384756);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 1, GLES20.GL_RGBA, 512, 512, 0, GLES20.GL_RGBA, GLES20.GL_FLOAT, FloatBuffer.wrap(mask));
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
Then, when I draw the new item on the scene, I use the shader:
setShaderProgram(ShaderProgram.getInstance());
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glUniform1i(RadialBlurExample.RadialBlurShaderProgram.sUniformMask, GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 1029384756);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
But I can't see the new item on the scene (maskVal is < 0.5).
I've tried to find a working way to pass the array as a texture, but I can't find one.
Upload your array as a second texture with the same dimensions as the sprite, and then, when you draw the sprite, sample the second texture at the same texture coordinate.
If the sampled value doesn't meet the mask criterion, discard the fragment:
precision mediump float;
uniform sampler2D sprite;
uniform sampler2D mask;
varying vec2 uv;
void main() {
    float maskVal = texture2D(mask, uv).r;
    if (maskVal > 0.5) {
        gl_FragColor = texture2D(sprite, uv);
    } else {
        discard;
    }
}
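A note on the upload side, beyond what the answer covers (so treat this as a sketch): ES 2.0 does not accept GL_FLOAT pixel data for a plain texture without an extension, the level argument of glTexImage2D should be 0, texture names should come from glGenTextures, and the sampler uniform expects the unit index (1), not the GL_TEXTURE1 constant. Assuming the mask is stored as one byte (0 or 255) per texel in maskBytes:
ByteBuffer maskBuffer = ByteBuffer.allocateDirect(maskBytes.length)
        .order(ByteOrder.nativeOrder());
maskBuffer.put(maskBytes).position(0);
int[] maskTex = new int[1];
GLES20.glGenTextures(1, maskTex, 0);                  // let GL pick the texture name
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, maskTex[0]);
GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);
// mip level 0, one luminance byte per texel
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, 512, 512, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, maskBuffer);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
// later, while the shader program is in use: sampler uniforms take the unit index
GLES20.glUniform1i(RadialBlurExample.RadialBlurShaderProgram.sUniformMask, 1);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
A luminance texture shows up as (L, L, L, 1) in the shader, so the .r read in the fragment shader above works unchanged.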

Texture only shows up as black

NOTE: I've updated this code now to the working form and provided everything I attempted in an answer. Hopefully it helps somebody else with the same problem.
My texture is being shown as black (that is, no texture). I've gone through several other questions here with the same problem, but could not find a solution. I'm sure I'm missing something quite simple (likely ordering), but can't figure it out.
I setup my texture like this (GLProgram.checkError checks for GL errors and logs them -- I get no errors anywhere):
/*Bitmap*/ bitmap = BitmapFactory.decodeResource( context.getResources(),
R.drawable.gears );
int textures[] = new int[1];
GLES20.glGenTextures( 1, textures, 0 );
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
GLProgram.checkError( "texImage2D" );
texture = textures[0];
To draw a square which should be textured I do this:
GLES20.glVertexAttribPointer(glProgram.hATex, 2, GLES20.GL_FLOAT, false,
2*SIZEOF_FLOAT, texBuffer.under);
GLProgram.checkError( "hATex" );
GLES20.glActiveTexture(GLES20.GL_TEXTURE0 );
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
GLES20.glUniform1i(glProgram.hUTex, 0);
GLProgram.checkError( "Uniform" );
GLES20.glVertexAttribPointer( glProgram.hAttribPosition, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer.under );
GLProgram.checkError( "Vertex" );
GLES20.glDrawArrays( GLES20.GL_TRIANGLE_FAN, 0, vertexBuffer.size/COORDS_PER_VERTEX );
GLProgram.checkError( "draw" );
My vertex shader is:
precision mediump float;
attribute vec4 vPosition;
uniform mat4 mTransform;
attribute vec2 aTex;
varying mediump vec2 vTex;
void main() {
    gl_Position = mTransform * vPosition;
    vTex = aTex;
}
My fragment shader is:
precision mediump float;
uniform vec4 vColor;
uniform sampler2D uTex;
varying mediump vec2 vTex;
void main(void)
{
    //gl_FragColor = vColor;
    //testing vTex, and it is fine
    //gl_FragColor = vec4( vTex[0], vTex[1], 0, 1.0 );
    //so it must be uTex that is the problem
    gl_FragColor = texture2D(uTex, vTex);
}
I left in the commented bits to show what I checked. My vTex parameter is correct, since that bit produces the expected red/green color sweep. So I assume it must be the texture itself.
Also, uTex, aTex are located via:
hATex = GLES20.glGetAttribLocation( hProgram, "aTex" );
checkError( "aTex" );
hUTex = GLES20.glGetUniformLocation( hProgram, "uTex" );
checkError( "uTex" );
My texture comes from a JPG and is 64x64 in size. I checked it just after loading: it has the correct size and does appear to have pixel colors (a few dumped at random were non-zero).
The code, as presented, now works. I modified it as I tried things and in response to the comments, and I can't be sure at which step it actually started working, since it didn't work before. Here are some of the things I checked/double-checked in the process -- I presume it has to be a combination of these somehow:
Verify the source image is square and a power of two in size
  non-square works as long as it is a power of 2 in both dimensions
  non-power-of-2 works as long as TEXTURE_WRAP is set to CLAMP_TO_EDGE
Set Min/Mag filters to NEAREST
Call bindTexture during setup and in each draw
These are things I tried, and tried again now, and they appear to make no difference (that is, it works either way):
use bitmap options.inScaled = false (using default options works fine)
put texImage2d before/after the glTexParameter functions
add/remove mediump from vTex (mismatched works fine, probably because default)
not calling glEnableVertexAttribArray (this results in a white box, so it wasn't my problem)
changing order of vertices and texture coords (all orders work once other things are correct -- texture may be skewed, but it does appear)
changing resource format (JPG/PNG) (RGB/Grayscale)
changing object transform matrix
TEXTURE_WRAP settings (not needed in this case, works without)
In the case when it wasn't working the error was silent: calls to glGetError returned okay and glGetProgramInfoLog was empty.
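For anyone skimming, the checklist above boils down to roughly this setup/draw split (a sketch reusing the identifiers from the question):
// setup, once: CLAMP_TO_EDGE + NEAREST also makes non-power-of-two sizes safe in ES 2.0
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

// every draw: re-bind the texture on unit 0 and point the sampler at unit 0
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
GLES20.glUniform1i(glProgram.hUTex, 0);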
