Sorry for the broad question, but I didn't know quite how to word it. I am creating an app that manipulates the camera's pixels. I am new to OpenGL, and my problem could be in how I link textures to the shaders, or somewhere in my actual shader code.
I have an RGB look-up table that I turn into a texture and pass into the shader to use as the manipulation table. I believe my texture has the proper size and settings, but I am not 100% sure. In my shader I have this:
uniform sampler2D data_Texture; // The RGB look-up table texture
uniform samplerExternalOES u_Texture; // The camera's texture
And this is in my shader's main() function:
// Color changing Algorithm
vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
gl_FragColor = vec4(texel.x, texel.y, texel.z, 1.0);
float rr = gl_FragColor.r * 255.0;
float gg = gl_FragColor.g * 255.0;
float bb = gl_FragColor.b * 255.0;
int r = int(rr);
int g = int(gg);
int b = int(bb);
int index = ((r/4) * 4096) + ((g/4) * 64) + (b/4);
int x = int(mod(float(index), 512.0));
int y = index / 512;
vec4 data = texture2D(data_Texture, vec2(float(x)/512.0, float(y)/512.0));
We take the camera's RGB pixel to compute an index into the look-up table, then try to get the RGB data out of the look-up table to replace the camera's pixel with. This is where the problem occurs. As you can probably tell from the code above, we don't actually change gl_FragColor with our data. That is because, while testing, we found an interesting occurrence. When we comment out the last line in main(),
//vec4 data = texture2D(data_Texture, vec2(float(x)/512.0, float(y)/512.0));
the camera just displays as normal, because we don't do any manipulation of the actual gl_FragColor. But when we leave the last line in, the pixels turn green for dark colors and pink/orange for light colors.
Why does filling this data variable, without explicitly changing gl_FragColor, change the camera's pixels?
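One thing worth checking is the sampler-to-unit wiring: if neither sampler uniform is explicitly set, both default to texture unit 0, and two samplers of different types pointing at the same texture unit is invalid in OpenGL ES and can produce exactly this kind of unexpected output. Below is a hedged sketch (not the asker's actual code; all class, method and variable names are hypothetical) of how the 512x512 LUT could be uploaded and the two samplers wired to separate units:
import java.nio.ByteBuffer;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

public final class LutTextureSetup {
    // Uploads a 64x64x64 RGB LUT packed into a 512x512 texture and wires both
    // samplers of the given program to distinct texture units.
    public static int uploadLutAndBindSamplers(int programId, int cameraTexId, ByteBuffer lutRgbBytes) {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE1);              // unit 1 for the LUT
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_NEAREST);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        // 512*512 texels, 3 bytes each; index = (r/4)*4096 + (g/4)*64 + (b/4),
        // x = index % 512, y = index / 512, matching the shader above.
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGB, 512, 512, 0,
                GLES20.GL_RGB, GLES20.GL_UNSIGNED_BYTE, lutRgbBytes);

        // Camera texture on unit 0, LUT on unit 1; the sampler uniforms must match those units.
        GLES20.glUseProgram(programId);
        GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, cameraTexId);
        GLES20.glUniform1i(GLES20.glGetUniformLocation(programId, "u_Texture"), 0);
        GLES20.glUniform1i(GLES20.glGetUniformLocation(programId, "data_Texture"), 1);
        return tex[0];
    }
}
With this wiring, texture2D(data_Texture, ...) reads from the LUT on unit 1 while the camera frame stays on unit 0.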
I'm working on an AR app and trying to replace the plane texture.
I'm trying to render a texture on vertical and horizontal planes. It works fine for horizontal planes, but doesn't work well on vertical ones.
I found that something is wrong with the texture_coord calculations, but I can't figure it out (I'm new to OpenGL).
Here is my vertex shader
void main()
{
vec4 local_pos = vec4(a_position, 1.0);
vec4 world_pos = u_model * local_pos;
texture_coord = world_pos.sp * u_scale;
gl_Position = u_mvp * local_pos;
}
And my fragment shader:
out vec4 outColor;
void main()
{
vec4 control = texture(u_texture, diffuse_coord);
float dotScale = 1.0;
float lineFade = 0.5;
vec3 newColor = (control.r * dotScale > u_gridControl.x) ? u_dotColor.rgb : control.g > u_gridControl.y ? u_lineColor.rgb * lineFade: u_lineColor.rgb * 0.25 * lineFade;
outColor = vec4(newColor, 1.0);
}
The important bit is texture_coord = world_pos.sp in your vertex shader.
There are 3 ways to refer to the components of a vector in GLSL. xyzw (the most common), rgba (more natural for colours), stpq (more natural for texture coordinates).
The line texture_coord = world_pos.sp would be clearer if it were written as texture_coord = world_pos.xz.
Once you realize that you're generating texture coordinates by ignoring the y-component, it's obvious why vertical planes are not textured the way you would like.
Unfortunately there's no simple one-line fix. Perhaps tri-planar texturing might be an appropriate solution for you - this seems to be a good explanation of the technique.
The main texture of my surface shader is a Google Maps image tile.
I want to replace pixels that are close to a specified color with that from a separate texture. What is working now is the following:
Shader "MyShader"
{
Properties
{
_MainTex("Base (RGB) Trans (A)", 2D) = "white" {}
_GrassTexture("Grass Texture", 2D) = "white" {}
_RoadTexture("Road Texture", 2D) = "white" {}
_WaterTexture("Water Texture", 2D) = "white" {}
}
SubShader
{
Tags{ "Queue" = "Transparent-1" "IgnoreProjector" = "True" "ForceNoShadowCasting" = "True" "RenderType" = "Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert alpha approxview halfasview noforwardadd nometa
uniform sampler2D _MainTex;
uniform sampler2D _GrassTexture;
uniform sampler2D _RoadTexture;
uniform sampler2D _WaterTexture;
struct Input
{
float2 uv_MainTex;
};
void surf(Input IN, inout SurfaceOutput o)
{
fixed4 ct = tex2D(_MainTex, IN.uv_MainTex);
// if the red (or blue) channel of the pixel is within a
// specific range, get either a 1 or a 0 (true/false).
int grassCond = int(ct.r >= 0.45) * int(0.46 >= ct.r);
int waterCond = int(ct.r >= 0.14) * int(0.15 >= ct.r);
int roadCond = int(ct.b >= 0.23) * int(0.24 >= ct.b);
// if none of the above conditions is a 1, then we want to keep our
// current pixel's color:
half defaultCond = 1 - grassCond - waterCond - roadCond;
// get the pixel from each texture, multiply by its check condition
// to get:
// fixed4(0,0,0,0) if this isn't the right texture for this pixel
// or fixed4(r,g,b,1) from the texture if it is the right pixel
fixed4 grass = grassCond * tex2D(_GrassTexture, IN.uv_MainTex);
fixed4 water = waterCond * tex2D(_WaterTexture, IN.uv_MainTex);
fixed4 road = roadCond * tex2D(_RoadTexture, IN.uv_MainTex);
fixed4 def = defaultCond * ct; // just use the MainTex pixel
// then use the found pixels as the Albedo
o.Albedo = (grass + road + water + def).rgb;
o.Alpha = 1;
}
ENDCG
}
Fallback "None"
}
This is the first shader I've ever written, and it probably isn't very performant. It seems counterintuitive to me to call tex2D on each texture for every pixel just to throw that data away, but I couldn't think of a better way to do this without if/else (which I read is bad for GPUs).
This is a Unity Surface Shader, and not a fragment/vertex shader. I know there is a step that happens behind the scenes that will generate the fragment/vertex shader for me (adding in the scene's lighting, fog, etc.). This shader is applied to 100 256x256px map tiles (2560x2560 pixels in total). The grass/road/water textures are all 256x256 pixels as well.
My question is: is there a better, more performant way of accomplishing what I'm doing here? The game runs on Android and iOS.
I'm not a specialist in shader performance, but assuming you have a relatively small number of source tiles to render in the same frame, it might make more sense to store the result of the pixel replacement and reuse it.
Since you state that the resulting image will be the same size as your source tile, just render the source tile using your surface shader (without any lighting, though; you may want to consider a simple, flat pixel shader) into a RenderTexture once, and then use that RenderTexture as the source for your world rendering. That way you do the expensive work only once per source tile, so it no longer even matters whether your shader is well optimized.
If all the textures are static, you might even consider not doing this at runtime at all, but simply translating them once in the Editor.
I've written some shader code for my Android application. It has some time-dependent animation which works totally fine in the WebGL version; the shader code is below, but the full version can be found here.
vec3 bip(vec2 uv, vec2 center)
{
vec2 diff = center-uv; //difference between center and start coordinate
float r = length(diff); //vector length
float scale = mod(u_ElapsedTime,2.); //wraps every 2 seconds, retriggering the effect
float circle = smoothstep(scale, scale+cirleWidth, r)
* smoothstep(scale+cirleWidth,scale, r)*4.;
return vec3(circle);
}
The return value of this function is used in gl_FragColor as the base for the color.
u_ElapsedTime is sent to the shader via a uniform:
glUniform1f(uElapsedTime,elapsedTime);
The time data is sent to the shader from onDrawFrame:
public void onDrawFrame(GL10 gl) {
glClear(GL_COLOR_BUFFER_BIT);
elapsedTime = (SystemClock.currentThreadTimeMillis()-startTime)/100f;
//Log.d("KOS","time " + elapsedTime);
scannerProgram.useProgram(); //initialize shader
scannerProgram.setUniforms(resolution,elapsedTime,rotate); //send uniforms to shader
scannerSurface.bindData(scannerProgram); //get attribute location
scannerSurface.draw(); //draw vertices with given attributes
}
So everything looks totally fine. Nevertheless, after some amount of time there appears to be lag, and the frame rate is lower than at the beginning. In the end there might be only one or two frames per cycle of that function. At the same time, it doesn't seem like OpenGL itself is lagging, because I can, for example, rotate the picture without seeing any lag.
What could be the reason for this lag?
Update:
Code of bindData:
public void bindData(ScannerShaderProgram scannerProgram) {
//getting location of each attribute for shader program
vertexArray.setVertexAttribPointer(
0,
scannerProgram.getPositionAttributeLocation(),
POSITION_COMPONENT_COUNT,
0
);
Sounds to me like precision issues. As u_ElapsedTime grows, the float loses fractional precision on the GPU, so the result of mod() becomes coarser and coarser over time, which shows up as fewer and fewer distinct animation steps per cycle. Try taking this line from your shader:
float scale = mod(u_ElapsedTime,2.);
And perform it on the CPU instead. e.g.
elapsedTime = ((SystemClock.currentThreadTimeMillis()-startTime)%200)/100f;
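For example, a sketch based on the question's own onDrawFrame, keeping the same variable names (the constant 200 matches the existing /100f scaling, so the shader still sees a value in the range 0..2):
public void onDrawFrame(GL10 gl) {
    glClear(GL_COLOR_BUFFER_BIT);
    // Wrap on the CPU so the value passed to the shader never grows large;
    // mod() of a huge float loses precision on many mobile GPUs.
    elapsedTime = ((SystemClock.currentThreadTimeMillis() - startTime) % 200) / 100f;
    scannerProgram.useProgram();                      //initialize shader
    scannerProgram.setUniforms(resolution, elapsedTime, rotate); //send uniforms to shader
    scannerSurface.bindData(scannerProgram);          //get attribute location
    scannerSurface.draw();                            //draw vertices with given attributes
}
With the wrap done on the CPU, the mod(u_ElapsedTime, 2.) in the shader becomes redundant, and u_ElapsedTime never grows large enough to lose fractional precision.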
As a starting point I use the Vuforia (version 4) sample called MultiTargets, which tracks a 3D physical "cube" in the camera feed and augments it with yellow grid lines along the cube edges.
What I want to achieve is remove the textures and use diffuse lighting on the cube faces instead, by setting my own light position.
I want to do this on native Android and I do NOT want to use Unity.
It's been a hard journey of several days of work and learning. This is my first time working with OpenGL of any kind, and OpenGL ES 2.0 doesn't exactly make it easy for the beginner.
So I have a light source positioned slightly above the top face of my cube. I found that I can get the diffuse effect right if I compute the Lambert factor in model space: everything remains in place regardless of my camera, and only the top face gets any light.
But when I move to eye space, it becomes weird and the light seems to follow my camera around. Other faces get light, not only the top face. I don't understand why that is. For testing, I have made sure that the light position is as expected, by using only the distance to the light source to render pixel brightness in the fragment shader. Therefore, I'm fairly confident in the correctness of my "lightDirectionEyespace", and my only explanation is that something with the normals must be wrong. But I think I followed the explanations for creating the normal matrix correctly...
Help please!
Then there is of course the question of whether those diffuse calculations SHOULD be performed in eye space at all. Will there be any disadvantages if I just do it in model space? I suspect that when I later use more models and lights and add specular and transparency, it will not work anymore, even though I don't yet see why.
My renderFrame method: (some variable names still contain "bottle", which is the object I want to light next after I get the cube right)
private void renderFrame()
{
ShaderFactory.checkGLError("Check gl errors prior render Frame");
// Clear color and depth buffer
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
// Get the state from Vuforia and mark the beginning of a rendering section
final State state=Renderer.getInstance().begin();
// Explicitly render the Video Background
Renderer.getInstance().drawVideoBackground();
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
// Did we find any trackables this frame?
if(0 != state.getNumTrackableResults())
{
// Get the trackable:
TrackableResult result=null;
final int numResults=state.getNumTrackableResults();
// Browse results searching for the MultiTarget
for(int j=0; j < numResults; j++)
{
result=state.getTrackableResult(j);
if(result.isOfType(MultiTargetResult.getClassType()))
break;
result=null;
}
// If it was not found exit
if(null == result)
{
// Clean up and leave
GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
Renderer.getInstance().end();
return;
}
final Matrix44F modelViewMatrix_Vuforia=Tool.convertPose2GLMatrix(result.getPose());
final float[] modelViewMatrix=modelViewMatrix_Vuforia.getData();
final float[] modelViewProjection=new float[16];
Matrix.scaleM(modelViewMatrix, 0, CUBE_SCALE_X, CUBE_SCALE_Y, CUBE_SCALE_Z);
Matrix.multiplyMM(modelViewProjection, 0, vuforiaAppSession
.getProjectionMatrix().getData(), 0, modelViewMatrix, 0);
GLES20.glUseProgram(bottleShaderProgramID);
// Draw the cube:
GLES20.glEnable(GLES20.GL_CULL_FACE);
GLES20.glCullFace(GLES20.GL_BACK);
GLES20.glVertexAttribPointer(vertexHandleBottle, 3, GLES20.GL_FLOAT, false, 0, cubeObject.getVertices());
GLES20.glVertexAttribPointer(normalHandleBottle, 3, GLES20.GL_FLOAT, false, 0, cubeObject.getNormals());
GLES20.glEnableVertexAttribArray(vertexHandleBottle);
GLES20.glEnableVertexAttribArray(normalHandleBottle);
// add light position and color
final float[] lightPositionInModelSpace=new float[] {0.0f, 1.1f, 0.0f, 1.0f};
GLES20.glUniform4f(lightPositionHandleBottle, lightPositionInModelSpace[0], lightPositionInModelSpace[1],
lightPositionInModelSpace[2], lightPositionInModelSpace[3]);
GLES20.glUniform3f(lightColorHandleBottle, 0.9f, 0.9f, 0.9f);
// create the normalMatrix for lighting calculations
final float[] normalMatrix=new float[16];
Matrix.invertM(normalMatrix, 0, modelViewMatrix, 0);
Matrix.transposeM(normalMatrix, 0, normalMatrix, 0);
// pass the normalMatrix to the shader
GLES20.glUniformMatrix4fv(normalMatrixHandleBottle, 1, false, normalMatrix, 0);
// extract the camera position for lighting calculations (last column of matrix)
// GLES20.glUniform3f(cameraPositionHandleBottle, normalMatrix[12], normalMatrix[13], normalMatrix[14]);
// set material properties
GLES20.glUniform3f(matAmbientHandleBottle, 0.0f, 0.0f, 0.0f);
GLES20.glUniform3f(matDiffuseHandleBottle, 0.1f, 0.9f, 0.1f);
// pass the model view matrix to the shader
GLES20.glUniformMatrix4fv(modelViewMatrixHandleBottle, 1, false, modelViewMatrix, 0);
// pass the model view projection matrix to the shader
// the "transpose" parameter must be "false" according to the spec, anything else is an error
GLES20.glUniformMatrix4fv(mvpMatrixHandleBottle, 1, false, modelViewProjection, 0);
GLES20.glDrawElements(GLES20.GL_TRIANGLES,
cubeObject.getNumObjectIndex(), GLES20.GL_UNSIGNED_SHORT, cubeObject.getIndices());
GLES20.glDisable(GLES20.GL_CULL_FACE);
// disable the enabled arrays after everything has been rendered
GLES20.glDisableVertexAttribArray(vertexHandleBottle);
GLES20.glDisableVertexAttribArray(normalHandleBottle);
ShaderFactory.checkGLError("MultiTargets renderFrame");
}
GLES20.glDisable(GLES20.GL_BLEND);
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
Renderer.getInstance().end();
}
My vertex shader:
attribute vec4 vertexPosition;
attribute vec3 vertexNormal;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 normalMatrix;
// lighting
uniform vec4 uLightPosition;
uniform vec3 uLightColor;
// material
uniform vec3 uMatAmbient;
uniform vec3 uMatDiffuse;
// pass to fragment shader
varying vec3 vNormalEyespace;
varying vec3 vVertexEyespace;
varying vec4 vLightPositionEyespace;
varying vec3 vNormal;
varying vec4 vVertex;
void main()
{
// we can just take vec3() of a vec4 and it will take the first 3 entries
vNormalEyespace = vec3(normalMatrix * vec4(vertexNormal, 1.0));
vNormal = vertexNormal;
vVertexEyespace = vec3(modelViewMatrix * vertexPosition);
vVertex = vertexPosition;
// light position
vLightPositionEyespace = modelViewMatrix * uLightPosition;
gl_Position = modelViewProjectionMatrix * vertexPosition;
}
And my fragment shader:
precision highp float; //apparently necessary to force same precision as in vertex shader
//lighting
uniform vec4 uLightPosition;
uniform vec3 uLightColor;
//material
uniform vec3 uMatAmbient;
uniform vec3 uMatDiffuse;
//from vertex shader
varying vec3 vNormalEyespace;
varying vec3 vVertexEyespace;
varying vec4 vLightPositionEyespace;
varying vec3 vNormal;
varying vec4 vVertex;
void main()
{
vec3 normalModel = normalize(vNormal);
vec3 normalEyespace = normalize(vNormalEyespace);
vec3 lightDirectionModel = normalize(uLightPosition.xyz - vVertex.xyz);
vec3 lightDirectionEyespace = normalize(vLightPositionEyespace.xyz - vVertexEyespace.xyz);
vec3 ambientTerm = uMatAmbient;
vec3 diffuseTerm = uMatDiffuse * uLightColor;
// calculate the lambert factor via cosine law
float diffuseLambert = max(dot(normalEyespace, lightDirectionEyespace), 0.0);
// Attenuate the light based on distance.
float distance = length(vLightPositionEyespace.xyz - vVertexEyespace.xyz);
float diffuseLambertAttenuated = diffuseLambert * (1.0 / (1.0 + (0.01 * distance * distance)));
diffuseTerm = diffuseLambertAttenuated * diffuseTerm;
gl_FragColor = vec4(ambientTerm + diffuseTerm, 1.0);
}
I finally solved all problems.
There were 2 issues that might be of interest for future readers.
The Vuforia CubeObject class from the official sample (current Vuforia version 4) has wrong normals. They do not all correspond to the vertex definition order. If you're using the CubeObject from the sample, make sure that the normal definitions correctly correspond to the faces. Vuforia fail...
As suspected, my normalMatrix was built wrongly. We cannot just invert-transpose the 4x4 modelViewMatrix; we need to first extract the top-left 3x3 submatrix from it and then invert-transpose that.
Here is the code that works for me:
final Mat3 normalMatrixCube=new Mat3();
normalMatrixCube.SetFrom4X4(modelViewMatrix);
normalMatrixCube.invert();
normalMatrixCube.transpose();
This code by itself is not that useful, though, because it relies on a custom Mat3 class which I randomly imported from this guy, since neither Android nor Vuforia seems to offer any matrix class that can invert/transpose 3x3 matrices. This really makes me question my sanity - the only code that works for such a basic problem has to rely on a custom matrix class? Maybe I'm just doing it wrong, I don't know...
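For readers who would rather avoid a third-party Mat3 class, here is a hedged sketch of one way to get the same result using only android.opengl.Matrix, by padding the top-left 3x3 into an identity 4x4 before invert-transposing (the class and method names here are hypothetical, not part of the answer's code):
import android.opengl.Matrix;

public final class NormalMatrixUtil {
    // Builds a 3x3 normal matrix (column-major, ready for glUniformMatrix3fv)
    // from a 4x4 model-view matrix, without a dedicated 3x3 matrix class.
    public static float[] normalMatrixFrom(float[] modelViewMatrix) {
        // Copy the top-left 3x3 of the model-view into an identity-padded 4x4.
        float[] padded = new float[16];
        Matrix.setIdentityM(padded, 0);
        for (int col = 0; col < 3; col++) {
            for (int row = 0; row < 3; row++) {
                padded[col * 4 + row] = modelViewMatrix[col * 4 + row];
            }
        }
        // Invert and transpose the padded matrix; because the padding is the
        // identity, the top-left 3x3 of the result is the normal matrix.
        float[] inverted = new float[16];
        float[] transposed = new float[16];
        Matrix.invertM(inverted, 0, padded, 0);
        Matrix.transposeM(transposed, 0, inverted, 0);
        // Extract the 3x3 block in column-major order.
        float[] normalMatrix3x3 = new float[9];
        for (int col = 0; col < 3; col++) {
            for (int row = 0; row < 3; row++) {
                normalMatrix3x3[col * 3 + row] = transposed[col * 4 + row];
            }
        }
        return normalMatrix3x3;
    }
}
Note that this produces a mat3, so the shader side would declare uniform mat3 normalMatrix, transform the normal as normalMatrix * vertexNormal, and upload it with GLES20.glUniformMatrix3fv(normalMatrixHandle, 1, false, normalMatrix3x3, 0).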
Thumbs up for not using the fixed-function pipeline for this! I found your example quite useful for understanding that one also needs to transform the light to a position in eye space. All the questions I've found just recommend using glLight.
While this helped me solve the static light source case, one thing missing from your code, if you also want to apply transformations to your model(s) while keeping the light source static (e.g. rotating the object), is to keep track of the original model-view matrix until the view changes, or until you're drawing another object that has a different model matrix. So something like:
vLightPositionEyespace = fixedModelView * uLightPosition;
where fixedModelView can be updated in your renderFrame() method.
This thread on opengl discussion boards helped :)
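For completeness, a minimal sketch of how that could look on the Java side of renderFrame(), assuming a hypothetical fixedModelViewHandle uniform location for the extra matrix (the other names come from the question's code):
// Inside renderFrame(), right after converting the pose:
final float[] modelViewMatrix = modelViewMatrix_Vuforia.getData();
// Keep an untouched copy before any per-object transforms are applied.
final float[] fixedModelView = modelViewMatrix.clone();
// Per-object transforms go into modelViewMatrix only.
Matrix.scaleM(modelViewMatrix, 0, CUBE_SCALE_X, CUBE_SCALE_Y, CUBE_SCALE_Z);
// Upload both matrices; the vertex shader uses fixedModelView solely for the light position.
GLES20.glUniformMatrix4fv(fixedModelViewHandle, 1, false, fixedModelView, 0);
GLES20.glUniformMatrix4fv(modelViewMatrixHandleBottle, 1, false, modelViewMatrix, 0);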
Please see Edit at end for progress.
I'm in the process of trying to learn OpenGL ES 2.0 (I'm going to be developing on Android devices)
I'm a little confused about vertex and fragment shaders. I understand their purpose, but if I'm building a shape from a custom-built class (say a 'point'), setting its size and colour or applying a texture, and assuming that both shaders are declared and defined in the object class's constructor, would this mean that each instance of that class has its very own pair of shaders?
That is my first question. My second is: if this is the case (a shader pair for each object), is this the way to go? I've heard that having one shader pair and switching its parameters isn't a good idea because of performance, but if I have 100 sprites all of the same size and colour (or texture), does it make sense for them all to have a different pair of shaders with exactly the same parameters?
I hope I'm asking the correct question; I've not been studying ES 2.0 for long, so I find it a little confusing. I currently have only a limited understanding of OpenGL!
Edit
Adding code as requested.
public class Dot {
int iProgId;
int iPosition;
float size = 10;
FloatBuffer vertexBuf;
float r = 1f;
float g = 1f;
float b = 1f;
float a = 1f;
int iBaseMap;
int texID;
Bitmap imgTexture;
//Constructor
public Dot() {
float[] vertices = {
0,0,0f
};
//Create vertex shader
String strVShader =
"attribute vec4 a_position;\n"+
"void main()\n" +
"{\n" +
"gl_PointSize = " +size+ ";\n" +
"gl_Position = a_position;\n"+
"}";
//Create fragment shader
String strFShader =
"precision mediump float;" +
"void main() " +
"{" +
"gl_FragColor = vec4(0,0,0,1);" +
"}";
iProgId = Utils.LoadProgram(strVShader, strFShader);
iPosition = GLES20.glGetAttribLocation(iProgId, "a_position");
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
}
My setTexture method
public void setTexture(GLSurfaceView view, Bitmap imgTexture){
this.imgTexture=imgTexture;
//Create vertex shader
String strVShader =
"attribute vec4 a_position;\n"+
"void main()\n" +
"{\n" +
"gl_PointSize = " +size+ ";\n" +
"gl_Position = a_position;\n"+
"}";
//Fragment shader
String strFShader =
"precision mediump float;" +
"uniform sampler2D u_baseMap;" +
"void main()" +
"{" +
"vec4 color;" +
"color = texture2D(u_baseMap, gl_PointCoord);" +
"gl_FragColor = color;" +
"}";
iProgId = Utils.LoadProgram(strVShader, strFShader);
iBaseMap = GLES20.glGetUniformLocation(iProgId, "u_baseMap");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glUniform1i(iBaseMap, 0);
texID = Utils.LoadTexture(view, imgTexture); //See code below
}
LoadTexture() method from my Utils class:
public static int LoadTexture(GLSurfaceView view, Bitmap imgTex) {
int textures[] = new int[1];
try {
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,GLES20.GL_LINEAR);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, imgTex, 0);
} catch (Exception e) {
// exception handling truncated in the original post
}
return textures[0];
}
And finally my Drawing method:
public void drawDot(float x, float y){
float[] vertices = {
x,y,0f
};
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
GLES20.glUseProgram(iProgId);
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 0, vertexBuf);
GLES20.glEnableVertexAttribArray(iPosition);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);
}
So I can create stuff like so:
Dot dot1 = new Dot();
dot1.setSize(40);
dot1.setTexture(myBitmap); //(created earlier with BitmapFactory)
dot1.drawDot(0,0);
Thank you!
Edit 1: Thanks for the answer so far. On further research, it seems a few other people have had this exact same problem. The issue seems to be that I'm not calling glBindTexture in my rendering routine, so OpenGL is just using the last texture that it loaded, which I guess makes sense.
If I put the following into my Rendering routine:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 1);
it will apply the first bitmap and display it
if I put:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 2);
it will apply the second bitmap and display it
So, getting somewhere! But my question now is: how can I get my rendering method to automatically know which bitmap to use, based on which object is calling it (the rendering routine)?
Thanks again
How it works (briefly)
Shaders are just programs that run in the graphics card. You compile and link them, and then you can pass some variables to them to modify properties of vertices and fragments.
This means that when you call certain drawing functions, such as glDrawElements or glDrawArrays, the vertex data (this means position, texture coords, normals, color, etc. depending on what you want to send) will be sent to the pipeline.
This means that the currently loaded vertex shader will get the vertices one by one and run its code to apply whatever transformations it needs. After that OpenGL will apply rasterization to generate the fragments for the current frame. Then the fragment shader will take every fragment and modify it accordingly.
You can always unload a shader and load a different one. If you need different shaders for different objects, you can group your objects according to their shader and render them independently while reloading the corresponding shader for every group.
However, sometimes it's easier to pass some parameters to the shader and change them for every object. For instance, if you want to render a 3D model, you can split it in submeshes, with every submesh having a different texture. Then, when you pass the vertex data for a mesh, you load the texture and pass it to the shader. For the next mesh you will pass another texture, and so on.
In the real world everything is more complex, but I hope it's useful for you to get an idea of how it works.
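To make the compile-and-link step above concrete, here is a sketch of a typical OpenGL ES 2.0 program loader on Android - an assumption about what a helper such as Utils.LoadProgram in the question presumably does, not its actual code:
import android.opengl.GLES20;
import android.util.Log;

public final class ShaderUtils {
    // Compiles a vertex and a fragment shader and links them into a program.
    // Returns the program handle, or 0 on failure.
    public static int loadProgram(String vertexSource, String fragmentSource) {
        int vertexShader = compileShader(GLES20.GL_VERTEX_SHADER, vertexSource);
        int fragmentShader = compileShader(GLES20.GL_FRAGMENT_SHADER, fragmentSource);
        if (vertexShader == 0 || fragmentShader == 0) {
            return 0;
        }
        int program = GLES20.glCreateProgram();
        GLES20.glAttachShader(program, vertexShader);
        GLES20.glAttachShader(program, fragmentShader);
        GLES20.glLinkProgram(program);
        // Check the link status so failures show up in the log instead of silently rendering nothing.
        int[] linked = new int[1];
        GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linked, 0);
        if (linked[0] == 0) {
            Log.e("ShaderUtils", GLES20.glGetProgramInfoLog(program));
            GLES20.glDeleteProgram(program);
            return 0;
        }
        return program;
    }

    private static int compileShader(int type, String source) {
        int shader = GLES20.glCreateShader(type);
        GLES20.glShaderSource(shader, source);
        GLES20.glCompileShader(shader);
        int[] compiled = new int[1];
        GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
        if (compiled[0] == 0) {
            Log.e("ShaderUtils", GLES20.glGetShaderInfoLog(shader));
            GLES20.glDeleteShader(shader);
            return 0;
        }
        return shader;
    }
}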
Your example
You are loading a pair of shaders in the constructor (with no texture), and then creating a new shader program every time you set a texture. I'm not sure this is the best approach.
Without knowing what Utils.LoadShader does it is difficult to say, but you could log the result every time you call it. Maybe the second time you link the shader it doesn't work.
If I were you, I would just use a pair of shaders outside your dot object. You can pass parameters to the shader (with glUniform...), indicating the dot size, texture, etc.
The setTexture function would then just bind the new texture without loading the shaders. Compile them at the beginning (after setting up the GL context and so on).
When this works, you may consider changing your shaders every time, but only if it is really necessary.
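As a concrete illustration of that suggestion, here is a hedged sketch of the shared-program approach; the uniform u_pointSize is hypothetical, and size, texID and vertexBuf refer to the fields already present in the Dot class:
// One program, compiled once (e.g. in the renderer's onSurfaceCreated), shared by all dots.
String vertexShader =
    "attribute vec4 a_position;\n" +
    "uniform float u_pointSize;\n" +   // size becomes a uniform instead of being baked into the source
    "void main() {\n" +
    "  gl_PointSize = u_pointSize;\n" +
    "  gl_Position = a_position;\n" +
    "}";
String fragmentShader =
    "precision mediump float;\n" +
    "uniform sampler2D u_baseMap;\n" +
    "void main() {\n" +
    "  gl_FragColor = texture2D(u_baseMap, gl_PointCoord);\n" +
    "}";
int sharedProgId = Utils.LoadProgram(vertexShader, fragmentShader);
int positionLoc  = GLES20.glGetAttribLocation(sharedProgId, "a_position");
int pointSizeLoc = GLES20.glGetUniformLocation(sharedProgId, "u_pointSize");
int baseMapLoc   = GLES20.glGetUniformLocation(sharedProgId, "u_baseMap");

// Per dot, per frame: no new shaders, just per-object state.
GLES20.glUseProgram(sharedProgId);
GLES20.glUniform1f(pointSizeLoc, size);                  // this dot's size
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);       // this dot's texture
GLES20.glUniform1i(baseMapLoc, 0);
GLES20.glVertexAttribPointer(positionLoc, 3, GLES20.GL_FLOAT, false, 0, vertexBuf);
GLES20.glEnableVertexAttribArray(positionLoc);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);
This way the two shaders are compiled exactly once, and per-dot differences (size, texture) travel through uniforms and texture bindings instead of new shader programs.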
I will answer myself as I found out what the problem was.
Added this to my drawDot method:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
texID being the texture ID that corresponds to the object calling the drawDot() method.
Works perfectly
Hope this helps anyone who may have a similar problem in the future.