I've recently picked up RenderScript and I'm really enjoying it, but the lack of documentation and examples isn't helping. I've managed to use the live wallpaper and other samples to get my own live wallpaper running, but for texturing I have been using the fixed-function shaders.
I've looked at GLSL tutorials, but they don't seem to translate over exactly. I've looked into the RenderScript source code, but it hasn't been of much help either.
Here is some code that I dug up from the RenderScript sources that seems to show what the fixed-function pipeline is doing:
Program vertex
shaderString.append("varying vec4 varColor;\n");
shaderString.append("varying vec2 varTex0;\n");
shaderString.append("void main() {\n");
shaderString.append(" gl_Position = UNI_MVP * ATTRIB_position;\n");
shaderString.append(" gl_PointSize = 1.0;\n");
shaderString.append(" varColor = ATTRIB_color;\n");
shaderString.append(" varTex0 = ATTRIB_texture0;\n");
shaderString.append("}\n");
Program fragment
shaderString.append("varying lowp vec4 varColor;\n");
shaderString.append("varying vec2 varTex0;\n");
shaderString.append("void main() {\n");
shaderString.append(" lowp vec4 col = UNI_Color;\n");
shaderString.append(" gl_FragColor = col;\n");
shaderString.append("}\n");
I don't think these are the best examples, because the fragment shader doesn't seem to touch the varTex0 variable. I've tried to write my own program fragment and use the fixed-function vertex shader.
Here's my fragment shader:
ProgramFragment.Builder b = new ProgramFragment.Builder(mRS);
String s = "void main() {" +
" gl_FragColor = vec4(1.0,1.0,1.0,0.5);" +
"}";
b.setShader(s);
pf = b.create();
mScript.set_gPFLights(pf);
Extremely basic, but any attempt at binding a texture has failed. I don't know what variable name is needed for the texture.
Could anyone provide an example of a basic program vertex and program fragment that uses textures? Thanks in advance.
Check out the FountainFBO sample. It uses a program fragment with a texture that is used as a frame buffer object.
I finally managed to find the sources for the FixedFunction classes that are used to create the GLSL shaders. They are located in "android_frameworks_base / graphics / java / android / renderscript".
Here is what a fragment shader built with these FixedFunction settings:
ProgramFragmentFixedFunction.Builder builder = new ProgramFragmentFixedFunction.Builder(mRS);
builder.setTexture(ProgramFragmentFixedFunction.Builder.EnvMode.REPLACE,
ProgramFragmentFixedFunction.Builder.Format.RGBA, 0); //CHANGED
ProgramFragment pf = builder.create(); //RENAMED
pf.bindSampler(Sampler.WRAP_NEAREST(mRS), 0);
would look like:
ProgramFragment.Builder pfBuilder = new ProgramFragment.Builder(mRS);
String s = "varying vec2 varTex0;" +
"void main() {" +
" lowp vec4 col;" +
" vec2 t0 = varTex0;" +
" col.rgba = texture2D(UNI_Tex0, t0).rgba;" +
" gl_FragColor = col;" +
"}";
pfBuilder.setShader(s);
pfBuilder.addTexture(TextureType.TEXTURE_2D);
pf = pfBuilder.create();
This fragment shader works with the ProgramVertexFixedFunction.
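The texture itself is then bound to the program object rather than named in the builder. As a sketch, assuming mBitmap is an already-loaded Bitmap (slot 0 corresponds to UNI_Tex0 in the shader above):
Allocation texAlloc = Allocation.createFromBitmap(mRS, mBitmap,
Allocation.MipmapControl.MIPMAP_NONE,
Allocation.USAGE_GRAPHICS_TEXTURE); //usable as a texture
pf.bindTexture(texAlloc, 0); //texture slot 0
pf.bindSampler(Sampler.CLAMP_LINEAR(mRS), 0); //sampler for the same slot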
I haven't gotten around to seeing what the FixedFunction vertex shader looks like but I will update this answer when I do.
I use these functions to draw elements in Android using OpenGL ES. First, in the constructor, after binding the buffers (they are bound correctly, because they are drawn when using GL10), I create the program using the CreateGLProgram function, and then I call DrawShader. I think the problem is in the draw function. Can anyone tell me what my mistakes are?
PS: I don't post the code for binding the buffers because, as I said, they are drawn when using GL10. Now I want to use GLES20 because, from reading various questions and some pages on the Android developer site (and maybe I'm wrong), OpenGL ES 2.0 is faster since it uses static functions.
Here is the code, thanks in advance:
private final String vertexShader="" +
"attribute vec3 vertex; \n" +
"void main(){\n" +
" gl_Position=vertex;\n" +
"}";
private final String fragmentShader="" +
"attribute vec4 colors;\n" +
"void main(){\n" +
" gl_FragColor=colors;\n" +
"}";
public int LoadShader(String shader,int type){
int sha= GLES20.glCreateShader(type);
GLES20.glShaderSource(sha,shader);
GLES20.glCompileShader(sha);
return sha;
}
int program=0;
public void CreateGLProgram()
{
program=GLES20.glCreateProgram();
GLES20.glAttachShader(program,LoadShader(vertexShader,GLES20.GL_VERTEX_SHADER));
GLES20.glAttachShader(program,LoadShader(fragmentShader,GLES20.GL_FRAGMENT_SHADER));
GLES20.glLinkProgram(program);
}
public void DrawShader(){
GLES20.glUseProgram(program);
int vertex_handle=GLES20.glGetAttribLocation(program,"vertex");
GLES20.glVertexAttribPointer(vertex_handle,3,GLES20.GL_FLOAT,false,4,coordinatesbuff);
int frag_handle=GLES20.glGetAttribLocation(program,"colors");
GLES20.glVertexAttribPointer(frag_handle,4,GLES20.GL_FLOAT,false,4,colorbuffer);
GLES20.glDrawElements(GLES20.GL_TRIANGLES,indicies.length,GLES20.GL_UNSIGNED_SHORT,indiciesbuffer);
GLES20.glDisableVertexAttribArray(vertex_handle);
GLES20.glDisableVertexAttribArray(frag_handle);
}
In addition to what BDL wrote:
You should enable attribute pointers using glEnableVertexAttribArray(int index):
int vertex_handle=GLES20.glGetAttribLocation(program, "vertex");
GLES20.glEnableVertexAttribArray(vertex_handle);
GLES20.glVertexAttribPointer(vertex_handle, 3, GLES20.GL_FLOAT, false, 4, coordinatesbuff);
int frag_handle=GLES20.glGetAttribLocation(program, "colors");
GLES20.glEnableVertexAttribArray(frag_handle);
GLES20.glVertexAttribPointer(frag_handle, 4, GLES20.GL_FLOAT, false, 4, colorbuffer);
You can't have attributes in a fragment shader. If you want per-vertex colors in the fragment shader, you have to define this attribute in the vertex shader and pass the data to the fragment shader in a varying.
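As a sketch, the corrected pair could look like this (the varying name vColor is illustrative; note that gl_Position is a vec4, so the vec3 attribute has to be extended too):
private final String vertexShader="" +
"attribute vec3 vertex;\n" +
"attribute vec4 colors;\n" + //the color attribute belongs here
"varying vec4 vColor;\n" + //handed on to the fragment shader
"void main(){\n" +
" gl_Position=vec4(vertex, 1.0);\n" +
" vColor=colors;\n" +
"}";
private final String fragmentShader="" +
"precision mediump float;\n" +
"varying vec4 vColor;\n" +
"void main(){\n" +
" gl_FragColor=vColor;\n" +
"}";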
Additionally, you should check whether your shaders compile (glGetShaderiv(..., GL_COMPILE_STATUS, ...)) and link (glGetProgramiv(..., GL_LINK_STATUS, ...)) correctly. This would have put you on the right track, since using an attribute in a fragment shader should trigger a compile error. glGetError should also be checked.
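A minimal sketch of such a check (the method name is illustrative):
public int LoadShaderChecked(String shader, int type){
int sha = GLES20.glCreateShader(type);
GLES20.glShaderSource(sha, shader);
GLES20.glCompileShader(sha);
int[] compiled = new int[1];
GLES20.glGetShaderiv(sha, GLES20.GL_COMPILE_STATUS, compiled, 0);
if (compiled[0] == 0) {
// The info log says exactly why compilation failed
Log.e("Shader", GLES20.glGetShaderInfoLog(sha));
GLES20.glDeleteShader(sha);
return 0;
}
return sha;
}
The same pattern applies after glLinkProgram, using GLES20.glGetProgramiv with GLES20.GL_LINK_STATUS and GLES20.glGetProgramInfoLog.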
I have some rendering code that dynamically indexes uniform data based on a vertex stream. It uses the following shader:
#version 300 es
uniform mat4 data[24];
in vec4 pos;
in float index;
void main()
{
gl_Position = (pos) * ( data[int(index * 1.01)]);
}
When run on any Adreno 300 series Android GPU, the data is not indexed correctly. It is sometimes correct, but frequently, the geometry seems to be accessing bogus uniform data, causing missing geometry, or corrupted rendering. The exact same code run on other Android devices (even Adreno 200 series) produces correct results. When captured with the Adreno profiler, the rendered result also shows correctly. Further, a shader which is essentially equivalent:
#version 300 es
uniform vec4 data[96];
in vec4 pos;
in float index;
void main()
{
gl_Position.x = dot(pos, data[int(index * 4.01 + 0.0)]);
gl_Position.y = dot(pos, data[int(index * 4.01 + 1.0)]);
gl_Position.z = dot(pos, data[int(index * 4.01 + 2.0)]);
gl_Position.w = dot(pos, data[int(index * 4.01 + 3.0)]);
}
Produces correct results (with correspondingly modified glUniform code). Is this a bug in the Adreno 300 series driver's dynamic indexing of mat4 data, is there something incorrect about my shader code, or is there something non-standard/incorrect here that the Adreno 300 doesn't support?
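For reference, the difference in the upload code between the two variants would look roughly like this (program and matrixData are assumed names; the same column-major float array backs both):
int loc = GLES20.glGetUniformLocation(program, "data");
// mat4 data[24] version:
GLES20.glUniformMatrix4fv(loc, 24, false, matrixData, 0); //24 mat4s = 384 floats
// vec4 data[96] workaround:
GLES20.glUniform4fv(loc, 96, matrixData, 0); //the same 384 floats as vec4s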
I have a very large texture that I am using as a background, and I want to apply a filter to a small part of it. The "small part" is defined by the alpha layer of another texture I have (which is still RGB8888). I am not sure what the best approach is. I'd like to keep the same (very simple) shader I am already using for other sprites, which is similar to the basic one, i.e.:
precision mediump float;
uniform sampler2D uTexture;
varying vec2 vTexPos;
void main() {
gl_FragColor = texture2D(uTexture, vTexPos);
}
So, I have some questions:
How can I apply my filter only to the "masked" region and avoid drawing the rest?
Do I lose performance if I draw the big texture again, once loaded, just to apply it to a small portion of the screen?
Can I map a second texture to the shader and use something like "if uTexture2 != null" -> apply as mask? Would this give me any performance gain compared to using a second shader?
Both textures are premultiplied; how should I handle alpha masking?
What I'd like to do is something like this (original, mask, result):
My environment is Android 4.0, and I'm using GLES20.
You need to sample an additional mask texture, test its color, and apply a desaturation filter based on that condition.
This fragment shader should work:
precision mediump float;
uniform sampler2D uTexture;
uniform sampler2D uMask;
varying vec2 vTexPos;
void main() {
vec4 mask = texture2D(uMask, vTexPos);
if(mask.r < 0.9) { // black mask
gl_FragColor = texture2D(uTexture, vTexPos);
} else { // white mask
vec4 texelColor = texture2D(uTexture, vTexPos); // original color
vec4 scaledColor = texelColor * vec4(0.3, 0.59, 0.11, 1.0); // weights calculation
float luminance = scaledColor.r + scaledColor.g + scaledColor.b; // greyscale
gl_FragColor = vec4(luminance, luminance, luminance, texelColor.a); // final color with original alpha value
}
}
Desaturation code is from this great article:
http://franzzle.wordpress.com/2013/03/25/use-a-opengl-es-2-0-shader-to-show-a-desaturated-sprite-in-cocos2d-2-0/
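For the two samplers to work, each has to be pointed at its own texture unit before drawing. A sketch, assuming program, texId and maskId are your already-created program and texture handles:
GLES20.glUseProgram(program);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "uTexture"), 0); //unit 0
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, maskId);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "uMask"), 1); //unit 1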
Please see Edit at end for progress.
I'm in the process of trying to learn OpenGL ES 2.0 (I'm going to be developing on Android devices)
I'm a little confused about the vertex and fragment shaders. I understand their purpose, but if I'm building a shape from a custom-built class (say a 'point'), setting its size and colour or applying a texture, and assuming that both shaders are declared and defined initially in the object class's constructor, would this mean that each instance of that class has its very own pair of shaders?
That is my first question. My second is that, if this is the case (shader pairs for each object), is this the way to go? I've heard that having one shader pair and switching its parameters isn't a good idea because of performance, but if I have 100 sprites, all of the same size and colour (or texture), does it make sense for them all to have a different pair of shaders with exactly the same parameters?
I hope I'm asking the correct question; I've not been studying ES 2.0 for long, so I find it a little confusing. I currently have only a limited understanding of OpenGL!
Edit
Adding code as requested.
public class Dot {
int iProgId;
int iPosition;
float size = 10;
FloatBuffer vertexBuf;
float r = 1f;
float g = 1f;
float b = 1f;
float a = 1f;
int iBaseMap;
int texID;
Bitmap imgTexture;
//Constructor
public Dot() {
float[] vertices = {
0,0,0f
};
//Create vertex shader
String strVShader =
"attribute vec4 a_position;\n"+
"void main()\n" +
"{\n" +
"gl_PointSize = " +size+ ";\n" +
"gl_Position = a_position;\n"+
"}";
//Create fragment shader
String strFShader =
"precision mediump float;" +
"void main() " +
"{" +
"gl_FragColor = vec4(0,0,0,1);" +
"}";
iProgId = Utils.LoadProgram(strVShader, strFShader);
iPosition = GLES20.glGetAttribLocation(iProgId, "a_position");
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
}
My setTexture method:
public void setTexture(GLSurfaceView view, Bitmap imgTexture){
this.imgTexture=imgTexture;
//Create vertex shader
String strVShader =
"attribute vec4 a_position;\n"+
"void main()\n" +
"{\n" +
"gl_PointSize = " +size+ ";\n" +
"gl_Position = a_position;\n"+
"}";
//Fragment shader
String strFShader =
"precision mediump float;" +
"uniform sampler2D u_baseMap;" +
"void main()" +
"{" +
"vec4 color;" +
"color = texture2D(u_baseMap, gl_PointCoord);" +
"gl_FragColor = color;" +
"}";
iProgId = Utils.LoadProgram(strVShader, strFShader);
iBaseMap = GLES20.glGetUniformLocation(iProgId, "u_baseMap");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glUniform1i(iBaseMap, 0);
texID = Utils.LoadTexture(view, imgTexture); //See code below
}
LoadTexture() method from my Utils class:
public static int LoadTexture(GLSurfaceView view, Bitmap imgTex) {
int textures[] = new int[1];
try {
GLES20.glGenTextures(1, textures, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,GLES20.GL_LINEAR);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,GLES20.GL_LINEAR);
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, imgTex, 0);
} catch (Exception e) {
e.printStackTrace();
}
return textures[0];
}
And finally my Drawing method:
public void drawDot(float x, float y){
float[] vertices = {
x,y,0f
};
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
GLES20.glUseProgram(iProgId);
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 0, vertexBuf);
GLES20.glEnableVertexAttribArray(iPosition);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);
}
So I can create stuff like so:
Dot dot1 = new Dot();
dot1.setSize(40);
dot1.setTexture(glView, myBitmap); //glView is your GLSurfaceView; bitmap created earlier with BitmapFactory
dot1.drawDot(0,0);
Thank you!
Edit 1: Thanks for the answer so far. On further research, it seems a few other people have had this exact same problem. The issue appears to be that I'm not calling glBindTexture in my rendering routine, so OpenGL just uses the last texture that was loaded, which I guess makes sense.
If I put the following into my Rendering routine:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 1);
it will apply the first bitmap and display it
if I put:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, 2);
it will apply the second bitmap and display it
So, getting somewhere! But my question would now be how can I get my rendering method to automatically know which bitmap to use based on which object is calling it (the rendering routine)?
Thanks again
How it works (briefly)
Shaders are just programs that run in the graphics card. You compile and link them, and then you can pass some variables to them to modify properties of vertices and fragments.
This means that when you call certain drawing functions, such as glDrawElements or glDrawArrays, the vertex data (this means position, texture coords, normals, color, etc. depending on what you want to send) will be sent to the pipeline.
This means that the currently loaded vertex shader will get the vertices one by one and run its code to apply whatever transformations it needs. After that OpenGL will apply rasterization to generate the fragments for the current frame. Then the fragment shader will take every fragment and modify it accordingly.
You can always unload a shader and load a different one. If you need different shaders for different objects, you can group your objects according to their shader and render them independently while reloading the corresponding shader for every group.
However, sometimes it's easier to pass some parameters to the shader and change them for every object. For instance, if you want to render a 3D model, you can split it in submeshes, with every submesh having a different texture. Then, when you pass the vertex data for a mesh, you load the texture and pass it to the shader. For the next mesh you will pass another texture, and so on.
In the real world everything is more complex, but I hope it's useful for you to get an idea of how it works.
Your example
You are loading a pair of shaders in the constructor (with no texture), and then creating a new shader program every time you set a texture. I'm not sure this is the best approach.
Without knowing what Utils.LoadProgram does it is difficult to tell, but you could log the result every time you call it. Maybe the second time you link the program it doesn't work.
If I were you, I would just use one pair of shaders outside your Dot object. You can pass parameters to the shader (with glUniform...) indicating the dot size, texture, etc.
The setTexture function would then just bind the new texture without loading the shaders. Compile them at the beginning (after setting up the GL context and so on).
Once this works, you may consider changing your shaders every time, but only if it is really necessary.
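As a rough sketch of that idea (the uniform names are illustrative), the shaders are compiled once and the per-dot state becomes uniforms:
String strVShader =
"attribute vec4 a_position;\n" +
"uniform float u_size;\n" + //per-dot point size
"void main(){\n" +
" gl_PointSize = u_size;\n" +
" gl_Position = a_position;\n" +
"}";
String strFShader =
"precision mediump float;\n" +
"uniform vec4 u_color;\n" + //per-dot color
"void main(){\n" +
" gl_FragColor = u_color;\n" +
"}";
int iProgId = Utils.LoadProgram(strVShader, strFShader); //compile once, share between dots
//Then, before drawing each dot:
GLES20.glUseProgram(iProgId);
GLES20.glUniform1f(GLES20.glGetUniformLocation(iProgId, "u_size"), size);
GLES20.glUniform4f(GLES20.glGetUniformLocation(iProgId, "u_color"), r, g, b, a);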
I will answer myself as I found out what the problem was.
Added this to my drawDot method:
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID);
texID being the texture ID that corresponds to the object calling the drawDot() method.
Works perfectly
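For clarity, this is what drawDot looks like with the fix folded in:
public void drawDot(float x, float y){
float[] vertices = { x, y, 0f };
vertexBuf = ByteBuffer.allocateDirect(vertices.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
vertexBuf.put(vertices).position(0);
GLES20.glUseProgram(iProgId);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texID); //bind this Dot's own texture
GLES20.glVertexAttribPointer(iPosition, 3, GLES20.GL_FLOAT, false, 0, vertexBuf);
GLES20.glEnableVertexAttribArray(iPosition);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);
}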
Hope this helps anyone who may have a similar problem in the future.
I am having some problems uploading a small vector of vec4s to the GPU. I have boiled this problem down to the bare minimum code to throw an error.
Here is my Fragment shader:
precision mediump float;
uniform vec4 test[5];
void main() {
gl_FragColor = test[0]+test[1]+test[2]+test[3]+test[4];
}
And the vertex shader is trivial:
attribute vec4 vPosition;
void main(){
gl_Position = vPosition;
}
Here is the code that tries to upload the vec4 vector:
float[] testBuffer = new float[4*5];
// Fill with 1/5s for now
Arrays.fill(testBuffer, 0.2f);
// Get the location
int testLoc = GLES20.glGetUniformLocation(mProgram, "test");
checkGlError("glGetUniformLocation test");
// Upload the buffer
GLES20.glUniform4fv(testLoc, 5, testBuffer, 0);
checkGlError("glUniform4fv testBuffer");
The error is found on the second call to checkGlError(), and the error code is GL_INVALID_OPERATION.
I've read the documentation on glUniform, and all of the sizes and types appear to be correct. testLoc is a valid location handle, and I get no errors when compiling the fragment and vertex shaders.
I just can't see what I'm doing wrong! Any ideas?
Update: see the glUniform documentation:
"GL_INVALID_OPERATION is generated if there is no current program object."
Make sure your shader program is currently bound/used when calling glUniform, i.e. glUseProgram has been called with the corresponding shader program handle. The uniform keeps its value when the program is unbound (e.g. glUseProgram(0)), but the program has to be active when setting the uniform value.
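Applied to the code above, the fix is simply to make the program current first:
GLES20.glUseProgram(mProgram); //program must be current for glUniform* calls
int testLoc = GLES20.glGetUniformLocation(mProgram, "test");
GLES20.glUniform4fv(testLoc, 5, testBuffer, 0); //no GL_INVALID_OPERATION now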