New to OpenGL and GLSL.
I am using OpenGL ES 3.0 and my GLSL version is #version 300 es.
I want to read the pixel (ARGB data) at every position in my vertex shader (vertex texture fetch). I have verified that my Android tablet supports vertex texture fetch.
Now I pass the texture (image) and texture coordinates to the vertex shader
and execute
GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 4);
Is this the right way, or should I use GL_POINTS?
If I use GL_POINTS, how do I pass the texture coordinates?
Could you provide any sample/example code that does a full pixel read (ARGB) in the vertex shader?
Attaching my vertex shader:
uniform sampler2D sTexture;
in vec4 aTextureCoord;
out vec3 colorFactor;
vec2 vTextureCoord;
vec4 tex;
void main()
{
    vTextureCoord = aTextureCoord.xy;
    tex = texture(sTexture, vTextureCoord);
    float luminance = 0.299 * tex.r + 0.587 * tex.g + 0.114 * tex.b;
    colorFactor = vec3(1.0, 1.0, 1.0);
    gl_Position = vec4(-1.0 + (luminance * 0.00784313725), 0.0, 0.0, 1.0);
    gl_PointSize = 1.0;
}
My texture coordinates passed are
{0.f, 1.f}
{1.f, 1.f}
{0.f, 0.f}
{1.f, 0.f}
and the shader is triggered by
GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 4);
Just declare a sampler and sample it exactly as per usual. E.g.
#version 150
in vec4 position;
in vec2 texCoordinate;
uniform sampler2D texID;
uniform mat4 modelViewProjection;
void main()
{
    gl_Position = modelViewProjection * (position + texture(texID, texCoordinate));
}
That will sample a 4D vector from the texture bound to the sampler texID at location texCoordinate, using it to perturb position prior to applying the model-view-projection matrix. The type of geometry you're drawing makes no difference.
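On the Java side you still bind the texture and point the sampler uniform at a texture unit in the usual way; nothing changes just because the sampling happens in the vertex shader. A minimal sketch (program and textureId are assumed to already exist; the uniform name matches your sTexture):
int sTextureLoc = GLES30.glGetUniformLocation(program, "sTexture");

GLES30.glActiveTexture(GLES30.GL_TEXTURE0);           // select texture unit 0
GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId);
GLES30.glUniform1i(sTextureLoc, 0);                   // the sampler reads from unit 0

GLES30.glDrawArrays(GLES30.GL_TRIANGLE_STRIP, 0, 4);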
gl_Position = vec4(-1.0 + (luminance * 0.00784313725), 0.0, 0.0, 1.0);
That's clever. It's not going to work, but that's clever.
The thing that is confusing everyone is what you haven't told us: that you're computing the histogram by using blending. You compute the location for each fragment based on the luminance, so you get lots of overlap, and blending just adds everything together, thus producing your histogram.
FYI: It's always best to explain what it is you're actually trying to accomplish in your question, rather than hoping that someone can deduce what you're attempting to do.
That's not going to work because you only have four vertices in this case. You have lots of fragments, but they will be generated based on interpolation from your 4 vertices. And you can't change the position of a fragment from within a fragment shader.
If you want to do what you're trying to do, you still have to render one vertex for every texel you fetch. You still need to use GL_POINTS.
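As a rough sketch of what that means on the Java side (width, height and the buffer/attribute setup are assumptions, not taken from your code): generate one texture coordinate per texel, draw that many GL_POINTS, and let additive blending accumulate the histogram.
// One point per texel: build a texture coordinate for every pixel of the image.
float[] texCoords = new float[width * height * 2];
int i = 0;
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        texCoords[i++] = (x + 0.5f) / width;   // sample at texel centers
        texCoords[i++] = (y + 0.5f) / height;
    }
}
// ... upload texCoords to a buffer bound to the aTextureCoord attribute ...

GLES30.glEnable(GLES30.GL_BLEND);
GLES30.glBlendFunc(GLES30.GL_ONE, GLES30.GL_ONE);     // accumulate overlapping points
GLES30.glDrawArrays(GLES30.GL_POINTS, 0, width * height);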
Related
I've recently started looking into OpenGL ES for Android and am working on a drawing app. I've implemented some basics such as point sprites, path smoothing and an FBO for double buffering. At the moment I am playing around with glBlendFunc; more specifically, when I put two textures close to each other with the same color/alpha values, the alpha gets added, so it appears darker at the intersection of the sprites. This is a problem because the stroke opacity is not preserved if a lot of points are close together, as the color tends toward fully opaque rather than keeping the same opacity. Is there a way to make the textures have the same color at the intersection, i.e. have the same alpha value for the intersecting pixels, but keep the alpha values for the rest of the pixels?
Here's how I've done the relevant parts of the app:
for drawing the list of point sprites I use blending like this:
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
the app uses an FBO with a texture, where it renders each brush stroke first and then this texture is rendered to the main screen. The blending func there is:
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
OpenGL ES 2.0 doesn't support alpha masking;
there is no DEPTH_TEST function used anywhere in the app;
the textures for the point sprites are PNGs with transparent backgrounds;
the app supports texture masking which means one texture is used for the shape and one texture is used for the content;
my fragment shader looks like this:
precision mediump float;
uniform sampler2D uShapeTexture;
uniform sampler2D uFillTexture;
uniform float vFillScale;
varying vec4 vColor;
varying float vShapeRotation;
varying float vFillRotation;
varying vec4 vFillPosition;
vec2 calculateRotation(float rotationValue) {
    float mid = 0.5;
    return vec2(cos(rotationValue) * (gl_PointCoord.x - mid) + sin(rotationValue) * (gl_PointCoord.y - mid) + mid,
                cos(rotationValue) * (gl_PointCoord.y - mid) - sin(rotationValue) * (gl_PointCoord.x - mid) + mid);
}
void main() {
    // Calculations.
    vec2 rotatedShape = calculateRotation(vShapeRotation);
    vec2 rotatedFill = calculateRotation(vFillRotation);
    vec2 scaleVector = vec2(vFillScale, vFillScale);
    vec2 positionVector = vec2(vFillPosition[0], vFillPosition[1]);
    // Obtain colors.
    vec4 colorShape = texture2D(uShapeTexture, rotatedShape);
    vec4 colorFill = texture2D(uFillTexture, (rotatedFill * scaleVector) + positionVector);
    gl_FragColor = colorShape * colorFill * vColor;
}
my vertex shader is this:
attribute vec4 aPosition;
attribute vec4 aColor;
attribute vec4 aJitter;
attribute float aShapeRotation;
attribute float aFillRotation;
attribute vec4 aFillPosition;
attribute float aPointSize;
varying vec4 vColor;
varying float vShapeRotation;
varying float vFillRotation;
varying vec4 vFillPosition;
uniform mat4 uMVPMatrix;
void main() {
    // Set position and size.
    gl_Position = uMVPMatrix * (aPosition + aJitter);
    gl_PointSize = aPointSize;
    // Pass values to fragment shader.
    vColor = aColor;
    vShapeRotation = aShapeRotation;
    vFillRotation = aFillRotation;
    vFillPosition = aFillPosition;
}
I've tried playing around with the glBlendFunc parameters but I can't find the right combination to draw what I want. I've attached some images showing what I would like to achieve and what I have at the moment. Any suggestions?
The Solution
Finally managed to get this working properly with a few lines thanks to @Rabbid76. First of all I had to configure my depth test function before drawing to the FBO:
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LESS);
// Drawing code for FBO.
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
Then in my fragment shader I had to make sure that any pixels with alpha < 1 in the mask are discarded like this:
...
vec4 colorMask = texture2D(uMaskTexture, gl_PointCoord);
if (colorMask.a < 1.0)
    discard;
else
    gl_FragColor = calculatedColor;
And the result is (flickering is due to Android emulator and gif capture tool):
If you set the glBlendFunc with the functions (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and you use glBlendEquation with the equation GL_FUNC_ADD, then the destination color is calculated as follows:
C_dest = C_src * A_src + C_dest * (1 - A_src)
If you blend for example C_dest = 1 with C_src = 0.5 and A_src = 0.5 then:
C_dest = 0.75 = 1 * 0.5 + 0.5 * 0.5
If you repeat blending the same color C_src = 0.5 and A_src = 0.5 then the destination color becomes darker:
C_dest = 0.625 = 0.75 * 0.5 + 0.5 * 0.5
Since the new destination color is always a function of the original destination color and the source color, the color cannot remain equal when blending twice, because the destination color has already changed after the first blend (except with GL_ZERO).
You have to ensure that no fragment is blended twice. If all fragments are drawn at the same depth (2D), then you can use the depth test for this:
glEnable( GL_DEPTH_TEST );
glDepthFunc( GL_LESS );
// do the drawing with the color
glDisable( GL_DEPTH_TEST );
Or the stencil test can be used. For example, the stencil test can be set to pass only when the stencil buffer is equal to 0.
Every time a fragment is to be written the stencil buffer is incremented:
glClear( GL_STENCIL_BUFFER_BIT );
glEnable( GL_STENCIL_TEST );
glStencilOp( GL_KEEP, GL_KEEP, GL_INCR );
glStencilFunc( GL_EQUAL, 0, 255 );
// do the drawing with the color
glDisable( GL_STENCIL_TEST );
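For reference, the same stencil idea expressed with the Android GLES20 bindings used elsewhere in this question might look roughly like this (a sketch; it assumes the surface or FBO was created with a stencil buffer):
GLES20.glClear(GLES20.GL_STENCIL_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_STENCIL_TEST);
GLES20.glStencilOp(GLES20.GL_KEEP, GLES20.GL_KEEP, GLES20.GL_INCR);
GLES20.glStencilFunc(GLES20.GL_EQUAL, 0, 0xFF);   // pass only where nothing has been drawn yet

// ... draw the stroke ...

GLES20.glDisable(GLES20.GL_STENCIL_TEST);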
Extension to the answer
Note that you can discard fragments which should not be drawn.
If the fragment in your sprite texture has an alpha channel of 0 you should discard it.
Note that if you discard a fragment, neither the color buffer nor the depth and stencil buffers will be written.
Fragment shaders also have access to the discard command. When executed, this command causes the fragment's output values to be discarded. Thus, the fragment does not proceed on to the next pipeline stages, and any fragment shader outputs are lost.
Fragment shader
if ( color.a < 1.0/255.0 )
    discard;
It's not possible to do this using the fixed-function blending in OpenGL ES 2.0, because what you want isn't actually alpha blending. What you want is a logical operation (e.g. max(src, dst)) which is rather different to how OpenGL ES blending works.
If you want to do path / stroke / fill rendering with pixel-exact edges you might get somewhere with using stencil masks and stencil tests, but you can't do transparency in this case - just boolean operators.
I would like to be able to pass more per-vertex-data to my own custom shaders in kivy than the usual vertex coords + texture coords. Specifically, I would like to pass a value that says which animation frame should be used in selecting the texture coords.
I found an example (http://shadowmint.blogspot.com/2013/10/kivy-textured-quad-easy-right-no.html), and succeeded in changing the format of the vertices passed to a mesh using an argument to the constructor of the Mesh, like this:
Mesh(mode = 'triangles', fmt=[('v_pos', 2, 'float'),
                              ('v_tex0', 2, 'float'),
                              ('v_frame_i', 1, 'float')])
I can then set the vertices to be drawn to something like this:
vertices = [x-r,y-r, uvpos[0],uvpos[1],animationFrame,
x-r,y+r, uvpos[0],uvpos[1]+uvsize[1],animationFrame,
x+r,y-r, uvpos[0]+uvsize[0],uvpos[1],animationFrame,
x+r,y+r, uvpos[0]+uvsize[0],uvpos[1]+uvsize[1],animationFrame,
x+r,y-r, uvpos[0]+uvsize[0],uvpos[1],animationFrame,
x-r,y+r, uvpos[0],uvpos[1]+uvsize[1],animationFrame,
]
This works well when I run on Ubuntu, but when I run on my Android device the texture either doesn't draw, or it looks like the vertex or texture coordinate data is corrupt / misaligned.
Here is my shader code in case that is relevant. Again, this all behaves as I want it to when I run on Ubuntu, but not when I run on the Android device.
---VERTEX SHADER---
#ifdef GL_ES
precision highp float;
#endif
/* vertex attributes */
attribute vec2 v_pos;
attribute vec2 v_tex0;
attribute float v_frame_i; // for animation
/* uniform variables */
uniform mat4 modelview_mat;
uniform mat4 projection_mat;
uniform vec4 color;
uniform float opacity;
uniform float sqrtNumFrames; // the width/height of the sprite-sheet
uniform float frameWidth;
/* Outputs to the fragment shader */
varying vec4 frag_color;
varying vec2 tc;
void main() {
    frag_color = color * vec4(1.0, 1.0, 1.0, opacity);
    gl_Position = projection_mat * modelview_mat * vec4(v_pos.xy, 0.0, 1.0);
    float f = round(v_frame_i);
    tc = v_tex0;
    float w = (1.0/sqrtNumFrames);
    tc *= w;
    tc.x += w*mod(f,sqrtNumFrames);     //////////// I think that the problem might
    tc.y += w*round(f / sqrtNumFrames); ///////////// be related to this code, here?
}
---FRAGMENT SHADER---
#ifdef GL_ES
precision highp float;
#endif
/* Outputs from the vertex shader */
varying vec4 frag_color;
varying vec2 tc;
/* uniform texture samplers */
uniform sampler2D texture0;
uniform vec2 player_pos;
uniform vec2 window_size; // in pixels
void main (void){
    gl_FragColor = frag_color * texture2D(texture0, tc);
}
I wonder if it may have to do with the version of GLSL and int / float math (in particular in identifying which image from the sprite sheet to draw; see the comments in the GLSL code). Is one version running on my desktop and another on the device?
Any suggestions for things to experiment with would be much appreciated!
After looking at the log from the running version on the Android device (a Moto X phone), I saw that the custom shader was not linking. This appeared to be due to the use of the function round(x), which I replaced with floor(x + 0.5) in both places, and the shader now works properly on both the phone and my desktop.
I think the problem is that the versions of GLSL supported on the phone and on my PC are different, but I am not 100% certain about this. (round() is not part of GLSL ES 1.00, so whether it compiles depends on the driver.)
I'm attempting to create an alpha radial gradient effect (kind of lighting) using a simple shader.
The effect is created correctly, but the gradient is not smooth.
The precision is set to highp, so I don't really know where to look.
This shader is currently running on Android, using OpenGL ES 2.0.
This is how the gradient currently looks:
And this is my current shader:
Vertex:
precision highp float;
attribute vec4 vPosition;
attribute vec2 vStaticInterpolation;
varying vec2 interpolator;
void main() {
    interpolator = vStaticInterpolation;
    gl_Position = vPosition;
}
Fragment:
precision highp float;
uniform float alphaFactor;
varying vec2 interpolator;
float MAX_ALPHA = 0.75;
void main() {
    float x = distance(interpolator, vec2(0.0, 0.0));
    float alpha = MAX_ALPHA - MAX_ALPHA * x;
    alpha = max(alpha, 0.0);
    gl_FragColor = vec4(0.925, 0.921, 0.843, alpha);
    gl_FragColor.a *= alphaFactor;
}
The shader receives constant attributes for interpolation (from -1.0 to 1.0) in vStaticInterpolation.
The actual color is currently hard-coded in the shader.
It looks to be related to a dithering problem.
This could depend on the OpenGL driver implementation of your mobile device (though I don't know which model you are currently using). In the past it used to be an issue.
Possible tests you could perform are:
Disable OpenGL dithering:
GLES20.glDisable(GLES20.GL_DITHER);
Force an RGB888 surface when you create the surface. This is usually done in the ConfigChooser function. If I remember correctly, this is part of the code of my application:
new AndroidGL.ConfigChooser(8, 8, 8, 8, depth, stencil)
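With a plain GLSurfaceView the equivalent is to ask for 8 bits per channel before setting the renderer; a sketch (the depth/stencil sizes are placeholders, adjust to your needs):
// Request an RGBA8888 surface instead of the default (often RGB565) and disable dithering.
glSurfaceView.setEGLContextClientVersion(2);
glSurfaceView.setEGLConfigChooser(8, 8, 8, 8, 16, 0);  // red, green, blue, alpha, depth, stencil
glSurfaceView.setRenderer(renderer);

// Later, inside the renderer once the GL context is current:
GLES20.glDisable(GLES20.GL_DITHER);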
Currently, the application displays an ImageView that successfully zooms and pans around on the screen. On top of that image, I would like to overlay a texture that updates itself to zoom or pan when the image below it zooms or pans.
As I understand it, this should be possible by using the getImageMatrix() of the ImageView in my current setup and then applying that matrix to the textured bitmap on top of the original image.
Edit & Resolution: (with strong aid from the selected answer below)
Currently, the panning in the texture occurs at a different speed than that of the ImageView, but when that has been resolved I will update this posting with additional edits and provide the solution for the entire application. Then only the solution and a quick problem description paragraph will remain. Perhaps I'll even post SurfaceView and renderer source code for using a zoomable SurfaceView.
In order to accomplish a mapping of an ImageView to an OpenGL texture, there were a couple of things that needed to be done in order to accomplish this correctly. And hopefully, this will aide other people who might want to use a zoomable SurfaceView in the future.
The shader code is written below. The fragment shader applies a 3x3 translation matrix, taken from the getImageMatrix() of any ImageView, in order to update the screen from within the onDraw() method.
private final String vertexShader_ =
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position;\n" +
" v_texCoord = a_texCoord.xy;\n" +
"}\n";
private final String fragmentShader_ =
"precision mediump float;\n" +
"varying vec2 v_texCoord;\n" +
"uniform sampler2D texture;\n" +
"uniform mat3 transform;\n" +
"void main() {\n" +
" vec2 uv = (transform * vec3(v_texCoord, 1.0)).xy;\n" +
" gl_FragColor = texture2D( texture, uv );\n" +
"}\n";
At the start, the translation matrix for OpenGL is simply an identity matrix, and our ImageView will update these shared values with the shader through a listener.
private float[] translationMatrix = {1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f};
...
uniforms_[TRANSLATION_UNIFORM] = glGetUniformLocation(program_, "transform");
checkGlError("glGetUniformLocation transform");
if (uniforms_[TRANSLATION_UNIFORM] == -1) {
throw new RuntimeException("Could not get uniform location for transform");
}
....
glUniformMatrix3fv(uniforms_[TRANSLATION_UNIFORM], 1, false, FloatBuffer.wrap(translationMatrix));
....
However, the OpenGL code needs the inverse of the matrix that comes from Android, as well as a transpose operation performed on it, before it is handed off to the renderer for display on the screen. This has to do with the way the matrix values are stored (row-major versus column-major).
protected void onMatrixChanged()
{
//...
float[] matrixValues = new float[9];
Matrix imageViewMatrix = getImageViewMatrix();
Matrix invertedMatrix = new Matrix();
imageViewMatrix.invert(invertedMatrix);
invertedMatrix.getValues(matrixValues);
transpose(matrixValues);
matrixChangedListener.onTranslation(matrixValues);
//...
}
In addition to the fact that the values are in the wrong locations for input into the OpenGL renderer, we also have the problem that our translations are on the scale of the image width and height instead of the normalized [0, 1] range that OpenGL expects. So, to correct this, we divide the translation entries (the last column) by the image width and height.
matrixValues[6] = matrixValues[6] / getImageWidth();
matrixValues[7] = matrixValues[7] / getImageHeight();
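For completeness, the transpose(matrixValues) call used above is just a swap of the off-diagonal entries of the 3x3 array; a possible sketch (after the swap, the translation ends up in indices 6 and 7, which is why those are the values divided by the image size):
// Transpose a 3x3 matrix stored as a flat float[9] array in place,
// converting Android's row-major values to the column-major order
// expected by glUniformMatrix3fv.
private static void transpose(float[] m) {
    float t;
    t = m[1]; m[1] = m[3]; m[3] = t;
    t = m[2]; m[2] = m[6]; m[6] = t;
    t = m[5]; m[5] = m[7]; m[7] = t;
}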
Then, after you have accomplished this, you can start mapping with the ImageView matrix functions or the updated ImageTouchView matrix functions (which have some additional nice features when handling images on Android).
The effect can be seen below with the ImageView behind it taking up the whole screen and the texture in front of it being updated based upon updates from the ImageView. This handles zooming and panning translations between the two.
The only other step in the process would be to change the size of the SurfaceView to match that of the ImageView in order to get the full-screen experience when the user is zooming in. In order to show only the SurfaceView, you can simply put an 'invisible' ImageView behind it in the background and just let the ImageTouchView updates change the way your OpenGL SurfaceView is shown.
Vote up on the answer below, as they helped me offline through chat significantly to get to this point in understanding what needs to be done for linking an imageview with a texture. They deserve it.
If you do end up going with the vertex shader modification below, instead of the fragment shader solution, then it would seem that the values no longer need to be inverted for the pinch zoom to work (and in fact work backwards if you leave it as is). Additionally, panning seems to work backwards from what is expected as well.
You could use the fragment shader for that: just set this matrix as a uniform parameter of the fragment shader and modify the texture coordinates using this matrix.
precision mediump float;
uniform sampler2D samp;
uniform mat3 transform;
varying vec2 texCoord;
void main()
{
    vec2 uv = (transform * vec3(texCoord, 1.0)).xy;
    gl_FragColor = texture2D(samp, uv);
}
Note that the matrix you get from the ImageView is in row-major order; you should transpose it to get OpenGL's column-major order. Also, you have to divide the x and y translation components by the width and height respectively.
Update: It just occurred to me that modifying the vertex shader would be better. You can just scale your quad with the same matrix you used in the fragment shader; it will be clipped to your screen if too big, it will not have a problem with edge clamping when small, and the matrix multiplication will happen only 4 times (once per vertex) instead of once for each pixel. Just do this in the vertex shader:
gl_Position = vec4((transform * vec3(a_position.xy, 1.0)).xy, 0.0, 1.0);
and use the untransformed texCoord in the fragment shader.
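A sketch of what that vertex-shader variant could look like, written in the same Java-string style as the shaders above (it reuses the earlier attribute/uniform names, which are otherwise assumptions):
private final String transformedVertexShader_ =
        "attribute vec4 a_position;\n" +
        "attribute vec4 a_texCoord;\n" +
        "uniform mat3 transform;\n" +      // the same ImageView-derived matrix
        "varying vec2 v_texCoord;\n" +
        "void main() {\n" +
        // Transform the quad itself instead of the texture coordinates.
        "  gl_Position = vec4((transform * vec3(a_position.xy, 1.0)).xy, 0.0, 1.0);\n" +
        "  v_texCoord = a_texCoord.xy;\n" +
        "}\n";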
I want to use shading on my OpenGL objects but can't seem to access GLSL functions in my OpenGL package. Is there a GLSL package available for OpenGL ES in Eclipse?
EDIT: As Tim pointed out, shaders are written as text files and then loaded using glShaderSource. I have a C++ shader file which I wrote once for a ray tracing application, but I am really confused as to how I would go about implementing it in Java. Suppose I have a 2D square drawn in my Renderer class using a GL object MySquare; how would I go about implementing the Java equivalent of the shader files below?
Shader.vert
varying vec3 N;
varying vec3 v;
void main()
{
    // Need to transform the normal into eye space.
    N = normalize(gl_NormalMatrix * gl_Normal);
    // Always have to transform vertex positions so they end
    // up in the right place on the screen.
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Shader.frag
// Fragment shader for per-pixel Phong interpolation and shading.
// The "varying" keyword means that the parameter's value is interpolated
// between the nearby vertices.
varying vec3 N;
varying vec3 v;
//Used for Environmental mapping shader calculations
const vec3 xUnitVec=vec3(1.0, 0.0, 0.0), yUnitVec=vec3(1.0, 1.0, 0.0);
uniform vec3 BaseColor, MixRatio;
uniform sampler2D EnvMap;
void main()
{
    // The scene's ambient light.
    vec4 ambient = gl_LightModel.ambient * gl_FrontMaterial.ambient;
    // The normal vector is generally not normalized after being
    // interpolated across a triangle. Here we normalize it.
    vec3 Normal = normalize(N);
    // Since the vertex is in eye space, the direction to the
    // viewer is simply the normalized vector from v to the
    // origin.
    vec3 Viewer = -normalize(v);
    // Get the lighting direction and normalize it.
    vec3 Light = normalize(gl_LightSource[0].position.xyz);
    // Compute the halfway vector.
    vec3 Half = normalize(Viewer + Light);
    // Compute a factor to prevent light leakage from below the
    // surface.
    float B = 1.0;
    if (dot(Normal, Light) < 0.0) B = 0.0;
    // Compute geometric terms of diffuse and specular.
    float diffuseShade = max(dot(Normal, Light), 0.0);
    float specularShade =
        B * pow(max(dot(Half, Normal), 0.0), gl_FrontMaterial.shininess);
    // Compute the product of the geometric terms with the material and
    // lighting values.
    vec4 diffuse = diffuseShade * gl_FrontLightProduct[0].diffuse;
    vec4 specular = specularShade * gl_FrontLightProduct[0].specular;
    ambient += gl_FrontLightProduct[0].ambient;
    // Assign the final color.
    gl_FragColor = ambient + diffuse + specular + gl_FrontMaterial.emission;
}
Check out the tutorials over at learnopengles.com. They'll answer all the questions you have.
There's no 'Java equivalent' of a shader file. The shader is written in GLSL; it will be the same whether your OpenGL is wrapped in Java, or C++, or Python, or whatever. Aside from small API differences between OpenGL and OpenGL ES, you can upload the same shader source in Java as you used in C++ (note, though, that desktop-only built-ins such as gl_ModelViewProjectionMatrix and gl_LightSource do not exist in GLSL ES, so those parts would need to be replaced with your own uniforms and attributes).
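In practice, 'uploading' the shader from Java just means compiling the GLSL source strings at runtime; a minimal sketch using GLES20 (error checking trimmed; in real code query GL_COMPILE_STATUS and GL_LINK_STATUS):
public static int buildProgram(String vertexSource, String fragmentSource) {
    // Compile the vertex shader.
    int vs = GLES20.glCreateShader(GLES20.GL_VERTEX_SHADER);
    GLES20.glShaderSource(vs, vertexSource);
    GLES20.glCompileShader(vs);

    // Compile the fragment shader.
    int fs = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
    GLES20.glShaderSource(fs, fragmentSource);
    GLES20.glCompileShader(fs);

    // Link both into a program object, which is what you bind with glUseProgram.
    int program = GLES20.glCreateProgram();
    GLES20.glAttachShader(program, vs);
    GLES20.glAttachShader(program, fs);
    GLES20.glLinkProgram(program);
    return program;
}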
You can use this source:
https://github.com/markusfisch/ShaderEditor
The ShaderView class is the key!
GLSL files are in raw folder.
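If you keep your GLSL sources under res/raw like that project does, reading one into a String for glShaderSource can be done roughly like this (a sketch; R.raw.vertex_shader is a placeholder resource name, and the usual android.content.Context / java.io imports are assumed):
// Read a GLSL source file stored under res/raw into a String.
public static String loadRawText(Context context, int rawResId) throws IOException {
    InputStream in = context.getResources().openRawResource(rawResId);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buffer = new byte[4096];
    int n;
    while ((n = in.read(buffer)) != -1) {
        out.write(buffer, 0, n);
    }
    in.close();
    return out.toString("UTF-8");
}

// Usage: String vertexSrc = loadRawText(context, R.raw.vertex_shader);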