GLSL - fragment shaders for image manipulation/blending - Android

I'm trying to build a shader that lets you combine two images by applying a gradient opacity mask to the one on top, like you do in Photoshop. I've gotten to the point where I can overlay the masked image over the other, but as a newbie I'm confused about a few things.
It seems that the images inside the shader are sometimes skewed to fit the canvas size, and they always start at position 0,0. I have played around with a few snippets I have found to try to scale the textures, but always end up with unsatisfactory results.
I am curious if there is a standard way to size, skew, and translate textures within a view, or if images in GLSL are necessarily limited in some way that will stop me from accomplishing my goal.
I'm also unsure whether the way I am applying the gradient/mask is the right way to do it, because I do not have much control over the shape of the gradient at the moment.
Here's what I have so far:
precision highp float;
varying vec2 uv;
uniform sampler2D originalImage;
uniform sampler2D image;
uniform vec2 resolution;
void main(){
    float mask;
    vec4 result;
    vec2 position = gl_FragCoord.xy / ((resolution.x + resolution.y) * 2.0);
    mask = vec4(1.0, 1.0, 1.0, 1.0 - position.x).a;
    vec4 B = texture2D(image, uv);
    vec4 A = texture2D(originalImage, uv) * mask;
    result = mix(B, A, A.a);
    gl_FragColor = result;
}
Which produces an image like this:
What I would like to be able to do is change the positions of the images independently and also make sure that they conform to their proper dimensions.
I have tried naively shifting positions like this:
vec2 pos = uv;
pos.y = pos.y + 0.25;
texture2D(image, pos)
Which does shift the texture, but leads to a bunch of strange lines dragging:
I tried to get rid of them like this:
gl_FragColor = uv.y < 0.25 ? vec4(0.0,0.0,0.0,0.0) : result;
but it does nothing

You really need to decide what you want to happen when the images are not the same size. What you probably want is for it to appear as if there's no image, so check your UV coordinates and use 0,0,0,0 when outside the image:
//vec4 B = texture2D(image, uv);
vec4 getImage(sampler2D img, vec2 uv) {
    if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0) {
        return vec4(0);
    }
    return texture2D(img, uv);
}
vec4 B = getImage(image, uv);
As for a standard way to size/skew/translate images, use a matrix:
uniform mat4 u_imageMatrix;
...
vec2 newUv = (u_imageMatrix * vec4(uv, 0.0, 1.0)).xy;
An example of implementing canvas 2d's drawImage using a texture matrix.
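For example, here is a sketch combining getImage with one matrix per image, so each can be positioned and scaled independently (u_imageMatrixA and u_imageMatrixB are made-up uniform names you would compute on the CPU from each image's desired position/scale):
precision highp float;
varying vec2 uv;
uniform sampler2D originalImage;
uniform sampler2D image;
uniform mat4 u_imageMatrixA; // assumed: transform for the masked image
uniform mat4 u_imageMatrixB; // assumed: transform for the background image

vec4 getImage(sampler2D img, vec2 uv) {
    if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0) {
        return vec4(0.0);
    }
    return texture2D(img, uv);
}

void main() {
    // each image gets its own UV transform, so they move independently
    vec2 uvA = (u_imageMatrixA * vec4(uv, 0.0, 1.0)).xy;
    vec2 uvB = (u_imageMatrixB * vec4(uv, 0.0, 1.0)).xy;
    float mask = 1.0 - uv.x; // simple horizontal ramp, as in the question
    vec4 B = getImage(image, uvB);
    vec4 A = getImage(originalImage, uvA) * mask;
    gl_FragColor = mix(B, A, A.a);
}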
In general though, I don't think most image manipulation programs/libraries would try to do everything in the shader. Rather, they'd build up the image from very simple primitives. My best guess is they'd use a shader that's just A * MASK: draw B first, then draw A * MASK with blending on.
To put it another way, if you have 30 layers in Photoshop, they wouldn't generate a single giant shader that computes the final image taking in all 30 layers at once. Instead, each layer would be applied on its own with simpler shaders.
I'd also expect them to create a texture for the mask instead of using math in the shader. That way the mask can be arbitrarily complex, not just a 2-stop ramp.
Note I'm not saying you're doing it wrong. You're free to do whatever you want. I'm only saying I suspect that if you want to build a generic image manipulation library, you'll have more success with small building blocks you combine rather than trying to do more complex things in shaders.
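A sketch of what such a per-layer building block might look like (layerTexture and maskTexture are made-up names; B would already be in the framebuffer, with blending enabled):
precision mediump float;
varying vec2 uv;
uniform sampler2D layerTexture; // assumed: the layer being composited (A)
uniform sampler2D maskTexture;  // assumed: an arbitrary grayscale mask

void main() {
    // output just A * MASK; compositing with what's already in the
    // framebuffer (B) is left to glBlendFunc
    vec4 color = texture2D(layerTexture, uv);
    float mask = texture2D(maskTexture, uv).r;
    gl_FragColor = color * mask;
}
Drawing B first and then this layer with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) should reproduce the mix(B, A, A.a) result of the combined shader above.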
ps: I think getImage can be simplified to
vec4 getImage(sampler2D img, vec2 uv) {
    // step() works per component: the product is non-zero only when 0.0 <= uv <= 1.0
    vec2 inside = step(vec2(0.0), uv) * step(uv, vec2(1.0));
    return texture2D(img, uv) * inside.x * inside.y;
}
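With either version, the naive shift from the question stops leaving dragged lines, because sampling outside [0, 1] now returns transparent black instead of the clamped edge row:
vec2 pos = uv;
pos.y = pos.y + 0.25;
vec4 B = getImage(image, pos); // transparent where the shifted image has no data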

Related

Render FBO to same FBO

I am trying to create a ghost-like camera filter. This requires mixing the previous frame into the current one. I use one FBO to do the mixing and a second one to simply put the content on the screen.
My implementation works on 4 out of the 5 devices I have tried. On the fifth (a Samsung Galaxy S7) I get some random pixels.
The simplest shader that reproduces the error is the following (the frame counter and cropping are just for debugging). The expected result is a line in the center of the screen gradually moving up.
uniform samplerExternalOES camTexture;
uniform sampler2D fbo;
uniform int frame_no;
varying vec2 v_CamTexCoordinate;
void main ()
{
    vec2 uv = v_CamTexCoordinate;
    if (frame_no < 10) {
        gl_FragColor = texture2D(camTexture, uv);
    } else {
        if (uv.y > 0.2 && uv.y < 0.8 && uv.x > 0.2 && uv.x < 0.8)
            gl_FragColor = texture2D(fbo, uv + vec2(0.0, 0.005));
        else
            gl_FragColor = texture2D(camTexture, uv);
    }
}
But on the Samsung I get some correct pixels and some random ones, as in the following sample: some black and other random pixels moving up together with the camera's pixels. Any idea what might be wrong?
Fault sample
Correct sample

Point sprite alpha blending issue (Android / OpenGL ES 2.0)

I've recently started looking into OpenGL ES for Android and am working on a drawing app. I've implemented some basics such as point sprites, path smoothing and an FBO for double buffering. At the moment I am playing around with glBlendFunc; more specifically, when I put two textures close to each other with the same color/alpha values, the alphas get added, so the intersection of the sprites appears darker. This is a problem because the stroke opacity is not preserved when a lot of points are close together; the color becomes more opaque rather than keeping the same opacity. Is there a way to give the textures the same color at the intersection, i.e. the same alpha value for the intersecting pixels, while keeping the alpha values of the remaining pixels?
Here's how I've done the relevant parts of the app:
for drawing the list of point sprites I use blending like this:
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_ONE, GLES20.GL_ONE_MINUS_SRC_ALPHA);
the app uses an FBO with a texture, where it renders each brush stroke first and then this texture is rendered to the main screen. The blending func there is:
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
OpenGL ES 2.0 doesn't support alpha masking;
there is no DEPTH_TEST function used anywhere in the app;
the textures for the point sprites are PNGs with transparent backgrounds;
the app supports texture masking which means one texture is used for the shape and one texture is used for the content;
my fragment shader looks like this:
precision mediump float;
uniform sampler2D uShapeTexture;
uniform sampler2D uFillTexture;
uniform float vFillScale;
varying vec4 vColor;
varying float vShapeRotation;
varying float vFillRotation;
varying vec4 vFillPosition;
vec2 calculateRotation(float rotationValue) {
    float mid = 0.5;
    return vec2(cos(rotationValue) * (gl_PointCoord.x - mid) + sin(rotationValue) * (gl_PointCoord.y - mid) + mid,
                cos(rotationValue) * (gl_PointCoord.y - mid) - sin(rotationValue) * (gl_PointCoord.x - mid) + mid);
}
void main() {
    // Calculations.
    vec2 rotatedShape = calculateRotation(vShapeRotation);
    vec2 rotatedFill = calculateRotation(vFillRotation);
    vec2 scaleVector = vec2(vFillScale, vFillScale);
    vec2 positionVector = vec2(vFillPosition[0], vFillPosition[1]);
    // Obtain colors.
    vec4 colorShape = texture2D(uShapeTexture, rotatedShape);
    vec4 colorFill = texture2D(uFillTexture, (rotatedFill * scaleVector) + positionVector);
    gl_FragColor = colorShape * colorFill * vColor;
}
my vertex shader is this:
attribute vec4 aPosition;
attribute vec4 aColor;
attribute vec4 aJitter;
attribute float aShapeRotation;
attribute float aFillRotation;
attribute vec4 aFillPosition;
attribute float aPointSize;
varying vec4 vColor;
varying float vShapeRotation;
varying float vFillRotation;
varying vec4 vFillPosition;
uniform mat4 uMVPMatrix;
void main() {
    // Set position and size.
    gl_Position = uMVPMatrix * (aPosition + aJitter);
    gl_PointSize = aPointSize;
    // Pass values to fragment shader.
    vColor = aColor;
    vShapeRotation = aShapeRotation;
    vFillRotation = aFillRotation;
    vFillPosition = aFillPosition;
}
I've tried playing around with the glBlendFunc parameters but I can't find the right combination to draw what I want. I've attached some images showing what I would like to achieve and what I have at the moment. Any suggestions?
The Solution
Finally managed to get this working properly with a few lines, thanks to @Rabbid76. First of all I had to configure my depth test function before drawing to the FBO:
GLES20.glEnable(GLES20.GL_DEPTH_TEST);
GLES20.glDepthFunc(GLES20.GL_LESS);
// Drawing code for FBO.
GLES20.glDisable(GLES20.GL_DEPTH_TEST);
Then in my fragment shader I had to make sure that any pixels with alpha < 1 in the mask are discarded like this:
...
vec4 colorMask = texture2D(uMaskTexture, gl_PointCoord);
if (colorMask.a < 1.0)
    discard;
else
    gl_FragColor = calculatedColor;
And the result is (flickering is due to Android emulator and gif capture tool):
If you set the glBlendFunc with the functions (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) and you use glBlendEquation with the equation GL_FUNC_ADD, then the destination color is calculated as follows:
C_dest = C_src * A_src + C_dest * (1-A_src)
If you blend for example C_dest = 1 with C_src = 0.5 and A_src = 0.5 then:
C_dest = 0.75 = 1 * 0.5 + 0.5 * 0.5
If you repeat blending the same color C_src = 0.5 and A_src = 0.5 then the destination color becomes darker:
C_dest = 0.625 = 0.75 * 0.5 + 0.5 * 0.5
Since the new destination color is always a function of the previous destination color and the source color, the color cannot remain equal when blending twice: the destination color has already changed after the first blend (except with GL_ZERO).
You have to avoid any fragment being blended twice. If all fragments are drawn at the same depth (2D), then you can use the depth test for this:
glEnable( GL_DEPTH_TEST );
glDepthFunc( GL_LESS );
// do the drawing with the color
glDisable( GL_DEPTH_TEST );
Or the stencil test can be used. For example, the stencil test can be set to pass only when the stencil buffer is equal to 0.
Every time a fragment is to be written the stencil buffer is incremented:
glClear( GL_STENCIL_BUFFER_BIT );
glEnable( GL_STENCIL_TEST );
glStencilOp( GL_KEEP, GL_KEEP, GL_INCR );
glStencilFunc( GL_EQUAL, 0, 255 );
// do the drawing with the color
glDisable( GL_STENCIL_TEST );
Extension to the answer
Note that you can discard fragments which should not be drawn.
If the fragment in your sprite texture has an alpha channel of 0 you should discard it.
Note, if you discard a fragment neither the color nor the depth and stencil buffer will be written.
Fragment shaders also have access to the discard command. When executed, this command causes the fragment's output values to be discarded. Thus, the fragment does not proceed on to the next pipeline stages, and any fragment shader outputs are lost.
Fragment shader
if (color.a < 1.0 / 255.0)
    discard;
It's not possible to do this using the fixed-function blending in OpenGL ES 2.0, because what you want isn't actually alpha blending. What you want is a logical operation (e.g. max(src, dst)) which is rather different to how OpenGL ES blending works.
If you want to do path / stroke / fill rendering with pixel-exact edges you might get somewhere with using stencil masks and stencil tests, but you can't do transparency in this case - just boolean operators.

Plasma Shader Performance in OpenGL ES 2.0

I am using a plasma shader in my Android (libGDX) app, which I found from here:
http://www.bidouille.org/prog/plasma
Here is my shader (slightly modified):
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
#define PI 3.1415926535897932384626433832795
uniform float time;
uniform float alpha;
uniform vec2 scale;
void main() {
    float v = 0.0;
    vec2 c = gl_FragCoord.xy * scale - scale / 2.0;
    v += sin(c.x + time);
    v += sin((c.y + time) / 2.0);
    v += sin((c.x + c.y + time) / 2.0);
    c += scale / 2.0 * vec2(sin(time / 3.0), cos(time / 2.0));
    v += sin(sqrt(c.x * c.x + c.y * c.y + 1.0) + time);
    v = v / 2.0;
    vec3 col = vec3(1.0, sin(PI * v), cos(PI * v));
    gl_FragColor = vec4(col * 0.5 + 0.5, alpha);
}
I am rendering it via an ImmediateModeRendererGL20, as a quad.
However, it simply appears to be way too slow for my needs. I am trying to fill almost the whole screen on my Nexus 7 (first gen) with the shader, and I cannot get even close to 60 FPS.
This is really my first real trip onto the GLSL world, and I have no idea how these things usually should perform!
I am wondering how one could optimize this shader? I really don't care about image quality; I can sacrifice it. I came to the conclusion that some kind of lookup table and/or dropping the resolution of the shader might be what I am after, but I am not quite sure where to begin. I am still very new to GLSL, and "low-level" programming has never really been my cup of tea, but I am eager to at least try!
The ultimate way to speed this up is to pre-calculate the computation-heavy stuff (the sin() and cos() calls) and bake it into texture(s), then read the ready-made values from them. This will make it super fast because your shader won't do any heavy computations, just consume pre-calculated values.
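For illustration, here is a sketch of the lookup-table idea applied to the shader above. It assumes you bake one period of sin() into a small REPEAT-wrapped luminance texture (sinLUT is a made-up uniform name), so every sin()/cos() call becomes a single texture fetch:
precision mediump float;
#define PI 3.1415926535897932384626433832795
uniform sampler2D sinLUT; // assumed: one period of sin(), baked offline as
                          // sin(x * 2 * PI) * 0.5 + 0.5 in the red channel
uniform float time;
uniform float alpha;
uniform vec2 scale;

float sinLU(float x) {
    // one cheap texture fetch instead of a sin() call; REPEAT wrapping
    // handles the periodicity (the LUT must be power-of-two in ES 2.0)
    return texture2D(sinLUT, vec2(x / (2.0 * PI), 0.5)).r * 2.0 - 1.0;
}

void main() {
    float v = 0.0;
    vec2 c = gl_FragCoord.xy * scale - scale / 2.0;
    v += sinLU(c.x + time);
    v += sinLU((c.y + time) / 2.0);
    v += sinLU((c.x + c.y + time) / 2.0);
    c += scale / 2.0 * vec2(sinLU(time / 3.0), sinLU(time / 2.0 + PI * 0.5)); // cos(x) == sin(x + PI/2)
    v += sinLU(sqrt(c.x * c.x + c.y * c.y + 1.0) + time);
    v = v / 2.0;
    vec3 col = vec3(1.0, sinLU(PI * v), sinLU(PI * v + PI * 0.5));
    gl_FragColor = vec4(col * 0.5 + 0.5, alpha);
}
Whether this actually beats the hardware sin() depends on the GPU; on bandwidth-limited devices the texture fetches can become the bottleneck, so it is worth profiling both versions.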
As was stated in the comments, you can optimize performance by moving certain calculations to the vertex shader. For example, you can move this line
vec2 c = gl_FragCoord.xy * scale - scale/2.0;
to the vertex shader without sacrificing any quality, because it is a linear function, so interpolation won't distort it.
Do it like this:
// vertex
...
uniform vec2 scale;
uniform vec2 resolution; // assumed uniform: the viewport size in pixels
varying mediump vec2 c;
const float TWO = 2.0;
...
void main() {
    ...
    gl_Position = ...;
    // gl_FragCoord is not available in the vertex shader, so reconstruct the
    // equivalent pixel coordinate from the clip-space position
    vec2 pixelCoord = (gl_Position.xy / gl_Position.w * 0.5 + 0.5) * resolution;
    c = pixelCoord * scale - scale / TWO;
    ...
}
// fragment
...
varying mediump vec2 c;
...
void main() {
    ...
    // just use `c` as usual
    ...
}
Also, please use constants instead of literals; literals use up your uniform space. While this may not affect performance, it is still bad (if you use too many of them you can run out of the maximum uniforms on certain GPUs). More on this: Declaring constants instead of literals in vertex shader. Standard practice, or needless rigor?

android.graphics.Matrix to OpenGL ES 2.0 texture translation

Currently, the application displays an ImageView that successfully zooms and pans around on the screen. On top of that image, I would like to provide a texture that I would like to update itself to zoom or pan when the image below it zooms or pans.
As I understand it, this should be possible by using the getImageMatrix() of the ImageView in my current setup and then applying that matrix to the textured bitmap on top of the original image.
Edit & Resolution: (with strong aid from the selected answer below)
Currently, the panning in the texture occurs at a different speed than that of the ImageView; when that has been resolved I will update this posting with additional edits and provide the solution for the entirety of the application. Then only the solution and a quick problem-description paragraph will remain. Perhaps I'll even post SurfaceView and renderer source code for using a zoomable SurfaceView.
In order to map an ImageView to an OpenGL texture correctly, there were a couple of things that needed to be done. Hopefully this will aid other people who want to use a zoomable SurfaceView in the future.
The shader code is written below. The fragment shader takes a 3x3 transformation matrix, obtained from getImageMatrix() of any ImageView, which is updated from within the onDraw() method.
private final String vertexShader_ =
"attribute vec4 a_position;\n" +
"attribute vec4 a_texCoord;\n" +
"varying vec2 v_texCoord;\n" +
"void main() {\n" +
" gl_Position = a_position;\n" +
" v_texCoord = a_texCoord.xy;\n" +
"}\n";
private final String fragmentShader_ =
"precision mediump float;\n" +
"varying vec2 v_texCoord;\n" +
"uniform sampler2D texture;\n" +
"uniform mat3 transform;\n" +
"void main() {\n" +
" vec2 uv = (transform * vec3(v_texCoord, 1.0)).xy;\n" +
" gl_FragColor = texture2D( texture, uv );\n" +
"}\n";
At the start, the translation matrix for OpenGL is simply an identity matrix, and our ImageView will update these shared values with the shader through a listener.
private float[] translationMatrix = {1.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 1.0f};
...
uniforms_[TRANSLATION_UNIFORM] = glGetUniformLocation(program_, "transform");
checkGlError("glGetUniformLocation transform");
if (uniforms_[TRANSLATION_UNIFORM] == -1) {
    throw new RuntimeException("Could not get uniform location for transform");
}
....
glUniformMatrix3fv(uniforms_[TRANSLATION_UNIFORM], 1, false, FloatBuffer.wrap(translationMatrix));
....
However, the OpenGL code needs the inverse of the matrix that comes from Android, with a transpose operation applied, before it is handed off to the renderer for display on the screen. This has to do with the way the matrix values are stored.
protected void onMatrixChanged()
{
    //...
    float[] matrixValues = new float[9];
    Matrix imageViewMatrix = getImageViewMatrix();
    Matrix invertedMatrix = new Matrix();
    imageViewMatrix.invert(invertedMatrix);
    invertedMatrix.getValues(matrixValues);
    // android.graphics.Matrix stores its values in row-major order, while
    // OpenGL expects column-major, so swap rows and columns before uploading
    transpose(matrixValues);
    matrixChangedListener.onTranslation(matrixValues);
    //...
}
In addition to the values being in the wrong locations for input into the OpenGL renderer, we also have the problem that our translations are on the scale of imageWidth and imageHeight instead of the normalized [0, 1] range that OpenGL expects. So, to correct this, we have to divide the translation components (the last column) by the image width and height.
matrixValues[6] = matrixValues[6] / getImageWidth();
matrixValues[7] = matrixValues[7] / getImageHeight();
Then, after you have accomplished this, you can start mapping with the ImageView matrix functions or the updated ImageTouchView matrix functions (which have some additional nice features when handling images on Android).
The effect can be seen below with the ImageView behind it taking up the whole screen and the texture in front of it being updated based upon updates from the ImageView. This handles zooming and panning translations between the two.
The only other step in the process would be to change the size of the SurfaceView to match that of the ImageView in order to get the full screen experience when the user is zooming in. In order to get only a surfaceview shown, you can simply put an 'invisible' imageview behind it in the background. And just let the ImageTouchView updates change the way that your OpenGL SurfaceView is shown.
Vote up on the answer below, as its author helped me significantly offline through chat to understand what needs to be done to link an ImageView with a texture. They deserve it.
If you do end up going with the vertex shader modification below instead of the fragment shader approach, it seems the values no longer need to be inverted for pinch zoom to work (and it in fact works backwards if you leave the inversion in). Additionally, panning seems to work backwards from what is expected as well.
You could do this in the fragment shader: just set this matrix as a uniform parameter of the fragment shader and modify the texture coordinates using it.
precision mediump float;
uniform sampler2D samp;
uniform mat3 transform;
varying vec2 texCoord;
void main()
{
    vec2 uv = (transform * vec3(texCoord, 1.0)).xy;
    gl_FragColor = texture2D(samp, uv);
}
Note that the matrix you get from the image view is in row-major order; you should transpose it to get OpenGL's column-major order. You also have to divide the x and y translation components by the width and height respectively.
Update: It just occurred to me that modifying the vertex shader would be better. You can just scale your quad with the same matrix you used in the fragment shader; it will be clipped to the screen if too big, it will not have edge-clamping problems when small, and the matrix multiplication will happen only 4 times (once per vertex) instead of once for each pixel. Just do this in the vertex shader:
gl_Position = vec4((transform * vec3(a_position.xy, 1.0)).xy, 0.0, 1.0);
and use the untransformed texCoord in the fragment shader.
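A minimal sketch of that vertex-shader variant, reusing the attribute and uniform names from the snippets above:
attribute vec4 a_position;
attribute vec4 a_texCoord;
uniform mat3 transform; // the same matrix that was previously applied per fragment
varying vec2 v_texCoord;

void main() {
    // transform the quad itself instead of the texture coordinates;
    // the matrix multiplication now runs once per vertex, not per pixel
    gl_Position = vec4((transform * vec3(a_position.xy, 1.0)).xy, 0.0, 1.0);
    v_texCoord = a_texCoord.xy; // untransformed, sampled directly in the fragment shader
}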

Android and Planet Rendering

I am learning how to do 3D development on Android. I started with a simple rotating planet with some clouds. I have spent the past 2 days trying to get an atmospheric glow added to the planet. I looked online and tried working with shaders, but was unable to get far.
My question is: what is the best way to achieve this? A nudge in the right direction may be all I need.
I attached a screenshot of the planet I have so far, as well as the end goal I am shooting for.
Thank you for any help.
Current progress:
End Goal:
http://25.media.tumblr.com/tumblr_m06q7l7BpO1qk01v6o1_500.png
What you need is usually done in a post-process render pass. So, for example:
Pass 1: Render the planet model into an FBO texture attachment.
Pass 2: Set up a screen quad and attach the texture from the previous pass as a sampler uniform.
Then in the fragment shader apply a glow effect.
For the glow, there are many ways of doing it. For example, you can draw the silhouette of the planet in the first pass, give it the color fill of your glow, and then blur it using smoothstep(). Then in the post-process pass you put it under the main planet texture.
In fact here you can see a lot of code samples on how to do a glow for circular objects.
One more thing: adding blur-based glow can greatly impact performance on a mobile device. There is a technique called "distance fields". Valve used it to anti-alias fonts, but it can also be used to do glow. You create a distance field texture copy of your planet silhouette once, then in the post-process pass use it to do a smooth glow effect. Fortunately for you there are functions to generate it. Here you can find the papers and the code for how to get it done.
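As a rough sketch of such a pass-2 fragment shader (u_sceneTex, u_center and u_radius are made-up names; it assumes a full-screen quad providing v_texCoord and a planet that is a disc in screen space):
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_sceneTex; // assumed: the pass-1 render of the planet
uniform vec2 u_center;        // planet center in texture coordinates
uniform float u_radius;       // planet radius in texture coordinates

void main() {
    vec4 scene = texture2D(u_sceneTex, v_texCoord);
    // soft ring around the planet: 1.0 at the limb, fading to 0.0 outward
    float d = distance(v_texCoord, u_center);
    float glow = 1.0 - smoothstep(u_radius, u_radius * 1.25, d);
    glow *= smoothstep(u_radius * 0.9, u_radius, d); // keep the glow outside the disc
    vec3 glowColor = vec3(0.4, 0.6, 1.0);            // assumed atmosphere tint
    gl_FragColor = vec4(scene.rgb + glowColor * glow, 1.0);
}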
Looks like you want Rim Lighting. This does not require any additional passes.
c.f. http://oneclick-code.blogspot.co.uk/2012/01/ios-opengl-es-20-lighting-models-for.html
Note: there's a lot of extra stuff in the example, but the key point is: you compare the normal at each vertex to the direction of your camera's view vector. When they are at right angles (dot product == 0), apply "full" light. When they are parallel (dot product == 1 for normalized vectors), apply zero light.
varying mediump vec3 OUT_View;
varying mediump vec3 OUT_Light;
varying mediump vec3 OUT_Normal;
varying mediump vec2 OUT_TexCoord;
uniform sampler2D EXT_TEXTURE_01;
uniform sampler2D EXT_TEXTURE_02;
uniform sampler2D EXT_TEXTURE_03;
uniform sampler2D EXT_TEXTURE_04;
void main(void)
{
    highp vec4 vDiffuseColor = vec4(0.0, 0.0, 0.5, 1.0);
    highp float fDiffuseFactor = 0.2 + max(dot(OUT_Normal, OUT_Light), 0.0);
    if (fDiffuseFactor < 0.3)
        vDiffuseColor = vDiffuseColor * 0.1;
    else if (fDiffuseFactor < 0.7)
        vDiffuseColor = vDiffuseColor * 0.5;
    else if (fDiffuseFactor < 0.8)
        vDiffuseColor = vDiffuseColor;
    else
        vDiffuseColor = vDiffuseColor * 1.3;
    gl_FragColor = vDiffuseColor;
}
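For reference, the rim term itself can be as small as this (a sketch reusing the varying names above; the glow color and the power falloff are assumptions):
varying mediump vec3 OUT_View;   // direction from the surface point toward the camera
varying mediump vec3 OUT_Normal;

void main(void)
{
    mediump vec3 n = normalize(OUT_Normal);
    mediump vec3 v = normalize(OUT_View);
    // 0.0 where the surface faces the camera, 1.0 at the silhouette
    mediump float rim = 1.0 - max(dot(n, v), 0.0);
    rim = pow(rim, 3.0);                          // tighten the glow toward the edge
    mediump vec3 glowColor = vec3(0.3, 0.5, 1.0); // assumed atmosphere tint
    gl_FragColor = vec4(glowColor * rim, rim);
}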
