How to make smoother borders using Fragment shader in OpenGL? - android

I have been trying to draw the border of an image with a transparent background using OpenGL ES in Android. I am using a fragment shader and a vertex shader (from the GPUImage library).
Below I have added Fig. A and Fig. B.
Fig. A (the rough border I have achieved)
Fig. B (the smooth border I want)
I have achieved Fig. A with a customised fragment shader, but I am unable to make the border smoother, as in Fig. B. I am attaching the shader code that I used to achieve the rough border. Can someone help me make the border smoother?
Here is my vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
Here is my fragment shader:
I sample the 8 pixels around the current pixel; if any one of those 8 is opaque (alpha greater than 0.4), the current pixel is drawn in the border color.
precision mediump float;
uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;
uniform lowp float thickness;
uniform lowp vec4 color;
void main() {
    float x = textureCoordinate.x;
    float y = textureCoordinate.y;
    vec4 current = texture2D(inputImageTexture, vec2(x, y));
    if (current.a != 1.0) {
        float offset = thickness * 0.5;
        vec4 top = texture2D(inputImageTexture, vec2(x, y - offset));
        vec4 topRight = texture2D(inputImageTexture, vec2(x + offset, y - offset));
        vec4 topLeft = texture2D(inputImageTexture, vec2(x - offset, y - offset));
        vec4 right = texture2D(inputImageTexture, vec2(x + offset, y));
        vec4 bottom = texture2D(inputImageTexture, vec2(x, y + offset));
        vec4 bottomLeft = texture2D(inputImageTexture, vec2(x - offset, y + offset));
        vec4 bottomRight = texture2D(inputImageTexture, vec2(x + offset, y + offset));
        vec4 left = texture2D(inputImageTexture, vec2(x - offset, y));
        if (top.a > 0.4 || bottom.a > 0.4 || left.a > 0.4 || right.a > 0.4 ||
            topLeft.a > 0.4 || topRight.a > 0.4 || bottomLeft.a > 0.4 || bottomRight.a > 0.4) {
            if (current.a != 0.0) {
                current = mix(color, current, current.a);
            } else {
                current = color;
            }
        }
    }
    gl_FragColor = current;
}

You were almost on the right track.
The main algorithm is:
Blur the image.
Use pixels with opacity above a certain threshold as outline.
The main problem is the blur step. It needs to be a large and smooth blur to get the smooth outline you want. For blurring, we can use a convolution kernel, and to achieve a large blur we should use a large kernel. I suggest Gaussian weights, since the Gaussian distribution is well known and widely used.
The overview of the algorithm is:
For each fragment, we sample many locations around it. The samples are made in an N by N grid. We average them together using weights that follow a 2D Gaussian Distribution.
This results in a blurred image.
With the blurred image, we paint the fragments that have alpha greater than a threshold with our outline color. And, of course, any opaque pixels in the original image should also appear in the result.
On a sidenote, your solution is almost a blur with a 3 x 3 kernel (you sample locations around the fragment in a 3 by 3 grid). However, a 3 x 3 kernel won't give you the amount of blur you need. You need more samples (e.g. 11 x 11). Also, the weights closer to the center should have a greater impact on the result. Thus, uniform weights won't work very well.
Oh, and one more important thing:
A single shader is NOT the fastest way to implement this. Usually this would be accomplished with two separate render passes: the first renders the image as usual, and the second blurs it and adds the outline. I assumed that you want to do this in a single render pass.
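As an aside, if you do split the work into two passes, the Gaussian blur becomes separable: one horizontal pass and one vertical pass, each sampling along a single axis, which costs O(2n) texture reads instead of O(n^2). A minimal sketch of one such pass (the texelStep uniform and the 5-tap weights are illustrative assumptions, not part of the single-pass shader below):
// Sketch: one pass of a separable Gaussian blur, rendered to an offscreen texture.
// Run once with texelStep = vec2(1.0 / width, 0.0), then again with vec2(0.0, 1.0 / height).
precision mediump float;
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform vec2 texelStep; // assumed uniform: one-texel offset along the blur axis
void main() {
    // 5-tap Gaussian kernel, weights summing to ~1.0 (kept small for brevity).
    float weights[5];
    weights[0] = 0.204164; weights[1] = 0.180174; weights[2] = 0.123832;
    weights[3] = 0.066282; weights[4] = 0.027631;
    vec4 sum = texture2D(inputImageTexture, textureCoordinate) * weights[0];
    for (int i = 1; i < 5; ++i) {
        sum += texture2D(inputImageTexture, textureCoordinate + float(i) * texelStep) * weights[i];
        sum += texture2D(inputImageTexture, textureCoordinate - float(i) * texelStep) * weights[i];
    }
    gl_FragColor = sum;
}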
The following vertex and fragment shaders accomplish this in a single pass (the position, uv and normal attributes and the matrices come predeclared from the framework of the site I tested on):
Vertex Shader
varying vec2 vecUV;
varying vec3 vecPos;
varying vec3 vecNormal;
void main() {
    vecUV = uv * 3.0 - 1.0;
    vecPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
    vecNormal = (modelViewMatrix * vec4(normal, 0.0)).xyz;
    gl_Position = projectionMatrix * vec4(vecPos, 1.0);
}
Fragment Shader
precision highp float;
varying vec2 vecUV;
varying vec3 vecPos;
varying vec3 vecNormal;
uniform sampler2D inputImageTexture;
float normalProbabilityDensityFunction(in float x, in float sigma)
{
    // 0.39894 is approximately 1.0 / sqrt(2.0 * PI)
    return 0.39894 * exp(-0.5 * x * x / (sigma * sigma)) / sigma;
}
vec4 gaussianBlur()
{
    // The Gaussian operator size.
    // The higher this number, the better quality the outline will be,
    // but the cost grows as O(n^2)!
    const int matrixSize = 11;
    // How far apart (in UV coordinates) each cell in the Gaussian blur is.
    // Increase this for larger outlines!
    vec2 offset = vec2(0.005, 0.005);
    const int kernelSize = (matrixSize - 1) / 2;
    float kernel[matrixSize];
    // Create the 1-D kernel using a sigma
    float sigma = 7.0;
    for (int j = 0; j <= kernelSize; ++j)
    {
        kernel[kernelSize + j] = kernel[kernelSize - j] = normalProbabilityDensityFunction(float(j), sigma);
    }
    // Generate the normalization factor: each 2-D weight is a product of two
    // 1-D weights, so the total weight is the square of the 1-D sum.
    float normalizationFactor = 0.0;
    for (int j = 0; j < matrixSize; ++j)
    {
        normalizationFactor += kernel[j];
    }
    normalizationFactor = normalizationFactor * normalizationFactor;
    // Apply the kernel to the fragment
    vec4 outputColor = vec4(0.0);
    for (int i = -kernelSize; i <= kernelSize; ++i)
    {
        for (int j = -kernelSize; j <= kernelSize; ++j)
        {
            float kernelValue = kernel[kernelSize + j] * kernel[kernelSize + i];
            vec2 sampleLocation = vecUV.xy + vec2(float(i) * offset.x, float(j) * offset.y);
            // 'sample' is reserved in newer GLSL versions, so use another name.
            vec4 sampledColor = texture2D(inputImageTexture, sampleLocation);
            outputColor += kernelValue * sampledColor;
        }
    }
    // Divide by the normalization factor, so that all the weights sum to 1
    outputColor = outputColor / normalizationFactor;
    return outputColor;
}
void main()
{
    // After blurring, what alpha threshold should count as outline?
    float alphaThreshold = 0.3;
    // How smooth should the edges of the outline be?
    float outlineSmoothness = 0.1;
    // The outline color
    vec4 outlineColor = vec4(1.0, 1.0, 1.0, 1.0);
    // Sample the original image and generate a blurred version using a Gaussian blur
    vec4 originalImage = texture2D(inputImageTexture, vecUV);
    vec4 blurredImage = gaussianBlur();
    float alpha = smoothstep(alphaThreshold - outlineSmoothness, alphaThreshold + outlineSmoothness, blurredImage.a);
    vec4 outlineFragmentColor = mix(vec4(0.0), outlineColor, alpha);
    gl_FragColor = mix(outlineFragmentColor, originalImage, originalImage.a);
}
This is the result I got:
And for the same image as yours, with matrixSize = 33 and alphaThreshold = 0.05:
To get crisper results we can tweak the parameters. Here is an example with matrixSize = 111, offset = vec2(0.002, 0.002), alphaThreshold = 0.01 and outlineSmoothness = 0.00. Note that increasing matrixSize will heavily impact performance, which is a limitation of rendering this outline with only one shader pass.
I tested the shader on this site. Hopefully you will be able to adapt it to your solution.
Regarding references, I have used quite a lot of this shadertoy example as basis for the code I wrote for this answer.

In my filters, the smoothness is achieved by a simple box blur on the border. You have decided that alpha > 0.4 is a border; the alpha values between 0 and 0.4 in the surrounding pixels form an edge, so just blur this edge with a 3x3 window to get a smooth edge.
if (current.a != 1.0) {
    // other processing
    if (current.a > 0.4) {
        if (top.a < 0.4 || bottom.a < 0.4 || left.a < 0.4 || right.a < 0.4 ||
            topLeft.a < 0.4 || topRight.a < 0.4 || bottomLeft.a < 0.4 || bottomRight.a < 0.4)
        {
            // Implement 3x3 box blur here
        }
    }
}
You need to tweak which edge pixels you blur. The basic issue is that the image drops straight from an opaque pixel to a transparent one, while what you need is a gradual transition; a sketch follows below.
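A minimal sketch of that blur step, reusing x, y, offset, color and current from the question's shader (the final blend line is my own choice of how to apply the averaged alpha, not a fixed rule):
// Inside main(): average the alpha of the 3x3 neighbourhood around the pixel.
float blurredAlpha = 0.0;
for (int i = -1; i <= 1; ++i) {
    for (int j = -1; j <= 1; ++j) {
        blurredAlpha += texture2D(inputImageTexture,
                                  vec2(x + float(i) * offset, y + float(j) * offset)).a;
    }
}
blurredAlpha /= 9.0;
// Blend the border colour in proportion to the averaged alpha, which
// gives a gradual transition instead of the hard > 0.4 step.
current = mix(current, color, blurredAlpha);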
Another option is quick anti-aliasing: scale up 200% and then scale down 50% back to the original size, using nearest-neighbour scaling. This technique is sometimes used to get smooth edges on text.

Related

How to overlay texture over another, without clipping the base texture?

I have written this simple shader to overlay a texture over another (base) texture:
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
void main()
{
    mediump vec4 base = texture2D(inputImageTexture, textureCoordinate);
    mediump vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
    mediump float ra = overlay.a * overlay.r + (1.0 - overlay.a) * base.r;
    mediump float ga = overlay.a * overlay.g + (1.0 - overlay.a) * base.g;
    mediump float ba = overlay.a * overlay.b + (1.0 - overlay.a) * base.b;
    gl_FragColor = vec4(ra, ga, ba, 1.0);
}
Issue: this works except for one problem. If the overlay image is smaller than the base image, sampling outside the overlay returns an alpha value of 1.0, i.e. overlay.a == 1.0. Because of this the base image is clipped by the overlay image, and the region outside the overlay appears black.
I am new to OpenGL and was expecting that outside its bounds a texture's alpha would appear as 0.0. How can I fix my shader code to achieve the desired behaviour? Or do I need to modify my graphics pipeline?
EDIT: Vertex shader below (pos and uv are the position attribute and the base texture coordinate attribute):
attribute vec4 pos;
attribute vec2 uv;
attribute vec4 inputTextureCoordinate2;
varying vec2 textureCoordinate;
varying vec2 textureCoordinate2;
void main()
{
    gl_Position = pos;
    textureCoordinate = uv;
    textureCoordinate2 = inputTextureCoordinate2.xy;
}
I was expecting that outside its bounds, the texture's alpha should appear as 0.0
How are you sampling the texture outside of its bounds? When sampling a texture, the UV coordinates should range from 0 to 1. If the coordinates are outside of this range, then one of two things will happen:
If GL_CLAMP_TO_EDGE is set, the coordinate will be clamped to the [0, 1] range, and you'll sample an edge pixel
If GL_REPEAT is set, the fractional part of the coordinate will be taken, and you'll sample somewhere in the middle of the texture
See the docs on glTexParameter for more details.
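For reference, the wrap mode is chosen per texture with glTexParameteri when the texture is configured; a minimal sketch on Android (overlayTextureId is an assumed, already-generated texture id):
// Bind the texture and set its wrap and filter modes.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, overlayTextureId);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);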
If your use case is simply overlaying images, perhaps you should try writing a pixel shader.
Set the viewport to the base image dimensions and draw a quad from (-1, -1) to (1, 1).
Your fragment shader will now operate on each pixel, known as a texel. Get the texel coordinate with gl_FragCoord
Sample the base and overlay by texel e.g. using texelFetch
If the texel is outside of the overlay, set the overlay's rgba values to 0
For example
//fragment shader (note: texelFetch requires GLSL ES 3.00, i.e. #version 300 es)
uniform ivec2 overlayDim;
uniform sampler2D baseTexture;
uniform sampler2D overlayTexture;
void main() {
    vec2 texelf = floor(gl_FragCoord.xy);
    ivec2 texel = ivec2(int(texelf.x), int(texelf.y));
    vec4 base = texelFetch(baseTexture, texel, 0);
    vec4 overlay = texelFetch(overlayTexture, texel, 0);
    float overlayIsValid = float(texel.x < overlayDim.x && texel.y < overlayDim.y);
    overlay *= overlayIsValid;
    //rest of code
}
What happens if you sample outside the range of the texture is controlled by the value you set for GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T using glTexParameteri().
In full OpenGL, you could set the value to GL_CLAMP_TO_BORDER, set the border color to a value with alpha 0.0, and be done with it. But texture borders are not available in OpenGL ES 2.0 (the option is introduced in ES 3.2, but not in earlier versions).
Without this, I can think of two options:
If you have control over the texture data, you could set a one-pixel border to transparent values. GL_CLAMP_TO_EDGE then gives you a transparent value when sampling outside the range.
Check the range in the fragment shader.
The fragment shader code for the second option could look something like this (untested):
mediump vec3 col = texture2D(inputImageTexture, textureCoordinate).xyz;
if (all(greaterThan(textureCoordinate2, vec2(0.0))) &&
    all(lessThan(textureCoordinate2, vec2(1.0))))
{
    mediump vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
    col = mix(col, overlay.xyz, overlay.a);
}
gl_FragColor = vec4(col, 1.0);
Compared to your original code, also note the use of vector operations. Whenever there is a good way of operating on vectors, it will make the code simpler. It will also make the job of the optimizer easier for GPUs with vector operations.
I found the issue in my code. I had
GLES20.glClearColor(0, 0, 0, 1);
in my code. Changing it to -
GLES20.glClearColor(0, 0, 0, 0);
fixed the issue.
Also, as mentioned by @Reto, I have changed my fragment shader to use vector operations for optimisation.
void main()
{
    mediump vec4 overlay = texture2D(inputImageTexture2, textureCoordinate2);
    mediump vec3 col = texture2D(inputImageTexture, textureCoordinate).xyz;
    col = mix(col, overlay.xyz, overlay.a);
    gl_FragColor = vec4(col, 1.0);
}

How to use OpenGL to emulate OpenCV's warpPerspective functionality (perspective transform)

I've done image warping using OpenCV in Python and C++, see the Coca Cola logo warped in place in the corners I had selected:
(The source and overlay images, along with a full album with transition pics and description, were linked in the original post.)
I need to do exactly this, but in OpenGL. I'll have:
Corners inside which I have to map the warped image
A homography matrix that maps the transformation of the logo image into the logo image you see inside the final image (using OpenCV's warpPerspective), something like this:
[[ 2.59952324e+00, 3.33170976e-01, -2.17014066e+02],
 [ 8.64133587e-01, 1.82580111e+00, -3.20053715e+02],
 [ 2.78910149e-03, 4.47911310e-05, 1.00000000e+00]]
Main image (the running track image here)
Overlay image (the Coca Cola image here)
Is it possible? I've read a lot and have started basic OpenGL tutorials, but can it be done from just what I have? Would the OpenGL implementation be faster, say around ~10 ms?
I'm currently playing with this tutorial here:
http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html
Am I going in the right direction? Total OpenGL newbie here, please bear with me. Thanks.
After trying a number of solutions proposed here and elsewhere, I ended up solving this by writing a fragment shader that replicates what 'warpPerspective' does.
The fragment shader code looks something like:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// NOTE: you will need to pass the INVERSE of the homography matrix, as well as
// the width and height of your image as uniforms!
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;
void main()
{
    // Texture coordinates run over [0,1] x [0,1];
    // convert to "real world" coordinates.
    highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
    // Determine what 'z' is
    highp vec3 m = inverseHomographyMatrix[2] * frameCoordinate;
    highp float zed = 1.0 / (m.x + m.y + m.z);
    frameCoordinate = frameCoordinate * zed;
    // Determine the translated x and y coordinates
    highp float xTrans = inverseHomographyMatrix[0][0] * frameCoordinate.x + inverseHomographyMatrix[0][1] * frameCoordinate.y + inverseHomographyMatrix[0][2] * frameCoordinate.z;
    highp float yTrans = inverseHomographyMatrix[1][0] * frameCoordinate.x + inverseHomographyMatrix[1][1] * frameCoordinate.y + inverseHomographyMatrix[1][2] * frameCoordinate.z;
    // Normalize back to [0,1] x [0,1] space
    highp vec2 coords = vec2(xTrans / width, yTrans / height);
    // Sample the texture if we're mapping within the image; otherwise output transparent black
    if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
        gl_FragColor = texture2D(inputImageTexture, coords);
    } else {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    }
}
Note that the homography matrix we are passing in here is the INVERSE homography matrix! You have to invert the homography matrix that you would pass into 'warpPerspective', otherwise this code will not work.
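For illustration, on Android the inversion can be done on the CPU with android.graphics.Matrix, which is itself a 3x3 float matrix. A sketch (inverseHomographyMatrixLocation is an assumed uniform location from glGetUniformLocation; the values are the example matrix from the question):
// Row-major homography as you would pass it to warpPerspective.
float[] homography = {
        2.59952324e+00f, 3.33170976e-01f, -2.17014066e+02f,
        8.64133587e-01f, 1.82580111e+00f, -3.20053715e+02f,
        2.78910149e-03f, 4.47911310e-05f, 1.00000000e+00f
};
android.graphics.Matrix h = new android.graphics.Matrix();
h.setValues(homography);
android.graphics.Matrix inverted = new android.graphics.Matrix();
if (!h.invert(inverted)) {
    throw new IllegalStateException("homography is not invertible");
}
float[] inverse = new float[9];
inverted.getValues(inverse); // still row-major
// glUniformMatrix3fv in ES 2.0 expects column-major data with the
// transpose flag false, so reorder the row-major values by hand.
float[] columnMajor = {
        inverse[0], inverse[3], inverse[6],
        inverse[1], inverse[4], inverse[7],
        inverse[2], inverse[5], inverse[8]
};
GLES20.glUniformMatrix3fv(inverseHomographyMatrixLocation, 1, false, columnMajor, 0);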
The vertex shader does nothing but pass through the coordinates:
// Vertex shader
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main() {
    // Nothing happens in the vertex shader
    textureCoordinate = inputTextureCoordinate.xy;
    gl_Position = position;
}
Pass in unaltered texture coordinates and position coordinates (i.e. textureCoordinates = [(0,0),(0,1),(1,0),(1,1)] and positionCoordinates = [(-1,-1),(-1,1),(1,-1),(1,1)], for a triangle strip), and this should work!
You can do perspective warping of the texture using texture2DProj(), or alternatively using texture2D() and dividing the s and t coordinates of the texture by the q coordinate yourself (which is what texture2DProj does).
Have a look here: Perspective correct texturing of trapezoid in OpenGL ES 2.0.
warpPerspective projects the (x,y,1) coordinate with the matrix and then divides (u,v) by w, like texture2DProj(). You'll have to modify the matrix so the resulting coordinates are properly normalised.
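A sketch of the texture2DProj variant (assuming the inverse homography has already been rescaled so it maps UV coordinates to UV coordinates; the uniform name is illustrative):
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// Assumed: inverse homography pre-scaled to work directly in [0,1] UV space.
uniform highp mat3 normalizedInverseHomography;
void main()
{
    highp vec3 projected = normalizedInverseHomography * vec3(textureCoordinate, 1.0);
    // texture2DProj divides projected.xy by projected.z before sampling,
    // the same perspective divide that warpPerspective performs.
    gl_FragColor = texture2DProj(inputImageTexture, projected);
}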
In terms of performance, if you want to read the data back to the CPU your bottleneck is glReadPixels. How long it will take depends on your device. If you're just displaying, the OpenGL ES calls will take much less than 10ms, assuming that you have both textures loaded to GPU memory.
[edit] This worked on my Galaxy S9, but on my car's Android it had an issue where the whole output texture was white. I've stuck with the original shader and it works :)
You can use mat3*vec3 ops in the fragment shader:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;
void main()
{
    highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
    highp vec3 trans = inverseHomographyMatrix * frameCoordinate;
    highp vec2 coords = vec2(trans.x / width, trans.y / height) / trans.z;
    if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
        gl_FragColor = texture2D(inputImageTexture, coords);
    } else {
        gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
    }
}
If you want to have a transparent background, don't forget to enable blending and set:
GLES20.glEnable(GLES20.GL_BLEND); // blend func and equation have no effect unless blending is enabled
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
And set transpose flag (in case you use the above shader):
GLES20.glUniformMatrix3fv(H_P2D, 1, true, homography, 0);

GLSL ES fragment shader produces very different results on different devices

I am developing a game for Android using OpenGL ES 2.0 and have a problem with a fragment shader for drawing stars in the background. I've got the following code:
precision mediump float;
varying vec2 transformed_position;
float rand(vec2 co) {
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}
void main(void) {
    float distance = 10.0;
    float quantization_strength = 4.0;
    vec3 background_color = vec3(0.09, 0.0, 0.288);
    vec2 zero = vec2(0.0, 0.0);
    vec2 distance_vec = vec2(distance, distance);
    vec2 quantization_vec = vec2(quantization_strength, quantization_strength);
    vec2 new_vec = floor(transformed_position / quantization_vec) * quantization_vec;
    if (all(equal(mod(new_vec, distance_vec), zero))) {
        float rand_val = rand(new_vec);
        vec3 current_color = background_color * (1.0 + rand_val);
        gl_FragColor = vec4(current_color.x, current_color.y, current_color.z, 1.0);
    } else {
        gl_FragColor = vec4(background_color.x, background_color.y, background_color.z, 1.0);
    }
}
My aim is to 'quantize' the fragment coordinates so the 'stars' are not 1 px in size, and then light up quantized pixels that are distant enough from each other by a random amount. This code, however, produces different results depending on where it is executed. I have used GLSL Sandbox (http://glsl.heroku.com), a Nexus 7 and an HTC Desire S to create a comparison:
As you can see, GLSL Sandbox produces a dense grid with many stars visible. On the Nexus 7 the stars are far fewer and distributed along lines (which may not be obvious in this small image) - the rand function does not work as expected. The Desire S draws no stars at all.
Why does the rand function work so strangely on the Nexus 7 (if I modify the vector used for the dot product, the stars are distributed along lines at a different angle)? And what might cause the Desire S not to render the stars at all?
I would also appreciate any optimization tips for this shader, as I am very inexperienced with GLSL. Or perhaps there is a better way to draw 'stars' in a fragment shader?
UPDATE
I changed the code to this (I used http://glsl.heroku.com/e#9364.0 as reference):
precision mediump float;
varying highp vec2 transformed_position;
highp float rand(vec2 co) {
    highp float a = 1e3;
    highp float b = 1e-3;
    highp float c = 1e5;
    return fract(sin((co.x + co.y * a) * b) * c);
}
void main(void) {
    float size = 15.0;
    float prob = 0.97;
    lowp vec3 background_color = vec3(0.09, 0.0, 0.288);
    highp vec2 world_pos = transformed_position;
    vec2 pos = floor(1.0 / size * world_pos);
    float color = 0.0;
    highp float starValue = rand(pos);
    if (starValue > prob) {
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float xy_dist = abs(world_pos.x - center.x) * abs(world_pos.y - center.y) / 5.0;
        color = 0.6 - distance(world_pos, center) / (0.5 * size) * xy_dist;
    }
    if (starValue < prob || color < 0.0) {
        gl_FragColor = vec4(background_color, 1.0);
    } else {
        float starIntensity = fract(100.0 * starValue);
        gl_FragColor = vec4(background_color * (1.0 + color * 3.0 * starIntensity), 1.0);
    }
}
Desire S now gives me very nice, uniformly distributed stars. But the problem with Nexus 7 is still there. With prob = 0.97, no stars are displayed, and with very low prob = 0.01, they appear very sparsely placed along horizontal lines. Why does Tegra 3 behave so strangely?

Very slow fract operation on Galaxy SII and SIII

My terrain uses a shader which itself uses four different textures. It runs fine on Windows and Linux machines, but on Android it gets only ~25 FPS on both Galaxies. I thought the textures were the problem, but no: it turns out the problem is the part where I divide the texture coordinates and use fract to get tiled coordinates. Without it, I get 60 FPS.
// Material data.
//uniform vec3 uAmbient;
//uniform vec3 uDiffuse;
//uniform vec3 uLightPos[8];
//uniform vec3 uEyePos;
//uniform vec3 uFogColor;
uniform sampler2D terrain_blend;
uniform sampler2D grass;
uniform sampler2D rock;
uniform sampler2D dirt;
varying vec2 varTexCoords;
//varying vec3 varEyeNormal;
//varying float varFogWeight;
//------------------------------------------------------------------
// Name: fog
// Desc: applies calculated fog weight to fog color and mixes with
// specified color.
//------------------------------------------------------------------
//vec4 fog(vec4 color) {
// return mix(color, vec4(uFogColor, 1.0), varFogWeight);
//}
void main(void)
{
    /*vec3 N = normalize(varEyeNormal);
    vec3 L = normalize(uLightPos[0]);
    vec3 H = normalize(L + normalize(uEyePos));
    float df = max(0.0, dot(N, L));
    vec3 col = uAmbient + uDiffuse * df;*/
    // Take color information from textures and tile them.
    vec2 tiledCoords = varTexCoords;
    //vec2 tiledCoords = fract(varTexCoords / 0.05); // <========= HERE!!!!!!!!!
    vec4 colGrass = texture2D(grass, tiledCoords);
    vec4 colDirt = texture2D(dirt, tiledCoords);
    vec4 colRock = texture2D(rock, tiledCoords);
    // Take color information from the non-tiled blend map.
    vec4 colBlend = texture2D(terrain_blend, varTexCoords);
    // Find the inverse of the sum of all the blend weights.
    float inverse = 1.0 / (colBlend.r + colBlend.g + colBlend.b);
    // Scale each color by its corresponding weight.
    colGrass *= colBlend.r * inverse;
    colDirt *= colBlend.g * inverse;
    colRock *= colBlend.b * inverse;
    vec4 final = colGrass + colDirt + colRock;
    //final = fog(final);
    gl_FragColor = final;
}
Note: there's some more code for light calculation and fog, but it isn't used. I indicated the line that, when uncommented, causes massive lag. I tried using floor and calculating the fractional part manually, but the lag is the same. What might be wrong?
EDIT: Now here's what I don't understand.
This:
vec2 tiledCoords = fract(varTexCoords * 2.0);
Runs great.
This:
vec2 tiledCoords = fract(varTexCoords * 10.0);
Runs average on SIII.
This:
vec2 tiledCoords = fract(varTexCoords * 20.0);
Lags...
This:
vec2 tiledCoords = fract(varTexCoords * 100.0);
Well 5FPS is still better than I expected...
So what gives? Why is this happening? To my understanding this shouldn't make any difference. But it does. And a huge one.
I would run your code through a profiler (check the Mali-400 tools), but by the looks of it you are killing the texture cache. For the first pixel computed, all four texture look-ups are fetched, and the contiguous data is also fetched into the texture cache. For the next pixel, you are not accessing data in the cache but sampling quite far away (scaled by 10, 20, etc.), which completely defeats the purpose of such a cache.
This is of course a guess; without proper profiling it is hard to tell.
EDIT: @harism also pointed you in that direction.
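If the cache theory holds, one mitigation to try (my suggestion, not something from a profiler run) is to give the tiled textures mipmaps and repeat wrapping, so minified fetches stay inside small, cache-friendly mip levels and the fract() can even be dropped from the shader. A sketch with GLES20, assuming power-of-two textures (ES 2.0 requires that for GL_REPEAT with mipmaps):
// Assumed: grassTextureId is an already-loaded texture; repeat for rock and dirt.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, grassTextureId);
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
// Mipmapped minification keeps distant fetches local instead of striding
// across the full-size texture.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
        GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);
// With GL_REPEAT, tiling can be done by scaling the coordinates alone,
// e.g. texture2D(grass, varTexCoords * 20.0), with no fract() in the shader.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);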

Android OpenGL ES 2.0 Vignette Mask

I am on Android API level 9. I have a camera preview loaded into a SurfaceView, and I am trying to draw a vignette mask over it. To do so I am using a GLSurfaceView. I prepared a mask in the Xcode shader builder using the following fragment shader code (or is it a pixel shader?), which compiles successfully so far:
uniform sampler2D tex;
void main()
{
    float innerAlpha = 0.0;
    float outerAlpha = 1.0;
    float len = 1.7;
    float startAdjustment = -0.2;
    float diff = 0.4;
    float alphaStep = outerAlpha / len;
    vec2 center = vec2(0.5, 0.5);
    vec2 foc1 = vec2(diff, 0.);
    vec2 foc2 = vec2(-diff, 0.);
    float r = distance(center + foc1, gl_TexCoord[0].xy) + distance(center + foc2, gl_TexCoord[0].xy);
    float alpha = r - (diff * 2.0) * alphaStep - startAdjustment;
    vec4 vColor = vec4(0., 0., 0., innerAlpha + alpha);
    gl_FragColor = vColor;
}
However, I do not know how to implement this in code for Android. Basically I think I need to create a rectangle covering the whole view and apply this kind of code-generated texture to it. I just cannot figure out the actual code. Ideally it should be in OpenGL ES 2.0.
Edit1:
@Tim - I tried to follow the tutorials here http://developer.android.com/training/graphics/opengl/draw.html
and here
http://www.learnopengles.com/android-lesson-one-getting-started/
and I basically understand how to draw a triangle. But I do not understand how to draw a rectangle - I mean, do I really need to draw two triangles, or can I define a rectangle (or other complex shapes) right away?
As for the textures: in all the tutorials I have seen, textures are loaded from image files, but I would be interested to know how I can generate one using the pixel shader above.
Meanwhile, I have found the answer for how to draw the oval-shaped mask.
The problem was that I was thinking of gl_FragCoord in the range 0.0 to 1.0, but it is specified in actual pixels instead, e.g. 600.0 x 900.0, etc.
With little tweaks (changing vec2's to floats) I have been able to draw a nice oval-shaped mask over the whole screen in OpenGL. Here is the final fragment shader. Note that you must set the uniforms before drawing. If you are going to try this, keep uSlope somewhere between 0.1 and 2.0 to get meaningful results. Also note that uInnerAlpha has to be lower than uOuterAlpha with this particular piece of code. For a typical vignette, uInnerAlpha is 0.0 and uOuterAlpha is 1.0.
precision mediump float;
uniform float uWidth;
uniform float uHeight;
uniform float uSlope;
uniform float uStartAdjustment;
uniform float uEllipseLength;
uniform float uInnerAlpha;
uniform float uOuterAlpha;
void main() {
    float gradientLength = uHeight * uSlope;
    float alphaStep = uOuterAlpha / gradientLength;
    float x1 = (uWidth / 2.0);
    float y1 = (uHeight / 2.0) - uEllipseLength;
    float x2 = (uWidth / 2.0);
    float y2 = (uHeight / 2.0) + uEllipseLength;
    float dist1 = sqrt(pow(abs(gl_FragCoord.x - x1), 2.0) + pow(abs(gl_FragCoord.y - y1), 2.0));
    float dist2 = sqrt(pow(abs(gl_FragCoord.x - x2), 2.0) + pow(abs(gl_FragCoord.y - y2), 2.0));
    float dist = (dist1 + dist2);
    float alpha = ((dist - (uEllipseLength * 2.0)) * alphaStep - uStartAdjustment) + uInnerAlpha;
    if (alpha > uOuterAlpha) {
        alpha = uOuterAlpha;
    }
    if (alpha < uInnerAlpha) {
        alpha = uInnerAlpha;
    }
    vec4 newColor = vec4(1.0, 1.0, 1.0, alpha);
    gl_FragColor = newColor;
}
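As for drawing the rectangle this shader is applied to: a full-screen quad is just a four-vertex triangle strip, i.e. two triangles without spelling them out. A minimal sketch of the setup and draw call (positionHandle is assumed to come from glGetAttribLocation on the vignette program above):
// Full-screen quad in clip space as a GL_TRIANGLE_STRIP.
float[] quad = {
        -1f, -1f,   // bottom left
         1f, -1f,   // bottom right
        -1f,  1f,   // top left
         1f,  1f,   // top right
};
java.nio.FloatBuffer quadBuffer = java.nio.ByteBuffer
        .allocateDirect(quad.length * 4)
        .order(java.nio.ByteOrder.nativeOrder())
        .asFloatBuffer()
        .put(quad);
quadBuffer.position(0);
// Feed the quad to the shader's position attribute and draw it.
GLES20.glVertexAttribPointer(positionHandle, 2, GLES20.GL_FLOAT, false, 0, quadBuffer);
GLES20.glEnableVertexAttribArray(positionHandle);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);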
