I am developing a game for Android using OpenGL ES 2.0 and have a problem with a fragment shader for drawing stars in the background. I've got the following code:
precision mediump float;
varying vec2 transformed_position;

float rand(vec2 co) {
    return fract(sin(dot(co.xy, vec2(12.9898, 78.233))) * 43758.5453);
}

void main(void) {
    float distance = 10.0;
    float quantization_strength = 4.0;
    vec3 background_color = vec3(0.09, 0.0, 0.288);
    vec2 zero = vec2(0.0, 0.0);
    vec2 distance_vec = vec2(distance, distance);
    vec2 quantization_vec = vec2(quantization_strength, quantization_strength);
    // Snap the fragment position to the corner of its quantization cell.
    vec2 new_vec = floor(transformed_position / quantization_vec) * quantization_vec;
    if (all(equal(mod(new_vec, distance_vec), zero))) {
        float rand_val = rand(new_vec);
        vec3 current_color = background_color * (1.0 + rand_val);
        gl_FragColor = vec4(current_color, 1.0);
    } else {
        gl_FragColor = vec4(background_color, 1.0);
    }
}
My aim is to 'quantize' the fragment coordinates, so 'stars' are not 1 px in size, and then light up quantized cells that are far enough apart by a random amount. This code, however, produces different results depending on where it runs. I compared GLSL Sandbox (http://glsl.heroku.com), a Nexus 7 and an HTC Desire S:
As you can see, GLSL Sandbox produces a dense grid with many stars visible. On the Nexus 7 there are far fewer stars, and they are distributed along lines (which may not be obvious in this small image); the rand function does not work as expected. The Desire S draws no stars at all.
Why does the rand function behave so strangely on the Nexus 7 (if I modify the vector used in the dot product, the stars are distributed along lines at a different angle)? And what might cause the Desire S not to render any stars?
I would also appreciate any optimization tips for this shader, as I am very inexperienced with GLSL. Or perhaps there is a better way to draw 'stars' in a fragment shader?
UPDATE
I changed the code to this (I used http://glsl.heroku.com/e#9364.0 as a reference):
precision mediump float;
varying highp vec2 transformed_position;

highp float rand(vec2 co) {
    highp float a = 1e3;
    highp float b = 1e-3;
    highp float c = 1e5;
    return fract(sin((co.x + co.y * a) * b) * c);
}

void main(void) {
    float size = 15.0;
    float prob = 0.97;
    lowp vec3 background_color = vec3(0.09, 0.0, 0.288);
    highp vec2 world_pos = transformed_position;
    // Index of the size x size cell this fragment belongs to.
    vec2 pos = floor(1.0 / size * world_pos);
    float color = 0.0;
    highp float starValue = rand(pos);
    if (starValue > prob) {
        // Fade the star out with distance from the cell center.
        vec2 center = size * pos + vec2(size, size) * 0.5;
        float xy_dist = abs(world_pos.x - center.x) * abs(world_pos.y - center.y) / 5.0;
        color = 0.6 - distance(world_pos, center) / (0.5 * size) * xy_dist;
    }
    if (starValue < prob || color < 0.0) {
        gl_FragColor = vec4(background_color, 1.0);
    } else {
        float starIntensity = fract(100.0 * starValue);
        gl_FragColor = vec4(background_color * (1.0 + color * 3.0 * starIntensity), 1.0);
    }
}
The Desire S now gives me very nice, uniformly distributed stars. But the problem with the Nexus 7 is still there. With prob = 0.97, no stars are displayed at all, and with a very low prob = 0.01 they appear sparsely, placed along horizontal lines. Why does Tegra 3 behave so strangely?
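A likely factor, though I can only offer it as an assumption: the Tegra 3 is reported not to support highp in fragment shaders, so the highp qualifiers silently fall back to lower precision, and sin() of large arguments loses exactly the fractional bits this kind of hash depends on. One workaround that sidesteps ALU precision entirely is to fetch the per-cell random value from a small noise texture. This is a sketch of that alternative technique, not code from the question; the u_noise uniform and its 256x256 size are my assumptions:

uniform sampler2D u_noise; // 256x256 noise texture, GL_NEAREST filtering, GL_REPEAT wrapping

// One texel per star cell; the wrap mode makes the pattern tile forever.
highp float rand(vec2 cell) {
    return texture2D(u_noise, (cell + 0.5) / 256.0).r;
}

The rest of the shader can stay unchanged, since it only consumes rand(pos) once per cell.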
I have been trying to draw the border of an image with a transparent background using OpenGL on Android. I am using a fragment shader and a vertex shader (from the GPUImage library).
Below I have added Fig. A and Fig. B. I have achieved Fig. A with the customised fragment shader, but I am unable to make the border smooth as in Fig. B. I am attaching the shader code that I used (to achieve the rough border). Can someone here help me make the border smoother?
Here is my vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
Here is my fragment shader:
I sample the 8 pixels around the current pixel. If any of those 8 is opaque (alpha greater than 0.4), the current pixel is drawn in the border color.
precision mediump float;
uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;
uniform lowp float thickness;
uniform lowp vec4 color;

void main() {
    float x = textureCoordinate.x;
    float y = textureCoordinate.y;
    vec4 current = texture2D(inputImageTexture, vec2(x, y));
    if (current.a != 1.0) {
        float offset = thickness * 0.5;
        vec4 top = texture2D(inputImageTexture, vec2(x, y - offset));
        vec4 topRight = texture2D(inputImageTexture, vec2(x + offset, y - offset));
        vec4 topLeft = texture2D(inputImageTexture, vec2(x - offset, y - offset));
        vec4 right = texture2D(inputImageTexture, vec2(x + offset, y));
        vec4 bottom = texture2D(inputImageTexture, vec2(x, y + offset));
        vec4 bottomLeft = texture2D(inputImageTexture, vec2(x - offset, y + offset));
        vec4 bottomRight = texture2D(inputImageTexture, vec2(x + offset, y + offset));
        vec4 left = texture2D(inputImageTexture, vec2(x - offset, y));
        if (top.a > 0.4 || bottom.a > 0.4 || left.a > 0.4 || right.a > 0.4 ||
            topLeft.a > 0.4 || topRight.a > 0.4 || bottomLeft.a > 0.4 || bottomRight.a > 0.4) {
            if (current.a != 0.0) {
                current = mix(color, current, current.a);
            } else {
                current = color;
            }
        }
    }
    gl_FragColor = current;
}
You were almost on the right track.
The main algorithm is:
1. Blur the image.
2. Use pixels with opacity above a certain threshold as the outline.
The main problem is the blur step. It needs to be a large, smooth blur to get the smooth outline you want. For blurring we can use a convolution kernel, and to achieve a large blur we should use a large kernel. I suggest a Gaussian kernel, as it is well known and widely used.
The overview of the algorithm is:
For each fragment, we sample many locations around it. The samples are made in an N by N grid. We average them together using weights that follow a 2D Gaussian Distribution.
This results in a blurred image.
With the blurred image, we paint the fragments that have alpha greater than a threshold with our outline color. And, of course, any opaque pixels in the original image should also appear in the result.
On a side note, your solution is almost a blur with a 3x3 kernel (you sample locations around the fragment in a 3 by 3 grid). However, a 3x3 kernel won't give you the amount of blur you need. You need more samples (e.g. 11x11). Also, the weights closer to the center should have a greater impact on the result, so uniform weights won't work very well.
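For reference, the 2D Gaussian weight factors into two 1D Gaussians, which is why the shader below can build a 1-D kernel and multiply kernel[i] * kernel[j] instead of storing a full 2-D matrix:

G(x, y) = 1/(2*pi*sigma^2) * exp(-(x^2 + y^2)/(2*sigma^2))
        = [1/(sqrt(2*pi)*sigma) * exp(-x^2/(2*sigma^2))] * [1/(sqrt(2*pi)*sigma) * exp(-y^2/(2*sigma^2))]

The constant 0.39894 in the code below is 1/sqrt(2*pi).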
Oh, and one more important thing:
A single shader is NOT the fastest way to accomplish this. Usually this would be done with 2 separate renders: the first would render the image as usual, and the second would blur it and add the outline. I assumed that you want to do it in a single render pass.
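To illustrate the two-render idea, here is a minimal sketch of the horizontal half of a separable Gaussian blur; running a second, identical pass with offset = vec2(0.0, 0.005) over the result completes the blur. This is an alternative I am sketching under my own naming assumptions, not part of the single-pass shader below:

precision highp float;
varying vec2 vecUV;
uniform sampler2D inputImageTexture;

void main() {
    const int kernelRadius = 5;      // 11 taps per pass instead of 11 x 11 in one pass
    vec2 offset = vec2(0.005, 0.0);  // horizontal step between taps, in UV space
    float sigma = 7.0;
    vec4 sum = vec4(0.0);
    float weightSum = 0.0;
    for (int i = -kernelRadius; i <= kernelRadius; ++i) {
        float w = exp(-0.5 * float(i) * float(i) / (sigma * sigma));
        sum += w * texture2D(inputImageTexture, vecUV + float(i) * offset);
        weightSum += w;
    }
    gl_FragColor = sum / weightSum;  // normalize so the weights sum to 1
}

This turns the per-fragment sampling cost from O(N^2) into O(2N).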
The following vertex and fragment shaders accomplish this:
Vertex Shader
varying vec2 vecUV;
varying vec3 vecPos;
varying vec3 vecNormal;

void main() {
    vecUV = uv * 3.0 - 1.0;
    vecPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
    vecNormal = (modelViewMatrix * vec4(normal, 0.0)).xyz;
    gl_Position = projectionMatrix * vec4(vecPos, 1.0);
}
Fragment Shader
precision highp float;
varying vec2 vecUV;
varying vec3 vecPos;
varying vec3 vecNormal;
uniform sampler2D inputImageTexture;

float normalProbabilityDensityFunction(in float x, in float sigma)
{
    // 0.39894 is 1/sqrt(2*pi)
    return 0.39894 * exp(-0.5 * x * x / (sigma * sigma)) / sigma;
}

vec4 gaussianBlur()
{
    // The gaussian operator size.
    // The higher this number, the better quality the outline will be,
    // but it is expensive! O(n^2)
    const int matrixSize = 11;
    // How far apart (in UV coordinates) each cell in the Gaussian blur is.
    // Increase this for larger outlines!
    vec2 offset = vec2(0.005, 0.005);

    const int kernelSize = (matrixSize - 1) / 2;
    float kernel[matrixSize];

    // Create the 1-D kernel using a sigma
    float sigma = 7.0;
    for (int j = 0; j <= kernelSize; ++j)
    {
        kernel[kernelSize + j] = kernel[kernelSize - j] = normalProbabilityDensityFunction(float(j), sigma);
    }

    // Generate the normalization factor: the 1-D kernel sums to
    // normalizationFactor, so the full 2-D kernel sums to its square.
    float normalizationFactor = 0.0;
    for (int j = 0; j < matrixSize; ++j)
    {
        normalizationFactor += kernel[j];
    }
    normalizationFactor = normalizationFactor * normalizationFactor;

    // Apply the kernel to the fragment
    vec4 outputColor = vec4(0.0);
    for (int i = -kernelSize; i <= kernelSize; ++i)
    {
        for (int j = -kernelSize; j <= kernelSize; ++j)
        {
            float kernelValue = kernel[kernelSize + j] * kernel[kernelSize + i];
            vec2 sampleLocation = vecUV.xy + vec2(float(i) * offset.x, float(j) * offset.y);
            // "sample" is a reserved word in newer GLSL versions, so use another name.
            vec4 sampleColor = texture2D(inputImageTexture, sampleLocation);
            outputColor += kernelValue * sampleColor;
        }
    }

    // Divide by the normalization factor, so the weights sum to 1.
    // (It was already squared above, so don't square it again here.)
    outputColor = outputColor / normalizationFactor;
    return outputColor;
}
void main()
{
    // After blurring, what alpha threshold should we use for the outline?
    float alphaThreshold = 0.3;
    // How smooth should the edges of the outline be?
    float outlineSmoothness = 0.1;
    // The outline color
    vec4 outlineColor = vec4(1.0, 1.0, 1.0, 1.0);

    // Sample the original image and generate a blurred version using a gaussian blur
    vec4 originalImage = texture2D(inputImageTexture, vecUV);
    vec4 blurredImage = gaussianBlur();

    float alpha = smoothstep(alphaThreshold - outlineSmoothness, alphaThreshold + outlineSmoothness, blurredImage.a);
    vec4 outlineFragmentColor = mix(vec4(0.0), outlineColor, alpha);
    gl_FragColor = mix(outlineFragmentColor, originalImage, originalImage.a);
}
This is the result I got:
And here it is for the same image as yours, with matrixSize = 33, alphaThreshold = 0.05:
To get crisper results we can tweak the parameters. Here is an example with matrixSize = 111, offset = vec2(0.002, 0.002), alphaThreshold = 0.01, outlineSmoothness = 0.0. Note that increasing matrixSize heavily impacts performance, which is a limitation of rendering this outline in a single shader pass.
I tested the shader on this site. Hopefully you will be able to adapt it to your solution.
Regarding references, I used quite a lot of this shadertoy example as a basis for the code I wrote for this answer.
In my filters, the smoothness is achieved by a simple box blur on the border. You have decided that alpha > 0.4 is a border. Alpha values between 0 and 0.4 in the surrounding pixels give an edge; just blur this edge with a 3x3 window to get a smooth edge.
if (current.a != 1.0) {
    // other processing
    if (current.a > 0.4) {
        if (top.a < 0.4 || bottom.a < 0.4 || left.a < 0.4 || right.a < 0.4 ||
            topLeft.a < 0.4 || topRight.a < 0.4 || bottomLeft.a < 0.4 || bottomRight.a < 0.4)
        {
            // Implement 3x3 box blur here
        }
    }
}
You need to tweak which edge pixels you blur. The basic issue is that the image drops straight from an opaque pixel to a transparent one; what you need is a gradual transition.
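As a rough illustration of what could go in the "Implement 3x3 box blur here" spot, this sketch reuses the nine samples the question's shader already fetches, averages their alpha, and fades the border color in by that amount. It is my reading of the suggestion, not tested code:

// Average the 3x3 neighbourhood's alpha to get a soft edge weight.
float blurredAlpha = (current.a + top.a + bottom.a + left.a + right.a +
                      topLeft.a + topRight.a + bottomLeft.a + bottomRight.a) / 9.0;
// Blend the border color in proportionally instead of switching it on or off.
current = mix(current, color, blurredAlpha);

This replaces the hard alpha > 0.4 decision with the gradual transition described above.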
Another option is quick anti-aliasing. From my comment below: scale up 200% and then scale down 50% back to the original size, using nearest-neighbour scaling. This technique is sometimes used to smooth the edges of text.
There is a problem that I just can't seem to get a handle on.
I have a fragment shader:
precision mediump float;
uniform vec3 u_AmbientColor;
uniform vec3 u_LightPos;
uniform float u_Attenuation_Constant;
uniform float u_Attenuation_Linear;
uniform float u_Attenuation_Quadradic;
uniform vec3 u_LightColor;
varying vec3 v_Normal;
varying vec3 v_fragPos;

vec4 fix(vec3 v);

void main() {
    vec3 color = vec3(1.0, 1.0, 1.0);
    vec3 vectorToLight = u_LightPos - v_fragPos;
    float distance = length(vectorToLight);
    vec3 direction = vectorToLight / distance;
    float attenuation = 1.0 / (u_Attenuation_Constant +
        u_Attenuation_Linear * distance + u_Attenuation_Quadradic * distance * distance);
    vec3 diffuse = u_LightColor * attenuation * max(normalize(v_Normal) * direction, 0.0);
    vec3 d = u_AmbientColor + diffuse;
    gl_FragColor = fix(color * d);
}

vec4 fix(vec3 v) {
    float r = min(1.0, max(0.0, v.r));
    float g = min(1.0, max(0.0, v.g));
    float b = min(1.0, max(0.0, v.b));
    return vec4(r, g, b, 1.0);
}
I've been following a tutorial I found on the web. The ambient color and light color uniforms are (0.2, 0.2, 0.2) and (1.0, 1.0, 1.0) respectively. v_Normal is calculated in the vertex shader using the inverse transposed model-view matrix, and v_fragPos is the result of multiplying the position by the plain model-view matrix.
Now, I expect that when I move the light position closer to the cube I render, the cube will just appear brighter, but the resulting image is very different:
(the little square there is an indicator for the light position)
Now, I just don't understand how this can happen. I multiply each of the color components by the SAME value, so how can the result vary so much?
EDIT: I noticed that if I move the camera in front of the cube, the light is just shades of blue. It's the same problem, but maybe it's a clue, I don't know.
The Lambertian reflectance is computed with the dot product of the normal vector and the vector to the light source, not their component-wise product.
See How does the calculation of the light model work in a shader program?
Use the dot function instead of the * (multiplication) operator. Replace
vec3 diffuse = u_LightColor * attenuation * max(normalize(v_Normal) * direction, 0.0);
with
vec3 diffuse = u_LightColor * attenuation * max(dot(normalize(v_Normal), direction), 0.0);
You can also simplify the code in the fix function: min and max can be replaced with clamp. These functions work component-wise, so they do not have to be called separately for each component:
vec4 fix(vec3 v)
{
    return vec4(clamp(v, 0.0, 1.0), 1.0);
}
I am trying to implement a warp (black hole) shader. It works great on desktop but looks wrong on mobile devices. The problem is its size: when I increase the size of the black hole, the warped edges disappear, but with a very small size it looks as I want. I am using LibGDX.
How it should look (desktop):
How it looks on a mobile device with the same size:
When I decrease the size, the warped edges appear:
Vertex shader:
attribute vec2 a_position;

void main()
{
    gl_Position = vec4(a_position.xy, 0.0, 1.0);
}
Fragment shader:
#ifdef GL_ES
precision mediump float;
#endif
uniform sampler2D u_texture;
uniform vec2 u_res;
uniform float u_size;

// Gravitational field at position pos, given a black hole
// of mass m at position center (ignoring the G constant).
vec2 Fgrav(float m, vec2 center, vec2 pos)
{
    vec2 dir = center - pos;
    float distance = length(dir);
    dir /= distance;
    return dir * (m / (distance * distance));
}

void main(void)
{
    vec2 texCoord = gl_FragCoord.xy / u_res.xy;
    vec4 sceneColor;
    vec2 blackHoleCenters = vec2(u_res.xy * .5);
    vec2 netGrav = Fgrav((50. - u_size) * 500., blackHoleCenters, gl_FragCoord.xy);
    float netGravMag = length(netGrav);
    if (netGravMag < 1.0)
    {
        texCoord = (gl_FragCoord.xy + netGrav * ((50. - u_size) * 15.0)) / u_res.xy;
    }
    else texCoord.xy = vec2(.5, .5);
    sceneColor = texture2D(u_texture, texCoord);
    gl_FragColor = sceneColor;
}
How can I fix this issue? Thanks.
UPDATE:
This is how I get the u_size parameter:
public float getDistance(Camera cam) {
    return (float) Math.sqrt(Math.pow(cam.position.x, 2) + Math.pow(cam.position.y, 2) + Math.pow(cam.position.z, 2));
}

public void render(Camera cam, Mesh quad) {
    program.begin();
    program.setUniformf(u_size, getDistance(cam));
    ...
Initialization of the camera:
cam = new PerspectiveCamera(64, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
cam.position.set(0f, 0.0f, 30.0f);
cam.lookAt(0,0.0f,0);
cam.near = 0.1f;
cam.far = 450f;
cam.update();
So the initial value of u_size is 30, which makes the shader's mass term (50.0 - u_size) * 500.0 = 10000 at startup; as the camera distance changes, u_size changes the strength of the warp.
My terrain uses a shader that combines four different textures. It runs fine on Windows and Linux machines, but on Android it gets only ~25 FPS on both Galaxy devices. I thought the textures were the problem, but no: as it turns out, the problem is the part where I divide the texture coordinates and use fract to get tiled coordinates. Without it, I get 60 FPS.
// Material data.
//uniform vec3 uAmbient;
//uniform vec3 uDiffuse;
//uniform vec3 uLightPos[8];
//uniform vec3 uEyePos;
//uniform vec3 uFogColor;
uniform sampler2D terrain_blend;
uniform sampler2D grass;
uniform sampler2D rock;
uniform sampler2D dirt;
varying vec2 varTexCoords;
//varying vec3 varEyeNormal;
//varying float varFogWeight;

//------------------------------------------------------------------
// Name: fog
// Desc: applies calculated fog weight to fog color and mixes with
//       specified color.
//------------------------------------------------------------------
//vec4 fog(vec4 color) {
//    return mix(color, vec4(uFogColor, 1.0), varFogWeight);
//}

void main(void)
{
    /*vec3 N = normalize(varEyeNormal);
    vec3 L = normalize(uLightPos[0]);
    vec3 H = normalize(L + normalize(uEyePos));
    float df = max(0.0, dot(N, L));
    vec3 col = uAmbient + uDiffuse * df;*/
    // Take color information from textures and tile them.
    vec2 tiledCoords = varTexCoords;
    //vec2 tiledCoords = fract(varTexCoords / 0.05); // <========= HERE!!!!!!!!!
    vec4 colGrass = texture2D(grass, tiledCoords);
    vec4 colDirt = texture2D(dirt, tiledCoords);
    vec4 colRock = texture2D(rock, tiledCoords);
    // Take color information from the untiled blend map.
    vec4 colBlend = texture2D(terrain_blend, varTexCoords);
    // Find the inverse of the sum of all the blend weights.
    float inverse = 1.0 / (colBlend.r + colBlend.g + colBlend.b);
    // Scale each color by its corresponding weight.
    colGrass *= colBlend.r * inverse;
    colDirt *= colBlend.g * inverse;
    colRock *= colBlend.b * inverse;
    vec4 final = colGrass + colDirt + colRock;
    //final = fog(final);
    gl_FragColor = final;
}
Note: there is some more code for light and fog calculation, but it isn't used. I indicated the line that, when uncommented, causes the massive lag. I tried using floor and computing the fractional part manually, but the lag is the same. What might be wrong?
EDIT: Now here's what I don't understand.

This runs great:
vec2 tiledCoords = fract(varTexCoords * 2.0);

This runs average on the SIII:
vec2 tiledCoords = fract(varTexCoords * 10.0);

This lags:
vec2 tiledCoords = fract(varTexCoords * 20.0);

And with this, well, 5 FPS is still better than I expected:
vec2 tiledCoords = fract(varTexCoords * 100.0);

So what gives? Why is this happening? To my understanding this shouldn't make any difference, but it does, and a huge one.
I would run your code through a profiler (check the Mali-400 tools), but by the looks of it you are thrashing the texture cache. For the first pixel computed, all four texture look-ups are fetched, and the contiguous data is fetched into the texture cache as well. For the next pixel, however, you are not accessing data already in the cache but data quite far away (10, 20, etc.), which completely defeats the purpose of such a cache.
This is of course a guess; without proper profiling it is hard to tell.
EDIT: @harism also pointed you in that direction.
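A commonly suggested workaround, sketched here as my own suggestion rather than the answer's: drop fract() from the fragment shader entirely, pre-scale the coordinates in the vertex shader, and create the grass/rock/dirt textures with GL_REPEAT wrapping and mipmaps, so neighbouring fragments read neighbouring texels again. The attribute and uniform names are assumptions:

attribute vec4 aPosition;
attribute vec2 aTexCoord;
uniform mat4 uMvpMatrix;
varying vec2 varTexCoords;   // 0..1, for the blend map
varying vec2 varTiledCoords; // scaled; GL_REPEAT wraps it for grass/rock/dirt

void main() {
    varTexCoords = aTexCoord;
    varTiledCoords = aTexCoord * 20.0; // tiling factor, same visual effect as fract(coords * 20.0)
    gl_Position = uMvpMatrix * aPosition;
}

The fragment shader then samples grass, rock and dirt with varTiledCoords directly. Because the coordinates stay continuous, the hardware's derivative-based mip selection and the texture cache keep working.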
I'm developing a 2D live wallpaper for Android using libgdx and its SpriteBatch class. It is a watch, so all I need from libgdx is drawing multiple textures rotated to specific angles.
I've solved this problem using the SpriteBatch class, and all was fine.
But now I need to add a magnifying lens effect to my scene. The task is to magnify a specific area with a specific ratio to simulate a real lens.
I've done this by rendering to a FrameBuffer, getting a texture region from the FrameBuffer and, finally, drawing this region with a custom shader using SpriteBatch.
THE PROBLEM: on Samsung devices (I've tried the Galaxy Note N7000 and the Galaxy Tab 10.1) there is a small rectangle in the center of the magnified area, about 16x16 px, where the magnification ratio slightly increases. Sorry, I can't post a screenshot now, because I don't have a Samsung device at hand.
On other devices everything works fine; I tried the HTC Vivid, Acer A500 and Google Nexus One.
I think the problem is in the Mali GPU on the Samsung devices, but I don't know how to fix it.
I've adapted the fragment shader code from this question to SpriteBatch: Add Fisheye effect to images at runtime using OpenGL ES.
Here it is:
#ifdef GL_ES
#define LOWP lowp
precision mediump float;
#else
#define LOWP
#endif
varying LOWP vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;

void main() {
    vec2 m = vec2(0.622, 0.4985); // lens center point; note that in the live wallpaper the center of the texture is (0.5, 0.5)
    float lensSize = 0.045;  // diameter of the lens
    float lensCut = 0.0342;  // length of the cut lines
    vec2 d = v_texCoords - m;
    float r = sqrt(dot(d, d)); // distance of the pixel from the lens center
    float cuttop = m.y + lensCut;
    float cutbottom = m.y - lensCut;
    vec2 uv;
    if (r >= lensSize) {
        uv = v_texCoords;
    } else if (v_texCoords.y >= cuttop) {
        uv = v_texCoords;
    } else if (v_texCoords.y <= cutbottom) {
        uv = v_texCoords;
    } else {
        uv = m + normalize(d) * asin(r) / (3.14159 * 0.37);
    }
    gl_FragColor = texture2D(u_texture, uv);
}
The vertex shader is the default one from the SpriteBatch sources:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}
Here is my render() method:
GL20 gl = Gdx.graphics.getGL20();
gl.glEnable(GL20.GL_TEXTURE_2D);
gl.glActiveTexture(GL20.GL_TEXTURE0);
gl.glClearColor(WallpaperService.bg_red, WallpaperService.bg_green, WallpaperService.bg_blue, 1f);
gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

// Recreate the framebuffer if it has been lost
if (mFrameBuffer == null) {
    mFrameBuffer = new FrameBuffer(Format.RGBA4444, BG_WIDTH, BG_HEIGHT, true);
    mFrameBufferRegion = new TextureRegion(mFrameBuffer.getColorBufferTexture());
    mFrameBufferRegion.flip(false, true);
}

// Camera setup according to the background texture
camera = new OrthographicCamera(BG_WIDTH, BG_HEIGHT);
camera.position.set(BG_WIDTH / 2, BG_WIDTH / 2, 0);
camera.update();
batch.setProjectionMatrix(camera.combined);

// Render the scene to the framebuffer
mFrameBuffer.begin();
batch.begin();
// main drawing goes here
batch.end();

// Draw the frame to the screen, applying the shader for the lens effect
if (mFrameBuffer != null) {
    mFrameBuffer.end();
    camera = new OrthographicCamera(width, height);
    camera.position.set(width / 2, height / 2, 0);
    camera.update();
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    batch.setShader(shader);
    batch.draw(mFrameBufferRegion, width / 2 - (BG_WIDTH / 2) + (MARGIN_BG_LEFT * ratio), height / 2
            - (BG_HEIGHT / 2) + (MARGIN_BG_TOP * ratio), (float) BG_WIDTH / 2, (float) BG_HEIGHT / 2,
            (float) BG_WIDTH, (float) BG_HEIGHT, (float) ratio, (float) ratio, 0f);
    batch.setShader(null);
    batch.end();
}
I've managed to fix this issue. The problem is in the Mali GPU: Mali doesn't support highp float calculations in the fragment shader. When the distance from the center to the current point was near zero, the r variable became zero. So I added a constant scale coefficient to my calculations, and this did the trick.
Here is the working fragment shader:
precision mediump float;
varying lowp vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
const float PI = 3.14159;
const float koef = 10.0;

void main() {
    vec2 uv;
    vec2 m = vec2(0.622, 0.4985); // lens center point; note that in the live wallpaper the center of the texture is (0.5, 0.5)
    float lensSize = 0.045;  // radius of the lens
    float lensCut = 0.0342;  // height of the cut
    float cuttop = m.y + lensCut;
    float cutbottom = m.y - lensCut;
    float cutleft = m.x - lensSize;
    float cutright = m.x + lensSize;

    // Don't transform pixels that aren't in the lens area.
    if (v_texCoords.y >= cuttop) {
        uv = v_texCoords;
    } else if (v_texCoords.y <= cutbottom) {
        uv = v_texCoords;
    } else if (v_texCoords.x <= cutleft) {
        uv = v_texCoords;
    } else if (v_texCoords.x >= cutright) {
        uv = v_texCoords;
    } else {
        vec2 p = v_texCoords * koef;     // current point, scaled up to preserve precision
        vec2 d = p - m * koef;           // vector difference between the current point and the center point
        float r = distance(m * koef, p); // distance of the pixel from the center
        if (r / koef >= lensSize) {
            uv = v_texCoords;
        } else {
            uv = m + normalize(d) * asin(r) / (PI * 0.37 * koef);
        }
    }
    gl_FragColor = texture2D(u_texture, uv);
}