I want to implement background blurring for the selfie camera.
I have a blurring fragment shader:
precision mediump float;
uniform sampler2D uSampler;
uniform float uBlur;
uniform float uRadius;
varying vec2 vTextureCoord;

void main() {
    vec3 sum = vec3(0.0);
    if (uBlur > 0.0) {
        for (float i = -uBlur; i < uBlur; i++) {
            for (float j = -uBlur; j < uBlur; j++) {
                sum += texture2D(uSampler, vTextureCoord + vec2(i, j) * (uRadius / uBlur)).rgb / pow(uBlur * 2.0, 2.0);
            }
        }
    } else {
        sum = texture2D(uSampler, vTextureCoord).rgb;
    }
    gl_FragColor = vec4(sum, 1.0);
}
Also, I have an array that indicates where the blurring effect should be applied.
My idea is to pass the array to the fragment shader somehow and skip the coordinates that should not be blurred.
Is there any way to do that, or is there another approach I should follow?
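To make the idea concrete, here is a minimal sketch of how the mask could be uploaded as a second texture and sampled per fragment, mixing instead of branching (uMask, uMaskLocation and maskByteBuffer are made-up names, and the mask is assumed to be packed as one byte per pixel):

int[] tex = new int[1];
GLES20.glGenTextures(1, tex, 0);
GLES20.glActiveTexture(GLES20.GL_TEXTURE1); // unit 1; unit 0 holds the camera frame
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, tex[0]);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
// One byte per pixel; the shader reads it back as the .r channel in 0..1.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, maskWidth, maskHeight,
        0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, maskByteBuffer);
// With the shader program in use, point the new uniform at texture unit 1.
GLES20.glUniform1i(uMaskLocation, 1);
// The fragment shader above would gain "uniform sampler2D uMask;" and end with:
//     float m = texture2D(uMask, vTextureCoord).r; // 1.0 = blur, 0.0 = keep sharp
//     gl_FragColor = vec4(mix(texture2D(uSampler, vTextureCoord).rgb, sum, m), 1.0);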
UPD0:
The mask is generated dynamically several times per second. The mask data is contained in the com.google.mlkit.vision.segmentation.SegmentationMask class - it holds a ByteBuffer along with the mask's width and height.
I tried to generate a bitmap with
Bitmap.createBitmap(maskColorsFromByteBuffer(mask), maskWidth, maskHeight, Config.ARGB_8888)
and
private int[] maskColorsFromByteBuffer(ByteBuffer byteBuffer) {
    @ColorInt int[] colors = new int[maskWidth * maskHeight];
    for (int i = 0; i < maskWidth * maskHeight; i++) {
        float backgroundLikelihood = 1 - byteBuffer.getFloat();
        if (backgroundLikelihood > 0.9) {
            colors[i] = Color.argb(128, 51, 112, 69);
        } else if (backgroundLikelihood > 0.2) {
            // Linear interpolation to make sure when backgroundLikelihood is 0.2, the alpha is 0 and
            // when backgroundLikelihood is 0.9, the alpha is 128.
            // +0.5 to round the float value to the nearest int.
            int alpha = (int) (182.9 * backgroundLikelihood - 36.6 + 0.5);
            colors[i] = Color.argb(alpha, 51, 112, 69);
        }
    }
    return colors;
}
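(The interpolation constants come from solving a * 0.2 + c = 0 and a * 0.9 + c = 128 for the linear map: a = 128 / 0.7 ≈ 182.86 and c = -0.2 * a ≈ -36.57, rounded in the code to 182.9 and -36.6.)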
After generating the bitmap I tried to draw it with ImageObjectFilterRender - as far as I can see, that class generates a texture from the bitmap.
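Packing the float confidences into one byte per pixel instead would make the buffer directly uploadable as a single-channel mask texture (GL_LUMINANCE), skipping the ARGB bitmap entirely. A hypothetical helper along the lines of maskColorsFromByteBuffer, reusing maskWidth and maskHeight from above:

private ByteBuffer byteMaskFromFloatBuffer(ByteBuffer byteBuffer) {
    // One byte per pixel, suitable for glTexImage2D(..., GL_LUMINANCE, ..., GL_UNSIGNED_BYTE, ...).
    ByteBuffer mask = ByteBuffer.allocateDirect(maskWidth * maskHeight);
    byteBuffer.rewind();
    for (int i = 0; i < maskWidth * maskHeight; i++) {
        float backgroundLikelihood = 1 - byteBuffer.getFloat();
        mask.put((byte) (backgroundLikelihood * 255.0f + 0.5f)); // 255 = fully blurred
    }
    mask.rewind();
    return mask;
}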
Related
I have been trying to draw the border of an image with a transparent background using OpenGL in Android. I am using a fragment shader and a vertex shader (from the GPUImage library).
Below I have added Fig. A and Fig. B.
[Fig. A: the rough border I currently get]
[Fig. B: the smooth border I want]
I have achieved Fig. A with the customised fragment shader, but I am unable to make the border smoother, as in Fig. B. I am attaching the shader code I used (to achieve the rough border). Can someone help me make the border smoother?
Here is my vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;

void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
Here is my fragment shader:
I sample the 8 pixels around the current pixel; if any of those 8 is opaque (alpha greater than 0.4), the current pixel is drawn in the border color.
precision mediump float;
uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;
uniform lowp float thickness;
uniform lowp vec4 color;

void main() {
    float x = textureCoordinate.x;
    float y = textureCoordinate.y;
    vec4 current = texture2D(inputImageTexture, vec2(x, y));
    if (current.a != 1.0) {
        float offset = thickness * 0.5;
        vec4 top = texture2D(inputImageTexture, vec2(x, y - offset));
        vec4 topRight = texture2D(inputImageTexture, vec2(x + offset, y - offset));
        vec4 topLeft = texture2D(inputImageTexture, vec2(x - offset, y - offset));
        vec4 right = texture2D(inputImageTexture, vec2(x + offset, y));
        vec4 bottom = texture2D(inputImageTexture, vec2(x, y + offset));
        vec4 bottomLeft = texture2D(inputImageTexture, vec2(x - offset, y + offset));
        vec4 bottomRight = texture2D(inputImageTexture, vec2(x + offset, y + offset));
        vec4 left = texture2D(inputImageTexture, vec2(x - offset, y));
        if (top.a > 0.4 || bottom.a > 0.4 || left.a > 0.4 || right.a > 0.4 || topLeft.a > 0.4 || topRight.a > 0.4 || bottomLeft.a > 0.4 || bottomRight.a > 0.4) {
            if (current.a != 0.0) {
                current = mix(color, current, current.a);
            } else {
                current = color;
            }
        }
    }
    gl_FragColor = current;
}
You were almost on the right track.
The main algorithm is:
Blur the image.
Use pixels with opacity above a certain threshold as the outline.
The main problem is the blur step. It needs to be a large, smooth blur to get the smooth outline you want. For the blur we can use a convolution kernel, and to achieve a large blur we should use a large kernel. I suggest Gaussian weights, since the Gaussian blur is very well known and widely used.
The overview of the algorithm is:
For each fragment, we sample many locations around it. The samples are made in an N by N grid. We average them together using weights that follow a 2D Gaussian Distribution.
This results in a blurred image.
With the blurred image, we paint the fragments that have alpha greater than a threshold with our outline color. And, of course, any opaque pixels in the original image should also appear in the result.
On a side note, your solution is almost a blur with a 3 x 3 kernel (you sample locations around the fragment in a 3 by 3 grid). However, a 3 x 3 kernel won't give you the amount of blur you need. You need more samples (e.g. 11 x 11). Also, the weights closer to the center should have a greater impact on the result, so uniform weights won't work very well.
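For reference, the 2D Gaussian weight at offset (x, y) from the fragment is G(x, y) = exp(-(x^2 + y^2) / (2 sigma^2)) / (2 pi sigma^2). Because G(x, y) = g(x) * g(y), where g(t) = exp(-t^2 / (2 sigma^2)) / (sqrt(2 pi) * sigma), the 2D kernel can be built as the product of two 1D kernels - which is exactly what the shader below does, and its constant 0.39894 is 1 / sqrt(2 pi).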
Oh, and one more important thing:
A single shader is NOT the fastest way to accomplish this. Usually this would be done with 2 separate renders: the first would render the image as usual, and the second would blur it and add the outline (a rough sketch of that setup follows). Here I assumed that you want to do it in a single render.
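For reference, here is a sketch (hypothetical handles, no error checking) of how the two-render setup could look on Android with GLES20 - pass 1 draws the image into an offscreen texture, pass 2 runs the blur/outline shader over that texture:

int[] fbo = new int[1];
int[] fboTex = new int[1];
GLES20.glGenFramebuffers(1, fbo, 0);
GLES20.glGenTextures(1, fboTex, 0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, fboTex[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
        GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fbo[0]);
GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
        GLES20.GL_TEXTURE_2D, fboTex[0], 0);
// ... pass 1: render the image into the FBO as usual ...
GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
// ... pass 2: bind fboTex[0] as the input texture and draw the blur/outline shader ...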
The following is a vertex and fragment shader that accomplish this:
Vertex Shader
varying vec2 vecUV;
varying vec3 vecPos;
varying vec3 vecNormal;

void main() {
    vecUV = uv * 3.0 - 1.0;
    vecPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
    vecNormal = (modelViewMatrix * vec4(normal, 0.0)).xyz;
    gl_Position = projectionMatrix * vec4(vecPos, 1.0);
}
Fragment Shader
precision highp float;

varying vec2 vecUV;
varying vec3 vecPos;
varying vec3 vecNormal;

uniform sampler2D inputImageTexture;

float normalProbabilityDensityFunction(in float x, in float sigma)
{
    return 0.39894 * exp(-0.5 * x * x / (sigma * sigma)) / sigma;
}

vec4 gaussianBlur()
{
    // The gaussian operator size
    // The higher this number, the better quality the outline will be
    // But this number is expensive! O(n²)
    const int matrixSize = 11;
    // How far apart (in UV coordinates) each cell in the Gaussian Blur is
    // Increase this for larger outlines!
    vec2 offset = vec2(0.005, 0.005);

    const int kernelSize = (matrixSize - 1) / 2;
    float kernel[matrixSize];

    // Create the 1-D kernel using a sigma
    float sigma = 7.0;
    for (int j = 0; j <= kernelSize; ++j)
    {
        kernel[kernelSize + j] = kernel[kernelSize - j] = normalProbabilityDensityFunction(float(j), sigma);
    }

    // Generate the normalization factor: the sum of the 1-D kernel.
    // Each 2-D weight is a product of two 1-D weights, so the full 2-D sum
    // is this value squared (see the division below).
    float normalizationFactor = 0.0;
    for (int j = 0; j < matrixSize; ++j)
    {
        normalizationFactor += kernel[j];
    }

    // Apply the kernel to the fragment
    vec4 outputColor = vec4(0.0);
    for (int i = -kernelSize; i <= kernelSize; ++i)
    {
        for (int j = -kernelSize; j <= kernelSize; ++j)
        {
            float kernelValue = kernel[kernelSize + j] * kernel[kernelSize + i];
            vec2 sampleLocation = vecUV.xy + vec2(float(i) * offset.x, float(j) * offset.y);
            vec4 sampleColor = texture2D(inputImageTexture, sampleLocation);
            outputColor += kernelValue * sampleColor;
        }
    }

    // Divide by the normalization factor, so the weights sum to 1
    outputColor = outputColor / (normalizationFactor * normalizationFactor);
    return outputColor;
}

void main()
{
    // After blurring, what alpha threshold should we define as outline?
    float alphaThreshold = 0.3;
    // How smooth should the edges of the outline be?
    float outlineSmoothness = 0.1;
    // The outline color
    vec4 outlineColor = vec4(1.0, 1.0, 1.0, 1.0);

    // Sample the original image and generate a blurred version using a gaussian blur
    vec4 originalImage = texture2D(inputImageTexture, vecUV);
    vec4 blurredImage = gaussianBlur();

    float alpha = smoothstep(alphaThreshold - outlineSmoothness, alphaThreshold + outlineSmoothness, blurredImage.a);
    vec4 outlineFragmentColor = mix(vec4(0.0), outlineColor, alpha);
    gl_FragColor = mix(outlineFragmentColor, originalImage, originalImage.a);
}
This is the result I got:
And for the same image as yours, with matrixSize = 33 and alphaThreshold = 0.05:
To try to get crisper results we can tweak the parameters. Here is an example with matrixSize = 111, offset = vec2(0.002, 0.002), alphaThreshold = 0.01 and outlineSmoothness = 0.00. Note that increasing matrixSize heavily impacts performance, which is a limitation of rendering this outline with only one shader pass.
I tested the shader on this site. Hopefully you will be able to adapt it to your solution.
Regarding references, I used this shadertoy example quite heavily as a basis for the code I wrote in this answer.
In my filters, the smoothness is achieved by a simple box blur on the border. You have decided that alpha > 0.4 is a border; alpha values between 0 and 0.4 in the surrounding pixels give an edge. Just blur this edge with a 3x3 window to get a smooth edge.
if (current.a != 1.0) {
    // other processing
    if (current.a > 0.4) {
        if (top.a < 0.4 || bottom.a < 0.4 || left.a < 0.4 || right.a < 0.4
                || topLeft.a < 0.4 || topRight.a < 0.4 || bottomLeft.a < 0.4 || bottomRight.a < 0.4)
        {
            // Implement 3x3 box blur here
        }
    }
}
You need to tweak which edge pixels you blur. The basic issue is that the image drops straight from an opaque pixel to a transparent one - what you need is a gradual transition.
Another option is quick anti-aliasing. From my comment below: scale up to 200% and then scale back down 50% to the original size, using nearest-neighbour scaling. This technique is sometimes used to smooth the edges of text; a rough sketch follows.
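On the Java side that round trip could look like the following sketch (assuming the rendered output is available as a Bitmap named src; createScaledBitmap's boolean filter flag toggles between nearest-neighbour (false) and bilinear (true), so either leg can be switched to taste):

// Quick anti-aliasing: upscale 200%, then downscale 50% back to the original size.
Bitmap doubled = Bitmap.createScaledBitmap(src, src.getWidth() * 2, src.getHeight() * 2, false);
Bitmap smoothed = Bitmap.createScaledBitmap(doubled, src.getWidth(), src.getHeight(), true);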
There are dozens of image filters written for the Android version of our app in GLSL (ES). As of iOS 12, OpenGL is deprecated, and CIFilter kernels have to be written in Metal.
I had some previous background in OpenGL; however, writing CIFilter kernels in Metal is new to me.
Here is one of the filters. Could you help me translate it to Metal as a CIFilter kernel? That would give me a good example so I could translate the others.
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES sTexture;
uniform float texelWidth;
uniform float texelHeight;
uniform float intensivity;

void main() {
    float SIZE = 1.25 + (intensivity / 100.0) * 2.0;
    vec4 color;
    float min = 1.0;
    float max = 0.0;
    float val = 0.0;
    for (float x = -SIZE; x < SIZE; x++) {
        for (float y = -SIZE; y < SIZE; y++) {
            color = texture2D(sTexture, vTextureCoord + vec2(x * texelWidth, y * texelHeight));
            val = (color.r + color.g + color.b) / 3.;
            if (val > max) { max = val; } else if (val < min) { min = val; }
        }
    }
    float range = 5. * (max - min);
    gl_FragColor = vec4(pow(1. - range, SIZE * 1.5));
    gl_FragColor = vec4((gl_FragColor.r + gl_FragColor.g + gl_FragColor.b) / 3. > 0.75 ? vec3(1.) : gl_FragColor.rgb, 1.);
}
Here's the Metal source for a kernel that attempts to replicate your described filter:
#include <metal_stdlib>
#include <CoreImage/CoreImage.h>

using namespace metal;

extern "C" {
namespace coreimage {

float4 sketch(sampler src, float texelWidth, float texelHeight, float intensity40) {
    float size = 1.25f + (intensity40 / 100.0f) * 2.0f;

    float minVal = 1.0f;
    float maxVal = 0.0f;
    for (float x = -size; x < size; ++x) {
        for (float y = -size; y < size; ++y) {
            float4 color = src.sample(src.coord() + float2(x * texelWidth, y * texelHeight));
            float val = (color.r + color.g + color.b) / 3.0f;
            if (val > maxVal) {
                maxVal = val;
            } else if (val < minVal) {
                minVal = val;
            }
        }
    }

    float range = 5.0f * (maxVal - minVal);
    float4 outColor(pow(1.0f - range, size * 1.5f));
    outColor = float4((outColor.r + outColor.g + outColor.b) / 3.0f > 0.75f ? float3(1.0f) : outColor.rgb, 1.0f);
    return outColor;
}

}
}
I assume you're already familiar with the basics of how to correctly build Metal shaders into a library that can be loaded by Core Image.
You can instantiate your kernel at runtime by loading the default Metal library and requesting the "sketch" function (the name is arbitrary, so long as it matches the kernel source):
NSURL *libraryURL = [NSBundle.mainBundle URLForResource:@"default" withExtension:@"metallib"];
NSData *libraryData = [NSData dataWithContentsOfURL:libraryURL];

NSError *error;
CIKernel *kernel = [CIKernel kernelWithFunctionName:@"sketch" fromMetalLibraryData:libraryData error:&error];
You can then apply this kernel to an image by wrapping it in your own CIFilter subclass, or just invoke it directly:
CIImage *outputImage = [kernel applyWithExtent:CGRectMake(0, 0, width, height)
                                   roiCallback:^CGRect(int index, CGRect destRect)
                                               { return destRect; }
                                     arguments:@[inputImage, @(1.0f/width), @(1.0f/height), @(60.0f)]];
I've tried to select sensible defaults for each of the arguments (the first of which should be an instance of CIImage), but of course these can be adjusted to taste.
First, I wrote the code below. When running it on a Nexus 6P, the performance was only 1 FPS.
vec4 show(vec2 start[100], vec2 end[100], int n, float scale) {
    vec4 color;
    vec2 loc = v_TexCoordinate;
    for (int i = 0; i < 4; i++) {
        loc = real_loc(start[i], end[i], scale, loc);
    }
    color = texture2D(u_TextureA, loc);
    return (1.0 - abs(scale)) * color;
}
After that, I modified it as below. The performance then went up to 60 FPS.
vec4 show(vec2 start[100], vec2 end[100], int n, float scale) {
    vec4 color;
    vec2 loc = v_TexCoordinate;
    for (int i = 0; i < 3; i++) {
        loc = real_loc(start[i], end[i], scale, loc);
    }
    loc = real_loc(start[3], end[3], scale, loc);
    color = texture2D(u_TextureA, loc);
    return (1.0 - abs(scale)) * color;
}
So my questions are:
1. Why is the performance so different?
2. How do I write high-performance GLSL code? I cannot unroll all loops by hand.
3. How can I find out what the compiler actually does to my code?
I am creating a game in Libgdx and I'm using an OpenGL water shader.
On desktop everything works fine (60 FPS without V-Sync), but on Android I get only 1 FPS (tested on a Samsung Galaxy S3 Neo and an HTC One).
My fragment shader:
#ifdef GL_ES
precision mediump float;
#endif

uniform vec3 iResolution;  // viewport resolution (in pixels)
uniform float iGlobalTime; // shader playback time (in seconds)

const int NUM_STEPS = 8;
const float PI = 3.1415;
const float EPSILON = 1e-3;
float EPSILON_NRM = 0.1 / iResolution.x;

// sea
const int ITER_GEOMETRY = 3;
const int ITER_FRAGMENT = 5;
const float SEA_HEIGHT = 0.6;
const float SEA_CHOPPY = 4.0;
const float SEA_SPEED = 0.8;
const float SEA_FREQ = 0.16;
const vec3 SEA_BASE = vec3(0.1, 0.19, 0.22);
const vec3 SEA_WATER_COLOR = vec3(0.8, 0.9, 0.6);
float SEA_TIME = iGlobalTime * SEA_SPEED;
mat2 octave_m = mat2(1.6, 1.2, -1.2, 1.6);

// math
mat3 fromEuler(vec3 ang) {
    vec2 a1 = vec2(sin(ang.x), cos(ang.x));
    vec2 a2 = vec2(sin(ang.y), cos(ang.y));
    vec2 a3 = vec2(sin(ang.z), cos(ang.z));
    mat3 m;
    m[0] = vec3(a1.y*a3.y+a1.x*a2.x*a3.x, a1.y*a2.x*a3.x+a3.y*a1.x, -a2.y*a3.x);
    m[1] = vec3(-a2.y*a1.x, a1.y*a2.y, a2.x);
    m[2] = vec3(a3.y*a1.x*a2.x+a1.y*a3.x, a1.x*a3.x-a1.y*a3.y*a2.x, a2.y*a3.y);
    return m;
}
float hash(vec2 p) {
    float h = dot(p, vec2(127.1, 311.7));
    return fract(sin(h) * 43758.5453123);
}
float noise(in vec2 p) {
    vec2 i = floor(p);
    vec2 f = fract(p);
    vec2 u = f*f*(3.0-2.0*f);
    return -1.0+2.0*mix( mix( hash( i + vec2(0.0,0.0) ), hash( i + vec2(1.0,0.0) ), u.x), mix( hash( i + vec2(0.0,1.0) ), hash( i + vec2(1.0,1.0) ), u.x), u.y);
}

// lighting
float diffuse(vec3 n, vec3 l, float p) {
    return pow(dot(n, l) * 0.4 + 0.6, p);
}
float specular(vec3 n, vec3 l, vec3 e, float s) {
    float nrm = (s + 8.0) / (3.1415 * 8.0);
    return pow(max(dot(reflect(e, n), l), 0.0), s) * nrm;
}

// sky
vec3 getSkyColor(vec3 e) {
    e.y = max(e.y, 0.0);
    vec3 ret;
    ret.x = pow(1.0 - e.y, 2.0);
    ret.y = 1.0 - e.y;
    ret.z = 0.6 + (1.0 - e.y) * 0.4;
    return ret;
}

// sea
float sea_octave(vec2 uv, float choppy) {
    uv += noise(uv);
    vec2 wv = 1.0 - abs(sin(uv));
    vec2 swv = abs(cos(uv));
    wv = mix(wv, swv, wv);
    return pow(1.0 - pow(wv.x * wv.y, 0.65), choppy);
}
float map(vec3 p) {
    float freq = SEA_FREQ;
    float amp = SEA_HEIGHT;
    float choppy = SEA_CHOPPY;
    vec2 uv = p.xz; uv.x *= 0.75;
    float d, h = 0.0;
    for (int i = 0; i < ITER_GEOMETRY; i++) {
        d = sea_octave((uv + SEA_TIME) * freq, choppy);
        d += sea_octave((uv - SEA_TIME) * freq, choppy);
        h += d * amp;
        uv *= octave_m; freq *= 1.9; amp *= 0.22;
        choppy = mix(choppy, 1.0, 0.2);
    }
    return p.y - h;
}
float map_detailed(vec3 p) {
    float freq = SEA_FREQ;
    float amp = SEA_HEIGHT;
    float choppy = SEA_CHOPPY;
    vec2 uv = p.xz; uv.x *= 0.75;
    float d, h = 0.0;
    for (int i = 0; i < ITER_FRAGMENT; i++) {
        d = sea_octave((uv + SEA_TIME) * freq, choppy);
        d += sea_octave((uv - SEA_TIME) * freq, choppy);
        h += d * amp;
        uv *= octave_m; freq *= 1.9; amp *= 0.22;
        choppy = mix(choppy, 1.0, 0.2);
    }
    return p.y - h;
}
vec3 getSeaColor(vec3 p, vec3 n, vec3 l, vec3 eye, vec3 dist) {
    float fresnel = 1.0 - max(dot(n, -eye), 0.0);
    fresnel = pow(fresnel, 3.0) * 0.65;
    vec3 reflected = getSkyColor(reflect(eye, n));
    vec3 refracted = SEA_BASE + diffuse(n, l, 80.0) * SEA_WATER_COLOR * 0.12;
    vec3 color = mix(refracted, reflected, fresnel);
    float atten = max(1.0 - dot(dist, dist) * 0.001, 0.0);
    color += SEA_WATER_COLOR * (p.y - SEA_HEIGHT) * 0.18 * atten;
    color += vec3(specular(n, l, eye, 60.0));
    return color;
}

// tracing
vec3 getNormal(vec3 p, float eps) {
    vec3 n;
    n.y = map_detailed(p);
    n.x = map_detailed(vec3(p.x + eps, p.y, p.z)) - n.y;
    n.z = map_detailed(vec3(p.x, p.y, p.z + eps)) - n.y;
    n.y = eps;
    return normalize(n);
}
float heightMapTracing(vec3 ori, vec3 dir, out vec3 p) {
    float tm = 0.0;
    float tx = 10000.0;
    float hx = map(ori + dir * tx);
    if (hx > 0.0) return tx;
    float hm = map(ori + dir * tm);
    float tmid = 0.0;
    for (int i = 0; i < NUM_STEPS; i++) {
        tmid = mix(tm, tx, hm / (hm - hx));
        p = ori + dir * tmid;
        float hmid = map(p);
        if (hmid < 0.0) {
            tx = tmid;
            hx = hmid;
        } else {
            tm = tmid;
            hm = hmid;
        }
    }
    return tmid;
}

// main
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord.xy / iResolution.xy;
    uv = uv * 2.0 - 1.65;
    uv.x *= iResolution.x / iResolution.y;
    float time = iGlobalTime * 0.3;

    // ray
    vec3 ang = vec3(0, 0, 0);
    vec3 ori = vec3(0.0, 20, 0);
    vec3 dir = normalize(vec3(uv.xy, -2.0)); dir.z += length(uv) * 0.15;
    dir = normalize(dir) * fromEuler(ang);

    // tracing
    vec3 p;
    heightMapTracing(ori, dir, p);
    vec3 dist = p - ori;
    vec3 n = getNormal(p, dot(dist, dist) * EPSILON_NRM);
    vec3 light = normalize(vec3(0.0, 1.0, 0.8));

    // color
    vec3 color = mix(
        getSkyColor(dir),
        getSeaColor(p, n, light, dir, dist),
        pow(smoothstep(0.0, -0.05, dir.y), 0.3));

    // post
    fragColor = vec4(pow(color, vec3(0.75)), 1.0);
}

void main() {
    vec4 color;
    mainImage(color, gl_FragCoord.xy);
    color.w = 1.0;
    gl_FragColor = color;
}
My vertex shader:
uniform mat4 u_projTrans;

varying vec4 v_color;
varying vec2 v_texCoords;

attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}
My Libgdx code:
private Mesh mesh;
private ShaderProgram shader;
private float time = 0;
private OrthographicCamera camera = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());

@Override
public void show() {
    shader = new ShaderProgram(Gdx.files.internal("seaVertex.txt"), Gdx.files.internal("seaFragment.txt"));
    shader.pedantic = false;
    mesh = new Mesh(true, 4, 6, new VertexAttribute(Usage.Position, 2, "a_position"));
    mesh.setVertices(new float[]{-Gdx.graphics.getWidth() / 2, -Gdx.graphics.getHeight() / 2,
            Gdx.graphics.getWidth() / 2, -Gdx.graphics.getHeight() / 2,
            -Gdx.graphics.getWidth() / 2, Gdx.graphics.getHeight() / 2,
            Gdx.graphics.getWidth() / 2, Gdx.graphics.getHeight() / 2});
    mesh.setIndices(new short[]{0, 2, 3, 0, 3, 1});
}

@Override
public void render(float delta) {
    Gdx.gl.glClearColor(0, 0, 1, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    time += delta;
    if (shader.isCompiled()) {
        shader.begin();
        shader.setUniformMatrix("u_projTrans", camera.combined);
        shader.setUniformf("iGlobalTime", time);
        shader.setUniformf("iResolution", new Vector3(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), 0));
        mesh.render(shader, GL20.GL_TRIANGLES);
        shader.end();
    }
}
Any ideas?
I am not familiar with Libgdx, but I have basic knowledge of OpenGL and graphics rendering in general (and I have worked with this shader once), so here are my thoughts (I'm guessing you don't have a deep understanding of the method this shader uses to generate water):
Your current "water shader" uses raymarching, which is a very expensive way to produce graphics; it is usually reserved for demo scenes and a few isolated elements in ordinary scenes. Noise functions are very expensive, and this shader calls one many times for every ray, which compounds the cost because the raymarch procedure has to evaluate it repeatedly to detect the intersection with the water.
This approach also makes it very hard to build a game, since combining raymarched elements with the traditional OpenGL rendering pipeline is not trivial, unless you are willing to build the whole game procedurally, which is also difficult.
I suggest you try non-procedural ways to render water instead. Take a look at ThinMatrix's series on generating water in OpenGL; he uses Java, so it should not be difficult to port: https://www.youtube.com/watch?v=HusvGeEDU_U&list=PLRIWtICgwaX23jiqVByUs0bqhnalNTNZh
Good Luck!
I need to pass a LUT from an Android application to the fragment shader for color correction. I've found some examples where the LUT is passed as a Bitmap:
GLES20.glBindTexture(GLES20.GL_TEXTURE_CUBE_MAP, name);
...
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.raw.brick1);
GLUtils.texImage2D(GLES20.GL_TEXTURE_CUBE_MAP, 0, bitmap, 0);
But what should I do if my LUT is not a 3D image file and is not built from a series of texture2D maps? My LUT is a float[] array. How can I bind it to a uniform samplerCube in my fragment shader?
The short answer is: it's not possible to do directly via an OpenGL shader, but it is possible via RenderScript.
More details about the "shader" approach:
The fragment shader code is below. Note that the first line must enable the extension in order to use texture3D:
#extension GL_OES_texture_3D : enable
precision mediump float;

uniform sampler2D u_texture0;
uniform vec4 uColor;
varying vec4 v_vertex;
uniform sampler3D u_lut;

void main() {
    vec2 texcoord0 = v_vertex.xy;
    vec4 rawColor = texture2D(u_texture0, texcoord0);
    vec4 outColor = texture3D(u_lut, rawColor.rgb);
    gl_FragColor = outColor; //rawColor;
}
Java code:
// Note: Float.SIZE is in bits; allocateDirect expects bytes (4 per float),
// and the buffer must actually be filled from the array before upload.
FloatBuffer texBuffer = ByteBuffer.allocateDirect(array.length * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
texBuffer.put(array).position(0);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, iAxisSize, iAxisSize, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, texBuffer);
This compiles and runs without errors, but the result is a black screen. You would really need the glTexImage3D function instead of glTexImage2D, but it is not implemented in Android SDK 17, so you can't do anything with this approach.
The good news: Android SDK 17 implements ScriptIntrinsicLUT, which can be used to apply a 1D LUT to a source image. Java code is below:
private RenderScript mRS;
private Allocation mInAllocation;
private Allocation mOutAllocation;
private ScriptC_mono mScript;
private ScriptIntrinsicLUT mIntrinsic;
...
mRS = RenderScript.create(this);
mIntrinsic = ScriptIntrinsicLUT.create(mRS, Element.U8_4(mRS));
createLUT();
mInAllocation = Allocation.createFromBitmap(mRS, mBitmapIn,
        Allocation.MipmapControl.MIPMAP_NONE,
        Allocation.USAGE_SCRIPT);
mOutAllocation = Allocation.createTyped(mRS, mInAllocation.getType());
mIntrinsic.forEach(mInAllocation, mOutAllocation);
mOutAllocation.copyTo(mBitmapOut);
...
private void createLUT() {
    for (int ct = 0; ct < 256; ct++) {
        float f = ((float) ct) / 255.f;

        float r = f;
        if (r < 0.5f) {
            r = 4.0f * r * r * r;
        } else {
            r = 1.0f - r;
            r = 1.0f - (4.0f * r * r * r);
        }
        mIntrinsic.setRed(ct, (int) (r * 255.f + 0.5f));

        float g = f;
        if (g < 0.5f) {
            g = 2.0f * g * g;
        } else {
            g = 1.0f - g;
            g = 1.0f - (2.0f * g * g);
        }
        mIntrinsic.setGreen(ct, (int) (g * 255.f + 0.5f));

        float b = f * 0.5f + 0.25f;
        mIntrinsic.setBlue(ct, (int) (b * 255.f + 0.5f));
    }
}
More details:
http://developer.android.com/reference/android/renderscript/ScriptIntrinsicLUT.html
http://my.fit.edu/~vkepuska/ece5570/adt-bundle-windows-x86_64/sdk/sources/android-17/com/android/rs/image/CrossProcess.java