Use texture as palette in OpenGL ES 2.0 shader - android

I am using OpenGL ES 2.0 shaders to generate fractals. It worked well until I decided that black and white was not enough and I needed a palette. I pass the palette to the shader as a 1D texture, but all I get is a black screen.
The shader is based on this one, except the texture passed in is 2D (n×1), because OpenGL ES 2.0 does not allow 1D textures; the pixel color is therefore fetched with
gl_FragColor = texture2D(palette, vec2((j == int(iterations) ? 0.0 : float(j)) / iterations, 0.5));
(I am not sure about the 0.5 here).
The relevant texture loading code:
Bitmap bitmap = Bitmap.createBitmap(colors, colors.length, 1, Bitmap.Config.ARGB_8888);
int handle = ShaderUtils.loadTexture(bitmap);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle);
GLES20.glUniform1i(paletteHandle, handle);
[...]
public static int loadTexture(Bitmap bitmap)
{
    final int[] textureHandle = new int[1];
    GLES20.glGenTextures(1, textureHandle, 0);
    if (textureHandle[0] == 0)
    {
        throw new RuntimeException("Error generating texture name.");
    }
    // Bind to the texture in OpenGL
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle[0]);
    // Set filtering
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER,
            GLES20.GL_NEAREST);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER,
            GLES20.GL_NEAREST);
    // Load the bitmap into the bound texture.
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
    // Recycle the bitmap, since its data has been loaded into OpenGL.
    bitmap.recycle();
    return textureHandle[0];
}
Vertex shader:
attribute vec4 vPosition;
void main() {
    gl_Position = vPosition;
}
Fragment shader:
precision mediump float;
uniform sampler2D palette;
uniform float centerX;
uniform float centerY;
uniform float scale;
uniform float iterations;
uniform vec2 resolution;
#define maxiter 1024
void main() {
    vec2 center = vec2(centerX, centerY);
    vec2 coord = vec2(gl_FragCoord.x, gl_FragCoord.y) / resolution;
    vec2 c = (coord - center) / scale;
    int j = 0;
    vec2 z = c;
    for (int i = 0; i < maxiter; i++) {
        if (float(i) >= iterations) break;
        j++;
        float x = (z.x * z.x - z.y * z.y) + c.x;
        float y = (z.y * z.x + z.x * z.y) + c.y;
        if ((x * x + y * y) > 4.0) break;
        z.x = x;
        z.y = y;
    }
    gl_FragColor = texture2D(palette, vec2((j == int(iterations) ? 0.0 : float(j)) / iterations, 0.5));
    // vec3 color = vec3(float(j) / float(iterations));
    // gl_FragColor = vec4(color, 1.0);
}
The problem is that this is very hard to debug. From inside the IDE I made sure that the bitmap contains proper data, and there are no OpenGL errors in logcat. The shader works without the texture, so the texture is probably the problem here. What could be the cause?

The value you have to set on the texture sampler uniform is not the "name" of the texture object; it has to be the index of the texture unit:
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle);
// this is wrong
//GLES20.glUniform1i(paletteHandle, handle);
GLES20.glUniform1i(paletteHandle, 0); // 0, because of GLES20.GL_TEXTURE0
See OpenGL 4.6 API Compatibility Profile Specification; 7.10 Samplers; page 154:
Samplers are special uniforms used in the OpenGL Shading Language to identify
the texture object used for each texture lookup. The value of a sampler indicates the texture image unit being accessed. Setting a sampler’s value to i selects texture image unit number i.
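Putting it together, the draw-time sequence looks something like this (a sketch: paletteTextureId stands in for the texture name returned by loadTexture, and the variable names are placeholders):
int paletteHandle = GLES20.glGetUniformLocation(program, "palette");
GLES20.glUseProgram(program);
// Select texture unit 0 and bind the palette texture object to it.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, paletteTextureId);
// The sampler uniform receives the unit index, not the texture name.
GLES20.glUniform1i(paletteHandle, 0);
As an aside, the 0.5 you were unsure about is fine: for an n×1 texture, v = 0.5 samples the vertical center of the single row of texels.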

Related

How to create a simple fragment shader that maps color from a lookup table?

I am trying to create Instagram-like image filters for my Android application. I am new to image processing and have just stumbled upon the term color mapping. After much research, I tried to create my own filter in OpenGL using a lookup table (LUT). But upon applying the filter to the camera, here is the result:
You can see that there is this weird blueish color at the edge of my thumb. It only happens on overexposed areas of the image.
Here is the fragment shader code:
#extension GL_OES_EGL_image_external : require
precision lowp float;
varying highp vec2 vTextureCoord;
uniform samplerExternalOES inputImage;
uniform sampler2D lookup;
void main() {
    vec2 tiles = vec2(8.0);
    vec2 tileSize = vec2(64.0);
    vec4 texel = texture2D(inputImage, vTextureCoord);
    float index = texel.b * (tiles.x * tiles.y - 1.0);
    float index_min = min(62.0, floor(index));
    float index_max = index_min + 1.0;
    vec2 tileIndex_min;
    tileIndex_min.y = floor(index_min / tiles.x);
    tileIndex_min.x = floor(index_min - tileIndex_min.y * tiles.x);
    vec2 tileIndex_max;
    tileIndex_max.y = floor(index_max / tiles.x);
    tileIndex_max.x = floor(index_max - tileIndex_max.y * tiles.x);
    vec2 tileUV = mix(0.5 / tileSize, (tileSize - 0.5) / tileSize, texel.rg);
    vec2 tableUV_1 = tileIndex_min / tiles + tileUV / tiles;
    vec2 tableUV_2 = tileIndex_max / tiles + tileUV / tiles;
    vec3 lookUpColor_1 = texture2D(lookup, tableUV_1).rgb;
    vec3 lookUpColor_2 = texture2D(lookup, tableUV_2).rgb;
    vec3 lookUpColor = mix(lookUpColor_1, lookUpColor_2, index - index_min);
    gl_FragColor = vec4(lookUpColor, 1.0);
}
Here is the lookup table. This is a base lookup table. I tried editing the lookup tables and applying the filter, but the result is the same irrespective of the table.
What is causing this issue? Can anyone show me how to create a simple fragment shader that maps color from a lookup table onto the current texture? Any help would be appreciated. Regards.
Here is the code for loading the textures:
public static int loadTexture(final Bitmap img) {
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, img, 0);
    return textures[0];
}
Not sure what color space or gamma your external sampler is using, but it looks like you are getting input values outside the zero-to-one range, so the overexposed areas are jumping across a LUT tile boundary.
As a diagnostic, clamp your inputs to between zero and one. This is rarely the proper fix, though, as the clipping in the colors will be obvious.
Something like this:
vec4 texel = texture2D(inputImage, vTextureCoord);
texel = min(texel, 1.0);
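A slightly fuller version of the diagnostic clamps both ends of the range:
vec4 texel = texture2D(inputImage, vTextureCoord);
// Diagnostic only: force the input into [0, 1] so the blue channel
// cannot index past the last LUT tile.
texel = clamp(texel, 0.0, 1.0);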

Colour/chroma texture is stretched when converting from YUV-NV12 to RGB using OpenGL ES

I have a byte buffer of YUV-NV12 formatted image data. When I try to convert it to RGB, I get an output with a stretched colour (chroma) layer like in the image below.
I followed this great answer, which shows how to convert YUV-NV21 to RGB. Since NV12 is just NV21 with the U and V data swapped, the only change I should need to make is to swap the u and v values in the fragment shader.
Vertex shader:
precision mediump float;
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
attribute vec4 vTextureCoordinate;
varying vec2 position;
void main()
{
    gl_Position = uMVPMatrix * vPosition;
    position = vTextureCoordinate.xy;
}
Fragment shader:
precision mediump float;
varying vec2 position;
uniform sampler2D uTextureY;
uniform sampler2D uTextureUV;
void main()
{
    float y, u, v;
    y = texture2D(uTextureY, position).r;
    u = texture2D(uTextureUV, position).a - 0.5;
    v = texture2D(uTextureUV, position).r - 0.5;
    float r, g, b;
    r = y + 1.13983 * v;
    g = y - 0.39465 * u - 0.58060 * v;
    b = y + 2.03211 * u;
    gl_FragColor = vec4(r, g, b, 1.0);
}
I split the image data into two ByteBuffers, mYBuffer and mUVBuffer. mSourceImage is just a Buffer containing the raw image bytes:
ByteBuffer bb = (ByteBuffer) mSourceImage;
if (bb == null) {
    return;
}
int size = mWidth * mHeight;
bb.position(0).limit(size);
mYBuffer = bb.slice();
bb.position(size).limit(bb.remaining());
mUVBuffer = bb.slice();
Generating textures:
GLES20.glGenTextures(2, mTexture, 0);
for (int i = 0; i < 2; i++) {
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + i);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTexture[i]);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
}
Passing buffer data to textures:
mTextureYHandle = GLES20.glGetUniformLocation(mProgramId, "uTextureY");
mTextureUVHandle = GLES20.glGetUniformLocation(mProgramId, "uTextureUV");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTexture[0]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, mWidth, mHeight, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, mYBuffer);
GLES20.glUniform1i(mTextureYHandle, 0);
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mTexture[1]);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE_ALPHA, mWidth / 2, mHeight / 2, 0, GLES20.GL_LUMINANCE_ALPHA, GLES20.GL_UNSIGNED_BYTE, mUVBuffer);
GLES20.glUniform1i(mTextureUVHandle, 1);
I couldn't figure out why I'm getting such an output. Any help would be much appreciated.
Never mind, it was a tiny mistake in my code.
When splitting the byte buffer, I had used bb.position(size).limit(bb.remaining()) for the UV buffer. For some reason, bb.remaining() became 0 after a few frames (this is actually a camera preview). Therefore I changed it to bb.position(size).limit(size + size / 2).
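The corrected split looks like this (a sketch based on the code above; in NV12 the interleaved UV plane is half the size of the Y plane, hence size + size / 2):
int size = mWidth * mHeight;              // Y plane: one byte per pixel
bb.position(0).limit(size);
mYBuffer = bb.slice();                    // bytes [0, size)
bb.position(size).limit(size + size / 2); // UV plane: size / 2 bytes
mUVBuffer = bb.slice();                   // bytes [size, size + size / 2)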
Also, the assumption I made from reading this,
the only change I should need to make is to swap the u and v values in the fragment shader
appears to be wrong. I observed that GLES20.GL_LUMINANCE_ALPHA always puts the U byte into the A component of the texture and the V byte into the R, G and B components (you can use any one of them). Hence there is no need to swap the u and v values in the fragment shader (I have edited my question with the correct fragment shader code).
I will keep the question up, hoping it will help someone in the future.

pass 2 textures to a single shader in opengl-es 2.0

I am using code from How can I pass multiple textures to a single shader?. It works fine until I load and bind new textures after my bump map image. I can only get this to work if my bump map is the last texture loaded and bound, even if all my images are the same size. Here's my code...
public int[] textureIDs = new int[]{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}; //texture image ID's
//load bitmap and bind texture (done 10 times) textureIndex is 1-10
Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), imageiD);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureIDs[textureIndex]);
GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
//render
shaderProgram = GraphicTools.sp_ImageBump;
GLES20.glUseProgram(shaderProgram);
t1 = GLES20.glGetUniformLocation(shaderProgram, "u_texture");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, globals.textureIDs[1]);//textureIndex
GLES20.glUniform1i(t1, 0);
t2 = GLES20.glGetUniformLocation(shaderProgram, "u_bumptex");
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, globals.textureIDs[2]);//bumpMapIndex);
GLES20.glUniform1i(t2, 1);
GLES20.glActiveTexture(GLES20.GL_TEXTURE0); // added this; it allows me to pass 2 textures to the shaders, otherwise TEXTURE1 is black
//fragment shader
precision mediump float;
uniform sampler2D u_bumptex;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main()
{
    // get bump map color; just use the green channel for brightness
    vec4 bumpColor = texture2D(u_bumptex, v_texCoord); //v_bumpCoord);
    gl_FragColor = texture2D(u_texture, v_texCoord) * bumpColor.g;
}

Options to efficiently draw a stream of byte arrays to display in Android

In simple words, all I need to do is display a live stream of video frames in Android (each frame is in YUV420 format). I have a callback function where I receive individual frames as a byte array. Something that looks like this:
public void onFrameReceived(byte[] frame, int height, int width, int format) {
    // display this frame to surfaceview/textureview.
}
A feasible but slow option is to convert the byte array to a Bitmap and draw it to a canvas on the SurfaceView. In the future, I would ideally like to be able to alter the brightness, contrast, etc. of each frame, and hence am hoping I can use OpenGL ES for the same. What are my other options to do this efficiently?
Remember that, unlike with implementations of the Camera or MediaPlayer class, I can't direct my output to a surfaceview/textureview using camera.setPreviewTexture(surfaceTexture), as I am receiving individual frames using GStreamer in C.
I'm using ffmpeg for my project, but the principle for rendering the YUV frame should be the same for you.
If a frame, for example, is 756 x 576, then the Y frame will be that size. The U and V frames are half the width and height of the Y frame, so you will have to make sure you account for the size differences.
I don't know about the camera API, but the frames I get from a DVB source have a width, and each line also has a stride: extra pixels at the end of each line in the frame. In case yours are the same, account for this when calculating your texture coordinates.
Adjusting the texture coordinates to account for the width and stride (linesize):
float u = 1.0f / buffer->y_linesize * buffer->wid; // adjust texture coord for edge
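On the Java side, the right edge of the quad would then sample at width / stride instead of 1.0; a sketch (frameWidth and yStride are placeholder names, not from my actual code):
// Stride (pixels per row in memory) may exceed the visible width, so the
// rightmost texture coordinate must stop before the padding pixels.
float maxU = (float) frameWidth / (float) yStride; // e.g. 720 / 768
float[] texCoords = {
        0.0f, 0.0f, // top-left
        maxU, 0.0f, // top-right: stop at the edge of the real pixels
        0.0f, 1.0f, // bottom-left
        maxU, 1.0f, // bottom-right
};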
The vertex shader I've used takes screen coordinates from 0.0 to 1.0, but you can change these to suit. It also takes in the texture coords and a colour input. I've used the colour input so that I can add fading, etc.
Vertex shader:
#ifdef GL_ES
precision mediump float;
const float c1 = 1.0;
const float c2 = 2.0;
#else
const float c1 = 1.0f;
const float c2 = 2.0f;
#endif
attribute vec4 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_colorin;
varying vec2 v_texcoord;
varying vec4 v_colorout;
void main(void)
{
    v_texcoord = a_texcoord;
    v_colorout = a_colorin;
    float x = a_vertex.x * c2 - c1;
    float y = -(a_vertex.y * c2 - c1);
    gl_Position = vec4(x, y, a_vertex.z, c1);
}
The fragment shader takes three uniform textures, one each for the Y, U and V frames, and converts to RGB. It also multiplies by the colour passed in from the vertex shader:
#ifdef GL_ES
precision mediump float;
#endif
uniform sampler2D u_texturey;
uniform sampler2D u_textureu;
uniform sampler2D u_texturev;
varying vec2 v_texcoord;
varying vec4 v_colorout;
void main(void)
{
    float y = texture2D(u_texturey, v_texcoord).r;
    float u = texture2D(u_textureu, v_texcoord).r - 0.5;
    float v = texture2D(u_texturev, v_texcoord).r - 0.5;
    vec4 rgb = vec4(y + 1.403 * v,
                    y - 0.344 * u - 0.714 * v,
                    y + 1.770 * u,
                    1.0);
    gl_FragColor = rgb * v_colorout;
}
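On Android, uploading the three planes as single-channel textures could look roughly like this (a sketch assuming GLES20; the uploadPlane helper and its parameters are illustrative, not from my actual code):
// Call once per plane: Y at full size, U and V at half width and half
// height, each on its own texture unit (0, 1 and 2).
static void uploadPlane(int unit, int texId, ByteBuffer plane, int w, int h) {
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0 + unit);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texId);
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
            w, h, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, plane);
}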
The vertex structure used is:
float x, y, z; // coords
float s, t; // texture coords
uint8_t r, g, b, a; // colour and alpha
Hope this helps!
EDIT:
For the NV12 format you can still use a fragment shader, although I've not tried it myself. It would take in the interleaved UV plane as a luminance-alpha texture or similar.
See here for how one person has answered this: https://stackoverflow.com/a/22456885/2979092
I took several answers from SO and various articles, plus @WLGfx's answer above, to come up with this:
I created two byte buffers, one for the Y plane and one for the UV plane. Then I converted the byte buffers to textures using:
public static int createImageTexture(ByteBuffer data, int width, int height, int format, int textureHandle) {
    if (GLES20.glIsTexture(textureHandle)) {
        return updateImageTexture(data, width, height, format, textureHandle);
    }
    int[] textureHandles = new int[1];
    GLES20.glGenTextures(1, textureHandles, 0);
    textureHandle = textureHandles[0];
    GlUtil.checkGlError("glGenTextures");
    // Bind the texture handle to the 2D texture target.
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
    // Configure min/mag filtering, i.e. what scaling method do we use if what we're rendering
    // is smaller or larger than the source image.
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
    GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    GlUtil.checkGlError("loadImageTexture");
    // Load the data from the buffer into the texture handle.
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height,
            0, format, GLES20.GL_UNSIGNED_BYTE, data);
    GlUtil.checkGlError("loadImageTexture");
    return textureHandle;
}
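Usage then looks roughly like this (a sketch; the buffer names follow the NV12 question above, and the texture handle fields are assumptions):
// Y plane: full resolution, one byte per pixel.
mYTexture = createImageTexture(mYBuffer, width, height,
        GLES20.GL_LUMINANCE, mYTexture);
// Interleaved UV plane: half width and height, two bytes per pixel.
mUVTexture = createImageTexture(mUVBuffer, width / 2, height / 2,
        GLES20.GL_LUMINANCE_ALPHA, mUVTexture);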
Both textures are then passed as normal 2D textures to the GLSL shader:
precision highp float;
varying vec2 vTextureCoord;
uniform sampler2D sTextureY;
uniform sampler2D sTextureUV;
uniform float sBrightnessValue;
uniform float sContrastValue;
void main (void) {
    float r, g, b, y, u, v;
    // We had put the Y value of each pixel into the R, G, B components by using
    // GL_LUMINANCE; that's why we pull it from the R component (G or B would work too).
    y = texture2D(sTextureY, vTextureCoord).r;
    // We had put the U and V values of each pixel into the R,G,B and A components of the
    // texture respectively using GL_LUMINANCE_ALPHA. Since the U,V bytes are interleaved
    // in the texture, this is probably the fastest way to use them in the shader.
    u = texture2D(sTextureUV, vTextureCoord).r - 0.5;
    v = texture2D(sTextureUV, vTextureCoord).a - 0.5;
    // The numbers are just YUV-to-RGB conversion constants.
    r = y + 1.13983 * v;
    g = y - 0.39465 * u - 0.58060 * v;
    b = y + 2.03211 * u;
    // Apply brightness/contrast.
    r = r * sContrastValue + sBrightnessValue;
    g = g * sContrastValue + sBrightnessValue;
    b = b * sContrastValue + sBrightnessValue;
    // We finally set the RGB color of our pixel.
    gl_FragColor = vec4(r, g, b, 1.0);
}
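Setting the two adjustment uniforms from Java might look like this (a sketch; mProgram is an assumed name, and the values shown are the neutral settings):
int contrastLoc = GLES20.glGetUniformLocation(mProgram, "sContrastValue");
int brightnessLoc = GLES20.glGetUniformLocation(mProgram, "sBrightnessValue");
GLES20.glUseProgram(mProgram);
GLES20.glUniform1f(contrastLoc, 1.0f);   // 1.0 = unchanged contrast
GLES20.glUniform1f(brightnessLoc, 0.0f); // 0.0 = unchanged brightness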

OpenGL ES 2.0 - the Fragment Shader making everything look blue when applying a Vignette effect

I've been trying to apply the filters I use from the android-gpuimage library in a MediaCodec surface context. So far I've succeeded with the filters that only require one extra texture map. However, when I try to apply a filter that needs at least two, the result is either a blue-colored or a rainbow-colored mess.
The following issue concerns the one that combines a texture lookup filter and a vignette filter.
The vertex shader I used is as follows:
uniform mat4 uMVPMatrix;
uniform mat4 textureTransform;
attribute vec4 vPosition;
attribute vec4 vTexCoordinate;
varying vec2 v_TexCoordinate;
void main() {
    gl_Position = uMVPMatrix * vPosition;
    v_TexCoordinate = (textureTransform * vTexCoordinate).xy;
}
The fragment shader I used is as follows:
#extension GL_OES_EGL_image_external : require
precision lowp float;
varying highp vec2 v_TexCoordinate;
uniform samplerExternalOES u_Texture; //MediaCodec decoder provided data
uniform sampler2D inputImageTexture2; //Amaro filter map
uniform sampler2D inputImageTexture3; //Common vignette map
void main()
{
    vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
    vec2 red = vec2(texel.r, 0.16666);
    vec2 green = vec2(texel.g, 0.5);
    vec2 blue = vec2(texel.b, 0.83333);
    texel.rgb = vec3(
        texture2D(inputImageTexture2, red).r,
        texture2D(inputImageTexture2, green).g,
        texture2D(inputImageTexture2, blue).b);
    // After further research I found the problem is somewhere below
    vec2 tc = (2.0 * v_TexCoordinate) - 1.0;
    float d = dot(tc, tc);
    vec2 lookup = vec2(d, texel.r);
    texel.r = texture2D(inputImageTexture3, lookup).r;
    lookup.y = texel.g;
    texel.g = texture2D(inputImageTexture3, lookup).g;
    lookup.y = texel.b;
    texel.b = texture2D(inputImageTexture3, lookup).b;
    // The problem is somewhere above
    gl_FragColor = vec4(texel, 1.0);
}
The end result of that program looked like this:
Is this the result of a bad vignette map, or is it something to do with the vignette application part of the fragment shader?
EDIT:
The texture used for inputImageTexture2:
The texture used for inputImageTexture3:
It turns out that the way I load my textures matters.
My current code for loading textures:
public int loadColormap(final Bitmap colormap) {
    IntBuffer textureIntBuf = IntBuffer.allocate(1);
    GLES20.glGenTextures(1, textureIntBuf);
    int textureHandle = textureIntBuf.get();
    //if (textures[2] != 0) {
    if (textureHandle != 0) {
        //GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[2]);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, colormap, 0);
    }
    //if (textures[2] == 0) {
    if (textureHandle == 0) {
        throw new RuntimeException("Error loading texture.");
    }
    //return textures[2];
    return textureHandle;
}
The previous incarnation used the textures array, the same array I use to load the data from MediaCodec and the watermark. For some reason, if I use that instead of generating an IntBuffer for each texture, the textures used in the fragment shader get jumbled.
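For reference, binding the two extra maps alongside the external texture might look like this (a sketch; the handle names are assumptions, and unit 0 is assumed to hold the samplerExternalOES decoder texture):
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, amaroMapHandle); // inputImageTexture2
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "inputImageTexture2"), 1);
GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vignetteMapHandle); // inputImageTexture3
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "inputImageTexture3"), 2);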
