Blend textures of different size/coordinates in GLSL - android

If blending two textures of different size in the fragment shader, is it possible to map the textures to different coordinates?
For example, if blending the textures from the following two images:
with the following shaders:
// Vertex shader
uniform mat4 uMVPMatrix;
attribute vec4 vPosition;
attribute vec2 aTexcoord;
varying vec2 vTexcoord;
void main() {
gl_Position = uMVPMatrix * vPosition;
vTexcoord = aTexcoord;
}
// Fragment shader
uniform sampler2D uContTexSampler;
uniform sampler2D uMaskTextSampler;
varying vec2 vTexcoord;
void main() {
vec4 mask = texture2D(uMaskTextSampler, vTexcoord);
vec4 text = texture2D(uContTexSampler, vTexcoord);
gl_FragColor = vec4(text.r * mask.r, text.g * mask.r, text.b * mask.r, text.a * mask.r);
}
(The fragment shader replaces the white areas of the black-and-white mask with the second texture.)
Since both textures use the same gl_Position and texture coordinates (1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 0.0f), both textures get mapped to the same coordinates in the view:
However, my goal is to maintain the original texture ratio:
I want to achieve this within the shader, rather than with glBlendFunc and glBlendFuncSeparate, in order to use my own values for blending.
Is there a way to achieve this within GLSL? I have a feeling that my approach for blending texture vertices for different position coordinates is broken by design...

It is indeed possible, but you need a 2D scale vector for the mask texture coordinates. I suggest you compute it on the CPU and send it to the shader via a uniform. An alternative is to compute it per vertex or even per fragment, but you would still need to pass the image dimensions into the shader, so just do it on the CPU and add a uniform.
To get that scale vector you need a bit of simple math. What you want to do is respect the mask ratio but scale it so that one of the two scale coordinates is 1.0 and the other is <= 1.0. That means you will see either the whole mask width or the whole mask height, while the opposite dimension is scaled down. For instance, if you have an image of size 1.0x1.0 and a mask of size 2.0x1.0, your scale vector will be (.5, 1.0).
To use this scaleVector you simply need to multiply the texture coordinates:
vec4 mask = texture2D(uMaskTextSampler, vTexcoord * scaleVector);
To compute the scale vector try this:
float imageWidth;
float imageHeight;
float maskWidth;
float maskHeight;
float imageRatio = imageWidth/imageHeight;
float maskRatio = maskWidth/maskHeight;
float scaleX, scaleY;
if (imageRatio / maskRatio > 1.0f) {
    // x will be 1.0
    scaleX = 1.0f;
    scaleY = 1.0f / (imageRatio / maskRatio);
} else {
    // y will be 1.0
    scaleX = imageRatio / maskRatio;
    scaleY = 1.0f;
}
Note I did not try this code so you might need to play around a bit.
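For completeness, a minimal sketch of the Android side, assuming the fragment shader declares uniform vec2 scaleVector; and program is your linked shader program:
GLES20.glUseProgram(program);
// scaleX/scaleY computed as above
int scaleHandle = GLES20.glGetUniformLocation(program, "scaleVector");
GLES20.glUniform2f(scaleHandle, scaleX, scaleY);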
EDIT: Scaling the texture coordinates fix
The above scaling of the texture coordinates makes the mask texture use its top-left part instead of the centre part. The coordinates must be scaled around the centre. That means: get the original vector from the centre, which is vTexcoord - vec2(.5, .5), then scale this vector and add it back to the centre:
vec2 fromCentre = vTexcoord - vec2(.5, .5);
vec2 scaledFromCentre = fromCentre * scaleVector;
vec2 resultCoordinate = vec2(.5, .5) + scaledFromCentre;
You can put this into a single line and even try to shorten it a bit (do it on paper first).
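Put together, the mask lookup in the fragment shader could then read (with scaleVector being the uniform described above):
vec4 mask = texture2D(uMaskTextSampler, (vTexcoord - vec2(.5, .5)) * scaleVector + vec2(.5, .5));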

Related

Options to efficiently draw a stream of byte arrays to display in Android

In simple words, all I need to do is display a live stream of video frames in Android (each frame is in YUV420 format). I have a callback function where I receive individual frames as a byte array. Something that looks like this:
public void onFrameReceived(byte[] frame, int height, int width, int format) {
// display this frame to surfaceview/textureview.
}
A feasible but slow option is to convert the byte array to a Bitmap and draw it to the canvas of a SurfaceView. In the future, I would ideally like to be able to alter the brightness, contrast, etc. of this frame, and hence I am hoping I can use OpenGL ES for the same. What are my other options to do this efficiently?
Remember, unlike in implementations of the Camera or MediaPlayer class, I can't direct my output to a surfaceview/textureview using camera.setPreviewTexture(surfaceTexture), as I am receiving individual frames via GStreamer in C.
I'm using ffmpeg for my project, but the principle for rendering the YUV frame should be the same for you.
If a frame, for example, is 756 x 576, then the Y plane will be that size. The U and V planes are half the width and height of the Y plane, so you will have to make sure you account for the size differences.
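On the Android side, for instance, uploading the three planes as single-channel luminance textures might look roughly like this (assuming yBuffer, uBuffer and vBuffer hold the plane data and each texture handle is bound before its upload):
// Y plane at full resolution.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, width, height, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, yBuffer);
// U and V planes at half the width and half the height.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, width / 2, height / 2, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, uBuffer);
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE, width / 2, height / 2, 0,
        GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, vBuffer);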
I don't know about the camera API, but the frames I get from a DVB source have a width, and each line also has a stride: extra pixels at the end of each line in the frame. Just in case yours is the same, account for this when calculating your texture coordinates.
Adjusting the texture coordinates to account for the width and stride (linesize):
float u = 1.0f / buffer->y_linesize * buffer->wid; // adjust texture coord for edge
The vertex shader I've used takes screen coordinates from 0.0 to 1.0, but you can change these to suit. It also takes in the texture coords and a colour input. I've used the colour input so that I can add fading, etc.
Vertex shader:
#ifdef GL_ES
precision mediump float;
const float c1 = 1.0;
const float c2 = 2.0;
#else
const float c1 = 1.0f;
const float c2 = 2.0f;
#endif
attribute vec4 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_colorin;
varying vec2 v_texcoord;
varying vec4 v_colorout;
void main(void)
{
v_texcoord = a_texcoord;
v_colorout = a_colorin;
float x = a_vertex.x * c2 - c1;
float y = -(a_vertex.y * c2 - c1);
gl_Position = vec4(x, y, a_vertex.z, c1);
}
The fragment shader takes three uniform textures, one each for the Y, U and V frames, and converts them to RGB. It also multiplies by the colour passed in from the vertex shader:
#ifdef GL_ES
precision mediump float;
#endif
uniform sampler2D u_texturey;
uniform sampler2D u_textureu;
uniform sampler2D u_texturev;
varying vec2 v_texcoord;
varying vec4 v_colorout;
void main(void)
{
float y = texture2D(u_texturey, v_texcoord).r;
float u = texture2D(u_textureu, v_texcoord).r - 0.5;
float v = texture2D(u_texturev, v_texcoord).r - 0.5;
vec4 rgb = vec4(y + 1.403 * v,
y - 0.344 * u - 0.714 * v,
y + 1.770 * u,
1.0);
gl_FragColor = rgb * v_colorout;
}
The per-vertex data layout used is:
float x, y, z; // coords
float s, t; // texture coords
uint8_t r, g, b, a; // colour and alpha
Hope this helps!
EDIT:
For NV12 format you can still use a fragment shader, although I've not tried it myself. It takes in the interleaved UV as a luminance-alpha channel or similar.
See here for how one person has answered this: https://stackoverflow.com/a/22456885/2979092
I took several answers from SO and various articles plus @WLGfx's answer above to come up with this:
I created two byte buffers, one for the Y and one for the UV part of the texture. Then I converted the byte buffers to textures using:
public static int createImageTexture(ByteBuffer data, int width, int height, int format, int textureHandle) {
if (GLES20.glIsTexture(textureHandle)) {
return updateImageTexture(data, width, height, format, textureHandle);
}
int[] textureHandles = new int[1];
GLES20.glGenTextures(1, textureHandles, 0);
textureHandle = textureHandles[0];
GlUtil.checkGlError("glGenTextures");
// Bind the texture handle to the 2D texture target.
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
// Configure min/mag filtering, i.e. what scaling method do we use if what we're rendering
// is smaller or larger than the source image.
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
GlUtil.checkGlError("loadImageTexture");
// Load the data from the buffer into the texture handle.
GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height,
0, format, GLES20.GL_UNSIGNED_BYTE, data);
GlUtil.checkGlError("loadImageTexture");
return textureHandle;
}
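The updateImageTexture method referenced above is not shown; a plausible sketch, reusing the existing handle via glTexSubImage2D, might look like this:
public static int updateImageTexture(ByteBuffer data, int width, int height, int format, int textureHandle) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureHandle);
    // Overwrite the existing texture storage with the new frame data.
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, width, height,
            format, GLES20.GL_UNSIGNED_BYTE, data);
    GlUtil.checkGlError("updateImageTexture");
    return textureHandle;
}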
Both these textures are then sent as normal 2D textures to the GLSL shader:
precision highp float;
varying vec2 vTextureCoord;
uniform sampler2D sTextureY;
uniform sampler2D sTextureUV;
uniform float sBrightnessValue;
uniform float sContrastValue;
void main (void) {
float r, g, b, y, u, v;
// We had put the Y values of each pixel to the R,G,B components by GL_LUMINANCE,
// that's why we're pulling it from the R component, we could also use G or B
y = texture2D(sTextureY, vTextureCoord).r;
// We had put the U and V values of each pixel to the A and R,G,B components of the
// texture respectively using GL_LUMINANCE_ALPHA. Since U,V bytes are interspersed
// in the texture, this is probably the fastest way to use them in the shader
u = texture2D(sTextureUV, vTextureCoord).r - 0.5;
v = texture2D(sTextureUV, vTextureCoord).a - 0.5;
// The numbers are just YUV to RGB conversion constants
r = y + 1.13983*v;
g = y - 0.39465*u - 0.58060*v;
b = y + 2.03211*u;
// setting brightness/contrast
r = r * sContrastValue + sBrightnessValue;
g = g * sContrastValue + sBrightnessValue;
b = b * sContrastValue + sBrightnessValue;
// We finally set the RGB color of our pixel
gl_FragColor = vec4(r, g, b, 1.0);
}
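A rough sketch of the matching Java side, in line with the shader comments above: the Y plane goes into a GL_LUMINANCE texture and the interleaved UV plane into a GL_LUMINANCE_ALPHA texture at half resolution (handle names are illustrative):
yTextureHandle = createImageTexture(yBuffer, width, height, GLES20.GL_LUMINANCE, yTextureHandle);
uvTextureHandle = createImageTexture(uvBuffer, width / 2, height / 2, GLES20.GL_LUMINANCE_ALPHA, uvTextureHandle);
// Bind the two textures to units 0 and 1 and point the samplers at them.
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTextureHandle);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "sTextureY"), 0);
GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uvTextureHandle);
GLES20.glUniform1i(GLES20.glGetUniformLocation(program, "sTextureUV"), 1);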

Changing camera's forward vector has strange results in Android openGL

I'm trying to create a 3D Android game using the OpenGL library. From what I can tell, the Android platform (or OpenGL) doesn't provide a camera object, so I made my own. I managed to create a square that is drawn on the screen. I also managed to get the camera to move in all 3 directions without issue. The problem I'm having is that when I turn the camera on either the x or y axis, the square is displayed in a very strange way; its perspective seems highly distorted. Here are some pictures:
Camera at origin looking forward: Position [0,0,0.55f], Forward [0,0,-1], Up [0,1,0]
Camera moved to the right: Position [3,0,0.55f], Forward [0,0,-1], Up [0,1,0]
Camera turned slightly to the right: Position [0,0,0.55f], Forward [0.05f,0,-1], Up [0,1,0]
I'm not sure where the problem is getting generated. Here are the vertices of the square:
static float vertices[] = {
// Front face (CCW order)
-1.0f, 1.0f, 1.0f, // top left
-1.0f, -1.0f, 1.0f, // bottom left
1.0f, -1.0f, 1.0f, // bottom right
1.0f, 1.0f, 1.0f, // top right
};
From what I've read, my matrix multiplications are in the correct order. I pass the Matrices to the vertex shader and do the multiplications there:
private final String vertexShaderCode =
"uniform mat4 uVMatrix;" +
"uniform mat4 uMMatrix;" +
"uniform mat4 uPMatrix;" +
"attribute vec4 aVertexPosition;" + // passed in
"attribute vec4 aVertexColor;" +
"varying vec4 vColor;" +
"void main() {" +
" gl_Position = uPMatrix * uVMatrix * uMMatrix * aVertexPosition;" +
" vColor = aVertexColor;" + // pass the vertex's color to the pixel shader
"}";
The model matrix just places it at the origin with a scale of 1:
Matrix.setIdentityM(ModelMatrix, 0);
Matrix.translateM(ModelMatrix, 0, 0, 0, 0);
Matrix.scaleM(ModelMatrix, 0, 1, 1, 1);
I use my Camera object to update the View Matrix:
Matrix.setLookAtM(ViewMatrix, 0,
position.x, position.y, position.z,
position.x + forward.x, position.y + forward.y, position.z + forward.z,
up.x, up.y, up.z);
Here is my ProjectionMatrix:
float ratio = width / (float) height;
Matrix.frustumM(ProjectionMatrix, 0, -ratio, ratio, -1, 1, 0.01f, 10000f);
What am I missing?
With the parameters you pass to frustumM():
Matrix.frustumM(ProjectionMatrix, 0, -ratio, ratio, -1, 1, 0.01f, 10000f);
you have an extremely strong perspective. This is the reason why the geometry looks very distorted as soon as you rotate it just slightly.
The left, right, bottom, and top values are distances measured at the depth of the near clip plane. For example, with your top value of 1.0 and the near value at 0.01, the top plane of the view volume is a distance of 1.0 away from the viewing direction at a forward distance of only 0.01.
Doing the math, you get atan(top / near) = atan(1.0 / 0.01) = atan(100.0) = 89.42 degrees for half the vertical view angle, or 178.85 degrees for the whole view angle, which corresponds to an extreme fisheye lens on a camera, covering almost the whole space in front of the camera.
To use a more sane level of perspective, you can calculate the values based on the desired view angle. With alpha being the vertical view angle:
float near = 0.01f;
float top = tan(0.5f * alpha) * near;
float right = top * ratio;
Matrix.frustumM(ProjectionMatrix, 0, -right, right, -top, top, near, 10000f);
Start with view angles in the range of 45 to 60 degrees for a generally pleasing amount of perspective. And remember that the tan() function used above takes angles in radians, so you'll have to convert first if your original angle is in degrees.
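For illustration, with an assumed vertical view angle of 60 degrees this could look like:
float fovyDegrees = 60.0f;                          // assumed vertical view angle
float alpha = (float) Math.toRadians(fovyDegrees);  // tan() expects radians
float near = 0.01f;
float top = (float) Math.tan(0.5f * alpha) * near;
float right = top * ratio;
Matrix.frustumM(ProjectionMatrix, 0, -right, right, -top, top, near, 10000f);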
Or, if you're scared of math, you can always use perspectiveM() instead. ;)
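perspectiveM() takes the vertical field of view directly in degrees, so the rough equivalent of the above would be:
Matrix.perspectiveM(ProjectionMatrix, 0, 60.0f, ratio, 0.01f, 10000f);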
Depending on your setup, your vertex buffer may be wrong: your vertices array has 3 components per vertex, while the position attribute in your shader is a vec4.
Also, your projection matrix is strange; depending on the aspect ratio you want, you'll get bizarre results.
You should use perspectiveM instead.

Android GL_BLEND no alpha and GREEN Background

I'm new to OpenGL.
My goal is to display an alpha video in an OpenGL setup inside a TextureView.
I started with Video Effects and tried to modify some colors (for a start: black to transparent) using a custom [EDITED] shader:
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES sTexture;
vec3 first;
vec4 second;
varying vec2 vTextureCoord;
vec2 oTextureCoord;
void main() {
    first[0] = 0.0;
    first[1] = 0.0;
    first[2] = 0.0;
    second[0] = 0.0;
    second[1] = 1.0;
    second[2] = 0.0;
    second[3] = 1.0;
    vec4 color = texture2D(sTexture, vTextureCoord);
    oTextureCoord = vec2(vTextureCoord.x, vTextureCoord.y + 0.5);
    vec4 color2 = texture2D(sTexture, oTextureCoord);
    if (vTextureCoord.y < 0.5) {
        gl_FragColor = vec4(color.r, color.g, color.b, color2.b);
    } else {
        gl_FragColor = color;
    }
}
But I never saw the background under the view.
After some research I added
GLES20.glEnable(GLES20.GL_BLEND);
But now the "transparent color" is a green one :/
To add more information: the source video is a combination of colored frames on the upper part and the alpha mask on the bottom one.
Am I doing it wrong?
Thanks in advance.
[EDIT]
Here is my actual onDrawFame method:
@Override
public void onDrawFrame(GL10 glUnused) {
synchronized (this) {
if (updateSurface) {
mSurface.updateTexImage();
mSurface.getTransformMatrix(mSTMatrix);
updateSurface = false;
}
}
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glClearColor(1.0f,1.0f, 1.0f, 0.0f);
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT
| GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glUseProgram(mProgram);
checkGlError("glUseProgram");
GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
GLES20.glBindTexture(GL_TEXTURE_EXTERNAL_OES, mTextureID[0]);
mTriangleVertices.position(TRIANGLE_VERTICES_DATA_POS_OFFSET);
GLES20.glVertexAttribPointer(maPositionHandle, 3, GLES20.GL_FLOAT,
false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES,
mTriangleVertices);
checkGlError("glVertexAttribPointer maPosition");
GLES20.glEnableVertexAttribArray(maPositionHandle);
checkGlError("glEnableVertexAttribArray maPositionHandle");
mTriangleVertices.position(TRIANGLE_VERTICES_DATA_UV_OFFSET);
GLES20.glVertexAttribPointer(maTextureHandle, 3, GLES20.GL_FLOAT,
false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES,
mTriangleVertices);
checkGlError("glVertexAttribPointer maTextureHandle");
GLES20.glEnableVertexAttribArray(maTextureHandle);
checkGlError("glEnableVertexAttribArray maTextureHandle");
Matrix.setIdentityM(mMVPMatrix, 0);
GLES20.glUniformMatrix4fv(muMVPMatrixHandle, 1, false, mMVPMatrix,
0);
GLES20.glUniformMatrix4fv(muSTMatrixHandle, 1, false, mSTMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
checkGlError("glDrawArrays");
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glFinish();
}
--[EDIT]--
Here is my vertex shader:
uniform mat4 uMVPMatrix;
uniform mat4 uSTMatrix;
attribute vec4 aPosition;
attribute vec4 aTextureCoord;
varying vec2 vTextureCoord;
void main() {
gl_Position = uMVPMatrix * aPosition;
vTextureCoord = (uSTMatrix * aTextureCoord).xy;
}
To improve my question, here are some details.
Here is my source:
And here is my goal (alpha chanel):
When I leave the clear color as:
GLES20.glClearColor(1f, 1f, 0.9f, 0f);
There is also the yellow background, as you can see! How can I get a transparent view so I can see the blue one in this picture?
[ANSWER : HOW TO RESOLVE]
1. First, you need to set up your GLSurfaceView to allow transparency in your constructor:
this.getHolder().setFormat(PixelFormat.RGB_565);
this.getHolder().setFormat(PixelFormat.TRANSPARENT);
setEGLConfigChooser(8,8,8,8,16,0);
setEGLContextClientVersion(2);// all this before setRenderer()
2. Ask for transparency in your onDrawFrame method:
GLES20.glClearColor(0.0f, 0.0f, 0.0f, .0f);//
GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT|GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
For me, only a black transparent clear color worked: rgba(0,0,0,0).
Thanks again to the contributors who helped me.
--[EDIT : REMAINING PROBLEM]--
Here is an example:
On top there is a GLSurfaceView with an alpha video, and below it a custom GLTextureView; both are inside a horizontalView.
The top one appears just as I want! But check this when I scroll to the right:
The top one still appears while the bottom one hides as needed!
Just a quick look through your code already shows quite a few strange pieces. I wish you had included your vertex data (the positions and texture coordinates) at least. First, please check this line:
GLES20.glVertexAttribPointer(maTextureHandle, 3, GLES20.GL_FLOAT,
false, TRIANGLE_VERTICES_DATA_STRIDE_BYTES,
mTriangleVertices);
These look like texture coordinates, and you are using 3 components? Is that correct? Fortunately, if the stride is correct the component count will have no effect, but still.
So if I understand correctly that the top part of the video frame represents the color and the bottom the alpha, then your shader makes little sense to me. Could you show the vertex shader as well? Anyway, what I would expect you to do is use texture coordinates in the range [0, 1], representing the whole texture. Then in the vertex shader output two varying vec2 texture coordinates such that:
// declared as varying vec2 and assigned in main():
colorTextureCoordinates = vec2(textureCoordinates.x, textureCoordinates.y * .5);      // representing the top part
alphaTextureCoordinates = vec2(textureCoordinates.x, textureCoordinates.y * .5 + .5); // representing the bottom part
Then in the fragment shader all you do is mix the 2:
vec3 color = texture2D(sTexture, colorTextureCoordinates).rgb;
float alpha = texture2D(sTexture, alphaTextureCoordinates).b; // Why .b?
gl_FragColor = vec4(color, alpha);
Or am I missing something? Why is there an IF statement in your fragment shader that suggests that the top of the texture would use the color and the bottom part would use the color plus the alpha (or the other way around)?
I suggest you first try to draw the image you receive from the video with the most common texture shader, to confirm the whole frame is being drawn correctly. Then check that alpha blending is working by outputting a semitransparent color, gl_FragColor = vec4(color.rgb, .5). After you confirm both of these produce the expected results, try drawing only the top part of the frame (the color) using colorTextureCoordinates (the transparent parts should appear black). Then check the alpha part using alphaTextureCoordinates and gl_FragColor = vec4(alpha, alpha, alpha, 1.0), which should produce a grayscale image where white is full alpha and black is no alpha. After all of these are confirmed, try putting it all together as written above.
To make the surface view blend with other views you need to set the transparency and you need to fill the transparent pixels with the transparent color.
Since you are using blend you will need to clear the background to the clear color (.0f, .0f, .0f, .0f). Simply disabling blend should produce the same result in your case.
After doing so you will still see a black background, because the surface view will not implicitly blend with other views (the background) and only the RGB part is used to fill the rendered view. To enable the transparency you should take a look at some other answers and also this one.
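In practice, the usual recipe, echoing the setup shown earlier in this thread, looks roughly like this in the GLSurfaceView constructor, before setRenderer() (a sketch; adapt to your own view class):
setEGLContextClientVersion(2);
setEGLConfigChooser(8, 8, 8, 8, 16, 0);          // request an EGL surface with an alpha channel
getHolder().setFormat(PixelFormat.TRANSLUCENT);
setZOrderOnTop(true);                            // or setZOrderMediaOverlay(true), depending on your layout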

Moving texture OpenGL ES 2.0

I am trying to implement a sprite sheet of 8 columns and 8 rows in OpenGL ES 2.0.
I made the first image appear, but I can't figure out how to translate the texture matrix in OpenGL ES 2.0. The equivalent OpenGL ES 1.x code that I am looking for is:
gl.glMatrixMode(GL10.GL_TEXTURE);
gl.glLoadIdentity();
gl.glPushMatrix();
gl.glTranslatef(0.0f, 0.2f, 0f);
gl.glPopMatrix();
These are the matrices that I am using at the moment:
/**
* Store the model matrix. This matrix is used to move models from object space (where each model can be thought
* of being located at the center of the universe) to world space.
*/
private float[] mModelMatrix = new float[16];
/**
* Store the view matrix. This can be thought of as our camera. This matrix transforms world space to eye space;
* it positions things relative to our eye.
*/
private float[] mViewMatrix = new float[16];
/** Store the projection matrix. This is used to project the scene onto a 2D viewport. */
private float[] mProjectionMatrix = new float[16];
/** Allocate storage for the final combined matrix. This will be passed into the shader program. */
private float[] mMVPMatrix = new float[16];
/**
* Stores a copy of the model matrix specifically for the light position.
*/
private float[] mLightModelMatrix = new float[16];
My Vertex shader
uniform mat4 u_MVPMatrix; // A constant representing the combined model/view/projection matrix.
uniform mat4 u_MVMatrix; // A constant representing the combined model/view matrix.
attribute vec4 a_Position; // Per-vertex position information we will pass in.
attribute vec3 a_Normal; // Per-vertex normal information we will pass in.
attribute vec2 a_TexCoordinate; // Per-vertex texture coordinate information we will pass in.
varying vec3 v_Position; // This will be passed into the fragment shader.
varying vec3 v_Normal; // This will be passed into the fragment shader.
varying vec2 v_TexCoordinate; // This will be passed into the fragment shader.
// The entry point for our vertex shader.
void main()
{
// Transform the vertex into eye space.
v_Position = vec3(u_MVMatrix * a_Position);
// Pass through the texture coordinate.
v_TexCoordinate = a_TexCoordinate;
// Transform the normal's orientation into eye space.
v_Normal = vec3(u_MVMatrix * vec4(a_Normal, 0.0));
// gl_Position is a special variable used to store the final position.
// Multiply the vertex by the matrix to get the final point in normalized screen coordinates.
gl_Position = u_MVPMatrix * a_Position;
}
My Fragment shader:
precision mediump float; // Set the default precision to medium. We don't need as high of a
// precision in the fragment shader.
uniform vec3 u_LightPos; // The position of the light in eye space.
uniform sampler2D u_Texture; // The input texture.
varying vec3 v_Position; // Interpolated position for this fragment.
varying vec3 v_Normal; // Interpolated normal for this fragment.
varying vec2 v_TexCoordinate; // Interpolated texture coordinate per fragment.
// The entry point for our fragment shader.
void main()
{
// Will be used for attenuation.
float distance = length(u_LightPos - v_Position);
// Get a lighting direction vector from the light to the vertex.
vec3 lightVector = normalize(u_LightPos - v_Position);
// Calculate the dot product of the light vector and vertex normal. If the normal and light vector are
// pointing in the same direction then it will get max illumination.
float diffuse = max(dot(v_Normal, lightVector), 0.0);
// Add attenuation.
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance)));
// Add ambient lighting
diffuse = diffuse + 0.7;
// Multiply the color by the diffuse illumination level and texture value to get final output color.
gl_FragColor = (diffuse * texture2D(u_Texture, v_TexCoordinate));
}
You will need to perform transformations on the texture coordinates yourself. You could do this in one of four places:
Apply the transformation to your raw model data.
Apply the transformation in the CPU (not recommended unless you have good reason as this is what vertex shaders are for).
Apply the transformation in the vertex shader (recommended).
Apply the transformation in the fragment shader.
If you are going to apply a translation to the texture coordinates, the most flexible way is to use your maths library to create a translation matrix and pass the new matrix to your vertex shader as a uniform (the same way you pass mMVPMatrix and mLightModelMatrix).
You can then multiply the translation matrix by the texture coordinate in the vertex shader and output the result as a varying vector.
Vertex Shader:
texture_coordinate_varying = texture_matrix_uniform * texture_coordinate_attribute;
Fragment Shader:
gl_FragColor = texture2D(texture_sampler, texture_coordinate_varying);
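For example, for an 8x8 sprite sheet you could build the texture matrix on the CPU with android.opengl.Matrix and upload it every frame. A rough sketch (the names mTextureMatrix, u_TexMatrix and mProgramHandle are illustrative, not from the code above):
float[] mTextureMatrix = new float[16];
Matrix.setIdentityM(mTextureMatrix, 0);
// Step to column c, row r of the 8x8 sheet; each cell is 1/8 of the texture.
Matrix.translateM(mTextureMatrix, 0, c / 8.0f, r / 8.0f, 0.0f);
// If your texture coordinates cover the whole sheet, also scale them down to a single cell.
Matrix.scaleM(mTextureMatrix, 0, 1.0f / 8.0f, 1.0f / 8.0f, 1.0f);
int texMatrixHandle = GLES20.glGetUniformLocation(mProgramHandle, "u_TexMatrix");
GLES20.glUniformMatrix4fv(texMatrixHandle, 1, false, mTextureMatrix, 0);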
Please note: your GLES 1.0 code does not actually perform a translation, as you surrounded it with a push and pop.

Use Android 4x5 ColorMatrix in OpenGL ES 2 Shader

I am trying to use the Android ColorMatrix in OpenGL ES 2. I have been able to use a 4x4 matrix with the following code in the shader (this also adds an intensity parameter):
varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform lowp mat4 colorMatrix;
uniform lowp float intensity;
void main()
{
vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
vec4 outputColor = textureColor * colorMatrix;
gl_FragColor = (intensity * outputColor) + ((1.0 - intensity) * textureColor);
}
But I am struggling with how to convert the Android 4x5 matrix into something usable in the shader. I am not interested in the alpha channel.
The most direct approach is probably to split the ColorMatrix into a mat4 and a vec4, where the mat4 contains the multipliers for the input components, and the vec4 the constant offsets.
One detail to watch out for is the order of the matrix elements in memory. OpenGL uses column major storage, where the ColorMatrix appears to be in row major order. But it looks like you're already correcting for this discrepancy by multiplying the input vector from the right in the shader code.
The shader code will then look like this:
uniform lowp mat4 colorMatrix;
uniform lowp vec4 colorOffset;
...
vec4 outputColor = textureColor * colorMatrix + colorOffset;
In the Java code, say you have a ColorMatrix named mat:
ColorMatrix mat = ...;
float[] matArr = mat.getArray();
float[] oglMat = {
matArr[0], matArr[1], matArr[2], matArr[3],
matArr[5], matArr[6], matArr[7], matArr[8],
matArr[10], matArr[11], matArr[12], matArr[13],
matArr[15], matArr[16], matArr[17], matArr[18]};
// Set value of colorMatrix uniform using oglMat.
float[] oglOffset = {matArr[4], matArr[9], matArr[14], matArr[19]};
// Set value of colorOffset uniform using oglOffset.
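A minimal sketch of the uniform upload on the Java side (the handle names are assumptions). Note that the Android ColorMatrix offsets are defined on a 0..255 scale while shader colors are in 0..1, so the offsets likely need rescaling:
int colorMatrixLoc = GLES20.glGetUniformLocation(program, "colorMatrix");
int colorOffsetLoc = GLES20.glGetUniformLocation(program, "colorOffset");
GLES20.glUniformMatrix4fv(colorMatrixLoc, 1, false, oglMat, 0);
// Rescale the ColorMatrix offsets from 0..255 to the shader's 0..1 range.
for (int i = 0; i < 4; i++) oglOffset[i] /= 255.0f;
GLES20.glUniform4fv(colorOffsetLoc, 1, oglOffset, 0);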
