Android OpenGL ES 2.0: How to zoom a texture with a depth map?

In Android OpenGL ES 2.0, I bind a bitmap and a depth-map bitmap in fragment_shader.glsl:
precision mediump float;
uniform sampler2D sDepth;
uniform sampler2D sTexture;
uniform float time;
varying vec2 varyTexCoord;
void main() {
    vec4 depth = texture2D(sDepth, varyTexCoord);
    gl_FragColor = texture2D(sTexture, varyTexCoord);
}
sTexture: the original bitmap
sDepth: the depth map (ARGB_8888, values 0-255; 0 = far, 255 = near)
vertex_shader.glsl
attribute vec4 vPosition;
attribute vec2 vTexCoord;
uniform mat4 vMatrix;
varying vec2 varyTexCoord;
void main() {
    gl_Position = vPosition;
    varyTexCoord = vTexCoord;
}
Now I want to create a parallax effect: zoom into the image according to the depth value, with near areas zoomed more than far areas, which produces the parallax.
Can you give me some ideas? Thank you.

Create a mesh based on a regular grid, say 10x10, in (x, y) space.
Set the z coordinate from the depth map.
Make the uv coordinates.
Render the mesh with your color texture, and use gl_Position = projectionMatrix * scale * vertex;
Try different grid dimensions to find the best one.
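A minimal vertex shader sketch of that idea, assuming the grid's z values were filled in from the depth map on the CPU when the mesh was built, and that projectionMatrix and scale are uniforms you supply (the names are placeholders, not from the question):
attribute vec4 vPosition;      // x, y from the regular grid; z baked in from the depth map (0 = far, 1 = near)
attribute vec2 vTexCoord;
uniform mat4 projectionMatrix; // a perspective projection
uniform mat4 scale;            // the zoom transform, animated over time
varying vec2 varyTexCoord;
void main() {
    gl_Position = projectionMatrix * scale * vPosition;
    varyTexCoord = vTexCoord;
}
With a perspective projection, vertices with larger z (nearer) shift more on screen as the scale changes, which is what produces the parallax.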

Related

Calculate mean of a row in the fragment shader (OpenGL ES 2.0)

I am currently programming an application for image processing. To achieve the needed performance, I have to use the GPU to compute the camera input, more specifically use OpenGL ES 2.0.
With the help of this project (https://github.com/yulu/ShaderCam) I managed to pass the image to the pipeline and do simple operations with the fragment shader (like inverting colors etc.).
My knowledge of GLSL, fragment shaders and vertex shaders is fairly limited but I am aware of pipeline constraints and what the two shaders do in the pipeline.
So - formulating the problem - I would like to calculate the average color of a row in my received image and return it (per row) to my application.
I read here https://stackoverflow.com/a/13866636/8038866 that this is generally possible, however I can't seem to find out the following things:
1. (edit: SOLVED by simply passing the w and h of my texture to the vertex and fragment shader)
Knowing where the row ends (and having that information in the fragment shader). For this I assume that I would have to pass the width of the picture to the vertex shader and from there to the fragment shader, right?
2. How to calculate the average of the color values of each row in the fragment shader and then pass them to the application. If I understand it correctly, the fragment shader only executes the code per pixel, so I am not sure how to achieve this.
Here are the two very basic shaders
vertex shader:
uniform mat4 uTransformM;
uniform mat4 uOrientationM;
uniform vec2 ratios;
attribute vec2 aPosition;
varying vec2 vTextureCoord;
void main(){
    gl_Position = vec4(aPosition, 0.0, 1.0);
    vTextureCoord = (uTransformM * ((uOrientationM * gl_Position + 1.0) * 0.5)).xy;
    gl_Position.xy *= ratios;
}
fragment shader:
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES sTexture;
varying vec2 vTextureCoord;
void main(){
    gl_FragColor = texture2D(sTexture, vTextureCoord);
    //calc mean per row and pass it back
}
I am very thankful for every advice or help you can provide.
I found a way that does the trick for me. The idea is to calculate the mean for only one row of pixels, and then later read that line back in the application with
glReadPixels( GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid * data);
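For example (a hedged sketch: the shader below writes each row's mean into a one-pixel-wide vertical strip at x ≈ 0.5 of the surface, so the readback fetches that single column; surfaceWidth, surfaceHeight, and data are placeholder names):
glReadPixels(surfaceWidth / 2, 0, 1, surfaceHeight, GL_RGBA, GL_UNSIGNED_BYTE, data); // one mean per row, one pixel wide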
Here is my fragment shader (notice that the width of the surface is required as well):
#extension GL_OES_EGL_image_external : require
precision mediump float;
uniform samplerExternalOES sTexture;
varying float width; // texture width, passed through from the vertex shader
varying vec2 vTextureCoord;
void main(){
    vec4 accumulatedRGB = texture2D(sTexture, vec2(0.0, vTextureCoord.y));
    if(vTextureCoord.x < 0.50 && vTextureCoord.x > 0.499){ //small enough to only cover one line
        for(float i = 1.0; i <= width; ++i)
        {
            float xPosOnTexture = i / width;
            vec4 current = texture2D(sTexture, vec2(xPosOnTexture, vTextureCoord.y));
            accumulatedRGB += current;
        }
        vec4 mean = accumulatedRGB / (width + 1.0); // width + 1 samples were accumulated
        gl_FragColor = vec4(mean.rgb, mean.a); //avg color for one line
    }
    else{
        gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0); //rest of the screen
    }
}

Why is there an offset when I render this overlay?

I use Vuforia SDK to render the video stream of my phone's camera on the screen.
So the texture is generated by the Vuforia library, not me.
The shaders used to render this background are:
// Vertex Shader
attribute vec4 a_position;
attribute vec2 a_textureCoords;
varying vec2 v_textureCoords;
uniform mat4 u_projectionMatrix;
void main()
{
    gl_Position = u_projectionMatrix * a_position;
    v_textureCoords = a_textureCoords;
}
// Fragment Shader
precision mediump float; // a default float precision is mandatory in ES fragment shaders
varying highp vec2 v_textureCoords;
uniform sampler2D u_currentTexture;
void main()
{
    vec4 currentColor = texture2D(u_currentTexture, v_textureCoords);
    gl_FragColor = currentColor;
}
Now, I want an overlay in the upper-left corner of the screen:
I don't want this overlay to display only a pink texture, but rather a multiply blend of the pink texture and the background texture. Note that the textures do not have the same coordinates.
But for now, let's forget about the blending and just render the background texture in the shader program of the pink texture. So in the end, yes, one should see no difference between the background-only version and the background-with-overlay version.
As you can see (look at the painting and the top of the chair), there is a small offset...
The shaders used to render the overlay are:
// Vertex Shader
attribute vec4 a_position;
attribute vec2 a_currentTextureCoords;
varying vec2 v_currentTextureCoords;
void main()
{
    gl_Position = a_position;
    v_currentTextureCoords = a_currentTextureCoords;
}
// Fragment Shader
precision mediump float; // a default float precision is mandatory in ES fragment shaders
varying highp vec2 v_currentTextureCoords;
uniform sampler2D u_currentTexture;
uniform sampler2D u_backgroundTexture;
void main()
{
    vec2 screenSize = vec2(1080.0, 1920.0);
    vec2 cameraResolution = vec2(720.0, 1280.0);
    vec2 texelSize = vec2(1.0 / screenSize.x, 1.0 / screenSize.y);
    vec2 scaleFactor = vec2(cameraResolution.x / screenSize.x, cameraResolution.y / screenSize.y);
    vec2 uv = gl_FragCoord.xy * texelSize * scaleFactor;
    uv = vec2(scaleFactor.y - uv.y, scaleFactor.x - uv.x);
    vec4 backgroundColor = texture2D(u_backgroundTexture, uv);
    gl_FragColor = backgroundColor;
}
Are my calculations wrong?
Why do you need this line?
uv = vec2(scaleFactor.y - uv.y, scaleFactor.x - uv.x);
I'm not sure what arithmetic relationship the absolute texture coordinates have with the scale factor that would call for an addition or a subtraction...
P.S. It's not related to your question, but your shaders will be shorter and easier to read if you just use the vector operations in the language. For example, replace:
vec2 scaleFactor = vec2(cameraResolution.x / screenSize.x, cameraResolution.y / screenSize.y);
... with ...
vec2 scaleFactor = cameraResolution / screenSize;
As long as the vector types are the same length, it will do exactly what you expect with a lot less typing ...
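Putting the vector operations together, here is a hedged sketch of what the overlay fragment shader could reduce to, including the multiply blend described above (the questionable flip line is omitted, and the screen/camera sizes stay hard-coded as in the original):
precision mediump float;
varying highp vec2 v_currentTextureCoords;
uniform sampler2D u_currentTexture;
uniform sampler2D u_backgroundTexture;
void main()
{
    vec2 screenSize = vec2(1080.0, 1920.0);
    vec2 cameraResolution = vec2(720.0, 1280.0);
    vec2 scaleFactor = cameraResolution / screenSize;
    vec2 uv = gl_FragCoord.xy / screenSize * scaleFactor; // background uv from the window position
    vec4 backgroundColor = texture2D(u_backgroundTexture, uv);
    vec4 overlayColor = texture2D(u_currentTexture, v_currentTextureCoords);
    gl_FragColor = overlayColor * backgroundColor; // multiply blend of overlay and background
}
Whether the uv still needs a flip depends on how Vuforia orients its video texture, which is exactly the part worth checking for the offset.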

OpenGL ES 2: fragment shader optimization

I have the following simple fragment shader:
precision lowp float;
uniform sampler2D u_texture_image;
uniform sampler2D u_texture_mask;
uniform lowp float u_blink;
varying lowp vec2 v_texCoords_image;
varying lowp vec2 v_texCoords_mask;
varying lowp float v_shadow;
void main() {
    lowp vec4 color = vec4(texture2D(u_texture_image, v_texCoords_image));
    lowp vec4 mask_color = vec4(texture2D(u_texture_mask, v_texCoords_mask));
    //masking image
    color = vec4(color.xyz, mask_color.a);
    //blink
    color = mix(color, vec4(1.0, 1.0, 1.0, mask_color.a), u_blink);
    //check if it's a shadow
    color = mix(color, mask_color * 0.3, v_shadow);
    gl_FragColor = color;
}
With this code I draw 4400 polygons and get 25 fps (~40 ms between onDrawFrame calls). This is not enough, since it is not the whole scene. Can I somehow optimize this code? My target is 30 fps. Also, are there any profiling tools for fragment shader code?
ADD: How I call the fragment shader:
private void drawPieces() {
    //set my shader
    piecesProgram.useProgram();
    //set uniforms
    piecesProgram.setUniforms(MVPMatrix, textureImageId, textureMaskId, new PointF(2f, 2f), 0f);
    //load my vertices with glVertexAttribPointer
    piecesMesh.bindPieceData(piecesProgram, false);
    //draw it with glDrawElements
    piecesMesh.drawPieces(false);
    piecesMesh.disableAttributes(piecesProgram);
}
This shader does not look that complex.
What hardware are you using?
Are you recompiling this shader every time you use it?
How are you calling this shader?
Apple has written a nice article about GLSL best practices:
https://developer.apple.com/library/content/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/BestPracticesforShaders/BestPracticesforShaders.html
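As a small experiment along those lines, here is a hedged sketch that computes the same result with slightly fewer operations (the redundant vec4 constructors are dropped and the masking folded into one mix); the two texture fetches are the likely bottleneck, so treat any gain as something to measure, not assume:
precision lowp float;
uniform sampler2D u_texture_image;
uniform sampler2D u_texture_mask;
uniform lowp float u_blink;
varying lowp vec2 v_texCoords_image;
varying lowp vec2 v_texCoords_mask;
varying lowp float v_shadow;
void main() {
    lowp vec4 mask_color = texture2D(u_texture_mask, v_texCoords_mask);
    // image rgb blended towards white by u_blink; alpha comes from the mask
    lowp vec3 rgb = mix(texture2D(u_texture_image, v_texCoords_image).rgb, vec3(1.0), u_blink);
    // shadow fragments use the darkened mask color instead
    gl_FragColor = mix(vec4(rgb, mask_color.a), mask_color * 0.3, v_shadow);
}
Also note that lowp texture coordinates can cost precision on larger textures; mediump is often recommended for texcoords.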

GLSL circle becomes elliptical when rendered on screen?

I am trying to render a circle on my mobile using a fragment shader. I also followed this, as it had the best answer.
Vertex Shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
Fragment Shader:
varying highp vec2 textureCoordinate;
const highp vec2 center = vec2(0.5, 0.5);
const highp float radius = 0.5;
void main()
{
    highp float distanceFromCenter = distance(center, textureCoordinate);
    lowp float checkForPresenceWithinCircle = step(distanceFromCenter, radius);
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0) * checkForPresenceWithinCircle;
}
attribute vec4 position; is passed -1 to +1
and
attribute vec4 inputTextureCoordinate; is passed 0 to 1.
But while rendering I get an ellipse on the mobile screen. I think this might be because of the screen aspect ratio. How can I render a perfect circle on the screen?
I think this might be because of the screen aspect ratio.
Yes, this is exactly the problem.
The viewing volume is [-1, 1] in all 3 dimensions. That is mapped to the viewport for window space coordinates. Since you do not use any other transformations, you are directly drawing in clip space, and your clip space is identical to NDC space.
To get this right, you have to take the aspect ratio into account. You can either directly change your attribute values, correct for it in the vertex shader, or still draw the full-screen quad and account for it in the fragment shader.
The latter would be the most inefficient way. I would actually recommend adding a 2D scale vector uniform to the vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
uniform vec2 scale;
void main()
{
    gl_Position = vec4(scale, 1.0, 1.0) * position;
    textureCoordinate = inputTextureCoordinate.xy;
}
On your client side, you can set the uniform to (1.0/aspect_ratio, 1.0) if aspect_ratio is >= 1.0, and to (1.0, aspect_ratio) if it is below 1. That way, no matter what screen orientation you use, the circle will always be a circle and fit to the screen.
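For comparison, the fragment-shader route mentioned above might look roughly like this (a sketch only; aspect is an assumed uniform set to screen width / height, and the correction runs per fragment, which is why the vertex-shader fix is preferable):
varying highp vec2 textureCoordinate;
uniform highp float aspect; // assumed uniform: screen width / height
const highp vec2 center = vec2(0.5, 0.5);
const highp float radius = 0.5;
void main()
{
    // stretch x so distances are measured in square units (fractions of the screen height)
    highp vec2 pos = vec2(textureCoordinate.x * aspect, textureCoordinate.y);
    highp vec2 c = vec2(center.x * aspect, center.y);
    highp float distanceFromCenter = distance(c, pos);
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0) * step(distanceFromCenter, radius);
}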

libgdx - changing sprite color to white

I want to colorize a sprite so that the RGB channels are all 1 and alpha remains unchanged.
I gather this should be done with shaders, but the two accepted answers on StackOverflow (Change sprite color into white and libgdx changing sprite color while hurt) don't work for me: the result is transparent, and they don't work on http://shdr.bkcore.com/ either.
All you need is to replace the RGB each with 1.0 in the fragment shader.
Vertex shader -- this is like the one in SpriteBatch, with vertex color removed since you aren't using it:
attribute vec4 a_position;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
varying vec2 v_texCoords; // this declaration was missing; the fragment shader reads it
void main()
{
    v_texCoords = a_texCoord0;
    gl_Position = u_projTrans * a_position;
}
Fragment shader -- grab just the alpha value from the texture:
#ifdef GL_ES
precision lowp float; //since the only value we're storing is part of a color
#endif
varying vec2 v_texCoords;
uniform sampler2D u_texture;
void main()
{
    float alpha = texture2D(u_texture, v_texCoords).a;
    gl_FragColor = vec4(1.0, 1.0, 1.0, alpha);
}
