I'm working on an AR app and trying to replace the plane texture.
I'm trying to render a texture on vertical and horizontal planes. It works fine for horizontal planes, but not on vertical ones.
I found that something is wrong with the texture_coord calculation, but I can't figure out what (I'm new to OpenGL).
Here is my vertex shader:
// declarations inferred from how the variables are used below
in vec3 a_position;
uniform mat4 u_model;
uniform mat4 u_mvp;
uniform float u_scale;
out vec2 texture_coord;
void main()
{
    vec4 local_pos = vec4(a_position, 1.0);
    vec4 world_pos = u_model * local_pos;
    texture_coord = world_pos.sp * u_scale;
    gl_Position = u_mvp * local_pos;
}
And my fragment shader:
// declarations inferred from how the variables are used below
in vec2 texture_coord;
uniform sampler2D u_texture;
uniform vec4 u_gridControl;
uniform vec4 u_dotColor;
uniform vec4 u_lineColor;
out vec4 outColor;
void main()
{
    vec4 control = texture(u_texture, texture_coord);
    float dotScale = 1.0;
    float lineFade = 0.5;
    vec3 newColor = (control.r * dotScale > u_gridControl.x) ? u_dotColor.rgb
        : (control.g > u_gridControl.y) ? u_lineColor.rgb * lineFade
        : u_lineColor.rgb * 0.25 * lineFade;
    outColor = vec4(newColor, 1.0);
}
The important bit is texture_coord = world_pos.sp in your vertex shader.
There are three ways to refer to the components of a vector in GLSL: xyzw (the most common), rgba (more natural for colours), and stpq (more natural for texture coordinates).
The line texture_coord = world_pos.sp would be clearer if it were written as texture_coord = world_pos.xz.
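For example:
vec4 v = vec4(1.0, 2.0, 3.0, 4.0);
vec2 a = v.xz; // vec2(1.0, 3.0)
vec2 b = v.sp; // the same components, written in texture-coordinate style
vec2 c = v.rb; // the same components, written in colour style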
Once you realize that you're generating texture coordinates by ignoring the y-component, it's obvious why vertical planes are not textured the way you would like.
Unfortunately there's no simple one-line fix. Perhaps tri-planar texturing would be an appropriate solution for you - this seems to be a good explanation of the technique.
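As a rough sketch of what tri-planar sampling could look like in your fragment shader - here world_pos and world_normal are assumed to be passed in from the vertex shader, and the blend exponent is an arbitrary choice:
in vec3 world_pos;      // assumed: world-space position from the vertex shader
in vec3 world_normal;   // assumed: world-space normal from the vertex shader
uniform sampler2D u_texture;
uniform float u_scale;
out vec4 outColor;
void main()
{
    // sample the texture three times, projecting along each world axis
    vec4 xProj = texture(u_texture, world_pos.yz * u_scale);
    vec4 yProj = texture(u_texture, world_pos.xz * u_scale); // what the original code did
    vec4 zProj = texture(u_texture, world_pos.xy * u_scale);
    // blend weights from the normal: a face pointing along an axis
    // takes most of its colour from that axis' projection
    vec3 w = abs(normalize(world_normal));
    w = pow(w, vec3(4.0));      // sharpen the blend (arbitrary exponent)
    w /= (w.x + w.y + w.z);     // normalize so the weights sum to 1
    outColor = xProj * w.x + yProj * w.y + zProj * w.z;
}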
I applied a cel-shading effect to my object.
This works well, but there are many conditional checks ("if" statements) in the fragment shader:
#version 300 es
precision lowp float;
in float v_CosViewAngle;
in float v_LightIntensity;
const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);
out vec4 outColor;
void main() {
    lowp float intensity = 0.0;
    if (v_CosViewAngle > 0.33) {
        intensity = 0.33;
        if (v_LightIntensity > 0.76) {
            intensity = 1.0;
        } else if (v_LightIntensity > 0.51) {
            intensity = 0.84;
        } else if (v_LightIntensity > 0.26) {
            intensity = 0.67;
        } else if (v_LightIntensity > 0.1) {
            intensity = 0.50;
        }
    }
    outColor = vec4(defaultColor * intensity, 1.0);
}
I guess so many checks in the fragment shader can ultimately affect performance. In addition, the shader size keeps growing, especially if there will be even more cel-shading levels.
Is there any other way to get this effect? Maybe some GLSL function can be used here?
Thanks in advance!
Store your color bands in an Nx1 texture and do a texture lookup using v_LightIntensity as your texture coordinate. If you want a different shading-level count, just change the texture.
EDIT: Store an NxM texture and do a lookup using v_LightIntensity and v_CosViewAngle as a 2D coordinate; that way you can kill the branches completely.
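A rough sketch of your fragment shader rewritten around such a lookup - u_bandTexture is a hypothetical NxM texture whose texels store the banded intensities (upload it with NEAREST filtering and CLAMP_TO_EDGE so the bands stay hard steps; this also assumes both varyings are in [0,1]):
#version 300 es
precision lowp float;
in float v_CosViewAngle;
in float v_LightIntensity;
// hypothetical band texture: each texel holds the intensity
// for one (light, view) band combination
uniform sampler2D u_bandTexture;
const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);
out vec4 outColor;
void main() {
    // the two intensities serve directly as texture coordinates;
    // this single lookup replaces the whole if-chain
    lowp float intensity = texture(u_bandTexture,
                                   vec2(v_LightIntensity, v_CosViewAngle)).r;
    outColor = vec4(defaultColor * intensity, 1.0);
}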
All I need is to pass a texture through fragment shader 1, get the result, and pass it to fragment shader 2.
I know how to link a vertex and a fragment shader together into a program and get the shader object.
What I don't know is how to get the result of shader 1, switch shaders (GLES20.glUseProgram?), and pass the result of shader 1 to shader 2.
Any ideas how to do it?
UPDATE
This is an example of what I want to achieve.
Effect 1:
Effect 2:
My goal is to combine Effect 1 and Effect 2.
UPDATE 2
Effect 2 function:
...
uniform float effect2;
vec2 getEffect_() {
    float mType = effect2;
    vec2 newCoordinate = vec2(textureCoordinate.x, textureCoordinate.y);
    vec2 res = vec2(textureCoordinate.x, textureCoordinate.y);
    // case 1
    if (mType == 3.5) {
        if (newCoordinate.x > 0.5) {
            res = vec2(1.25 - newCoordinate.x, newCoordinate.y);
        }
    }
    else
    // case 2
    ...
    return res;
}
...
If you want to pass the result as a texture to another shader, you should use RTT (render to texture), so that you get a texture you can hand to the second shader.
Yes, you use glUseProgram(name) to switch to the other shader, but that alone is not enough; you also have to render the final pass into the original FBO (the one you use now):
1. Make one FBO so the first pass can be rendered into a texture.
2. Render to that texture using the first shader; you now have the intermediate result as a texture.
3. Draw that texture with the second shader into the main FBO (the one you use now).
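For example, the second pass's fragment shader just samples the texture produced in step 2 - u_pass1Result and v_TexCoord are placeholder names for the FBO texture and the varying from your vertex shader:
precision mediump float;
varying vec2 v_TexCoord;
uniform sampler2D u_pass1Result; // the texture attached to the FBO in step 1
void main() {
    vec4 result = texture2D(u_pass1Result, v_TexCoord);
    // ...apply the second effect to 'result' here...
    gl_FragColor = result;
}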
If you just want to combine the two effects, simply combine the two fragment shaders:
// At the end of the second fragment shader,
// skip this line:
// gl_FragColor = result;
// and put these lines instead:
float grayScale = dot(result.rgb, vec3(0.299, 0.587, 0.114));
gl_FragColor = vec4(grayScale, grayScale, grayScale, 1.0);
Use only one shader program, built with the second effect's fragment shader.
I will assume you don't need to show those 30 effects at once.
Define uniform float effect2 in the 10 fragment shaders, just like effect1.
Pass effect2 a value like 0.5, 1.5 or 2.5,
and mix the effects differently according to the value you pass.
For example,
if (effect2 > 2.0) {
    float grayScale = dot(result.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(grayScale, grayScale, grayScale, 1.0);
} else if (effect2 > 1.0) {
    vec3 final_result_of_effect2_2 = fun_2(result.rgb);
    gl_FragColor = vec4(final_result_of_effect2_2, 1.0);
} else {
    vec3 final_result_of_effect2_3 = fun_3(result.rgb);
    gl_FragColor = vec4(final_result_of_effect2_3, 1.0);
}
As a starting point I use the Vuforia (version 4) sample called MultiTargets, which tracks a 3D physical cube in the camera feed and augments it with yellow grid lines along the cube edges.
What I want to achieve is to remove the textures and use diffuse lighting on the cube faces instead, by setting my own light position.
I want to do this on native Android and I do NOT want to use Unity.
It's been a hard journey of several days of work and learning. This is my first time working with OpenGL of any kind, and OpenGL ES 2.0 doesn't exactly make it easy for the beginner.
So I have a light source positioned slightly above the top face of my cube. I found that I can get the diffuse effect right if I compute the Lambert factor in model space: everything remains in place regardless of my camera, and only the top face gets any light.
But when I move to eye space, it becomes weird and the light seems to follow my camera around. Other faces get light, not only the top face. I don't understand why that is. For testing, I made sure that the light position is as expected, by rendering pixel brightness in the fragment shader based only on the distance to the light source. So I'm fairly confident in the correctness of my "lightDirectionEyespace", and my only explanation is that something must be wrong with the normals. But I think I followed the explanations for creating the normal matrix correctly...
Help please!
Then there is of course the question of whether those diffuse calculations SHOULD be performed in eye space at all. Will there be any disadvantages if I just do it in model space? I suspect that once I later use more models and lights and add specular highlights and transparency, it will no longer work, even though I don't yet see why.
My renderFrame method (some variable names still contain "bottle", which is the object I want to light next, once I get the cube right):
private void renderFrame()
{
  ShaderFactory.checkGLError("Check gl errors prior render Frame");

  // Clear color and depth buffer
  GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);

  // Get the state from Vuforia and mark the beginning of a rendering section
  final State state = Renderer.getInstance().begin();

  // Explicitly render the Video Background
  Renderer.getInstance().drawVideoBackground();

  GLES20.glEnable(GLES20.GL_DEPTH_TEST);
  GLES20.glEnable(GLES20.GL_BLEND);
  GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);

  // Did we find any trackables this frame?
  if (0 != state.getNumTrackableResults())
  {
    // Get the trackable:
    TrackableResult result = null;
    final int numResults = state.getNumTrackableResults();

    // Browse results searching for the MultiTarget
    for (int j = 0; j < numResults; j++)
    {
      result = state.getTrackableResult(j);
      if (result.isOfType(MultiTargetResult.getClassType()))
        break;
      result = null;
    }

    // If it was not found exit
    if (null == result)
    {
      // Clean up and leave
      GLES20.glDisable(GLES20.GL_BLEND);
      GLES20.glDisable(GLES20.GL_DEPTH_TEST);
      Renderer.getInstance().end();
      return;
    }

    final Matrix44F modelViewMatrix_Vuforia = Tool.convertPose2GLMatrix(result.getPose());
    final float[] modelViewMatrix = modelViewMatrix_Vuforia.getData();
    final float[] modelViewProjection = new float[16];
    Matrix.scaleM(modelViewMatrix, 0, CUBE_SCALE_X, CUBE_SCALE_Y, CUBE_SCALE_Z);
    Matrix.multiplyMM(modelViewProjection, 0, vuforiaAppSession
        .getProjectionMatrix().getData(), 0, modelViewMatrix, 0);

    GLES20.glUseProgram(bottleShaderProgramID);

    // Draw the cube:
    GLES20.glEnable(GLES20.GL_CULL_FACE);
    GLES20.glCullFace(GLES20.GL_BACK);

    GLES20.glVertexAttribPointer(vertexHandleBottle, 3, GLES20.GL_FLOAT, false, 0, cubeObject.getVertices());
    GLES20.glVertexAttribPointer(normalHandleBottle, 3, GLES20.GL_FLOAT, false, 0, cubeObject.getNormals());
    GLES20.glEnableVertexAttribArray(vertexHandleBottle);
    GLES20.glEnableVertexAttribArray(normalHandleBottle);

    // add light position and color
    final float[] lightPositionInModelSpace = new float[] {0.0f, 1.1f, 0.0f, 1.0f};
    GLES20.glUniform4f(lightPositionHandleBottle, lightPositionInModelSpace[0], lightPositionInModelSpace[1],
        lightPositionInModelSpace[2], lightPositionInModelSpace[3]);
    GLES20.glUniform3f(lightColorHandleBottle, 0.9f, 0.9f, 0.9f);

    // create the normalMatrix for lighting calculations
    final float[] normalMatrix = new float[16];
    Matrix.invertM(normalMatrix, 0, modelViewMatrix, 0);
    Matrix.transposeM(normalMatrix, 0, normalMatrix, 0);
    // pass the normalMatrix to the shader
    GLES20.glUniformMatrix4fv(normalMatrixHandleBottle, 1, false, normalMatrix, 0);

    // extract the camera position for lighting calculations (last column of matrix)
    // GLES20.glUniform3f(cameraPositionHandleBottle, normalMatrix[12], normalMatrix[13], normalMatrix[14]);

    // set material properties
    GLES20.glUniform3f(matAmbientHandleBottle, 0.0f, 0.0f, 0.0f);
    GLES20.glUniform3f(matDiffuseHandleBottle, 0.1f, 0.9f, 0.1f);

    // pass the model view matrix to the shader
    GLES20.glUniformMatrix4fv(modelViewMatrixHandleBottle, 1, false, modelViewMatrix, 0);

    // pass the model view projection matrix to the shader
    // the "transpose" parameter must be "false" according to the spec, anything else is an error
    GLES20.glUniformMatrix4fv(mvpMatrixHandleBottle, 1, false, modelViewProjection, 0);

    GLES20.glDrawElements(GLES20.GL_TRIANGLES,
        cubeObject.getNumObjectIndex(), GLES20.GL_UNSIGNED_SHORT, cubeObject.getIndices());

    GLES20.glDisable(GLES20.GL_CULL_FACE);

    // disable the enabled arrays after everything has been rendered
    GLES20.glDisableVertexAttribArray(vertexHandleBottle);
    GLES20.glDisableVertexAttribArray(normalHandleBottle);

    ShaderFactory.checkGLError("MultiTargets renderFrame");
  }

  GLES20.glDisable(GLES20.GL_BLEND);
  GLES20.glDisable(GLES20.GL_DEPTH_TEST);
  Renderer.getInstance().end();
}
My vertex shader:
attribute vec4 vertexPosition;
attribute vec3 vertexNormal;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 normalMatrix;
// lighting
uniform vec4 uLightPosition;
uniform vec3 uLightColor;
// material
uniform vec3 uMatAmbient;
uniform vec3 uMatDiffuse;
// pass to fragment shader
varying vec3 vNormalEyespace;
varying vec3 vVertexEyespace;
varying vec4 vLightPositionEyespace;
varying vec3 vNormal;
varying vec4 vVertex;
void main()
{
    // we can just take vec3() of a vec4 and it will take the first 3 entries
    vNormalEyespace = vec3(normalMatrix * vec4(vertexNormal, 1.0));
    vNormal = vertexNormal;
    vVertexEyespace = vec3(modelViewMatrix * vertexPosition);
    vVertex = vertexPosition;
    // light position
    vLightPositionEyespace = modelViewMatrix * uLightPosition;
    gl_Position = modelViewProjectionMatrix * vertexPosition;
}
And my fragment shader:
precision highp float; //apparently necessary to force same precision as in vertex shader
//lighting
uniform vec4 uLightPosition;
uniform vec3 uLightColor;
//material
uniform vec3 uMatAmbient;
uniform vec3 uMatDiffuse;
//from vertex shader
varying vec3 vNormalEyespace;
varying vec3 vVertexEyespace;
varying vec4 vLightPositionEyespace;
varying vec3 vNormal;
varying vec4 vVertex;
void main()
{
    vec3 normalModel = normalize(vNormal);
    vec3 normalEyespace = normalize(vNormalEyespace);
    vec3 lightDirectionModel = normalize(uLightPosition.xyz - vVertex.xyz);
    vec3 lightDirectionEyespace = normalize(vLightPositionEyespace.xyz - vVertexEyespace.xyz);
    vec3 ambientTerm = uMatAmbient;
    vec3 diffuseTerm = uMatDiffuse * uLightColor;
    // calculate the lambert factor via cosine law
    float diffuseLambert = max(dot(normalEyespace, lightDirectionEyespace), 0.0);
    // Attenuate the light based on distance.
    float distance = length(vLightPositionEyespace.xyz - vVertexEyespace.xyz);
    float diffuseLambertAttenuated = diffuseLambert * (1.0 / (1.0 + (0.01 * distance * distance)));
    diffuseTerm = diffuseLambertAttenuated * diffuseTerm;
    gl_FragColor = vec4(ambientTerm + diffuseTerm, 1.0);
}
I finally solved all the problems. There were two issues that might be of interest to future readers:
1. The Vuforia CubeObject class from the official sample (current Vuforia version 4) has wrong normals. They do not all correspond to the vertex definition order. If you're using the CubeObject from the sample, make sure that the normal definitions correctly correspond to the faces. Vuforia fail...
2. As suspected, my normalMatrix was wrongly built. We cannot just invert-transpose the 4x4 modelViewMatrix; we need to first extract the top-left 3x3 submatrix from it and then invert-transpose that.
Here is the code that works for me:
final Mat3 normalMatrixCube=new Mat3();
normalMatrixCube.SetFrom4X4(modelViewMatrix);
normalMatrixCube.invert();
normalMatrixCube.transpose();
This code by itself is not that useful though, because it relies on a custom class Mat3, which I randomly imported from this guy, because neither Android nor Vuforia seems to offer a matrix class that can invert/transpose 3x3 matrices. This really makes me question my sanity - the only code that works for such a basic problem has to rely on a custom matrix class? Maybe I'm just doing it wrong, I don't know...
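For what it's worth, if you can target GLSL ES 3.00 instead of 1.00, an alternative is to build the normal matrix in the vertex shader, since that version adds inverse(). A rough sketch (note the inverse is recomputed per vertex, so doing it once on the CPU remains cheaper):
#version 300 es
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
in vec4 vertexPosition;
in vec3 vertexNormal;
out vec3 vNormalEyespace;
void main()
{
    // mat3() extracts the top-left 3x3 submatrix;
    // inverse() only exists from GLSL ES 3.00 onwards
    mat3 normalMatrix = transpose(inverse(mat3(modelViewMatrix)));
    vNormalEyespace = normalMatrix * vertexNormal;
    gl_Position = modelViewProjectionMatrix * vertexPosition;
}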
Thumbs up for not using the fixed-function pipeline for this! I found your example quite useful for understanding that one also needs to transform the light into a position in eye space. All the answers I've found just recommend using glLight.
While this helped me get a static light source working, one thing missing from your code, if you want to transform your model(s) while keeping the light source static (e.g. rotating the object), is to keep track of the original modelview matrix until the view changes, or until you draw another object with a different model matrix. Something like:
vLightPositionEyespace = fixedModelView * uLightPosition;
where fixedModelView can be updated in your renderFrame() method.
This thread on the OpenGL discussion boards helped :)
Sorry for the broad question, but I didn't quite know how to word it. I am creating an app that manipulates the camera's pixels. I am new to OpenGL, and my problem could be in how I link textures to the shaders, or somewhere in my actual shader code.
I have an RGB look-up table that I turn into a texture and pass into the shader to use as the manipulation table. I believe my texture has the proper size and settings, but I am not 100% sure. In my shader I have this:
uniform sampler2D data_Texture; // The RGB look-up table texture
uniform samplerExternalOES u_Texture; // The camera's texture
And this is in my shader's main function:
// Color changing Algorithm
vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
gl_FragColor = vec4(texel.x, texel.y, texel.z, 1.0);
float rr = gl_FragColor.r * 255.0;
float gg = gl_FragColor.g * 255.0;
float bb = gl_FragColor.b * 255.0;
int r = int(rr);
int g = int(gg);
int b = int(bb);
int index = ((r/4) * 4096) + ((g/4) * 64) + (b/4);
int x = int(mod(float(index), 512.0));
int y = index / 512;
vec4 data = texture2D(data_Texture, vec2(float(x)/512.0, float(y)/512.0));
We take the camera's RGB pixel to compute an index into the look-up table, and then we try to get the RGB data out of the look-up table to replace the camera's pixel with. This is where the problem occurs. As you can probably tell from the code above, we don't actually change the FragColor with our data. This is because we were testing and found an interesting occurrence. When we comment out the last line of the main function,
//vec4 data = texture2D(data_Texture, vec2(float(x)/512.0, float(y)/512.0));
the camera displays as normal, because we don't do any manipulation of the actual FragColor. But when we leave the last line in, the pixels turn green for dark colors and pink/orange for light colors.
Why does filling this data variable, without explicitly changing the FragColor, change the camera's pixels?
I'm using OpenSceneGraph on Android with GLES2.
I have two cameras: one for the scene and one for the HUD. The scene camera is the main camera in my osgViewer. My trouble is with the HUD camera. I am trying to add an OSG geometry to the 2D scene (the HUD camera) but can't seem to get it to show up.
I'm using custom shaders to manipulate the geometry's position. To my understanding the projection matrix is correct, but I think the problem may be with the view matrix, since I'm using two cameras. I've tried setting the HUD's view to the scene's initial view, but to no avail. Please correct me if I am wrong about this.
// New camera for the HUD
hud_camera = new osg::Camera;
// Set the camera's view properties
hud_camera->setProjectionMatrix(
osg::Matrix::ortho2D(0, 1280, 0, 696));
hud_camera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
hud_camera->setViewMatrix(osg::Matrix::identity());
hud_camera->setViewport(0, 0, 1280, 696);
// Clear the depth buffer
hud_camera->setClearMask(GL_DEPTH_BUFFER_BIT);
// Draw the subgraph after the main camera view and don't allow this camera
// to grab event focus from the main camera
hud_camera->setRenderOrder(osg::Camera::POST_RENDER);
hud_camera->setAllowEventFocus(false);
osg::Geode * geode;
osg::Geometry * geom;
osg::Vec3Array * vertices;
osg::Program * program;
// Create the geode and geometry to hold the crosshair image
geode = new osg::Geode;
geom = new osg::Geometry;
geode->addDrawable(geom);
hud_camera->addChild(geode);
// Set the vertices
vertices = new osg::Vec3Array;
vertices->push_back(osg::Vec3(0.0, 0.0, 0.0));
vertices->push_back(osg::Vec3(1.0, 0.0, 0.0));
vertices->push_back(osg::Vec3(1.0, 0.0, 1.0));
vertices->push_back(osg::Vec3(0.0, 0.0, 1.0));
geom->setVertexArray(vertices);
// set colors
osg::Vec4Array* colors = new osg::Vec4Array;
colors->push_back(osg::Vec4(1.0, 0.0, 0.0, 1.0));
colors->push_back(osg::Vec4(0.0, 1.0, 0.0, 1.0));
colors->push_back(osg::Vec4(0.0, 0.0, 1.0, 1.0));
colors->push_back(osg::Vec4(1.0, 0.0, 1.0, 1.0));
geom->setVertexAttribArray(7, colors);
geom->setVertexAttribBinding(7, osg::Geometry::BIND_PER_VERTEX);
// Set primitive set
geom->addPrimitiveSet(new osg::DrawArrays(GL_QUADS, 0, 4));
geom->setUseVertexBufferObjects(true);
// Vertex Shader
char vertSource[] =
"attribute vec4 osg_Vertex;\n"
"attribute vec4 a_col;"
"uniform mat4 osg_ModelViewProjectionMatrix;\n"
"uniform mat4 proj_matrix;\n"
"uniform mat4 view_matrix;\n"
"varying vec4 v_col;"
"void main(void)\n"
"{\n"
"gl_Position = view_matrix * proj_matrix * osg_Vertex;\n"
"v_col = a_col;\n"
"}\n";
// Fragment shader
char fragSource[] =
"precision mediump float;\n"
"varying vec4 v_col;"
"void main(void)\n"
"{\n"
"gl_FragColor = v_col;\n"
"}\n";
// Set the shaders
program = new osg::Program;
program->setName("Crosshair Shader");
program->addShader(new osg::Shader(osg::Shader::VERTEX, vertSource));
program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSource));
program->addBindAttribLocation("a_col", 7);
osg::StateSet* state = geode->getOrCreateStateSet();
state->addUniform(
new osg::Uniform("proj_matrix", hud_camera->getProjectionMatrix()));
state->addUniform(
new osg::Uniform("view_matrix", hud_camera->getViewMatrix());
// Add the program shaders
geode->getOrCreateStateSet()->setAttributeAndModes(
program, osg::StateAttribute::ON );
// Needs no lighting or any depth for a 2D image
geode->getOrCreateStateSet()->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
state->setMode(GL_DEPTH_TEST, osg::StateAttribute::OFF);
Afterwards I attach the hud_camera as a child of the scene data. Also, the scene camera is doing nothing special other than looking at a particular point in the scene.
I do know for a fact that the geometry is being created. By using this in the vertex shader:
gl_Position = osg_ModelViewProjectionMatrix * osg_Vertex;
I can see it off to the left in the scene.
First of all, you shouldn't be computing view * proj * vertex; it should be proj * view * vertex.
For a HUD I don't think you really need a view matrix at all. My understanding is that you just want to draw a flat quad on the screen?
If your projection matrix maps (0,0) to (1280,696), then you can just draw objects in this coordinate system (a quad from (0,0) to (640,696) will cover the left half of the screen, etc).
Or if you want to draw things in the space from (0,0) to (1,1) then just override the projection matrix to map this space. It doesn't make much sense to me to involve a view matrix for something as simple as a HUD.
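For example, a stripped-down version of your vertex shader with the view matrix dropped entirely; vertices are then specified directly in the (0,0)-(1280,696) space of your ortho projection:
attribute vec4 osg_Vertex;
attribute vec4 a_col;
uniform mat4 proj_matrix; // ortho2D(0, 1280, 0, 696)
varying vec4 v_col;
void main(void)
{
    // an orthographic projection alone is enough for a HUD
    gl_Position = proj_matrix * osg_Vertex;
    v_col = a_col;
}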