I am developing for Android with OpenGL ES 2 and two devices: a Google Nexus 3 and an Archos 80 G9 (8" tablet). On these devices my shaders work fine.
Now I have bought a Samsung Galaxy Tab 2, and a single vertex shader does not work on this device. This shader (from osgEarth) removes z-fighting for tracks on a 3D terrain by modulating the z value of each vertex. On the Samsung Galaxy Tab 2 this modulation does not work (!)
This is the shader:
std::string vertexShaderSource =
"#version 100\n\n"
"float remap( float val, float vmin, float vmax, float r0, float r1 ) \n"
"{ \n"
" float vr = (clamp(val, vmin, vmax)-vmin)/(vmax-vmin); \n"
" return r0 + vr * (r1-r0); \n"
"} \n\n"
"mat3 castMat4ToMat3Function(in mat4 m) \n"
"{ \n"
" return mat3( m[0].xyz, m[1].xyz, m[2].xyz ); \n"
"} \n\n"
"uniform mat4 osg_ViewMatrix; \n"
"uniform mat4 osg_ViewMatrixInverse; \n"
"uniform mat4 osg_ModelViewMatrix; \n"
"uniform mat4 osg_ProjectionMatrix; \n"
"uniform mat4 osg_ModelViewProjectionMatrix; \n"
"uniform float osgearth_depthoffset_minoffset; \n"
"attribute vec4 osg_Vertex;\n"
"attribute vec4 osg_Color;\n"
"varying vec4 v_color; \n\n"
"varying vec4 adjV; \n"
"varying float simRange; \n\n"
"void main(void) \n"
"{ \n"
// transform the vertex into eye space:
" vec4 vertEye = osg_ModelViewMatrix * osg_Vertex; \n"
" vec3 vertEye3 = vertEye.xyz/vertEye.w; \n"
" float range = length(vertEye3); \n"
"// vec3 adjVecEye3 = normalize(vertEye3); \n"
// calculate the "up" vector, that will be our adjustment vector:
" vec4 vertWorld = osg_ViewMatrixInverse * vertEye; \n"
" vec3 adjVecWorld3 = -normalize(vertWorld.xyz/vertWorld.w); \n"
" vec3 adjVecEye3 = castMat4ToMat3Function(osg_ViewMatrix) * adjVecWorld3; \n"
// remap depth offset based on camera distance to vertex. The farther you are away,
// the more of an offset you need.
" float offset = remap( range, 1000.0, 10000000.0, osgearth_depthoffset_minoffset, 10000.0); \n"
// adjust the Z (distance from the eye) by our offset value:
" vertEye3 -= adjVecEye3 * offset; \n"
" vertEye.xyz = vertEye3 * vertEye.w; \n"
// Transform the new adjusted vertex into clip space and pass it to the fragment shader.
" adjV = osg_ProjectionMatrix * vertEye; \n"
// Also pass along the simulated range (eye=>vertex distance). We will need this
// to detect when the depth offset has pushed the Z value "behind" the camera.
" simRange = range - offset; \n"
" gl_Position = osg_ModelViewProjectionMatrix * osg_Vertex; \n"
" v_color = osg_Color; \n"
// transform clipspace depth into [0..1] for FragDepth:
" gl_Position.z = max(0.0, (adjV.z/adjV.w)*gl_Position.w ); \n"
// if the offset pushed the Z behind the eye, the projection mapping will
// result in a z>1. We need to bring these values back down to the
// near clip plane (z=0). We need to check simRange too before doing this
// so we don't draw fragments that are legitimately beyond the far clip plane.
" float z = 0.5 * (1.0+(adjV.z/adjV.w)); \n"
" if ( z > 1.0 && simRange < 0.0 ) { gl_Position.z = 0.0; } \n"
"} \n" ;
With this code the track on the terrain is not displayed at all on the SGT2.
If I comment out the instruction
gl_Position.z = max(0.0, (adjV.z/adjV.w)*gl_Position.w );
the shader works and the track is displayed, but of course the z-fighting is not removed.
My Google Nexus and Samsung Galaxy Tab 2 both use the PowerVR SGX 540 GPU, so maybe the problem is in the floating-point precision of the different CPUs? (Or some bug?)
Any ideas?
Thanks
[SOLVED]
I removed these two varying declarations from the vertex shader:
"varying vec4 adjV; \n"
"varying float simRange; \n\n"
and declared them as plain vec4/float local variables instead.
I think they were being assigned mediump precision as varyings.
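In other words, the two declarations simply moved from file-scope varyings into main() as ordinary locals; they are only used inside the vertex shader, so nothing else has to change. A sketch of just the changed lines:
// was (file scope):  varying vec4 adjV;   varying float simRange;
// now (inside main, where vertex-shader floats default to highp):
vec4  adjV;
float simRange;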
Related
I'm trying to get the depth buffer on my Android device (Nexus 7).
Because I cannot attach and read the depth buffer in OpenGL ES, I've tried to use shaders to encode the depth values into the G and B components of the color buffer.
The shaders:
static const char gVertexShader[] =
"varying vec2 texcoord; \n"
"void main() \n"
"{ \n"
" gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; \n"
" texcoord = gl_MultiTexCoord0.xy/gl_MultiTexCoord0.w; \n"
"} \n";
static const char gFragmentShader[] =
#ifdef __ANDROID__
"precision highp float; \n"
#endif
"uniform sampler2D modelTexture; \n"
"varying vec2 texcoord; \n"
"void main() \n"
"{ \n"
" gl_FragColor.r = texture2D(modelTexture, texcoord).r; \n"
" float depth = gl_FragCoord.z; \n"
" gl_FragColor.g = depth; \n"
" gl_FragColor.b = depth; \n"
" gl_FragColor.a = 1.0; \n"
"} \n";
On the PC the results are OK (it shows the faces of city buildings, each face at a constant depth); the image is the G channel:
But on Android the result has a pattern of "dots" on it, which read as closer pixels:
The Android EGL configuration is:
EGL_BUFFER_SIZE: 32
EGL_ALPHA_SIZE: 8
EGL_BLUE_SIZE: 8
EGL_GREEN_SIZE: 8
EGL_RED_SIZE: 8
EGL_DEPTH_SIZE: 16
EGL_STENCIL_SIZE: 0
EGL_CONFIG_CAVEAT: 12344
EGL_CONFIG_ID: 25
EGL_LEVEL: 0
EGL_MAX_PBUFFER_HEIGHT: 2048
EGL_MAX_PBUFFER_PIXELS: 4194304
EGL_MAX_PBUFFER_WIDTH: 2048
EGL_NATIVE_RENDERABLE: 0
EGL_NATIVE_VISUAL_ID: 1
EGL_NATIVE_VISUAL_TYPE: -1
EGL_SAMPLES: 0
EGL_SAMPLE_BUFFERS: 0
EGL_SURFACE_TYPE: 3175
EGL_TRANSPARENT_TYPE: 12344
EGL_TRANSPARENT_RED_VALUE: 0
EGL_TRANSPARENT_GREEN_VALUE: 0
EGL_TRANSPARENT_BLUE_VALUE: 0
EGL_BIND_TO_TEXTURE_RGB: 0
EGL_BIND_TO_TEXTURE_RGBA: 0
EGL_MIN_SWAP_INTERVAL: 1
EGL_MAX_SWAP_INTERVAL: 1
EGL_LUMINANCE_SIZE: 0
EGL_ALPHA_MASK_SIZE: 8
EGL_COLOR_BUFFER_TYPE: 12430
EGL_RENDERABLE_TYPE: 4
EGL_CONFORMANT: 4
What could cause this?
Is there another way to read the depth buffer on an Android device?
Try disabling dithering:
GLES20.glDisable(GLES20.GL_DITHER);
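For example, with a GLSurfaceView.Renderer this is typically done once, when the surface is created (a minimal sketch; the renderer class name and its other contents are placeholders):
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

// Placeholder renderer: dithering is switched off as soon as the GL context exists,
// so the depth encoded into the G and B channels is not perturbed by a dither pattern.
public class DepthEncodingRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        GLES20.glDisable(GLES20.GL_DITHER);
        // compile shaders, create textures, etc.
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        // draw the scene and read back the color buffer as before
    }
}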
I am working on a video player for Android, in which I am using ffmpeg for decoding and OpenGL ES for rendering. I am stuck at the point where I use OpenGL ES shaders for YUV to RGB conversion. The application is able to display the image, but it does not display correct colors: after converting from YUV to RGB, the image only shows green and pink. I searched on Google, but found no solution. Can anyone please help me with this?
I am getting three different buffers (y, u, v) from ffmpeg and then passing these buffers to three textures as they are.
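For reference, this is roughly how each plane is uploaded into its own single-channel texture (a sketch: the texture id, width and height come from my own setup, it uploads from Java with the GLES20 bindings, and it assumes each plane is tightly packed, i.e. ffmpeg's linesize equals the width):
// Upload one decoded plane into a GL_LUMINANCE texture (one byte per texel).
// If linesize != width, the plane must be copied row by row first.
private void uploadPlane(int textureId, int width, int height, java.nio.ByteBuffer plane) {
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glPixelStorei(GLES20.GL_UNPACK_ALIGNMENT, 1);   // rows are not padded to 4 bytes
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
            width, height, 0, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE, plane);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
}

// For YUV420P the chroma planes are half the luma size in both dimensions:
// uploadPlane(yTextureId, width, height, yPlane);
// uploadPlane(uTextureId, width / 2, height / 2, uPlane);
// uploadPlane(vTextureId, width / 2, height / 2, vPlane);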
Here are the shaders I am using.
static const char kVertexShader[] =
"attribute vec4 vPosition; \n"
"attribute vec2 vTexCoord; \n"
"varying vec2 v_vTexCoord; \n"
"void main() { \n"
"gl_Position = vPosition; \n"
"v_vTexCoord = vTexCoord; \n"
"} \n";
static const char kFragmentShader[] =
"precision mediump float; \n"
"varying vec2 v_vTexCoord; \n"
"uniform sampler2D yTexture; \n"
"uniform sampler2D uTexture; \n"
"uniform sampler2D vTexture; \n"
"void main() { \n"
"float y=texture2D(yTexture, v_vTexCoord).r;\n"
"float u=texture2D(uTexture, v_vTexCoord).r - 0.5;\n"
"float v=texture2D(vTexture, v_vTexCoord).r - 0.5;\n"
"float r=y + 1.13983 * v;\n"
"float g=y - 0.39465 * u - 0.58060 * v;\n"
"float b=y + 2.03211 * u;\n"
"gl_FragColor = vec4(r, g, b, 1.0);\n"
"}\n";
static const GLfloat kVertexInformation[] =
{
-1.0f, 1.0f, // Vertex 0: top left
-1.0f,-1.0f, // Vertex 1: bottom left
1.0f,-1.0f, // Vertex 2: bottom right
1.0f, 1.0f // Vertex 3: top right
};
static const GLshort kTextureCoordinateInformation[] =
{
0, 0, // TexCoord 0 top left
0, 1, // TexCoord 1 bottom left
1, 1, // TexCoord 2 bottom right
1, 0 // TexCoord 3 top right
};
static const GLuint kStride = 0;//COORDS_PER_VERTEX * 4;
static const GLshort kIndicesInformation[] =
{
0, 1, 2,
0, 2, 3
};
Here is another person who asked the same question: Camera frame yuv to rgb conversion using GL shader language
Thank You.
UPDATE:
These are ClayMontgomery's shaders:
const char* VERTEX_SHADER = "\
attribute vec4 a_position;\
attribute vec2 a_texCoord;\
varying vec2 gsvTexCoord;\
varying vec2 gsvTexCoordLuma;\
varying vec2 gsvTexCoordChroma;\
\
void main()\
{\
gl_Position = a_position;\
gsvTexCoord = a_texCoord;\
gsvTexCoordLuma.s = a_texCoord.s / 2.0;\
gsvTexCoordLuma.t = a_texCoord.t / 2.0;\
gsvTexCoordChroma.s = a_texCoord.s / 4.0;\
gsvTexCoordChroma.t = a_texCoord.t / 4.0;\
}";
const char* YUV_FRAGMENT_SHADER = "\
precision highp float;\
uniform sampler2D y_texture;\
uniform sampler2D u_texture;\
uniform sampler2D v_texture;\
varying vec2 gsvTexCoord;\
varying vec2 gsvTexCoordLuma;\
varying vec2 gsvTexCoordChroma;\
\
void main()\
{\
float y = texture2D(y_texture, gsvTexCoordLuma).r;\
float u = texture2D(u_texture, gsvTexCoordChroma).r;\
float v = texture2D(v_texture, gsvTexCoordChroma).r;\
u = u - 0.5;\
v = v - 0.5;\
vec3 rgb;\
rgb.r = y + (1.403 * v);\
rgb.g = y - (0.344 * u) - (0.714 * v);\
rgb.b = y + (1.770 * u);\
gl_FragColor = vec4(rgb, 1.0);\
}";
Here is the output:
The pink and green colors indicate that your input luma and chroma values are not being normalized correctly before the color space conversion (CSC) matrix calculations, and some of your constants look wrong. Also, you should probably be scaling your texture coordinates for the chroma, because it is typically sub-sampled with respect to the luma. There is a good guide to this here:
http://fourcc.org/fccyvrgb.php
Here is an implementation in GLSL that I coded and tested on the Freescale i.MX53. I recommend you try this code.
const char* pVertexShaderSource = "\
uniform mat4 gsuMatModelView;\
uniform mat4 gsuMatProj;\
attribute vec4 gsaVertex;\
attribute vec2 gsaTexCoord;\
varying vec2 gsvTexCoord;\
varying vec2 gsvTexCoordLuma;\
varying vec2 gsvTexCoordChroma;\
\
void main()\
{\
vec4 vPosition = gsuMatModelView * gsaVertex;\
gl_Position = gsuMatProj * vPosition;\
gsvTexCoord = gsaTexCoord;\
gsvTexCoordLuma.s = gsaTexCoord.s / 2.0;\
gsvTexCoordLuma.t = gsaTexCoord.t / 2.0;\
gsvTexCoordChroma.s = gsaTexCoord.s / 4.0;\
gsvTexCoordChroma.t = gsaTexCoord.t / 4.0;\
}";
const char* pFragmentShaderSource = "\
precision highp float;\
uniform sampler2D gsuTexture0;\
uniform sampler2D gsuTexture1;\
uniform sampler2D gsuTexture2;\
varying vec2 gsvTexCoordLuma;\
varying vec2 gsvTexCoordChroma;\
\
void main()\
{\
float y = texture2D(gsuTexture0, gsvTexCoordLuma).r;\
float u = texture2D(gsuTexture1, gsvTexCoordChroma).r;\
float v = texture2D(gsuTexture2, gsvTexCoordChroma).r;\
u = u - 0.5;\
v = v - 0.5;\
vec3 rgb;\
rgb.r = y + (1.403 * v);\
rgb.g = y - (0.344 * u) - (0.714 * v);\
rgb.b = y + (1.770 * u);\
gl_FragColor = vec4(rgb, 1.0);\
}";
I am doing YUV to RGB conversion using OpenGL shaders, but it only shows green and pink colors. I am using ffmpeg to decode the movie. I am a beginner in this, so I have no idea how to solve it. ffmpeg gives me three YUV buffers, and I am directly assigning these buffers to three textures.
Here are the shaders I am using.
static const char* VERTEX_SHADER =
"attribute vec4 vPosition; \n"
"attribute vec2 a_texCoord; \n"
"varying vec2 tc; \n"
"uniform mat4 u_mvpMat; \n"
"void main() \n"
"{ \n"
" gl_Position = u_mvpMat * vPosition; \n"
" tc = a_texCoord; \n"
"} \n";
static const char* YUV_FRAG_SHADER =
"#ifdef GL_ES \n"
"precision highp float; \n"
"#endif \n"
"varying vec2 tc; \n"
"uniform sampler2D TextureY; \n"
"uniform sampler2D TextureU; \n"
"uniform sampler2D TextureV; \n"
"uniform float imageWidth; \n"
"uniform float imageHeight; \n"
"void main(void) \n"
"{ \n"
"float nx, ny; \n"
"vec3 yuv; \n"
"vec3 rgb; \n"
"nx = tc.x; \n"
"ny = tc.y; \n"
"yuv.x = texture2D(TextureY, tc).r; \n"
"yuv.y = texture2D(TextureU, vec2(nx/2.0, ny/2.0)).r - 0.5; \n"
"yuv.z = texture2D(TextureV, vec2(nx/2.0, ny/2.0)).r - 0.5; \n"
// Using BT.709 which is the standard for HDTV
"rgb = mat3( 1, 1, 1, \n"
"0, -0.18732, 1.8556, \n"
"1.57481, -0.46813, 0)*yuv;\n"
// BT.601, which is the standard for SDTV is provided as a reference
//"rgb = mat3( 1, 1, 1, \n"
// "0, -0.34413, 1.772, \n"
// "1.402, -0.71414, 0) * yuv; \n"
"gl_FragColor = vec4(rgb, 1.0); \n"
"} \n";
Output:
What am I doing wrong? Please help me out with this.
Thank You.
UPDATE:
While debugging the ffmpeg decoding, I found that the decoder gives PIX_FMT_YUV420P output. Do I have to make some tweaks to get correct image colors?
I'm not sure about this transformation:
"rgb = mat3( 1, 1, 1, \n"
"0, -0.18732, 1.8556, \n"
"1.57481, -0.46813, 0)*yuv;\n"
In refreshing my memory on matrix * vector operations in GLSL using this page as reference, I think you either need to transpose the coefficient matrix, or move yuv to the front of the operation, i.e., yuv * mat3(...). Performing the operation as mat3(...) * yuv means:
r = y * 1 + u * 1 + v * 1
g = y * 0 + u * -0.18732 + v * 1.8556
b = y * 1.57481 + u * -0.46813 + v * 0
And these conversions are very incorrect.
As another reference, here's a small, complete, sample GLSL shader program that converts YUV -> RGB that may be of some guidance: http://www.fourcc.org/source/YUV420P-OpenGL-GLSLang.c
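As a cross-check that avoids the matrix ordering question entirely, the conversion can also be written out per component (a sketch using common BT.601-style constants; the texture and varying names are the ones from the shader above, and it assumes the U and V textures were created at half width/height so the same normalized coordinate addresses all three planes):
// Per-component YUV -> RGB with BT.601-style constants. No mat3 constructor is
// involved, so column-major vs. row-major ordering cannot change the result.
float y = texture2D(TextureY, tc).r;
float u = texture2D(TextureU, tc).r - 0.5;
float v = texture2D(TextureV, tc).r - 0.5;
gl_FragColor = vec4(y + 1.402 * v,
                    y - 0.344 * u - 0.714 * v,
                    y + 1.772 * u,
                    1.0);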
How can I pass a float value to the fragment shader?
This is my code on Android:
int aUseTexture = GLES20.glGetAttribLocation(program, "uUseTexture");
GLES20.glUniform1f(aUseTexture, 1.0f);
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
Here is my shader:
String verticesShader =
"uniform mat4 uScreen;\n" +
"attribute vec2 aPosition;\n" +
"attribute vec3 aColor;\n" +
"attribute vec2 aTexPos; \n" +
"varying vec2 vTexPos; \n" +
"varying vec3 vColor;\n" +
"void main() {\n" +
" vTexPos = aTexPos; \n" +
" gl_Position = uScreen * vec4(aPosition.xy, 0.0, 1.0);\n" +
" vColor = aColor;\n" +
"}";
// Our fragment shader. Just return vColor.
// If you look at this source and just said 'WTF?', remember
// that all the attributes are defined in the VERTEX shader and
// all the 'varying' vars are considered OUTPUT of vertex shader
// and INPUT of the fragment shader. Here we just use the color
// we received and add an alpha value of 1.
String fragmentShader =
"uniform float uUseTexture; \n" +
"uniform sampler2D uTexture;\n" +
"precision mediump float;\n"+
"varying vec2 vTexPos; \n" +
"varying vec3 vColor;\n" +
"void main(void)\n" +
"{\n" +
" if ( uUseTexture != 1.0 ) \n" +
" gl_FragColor = vec4(vColor.xyz, 1); \n" +
" else \n" +
" gl_FragColor = texture2D(uTexture, vTexPos); \n" +
//" gl_FragColor = vec4(vColor.xyz, 1);\n" +
"}";
You can see the if statement in the fragment shader; that is where I check the value I pass in: if it is 1.0 it should use the texture, otherwise the color.
You are probably using the wrong function call for a uniform variable. Try glGetUniformLocation() as follows:
int aUseTexture = GLES20.glGetUniformLocation(program, "uUseTexture");
Also, floating-point equality testing (uUseTexture != 1.0) may not always be reliable. You may want to use an integer type instead.
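For example, with an integer flag it could look like this (a sketch: the names are kept from your code, and the precision statement is moved above the uniform declarations because a fragment shader has no default float precision):
// Fragment shader: declare the flag as an int and compare exactly.
String fragmentShader =
    "precision mediump float;\n" +
    "uniform int uUseTexture;\n" +
    "uniform sampler2D uTexture;\n" +
    "varying vec2 vTexPos;\n" +
    "varying vec3 vColor;\n" +
    "void main(void)\n" +
    "{\n" +
    "  if (uUseTexture == 1)\n" +
    "    gl_FragColor = texture2D(uTexture, vTexPos);\n" +
    "  else\n" +
    "    gl_FragColor = vec4(vColor, 1.0);\n" +
    "}";

// Java side: it is a uniform, so query it with glGetUniformLocation and set it
// with glUniform1i after glUseProgram.
int useTextureLoc = GLES20.glGetUniformLocation(program, "uUseTexture");
GLES20.glUseProgram(program);
GLES20.glUniform1i(useTextureLoc, 1);   // 1 = sample the texture, 0 = use the vertex color
GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);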
As far as I know, you have to pass the value through the vertex shader before it can get to the fragment shader: e.g. add "uniform float uUseTexture_in; \n" and "varying float uUseTexture; \n" at the top of the vertex shader, and in its main function add "uUseTexture = uUseTexture_in;". Then your shader should work.
This vertex shader code works on every device except for the Galaxy Note 2.
gl_Position = uMVPMatrix * vPosition;
where if I reverse the matrix multiplication to:
gl_Position = vPosition * uMVPMatrix; I can actually get things to appear.
Unfortunately, the reverse would require me to completely rewrite my transformations library.
Does anyone have any insight into what could be causing this? Is this an OpenGL driver error on the device?
Shader code:
private final String vertexShaderCode =
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
"attribute vec2 a_TexCoordinate;" +
"varying vec2 v_TexCoordinate;" +
"void main() {" +
// the matrix must be included as a modifier of gl_Position
"v_TexCoordinate = a_TexCoordinate;" +
"gl_Position = uMVPMatrix * vPosition;" +
"}";
private final String fragmentShaderCode =
"precision mediump float;" +
"uniform sampler2D u_Texture;" +
"varying vec2 v_TexCoordinate;" +
"void main() {" +
" gl_FragColor = texture2D(u_Texture, v_TexCoordinate);" +
//" gl_FragColor = vec4(v_TexCoordinate, 0, 1);" +
"}";
This is not only about the Galaxy Note 2 platform.
This is a mathematical question: because both GLSL and HLSL use column-major order,
MATRIX x VECTOR is the right way to multiply,
or
you can transpose the matrix using the option in
glUniformMatrix4fv( h_Uniforms[UNIFORMS_PROJECTION], 1, GL_FALSE or GL_TRUE, g_proxtrans.s);
(GL_TRUE means the matrix is transposed on upload). It can be a problem not to call this function with the right option every frame; just try it.
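Note that with the GLES20 Java bindings the transpose argument has to be false in ES 2.0, so in practice the matrix is built in column-major order on the CPU (android.opengl.Matrix already does this) and the MATRIX x VECTOR order is kept in the shader. A minimal sketch, where the handle and matrix arrays are placeholders:
// Build model-view-projection in column-major order and upload it untransposed.
float[] vpMatrix  = new float[16];
float[] mvpMatrix = new float[16];
android.opengl.Matrix.multiplyMM(vpMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
android.opengl.Matrix.multiplyMM(mvpMatrix, 0, vpMatrix, 0, modelMatrix, 0);

// OpenGL ES 2.0 requires transpose == false; the data is already column-major.
GLES20.glUniformMatrix4fv(uMVPMatrixHandle, 1, false, mvpMatrix, 0);
// ...and the shader keeps: gl_Position = uMVPMatrix * vPosition;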