Texture coordinates inconsistent on Android

I tried tiling texture coordinates on a quad in Unity for some debugging.
On desktop it has no problems, but on Android (an old tablet) I see inconsistency in the texture coordinates (the images linked below show the results).
As we go from 0.0,0.0 to 1.0,1.0 the texture coordinates become increasingly inconsistent, and the inconsistency grows as the tiling values go up. The results in the images are for a tiling of 60,160.
My device runs Android 5.0 and its GPU supports OpenGL ES 2.0, not 3.0.
So I would like to know why this happens on my tablet but not on my desktop; I would appreciate information about what is happening in the GPU that causes it.
The shader I used:
Shader "Unlit/uvdisplay"
{
Properties
{
}
SubShader
{
Tags { "RenderType"="Opaque" }
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = v.uv;
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = fixed4(frac(i.uv*fixed2(60.0,160.0)),0.0,1.0);
return col;
}
ENDCG
}
}
}
Texcoord at the origin
Texcoord at the opposite end

Not all Android devices support highp in the fragment shader; some older devices are limited to mediump.
In Unity terms, based on these definitions, it's as if you ask for float precision in your fragment shader but you get half precision.
There is not much you can do about it except try to keep values near the origin to minimize the error.
Also, on some of the devices that only support half precision, as long as you do nothing except sample the texture (i.e. you don't modify or swizzle the texture coordinates before using them), the driver/GPU is able to use the full precision for the texture sampling.
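To see why the error grows toward 1.0,1.0: if the fragment interpolator is effectively a 16-bit half float (10-bit mantissa), values around uv*160 ≈ 160 can only change in steps of roughly 0.125, so frac() there collapses to only a handful of distinct values per tile, which matches the banding in the images. Below is a minimal sketch of the second suggestion above: a small repeat-wrapped ramp texture (here called _RampTex, an assumed name) and the tiling multiply moved into the vertex shader, so the fragment shader does nothing but sample.

            // Hedged sketch, not the original shader: _RampTex is an assumed
            // texture whose wrap mode is set to Repeat, so the texture unit
            // does the wrapping instead of frac() in half precision.
            sampler2D _RampTex;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                // apply the tiling per-vertex; the interpolated value goes
                // straight into the sampler, unmodified in the fragment shader
                o.uv = v.uv * float2(60.0, 160.0);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // a plain texture sample, so the GPU can keep full precision
                // for the coordinate it feeds to the texture unit
                return tex2D(_RampTex, i.uv);
            }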

Related

Cell shading effect in OpenGL ES 2.0/3.0

I applied a cell shading effect to the object, like this:
This works well, but there are many conditional checks ("if" statements) in the fragment shader:
#version 300 es
precision lowp float;

in float v_CosViewAngle;
in float v_LightIntensity;

out vec4 outColor;

const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);

void main() {
    lowp float intensity = 0.0;
    if (v_CosViewAngle > 0.33) {
        intensity = 0.33;
        if (v_LightIntensity > 0.76) {
            intensity = 1.0;
        } else if (v_LightIntensity > 0.51) {
            intensity = 0.84;
        } else if (v_LightIntensity > 0.26) {
            intensity = 0.67;
        } else if (v_LightIntensity > 0.1) {
            intensity = 0.50;
        }
    }
    outColor = vec4(defaultColor * intensity, 1.0);
}
I guess so many checks in the fragment shader can ultimately affect performance. In addition, the shader size increases, especially if there are even more cell shading levels.
Is there any other way to get this effect? Maybe some glsl-function can be used here?
Thanks in advance!
Store your color bands in an Nx1 texture and do a texture lookup using v_LightIntensity as your texture coordinate. If you want a different shading level count, just change the texture.
EDIT: Store an NxM texture instead, do the lookup using v_LightIntensity and v_CosViewAngle as a 2D coordinate, and you can kill the branches completely.
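A minimal sketch of the 2D-lookup variant, assuming a band texture bound as u_bandTex (the name is an assumption) and sampled with NEAREST filtering so the bands stay hard-edged:

#version 300 es
precision lowp float;

// u_bandTex is assumed: an NxM texture holding the intensity bands,
// filtered with GL_NEAREST so there is no blending between bands.
uniform sampler2D u_bandTex;

in float v_CosViewAngle;
in float v_LightIntensity;
out vec4 outColor;

const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);

void main() {
    // branch-free: the band thresholds live in the texture, not the shader
    float intensity = texture(u_bandTex, vec2(v_LightIntensity, v_CosViewAngle)).r;
    outColor = vec4(defaultColor * intensity, 1.0);
}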

Incorrect texture display on vertical plane in OpenGL

I'm working on an AR app and trying to replace the plane texture.
I'm trying to render a texture on vertical and horizontal planes. It works fine for horizontal planes, but doesn't work well on vertical ones.
I found that something is wrong with the texture_coord calculation, but I can't figure it out (I'm new to OpenGL).
Here is my vertex shader:
void main()
{
    vec4 local_pos = vec4(a_position, 1.0);
    vec4 world_pos = u_model * local_pos;
    texture_coord = world_pos.sp * u_scale;
    gl_Position = u_mvp * local_pos;
}
and my fragment shader:
out vec4 outColor;

void main()
{
    vec4 control = texture(u_texture, diffuse_coord);
    float dotScale = 1.0;
    float lineFade = 0.5;
    vec3 newColor = (control.r * dotScale > u_gridControl.x) ? u_dotColor.rgb
                  : control.g > u_gridControl.y ? u_lineColor.rgb * lineFade
                  : u_lineColor.rgb * 0.25 * lineFade;
    outColor = vec4(newColor, 1.0);
}
The important bit is texture_coord = world_pos.sp in your vertex shader.
There are 3 ways to refer to the components of a vector in GLSL. xyzw (the most common), rgba (more natural for colours), stpq (more natural for texture coordinates).
The line texture_coord = world_pos.sp would be clearer if it were written as texture_coord = world_pos.xz.
Once you realize that you're generating texture coordinates by ignoring the y-component it's obvious why vertical planes are not textured how you would like.
Unfortunately there's no simple one line fix. Perhaps tri-planar texturing might be an appropriate solution for you - this seems to be a good explanation of the technique.
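For reference, here is a minimal sketch of the tri-planar idea mentioned above; the world-space position/normal inputs and the triplanar helper name are assumptions, not part of the question's code:

// Tri-planar sampling sketch: project the world position onto the three
// axis-aligned planes and blend the samples by the surface normal, so
// vertical planes get sensible coordinates too.
vec4 triplanar(sampler2D tex, vec3 worldPos, vec3 worldNormal, float scale)
{
    // blend weights from the absolute normal, normalized to sum to 1
    vec3 w = abs(normalize(worldNormal));
    w /= (w.x + w.y + w.z);

    vec4 xProj = texture(tex, worldPos.yz * scale); // faces pointing along X
    vec4 yProj = texture(tex, worldPos.xz * scale); // faces pointing along Y
    vec4 zProj = texture(tex, worldPos.xy * scale); // faces pointing along Z

    return xProj * w.x + yProj * w.y + zProj * w.z;
}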

Swap out MainTex pixels with other textures' via surface Shader (Unity)

The main texture of my surface shader is a Google Maps image tile, similar to this:
I want to replace pixels that are close to a specified color with that from a separate texture. What is working now is the following:
Shader "MyShader"
{
Properties
{
_MainTex("Base (RGB) Trans (A)", 2D) = "white" {}
_GrassTexture("Grass Texture", 2D) = "white" {}
_RoadTexture("Road Texture", 2D) = "white" {}
_WaterTexture("Water Texture", 2D) = "white" {}
}
SubShader
{
Tags{ "Queue" = "Transparent-1" "IgnoreProjector" = "True" "ForceNoShadowCasting" = "True" "RenderType" = "Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert alpha approxview halfasview noforwardadd nometa
uniform sampler2D _MainTex;
uniform sampler2D _GrassTexture;
uniform sampler2D _RoadTexture;
uniform sampler2D _WaterTexture;
struct Input
{
float2 uv_MainTex;
};
void surf(Input IN, inout SurfaceOutput o)
{
fixed4 ct = tex2D(_MainTex, IN.uv_MainTex);
// if the red (or blue) channel of the pixel is within a
// specific range, get either a 1 or a 0 (true/false).
int grassCond = int(ct.r >= 0.45) * int(0.46 >= ct.r);
int waterCond = int(ct.r >= 0.14) * int(0.15 >= ct.r);
int roadCond = int(ct.b >= 0.23) * int(0.24 >= ct.b);
// if none of the above conditions is a 1, then we want to keep our
// current pixel's color:
half defaultCond = 1 - grassCond - waterCond - roadCond;
// get the pixel from each texture, multiple by their check condition
// to get:
// fixed4(0,0,0,0) if this isn't the right texture for this pixel
// or fixed4(r,g,b,1) from the texture if it is the right pixel
fixed4 grass = grassCond * tex2D(_GrassTexture, IN.uv_MainTex);
fixed4 water = waterCond * tex2D(_WaterTexture, IN.uv_MainTex);
fixed4 road = roadCond * tex2D(_RoadTexture, IN.uv_MainTex);
fixed4 def = defaultCond * ct; // just used the MainTex pixel
// then use the found pixels as the Albedo
o.Albedo = (grass + road + water + def).rgb;
o.Alpha = 1;
}
ENDCG
}
Fallback "None"
}
This is the first shader I've ever written, and it probably isn't very performant. It seems counter-intuitive to me to call tex2D on each texture for every pixel just to throw that data away, but I couldn't think of a better way to do this without if/else (which I read are bad for GPUs).
This is a Unity Surface Shader, and not a fragment/vertex shader. I know there is a step that happens behind the scenes that will generate the fragment/vertex shader for me (adding in the scene's lighting, fog, etc.). This shader is applied to 100 256x256px map tiles (2560x2560 pixels in total). The grass/road/water textures are all 256x256 pixels as well.
My question is: is there a better, more performant way of accomplishing what I'm doing here? The game runs on Android and iOS.
I'm not a specialist in Shader performance, but assuming you have a relatively small number of source tiles that you wish to render in the same frame it might make more sense to store the result of the pixel replacement and reuse it.
As you state that the resulting image is going to be the same size as your source tile, just render the source tile using your surface shader (without any lighting though; you may want to consider using a simple, flat pixel shader!) into a RenderTexture once, and then use that RenderTexture as the source for your world rendering. That way you do the expensive work only once per source tile, so it no longer even matters whether your shader is well optimized.
If all textures are static, you might even consider not doing this at runtime, but just translate them once in the Editor.
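A minimal sketch of what such a flat bake pass could look like, reusing the classification from the question; the shader name here and the use of the vert_img/v2f_img image-effect helpers from UnityCG.cginc are just one way to set this up, with each source tile blitted through it into a RenderTexture once:

// Sketch only: an unlit pass that does the same channel-range classification
// as the surface shader above, intended for a one-time bake into a
// RenderTexture (e.g. via Graphics.Blit) rather than per-frame rendering.
Shader "Hidden/MapTileBake"
{
    Properties
    {
        _MainTex("Base (RGB)", 2D) = "white" {}
        _GrassTexture("Grass Texture", 2D) = "white" {}
        _RoadTexture("Road Texture", 2D) = "white" {}
        _WaterTexture("Water Texture", 2D) = "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex, _GrassTexture, _RoadTexture, _WaterTexture;

            fixed4 frag(v2f_img i) : SV_Target
            {
                fixed4 ct = tex2D(_MainTex, i.uv);
                int grassCond = int(ct.r >= 0.45) * int(0.46 >= ct.r);
                int waterCond = int(ct.r >= 0.14) * int(0.15 >= ct.r);
                int roadCond = int(ct.b >= 0.23) * int(0.24 >= ct.b);
                half defaultCond = 1 - grassCond - waterCond - roadCond;
                fixed4 col = grassCond * tex2D(_GrassTexture, i.uv)
                           + waterCond * tex2D(_WaterTexture, i.uv)
                           + roadCond * tex2D(_RoadTexture, i.uv)
                           + defaultCond * ct;
                return fixed4(col.rgb, 1);
            }
            ENDCG
        }
    }
}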

Android OpenGL ES 2.0: time-dependent operation in shader becomes slower over time

I've written some shader code for my Android application. It has a time-dependent animation which works totally fine in the WebGL version; the shader code is below, and the full version can be found here.
vec3 bip(vec2 uv, vec2 center)
{
vec2 diff = center-uv; //difference between center and start coordinate
float r = length(diff); //vector length
float scale = mod(u_ElapsedTime,2.); // wraps back to 0 every 2 seconds, driving the animation
float circle = smoothstep(scale, scale+cirleWidth, r)
* smoothstep(scale+cirleWidth,scale, r)*4.;
return vec3(circle);
}
The return value of the function is used as the base for the fragment color.
u_ElapsedTime is sent to the shader via a uniform:
glUniform1f(uElapsedTime,elapsedTime);
The time value is sent to the shader from onDrawFrame:
public void onDrawFrame(GL10 gl) {
    glClear(GL_COLOR_BUFFER_BIT);
    elapsedTime = (SystemClock.currentThreadTimeMillis()-startTime)/100f;
    //Log.d("KOS","time " + elapsedTime);
    scannerProgram.useProgram(); //initialize shader
    scannerProgram.setUniforms(resolution,elapsedTime,rotate); //send uniforms to shader
    scannerSurface.bindData(scannerProgram); //get attribute location
    scannerSurface.draw(); //draw vertices with given attributes
}
So everything looks totally fine. Nevertheless, after some amount of time there is noticeable lag and the frame rate is lower than at the beginning. In the end it can be only one or two frames per cycle of that function. At the same time it doesn't seem like OpenGL itself is lagging, because I can, for example, rotate the picture without any lag.
What could be the reason for these lags?
Update:
Code of bindData:
public void bindData(ScannerShaderProgram scannerProgram) {
    //getting location of each attribute for shader program
    vertexArray.setVertexAttribPointer(
            0,
            scannerProgram.getPositionAttributeLocation(),
            POSITION_COMPONENT_COUNT,
            0
    );
Sounds to me like precision issues. Try taking this line from your shader:
float scale = mod(u_ElapsedTime,2.);
And perform it on the CPU instead. e.g.
elapsedTime = ((SystemClock.currentThreadTimeMillis()-startTime)%200)/100f;
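Under that change the shader side would simply use the uniform directly; a minimal sketch, reusing the names from the question and assuming u_ElapsedTime now arrives already wrapped to the 0..2 range:

vec3 bip(vec2 uv, vec2 center)
{
    vec2 diff = center-uv;
    float r = length(diff);
    // u_ElapsedTime is assumed to be pre-wrapped on the CPU, so it stays
    // small and never loses precision in a mediump/lowp float
    float scale = u_ElapsedTime;
    float circle = smoothstep(scale, scale+cirleWidth, r)
        * smoothstep(scale+cirleWidth,scale, r)*4.;
    return vec3(circle);
}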

How to use experimental meshing API with Project Tango

I apologize in advance for my long post.
My purpose is to create a meshing app for a Project Tango Yellowstone device to create a 3D map of building interiors. I intend to make use of the experimental meshing API added in recent versions of the tango-examples-c code.
I'm using point-cloud-jni-example (turing) as a starting point and so far have done the following:
Set the config_experimental_enable_scene_reconstruction Tango config parameter in point_cloud_app.cc (see docs):
// Enable scene reconstruction
ret = TangoConfig_setBool(tango_config_,
    "config_experimental_enable_scene_reconstruction", true);
if (ret != TANGO_SUCCESS) {
    LOGE("PointCloudApp: config_experimental_enable_scene_reconstruction() failed "
         "with error code: %d", ret);
    return ret;
}
Added extractMesh native method in TangoJNINative.java
// Extracts the full mesh from the scene reconstruction.
public static native float extractMesh();
Added matching extractMesh function to the jni_interface.cc
JNIEXPORT void JNICALL
Java_com_projecttango_experiments_nativepointcloud_TangoJNINative_extractMesh(
JNIEnv*, jobject) {
app.ExtractMesh();
}
Added ExtractMesh method in point_cloud_app.cc
void PointCloudApp::ExtractMesh() {
    // see line 1245 of tango_client_api.h
    mesh_ptr = new TangoMesh_Experimental();
    TangoService_Experimental_extractMesh(mesh_ptr);
    mesh = *mesh_ptr;
    LOGE("PointCloudApp: num_vertices: %d", mesh.num_vertices);
    float float1, float2, float3;
    float1 = mesh.vertices[1][0];
    float2 = mesh.vertices[1][1];
    float3 = float1 + float2; // these lines show I can use the vertex data
    LOGE("PointCloudApp: First vertex, x: %f", mesh.vertices[1][0]); // this line causes the app to crash; printing the vertex data seems to be the problem
}
Added TangoMesh_Experimental declaration to point_cloud_app.h
// see line 1131 of tango_client_api.h
TangoMesh_Experimental* mesh_ptr;
TangoMesh_Experimental mesh;
Added an additional button to call the extractMesh native method. (not showing this one as it is pretty straightforward)
For reference, here is the TangoMesh_Experimental Struct from the API:
// A mesh, described by vertices and face indices, with optional per-vertex
// normals and colors.
typedef struct TangoMesh_Experimental {
// Index into a three-dimensional fixed grid.
int32_t index[3];
// Array of vertices. Each vertex is an {x, y, z} coordinate triplet, in
// meters.
float (*vertices)[3];
// Array of faces. Each face is an index triplet into the vertices array.
uint32_t (*faces)[3];
// Array of per-vertex normals. Each normal is a normalized {x, y, z} vector.
float (*normals)[3];
// Array of per-vertex colors. Each color is a 4-tuple of 8-bit {R, G, B, A}
// values.
uint8_t (*colors)[4];
// Number of vertices, describing the size of the vertices array.
uint32_t num_vertices;
// Number of faces, describing the size of the faces array.
uint32_t num_faces;
// If true, each vertex will have an associated normal. In that case, the
// size of the normals array will be equal to num_vertices. Otherwise, the
// size of the normals array will be 0.
bool has_normals;
// If true, each vertex will have an associated color. In that case, the size
// of the colors array will be equal to num_vertices. Otherwise, the size of
// the colors array will be 0.
bool has_colors;
} TangoMesh_Experimental;
My current understanding of this struct is:
The three pointers in float (*vertices)[3]; point to the addresses at the start of three chunks of memory for the x, y, and z coordinates of the vertices of the mesh (the same is true for normals and colors). A specific vertex is composed of an x, y, and z component found at a specific index in the three arrays.
Similarly, the uint32_t (*faces)[3] array has three pointers to the beginning of three chunks of memory, but a specific set of three elements here instead contains index numbers that indicate which three vertices (from the vertices array (each with three coordinates)) make up that face.
The current status is that I am able to extract the mesh and print some of it to the console, but then it crashes without errors:
PointCloudApp: PointCloudApp: num_vertices: 8044
If I omit the last line I added in point_cloud_app.cc (#4, above), the app doesn't crash. I am able to access the vertex data and do something with it, but printing it using LOGE causes a crash 9 times out of 10. Occasionally, it does print the value correctly without crashing. Could the vertex data have holes or invalid values?
I have tried returning test_float from JNI back to Java, but it crashes again when I try to do so.
Suggestions?
vertices is a dynamic array of points, where each point is a float[3]. Try this example:
for (int i = 0; i < mesh.num_vertices; ++i) {
printf("%d: x=%f y=%f z=%f\n", i, mesh.vertices[i][0],
mesh.vertices[i][1], mesh.vertices[i][2]);
}
If you look at the memory layout, it would be x0 y0 z0 x1 y1 z1 etc, each of those a float.
