Swap out MainTex pixels with other textures' pixels via surface shader (Unity) - Android

The main texture of my surface shader is a Google Maps image tile (image omitted).
I want to replace pixels that are close to a specified color with pixels from a separate texture. What is working now is the following:
Shader "MyShader"
{
Properties
{
_MainTex("Base (RGB) Trans (A)", 2D) = "white" {}
_GrassTexture("Grass Texture", 2D) = "white" {}
_RoadTexture("Road Texture", 2D) = "white" {}
_WaterTexture("Water Texture", 2D) = "white" {}
}
SubShader
{
Tags{ "Queue" = "Transparent-1" "IgnoreProjector" = "True" "ForceNoShadowCasting" = "True" "RenderType" = "Opaque" }
LOD 200
CGPROGRAM
#pragma surface surf Lambert alpha approxview halfasview noforwardadd nometa
uniform sampler2D _MainTex;
uniform sampler2D _GrassTexture;
uniform sampler2D _RoadTexture;
uniform sampler2D _WaterTexture;
struct Input
{
float2 uv_MainTex;
};
void surf(Input IN, inout SurfaceOutput o)
{
fixed4 ct = tex2D(_MainTex, IN.uv_MainTex);
// if the red (or blue) channel of the pixel is within a
// specific range, get either a 1 or a 0 (true/false).
int grassCond = int(ct.r >= 0.45) * int(0.46 >= ct.r);
int waterCond = int(ct.r >= 0.14) * int(0.15 >= ct.r);
int roadCond = int(ct.b >= 0.23) * int(0.24 >= ct.b);
// if none of the above conditions is a 1, then we want to keep our
// current pixel's color:
half defaultCond = 1 - grassCond - waterCond - roadCond;
// get the pixel from each texture, multiple by their check condition
// to get:
// fixed4(0,0,0,0) if this isn't the right texture for this pixel
// or fixed4(r,g,b,1) from the texture if it is the right pixel
fixed4 grass = grassCond * tex2D(_GrassTexture, IN.uv_MainTex);
fixed4 water = waterCond * tex2D(_WaterTexture, IN.uv_MainTex);
fixed4 road = roadCond * tex2D(_RoadTexture, IN.uv_MainTex);
fixed4 def = defaultCond * ct; // just used the MainTex pixel
// then use the found pixels as the Albedo
o.Albedo = (grass + road + water + def).rgb;
o.Alpha = 1;
}
ENDCG
}
Fallback "None"
}
This is the first shader I've ever written, and it probably isn't very performant. It seems counterintuitive to call tex2D on every texture for every pixel only to throw most of that data away, but I couldn't think of a better way to do this without if/else (which I read is bad for GPUs).
This is a Unity surface shader, not a fragment/vertex shader; I know there is a step behind the scenes that generates the fragment/vertex shader for me (adding the scene's lighting, fog, etc.). The shader is applied to 100 map tiles of 256x256 px each (2560x2560 pixels in total), and the grass/road/water textures are all 256x256 px as well.
My question is: is there a better, more performant way of accomplishing what I'm doing here? The game runs on Android and iOS.

I'm not a specialist in shader performance, but assuming you have a relatively small number of source tiles to render in the same frame, it might make more sense to store the result of the pixel replacement and reuse it.
Since the resulting image is the same size as your source tile, just render the source tile with your replacement shader (without any lighting; consider a simple, flat pixel shader) into a RenderTexture once, and then use that RenderTexture as the source for your world rendering. That way the expensive work is done only once per source tile, so it hardly matters any more how well the shader is optimized.
If all the textures are static, you might even consider not doing this at runtime at all, but translating them once in the Editor.
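As a rough illustration of that bake pass (not code from the answer): a flat, unlit variant of the question's shader could look something like the sketch below, reusing the question's property names; "Hidden/TileBake" is just an assumed name. On the C# side you would Graphics.Blit each tile texture through a material that uses this shader into a RenderTexture once, and render the world with the baked result from then on.

Shader "Hidden/TileBake"
{
    Properties
    {
        _MainTex("Base (RGB)", 2D) = "white" {}
        _GrassTexture("Grass Texture", 2D) = "white" {}
        _RoadTexture("Road Texture", 2D) = "white" {}
        _WaterTexture("Water Texture", 2D) = "white" {}
    }
    SubShader
    {
        Cull Off ZWrite Off ZTest Always
        Pass
        {
            CGPROGRAM
            #pragma vertex vert_img
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _MainTex;
            sampler2D _GrassTexture;
            sampler2D _RoadTexture;
            sampler2D _WaterTexture;

            fixed4 frag(v2f_img i) : SV_Target
            {
                fixed4 ct = tex2D(_MainTex, i.uv);

                // same branchless range checks as in the surface shader
                fixed grassCond = step(0.45, ct.r) * step(ct.r, 0.46);
                fixed waterCond = step(0.14, ct.r) * step(ct.r, 0.15);
                fixed roadCond = step(0.23, ct.b) * step(ct.b, 0.24);
                fixed defaultCond = saturate(1 - grassCond - waterCond - roadCond);

                fixed4 col = grassCond * tex2D(_GrassTexture, i.uv)
                           + waterCond * tex2D(_WaterTexture, i.uv)
                           + roadCond * tex2D(_RoadTexture, i.uv)
                           + defaultCond * ct;
                col.a = 1;
                return col;
            }
            ENDCG
        }
    }
}

(step() reproduces the original range checks without branches; saturate() just guards against a pixel matching more than one range.)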

Related

Cell shading effect in OpenGL ES 2.0/3.0

I applied a cell shading effect to the object (screenshot omitted).
This works well, but there are many conditional checks ("if" statements) in the fragment shader:
#version 300 es
precision lowp float;

in float v_CosViewAngle;
in float v_LightIntensity;

out vec4 outColor;

const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);

void main() {
    lowp float intensity = 0.0;
    if (v_CosViewAngle > 0.33) {
        intensity = 0.33;
        if (v_LightIntensity > 0.76) {
            intensity = 1.0;
        } else if (v_LightIntensity > 0.51) {
            intensity = 0.84;
        } else if (v_LightIntensity > 0.26) {
            intensity = 0.67;
        } else if (v_LightIntensity > 0.1) {
            intensity = 0.50;
        }
    }
    outColor = vec4(defaultColor * intensity, 1.0);
}
I guess so many checks in the fragment shader can ultimately hurt performance. In addition, the shader size keeps increasing, especially if there will be even more cell shading levels.
Is there any other way to get this effect? Maybe some glsl-function can be used here?
Thanks in advance!
Store your color bands in an Nx1 texture and do a texture lookup using v_LightIntensity as your texture coordinate. If you want a different shading level count, just change the texture.
EDIT: Store an NxM texture and do the lookup using v_LightIntensity and v_CosViewAngle as a 2D coordinate, and you can kill the branches completely.
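Applied to the shader above, the whole if/else ladder collapses into one texture fetch. A minimal sketch, assuming a hypothetical u_BandTexture uniform that has been filled with the precomputed intensity bands (use NEAREST filtering for hard band edges, LINEAR for smooth ones):

#version 300 es
precision lowp float;

in float v_CosViewAngle;
in float v_LightIntensity;

// NxM lookup texture holding the precomputed intensity bands
uniform sampler2D u_BandTexture;

out vec4 outColor;

const lowp vec3 defaultColor = vec3(0.1, 0.7, 0.9);

void main() {
    // one texture fetch replaces the whole if/else ladder
    vec2 bandCoord = clamp(vec2(v_LightIntensity, v_CosViewAngle), 0.0, 1.0);
    float intensity = texture(u_BandTexture, bandCoord).r;
    outColor = vec4(defaultColor * intensity, 1.0);
}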

Count pixels on the render target

Using OpenGL ES 2.0 on a Galaxy S4 phone, I have a 1024x1024 RGBA8888 render target into which some textures are rendered each frame. I need to count how many red RGBA(1, 0, 0, 1) pixels were rendered to the render target (twice a second).
The main problem is that getting the texture back from the GPU is very expensive (~300-400 ms), and freezes are not acceptable for my application.
I know about the OES_shader_image_atomic extension for atomic counters (simply incrementing a value whenever the fragment shader runs), but it's only available in OpenGL ES 3.1 (and later), and I have to stick to ES 2.0.
Is there any common solution I missed?
What you can try is to "reduce" the texture in question to a significantly smaller one and read that one back to the CPU, which should be much cheaper. For example, split your texture into squares of N by N pixels (where N is preferably a power of two), then render a "whole screen" quad into a 1024/N by 1024/N texture with a fragment shader that counts the red pixels in the corresponding square:
precision mediump float;

uniform sampler2D texture;  // the 1024x1024 render target

void main(void) {
    vec2 offset = float(N) * gl_FragCoord.xy;
    float cnt = 0.;
    for (float x = 0.; x < float(N); x += 1.) {
        for (float y = 0.; y < float(N); y += 1.) {
            if (texture2D(texture, (offset + vec2(x, y)) / 1024.) == vec4(1., 0., 0., 1.)) {
                cnt += 1.;
            }
        }
    }
    // pack the count into the output channels (GLSL ES 2.0 has no integer %)
    gl_FragColor = vec4(mod(cnt, 256.) / 255., mod(floor(cnt / 256.), 256.) / 255., /* ... */);
}
Also remember that readPixels synchronously waits until the GPU is done with all previously issued draws to that texture. So it may be beneficial to have two textures: each frame one is rendered to while the other is read from, and the next frame you swap them. That delays the data you obtain by a frame, but should eliminate some of the freezes.
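A rough sketch of that ping-pong scheme on the Android side; fboIds, reducedSize, pixelBuffer and drawReductionQuad() are hypothetical names for the two FBO-attached textures, the reduced target size, the readback buffer and the reduction draw call, none of which come from the answer:

// inside a GLSurfaceView.Renderer; GLES20 is android.opengl.GLES20
private int readIndex = 0;

@Override
public void onDrawFrame(GL10 gl) {
    int writeIndex = 1 - readIndex;

    // render this frame's reduction pass into one target...
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboIds[writeIndex]);
    drawReductionQuad();

    // ...and read back the target filled on the previous frame,
    // which the GPU has most likely already finished
    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, fboIds[readIndex]);
    GLES20.glReadPixels(0, 0, reducedSize, reducedSize,
            GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixelBuffer);

    GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    readIndex = writeIndex;
}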

Texture coordinates inconsistent

I tried tiling texture coordinates on a quad in Unity for some debugging.
On desktop it had no problems, but on Android (an old tablet) I see inconsistencies in the texture coordinates (the images below show the results).
As we go from (0.0, 0.0) to (1.0, 1.0) the texture coordinates become increasingly inconsistent, and the inconsistency grows as the tiling values go up; the results in the images are for a tiling of 60, 160.
My device runs Android 5.0 and its GPU supports OpenGL ES 2.0, not 3.0.
So I would like to know why this happens on my tablet but not on my desktop, and I would appreciate an explanation of what is happening in the GPU that causes it.
The shader I used:
Shader "Unlit/uvdisplay"
{
Properties
{
}
SubShader
{
Tags { "RenderType"="Opaque" }
Pass
{
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata
{
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float2 uv : TEXCOORD0;
float4 vertex : SV_POSITION;
};
v2f vert (appdata v)
{
v2f o;
o.vertex = UnityObjectToClipPos(v.vertex);
o.uv = v.uv;
return o;
}
fixed4 frag (v2f i) : SV_Target
{
// sample the texture
fixed4 col = fixed4(frac(i.uv*fixed2(60.0,160.0)),0.0,1.0);
return col;
}
ENDCG
}
}
}
(Images omitted: texcoord at the origin; texcoord at the opposite end.)
Not all Android devices support highp in the fragment shader; some older devices are limited to mediump.
In Unity's terms, it's as if you ask for float precision in your fragment shader but actually get half precision. A half float has only a 10-bit mantissa, so by the time the scaled coordinate reaches ~160 the smallest representable step is 0.125, which is what shows up as the banding toward (1.0, 1.0).
There is not much you can do about it except keep values near the origin to minimize the error.
Also, on some of the devices that only support half precision, as long as you do nothing except sample the texture (i.e. you don't modify or swizzle the texture coordinates before using them), the driver/GPU is able to use full precision for the texture sampling.

Android OpenGL ES 2.0: time-dependent operation in shader becomes slower over time

I've written some shader code for my Android application. It has some time-dependent animation which works totally fine in the WebGL version; the relevant shader code is below, but the full version can be found here:
vec3 bip(vec2 uv, vec2 center)
{
    vec2 diff = center - uv;               // difference between center and start coordinate
    float r = length(diff);                // vector length
    float scale = mod(u_ElapsedTime, 2.);  // wraps from 0 to 2 as u_ElapsedTime advances and drives the expanding circle
    float circle = smoothstep(scale, scale + cirleWidth, r)
                 * smoothstep(scale + cirleWidth, scale, r) * 4.;
    return vec3(circle);
}
The return value of the function is used in gl_FragColor as the base for the color.
u_ElapsedTime is sent to the shader via a uniform:
glUniform1f(uElapsedTime, elapsedTime);
The time data is sent to the shader from onDrawFrame:
public void onDrawFrame(GL10 gl) {
    glClear(GL_COLOR_BUFFER_BIT);
    elapsedTime = (SystemClock.currentThreadTimeMillis() - startTime) / 100f;
    //Log.d("KOS","time " + elapsedTime);
    scannerProgram.useProgram();                                  //initialize shader
    scannerProgram.setUniforms(resolution, elapsedTime, rotate);  //send uniforms to shader
    scannerSurface.bindData(scannerProgram);                      //get attribute location
    scannerSurface.draw();                                        //draw vertices with given attributes
}
So everything looks totally fine. Nevertheless, after some amount of time there are lags and the frame rate is lower than at the beginning; in the end it can be as low as one or two frames per cycle of that function. At the same time it doesn't seem like OpenGL itself is lagging, because I can, for example, rotate the picture without any lag.
What could be the reason for those lags?
Update:
Code of bindData:
public void bindData(ScannerShaderProgram scannerProgram) {
    //getting location of each attribute for shader program
    vertexArray.setVertexAttribPointer(
            0,
            scannerProgram.getPositionAttributeLocation(),
            POSITION_COMPONENT_COUNT,
            0
    );
Sounds to me like a precision issue: u_ElapsedTime grows without bound, and the larger a float gets the less fractional precision it has, so mod(u_ElapsedTime, 2.) becomes more and more coarsely quantized over time. Try taking this line out of your shader:
float scale = mod(u_ElapsedTime, 2.);
and performing it on the CPU instead, e.g.:
elapsedTime = ((SystemClock.currentThreadTimeMillis()-startTime)%200)/100f;

Android fragment shader having problems

Sorry for the broad question, but I didn't know quite how to word it. I am creating an app that manipulates the camera's pixels. I am new to OpenGL, and my problem could be in how I link textures to the shaders, or somewhere in my actual shader code.
I have an RGB look-up table that I turn into a texture and pass into the shader as the manipulation table. I believe my texture has the proper size and settings, but I am not 100% sure. In my shader I have this:
uniform sampler2D data_Texture;         // The RGB look-up table texture
uniform samplerExternalOES u_Texture;   // The camera's texture
And this is in my main loop in the shader:
// Color changing Algorithm
vec3 texel = texture2D(u_Texture, v_TexCoordinate).rgb;
gl_FragColor = vec4(texel.x, texel.y, texel.z, 1.0);
float rr = gl_FragColor.r * 255.0;
float gg = gl_FragColor.g * 255.0;
float bb = gl_FragColor.b * 255.0;
int r = int(rr);
int g = int(gg);
int b = int(bb);
int index = ((r/4) * 4096) + ((g/4) * 64) + (b/4);
int x = int(mod(float(index), 512.0));
int y = index / 512;
vec4 data = texture2D(data_Texture, vec2(float(x)/512.0, float(y)/512.0));
We take the camera's RGB pixel to compute an index into the look-up table, and then try to read the RGB data out of the look-up table to replace the camera's pixel with. This is where the problem occurs. As you can probably tell from the code above, we don't actually change the FragColor with our data. That is because, while testing, we found an interesting occurrence: when we comment out the last line of the main loop,
//vec4 data = texture2D(data_Texture, vec2(float(x)/512.0, float(y)/512.0));
the camera displays as normal, because we don't do any manipulation of the actual FragColor. But when we leave the last line in, the pixels turn green for dark colors and pink/orange for light colors.
Why does filling this data variable, without explicitly changing the FragColor, change the camera's pixels?
