OpenGLES: How to get the lower 8 bits of an int - android

I'm trying to split a 16 bit integer into two unsigned bytes and then recombine them inside a shader to use as a position. However, OpenGLES reserves & and <<, so I've had to come up with a couple of tricks in order to retrieve each section.
precision highp float;
uniform sampler2D u_Positions;
uniform sampler2D u_Velocities;
// uniform vec2 vpSize;
void main() {
vec2 coord = gl_FragCoord.xy / vec2(256.0, 256.0);
vec4 position = texture2D(u_Positions, coord);
vec4 velocity = texture2D(u_Velocities, coord);
vec2 real_Position = vec2((position.r * 256.0 * 256.0) + (position.g * 256.0),(position.b * 256.0 * 256.0) + (position.a * 256.0));
real_Position.x += ((velocity.r * 256.0) + velocity.g) * 256.0;
real_Position.y += ((velocity.b * 256.0) + velocity.a) * 256.0;
vec4 color = vec4(1.0);
color.r = float(int(real_Position.x) / 256) / 256.0;
color.g = mod(real_Position.x, 256.0) / 256.0;
color.b = float(int(real_Position.y) / 256) / 256.0;
color.a = 1.0;
gl_FragColor = color;
}
So I take the red component, multiply by 256 to put it in the 0-256 range, and then shift it left 8 bits by multiplying by 256 again. Then I do the same for the lower half without the shift and add the two together.
The problem I have, however, is converting back into the [0,1] range. For the top half I just shift right 8 bits and then divide by 256. For the bottom half, since there is no &, I use modulo 256 to get the lower 8 bits. But for some reason I end up with random 1's attached to it.
So if I want to use 960 for my x component, I end up with 3 in red and 195 in green instead of 192 in green. If I want to use 0, I end up with my position being 1. The lower 8 bits are off. What am I doing wrong?

I think you'd have to multiply by 255 instead of 256 (in some cases). Why? Because if you multiply a [0,1] value range by 256 you get a [0,256] value range -> 257 individual values -> overflow (the largest 0-based 8-bit number is 255).
EDIT: it looks like a classic off-by-one error. If you divide the [0,255] byte range by 256, you get [0, 255/256] - it never quite reaches 1. E.g.
color.r = float(int(real_Position.x) / 256) / 255.0;
should be the correct calculation in this case. Also, if you multiply a [0,1] value by 256 and assign it to a byte, the result is fine until you actually hit 1, where the byte wraps to 0 again. Why? Because (254/255) * 256 = 254.99, which truncates to 254, but 1 * 256 = 256 (and the byte range is [0,255]). So everywhere you convert between [0,1] color floats and bytes you have to multiply/divide by 255, while everywhere you convert between 8-bit and 16-bit values you have to multiply/divide by 256.
In general:
lb=w%256 //lower byte from 16bit
hb=w/256 //higher byte from 16bit
w=lb+hb*256 //16bit from bytes
b=f*255 //byte from [0,1] float (0->0, 1->255)
f=b/255.0 //[0,1] float from byte (0->0, 255->1)
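As a CPU-side sketch of the same conversions (Java; the class and method names are illustrative and assume the positions end up as bytes of an RGBA texture), not the asker's actual code:
// Packs a 16-bit coordinate into two bytes for an RGBA texture and shows
// the byte <-> [0,1] float conversions that use 255 rather than 256.
final class BytePacking {
    // 16-bit -> two bytes: hb = w / 256, lb = w % 256
    static byte[] pack16(int w) {
        return new byte[] { (byte) (w / 256), (byte) (w % 256) };
    }

    // Two bytes -> 16-bit: w = hb * 256 + lb
    static int unpack16(byte hb, byte lb) {
        return (hb & 0xFF) * 256 + (lb & 0xFF);
    }

    // Byte <-> [0,1] float conversions use 255 (0 -> 0.0, 255 -> 1.0)
    static float byteToFloat(byte b) {
        return (b & 0xFF) / 255.0f;
    }

    static byte floatToByte(float f) {
        return (byte) Math.round(f * 255.0f);
    }
}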

Related

How to make smoother borders using Fragment shader in OpenGL?

I have been trying to draw the border of an image with a transparent background using OpenGL in Android. I am using a fragment shader and a vertex shader (from the GPUImage library).
Below I have added Fig. A & Fig B.
Fig A.
Fig B.
I have achieved Fig. A with the customised fragment shader, but I am unable to make the border smoother as in Fig. B. I am attaching the shader code that I have used (to achieve the rough border). Can someone here help me with how to make the border smoother?
Here is my Vertex Shader :
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
Here is my Fragment Shader :
I sample the 8 pixels around the current pixel. If any one of those 8 is opaque (alpha greater than 0.4), the current pixel is drawn in the border color.
precision mediump float;
uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;
uniform lowp float thickness;
uniform lowp vec4 color;
void main() {
float x = textureCoordinate.x;
float y = textureCoordinate.y;
vec4 current = texture2D(inputImageTexture, vec2(x,y));
if ( current.a != 1.0 ) {
float offset = thickness * 0.5;
vec4 top = texture2D(inputImageTexture, vec2(x, y - offset));
vec4 topRight = texture2D(inputImageTexture, vec2(x + offset,y - offset));
vec4 topLeft = texture2D(inputImageTexture, vec2(x - offset, y - offset));
vec4 right = texture2D(inputImageTexture, vec2(x + offset, y ));
vec4 bottom = texture2D(inputImageTexture, vec2(x , y + offset));
vec4 bottomLeft = texture2D(inputImageTexture, vec2(x - offset, y + offset));
vec4 bottomRight = texture2D(inputImageTexture, vec2(x + offset, y + offset));
vec4 left = texture2D(inputImageTexture, vec2(x - offset, y ));
if ( top.a > 0.4 || bottom.a > 0.4 || left.a > 0.4 || right.a > 0.4 || topLeft.a > 0.4 || topRight.a > 0.4 || bottomLeft.a > 0.4 || bottomRight.a > 0.4 ) {
if (current.a != 0.0) {
current = mix(color , current , current.a);
} else {
current = color;
}
}
}
gl_FragColor = current;
}
You were almost on the right track.
The main algorithm is:
Blur the image.
Use pixels with opacity above a certain threshold as outline.
The main problem is the blur step. It needs to be a large and smooth blur to get the smooth outline you want. For blurring we can use a convolution kernel, and to achieve a large blur we should use a large kernel. I suggest a Gaussian distribution, as it is very well known and widely used.
The overview of the algorithm is:
For each fragment, we sample many locations around it. The samples are made in an N by N grid. We average them together using weights that follow a 2D Gaussian Distribution.
This results in a blurred image.
With the blurred image, we paint the fragments that have alpha greater than a threshold with our outline color. And, of course, any opaque pixels in the original image should also appear in the result.
On a sidenote, your solution is almost a blur with a 3 x 3 kernel (you sample locations around the fragment in a 3 by 3 grid). However, a 3 x 3 kernel won't give you the amount of blur you need. You need more samples (e.g. 11 x 11). Also, the weights closer to the center should have a greater impact on the result. Thus, uniform weights won't work very well.
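To illustrate that last point, here is a small Java sketch (the sigma and kernel size mirror the shader further down; the class name is just illustrative) that prints the normalized 1-D Gaussian weights, showing how much more the centre samples contribute than the outer ones:
// Prints normalized 1-D Gaussian weights for an 11-tap kernel with sigma = 7,
// mirroring the kernel the fragment shader below builds per fragment.
public class GaussianWeights {
    static double pdf(double x, double sigma) {
        return 0.39894 * Math.exp(-0.5 * x * x / (sigma * sigma)) / sigma;
    }

    public static void main(String[] args) {
        int matrixSize = 11;          // kernel taps per dimension
        double sigma = 7.0;           // blur spread
        int half = (matrixSize - 1) / 2;
        double[] kernel = new double[matrixSize];
        double sum = 0.0;
        for (int j = -half; j <= half; j++) {
            kernel[half + j] = pdf(j, sigma);
            sum += kernel[half + j];
        }
        for (int j = 0; j < matrixSize; j++) {
            System.out.printf("w[%d] = %.4f%n", j, kernel[j] / sum);
        }
    }
}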
Oh, and one more important thing:
A single shader is NOT the fastest way to accomplish this. Usually this would be done with two separate render passes: the first renders the image as usual, and the second blurs it and adds the outline. I assumed that you want to do this with one single render pass.
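For reference, the two-pass variant would render the source into a texture-backed offscreen framebuffer first and then blur/outline that texture in a second pass. A minimal GLES20 sketch of the offscreen target (the size, format and filtering choices here are assumptions, not part of this answer):
import android.opengl.GLES20;

// Creates a texture-backed FBO a first pass can render into; a second pass
// would then sample this texture to blur it and draw the outline.
public final class OffscreenTarget {
    public final int framebuffer;
    public final int texture;

    public OffscreenTarget(int width, int height) {
        int[] ids = new int[1];

        // Colour texture the first pass renders into
        GLES20.glGenTextures(1, ids, 0);
        texture = ids[0];
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, texture);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

        // Framebuffer with that texture as its colour attachment
        GLES20.glGenFramebuffers(1, ids, 0);
        framebuffer = ids[0];
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, framebuffer);
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, texture, 0);
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    }
}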
The following vertex and fragment shaders accomplish the single-pass version:
Vertex Shader
varying vec2 vecUV;
varying vec3 vecPos;
varying vec3 vecNormal;
void main() {
vecUV = uv * 3.0 - 1.0;
vecPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
vecNormal = (modelViewMatrix * vec4(normal, 0.0)).xyz;
gl_Position = projectionMatrix * vec4(vecPos, 1.0);
}
Fragment Shader
precision highp float;
varying vec2 vecUV;
varying vec3 vecPos;
varying vec3 vecNormal;
uniform sampler2D inputImageTexture;
float normalProbabilityDensityFunction(in float x, in float sigma)
{
return 0.39894*exp(-0.5*x*x/(sigma*sigma))/sigma;
}
vec4 gaussianBlur()
{
// The gaussian operator size
// The higher this number, the better quality the outline will be
// But this number is expensive: the cost grows as O(n²)
const int matrixSize = 11;
// How far apart (in UV coordinates) are each cell in the Gaussian Blur
// Increase this for larger outlines!
vec2 offset = vec2(0.005, 0.005);
const int kernelSize = (matrixSize-1)/2;
float kernel[matrixSize];
// Create the 1-D kernel using a sigma
float sigma = 7.0;
for (int j = 0; j <= kernelSize; ++j)
{
kernel[kernelSize+j] = kernel[kernelSize-j] = normalProbabilityDensityFunction(float(j), sigma);
}
// Generate the normalization factor
float normalizationFactor = 0.0;
for (int j = 0; j < matrixSize; ++j)
{
normalizationFactor += kernel[j];
}
// Apply the kernel to the fragment
vec4 outputColor = vec4(0.0);
for (int i=-kernelSize; i <= kernelSize; ++i)
{
for (int j=-kernelSize; j <= kernelSize; ++j)
{
float kernelValue = kernel[kernelSize+j]*kernel[kernelSize+i];
vec2 sampleLocation = vecUV.xy + vec2(float(i)*offset.x,float(j)*offset.y);
vec4 sampleColor = texture2D(inputImageTexture, sampleLocation);
outputColor += kernelValue * sampleColor;
}
}
// Divide by the squared 1-D kernel sum so the 2-D weights sum to 1
outputColor = outputColor/(normalizationFactor*normalizationFactor);
return outputColor;
}
void main()
{
// After blurring, what alpha threshold should we define as outline?
float alphaTreshold = 0.3;
// How smooth should the edges of the outline be?
float outlineSmoothness = 0.1;
// The outline color
vec4 outlineColor = vec4(1.0, 1.0, 1.0, 1.0);
// Sample the original image and generate a blurred version using a gaussian blur
vec4 originalImage = texture2D(inputImageTexture, vecUV);
vec4 blurredImage = gaussianBlur();
float alpha = smoothstep(alphaTreshold - outlineSmoothness, alphaTreshold + outlineSmoothness, blurredImage.a);
vec4 outlineFragmentColor = mix(vec4(0.0), outlineColor, alpha);
gl_FragColor = mix(outlineFragmentColor, originalImage, originalImage.a);
}
This is the result I got:
And for the same image as yours, with matrixSize = 33, alphaTreshold = 0.05
And to try to get crisper results we can tweak the parameters. Here is an example with matrixSize = 111, offset = vec2(0.002, 0.002), alphaTreshold = 0.01, outlineSmoothness = 0.00. Note that increasing matrixSize will heavily impact performance, which is a limitation of rendering this outline with only one shader pass.
I tested the shader on this site. Hopefully you will be able to adapt it to your solution.
Regarding references, I used this shadertoy example quite heavily as a basis for the code I wrote for this answer.
In my filters, the smoothness is achieved by a simple box blur on the border. You have decided that alpha > 0.4 is a border. An alpha value between 0 and 0.4 in the surrounding pixels gives an edge. Just blur this edge with a 3x3 window to get a smooth edge.
if ( current.a != 1.0 ) {
// other processing
if (current.a > 0.4) {
if ( top.a < 0.4 || bottom.a < 0.4 || left.a < 0.4 || right.a < 0.4
|| topLeft.a < 0.4 || topRight.a < 0.4 || bottomLeft.a < 0.4 || bottomRight.a < 0.4 )
{
// Implement 3x3 box blur here
}
}
}
You need to tweak which edge pixels you blur. The basic issue is that it drops from an opaque to a transparent pixel - what you need is a gradual transition.
Another option is quick anti-aliasing. From my comment below: scale up 200% and then scale down 50% back to the original size, using nearest-neighbour scaling. This technique is sometimes used to smooth the edges of text.

OpenGL ES: Bad performance when calculating vertex position in vertex shader

I'm a beginner at OpenGL and I'm trying to animate a number of "objects" from one position to another every 5 seconds. If I calculate the position in the vertex shader, the FPS drops drastically; shouldn't these types of calculations be done on the GPU?
This is the vertex shader code:
#version 300 es
precision highp float;
precision highp int;
layout(location = 0) in vec3 vertexData;
layout(location = 1) in vec3 colourData;
layout(location = 2) in vec3 normalData;
layout(location = 3) in vec3 personPosition;
layout(location = 4) in vec3 oldPersonPosition;
layout(location = 5) in int start;
layout(location = 6) in int duration;
layout(std140, binding = 0) uniform Matrices
{ //base //offset
mat4 projection; // 64 // 0
mat4 view; // 64 // 0 + 64 = 64
int time; // 4 // 64 + 64 = 128
bool shade; // 4 // 128 + 4 = 132 two empty slots after this
vec3 midPoint; // 16 // 128 + 16 = 144
vec3 cameraPos; // 16 // 144 + 16 = 160
// size = 160+16 = 176. Aligned to 16, becomes 176.
};
out vec3 vertexColour;
out vec3 vertexNormal;
out vec3 fragPos;
void main() {
vec3 scalePos;
scalePos.x = vertexData.x * 3.0;
scalePos.y = vertexData.y * 3.0;
scalePos.z = vertexData.z * 3.0;
vertexColour = colourData;
vertexNormal = normalData;
float startFloat = float(start);
float durationFloat = float(duration);
float timeFloat = float(time);
// Wrap around catch to avoid start being close to 1M but time has wrapped around to 0
if (startFloat > timeFloat) {
startFloat = startFloat - 1000000.0;
}
vec3 movePos;
float elapsedTime = timeFloat - startFloat;
if (elapsedTime > durationFloat) {
movePos = personPosition;
} else {
vec3 moveVector = personPosition - oldPersonPosition;
float moveBy = elapsedTime / durationFloat;
movePos = oldPersonPosition + moveVector * moveBy;
}
fragPos = movePos;
gl_Position = projection * view * vec4(scalePos + movePos, 1.0);
}
Every 5 seconds the buffers are updated:
glBindBuffer(GL_ARRAY_BUFFER, this->personPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->positions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->personOldPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->oldPositions, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->timeStartVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(int) * this->persons.size(), animiationStart, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, this->timeDurationVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(int) * this->persons.size(), animiationDuration, GL_STATIC_DRAW);
I did a test calculating the positions on the CPU and updating the positions buffer every draw call; that doesn't give me a performance drop, but it feels fundamentally wrong?
void PersonView::animatePositions() {
float duration = 1500;
double currentTime = now_ms();
double elapsedTime = currentTime - animationStartTime;
if (elapsedTime > duration) {
return;
}
for (int i = 0; i < this->persons.size() * 3; i++) {
float moveDistance = this->positions[i] - this->oldPositions[i];
float moveBy = (float)(elapsedTime / duration);
this->moveByPositions[i] = this->oldPositions[i] + moveDistance * moveBy;
}
glBindBuffer(GL_ARRAY_BUFFER, this->personMoveByPositionsVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * this->persons.size() * 3, this->moveByPositions, GL_STATIC_DRAW);
}
On devices with better SoCs (Snapdragon 835, etc.) the frame drop isn't as drastic as on devices with midrange SoCs (Snapdragon 625).
Right off the bat, I can see that you're multiplying the projection and view matrices in the vertex shader, but there are no places where you rely on the view or projection matrix independently.
Multiplying two 4x4 matrices involves a large number of arithmetic operations, and they are done for every vertex you draw. In your case it seems you can avoid this altogether.
Instead of your current implementation, try multiplying the view and projection matrices outside of the shader, then bind the result as a single projectionView matrix:
Old:
gl_Position = projection * view * vec4(scalePos + movePos, 1.0);
New:
gl_Position = projectionView * vec4(scalePos + movePos, 1.0);
This way, the projection and view matrices are multiplied once per frame instead of once per vertex. This change should drastically improve performance, especially if you have a large number of vertices.
Generally speaking, the GPU is indeed a lot more efficient than the CPU at performing arithmetic calculations like this, but you should also consider the amount of calculation. The vertex shader is executed per vertex and should only calculate things that differ between vertices.
Performing a 1-time calculation on the CPU is always better than performing the same calculation on the GPU n-times (n = total vertices).
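For illustration, a minimal sketch of that change on the Android side using android.opengl.Matrix. The uniform name u_ProjectionView is an assumption; since the question uses a uniform block, the combined matrix would in practice be written into that buffer instead:
import android.opengl.GLES20;
import android.opengl.Matrix;

final class MatrixUpload {
    // Combine projection * view once per frame on the CPU and upload it,
    // so the vertex shader does a single mat4 * vec4 per vertex.
    static void uploadProjectionView(int program, float[] projection, float[] view) {
        float[] projectionView = new float[16];
        // result = projection * view (column-major, as GL expects)
        Matrix.multiplyMM(projectionView, 0, projection, 0, view, 0);
        int location = GLES20.glGetUniformLocation(program, "u_ProjectionView"); // assumed name
        GLES20.glUniformMatrix4fv(location, 1, false, projectionView, 0);
    }
}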

How to use OpenGL to emulate OpenCV's warpPerspective functionality (perspective transform)

I've done image warping using OpenCV in Python and C++, see the Coca Cola logo warped in place in the corners I had selected:
Using the following images:
and this:
Full album with transition pics and description here
I need to do exactly this, but in OpenGL. I'll have:
Corners inside which I have to map the warped image
A homography matrix that maps the transformation of the logo image into the logo image you see inside the final image (using OpenCV's warpPerspective), something like this:
[[ 2.59952324e+00, 3.33170976e-01, -2.17014066e+02],
[ 8.64133587e-01, 1.82580111e+00, -3.20053715e+02],
[ 2.78910149e-03, 4.47911310e-05, 1.00000000e+00]]
Main image (the running track image here)
Overlay image (the Coca Cola image here)
Is it possible? I've read a lot and started some OpenGL basics tutorials, but can it be done from just what I have? Would the OpenGL implementation be faster, say around ~10 ms?
I'm currently playing with this tutorial here:
http://ogldev.atspace.co.uk/www/tutorial12/tutorial12.html
Am I going in the right direction? Total OpenGL newbie here, please bear with me. Thanks.
After trying a number of solutions proposed here and elsewhere, I ended up solving this by writing a fragment shader that replicates what 'warpPerspective' does.
The fragment shader code looks something like:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// NOTE: you will need to pass the INVERSE of the homography matrix, as well as
// the width and height of your image as uniforms!
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;
void main()
{
// Texture coordinates will run [0,1],[0,1];
// Convert to "real world" coordinates
highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
// Determine what 'z' is
highp vec3 m = inverseHomographyMatrix[2] * frameCoordinate;
highp float zed = 1.0 / (m.x + m.y + m.z);
frameCoordinate = frameCoordinate * zed;
// Determine translated x and y coordinates
highp float xTrans = inverseHomographyMatrix[0][0] * frameCoordinate.x + inverseHomographyMatrix[0][1] * frameCoordinate.y + inverseHomographyMatrix[0][2] * frameCoordinate.z;
highp float yTrans = inverseHomographyMatrix[1][0] * frameCoordinate.x + inverseHomographyMatrix[1][1] * frameCoordinate.y + inverseHomographyMatrix[1][2] * frameCoordinate.z;
// Normalize back to [0,1],[0,1] space
highp vec2 coords = vec2(xTrans / width, yTrans / height);
// Sample the texture if we're mapping within the image, otherwise set color to black
if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
gl_FragColor = texture2D(inputImageTexture, coords);
} else {
gl_FragColor = vec4(0.0,0.0,0.0,0.0);
}
}
Note that the homography matrix we are passing in here is the INVERSE HOMOGRAPHY MATRIX! You have to invert the homography matrix that you would pass into 'warpPerspective' - otherwise this code will not work.
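If you need to compute that inverse without OpenCV at hand, a small adjugate/determinant routine is enough; a minimal Java sketch (row-major float[9]; the class and method names are illustrative):
// Inverts a 3x3 homography stored row-major in a float[9],
// giving the matrix to upload as inverseHomographyMatrix.
final class Homography {
    static float[] invert3x3(float[] m) {
        float a = m[0], b = m[1], c = m[2];
        float d = m[3], e = m[4], f = m[5];
        float g = m[6], h = m[7], i = m[8];
        float det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
        return new float[] {
            (e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det,
            (f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det,
            (d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det
        };
    }
}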
The vertex shader does nothing but pass through the coordinates:
// Vertex shader
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main() {
// Nothing happens in the vertex shader
textureCoordinate = inputTextureCoordinate.xy;
gl_Position = position;
}
Pass in unaltered texture coordinates and position coordinates (i.e. textureCoordinates = [(0,0),(0,1),(1,0),(1,1)] and positionCoordinates = [(-1,-1),(-1,1),(1,-1),(1,1)], for a triangle strip), and this should work!
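On Android, the corresponding vertex setup might look like the sketch below. The attribute names match the vertex shader above; the class name and the use of client-side buffers instead of VBOs are assumptions made for brevity:
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import android.opengl.GLES20;

// Full-screen quad (triangle strip) with unaltered position and texture
// coordinates, matching the pass-through vertex shader above.
final class FullScreenQuad {
    private static final float[] POSITIONS = {
        -1f, -1f,   -1f, 1f,   1f, -1f,   1f, 1f
    };
    private static final float[] TEX_COORDS = {
         0f, 0f,    0f, 1f,    1f, 0f,    1f, 1f
    };

    static FloatBuffer asBuffer(float[] data) {
        FloatBuffer fb = ByteBuffer.allocateDirect(data.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        fb.put(data).position(0);
        return fb;
    }

    static void draw(int program) {
        int posLoc = GLES20.glGetAttribLocation(program, "position");
        int texLoc = GLES20.glGetAttribLocation(program, "inputTextureCoordinate");
        GLES20.glEnableVertexAttribArray(posLoc);
        GLES20.glVertexAttribPointer(posLoc, 2, GLES20.GL_FLOAT, false, 0, asBuffer(POSITIONS));
        GLES20.glEnableVertexAttribArray(texLoc);
        GLES20.glVertexAttribPointer(texLoc, 2, GLES20.GL_FLOAT, false, 0, asBuffer(TEX_COORDS));
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    }
}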
You can do perspective warping of the texture using texture2DProj(), or alternatively using texture2D() by dividing the st coordinates of the texture (which is what texture2DProj does).
Have a look here: Perspective correct texturing of trapezoid in OpenGL ES 2.0.
warpPerspective projects the (x,y,1) coordinate with the matrix and then divides (u,v) by w, like texture2DProj(). You'll have to modify the matrix so the resulting coordinates are properly normalised.
In terms of performance, if you want to read the data back to the CPU your bottleneck is glReadPixels. How long it will take depends on your device. If you're just displaying, the OpenGL ES calls will take much less than 10ms, assuming that you have both textures loaded to GPU memory.
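For completeness, the read-back itself looks roughly like this (assuming an RGBA colour buffer; the wrapper name is illustrative):
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import android.opengl.GLES20;

final class ReadBack {
    // Reads the current framebuffer back to the CPU as RGBA bytes.
    // The pipeline stall caused by this call is usually the real cost.
    static ByteBuffer readPixels(int width, int height) {
        ByteBuffer pixels = ByteBuffer.allocateDirect(width * height * 4)
                .order(ByteOrder.nativeOrder());
        GLES20.glReadPixels(0, 0, width, height,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, pixels);
        return pixels;
    }
}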
[edit] This worked on my Galaxy S9, but on my car's Android head unit the whole output texture came out white. I've stuck with the original shader and it works :)
You can use mat3*vec3 ops in the fragment shader:
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp mat3 inverseHomographyMatrix;
uniform highp float width;
uniform highp float height;
void main()
{
highp vec3 frameCoordinate = vec3(textureCoordinate.x * width, textureCoordinate.y * height, 1.0);
highp vec3 trans = inverseHomographyMatrix * frameCoordinate;
highp vec2 coords = vec2(trans.x / width, trans.y / height) / trans.z;
if (coords.x >= 0.0 && coords.x <= 1.0 && coords.y >= 0.0 && coords.y <= 1.0) {
gl_FragColor = texture2D(inputImageTexture, coords);
} else {
gl_FragColor = vec4(0.0,0.0,0.0,0.0);
}
}
If you want to have a transparent background, don't forget to add:
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
GLES20.glBlendEquation(GLES20.GL_FUNC_ADD);
And set transpose flag (in case you use the above shader):
GLES20.glUniformMatrix3fv(H_P2D, 1, true, homography, 0);
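One caveat: the OpenGL ES 2.0 reference requires the transpose argument of glUniformMatrix3fv to be GL_FALSE, so some drivers may reject the call above. If that happens, a workaround is to transpose the row-major 3x3 array on the CPU and pass false; a small illustrative sketch (helper name is an assumption):
final class MatrixUtil {
    // Converts a row-major 3x3 float[9] to column-major so it can be
    // uploaded with transpose = false, as OpenGL ES 2.0 requires.
    static float[] toColumnMajor3x3(float[] rowMajor) {
        return new float[] {
            rowMajor[0], rowMajor[3], rowMajor[6],
            rowMajor[1], rowMajor[4], rowMajor[7],
            rowMajor[2], rowMajor[5], rowMajor[8]
        };
    }
}
// usage: GLES20.glUniformMatrix3fv(H_P2D, 1, false, MatrixUtil.toColumnMajor3x3(homography), 0);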

GLSL ES fragment shader produces very different results on different devices

I am developing a game for Android using OpenGL ES 2.0 and have a problem with a fragment shader for drawing stars in the background. I've got the following code:
precision mediump float;
varying vec2 transformed_position;
float rand(vec2 co) {
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
void main(void) {
float distance = 10.0;
float quantization_strength = 4.0;
vec3 background_color = vec3(0.09, 0.0, 0.288);
vec2 zero = vec2(0.0, 0.0);
vec2 distance_vec = vec2(distance, distance);
vec2 quantization_vec = vec2(quantization_strength, quantization_strength);
vec2 new_vec = floor(transformed_position / quantization_vec) * quantization_vec;
if(all(equal(mod(new_vec, distance_vec), zero))) {
float rand_val = rand(new_vec);
vec3 current_color = background_color * (1.0 + rand_val);
gl_FragColor = vec4(current_color.x, current_color.y, current_color.z, 1.0);
} else {
gl_FragColor = vec4(background_color.x, background_color.y, background_color.z, 1.0 );
}
}
My aim is to 'quantize' the fragment coordinates, so 'stars' are not 1 px in size, and then light up quantized pixels that are distant enough from each other by a random amount. This code, however, produces different results depending on where it is executed. I have used GLSL Sandbox (http://glsl.heroku.com), a Nexus 7 and an HTC Desire S to create a comparison:
As you can see, GLSL Sandbox produces a dense grid with many stars visible. On the Nexus 7 there are far fewer stars and they are distributed along lines (which may not be obvious in this small image) - the rand function does not work as expected. The Desire S draws no stars at all.
Why does the rand function work so strangely on the Nexus 7 (if I modify the vector used for the dot product, the stars are distributed along lines at a different angle)? And what might cause the Desire S not to render the stars?
I would also appreciate any optimization tips for this shader, as I am very inexperienced with GLSL. Or perhaps there is a better way to draw 'stars' in a fragment shader?
UPDATE
I changed the code to this (I used http://glsl.heroku.com/e#9364.0 as reference):
precision mediump float;
varying highp vec2 transformed_position;
highp float rand(vec2 co) {
highp float a = 1e3;
highp float b = 1e-3;
highp float c = 1e5;
return fract(sin((co.x+co.y*a)*b)*c);
}
void main(void) {
float size = 15.0;
float prob = 0.97;
lowp vec3 background_color = vec3(0.09, 0.0, 0.288);
highp vec2 world_pos = transformed_position;
vec2 pos = floor(1.0 / size * world_pos);
float color = 0.0;
highp float starValue = rand(pos);
if(starValue > prob) {
vec2 center = size * pos + vec2(size, size) * 0.5;
float xy_dist = abs(world_pos.x - center.x) * abs(world_pos.y - center.y) / 5.0;
color = 0.6 - distance(world_pos, center) / (0.5 * size) * xy_dist;
}
if(starValue < prob || color < 0.0) {
gl_FragColor = vec4(background_color, 1.0);
} else {
float starIntensity = fract(100.0 * starValue);
gl_FragColor = vec4(background_color * (1.0 + color * 3.0 * starIntensity), 1.0);
}
}
The Desire S now gives me very nice, uniformly distributed stars. But the problem with the Nexus 7 is still there. With prob = 0.97, no stars are displayed, and with a very low prob = 0.01 they appear very sparsely, placed along horizontal lines. Why does Tegra 3 behave so strangely?

Why does my openGL shader program for points have banding artifacts?

For each point it takes, my OpenGL shader program creates a red ring that smoothly transitions between opaque and totally transparent. The shader program works, but has banding artifacts.
The fragment shader is below.
#version 110
precision mediump float;
void main() {
float dist = distance(gl_PointCoord.xy, vec2(0.5, 0.5));
// At the edge of the point sprite dist is 0.5; scale it so it becomes 1.
// At the centre dist is 0; it should stay 0.
float inner_circle = 2.0 * dist;
float circle = 1.0 - inner_circle;
vec4 pixel = vec4(1.0, 0.0, 0.0, inner_circle * circle );
gl_FragColor = pixel;
}
Here's the less interesting vertex shader that I don't think is the cause of the problem.
#version 110
attribute vec2 aPosition;
uniform float uSize;
uniform vec2 uCamera;
void main() {
// Square the view and map the top of the screen to 1 and the bottom to -1.
gl_Position = vec4(aPosition, 0.0, 1.0);
gl_Position.x = gl_Position.x * uCamera.y / uCamera.x;
// Set point size
gl_PointSize = (uSize + 1.0) * 100.0;
}
Please help me figure out why my OpenGL shader program has banding artifacts.
P.S. Incidentally this is for an Android Acer Iconia tablet.
Android's GLSurfaceView uses an RGB565 surface by default. Either enable dithering (glEnable(GL_DITHER)) or install a custom EGLConfigChooser to choose an RGBA or RGBX surface configuration with 8 bits per channel.
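For example, with a GLSurfaceView the 8-bit-per-channel surface can be requested before setRenderer; the depth/stencil sizes below are just typical values, and the class name is illustrative:
import android.content.Context;
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;

final class SurfaceSetup {
    // Request an RGBA8888 surface instead of the default RGB565 one.
    static GLSurfaceView createView(Context context, GLSurfaceView.Renderer renderer) {
        GLSurfaceView view = new GLSurfaceView(context);
        view.setEGLContextClientVersion(2);
        view.setEGLConfigChooser(8, 8, 8, 8, 16, 0); // R, G, B, A, depth, stencil
        view.setRenderer(renderer);
        return view;
    }

    // Alternative: keep RGB565 but enable dithering (call on the GL thread).
    static void enableDithering() {
        GLES20.glEnable(GLES20.GL_DITHER);
    }
}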
