I have started reading about RenderScript in Android. I am following this particular blog.
While reading, I came across the part called Setting floating point precision.
It might seem like a noob question, but why do we need to change floating point precision? What benefit do we get? Is this anything related to RenderScript in particular?
These precisions apply to the compute part of RenderScript. They generally do not affect rendering, where you get GL precision, which is much lower than IEEE 754 in general; but you shouldn't be using that anyway, since the graphics part of RenderScript is deprecated.
Essentially, you should use rs_fp_relaxed (#pragma rs_fp_relaxed in your script), since that covers the widest range of mobile GPU and SIMD-supporting CPU devices.
rs_fp_relaxed enables flush-to-zero for denormals and round-toward-zero operation. This affects the results when you do math on half, float, and double types. You should also avoid double if you want to be accelerated by mobile GPUs, and to avoid a speed hit even on devices which natively support doubles.
I recommend checking out the Wikipedia page on single-precision floats: https://en.wikipedia.org/wiki/Single-precision_floating-point_format
The gist is that floats are stored in two parts, the exponent and the significand, similar to scientific notation like 1.23 * 10^13. When the exponent field is all 0s, your number is denormal. With flush-to-zero, if a calculation produces a result in the denormal range, you get zero instead of the actual value. For float32 the affected values run from 1.1754942E-38 (bit pattern 0x007fffff) down to 1.4E-45 (0x00000001), plus the corresponding negative values.
Round-toward-zero comes in when you do math on two floating point numbers: an implementation will not compute the extra digits of precision needed to know which way to round the last bit, so you can be off by 1 ulp compared to a round-to-nearest-even implementation. Generally 1 ulp is quite small, but the absolute difference depends on where your value lies in the real number space. For example, 1.0 is encoded as 0x3f800000; a 1 ulp error could give you 0x3f800001, which converts back to 1.0000001.
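You can poke at these boundary values yourself; here is a small standalone Java illustration (note that Java itself keeps denormals, so this shows the encodings only, not the flush-to-zero behavior of rs_fp_relaxed):

System.out.println(Float.intBitsToFloat(0x007fffff)); // largest denormal, ~1.1754942E-38
System.out.println(Float.intBitsToFloat(0x00000001)); // smallest denormal, ~1.4E-45
System.out.println(Float.MIN_NORMAL);                 // smallest normal float, 1.17549435E-38
System.out.println(Float.intBitsToFloat(0x3f800000)); // 1.0
System.out.println(Float.intBitsToFloat(0x3f800001)); // 1.0 plus 1 ulp = 1.0000001
System.out.println(Math.ulp(1.0f));                   // size of 1 ulp at 1.0, ~1.1920929E-7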
Precision is what it says: it determines how precisely values are computed, and hence how precisely things are drawn on screen. In some cases floating point precision may be insignificant on its own, or in comparison to other concerns like memory or performance. If you have a device with a small screen and little memory, you don't need double precision to draw a model.
I have three different OBJs in my scene: a body, plus a shirt and pants simulated over that body (rendered in that order).
On my test Android devices, rendering showed the inner shirt poking through the pants at some points, in the form of 'holes', while the same scene works just fine on desktops.
I guessed that some points are very close to each other, so I tried highp precision, and it started working fine on some of my devices (surprisingly, it doesn't work on a year-old Nexus!).
Q. Have I identified the correct problem, or could it be due to some other reason as well? Is there any way I can solve this issue on all devices?
Q. Can I somehow at least find out which GPUs will have this problem, so that I can target my APK accordingly?
Using:
Android 5.0
OpenGL ES 3.0
Edit:
Just in case it's of any help: when rotating the scene, or zooming in and out, these holes show a 'twinkling' behavior.
highp support is not mandatory, and AFAICT it's also not an 'extension', so you can't query for it, and support for it is also not recorded in GLES capability databases like http://opengles.gpuinfo.org/
You can query the precision of the shader using glGetShaderPrecisionFormat (http://docs.gl/es3/glGetShaderPrecisionFormat).
Of course, it's up to your application to know what precision is actually needed. And this is a runtime query; there is no way to know in advance.
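On Android, such a runtime check could look roughly like this (a minimal sketch using the android.opengl.GLES20 bindings; it must run on a thread with a current GL context):

int[] range = new int[2];     // log2 of the min/max representable magnitudes
int[] precision = new int[1]; // log2 of the precision, i.e. mantissa bits
GLES20.glGetShaderPrecisionFormat(GLES20.GL_FRAGMENT_SHADER, GLES20.GL_HIGH_FLOAT,
        range, 0, precision, 0);
// precision[0] == 23 means highp is effectively IEEE 754 single precision;
// on ES 2.0 devices without fragment highp, all three values come back as 0.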
Alright, so I seem to have solved this.
I quote from the opengl archives :
12.040 Depth buffering seems to work, but polygons seem to bleed through polygons that are in front of them. What's going on?
You may have configured your zNear and zFar clipping planes in a way
that severely limits your depth buffer precision. Generally, this is
caused by a zNear clipping plane value that's too close to 0.0. As the
zNear clipping plane is set increasingly closer to 0.0, the effective
precision of the depth buffer decreases dramatically. Moving the zFar
clipping plane further away from the eye always has a negative impact
on depth buffer precision, but it's not one as dramatic as moving the
zNear clipping plane.
The same page has lots of details about bounding boxes and related issues. A nice read.
I had a huge bounding box. I reduced its size and moved the camera nearer to the objects, and the issue is now resolved on all devices.
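For illustration, the difference lies in how the near and far planes are set up when building the projection matrix (a hypothetical sketch using android.opengl.Matrix; the actual values depend on your scene):

float[] projection = new float[16];
float aspect = 800f / 480f;
// Bad: a zNear very close to 0.0 wastes almost all of the depth buffer's precision.
// Matrix.frustumM(projection, 0, -aspect, aspect, -1f, 1f, 0.001f, 10000f);
// Better: push zNear out as far as the scene allows and keep zFar tight around
// the bounding box, so depth precision is spent where the objects actually are.
Matrix.frustumM(projection, 0, -aspect, aspect, -1f, 1f, 1f, 100f);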
I'm using SDL 2.0 with OpenGL 2.1 on Android (5.1 Lollipop on a Moto G).
I have essentially this code:
vec3 norm = normalize(var_normal);
vec3 sun = normalize(sun_position-var_position);
float diff = max(dot(norm,sun),0.);
in my fragment shader.
diff should always be a value between 0. and 1., right?
Unfortunately, due to the difficulty of debugging GLSL specifics, I'm not sure exactly what value diff ends up with, but it seems to be vastly blown out. If I set gl_FragColor = vec4(diff,diff,diff,1.); for debugging, I get pure white for 90% of the pixels; the only ones that aren't white are those with normals just shy of orthogonal to the camera.
Here's the interesting bit: when I run it on my computer (OS X El Capitan), it works as expected.
So, instead of maxing the dot product with 0., I clamp it between 0. and 1. Then everything starts to look more reasonable, but with a visibly small diffuse range.
Any ideas moving forward regarding the differences?
Shucks. Should have known.
My problem was floating point precision. My "sun_position" variable was set veeeerry far away. I just brought it closer and everything works fine.
I would have expected precision errors to result in a less precise direction (as in, the sun_pos - frag_pos vector would "snap" to the few values representable at such low precision and such a distance), but shouldn't normalize have brought that vector down to unit size? So I'd expect any precision error to produce a "wrong" direction, not a blown-out magnitude. (Clearly I'm wrong about this; I guess my understanding of IEEE floating point still needs some work...)
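A plausible explanation (an assumption, since the exact GPU behavior isn't known): if the fragment shader evaluates the subtraction at mediump precision, whose largest representable value is only about 65504, a very distant sun_position overflows to infinity before normalize ever runs, and normalizing an infinite vector yields NaN/undefined rather than a slightly wrong unit vector. Besides moving the sun closer, one robust fix is to normalize on the CPU at full float precision and pass the unit vector as a uniform. A hypothetical Java sketch (sunX/objX etc. are your sun and object positions, and sunDirLocation is assumed to be the location of a uniform vec3 sun_direction; for a sun that far away, the direction is effectively constant across the model anyway):

float dx = sunX - objX, dy = sunY - objY, dz = sunZ - objZ;
float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
float[] sunDir = { dx / len, dy / len, dz / len };
GLES20.glUniform3fv(sunDirLocation, 1, sunDir, 0); // unit-length sun direction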
Issue partially resolved; I'm leaving the previous post and code here for reference. The new issue (stated in the title) follows the bold text at the bottom.
I am trying to use colour picking to identify objects in OpenGL on Android. My code works fine for the most part: all objects are assigned unique colours in a (0.00f, 0.00f, 0.00f) format (alpha is always 0), and when clicking on the objects they are identified (most of the time) by reading the clicked pixel with glReadPixels and converting/comparing the values.
The problem only occurs with certain colour values. For instance, if an object is given the colour 0.0f, 0.77f, 1.0f (RGB), it will not colour solidly: parts of the object come out as 0.0, 0.76, 1.0 or 0.0, 0.78, 1.0. I thought it might be colour blending, so I coloured every object in the scene this colour, but the same thing happened; this also rules out lighting issues, which I thought might be another cause (despite not implementing lighting explicitly, to my knowledge). This issue occurs with a few colours, not just the one stated.
How can I tell the object or the renderer to colour these objects solidly, exactly as stated, instead of a blend of the colours on either side?
The colours are not coming through as stated: if a colour of R:0.0f G:0.77f B:1.0f is passed to glUniform4fv & glDrawArrays, it is drawn (and read back by glReadPixels) as R:0.0f G:0.78f B:1.0f. This is not the only value for which this happens; it's just an example.
Any suggestions for a fix are appreciated
There are at least two aspects that can come in your way of getting exactly the expected color:
Dithering
Dithering for color output is enabled by default. Based on what I've seen, it doesn't typically seem to come into play (or at least not in a measurable way) if you're rendering to targets with 8 bits per component. But it's definitely very noticeable when you're rendering to a lower color depth, like RGB565.
The details of what exactly happens as the result of dithering appears to be very vendor dependent.
For typical use, dithering being on by default is probably a good thing, because you only care about the visual appearance, and the whole idea of dithering is to enhance the visual quality. But if you rely on getting controllable and predictable values for your output colors, as is necessary in your picking case, you should always disable dithering:
glDisable(GL_DITHER);
Precision
As you're already aware based on your code, precision is a big concern here. You obviously can't expect to get exactly the same floating point value back as the one you originally specified for the color.
The primary loss of precision comes from the conversion of the color value to a normalized value in the color buffer. With 8 bits/component color depth, the precision of that value is 1.0/255.0. Which means that you should be fine with generating and comparing values with a precision of 0.01.
Another source of precision loss is the shader processing. Since you specify mediump for the precision in the shader code, which gives you at least about 10 bits of precision, that should not be harmful either.
One possibility is that you didn't actually get a configuration with 8-bit color components. This would also be consistent with the visible dithering effect. If you got an RGB565 surface, say, your observed precision starts to make sense.
For example, with RGB565, if you pass in 0.77 for the green component, the value is multiplied by 63 (2^6 - 1) during fixed-point conversion, which gives 48.51. Now, the spec says:
Values are converted (by rounding to nearest) to a fixed-point value with m bits, where m is the number of bits allocated to the corresponding R, G, B, A, or depth buffer component.
The nearest value for 48.51 is 49. But if you lose any kind of precision somewhere on the way, it could very easily become 48.
Now, when these values are converted back to float as you read them back, they are divided by 63.0. If the value in the framebuffer was 48, the result is 0.762, which your code rounds to 0.76. If it was 49, the result is 0.777, which rounds to 0.78.
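You can check this arithmetic with a few lines of standalone Java (just a numeric illustration, independent of any GL code):

float green = 0.77f;
int quantized = Math.round(green * 63f); // 0.77 * 63 = 48.51, rounds to 49
System.out.println(quantized / 63f);     // 49 / 63 = 0.7777778, reads back as 0.78
System.out.println(48 / 63f);            // if precision loss lands on 48: 0.7619048, i.e. 0.76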
So in short:
Be very careful about what kind of precision you can expect.
I think you might have an RGB565 framebuffer.
Also, using multiples of 0.01 for the values does not look like an ideal strategy, because it does not line up with the representation in the framebuffer. I would use multiples of 1/(2^b - 1), where b is the number of bits in the color components. Use those values when specifying colors, and apply the matching quantization when you compare the values you read back with the expected values.
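A sketch of that strategy (hypothetical helper methods, assuming an 8-bit-per-component framebuffer, i.e. b = 8):

// Map an object id to a picking colour that sits exactly on the framebuffer grid.
static float[] idToColor(int id) {
    float r = ((id >> 16) & 0xFF) / 255f;
    float g = ((id >> 8) & 0xFF) / 255f;
    float b = (id & 0xFF) / 255f;
    return new float[] { r, g, b, 0f };
}

// Decode the raw 0..255 bytes read back with glReadPixels into an id again.
static int pixelToId(int r, int g, int b) {
    return (r << 16) | (g << 8) | b;
}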
I've been attempting to find a library that would let me perform an FFT (Fast Fourier Transform) on some EEG signals in Android.
With the help of Geobits, I've finally found code that might help me do an FFT on an EEG signal. But I'm having a hard time figuring out how the code actually works. I want to know what the float arrays x and y are for, and maybe see an example that might help me a little more.
An FFT should return a series of complex numbers (either as rectangular coordinates, or polar: phase and magnitude) for a specific range of frequencies...
I'm still working through the expressions, but I'll bet dollars to donuts that the x and y arrays are the real (x) and imaginary (y) components of the complex numbers that are the result of the transformation.
The square root of the sum of the squares of these two components gives the magnitude of the harmonic component at each frequency (the conversion to polar form).
If the phase is important for your application, keep in mind that the FFT (as with any phasor) can be either sine referenced or cosine referenced. I believe sine is the standard, however.
see:
http://www.mathworks.com/help/matlab/ref/fft.html
http://mathworld.wolfram.com/FastFourierTransform.html
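Under that assumption (x holding the real parts, y the imaginary parts, as in the code in question), the conversion to polar form per frequency bin would look like this in Java (a minimal sketch):

float[] magnitude = new float[x.length];
float[] phase = new float[x.length];
for (int i = 0; i < x.length; i++) {
    magnitude[i] = (float) Math.sqrt(x[i] * x[i] + y[i] * y[i]); // magnitude of bin i
    phase[i] = (float) Math.atan2(y[i], x[i]);                   // phase of bin i
}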
Since the FFT gives a truncated approximation to the infinite series produced by the harmonic decomposition of a periodic waveform, any periodic waveform can be used to test the functionality of your code.
For example, a square wave should be easy to replicate, and it has very well-known harmonic coefficients. The size of the data set determines the number of harmonics you can calculate (most FFT algorithms do best with a data set whose length is a power of two and which spans an integral number of wavelengths of the lowest frequency you want to use).
The square wave's coefficients should appear at odd multiples of the fundamental frequency, with magnitudes that vary inversely with the order of the harmonic.
http://en.wikipedia.org/wiki/Square_wave
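A test signal for this is easy to generate (a hypothetical harness; feed signal into whatever FFT routine you're using as the real input, with the imaginary input all zeros, and inspect the bin magnitudes):

int N = 256;                 // power of two, covering one full period of the wave
float[] signal = new float[N];
for (int i = 0; i < N; i++) {
    signal[i] = (i < N / 2) ? 1f : -1f; // square wave: +1 for half the period, -1 for the rest
}
// Afterwards, only the odd harmonics (bins 1, 3, 5, ...) should carry energy,
// and the n-th odd harmonic of a unit square wave has amplitude 4 / (n * PI).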
I'm currently using OpenGL on Android to draw lines of a set width, which works great except for the fact that OpenGL on Android does not natively support anti-aliasing of such lines. I have done some research; however, I'm stuck on how to implement my own AA.
FSAA
The first possible solution I have found is Full Screen Anti-Aliasing. I have read this page on the subject but I'm struggling to understand how I could implement it.
First of all, I'm unsure on the entire concept of implementing FSAA here. The article states "One straightforward jittering method is to modify the projection matrix, adding small translations in x and y". Does this mean I need to be constantly moving the same line extremely quickly, or drawing the same line multiple times?
Secondly, the article says "To compute a jitter offset in terms of pixels, divide the jitter amount by the dimension of the object coordinate scene, then multiply by the appropriate viewport dimension". What's the difference between the dimension of the object coordinate scene and the viewport dimension? (I'm using an 800 x 480 resolution.)
Now, based on the information given in that article the 'jitter' coordinates should be relatively easy to compute. Based on my assumptions so far, here is what I have come up with (Java)...
float currentX = 50;
float currentY = 75;
// I'm assuming the "jitter" amount is essentially
// the amount of anti-aliasing (e.g. 2x, 4x and so on)
float jitterAmount = 2;
// don't know what these two are (placeholder values so this compiles)
float coordSceneDimensionX = 1;
float coordSceneDimensionY = 1;
// I assume screen size
float viewportX = 800;
float viewportY = 480;
// per the quoted article: divide by the scene dimension, multiply by the
// viewport dimension (float math, so the division doesn't truncate to zero)
float newX = (jitterAmount / coordSceneDimensionX) * viewportX;
float newY = (jitterAmount / coordSceneDimensionY) * viewportY;
// and then I don't know what to do with these new coordinates
That's as far as I've got with FSAA
Anti-Aliasing with textures
In the same document I was referencing for FSAA, there is also a page that briefly discusses implementing anti-aliasing with textures. However, I don't know the best way to implement AA this way, or whether it would be more efficient than FSAA.
Hopefully someone out there knows a lot more about Anti-Aliasing than I do and can help me achieve this. Much appreciated!
The method presented in the article predates the time when GPUs were capable of performing antialiasing themselves. Jittered rendering into an accumulation buffer is not really state of the art for realtime graphics (it is a widely implemented form of antialiasing for offline rendering, though).
What you do these days is request an antialiased framebuffer. That's it. The keyword here is multisampling. See this SO answer:
How do you activate multisampling in OpenGL ES on the iPhone? – although written for iOS, doing it on Android follows a similar path. AFAIK on Android this extension is used instead: http://www.khronos.org/registry/gles/extensions/ANGLE/ANGLE_framebuffer_multisample.txt
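On Android with a GLSurfaceView, requesting a multisampled surface boils down to choosing an EGL config with sample buffers. A hedged sketch (attribute values and fallback handling vary per device; EGL10/EGLConfig/EGLDisplay come from javax.microedition.khronos.egl, and you install the chooser with glSurfaceView.setEGLConfigChooser(new MsaaConfigChooser()) before the surface is created):

class MsaaConfigChooser implements GLSurfaceView.EGLConfigChooser {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] attribs = {
            EGL10.EGL_RED_SIZE, 8, EGL10.EGL_GREEN_SIZE, 8,
            EGL10.EGL_BLUE_SIZE, 8, EGL10.EGL_DEPTH_SIZE, 16,
            EGL10.EGL_SAMPLE_BUFFERS, 1, // ask for a multisampled config
            EGL10.EGL_SAMPLES, 4,        // 4x MSAA
            EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] count = new int[1];
        egl.eglChooseConfig(display, attribs, configs, 1, count);
        // Real code should fall back to a non-multisampled config if count[0] == 0.
        return count[0] > 0 ? configs[0] : null;
    }
}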
First of all, the article you refer to uses the accumulation buffer, whose existence in OpenGL ES I really doubt, but I might be wrong here. If the accumulation buffer really is supported in ES, then you at least have to explicitly request it when creating the GL context (however that is done on Android).
Note that this technique is extremely inefficient and also deprecated, since nowadays GPUs usually support some kind of multisample antialiasing (MSAA). You should research whether your system/GPU/driver supports multisampling. This may require you to request a multisample framebuffer during context creation, or something similar.
Now back to the article. The basic idea of the article is not to move the line quickly, but to render the line (or actually the whole scene) multiple times at very slightly different locations (at sub-pixel accuracy, in image space) and to average these multiple renderings into the final image, every frame.
So you have a set of sample positions (in [0,1]), which are actually sub-pixel positions. This means if you have a sample position of (0.25, 0.75), you move the whole scene about a quarter of a pixel in the x direction and three quarters of a pixel in the y direction (in screen space, of course) when rendering. Once you have done this for each sample, you average all these renderings together to get the final antialiased rendering.
The dimension of the object coordinate scene is basically the dimension of the screen (actually of the near plane of the viewing volume) in object space, or, more practically, the values you passed into glOrtho or glFrustum (or a similar function; with gluPerspective it is less obvious). For modifying the projection matrix to realize this jittering, you can use the functions presented in the article.
The jitter amount is not the antialiasing factor, but the sub-pixel sample location. The antialiasing factor in this context is the number of samples, and therefore the number of jittered renderings you perform. And your code won't work if, as I assume, you are trying to jitter only the line end points. You have to draw the whole scene multiple times using the jittered projection, not just this single line (it may work with a simple black background and appropriate blending, though).
You might also be able to achieve this without an accumulation buffer by using blending (with glBlendFunc(GL_CONSTANT_COLOR, GL_ONE) and glBlendColor(1.0f/n, 1.0f/n, 1.0f/n, 1.0f/n), n being the antialiasing factor/sample count). But keep in mind to render the whole scene like this, and not just the single line; see the sketch below.
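Put together, one frame of this approach might look roughly as follows (a hypothetical sketch using the GLES20 bindings; buildJitteredProjection and drawScene stand in for the article's projection helpers and your own rendering code):

int n = 4; // antialiasing factor = number of jittered passes
float[][] jitter = { {0.25f, 0.25f}, {0.75f, 0.25f}, {0.25f, 0.75f}, {0.75f, 0.75f} };

GLES20.glClearColor(0f, 0f, 0f, 1f); // simple black background, as noted above
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_CONSTANT_COLOR, GLES20.GL_ONE);
GLES20.glBlendColor(1f / n, 1f / n, 1f / n, 1f / n); // weight each pass by 1/n

for (int i = 0; i < n; i++) {
    GLES20.glClear(GLES20.GL_DEPTH_BUFFER_BIT); // fresh depth per pass, accumulated color stays
    float[] proj = buildJitteredProjection(jitter[i][0], jitter[i][1]);
    drawScene(proj); // render the WHOLE scene with the jittered projection
}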
But as said, this technique is completely outdated, and you should rather look for a way to enable MSAA on your ES platform.