AndEngine: Use of PIXEL_TO_METER_RATIO - android

I am new to AndEngine. I have seen PIXEL_TO_METER_RATIO used in a lot of places, but I am not able to understand when and how this constant is used. Can anyone guide me in the right direction?

Box2d, the underlying physics engine used by AndEngine, uses meters as standard units. PIXEL_TO_METER_RATIO describes how many pixels in AndEngine are equivalent to one meter in the physics engine. For example, if you get a position of the Body, it will be in meters. You would multiply it by the ratio to get a position on the Scene.
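For instance, a minimal sketch of that conversion (assuming the default ratio constant PIXEL_TO_METER_RATIO_DEFAULT, value 32, from the AndEngine Box2D extension):
import org.andengine.extension.physics.box2d.util.constants.PhysicsConstants;
import com.badlogic.gdx.math.Vector2;
import com.badlogic.gdx.physics.box2d.Body;

public class PositionConversion {
    // Convert a Box2D body position (meters) to AndEngine scene coordinates (pixels).
    public static Vector2 bodyPositionInPixels(Body body) {
        Vector2 meters = body.getPosition(); // Box2D works in meters
        float ratio = PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT; // 32 pixels per meter
        return new Vector2(meters.x * ratio, meters.y * ratio);
    }

    // Convert a scene coordinate (pixels) to Box2D world coordinates (meters).
    public static float pixelsToMeters(float pixels) {
        return pixels / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT;
    }
}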

The PTM_RATIO is defined because Box2D uses meters as its standard units.
For example, a screen of 480*320 pixels corresponds to a 15*10 meter Box2D world if PTM_RATIO is defined as 32.
Box2D works with floating point numbers and tolerances have to be used to make Box2D perform well. These tolerances have been tuned to work well with meters-kilogram-second (MKS) units. In particular, Box2D has been tuned to work well with moving objects between 0.1 and 10 meters. So this means objects between soup cans and buses in size should work well. Static objects may be up to 50 meters big without too much trouble.
reference: http://www.box2d.org/manual.html

Related

Need for changing floating point precision

I have started reading about RenderScript in Android. I am following this particular blog.
While reading I came across the part called Setting floating point precision.
It might seem like a noob question, but why do we need to change the floating point precision? What benefit do we get? Is it anything related to RenderScript in particular?
These precisions are for the compute part of RenderScript. Generally they will not affect rendering, where you get GL precision (which is generally much lower than IEEE 754), but you shouldn't rely on that anyway since the graphics part of RenderScript is deprecated.
Essentially, you should use rs_fp_relaxed, since that will let you run accelerated on the widest range of mobile GPUs and SIMD-supporting CPU devices.
rs_fp_relaxed enables flush-to-zero for denormals and round-towards-zero operation. This affects results when you do math on half, float and double types. You should also avoid double if you want to be accelerated by mobile GPUs and not take a speed hit, even on devices which natively support doubles.
I recommend checking out the wiki pages on floats: https://en.wikipedia.org/wiki/Single-precision_floating-point_format
The gist is that floats are stored in two parts, the exponent and the significand, similar to scientific notation such as 1.23 * 10^13. When the exponent is all 0s, your number is denormal. With flush-to-zero, if your calculation results in a value whose exponent is 0, the result is flushed to zero instead of keeping the actual value. For float32 the denormal range runs from 1.1754942E-38 (bit pattern 0x007fffff) down to 1.4E-45 (0x00000001), plus the corresponding negative values.
Round-towards-zero comes in when you do math with two floating point numbers: an implementation will not calculate the extra bits of precision needed to know which way to round the last bit, so you can be off by 1 ulp compared to a round-to-even implementation. Generally 1 ulp is quite small, but the absolute difference depends on where your value lies in the real number space. For example, 1.0 is encoded as 0x3f800000; a 1 ulp error could give you 0x3f800001, which converts to 1.0000001.
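As a small standalone Java illustration of those numbers (the denormal range and a 1 ulp step around 1.0; rs_fp_relaxed itself is a RenderScript pragma, this only demonstrates the float encoding):
public class FloatPrecisionDemo {
    public static void main(String[] args) {
        // Denormal range of float32: exponent bits all zero, significand non-zero.
        float smallestDenormal = Float.intBitsToFloat(0x00000001); // ~1.4E-45
        float largestDenormal  = Float.intBitsToFloat(0x007fffff); // ~1.1754942E-38
        System.out.println(smallestDenormal + " .. " + largestDenormal);
        // Under flush-to-zero, results in this range may simply become 0.0f.

        // 1 ulp around 1.0f (encoded as 0x3f800000).
        System.out.println(Integer.toHexString(Float.floatToIntBits(1.0f))); // 3f800000
        System.out.println(Math.ulp(1.0f));    // ~1.1920929E-7
        System.out.println(Math.nextUp(1.0f)); // 1.0000001, i.e. 0x3f800001
    }
}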
Precision is exactly that: it determines how precisely things on screen will be drawn. In some cases floating point precision may be insignificant on its own, or compared to other concerns like memory or performance. If you have a device with a small screen and little memory, you don't need double precision to draw a model.

Android Libgdx render performance

I have a ShapeRenderer and some lines in it. In my render method I have to change the lines' alpha.
1. Is there a better way to do this than setColor(r, g, b, calculated_alpha)? I read that this always creates a new object with new Color(), which is not ideal.
2. I have to do some calculations, let's say the distance between points. Is it fine to calculate those in every render cycle, or is there a better way?
3. I am new to shaders, but there are lowp, mediump and highp precision qualifiers. I have a Nexus 6 and a Samsung G7, and I can't see any difference between those precisions. What are they for? On a low-end device, should I use lowp?
I just created a simple live wallpaper and my device sometimes gets a little hot. Can you help me with this?
1. That's wrong. Look at the source code if you are in doubt. The method just sets the values on its existing Color object and reuses it. There is no problem with setting the color like this.
2. It depends on where you need it. If the points are static and do not change, then you want to calculate the distance once and reuse the result. If the points change position over time, then you need to calculate the current distance within the render() method.
The distance is usually calculated with the Pythagorean theorem: http://www.mathwarehouse.com/algebra/distance_formula/index.php
If you use the Vector2 class to represent your points then you can just do:
float distance = point1.dst(point2);
dst() uses the Pythagorean theorem behind the scenes.
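A minimal sketch of both cases (assuming the two points are stored as Vector2 fields; the names are made up for illustration):
import com.badlogic.gdx.math.Vector2;

public class DistanceExample {
    private final Vector2 point1 = new Vector2(50, 75);
    private final Vector2 point2 = new Vector2(200, 300);

    // Static points: compute the distance once and reuse it.
    private final float cachedDistance = point1.dst(point2);

    public void render(float delta) {
        // Moving points: recompute with the current positions each frame.
        float currentDistance = point1.dst(point2);
        // ... use cachedDistance or currentDistance, e.g. to drive the alpha
    }
}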
3. You will probably never see a difference between them with your eyes. It only determines how precise the floating point numbers in your shader are. mediump is usually used.
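If you do want to set it explicitly, the qualifier goes into the shader source itself; for example a minimal fragment shader (plain GLSL inside a Java string, names made up) might look like:
public class ShaderSource {
    // mediump is declared here as the default precision for floats in the fragment shader.
    public static final String FRAGMENT_SHADER =
            "#ifdef GL_ES\n"
            + "precision mediump float;\n" // lowp / mediump / highp would go here
            + "#endif\n"
            + "varying vec4 v_color;\n"
            + "void main() {\n"
            + "    gl_FragColor = v_color;\n"
            + "}\n";
}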

AndEngine Box2D Extension - Scaling

I'm new to AndEngine and Box2D. So bear with me please.
I created a new project, set up a 480x800 camera, added a 32x32 sprite, created a physics world with Earth gravity, and dropped the sprite. Lo and behold, it DID drop. But it didn't seem "natural" to me; it was too slow.
Then I realized that the gravity is in meters (m/s2) whereas the environment is in pixels. Where does the conversion between meters and pixels take place? Somewhere there must be an assumption behind the scenes. Do I have any control over it?
How does Box2D know whether it's dropping the sprite from 100 meters above the ground (viewed from a distance, which would appear as a very slow drop) or 1 meter above the ground (viewed from up close, which would appear very fast)?
To test that the conversion is the real problem, I multiplied the gravity by 10 and it improved the "naturalness". But I think there should be a more sophisticated way to convert pixels to meters.
Thanks in advance. I really appreciate your comments.
It's as @iforce2d said in the comment. In AndEngine the default value is 32, so 32 pixels are considered one meter. When converting pixels to meters, divide the pixels by this value; when converting meters to pixels, multiply by this value. You can find this value in the org.andengine.extension.physics.box2d.util.constants.PhysicsConstants class.
The ratio is then used in PhysicsFactory.create... methods if you don't specify your own. These methods create the physics body for you, measuring your sprite size in pixels and passing meters to Box2D. It's also used in the PhysicsConnector class constructor. Use your own value if 32 doesn't suit you, but then you will have to be consistent and use it every time.
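A rough sketch of passing your own ratio (this assumes the PhysicsFactory and PhysicsConnector overloads that take a ratio parameter; check the exact signatures in your AndEngine version):
import org.andengine.entity.sprite.Sprite;
import org.andengine.extension.physics.box2d.PhysicsConnector;
import org.andengine.extension.physics.box2d.PhysicsFactory;
import org.andengine.extension.physics.box2d.PhysicsWorld;
import com.badlogic.gdx.physics.box2d.Body;
import com.badlogic.gdx.physics.box2d.BodyDef.BodyType;
import com.badlogic.gdx.physics.box2d.FixtureDef;

public class CustomRatioHelper {
    private static final float PIXEL_TO_METER_RATIO = 64; // your own value instead of the default 32

    public static void attachBody(PhysicsWorld physicsWorld, Sprite sprite) {
        FixtureDef fixtureDef = PhysicsFactory.createFixtureDef(1.0f, 0.5f, 0.5f);

        // The sprite size in pixels is converted to meters using the given ratio.
        Body body = PhysicsFactory.createBoxBody(
                physicsWorld, sprite, BodyType.DynamicBody, fixtureDef, PIXEL_TO_METER_RATIO);

        // Use the same ratio in the connector that keeps sprite and body in sync.
        physicsWorld.registerPhysicsConnector(
                new PhysicsConnector(sprite, body, true, true, PIXEL_TO_METER_RATIO));
    }
}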

How to calculate standard deviation between 2 dimensionals points

I have a list of locations (longitude, latitude) for which I would like to calculate the standard deviation, but I'm clueless about how to do it in two dimensions. Any ideas?
The concept of a single standard deviation does not generalize well to two dimensions. You can take the standard deviation of each component separately, which gives you two standard deviations. Then you can combine these into a single number by taking sqrt(sdX^2 + sdY^2), where sdX and sdY are the per-component standard deviations. This gives you a measure of how far, on average, the points are from the center of the point cloud.
The concept of variance, which is the square of the standard deviation, can be generalized, and it becomes the covariance matrix.
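A small sketch of the per-component approach, treating each location as a plain (x, y) pair (which is only reasonable for points covering a small area, since longitude/latitude are not Euclidean coordinates):
public class StdDev2D {
    // points[i] = { longitude, latitude }
    public static double combinedStdDev(double[][] points) {
        int n = points.length;
        double meanX = 0, meanY = 0;
        for (double[] p : points) { meanX += p[0]; meanY += p[1]; }
        meanX /= n; meanY /= n;

        double varX = 0, varY = 0;
        for (double[] p : points) {
            varX += (p[0] - meanX) * (p[0] - meanX);
            varY += (p[1] - meanY) * (p[1] - meanY);
        }
        varX /= n; varY /= n;

        // Combine the two per-axis standard deviations: sqrt(sdX^2 + sdY^2).
        return Math.sqrt(varX + varY);
    }

    public static void main(String[] args) {
        double[][] pts = { {10.0, 50.0}, {10.2, 50.1}, {9.8, 49.9}, {10.1, 50.2} };
        System.out.println(combinedStdDev(pts));
    }
}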

Implementing Anti-Aliasing (FSAA?) on Android, OpenGL

I'm currently using OpenGL on Android to draw set width lines, which work great except for the fact that OpenGL on Android does not natively support the anti-aliasing of such lines. I have done some research, however I'm stuck on how to implement my own AA.
FSAA
The first possible solution I have found is Full Screen Anti-Aliasing. I have read this page on the subject but I'm struggling to understand how I could implement it.
First of all, I'm unsure on the entire concept of implementing FSAA here. The article states "One straightforward jittering method is to modify the projection matrix, adding small translations in x and y". Does this mean I need to be constantly moving the same line extremely quickly, or drawing the same line multiple times?
Secondly, the article says "To compute a jitter offset in terms of pixels, divide the jitter amount by the dimension of the object coordinate scene, then multiply by the appropriate viewport dimension". What's the difference between the dimension of the object coordinate scene and the viewport dimension? (I'm using a 800 x 480 resolution)
Now, based on the information given in that article the 'jitter' coordinates should be relatively easy to compute. Based on my assumptions so far, here is what I have come up with (Java)...
float currentX = 50;
float currentY = 75;
// I'm assuming the "jitter" amount is essentially
// the amount of anti-aliasing (e.g 2x, 4x and so on)
int jitterAmount = 2;
// don't know what these two are
int coordSceneDimensionX;
int coordSceneDimensionY;
// I assume screen size
int viewportX = 800;
int viewportY = 480;
float newX = (jitterAmount/coordSceneDimensionX)/viewportX;
float newY = (jitterAmount/coordSceneDimensionY)/viewportY;
// and then I don't know what to do with these new coordinates
That's as far as I've got with FSAA
Anti-Aliasing with textures
In the same document I was referencing for FSAA, there is also a page that briefly discusses implementing anti-aliasing with the use of textures. However, I don't know what the best way to go about implementing AA in this way would be and whether it would be more efficient than FSAA.
Hopefully someone out there knows a lot more about Anti-Aliasing than I do and can help me achieve this. Much appreciated!
The method presented in the article predates the time when GPUs were capable of performing antialiasing themselves. This jittered rendering to an accumulation buffer is not really state of the art in realtime graphics (it is a widely implemented form of antialiasing for offline rendering, though).
What you do these days is request an antialiased framebuffer. That's it. The keyword here is multisampling. See this SO answer:
How do you activate multisampling in OpenGL ES on the iPhone? – although written for iOS, doing it on Android follows a similar path. AFAIK on Android this extension is used instead: http://www.khronos.org/registry/gles/extensions/ANGLE/ANGLE_framebuffer_multisample.txt
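On Android the usual way is to request a multisampled EGL config for your GLSurfaceView. A rough sketch (whether 4x MSAA is actually available depends on the device and driver):
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;

public class MultisampleConfigChooser implements GLSurfaceView.EGLConfigChooser {
    @Override
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] attribs = {
                EGL10.EGL_RED_SIZE, 8,
                EGL10.EGL_GREEN_SIZE, 8,
                EGL10.EGL_BLUE_SIZE, 8,
                EGL10.EGL_DEPTH_SIZE, 16,
                EGL10.EGL_SAMPLE_BUFFERS, 1, // enable multisampling
                EGL10.EGL_SAMPLES, 4,        // request 4x MSAA
                EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        egl.eglChooseConfig(display, attribs, configs, 1, numConfigs);
        if (numConfigs[0] == 0) {
            throw new IllegalArgumentException("No multisampled EGL config available");
        }
        return configs[0];
    }
}
// Usage, before setRenderer(): glSurfaceView.setEGLConfigChooser(new MultisampleConfigChooser());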
First of all, the article you refer to uses the accumulation buffer, which I really doubt exists in OpenGL ES, but I might be wrong here. If the accumulation buffer is really supported in ES, then you at least have to explicitly request it when creating the GL context (however that is done on Android).
Note that this technique is extremely inefficient and also deprecated, since nowadays GPUs usually support some kind of multisample antialiasing (MSAA). You should research whether your system/GPU/driver supports multisampling. This may require you to request a multisample framebuffer during context creation or something similar.
Now back to the article. The basic idea of the article is not to move the line quickly, but to render the line (or actually the whole scene) multiple times at very slightly different, sub-pixel locations in image space, and average these renderings to get the final image, every frame.
So you have a set of sample positions (in [0,1]), which are actually sub-pixel positions. This means that if you have a sample position of (0.25, 0.75), you move the whole scene about a quarter of a pixel in the x direction and three quarters of a pixel in the y direction (in screen space, of course) when rendering. When you have done this for each sample, you average all these renderings together to get the final antialiased rendering.
The dimension of the object coordinate scene is basically the dimension of the screen (actually the near plane of the viewing volume) in object space, or more practically, the values you passed into glOrtho or glFrustum (or a similar function, but with gluPerspective it is not that obvious). For modifying the projection matrix to realize this jittering, you can use the functions presented in the article.
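As a sketch, for an orthographic projection on Android this jitter boils down to something like the following (names are made up; Matrix.orthoM is the standard android.opengl helper):
import android.opengl.Matrix;

public class JitterProjection {
    // The scene spans left..right and bottom..top in object coordinates and is
    // drawn into a viewport of viewportWidth x viewportHeight pixels.
    public static void jitteredOrtho(float[] projMatrix,
                                     float left, float right, float bottom, float top,
                                     float jitterPixelsX, float jitterPixelsY,
                                     int viewportWidth, int viewportHeight) {
        // (scene dimension / viewport dimension) is the size of one pixel in object space,
        // so this converts a sub-pixel jitter into object-space units.
        float offsetX = jitterPixelsX * (right - left) / viewportWidth;
        float offsetY = jitterPixelsY * (top - bottom) / viewportHeight;

        // Shift the whole projection by that sub-pixel offset.
        Matrix.orthoM(projMatrix, 0,
                left + offsetX, right + offsetX,
                bottom + offsetY, top + offsetY,
                -1.0f, 1.0f);
    }
}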
The jitter amount is not the antialiasing factor, but the sub-pixel sample location. The antialiasing factor in this context is the number of samples and therefore the number of jittered renderings you perform. And your code won't work if, as I assume, you try to jitter only the line end points. You have to draw the whole scene multiple times using this jittered projection, not just this single line (it may work with a simple black background and appropriate blending, though).
You might also be able to achieve this without an accum buffer using blending (with glBlendFunc(GL_CONSTANT_COLOR, GL_ONE) and glBlendColor(1.0f/n, 1.0f/n, 1.0f/n, 1.0f/n), with n being the antialiasing factor/sample count). But keep in mind to render the whole scene like this and not just this single line.
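A rough OpenGL ES 2.0 sketch of that blending setup (setJitteredProjection and drawScene are placeholders for your own rendering code):
import android.opengl.GLES20;

public class JitterBlend {
    public static void drawJitteredScene(int sampleCount) {
        GLES20.glClearColor(0f, 0f, 0f, 1f);
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);

        // Each pass is added with weight 1/n, so the n passes average out.
        GLES20.glEnable(GLES20.GL_BLEND);
        float w = 1.0f / sampleCount;
        GLES20.glBlendColor(w, w, w, w);
        GLES20.glBlendFunc(GLES20.GL_CONSTANT_COLOR, GLES20.GL_ONE);

        for (int i = 0; i < sampleCount; i++) {
            // setJitteredProjection(i); // hypothetical: apply the i-th sub-pixel offset
            // drawScene();              // hypothetical: draw the whole scene once
        }

        GLES20.glDisable(GLES20.GL_BLEND);
    }
}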
But as said, this technique is completely outdated and you should rather look for a way to enable MSAA on your ES platform.
