I have a ShapeRenderer and some lines in it. In my renderer I have to change the lines' alpha. What is a better way to do this than setColor(r, g, b, calculated_alpha)?
I read that this always creates a new object with new Color(), which is not ideal.
I also have to do some calculations, for example the distance between points. Is it a good idea to calculate those in every render cycle? Is there a better way?
I am new to shaders, but there are lowp, mediump, and highp precision qualifiers. I have a Nexus 6 and a Samsung G7, and by the way I can't see any difference between those precisions. What are they for? On a low-end device, should I add a lowp?
I just created a simple live wallpaper and my device sometimes gets a little hot. Can you help me with this?
1. That's wrong. Look at the source code if you are in doubt: the method just sets the values on its existing Color object and reuses it. There is no problem with setting the color like this.
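For reference, the relevant method in the libGDX source is essentially this one-liner (quoted from memory, so treat it as a sketch rather than the exact source):

public void setColor (float r, float g, float b, float a) {
    this.color.set(r, g, b, a); // mutates the existing Color instance, no allocation
}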
2. Depends on where you need it. If the points are static and do not change, you want to calculate the distance once and reuse the result. If the points change position over time, you need to calculate the current distance within the render() method.
For calculating the distance, the Pythagorean theorem is usually used: http://www.mathwarehouse.com/algebra/distance_formula/index.php
If you use the Vector2 class to represent your points then you can just do:
float distance = point1.dst(point2);
dst() uses the Pythagorean theorem behind the scenes.
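To make the caching idea from point 2 concrete, here is a minimal sketch (assuming libGDX Vector2 points; the field name cachedDistance is made up for illustration):

private Vector2 point1, point2;
private float cachedDistance;

public void create() {
    // static points: calculate the distance once and reuse it
    cachedDistance = point1.dst(point2);
}

public void render() {
    // moving points: the current distance has to be recalculated every frame
    float currentDistance = point1.dst(point2);
}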
3. You will probably never see a difference between them with your eyes. It just controls how precise the floating-point numbers in your shader are. mediump is usually used.
I want to create a magic wand tool in Android like the one implemented in Photoshop. Is there an open-source library to perform this work? And if not, can anyone guide me in the right direction?
OpenCV has floodFill, which with a little work can give you the magic wand functionality.
Basically you need access to the pixels of the image. You can do that in numerous ways, e.g. via a Canvas. Then your algorithm is a bit like the A* pathfinding algorithm (but not really):
1. set a color-difference threshold
2. define a starting point
3. check every pixel around the starting point; if it passes the threshold, save its coordinates
4. for every pixel that passed the threshold, go to step 2 (treat it as the new starting point)
The pixel-color difference that is compared against the threshold is in essence the Pythagorean theorem applied between the color of the original starting point and the color of the pixel you are comparing, with the three color channels as coordinates: d = sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2)
Of course Photoshop has a number of extremely efficient algorithms, but essentially it boils down to the above.
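A minimal Java sketch of these steps, as a breadth-first flood fill (class and method names are made up for illustration; the pixel array could come from e.g. Bitmap.getPixels()):

import java.util.ArrayDeque;

public class MagicWand {
    // Returns a mask of the selected region. pixels is ARGB, row-major.
    public static boolean[] select(int[] pixels, int width, int height,
                                   int startX, int startY, double threshold) {
        boolean[] selected = new boolean[width * height];
        boolean[] visited = new boolean[width * height];
        int startColor = pixels[startY * width + startX];
        ArrayDeque<int[]> queue = new ArrayDeque<int[]>();
        queue.add(new int[] { startX, startY });
        visited[startY * width + startX] = true;
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            int x = p[0], y = p[1];
            // step 3: does this pixel pass the color-difference threshold?
            if (colorDistance(startColor, pixels[y * width + x]) > threshold) continue;
            selected[y * width + x] = true;
            // step 4: queue the four neighbours and repeat from there
            int[][] neighbours = { { x + 1, y }, { x - 1, y }, { x, y + 1 }, { x, y - 1 } };
            for (int[] n : neighbours) {
                int nx = n[0], ny = n[1];
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                if (!visited[ny * width + nx]) {
                    visited[ny * width + nx] = true;
                    queue.add(new int[] { nx, ny });
                }
            }
        }
        return selected;
    }

    // the color difference from above: Pythagoras over the RGB channels
    static double colorDistance(int c1, int c2) {
        int dr = ((c1 >> 16) & 0xFF) - ((c2 >> 16) & 0xFF);
        int dg = ((c1 >> 8) & 0xFF) - ((c2 >> 8) & 0xFF);
        int db = (c1 & 0xFF) - (c2 & 0xFF);
        return Math.sqrt(dr * dr + dg * dg + db * db);
    }
}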
I'm new to AndEngine and Box2D. So bear with me please.
I created a new project, set up a 480x800 camera, added a 32x32 stripe, created a physics world at Earth gravity and dropped the stripe. Lo and behold, it DID drop. But it didn't seem "natural" to me; it was too slow.
Then I realized that the gravity is in meters (m/s2) whereas the environment is in pixels. Where does the conversion between meters and pixels take place? Somewhere there must be an assumption behind the scenes. Do I have any control over it?
How does Box2D know whether it's dropping the stripe from 100 meters above the ground (and viewing it from a distance, which would appear as a very slow drop) or 1 meter above the ground (and viewing it from up close, which would appear very fast)?
To test that the conversion is the real problem, I multiplied the gravity by 10 and it improved the "naturalness". But I think there should be a more sophisticated way to convert pixels to meters.
Thanks in advance. I really appreciate your comments.
It's as @iforce2d said in the comment. In AndEngine the default value is 32, so 32 pixels are considered one meter. When converting pixels to meters, divide the pixels by this value; when converting from meters to pixels, multiply by this value. You can find this value in the org.andengine.extension.physics.box2d.util.constants.PhysicsConstants class.
The ratio is then used in the PhysicsFactory.create... methods if you don't specify your own. These methods create the physics body for you, measuring your sprite size in pixels and passing meters to Box2D. It's also used in the PhysicsConnector class constructor. Use your own value if 32 doesn't suit you, but then you will have to be consistent and use it every time.
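As a minimal sketch of the conversion itself (assuming the default ratio of 32, which AndEngine exposes as PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT):

static final float PIXEL_TO_METER_RATIO = 32f;

static float pixelsToMeters(float pixels) {
    return pixels / PIXEL_TO_METER_RATIO; // a 32x32-pixel sprite becomes a 1m x 1m body
}

static float metersToPixels(float meters) {
    return meters * PIXEL_TO_METER_RATIO; // e.g. for positioning sprites from body coordinates
}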
I have a list of locations (longitude, latitude) for which I would like to calculate the standard deviation, but I'm clueless how to do it in two dimensions. Any ideas?
The concept of a single standard deviation does not generalize well to two dimensions. You can take the standard deviation of each component separately, which gives you two standard deviations. Then you can combine these into a single number by taking sqrt(X^2 + Y^2), where X and Y are the two component standard deviations. This gives you a measure of how far, on average, the points are from the center of the point cloud.
The concept of variance, which is the square of the standard deviation, can be generalized, and it becomes the covariance matrix.
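A small Java sketch of the component-wise approach (plain arrays, population standard deviation; the method names are illustrative):

static double combinedSpread(double[] xs, double[] ys) {
    double sdX = stdDev(xs);
    double sdY = stdDev(ys);
    return Math.sqrt(sdX * sdX + sdY * sdY); // single spread measure from the two components
}

static double stdDev(double[] values) {
    double mean = 0;
    for (double v : values) mean += v;
    mean /= values.length;
    double variance = 0;
    for (double v : values) variance += (v - mean) * (v - mean);
    return Math.sqrt(variance / values.length);
}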
I'm working with QGLWidget and various gestures for an Android app, and the topic of quaternions is thoroughly confusing, so it's been mostly guess and check. I've been able to make a rotation about one axis by some number of degrees using:
rotation=QQuaternion::fromAxisAndAngle(QVector3D(1,0,0),delta.y())*rotation;
This has the desired result, as does the same statement in the x direction.
My question is, for one, is this the correct way of doing a rotation? And two, if I want to rotate on two axes, do I just do:
rotation=QQuaternion::fromAxisAndAngle(QVector3D(1,0,0),delta.y())*rotation;
rotation=QQuaternion::fromAxisAndAngle(QVector3D(0,1,0),delta.x())*rotation;
Or is there a one line statement that will work just as well?
Yes, you are doing it the right way; there is no one-line statement :-)
It is very common in 3D applications to create a quaternion from a set of Euler angles, and we do this simply by multiplying together the most basic rotations. This is fine, since it is pretty cheap to compute anyway (unless you are doing a lot of them and profiling has determined that this part is critical for performance). For instance, if you are using the Z-X-Z convention (as illustrated in the first picture here), then you would write:
QQuaternion rotation =
QQuaternion::fromAxisAndAngle(QVector3D(0,0,1), alpha) *
QQuaternion::fromAxisAndAngle(QVector3D(1,0,0), beta) *
QQuaternion::fromAxisAndAngle(QVector3D(0,0,1), gamma);
where alpha, beta and gamma are double values representing the angles in degrees (be careful: degrees, not radians).
Note: you can create the one-liner yourself by wrapping it in your own method:
static QQuaternion fromEuler(double alpha, double beta, double gamma);
I'm currently using OpenGL on Android to draw lines of a set width, which works great except for the fact that OpenGL ES on Android does not natively support anti-aliasing of such lines. I have done some research; however, I'm stuck on how to implement my own AA.
FSAA
The first possible solution I have found is Full Screen Anti-Aliasing (FSAA). I have read this page on the subject, but I'm struggling to understand how I could implement it.
First of all, I'm unsure about the entire concept of implementing FSAA here. The article states: "One straightforward jittering method is to modify the projection matrix, adding small translations in x and y". Does this mean I need to be constantly moving the same line extremely quickly, or drawing the same line multiple times?
Secondly, the article says: "To compute a jitter offset in terms of pixels, divide the jitter amount by the dimension of the object coordinate scene, then multiply by the appropriate viewport dimension". What's the difference between the dimension of the object coordinate scene and the viewport dimension? (I'm using an 800 x 480 resolution.)
Now, based on the information given in that article, the 'jitter' coordinates should be relatively easy to compute. Based on my assumptions so far, here is what I have come up with (Java)...
float currentX = 50;
float currentY = 75;
// I'm assuming the "jitter" amount is essentially
// the amount of anti-aliasing (e.g. 2x, 4x and so on)
float jitterAmount = 2;
// don't know what these two are
float coordSceneDimensionX;
float coordSceneDimensionY;
// I assume screen size
float viewportX = 800;
float viewportY = 480;
// "divide the jitter amount by the dimension of the object coordinate
// scene, then multiply by the appropriate viewport dimension"
// (floats, so the division isn't truncated to zero)
float newX = (jitterAmount / coordSceneDimensionX) * viewportX;
float newY = (jitterAmount / coordSceneDimensionY) * viewportY;
// and then I don't know what to do with these new coordinates
That's as far as I've got with FSAA.
Anti-Aliasing with textures
In the same document I was referencing for FSAA, there is also a page that briefly discusses implementing anti-aliasing with the use of textures. However, I don't know the best way to go about implementing AA this way, or whether it would be more efficient than FSAA.
Hopefully someone out there knows a lot more about Anti-Aliasing than I do and can help me achieve this. Much appreciated!
The method presented in the articles predates the time when GPUs were capable of performing antialiasing themselves. This jittered rendering into an accumulation buffer is not really state of the art for realtime graphics (it is a widely implemented form of antialiasing for offline rendering, though).
What you do these days is request an antialiased framebuffer. That's it. The keyword here is multisampling. See this SO answer:
How do you activate multisampling in OpenGL ES on the iPhone? – although written for iOS, doing it on Android follows a similar path. AFAIK, on Android this extension is used instead: http://www.khronos.org/registry/gles/extensions/ANGLE/ANGLE_framebuffer_multisample.txt
First of all, the article you refer to uses the accumulation buffer, whose existence in OpenGL ES I really doubt, but I might be wrong here. If the accumulation buffer is really supported in ES, then you at least have to explicitly request it when creating the GL context (however that is done on Android).
Note that this technique is extremely inefficient and also deprecated, since nowadays GPUs usually support some kind of multisample antialiasing (MSAA). You should research whether your system/GPU/driver supports multisampling. This may require you to request a multisample framebuffer during context creation or something similar.
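On Android, requesting such a multisampled framebuffer is typically done through an EGL config chooser. A minimal sketch, assuming a GLSurfaceView and the EGL 1.0 bindings (error handling and the usual extra attributes are omitted):

glSurfaceView.setEGLConfigChooser(new GLSurfaceView.EGLConfigChooser() {
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] attribs = {
            EGL10.EGL_RED_SIZE, 8, EGL10.EGL_GREEN_SIZE, 8, EGL10.EGL_BLUE_SIZE, 8,
            EGL10.EGL_DEPTH_SIZE, 16,
            EGL10.EGL_SAMPLE_BUFFERS, 1, // ask for a multisampled config
            EGL10.EGL_SAMPLES, 4,        // 4x MSAA
            EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        egl.eglChooseConfig(display, attribs, configs, 1, numConfigs);
        return numConfigs[0] > 0 ? configs[0] : null; // null if no such config exists
    }
});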
Now back to the article. The basic idea of the article is not to move the line quickly, but to render the line (or actually the whole scene) multiple times at very slightly different locations (at sub-pixel accuracy, in image space) and average these multiple renderings to get the final image, every frame.
So you have a set of sample positions (in [0,1]), which are actually sub-pixel positions. This means if you have a sample position of (0.25, 0.75), you move the whole scene a quarter of a pixel in the x direction and three quarters of a pixel in the y direction (in screen space, of course) when rendering. When you have done this for each sample, you average all these renderings together to get the final antialiased rendering.
The dimension of the object coordinate scene is basically the dimension of the screen (actually of the near plane of the viewing volume) in object space, or more practically, the values you passed into glOrtho or glFrustum (or a similar function; with gluPerspective it is not that obvious). For modifying the projection matrix to realize this jittering, you can use the functions presented in the article.
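To make that relationship concrete, a sketch of the conversion (variable names are illustrative; left/right/bottom/top are what you passed to glOrtho):

// move the scene by (subPixelX, subPixelY) pixels in screen space:
float sceneWidth = right - left;                     // object-space width of the view
float sceneHeight = top - bottom;
float dx = subPixelX * sceneWidth / viewportWidth;   // e.g. subPixelX = 0.25f
float dy = subPixelY * sceneHeight / viewportHeight; // e.g. subPixelY = 0.75f
// then translate the projection matrix by (dx, dy) before rendering this sample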
The jitter amount is not the antialiasing factor, but the sub-pixel sample location. The antialiasing factor in this context is the number of samples and therefore the number of jittered renderings you perform. And your code won't work if, as I assume, you try to jitter only the line end points. You have to draw the whole scene multiple times using this jittered projection, not just this single line (it may work with a simple black background and appropriate blending, though).
You might also be able to achieve this without an accumulation buffer by using blending (with glBlendFunc(GL_CONSTANT_COLOR, GL_ONE) and glBlendColor(1.0f/n, 1.0f/n, 1.0f/n, 1.0f/n), with n being the antialiasing factor/sample count). But keep in mind to render the whole scene like this, not just this single line.
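A minimal sketch of that blending variant, using Android's GLES20 bindings (jitterProjection, drawScene and sampleOffsets are hypothetical placeholders for your own code):

int n = 4; // antialiasing factor / sample count
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_CONSTANT_COLOR, GLES20.GL_ONE);
GLES20.glBlendColor(1.0f / n, 1.0f / n, 1.0f / n, 1.0f / n);
for (int i = 0; i < n; i++) {
    jitterProjection(sampleOffsets[i]); // shift the projection by a sub-pixel offset
    drawScene();                        // draw the WHOLE scene, weighted by 1/n
}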
But as said, this technique is completely outdated, and you should rather look for a way to enable MSAA on your ES platform.