I'm currently using OpenGL on Android to draw fixed-width lines, which works well except that OpenGL ES on Android does not natively support anti-aliasing of such lines. I have done some research, but I'm stuck on how to implement my own AA.
FSAA
The first possible solution I have found is Full Screen Anti-Aliasing. I have read this page on the subject but I'm struggling to understand how I could implement it.
First of all, I'm unsure on the entire concept of implementing FSAA here. The article states "One straightforward jittering method is to modify the projection matrix, adding small translations in x and y". Does this mean I need to be constantly moving the same line extremely quickly, or drawing the same line multiple times?
Secondly, the article says "To compute a jitter offset in terms of pixels, divide the jitter amount by the dimension of the object coordinate scene, then multiply by the appropriate viewport dimension". What's the difference between the dimension of the object coordinate scene and the viewport dimension? (I'm using a 800 x 480 resolution)
Now, based on the information given in that article, the 'jitter' coordinates should be relatively easy to compute. Here is what I have come up with so far (Java)...
float currentX = 50;
float currentY = 75;
// I'm assuming the "jitter" amount is essentially
// the amount of anti-aliasing (e.g. 2x, 4x and so on)
float jitterAmount = 2;
// don't know what these two are
float coordSceneDimensionX = 0; // ?
float coordSceneDimensionY = 0; // ?
// I assume screen size
float viewportX = 800;
float viewportY = 480;
// following the quoted article: divide by the scene dimension, multiply by the viewport dimension
float newX = (jitterAmount / coordSceneDimensionX) * viewportX;
float newY = (jitterAmount / coordSceneDimensionY) * viewportY;
// and then I don't know what to do with these new coordinates
That's as far as I've got with FSAA.
Anti-Aliasing with textures
In the same document I was referencing for FSAA, there is also a page that briefly discusses implementing anti-aliasing with the use of textures. However, I don't know what the best way to go about implementing AA in this way would be and whether it would be more efficient than FSAA.
Hopefully someone out there knows a lot more about Anti-Aliasing than I do and can help me achieve this. Much appreciated!
The method presented in the article predates the time when GPUs were capable of performing antialiasing themselves. This jittered rendering into an accumulation buffer is not really state of the art for realtime graphics (it is a widely implemented form of antialiasing for offline rendering, though).
What you do these days is request an antialiased framebuffer. That's it. The keyword here is multisampling. See this SO answer:
How do you activate multisampling in OpenGL ES on the iPhone? – although written for iOS, doing it for Android follows a similar path. AFAIK, on Android this extension is used instead: http://www.khronos.org/registry/gles/extensions/ANGLE/ANGLE_framebuffer_multisample.txt
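On Android the usual way to get this with a GLSurfaceView is a custom EGLConfigChooser that asks EGL for a multisampled surface. A minimal sketch (the class name and the 4x sample count are my own choices; real code should fall back to a non-multisampled config when none is available):

import javax.microedition.khronos.egl.EGL10;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.egl.EGLDisplay;
import android.opengl.GLSurfaceView;

class MsaaConfigChooser implements GLSurfaceView.EGLConfigChooser {
    public EGLConfig chooseConfig(EGL10 egl, EGLDisplay display) {
        int[] attribs = {
            EGL10.EGL_RED_SIZE, 5,
            EGL10.EGL_GREEN_SIZE, 6,
            EGL10.EGL_BLUE_SIZE, 5,
            EGL10.EGL_DEPTH_SIZE, 16,
            EGL10.EGL_SAMPLE_BUFFERS, 1, // ask for a multisampled surface
            EGL10.EGL_SAMPLES, 4,        // 4x MSAA
            EGL10.EGL_NONE
        };
        EGLConfig[] configs = new EGLConfig[1];
        int[] numConfigs = new int[1];
        if (!egl.eglChooseConfig(display, attribs, configs, 1, numConfigs)
                || numConfigs[0] == 0) {
            return null; // no MSAA config available; a real chooser should fall back
        }
        return configs[0];
    }
}

// before setRenderer(): glSurfaceView.setEGLConfigChooser(new MsaaConfigChooser());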
First of all, the article you refer to uses the accumulation buffer, which I really doubt exists in OpenGL ES, but I might be wrong here. If the accumulation buffer really is supported in ES, then you at least have to explicitly request it when creating the GL context (however that is done on Android).
Note that this technique is extremely inefficient and also deprecated, since nowadays GPUs usually support some kind of multisample antialiasing (MSAA). You should research whether your system/GPU/driver supports multisampling. This may require you to request a multisample framebuffer during context creation or something similar.
Now back to the article. The basic idea of this article is not to move the line quickly, but to render the line (or actually the whole scene) multiple times at very slightly different (at sub-pixel accuracy) locations (in image space) and average these multiple renderings to get the final image, every frame.
So you have a set of sample positions (in [0,1]), which are actually sub-pixel positions. This means if you have a sample position (0.25, 0.75) you move the whole scene by a quarter of a pixel in the x direction and three quarters of a pixel in the y direction (in screen space, of course) when rendering. When you have done this for each different sample, you average all these renderings together to gain the final antialiased rendering.
The dimension of the object coordinate scene is basically the dimension of the screen (actually the near plane of the viewing volume) in object space, or more practically, the values you passed into glOrtho or glFrustum (or a similar function, but with gluPerspective it is not that obvious). For modifying the projection matrix to realize this jittering, you can use the functions presented in the article.
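To make that jittering concrete, here is a minimal sketch for an orthographic setup (an ES 1.x fixed-function context is assumed; the parameter names are mine). worldWidth/worldHeight are what you passed to glOrthof, and the jitter offsets are given in (sub-)pixels.

// Assumes javax.microedition.khronos.opengles.GL10.
void setJitteredProjection(GL10 gl, float jitterX, float jitterY,
                           float worldWidth, float worldHeight,
                           int viewportWidth, int viewportHeight) {
    // convert the sub-pixel jitter (given in pixels) into world units
    float dx = jitterX * worldWidth  / viewportWidth;
    float dy = jitterY * worldHeight / viewportHeight;

    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(dx, worldWidth + dx, dy, worldHeight + dy, -1f, 1f);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
}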
The jitter amount is not the antialiasing factor, but the sub-pixel sample locations. The antialiasing factor in this context is the number of samples and therefore the number of jittered renderings you perform. And your code won't work if, as I assume, you only jitter the line end points. You have to draw the whole scene multiple times using this jittered projection and not just this single line (it may work with a simple black background and appropriate blending, though).
You might also be able to achieve this without an accum buffer using blending (with glBlendFunc(GL_CONSTANT_COLOR, GL_ONE) and glBlendColor(1.0f/n, 1.0f/n, 1.0f/n, 1.0f/n), with n being the antialiasing factor/sample count). But keep in mind that you have to render the whole scene like this, not just this single line.
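A rough sketch of that blending variant (assuming an ES 2.0 context, where glBlendColor and GL_CONSTANT_COLOR are available; drawScene is a hypothetical method that applies the jittered projection and issues all draw calls for one frame):

int samples = 4;
float[][] jitter = { {0.25f, 0.25f}, {0.75f, 0.25f},   // 2x2 sub-pixel grid
                     {0.25f, 0.75f}, {0.75f, 0.75f} };

GLES20.glClearColor(0f, 0f, 0f, 1f);
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_CONSTANT_COLOR, GLES20.GL_ONE);
GLES20.glBlendColor(1f / samples, 1f / samples, 1f / samples, 1f / samples);

for (int i = 0; i < samples; i++) {
    // clear only the depth buffer between passes if depth testing is used
    drawScene(jitter[i][0], jitter[i][1]);
}
GLES20.glDisable(GLES20.GL_BLEND);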
But as said, this technique is completely outdated, and you should rather look for a way to enable MSAA on your ES platform.
I am working on an AR app that needs to move an image depending on device's position and orientation.
It seems that Game Rotation Vector should provide the necessary data to achieve this.
However, I can't seem to understand what the values I get from the GRV sensor mean. For instance, in order to reach the same value on the Z axis I have to rotate the device 720 degrees, which seems odd.
If I could somehow convert these numbers into angles of the device relative to the x, y, z axes of the reference frame, my problem would be solved.
I have googled this issue for days and didn't find any sensible information on the meaning of GRV coordinates, and how to use them.
TL;DR: What do the numbers of the GRV sensor mean? And how do I convert them to angles?
As the docs state, the GRV sensor gives back a 3D rotation vector. This is represented as three component numbers which make this up, given by:
x axis (x * sin(θ/2))
y axis (y * sin(θ/2))
z axis (z * sin(θ/2))
This looks confusing at first, but (x, y, z) is simply the unit vector of the rotation axis, and θ (pronounced theta) is the single angle the device has rotated about that axis; that isn't made very clear in the docs.
Note also that when working with angles, especially in 3D, we generally use radians, not degrees, so theta is in radians. This looks like a good introductory explanation.
But the reason it's given to us in this format is that it can easily be used in matrix rotations, especially as a quaternion. In fact, these are the first three (vector) components of a quaternion. The 4th component is the scalar part, cos(θ/2), which encodes how far the rotation goes. So a quaternion packs the whole rotation, axis and angle, into four numbers.
These are directly usable in OpenGL, which is Android's (and the rest of the world's) 3D library of choice. Check out this tutorial for some OpenGL rotations info, this one for some general quaternion theory as applied to 3D programming in general, and this example by Google for Android which shows exactly how to use this information directly.
If you read the articles, you can see why you get it in this form and why it's called Game Rotation Vector - it's what's been used by 3D programmers for games for decades at this point.
TLDR; This example is excellent.
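If all you want are plain angles, the standard SensorManager helpers do the conversion for you. A minimal sketch (the angles come back in radians, and for the game rotation vector the azimuth is relative to an arbitrary starting reference rather than north):

import android.hardware.SensorEvent;
import android.hardware.SensorManager;

private final float[] rotationMatrix = new float[9];
private final float[] orientation = new float[3];

public void onSensorChanged(SensorEvent event) {
    // event.values holds (x*sin(theta/2), y*sin(theta/2), z*sin(theta/2), ...)
    SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values);
    SensorManager.getOrientation(rotationMatrix, orientation);
    float azimuth = orientation[0]; // rotation about -z, in radians
    float pitch   = orientation[1]; // rotation about x
    float roll    = orientation[2]; // rotation about y
}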
Edit - How to use this to show a 2D image which is rotated by this vector in 3D space.
In the example above, SensorManager.getRotationMatrixFromVector converts the Game Rotation Vector into a rotation matrix which can be applied to rotate anything in 3D. To apply this rotation to a 2D image, you have to think of the image in 3D, so it's actually a segment of a plane, like a sheet of paper. You'd map your image, which in the jargon is called a texture, onto this plane segment.
Here is a tutorial on texturing cubes in OpenGL for Android with example code and an in depth discussion. From cubes it's a short step to a plane segment - it's just one face of a cube! In fact that's a good resource for getting to grips with OpenGL on Android, I'd recommend reading the previous and subsequent tutorial steps too.
You mentioned translation as well. Look at the onDrawFrame method in the Google code example. Note that there is a translation using gl.glTranslatef and then a rotation using gl.glMultMatrixf. This is how you translate and rotate.
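Roughly, the relevant part of such an onDrawFrame looks like this (a sketch, assuming rotationMatrix is a float[16] filled by SensorManager.getRotationMatrixFromVector, since glMultMatrixf needs the 4x4 version; the -3f translation is an arbitrary example value):

public void onDrawFrame(GL10 gl) {
    gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
    gl.glTranslatef(0f, 0f, -3f);        // first move the textured quad into view
    gl.glMultMatrixf(rotationMatrix, 0); // then apply the device rotation
    // ... draw the textured plane segment here ...
}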
The order in which these operations are applied matters. Here's a fun way to experiment with that: check out Livecodelab, a live 3D sketch coding environment which runs inside your browser. In particular, this tutorial encourages reflection on the ordering of operations. Obviously the command move is a translation.
Drawing 2D graphics requires only two coordinates, and by default the Z coordinate is 0. Is it possible to use that Z coordinate to adjust graphics sizes? Let's say for larger screens I set Z to be 0, but when the screen is small (ldpi) I set Z to, say, -5 units so that the whole graphic fits into the screen. Is that good practice? Is it even possible to do it like that?
To adjust your graphics to the screen size (and rotation) you should adjust the opengl viewport size.
Not sure what exactly you plan to do with the z coordinate, but it doesn't look like a good approach to me.
Looks like you plan to use the z coordinate to zoom in or out so that the scene fits correctly into the screen. It is a valid point; you can easily do that by "hacking" the projection matrix that way. The only drawback I really see is that you need to send one more coordinate per vertex down your pipeline. It would be much easier to just set a global scaling factor, stored either in the modelview-projection matrix or passed to the vertex shaders.
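For illustration, a global scale with the fixed-function pipeline could look roughly like this (a sketch; isSmallScreen is a hypothetical flag, not from the question):

float scale = isSmallScreen ? 0.8f : 1.0f; // hypothetical per-device factor
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glScalef(scale, scale, 1f); // scales the whole scene, no extra vertex data needed
// ... draw the scene ...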
My guess (and I mean no disrespect) is that you know little about 2D rendering and came up with this idea. It's actually not that bad; it's a good first approach, but things are quite polished in this area. You should stick to the standard way of dealing with it unless you really know what you are doing.
The standard way is to use projection matrices (or cameras, at a higher level of abstraction). When using projections you define your "world coordinates". The projection maps your world to the GL viewport (usually the whole screen), so no matter the device screen size, you always show the same portion of the world. Note you'll have to deal with stretching.
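A minimal sketch of that idea (ES 1.x assumed): define a fixed world of, say, 800 x 480 units in onSurfaceChanged, independent of the actual pixel resolution.

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0f, 800f, 0f, 480f, -1f, 1f); // world coordinates, not pixels
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}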
I don't know if I'm really answering your question. This is not exactly what you asked, but what I think you wanted to ask. You shouldn't bother with z components if you use an orthographic projection (which is typical for 2D).
So you'd want to add a "fake" depth to your 2D app?
With an orthographic projection (used in most of the 2D rendering world), it would be completely useless.
With a perspective projection, it would surely lead to many subpixel glitches when texture minification occurs, or to blurring in the case of magnification.
You could resize your sprites or - better - you could create a set of baked sprites of different sizes.
I need pixel-perfect collision detection for my Android game. I've written some code to detect collisions with "normal" bitmaps (not rotated); it works fine. However, I can't get it to work for rotated bitmaps. Unfortunately, Java doesn't have a class for rotated rectangles, so I implemented one myself. It holds the positions of the four corners in relation to the screen and describes the exact location/layer of its bitmap; it's called "itemSurface". My plan for solving the detection was to:
Detect the intersection of the different itemSurfaces
Calculate the overlapping area
Put these areas in relation to their parent itemSurface/bitmap
Compare each pixel with the corresponding pixel of the other bitmap
Well, I'm having trouble with the first two steps. Does anybody have an idea or some code? Maybe there is already code in the Java/Android libs and I just didn't find it.
I understand that you want collision detection between rectangles (each rotated differently). You don't need to calculate the overlapping area. Moreover, comparing every pixel would be inefficient.
Implement a static boolean isCollision function which tells you whether there is a collision between one rectangle and another. Before that, take a piece of paper and do some geometry to work out the exact formulas. For performance reasons, do not wrap a rectangle in some Rectangle class; just use primitive types like doubles etc.
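One common piece of geometry for this is the separating axis theorem: two convex shapes do not intersect if and only if there is an axis (a normal of one of their edges) on which their projections do not overlap. A hedged sketch (the flat double[8] corner layout {x0,y0, ..., x3,y3} is my own choice, not from the question):

// Each rectangle is its 4 corners in order, flattened as {x0,y0, x1,y1, x2,y2, x3,y3}.
static boolean isCollision(double[] a, double[] b) {
    return !hasSeparatingAxis(a, b) && !hasSeparatingAxis(b, a);
}

// True if some edge normal of rect r separates r from s.
private static boolean hasSeparatingAxis(double[] r, double[] s) {
    for (int i = 0; i < 4; i++) {
        int j = (i + 1) % 4;
        // axis = normal of the edge from corner i to corner j
        double axisX = -(r[2 * j + 1] - r[2 * i + 1]);
        double axisY =   r[2 * j]     - r[2 * i];
        double[] pr = project(r, axisX, axisY);
        double[] ps = project(s, axisX, axisY);
        if (pr[1] < ps[0] || ps[1] < pr[0]) {
            return true; // gap found on this axis, no collision possible
        }
    }
    return false;
}

// Projects all 4 corners of rect onto the axis and returns {min, max}.
private static double[] project(double[] rect, double axisX, double axisY) {
    double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
    for (int i = 0; i < 4; i++) {
        double p = rect[2 * i] * axisX + rect[2 * i + 1] * axisY;
        min = Math.min(min, p);
        max = Math.max(max, p);
    }
    return new double[] { min, max };
}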
Then (pseudo code):
for (every rectangle a)
    for (every rectangle b)
        if (a != b && isCollision(a, b))
            bounce(a, b)
This is O(n^2), where n is the number of rectangles. There are better algorithms if you need more performance. The bounce function changes the velocity vectors of the moving rectangles to imitate a collision. If the weight of the objects is the same (you can approximate weight by the size of the rectangles), you just need to swap the two velocity vectors.
To bounce elements correctly you may need to store an auxiliary table, boolean alreadyBounced[][], to determine which rectangles do not need their vectors changed after a collision because they were already bounced.
One more tip:
If you are making a game on Android, you have to watch out not to allocate memory during gameplay, because it will trigger the GC more often, which takes a long time and slows down your game. I recommend watching this video and related ones. Good luck.
External requirements --- you have to hate them...
I have an OpenGL ES game, which uses EGL and OpenGL ES to draw on the screen. I don't have source to this; it's supplied as a binary blob. I'm implementing the interface layer that mediates between the game's calls to EGL and OpenGL and the platform's implementation.
It works fine. But I now have the unexpected external requirement that I need to be able to rotate the entire game's output 90 degrees.
Can anyone suggest any good (easy, fast) ways to do this? Off the top of my head, I can think of:
insert the appropriate transformation into the game's projection matrix. This seems to me to be the fastest solution; but I don't think I have enough knowledge of the game's manipulation of the projection matrix to do this reliably. Plus it'll confuse the game if it uses any OpenGL calls to access the screen which don't go through the projection matrix. (glReadPixels(), for example.)
give the game a rendering context to an off-screen buffer; it renders there, and then when the game calls eglSwapBuffers() I copy the result onto the screen. Render-to-texture would help here. Problems: this will affect performance as I'm effectively doing two drawing passes instead of one; and render-to-texture isn't standardised in OpenGL ES. (My target platform, Android, doesn't even reliably support shared contexts.)
render into the colour buffer, then use glReadPixels() to copy the data out and do a software rotate onto the screen. Problems: dead slow, and I have no control of the size of the buffer (i.e. if the screen is 640x480 and we're drawing 90° rotated, I really want to give the game a 480x640 colour buffer).
other?
Game-specific hacks aren't an option here because I need to be able to swap out the game binary with another one; this has to be a generic fix. Changing the game isn't an option because we don't have control of the game source code.
Any suggestions? Other than the non-technical one of trying to persuade the requirement to go away?
What is the issue? You just have to use glRotate about the z axis.
Approach 1 is the way to go.
Pixel operations are heavy, and it is possible that you could mess up the aspect ratio, etc.
The steps which go into drawing are:
1. Set the transformation matrix (the model/projection). If rotated, apply the glRotate about the z axis.
2. Set the viewport (this might change each time you rotate the screen): if landscape, set the width/height to (a, b); if portrait, swap them to (b, a).
3. Draw the scene with this matrix.
When you rotate the screen, the objects are rendered again. So glRotate is the best way to go.
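A rough sketch of the idea behind approach 1 (ES 1.x assumed, and assuming your mediation layer controls the projection setup, which may not hold if the game loads its own matrices): rotate everything 90 degrees about z and swap the logical width/height.

public void onSurfaceChanged(GL10 gl, int width, int height) {
    gl.glViewport(0, 0, width, height);
    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glRotatef(90f, 0f, 0f, 1f);               // rotate the whole output
    gl.glOrthof(0f, height, 0f, width, -1f, 1f); // game sees a height x width screen
    gl.glMatrixMode(GL10.GL_MODELVIEW);
}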
We are to develop a scrolling/zooming scene in OpenGL ES on Android, very much like a level in Angry Birds but more like a level in World of Goo. More like the latter, as the world will not consist of repeated layers as featured in Angry Birds but of one large image. As the scene needs to scroll/zoom and therefore a lot of it will not be visible, I was wondering about the most efficient way to implement the rendering, focusing on the environment only (i.e. not the objects within the world but the background layers).
We will be using an orthographic projection.
The first thing that comes to mind is creating a large four-vertex rectangle at world size, with the background texture mapped to it, and translating/scaling it using glTranslatef / glScalef. However, I was wondering whether the non-visible area outside the screen's boundaries is still rendered by OpenGL, as it cannot be culled (you would lose the visible area as well since there are only 4 vertices). Therefore, would it be more efficient to subdivide this rectangle, so non-visible smaller rectangles can be culled?
Another option would be creating a four-vertex rectangle that fills the screen, then moving the background by adjusting its texture coordinates. However, I guess we would run into problems when building bigger worlds, considering the texture size limit. It seems like a nice implementation for repeated backgrounds like Angry Birds has.
Maybe there is another way..?
If someone has an idea on how it might be done in AngryBirds / World of Goo, please share as I'd love to hear. They seem to have implemented a system that allows for the world to be moved and zoomed very (WorldOfGoo = VERY) smoothly.
This is probably your best bet for implementation.
In my experience, keeping a large texture in memory is very expensive on Android. I would get quite a few OutOfMemoryError exceptions for the background texture before I moved to tiling.
I think the biggest rendering bottleneck would be with memory transfer speeds and fill rate instead of any graphics computation.
Edit: Check out 53:28 of this presentation from Google I/O 2009.
You could split the background rectangle into smaller rectangles, so that OpenGL only renders the visible rectangles. You won't have one huge rectangle with one huge texture loaded, but smaller rectangles with smaller textures that you can load/unload depending on what is visible on screen...
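A hedged sketch of that tiling idea: only draw the tiles whose bounds intersect the visible region (drawTile and the camera/tile parameters are hypothetical; ES 1.x assumed).

void drawVisibleTiles(GL10 gl, float camX, float camY, float viewW, float viewH,
                      float tileSize, int tilesX, int tilesY) {
    int firstX = Math.max(0, (int) (camX / tileSize));
    int firstY = Math.max(0, (int) (camY / tileSize));
    int lastX  = Math.min(tilesX - 1, (int) ((camX + viewW) / tileSize));
    int lastY  = Math.min(tilesY - 1, (int) ((camY + viewH) / tileSize));
    for (int ty = firstY; ty <= lastY; ty++) {
        for (int tx = firstX; tx <= lastX; tx++) {
            drawTile(gl, tx, ty); // binds that tile's texture and draws its quad
        }
    }
}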
AFAIK there would be no performance drop due to large areas being rendered off-screen; subdividing and culling is normally done just to reduce vertex count, and you would actually be adding to it here.
Putting that aside for now; from the way you phrased the question I am unsure whether you have a large background texture or a small repeating one. If it is large, then you will need to subdivide because of texture size limitations anyway, so the question is moot! If it is small, then I would suggest the second method, fit a quad to the screen and move the background by changing the texture coordinates.
I feel like I may have missed something, though, as I am unsure why you mentioned the texture size limitation issue when talking about the texture coordinate method and not the large quad method. Surely for both this is not a problem for repeating textures, as you can use the GL_REPEAT texture wrap mode...
But for both it is a problem for a single large texture unless you subdivide, which would make the texture coordinate tactic way more complicated than necessary. In this case subdividing the mesh along texture subdivisions would be best, and culling off-screen sections. Deciding which parts to cull should be trivial with this technique.
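For completeness, the repeating-texture variant mentioned above boils down to setting GL_REPEAT wrap modes and offsetting the quad's texture coordinates each frame. A small sketch (backgroundTextureId and the scroll variables are hypothetical, and on many ES 1.x devices GL_REPEAT requires power-of-two texture dimensions):

gl.glBindTexture(GL10.GL_TEXTURE_2D, backgroundTextureId);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
// then add (scrollX / backgroundWidth, scrollY / backgroundHeight) to the
// quad's texture coordinates each frame to scroll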
Cheers.