I am writing a small app that at the moment generates a random map of textures.
I am drawing this map as a 10 x 15 group of "quads", which are in fact all triangle strips. I use the "map" to grab an int, which I then take as the location of the texture for this square in the textureAtlas. So, for example, 0 is the bottom-left "tile". The atlas is 128 x 128 and split into 32-pixel tiles.
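To illustrate, the index-to-coordinate mapping is roughly this (a simplified sketch, not my exact code - the variable names here are made up):
int tilesPerRow = 128 / 32;            //4 columns (and rows) of tiles
float tileSpan = 1.0f / tilesPerRow;   //0.25 in texture space
int col = tileIndex % tilesPerRow;     //tileIndex is the int from the map
int row = tileIndex / tilesPerRow;     //row 0 is the bottom row
float u0 = col * tileSpan;             //left edge of the tile
float v0 = row * tileSpan;             //bottom edge
float u1 = u0 + tileSpan;              //right edge
float v1 = v0 + tileSpan;              //top edge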
However, I seem to be getting some odd artifacts where the texture from one tile is creeping into the next tile. I wondered if it was the image itself, but as far as I can tell the pixels are exactly where they should be. I then looked at the texture coords I was specifying, but they all look exact (0.0, 0.25, 0.5, 0.75, 1.0 - splitting it into the 4 rows and columns I would expect).
The odd thing is that if I run it on the emulator I do not get any artifacts.
Is there a setting I am missing which would cause bleeding of 1 pixel? It also seemed to be vertical only - this could be related to the fact that on the phone I am "stretching" the image in that direction, as the phone's screen is larger than normal that way.
I load the texture like so:
//Get a new ID
int id = newTextureID(gl);
//We will need to flip the texture vertically
Matrix flip = new Matrix();
flip.postScale(1f, -1f);
//Load up and flip the texture
Bitmap temp = BitmapFactory.decodeResource(context.getResources(), resource);
//Store the widths for the texturemap
int width = temp.getWidth();
int height = temp.getHeight();
Bitmap bmp = Bitmap.createBitmap(temp, 0, 0, width, height, flip, true);
temp.recycle();
//Bind
gl.glBindTexture(GL10.GL_TEXTURE_2D, id);
//Set params
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
//Push onto the GPU (the bitmap can be recycled once it has been uploaded)
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0);
bmp.recycle();
TextureAtlas atlas = new TextureAtlas(id, width, height, tileSize);
return atlas;
I then render it like so:
gl.glBindTexture(GL10.GL_TEXTURE_2D, currentAtlas.textureID);
//Enable the vertices buffer for writing and to be used during our rendering
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
//Specify the location and data format of an array of vertex coordinates to use
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
//Enable the texture buffer
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawElements(GL10.GL_TRIANGLES, indices.length, GL10.GL_UNSIGNED_SHORT, indexBuffer);
I would take a picture, but I am unsure how to get a screen cap from the phone...
If anyone knows how I can capture the current frame and perhaps write it out to a file, I will do that if it helps explain what is going on!
Look forward to your response.
Edit: Here is a screencap - note I run the app in landscape but cap is in portrait. Also excuse the horrible textures :D they were merely a place holder / messing around.
[ScreenCap image omitted]
Well, after speaking to a friend I managed to solve this little problem.
It turns out that if you wish to have exact, pixel-perfect textures, you have to pull the texture coordinates half a pixel in from the tile's edge.
To do this I simply added or subtracted half a pixel (measured in texture space) to/from each texture coordinate.
Like so:
//Top Left
textureCoords[textPlace] = xAdjust*currentAtlas.texSpaceWidth + currentAtlas.halfPixelAdjust; textPlace++;
textureCoords[textPlace] = (yAdjust+1)*currentAtlas.texSpaceHeight - currentAtlas.halfPixelAdjust; textPlace++;
This was simply calculated when loading the texture atlas:
(float)(0.5 * ((1.0f / numberOfTilesInAtlasRow) / pixelsPerTile));
Although if the height of each tile was different to its width (which could happen), you would need to calculate the two adjustments individually, as in the sketch below.
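For my 128 x 128 atlas with 32-pixel tiles this works out as 0.5 * ((1/4) / 32) = 1/256 ≈ 0.0039. A minimal sketch of the per-axis version (the field names beyond halfPixelAdjust are assumptions for this example):
float texSpaceWidth = 1.0f / numberOfTilesInAtlasRow;      //e.g. 1/4 = 0.25
float texSpaceHeight = 1.0f / numberOfTilesInAtlasColumn;  //may differ from the width
//Half a texel, expressed in normalised texture space
float halfPixelAdjustX = 0.5f * (texSpaceWidth / pixelsPerTileX);
float halfPixelAdjustY = 0.5f * (texSpaceHeight / pixelsPerTileY);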
This has solved all the artifacts so I can continue on. Hope it helps someone else!
Related
I am drawing an image-based texture using OpenGL on Android and trying to rotate it about its center.
But the result is not as expected: it appears skewed.
The first screen grab is the texture drawn without rotation, and the second one is drawn with a 10-degree rotation.
Code snippet is as below:
mViewWidth = viewWidth;//View port width
mViewHeight = viewHeight;//View port height
float ratio = (float) viewWidth / viewHeight;
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
.....
Matrix.setLookAtM(mViewMatrix, 0, 0, 0, 5, 0f, 0f, 0f, 0.0f, 1.0f, 0.0f);
Matrix.setRotateM(mRotationMatrix, 0, 10, 0, 0, 1.0f);
Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, temp, 0, mRotationMatrix, 0);
GLES20.glUniformMatrix4fv(mRotationMatrixHandle , 1, false, mRotationMatrix, 0);
And in shader:
....
" gl_Position = uMVPMatrix*a_position;\n"
....
The black area in the first screen grab is the area of GLSurfaceView and the grey area is the area where I am trying to draw the image.
The image is already at origin and I think there is no need to translate before rotating it.
The basic problem is that you're scaling your geometry to adjust for the screen aspect ratio before you apply the rotation.
It might not be obvious that you're actually scaling the geometry. But by calculating the coordinates you use for drawing to adjust for the aspect ratio, you are effectively applying a non-uniform scaling transformation to the geometry. And if you then rotate the result, it will get distorted.
What you need to do is apply the rotation before you scale. This will require some reorganization of your current code. Since you apply the scaling before you pass the coordinates to OpenGL, and then do the rotation in the shader, you can't easily change the order. You either have to:
Apply both transformations, in the proper order, to the input coordinates before you pass them to OpenGL, and remove the rotation from the shader code.
Apply both transformations, in the proper order, in the shader code. To do this, you would not modify the input coordinates to adjust to the aspect ratio, and pass a scaling factor into the shader instead.
For the first option, applying a 2D rotation in your own code is easy enough, and it looks like you only have 4 vertices, so there is no efficiency concern. Still, the second option is certainly more elegant. So instead of scaling the coordinates in your client code, pass a scaling factor as a uniform into the shader. Then, in the GLSL code, apply the rotation first, and scale the resulting coordinates.
Another option is to build the complete transformation matrix (again based on applying the individual transformations in the correct order) and pass that matrix into the shader, as sketched below.
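A rough sketch of that last option, reusing the matrix names from the question (mMVPMatrixHandle is an assumed handle for the uMVPMatrix uniform). The key points: the quad's vertex coordinates stay unscaled, the projection matrix alone handles the aspect ratio, and the combined matrix - not the rotation matrix - is what gets uploaded:
//Model transform: rotation only, applied to unscaled geometry
Matrix.setRotateM(mRotationMatrix, 0, 10, 0, 0, 1.0f);
//Projection (which already accounts for the aspect ratio) * view * model
Matrix.multiplyMM(temp, 0, mProjectionMatrix, 0, mViewMatrix, 0);
Matrix.multiplyMM(mMVPMatrix, 0, temp, 0, mRotationMatrix, 0);
//Upload the combined matrix for the shader's uMVPMatrix
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mMVPMatrix, 0);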
When I'm applying a texture to a shape, I keep seeing it mirrored. The GLU.gluLookAt is set to be 5 units up, so it's GLU.gluLookAt(gl, 0, 0, 5, 0, 0, 0, 0, 2, 0);. If it were 5 units down, the x axis would be reversed, and that would be an even bigger problem.
Can you please tell me how to mirror the bitmap that is being loaded to be a texture? I want to maintain the position of the axes and the shapes being drawn the way they are; I just want to automatically mirror the bitmap.
Can you please tell me how to do that? Perhaps give me a code sequence that mirrors the bitmap on the x axis?
You should be able to reverse the texture mapping coordinates you're using. For horizontal mirror, reverse the u values. For vertical, reverse v.
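For example, for the usual two-triangle quad, a horizontal mirror just swaps the u values 0 and 1 (a sketch; flip the v values instead for a vertical mirror):
//Horizontally mirrored quad: each u value is replaced by 1 - u
float mirroredTexCoords[] = {
1.0f, 1.0f, //was (0,1)
0.0f, 1.0f, //was (1,1)
1.0f, 0.0f, //was (0,0)
0.0f, 0.0f, //was (1,0)
};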
I'm developing a game for Android, and this is my first experience with OpenGL.
When the application loads, I create my vertex and texture buffers and load images from drawable resources, using GLUtils.texImage2D to bind each image to a texture in my texture array.
I was wondering if glBindTexture() was the correct function to use when changing the texture to produce animation.
public void onDraw(GL10 gl){
sprite.animate();
gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[sprite.frameNumber]);
sprite.draw(gl);
}
Code Explanation
sprite.animate() - changes the frame number depending on System.uptimeMillis()
sprite.draw() - does the actual drawing:
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
//Disable the client state before leaving
gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
The function does work, but I wanted to confirm it was the correct function to use, or if there is an alternative way to do this.
Binding a different texture to animate is one way to do what you want.
A more popular way of doing this is to have all your animation frames in one big texture (pack all the individual frames into a big rectangle): to draw a different frame, just change the texture coordinates.
For example, pack four frames of animation in a big 2x2 square
1|2
3|4
Then you'll use for texture coordinates (0,0) (0.5,0) (0.5,0.5) (0,0.5) to display frame 1, and the rest should be obvious.
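A sketch of computing those coordinates for an arbitrary frame in a grid atlas (frames numbered left to right, top to bottom as above; whether row 0 sits at the bottom or the top of texture space depends on how you load/flip your bitmaps):
int cols = 2, rows = 2;             //the 2x2 atlas above
float w = 1.0f / cols, h = 1.0f / rows;
int col = frame % cols;
int row = frame / cols;
float u0 = col * w, u1 = u0 + w;
float v0 = row * h, v1 = v0 + h;    //swap v0 and v1 if your texture is stored bottom-up
//Quad texture coords: (u0,v0) (u1,v0) (u1,v1) (u0,v1)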
I have an app that I've been repeatedly playing with in android, it uses opengl-es.
Currently I load textures from a bitmap like so:
//Load up and flip the texture - then dispose the temp
Bitmap temp = BitmapFactory.decodeResource(Deflecticon.getContext().getResources(), resourceID);
Bitmap bmp = Bitmap.createBitmap(temp, 0, 0, temp.getWidth(), temp.getHeight(), flip, true);
temp.recycle();
//Bind the texture in memory
gl.glBindTexture(GL10.GL_TEXTURE_2D, id);
//Set the parameters of the texture.
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
//On to the GPU
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0);
The obvious issue is that the texture I'm using has to be a power of 2. At the moment I'm pre-editing the textures in Photoshop to be a power of 2, simply leaving empty borders. However, this is a little tedious, and I want to be able to load them as they are: recognise they aren't a power of 2 and load them into a texture that is.
I know I could scale the bitmap to become a power of 2 size and simply stretch the texture but I do not wish to stretch the texture and in some cases may want to put several textures into one "atlas".
I know I can use glTexSubImage2D() to paste into the texture the data I want at the origin I want. This is great!
However, I do not know how to initialise a texture with no data in Android.
In a previously asked question, the suggestion was to call glTexImage2D() with no data and then fill it in.
However, in Android, when you call GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0) you do not specify a width / height; it reads those from the bitmap, I assume.
What is the best way to do this? Can I create a new, blank bitmap of the right power-of-2 size, not filled with any data, and use it to initialise the texture, then paste into it using subImage? Or should I make a new bitmap, somehow copy the pixels I want into it (not sure if you can do that easily), leaving borders, and then just use that?
Edit: clarified that I'm using OpenGL.
I think if you created a bitmap with power-of-2 axis sizes and then added your bitmap to it, it should work just fine. Maybe something like
Bitmap.createBitmap(notPowerOf2Bitmap, offx, offy, xsize, ysize)
Other than that, I would say suffer through the Photoshop process. How many pictures have you got?
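Note, though, that Bitmap.createBitmap(source, x, y, width, height) crops rather than pads, so to follow this suggestion you would draw the source into a larger blank bitmap instead. A minimal sketch of that idea (nextPowerOf2 is an assumed helper):
int potW = nextPowerOf2(src.getWidth());
int potH = nextPowerOf2(src.getHeight());
Bitmap padded = Bitmap.createBitmap(potW, potH, Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(padded);
canvas.drawBitmap(src, 0, 0, null); //source in the top-left corner, the rest stays transparent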
Non-power-of-two (NPOT) bitmaps are supported on some GLES platforms, but you have to check to see if the appropriate extension exists. Note, however, that at least on PowerVR SGX, even though NPOT is supported, there are still some other fairly arbitrary restrictions; for example, your texture width must be a multiple of 2 (if not a power of 2). Also, NPOT rendering tends to be a bit slower on many GPUs.
One thing you can do is just create a wrapper texture which is a power-of-two size and then use glTexSubImage2D to upload the texture to cover only part of that, and then adjust your texture coordinates accordingly. The obvious drawback to this is that you can't use texture wrapping in that circumstance. If you absolutely must support wrapping, you could just scale your texture to the nearest power-of-two size before you call glTexImage2D, although this usually introduces sampling artifacts and makes things blurry, especially if you're trying to do pixel-precise 2D work.
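Adjusting the coordinates then just means scaling by the fraction of the wrapper texture that is actually used (a sketch; the names are illustrative):
float maxU = (float) imageWidth / potWidth;   //e.g. 300/512
float maxV = (float) imageHeight / potHeight; //e.g. 200/256
//Use (0,0)..(maxU,maxV) in the texcoord buffer instead of (0,0)..(1,1)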
Another thing you might consider, if you don't need to support wrapping, is to make a "texture atlas," in which you condense all of your textures into a few big textures, and have your polygons map to just some portions of the texture atlas(es). You have to be careful when generating MIPmaps, but other than that it usually provides a pretty nice performance benefit, as well as making more efficient use of texture memory since you're not wasting so much on padded or scaled images.
I have two solutions I have employed for this problem. I can be more specific if necessary, but conceptually you can:
Make the image a power of 2, fill the section to crop with 100% transparent alpha, and load the images with alpha enabled.
Tweak your texture vector / buffer so it doesn't load that section. So instead of using the default
float texture[] = {
0.0f, 1.0f, //
1.0f, 1.0f, //
0.0f, 0.0f, //
1.0f, 0.0f, //
};
as the mapping (obviously this is for loading an image onto a two-triangle square), scale the values back by the ratio of the area to crop, e.g.
float texture[] = {
0.0f, 0.75f, //
0.9f, 0.75f, //
0.0f, 0.0f, //
0.9f, 0.0f, //
};
Of course, be precise with your math, or the unwanted bit may bleed in, or you'll cut out some of the real image. Obviously this array is calculated on the fly and not hard-coded as I have demonstrated here.
Why don't you create two bitmaps? Load the first one as you're doing, then use Bitmap.createScaledBitmap to turn it into a power of two. Performance-wise I don't know if it is the fastest method possible, but it works.
You can use GLES20.glTexImage2D() to create an empty texture with a specified width and height. Example code:
public static int genTexture(int texWidth, int texHeight) {
    int[] textureIds = new int[1];
    GLES20.glGenTextures(1, textureIds, 0);
    assertNoError();
    int textureId = textureIds[0];

    //Round the requested size up to the next power of 2
    texWidth = Utils.nextPowerOf2(texWidth);
    texHeight = Utils.nextPowerOf2(texHeight);

    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId);
    GLES20.glTexParameteri(
            GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(
            GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
    GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
            GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
    //Passing null for the pixel data allocates storage without uploading anything
    GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA,
            texWidth, texHeight, 0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
    return textureId;
}
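Once the empty texture exists, you can paste a (non-power-of-two) bitmap into its corner with GLUtils.texSubImage2D, for example:
int id = genTexture(bmp.getWidth(), bmp.getHeight());
GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, id);
GLUtils.texSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, bmp);
//Remember to scale your texture coordinates by width/potWidth and height/potHeight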
I'm writing my first 2D app for Android using OpenGL. I'm writing it on my Desire, so my screen coords should be 0,0 to 799,479 in landscape mode. I'm trying to get OpenGL to use this range in world coordinates.
The app, such as it is, is working fine so far, but I've had to tweak numbers to get stuff to appear on the screen and I'm frustrated by my inability to understand the relationship between the projection matrix, and the rendering of textures in this regard.
Setting the projection matrix:
gl.glViewport(0, 0, width, height);
float ratio = (float) width / height;
float size = .01f * (float) Math.tan(Math.toRadians(45.0) / 2);
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-size, size, -size / ratio, size / ratio, 0.01f, 100.0f);
// GLU.gluOrtho2D(gl, 0,width, 0, height);
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
Also, I'm not sure what is 'best': using glFrustumf or GLU.gluOrtho2D. The latter has simpler parameters - just the dimensions of the viewport - but I've not got anywhere with that. (Some sites have height and 0 the other way around, but that makes no difference.) Shouldn't this be the natural choice for 2D usage of OpenGL? Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D - please disregard the third dimension everywhere, in the interests of speed"?
Drawing my textures:
I'm drawing stuff using 2 textured triangles. The relevant parts of my init (let me know if I need to edit my question with more detail) are:
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glTranslatex(nXpos, nYpos, nZoomin);
gl.glRotatef(nRotZ, 0, 0, 1);
gl.glScalef((float)nScaleup,(float)nScaleup, 0.0f);
...
...
gl.glVertexPointer(2, GL10.GL_FIXED, 0, mVertexBuffer);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTextureBuffer);
gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, 4);
mVertexBuffer is an IntBuffer and contains:
int vertices[] =
{
-1, -1,
1, -1,
-1, 1,
1, 1
};
I don't intend, ultimately, to have to pass in nZoomin - I've done it this way because it was how I found the 'magic numbers' needed to actually see anything! Currently I need to use -1000 there, with smaller numbers resulting in smaller images. Am I right in thinking there must be some way of having a value of zero for nZoomin when the projection matrix is set correctly?
My textures are currently 128x128 (but may end up being different sizes, perhaps always square though). I have no way of knowing when they're being displayed at actual size currently. I'd like to be able to pass in a value of, say, 128 for nScaleup to have it plotted at actual size. Is this related to the projection matrix, or do I have two separate issues?
If you're working in 2D, you don't need glFrustum, just use glOrtho. Something like this:
gl.glOrthof(0, 800, 0, 480, -1, 1);
That'll put the origin at the bottom left. If you want it at the top left, use:
gl.glOrthof(0, 800, 480, 0, -1, 1);
For 480 and 800, you should obviously substitute the actual size of your view, so your app will be portable to different screen sizes and configurations.
I'm passing -1 and 1 for the z range, but these don't really matter, because the orthographic projection puts (x, y, z) on the same place on the screen, no matter the value of z. (near and far must not be equal, though.) This is the only way to tell OpenGL to ignore the z coordinate; there is no specific "2D" mode, your matrices are still 4x4, and 2-dimensional vertices will receive a z coordinate of 0.
Note that your coordinates do not range from 0 to 799, but really from 0 to 800. The reason is that OpenGL interprets coordinates as lying between pixels, not on them. Think of it like a ruler of 30 cm: there are 30 intervals of a centimetre on it, and the ticks are numbered 0-30.
The vertex buffer you're using doesn't work, because you're using GL_FIXED format. That means 16 bits before the decimal point, and 16 bits after it, so to specify a 2x2 square around the origin, you need to multiply each value by 0x10000:
int vertices[] =
{
-0x10000, -0x10000,
0x10000, -0x10000,
-0x10000, 0x10000,
0x10000, 0x10000
};
This is probably the reason why you need to scale it so much. If you use this array, without the scaling, you should get a 2x2 pixel square. Turning this into a 1x1 square, so the size can be controlled directly by the scale factor, is left as an exercise to the reader ;)
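The conversion is just a multiplication by 2^16; a tiny helper makes it less error-prone (a sketch; toFixed is my own name for it):
static int toFixed(float v) {
    return (int) (v * 0x10000); //16.16 fixed point
}
//e.g. toFixed(1.0f) == 0x10000 and toFixed(-1.0f) == -0x10000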
Do I have to set something somewhere to say to OpenGL "I'm doing this in 2D"?
I think the problem is that you're using a projection matrix for perspective projection.
Instead you should use parallel projection.
To get this matrix you can use the glOrtho() function.
gl.glMatrixMode(GL10.GL_PROJECTION);
...
gl.glOrthof(0, width, 0, height, 0, 128);
Now the z-value no longer has any influence over an object's size.
I want to understand 0.01f and 100.0f here. What do I use to describe a 2D world of 0,0 -> 799,479 with a z value of zero?
It's right that in a 2D world you don't really care about z-values. But you still have to decide
which of your objects is drawn first.
There are two ways to decide that:
Deactivate GL_DEPTH_TEST, and everything is drawn in the order you issue the draw calls (see the sketch below)
Activate GL_DEPTH_TEST and let OpenGL decide based on z
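For the first approach, which is the usual choice in pure 2D, a minimal sketch (the draw calls are hypothetical placeholders):
gl.glDisable(GL10.GL_DEPTH_TEST); //painter's algorithm: later draws cover earlier ones
drawBackground(gl);               //back...
drawSprites(gl);
drawHud(gl);                      //...to front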