Resizing textures in OpenGL ES - Android

I'm developing a video game on Android using OpenGL ES, and I'm having some issues with resizing textures.
I'd like my game to be compatible with any resolution, so I created constants for the ratio between the game resolution and the screen resolution, like this:
Display display = getWindowManager().getDefaultDisplay();
KTE.SCREEN_WIDTH = display.getWidth();
KTE.SCREEN_HEIGHT = display.getHeight();
KTE.REDIMENSION_X = KTE.SCREEN_WIDTH/KTE.GAME_WIDTH;
KTE.REDIMENSION_Y = KTE.SCREEN_HEIGHT/KTE.GAME_HEIGHT;
Using these constants, I get the same result on different screen sizes (by resizing all of the textures with the factors calculated in the code above).
The problem is that I wanted to reduce the GAME resolution to make all the textures bigger, and now I get black pixels around the textures because my scaling constants are floats with many decimal places. I guess all those black pixels are positions left uncovered by rounding during these calculations...
Has anyone run into this problem before? Any tips for resizing the game? I have tried a lot of things and I'm really stuck. Thanks.

It sounds like the "redimensioning" of the textures isn't working as expected. For instance, perhaps you are only resizing the data of the texture, but the texture itself is still the same size as before. This would account for black pixels at the boundary. Be sure you're creating your textures with your KTE.REDIMENSION_X/Y factor, and be sure when you're writing to your textures you're writing to the edges of them.
As for resizing the game, do you mean the screen size you render to? For that, it should be a simple change to glViewport(...) and perhaps to the perspective frustums or orthographic projections you create to view your scene. Changes to both of these are typically made when the screen size changes; changes to textures generally are not needed, except perhaps to bump up resolution (for instance, for iOS Retina displays that have 2x the pixels).
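One way to attack the black-pixel gaps described in the question is to round the *edges* of each scaled rectangle rather than its width, so adjacent quads always share a border. This is a minimal sketch of that idea; the class and field names are mine, not from the question:

```java
// Sketch: float scale factors plus consistent edge rounding, so adjacent
// tiles meet exactly instead of leaving 1-pixel gaps. Names are hypothetical.
public class Scaler {
    public final float scaleX, scaleY;

    public Scaler(int screenW, int screenH, int gameW, int gameH) {
        // Cast to float so integer division does not truncate.
        this.scaleX = (float) screenW / gameW;
        this.scaleY = (float) screenH / gameH;
    }

    // Convert a game-space rectangle to screen pixels. Rounding the
    // edges (not the width) guarantees that neighbours share a border.
    public int[] toScreen(int gx, int gy, int gw, int gh) {
        int left   = Math.round(gx * scaleX);
        int top    = Math.round(gy * scaleY);
        int right  = Math.round((gx + gw) * scaleX);
        int bottom = Math.round((gy + gh) * scaleY);
        return new int[] { left, top, right - left, bottom - top };
    }
}
```

With an 800x480 screen and a 320x200 game, two adjacent 32-wide tiles at gx=0 and gx=32 map to screen rectangles whose edges meet at exactly the same pixel, regardless of how many decimals the scale factor has.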

Related

How to handle different device sizes

I know how to handle screen sizes, but it is quite a different matter when using OpenGL ES. The thing is that normally I would just get the size of the screen in pixels and using the numbers given I would align items to be displayed.
But, as I mentioned before, in OpenGL ES it is quite different. I want to draw a simple grid on the screen, but I want all the squares on all the devices to be the same size. That means that with bigger screens I would have more columns and rows instead of bigger squares. So the real question is how to convert screen size in pixels into OpenGL vertex system.
Actually, I was very stupid when asking this. After a bit of experimenting, it turned out that the squares of the grid stay the same size because of the way OpenGL ES vertices work. On bigger devices the size is the same as on smaller ones, because the scale stays the same; only the corner coordinates change accordingly.
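Assuming a pixel-based orthographic projection, the arithmetic behind "same square size, more squares on bigger screens" can be sketched like this (helper names are mine):

```java
// Sketch: with an orthographic projection set up in pixel units, a fixed
// square size in pixels means bigger screens simply get more cells.
public class Grid {
    public static int[] cellCounts(int screenW, int screenH, int cellPx) {
        int cols = screenW / cellPx;   // integer division: whole cells only
        int rows = screenH / cellPx;
        return new int[] { cols, rows };
    }

    // Convert a pixel x-coordinate to the -1..1 NDC range OpenGL uses
    // when no projection matrix is applied.
    public static float pixelToNdcX(int px, int screenW) {
        return 2f * px / screenW - 1f;
    }
}
```

For example, 40-pixel cells give 20x12 cells on an 800x480 screen and 32x18 on a 1280x720 one, while each cell stays 40 pixels across.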

When should I scale Bitmaps in my game

I've just started developing my first game with Android using SurfaceView.
I scale my Bitmaps to support different screen sizes. But I don't know whether it's better to do the scaling when loading the bitmaps or when drawing them to the Canvas using a Matrix.
I guess the first option would use more memory but perform better. I don't really know how things work here, though, so any suggestions from experts would be appreciated.
The best thing to do is to not scale the Bitmaps at all. You can just scale the SurfaceView's surface instead.
See the "hardware scaler exerciser" in Grafika for an example, and this post for an explanation. The basic idea is to call something like surfaceView.getHolder().setFixedSize(1280, 720) to set the surface's size to 1280x720, and then always render as if the display were 720p. The hardware will scale it to whatever the current screen dimensions are.
It's slightly more complicated than that -- you want to pick a size that matches the display aspect ratio so your stuff doesn't look stretched (which is something you have to deal with on Android anyway). Grafika does this to ensure that the square remains square.
This approach is much more efficient in both CPU and memory consumption than scaling individual bitmaps.
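Picking a fixed size that matches the display aspect ratio, as the answer suggests, is just a ratio calculation. A sketch, with illustrative names (only setFixedSize() comes from the answer above):

```java
// Sketch: pick a surface size with the display's aspect ratio but a fixed
// "design" height, then hand it to the SurfaceHolder.
public class FixedSize {
    public static int[] choose(int displayW, int displayH, int targetH) {
        // Keep the display aspect ratio so nothing looks stretched.
        int w = Math.round(targetH * (float) displayW / displayH);
        return new int[] { w, targetH };
        // then: surfaceView.getHolder().setFixedSize(w, targetH);
    }
}
```

On a 1920x1080 display with a 720-pixel design height this yields 1280x720, and the hardware scaler stretches that to the full screen.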

scaling images in libgdx only once

In my Android game, I am using images of a fixed resolution, let's say 256x256. For different device screens, I render them by calculating an appropriate width and height for that device.
Assume that on a Galaxy Note 2 I calculated width=128 and height=128; similarly, the width and height will vary on other devices.
This is how I created texture..
....
imageTexture = new Texture(...);
....
in render()..
....
spriteBatch.draw(imageTexture,x,y,width,height);
....
So, every time I call the draw() method, does libgdx/OpenGL scale the image from 256x256 to 128x128? I think yes!
Is there any way to tell OpenGL/libgdx to calculate all the scaling only once?
I have no idea how images are rendered, loaded into memory, scaled, etc.
How does Sprite in libgdx work? I tried reading the code of Sprite, and it looks to me like they also get the image width and height and then scale it every time, even though they have a setScale() method.
First rule of optimizing: get some numbers. Premature optimization is the root of many problems. That said, there are still some good rules of thumb to know.
The texture data will be uploaded by libgdx/OpenGL to the GPU when you invoke new Texture. When you actually draw the texture with spriteBatch.draw instructions are uploaded to the GPU by OpenGL that tell the hardware to use your existing texture and to fit it to the bounds. The draw call just uploads coordinates (the corners of the box that defines the Sprite) and a pointer to the texture. The actual texture data is not uploaded.
So, in practice your image is "scaled" on every frame. However, this is not that bad, as this is exactly what GPUs are designed to do very, very well. You only really need to worry about uploading so many textures that the GPU has trouble keeping track of them all, you do not need to worry much about scaling the textures beforehand.
The costs of scaling and transforming the four corners of the sprite are relatively trivial next to the costs of sending the data to the GPU and the cost of refreshing the screen, so they probably are not worth worrying about too much. The "batch" in SpriteBatch is all about "batching up" (or gathering together) a lot of coordinates to send to the GPU at once, as, roughly, each call out to the GPU can be expensive. So it's always good to do as much work within a single batch's begin/end as you can.
Again, though, modern machines are stupidly fast, and you should be able to do whatever is easiest to get your app running first. Then once you have something working correctly, you can figure out which parts are actually slow and fix those. The parts that are "inefficient" but are not actually measurably impacting your application can be left alone.
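If you still want to avoid redoing the arithmetic each frame, you can compute the target size once (e.g. in create()) and reuse it in render(). A sketch, with illustrative names; the GPU rescales the 256x256 texture to these bounds every frame either way:

```java
// Sketch: compute the on-screen draw size once and store it; render()
// just reuses the stored values. Names are hypothetical.
public class DrawSize {
    public final float width, height;

    public DrawSize(float texSize, float screenW, float designW) {
        // Scale the fixed-size source relative to a chosen design width.
        float factor = screenW / designW;
        this.width  = texSize * factor;
        this.height = texSize * factor;
    }
}
```

Then in render() you would call something like spriteBatch.draw(imageTexture, x, y, size.width, size.height) with the precomputed values.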

How to set my view in OpenGL ES 2.0 to show exactly the right number of coordinates

I'm writing a simple 2D game for Android with a 300x200 play area, with coords running from (0,0) to (299,199). I want this area to fill the screen as well as possible while maintaining its aspect ratio, e.g. if the GL view fills the full 800x480 of a device, I could scale the area by 2.4x to 720x480, leaving 40 pixels of space on either side.
I don't expect many devices would exactly scale in both dimensions so the code has to cope with a gap either in the horizontal or vertical.
So the question is how I do this. My play area is 2D, so I can use an orthographic projection. I just don't understand what values I need to plug in to set this up. I also suspect that because ES 2.0 relies heavily on shaders, I might need to pass some kind of scaling matrix to a vertex shader to ensure objects are rendered at the right size.
Does anyone know of a good tutorial which perhaps talks in terms that make sense for my needs? Most tutorials I've seen seem content to dump a cube or square into the middle of the screen rather than rendering an area of exact dimensions.
This problem should be easy to solve using the old and familiar OpenGL functions, like glViewport and an orthographic projection. GLM offers equivalents for environments like OpenGL ES; have a look:
http://glm.g-truc.net/
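The letterboxing described in the question comes down to one uniform scale factor. A sketch (helper names are mine); the resulting rectangle is what you would feed to GLES20.glViewport(x, y, w, h), paired with an orthographic projection over 0..300 x 0..200 such as Matrix.orthoM(m, 0, 0f, 300f, 0f, 200f, -1f, 1f) passed to the vertex shader:

```java
// Sketch: letterbox a playW x playH area into any screen. The uniform
// scale preserves the aspect ratio; leftover space is split evenly.
public class Letterbox {
    public static int[] fit(int screenW, int screenH, int playW, int playH) {
        float scale = Math.min((float) screenW / playW,
                               (float) screenH / playH);
        int w = Math.round(playW * scale);
        int h = Math.round(playH * scale);
        int x = (screenW - w) / 2;   // centre the horizontal gap
        int y = (screenH - h) / 2;   // centre the vertical gap
        return new int[] { x, y, w, h };
    }
}
```

For the 800x480 device from the question this picks scale 2.4, giving a 720x480 viewport with 40 pixels of space on either side, exactly as described.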

Can OpenGL ES render textures of non-power-of-two dimensions?

This is just a quick question before I dive deeper into converting my current rendering system to OpenGL. I heard that textures need to have power-of-two sizes in order to be stored for rendering. Is this true?
My application is very tight on memory, but most of the bitmaps are not powers of two. Does storing non-power-of-two textures consume more memory?
It depends on the OpenGL ES version: OpenGL ES 1.0/1.1 have the power-of-two restriction. OpenGL ES 2.0 doesn't have that limitation, but it restricts the wrap modes (and mipmapping) for non-power-of-two textures.
Creating bigger textures to match power-of-two dimensions does waste texture memory.
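The memory cost of that padding is easy to estimate; a quick sketch (helper names are mine):

```java
// Sketch: round a dimension up to the next power of two and estimate
// how much of the padded texture is wasted padding.
public class Pot {
    public static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    // Fraction of the padded texture that is unused padding.
    public static float wastedFraction(int w, int h) {
        int pw = nextPowerOfTwo(w), ph = nextPowerOfTwo(h);
        return 1f - (float) (w * h) / (pw * ph);
    }
}
```

A 13x16 bitmap padded to 16x16 wastes about 19% of its texels; a 300x200 bitmap padded to 512x256 wastes over half of them.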
Suresh, the power-of-two limitation was built into OpenGL back in the (very) early days of computer graphics (before affordable hardware acceleration), and it was done for performance reasons. Low-level rendering code gets a decent performance boost when it can be hard-coded for power-of-two textures. Even on modern GPUs, POT textures are faster than NPOT textures, but the speed difference is much smaller than it used to be (though it may still be noticeable on many ES devices).
GuyNoir, what you should do is build a texture atlas. I just solved this problem myself this past weekend for my own Android game. I created a class called TextureAtlas, and its constructor calls glTexImage2D() to create a large texture of any size I choose (passing null for the pixel values). Then I can call add(id, bitmap), which calls glTexSubImage2D(), repeatedly to pack in the smaller images. The TextureAtlas class tracks the used and free space within the larger texture and the rectangles each bitmap is stored in. Then the rendering code can call get(id) to get the rectangle for an image within the atlas (which it can then convert to texture coordinates).
Side note #1: Choosing the best way to pack in various texture sizes is NOT a trivial task. I chose to start with simple logic in the TextureAtlas class (think typewriter + carriage return + line feed) and make sure I load the images in the best order to take advantage of that logic. In my case, that was to start with the smallest square-ish images and work my way up to the medium square-ish images. Then I load any short+wide images, force a CR+LF, and then load any tall+skinny images. I load the largest square-ish images last.
Side note #2: If you need multiple texture atlases, try to group images inside each that will be rendered together to minimize the number of times you need to switch textures (which can kill performance). For example, in my Android game I put all the static game board elements into one atlas and all the frames of various animation effects in a second atlas. That way I can bind atlas #1 and draw everything on the game board, then I can bind atlas #2 and draw all the special effects on top of it. Two texture selects per frame is very efficient.
Side note #3: If you need repeating/mirroring textures, they need to go into their own textures, and you need to scale them (not add black pixels to fill in the edges).
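The "typewriter + carriage return + line feed" logic from side note #1 can be sketched in a few lines. This is my own minimal version of the approach described, not the author's actual class; a real version would also call glTexSubImage2D() when reserving a region:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of typewriter-style shelf packing: fill a row left to right,
// then "CR+LF" down by the tallest image in that row. Hypothetical names.
public class TextureAtlas {
    private final int size;          // atlas is size x size texels
    private int penX = 0, penY = 0;  // current insertion point
    private int rowH = 0;            // tallest image in the current row
    private final Map<String, int[]> rects = new HashMap<>();

    public TextureAtlas(int size) { this.size = size; }

    // Reserve a w x h region; returns false when the atlas is full.
    public boolean add(String id, int w, int h) {
        if (penX + w > size) {       // carriage return + line feed
            penX = 0;
            penY += rowH;
            rowH = 0;
        }
        if (penY + h > size || w > size) return false;
        rects.put(id, new int[] { penX, penY, w, h });
        penX += w;
        rowH = Math.max(rowH, h);
        return true;
    }

    // {x, y, w, h} of a packed image, for deriving texture coordinates.
    public int[] get(String id) { return rects.get(id); }
}
```

As the answer notes, this naive logic only packs well if you feed it images in a sensible order (small square-ish first, oddly shaped later).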
No, they must be powers of two. However, you can get around this by padding your image with black pixels, then using the texture-coordinate array to restrict where the texture will be mapped from your image. For example, let's say you have a 16 x 13 pixel texture. You can add 3 rows of black pixels at the bottom, then do the following:
static const GLfloat texCoords[] = {
    0.0, 0.0,
    0.0, 13.0/16.0,
    1.0, 0.0,
    1.0, 13.0/16.0
};
Now you have a power-of-two image file, but a non-power-of-two texture region. Just make sure you use linear filtering :)
This is a bit late, but non-power-of-two textures are supported under OpenGL ES 1/2 through extensions.
The main one is GL_OES_texture_npot. There are also GL_IMG_texture_npot and GL_APPLE_texture_2D_limited_npot for iOS devices.
Check for these extensions by calling glGetString(GL_EXTENSIONS) and searching for the extension you need.
I would also advise keeping your textures to sizes that are multiples of 4, as some hardware stretches textures otherwise.
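The extension check above can be sketched with the string parsing factored out of the GL call; at runtime you would pass in GLES20.glGetString(GLES20.GL_EXTENSIONS). The helper name is mine:

```java
// Sketch: scan a space-separated extension string for any of the NPOT
// extensions mentioned above.
public class Npot {
    public static boolean hasNpot(String extensions) {
        if (extensions == null) return false;
        for (String ext : extensions.split(" ")) {
            if (ext.equals("GL_OES_texture_npot")
                    || ext.equals("GL_IMG_texture_npot")
                    || ext.equals("GL_APPLE_texture_2D_limited_npot")) {
                return true;
            }
        }
        return false;
    }
}
```

Comparing whole tokens (rather than using String.contains) avoids false positives from extensions whose names merely contain another extension's name as a substring.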
