I've just started developing my first game with Android using SurfaceView.
I scale my Bitmaps to support different screen sizes, but I don't know whether it's better to do the scaling when loading the bitmaps or when drawing them to the Canvas using a Matrix.
I guess the first option would use more memory but perform better. I don't really know how things work here, though, so any suggestions from experts would be appreciated.
The best thing to do is to not scale the Bitmaps at all. You can just scale the SurfaceView's surface instead.
See the "hardware scaler exerciser" in Grafika for an example, and this post for an explanation. The basic idea is to call something like surfaceView.getHolder().setFixedSize(1280, 720) to set the surface's size to 1280x720, and then always render as if the display were 720p. The hardware will scale it to whatever the current screen dimensions are.
It's slightly more complicated than that -- you want to pick a size that matches the display aspect ratio so your stuff doesn't look stretched (which is something you have to deal with on Android anyway). Grafika does this to ensure that the square remains square.
This approach is much more efficient in both CPU and memory consumption than scaling individual bitmaps.
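A minimal sketch of that approach, assuming an Activity that already has a SurfaceView (the view id and the 720-line target are only placeholders; Grafika's version handles more corner cases):

// Sketch: fix the surface at a 720-line rendering size whose aspect ratio matches
// the display, and let the hardware scaler fill the real screen.
SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view); // hypothetical view id
SurfaceHolder holder = surfaceView.getHolder();

DisplayMetrics metrics = getResources().getDisplayMetrics();
float displayAspect = (float) metrics.widthPixels / metrics.heightPixels;

int surfaceHeight = 720;                                      // example render height
int surfaceWidth = Math.round(surfaceHeight * displayAspect); // match the display aspect ratio

holder.setFixedSize(surfaceWidth, surfaceHeight);
// From here on, render as if the screen were surfaceWidth x surfaceHeight.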
I'm having some trouble with video processing. I'm using a Surface with a texture to process video with MediaCodec and MediaMuxer (decode, crop, re-encode at lower quality).
The middle step, cropping, is where I've hit problems. Basically, what I want to do is take the smaller dimension of the video, then, based on that side, compute offsets for the larger side and crop it away. The result should then be scaled properly into a 640x640 square. I searched for quite a while, but all the information I found says that OpenGL should scale the image itself. It does do some scaling, but the result looks very ugly; it appears quite heavily "compressed" vertically.
So, now the question itself: can somebody provide an explanation, or maybe even a snippet of code, that performs the desired cropping and scaling in OpenGL ES?
I already tried adjusting the viewport, though:
GLES20.glViewport(0, 0, 640, 640)
UPDATE
The idea that Isogen74 proposed more or less worked for me. The result is still stretched quite noticeably, but it's better than nothing.
You can find the updated code here: OpenGL setup
Cropping - change the texture coordinates that you load from when loading the texture; e.g. if you want to crop the top and bottom 10% off the image, load from (0.0, 0.1) to (1.0, 0.9).
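For the centre-crop-to-square case from the question above, the coordinates might be computed along these lines (just a sketch; the method name is made up, and the result would replace the usual (0,0)..(1,1) values in your texture-coordinate buffer):

// Sketch: compute texture coordinates that centre-crop a frame to a square.
// videoWidth/videoHeight are the source frame dimensions (assumed known).
static float[] squareCropTexCoords(int videoWidth, int videoHeight) {
    float sMin = 0f, sMax = 1f, tMin = 0f, tMax = 1f;
    if (videoWidth > videoHeight) {
        float excess = (videoWidth - videoHeight) / (float) videoWidth; // fraction of width to drop
        sMin = excess / 2f;
        sMax = 1f - excess / 2f;
    } else if (videoHeight > videoWidth) {
        float excess = (videoHeight - videoWidth) / (float) videoHeight; // fraction of height to drop
        tMin = excess / 2f;
        tMax = 1f - excess / 2f;
    }
    // Order matches a triangle-strip quad: bottom-left, bottom-right, top-left, top-right.
    return new float[] { sMin, tMin,  sMax, tMin,  sMin, tMax,  sMax, tMax };
}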
Scaling - the answer depends on how big your downscale to 640x640 is. OpenGL filtering isn't designed to handle large downscaling ratios directly - it's not an image processing library - but assuming relatively small scaling ratios, just ensure you have turned on GL_LINEAR texture filtering rather than GL_NEAREST.
If you have a large downscale you may need to mipmap the texture first and use GL_LINEAR_MIPMAP_NEAREST or GL_LINEAR_MIPMAP_LINEAR as your minification filter, but be aware that this isn't going to give you something as good as a proper scaling algorithm you might get in image processing software (e.g. bicubic or similar).
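As a rough sketch of that filtering setup (assuming a texture already bound to GL_TEXTURE_2D; if you are sampling the decoder output through a SurfaceTexture, the target is GL_TEXTURE_EXTERNAL_OES and mipmapping is not available there):

// Plain bilinear filtering, usually fine for modest downscaling ratios.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

// For larger downscales, generate mipmaps and use a mipmapped minification filter instead.
GLES20.glGenerateMipmap(GLES20.GL_TEXTURE_2D);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR_MIPMAP_LINEAR);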
In my Android game, I am using images of a fixed resolution, let's say 256x256. For different device screens, I render them by calculating an appropriate width and height for that device.
Assume that on a Galaxy Note 2 I calculated width=128 and height=128; similarly, width and height will vary for other devices.
This is how I create the texture:
....
imageTexture = new Texture(...);
....
and in render():
....
spriteBatch.draw(imageTexture,x,y,width,height);
....
So, every time I call the draw() method, does libgdx/OpenGL scale the image from 256x256 down to 128x128? I think it does!
Is there any way to tell OpenGL/libgdx to do all the scaling only once?
I have no real idea how the images are rendered, loaded into memory, scaled, etc.
How does Sprite in libgdx work? I tried reading the Sprite code, and it looks to me like it also takes the image width and height and scales it on every draw, even though it has a setScale() method.
First rule of optimizing: get some numbers. Premature optimization is the root of many problems. That said, there are still some good rules of thumb to know.
The texture data is uploaded to the GPU by libgdx/OpenGL when you invoke new Texture(...). When you actually draw the texture with spriteBatch.draw, OpenGL sends the GPU instructions that tell the hardware to use your existing texture and fit it to the given bounds. The draw call only uploads coordinates (the corners of the box that defines the Sprite) and a reference to the texture; the texture data itself is not uploaded again.
So, in practice your image is "scaled" on every frame. However, this is not that bad, as this is exactly what GPUs are designed to do very, very well. You only really need to worry about uploading so many textures that the GPU has trouble keeping track of them all; you do not need to worry much about scaling the textures beforehand.
The costs of scaling and transforming the four corners of the sprite are trivial next to the cost of sending the data to the GPU and the cost of refreshing the screen, so they are probably not worth worrying about. The "batch" in SpriteBatch is all about "batching up" (gathering together) a lot of coordinates to send to the GPU at once, since, roughly speaking, each call out to the GPU can be expensive. So it's always good to do as much work within a single batch's begin/end as you can.
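A hedged libgdx sketch of what that looks like in practice (the background/enemy/player fields are placeholders, not anything from the question):

// Sketch (placeholder names): group as many draws as possible between one
// begin()/end() pair so geometry is buffered and sent to the GPU together.
public void render() {
    spriteBatch.begin();
    spriteBatch.draw(backgroundTexture, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
    for (Enemy e : enemies) {
        // Each draw() only buffers four transformed corners; the texture data
        // already lives on the GPU and is not re-uploaded.
        spriteBatch.draw(enemyTexture, e.x, e.y, e.width, e.height);
    }
    spriteBatch.end(); // buffered vertices are flushed here (or earlier if the texture switches)
}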
Again, though, modern machines are stupidly fast, and you should be able to do whatever is easiest to get your app running first. Then once you have something working correctly, you can figure out which parts are actually slow and fix those. The parts that are "inefficient" but are not actually measurably impacting your application can be left alone.
I'm developing a video game on Android using OpenGL ES. I'm having some issues with the redimensioning (resizing) of the textures.
I would like my game to be compatible with any resolution, so I created constants with the ratio between the game resolution and the screen resolution, like this:
Display display = getWindowManager().getDefaultDisplay();
KTE.SCREEN_WIDTH = display.getWidth();
KTE.SCREEN_HEIGHT = display.getHeight();
KTE.REDIMENSION_X = KTE.SCREEN_WIDTH/KTE.GAME_WIDTH;
KTE.REDIMENSION_Y = KTE.SCREEN_HEIGHT/KTE.GAME_HEIGHT;
Using these constants, I get the same result across different screen sizes (by resizing all of the textures with the constants calculated in the code above).
The problem is that I wanted to reduce the GAME resolution to make all the textures bigger, and now I get black pixels around the textures. My redimension constants are floats with a lot of decimals, and I guess those black pixels are positions left over from these calculations...
Has anyone run into this problem before? Any tips for resizing the game? I have tried a lot of things and I'm really stuck. Thanks.
It sounds like the "redimensioning" of the textures isn't working as expected. For instance, perhaps you are only resizing the data of the texture, but the texture itself is still the same size as before. This would account for black pixels at the boundary. Be sure you're creating your textures with your KTE.REDIMENSION_X/Y factor, and be sure when you're writing to your textures you're writing to the edges of them.
As for redimensioning the game, do you mean the screen size you render to? For this, it should be a simple change to glViewport(...) and perhaps to the perspective frustums or orthos you create to view your scene. Changes to both of these are typically made when the screen size changes - changes to textures are generally not needed, except perhaps to bump up resolution (for instance for iOS retina displays that have 2x pixels).
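A rough sketch of that, assuming an ES 2.0 GLSurfaceView.Renderer and reusing the questioner's KTE constants (projectionMatrix is an assumed float[16] that the shaders consume):

// Sketch: when the screen size changes, adjust the viewport and the projection,
// not the textures. Only the mapping from game units to pixels changes.
@Override
public void onSurfaceChanged(GL10 unused, int width, int height) {
    GLES20.glViewport(0, 0, width, height);
    // Orthographic projection in "game units"; textures keep their own size.
    Matrix.orthoM(projectionMatrix, 0, 0, KTE.GAME_WIDTH, 0, KTE.GAME_HEIGHT, -1, 1);
}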
I'm writing a simple 2D game for Android with a 300x200 play area with coords running from (0,0) to (299,199). I want this area to fill the screen as well as possible while maintaining its aspect ratio, e.g. if the GL view fills the full 800x480 of a device I could scale the area by 2.4x to 720x480, leaving 40 pixels of space on either side.
I don't expect many devices would exactly scale in both dimensions so the code has to cope with a gap either in the horizontal or vertical.
So the question is how do I do this. My play area is 2D, so I can use an orthographic projection. I just don't understand what values I need to plug in to set this up. I also suspect that because ES 2.0 relies heavily on shaders, I might need to pass some kind of scaling matrix to the vertex shader to ensure objects are rendered at the right size.
Does anyone know of a good tutorial which perhaps talks in terms that make sense for my needs? Most tutorials I've seen seem content to dump a cube or square into the middle of the screen rather than rendering an area of exact dimensions.
This problem should be easy to solve using the old and familiar OpenGL functions like glViewport and the projection matrix setup. GLM offers equivalents of these for environments like OpenGL ES; have a look:
http://glm.g-truc.net/
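On Android you can also do this without GLM, using android.opengl.Matrix. A rough sketch of letterboxing the 300x200 play area, assuming an ES 2.0 renderer where projectionMatrix is a float[16] later uploaded as the usual MVP uniform in the vertex shader:

// Sketch: build an orthographic projection that shows the whole 300x200 play area,
// centred, with any leftover screen space becoming bars either side or above/below.
public void onSurfaceChanged(GL10 unused, int screenWidth, int screenHeight) {
    GLES20.glViewport(0, 0, screenWidth, screenHeight);

    float gameAspect = 300f / 200f;
    float screenAspect = (float) screenWidth / screenHeight;

    float halfWidth = 150f, halfHeight = 100f;
    if (screenAspect > gameAspect) {
        halfWidth = halfHeight * screenAspect;   // screen is wider: spare space left/right
    } else {
        halfHeight = halfWidth / screenAspect;   // screen is taller: spare space top/bottom
    }
    Matrix.orthoM(projectionMatrix, 0,
            150f - halfWidth, 150f + halfWidth,   // left, right around the play area's centre x=150
            100f - halfHeight, 100f + halfHeight, // bottom, top around the centre y=100
            -1f, 1f);
}

With the 800x480 example from the question, this maps one game unit to 2.4 pixels, so the 300-unit width becomes 720 pixels with 40 pixels spare on each side.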
We are developing a scrolling/zooming scene in OpenGL ES on Android, very much like a level in Angry Birds but more like a level in World of Goo. More like the latter, as the world will not consist of repeated layers as featured in Angry Birds but of one large image. As the scene needs to scroll and zoom, and therefore a lot of it will not be visible at any given time, I was wondering about the most efficient way to implement the rendering, focusing on the environment only (i.e. not the objects within the world, but the background layers).
We will be using an orthographic projection.
The first thing that comes to mind is creating a large four-vertex rectangle at world size with the background texture mapped to it, and translating/scaling it using glTranslatef / glScalef. However, I was wondering whether the non-visible area outside the screen's boundaries is still processed by OpenGL, since it cannot be culled (you would lose the visible area as well, as there are only four vertices). Would it therefore be more efficient to subdivide this rectangle, so that the non-visible smaller rectangles can be culled?
Another option would be creating a four-vertex rectangle that fills the screen and moving the background by adjusting its texture coordinates. However, I guess we would run into problems when building bigger worlds, given the texture size limit. It seems like a nice approach for repeated backgrounds like the ones Angry Birds has.
Maybe there is another way..?
If someone has an idea on how it might be done in AngryBirds / World of Goo, please share as I'd love to hear. They seem to have implemented a system that allows for the world to be moved and zoomed very (WorldOfGoo = VERY) smoothly.
This is probably your best bet for implementation.
In my experience, keeping a large texture in memory is very expensive on Android. I would get quite a few OutOfMemoryError exceptions for the background texture before I moved to tiling.
I think the biggest rendering bottleneck would be with memory transfer speeds and fill rate instead of any graphics computation.
Edit: Check out 53:28 of this presentation from Google I/O 2009.
You could split the background rectangle into smaller rectangles, so that OpenGL only renders the visible ones. You won't have one big-ass rectangle with a big-ass texture loaded, but smaller rectangles with smaller textures that you can load/unload depending on what is visible on screen...
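A hedged sketch of picking the visible tiles each frame (tileSize, numCols/numRows and drawTile() are placeholders for whatever structures you end up with):

// Sketch: with the background split into a grid of tile quads, draw only the tiles
// that intersect the visible world rectangle. viewLeft/viewRight/viewBottom/viewTop
// come from your camera position and zoom.
void drawVisibleTiles(float viewLeft, float viewRight, float viewBottom, float viewTop) {
    int firstCol = Math.max(0, (int) Math.floor(viewLeft / tileSize));
    int lastCol  = Math.min(numCols - 1, (int) Math.floor(viewRight / tileSize));
    int firstRow = Math.max(0, (int) Math.floor(viewBottom / tileSize));
    int lastRow  = Math.min(numRows - 1, (int) Math.floor(viewTop / tileSize));

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            drawTile(col, row);   // off-screen tiles are skipped and can be unloaded lazily
        }
    }
}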
AFAIK there would be no performance drop due to large areas being rendered off-screen; subdividing and culling is normally done just to reduce vertex count, and here you would actually be adding to it.
Putting that aside for now: from the way you phrased the question, I am unsure whether you have a large background texture or a small repeating one. If it is large, then you will need to subdivide because of texture size limitations anyway, so the question is moot! If it is small, then I would suggest the second method: fit a quad to the screen and move the background by changing the texture coordinates.
I feel like I may have missed something, though, as I am unsure why you mentioned the texture size limitation issue when talking about the texture coordinate method and not the large quad method. Surely for both this is not a problem for repeating textures, as you can use the GL_REPEAT texture wrap mode...
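For reference, the wrap mode is just a texture parameter (ES 2.0-style call shown; note that GL_REPEAT needs power-of-two texture dimensions on ES 2.0):

// Sketch: let texture coordinates outside 0..1 tile the texture instead of clamping.
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);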
But for both it is a problem for a single large texture unless you subdivide, which would make the texture-coordinate tactic way more complicated than necessary. In that case, subdividing the mesh along the texture subdivisions and culling off-screen sections would be best. Deciding which parts to cull should be trivial with this technique.
Cheers.