I'm working on an Android app using OpenGL ES 2.0. I'm confused about memory management in OpenGL.
My questions are:
How much memory is available to the OpenGL hardware? Clearly it's going to vary from device to device.
How can I find out how much memory is in use and how much is left?
What happens if I exceed the memory limits?
What techniques should I be using to unload data not currently being displayed?
I presume I'm going to have to implement some kind of system to unload textures that are not currently in use on an LRU basis, but I'd like some idea of what criteria to use for this.
The app silently dies at some point and I suspect it is because I'm using too much graphics memory.
Currently I'm never unloading textures and I seem to be able to load quite a few - testing on a Nexus 7 I have been able to load 134 1024x1024 RGBA textures, which I calculate to be over 500 MB (1024 × 1024 × 4 bytes ≈ 4 MB per texture, so roughly 536 MB in total). I presume once the textures have been loaded into graphics memory they take up less space, but that's still a lot, and clearly I have to manage it; I'd like some tips on how to start.
Simply use glDeleteTextures to free the textures you no longer need.
If you run out of memory you will probably get a GL_OUT_OF_MEMORY error. Another thing to do is to monitor memory usage on Android.
For Android memory monitoring, see: How do I discover memory usage of my application in Android?
For OpenGL, see this interesting question: How to manage memory with textures in OpenGL?
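For illustration, a minimal Java sketch of the delete-and-check pattern with GLES20 (the unused-texture-id bookkeeping is assumed to be your own, not a real API):

```java
import android.opengl.GLES20;

// Minimal sketch: delete textures you no longer need and check for
// GL_OUT_OF_MEMORY after allocations. The unusedTextureIds array is just
// an illustration of your own bookkeeping.
public final class TextureCleanup {

    // Free a batch of texture ids that are no longer displayed.
    // Must be called on the GL thread (e.g. via GLSurfaceView.queueEvent()).
    public static void deleteTextures(int[] unusedTextureIds) {
        GLES20.glDeleteTextures(unusedTextureIds.length, unusedTextureIds, 0);
    }

    // Call after glTexImage2D (or other allocations) to detect failures.
    public static boolean ranOutOfMemory() {
        return GLES20.glGetError() == GLES20.GL_OUT_OF_MEMORY;
    }
}
```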
I am loading textures into OpenGLES on Android and maintaining a reference to the generated id in a HashMap.
At a given point in time not all textures that have been loaded will be in use, but they may be needed later, so if the device has enough free memory I'd like to keep them loaded.
However, if the device starts running low on memory I would like to delete any textures that are not in use since they can always be reloaded later if they're required.
I've tried a few methodologies for handling this scenario.
1. Respond to system memory warnings
If the application receives a memory warning, then it will identify which textures are not in use and schedule for those textures to be deleted.
This method did work reasonably well.
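A rough sketch of that approach, wiring Android's ComponentCallbacks2 (registered via Context.registerComponentCallbacks) to a hypothetical TextureCache wrapper around the HashMap of texture ids:

```java
import android.content.ComponentCallbacks2;
import android.content.res.Configuration;

// Sketch: react to system memory pressure by scheduling deletion of
// textures that are not currently on screen. TextureCache is a
// hypothetical stand-in for the app's own bookkeeping.
public class TextureMemoryCallbacks implements ComponentCallbacks2 {

    // Minimal stand-in for your own texture bookkeeping.
    public interface TextureCache {
        void scheduleDeleteOfUnusedTextures(); // queues glDeleteTextures on the GL thread
    }

    private final TextureCache cache;

    public TextureMemoryCallbacks(TextureCache cache) {
        this.cache = cache;
    }

    @Override
    public void onTrimMemory(int level) {
        if (level >= ComponentCallbacks2.TRIM_MEMORY_MODERATE) {
            cache.scheduleDeleteOfUnusedTextures();
        }
    }

    @Override
    public void onLowMemory() {
        cache.scheduleDeleteOfUnusedTextures();
    }

    @Override
    public void onConfigurationChanged(Configuration newConfig) { }
}
```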
2. Use Soft References
In this approach the application would maintain a List<SoftReference<Texture>> where the Texture class is a wrapper around the id that was returned when a given texture was loaded into OpenGLES.
If a given Texture is not in use at a given point in time then only a SoftReference would exist to this Texture and thus if the garbage collector deemed it necessary it could reclaim this memory and the finalize method on the Texture class would schedule for this texture to be deleted.
This approach seemed ideal based upon the description of SoftReference in the Java documentation, since they would only be reclaimed when more memory was required.
Soft reference objects, which are cleared at the discretion of the garbage collector in response to memory demand. Soft references are most often used to implement memory-sensitive caches.
However, the Android implementation of SoftReference does not work like this: since Android 9 the garbage collector is more aggressive, and soft references are reclaimed almost immediately regardless of whether the device is low on memory.
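For reference, the approach described above looks roughly like this (the Texture wrapper and scheduleDelete are hypothetical names, and finalize() is itself deprecated on newer Java/Android versions):

```java
import java.lang.ref.SoftReference;
import java.util.ArrayList;
import java.util.List;

// Sketch of approach 2: hold unused textures only through SoftReferences so
// the GC could, in theory, reclaim them under memory pressure.
public class SoftTextureCache {
    private final List<SoftReference<Texture>> unused = new ArrayList<>();

    public void markUnused(Texture texture) {
        unused.add(new SoftReference<>(texture));
    }

    // Hypothetical wrapper around the id returned when the texture was loaded.
    static class Texture {
        final int glTextureId;

        Texture(int glTextureId) {
            this.glTextureId = glTextureId;
        }

        @Override
        protected void finalize() throws Throwable {
            try {
                scheduleDelete(glTextureId); // queue glDeleteTextures on the GL thread
            } finally {
                super.finalize();
            }
        }

        private static void scheduleDelete(int id) {
            // Placeholder: post to the GL thread and call glDeleteTextures there.
        }
    }
}
```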
3. Use LruCache
The Android documentation advises against using SoftReference in a cache implementation and recommends LruCache instead. However, LruCache has some drawbacks.
Firstly, you have to specify the maximum size of the cache, and it isn't obvious what to set it to: ideally it would just automatically be as high as possible while still being a good citizen. If it's set too small, textures might be constantly reloaded unnecessarily.
Secondly, a Texture that is currently in use may be removed from the cache, deleted from OpenGL ES, and then shown to the user as a missing texture.
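For completeness, a sketch of the LruCache approach sized in bytes (the maxMemory()/8 budget is only a common heuristic, and the Texture hooks are hypothetical stand-ins for the wrapper class above):

```java
import android.util.LruCache;

// Minimal stand-in for the question's Texture wrapper class.
interface Texture {
    int byteCount();        // e.g. width * height * 4 for RGBA8888
    boolean isInUse();      // true while the texture is still being drawn
    void scheduleDelete();  // queue glDeleteTextures on the GL thread
}

// Sketch of approach 3: an LruCache keyed by asset name and sized in bytes.
public class TextureLruCache extends LruCache<String, Texture> {

    public TextureLruCache() {
        // Budget: a fraction of the Java heap; tune this for your app.
        super((int) Math.min(Runtime.getRuntime().maxMemory() / 8, Integer.MAX_VALUE));
    }

    @Override
    protected int sizeOf(String key, Texture texture) {
        return texture.byteCount();
    }

    @Override
    protected void entryRemoved(boolean evicted, String key,
                                Texture oldValue, Texture newValue) {
        // Guard against the drawback noted above: never delete a texture
        // that is still being drawn; defer or reinsert it instead.
        if (evicted && !oldValue.isInUse()) {
            oldValue.scheduleDelete();
        }
    }
}
```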
Is there a better way to maintain a cache of textures in OpenGL and be responsive to low memory scenarios (besides just deleting textures upon memory warnings)?
Talking about best practices, large caches (ones that can take all the memory) of OpenGL texture ids are rarely used. In fact it's best to have only a few textures to render in the scene at any one moment: the more textures you have in the scene, the more texture switches you need per frame, and that costs. Texture atlases were developed many years ago to reduce the number of texture switches. However, atlases can still take a lot of memory.
As hardware capabilities and user expectations grew, texture atlases evolved into virtual/mega/sparse textures plus texture data streaming for high-memory cases. The idea is to take a virtual-memory approach and load/unload blocks of one very large texture in real time. It has some drawbacks; a good discussion about it can be found here. An LRU cache can be built on top of it to decide which blocks are required at the moment.
Of course, engines can preload many textures (sparse or usual ones) and unload them after use, e.g. as part of dynamically loading an open world in a game. These textures aren't used for rendering simultaneously, and nobody waits until they take all the memory to start unloading. The eviction algorithm here is highly dependent on the particular app, although a maximum cache size in MB is a widely used practice.
When I used the Android profiler I noticed that graphics were taking a lot of memory (169 MB), which was making the app extremely slow. I thought it was caused by bitmaps, so I deleted all the bitmaps in the app and tried again...
and I noticed that graphics were still taking up 60-100 MB of RAM. I would like to know what can cause memory drain other than bitmaps.
(My app uses Google Maps, if that helps.)
Graphics: Memory used for graphics buffer queues to display pixels to the screen, including GL surfaces, GL textures, and so on. (Note that this is memory shared with the CPU, not dedicated GPU memory.)
https://developer.android.com/studio/profile/memory-profiler.html
I've found the answer for desktops, but I could not find anything for Android/iOS (assume I can use up to OpenGL ES 3.0). So this is the same question for mobile devices: is it possible to programmatically get the total memory in bytes used by OpenGL by my application?
Note: I am OK with a non-universal solution (AFAIK a universal solution does not exist), but something that works at least on popular devices (iOS/Snapdragon/..)
No, it's not possible via any standard API.
Most graphics drivers will account any graphics memory to the process, so you can always use the "top" command-line utility to get total process memory. It won't isolate the graphics memory, but it should give you an idea of how much your process is using in total.
That said, you probably have a pretty good idea of how much data you uploaded or allocated storage for via the GLES API, which is probably a good finger-in-the-air estimate for total memory. Most of the bulk storage relates to application assets.
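As a rough illustration of that "track what you upload yourself" idea (the class and the byte math are my own estimate and ignore driver padding, alignment, and internal copies):

```java
import java.util.concurrent.atomic.AtomicLong;

// Rough bookkeeping of what you have asked GLES to store. This is an
// estimate only, not a query of real driver allocations.
public final class GlMemoryEstimate {
    private static final AtomicLong bytes = new AtomicLong();

    // Call alongside glTexImage2D for an uncompressed RGBA8888 texture.
    public static void onTextureAllocated(int width, int height, boolean mipmapped) {
        long size = (long) width * height * 4;
        if (mipmapped) {
            size = size * 4 / 3; // a full mip chain adds roughly one third
        }
        bytes.addAndGet(size);
    }

    // Call alongside glBufferData for vertex/index buffers.
    public static void onBufferAllocated(long sizeInBytes) {
        bytes.addAndGet(sizeInBytes);
    }

    public static void onFreed(long sizeInBytes) {
        bytes.addAndGet(-sizeInBytes);
    }

    public static long estimatedBytes() {
        return bytes.get();
    }
}
```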
I created an application with Starling, on the new mobile devices it performs amazingly well, however on the older devices (e.g. iPhone 4) I encounter a very odd lag.
I have as far as I can tell a completely static situation:
There are quite a few display objects added to the stage (many of them are buttons, in case it matters), and their properties are not changed at all after initialization (x, y, rotation, etc...).
There are no enterframes / timeouts / intervals / requests of any kind in the background.
I'm not allocating / deallocating any memory.
In this situation, there's an average of 10 FPS out of 30, which is very odd.
Since Starling is a well established framework, I imagine it's me who's doing something wrong / not understanding something / not aware of something.
Any idea what might be causing it?
Has anyone else experienced this sort of problem?
Edit:
After reading a little I've made great optimizations in every possible way according to this thread:
http://wiki.starling-framework.org/manual/performance_optimization
I reduced the draw calls from around 90 to 12, flattened sprites, and set the blend mode to none in specific cases to ease the load on the CPU, and so on...
To my surprise when I tested again, the FPS was unaffected:
fps: 6 / 60
mem: 19
drw: 12
Is it even possible to get normal fps with Starling on mobile? What am I missing?
I am using big textures that are scaled down to the size of the device, is it possible that such a thing affects the fps that much?
Regarding "Load textures from files/URLs", I'm downloading different piles of assets for different situations, therefore I assumed compiling each pile into a SWF would be way faster than sending a separate request for each file. The problem is, for that I can only use embed, which apparently uses twice the memory. Do you have any solution in mind to enjoy the best of both worlds?
Instead of downloading your assets over the wire and manually caching them for re-use, you can package the assets in your app bundle (rather than embedding them) and then use the Starling AssetManager to load the textures at the resolution/scale that you need for the device:
i.e.
// From the linked scaffold sample: assets is a starling.utils.AssetManager,
// appDir is File.applicationDirectory, and scaleFactor is the asset scale
// chosen for the device.
assets.enqueue(
    appDir.resolvePath("audio"),
    appDir.resolvePath(formatString("fonts/{0}x", scaleFactor)),
    appDir.resolvePath(formatString("textures/{0}x", scaleFactor))
);
Ref: https://github.com/Gamua/Starling-Framework/blob/master/samples/scaffold_mobile/src/Scaffold_Mobile.as
Your application bundle gets bigger of course, but you do not take the 2x ram hit of using 'embed'.
Misc perf ideas from my comment:
Testing FPS with "Release" mode correct?
Are you using textures that are scaled down to match the resolution of the device before loading them?
Are you mixing BLEND modes that are causing additional draw calls?
Ref: the Performance Optimization page is great reading for optimizing your usage of Starling.
Starling is not a miracle solution for mobile devices. There's quite a lot of code running in the background in order to make the GPU display anything. You, the coder, have to make sure the number of draw calls is kept to a minimum: the weaker the device, the fewer draw calls you can afford. It's not rare to see people using Starling without paying any attention to their draw calls.
The size of the graphics used mostly affects GPU upload time, not so much GPU display time. So of course all relevant textures need to be uploaded prior to displaying any scene; you simply cannot try to upload a new texture while a scene is playing, as even a small texture upload will cause idling.
Displaying everything using Starling is not always a smart choice. In render mode the GPU does a lot of the work, but the CPU still has capacity to spare. You can reduce the amount of GPU uploading and GPU load by simply displaying static UI elements using the classic display list (which is where the Starling framework design falls short). Starling, as designed, makes it very difficult to use both display systems together, and that's one of the downsides of using this framework. Most professionals I know, including myself, don't use Starling for that reason.
Your system must be flexible: embed your assets on mobile, avoid external SWFs as much as possible, and be able to switch to another system for the web. If you expect to use one asset system for the mobile/desktop/web versions of your app, you are setting yourself up for failure. Embedding on mobile is critical for memory management because the AIR platform internally manages the cache of those embedded assets; thanks to that, memory consumption stays under control when you create new instances of those assets. If you don't embed, you are on your own.
Regarding overall performance, a very weak Android device will probably never be able to get past 10 fps when using Starling or any Stage3D framework, because of the amount of code those frameworks need to run (draw calls) in the background. On a weak device that amount of code is already enough to completely overload the CPU. On the other hand, on a weak device you can still get good performance and a good user experience by using GPU mode instead of render mode (so no Stage3D) and displaying mostly raster graphics.
IN RESPONSE TO YOUR EDIT:
12 draw calls is very good (90 was pretty high).
That you still get low FPS on some devices is not that surprising. Low-end Android devices in particular will always have low FPS in render mode with a Stage3D framework, because of the amount of code those frameworks have to run to render one frame. The size of the textures you are using should not affect the FPS that much (that's the point of Stage3D), though reducing their size would help with GPU upload time.
Optimization is the key, and optimizing on a low-end device with low FPS is the best way to go, since whatever you do will have a great effect on better devices as well. Start by running tests that display only static graphics with little or no code on your part, just to see how far the Stage3D framework can go on its own on those weak devices without losing any FPS, and then optimize from there. The number of objects displayed on screen plus the number of draw calls is what affects FPS with Stage3D frameworks, so keep a count of both and always look for ways to reduce them. On some low-end devices it's not practical to try to keep 60 fps, so switch to 30 and adjust your rendering accordingly.
I am implementing a game engine using OpenGL and wonder if it's a good idea to manually unload textures that are not used within a certain radius of the camera. It has been suggested to me for Android and OpenGL ES that it might be a good idea to load and unload textures on the fly as needed to save memory. Is this recommended for OpenGL ES and OpenGL? I am personally not convinced by the approach but I am curious as to when this will have benefits, if any, in the setting of OpenGL on a desktop PC and with OpenGL ES on a mobile phone.
It will depend on how many textures/the size of the textures you use in your game.
I believe the magic number is a minimum of 10 MB of texture memory; if your textures use more than that, people tend to use texture compression (there are phone compatibility issues here, since some compression formats are proprietary; there's a whole topic on this). You could unload textures when they aren't being used, but there will be a heavy cost when you reload them, so be aware of that.
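If you do go the compression route on Android, ETC1 is supported by virtually all OpenGL ES 2.0 devices via an extension (it has no alpha channel, though). A minimal loading sketch using the platform's ETC1Util, with an assumed asset path:

```java
import android.content.res.AssetManager;
import android.opengl.ETC1Util;
import android.opengl.GLES20;

import java.io.IOException;
import java.io.InputStream;

// Sketch: load an ETC1-compressed texture (.pkm) on the GL thread.
// "textures/wall.pkm" below is just an example asset path.
public final class Etc1Loader {

    public static int loadEtc1(AssetManager assets, String assetPath) throws IOException {
        int[] ids = new int[1];
        GLES20.glGenTextures(1, ids, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, ids[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

        try (InputStream in = assets.open(assetPath)) {
            // ETC1Util falls back to an uncompressed RGB 565 upload if the
            // context does not support ETC1.
            ETC1Util.loadTexture(GLES20.GL_TEXTURE_2D, 0, 0,
                    GLES20.GL_RGB, GLES20.GL_UNSIGNED_SHORT_5_6_5, in);
        }
        return ids[0]; // e.g. loadEtc1(context.getAssets(), "textures/wall.pkm")
    }
}
```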
Generally speaking, you shouldn't need to resort to this method though.
If you don't need that memory, then it does no harm being there. And if you do need that memory, then you already know that you need to do it. So the answer is... do it if you need to.
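If you do decide you need it, the camera-radius idea from the question boils down to a per-frame check along these lines (WorldObject and its texture hooks are hypothetical, and the radii are arbitrary, with hysteresis so textures don't thrash at the boundary):

```java
import java.util.Collection;

// Sketch of radius-based texture streaming: each frame, release textures
// for objects far from the camera and (re)load textures for objects that
// came back into range. Reloading is the expensive part, as noted above.
public final class TextureStreamer {
    private static final float UNLOAD_RADIUS = 150f;
    private static final float LOAD_RADIUS = 100f; // smaller than UNLOAD_RADIUS on purpose

    public static void update(float camX, float camY, Collection<WorldObject> objects) {
        for (WorldObject obj : objects) {
            float dx = obj.x() - camX;
            float dy = obj.y() - camY;
            float distSq = dx * dx + dy * dy;
            if (obj.isTextureLoaded() && distSq > UNLOAD_RADIUS * UNLOAD_RADIUS) {
                obj.unloadTexture();   // glDeleteTextures on the GL thread
            } else if (!obj.isTextureLoaded() && distSq < LOAD_RADIUS * LOAD_RADIUS) {
                obj.loadTexture();     // re-decode and re-upload: the heavy cost
            }
        }
    }

    // Minimal interface standing in for whatever your engine uses.
    public interface WorldObject {
        float x();
        float y();
        boolean isTextureLoaded();
        void loadTexture();
        void unloadTexture();
    }
}
```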