I'm always worried about optimization when it comes to game design and need to ask more experienced Kivy users about some concerns.
Which approach is actually faster?
Let's say you stored your graphics instructions in class attributes. If you're going to have a number of graphics updating on the screen every frame, but you're not adding to or removing from the canvas, ask_update seems to be the obvious choice.
Let's say you do add and remove graphic pieces often enough. Would it be better to just clear the canvas and canvas.add the stored instructions back?
or
Would it be better to call clear after every removal or addition? That seems like a pain in the tail versus just clearing once and adding the graphics back with canvas.add.
Vectors....
How optimized are Vectors? Are the methods slow? I'm just wondering because I've used 3D engines in the past that had some slow calls, and it was usually the mathematical ones.
What is considered a good frame rate for a game app running on a hand-held device?
I also wonder about deleting instances. Does kivy have some special call for deleting an instance or would the usual del call (after running a cleanup function) and python garbage collection be enough?
I'm researching now because I don't want to develop something only to realize I wasn't aware of the Kivy dos and don'ts.
Clearing the canvas is inefficient, don't do that unless you actually want to remove everything.
You don't need to call ask_update in general.
Kivy's Vectors aren't particularly optimised, they're just wrappers around lists, but this probably isn't actually a problem for you.
A good framerate target is 60fps.
You can look at KivEnt for a game engine with particularly good performance with Kivy.
Related
I heard that if I develop an Android game without using the NDK, the performance is significantly lower. Is this true?
I see 3 main reasons to use the NDK:
you want to reuse a C/C++ code base (e.g. a game engine written in C++, cross-platform games)
you need more memory (you can use much more memory via the NDK)
your game is very CPU intensive and you need all of your device's power.
In all other cases you can choose whatever you like. The SDK/Java lets you use OpenGL the same way the NDK does, so your graphics won't be slow. You should be careful with the GC to keep gameplay smooth. If your game is very CPU intensive, you can write some methods in C++ and call them via JNI. By the way, Dalvik has a JIT, so Java code can sometimes be as fast as (or even faster than) C code.
No, that is simply not true.
It very much depends on what your particular game is doing. The essential question is: is your game going to be CPU intensive (or require larger amounts of memory).
If it's not, stick to the SDK.
If you don't know, because you haven't written many games yet in the past, by all means stick to the SDK.
Even if there turn out to be parts of the game that could do with the extra processing power, you can always extract those parts to native code during development as needed.
One reason to choose the NDK over the SDK is having a huge background in C++, which might make you more productive in that environment. However, given the current state of the toolsets (convenient debugging, build times, easy access to SDK libraries etc), this is rarely effective.
No, you don't need to use the NDK unless you want to make a super-realistic 3D game such as Real Racing 3. If it's a game with simple graphics and not much time-critical code, Java is fine.
My game uses too much battery. I don't know exactly how much it uses as compared to comparable games, but it uses too much. Players complain that it uses a lot, and a number of them note that it makes their device "run hot". I'm just starting to investigate this and wanted to ask some theoretical and practical questions to narrow the search space. This is mainly about the iOS version of my game, but probably many of the same issues affect the Android version. Sorry to ask many sub-questions, but they all seemed so interrelated I thought it best to keep them together.
Side notes: My game doesn't do network access (called out in several places as a big battery drain) and doesn't consume a lot of battery in the background; it's the foreground running that is the problem.
(1) I know there are APIs to read the battery level, so I can do some automated testing. My question here is: About how long (or perhaps: about how much battery drain) do I need to let the thing run to get a reliable reading? For instance, if it runs for 10 minutes is that reliable? If it drains 10% of the battery, is that reliable? Or is it better to run for more like an hour (or, say, see how long it takes the battery to drain 50%)? What I'm asking here is how sensitive/reliable the battery meter is, so I know how long each test run needs to be.
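For reference, on the Android side the kind of automated reading I have in mind is just sampling the battery level before and after a timed run via the sticky battery broadcast; a minimal sketch (the class name is mine):

    import android.content.Context;
    import android.content.Intent;
    import android.content.IntentFilter;
    import android.os.BatteryManager;

    public class BatteryProbe {
        /** Returns the current battery level as a percentage (0-100), or -1 if unknown. */
        public static float readLevelPercent(Context context) {
            // ACTION_BATTERY_CHANGED is a sticky broadcast, so passing a null receiver
            // just returns the last broadcast Intent without registering anything.
            Intent status = context.registerReceiver(
                    null, new IntentFilter(Intent.ACTION_BATTERY_CHANGED));
            if (status == null) return -1f;
            int level = status.getIntExtra(BatteryManager.EXTRA_LEVEL, -1);
            int scale = status.getIntExtra(BatteryManager.EXTRA_SCALE, -1);
            if (level < 0 || scale <= 0) return -1f;
            return 100f * level / scale;
        }
    }

Since the reported level typically only moves in whole-percent steps, that's part of why I'm asking how long a run needs to be before the difference is meaningful.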
(2) I'm trying to understand what are the likely causes of the high battery use. Below I list some possible factors. Please help me understand which ones are the most likely culprits:
(2a) As with a lot of games, my game needs to draw the full screen on each frame. It runs at about 30 fps. I know that Apple says to "only refresh the screen as much as you need to", but I pretty much need to draw every frame. Actually, I could put some work into only drawing the parts of the screen that had changed, but in my case that will still be most of the screen. And in any case, even if I can localize the drawing to only part of the screen, I'm still making an OpenGL swap buffers call 30 times per second, so does it really matter that I've worked hard to draw a bit less?
(2b) As I draw the screen elements, there is a certain amount of floating point math that goes on (e.g., in computing texture UV coordinates), and some (less) double precision math that goes on. I don't know how expensive these are, battery-wise, as compared to similar integer operations. I could probably cache a lot of these values to not have to repeatedly compute them, if that was a likely win.
(2c) I do a certain amount of texture switching when rendering the scene. I had previously only been worried about this making the game too slow (it doesn't), but now I also wonder whether reducing texture switching would reduce battery use.
(2d) I'm not sure if this would be practical for me but: I have been reading about shaders and OpenCL, and I want to understand if I were to unload some of the CPU processing to the GPU, whether that would likely save battery (in addition to presumably running faster for vector-type operations). Or would it perhaps use even more battery on the GPU than on the CPU?
I realize that I can narrow down which factors are at play by disabling certain parts of the game and doing iterative battery test runs (hence part (1) of the question). It's just that disabling those parts is not trivial, and there are enough potential culprits that I thought I'd ask for general advice first.
Try reading this article:
Android Documents on optimization
What works well for me is reducing the need for garbage collection. When programming for a desktop computer, you're (or I'm) used to defining variables inside loops when they aren't needed outside the loop; on Android this causes a massive amount of garbage collection (and I'm not talking about primitive variables, but big objects).
Try avoiding things like that; a sketch of the pattern is below.
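A minimal sketch of what I mean, with hypothetical Vec2/Particle classes standing in for whatever big objects your loop churns through:

    // Illustrative only; Vec2 and Particle are placeholder classes.
    final class Vec2 {
        float x, y;
        Vec2(float x, float y) { this.x = x; this.y = y; }
        void set(float x, float y) { this.x = x; this.y = y; }
        void add(Vec2 o) { x += o.x; y += o.y; }
    }

    final class Particle {
        Vec2 position = new Vec2(0, 0);
        float vx, vy;
    }

    final class ParticleUpdater {
        // Scratch object allocated once and reused every frame, so the hot loop
        // creates no garbage for the Dalvik GC to collect.
        private final Vec2 scratch = new Vec2(0, 0);

        void update(Particle[] particles, float dt) {
            for (int i = 0; i < particles.length; i++) {
                // Bad version: "Vec2 delta = new Vec2(...)" here would allocate one
                // object per particle per frame and trigger frequent GC pauses.
                scratch.set(particles[i].vx * dt, particles[i].vy * dt);
                particles[i].position.add(scratch);
            }
        }
    }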
One little tip that really helped me get Battery usage (and warmth of the device!) down was to throttle FPS in my custom OpenGL Engine.
Especially while the scene is static (e.g. a turn-based game or the user tapped pause) throttle down FPS.
Or throttle if the user hasn't interacted for more than 10 seconds, like a screensaver on a desktop PC. In the real world users often get distracted while using mobile devices. Don't let your app drain the battery while your user figures out what subway station he's in ;)
Also, on the iPhone 60 FPS is sometimes the default; throttling this manually to 30 FPS is barely visible and saves you about half of the GPU cycles (and therefore a lot of battery!).
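In a plain Java render loop, the throttle itself can be as simple as sleeping away the leftover frame time; this is just a sketch (the class name and FPS values are arbitrary, not from any particular engine):

    final class FrameThrottle {
        private long targetFrameNanos;

        void setTargetFps(int fps) {
            // e.g. 60 while animating, 30 or lower while the scene is static
            targetFrameNanos = 1_000_000_000L / fps;
        }

        /** Call at the end of each frame with the timestamp taken at its start. */
        void endFrame(long frameStartNanos) {
            long remaining = targetFrameNanos - (System.nanoTime() - frameStartNanos);
            if (remaining > 0) {
                try {
                    // Sleeping away the spare time keeps the CPU and GPU idle instead
                    // of rendering frames nobody will notice.
                    Thread.sleep(remaining / 1_000_000L, (int) (remaining % 1_000_000L));
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }

On Android, for a completely static scene you can go further and use GLSurfaceView's RENDERMODE_WHEN_DIRTY together with requestRender(), so no frames are produced at all until something actually changes.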
I just added some computationally expensive code to an Android game I am developing. The code in question is a collection of collision detection routines that get called very often (every iteration of the game-loop) and are doing a large amount of computation. I feel my collision detection implementation is fairly well developed, and as reasonably fast as I can make it in Java.
I've been using Traceview to profile the code, and this new piece of collision detection code has somewhat unsurprisingly doubled the duration of my game logic. That's obviously a concern since for certain devices, this performance hit could take my game from a playable to an unplayable state.
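(For context, the Traceview captures were just a matter of wrapping the suspect pass in trace calls, roughly like this; runCollisionDetection is a stand-in for my actual game-loop call:)

    import android.os.Debug;

    final class Profiling {
        void profileCollisionPass() {
            // Writes a .trace file named "collision" that Traceview can open.
            Debug.startMethodTracing("collision");
            runCollisionDetection();   // hypothetical stand-in for the real work
            Debug.stopMethodTracing();
        }

        private void runCollisionDetection() { /* ... */ }
    }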
I have been considering different ways to optimize this code, and I am wondering if by moving the code into C++ and accessing it with the JNI, if I will get some noticeable performance savings?
The above question is my main concern and my reason for asking. I've determined that the two following reasons would be other positive results from using the JNI. However, it is not enough to persuade me to port my code to C++.
This would make the code cleaner. Since most of the collision detection is some sort of vector math, it is much cleaner to be able to use overloaded operators rather than using some more verbose vector classes in Java.
Memory management would be simpler. Simpler you say? Well, this is a game so the garbage collector running is not welcome because the GC could end up ruining the performance of your game if it constantly has to interrupt to clean up. In C I don't have to worry about the garbage collector, so I can avoid all the ugly things I do in Java with temporary static variables and just rely on the good old stack memory of C++
Long-winded as this question may be, I think I covered all my points. Given this information, would it be worth porting my code from Java to C++ and accessing it with the JNI (for reasons of improving performance)? Also, is there a way to measure or estimate a potential performance gain?
EDIT:
So I did it. Results? Well from TraceView's perspective, it was a 6x increase in speed of my collision detection routine.
It wasn't easy getting there though. Besides having to do the JNI dance, I also had to make some optimizations that I did not expect. Mainly, using a directly allocated float buffer to pass data from Java to native. My initial attempt just used a float array to hold the data in question because the conversion from Java to C++ was more natural, but that was really, really slow. The direct buffer completely side-stepped the performance issues with array copying between Java and native, and left me with the 6x bump.
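Roughly what the direct-buffer arrangement ended up looking like (the class, method and library names here are made up for illustration, not my actual code):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;

    final class CollisionNative {
        private static final int MAX_OBJECTS = 1024;

        static {
            System.loadLibrary("collision");   // hypothetical native library name
        }

        // A direct buffer lives outside the Java heap, so the C++ side can read it
        // in place (via GetDirectBufferAddress) and nothing is copied at the JNI boundary.
        private final FloatBuffer positions = ByteBuffer
                .allocateDirect(MAX_OBJECTS * 2 * 4)      // x,y per object, 4 bytes per float
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();

        /** Copy this frame's positions in and make a single JNI call for the whole pass. */
        void update(float[] xy, int count) {
            positions.clear();
            positions.put(xy, 0, count * 2);
            detect(positions, count);
        }

        // Implemented in C++ and resolved from the loaded library.
        private native void detect(FloatBuffer positions, int count);
    }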
Also, instead of rolling my own vector class, I just used the Eigen math library. I'm not sure how much of an effect this has had on performance, but at the least, it saved me the time of developing my own (less efficient) vector class.
Another lesson learned is that excessive logging is bad for performance (just in case that isn't obvious).
Not really a direct answer to your question, but the following links might be of use to you:
Android Developers, JNI Tips.
Android Developers, Designing for Performance
In the second link the following is written:
Native code isn't necessarily more efficient than Java. For one thing, there's a cost associated with the Java-native transition, and the JIT can't optimize across these boundaries. If you're allocating native resources (memory on the native heap, file descriptors, or whatever), it can be significantly more difficult to arrange timely collection of these resources. You also need to compile your code for each architecture you wish to run on (rather than rely on it having a JIT). You may even have to compile multiple versions for what you consider the same architecture: native code compiled for the ARM processor in the G1 can't take full advantage of the ARM in the Nexus One, and code compiled for the ARM in the Nexus One won't run on the ARM in the G1. Native code is primarily useful when you have an existing native codebase that you want to port to Android, not for "speeding up" parts of a Java app.
If you are still at a fairly early stage of game development, you can consider using a Game Engine which provides a good collision detection mechanism, like Libgdx which does a fairly good job of box2d collision detection.
I'm trying to put a particle system together in Android, using OpenGL. I want a few thousand particles, most of which will probably be offscreen at any given time. They're fairly simple particles visually, and my world is 2D, but they will be moving, changing colour (not size - they're 2x2), and I need to be able to add and remove them.
I currently have an array which I iterate through, handling velocity changes, managing lifecycles (killing old ones, adding new ones), and plotting them, using glDrawArrays. What OpenGL is pointing at, though, for this call, is a single vertex; I glTranslatex it to the relevant co-ords for each particle I want to plot, one at a time, set the colour with glColor4x, then glDrawArrays it. It works, but it's a bit slow and only works for a few hundred particles. I'm handling the clipping myself.
I've written a system to support static particles which I have loaded into a vertex/colour array and plot using glDrawArrays, but this approach only seems suitable for particles which never change relative location (i.e. I move all of them using glTranslate) or colour, and where I don't need to add/remove particles. A few tests on my phone (HTC Desire) suggest that trying to alter the contents of those arrays (which are ByteBuffers, pointed to by OpenGL) is extremely slow.
Perhaps there's some way of manually writing the screen myself with the CPU. If I'm just plotting 1x1/2x2 dots on the screen, and I'm purely interested in writing and not doing any blending/antialiasing, is this an option? Would it be quicker than whatever OpenGl is doing?
(200 or so particles on a 1GHz machine with megs of RAM. This is way slower than I was getting 20 years ago on a 7MHz machine with <500KB of RAM! I appreciate I'm using Java here, but surely there must be a better solution. Do I have to use the NDK to get the power of C++, or is what I'm after possible?)
I've been hoping somebody might answer this definitively, as I'll be needing particles on Android myself. (I'm working in C++, though -- Currently using glDrawArrays(), but haven't pushed particles to the limit yet.)
I found this thread on gamedev.stackexchange.com (not Android-specific), and nobody can agree on the best approach there, but you might want to try a few things out and see for yourself.
I was going to suggest glDrawArrays(GL_POINTS, ...) with glPointSize(), but the guy asking the question there seemed unhappy with it.
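For what it's worth, here's a sketch of what that batched GL_POINTS approach could look like with the GLES 1.x API you're already using (the buffer sizes and names are mine; the particle update code that fills verts/cols each frame isn't shown):

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;
    import java.nio.FloatBuffer;
    import javax.microedition.khronos.opengles.GL10;

    final class ParticleBatch {
        private static final int MAX = 4096;
        final float[] verts = new float[MAX * 2];          // x,y per particle, filled each frame
        final float[] cols  = new float[MAX * 4];          // r,g,b,a per particle
        private final FloatBuffer vertBuf = newFloatBuffer(MAX * 2);
        private final FloatBuffer colBuf  = newFloatBuffer(MAX * 4);

        private static FloatBuffer newFloatBuffer(int floats) {
            return ByteBuffer.allocateDirect(floats * 4)
                    .order(ByteOrder.nativeOrder()).asFloatBuffer();
        }

        /** Upload the live particles and draw the whole system with one call. */
        void draw(GL10 gl, int count) {
            vertBuf.clear(); vertBuf.put(verts, 0, count * 2); vertBuf.position(0);
            colBuf.clear();  colBuf.put(cols, 0, count * 4);   colBuf.position(0);

            gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
            gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
            gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertBuf);
            gl.glColorPointer(4, GL10.GL_FLOAT, 0, colBuf);
            gl.glPointSize(2f);
            // One draw call for everything instead of a translate + draw per particle,
            // and the per-particle colour comes from the colour array rather than glColor4x.
            gl.glDrawArrays(GL10.GL_POINTS, 0, count);

            gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
            gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        }
    }

Refilling the Java arrays on the CPU each frame is cheap compared to issuing hundreds of separate GL calls, but as I said, try it and see what your hardware does with it.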
Let us know if you find a good solution!
I'm currently implementing a software keyboard (using some sophisticated prediction), and drawing it using Canvas is insufficient in terms of performance. I'm getting frame drawing times well above 100ms, which is clearly unacceptable.
The keyboard itself consists of about 33 keys, each of them drawn using drawRoundRect with some simple text above that. No widgets whatsoever are used, so it's the plain drawing performance. Also, almost all of Google's performance tips are in use, so that's not the reason for the slowness either.
I've now reached a point where switching to OpenGL would actually make sense, but I'm still sceptical considering the impact an OpenGL-based keyboard might have on battery life.
As I've found no sufficient documentation on that topic, I hope someone here can point me to the right direction.
Regardless of how much it drains the battery, you probably don't want to do this because most existing devices don't support multiple OpenGL contexts at the same time, so your soft keyboard would be incompatible with any application that is using OpenGL for its own drawing. On these devices the OpenGL context is owned only by the foreground application; it can not be used in secondary parts of the UI like the soft keyboard.
Also as the previous poster said, you would probably be best off looking how to optimize your regular drawing. Drawing vectors is quite slow, so pre-rendering them into a bitmap to just do bitmap blits would help a lot. Also be careful to only draw the parts of the window that have changed. 100ms is a pretty insane amount of time to take to draw the UI, so there are almost certainly significant optimizations you can make. You might want to look at the KeyboardView code in the platform (which is used by the standard soft keyboard and sample IME); this already contains many similar drawing optimizations.
An aside: Have you considered rendering the keys once and then grabbing them as sprites and blitting these? It should be vastly superior to rendering vector graphics.
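A minimal sketch of that idea (sizes and names are illustrative; it assumes textPaint has its alignment set to center):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Paint;
    import android.graphics.RectF;

    final class KeyCache {
        private static final int KEY_W = 64, KEY_H = 80;   // illustrative key size in pixels

        /** Render one key into an offscreen bitmap once, e.g. when the keyboard is created. */
        static Bitmap renderKey(String label, Paint bgPaint, Paint textPaint) {
            Bitmap bmp = Bitmap.createBitmap(KEY_W, KEY_H, Bitmap.Config.ARGB_8888);
            Canvas c = new Canvas(bmp);
            c.drawRoundRect(new RectF(1, 1, KEY_W - 1, KEY_H - 1), 8f, 8f, bgPaint);
            c.drawText(label, KEY_W / 2f, KEY_H * 0.6f, textPaint);
            return bmp;
        }

        /** Per frame, just blit the cached bitmaps instead of redrawing round rects and text. */
        static void drawKey(Canvas screen, Bitmap keyBitmap, float x, float y) {
            screen.drawBitmap(keyBitmap, x, y, null);
        }
    }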
I cannot give you hard numbers (and as apphacker pointed out, this is device-specific), but even if OpenGL is hardware-accelerated and hence might use more battery, the operation should complete much faster and so use less power in total.
If it is not hardware-accelerated, it seems logical that it should only use more power if it takes longer to complete the operation, as you are only exchanging one drawing API for another.
All in all, as you only have to draw when external events happen, it shouldn't matter much in the long run, as people are probably typing only a few keys per minute.
You'll probably just have to implement it (maybe in a simplified test case) and make measurements.