Performant 2D rendering with Android's Canvas API

Over the last month, I've been trying my hand at a simple 2D side-scroller using Android's default Canvas APIs.
In short, my question boils down to: "Is the Canvas API performant enough to pull 60fps in a simple side-scroller with a multi-layer, parallaxing background?"
Now hear me out, because I've tried a ton of different approaches, and so far I've come up fairly empty-handed on how to squeeze any more efficiency out of what I've attempted.
First, to address the easy problems:
I'm not allocating anything in the game loop
The Surface is Hardware Accelerated
Bitmaps are being loaded prior to starting the game, and drawn without scaling.
I don't believe I'm doing anything significant in the game loop aside from the bitmap drawing (mostly updating position variables and getting the time); the loop is sketched below.
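
For reference, the loop is structured roughly like this (a simplified sketch with placeholder names, not the actual code):

    // Simplified sketch of the game loop; bitmaps are decoded before this runs
    // and nothing is allocated inside the loop itself.
    while (running) {
        long now = SystemClock.uptimeMillis();
        float dt = (now - lastFrameTime) / 1000f;
        lastFrameTime = now;

        updatePositions(dt);                 // arithmetic on pre-allocated fields only

        Canvas canvas = holder.lockCanvas(); // SurfaceHolder from the SurfaceView
        if (canvas != null) {
            try {
                drawLayers(canvas);          // plain drawBitmap calls, no scaling
            } finally {
                holder.unlockCanvasAndPost(canvas);
            }
        }
    }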
The problems started, as you may have guessed, with the parallaxing background. Seven layers in total, though I've experimented with as few as three while still being unable to maintain 60fps. I'm clipping the overlapping parts to minimize overdraw, but I'd guess I'm still at roughly 2-3x overdraw, all summed up.
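
The clipping itself is nothing fancy; per layer it amounts to something like this (a sketch, assuming each layer bitmap is at least screen-width and knows the horizontal band of the screen where it's actually visible; Layer and visibleBand are illustrative names, not my real code):

    // Back-to-front: clip each layer to the band not covered by nearer layers,
    // then draw it twice for horizontal wrap-around.
    for (Layer layer : layers) {
        canvas.save();
        canvas.clipRect(layer.visibleBand);  // Rect precomputed outside the loop
        int x = (int) (-layer.scrollX % layer.bitmap.getWidth());
        canvas.drawBitmap(layer.bitmap, x, layer.top, null);
        canvas.drawBitmap(layer.bitmap, x + layer.bitmap.getWidth(), layer.top, null);
        canvas.restore();
    }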
As the game runs, I often start out at ~60fps, but about 30-40 seconds in, the fps drops to about 20. The timing is random, and it seems to be a state of the phone rather than a result of anything in the code (force-killing the app and restarting causes the new instance to start at ~20fps, but letting the phone sit for a while results in a higher fps). My guess here is thermal throttling on the CPU...
Now, I'm testing on a Nexus 5X, and I naively thought these issues might disappear on a faster device (a Nexus 6P). Of course, thanks to the larger screen, the problems got worse: it ran at ~15fps continuously.
A co-worker suggested loading small bitmaps and stretching them at draw time, instead of scaling the bitmaps on load to the size they'd appear on screen. Implementing this by making each bitmap 1/3 the size and using the canvas.drawBitmap(bitmap, srcRect, destRect, paint) overload to compensate yielded worse performance overall (though I'm sure it helped the memory footprint). I haven't tried drawBitmapMesh, but I imagine it wouldn't be more performant than plain old drawBitmap.
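
Concretely, the stretch-on-draw experiment amounted to this (illustrative sketch; both rects are allocated once, outside the loop):

    // Bitmap decoded at 1/3 size; destRect is 3x the source so the on-screen
    // size is unchanged. Only the rect positions are updated per frame.
    srcRect.set(0, 0, smallBitmap.getWidth(), smallBitmap.getHeight());
    destRect.set(x, y, x + smallBitmap.getWidth() * 3, y + smallBitmap.getHeight() * 3);
    canvas.drawBitmap(smallBitmap, srcRect, destRect, paint);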
In another attempt, I gave an array of ImageViews a shot, thinking the Android View class might be doing some magic I wasn't, but after an hour of fiddling with it, it didn't look any more promising.
Lastly, the on-device GPU profiling bars remain far beneath the green 60fps line, yet the screen doesn't seem to reflect that. I'm not sure what's going on there.
Should I bite the bullet and switch over to OpenGL? I thought that Canvas would be up to the task of a side-scroller with a parallaxing background, but I'm either doin' it wrong™, or using the wrong tool for the job, it seems.
Any help or suggestions are sincerely appreciated, and apologies for the long-winded post.

Related

Simple particle system on Android using OpenGL ES 1.0

I'm trying to put a particle system together in Android, using OpenGL. I want a few thousand particles, most of which will probably be offscreen at any given time. They're visually simple particles, and my world is 2D, but they will be moving and changing colour (not size; they're 2x2), and I need to be able to add and remove them.
I currently have an array which I iterate through, handling velocity changes, managing lifecycles (killing old particles, adding new ones), and plotting them using glDrawArrays. What OpenGL is pointing at for this call, though, is a single vertex; I glTranslatex to the relevant co-ords for each particle I want to plot, set the colour with glColor4x, then glDrawArrays it, one particle at a time. It works, but it's a bit slow and only scales to a few hundred particles. I'm handling the clipping myself.
I've written a system to support static particles, which I load into a vertex/colour array and plot with glDrawArrays, but this approach only seems suitable for particles which never change relative location (i.e. I move all of them together with glTranslate) or colour, and where I never need to add or remove particles. A few tests on my phone (HTC Desire) suggest that altering the contents of those arrays (which are ByteBuffers, pointed to by OpenGL) is extremely slow.
Perhaps there's some way of writing to the screen manually with the CPU. If I'm just plotting 1x1/2x2 dots, and I'm purely interested in writing rather than any blending/antialiasing, is this an option? Would it be quicker than whatever OpenGL is doing?
(200 or so particles on a 1 GHz machine with megs of RAM. This is way slower than what I was getting 20 years ago on a 7 MHz machine with <500K of RAM! I appreciate I'm using Java here, but surely there must be a better solution. Do I have to use the NDK to get the power of C++, or is what I'm after possible?)
I've been hoping somebody would answer this definitively, as I'll be needing particles on Android myself (I'm working in C++, though; currently using glDrawArrays(), but I haven't pushed particle counts to the limit yet).
I found this thread on gamedev.stackexchange.com (not Android-specific), and nobody can agree on the best approach there, but you might want to try a few things out and see for yourself.
I was going to suggest glDrawArrays(GL_POINTS, ...) with glPointSize(), but the guy asking the question there seemed unhappy with it.
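If you do experiment with it, the win comes from refilling one buffer and issuing a single draw call per frame, rather than a glTranslate/glDrawArrays pair per particle. A rough, untested sketch against the GL10 bindings (vertexBuffer and colorBuffer are direct FloatBuffers allocated once; the Particle fields are illustrative):

    // Batch all live particles into one vertex/colour array, then draw once.
    vertexBuffer.clear();
    colorBuffer.clear();
    int count = 0;
    for (Particle p : particles) {
        if (!p.alive) continue;
        vertexBuffer.put(p.x).put(p.y);
        colorBuffer.put(p.r).put(p.g).put(p.b).put(p.a);
        count++;
    }
    vertexBuffer.position(0);
    colorBuffer.position(0);

    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
    gl.glVertexPointer(2, GL10.GL_FLOAT, 0, vertexBuffer);
    gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer);
    gl.glPointSize(2f);
    gl.glDrawArrays(GL10.GL_POINTS, 0, count);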
Let us know if you find a good solution!

Android drawBitmap Performance For Lots of Bitmaps?

I'm in the process of writing an Android game and I seem to be having performance issues with drawing to the Canvas. My game has multiple levels, and each has (obviously) a different number of objects in it.
The strange thing is that one level, which contains 45 images, runs flawlessly (almost 60 fps), while another level, which contains 81 images, barely runs at all (11 fps); it's pretty much unplayable. Does this seem odd to anybody besides me?
All of the images I use are .pngs, and the only difference between the aforementioned levels is the number of images.
What's going on here? Can the Canvas simply not draw this many images each game loop? How would you guys recommend that I improve this performance?
Thanks in advance.
Seems strange to me as well. I'm also developing a game with lots of levels; I can easily have 100 game objects on screen and haven't seen a similar problem.
Used properly, drawBitmap should be very fast indeed; it is little more than a copy command. I don't even draw circles natively; I use bitmaps of pre-rendered circles.
However, Bitmap performance on Android is very sensitive to how you use it. Creating Bitmaps can be very expensive, since by default Android may auto-scale the .pngs, which is CPU-intensive. All of this needs to happen exactly once, outside your rendering loop.
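For example, decoding everything once up front, with density auto-scaling disabled, looks something like this (a sketch; the resource names are placeholders):

    // Decode each bitmap exactly once, before the game loop starts.
    // inScaled = false stops BitmapFactory from density-scaling the png on load.
    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inScaled = false;
    Bitmap sprite = BitmapFactory.decodeResource(getResources(), R.drawable.sprite, opts);
    // ...decode the rest the same way; the loop then only calls canvas.drawBitmap().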
I suspect that you are looking in the wrong place. If you create and use the same sorts of images in the same sorts of ways, then doubling the number of screen images should not reduce performance by a a factor of over 4. At most it should be linear (a factor of 2).
My first suspicion would be that most of your CPU time is spent in collision detection. Unlike drawing bitmaps, this usually grows with the square of the number of interacting objects, because every object has to be tested for collision against every other object. You roughly doubled the number of game objects and your performance dropped to a quarter, i.e. in line with the square of the object count. If this is the case, don't despair; there are ways of doing collision detection that do not grow as the square of the number of objects.
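A uniform grid is the simplest of these: bucket objects by cell and only test pairs that share a cell. A rough sketch (CELL_SIZE, GameObject and testCollision are illustrative; objects spanning a cell boundary would need to be inserted into every cell they touch, and a real version would reuse the buckets instead of reallocating them each frame):

    // Uniform-grid broad phase: only objects sharing a cell are tested pairwise,
    // so the cost stays close to linear instead of quadratic.
    Map<Long, List<GameObject>> grid = new HashMap<Long, List<GameObject>>();
    for (GameObject o : objects) {
        long cx = (long) (o.x / CELL_SIZE);
        long cy = (long) (o.y / CELL_SIZE);
        Long key = (cx << 32) | (cy & 0xffffffffL); // pack cell coords into one key
        List<GameObject> cell = grid.get(key);
        if (cell == null) {
            cell = new ArrayList<GameObject>();
            grid.put(key, cell);
        }
        cell.add(o);
    }
    for (List<GameObject> cell : grid.values()) {
        for (int i = 0; i < cell.size(); i++) {
            for (int j = i + 1; j < cell.size(); j++) {
                testCollision(cell.get(i), cell.get(j)); // narrow phase
            }
        }
    }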
In the meantime, I would do some basic testing. What happens if you don't actually draw half the objects? Does the game run much faster? If not, it's nothing to do with drawing.
I think this lecture will help you. Go to the 45-minute mark; there is a graph comparing the Canvas method and the OpenGL method. I think it is the answer.
I encountered a similar performance problem: level 1 ran great and level 2 didn't.
It turned out it wasn't the rendering that was at fault (at least not specifically); it was something else in the level logic that was causing a bottleneck.
Point is ... Traceview is your best friend.
The method profiling showed where the CPU was spending its time and why the glitch in the framerate was happening. (Incidentally, the rendering cost was also higher in level 2, but it wasn't the bottleneck.)
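For anyone who hasn't used it: bracketing the suspect code with the method-tracing calls is enough to produce a file you can open in Traceview (a minimal sketch; the label is arbitrary):

    // Writes level2.trace to external storage for inspection in Traceview.
    android.os.Debug.startMethodTracing("level2");
    runSuspectLevelLogic(); // the code you want profiled
    android.os.Debug.stopMethodTracing();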

Can I increase performance using glScissor?

I'm working on a 2D game for android using OpenGL ES 1.1 and I would like to know if this idea is good/bad/useless.
I have a screen divided into 3 sections, so I use scissor testing to keep objects in one view from overlapping into the others.
I roughly understand the low-level implementation of scissoring, and since my draws account for a big part of the computation, I'm looking for ideas to speed them up.
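For reference, my current per-section scissoring is roughly this (GL ES 1.1 through the GL10 bindings; the Section type and its fields are illustrative):

    // Scissor each screen section so its draws can't bleed into the others.
    // glScissor coordinates are window-space, origin at the bottom-left.
    gl.glEnable(GL10.GL_SCISSOR_TEST);
    for (Section s : sections) {          // the three views
        gl.glScissor(s.x, s.y, s.width, s.height);
        s.draw(gl);
    }
    gl.glDisable(GL10.GL_SCISSOR_TEST);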
My current idea is as follows:
If I put a glScissor around each object before I draw it, would that increase the speed of my application?
The idea is that if I scissor each object down to its own bounds (centre +/- texture size), the OpenGL pipeline will have fewer tests to do, since it can discard 90-99% of the surface thanks to the scissor.
So, to all the OpenGL experts: is this good, bad, or will it have no impact? And why?
It shouldn't have any impact, IMHO. I'm not an expert, but my thinking is as follows:
The scissor test saves on your GPU's fill rate (the number of fragments/pixels the hardware can write to the framebuffer per second),
but if you put a glScissor around each object, the test won't actually cut anything off; the same number of pixels will be rendered, so no fill rate will be saved.
If you want to optimize your rendering, a good place to start is to make sure you're batching optimally and reducing the number of draw calls and expensive state switches (e.g. texture switches).
Of course, the correct approach to optimization is to diagnose why your rendering is slow in the first place, so the above is just my guess, which may or may not help in your particular situation.

SurfaceView Fast Enough For Emulation?

For years now I've maintained a Tandy Color Computer emulator applet on my home page. With the purchase of an Incredible, I decided to do a port. Getting it going on Android didn't take long, but I'm really surprised at how slow it runs; you can literally see the pixels painting. I know there are other successful Android emulators, so I must be doing something wrong.
My approach was to use a SurfaceView for rendering, with a separate thread running a virtual 6809 CPU. Whenever that thread updates the emulated video memory, it calls SurfaceHolder.lockCanvas() with a Rect describing the part of the screen requiring a repaint, then calls the gfx routines with the resulting Canvas (this is where I did a repaint() in AWT/Swing). The gfx routines are smart enough to render only what's in the clipRect. Perhaps I'm still stuck in AWT thinking, but I can't come up with any way to make this run at an acceptable speed. I tried coalescing the gfx calls, but that didn't work either. Any thoughts?
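Boiled down, the render path is this (a simplified sketch; in the real code the dirty Rect comes from the 6809 thread's video-memory writes):

    // Lock only the dirty region, repaint it, post the canvas back.
    Rect dirty = new Rect(x, y, x + w, y + h);
    Canvas canvas = holder.lockCanvas(dirty);
    if (canvas != null) {
        try {
            drawVideoMemory(canvas, dirty); // renders only what's inside the clip
        } finally {
            holder.unlockCanvasAndPost(canvas);
        }
    }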
SurfaceView should be fast enough. All the drawing routines are smart enough to do nothing outside the clip region, though you can also cull the calls ahead of time if you want. It sounds like you need to profile your app and see where you're spending too much time.

Is OpenGL on Android a battery killer?

I'm currently implementing a software keyboard (using some sophisticated prediction), and drawing it with Canvas is insufficient in terms of performance: I'm getting frame drawing times well above 100ms, which is clearly unacceptable.
The keyboard itself consists of about 33 keys, each drawn using drawRoundRect with a simple text label on top. No widgets whatsoever are used, so this is plain drawing performance. Also, almost all of Google's performance tips are in use, so that's not the reason for the slowness either.
I've now reached the point where switching to OpenGL would actually make sense, but I'm still sceptical about the impact an OpenGL-based keyboard might have on battery life.
As I've found no adequate documentation on the topic, I hope someone here can point me in the right direction.
Regardless of how much it drains the battery, you probably don't want to do this, because most existing devices don't support multiple OpenGL contexts at the same time; your soft keyboard would be incompatible with any application that uses OpenGL for its own drawing. On these devices the OpenGL context is owned only by the foreground application; it cannot be used in secondary parts of the UI like the soft keyboard.
Also, as the previous poster said, you would probably be best off looking at how to optimize your regular drawing. Drawing vectors is quite slow, so pre-rendering them into a bitmap and just doing bitmap blits would help a lot. Also, be careful to only draw the parts of the window that have changed. 100ms is a pretty insane amount of time to draw a UI, so there are almost certainly significant optimizations you can make. You might want to look at the KeyboardView code in the platform (which is used by the standard soft keyboard and the sample IME); it already contains many similar drawing optimizations.
An aside: have you considered rendering the keys once, then grabbing them as sprites and blitting those? It should be vastly superior to rendering vector graphics.
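Something along these lines (a sketch only; one cached bitmap per key, or per key state, which onDraw then blits instead of re-rendering the vectors):

    // Render a key face once into an offscreen bitmap...
    Bitmap keyBitmap = Bitmap.createBitmap(keyWidth, keyHeight, Bitmap.Config.ARGB_8888);
    Canvas offscreen = new Canvas(keyBitmap);
    offscreen.drawRoundRect(new RectF(0, 0, keyWidth, keyHeight), radius, radius, keyPaint);
    offscreen.drawText(label, keyWidth / 2f, baseline, textPaint);

    // ...then, per frame, a cheap blit replaces the vector work:
    canvas.drawBitmap(keyBitmap, keyX, keyY, null);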
I cannot give you hard numbers (and, as apphacker pointed out, this is device-specific), but even if OpenGL is hardware-accelerated and therefore might draw more power momentarily, the operation should complete much faster, and so use less energy in total.
If it is not hardware-accelerated, it seems logical that it should only use more power if it takes longer to complete the operation, as you are only exchanging one drawing API for another.
All in all, since you only have to draw when external events happen, it shouldn't matter much in the long run, as people are probably typing only a few keys per minute.
You'll probably just have to implement it (maybe in a simplified test case) and make measurements.
