OK, I am going to be drawing a lot of "insects" on the screen. My question is, if I am drawing a spider for example, would it be faster to load a bitmap, rotate it to the correct angle with a Matrix, and draw it on screen (again and again), or to draw the insect using canvas.drawLine, drawCircle, etc.? For a direct comparison:
Bitmap: 500 bytes w/ transparency
Drawn: 8 drawLines, 2 drawCircles
I already have a lot going on, so performance here is very important.
Thanks in advance!
It is much faster to draw a bitmap. What really matters is how many pixels you are going to draw (i.e. the overdraw, which will impact the maximum fillrate.) Using bitmaps also allows you to create richer graphics without performance penalties.
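As a rough illustration of the difference, here is a minimal sketch (the SpiderSprite class, its size and leg layout are assumptions, not your actual art): the primitives are drawn once into an offscreen Bitmap, and each frame then only pays for a single rotated blit.

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Matrix;
    import android.graphics.Paint;

    public class SpiderSprite {
        private final Bitmap cached;                      // rendered once, reused every frame
        private final Matrix matrix = new Matrix();
        private final Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);

        public SpiderSprite(int size) {
            cached = Bitmap.createBitmap(size, size, Bitmap.Config.ARGB_8888);
            Canvas c = new Canvas(cached);
            Paint p = new Paint(Paint.ANTI_ALIAS_FLAG);
            float half = size / 2f;
            c.drawCircle(half, half, size / 4f, p);       // body
            for (int i = 0; i < 8; i++) {                 // eight legs
                double a = i * Math.PI / 4;
                c.drawLine(half, half,
                           (float) (half + half * Math.cos(a)),
                           (float) (half + half * Math.sin(a)), p);
            }
        }

        // Per-frame draw: one bitmap blit instead of ten primitive calls.
        public void draw(Canvas canvas, float x, float y, float angleDegrees) {
            matrix.setRotate(angleDegrees, cached.getWidth() / 2f, cached.getHeight() / 2f);
            matrix.postTranslate(x, y);
            canvas.drawBitmap(cached, matrix, paint);
        }
    }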
Related
I would like to create a Canvas instance that is too big to be backed by a heap-memory Bitmap, let's say 5000x5000 pixels (approx. 95MB). I would like this very large Canvas to send all the various draw operations directly to a bitmap file. Unfortunately the Bitmap class in Android is marked final so I can't provide my own implementation. Does anyone have an idea if and how this might be accomplished? I'm not very interested in performance, 10 seconds to make a few dozen draw operations is fine; the goal is to not get out-of-memory errors.
There's no facility to do what you are asking, and even if there were, performing such operations against a file would give horrendous performance.
Probably the only reasonable way is to store just the drawing operations, and create a Canvas that is the same size as the device screen that would serve as a "window" into the whole 5000x5000 pixel canvas. For a detailed explanation see my answer to a related question here: Android - is there a possibility to make infinite canvas?
Here is an idea I had that I think could theoretically work, but would probably require far too much effort:
Create a subclass of Canvas that contains many smaller Canvas objects inside it. These would represent tiles of the overall Canvas. These tiles should be small enough to fit in memory at least one at a time. Create one file for each inner tile Canvas and use it to store uncompressed pixel data from a Buffer.
When a draw operation occurs on the overall Canvas figure out which tiles need to be drawn to. One at a time read the file for that tile into a Bitmap in memory and perform the possibly clipped draw and then save the Bitmap data back to the file and close it.
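A rough sketch of what one tile round-trip might look like under those assumptions (the tile size, the DrawOp interface and the per-tile file layout are all hypothetical; tiles are stored as raw ARGB_8888 pixel data):

    import android.graphics.Bitmap;
    import android.graphics.Canvas;

    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;

    public class TileBackedCanvas {
        static final int TILE = 512;                 // small enough to fit on the heap

        interface DrawOp { void draw(Canvas c); }    // hypothetical recorded draw operation

        // Apply one draw operation to one tile: load its raw pixels, draw, save them back.
        void drawToTile(File tileFile, int tileX, int tileY, DrawOp op) throws Exception {
            Bitmap tile = Bitmap.createBitmap(TILE, TILE, Bitmap.Config.ARGB_8888);
            ByteBuffer buf = ByteBuffer.allocate(TILE * TILE * 4);
            try (RandomAccessFile raf = new RandomAccessFile(tileFile, "rw")) {
                FileChannel ch = raf.getChannel();
                if (ch.size() > 0) {                 // load previously saved pixels, if any
                    ch.read(buf, 0);
                    buf.rewind();
                    tile.copyPixelsFromBuffer(buf);
                }
                Canvas c = new Canvas(tile);
                c.translate(-tileX * TILE, -tileY * TILE);  // world coords -> this tile's space
                op.draw(c);                                  // the (implicitly clipped) draw

                buf.rewind();
                tile.copyPixelsToBuffer(buf);
                buf.rewind();
                ch.write(buf, 0);                    // persist the tile back to disk
            } finally {
                tile.recycle();
            }
        }
    }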
Theoretically it sounds possible, realistically it sounds like too much work.
I'm developing an Android game using Canvas element. I have many graphic elements (sprites) drawn on a large game map. These elements are drawn by standard graphics functions like drawLine, drawPath, drawArc etc.
It's not hard to test whether they are on screen or not. So, if they are off screen, I may skip their drawing routines completely. But even this check has a CPU cost. I wonder if the Android Graphics Library can do this faster than I can?
In short, should I draw everything even if it is completely outside the screen coordinates, trusting the Android Graphics Library to take care of it without spending much CPU, or should I check each drawing area rectangle myself and skip the drawing routines when it is completely off screen? Which is the proper way? Which one is supposed to be faster?
p.s: I'm targeting Android v2.1 and above.
From a not-entirely-scientific test I did drawing Bitmaps tiled across an area greater than the screen, I found that checking beforehand whether the Bitmap was on screen doesn't seem to make a considerable difference.
In one test I set a Rect to the screen size and set another Rect to the position of the Bitmap and checked Rect.intersects() before drawing. In the other test I just drew the Bitmap. After 300-ish draws there wasn't a visible trend - some went one way, others went another. I tried the 300-draw test every frame, and the variation from frame to frame was much greater than difference between checked and unchecked drawing.
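For reference, the pre-check itself is only a couple of lines; this is a sketch rather than my actual test code, and screenRect, spriteX/spriteY and bmp are assumptions:

    Rect screenRect = new Rect(0, 0, canvas.getWidth(), canvas.getHeight());
    Rect spriteRect = new Rect(spriteX, spriteY,
                               spriteX + bmp.getWidth(), spriteY + bmp.getHeight());
    if (Rect.intersects(screenRect, spriteRect)) {
        canvas.drawBitmap(bmp, spriteX, spriteY, null);   // only draw when it can be seen
    }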
From that I think it's safe to say Android checks bounds in its native code, or you'd expect a considerable difference. I'd share the code of my test, but I think it makes sense for you to do your own test in the context of your situation. It's possible points behave differently than Bitmaps, or some other feature of your paint or canvas changes things.
Hope that helps you (or anyone else who stumbles across this thread as I did with the same question).
We are developing a scrolling/zooming scene in OpenGL ES on Android, very much like a level in Angry Birds but more like a level in World of Goo. More like the latter, as the world will not consist of repeated layers as featured in Angry Birds but of one large image. As the scene needs to scroll/zoom, and therefore a lot of it will not be visible, I was wondering about the most efficient way to implement the rendering, focusing on the environment only (i.e. not the objects within the world but the background layers).
We will be using an orthographic projection.
The first thing that comes to mind is creating a large 4-vertex rectangle at world size, which has the background texture mapped to it, and translating/scaling it using glTranslatef / glScalef. However, I was wondering if the non-visible area outside the screen's boundaries is still being rendered by OpenGL, as it cannot be culled (you would lose the visible area as well, since there are only 4 vertices). Therefore, would it be more efficient to subdivide this rectangle, so that non-visible smaller rectangles can be culled?
Another option would be creating a 4-vertex rectangle that fills the screen, then moving the background by adjusting its texture coordinates. However, I guess we would run into problems when building bigger worlds, considering the texture size limit. It seems like a nice implementation for repeated backgrounds like Angry Birds has.
Maybe there is another way..?
If someone has an idea on how it might be done in AngryBirds / World of Goo, please share as I'd love to hear. They seem to have implemented a system that allows for the world to be moved and zoomed very (WorldOfGoo = VERY) smoothly.
This is probably your best bet for implementation.
In my experience, keeping a large texture in memory is very expensive on Android. I would get quite a few OutOfMemoryError exceptions for the background texture before I moved to tiling.
I think the biggest rendering bottleneck would be with memory transfer speeds and fill rate instead of any graphics computation.
Edit: Check out 53:28 of this presentation from Google I/O 2009.
You could split the background rectangle into smaller rectangles, so that OpenGL only renders the visible rectangles. You won't have a big-ass rectangle with a big-ass texture loaded, but smaller rectangles with smaller textures that you could load/unload depending on what is visible on screen...
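Deciding which of those smaller rectangles are on screen is just grid arithmetic. A sketch, assuming the world is split into a grid of tileSize world units and camX/camY/viewW/viewH describe the visible window (all of these names are assumptions):

    // Index range of the tiles that overlap the visible window.
    int firstCol = Math.max(0, (int) Math.floor(camX / tileSize));
    int firstRow = Math.max(0, (int) Math.floor(camY / tileSize));
    int lastCol  = Math.min(cols - 1, (int) Math.floor((camX + viewW) / tileSize));
    int lastRow  = Math.min(rows - 1, (int) Math.floor((camY + viewH) / tileSize));

    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            drawTileQuad(row, col);   // hypothetical: binds this tile's texture and draws its quad
        }
    }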
AFAIK there would be no performance drop due to large areas being rendered off-screen; subdividing and culling is normally done just to reduce vertex count, but you would actually be adding to it here.
Putting that aside for now; from the way you phrased the question I am unsure whether you have a large background texture or a small repeating one. If it is large, then you will need to subdivide because of texture size limitations anyway, so the question is moot! If it is small, then I would suggest the second method, fit a quad to the screen and move the background by changing the texture coordinates.
I feel like I may have missed something, though, as I am unsure why you mentioned the texture size limitation issue when talking about the texture coordinate method and not the large quad method. Surely for both this is not a problem for repeating textures, as you can use the GL_REPEAT texture wrap mode...
But for both it is a problem for a single large texture unless you subdivide, which would make the texture coordinate tactic way more complicated than necessary. In this case subdividing the mesh along texture subdivisions would be best, and culling off-screen sections. Deciding which parts to cull should be trivial with this technique.
Cheers.
I have a drawing app where the user can draw lines with their finger, adjust the color, thickness, etc. As the user is drawing, I am converting the massed X/Y points from MotionEvent into SVG Paths, as well as creating Android Paths and then drawing them to the screen via a Canvas, and committing the SVG Paths to the app's database.
I am following the model used in FingerPaint, in that the 'in progress' lines are drawn on the fly by repeated calls to invalidate() (and thus onDraw()), and once a line is complete and a new line is started, the previous line(s) are drawn in onDraw() from the underlying Canvas Bitmap, with in-progress lines again generating repeated re-draws.
This works fine in this application - until you start rotating the underlying Bitmap to compensate for device rotation, supporting the ability to 'zoom in' on the drawing surface and thus having to scale the underlying Bitmap, etc. So for example, with the device rotated and the drawing scaled in, when the user is drawing, we need to scale AND rotate our Bitmap in onDraw(), and this is absolutely crawling.
I've looked at a SurfaceView, but as this still uses the same Canvas mechanism, I'm not sure I'll see noticeable improvement... so my thoughts turn to OpenGL. I have read somewhere that OpenGL can do rotations and scaling essentially 'for free', and even seen rumors (third comment) that Canvas may be disappearing in future versions.
Essentially, I am a little stuck between the Canvas and OpenGL solutions... I have a 2D drawing app that seems to fit the Canvas model perfectly when in one state, as there are not constant re-draws going on like a game (for instance when the user is not drawing I don't need any re-drawing), but when the user IS drawing, I need the maximum performance necessary to do some increasingly complex things with the surface...
Would welcome any thoughts, pointers and suggestions.
OpenGL would be able to handle the rotations and scaling easily.
Honestly, you would probably need to learn a lot of OpenGL to do this, specifically related to the topics of:
Geometry
Lighting (or just disabling it)
Picking (selecting geometry to draw on it)
Pixel Maps
Texture Mapping
Mipmapping
Also, learning OpenGL for this might be overkill, and you would have to be pretty good at it to make it efficient.
Instead, I would recommend using the graphic components of a game library built on top of openGL, such as:
Cocos2d
libgdx
any of the engines listed here
Well, this question was asked 6 years ago. Maybe Android 4.0 had not come out yet?
Actually, after Android 4.0 the Canvas at android.view.View is a hardware-accelerated canvas, which means it is implemented with OpenGL, so you do not need to use another approach for performance.
You can look at https://github.com/ChillingVan/android-openGL-canvas/blob/master/canvasglsample/src/main/java/com/chillingvan/canvasglsample/comparePerformance/ComparePerformanceActivity.java to compare the performance of a normal canvas in a View with a GLSurfaceView.
You are right that SurfaceView uses Canvas underneath the hood. The main difference is that SurfaceView uses another thread to do the actual drawing, which generally improves performance. It sounds like it would not help you a great deal, though.
You are correct that OpenGL can do rotations very quickly, so if you need more performance that is the way to go. You should probably use GLSurfaceView. The main drawback with using OpenGL is that it is a real pain to do text. Basically you have to (okay, don't have to, but seems to be the best option) render bitmaps of text.
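If you go that route, the usual trick is to draw the text into a Bitmap with a Canvas and upload that as a texture. A minimal sketch, assuming textureId was already generated with glGenTextures and the 256x64 bitmap size fits your text:

    import android.graphics.Bitmap;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.opengl.GLES10;
    import android.opengl.GLUtils;

    // Inside your GLSurfaceView.Renderer, on the GL thread.
    void uploadTextTexture(int textureId, String text) {
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setTextSize(32f);
        paint.setColor(Color.WHITE);

        Bitmap bmp = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
        bmp.eraseColor(Color.TRANSPARENT);
        new Canvas(bmp).drawText(text, 0, 48, paint);         // bake the text into the bitmap

        GLES10.glBindTexture(GLES10.GL_TEXTURE_2D, textureId);
        GLES10.glTexParameterf(GLES10.GL_TEXTURE_2D,
                GLES10.GL_TEXTURE_MIN_FILTER, GLES10.GL_LINEAR);
        GLES10.glTexParameterf(GLES10.GL_TEXTURE_2D,
                GLES10.GL_TEXTURE_MAG_FILTER, GLES10.GL_LINEAR);
        GLUtils.texImage2D(GLES10.GL_TEXTURE_2D, 0, bmp, 0);  // upload the bitmap pixels
        bmp.recycle();
    }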
I'm trying to determine the "best" way to scroll a background comprised of tiled Bitmaps on an Android SurfaceView. I've actually been successful in doing so, but wanted to determine whether there is a more efficient technique, or whether my technique might not work on all Android phones.
Basically, I create a new, mutable Bitmap slightly larger than the dimensions of my SurfaceView. Specifically, my Bitmap accommodates an extra line of tiles on the top, bottom, left, and right. I create a Canvas around my new Bitmap and draw my tile bitmaps to it. Then, I can scroll up to a tile in any direction simply by drawing a "SurfaceView-sized" subset of my background Bitmap to the SurfaceHolder's canvas.
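The final blit boils down to something like the following sketch (bigBitmap is the oversized backing Bitmap and scrollX/scrollY the current offset into it; these names are assumptions):

    Canvas canvas = holder.lockCanvas();
    if (canvas != null) {
        try {
            Rect src = new Rect(scrollX, scrollY,
                                scrollX + canvas.getWidth(), scrollY + canvas.getHeight());
            Rect dst = new Rect(0, 0, canvas.getWidth(), canvas.getHeight());
            canvas.drawBitmap(bigBitmap, src, dst, null);   // copy only the visible window
        } finally {
            holder.unlockCanvasAndPost(canvas);
        }
    }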
My questions are:
Is there a better bit blit technique than drawing a background bitmap to the canvas of my SurfaceHolder?
What is the best course of action when I scroll to the edge of my background bitmap, and wish to shift the map one tile length?
As I see it, my options are to:
a. Redraw all the tiles in my background individually, shifted a tile length in one direction. (This strikes me as being inefficient, as it would entail many small Bitmap draws).
b. Simply make the background bitmap so large that it will encompass the entire scrolling world. (This could require an extremely large bitmap, yet it would only need to be created once.)
c. Copy the background bitmap, draw it onto itself but shifted a tile length in the direction we are scrolling, and draw the newly revealed row or column of tiles with a few individual bitmap draws. (Here I am making the assumption that one large bitmap draw is more efficient than multiple small ones covering the same expanse.)
Thank you for reading all this, and I would be most grateful for any advice.
I originally used a technique similar to yours in my 'Box Fox' platformer game and RTS, but found it caused quite noticeable delays if you scroll enough that the bitmap needs to be redrawn.
My current method in these games is similar to your Option C. I draw my tiled map layers onto a grid of big bitmaps (about 7x7) covering an area larger than the screen. When the user scrolls onto the edge of this grid, I shift all the bitmaps in the grid over (moving the end bitmaps to the front), change the offset of the grid, and then just redraw the new edge.
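The shift itself is just moving references around and reusing the bitmaps that fall off the far edge; a sketch for a one-column scroll to the right (grid, cols and rows are assumptions, and row shifts work the same way):

    void shiftGridRight(Bitmap[][] grid, int cols, int rows) {
        for (int row = 0; row < rows; row++) {
            Bitmap reused = grid[row][0];                // leftmost bitmap leaves the grid
            for (int col = 0; col < cols - 1; col++) {
                grid[row][col] = grid[row][col + 1];     // slide every column one step left
            }
            grid[row][cols - 1] = reused;                // reuse it as the new right edge
        }
        // ...then redraw only the tiles belonging to the new rightmost column into the reused bitmaps.
    }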
I'm not quite sure which is faster with software rendering (your Option C or my current method). I think my method may be faster if you ever switch to OpenGL rendering, as you wouldn't have to upload as much texture data to the graphics card as the user scrolls.
I wouldn't recommend Option A because, as you suggest, the hundreds of small bitmap draws for a tiled map kill performance, and it gets pretty bad with larger screens. Option B may not even be possible on many devices, as it's quite easy to get a 'bitmap size exceeds VM budget' error, since the heap space limit is set quite low on many phones.
Also if you don't need transparency on your map/background try to use RGB_565 bitmaps, as it's quite a lot faster to draw in software, and uses up less memory.
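The RGB_565 suggestion is a one-line change if you load the tiles with BitmapFactory (R.drawable.tile is a placeholder here):

    BitmapFactory.Options opts = new BitmapFactory.Options();
    opts.inPreferredConfig = Bitmap.Config.RGB_565;   // 2 bytes per pixel, no alpha channel
    Bitmap tile = BitmapFactory.decodeResource(getResources(), R.drawable.tile, opts);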
By the way, I get capped at 60fps on both my phone and 10" tablet in my RTS with the method above, rendered in software, and can scroll across the map smoothly. So you can definitely get some decent speed out of the android software renderer. I have a 2D OpenGL wrapper built for my game but haven't yet needed to switch to it.
My solution in a mapping app relies on a two-level cache. First, tile objects are created with a bitmap and a position; these are either stored on disk or in a Vector (synchronization is important for me, with multithreaded HTTP comms all over the place).
When I need to draw the background, I detect the visible area and get a list of all the tiles I need (this is heavily optimised as it gets called so often), then either pull the tiles from memory or load them from disk. I get very reasonable performance even on slightly older phones, and nice smooth scrolling with no hiccups.
As a caveat, I allow tiles not to be ready and swap them with a loading image, I don't know if this would work for you, but if you have all the tiles loaded in the APK you should be fine.
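A simplified sketch of that two-level lookup (TileKey, loadFromDisk() and loadingBitmap are assumptions, and android.util.LruCache stands in for whatever synchronized collection you actually use):

    final LruCache<TileKey, Bitmap> memoryCache = new LruCache<>(64);   // keep ~64 tiles in RAM

    Bitmap getTile(TileKey key) {
        Bitmap tile = memoryCache.get(key);          // level 1: memory
        if (tile != null) return tile;

        tile = loadFromDisk(key);                    // level 2: disk
        if (tile != null) {
            memoryCache.put(key, tile);
            return tile;
        }
        return loadingBitmap;                        // placeholder until the real tile arrives
    }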
I think one efficient way to do this would be to use canvas.translate.
On the first drawing, the entire canvas would have to be filled with tiles. Newer Android phones can do this easily and quickly.
When the background is scrolled I would use canvas.translate(scrollX, scrollY), then I would draw individual tiles one by one to fill the gaps, BUT I would use
canvas.drawBitmap(tileImage[i], fromRect, toRect, null) which would only draw the parts of the tiles that are needed to be shown, by setting fromRect and toRect to correspond to scrollX and scrollY.
So all of it would be done with mathematics and no new bitmaps would be created for the background - saving some memory.
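A sketch of just the fromRect/toRect arithmetic (without the canvas.translate step), assuming square tiles of TILE pixels and a pixel scroll offset of scrollX/scrollY; tileImage, cols, screenW and screenH are assumptions:

    for (int i = 0; i < tileImage.length; i++) {
        int worldX = (i % cols) * TILE;               // tile position in world pixels
        int worldY = (i / cols) * TILE;

        // Where this tile would land on screen, clipped to the visible window.
        Rect toRect = new Rect(worldX - scrollX, worldY - scrollY,
                               worldX - scrollX + TILE, worldY - scrollY + TILE);
        if (!toRect.intersect(0, 0, screenW, screenH)) continue;   // tile fully off screen

        // The matching region inside the tile bitmap itself.
        Rect fromRect = new Rect(toRect.left - (worldX - scrollX),
                                 toRect.top - (worldY - scrollY),
                                 toRect.right - (worldX - scrollX),
                                 toRect.bottom - (worldY - scrollY));
        canvas.drawBitmap(tileImage[i], fromRect, toRect, null);
    }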
EDIT:
However, there is a problem using canvas.translate with a SurfaceView: it is double buffered, and canvas.translate will translate only one buffer, not the other one at the same time, so this alternation of buffers would have to be taken into account when depending on the SurfaceView to preserve the drawn image.
I am using your original method to draw a perspective scrolling background. I came up with this idea entirely by accident a few days ago while messing around with an easy technique to do a perspective scrolling star field simulation. The app can be found here: Aurora2D.apk
Just tilt your device or shake it to make the background scroll (excuse the 2 bouncing sprites - they are there to help me with an efficient method to display trails). Please let me know if you find a better way to do it, since I have coded several different methods over the years and this one seems to be superior. Simply mail me if you want to compare code.