Setup:
I have implemented a native (read: JNI) mechanism to copy pixels from a Bitmap object to native memory. This is done by malloc()'ing a uint32_t array in native memory and later using memcpy() to copy pixels to/from the Bitmap's native pointer. This works well and has been tested: pixels are successfully saved into native memory from a Bitmap object, copied back to a Bitmap object, and visible on screen. It is quite fast at copying, on the order of a few milliseconds even for fairly large bitmaps, but extremely slow at rendering.
Intention:
The above was done to break free of the heap limit on regular Android Bitmaps (refer to https://stackoverflow.com/a/1949205/1531054). There would be only one Java Bitmap object acting as a buffer between native memory and the target canvas.
Save a shape:
Clear the buffer Bitmap.
Draw the shape on the buffer Bitmap.
Copy its pixels to native memory, and save the memory pointer.
Clear the buffer Bitmap.
So, any number of shapes can be saved to native memory, without running into heap size limits. This works.
Later, when a shape needs to be drawn (say in onDraw()), the steps are as follows (a sketch follows the list):
Clear the buffer Bitmap.
Copy pixels from native memory to the buffer Bitmap, using the saved memory pointer.
Draw the buffer Bitmap on the canvas.
Clear the buffer Bitmap.
Repeat for the next shape.
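A minimal sketch of how the Java side of this could look; the native method names, the "pixelstore" library, and the savedShapePointers/bufferBitmap fields are illustrative placeholders for the malloc()/memcpy() JNI code described above:

// Hypothetical JNI-backed helpers inside the custom View; names are placeholders
// for the malloc()/memcpy() routines described above.
static { System.loadLibrary("pixelstore"); }
private static native long copyToNative(Bitmap buffer);              // returns the saved native pointer
private static native void copyFromNative(long ptr, Bitmap buffer);  // restores pixels from that pointer
private static native void freeNative(long ptr);

@Override
protected void onDraw(Canvas canvas) {
    for (long shapePtr : savedShapePointers) {                // one saved pointer per shape
        bufferBitmap.eraseColor(Color.TRANSPARENT);           // clear the buffer Bitmap
        copyFromNative(shapePtr, bufferBitmap);               // load the shape's pixels from native memory
        canvas.drawBitmap(bufferBitmap, 0, 0, null);          // draw it on the view canvas
    }
}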
Problem:
When quickly drawing many shapes from memory, the buffer Bitmap sort of lags. Basically we're doing
clear bitmap -> load pixels from memory onto it -> draw it on the view canvas
in quick succession inside onDraw(), and only the latest shape's pixels end up drawn on the canvas. It appears as if:
The internal canvas.drawBitmap() is asynchronous and sometimes copies pixels off the bitmap later.
Android's Bitmaps have some hidden caching mechanism.
Has anyone run into this kind of trouble before, or has some insight regarding it?
I know one can get the native Skia library's canvas instance in JNI and draw on it, but that is a non-standard approach.
In recent Android versions (3.0 and later, which covers the majority of devices), bitmap pixels are stored on the regular Java heap. With the introduction of hardware acceleration, bitmaps are drawn asynchronously, and there is a caching system that manages bitmaps loaded as textures on the GPU. Therefore the hack you are trying to do will probably degrade performance on new devices. If you need more memory, try setting android:largeHeap="true" in your manifest.
On relatively new Android versions (from 3.0, if I recall correctly) with hardware acceleration, the canvas.drawBitmap() method does not actually draw anything (and neither do dispatchDraw(), draw(), and onDraw()). Instead, it creates a record in a display list which:
Might be cached for an indefinite amount of time.
Might be (and will be) drawn in the future, not right away. It is not exactly asynchronous for now; it is simply executed later on the same thread.
Those two points, I think, are the answer to your question.
Alternatively, you can disable hardware acceleration for your view/window and see if your approach works; see the snippet below.
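For example, assuming the drawing happens in a custom View, you can force just that view into software rendering (setLayerType is available from API 11):

// Disables hardware acceleration for this view only; the rest of the window stays accelerated.
myCustomView.setLayerType(View.LAYER_TYPE_SOFTWARE, null);

Or set android:hardwareAccelerated="false" on the activity or application in the manifest to disable it for the whole window.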
For further reading:
http://android-developers.blogspot.de/2011/03/android-30-hardware-acceleration.html
http://developer.android.com/guide/topics/graphics/hardware-accel.html#model
Related
When loading OpenGL texture data on Android, to cope with the difference in coordinate systems between Android and OpenGL, the typical pattern I see is to flip the Bitmap before uploading it:
Bitmap original = BitmapFactory.decodeStream(...);
Matrix flip = new Matrix();
flip.postScale(1f, -1f);
Bitmap toUpload = Bitmap.createBitmap(original, ..., flip, true);
original.recycle();
Unfortunately, for a brief amount of time the memory required for these bitmaps is doubled, because both the original and the flipped version are resident. This is problematic for very large images such as texture atlases.
Is there a clever way to avoid this doubling? e.g., manipulating the original in place, or loading and transforming in a single step? Of course I can always manipulate the source data (i.e., pre-flip the images) or texture mapping (i.e., invert the V coordinates) but I'd prefer to tackle this issue during image load time as a way of separating concerns.
As an extreme example, working with mobile VR, the image and video source content is usually in some arbitrary format, depending on what tool was used to produce it.
VR content is so large that transforming the image pixels (like flipping) is not practical. It is far more efficient to just insert transforms into the GL pipeline.
Transforming the vertices and UV mappings of the 3D objects does not, as you say, preserve a nice clean separation of concerns, but it is by far the optimal way to resolve image formatting problems. Yes, it can get ugly inserting customized transforms into the pipeline.
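For instance, rather than flipping the bitmap, the V coordinates of the quad can be flipped in the texture-coordinate array; a sketch, assuming a simple full-quad layout (your own vertex order may differ):

// Unflipped UVs for a textured quad would be (0,0) (1,0) (0,1) (1,1);
// swapping the V values samples the bitmap upside down, so no pixel copy is needed.
float[] flippedUvs = {
    0f, 1f,
    1f, 1f,
    0f, 0f,
    1f, 0f,
};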
Of course if you have the option, the images can be pre-transformed in an editor, specifically for your app, to keep your pipeline nice and clean.
Take an application containing a GLSurfaceView which loads several separate images on start (GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0); for each image).
This application will then have to call gl.glBindTexture(GL10.GL_TEXTURE_2D, <texture id int>) before it can proceed to draw a different texture onto a "texturable square" GL object.
Is it recommended to instead load one bitmap containing all the textures (i.e. a sprite sheet), and then create an array of "texturable squares" that each map a different area of the single giant image, so that gl.glBindTexture(...) only needs to be called once...?
Or perhaps is there no significant difference between the two techniques?
As far as I know, once a texture has been loaded via texImage2D, binding it is simply a matter of pointing the native OpenGL library at the correct preloaded texture, so performance-wise it shouldn't be costly.
However, you raise a good option which you should probably consider regardless of performance issues.
Normally, the textures you need aren't naturally power-of-two in size, but are padded up to those dimensions anyway because of OpenGL's requirements. This often results in very wasteful memory allocation. Utilising "sprite sheets", as you put it, can help save the time and memory of loading multiple bitmaps into textures which are usually larger than the parts you'll be rendering anyway.
For this reason I would recommend using sprite sheets: it saves calls to texImage2D (which are quite costly), and potentially saves memory as well. As long as you properly manage the texture coordinates when switching between the objects you want to render (see the sketch below), this is the approach I would go with.
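A rough sketch of that coordinate management for a sprite sheet laid out as a uniform grid (the grid layout and helper name are illustrative):

// Returns {u0, v0, u1, v1} for cell (col, row) in a cols x rows sprite sheet.
static float[] atlasUv(int col, int row, int cols, int rows) {
    float cellW = 1f / cols;
    float cellH = 1f / rows;
    return new float[] { col * cellW, row * cellH, (col + 1) * cellW, (row + 1) * cellH };
}
// Bind the sheet texture once with gl.glBindTexture(), then give each
// "texturable square" the UVs returned above instead of rebinding per image.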
I would like to create a Canvas instance that is too big to be backed by a heap-memory Bitmap, let's say 5000x5000 pixels (approx. 95MB). I would like this very large Canvas to send all the various draw operations directly to a bitmap file. Unfortunately the Bitmap class in Android is marked final, so I can't provide my own implementation. Does anyone have an idea if and how this might be accomplished? I'm not very interested in performance; 10 seconds to perform a few dozen draw operations is fine. The goal is to not get out-of-memory errors.
There's no facility providing the functionality you are asking for, and even if there were, doing such operations against a file would give horrendous performance.
Probably the only reasonable way is to store just the drawing operations, and create a Canvas that is the same size as the device screen to serve as a "window" into the whole 5000x5000 pixel canvas. For a detailed explanation see my answer to a related question here: Android - is there a possibility to make infinite canvas?
Here is an idea I had that I think could theoretically work, but would probably require far too much effort:
Create a subclass of Canvas that contains many smaller Canvas objects inside it. These would represent tiles of the overall Canvas. These tiles should be small enough to fit in memory at least one at a time. Create one file for each inner tile Canvas and use it to store uncompressed pixel data from a Buffer.
When a draw operation occurs on the overall Canvas, figure out which tiles need to be drawn to. One at a time, read the file for that tile into a Bitmap in memory, perform the (possibly clipped) draw, then save the Bitmap data back to the file and close it.
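A rough sketch of that per-tile step, assuming hypothetical readTile()/writeTile() helpers that stream raw pixel data to and from one file per tile, and a DrawOp callback standing in for a recorded draw operation:

// tileSize is the edge length of one square tile, in pixels.
void drawToHugeCanvas(RectF dirtyArea, DrawOp op) {
    int firstCol = (int) (dirtyArea.left / tileSize), lastCol = (int) (dirtyArea.right / tileSize);
    int firstRow = (int) (dirtyArea.top / tileSize),  lastRow = (int) (dirtyArea.bottom / tileSize);
    for (int row = firstRow; row <= lastRow; row++) {
        for (int col = firstCol; col <= lastCol; col++) {
            Bitmap tile = readTile(col, row);               // load this tile's pixels into memory
            Canvas c = new Canvas(tile);                    // requires a mutable bitmap
            c.translate(-col * tileSize, -row * tileSize);  // move into this tile's coordinate space
            op.draw(c);                                     // the draw is clipped to the tile's bounds
            writeTile(col, row, tile);                      // save the pixels back to the tile's file
            tile.recycle();
        }
    }
}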
Theoretically it sounds possible, realistically it sounds like too much work.
I am currently working on a game for Android. The game is a real time strategy game that uses tiles or cells and has actors (units, trees, rock, etc.) that will occupy those cells.
In my game, cells and actors are objects that have their own draw method. For fear of speed problems, I currently decode the resource in the map class and feed the decoded image to the object's draw method.
Like this:
_waterCell = BitmapFactory.decodeResource(context.getResources(), R.drawable.watertile);
...
_row.get(Cell).draw(_waterCell, canvas, _paint, _X, _Y);
This is fine for now, considering I have few cell types and only one actor, but how would I go about this when I have hundreds of images to decode, without having to decode the resource every time I draw the object? And if I were to decode all of my resources in the map class, would it cause out-of-memory errors?
I cannot say whether decoding the resources would cause an out-of-memory error, since I don't know their size. However, the decoding of a resource itself, assuming there is enough memory to handle whatever is being decoded, should not cause an out-of-memory exception. The more likely result is that, depending on how frequently you need to decode resources, it will just slow your app down.
Have you considered using a cache? If you're decoding resources you could use a simple LruCache, as demonstrated in the Android docs, to avoid repeating this process. Decode your resources as needed, store them in the cache, and check the cache for their presence before decoding again; you can probably save yourself a lot of time.
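A minimal sketch of that pattern, keyed by resource id (the cache size is an arbitrary assumption; getByteCount() needs API 12, otherwise use getRowBytes() * getHeight()):

// Cache roughly 1/8th of the available heap, measured in kilobytes.
int cacheKb = (int) (Runtime.getRuntime().maxMemory() / 1024 / 8);
LruCache<Integer, Bitmap> bitmapCache = new LruCache<Integer, Bitmap>(cacheKb) {
    @Override
    protected int sizeOf(Integer resId, Bitmap bmp) {
        return bmp.getByteCount() / 1024;          // entry size in kilobytes
    }
};

Bitmap getBitmap(Resources res, int resId) {
    Bitmap bmp = bitmapCache.get(resId);
    if (bmp == null) {                             // decode only on a cache miss
        bmp = BitmapFactory.decodeResource(res, resId);
        bitmapCache.put(resId, bmp);
    }
    return bmp;
}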
I'm trying to determine the "best" way to scroll a background comprised of tiled Bitmaps on an Android SurfaceView. I've actually been successful in doing so, but wanted to determine if there is a more efficient technique, or if my technique might not work on all Android phones.
Basically, I create a new, mutable Bitmap slightly larger than the dimensions of my SurfaceView. Specifically, my Bitmap accommodates an extra line of tiles on the top, bottom, left, and right. I create a Canvas around my new Bitmap and draw my tiles to it. Then, I can scroll up to one tile in any direction simply by drawing a "SurfaceView-sized" subset of my background Bitmap to the SurfaceHolder's canvas.
My questions are:
Is there a better bit blit technique than drawing a background bitmap to the canvas of my SurfaceHolder?
What is the best course of action when I scroll to the edge of my background bitmap, and wish to shift the map one tile length?
As I see it, my options are to:
a. Redraw all the tiles in my background individually, shifted a tile length in one direction. (This strikes me as being inefficient, as it would entail many small Bitmap draws).
b. Simply make the background bitmap so large that it will encompass the entire scrolling world. (This could require an extremely large bitmap, yet it would only need to be created once.)
c. Copy the background bitmap, draw it onto itself but shifted a tile length in the direction we are scrolling, and draw the newly revealed row or column of tiles with a few individual bitmap draws. (Here I am making the assumption that one large bitmap draw is more efficient than multiple small ones covering the same expanse.)
Thank you for reading all this, and I would be most grateful for any advice.
I originally used a technique similar to yours in my 'Box Fox' platformer game and RTS, but found it caused quite noticeable delays if you scroll enough that the bitmap needs to be redrawn.
My current method in these games is similar to your Option C. I draw my tiled map layers onto a grid of big bitmaps (about 7x7) covering an area larger than the screen. When the user scrolls onto the edge of this grid, I shift all the bitmaps in the grid over (moving the end bitmaps to the front), change the offset of the grid, and then just redraw the new edge.
I'm not quite sure which is faster with software rendering (your Option C or my current method). I think my method may be faster if you ever switch to OpenGL rendering, as you wouldn't have to upload as much texture data to the graphics card as the user scrolls.
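Very roughly, the horizontal shift in that grid scheme looks something like this (names are illustrative, and the vertical case is analogous):

// grid is a GRID x GRID array of large bitmaps covering an area bigger than the screen.
void shiftGridRight() {
    for (int row = 0; row < GRID; row++) {
        Bitmap reused = grid[row][0];                            // leftmost bitmap falls off the grid
        System.arraycopy(grid[row], 1, grid[row], 0, GRID - 1);  // shift the row left by one slot
        grid[row][GRID - 1] = reused;                            // reuse it as the new rightmost slot
    }
    gridOriginX += cellWidthPx;                                  // the grid now covers an area further right
    redrawColumn(GRID - 1);                                      // repaint only the newly exposed tiles
}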
I wouldn't recommend Option A because, as you suggest, the hundreds of small bitmap draws for a tiled map kill performance, and it gets pretty bad with larger screens. Option B may not even be possible with many devices, as it's quite easy to get a 'bitmap size exceeds VM budget' error, since the heap space limit is set quite low on many phones.
Also, if you don't need transparency on your map/background, try to use RGB_565 bitmaps, as they're quite a lot faster to draw in software and use up less memory.
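For example, when decoding the tile images (the resource id here is just a placeholder):

BitmapFactory.Options opts = new BitmapFactory.Options();
opts.inPreferredConfig = Bitmap.Config.RGB_565;   // 2 bytes per pixel, no alpha channel
Bitmap tile = BitmapFactory.decodeResource(getResources(), R.drawable.grass_tile, opts);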
By the way, I get capped at 60fps on both my phone and 10" tablet in my RTS with the method above, rendered in software, and can scroll across the map smoothly. So you can definitely get some decent speed out of the android software renderer. I have a 2D OpenGL wrapper built for my game but haven't yet needed to switch to it.
My solution in a mapping app relies on a two-level cache. First, tile objects are created with a bitmap and a position; these are either stored on disk or in a Vector (synchronisation is important for me, with multithreaded HTTP comms all over the place).
When I need to draw the background I detect the visible area and get a list of all the tiles I need (this is heavily optimised as it gets called so often) then either pull the tiles from memory or load from disk. I get very reasonable performance even on slightly older phones and nice smooth scrolling with no hiccups.
As a caveat, I allow tiles not to be ready and swap them with a loading image. I don't know if this would work for you, but if you have all the tiles bundled in the APK you should be fine.
I think one efficient way to do this would be to use canvas.translate.
On the first draw, the entire canvas would have to be filled with tiles. Newer Android phones can do this easily and quickly.
When the background is scrolled I would use canvas.translate(scrollX, scrollY), then I would draw the tiles individually, one by one, to fill the gaps, BUT I would use
canvas.drawBitmap(tileImage[i], fromRect, toRect, null), which would only draw the parts of the tiles that need to be shown, by setting fromRect and toRect to correspond to scrollX and scrollY.
So everything would be done with arithmetic, and no new bitmaps would be created for the background, which saves some memory.
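One way the fromRect/toRect arithmetic could look, leaving aside the canvas.translate bookkeeping and assuming the tile's world position (tileX, tileY), the tile size, and the view size are known:

// Intersect tile i with the visible window [scrollX, scrollX + viewW) x [scrollY, scrollY + viewH).
int left   = Math.max(tileX, scrollX);
int top    = Math.max(tileY, scrollY);
int right  = Math.min(tileX + tileSize, scrollX + viewW);
int bottom = Math.min(tileY + tileSize, scrollY + viewH);
if (right > left && bottom > top) {
    Rect fromRect = new Rect(left - tileX, top - tileY, right - tileX, bottom - tileY);          // part of the tile bitmap
    Rect toRect   = new Rect(left - scrollX, top - scrollY, right - scrollX, bottom - scrollY);  // where it lands on screen
    canvas.drawBitmap(tileImage[i], fromRect, toRect, null);
}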
EDIT:
However there is a problem using canvas.translate with SurfaceView: it is double-buffered, and canvas.translate will translate only one buffer but not the second one at the same time, so this alternation of buffers would have to be taken into account when relying on the SurfaceView to preserve the drawn image.
I am using your original method to draw a perspective scrolling background. I came up with this idea entirely by accident a few days ago while messing around with an easy technique to do a perspective scrolling star field simulation. The app can be found here: Aurora2D.apk
Just tilt your device or shake it to make the background scroll (excuse the 2 bouncing sprites - they are there to help me with an efficient method to display trails). Please let me know if you find a better way to do it, since I have coded several different methods over the years and this one seems to be superior. Simply mail me if you want to compare code.