I am attempting to build a drawing/painting app using TextureView on Android. I want to support a drawing surface of up to 4096x4096 pixels, which seems reasonable for my minimum target (and test) device, a 2013 Google Nexus 7, which has a quad-core CPU and 2 GB of memory.
One of my requirements is that the drawing view must sit inside a container view that allows it to be zoomed in and out and panned, all of which is custom code I have written (think UIScrollView from iOS).
I've tried using a regular View (not TextureView) with OnDraw, and performance was absolutely horrible - less than one frame per second. This happened even when I called Invalidate(rect) with only the rect that changed. I tried turning off hardware acceleration for the view, but then nothing rendered; I assume 4096x4096 is too big for software rendering.
I then tried TextureView, and performance is a little better - about 5-10 frames per second (still terrible, but an improvement). The user draws into a bitmap, which a background thread later draws into the texture. I'm using Xamarin, but hopefully the code makes sense to Java people.
private void RunUpdateThread()
{
    try
    {
        TimeSpan sleep = TimeSpan.FromSeconds(1.0f / 60.0f);
        while (true)
        {
            lock (dirtyRect)
            {
                if (dirtyRect.Width() > 0 && dirtyRect.Height() > 0)
                {
                    Canvas c = LockCanvas(dirtyRect);
                    if (c != null)
                    {
                        c.DrawBitmap(bitmap, dirtyRect, dirtyRect, bufferPaint);
                        dirtyRect.Set(0, 0, 0, 0);
                        UnlockCanvasAndPost(c);
                    }
                }
            }
            Thread.Sleep(sleep);
        }
    }
    catch
    {
        // exit silently when the thread is interrupted
    }
}
If I pass null to LockCanvas instead of a rect, performance is great at 60 fps, but the contents of the TextureView flicker and get corrupted, which is disappointing. I would have thought it would simply use an OpenGL frame buffer / render texture underneath, or at least offer an option to preserve the contents.
Are there any other options short of doing everything in raw OpenGL in Android for high performance drawing and painting on a surface that is preserved in between draw calls?
First off, if you want to understand what's going on under the hood, you need to read the Android Graphics Architecture document. It's long, but if you sincerely want to understand the "why", it's the place to start.
About TextureView
TextureView works like this: it has a Surface, which is a queue of buffers with a producer-consumer relationship. If you're using software (Canvas) rendering, you lock the Surface, which gives you a buffer; you draw on it; then you unlock the Surface, which sends the buffer to the consumer. The consumer in this case is in the same process, and is called SurfaceTexture or (internally, more aptly) GLConsumer. It converts the buffer into an OpenGL ES texture, which is then rendered to the View.
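As a conceptual model only (these are not the real framework classes), the lock/draw/unlock cycle behaves like a small pool of buffers moving between a producer and a consumer. Here is a hypothetical sketch in plain Java, with `int[]` arrays standing in for graphics buffers:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BufferQueueDemo {
    // One produce/consume round trip through a two-deep buffer queue;
    // returns the pixel value the consumer observed.
    public static int roundTrip() {
        try {
            ArrayBlockingQueue<int[]> free = new ArrayBlockingQueue<>(2);   // unlocked buffers
            ArrayBlockingQueue<int[]> filled = new ArrayBlockingQueue<>(2); // posted buffers
            free.put(new int[4]);
            free.put(new int[4]);

            int[] buf = free.take();     // "lockCanvas": dequeue a free buffer
            buf[0] = 0xFF00FF00;         // draw into it
            filled.put(buf);             // "unlockCanvasAndPost": hand it to the consumer

            int[] frame = filled.take(); // consumer latches the buffer as a texture
            int pixel = frame[0];
            free.put(frame);             // buffer is recycled for the next lock
            return pixel;
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The key point the model captures: each lock hands you a different buffer from the pool, which is why the previous frame's pixels are not automatically there when you draw.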
If you turn off hardware acceleration, GLES is disabled, and TextureView cannot do anything. This is why you got nothing when you turned hardware acceleration off. The documentation is very specific: "TextureView can only be used in a hardware accelerated window. When rendered in software, TextureView will draw nothing."
If you specify a dirty rect, the software renderer will memcpy the previous contents into the frame after rendering is complete. I don't believe it sets a clip rect, so if you call drawColor(), you will fill the entire screen, and then have those pixels overwritten. If you aren't currently setting a clip rect, you may see some performance benefit from doing so. (I didn't check the code though.)
The dirty rect is an in-out parameter. You pass the rect you want in when you call lockCanvas(), and the system is allowed to modify it before the call returns. (In practice, the only reason it would do this would be if there were no previous frame or the Surface were resized, in which case it would expand it to cover the entire screen. I think this would have been better handled with a more direct "I reject your rect" signal.) You're required to update every pixel inside the rect you get back. You are not allowed to alter the rect, which you appear to be trying to do in your sample -- whatever is in the dirty rect after lockCanvas() succeeds is what you're required to draw on.
I suspect the dirty rect mis-handling is the source of your flickering. Sadly, this is an easy mistake to make, as the behavior of the lockCanvas() dirtyRect arg is only documented in the Surface class itself.
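To make the in-out contract concrete, here is a hypothetical sketch of the one case where the system replaces your rect. The `Rect` class and `resolve()` helper are illustrative stand-ins, not framework APIs:

```java
public class DirtyRectContract {
    // Minimal stand-in for android.graphics.Rect.
    public static final class Rect {
        public final int left, top, right, bottom;
        public Rect(int l, int t, int r, int b) { left = l; top = t; right = r; bottom = b; }
    }

    // Model of lockCanvas(dirtyRect): the system may replace the rect you
    // asked for (e.g. when there is no previous frame whose pixels it could
    // preserve), and you must then redraw everything in the returned rect.
    public static Rect resolve(Rect requested, boolean havePreviousFrame,
                               int surfaceW, int surfaceH) {
        if (!havePreviousFrame) {
            return new Rect(0, 0, surfaceW, surfaceH); // expanded to the full surface
        }
        return requested; // honored as-is
    }
}
```

The practical rule: after lockCanvas() returns, draw on exactly the rect you got back, not the rect you passed in.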
Surfaces and buffering
All Surfaces are double- or triple-buffered. There is no way around this -- you cannot read and write simultaneously and not get tearing. If you want a single buffer that you can modify and push when desired, that buffer will need to be locked, copied, and unlocked, which creates stalls in the composition pipeline. For best throughput and latency, flipping buffers is better.
If you want the lock-copy-unlock behavior, you can write it yourself (or find a library that does it), and it will be as efficient as it would be if the system did it for you (assuming you're good with blit loops). Draw to an off-screen Canvas and blit the bitmap, or to an OpenGL ES FBO and blit the buffer. You can find an example of the latter in Grafika's "record GL app" Activity, which has a mode that renders once off-screen and then blits twice (once for display, once for video recording).
More speed and such
There are two basic ways to draw pixels on Android: with Canvas, or with OpenGL. Canvas rendering to a Surface or Bitmap is always done in software, while OpenGL rendering is done with the GPU. The only exception is that, when rendering to a custom View, you can opt to use hardware acceleration, but this does not apply when rendering to the Surface of a SurfaceView or TextureView.
A drawing or painting app can either remember the drawing commands, or just throw pixels at a buffer and use that as its memory. The former allows deeper "undo"; the latter is much simpler and performs increasingly better as the amount of content to render grows. It sounds like you want the latter, so blitting from off-screen makes sense.
Most mobile devices have a hardware limitation of 4096x4096 or smaller for GLES textures, so you won't be able to use a single texture for anything larger. You can query the size limit value (GL_MAX_TEXTURE_SIZE), but you may be better off with an internal buffer that is as large as you want, and just render the portion that fits on screen. I don't know what the Skia (Canvas) limitation is offhand, but I believe you can create much larger Bitmaps.
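If you do go the tiled-texture route, the arithmetic is simple ceiling division; this sketch assumes you have already queried GL_MAX_TEXTURE_SIZE:

```java
public class TileMath {
    // Number of tiles along one axis needed to cover `size` pixels with tiles
    // no larger than `maxTexture` pixels (ceiling division).
    public static int tilesNeeded(int size, int maxTexture) {
        return (size + maxTexture - 1) / maxTexture;
    }
}
```

An 8192x8192 canvas on a device reporting a 4096 limit would need a 2x2 grid of textures, for example.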
Depending on your needs, a SurfaceView may be preferable to a TextureView, as it avoids the intermediate GLES texture step. Anything you draw on the Surface goes directly to the system compositor (SurfaceFlinger). The downside of this approach is that, because the Surface's consumer is not in-process, the View system has no opportunity to handle the output, so the Surface is an independent layer. (For a drawing program this could be beneficial - the image being drawn is on one layer, and your UI is on a separate layer on top.)
FWIW, I haven't looked at the code, but Dan Sandler's Markers app might be worth a peek (source code here).
Update: the corruption was identified as a bug and fixed in Android "L".
UPDATE: I ditched TextureView and now use an OpenGL view, where I call glTexSubImage2D to update changed pieces of a render texture.
OLD ANSWER
I ended up tiling TextureViews in a 4x4 grid. Each frame, depending on the dirty rect, I refresh the appropriate TextureViews; on any view that is not updated that frame, I call Invalidate.
Some devices, such as the Moto G, have an issue where the double buffering is corrupted for one frame. You can work around it by calling lockCanvas twice when the parent view has its onLayout called.
private void InvalidateRect(int l, int t, int r, int b)
{
    dirtyRect.Set(l, t, r, b);
    foreach (DrawSubView v in drawViews)
    {
        if (Rect.Intersects(dirtyRect, v.Clip))
        {
            v.RedrawAsync();
        }
        else
        {
            v.Invalidate();
        }
    }
    Invalidate();
}
protected override void OnLayout(bool changed, int l, int t, int r, int b)
{
    for (int i = 0; i < ChildCount; i++)
    {
        View v = GetChildAt(i);
        v.Layout(v.Left, v.Top, v.Right, v.Bottom);
        DrawSubView sv = v as DrawSubView;
        if (sv != null)
        {
            sv.RedrawAsync();
            // why are we re-drawing, you ask? because of double buffering bugs in Android :)
            PostDelayed(() => sv.RedrawAsync(), 50);
        }
    }
}
Related
I have to simulate the motion of some objects, so I have created a SurfaceView on which I draw them with a dedicated thread. Every loop I call canvas.drawColor() to clear the previous object positions and draw the new state. Everything works fine and the frame rate is decent.
The problem: what if I want to draw the trails of the objects' trajectories? In that case I have to memorize the positions of every object and, on every loop, redraw all of the past positions - hundreds of points. That keeps the frame rate low, and it seems absurd that the only way is to redraw the same points every time! Is there a way to keep the points painted on the canvas rather than erasing them with canvas.drawColor() on every loop (which is necessary for other tasks)?
Sort of.
The SurfaceView's Surface uses multiple buffers. If it's double-buffered, and you don't clear the screen every frame, then you'll have the rendering from all the odd-numbered frames in one buffer, and all the even-numbered frames in the other. Every time you draw a new frame, it'll flip to the other buffer, and half of your positions will disappear (looks like everything is vibrating).
You could, on each frame, draw each object at its current position and its previous position. That way both frames would get every object position.
The practical problem with this idea is that you don't know how many buffers the Surface is using. If it's triple-buffered (which is very possible) then you would need to draw the current, previous, and previous-previous positions to ensure that each buffer had every position. Higher numbers of buffers are theoretically possible but unlikely.
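If you did want to try it anyway, the bookkeeping amounts to keeping the last N positions per object, where N is your guess at the buffer count. A minimal sketch in plain Java (no Android types; `TrailHistory` is a hypothetical helper):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

public class TrailHistory {
    private final int bufferCount;                 // guessed swapchain depth (2 or 3)
    private final ArrayDeque<int[]> recent = new ArrayDeque<>();

    public TrailHistory(int bufferCount) { this.bufferCount = bufferCount; }

    // Record this frame's position and return every position that must be
    // drawn this frame, so each of the last `bufferCount` swapchain buffers
    // ends up containing every position at least once.
    public List<int[]> positionsToDraw(int x, int y) {
        recent.addLast(new int[]{x, y});
        while (recent.size() > bufferCount) {
            recent.removeFirst();                  // older positions already landed in every buffer
        }
        return new ArrayList<>(recent);
    }
}
```

As noted below, though, guessing the buffer count wrong leaves stale pixels in some buffers, which is one reason this approach is fragile.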
Having said all this, you don't want to pursue this approach for a simple reason: when you lock the canvas, you are agreeing to modify every pixel in the dirty area. If you don't, the results are unpredictable, and your app could break weirdly in a future version of the operating system.
The best way to do what you want is to draw onto an off-screen Bitmap and then blit the entire thing onto the Surface. It's a huge waste at first, since you're copying a screen-sized bitmap for just a couple of objects, but very shortly the reduced draw calls will start to win.
Create a Bitmap that's the same size as the Surface, then create a Canvas using the constructor that takes a Bitmap. Do all your drawing through this Canvas. When you want to update the screen, use a drawBitmap() method on the SurfaceView's Canvas.
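The pattern looks like this. The sketch below uses desktop Java's BufferedImage/Graphics2D as stand-ins for Android's Bitmap/Canvas, but the structure is the same: draw into a persistent off-screen image, then blit the whole thing to the Surface each frame:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

public class OffscreenBlit {
    // Draw into a persistent off-screen image, then blit the whole thing onto
    // a second image standing in for the Surface's current buffer.
    public static BufferedImage render(int w, int h) {
        BufferedImage offscreen = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = offscreen.createGraphics();
        g.setColor(Color.RED);
        g.fillRect(10, 10, 5, 5);            // accumulated drawing lives here between frames
        g.dispose();

        BufferedImage surface = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        Graphics2D sg = surface.createGraphics();
        sg.drawImage(offscreen, 0, 0, null); // per-frame 1:1 blit (drawBitmap equivalent)
        sg.dispose();
        return surface;
    }
}
```

On Android the off-screen image is a Bitmap wrapped by a Canvas, and the per-frame blit is a drawBitmap() call on the Canvas you get from lockCanvas().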
I recommend against using software scaling due to the performance cost - make sure you're doing a 1:1 copy. You can use the setFixedSize() call on the SurfaceView's SurfaceHolder to make the Surface a specific size if that's helpful - on devices with high pixel densities it can improve your frame rates and reduce battery usage (blog post here).
I'm trying to create a bullet-hell game and I've run into a bit of trouble: I can't get more than 17 fps once there are about 500 bullets. The update logic takes around 1-4 ms for all of them, while the render code takes around 40 ms.
For now my code is
private void drawEntities(Canvas canvas) {
    for (HashMap<UUID, Spatial> h : spatialList) {
        for (Spatial spatial : h.values()) {
            spatial.render(canvas);
            if (spatial.life > 0)
                spatial.life--;
            else if (spatial.life == 0)
                engine.deleteEntity(spatial.owner);
        }
    }
}
spatialList is an ArrayList where each index is a zLevel.
The spatial which displays the actual bullet is
public void render(Canvas canvas) {
    float angle = (float) (vel.getAngle() * (180 / Math.PI));
    matrix.reset();
    matrix.setTranslate(pos.x - bullet.getWidth() / 2, pos.y - bullet.getHeight() / 2);
    matrix.postRotate(angle + 90, pos.x, pos.y);
    canvas.drawBitmap(bullet, matrix, paint);
    canvas.drawCircle(pos.x, pos.y, col.getRadius(), paint);
}
I can provide more code, but these seem to be the main issue. I've tried everything I can think of and can't find much else online. The only fix I can come up with is to switch from a SurfaceView to a GLSurfaceView, but I suspect there is a better way and I'm just writing bad code.
Edit: I noticed my timer was off; after removing the drawCircle and running it again, I get ~40 ms at around 500 bullets, which is still too slow for reasonable performance.
TL;DR: 500 entities = 17 fps.
You may be limited by pixel fill rate. How large (in pixels) is the display on your test device?
One simple thing to play with is to use setFixedSize() to reduce the size of the SurfaceView's Surface. That will reduce the number of pixels you're touching. Example here, video here, blog post here.
It's generally a good idea to do this, as newer devices seem to be racing toward absurd pixel counts. A full-screen game that performs all rendering in software is going to struggle on a 2560x1440 display, and flail badly at 4K. "Limiting" the game to 1080p and letting the display scaler do the heavy lifting should help. Depending on the nature of the game you could set the resolution even lower with no apparent loss of quality.
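Choosing the fixed size is just aspect-ratio arithmetic. This hypothetical helper caps the Surface at a maximum height and lets the display scaler do the rest:

```java
public class FixedSize {
    // Given the physical display size, pick a Surface size capped at
    // `maxHeight` rows while preserving aspect ratio; passing the result to
    // SurfaceHolder.setFixedSize() lets the hardware scaler stretch it back
    // to full screen.
    public static int[] cappedSize(int displayW, int displayH, int maxHeight) {
        if (displayH <= maxHeight) {
            return new int[]{displayW, displayH}; // already small enough, keep 1:1
        }
        int w = displayW * maxHeight / displayH;  // scale width by the same factor
        return new int[]{w, maxHeight};
    }
}
```

For example, capping a 2560x1440 display at 1080 rows yields a 1920x1080 Surface, which is nearly half the pixels to touch in software.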
Another thing to try is to eliminate the drawBitmap() call, check your timings, then restore it and eliminate the drawCircle() call, to see if one or the other is chewing up the bulk of the time.
You may find switching to OpenGL ES not so bad. The "hardware scaler exerciser" activity in Grafika (from the video linked above) shows some simple bitmap rendering. Replace drawCircle() with a scaled bitmap and you might be most of the way done. (Note it uses SurfaceView, not GLSurfaceView, for GLES in that activity.)
I had the same issue. The problem is solved to a large extent if you do all the drawing on a bitmap framebuffer and then draw the framebuffer to the canvas; drawing directly on the canvas incurs several overheads. I looked around and found this tutorial:
http://www.kilobolt.com/day-6-the-android-game-framework-part-ii.html
Look at the implementations of AndroidGraphics and AndroidFastRenderView, and see how he uses AndroidGraphics to do all the actual drawing into a buffer and then paints that buffer to the canvas in AndroidFastRenderView.
Try the performance when you remove the rotate and/or drawCircle parts from your render() method. Both of them might be quite time consuming as they probably contain sin() / cos() calculations.
If that helps, you'll have to figure out how to replace them with something faster, if not, well...
I'm making an element of a game where a man shoots a rocket at a target, and then the target explodes. I'm doing this with a canvas and threads, always redrawing the whole screen.
Can it be done another way? If there is a lot of action, the game will eat a lot of resources, so I'm looking for optimizations: how can I animate objects without redrawing the whole screen?
If you are using a SurfaceView or TextureView, you can lock part of the screen and redraw just that part. (I recommend TextureView over SurfaceView.)
Canvas android.view.TextureView.lockCanvas(Rect dirty)
public Canvas lockCanvas (Rect dirty)
Added in API level 14.
Just like lockCanvas() but allows specification of a dirty rectangle. Every pixel within that rectangle must be written; however, pixels outside the dirty rectangle will be preserved by the next call to lockCanvas(). This method can return null if the underlying surface texture is not available (see isAvailable()) or if the surface texture is already connected to an image producer (for instance: the camera, OpenGL, a media player, etc.).
Just a suggestion.
Why don't you use a gaming framework such as libGDX?
It takes you away from pure Android code, but it lets you focus on your game rather than memory management (and your game will be playable on other platforms, too).
In case you like the idea, there are also other tools (Unity, GameSalad, etc.).
I want to crop an object in my OpenGL ES application; it should be done in the following manner:
The left is the initial image, middle is the stencil buffer matrix, and right is the result.
From what I have read here (Discard), using the stencil might have performance issues,
and since the model that will be clipped is going to be rotated and translated, I honestly don't know whether the model will be clipped in the wrong places after these transformations.
Will it?
So I thought about the depth buffer.
Again, an example:
(This photo was taken from this question.)
Assume that the black square is movable, and might not be just a simple square, but a complex UIBezierPath.
I was wondering how to use the depth buffer so that everything drawn outside the square (or UIBezierPath) is clipped out - that is, adjusting the z values of the excluded pixels to some threshold value so they won't be shown on screen.
So to summarise:
1) Is using the stencil buffer going to be as expensive as stated?
2) Is it possible to use the stencil on a rotated and translated object so that it will always be clipped correctly?
3) Using the depth buffer, is it possible to find out what is inside and what is outside the square (or UIBezierPath), and how? By masking it somehow?
4) Which is the better approach?
I know it's a lot to answer, but since the questions all relate to each other, I thought they were better asked together.
The stencil buffer is the way to go here. The discard answer you refer to is about the discard operation in fragment shaders, which is very expensive for tile-based deferred rendering GPUs (i.e. basically every mobile GPU).
Using the stencil buffer, however, is very cheap: it is present on-chip for each tile and does not interfere with deferred rendering.
To summarise:
1) No.
2) Yes - the stencil buffer operates in 2D over the whole viewport on the transformed vertices, so it will clip the cube after its model transforms have been applied.
3) Yes, but needless to say this is complicated; it sounds somewhat similar to shadow volumes.
4) Use the stencil buffer.
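If it helps to see the two-pass idea concretely, here is a tiny CPU simulation of a 1-bit stencil in plain Java (purely illustrative - on the GPU this happens on-chip per tile): pass 1 rasterizes the clip shape into the stencil mask, and pass 2 keeps color fragments only where the stencil passes.

```java
public class StencilDemo {
    // CPU model of a 1-bit stencil test. Pass 1 rasterizes the clip shape
    // (a square here) into the stencil mask; pass 2 writes color only where
    // the mask is set, discarding everything outside the shape.
    public static int[] draw(int w, int h, int sx, int sy, int sSize, int color) {
        boolean[] stencil = new boolean[w * h];
        for (int y = sy; y < sy + sSize; y++) {
            for (int x = sx; x < sx + sSize; x++) {
                stencil[y * w + x] = true;       // stencil value written
            }
        }

        int[] pixels = new int[w * h];           // color buffer, cleared to 0
        for (int i = 0; i < pixels.length; i++) {
            if (stencil[i]) {
                pixels[i] = color;               // fragment passes the stencil test
            }
        }
        return pixels;
    }
}
```

Because the mask is rebuilt from the (transformed) shape each frame, rotating or translating the clipped model poses no problem.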
I'm working on a camera app. I'm showing the preview image's luminosity histogram in a small rect (128x128 px) overlaid on the live preview.
Sometimes an ANR happened, so I started using Traceview to optimize my code (I'm doing some image manipulation on the fly, but it's very quick NEON asm & native code - no problem there).
Using Traceview I discovered that the Canvas.drawLine() method is terribly slow. I have to update the histogram 30 times per second in customView.onDraw(), drawing just 128 lines every frame. Incredibly, drawing 128 lines takes >8% CPU time (!!), while the entire native pass that converts the whole frame (720x480 YUV to ARGB_8888) takes <18%.
I tried drawing the histogram on a separate bitmap canvas and then drawBitmap()-ing it to the view's canvas, but the drawLine() calls still take a lot of CPU.
I'm looking for an idea to avoid drawLine()...
I just have to draw a small histogram from an int[128] normalized to 128.
Here's my customView.onDraw (more or less...)
@Override
protected void onDraw(Canvas canvas) {
    int size = 128;
    int y = pos_y + size;
    int x;
    for (int i = 0; i < size; i++) {
        if (histogram_data[i] > 1) {
            x = pos_x + i;
            // this is the slow call!!
            canvas.drawLine(x, y, x, y - histogram_data[i], paint_histogram);
        }
    }
}
You could try using a Path instead of individual lines.
http://developer.android.com/reference/android/graphics/Path.html
If you read HERE they say the following:
Animating large Paths
When Canvas.drawPath() is called on the hardware accelerated Canvas passed to Views, Android draws these paths first on the CPU and uploads them to the GPU. If you have large paths, avoid editing them from frame to frame, so they can be cached and drawn efficiently. drawPoints(), drawLines(), and drawRect/Circle/Oval/RoundRect() are more efficient - it's better to use them even if you end up using more draw calls.
So you can try calling drawLines() once for drawing multiple lines at the same time instead of calling drawLine() for each line separately.
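For the histogram case, that means packing all 128 bars into one float array in the {x0, y0, x1, y1, ...} layout that drawLines() expects, then issuing a single call. A sketch of the packing step (plain Java; `pack()` is a hypothetical helper, and it emits zero-height lines for empty buckets rather than skipping them):

```java
public class HistogramLines {
    // Pack one vertical line per histogram bucket into the {x0, y0, x1, y1, ...}
    // layout expected by Canvas.drawLines(float[]).
    public static float[] pack(int[] histogram, int posX, int posY) {
        float[] pts = new float[histogram.length * 4];
        int baseY = posY + histogram.length;        // matches y = pos_y + size above
        for (int i = 0; i < histogram.length; i++) {
            pts[i * 4]     = posX + i;              // x0
            pts[i * 4 + 1] = baseY;                 // y0 (baseline)
            pts[i * 4 + 2] = posX + i;              // x1 (vertical line)
            pts[i * 4 + 3] = baseY - histogram[i];  // y1 (bar top)
        }
        return pts;
    }
}
```

In onDraw() you would then call canvas.drawLines(pack(histogram_data, pos_x, pos_y), paint_histogram) once, instead of 128 separate drawLine() calls.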
For me personally, I had to draw 500+ lines, and I tried every available method for drawing lines on a Canvas: drawPath(), drawLine(), and drawLines(). With all of them I saw enormous lag on my phone, so the only real alternative was OpenGL ES. Here is an example I wrote using OpenGL for drawing a large number of lines; I tested it on my phone and it is smooth as butter 😄. The source code is written in Kotlin and also supports finger gestures, so you can scale, translate, and rotate the whole scene with your fingers. I have included all the basic shapes (rectangle, triangle, circle, line, polygon), and you can set shape coordinates either as values in [-1,1] or in the familiar way using pixel values. If you want to learn more about OpenGL, check this post by the Android team.
Here is the result of the OpenGL example I wrote:
Maybe you can add a new layer on top and draw these 128 lines only once, leaving the other pixels transparent.