I have a drawing app where the user can draw lines with their finger, adjust the color, thickness, etc. As the user is drawing, I am converting the amassed X/Y points from MotionEvent into SVG Paths, as well as creating Android Paths and then drawing the Android Paths to the screen via a Canvas, and committing the SVG Paths to the app's database.
I am following the model used in FingerPaint, in that the 'in progress' lines are drawn on the fly by repeated calls to invalidate() (and thus, onDraw()), and once the line is complete and a new line is started, the previous line(s) are drawn in onDraw() from the underlying Canvas Bitmap, with in progress lines again generating repeated re-draws.
This works fine in this application - until you start rotating the underlying Bitmap to compensate for device rotation, supporting the ability to 'zoom in' on the drawing surface and thus having to scale the underlying Bitmap, etc. So for example, with the device rotated and the drawing scaled in, when the user is drawing, we need to scale AND rotate our Bitmap in onDraw(), and this is absolutely crawling.
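For reference, the transform I apply in onDraw() looks roughly like this (a simplified sketch; mBitmap, mZoom, mRotationDegrees, mInProgressPath and the paints are my own fields):

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        float cx = getWidth() / 2f;
        float cy = getHeight() / 2f;

        canvas.save();
        // Compensate for device rotation and the current zoom level.
        canvas.rotate(mRotationDegrees, cx, cy);
        canvas.scale(mZoom, mZoom, cx, cy);

        // Committed lines live in the cached bitmap; the in-progress
        // line is drawn on top of it every pass.
        canvas.drawBitmap(mBitmap, 0, 0, mBitmapPaint);
        canvas.drawPath(mInProgressPath, mPaint);
        canvas.restore();
    }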
I've looked at a SurfaceView, but as this still uses the same Canvas mechanism, I'm not sure I'll see noticeable improvement... so my thoughts turn to OpenGL. I have read somewhere that OpenGL can do rotations and scaling essentially 'for free', and even seen rumors (third comment) that Canvas may be disappearing in future versions.
Essentially, I am a little stuck between the Canvas and OpenGL solutions... I have a 2D drawing app that seems to fit the Canvas model perfectly when in one state, as there are not constant re-draws going on like a game (for instance when the user is not drawing I don't need any re-drawing), but when the user IS drawing, I need the maximum performance necessary to do some increasingly complex things with the surface...
Would welcome any thoughts, pointers and suggestions.
OpenGL would be able to handle the rotations and scaling easily.
Honestly, you would probably need to learn a lot of OpenGL to do this, specifically related to the topics of:
Geometry
Lighting (or just disabling it)
Picking (selecting geometry to draw on it)
Pixel Maps
Texture Mapping
Mipmapping
Also, learning OpenGL for this might be overkill, and you would have to be pretty good at it to make it efficient.
Instead, I would recommend using the graphics components of a game library built on top of OpenGL, such as the following (a short libgdx sketch follows the list):
Cocos2d
libgdx
any of the engines listed here
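To give a feel for that route, here is a rough, untested sketch of drawing a line with libgdx's ShapeRenderer (the classes and methods are from libgdx's public API; the coordinates are just illustrative), with zooming and rotation normally handled through the camera rather than in your own drawing code:

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.graphics.glutils.ShapeRenderer;

    public class DrawingApp extends ApplicationAdapter {
        private ShapeRenderer shapes;

        @Override
        public void create() {
            shapes = new ShapeRenderer();
        }

        @Override
        public void render() {
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            shapes.begin(ShapeRenderer.ShapeType.Line);
            shapes.line(0, 0, 200, 200);   // one stroke segment
            shapes.end();
        }

        @Override
        public void dispose() {
            shapes.dispose();
        }
    }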
Well, this question was asked 6 years ago. Maybe Android 4.0 had not come out yet?
Actually, since Android 4.0 the Canvas in android.view.View is a hardware-accelerated canvas, which means it is implemented with OpenGL, so you do not need to use another approach for performance.
You can look at https://github.com/ChillingVan/android-openGL-canvas/blob/master/canvasglsample/src/main/java/com/chillingvan/canvasglsample/comparePerformance/ComparePerformanceActivity.java to compare the performance of a normal canvas in a View with a GLSurfaceView.
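If you want to confirm this on a device, a tiny check inside onDraw() is enough (just a sanity-check sketch):

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Logs true when the view is drawn through the hardware-accelerated
        // (OpenGL-backed) pipeline, which is the default on API 14+ targets.
        Log.d("DrawView", "hardware accelerated: " + canvas.isHardwareAccelerated());
    }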
You are right that SurfaceView uses Canvas under the hood. The main difference is that SurfaceView uses another thread to do the actual drawing, which generally improves performance. It sounds like it would not help you a great deal, though.
You are correct that OpenGL can do rotations very quickly, so if you need more performance that is the way to go. You should probably use GLSurfaceView. The main drawback with using OpenGL is that it is a real pain to do text. Basically you have to (okay, don't have to, but seems to be the best option) render bitmaps of text.
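To illustrate the text workaround, the usual approach is to draw the string into a Bitmap with a Canvas and upload that as a GL texture, roughly like this (a sketch; it assumes a texture object has already been generated and bound):

    // Draw the text into an offscreen Bitmap...
    Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    paint.setTextSize(32f);
    paint.setColor(Color.WHITE);

    Bitmap textBitmap = Bitmap.createBitmap(256, 64, Bitmap.Config.ARGB_8888);
    Canvas canvas = new Canvas(textBitmap);
    canvas.drawText("Score: 42", 0f, 48f, paint);

    // ...then upload it to the currently bound texture.
    GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, textBitmap, 0);
    textBitmap.recycle();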
I'm looking for an Android 2D framework which allows me to create a canvas layer on which I can draw simple shapes like rects, ovals, etc. (raster graphics). The canvas has to PERSIST everything I draw on it.
I found many engines (libgdx, andengine ...), but if they have the capability to draw shapes, it's only for one screen update. The reason I don't store the drawn shapes in some kind of list is that in my app drawing occurs every screen update, so I just want to modify the canvas and not remember anything.
Thanks for every answer.
As far as I know, Android doesn't support something like that. Android uses double buffering, which means there are two alternating "screens"; if you draw on one, the next will be in a random state.
There are tricks you can use to achieve what you want, like drawing to both buffers and then stopping, but Android doesn't support such behavior: when you get hold of a canvas, it's not certain that it comes back exactly as you left it last frame. The documentation doesn't specify what could invalidate it, but if you ask me it could be anything that pops up on screen.
You don't really need an engine to do that; you can use a SurfaceView and draw on it (it supports shapes like the ones you want).
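If the persistence itself is the sticking point, the usual trick with a plain SurfaceView is to keep your own offscreen Bitmap, draw into that, and copy it to the screen each frame; a minimal sketch (width, height, paint and holder are assumed to come from your own code):

    // Persistent layer: everything drawn into cacheCanvas survives across
    // frames, regardless of double buffering.
    Bitmap cacheBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    Canvas cacheCanvas = new Canvas(cacheBitmap);
    cacheCanvas.drawRect(10, 10, 100, 100, paint);
    cacheCanvas.drawOval(new RectF(50, 50, 150, 120), paint);

    // In the SurfaceView's drawing loop, copy the cache to the screen:
    Canvas screen = holder.lockCanvas();
    if (screen != null) {
        screen.drawBitmap(cacheBitmap, 0, 0, null);
        holder.unlockCanvasAndPost(screen);
    }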
I'm making an element of a game where a man shoots a rocket at a target, and then the target explodes. I'm doing this with canvas and threads, always redrawing the whole screen.
Can it be done another way? If there is a lot of action, the game will eat a lot of memory, so I'm looking for optimizations and a way to animate objects without redrawing the whole screen.
If you are using SurfaceView or TextureView, you can lock just part of the screen and redraw only that (I recommend TextureView over SurfaceView).
Canvas android.view.TextureView.lockCanvas(Rect dirty).
public Canvas lockCanvas (Rect dirty).
Added in API level 14.
Just like lockCanvas() but allows specification of a dirty rectangle. Every pixel within that rectangle must be written; however, pixels outside the dirty rectangle will be preserved by the next call to lockCanvas(). This method can return null if the underlying surface texture is not available (see isAvailable()) or if the surface texture is already connected to an image producer (for instance: the camera, OpenGL, a media player, etc.).
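A rough sketch of how that looks in practice (the rocket position/size variables are illustrative):

    // Only pixels inside dirtyRect need to be redrawn; everything outside
    // it is preserved from the previous frame.
    Rect dirtyRect = new Rect(rocketX, rocketY, rocketX + rocketW, rocketY + rocketH);
    Canvas canvas = textureView.lockCanvas(dirtyRect);
    if (canvas != null) {
        canvas.drawColor(Color.BLACK);   // clears only the locked region
        canvas.drawBitmap(rocketBitmap, rocketX, rocketY, null);
        textureView.unlockCanvasAndPost(canvas);
    }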
Just a suggestion.
Why don't you use a gaming framework such as libgdx?
It takes you away from pure Android code, but it will let you focus on your game rather than on memory management (and your game will also be playable on other platforms).
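For the kind of sprite animation you describe, the core loop in libgdx is just a SpriteBatch draw per frame; a rough sketch (the asset name and coordinates are made up):

    import com.badlogic.gdx.ApplicationAdapter;
    import com.badlogic.gdx.Gdx;
    import com.badlogic.gdx.graphics.GL20;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;

    public class RocketGame extends ApplicationAdapter {
        private SpriteBatch batch;
        private Texture rocket;
        private float rocketX, rocketY;

        @Override
        public void create() {
            batch = new SpriteBatch();
            rocket = new Texture(Gdx.files.internal("rocket.png")); // made-up asset
        }

        @Override
        public void render() {
            rocketX += 100 * Gdx.graphics.getDeltaTime();  // move the rocket
            Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
            batch.begin();
            batch.draw(rocket, rocketX, rocketY);
            batch.end();
        }

        @Override
        public void dispose() {
            batch.dispose();
            rocket.dispose();
        }
    }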
If you like the idea, there are also other tools (Unity, GameSalad, etc.).
I have been working on a project that uses OpenGL ES 2.0 with a couple of shader components that define what a texture should look like after modifications from a Bitmap. The SurfaceView will only ever contain a single image in my project.
After trying several different approaches and looking through code over the past 24 hours, I'm just hoping for a quick response or two from the community. I'm not looking for solutions; I'll do that research.
It sounds as though, since we are using shaders, in order to do scaling and movement of the texture based on touch events I will have to use the Matrix utilities and OpenGL translations or camera movements to get the same effect as what is currently done within an ImageView. Would this be the appropriate approach? Perhaps I would even modify the shader code so that I have some additional input variables?
Am I right that there is nothing on the Android side, such as modifying the canvas of the SurfaceView or altering the dimensions of the UI in some other fashion, that would achieve the same effect?
Thanks. Again, solutions for zooming and moving around aren't necessary, just trying to get a grasp on intermixing OpenGL and Android appropriately for the task.
Why does it seem that several things are easier in 1.0 than in 2.0? Ease of use should improve between releases.
Yes. You will need to use an ortho projection and adjust the extents to zoom. See this link here. To pan, you can simply use a glTranslatef.
If you would like to do this entirely in the pixel shader, you can use the texture matrix stack with glScalef and glTranslatef.
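Since the question is about ES 2.0 (no fixed-function glTranslatef), the same idea expressed with android.opengl.Matrix and a matrix uniform looks roughly like this sketch (uMvpMatrix, zoom, panX/panY, viewWidth/viewHeight and program are assumed names from your own code):

    float[] mvpMatrix = new float[16];

    // Zoom by shrinking or growing the ortho extents around the view centre.
    float halfW = viewWidth / (2f * zoom);
    float halfH = viewHeight / (2f * zoom);
    Matrix.orthoM(mvpMatrix, 0, -halfW, halfW, -halfH, halfH, -1f, 1f);

    // Pan by translating the scene (the ES 2.0 counterpart of glTranslatef).
    Matrix.translateM(mvpMatrix, 0, panX, panY, 0f);

    // Hand the combined matrix to the vertex shader.
    int mvpHandle = GLES20.glGetUniformLocation(program, "uMvpMatrix");
    GLES20.glUniformMatrix4fv(mvpHandle, 1, false, mvpMatrix, 0);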
I'm developing an Android game using Canvas element. I have many graphic elements (sprites) drawn on a large game map. These elements are drawn by standard graphics functions like drawLine, drawPath, drawArc etc.
It's not hard to test whether they are on screen or not, so if they are off screen I can skip their drawing routines completely. But even this check has a CPU cost. I wonder if the Android graphics library can do this faster than I can?
In short, should I draw everything, even things completely outside the screen coordinates, trusting the Android graphics library to cull them cheaply, or should I check each element's drawing rectangle myself and skip its drawing routine when it is completely off screen? Which is the proper way? Which one is supposed to be faster?
p.s: I'm targeting Android v2.1 and above.
From a not-entirely-scientific test I did drawing Bitmaps tiled across an area larger than the screen, I found that checking beforehand whether each Bitmap was on screen doesn't seem to make a considerable difference.
In one test I set a Rect to the screen size and set another Rect to the position of the Bitmap and checked Rect.intersects() before drawing. In the other test I just drew the Bitmap. After 300-ish draws there wasn't a visible trend - some went one way, others went another. I tried the 300-draw test every frame, and the variation from frame to frame was much greater than the difference between checked and unchecked drawing.
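For clarity, the "checked" variant boiled down to a couple of lines like this (names are illustrative):

    // Skip the draw call when the sprite's bounds do not touch the screen.
    Rect screenRect = new Rect(0, 0, screenWidth, screenHeight);
    Rect spriteRect = new Rect(spriteX, spriteY, spriteX + spriteW, spriteY + spriteH);
    if (Rect.intersects(screenRect, spriteRect)) {
        canvas.drawBitmap(spriteBitmap, spriteX, spriteY, null);
    }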
From that I think it's safe to say Android checks bounds in its native code, or you'd expect a considerable difference. I'd share the code of my test, but I think it makes sense for you to do your own test in the context of your situation. It's possible points behave differently than Bitmaps, or some other feature of your paint or canvas changes things.
Hope that helps you (or anyone else who stumbles across this thread, as I did, with the same question).
I want to make the brushes shown in the image below for a drawing application. Which is the more suitable method, OpenGL or Canvas, and how can we implement it?
I'd say Canvas, as you'll want to modify an image. OpenGL ES is good for displaying images, but does not (as far as I know) have methods for modifying its textures (unless you render to a texture that you then render to screen with some modifications, which is not always very efficient).
Using the Canvas you will have methods for drawing your brush strokes onto the Bitmap you're painting on. In GL ES you would have to modify a texture (by using a canvas) and then upload it to the GPU again before it could be rendered, and the rendering would most likely just consist of drawing a square with your texture on it (as the fill rate of most mobile GPUs is quite bad, you don't want to draw the strokes separately).
What I'm trying to say is: the most convenient way to let the user draw on an OpenGL ES surface would be to create a texture by drawing on a Canvas.
But there might still be some gain in using GL for drawing, as the Canvas operations can be performed off-screen, and you can push this data to a GL renderer to (possibly) speed up the on-screen drawing.
However, if you are developing for Android 3.x+ you should take a look at RenderScript (which I personally have never had a chance to use), but it seems like it would be a good solution in this case.
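To make the Canvas route concrete: a common way to get image-based brushes is to stamp the brush bitmap along the touch path at regular intervals, something like this sketch (brushBitmap, brushPaint, strokePath and drawingCanvas are assumed to be your own objects):

    // Stamp the brush bitmap along the user's stroke at a fixed spacing.
    PathMeasure measure = new PathMeasure(strokePath, false);
    float spacing = brushBitmap.getWidth() / 4f;
    float[] pos = new float[2];

    for (float distance = 0f; distance < measure.getLength(); distance += spacing) {
        measure.getPosTan(distance, pos, null);
        drawingCanvas.drawBitmap(brushBitmap,
                pos[0] - brushBitmap.getWidth() / 2f,
                pos[1] - brushBitmap.getHeight() / 2f,
                brushPaint);
    }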
Your best solution is going to be using native code. That's how Sketchbook does it. You could probably figure out how by browsing through the GIMP source code at http://www.gimp.org/source. Out of Canvas vs. OpenGL, Canvas would be the way to go.
It depends. If you want to edit the image statically, go with Canvas. But if you want the ability to edit, scale, and rotate after brushing the screen, it would be easier with OpenGL.
An example with OpenGL: store the motions the user makes with their touches. Create a class that stores a motion and has fields for size, rotation, etc. To draw it, just make a path of the selected brush image following the stored motion.
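A rough sketch of the kind of class described above (all names are illustrative):

    import java.util.ArrayList;
    import java.util.List;

    import android.graphics.PointF;

    // One recorded brush motion; size/rotation/color can be changed later
    // and the stroke redrawn along the stored points.
    public class BrushStroke {
        public final List<PointF> points = new ArrayList<PointF>();
        public float size = 1f;
        public float rotation = 0f; // degrees
        public int color = 0xFF000000;

        public void addPoint(float x, float y) {
            points.add(new PointF(x, y));
        }
    }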