I'm currently writing an Android game using a SurfaceView. I've optimized the game as much as possible and it runs quite smoothly. However, the collision detection I have incorporated is a bit messy. I would like to do collision detection by reading pixels directly from the canvas. Is this possible? The closest approach I've found is to attach a new bitmap to the canvas using setBitmap, so that whenever I draw to the canvas the bitmap is updated. Would this be the way to go? Thanks.
Although you should manage the collision detection in a different way, you can get the pixel color at a given position with the following line of code:
mView.mBitmap.getPixel(j, i)
mView is the View that contains your canvas and mBitmap is the Bitmap with which you created your canvas:
Canvas mCanvas = new Canvas(mBitmap);
From the Canvas API:
The Canvas class holds the "draw" calls. To draw something, you need 4 basic components: A Bitmap to hold the pixels, a Canvas to host the draw calls (writing into the bitmap), a drawing primitive (e.g. Rect, Path, text, Bitmap), and a paint (to describe the colors and styles for the drawing).
So you wouldn't ever ask a canvas for pixel data, because the canvas itself doesn't really "own" pixel data. The canvas which you use to make draw calls is always attached to a bitmap (the one you're drawing on), so that bitmap is where you should get your pixel data from.
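As a minimal sketch (the DrawingSurface class and the isOpaqueAt rule are only illustrative, but mBitmap and mCanvas match the snippet above), the setup and the pixel read look roughly like this:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;

// Minimal sketch: keep the Bitmap the Canvas writes into, and read pixels from it.
class DrawingSurface {
    final Bitmap mBitmap;
    final Canvas mCanvas;

    DrawingSurface(int width, int height) {
        mBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        mCanvas = new Canvas(mBitmap); // everything drawn on mCanvas lands in mBitmap
    }

    // Illustrative hit test: treat any non-transparent pixel as "occupied".
    boolean isOpaqueAt(int x, int y) {
        return Color.alpha(mBitmap.getPixel(x, y)) != 0;
    }
}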
Collision detection is usually costly, but going to a bitmap-based process could make it even worse, depending on what you're trying to do. Just a heads up.
I agree with Josh. If precision is that important in your app, you may want to incorporate screen size/resolution data into some kind of physics engine. Trying to build a physics engine based entirely on visual processing is probably unnecessarily costly.
I want to get a bitmap and manipulate it in the following way:
I have created an empty bitmap, and on this bitmap I drew what I needed. Now I need to distort it in a way similar to this, because I will then be drawing the whole thing on top of another bitmap. Think of it as applying a texture to a box, the box simply being a picture of a box. The way I see myself doing this is creating bitmaps off the main bitmap and drawing them onto the final bitmap through a matrix modified by Graphics.Camera.getMatrix().
I already have this working, but my problem is understanding exactly how to manipulate the camera. I don't know where the camera places its X, Y, and Z axes within the matrix, or where the matrix gets applied, or just how it all comes together.
When drawing on a canvas set to a view, I know I can rotate the canvas and draw from there to create a straight diagonal line, for example in a game engine to draw a projectile acting on two vectors. And I know that when working in OpenGL there is a state-machine approach and I can picture where the matrix sits in 3D space. But I just don't understand how Camera, Matrix, and bitmap all relate.
From what I've looked up, I managed to set up a basic solution, but I haven't been able to understand exactly how to tweak it to get the right rotations. I have read the documentation, but it doesn't really explain the relationship between Camera, Matrix, and canvas beyond the fact that Camera modifies a matrix and then canvas can draw something based on that matrix.
Can anyone explain how I would go about doing what I have in the picture? I already know that I'll be creating a bitmap from a region of the original bitmap, then combining the two to create what is in the right column, and then rotating the canvas/bitmap, getting another bitmap from the green section, and repeating the whole thing again.
Thanks
Camera is just a utility class that generates a Matrix you can use on Canvas. The generated Matrix contains the appropriate transform. You said it yourself:
it doesn't really explain the relationship between Camera, Matrix, and canvas beyond the fact that Camera modifies a matrix and then canvas can draw something based on that matrix.
That's all there is to it really :)
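To make the relationship concrete, here is a minimal sketch (TiltedDrawer and drawTilted are illustrative names, and the centering translate is an assumption about the pivot you want): Camera builds up a 3D transform, getMatrix() bakes it into a 2D Matrix, and the Canvas applies that Matrix when it draws the bitmap.

import android.graphics.Bitmap;
import android.graphics.Camera;
import android.graphics.Canvas;
import android.graphics.Matrix;
import android.graphics.Paint;

// Sketch: Camera -> Matrix -> Canvas.drawBitmap(bitmap, matrix, paint)
class TiltedDrawer {
    private final Paint paint = new Paint(Paint.FILTER_BITMAP_FLAG);

    void drawTilted(Canvas canvas, Bitmap face, float angleY) {
        Camera camera = new Camera();
        Matrix matrix = new Matrix();

        camera.save();
        camera.rotateY(angleY);    // rotate around the camera's Y axis
        camera.getMatrix(matrix);  // write the resulting transform into 'matrix'
        camera.restore();

        // The Camera's origin is the top-left corner; shift it so the rotation
        // pivots around the bitmap's center instead of its edge.
        matrix.preTranslate(-face.getWidth() / 2f, -face.getHeight() / 2f);
        matrix.postTranslate(face.getWidth() / 2f, face.getHeight() / 2f);

        canvas.drawBitmap(face, matrix, paint);
    }
}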
I want to make brushes like those shown in the image below for a drawing application. Which is the more suitable approach, OpenGL or Canvas, and how can it be implemented?
I'd say Canvas, as you'll want to modify an image. OpenGL ES is good for displaying images, but does not (as far as I know) have methods for modifying its textures (unless you render to a texture that you then render to screen with some modifications, which is not always that efficient).
Using the Canvas you will have the methods for drawing your brush strokes onto the Bitmap you're painting on. In GL ES you would have to modify a texture (by using a canvas) and then upload it to the GPU again before it could be rendered, and the rendering would most likely just consist of drawing a square with your texture on it (as the fill rate of most mobile GPUs is quite poor, you don't want to draw the strokes separately).
What I'm trying to say is: the most convenient way to let the user draw on an OpenGL ES surface would be by creating a texture by drawing on a Canvas.
But there might still be some gain in using GL for drawing, as the Canvas operations can be performed off-screen and you can push that data to a GL renderer to (possibly) speed up the on-screen drawing.
However, if you are developing for Android 3.x+ you should take a look at RenderScript (which I personally have never had a chance to use), as it seems like it would be a good solution in this case.
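Going back to the Canvas route, here is a minimal sketch of the brush-stamping idea (BrushLayer and stamp are illustrative names, not a real API): keep an offscreen Bitmap as the paint layer and stamp the brush image along the touch positions.

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;

// Sketch: paint into an offscreen Bitmap by stamping a brush image along the stroke.
class BrushLayer {
    private final Bitmap layer;        // what the user is painting on
    private final Canvas layerCanvas;  // draws into 'layer'
    private final Paint brushPaint = new Paint(Paint.FILTER_BITMAP_FLAG);

    BrushLayer(int width, int height) {
        layer = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        layerCanvas = new Canvas(layer);
    }

    // Call this for each touch position (e.g. from onTouchEvent, including historical points).
    void stamp(Bitmap brush, float x, float y) {
        layerCanvas.drawBitmap(brush, x - brush.getWidth() / 2f, y - brush.getHeight() / 2f, brushPaint);
    }

    Bitmap getLayer() {
        return layer; // draw this onto the screen, or upload it as a GL texture
    }
}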
Your best solution is going to be using native code. That's how Sketchbook does it. You could probably figure out how by browsing through the GIMP source code http://www.gimp.org/source . Out of Canvas vs OpenGL, Canvas would be the way to go.
It depends. If you want to edit the image statically, go with Canvas. But if, after brushing the screen, you want the ability to edit, scale, and rotate, it would be easier with OpenGL.
An example with OpenGL: store the motions the user makes with touches. Create a class that stores a motion and has fields for size, rotation, etc. To draw this class, just trace the selected brush image along a path following the stored motion (a sketch of such a class is below).
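A minimal sketch of the kind of class described above (StoredStroke and its fields are illustrative names; how it is rendered, with Canvas or GL, is left open):

import android.graphics.PointF;
import java.util.ArrayList;
import java.util.List;

// Sketch: one recorded stroke, independent of how it is later rendered.
class StoredStroke {
    final List<PointF> points = new ArrayList<>(); // touch positions, in order
    float brushSize = 1f;
    float rotationDegrees = 0f;
    int brushId; // which brush image to trace along the path

    void addPoint(float x, float y) {
        points.add(new PointF(x, y));
    }
}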
I am creating an Android app in which I draw numerous bitmaps to a canvas via the canvas.drawBitmap() function. From my understanding, the z-index of these bitmaps is determined by the order in which they are drawn to the canvas. What I am trying to figure out is whether, after drawing these bitmaps, I can dynamically change the z-index of a bitmap to push it to the top. This seems like a very simple problem, but I have had no luck in finding a solution yet.
Not really possible: after you call drawBitmap the contents of the bitmap are rendered onto the canvas, but the canvas does not store any reference to the original bitmap; it only stores the result of applying the bitmap's contents to the canvas. There is no way to dynamically say "take the bitmap I drew 1st out of 50, make it the 50th bitmap, and automatically redraw every other bitmap to reflect the change."
So you'll need to order your drawing operations beforehand.
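In practice that usually means keeping your own ordered list of draw items and redrawing the whole frame whenever the order changes. A minimal sketch (Sprite and bringToFront are illustrative names):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import java.util.ArrayList;
import java.util.List;

// Sketch: z-order is just the position in this list; later entries are drawn on top.
class SpriteLayerList {
    static class Sprite {
        Bitmap bitmap;
        float x, y;
    }

    private final List<Sprite> sprites = new ArrayList<>();

    void add(Sprite s) {
        sprites.add(s);
    }

    void bringToFront(Sprite s) {
        if (sprites.remove(s)) {
            sprites.add(s); // last in the list = drawn last = on top
        }
    }

    // Redraw everything in order after any change.
    void drawAll(Canvas canvas) {
        for (Sprite s : sprites) {
            canvas.drawBitmap(s.bitmap, s.x, s.y, null);
        }
    }
}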
I have a drawing app where the user can draw lines with their finger and adjust the color, thickness, etc. As the user is drawing, I convert the massed X/Y points from MotionEvent into SVG paths, as well as creating Android Paths; I then draw the Android Paths to the screen via a Canvas and commit the SVG paths to the app's database.
I am following the model used in FingerPaint, in that the 'in progress' lines are drawn on the fly by repeated calls to invalidate() (and thus onDraw()), and once the line is complete and a new line is started, the previous line(s) are drawn in onDraw() from the underlying Canvas Bitmap, with in-progress lines again generating repeated redraws.
This works fine in this application - until you start rotating the underlying Bitmap to compensate for device rotation, supporting the ability to 'zoom in' on the drawing surface and thus having to scale the underlying Bitmap, and so on. So for example, with the device rotated and the drawing scaled in, when the user is drawing we need to scale AND rotate our Bitmap in onDraw(), and this is absolutely crawling.
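Roughly, my onDraw looks something like the sketch below (simplified; the field names are just placeholders):

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.view.View;

// Simplified sketch of the slow path: every frame re-applies rotation + scale
// to the full backing bitmap before the in-progress line goes on top.
class DrawingView extends View {
    private Bitmap mDrawingBitmap;                                   // committed lines
    private final Path mInProgressPath = new Path();                 // line being drawn
    private final Paint mBitmapPaint = new Paint(Paint.FILTER_BITMAP_FLAG);
    private final Paint mStrokePaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private float mRotationDegrees;                                  // compensates for device rotation
    private float mZoom = 1f;                                        // current zoom level

    public DrawingView(Context context) {
        super(context);
        mStrokePaint.setStyle(Paint.Style.STROKE);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.save();
        canvas.rotate(mRotationDegrees, getWidth() / 2f, getHeight() / 2f);
        canvas.scale(mZoom, mZoom);
        if (mDrawingBitmap != null) {
            canvas.drawBitmap(mDrawingBitmap, 0, 0, mBitmapPaint);
        }
        canvas.drawPath(mInProgressPath, mStrokePaint);
        canvas.restore();
    }
}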
I've looked at a SurfaceView, but as this still uses the same Canvas mechanism, I'm not sure I'll see noticeable improvement... so my thoughts turn to OpenGL. I have read somewhere that OpenGL can do rotations and scaling essentially 'for free', and even seen rumors (third comment) that Canvas may be disappearing in future versions.
Essentially, I am a little stuck between the Canvas and OpenGL solutions... I have a 2D drawing app that seems to fit the Canvas model perfectly when in one state, as there are not constant re-draws going on like a game (for instance when the user is not drawing I don't need any re-drawing), but when the user IS drawing, I need the maximum performance necessary to do some increasingly complex things with the surface...
Would welcome any thoughts, pointers and suggestions.
OpenGL would be able to handle the rotations and scaling easily.
Honestly, you would probably need to learn a lot of OpenGL to do this, specifically related to the topics of:
Geometry
Lighting (or just disabling it)
Picking (selecting geometry to draw on it)
Pixel Maps
Texture Mapping
Mipmapping
Also, learning OpenGL for this might be overkill, and you would have to be pretty good at it to make it efficient.
Instead, I would recommend using the graphic components of a game library built on top of openGL, such as:
Cocos2d
libgdx
any of the engines listed here
Well, this question was asked 6 years ago, perhaps before Android 4.0 had come out.
Actually, since Android 4.0 the Canvas handed to android.view.View is a hardware-accelerated canvas, which means it is implemented with OpenGL, so you do not need another approach for performance.
You can look at https://github.com/ChillingVan/android-openGL-canvas/blob/master/canvasglsample/src/main/java/com/chillingvan/canvasglsample/comparePerformance/ComparePerformanceActivity.java to compare the performance of a normal canvas in a View with a GLSurfaceView.
You are right that SurfaceView uses Canvas under the hood. The main difference is that SurfaceView lets you do the actual drawing on another thread, which generally improves performance. It sounds like it would not help you a great deal, though.
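For reference, the usual SurfaceView pattern looks roughly like this (a sketch; RenderThread is an illustrative name): a dedicated thread locks the surface's Canvas, draws, and posts it back.

import android.graphics.Canvas;
import android.view.SurfaceHolder;

// Sketch: SurfaceView's advantage is that this loop runs off the UI thread.
class RenderThread extends Thread {
    private final SurfaceHolder holder;
    private volatile boolean running = true;

    RenderThread(SurfaceHolder holder) {
        this.holder = holder;
    }

    @Override
    public void run() {
        while (running) {
            Canvas canvas = holder.lockCanvas();
            if (canvas == null) continue; // surface not ready yet
            try {
                // ... draw the frame here with the same Canvas calls as in onDraw ...
            } finally {
                holder.unlockCanvasAndPost(canvas);
            }
        }
    }

    void requestStop() {
        running = false;
    }
}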
You are correct that OpenGL can do rotations very quickly, so if you need more performance that is the way to go. You should probably use GLSurfaceView. The main drawback with using OpenGL is that it is a real pain to do text. Basically you have to (okay, don't have to, but seems to be the best option) render bitmaps of text.
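The usual workaround is to render the text into a Bitmap with Canvas and upload that as a texture. A minimal sketch (assuming a GLES20 context is already current on the calling thread; names are illustrative):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.opengl.GLES20;
import android.opengl.GLUtils;

// Sketch: draw text into a Bitmap, then hand it to OpenGL as a texture.
class TextTextures {
    static int createTextTexture(String text) {
        Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
        paint.setTextSize(32f);
        paint.setColor(Color.WHITE);

        int width = Math.max(1, (int) Math.ceil(paint.measureText(text)));
        int height = Math.max(1, (int) Math.ceil(paint.descent() - paint.ascent()));
        Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        new Canvas(bitmap).drawText(text, 0, -paint.ascent(), paint);

        int[] textures = new int[1];
        GLES20.glGenTextures(1, textures, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
        GLUtils.texImage2D(GLES20.GL_TEXTURE_2D, 0, bitmap, 0);

        bitmap.recycle();   // the pixels now live in the GL texture
        return textures[0]; // texture name to bind when drawing the textured quad
    }
}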
I am trying to construct a SurfaceView by reading in an array and using switch cases to build the canvas.
So the question is: can I construct a canvas by looping over Y while tracking X, loading bitmaps into the canvas using BitmapFactory(), and then using one .show() to render the canvas to the screen? Or will I need to call the canvas render for each of these (and will that throw away the screen every time I do it)?
Not sure what you're getting at, but for one thing avoid using BitmapFactory in onDraw. You don't want to be doing bitmap decoding at the same time as rendering. You should load your bitmaps ahead of time and keep them around in memory for faster drawing later on.
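A minimal sketch of that idea (the TileSet class and its fields are illustrative): decode everything once up front, then only blit pre-decoded bitmaps while rendering.

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Canvas;

// Sketch: decode tiles once, then draw them repeatedly without touching BitmapFactory.
class TileSet {
    private final Bitmap[] tiles;

    TileSet(Resources res, int[] drawableIds) {
        tiles = new Bitmap[drawableIds.length];
        for (int i = 0; i < drawableIds.length; i++) {
            tiles[i] = BitmapFactory.decodeResource(res, drawableIds[i]); // slow, so do it up front
        }
    }

    // Called from the draw loop: no decoding happens here.
    void drawTile(Canvas canvas, int tileIndex, float x, float y) {
        canvas.drawBitmap(tiles[tileIndex], x, y, null);
    }
}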