I need pixel-perfect collision detection for my Android game. I've written some code to detect collisions with "normal" bitmaps (not rotated), and it works fine. However, I can't get it to work for rotated bitmaps. Unfortunately, Java doesn't have a class for rotated rectangles, so I implemented one myself. It holds the positions of the four corners relative to the screen and describes the exact location/layer of its bitmap; I called it "itemSurface". My plan for solving the detection was to:
Detect intersection of the different itemSurfaces
Calculate the overlapping area
Set these areas in relation to their parent itemSurface/bitmap
Compare each single pixel with the corresponding pixel of the other bitmap
Well, I'm having trouble with the first two steps. Does anybody have an idea or some code? Maybe there is already code for this in the Java/Android libraries and I just didn't find it.
I understand that you want collision detection between rectangles (rotated in different ways). You don't need to calculate the overlapping area, and comparing every pixel would be inefficient anyway.
Implement a static boolean isCollision function which tells you whether there is a collision between one rectangle and another. First take a piece of paper and do some geometry to work out the exact formulas. For performance reasons, do not wrap a rectangle in some Rectangle class; just use primitive types like doubles.
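One well-known way to write such a function is the separating axis theorem: two convex shapes overlap exactly when there is no edge normal on which their projections are disjoint. Below is a minimal sketch, assuming each rectangle is passed as its four corner coordinates in order; the array names are illustrative, not from the original post.

// Rectangles are given as 4 corners in order: (xs[i], ys[i]), i = 0..3.
static boolean isCollision(double[] xs1, double[] ys1, double[] xs2, double[] ys2) {
    return !hasSeparatingAxis(xs1, ys1, xs2, ys2)
        && !hasSeparatingAxis(xs2, ys2, xs1, ys1);
}

// Tests the two edge normals of the first rectangle as candidate axes.
static boolean hasSeparatingAxis(double[] xs, double[] ys, double[] oxs, double[] oys) {
    for (int i = 0; i < 2; i++) {          // a rectangle has two distinct edge directions
        double ax = -(ys[i + 1] - ys[i]);  // normal of edge i = candidate axis
        double ay = xs[i + 1] - xs[i];
        double min1 = Double.POSITIVE_INFINITY, max1 = Double.NEGATIVE_INFINITY;
        double min2 = Double.POSITIVE_INFINITY, max2 = Double.NEGATIVE_INFINITY;
        for (int j = 0; j < 4; j++) {      // project all corners onto the axis
            double p1 = xs[j] * ax + ys[j] * ay;
            min1 = Math.min(min1, p1); max1 = Math.max(max1, p1);
            double p2 = oxs[j] * ax + oys[j] * ay;
            min2 = Math.min(min2, p2); max2 = Math.max(max2, p2);
        }
        if (max1 < min2 || max2 < min1) return true;  // gap found: no collision
    }
    return false;
}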
Then (pseudo code):
for (every rectangle a)
    for (every rectangle b)
        if (a != b && isCollision(a, b))
            bounce(a, b)
This is O(n^2), where n is the number of rectangles. There are better algorithms if you need more performance. The bounce function changes the vectors of the moving rectangles to imitate a collision. If the weights of the objects are the same (you can approximate weight by the size of the rectangles), you just need to swap the two speed vectors.
To bounce elements correctly you may need to store an auxiliary table boolean alreadyBounced[][] to determine which rectangles do not need a change of their vectors after a bounce (collision), because they were already bounced; see the sketch below.
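A minimal sketch of that equal-weight bounce, assuming velocities live in parallel arrays; the names vx, vy and alreadyBounced are illustrative.

// Swap the speed vectors of two equal-weight objects, at most once per frame.
static void bounce(int a, int b, double[] vx, double[] vy, boolean[][] alreadyBounced) {
    if (alreadyBounced[a][b]) return;  // this pair was already handled
    double tx = vx[a], ty = vy[a];
    vx[a] = vx[b]; vy[a] = vy[b];      // swap the two velocity vectors
    vx[b] = tx;    vy[b] = ty;
    alreadyBounced[a][b] = true;
    alreadyBounced[b][a] = true;
}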
One more tip:
If you are making a game on Android, you have to watch out not to allocate memory during gameplay, because doing so will trigger the GC sooner; a GC run takes a long time and slows down your game. I recommend watching this video and related ones. Good luck.
I have image sprites representing the two polygons below. These polygons loosely represent the sprite area. What I want to do is use these polygons to detect the overlap (or collision) of the sprites. However, the overlap should only count inside the green square. (This is a jigsaw puzzle game; what I am trying to implement is snapping of puzzle pieces when they are moved close to each other.)
I tried Intersector.overlapConvexPolygons(adjacentPiece.polygon, currentPiece.polygon); however, this detects overlaps for the entire polygon.
Is there anything clever I can do here to detect the overlap?
I think your approach might be over-complicating it. If you need your puzzle pieces to bump into each other, you can keep your physics boundaries, but if not, you can remove them entirely.
Either way, to detect whether two pieces should snap, you can approximate each piece by a point roughly at the center of each of the piece's four basic sides. To test for pieces being close enough to snap together, you only need to measure the distance between the points on the facing sides of the two pieces and see if it's smaller than some threshold value, as in the sketch below.
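A minimal sketch of that distance test; the method and parameter names are illustrative, and the two side points are assumed to be in the same (world) coordinate space.

// True if the two side points are within snapping range of each other.
static boolean shouldSnap(float ax, float ay, float bx, float by, float threshold) {
    float dx = ax - bx;
    float dy = ay - by;
    return dx * dx + dy * dy < threshold * threshold;  // squared distance avoids sqrt
}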
If this is a typical puzzle game, you would only need to check this when the player releases a piece, so if it takes a while to brute-force cycle through all the potential matches, it won't really be noticeable because it isn't done while the player is dragging pieces.
If all your jigsaw pieces are a regular size you can simply use normal squares for each jigsaw piece. The square used to define the shape will be halfway between the solid part of the jigsaw piece and the extruded parts.
From your image I have applied the squares to the pieces shown.
I am trying to break an image into shattered pieces, but I am unable to work out the logic. Please show me a way to achieve this.
I hope the image below conveys my idea: breaking the bitmap into shattered pieces like triangles or any other shape. Later I will shuffle those bitmap shapes, giving the end user a puzzle in which to rearrange them in order.
OK, if you want to rearrange the pieces (like in a jigsaw) then each triangle/polygon will have to appear in a rectangular bitmap with a transparent background, because that's how drawing bitmaps works in Java/Android (and most other environments).
There is a way to do this sort of masking in Android; it's called Porter-Duff compositing. The Android documentation is poor to non-existent, but there are many articles on its use in Java.
Basically you create a rectangular transparent bitmap just large enough to hold your cutout. Then you draw onto this bitmap a filled triangle (with non-zero alpha) representing the cutout; it can be any colour you like. Then draw the cutout on top of the source image at the correct location using the Porter-Duff mode which copies the transparency data but not the RGB data. You will be left with your cutout against a transparent background.
This is much easier if you make the cutout bitmap the same size as the source image. I would recommend getting this working first. The downsides of this are twofold. Firstly you will be moving around large bitmaps to move around small cutouts, so the UI will be slower. Secondly you will use a lot of memory for bitmaps, and on some versions of Android you may well run out of memory.
But once you have it working for bitmaps the same size as the source image, it should be pretty straightforward to change it to work with smaller bitmaps. Most of your "mucking about" will be in finding and using the correct Porter-Duff mode. As there are only 16 of them, it's no great effort to try them all and see what they do. And they may suggest other puzzle ideas.
I note your cutout sections are all polygons. With only a tiny amount of extra complexity, you could make them any shape you like, including looking like regular jigsaw pieces. To do this, use the Path class to define the shapes used for cutouts. The Path class works fine with Porter-Duff compositing, allowing cutouts of almost any shape you can imagine. I use this extensively in one of my apps.
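As a starting point, here is a hedged sketch of one common variant of this technique: draw the opaque mask shape first, then draw the source image with the SRC_IN mode, so its pixels survive only where the mask is opaque. The src bitmap and trianglePath are assumed to exist already; a different drawing order needs a different mode, which is where trying the 16 modes comes in.

// Transparent bitmap the same size as the source image (the easy first version).
Bitmap cutout = Bitmap.createBitmap(src.getWidth(), src.getHeight(),
        Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(cutout);
Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
canvas.drawPath(trianglePath, paint);    // opaque mask; any colour works
paint.setXfermode(new PorterDuffXfermode(PorterDuff.Mode.SRC_IN));
canvas.drawBitmap(src, 0, 0, paint);     // keep source pixels only inside the mask
paint.setXfermode(null);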
I am not sure what puzzle game you are trying to make, but if there are no special requirements on the shattered pieces, only on the total number of them that must span the whole rectangle, you may try the following steps. The idea rests on the fact that n non-intersecting lines, each with its two end points lying on any of the 4 edges of the rectangle, form n+1 disjoint areas.
Create an array and store the line information
Repeat n times: randomly pick two end points lying on the 4 edges of the rectangle, then try to join them - starting from either end point, if you hit an intersection with a line you drew before, stop at the intersection; otherwise stop at the other end point
You will get n+1 disjoint areas with n lines drawn
You may constrain the choice of lines if you have special requirements on the areas.
For implementation details, you may want to have a look at the dot/cross product (for the intersection test, sketched below) and Euler's formula.
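A hedged sketch of the intersection test used when joining two end points, via cross products; it handles proper crossings, while collinear or touching cases would need extra care. All names are illustrative.

// Cross product of vectors (a - o) and (b - o); its sign tells which side b is on.
static double cross(double ox, double oy, double ax, double ay, double bx, double by) {
    return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox);
}

// True if segment (x1,y1)-(x2,y2) properly crosses segment (x3,y3)-(x4,y4).
static boolean segmentsIntersect(double x1, double y1, double x2, double y2,
                                 double x3, double y3, double x4, double y4) {
    double d1 = cross(x3, y3, x4, y4, x1, y1);
    double d2 = cross(x3, y3, x4, y4, x2, y2);
    double d3 = cross(x1, y1, x2, y2, x3, y3);
    double d4 = cross(x1, y1, x2, y2, x4, y4);
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));  // endpoints straddle each other
}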
Currently I am writing an app that allows the user to draw. A simple thing: just extend the Canvas class and most of the work is done.
That was my initial idea. But since the canvas is rather small - it is only what the user sees on the screen - there is not much space to draw on. Going through the documentation I found the translate() method, which allows me to move the canvas. What I found out is that when I move it, there is some kind of blank space, just as if you had moved a piece of paper. I understand that this is totally normal since, as I said before, the canvas is only "the screen".
My question is: is it possible to make something like an infinite canvas, so you can make a huge painting and move everything around?
Before asking this question I was thinking about two ways something like this could be done:
Move all objects on the canvas simultaneously - a bad idea, because with a lot of objects the speed of moving becomes very bad.
Do something similar to what ListView does when you scroll: only the views that are on the screen, together with one before and one after, are kept in memory, and the rest are loaded dynamically when needed. I think this is the best option for achieving this goal.
EDIT:
The question/answer given by Kai showed me that it is worth editing my question to clarify some things.
Basic assumptions about what can be done by the user:
The user can draw only circles and rectangles on the canvas, with some of them (around 80%) having a drawable (bitmap) on them.
I assume that across all screens there will be a maximum of 500-800 rectangles or circles.
First of all, when thinking about infinity I had in mind quite a big number of screens - at least 30 at 1x zoom in each direction. I just need to give my users more freedom in what they are doing.
On this screen everything can be done as normal - drawing, scaling (TouchListener, ScaleListener, DoubleTapListener). When talking about scaling, there is another thing that has to be considered, connected with the "infinity" of the canvas: when the user zooms out, the screens, or more precisely the objects on the invisible "neighbours", should appear with proper scaling, as if you were zooming out a camera in real life.
The other thing I've just realised is the possibility of drawing at a small zoom level - that is, across two or three screens - and then zooming in; I suppose it should be cut and recalculated as a smaller part.
I would like to support devices from at least API 10, and not only high-end ones.
The question of time is the most crucial one. I want everything to be as smooth as possible, so the user wouldn't notice that a new canvas is being created each time.
I think it really depends on a number of things:
The complexity of this "infinite canvas": how "infinite" would it really be, what operations can be done on it, etc
The devices that you want to support
The amount of time/resource you wish to spend on it
If there are really not that many objects/commands to be drawn and you don't plan to support older/lower-end phones, then you can get away with just drawing everything. The gfx system does the checking and only draws what would actually be shown, so you only waste some time sending commands across the JNI boundary to the gfx system, plus the associated rect checks.
If you decide that you need a more efficient method, you can store all the gfx objects' positions in 4 tree structures, so that when you search for the upper-left/upper-right/lower-left/lower-right "window" the screen should show, it is fast to find the gfx objects that intersect this window and then draw only those.
[Edit]
First of all, when thinking about infinity I had in mind quite a big number of screens - at least 30 at 1x zoom in each direction. I just need to give my users more freedom in what they are doing.
If you just store the relative positions of canvas objects, there's practically no limit on the size of your canvas, but you may have to provide a button that takes users back to some point on the canvas they are familiar with, lest they get lost.
When talking about scaling, there is another thing that has to be considered, connected with the "infinity" of the canvas: when the user zooms out, the screens, or more precisely the objects on the invisible "neighbours", should appear with proper scaling, as if you were zooming out a camera in real life.
If you store canvas objects in a "virtual space" and use a "translation factor" to map objects from virtual space to screen space, then things like zoom-in/out become quite trivial, something like:
// transFactor is the zoom level; offsetX/offsetY pan the view
screenObj.left   = obj.left   * transFactor - offsetX;
screenObj.right  = obj.right  * transFactor - offsetX;
screenObj.top    = obj.top    * transFactor - offsetY;
screenObj.bottom = obj.bottom * transFactor - offsetY;
// draw screenObj
As an example here's a screenshot of my movie-booking app:
The lower window shows all the seats of a movie theater, and the upper window is a zoomed-in view of the same theater. They are implemented as two instances of the same SurfaceView class; besides the user input handling, the only difference is that the upper one applies the above-mentioned "translation factor".
I assume that across all screens there will be a maximum of 500-800 rectangles or circles.
That is actually not too bad. Reading your edit, I think a potentially bigger issue would be a user adding a large number of objects to the same portion of your canvas. Then it wouldn't matter if you only draw the objects that are actually shown and nothing else - you'd still get bad FPS since the GPU's fill rate is saturated.
So there are actually two potential sources of issues:
Too many draw commands (if drawing everything on canvas instead of just drawing visible ones)
Too many large objects in the same part of the screen (eats up GPU fill-rate)
The two issues require very different strategies (the first one tree structures to sort objects, the second one a dynamically generated Bitmap cache). Since how users use your app is likely to differ from how you envisioned it, I would strongly recommend implementing the functionality without the above optimizations, getting as many people as possible to test it, and then applying optimizations to each bottleneck you encounter until satisfactory performance is achieved.
[Edit 2]
Actually, with just 500~800 objects you can simply calculate the positions of all the objects and then check whether they are visible on screen; you don't even really need a fancy data structure like a tree, with its own overheads.
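A minimal sketch of that brute-force visibility check on Android; the CanvasObject type, its bounds field, and the offset/size variables are illustrative assumptions.

// The window into the virtual space that the screen currently shows.
Rect screen = new Rect(offsetX, offsetY, offsetX + screenWidth, offsetY + screenHeight);
for (CanvasObject obj : objects) {
    if (Rect.intersects(screen, obj.bounds)) {  // android.graphics.Rect
        obj.draw(canvas);                       // draw only the visible ones
    }
}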
I'm writing a game on Android where I have a bunch of characters moving around that collide with each other. Everything works fine, but when I get past a certain number of characters on screen at the same time, the performance of the app takes a severe hit. I did my tests, and drawing is not causing the low frame rate; it is the collision detection algorithm, since every time the characters move they have to check their location against all the other characters. So currently I'm just looping through all of them for each character. Is there a way to improve on this? Is there a performance trick for collision detection with a big number of objects that I don't know about?
Yes, there is a technique based on a first broad phase and a second narrow phase of collision detection.
I'll quote some paragraphs from Beginning Android Games, by Mario Zechner.
Broad phase: In this phase we try to figure out which objects can potentially collide. Imagine having 100 objects that could each collide with each other. We'd need to perform 100 * 100 / 2 overlap tests if we chose to naively test each object against each other object. This naive overlap testing approach is of O(n^2) asymptotic complexity, meaning it would take n^2 steps to complete (it actually finishes in half that many steps, but the asymptotic complexity leaves out any constants). In a good, non-brute-force broad phase, we try to figure out which pairs of objects are actually in danger of colliding. Other pairs (e.g., two objects that are too far apart for a collision to happen) will not be checked. We can reduce the computational load this way, as narrow-phase testing is usually pretty expensive.
Narrow phase: Once we know which pairs of objects can potentially collide, we test whether they really collide or not by doing an overlap test of their bounding shapes.
The broad phase involves dividing the world into large cells, making some sort of grid. Each cell has the exact same size, and the whole world is covered in cells. If two objects are not in the same cell, no narrow phase for those two objects is needed.
Quoting once again:
All we need to do is the following:
Update all objects in the world based on our physics and controller step.
Update the position of each bounding shape of each object according to the object's position. We can of course also include the orientation and scale here.
Figure out which cell or cells each object is contained in based on its bounding shape, and add it to the list of objects contained in those cells.
Check for collisions, but only between object pairs that can collide (e.g., Goombas don’t collide with other Goombas) and are in the same cell.
This is called a spatial hash grid broad phase, and it is very easy to implement. The first thing we have to define is the size of each cell. This is highly dependent on the scale and units we use for our game’s world.
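A hedged sketch of such a spatial hash grid, rebuilt each frame; CELL_SIZE, GameObject and narrowPhaseCheck are illustrative assumptions, and a complete version would also insert an object into every cell its bounding shape touches, not just the one holding its position.

static final float CELL_SIZE = 64f;  // tune to roughly the size of your objects

// Map a position to the index of the cell containing it.
static int cellId(float x, float y, int cellsPerRow) {
    return (int) (x / CELL_SIZE) + (int) (y / CELL_SIZE) * cellsPerRow;
}

// Rebuild the grid, then run the narrow phase only within each cell.
Map<Integer, List<GameObject>> grid = new HashMap<Integer, List<GameObject>>();
for (GameObject o : objects) {
    int id = cellId(o.x, o.y, cellsPerRow);
    List<GameObject> cell = grid.get(id);
    if (cell == null) {
        cell = new ArrayList<GameObject>();
        grid.put(id, cell);
    }
    cell.add(o);
}
for (List<GameObject> cell : grid.values()) {
    for (int i = 0; i < cell.size(); i++)
        for (int j = i + 1; j < cell.size(); j++)
            narrowPhaseCheck(cell.get(i), cell.get(j));
}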
It also depends on the bounding shape you're using. A simple rectangle or circle around the characters and its euclidean distance are simple things to calculate, but a finer shape (including details such as "the head" or "the legs" as additional little bounding shapes) will be a lot more computationally expensive to test.
If all objects are free to move to any part of the screen, then the best you can do is your O(n^2) algorithm. You can improve it by a constant factor by realizing that when you check whether object A collides with object B, you don't have to later check whether object B collides with object A.
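A minimal sketch of that constant-factor improvement: start the inner loop at i + 1 so every pair is tested exactly once. The characters list and the collides/handleCollision helpers are illustrative assumptions.

for (int i = 0; i < characters.size(); i++) {
    for (int j = i + 1; j < characters.size(); j++) {  // each pair visited once
        if (collides(characters.get(i), characters.get(j))) {
            handleCollision(characters.get(i), characters.get(j));
        }
    }
}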
Enclose each character within a fixed-size square. Before you check for character collision, check whether the squares in which they are enclosed collide. If and only if the squares collide is there a chance for the characters to collide. Checking for square collisions is easy, as you just have to compare the x and y coordinates.
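A minimal sketch of that square pre-check for two axis-aligned squares of equal side length s, positioned by their top-left corners; the names are illustrative.

// The squares overlap iff they overlap on both the x and the y axis.
static boolean squaresOverlap(float ax, float ay, float bx, float by, float s) {
    return Math.abs(ax - bx) < s && Math.abs(ay - by) < s;
}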
Dividing detection into a broad phase and a narrow phase, as Federico suggests, only helps if your collision detection algorithm is expensive, i.e. not a simple bounding-box test.
Fortunately there are other options.
You could try a collision mask technique. Since you don't seem to be limited by rendering speed, render a bounding box for each object into a hidden bitmap. Before rendering the next object, check the pixels at the four corners of its bounding box to see whether they have already been written to. You can even use a different colour for each object, so that the colour tells you which object the collision was with.
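A hedged Android sketch of that idea; Sprite, bounds and id are illustrative assumptions, and mask is an off-screen Bitmap that is never shown.

// True if any corner of r was already painted by an earlier object.
static boolean cornerHit(Bitmap mask, Rect r) {
    return mask.getPixel(r.left, r.top) != 0
        || mask.getPixel(r.right - 1, r.top) != 0
        || mask.getPixel(r.left, r.bottom - 1) != 0
        || mask.getPixel(r.right - 1, r.bottom - 1) != 0;
}

// Per frame: clear the mask, then test and stamp each object in turn.
mask.eraseColor(Color.TRANSPARENT);
for (Sprite s : sprites) {
    if (cornerHit(mask, s.bounds)) {
        // collision with a previously drawn object (its colour says which)
    }
    maskPaint.setColor(0xFF000000 | s.id);  // encode the object in the colour
    maskCanvas.drawRect(s.bounds, maskPaint);
}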
Another popular trick is simply not to do every collision check every frame. For example, games like Super Mario Bros actually only check for collisions between the player and enemies every other frame. You can do a more advanced version where you check all objects in round-robin fashion, doing as many checks per frame as you can afford. When things get busy, each object might only be checked every second or even every third frame, but the player is unlikely to notice. This works best if your objects are not moving so fast that they can pass through each other within a single frame.
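A small sketch of that round-robin scheduling; objects, checksPerFrame, roundRobinIndex and checkAgainstAll are illustrative assumptions.

// Each frame, test only a budget of objects, rotating the start index
// so every object gets its turn within a few frames.
int n = objects.size();
int budget = Math.min(checksPerFrame, n);
for (int k = 0; k < budget; k++) {
    checkAgainstAll(objects.get((roundRobinIndex + k) % n));
}
roundRobinIndex = (roundRobinIndex + budget) % n;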
I want to move an image in a 3-dimensional way in my Android application according to the device's movement. For this I am getting the x, y, z coordinate values through a SensorEvent, but I am unable to find APIs to move an image in 3 dimensions. Could anyone please point me to a way (any APIs) to do this?
Depending on the particulars of your application, you could consider using OpenGL ES for manipulations in three dimensions. A quite common approach then would be to render the image onto a 'quad' (basically a flat surface consisting of two triangles) and manipulate that using matrices you construct based on the accelerometer data.
An alternative might be to look into extending the standard ImageView, which out of the box supports manipulation by 3x3 matrices. For rotation this will be sufficient, but obviously you will need an extra dimension for translation - which you're probably after, given your remark about 'moving' an image.
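For the ImageView route, a hedged sketch of driving its matrix; rotationDegrees, dx and dy are assumed to come from your SensorEvent handling, and this stays 2D (rotation plus translation in the view plane), not true 3D.

Matrix m = new Matrix();
m.setRotate(rotationDegrees,  // pivot on the view's centre
        imageView.getWidth() / 2f, imageView.getHeight() / 2f);
m.postTranslate(dx, dy);      // then shift the image
imageView.setScaleType(ImageView.ScaleType.MATRIX);
imageView.setImageMatrix(m);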
If you decide to go with the first suggestion, this example code should be quite useful to start with. You'll probably be able to plug your sensor data straight into that and simply add the required math for the matrix manipulations.