2d Graphics - sprite click detection - android

I'm working on a small app that uses sprites which are rendered with a canvas and simple drawBitmap calls.
Once the user touches the screen I need to know which sprite was clicked on.
I'm able to achieve this when I treat each sprite as a rectangle with the width and height of the image.
However, some of the sprites occupy only a small portion of that rectangle, and I would like to ignore touches that land inside the rectangle but not on the internal shape.
Any ideas about a good method to do that?
Edit: Just to be clearer, let's say I have a 200x200 sprite that is an image of an airplane seen from above, and the airplane has very long wings. Since the wings are long there will be lots of "dead" area in the sprite.
I would like to detect only when the user clicks the airplane itself, not the "dead" area.
Thanks.

You will need to create a 2d array of all the pixels in the bitmap. Mask the pixels to either a 0 (transparent) or a 1 (has color). Then when you click inside the rectangle you just need to get the width offset and the height offset within the rectangle. That gives you your indices for mapping into the pixel array. Then check whether the index in the pixel array contains a 1 as its value. If so, you clicked on the actual image. Does that make sense?
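A minimal sketch of that idea, assuming the sprite's Bitmap is already loaded and that spriteX/spriteY (placeholder names, not from the original post) are the position where the sprite is drawn:

// inside your sprite class; uses android.graphics.Bitmap and android.graphics.Color
// Build the mask once when the sprite is loaded:
// 1 = pixel has color, 0 = transparent ("dead") pixel.
int[][] buildMask(Bitmap bmp) {
    int[][] mask = new int[bmp.getWidth()][bmp.getHeight()];
    for (int x = 0; x < bmp.getWidth(); x++) {
        for (int y = 0; y < bmp.getHeight(); y++) {
            mask[x][y] = Color.alpha(bmp.getPixel(x, y)) != 0 ? 1 : 0;
        }
    }
    return mask;
}

// On touch: convert screen coordinates into offsets inside the sprite's
// rectangle, then look those offsets up in the mask.
boolean isHit(int[][] mask, float touchX, float touchY, int spriteX, int spriteY) {
    int dx = (int) touchX - spriteX;
    int dy = (int) touchY - spriteY;
    if (dx < 0 || dy < 0 || dx >= mask.length || dy >= mask[0].length) {
        return false; // outside the sprite's bounding rectangle
    }
    return mask[dx][dy] == 1;
}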

You have to check the area where your Bitmap actually gets drawn, not some other rectangular shape. Just treat every sprite (which may have a different size) as a single rectangle whose width and height equal the width and height of the sprite.
Since you elaborated your question I'll give another suggestion.
When you have detected a click on the sprite, simply check the touched pixel (within the Bitmap's area) via the Bitmap.getPixel() function. You can then easily check whether the color at that position is something you're interested in; otherwise just skip the touch.
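For example, a rough sketch inside a custom View's onTouchEvent, where sprite is the Bitmap and left/top its draw position (placeholder names, not from the original post):

// uses android.graphics.Color and android.view.MotionEvent
@Override
public boolean onTouchEvent(MotionEvent event) {
    int x = (int) event.getX() - left;   // offset into the sprite's rectangle
    int y = (int) event.getY() - top;
    if (x >= 0 && y >= 0 && x < sprite.getWidth() && y < sprite.getHeight()) {
        // Color.alpha() == 0 means the touch hit a fully transparent "dead" pixel.
        if (Color.alpha(sprite.getPixel(x, y)) != 0) {
            return true; // the touch landed on the visible part of the sprite
        }
    }
    return super.onTouchEvent(event);
}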

Related

Positioning images based on detecting reference points within another image

Let's say, for example, I have a bitmap image of a tree, and I want to position other images (such as bitmaps of apples) on the tree leaves. Is there a way that I could put markers on the leaves... red dots, for instance... and then programmatically place apple images centered on those dots?
As a very basic test, I have an image with a white background and one red pixel in the center. I'd like to calculate the coordinates of this red point, and then set an ImageView to be placed at those coordinates.
How might I go about this?
It depends where your 'red point' marker is. If it's in the center or at any specific point (like 2/3 of the width, 1/3 of the height), you can just divide the layout width and height to get the right coordinates.
In other cases it would be better to set a white background and draw the markers manually in an overridden dispatchDraw method. That way you already know the coordinates of the marker.
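A rough sketch of that dispatchDraw idea, with the marker position simply hard-coded as an assumption:

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.util.AttributeSet;
import android.widget.FrameLayout;

public class MarkedLayout extends FrameLayout {
    private final Paint markerPaint = new Paint();

    public MarkedLayout(Context context, AttributeSet attrs) {
        super(context, attrs);
        setBackgroundColor(Color.WHITE);
        markerPaint.setColor(Color.RED);
    }

    @Override
    protected void dispatchDraw(Canvas canvas) {
        super.dispatchDraw(canvas);
        // Example marker position: 2/3 of the width, 1/3 of the height.
        float markerX = getWidth() * 2f / 3f;
        float markerY = getHeight() / 3f;
        canvas.drawCircle(markerX, markerY, 4f, markerPaint);
        // markerX/markerY are known here, so an apple ImageView can be
        // positioned at the same coordinates instead of searching for a red dot.
    }
}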
You want to position an image over the red dot, right?
I'm thinking of two different ways:
A -> You could make the red dot an ImageView itself, center it by using gravity, and then transform it into another kind of image.
Or...
B -> Make a container that uses the white background with the red dot as its background resource. Then center it by using gravity too, and finally position your image in the center of the container so it will be over the red dot.
No calculation is needed, if you think this could help.
It sounds like you are the one putting the markers onto your bitmaps.
If that is the case, is there a really good reason why you would want to be trying to embed the markers as data in the bitmap itself? That leads you to the problem of having to rediscover the locations. This could be a fuzzy task...what if there is a red barn next to the tree? Are you going to put an apple image on every red pixel making up the barn?
What you might actually want is to define a format which has a bitmap with no markers on it, and then a separate list of coordinates for where you want the apples to go. That doesn't require discovery of any kind...you just ship the image along with the list and you are done.
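For instance, the "list on the side" can be as small as this (all names here are purely illustrative, not from the original post):

import android.graphics.Point;
import java.util.Arrays;
import java.util.List;

class AppleLayoutData {
    // Coordinates shipped alongside tree.png instead of being encoded in its pixels.
    static final List<Point> APPLE_ANCHORS = Arrays.asList(
            new Point(120, 340),
            new Point(205, 310),
            new Point(260, 420));
    // Later, each apple ImageView is simply centered on one of these points;
    // no pixel scanning or marker detection is needed.
}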
There are some cases where there is no "place on the side" that you can put information, and you actually need it to go into the bitmap file. If so, consider also that there are some hidden places you can put data in bitmaps... metadata like Exif:
http://en.wikipedia.org/wiki/Exchangeable_image_file_format
So that's a middle-ground, where you can manage to get the list of points to "stow away" into the file containing the image without actually requiring the modification of the pixels.
If you find you are really stuck in a situation where you must put these coordinate specifications into the image data, then something a little bit more unique than a red dot would be easier to detect with certainty. Maybe there's something you know about your images... for instance, that they are PNG files and do not have any transparency. You could make transparent dots indicate substitution points.
The larger and weirder the pattern, the rarer it is... so if you know your pasted objects are always going to be bigger than 3x3, you could come up with a very unusual 3x3 pixel imprint for your markers that would be unlikely to occur in nature. Uncompressed in 24-bit color, a specific random 3x3 pattern would only occur by accident with probability 1/(2^24)^9, i.e. about 1 in 2^216. A vanishingly small number; although compression would create more gray areas.
But greater point being: if you don't have a good reason to turn a simple problem into a complex image-recognition exercise, don't. Just keep the list of points on the side somewhere so you don't have to hunt for them in the image.

Animate an ImageView so it starts sliding in from the opposite side it slid out

Swag, I'm currently programming an Android game and I need help with one problem. I want my background to slowly slide out on the right side of the screen, and at the same time have the exact part that just slid out slide back in from the left side. So, almost like a marquee TextView.
Is there any simple way of doing this without having to create a set of different ImageViews and animate them separately?
Hope you understand, and that somebody has an answer to my question, cheers!
The easiest way to do this would be to create a single large Bitmap that contains the entire background of your environment. Then each frame you display a different subsection of that image.
Here the gray box represents the entire background and the pink box represents the portion of it you are actually drawing. Each time you redraw you need to calculate how much time has passed since the last draw, and then use that delta_time to calculate the number of pixels to move the pink box.
To get a subimage from a bitmap you can simply use:
createBitmap(Bitmap source, int x, int y, int width, int height, Matrix m, boolean filter)
Where source would be the gray box, x would represent how far along the pink box is, y would be 0 and width would be the width of the pink box and height would be the height of the background image.
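A rough per-frame sketch of that, assuming background is the large Bitmap, the background is at least as tall and wider than the canvas, and the drawing happens in a custom View or SurfaceView (names like PIXELS_PER_SECOND are placeholders):

// fields in the drawing class; uses android.graphics.Bitmap and android.graphics.Canvas
private Bitmap background;                 // the big gray box
private float scrollX = 0f;                // how far the pink box has moved
private long lastFrameTime = System.currentTimeMillis();
private static final float PIXELS_PER_SECOND = 60f;

void drawFrame(Canvas canvas) {
    long now = System.currentTimeMillis();
    float deltaSeconds = (now - lastFrameTime) / 1000f;
    lastFrameTime = now;

    // Advance the visible window and wrap around so it loops like a marquee.
    int maxScroll = background.getWidth() - canvas.getWidth();
    scrollX = (scrollX + PIXELS_PER_SECOND * deltaSeconds) % maxScroll;

    // Copy out the currently visible slice and draw it.
    Bitmap visible = Bitmap.createBitmap(background,
            (int) scrollX, 0, canvas.getWidth(), canvas.getHeight(), null, false);
    canvas.drawBitmap(visible, 0, 0, null);
}

If allocating a Bitmap every frame is a concern, canvas.drawBitmap(background, srcRect, dstRect, null) with a moving source Rect achieves the same effect without the per-frame copy.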
Additionally I would recommend you check out AndEngine; it's a great open-source game engine with a vibrant support community. They have good solutions for problems like this.

Huge problem: how to move in a Canvas

I have an Android program in which you type in an equation, and the program then displays a graph in a "new" layout; it's like a coordinate system. You have the function line, the x axis, the y axis... basic school stuff, you know, the easy kind.
But if your equation's numbers are too high, like "x*x*40", the graph line is too big to fit on the display. This is where I need your help.
In Android you can move a picture up, down, left, right, zoom... and I want to do the same with the graph. I found tutorials like this one: http://obviam.net/index.php/displaying-graphics-with-android/
but that one works with a picture and I don't have a picture! There is no image whatsoever. The program works on a Canvas and draws lines with calls like "g.drawLine(x1, y1, x2, y2, color);", and in the end it looks something like this in full screen:
http://grockit.com/blog/collegeprep/files/2009/12/14.JPG
So here is the problem: how do I move it like a picture when it isn't one? In a lot of examples you must have a picture like R.drawable.image, but here there are just calculated lines.
I have one idea how to do it, but it's probably stupid:
- if you made the graph bigger than your screen (much bigger), then took a screenshot, saved it as a picture and then moved it like the picture in the example
(if you need more explanation I can provide it) Sorry if my English is bad :(
Thank you
Well, your best bet here is to use OpenGL. Otherwise, not only will you have problems with lines sometimes being too big or too small for a given screen, but also with different screen resolutions (your line might be too big for a 320x480 screen, but it may well become too small on some of the new 1280x720 screens).
Here's what I would do:
make an OpenGL surface view with an ORTHOGONAL projection
make the orthogonal projection's "far side" be of high resolution, with a fixed width, like maybe 1600
when the surface is initialized, set the OpenGL viewport to the screen's width and height (see the sketch after this list)
also set the far side's height so it keeps the same proportions as the screen
you can then use Canvas and its drawXXX() methods to create a bitmap with your graph, text and whatever else you want to display
then use that bitmap as a texture for a rectangular polygon that you draw in your orthogonal projection
now the size of the graph will always scale properly with the user's screen size (it fits all the time)
and now you can also easily add zoom and scroll options
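A minimal sketch of that projection setup with the GL10 fixed-function API (javax.microedition.khronos.opengles.GL10), using 1600 as the assumed fixed logical width:

@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    // Viewport matches the physical screen.
    gl.glViewport(0, 0, width, height);

    // Orthographic projection with a fixed logical width of 1600;
    // the logical height keeps the screen's aspect ratio.
    float logicalWidth = 1600f;
    float logicalHeight = logicalWidth * height / (float) width;

    gl.glMatrixMode(GL10.GL_PROJECTION);
    gl.glLoadIdentity();
    gl.glOrthof(0, logicalWidth, 0, logicalHeight, -1, 1);
    gl.glMatrixMode(GL10.GL_MODELVIEW);
    gl.glLoadIdentity();
}

The graph bitmap can then be uploaded with GLUtils.texImage2D() and drawn on a full-size quad in those logical coordinates.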

Is it possible to get the points of a bitmap as a path?

Suppose my bitmap looks like this:
The image is actually square, with transparent areas. I want to get only the visible points as an array so I can bound the shape and handle touch events on the canvas. Right now the bounds are the square, so when I touch the corner of the image it still detects a touch on the image. I don't want it to work like that; only when the user clicks on the visible part should an action be taken, otherwise not.
As a temporary measure I have used the image's radius from the center point. That works, but it isn't accurate; also, if the triangular part of the image is long, keeping it in the square format means the user still gets events outside the visible image.
I have used a canvas to draw the bitmap. Is there any other, or easier, way to do this and handle the event?
I have seen many games that use custom shapes like this, where the touch event fires only on the displayed part of the object. How could I achieve that?
Take a look at coordinates:
Android Canvas Coordinate System
and
http://code.google.com/p/apps-for-android/source/browse/trunk/SpriteMethodTest/src/com/android/spritemethodtest/CanvasSprite.java?r=150
Which goes into sprites:
http://p-xr.com/android-tutorial-how-to-paint-animate-loop-and-remove-a-sprite/
This may help too:
http://www.droidnova.com/playing-with-graphics-in-android-part-vi,209.html
Some of that should be helpful.

Viewport boundaries on any resolution with OpenGL ES

I'm having difficulties understanding the OpenGL perspective view. I've read tons of information, but it hasn't helped me achieve what I'm after, which is making sure my 3D scene fills the entire screen on every Android device.
To test this, I will be drawing a quad in 3D space which in the end should touch every corner, filling up the whole device's screen. I could then use this quad, or actually its coordinates, to specify a bounding box at a certain Z distance, which I could use to place my geometry while making sure it fills the screen. When the screen resizes, or when the app runs at another screen resolution, I would recalculate this bounding box and the geometry. I'm not talking about static geometry: say, for instance, I want to fill the screen with balls; it doesn't matter how big the balls are or how many there are, the only important thing is that the screen is filled and there are no redundant balls outside the visible frustum.
As far as I understand, when specifying the viewport you actually bind pixel values to the frustum's boundaries. I know that you can set up an orthographic view so that your window pixels match 3D geometry positions, but I'm not sure how this works in a perspective view.
Here I'm assuming the viewport width and height to be mapped to the near Z plane, so when a ball is at Z=1f it has its original size.
When moving the ball into the screen, towards far Z, the ball would be scaled down for the perspective to work. So a ball at Z=51f, for instance, would appear smaller on my screen, and I would need more balls to fill up the entire screen.
Now, in order to do so, I'm looking for the purple boundaries.
I actually need these boundaries to fill the entire screen for different frustum sizes (width x height), while the frustum angle and the Z distance of the balls stay the same.
I guess I could use trig to calculate these purple boundaries (see the blue triangle note).
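That trig is only a couple of lines; a sketch, assuming fovY really is the vertical frustum angle (as asked below) and z is the distance from the camera along the view direction:

// Half of the visible height at distance z, from the vertical field of view.
float halfHeight = z * (float) Math.tan(Math.toRadians(fovY / 2.0));
// The visible width follows from the screen's aspect ratio.
float halfWidth = halfHeight * screenWidth / (float) screenHeight;

// At depth z the visible region (the "purple boundaries") is then, in eye space:
// x in [-halfWidth, +halfWidth], y in [-halfHeight, +halfHeight]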
Am I correctly marking the frustum angle, it being the vertical angle of the frustum?
Could someone elaborate on the green 1f and -1f values, as I seem to have read something about them? They seem to be some scalar that is used to resize the geometry within the frustum?
I actually want to be able to programmatically position geometry against the viewport borders in 3D space, at any resolution/window size, for any arbitrary Android device.
Note: I have a Flash background, which uses a stage (visual area) with a known width x height at any resolution; that makes it easy to position/scale assets using either absolute or percentage measurements. I guess I'm trying to figure out how this system applies to OpenGL's perspective view.
I guess this post using gluUnproject answers the question.
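For reference, a rough sketch of the gluUnProject route on Android (android.opengl.GLU). The modelview and projection matrices have to be available on the Java side, for example tracked with the MatrixGrabber helper from the API demos or kept by hand; all variable names here are placeholders:

// Unproject a window point (winX, winY) on the near plane back into world space.
float[] obj = new float[4];                      // x, y, z, w
int[] viewport = {0, 0, screenWidth, screenHeight};

int ok = GLU.gluUnProject(
        winX,
        viewport[3] - winY,      // flip Y: window origin is top-left, GL's is bottom-left
        0f,                      // winZ: 0 = near plane, 1 = far plane
        modelViewMatrix, 0,
        projectionMatrix, 0,
        viewport, 0,
        obj, 0);

if (ok == GL10.GL_TRUE) {
    float worldX = obj[0] / obj[3];  // divide by w to get the real coordinates
    float worldY = obj[1] / obj[3];
    float worldZ = obj[2] / obj[3];
}

Unprojecting the screen corners at the chosen Z gives the same boundary values as the trig sketch above.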
