How to make a rectangle sprite stick to one side of the screen in Unity? - android

I am using a square sprite in Unity to make an Android game. I want this rectangle to always be placed at the bottom of the screen. The strip acts as the ground, and other objects can fall onto it. I want the lower side of this rectangle to stick to the lower edge of the screen. How do I do that? If I place it there manually, the placement gets disturbed as soon as I change the screen resolution. Also, the camera does not move, so I think I only need to fix the position of this strip with respect to the camera once. What should I do?

We need a little more information. A sprite stuck to the lower side of the screen is usually meant to be part of the UI. Is this the case? If it is, the way to do it is very different from that for a normal sprite in the world.
(Since I am not allowed to comment, I will try to answer for both cases. However, I am not at a computer capable of running Unity, so I can't really provide a concrete answer, i.e. code.)
UI:
Add an Image to your Canvas. Go to the anchor presets (inside the Rect Transform) and set the anchor to the appropriate position. Then, if you don't want the image to be stretched, check the "Preserve Aspect" option on the Image component (it may be named slightly differently depending on your Unity version). Assign the sprite to the Image and you are all set.
World:
Here, the situation is more complicated. You first need to get the screen dimensions, then calculate the size of the object according to the aspect ratio, then do a Camera.ScreenToWorldPoint conversion to position it, and finally use the LookAt method in Update() so the sprite faces the camera at all times. Or use the UI layer as described above; let's be honest, that is probably what you need.

Related

Android game board with irregular board spaces

I'm creating an Android board game with several differently shaped board spaces (like Risk).
I want to be sure that my board appears correct and that OnTouchListeners stay in place on the GUI regardless of screen size/resolution.
Possible solutions I have thought of and their problems:
Create a single image for the board and assign OnTouchListeners based upon pixel geometry. Problem: If the user's display is a different resolution, my Listener might not be under the same pixels as my image (right?)
Create several ImageButtons and arrange them together. Problem: the ImageButtons might get rearranged based upon the display and I would end up with overlapping spaces or gaps.
Use Android custom drawing. If I do this, how would I link my Listeners to my Canvas and be sure that they are synced?
Basic question:
How to be sure that listeners sync with graphics in a GUI that uses irregular geometry?
I worked on an app with irregular touch areas so I can give you guidance on one way to achieve this.
Start with a single image for your entire board. This image is going to have a certain ("intrinsic") width and height regardless of any device resolutions.
Now here comes the tedious part. You (or maybe your graphic designer) will need to plot out coordinates of an irregular polygon for each touch area. These will be constants to your application.
When you are displaying your board, if you zoom and pan the image, keep track of the transform matrix used for the display. When the user touches the screen, you will get x,y coordinates from the OnTouchListener, and for those to be useful you will have to "de-transform" them to normalize them against the intrinsic dimensions of the board and your polygons.
We rolled our own hit-testing logic using an algorithm from http://alienryderflex.com/polygon/, but you can also try this: Create a Path out of your polygon coordinates (using moveTo(), lineTo(), and close()), then assign the Path to a Region using Region.setPath(). Once you have that, supposedly you should be able to hit-test using Region.contains(x,y), but I've never tried it so I can't guarantee that's going to work.
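Here is a rough sketch of that Path/Region idea in Android Java. The polygon coordinates, field names, and the zoom/pan Matrix are illustrative assumptions rather than code from a real app, and as noted above I can't guarantee Region.contains() behaves exactly as hoped:

```java
import android.graphics.Matrix;
import android.graphics.Path;
import android.graphics.Region;

public class BoardHitTester {
    // Intrinsic (bitmap-space) polygon for one board space: x0,y0, x1,y1, ...
    private static final float[] SPACE_1 = { 10, 10, 120, 30, 90, 140, 20, 100 };

    private final Region space1Region = new Region();
    private final Matrix displayMatrix = new Matrix(); // your current zoom/pan transform

    public void buildRegion(int boardWidth, int boardHeight) {
        Path path = new Path();
        path.moveTo(SPACE_1[0], SPACE_1[1]);
        for (int i = 2; i < SPACE_1.length; i += 2) {
            path.lineTo(SPACE_1[i], SPACE_1[i + 1]);
        }
        path.close();
        // Clip the region to the board's intrinsic dimensions.
        space1Region.setPath(path, new Region(0, 0, boardWidth, boardHeight));
    }

    public boolean isTouchOnSpace1(float screenX, float screenY) {
        // "De-transform" the screen touch back into intrinsic board coordinates.
        Matrix inverse = new Matrix();
        if (!displayMatrix.invert(inverse)) return false;
        float[] point = { screenX, screenY };
        inverse.mapPoints(point);
        return space1Region.contains((int) point[0], (int) point[1]);
    }
}
```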

Libgdx Hud with two stages

I have an app that's currently working with a single stage, but I need to add a side display/section as a HUD, with scores/lives etc. on it, so that the HUD is on the left and the main game screen on the right. The main game screen will be fixed and will not move around.
From researching I've found a couple of solutions.
1 - two stages
2 - a group with two groups in it, possibly using a HorizontalGroup
3 - two cameras, one stage
4 - one stage, one camera, but changing the position of the camera for each set of actors
I think option 1 is my preference, but I have some questions.
Do stages always fill the whole screen, or can I start them where I want? This would make it easier for the right-hand screen to calculate positions based on 0,0 of that screen, rather than always having to add the width of the HUD to any calculations.
Do I need to worry about viewports? Currently I'm not using one (which I think means my stage is set to scaling by default), but nothing looks stretched as a result. I don't know much about viewports, but there always seems to be a compromise to be made with them, i.e. black bars at the top or sides.
If I have two stages, do they each have their own camera? Do I need to worry about this? Can I possibly aim the right-hand camera at an offset so I can still draw things from 0,0, with that being the bottom-left corner of the right stage rather than the whole screen?
Finally, off topic, I am a little confused about SpriteBatch. I'm not currently using one, because I use a stage. Is that OK, or should I still be using one in conjunction with a stage somehow, and add all my actors to that?
If I understand correctly, you're using scene2d for your game world and also for your HUD, and the HUD doesn't overlay the game world but rather uses its own portion of the screen exclusively.
Stages do not always fill the whole screen. They have no concept of filling or not filling anything, because they can have objects that are being drawn off screen. However, they are clipped to a rectangle defined by their Viewport.
In your case, it seems you need two Viewports, and therefore two Stages. You say you aren't using a Viewport, but you are: Stage automatically creates its own ScalingViewport that's set up like a StretchViewport. (ScalingViewport is not mentioned in the documentation, which is out of date.) StretchViewport is usually a poor choice because your game will be distorted to fit whatever aspect ratio the device has.
ExtendViewports do not cause black bars as long as you don't set a max width/height on them, and I think they are usually the best choice for any game-world view.
You can set your two Viewports to cover specific parts of the screen that you calculate yourself. Since this is a specialized case, I think you will have to directly subclass the Viewport class (not one of its subclasses) and manipulate each of them using viewport.setScreenBounds(...).
Regarding your last question: yes, each of the two stages has its own Viewport, and each Viewport has its own camera. Once you set up your two Viewports to each have their own portion of the screen, you can also set them to treat their respective bottom left corners as 0,0.
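A minimal sketch of what that could look like, assuming a fixed-width HUD strip on the left; the class name, HUD_WIDTH, and world dimensions are placeholders, and depending on your needs you may prefer the hand-rolled Viewport subclass described above instead:

```java
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.ScreenAdapter;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.scenes.scene2d.Stage;
import com.badlogic.gdx.utils.viewport.ExtendViewport;
import com.badlogic.gdx.utils.viewport.ScreenViewport;

public class SplitScreen extends ScreenAdapter {
    private static final int HUD_WIDTH = 200; // screen pixels reserved for the HUD strip

    private final Stage hudStage  = new Stage(new ScreenViewport());
    private final Stage gameStage = new Stage(new ExtendViewport(800, 480)); // world units

    @Override
    public void resize(int width, int height) {
        // HUD gets the left strip; its 0,0 is the strip's bottom-left corner.
        hudStage.getViewport().update(HUD_WIDTH, height, true);
        hudStage.getViewport().setScreenBounds(0, 0, HUD_WIDTH, height);

        // Game world gets the rest; its 0,0 is that area's bottom-left corner.
        gameStage.getViewport().update(width - HUD_WIDTH, height, true);
        gameStage.getViewport().setScreenBounds(HUD_WIDTH, 0, width - HUD_WIDTH, height);
    }

    @Override
    public void render(float delta) {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

        // Apply each viewport before drawing its stage so glViewport is set correctly.
        gameStage.getViewport().apply();
        gameStage.act(delta);
        gameStage.draw();

        hudStage.getViewport().apply();
        hudStage.act(delta);
        hudStage.draw();
    }
}
```

If both stages need to receive touch input, you can route it to them with an InputMultiplexer, e.g. Gdx.input.setInputProcessor(new InputMultiplexer(hudStage, gameStage)).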

Positioning images based on detecting reference points within another image

Let's say for example I have a bitmap image of a tree, and I want to position other images (such as bitmaps of apples) on the tree leaves. Is there a way that I could put markers on the leaves... red dots for instance... and then programmatically place apple images centered on those dots?
As a very basic test, I have image with a white background with one red pixel in the center. I'd like to calculate the coordinates of this red point, and then set an ImageView to be placed on those coordinates.
How might I go about this?
It depends on where your 'red point' marker is. If it's in the center, or at any other fixed fraction of the image (like 2/3 of the width, 1/3 of the height), you can just take those fractions of the layout width and height to get the right coordinates.
In other cases it would be better to use a plain white background and draw the markers manually in an overridden dispatchDraw() method. That way you already know the coordinates of the marker.
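For the first case, here is a minimal sketch of placing a view at a known fraction of its parent once layout has happened; container and appleView are assumed to already exist in your layout:

```java
// Place the apple centered at 2/3 of the width and 1/3 of the height of the parent.
container.post(new Runnable() {
    @Override
    public void run() {
        float x = container.getWidth() * 2f / 3f;
        float y = container.getHeight() * 1f / 3f;
        appleView.setX(x - appleView.getWidth() / 2f);
        appleView.setY(y - appleView.getHeight() / 2f);
    }
});
```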
You want to position an image over the red dot, right?
I'm thinking of two different ways:
A -> You could make the red dot an ImageView itself, center it by using gravity, and then simply swap its image for the apple when needed.
Or...
B -> Make a container that uses the white background with the red dot as its background resource. Then center it by using gravity too, and finally position your image at the center of the container so it sits over the red dot.
No calculation is needed, if you think this could help.
It sounds like you are the one putting the markers onto your bitmaps.
If that is the case, is there a really good reason why you would want to be trying to embed the markers as data in the bitmap itself? That leads you to the problem of having to rediscover the locations. This could be a fuzzy task...what if there is a red barn next to the tree? Are you going to put an apple image on every red pixel making up the barn?
What you might actually want is to define a format which has a bitmap with no markers on it, and then a separate list of coordinates for where you want the apples to go. That doesn't require discovery of any kind...you just ship the image along with the list and you are done.
There are some cases where there is no "place on the side" that you can put information, and you actually need it to go into the bitmap file. If so, consider also that there are some hidden places you can put data in bitmaps... metadata like Exif:
http://en.wikipedia.org/wiki/Exchangeable_image_file_format
So that's a middle-ground, where you can manage to get the list of points to "stow away" into the file containing the image without actually requiring the modification of the pixels.
If you find you are really stuck in a situation where you must put these coordinate specifications into the image data, then something a little bit more unique than a red dot would be easier to detect with certainty. Maybe there's something you know about your images... for instance, that they are PNG files and do not have any transparency. You could make transparent dots indicate substitution points.
The larger and weirder the pattern, the rarer it is... so if you know your objects being pasted are always going to be bigger than 3x3, you could come up with a very unusual 3x3 pixel imprint for your markers that would be unlikely to occur in nature. Uncompressed in 24-bit color, a specific 3x3 pattern would only occur by accident with probability (1/2^24)^9 = 1/2^216 per window, a vanishingly small number; although compression would create more gray areas.
But greater point being: if you don't have a good reason to turn a simple problem into a complex image-recognition exercise, don't. Just keep the list of points on the side somewhere so you don't have to hunt for them in the image.

Android Game Development - Custom map leading to different activities

I'd like to create a custom map. It should be (or look like) one picture, but depending on which part of it the user clicks, it should move the user to a different location (i.e. start a different activity). I've seen it done in several games but I don't know how to do it myself.
The parts of the picture have irregular, non-rectangular borders (obviously it would be easy to do with many square images). Sadly, I don't even know what term describes what I want to do, so I wasn't able to find any helpful tutorials or discussion topics.
Example:
Picture: http://i236.photobucket.com/albums/ff40/iathen/mapEx.png
If the user touches the purple slide, (s)he should be led to activity_1
If the user touches the blue slide, (s)he should be led to activity_2
If the user touches the green slide, (s)he should be led to activity_3
In my experience there are 2 main (most used) ways to achieve this.
The first (my favorite):
Get the data from a PNG
You should draw multiple layers to a canvas. These layers constitute your "zones" (blue, green, purple in the image). You obtain the data for these areas from PNGs (with transparency, of course) that you use to draw to the canvas. You must store the values where a tap from the user can land (the non-transparent areas). Note that these values can be scaled up/down depending on the map size, screen resolution, map dimensions, etc.
Once you've drawn the layers to the canvas, you should check for a match between the user's tap and the stored areas. Take into consideration the order in which the tap is processed in your code: in your image, the purple layer is on top, so it must be processed first, then the blue, and the green last. This way you can have an "island" inside a bigger area.
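A rough sketch of this first approach in Android Java, assuming each zone is loaded as its own transparent PNG the same size as the map; all the names here are illustrative:

```java
import android.graphics.Bitmap;
import android.graphics.Color;
import android.view.View;

public class ZoneHitTester {
    // Zone bitmaps in draw order, topmost first (e.g. purple, blue, green).
    private final Bitmap[] zoneLayers;

    public ZoneHitTester(Bitmap[] zoneLayers) {
        this.zoneLayers = zoneLayers;
    }

    /** Returns the index of the topmost zone under the touch, or -1 for no hit. */
    public int hitTest(float touchX, float touchY, View mapView) {
        for (int i = 0; i < zoneLayers.length; i++) {
            Bitmap layer = zoneLayers[i];
            // Scale the on-screen touch back to the bitmap's own coordinate space.
            int bx = (int) (touchX * layer.getWidth() / mapView.getWidth());
            int by = (int) (touchY * layer.getHeight() / mapView.getHeight());
            if (bx < 0 || by < 0 || bx >= layer.getWidth() || by >= layer.getHeight()) continue;
            if (Color.alpha(layer.getPixel(bx, by)) != 0) {
                return i; // non-transparent pixel: the tap landed in this zone
            }
        }
        return -1;
    }
}
```

The returned index can then be mapped to whichever Activity you want to start.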
The second way:
Generate the boundaries programmatically
I think this solution is self-explanatory. The only problem I've faced with this variant is that when the surface boundaries get messy, it's really complicated to generate the proper equations.
EDIT:
Using the first approach you can employ multiple PNGs to load data or use a single PNG with data coded into the bytes (i.e. RGB values). It's up to you to decide which one to implement.
Hope it helps!
Since a touchscreen itself isn't very accurate, your collision detection for the buttons doesn't need to be either. It would be a waste of time to try to make a complicated collision detection algorithm to detect a touch within those weird shapes.
Since you are making a game, I assume you know how to handle custom touch events, as well as canvas (at least). There are many ways to do what you want, but the specific example image you linked is kind of a special case.
You could create a giant bounding circle around the three blobs, and then check if the user touched within the bounds of the circle (i.e. check if the distance from the touch to the center of the circle is less than or equal to the radius). Once you determine that it is, you could check which section of the circle it falls into by splitting it up into 3 equal sections. It requires some math, but shouldn't be that complicated.
It wouldn't be a perfect solution, but it should be good enough. Although, you might have to change the buttons a little so they aren't so stretched out horizontally, otherwise a bounding circle wouldn't be ideal.
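A hedged sketch of that idea, assuming the circle's center and radius are known fields in your class; the 120-degree split is just one way to carve out three equal sections:

```java
// Returns 0, 1 or 2 for one of three equal pie sections, or -1 if outside the circle.
private int sectionForTouch(float touchX, float touchY) {
    float dx = touchX - centerX;
    float dy = touchY - centerY;
    if (dx * dx + dy * dy > radius * radius) return -1; // outside the bounding circle
    // atan2 gives an angle in radians (-PI..PI); convert to 0..360 degrees.
    float degrees = (float) Math.toDegrees(Math.atan2(dy, dx));
    if (degrees < 0) degrees += 360f;
    return (int) (degrees / 120f); // each section spans 120 degrees
}
```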
Personally, in my games I always have "nodes" that represent the visual elements of the game, such as buttons. Instead of using a large image like you are doing, I would create separate images for each button, and then check their collisions with touch events independently. That way I could have each button check with their own individual bounding circles, or, if absolutely necessary, I could even have custom algorithms for each individual button.
These aren't perfect solutions. If you do want a pixel-perfect solution, you'll need to implement some polygon collision detection algorithms.
One thing to consider is screen size and ratio. The only constants you should use are percentages.

Have huge prob how to move in Canvas

I have an Android program in which you type an equation and the program displays a graph of it in a "new" layout, like a coordinate system. You have the function line, the x axis, the y axis... basic school stuff, you know, the easy kind.
But if your equation's values get too high, like "x*x*40", the graph line is too big to fit on the display. This is where I need your help.
In Android you can move a picture up, down, left, right, zoom, etc., and I want to do the same with the graph. I found tutorials like this one: http://obviam.net/index.php/displaying-graphics-with-android/
but they use a picture, and I don't have a picture! I have no image at all. The program works with a Canvas and draws lines with calls like "g.drawLine(x1, y1, x2, y2, color);", and in full screen it ends up looking something like this:
http://grockit.com/blog/collegeprep/files/2009/12/14.JPG
So the problem is how to pan around like with a picture, when it's not a picture. In a lot of examples you must have a picture like R.drawable.image, but here there are just calculated lines.
I have one idea how to do it, but it's probably a bad one:
-make the graph much bigger than your screen, take a screenshot, save it as a picture, and then move it around like the picture in the example
(If you need more explanation I can provide it.) Sorry if my English is bad :(
Thank you
Well, your best bet here is to use OpenGL. Otherwise, not only will you have problems with lines sometimes being too big or too small for a given screen, but also with different screen resolutions (your line might be too big for a 320x480 screen, but it may well become too small on some of the new 1280x720 screens).
Here's what I would do:
make an OpenGL SurfaceView with an orthographic projection
make the orthographic projection's "far side" high resolution, with a fixed width, like maybe 1600
when the surface is initialized, set the OpenGL viewport to the screen's width and height
also set the far side's height so it keeps the same proportions as the screen
you can then use Canvas and its drawXxx() methods to create a bitmap with your graph, text, and whatever else you want to display
then use that bitmap as a texture for a rectangular polygon that you draw in your orthographic projection
now the size of the graph will always scale properly with the user's screen size (it will fit at all times)
also, you can now easily add zoom and scroll options
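For the "draw the graph into a bitmap" step, here is a small hedged sketch; the bitmap size, scale factor, and example equation are arbitrary, and the resulting bitmap would then be uploaded as a GL texture:

```java
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;

public final class GraphBitmap {
    // Draws y = x*x*40 into a fixed-size bitmap, independent of the device screen.
    public static Bitmap draw() {
        Bitmap graphBitmap = Bitmap.createBitmap(1600, 900, Bitmap.Config.ARGB_8888);
        Canvas canvas = new Canvas(graphBitmap);
        Paint paint = new Paint();
        paint.setColor(Color.BLUE);
        paint.setStrokeWidth(2f);

        // Axes through the middle of the bitmap.
        canvas.drawLine(0, 450, 1600, 450, paint);
        canvas.drawLine(800, 0, 800, 900, paint);

        // Plot the curve as connected segments; scaleY is where zooming would come in.
        float scaleY = 0.15f;
        float prevPx = 0, prevPy = 450;
        for (int px = 0; px <= 1600; px += 4) {
            float x = (px - 800) / 100f;      // map bitmap x to world x in [-8, 8]
            float y = x * x * 40f;            // the user's equation
            float py = 450 - y * scaleY;      // scale and flip y for bitmap coordinates
            if (px > 0) canvas.drawLine(prevPx, prevPy, px, py, paint);
            prevPx = px;
            prevPy = py;
        }
        return graphBitmap;
    }
}
```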
