Android zoomable and scrollable board game

I'm developing an Android board game and have a question regarding creating a board that is zoomable and scrollable. The board contains a static background, characters (players) and the actual "level", which is drawn using tiles.
My solution is to have a collection of elements (tiles, figures, all game elements, each with x, y coordinates plus width and height), a camera, and a renderer that draws the collection according to cameraX, cameraY, cameraWidth and cameraHeight. If the user scrolls to the right, the camera just updates cameraX appropriately, so the surface is scrollable. If the user zooms in or out, the renderer just scales every element image appropriately.
Example code for the renderer with scrollable surface and zoom in/out
protected void draw(Canvas c) {
    // fetch only the elements that intersect the current camera window
    List<GameElement> elements = collection.getElements(cameraX, cameraY, cameraWidth, cameraHeight);
    for (int i = 0; i < elements.size(); i++) {
        elements.get(i).drawElement(c);
    }
}
...
// element class drawElement function
protected void drawElement(Canvas c) {
    if (this.image != null) {
        // scale the element's size by the current zoom factor
        float w = this.width * this.zoomFactor;
        float h = this.height * this.zoomFactor;
        // Canvas has no drawBitmap(bitmap, x, y, w, h) overload, so draw into a destination rect
        RectF dst = new RectF(this.x, this.y, this.x + w, this.y + h);
        c.drawBitmap(this.image, null, dst, null);
    }
}
Is this the best solution?
Could it be achieved some other way?
Could scrolling be achieved using a ScrollView?
I don't want to use any engine, because this is for a school project.

Actually you can simplify this situation somewhat. If what you want is a flat texture plane that is simply distorted by perspective, the Android Camera class can help you. Do not confuse this with the hardware camera for taking photos: this Camera is a helper class wrapped around a Matrix that performs 3D transformations on 2D objects. You can read more about the underlying mathematics by searching for "perspective projection" and "transformation matrices". Basically, you will want to create a canvas and do your drawing in a completely 2D way; then, right before you draw to the screen, transform that canvas using the Camera class. Let me know if you need some clarification. There is a lot of cool mathematics going on behind the scenes!
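A minimal sketch of that idea, assuming a custom View that has already rendered its flat 2D board into a bitmap (boardBitmap and the tilt angle are placeholders, not part of the question's code):
@Override
protected void onDraw(Canvas canvas) {
    android.graphics.Camera camera = new android.graphics.Camera();
    Matrix matrix = new Matrix();
    camera.save();
    camera.rotateX(30); // tilt the board plane away from the viewer
    camera.getMatrix(matrix);
    camera.restore();
    // pivot around the centre of the board instead of its top-left corner
    matrix.preTranslate(-boardBitmap.getWidth() / 2f, -boardBitmap.getHeight() / 2f);
    matrix.postTranslate(getWidth() / 2f, getHeight() / 2f);
    canvas.drawBitmap(boardBitmap, matrix, null);
}
The same pattern covers zoom: scale the matrix (or translate the Camera along z) before drawing the bitmap.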
Take a look at this sample from the Android API Demos
http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/animation/Rotate3dAnimation.html
Android graphics.Camera documentation
http://developer.android.com/reference/android/graphics/Camera.html

Related

How do MPAndroidChart renderers work and how do I write a custom renderer?

I am using the library MPAndroidChart but it doesn't have all of the functionality I want out of the box.
I have heard that it is possible to implement the functionality I want by writing a custom renderer.
I have looked at the source code for the renderers in the MPAndroidChart GitHub repo, but I can't understand the concepts involved.
How do MPAndroidChart renderers work?
What is the high-level procedure for writing a custom renderer?
Understanding Views and Canvas
First, one should study the Canvas and Drawables guide in the official Android documentation. In particular, it is important to note that LineChart, BarChart, etc. are subclasses of View that display themselves by overriding the onDraw(Canvas c) callback of the View superclass. Note also the definition of a "canvas":
A Canvas works for you as a pretense, or interface, to the actual surface upon which your graphics will be drawn — it holds all of your "draw" calls.
When you are working with renderers, you will be dealing with the functionality for drawing lines, bars, etc. on the canvas.
Translation between values on the chart and pixels on the canvas
Points on the chart are specified as x and y values with respect to the units on the chart. For example, in the chart below, the centre of the first bar is at x = 0, and the first bar has a y-value of 52.28.
This clearly does not correspond to the pixel co-ordinates on the canvas. On the canvas, x = 0 would be the left-most column of pixels, which is clearly blank. Likewise, because pixel enumeration starts from the top at y = 0, the tip of the bar is clearly not at pixel 52.28 (the y-value on the chart). Using Developer options / Pointer location, we can see that the tip of the first bar is at approximately x = 165 and y = 1150.
A Transformer is responsible for converting chart values to pixel (on-screen) co-ordinates and vice versa. A common pattern in renderers is to perform calculations using chart values (which are easier to reason about) and then, at the end, use the transformer to map the result onto the screen.
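As a rough illustration (a sketch only, not library documentation; mChart and dataSet are the same fields used in the renderer code further down):
// Convert a chart value (x = 3, y = 52.28) into pixel co-ordinates.
// Transformer#pointValuesToPixel transforms the float array in place.
Transformer trans = mChart.getTransformer(dataSet.getAxisDependency());
float[] point = { 3f, 52.28f };
trans.pointValuesToPixel(point);
float pixelX = point[0];
float pixelY = point[1];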
View port and bounds
A view port is a window, i.e., a bounded area on the chart. View ports are used to determine which part of the chart the user can currently see. Each chart has a ViewPortHandler that encapsulates the functionality related to view ports. We can use ViewPortHandler#isInBoundsLeft(float x) and ViewPortHandler#isInBoundsRight(float x) to determine which x-values the user can currently see.
In the chart pictured above, the BarChart "knows about" the BarEntry objects for x = 6 and above, but because they are out of bounds (not in the current view port) they are not rendered. Hence only the x-values 0 through 5 are within the current view port.
ChartAnimator
The ChartAnimator provides an additional transformation to be applied to the chart. Usually this is a simple multiplication. For example, assume we want an animation where the points of the chart start at the bottom and gradually rise to their correct y-value over 1 second. The animator provides a phaseY, a simple scalar that starts at 0.000 at time 0 ms and rises gradually to 1.000 at 1000 ms.
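As a small illustration (chart is any Chart instance and entry any Entry; the 1000 ms duration is just an example):
// Start a 1-second rise animation; while it runs, the renderer's phaseY
// grows from 0f to 1f and every drawn y-value is scaled by it.
chart.animateY(1000);
// inside a renderer:
float animatedY = entry.getY() * mAnimator.getPhaseY();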
An example of renderer code
Now that we understand the basic concepts involved, let's look at some code from LineChartRenderer:
protected void drawHorizontalBezier(ILineDataSet dataSet) {
    float phaseY = mAnimator.getPhaseY();
    Transformer trans = mChart.getTransformer(dataSet.getAxisDependency());
    mXBounds.set(mChart, dataSet);
    cubicPath.reset();
    if (mXBounds.range >= 1) {
        Entry prev = dataSet.getEntryForIndex(mXBounds.min);
        Entry cur = prev;
        // let the spline start
        cubicPath.moveTo(cur.getX(), cur.getY() * phaseY);
        for (int j = mXBounds.min + 1; j <= mXBounds.range + mXBounds.min; j++) {
            prev = cur;
            cur = dataSet.getEntryForIndex(j);
            final float cpx = (prev.getX())
                    + (cur.getX() - prev.getX()) / 2.0f;
            cubicPath.cubicTo(
                    cpx, prev.getY() * phaseY,
                    cpx, cur.getY() * phaseY,
                    cur.getX(), cur.getY() * phaseY);
        }
    }
    // if filled is enabled, close the path
    if (dataSet.isDrawFilledEnabled()) {
        cubicFillPath.reset();
        cubicFillPath.addPath(cubicPath);
        // create a new path, this is bad for performance
        drawCubicFill(mBitmapCanvas, dataSet, cubicFillPath, trans, mXBounds);
    }
    mRenderPaint.setColor(dataSet.getColor());
    mRenderPaint.setStyle(Paint.Style.STROKE);
    trans.pathValueToPixel(cubicPath);
    mBitmapCanvas.drawPath(cubicPath, mRenderPaint);
    mRenderPaint.setPathEffect(null);
}
The first few lines before the for loop are the setup for the renderer loop. Note that we obtain the phaseY from the ChartAnimator, obtain the Transformer for the data set, and calculate the view port bounds (mXBounds).
The for loop basically means "for each point that is within the left and right bounds of the view port". There is no point in rendering x-values that cannot be seen.
Within the loop, we get the x-value and y-value for the current entry using dataSet.getEntryForIndex(j) and create a path between that and the previous entry. Note how the y-values in the path are all multiplied by phaseY for animation.
Finally, after the paths have been calculated, the value-to-pixel transformation is applied with trans.pathValueToPixel(cubicPath); and the paths are rendered onto the canvas with mBitmapCanvas.drawPath(cubicPath, mRenderPaint);.
Writing a custom renderer
The first step is choosing the correct class to subclass. Note the classes in the package com.github.mikephil.charting.renderer, including XAxisRenderer, LineChartRenderer, etc. Once you create a subclass, you can simply override the appropriate method. As per the example code above, we would override void drawHorizontalBezier(ILineDataSet dataSet) without calling super (so as not to invoke the rendering stage twice) and replace it with the functionality we want. If you're doing it right, the overridden method should look at least a little like the method you are overriding:
1. Obtaining a handle on the transformer, animator, and bounds
2. Looping through the visible x-values (the x-values that are within the view port bounds)
3. Preparing points to render in chart values
4. Transforming the points into pixels on the canvas
5. Using the Canvas class methods to draw on the canvas
You should study the methods in the Canvas class (drawBitmap etc.) to see what operations you are allowed to perform in the renderer loop.
If the method that you need to override is not exposed, you may have to subclass a base renderer like LineRadarRenderer to achieve the desired functionality.
Once you have engineered the renderer subclass you want, you can plug it in with Chart#setRenderer(DataRenderer renderer), BarLineChartBase#setXAxisRenderer(XAxisRenderer renderer), and similar methods.
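Putting that together, a skeleton might look roughly like this (a sketch only, not a drop-in implementation: the class name and drawing body are placeholders, and the constructor is assumed to mirror LineChartRenderer's):
public class MyLineChartRenderer extends LineChartRenderer {

    public MyLineChartRenderer(LineDataProvider chart, ChartAnimator animator,
                               ViewPortHandler viewPortHandler) {
        super(chart, animator, viewPortHandler);
    }

    @Override
    protected void drawHorizontalBezier(ILineDataSet dataSet) {
        // deliberately not calling super, so the default rendering does not run twice
        Transformer trans = mChart.getTransformer(dataSet.getAxisDependency());
        float phaseY = mAnimator.getPhaseY();
        mXBounds.set(mChart, dataSet);
        // 1. build a Path (or points) in chart values, scaling y by phaseY
        // 2. convert to pixel co-ordinates with trans.pathValueToPixel(path)
        // 3. draw it with mBitmapCanvas.drawPath(path, mRenderPaint)
    }
}

// Attaching it to an existing LineChart instance:
lineChart.setRenderer(new MyLineChartRenderer(lineChart, lineChart.getAnimator(),
        lineChart.getViewPortHandler()));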

Locking Text in Android Studio using Libgdx

I made a Flappy Bird clone and the pipes are moving towards the bird, but the text moves along with the pipes. I want to lock the text in the middle of the screen.
public void render(SpriteBatch sb) {
    sb.setProjectionMatrix(cam.combined);
    sb.begin();
    sb.draw(bg, cam.position.x - (cam.viewportWidth / 2), 0);
    sb.draw(bird.getTexture(), bird.getPosition().x, bird.getPosition().y);
    for (Tube tube : tubes) {
        sb.draw(tube.getTopTube(), tube.getPosTopTube().x, tube.getPosTopTube().y);
        sb.draw(tube.getBottomTube(), tube.getPosBotTube().x, tube.getPosBotTube().y);
    }
    sb.draw(ground, groundPos1.x, groundPos1.y);
    sb.draw(ground, groundPos2.x, groundPos2.y);
    sb.end();
}
I usually use a Scene2D Stage with a static Viewport to display a HUD or GUI as an overlay on top of the actual game.
In most cases a HUD or GUI should be fixed to the screen and should not move; therefore, to have a movable camera in your game world you have to separate these elements entirely. What is happening in your case is that your current camera draws all elements, and when it moves, the elements of your GUI are drawn at a different position on the screen, just like your pipes.
If you are drawing your text with the same batch, an easy fix is to set up a second camera for it: when you are done drawing your game and have ended the batch, set that camera's combined matrix as the projection matrix of the SpriteBatch.
spriteBatch.setProjectionMatrix(movingCam.combined);
spriteBatch.begin();
//... draw stuff that belongs to your game world
spriteBatch.end();
spriteBatch.setProjectionMatrix(staticCam.combined);
spriteBatch.begin();
//... draw stuff that belongs to your GUI
spriteBatch.end();
Anyway, I would still recommend using Scene2D for the GUI, since it has everything you would need, like buttons, tables, labels, etc.
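A rough sketch of that Scene2D approach (assuming you already have a Skin for the Label style; hudStage and scoreLabel are placeholder names):
// created once, e.g. in the screen's constructor
Stage hudStage = new Stage(new ScreenViewport());
Label scoreLabel = new Label("Score: 0", skin);
scoreLabel.setPosition(Gdx.graphics.getWidth() / 2f, Gdx.graphics.getHeight() / 2f, Align.center);
hudStage.addActor(scoreLabel);

// every frame, after rendering the game world
hudStage.act(Gdx.graphics.getDeltaTime());
hudStage.draw();
Because the Stage uses its own ScreenViewport, the label stays put no matter where the game camera moves.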
Are you moving the camera to follow the bird? Just use an extra camera for your UI and/or static texts like score counters. And draw them after drawing game objects (on top of them).
See this question and submitted answer.

LibGdx Collisions

I'm currently working on my first Android game project using LibGDX. It is a 2D maze game where you use touch input to "draw" a line from one of the entrances to one of the exits. The world itself is a TiledMap, which acts only as a visual background at the moment.
The problem I have is the whole system of collision detection. There are (obviously) walls in the maze, located on the edges of my background tiles. So when I slide my finger across a wall, the "player line" should stop at the wall. Additionally, events should be triggered when the line reaches an exit.
I could not find a way to properly implement these features using the built-in libraries (I tried using Scene2D and Box2D). Stopping an actor's movement and firing an event is not that exotic, is it?
All I need is some information on what I should use and maybe some first steps. :)
Thanks in advance!
Collision detection is probably the most difficult part of making a tiled game. LibGDX has a lot of useful methods to help you get the geometry, but the actual handling after a collision is detected (called collision resolution) is a large topic. I suspect all you want is for the player to stop; more advanced, realistic behaviour such as bouncing and friction is what Box2D specializes in.
First and foremost, getting the geometry that's going to collide.
a) The player can be represented by a rectangle. Have it extend the Sprite class and then you can use the getBoundingRectangle() function (a rectangle keeps things simple, but there are many other shapes used for collisions).
b) Getting the geometry of the tiles, also a bunch of rectangles.
A function that gets the surrounding tiles.
public void getsurroundingTiles(int startX, int startY, int endX, int endY, Array<Rectangle> surroundingTiles) {
    TiledMapTileLayer layer = (TiledMapTileLayer) worldRef.getCurrentMap().getLayers().get("Terrain");
    surroundingTiles.clear();
    for (int y = startY; y <= endY; y++) {
        for (int x = startX; x <= endX; x++) {
            Cell cell = layer.getCell(x, y);
            // collect only the tiles that were marked as solid in the Tiled editor
            if (cell != null && cell.getTile().getProperties().containsKey("blocked")) {
                Rectangle rect = new Rectangle();
                rect.set(x, y, 1, 1);
                surroundingTiles.add(rect);
            }
        }
    }
}
This function gets the TiledMapTileLayer made in the Tiled editor and fills the surroundingTiles array with one rectangle for each tile that has the "blocked" property.
So now you have rectangles representing your player and all of the colliding blocks. You can draw them with a ShapeRenderer object.
This is what it looks like in a game I am working on.
Lastly, actually resolving the collisions is a larger topic. This is a great starting point.
http://www.metanetsoftware.com/technique/tutorialA.html
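For the simple "stop at the wall" behaviour the question asks for, a minimal sketch could look like this (playerBounds is the player's bounding rectangle in tile units, and proposedX/proposedY is where the touch input is trying to move it; these names are placeholders):
// Move the player only if the proposed position does not overlap a blocked tile.
Rectangle attempt = new Rectangle(proposedX, proposedY, playerBounds.width, playerBounds.height);
boolean blocked = false;
for (Rectangle tileRect : surroundingTiles) {
    if (attempt.overlaps(tileRect)) {
        blocked = true;
        break;
    }
}
if (!blocked) {
    playerBounds.setPosition(proposedX, proposedY);
}
// else: the line simply stops at the wall (keep the old position)
A similar overlap check against the exit tiles is a straightforward way to fire the "reached an exit" event.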

How to rotate a 3D perspective camera for an isometric tiled map in AndEngine

I am working on an isometric tiled map game. In the introduction I want to show the complete game field, so I used:
this.mCamera = new ZoomCamera(CAMERA_WIDTH, CAMERA_HEIGHT, CAMERA_WIDTH, CAMERA_HEIGHT) {
    @Override
    public void onApplySceneBackgroundMatrix(final GLState pGLState) {
        final float widthRaw = this.getWidthRaw();
        final float heightRaw = this.getHeightRaw();
        pGLState.orthoProjectionGLMatrixf(0, widthRaw, heightRaw, 0, getZNear(), getZFar());
    }

    @Override
    public void onUpdate(float pSecondsElapsed) {
        if (timeCounter >= 1) {
            mCamera.setRotation(i);
            timeCounter = 0;
            i = i + 1;
        }
        timeCounter += pSecondsElapsed;
        super.onUpdate(pSecondsElapsed);
    }
};
But it rotates the 2D view. I want to rotate in 3D perspective. How can I rotate the camera in 3D perspective in AndEngine GLES 2.0?
Please suggest something.
You cannot rotate your camera in 3D. The appearance of 3D is caused by the fact that the artwork is drawn in perspective. Rotating the artwork does not change the way it is drawn, any more than rotating a piece of paper with a drawing on it would cause a 3D transformation. To rotate in 3D you need to be using a 3D engine.
Several times I have stumbled upon this tutorial in the AndEngine forum, which states that it is possible to rotate the Camera in a way that shows distant objects further away (smaller). It was made for AndEngine GLES1, but it should be possible to adapt it to GLES2.
AndEngine also has a z-index. By default it is set according to the order in which you attach the Sprites to the Scene, but you can set it manually. In most cases it is sufficient to set the z-index from the y-position (z-index = y) every time a Sprite changes its position.
public class YourSprite extends Sprite {
    ...

    @Override
    public void setPosition(float x, float y) {
        super.setPosition(x, y);
        // keep the draw order in sync with the vertical position
        this.setZIndex((int) y);
    }
}
If you then manage to implement the camera rotation as described in the tutorial, together with the z-index ordering you should get a fairly convincing 3D effect.
However, I never tried that tutorial, because most games that use a bird's-eye view (like the game in the YouTube link you provided) don't need a real vanishing point: the display is so small that the player wouldn't notice anyway. So I stick to changing the z-index. But I would certainly like to know if anyone manages to rotate the camera!

Dynamically create / draw images to put in an Android view

I'm not sure I'm doing this the "right" way, so I'm open to other options as well. Here's what I'm trying to accomplish:
I want a view which contains a graph. The graph should be dynamically created by the app itself. The graph should be zoomable and will probably start out larger than the screen (800x600 or so).
I'm planning on starting out simple, with just a scatter plot. Eventually, I want a scatter plot with a fit line and error bars, with axes that stay on the screen while the graph is zoomed ... so that probably means three images overlaid with their zoom functions tied together.
I've already built a view that can take a drawable, can use focused pinch-zoom and drag, can auto-scale images, can switch images dynamically, and takes images larger than the screen. Tying the images together shouldn't be an issue.
I can't, however, figure out how to dynamically draw simple images.
For instance: do I get a Bitmap object and draw on it pixel by pixel? I wanted to work with some of the ShapeDrawables, but it seems they can only draw a shape onto a canvas ... how then do I get a bitmap of all those shapes into my view? Or alternatively, do I have to dynamically redraw all of the image I want to portray in the onDraw routine of my view every time it moves or zooms?
I think the "perfect" solution would be to use the ShapeDrawable (or something like it to draw lines and label them) to draw the axis with the onDraw method of the view ... keep them current and at the right level ... then overlay a pre-produced image of the data points / fit curve / etc that can be zoomed and moved. That should be possible with white set to an alpha on the graph image.
PS: The graph image shouldn't actually /change/ while on the view. It's just zooming and being dragged. The axis will probably actually change with movement. So pre-producing the graph before (or immediately upon) entering the view would be optimal. But I've also noticed that scaling works really well with vector images ... which also sounds appropriate (rather than a bitmap?).
So I'm looking for some general guidance. I've tried reading up on the Bitmap, ShapeDrawable, Drawable, etc. classes and just can't seem to find the right fit. That makes me think I'm barking up the wrong tree and that someone with more experience can point me in the right direction. Hopefully I didn't waste my time building the zoomable view I put together yesterday :).
First off, it is never a waste of time writing code if you learned something from it. :-)
There is unfortunately still no support for drawing vector images in Android. So bitmap is what you get.
I think the bit you are missing is that you can create a Canvas any time you want to draw on a bitmap. You don't have to wait for onDraw to give you one.
So at some point (from onCreate, when data changes etc), create your own Bitmap of whatever size you want.
Here is some pseudo code (not tested):
Bitmap mGraph;

void init() {
    // look at Bitmap.Config to determine the config type
    mGraph = Bitmap.createBitmap(width, height, config);
    Canvas c = new Canvas(mGraph);
    // use Canvas draw routines to draw your graph
}
// Then in onDraw you can draw to the on screen Canvas from your bitmap.
protected void onDraw(Canvas canvas) {
    Rect dstRect = new Rect(0, 0, viewWidth, viewHeight);
    Rect sourceRect = new Rect();
    // do something creative here to pick the source rect from your graph bitmap
    // based on zoom and pan
    sourceRect.set(10, 10, 100, 100);
    // draw to the screen (graphPaint is an ordinary Paint created once elsewhere)
    canvas.drawBitmap(mGraph, sourceRect, dstRect, graphPaint);
}
Hope that helps a bit.
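As an illustration of the "something creative" above, here is a sketch of deriving sourceRect from zoom and pan state (zoomFactor, panX and panY are hypothetical fields kept up to date by the view's touch handling):
// A larger zoomFactor shows a smaller window into the graph bitmap; panX/panY shift
// that window. Clamping the rect to the bitmap's bounds is omitted for brevity.
int srcWidth = (int) (mGraph.getWidth() / zoomFactor);
int srcHeight = (int) (mGraph.getHeight() / zoomFactor);
int srcLeft = (int) panX;
int srcTop = (int) panY;
sourceRect.set(srcLeft, srcTop, srcLeft + srcWidth, srcTop + srcHeight);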
