So I am currently designing a game where the main idea is that you choose your attack and then an animation plays out based on the attack that you picked (think pokemon). The game is turn-based as well.
My question is whether scene2d would be easier to use than implementing a custom solution for handling the animation part of the game. From what I've read (and I've found it difficult to find good information on scene2d), it sounds like scene2d would make designing the UI for the buttons/menu extremely easy, but I'm not sure how I can roll that into making the actors move. Is it as simple as handling the touch event on the button and calling the corresponding actor's action method based on the player's choice?
In what I have in mind, the actors never move across the screen; they merely play their animations in place. During the animation there will also be particle effects (the attack) which, if I use scene2d, would presumably need to be their own actors. Would synchronizing the actors and the attack be difficult?
Actors do move:
actor.addAction(Actions.moveTo(posX, posY, 5));
With this, your actor moves to (posX, posY), and 5 is the duration in seconds.
Using scene2d would be a good idea in my opinion. With plain Sprites you would have to implement listeners for events such as clicks yourself, whereas Scene2D already provides methods for setting listeners.
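For example, the whole flow the asker describes can be wired up in a few lines. This is only a rough sketch; attackButton, playerActor, skin and stage are placeholder names, not anything from the question:
import com.badlogic.gdx.scenes.scene2d.InputEvent;
import com.badlogic.gdx.scenes.scene2d.actions.Actions;
import com.badlogic.gdx.scenes.scene2d.ui.TextButton;
import com.badlogic.gdx.scenes.scene2d.utils.ClickListener;

// ... inside your screen's setup code:
TextButton attackButton = new TextButton("Tackle", skin);   // skin loaded elsewhere
attackButton.addListener(new ClickListener() {
    @Override
    public void clicked(InputEvent event, float x, float y) {
        // Play the chosen attack animation in place: lunge forward, then return.
        playerActor.addAction(Actions.sequence(
                Actions.moveBy(40, 0, 0.15f),
                Actions.moveBy(-40, 0, 0.25f)));
    }
});
stage.addActor(attackButton);
The button only handles the input; the actor just receives an Action, which scene2d then runs for you every frame.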
You may already know this, but let me try to answer your question:
Scene2d has a very handy action system, which basically allows the following:
do any of the provided actions
fine-tune them with the many provided interpolations
make new actions with Actions.run()
chain existing actions into sequences
Like this:
import com.badlogic.gdx.math.Interpolation;
import com.badlogic.gdx.scenes.scene2d.Action;
import com.badlogic.gdx.scenes.scene2d.actions.Actions;

Action a1 = Actions.sequence(Actions.fadeOut(0), Actions.fadeIn(3, Interpolation.bounce));
Action a2 = Actions.moveTo(100, 200, 3, Interpolation.pow2Out);
Runnable r = new Runnable() {
    @Override
    public void run() {
        setColor(1, 0, 0, 1);
        System.out.println("now I'm a red actor");
    }
};
And then combine them, for example like this:
addAction(Actions.sequence(Actions.parallel(a1, a2), Actions.run(r)));
This lets you profit from scene2d's built-in sequencer, saving you at least half of the work. So, answering your question, I think it is very possible to easily implement fixed as well as reactive/randomized animations using this system. It also allows you to encapsulate simpler actions into more complex ones (see the sketch after the list below), and has the following advantages:
Very readable and maintainable code
Tradeoff CPU/Mem: much more memory-efficient than storing plain sequences or even videos
Reactivity: this way you can program your animations to be slightly different each time
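As an illustration of the encapsulation point above (a sketch only; attackAnimation is a made-up helper, not part of scene2d), a reusable attack action can be built in a factory method with a touch of randomness so it looks slightly different each time:
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.math.Interpolation;
import com.badlogic.gdx.math.MathUtils;
import com.badlogic.gdx.scenes.scene2d.Action;
import com.badlogic.gdx.scenes.scene2d.actions.Actions;

// Made-up helper: wraps several built-in actions into one reusable "attack" action.
public static Action attackAnimation() {
    float lunge = MathUtils.random(30f, 50f);   // slightly different distance each call
    return Actions.sequence(
            Actions.moveBy(lunge, 0, 0.1f, Interpolation.pow2Out),
            Actions.color(Color.RED, 0.1f),
            Actions.moveBy(-lunge, 0, 0.2f, Interpolation.pow2In),
            Actions.color(Color.WHITE, 0.2f));
}
An actor then just calls actor.addAction(attackAnimation()) whenever that attack is chosen.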
On the other hand, developing with this system can become very time-consuming if you constantly need things it doesn't cover, such as:
Implementing time-based actions yourself that aren't built in (like camera travelling)
Making your own interpolations if the built-in ones don't fit your goals
Working with many small, granular elements (for that I would use the ParticleEditor)
I don't think that is your case, though. As a last remark, you should take a look at the Spine animation engine. I don't use it myself, but it may be useful for what you have in mind.
I have a newbie question. I just started learning libGDX and I'm a bit confused. I've read the documentation/wiki and followed some tutorials (like the one from gamefromscratch), and I still have a question.
What's the best way to check and do something for a touch/tap event?
I'm using Scenes and Actors, and so far I've found at least 4 ways of interacting with an Actor, namely:
1) myActor.addListener(new ClickListener(){...});
2) myActor.setTouchable(Touchable.enabled); and putting the code in the act() method
3) verifying Gdx.input.isTouched() in the render() method
4) overriding touchDown, touchUp methods
Any help with details and suggestions on when to use one over the other, or what the differences between them are, would be very appreciated.
Thanks.
I've always been using the first method and I think from an OOP viewpoint, it's the "best" way to do it.
The second approach will not work. Whether you set an Actor to be touchable or not, Actor.act(float) will still be called whenever you call stage.act(float). That means you would execute your code every frame.
Gdx.input.isTouched() will only tell you that a touch event has happened anywhere on the screen. It would not be a good idea to try to find out which actor has been hit by that touch, as they are already able to determine that themselves (Actor.hit()).
I'm not sure where you'd override touchDown and touchUp. Actors don't have these methods, so I'm assuming you're talking about a standard InputProcessor. In that case you'd have the same problem as with your 3rd approach.
So adding a ClickListener to the actors you want to monitor for this kind of event is probably the best way to go.
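For reference, a minimal sketch of that first approach (myActor and stage are placeholders): the actor needs a size for hit detection, and the Stage must be registered as the input processor:
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.scenes.scene2d.InputEvent;
import com.badlogic.gdx.scenes.scene2d.utils.ClickListener;

// ... during setup:
Gdx.input.setInputProcessor(stage);      // the Stage must receive the input events
myActor.setBounds(100, 100, 64, 64);     // hit detection needs a position and a size
myActor.addListener(new ClickListener() {
    @Override
    public void clicked(InputEvent event, float x, float y) {
        // x and y are coordinates local to the actor
        Gdx.app.log("Input", "myActor clicked at " + x + ", " + y);
    }
});
stage.addActor(myActor);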
Talking in the context of a game based on an OpenGL renderer:
Let's assume there are two threads:
one that updates the game logic, physics, etc. for the in-game objects
one that makes OpenGL draw calls for each game object, based on the data in those objects (which thread 1 keeps updating)
Unless you keep two copies of each game object in the current game state, you'll have to pause thread 1 while thread 2 makes the draw calls; otherwise the game objects will get updated in the middle of a draw call, which is undesirable.
But stopping thread 1 so that thread 2 can safely make draw calls kills the whole purpose of multithreading/concurrency.
Is there a better approach than using hundreds or thousands of sync objects/fences, so that the multicore architecture can actually be exploited for performance?
I know I can still use multithreading for loading textures and compiling shaders for objects that aren't yet part of the current game state, but how do I do it for the active/visible objects without causing conflicts between draw and update?
The usual approach is that the simulation thread, after completing a game step, commits the state into an intermediary buffer and then signals the renderer thread. Since OpenGL executes asynchronously, the render thread should complete rather quickly, thereby releasing the intermediary buffer for the next state.
You shouldn't render directly from the game state anyway, since what the renderer needs to do its work and what the simulation produces are not always the same things, so some mapping may be necessary in any case.
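A rough sketch of that idea, with made-up class names (RenderState, SharedScene) purely for illustration: the simulation thread publishes an immutable snapshot of the render-relevant data after each step, and the render thread only ever reads the most recent snapshot:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Render-relevant data only: position, rotation, scale, sprite id, etc.
final class RenderState {
    final float x, y, rotation;
    RenderState(float x, float y, float rotation) {
        this.x = x; this.y = y; this.rotation = rotation;
    }
}

final class SharedScene {
    // The simulation thread swaps in a fresh snapshot after each step;
    // the render thread grabs whatever is current when a frame starts.
    private final AtomicReference<List<RenderState>> snapshot =
            new AtomicReference<List<RenderState>>(Collections.<RenderState>emptyList());

    // Called by the simulation thread once per game step.
    void publish(List<RenderState> states) {
        snapshot.set(Collections.unmodifiableList(new ArrayList<RenderState>(states)));
    }

    // Called by the render thread at the start of each frame.
    List<RenderState> latest() {
        return snapshot.get();
    }
}
Because each published snapshot is immutable, the render thread never sees a half-updated object, and the simulation thread never blocks on the renderer.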
This is quite a general question you're asking. If you ask 10 different people, you'll probably get 10 different answers. In the past I implemented something similar, and here's what I did (after a long series of optimisation cycles).
Your model-update loop which runs on a background thread should look something like this:
while (true)
{
    updateAllModels();
}
As you said, this will cause an issue when the GL thread kicks in, since it may very well render a view based on a model which is halfway through being updated, causing UI glitches in the best case.
The straight-forward way for dealing with this would be synchronising the update:
while (true)
{
    synchronized (...)
    {
        updateAllModels();
    }
}
Where the object you synchronize with here is the same object you'll use to synchronize the drawing method.
Now we have an improved method which won't cause glitches in the UI, but overall rendering will probably take a severe performance hit, since all rendering needs to wait until all model updates are finished, or vice versa: the model updates need to wait until all drawing is finished.
Now, let's think for a moment: what do we really need to synchronize?
In my app (a space game), when updating the models I needed to calculate vectors, check for collisions and update all the objects' positions, rotations, scale, etc.
Out of all these things, the only things the view cares about are the position, rotation, scale and a few other small details which the UI needs in order to correctly render the game world. The rendering process doesn't care about a game object's vectors, the AI code, collision tests, etc. Considering this, I altered my update code to look something like this:
while (true)
{
    synchronized (...)
    {
        updateVisibleChanges();   // sets all visible changes - positions, rotations, etc.
    }
    updateInvisibleChanges();     // alters vectors, AI calculations, collision tests, etc.
}
Same as before, we're synchronizing the update and the draw methods, but this time the critical section is much smaller. Essentially, the only things set in the updateVisibleChanges method are those that pertain to the position, rotation, scale, etc. of the objects to be rendered. All other calculations (which are usually the most expensive ones) are performed afterwards and do not stop the rendering from occurring.
An added bonus from this method - when you're performing your invisible changes, you can be sure that all objects are in the position they need to be (which is very useful for accurate collision tests). For example, in the method before the last one, object A moves, then object A tests a collision against object B which hasn't moved yet. It is possible that had object B moved before object A tested a collision, there would be a different result.
Of course, the last example I showed isn't perfect: you will still need to block the rendering method and/or the updateVisible method to avoid clashes, but I fear that this will always be a problem, and the key is minimizing the amount of work done in either thread-sensitive method.
Hope this helps :)
I'm working on an arcade shoot-em-up game for Android, similar to Ikaruga. The problem I'm facing is that it's proving quite difficult to robustly create move-and-shoot patterns for the enemies. At the moment I've created two abstract classes, EnemyShip and FlightPath, from which each different enemy and move pattern derive respectively. When the World is created, it instantiates a LevelManager which stores level info in the form of:
waveInfos.add(new WaveInfo(3, 3f)); // new WaveInfo(NumberOfGroups, spawn interval)
enemyGroups.add(new EnemyGroup(8, EnemyGroup.TYPE_SCOUT_SHIP, EnemyGroup.F_PATH_INVADERS));
enemyGroups.add(new EnemyGroup(1, EnemyGroup.TYPE_QUAD_SPHERE, EnemyGroup.F_PATH_QUAD_SPHERE_L, World.BLACK));
enemyGroups.add(new EnemyGroup(8, EnemyGroup.TYPE_SCOUT_SHIP, EnemyGroup.F_PATH_INVADERS));
// new EnemyGroup(NumberOfEnemies, EnemyType, FlightPathType)
// new EnemyGroup(NumberOfEnemies, EnemyType, FlightPathType, ShipColour)
waveInfos.add(new WaveInfo(2, 0.33f));
enemyGroups.add(new EnemyGroup(1, EnemyGroup.TYPE_QUAD_SPHERE, EnemyGroup.F_PATH_QUAD_SPHERE_L, World.WHITE));
enemyGroups.add(new EnemyGroup(1, EnemyGroup.TYPE_QUAD_SPHERE, EnemyGroup.F_PATH_QUAD_SPHERE_R, World.WHITE));
totalWaves = waveInfos.size();
The levels are split into waves of groups of enemies. Right now the EnemyGroup class takes care of instantiating each enemy, attaching the specified FlightPath to it, and passing it to the ArrayList in LevelManager, where it is stored until it is spawned into the world at the right time.
Once spawned, the FlightPath component takes over and starts giving instructions based on its own stateTime; since each FlightPath has a reference field to its EnemyShip owner, it can access the functions and members of the ship it controls.
The EnemyShip class has a few functions for easy instruction, such as moveTo(float x, float y, float duration) and shoot(), but even with these the FlightPath derivatives are difficult to write, especially when I want different enemies in the same group to have slightly different paths and slightly different arrival times.
I created a few fields in the FlightPath to keep track of keyFrames:
public int currentKeyFrame = 0;
public int totalKeyFrames;
public KeyFrame[] keyFrames; // Stores duration of instruction to be done, the spreadTime, totalFrameTime and enemyIntervalTime
public int shipNumber; // Stores which ship out of group this FlightPath is attached to
public int totalShips; // Stores total number of ships in this EnemyShip's group
public float stateTime = 0;
KeyFrame.spreadTime is my attempt to control the time between the first enemy in the group starting to move/shoot and the last one doing so.
KeyFrame.totalFrameTime = KeyFrame.duration + KeyFrame.spreadTime
KeyFrame.enemyIntervalTime = KeyFrame.spreadTime / Number of enemies in this group
While this setup works great for very simple linear movement, it feels quite cumbersome.
Thanks for reading this far. My question is: how do I implement more streamlined pattern control that allows complex movement without hordes of if() statements checking what other enemies in the group are doing, and the like?
I hope I've provided enough information for you to understand how the enemies are handled. I'll provide any source code to anyone interested. Thanks in advance for any light you can shed on the subject.
Marios Kalogerou
EDIT: I found a page which describes very much the kind of system that would be perfect for what I want, but I'm unsure how to correctly implement it with regard to overall group keyFrames:
http://www.yaldex.com/games-programming/0672323699_ch12lev1sec3.html
FlightPath should not control any objects. It's a path, not a manager. However, it should be able to give coordinates given any keyframe or time. For example: flightPath.getX(1200) -> where should I be in the X-coordinate at 1200ms?
Each EnemyShip should own a FlightPath instance and check where it should be on the path every frame.
EnemyGroup then controls the spawning of each EnemyShip. If you have 8 EnemyShips in one EnemyGroup, all possessing the same FlightPath type, then you can imagine EnemyGroup spawning each ship around 500 ms apart to create the wave.
Finally, you translate all the EnemyShip coordinates relative to the world/screen coordinate, which traditionally moves slowly in the vertical direction.
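A sketch of that separation, reusing the poster's class names but with made-up implementations: the path only answers "where should I be at time t?", and the ship samples it every frame:
// Made-up shapes for the classes described above.
interface FlightPath {
    float getX(float timeMs);   // where should the ship be at this time?
    float getY(float timeMs);
}

// Example path: a sine-wave dive, purely a function of time.
class SineDivePath implements FlightPath {
    private final float startX, startY;

    SineDivePath(float startX, float startY) {
        this.startX = startX;
        this.startY = startY;
    }

    public float getX(float timeMs) {
        return startX + (float) Math.sin(timeMs / 400.0) * 80f;
    }

    public float getY(float timeMs) {
        return startY - timeMs * 0.05f;   // drift down the screen over time
    }
}

class EnemyShip {
    private final FlightPath path;
    private float stateTime;   // ms since this ship was spawned
    float x, y;

    EnemyShip(FlightPath path) {
        this.path = path;
    }

    void update(float deltaMs) {
        stateTime += deltaMs;
        x = path.getX(stateTime);   // the ship pulls its position from the path
        y = path.getY(stateTime);
    }
}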
There are different approaches:
You can add random intervals before shooting and set slightly random arrival times, e.g. currentEnemyArrivalTime += (X - rand(2*X)).
You can control the movement of a group of enemies: each enemy in the group tries to maintain its position relative to the center of the group (see the sketch after this list).
For really complex patterns it may be better to develop a simple scripting engine. It can be very simple (like an array of coefficients for a spline) or something more complex. I believe that in such games, behavior is typically driven by scripts.
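For instance, the second approach could look roughly like this (all names hypothetical): the group's center follows whatever path you like, and each enemy steers toward its own fixed offset from that center:
import com.badlogic.gdx.math.Vector2;

// Hypothetical sketch: each enemy holds formation around a moving group center.
class FormationMember {
    final Vector2 offset;                     // this enemy's slot relative to the center
    final Vector2 position = new Vector2();

    FormationMember(float offsetX, float offsetY) {
        offset = new Vector2(offsetX, offsetY);
    }

    void update(Vector2 groupCenter, float delta) {
        // Steer toward the assigned slot instead of teleporting to it,
        // which keeps motion smooth when the center changes direction.
        Vector2 target = new Vector2(groupCenter).add(offset);
        position.lerp(target, Math.min(1f, 5f * delta));
    }
}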
I've been studying and making little games for a while, and I have decided lately that I would try to develop games for Android.
For me, jumping from native C++ code to Android Java wasn't that hard, but it gives me headaches to think about how I could keep the logic separate from the rendering.
I've been reading around here and on other sites that:
It is better not to create another thread for it, because Android will surely have no problem processing it.
Meaning the code would be something like this:
public void onDrawFrame(GL10 gl) {
    doLogicCalculations();
    clearScreen();
    drawSprites();
}
But I'm not sure that would be the best approach, since I don't like how it will look if I put my logic inside the GLRenderer::onDrawFrame method. As far as I know, this method is meant to just draw, and I may slow down the frames if I put logic there. Not to mention that, in my understanding, it goes against OOP principles.
I think that using threads might be the way to go; this is how I was planning it:
Main Activity:
public void onCreate(Bundle savedInstanceState) {
    // set fullscreen, etc.
    GLSurfaceView view = new GLSurfaceView(this);
    // configure view
    GameManager game = new GameManager();
    game.start(context, view);
    setContentView(view);
}
GameManager:
OpenGLRenderer renderer;
Boolean running;

public void start(Context context, GLSurfaceView view) {
    this.renderer = new OpenGLRenderer(context);
    view.setRenderer(this.renderer);
    // create Texturelib, create sound system...
    running = true;
    // create a thread to run GameManager::update()
}

public void update() {
    while (running) {
        // update game logic here
        // put, edit and remove sprites from renderer list
        // set running to false to quit game
    }
}
and finally, OpenGLRenderer:
ListOrMap toDraw;

public void onDrawFrame(GL10 gl) {
    for (sprite i : toDraw) {
        i.draw();
    }
}
This is a rough idea, not fully complete.
This pattern would keep it all separated and would look a little better, but is it the best for performance?
As far as I've researched, most examples of threaded games use Canvas or SurfaceView; those won't fit my case because I'm using OpenGL ES.
So here are my questions:
Which is the best way to separate my game logic from the rendering when using OpenGL ES? Threading my application? Putting the logic in a separate method and just calling it from the draw method?
So I think there are two ways you can go here.
Do all updates from onDrawFrame(): This is similar to using GLUT; Android will call this function as often as possible (turn that off with setRenderMode(RENDERMODE_WHEN_DIRTY)). This function gets called on its own thread (not the UI thread), which means you call your logic update here. Your initial issue was that this seems a little messy, and I agree, but only because the function is named onDrawFrame(). If it were called onUpdateScene(), updating the logic would fit this model well. It's not the worst thing in the world; it was designed this way, and people do it.
Give logic its own thread: This is more complicated, since now you're dealing with three threads: the UI thread for input, the render thread (onDrawFrame()) for drawing, and the logic thread for continuously working with both input and rendering data. Use this if you need a really accurate model of what's happening even when your framerate drops (for example, for precise collision response). This may be conceptually a little cleaner, but not practically cleaner.
I would recommend #1. You really don't need #2. If you do, you can add it later, I guess, but most people just use the first option because the hardware is fast enough that you don't have to design around it.
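A bare-bones sketch of option 1 (everything except the GLSurfaceView.Renderer callbacks is a placeholder): compute the frame delta yourself and drive the logic update from onDrawFrame():
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLSurfaceView;

class GameRenderer implements GLSurfaceView.Renderer {
    private long lastFrameNanos;

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        lastFrameNanos = System.nanoTime();
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        gl.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        long now = System.nanoTime();
        float delta = (now - lastFrameNanos) / 1000000000f;   // seconds since last frame
        lastFrameNanos = now;

        updateGameLogic(delta);   // placeholder: your simulation step
        drawScene(gl);            // placeholder: your draw calls
    }

    private void updateGameLogic(float delta) { /* ... */ }

    private void drawScene(GL10 gl) { /* ... */ }
}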
Keep your design as simple as possible :)
The standard (and maybe the best) approach is the following:
public void update() {
    while (running) {
        updateAll();
        renderAll();
    }
}
I would pay attention to a few points:
you need to call the update and render methods sequentially; avoid calling update twice per frame
if you prefer multithreading (I don't), design your methods so that update writes data and render only reads it
keep in mind that OpenGL has its own "thread": calling a GL function only queues a command for OpenGL (glFinish() is the call that actually blocks until all commands have completed)
In a game I need to keep tabs on which of my pooled sprites are in use. When activating multiple sprites at once, I want to transfer them from my passivePool to my activePool, both of which are immutable HashSets (OK, to be exact, I'll be creating new sets each time). So my basic idea is along the lines of:
activePool ++= passivePool.take(5)
passivePool = passivePool.drop(5)
but reading the Scala documentation, I'm guessing that the 5 elements I take might be different from the 5 I then drop, which is definitely not what I want. I could also say something like:
val moved = passivePool.take(5)
activePool ++= moved
passivePool --= moved
but as this is something I need to do pretty much every frame, in real time, on a limited device (an Android phone), I guess this would be much slower, since I would have to look up and remove each of the moved sprites from passivePool one by one.
Any clever solutions? Or am I missing something basic? Remember, efficiency is a primary concern here. And I can't use Lists instead of Sets because I also need random-access removal of sprites from activePool when the sprites are destroyed in the game.
There's nothing like benchmarking for getting answers to these questions. Let's take 100 sets of size 1000 and drop them 5 at a time until they're empty, and see how long it takes.
passivePool.take(5); passivePool.drop(5) // 2.5 s
passivePool.splitAt(5) // 2.4 s
val a = passivePool.take(5); passivePool --= a // 0.042 s
repeat(5){ val a = passivePool.head; passivePool -= a } // 0.020 s
What is going on?
The reason things work this way is that immutable.HashSet is built as a hash trie with optimized (effectively O(1)) add and remove operations, but many of the other methods are not re-implemented; instead, they are inherited from collections that don't support add/remove and therefore can't get the efficient methods for free. They mostly rebuild the entire hash set from scratch. Unless your hash set has only a handful of elements in it, this is a bad idea. (In contrast to the 50-100x slowdown with sets of size 1000, a set of size 100 has "only" a 6-10x slowdown...)
So, bottom line: until the library is improved, do it the "inefficient" way. You'll be vastly faster.
I think there may be some mileage in using splitAt here, which will give you back both the five sprites to move and the trimmed pool in a single method invocation:
val (moved, newPassivePool) = passivePool.splitAt(5)
activePool ++= moved
passivePool = newPassivePool
Bonus points if you can assign directly back to passivePool on the first line, though I don't think it's possible in a short example where you're defining the new variable moved as well.