Quote: Original post by The Reindeer Effect
Holy crap... why not? It's free, dude!
I'm not a Gamasutra member, so I can't read the link.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"
Quote: Original post by Timkin
One thing I'd like to hear feedback on (so long as it doesn't hijack the thread from IF's original intent) is this prevalence of state machines as 'behavioural machines'. I read through the Gamasutra article linked above and noted that the Halo 2 HFSM was another example of this design pattern. This, at least to me, appears to be a perversion of the classic FSM, but one that is quite relevant and useful in game AI design, which is often concerned with designing actors that instantiate gameplay requirements. Do others find it useful to work/think/design in this behaviour space and if so, why?
In games, I don't think I've ever seen a state machine used to represent the overall environmental states or anything like that. They've always been used purely to represent one sub-state of the game, usually the current behaviour of an actor.
Why? I think it's because you start off thinking about developing AI for the actor, and that work naturally gets divided into several separate behaviours which tend to lead from one to another, and an FSM is a natural representation of that.
I think that modelling the more general domain of a game with state machines never gets considered because almost all of the important data is continuous. Personally, I can't think of many overall game situations in my experience that could be adequately modelled with an FSM.
Quote: If you do, how are you positioned with regards to the original question in this thread: do you prefer to encode such behaviours within the agent, or in a separated machine+logic idiom?
In our current project, all behaviour lives within the agent rather than in the external game environment, and behavioural state transitions are performed by the agent as a result of processing its own implicit state and that of the environment. But it's sometimes practical to have a State hierarchy of objects that act on an Actor hierarchy, which can give some useful separation and re-use.
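To illustrate that last point, here's a minimal sketch (names are purely illustrative, not our actual code) of a State hierarchy operating on an Actor hierarchy:

class Actor;

// Base State operates on the Actor base class, so concrete states can be
// reused across different concrete actor types.
class State
{
public:
    virtual ~State() {}
    virtual void Enter(Actor& actor)  {}
    virtual void Update(Actor& actor) = 0;
    virtual void Exit(Actor& actor)   {}
};

class Actor
{
public:
    Actor() : m_pState(0) {}
    virtual ~Actor() {}

    void ChangeState(State* pNewState)
    {
        if (m_pState) m_pState->Exit(*this);
        m_pState = pNewState;
        if (m_pState) m_pState->Enter(*this);
    }

    void Update() { if (m_pState) m_pState->Update(*this); }

private:
    State* m_pState;
};

// Concrete actors (Guard, Civilian, ...) derive from Actor, and concrete
// states (Patrol, Flee, ...) derive from State; a Flee state written once
// can then be attached to any actor type.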
In my experience, most game AI is written with a focus on behaviors. Designers know what they want to see in game and they typically think in behaviors. State systems are nice as they provide a way to encapsulate behavior and the additional state it requires.
Thinking about this a bit more, there are two interesting ramifications:
1) In doing this, we are encoding a production system in a state machine along with the functionality/data for the AI to act.
2) Using states this way is very close to a strategy pattern. We think of most AI 'behavior states' as highly differentiated, but in practice each state is just an alternate 'thing for the AI to do', typically with the same interface. If we ignore the functionality provided by each behavior state and look at it abstractly, it might end up closer to a strategy pattern (see the sketch after this list).
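To show what I mean by point 2, a rough sketch (invented names, not from any particular codebase): each behavior state is an interchangeable "thing to do" behind one common interface, which is essentially a strategy.

class Agent;

// Rename Behavior to Strategy and this is the textbook strategy pattern.
class Behavior
{
public:
    virtual ~Behavior() {}
    virtual void Execute(Agent& agent) = 0;   // the one shared interface
};

class Patrol : public Behavior
{
public:
    virtual void Execute(Agent& agent) { /* follow the patrol route */ }
};

class Attack : public Behavior
{
public:
    virtual void Execute(Agent& agent) { /* close in and open fire */ }
};

class Agent
{
public:
    Agent() : m_pCurrent(0) {}
    void SetBehavior(Behavior* pBehavior) { m_pCurrent = pBehavior; }  // swap the "state"
    void Update() { if (m_pCurrent) m_pCurrent->Execute(*this); }      // pure delegation
private:
    Behavior* m_pCurrent;
};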
Coming full circle for the OP, I've used both approaches. In general, encapsulation of data and functionality is good. It can cause implementation issues if you don't refactor logic out into reusable chunks, but it generally scales well.
You also might want to consider a more data-driven approach. I think a few of the AI Wisdom books have examples, but the idea is to move the transition definitions out of code entirely. Depending on who is using the system, this might be worth evaluating.
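As a rough sketch of what a data-driven version might look like (types and names invented for illustration; the AI Wisdom articles go further than this), the transition definitions become plain data that could just as easily be loaded from a designer-edited file:

#include <map>
#include <string>
#include <utility>

typedef std::string StateId;
typedef std::string EventId;

// Transitions live in data rather than in each state's code.
class TransitionTable
{
public:
    void Add(const StateId& from, const EventId& on, const StateId& to)
    {
        m_table[std::make_pair(from, on)] = to;
    }

    // Returns the current state unchanged if no transition is defined.
    StateId Next(const StateId& current, const EventId& event) const
    {
        std::map<std::pair<StateId, EventId>, StateId>::const_iterator it =
            m_table.find(std::make_pair(current, event));
        return (it != m_table.end()) ? it->second : current;
    }

private:
    std::map<std::pair<StateId, EventId>, StateId> m_table;
};

// The same entries could come from a text file that designers edit:
//   Idle    EnemySpotted  Attack
//   Attack  EnemyDead     Idle
// which is the whole point of going data-driven.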
For genres that depict players as actors, my intuition is that it is only natural and correct to design gameplay in terms of interactions between actors and interactions with an environment. This leads, as both of you noted, to thinking in terms of behaviours. We often start off with a high-level behavioural descriptor (such as "kill enemy") and then break this down via decision hierarchies into implementable actions that we hope will achieve the goal. This is essentially a planning process, where we end up with a 'state machine' that implements the conditional logic of plan execution that we require to deal with a dynamic game. Thus, our state machine actually represents a policy (a universal/conditional plan).
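To make that concrete with a toy example (invented situations and actions, not from any particular game): a policy is just a mapping from the agent's observed situation to the action it should take, and the behavioural FSM's transitions encode the same conditional logic.

// Toy policy: situation -> action. The conditions that drive a behavioural
// FSM's transitions are doing exactly this job, just spread across states.
enum Situation { ENEMY_VISIBLE, ENEMY_HIDDEN, LOW_HEALTH };
enum Action    { ATTACK, SEARCH, RETREAT };

Action Policy(Situation s)
{
    switch (s)
    {
        case LOW_HEALTH:    return RETREAT;  // survival overrides the goal
        case ENEMY_VISIBLE: return ATTACK;   // step toward "kill enemy"
        case ENEMY_HIDDEN:  return SEARCH;   // step toward "kill enemy"
        default:            return SEARCH;
    }
}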
I do think you find more classical implementations of FSMs depicting actual game states in traditional game genres like puzzle games. Game trees, for example, are just an expansion of a state machine along a subset of finite depth path sequences.
So what does this mean for Dave's original question? I suspect it gets down to issues of extensibility and reusability. Both the state-as-class pattern and the internal, conditional logic pattern (e.g., compiler macros) have issues with extensibility (each trades off opposing aspects to make the other more efficient), but the state-as-class pattern seems, to me at least, to admit reusability more cleanly. Ideally, I think there is only one golden rule to follow: the approach should be easy to code, easy to understand, and should improve design and production efficiency rather than hinder it. If your choice does that, then I don't see a problem with it! ;)
Cheers,
Timkin
Steve Rabin's FSM class, as used in one of my classes. In this case, class CGate inherits from StateMachine, but otherwise all states and logic are kept in CGate:
// H file //////////////////////////////////////
class CGate : public CFacility, public StateMachine
{
public:
    virtual bool States( StateMachineEvent event, int state );
};

// CPP file //////////////////////////////////
bool CGate::States(StateMachineEvent event, int state)
{
    smBeginStateMachine

    ///////////////////////////////
    smState(GTS_UNLEASED)
        smOnEnter
            gFields.SetDirtyFlag(mField, true);

    ///////////////////////////////
    smState(GTS_OPEN)
        smOnEnter
            gFields.SetDirtyFlag(mField, true);

    ///////////////////////////////
    smState(GTS_OCCUPIED)
        smOnEnter
            gFields.SetDirtyFlag(mField, true);
            //printf("Gate %s at %d is occupied\n", mName, mField);
        smOnExit
            mAircraft = NULL;

    ///////////////////////////////
    smState(GTS_TURNING)
        smOnEnter
            gFields.SetDirtyFlag(mField, true);
            //printf("Gate %s at %d is turning\n", mName, mField);
            short minutes = CalcTurnTime();
            mNextStateTime = gGameData.GetTime() + replCHighTimeSpan(0, 0, minutes, 0);
        smOnUpdate
            if (TimeToChangeState())
            {
                SetState(GTS_OPEN);
            }

    ///////////////////////////////
    smState(GTS_BROKEN)
        smOnEnter
            gFields.SetDirtyFlag(mField, true);

    smEndStateMachine
}
Mat Buckland's method, where the agent holds a pointer to a state machine and each state is a separate class:
class MinersWife : public BaseGameEntity
{
private:
    StateMachine<MinersWife>* m_pStateMachine;

public:
    MinersWife()   // ctor (full parameter list omitted in this excerpt)
    {
        //set up the state machine
        m_pStateMachine = new StateMachine<MinersWife>(this);
        m_pStateMachine->SetCurrentState(DoHouseWork::Instance());
        m_pStateMachine->SetGlobalState(WifesGlobalState::Instance());
    }

    ~MinersWife(){delete m_pStateMachine;}

    StateMachine<MinersWife>* GetFSM()const{return m_pStateMachine;}
};

// in another file titled "Miner's wife owned states"

class WifesGlobalState : public State<MinersWife>
{
private:
    WifesGlobalState(){}

    //copy ctor and assignment should be private
    WifesGlobalState(const WifesGlobalState&);
    WifesGlobalState& operator=(const WifesGlobalState&);

public:
    //this is a singleton
    static WifesGlobalState* Instance();

    virtual void Enter(MinersWife* wife){}
    virtual void Execute(MinersWife* wife);
    virtual void Exit(MinersWife* wife){}
    virtual bool OnMessage(MinersWife* wife, const Telegram& msg);
};

//------------------------------------------------------------------------

class DoHouseWork : public State<MinersWife>
{
private:
    DoHouseWork(){}

    //copy ctor and assignment should be private
    DoHouseWork(const DoHouseWork&);
    DoHouseWork& operator=(const DoHouseWork&);

public:
    //this is a singleton
    static DoHouseWork* Instance();

    virtual void Enter(MinersWife* wife);
    virtual void Execute(MinersWife* wife);
    virtual void Exit(MinersWife* wife);
    virtual bool OnMessage(MinersWife* wife, const Telegram& msg);
};

//------------------------------------------------------------------------

class VisitBathroom : public State<MinersWife>
{
private:
    VisitBathroom(){}

    //copy ctor and assignment should be private
    VisitBathroom(const VisitBathroom&);
    VisitBathroom& operator=(const VisitBathroom&);

public:
    //this is a singleton
    static VisitBathroom* Instance();

    virtual void Enter(MinersWife* wife);
    virtual void Execute(MinersWife* wife);
    virtual void Exit(MinersWife* wife);
    virtual bool OnMessage(MinersWife* wife, const Telegram& msg);
};

//------------------------------------------------------------------------

class CookStew : public State<MinersWife>
{
private:
    CookStew(){}

    //copy ctor and assignment should be private
    CookStew(const CookStew&);
    CookStew& operator=(const CookStew&);

public:
    //this is a singleton
    static CookStew* Instance();

    virtual void Enter(MinersWife* wife);
    virtual void Execute(MinersWife* wife);
    virtual void Exit(MinersWife* wife);
    virtual bool OnMessage(MinersWife* wife, const Telegram& msg);
};
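For context, roughly how these get used (paraphrased from memory of the book, so treat it as a sketch rather than verbatim code): the entity ticks its state machine each frame, and the state objects call back through GetFSM() to request transitions.

// Sketch, not verbatim book code. MinersWife::Update() is declared on the
// entity in the book but is not shown in the excerpt above.
void MinersWife::Update()
{
    // runs the global state and the current state each frame
    m_pStateMachine->Update();
}

void DoHouseWork::Execute(MinersWife* wife)
{
    // ...do some housework...

    // a transition is requested through the owner's state machine:
    wife->GetFSM()->ChangeState(VisitBathroom::Instance());
}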
Quote: Original post by dmail
I had a scripted FSM, but since reading Programming Game AI by Example I really like the goal-driven approach, where goals are composed of small tasks and are pushed onto a stack. Each possible high-level goal calculates a desirability score, and the highest-scoring goal is performed. When a goal is completed or fails, the agent then continues with the goals that are on the stack. So an entity may have been tracking to a position and come under attack. It would then decide either to fight or flee... after the encounter, if the entity is still alive, it would continue with the task set before the conflict, if it is still valid.
I recommend reading the book; it is really good. The FSM section of the book is available online.
http://www.ai-junkie.com/architecture/state_driven/tut_state1.html
This works very well in many cases. I've used it before in retail games; it works really well and is very expandable. AI stacks are sooo useful.
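For anyone who hasn't read it, here's a stripped-down sketch of the goal-stack idea dmail describes (my own simplification, not Buckland's actual classes):

#include <stack>

class Entity;

// Simplified goal stack. Ownership/cleanup of Goal objects is omitted for brevity.
class Goal
{
public:
    virtual ~Goal() {}
    virtual float Desirability(Entity& e) = 0;  // used by an arbiter (not shown)
                                                // to pick which top-level goal to push
    virtual bool  IsValid(Entity& e)      = 0;  // still worth resuming?
    virtual bool  Update(Entity& e)       = 0;  // returns true when finished
};

class GoalStack
{
public:
    void Push(Goal* pGoal) { m_goals.push(pGoal); }

    void Update(Entity& e)
    {
        // throw away goals that no longer make sense, e.g. the position we
        // were tracking to before combat may no longer matter
        while (!m_goals.empty() && !m_goals.top()->IsValid(e))
            m_goals.pop();

        // work on the current goal; when it finishes, pop it so the goal
        // that was interrupted underneath it resumes next frame
        if (!m_goals.empty() && m_goals.top()->Update(e))
            m_goals.pop();
    }

private:
    std::stack<Goal*> m_goals;
};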