Hey, folks! Something I'd love your opinion on:
A mistake I've seen at a few of my jobs is game teams trying to use a single AI approach to do too many different things.
For example, one of my employers released a pretty well-known FPS game a few years ago, in which enemies could do things like: engage with the player, choose their own targets, choose their own cover, etc. They were able to string together lists of behaviors in response to a situation, which was pretty dang cool! For instance, if they were backed into a corner, they might make a plan for how to escape. And that plan might be different, based on the player's weapon, etc.
These enemy agents used Behavior Trees for their AI…but I mean for all of their AI. For instance, every possible escape plan they could make was hand-coded as a branch of their behavior tree. BT logic was even used to make them choose their next goal (attack, heal, etc.), based on what was best for them at the moment. All of it was technically possible with a BT, but it was very difficult to do, and the resulting BT graphs got very complex and very hard to manage.
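To make that concrete, here's a tiny made-up sketch of the pattern (the node types are standard BT fare, but the branch names and structure are invented for illustration, not the actual shipped code):

```cpp
// Minimal behavior tree scaffolding, sketched to show the pattern.
#include <functional>
#include <memory>
#include <vector>

enum class Status { Success, Failure, Running };

struct Node {
    virtual ~Node() = default;
    virtual Status tick() = 0;
};

// A standard BT selector: tries children in order until one doesn't fail.
struct Selector : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto& child : children)
            if (Status s = child->tick(); s != Status::Failure) return s;
        return Status::Failure;
    }
};

// Leaf wrapping an arbitrary condition or action.
struct Leaf : Node {
    std::function<Status()> fn;
    explicit Leaf(std::function<Status()> f) : fn(std::move(f)) {}
    Status tick() override { return fn(); }
};

// The problem in miniature: goal choice and every escape plan each become
// their own hand-wired subtree - EscapeCorneredVsShotgun, ...VsSniper,
// ...VsMelee - so every new situation means another branch in the graph.
```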
I see this as a bad habit, but one that I think a lot of devs just don't think about. Their game uses the one model (in this case, BTs), so that's the model they're going to use to solve every AI problem in their game.
The challenge is that BTs probably weren't the best tool for some of those problems. NPC planning could probably have been done more simply with GOAP, or another type of action planner. Goal-formation might have been handled much better by a Utility AI. The devs were really twisting themselves in knots to make a Behavior Tree graph handle some of those needs. And I saw it result in a lot of compromised features and crunch.
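For the goal-formation piece, here's roughly what I mean by a Utility AI - a minimal sketch, with the goal names and scoring inputs invented for illustration:

```cpp
// Minimal utility scorer: each goal scores itself against the agent's
// current state, and the highest-scoring goal wins.
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

struct AgentState {
    float health;      // 0..1
    float ammo;        // 0..1
    float threatLevel; // 0..1
};

struct ScoredGoal {
    std::string name;
    std::function<float(const AgentState&)> score;
};

std::string chooseGoal(const AgentState& s, const std::vector<ScoredGoal>& goals) {
    if (goals.empty()) return "Idle";
    auto best = std::max_element(goals.begin(), goals.end(),
        [&](const ScoredGoal& a, const ScoredGoal& b) {
            return a.score(s) < b.score(s);
        });
    return best->name;
}

// Example goal set: adding a goal is one new entry and a scoring curve,
// not surgery on a graph.
// std::vector<ScoredGoal> goals = {
//     {"Attack", [](const AgentState& s) { return s.ammo * (1.0f - s.threatLevel); }},
//     {"Heal",   [](const AgentState& s) { return 1.0f - s.health; }},
//     {"Flee",   [](const AgentState& s) { return s.threatLevel * (1.0f - s.health); }},
// };
```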
I've seen the problem pop up on another team, but this time every AI problem was solved with finite state machines - again, because that's just the model that team was using for everything. Maybe the toughest example of this I've heard was an indie team that was trying to solve every AI problem with a Utility AI. That included navmesh pathfinding and behavior states.
I'm more of the mind that different AI models can be used to handle different AI responsibilities. For example, your game might use a finite state machine to organize and execute behaviors. But if it needed an agent to make a decision - say, to choose between 10 possible goals - then that FSM could call outside of itself to a separate Utility AI. That Utility AI could weigh those options and return a result to the FSM, which could then respond accordingly. Similarly, if the agent then needed to make a plan to reach that goal, the FSM could pass that goal to an external GOAP system, which would hand back an action plan.
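In code, that division of labor might look something like this - a rough sketch with invented interfaces (UtilityAI and GoapPlanner are stand-ins for whatever systems you'd actually plug in), just to show the shape of the delegation:

```cpp
#include <string>
#include <vector>

struct WorldState { /* whatever your game tracks */ };

// Stand-in decision-maker: a real one would score each candidate goal.
struct UtilityAI {
    std::string chooseGoal(const WorldState&, const std::vector<std::string>& candidates) {
        return candidates.empty() ? "Idle" : candidates.front(); // stub
    }
};

// Stand-in planner: a real GOAP system would search available actions
// for a sequence whose effects satisfy the goal.
struct GoapPlanner {
    std::vector<std::string> planFor(const WorldState&, const std::string& goal) {
        return {"MoveToCover", "Achieve:" + goal}; // stub
    }
};

class AgentFSM {
public:
    void update(const WorldState& world) {
        switch (state_) {
        case State::Deciding:
            // The FSM doesn't weigh options itself; it asks the Utility AI.
            currentGoal_ = utility_.chooseGoal(world, {"Attack", "Heal", "Flee"});
            state_ = State::Planning;
            break;
        case State::Planning:
            // Nor does it plan; it hands the chosen goal to the GOAP system.
            plan_ = planner_.planFor(world, currentGoal_);
            state_ = plan_.empty() ? State::Deciding : State::Executing;
            break;
        case State::Executing:
            // The FSM's actual job: sequencing and executing the plan's
            // steps. (Execution omitted; when the plan finishes, decide again.)
            state_ = State::Deciding;
            break;
        }
    }

private:
    enum class State { Deciding, Planning, Executing };
    State state_ = State::Deciding;
    std::string currentGoal_;
    std::vector<std::string> plan_;
    UtilityAI utility_;
    GoapPlanner planner_;
};
```

The nice side effect is that each system stays small and testable on its own, and the FSM only owns what FSMs are good at: explicit states and transitions.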
I'm wondering what you all think of that type of approach, and if there's a reason why you wouldn't want to use it. Also, have you seen this type of problem where you've worked, or elsewhere? I feel like maybe the industry teaches new folks bad AI patterns when they work on projects that overuse AI models like this.
Cheers! Thanks for any opinions.