In game AI, if you want to win the Nobel Prize, come up with a faster and easier A*.
Some devs favor planners.
Some favor HFSMs.
Some favor scripted, rule-based behavior.
I'm partial to modular hierarchies of expert systems myself, where each "module" is an "expert" at some type of decision making, arranged in (I guess you would say) a behavior tree. A module may itself be a behavior tree, an FSM, a planner, a NN, whatever works best for that bit of the overall system.
I've considered all types of AI over the years, but I've never found one superior to this hierarchical "expert system" approach built from whichever AI types fit each part. Note that "expert system" really describes a behavior, not a particular implementation type such as a decision tree, although decision trees are a common means of implementing them.
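To make that concrete, here's a minimal C++ sketch of the idea under my own assumptions; the names (Expert, CombatExpert, RootExpert, UpdateAI-style placeholders) are mine, not from any particular engine. Every module implements one small interface, so behind it can sit a behavior tree, an FSM, a planner, or anything else, and the hierarchy is just experts composed of experts:

```cpp
#include <memory>
#include <vector>

// Placeholder game-side types; in a real codebase these already exist.
struct Entity {};
struct WorldState {};

// Common contract every "expert" module implements. What sits behind
// Decide() can be a behavior tree, an FSM, a planner, a NN, whatever.
class Expert {
public:
    virtual ~Expert() = default;
    virtual bool Relevant(const Entity& e, const WorldState& w) const = 0;
    virtual void Decide(Entity& e, const WorldState& w) = 0;
};

// A hypothetical combat module whose internals happen to be a tiny FSM.
class CombatExpert : public Expert {
    enum class State { Approach, Attack, Retreat };
    State state_ = State::Approach;
public:
    bool Relevant(const Entity&, const WorldState&) const override {
        return true;  // placeholder: "is there an enemy to deal with?"
    }
    void Decide(Entity&, const WorldState&) override {
        switch (state_) {
            case State::Approach: /* close distance, maybe switch to Attack */ break;
            case State::Attack:   /* attack, switch to Retreat on low health */ break;
            case State::Retreat:  /* fall back */ break;
        }
    }
};

// The parent level is itself just another decision maker: a prioritized
// selector over its child experts, which is the behavior-tree flavor of
// the hierarchy. Children could in turn own their own sub-experts.
class RootExpert : public Expert {
    std::vector<std::unique_ptr<Expert>> children_;  // ordered by priority
public:
    void Add(std::unique_ptr<Expert> child) { children_.push_back(std::move(child)); }
    bool Relevant(const Entity&, const WorldState&) const override { return true; }
    void Decide(Entity& e, const WorldState& w) override {
        for (auto& child : children_) {
            if (child->Relevant(e, w)) { child->Decide(e, w); return; }
        }
    }
};
```

The nice property is that CombatExpert could be swapped for a planner-backed version tomorrow and RootExpert wouldn't know or care.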
So as you can see, the intersection of popular academic AI topics and game AI is almost the empty set.
We get someone on here about every six months asking this type of question.
Most are surprised to learn that game AI and academic AI research are so far apart. And FYI, I completed the software engineering program at OSU and took classes in AI.
I think there are four realities in game AI that one doesn't usually have to consider in academia, and they drive the difference in the evolution of AI approaches:
1. Realtime - you get 5ms to do all the AI for 1000 entities (one way to live inside that budget is sketched after this list).
2. KISS - keep it stupid simple.
3. Gotta ship - the design must be implemented, debugged, and working correctly in less time than it would really take to thoroughly test all cases and combos, yet all of it must work.
4. Underpowered hardware - you must design for the most abysmally underpowered processor the game could possibly run on. I have a saying: "put a Cray on every user's desk, and I'll build you a REAL game".
Another one might be:
5. Candy sells - at the 11th hour, your millisecond budget is cut in half because rendering is slow. Now you must update those 1000 entities in 2.5ms, not 5ms, yet their behavior can't be adversely affected.
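Here's one common coping pattern for points 1 and 5, sketched under my own assumptions (the budget numbers, the entity list, and the UpdateAI() placeholder are all illustrative): time-slice the AI round-robin across frames, so a halved budget just means each entity gets re-thought a little less often rather than its logic changing.

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

struct Entity {};                 // placeholder entity
void UpdateAI(Entity&) {}         // placeholder per-entity "think" step

// Round-robin, budgeted AI update: each frame, think for as many entities
// as fit in the millisecond budget, then resume from the same spot next frame.
class SlicedAIUpdater {
    std::size_t cursor_ = 0;      // where the round-robin left off last frame
public:
    void Update(std::vector<Entity>& entities, double budgetMs) {
        if (entities.empty()) return;
        using clock = std::chrono::steady_clock;
        const auto start = clock::now();
        std::size_t processed = 0;
        while (processed < entities.size()) {
            UpdateAI(entities[cursor_]);
            cursor_ = (cursor_ + 1) % entities.size();
            ++processed;
            const double elapsedMs =
                std::chrono::duration<double, std::milli>(clock::now() - start).count();
            if (elapsedMs >= budgetMs) break;   // budget spent; pick up here next frame
        }
    }
};

int main() {
    std::vector<Entity> entities(1000);
    SlicedAIUpdater ai;
    ai.Update(entities, 5.0);     // the original budget
    ai.Update(entities, 2.5);     // the 11th-hour budget: same logic, fewer thinks per frame
}
```

The usual next refinement is AI level of detail: order the round-robin so nearby or on-screen entities get re-thought more often than distant ones.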