
For those who have asked for examples of techniques in games

Started by August 19, 2009 10:39 AM
3 comments, last by EJH 15 years, 6 months ago
I just saw this article in Artificial Intelligence Review and thought it might be interesting to those who ask the question, "Are there examples of NN/GA/AI in games?" It seems to focus on NN and GA examples - an often requested set - and the focus is really on learning in games as opposed to typical game AI techniques (pathfinding, FSMs, etc.).

Machine learning in digital games: a survey

Now, before anyone starts yelling at me: yes, this link is to a journal site that requires you to pay to download the article. I've searched and cannot find a free copy on the internet at this point. You can probably find it in your local university library or request a copy through your local library. Or, you could send an e-mail to the corresponding author (Leo Galway) and request a reprint. Most times authors are happy to send electronic copies.

-Kirk
*or* you could copy up to 10% of the article here for "illustrative purpose".
Maybe every 10th word? 8^P

Quote:

1 Introduction
From the 1950s to present day, the development of game-playing programs has been a major
focus of research for the domain of artificial intelligence (AI). Researchers were initially captivated
by traditional board and card games, such as Chess, Checkers, Othello, Backgammon
and Poker, with eventual successes being achieved in the creation of programs capable of competing with human opponents, in some cases, at world-class level. Pioneering work by
Samuel (1959, 1967) into the development of a Checkers playing program, in particular the
use of symbolic, search-based AI, automated feature selection and an embryonic version
of reinforcement learning, underpins much subsequent machine learning and symbolic AI
research conducted, especially within the domain of game-playing programs (Schaeffer 2000;
Fürnkranz 2001; Schaeffer and Van den Herik 2002; Lucas and Kendall 2006; Miikkulainen
et al. 2006). Initial research focused on perfect-information games (e.g. Chess, Checkers,
Othello), requiring the development of game-playing solutions within large, completely specified,
state-spaces, with later research efforts extending the application domain to the use of
imperfect-information and stochastic games (e.g. Backgammon, Poker), thus expanding the
challenge to include state-spaces with hidden states, probabilistic game-play and multiple
opponents (Schaeffer and Van den Herik 2002; Lucas and Kendall 2006). By the end of the
1990s a number of milestones had been achieved, starting with the defeat of World Checkers
Champion Marion Tinsley in 1994 by the program Chinook (Schaeffer 1997, 2000). Further
successes followed in 1997 with World Chess Champion Garry Kasparov being defeated in
an exhibition match by Deep Blue (Campbell et al. 2002; Hsu 2002), and the defeat of World
Othello Champion Takeshi Murakami in an exhibition match by the program Logistello (Buro
1997). Similarly, a degree of success has been achieved using the game of Backgammon,
with the program TD-Gammon 3.0 (Tesauro 1995) displaying a convincing performance
against the World Backgammon Champion Malcolm Davis in a series of matches during
the 1998 American Association for AI’s Hall of Champions game-playing exhibition. Proficient
game-playing programs have also been developed for the game of Poker, notably Loki
(Billings et al. 2002), which is capable of providing an intermediate-level challenge to human
opponents, yet is still not quite able to play at world-class level (Schaeffer 2000; Lucas and
Kendall 2006). For a substantial review and detailed discussion of traditional game-playing
programs, please refer to Schaeffer (2000) and Fürnkranz (2001).
The development of game-playing programs for traditional board and card games has been
fundamental to the growth of AI research, generating a wealth of innovative AI and machine
learning techniques. Concurrently, the rise of low-cost, high performance home computing
and subsequent growth of the computer games industry has led to the emergence of interactive
computer games (also referred to as video games or digital games) with the creation of
entirely new forms of game-play and corresponding genres of games. Representing a departure
from traditional games, digital games make use of increasingly complex and detailed
virtual environments, often incorporating human controlled protagonists and many computer
controlled opponents (referred to as game agents and opponent agents, respectively).
These agents require an ever more realistic and believable set of behaviours for a game
engine’s AI sub-system (game AI) to generate (Laird and Van Lent 1999; Schaeffer and Van
den Herik 2002; Lucas and Kendall 2006; Miikkulainen et al. 2006). With a wide range of
potential objectives and features associated with game-play, as indicated by the plethora of
genres arising from digital games, such as ‘First-Person Shooter Games’, ‘Action-Adventure
Games’, ‘Role-Playing Games’, ‘Strategy Games’, ‘Simulation Games’ (also known as ‘God
Games’) and ‘Sports Games’, the specifications and requirements for a game AI typically
include both genre-specific goals and general low-level behaviours (Laird and Van Lent 2001;
Schaeffer and Van den Herik 2002). By utilising a percentage of the overall computational
resources used within a game, the game AI facilitates the autonomous selection of behaviours
for game agents through game specific perception, navigation and decision making
sub-systems (Woodcock 2001, 2002; Tozour 2002; Charles 2003; Schwab 2004). Both traditional
and modern AI techniques have been adopted by game developers and incorporated
into game AI, including the use of algorithms such as A* for path-planning (Schwab 2004;
Buckland 2005; Miikkulainen et al. 2006), Finite State Machines (FSM) and neural networks
for behavioural control (Charles and McGlinchey 2004; Buckland 2005), rule-based systems
and hierarchies of task-networks for goal planning (Laird and Van Lent 1999; Hoang et al.
2005; Orkin 2005; Gorniak and Davis 2007), and evolutionary methods for both game engine
parameter tuning and testing (Schwab 2004; Baekkelund 2006; Lucas and Kendall 2006).
However, the majority of existing game AI implementations are primarily constrained to the
production of predefined decision making and control systems, often leading to predictable
and static game agent responses (Woodcock 1999; Charles 2003; Charles and McGlinchey
2004; Miikkulainen et al. 2006).
Typically the amount of computational resources allocated to game AI is game dependent,
ranging from 5 to 60% of CPU utilisation, with turn-based strategy games receiving almost
100% of CPU resources when required (Woodcock 1998, 2000, 2001, 2002). With each
successive generation of gaming hardware the amount of computational resources dedicated
to game AI processing increases (Laird and Van Lent 1999, 2001; Woodcock 2002; Charles
2003; Schwab 2004). Coupled with an improved understanding of game AI techniques over
successive generations of games, leading to a more thorough approach to game AI design
and efficient implementation strategies, the quality of game AI has improved, further increasing
the availability of CPU resources (Pottinger 2000; Woodcock 2000, 2001, 2002). With
the increasing expectations and demands of game players for better, more believable game
agents, the importance of game AI has gained widespread acceptance throughout the games
industry as a differentiator and catalyst for the success or failure of products (Laird and Van
Lent 1999, 2001; Woodcock 2002; Charles 2003; Schwab 2004). Such a desire for improved
game AI is mirrored by the proliferation and adoption of middleware AI tools, such as
Kynapse, AI.implant and xaitEngine. Likewise, custom hardware solutions, such as the Intia
Processor, have also been developed for the acceleration of game AI techniques.
Attempts to incorporate machine learning into commercial game AI have been primarily
restricted to the use of learning and optimisation techniques during game development and
between periods of game-play, known as offline learning [also referred to as out-game learning
(Stanley et al. 2005a)]. For example, offline learning techniques have been used for game
AI parameter tuning in the game Re-Volt and for the development of opponent agents in the
games Colin McRae Rally 2 and Forza Motorsport. Conversely, performing learning in
real-time during game-play, known as online learning [also referred to as in-game learning
(Stanley et al. 2005a)], has only been attempted in a handful of commercial digital games,
including Creatures and Black and White; however, explicit learning controlled by the
player is the key focus of the game design and game-play for these particular games (Grand
et al. 1997; Woodcock 1999, 2001, 2002; Evans 2001; Manslow 2002; Charles 2003; Togelius
and Lucas 2006). Through the use of online learning game agents may be enhanced with a
capability to dynamically learn from mistakes, player strategies and game-play behaviours in
real-time, thus providing a more engaging and entertaining game-play experience. However,
the use of online learning raises a number of issues and concerns regarding the design and
development of an appropriate machine learning algorithm. Subsequently, the integration
of online learning within digital games also gives rise to issues for both game design and
development (Woodcock 1999; Manslow 2002; Charles 2003; Baekkelund 2006; Lucas and
Kendall 2006; Miikkulainen et al. 2006).
The aim of this paper is to provide a review of the key machine learning techniques
currently used within academic approaches for the integration of learning within game AI.
The following section provides an overview of the issues and constraints associated with
the use of online learning within digital game environments. Section 3 presents a survey of
the current academic digital game research literature, focusing on the use of methods from
the computational intelligence domain, in particular the techniques of neural networks, evolutionary
methods and reinforcement learning, utilised for both the online and the offline
generation of game agent controllers. This will be followed by a summary of the literature
reviewed together with conclusions drawn from the analysis of the literature in Sect. 4.



and
Quote:

Table 1 Machine learning techniques used within academic digital game research
(columns: game agent representation | game environment | reference)

Backpropagation
- Multi-layer perceptron | Motocross The Force | Chaperot and Fyfe (2006)
- Multi-layer perceptron | Simulated racing | Togelius et al. (2007b)
- Multi-layer perceptron | Simulated social environment | MacNamee and Cunningham (2003)
- Multi-layer perceptron | Soldier of Fortune 2 | Geisler (2004)
- Multi-layer perceptron (ATA) | Legion-I | Bryant and Miikkulainen (2003)
- Multi-layer perceptron (ATA) | Legion-II | Bryant and Miikkulainen (2006a)
- Multi-layer perceptron (ensemble) | Quake II | Bauckhage and Thurau (2004)

Backpropagation (LM)
- Multi-layer perceptron | FlatLand | Yannakakis et al. (2003)

Backpropagation (bagging)
- Multi-layer perceptron (ensemble) | Motocross The Force | Chaperot and Fyfe (2006)
- Multi-layer perceptron (ensemble) | Soldier of Fortune 2 | Geisler (2004)

Backpropagation (boosting)
- Multi-layer perceptron | Motocross The Force | Chaperot and Fyfe (2006)
- Multi-layer perceptron (ensemble) | Soldier of Fortune 2 | Geisler (2004)

SOM
- Self-organising map | Pong | McGlinchey (2003)

SOM & Backpropagation (LM)
- Self-organising map & multi-layer perceptron | Quake II | Thurau et al. (2003)

Evolutionary algorithm
- Single-layer perceptron | Cellz | Lucas (2004)
- Multi-layer perceptron | Simulated racing | Togelius and Lucas (2005)
- Multi-layer perceptron | Simulated racing | Togelius and Lucas (2006)
- Rule-base | Wargus | Ponsen and Spronck (2004)

Genetic algorithm
- Single-layer perceptron | Xpilot | Parker et al. (2005b)
- Multi-layer perceptron | Dead End | Yannakakis et al. (2004)
- Multi-layer perceptron | FlatLand | Yannakakis et al. (2003)
- Multi-layer perceptron | Pac-Man | Yannakakis and Hallam (2004)
- Multi-layer perceptron | Motocross The Force | Chaperot and Fyfe (2006)
- Neural network (modular) | Xpilot | Parker and Parker (2006b)
- Program-based operators | Simulated racing | Agapitos et al. (2007b)
- Rule-base | Action game | Demasi and Cruz (2003)
- Rule-base | Xpilot | Parker et al. (2005a)
- Rule-base | Xpilot | Parker and Parker (2007b)

Genetic algorithm (parallel, steady-state)
- Influence map tree | Lagoon | Miles et al. (2007)

Genetic algorithm (cyclic)
- Rule-base | Xpilot | Parker et al. (2006)
- Rule-base | Xpilot | Parker and Parker (2006a)

Genetic algorithm (queue)
- Multi-layer perceptron | Xpilot | Parker and Parker (2007a)

Genetic algorithm (NSGA-II)
- Program-based operators | Simulated racing | Agapitos et al. (2007b)

Genetic algorithm (case-injected)
- Case-base | Strike Ops | Louis and McDonnell (2004)
- Case-base | Strike Ops | Miles et al. (2004a,b)
- Case-base | Strike Ops | Louis and Miles (2005)

Evolutionary strategies
- Single-layer perceptron | Ms. Pac-Man | Lucas (2005)
- Single-layer perceptron | Simulated racing | Lucas and Togelius (2007)
- Multi-layer perceptron | Ms. Pac-Man | Lucas (2005)
- Multi-layer perceptron | Pac-Man | Gallagher and Ledwich (2007)
- Multi-layer perceptron | Simulated racing | Agapitos et al. (2007a)
- Multi-layer perceptron | Simulated racing | Lucas and Togelius (2007)
- Multi-layer perceptron | Simulated racing | Togelius et al. (2007a)
- Neural network (modular) | Simulated racing | Togelius et al. (2007a)
- Recurrent network | Simulated racing | Agapitos et al. (2007a)
- Recurrent network | Simulated racing | Togelius et al. (2007a)
- Program-based operators | Simulated racing | Agapitos et al. (2007a)
- Program-based operators | Simulated racing | Togelius et al. (2007a)
- Value function (linear) | Simulated racing | Lucas and Togelius (2007)
- Value function (function approximation) | Simulated racing | Lucas and Togelius (2007)

Genetic programming
- Program-based operators | Pac-Man | Koza (1992)

rtNEAT
- Neural network | NERO | Stanley et al. (2005a,b)
- Neural network | NERO | D’Silva et al. (2005)
- Neural network | NERO | Yong et al. (2006)

FS-NEAT
- Neural network | Robot auto racing simulation | Whiteson et al. (2005)

ESP
- Multi-layer perceptron (ATA) | Legion-I | Bryant and Miikkulainen (2003)
- Multi-layer perceptron (ATA) | Legion-II | Bryant and Miikkulainen (2006b)
- Multi-layer perceptron (ATA) | Legion-II | Bryant and Miikkulainen (2007)

PBIL
- Finite state machine & rule-base | Pac-Man | Gallagher and Ryan (2003)

Q-learning
- Value function | Escape | Duan et al. (2002)
- Value function | Battle of Survival | Ponsen et al. (2006)

Q-learning (HSMQ)
- Value function | Battle of Survival | Ponsen et al. (2006)

Sarsa
- Value function (linear) | Pac-Man | Galway et al. (2007)
- Value function (linear) | Simulated racing | Lucas and Togelius (2007)
- Value function (model tree) | Settlers of Catan | Pfeiffer (2004)
- Value function (function approximation) | Simulated racing | Lucas and Togelius (2007)

Sarsa(λ)
- Value function (linear) | Battleground | Maderia et al. (2004)
- Value function (linear) | Foraging game | Bradley and Hayes (2005a,b)
- Value function (linear) | Pac-Man | Galway et al. (2007)
- Value function (linear) | Tao Feng | Graepel et al. (2004)
- Value function (function approximation) | Battleground | Maderia et al. (2006)
- Value function (function approximation) | Tao Feng | Graepel et al. (2004)
"Machine learning techniques used within academic digital game research"
"The aim of this paper is to provide a review of the key machine learning techniques currently used within academic approaches for the integration of learning within game AI."
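As an aside for anyone new to the "typical AI techniques" the quoted introduction contrasts the survey against, FSM-based behavioural control is easy to sketch. The states and transition conditions below are invented for illustration, not taken from the paper:

```python
# Minimal finite state machine for opponent-agent behaviour control.
# All states and transition conditions are hypothetical examples.

class GuardFSM:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, low_health):
        # Each state only checks the conditions that can move it elsewhere.
        if self.state == "patrol" and sees_player:
            self.state = "attack"
        elif self.state == "attack":
            if low_health:
                self.state = "flee"
            elif not sees_player:
                self.state = "patrol"
        elif self.state == "flee" and not sees_player:
            self.state = "patrol"
        return self.state

fsm = GuardFSM()
print(fsm.update(sees_player=True, low_health=False))   # attack
print(fsm.update(sees_player=True, low_health=True))    # flee
print(fsm.update(sees_player=False, low_health=False))  # patrol
```

The appeal (and the limitation the survey keeps pointing at) is that every behaviour is hand-authored: easy to test and tune, but static and predictable, with no learning involved.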


So it's not examples of ML techniques used in released games; it's researchers making small games/bots/mods to showcase their favorite ML techniques. But they don't have to deal with the reality of making a commercial game, with game designers asking you to change the behavior in very specific situations and an army of testers finding all the rarely-occurring-but-game-breaking problems caused by the system...

However, they do mention Creatures, Black & White, and the handful of racing games we already knew about.
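For anyone wondering what the "Q-learning ... value function" rows in that table boil down to, here is a toy tabular version: an agent learning to walk right down a five-cell corridor to an exit. The environment and all parameters are made up for illustration; the surveyed papers use real games and mostly function approximation instead of a lookup table:

```python
import random

random.seed(0)
N, GOAL = 5, 4                       # corridor cells 0..4, exit at cell 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1    # learning rate, discount, exploration rate
Q = [[0.0, 0.0] for _ in range(N)]   # Q[state][action]; actions: 0=left, 1=right

for _ in range(500):                 # training episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2 = max(0, s - 1) if a == 0 else min(N - 1, s + 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s][a] += ALPHA * (r + GAMma * max(Q[s2]) - Q[s][a]) if False else ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

# Greedy policy after training: "right" (1) in every non-goal cell.
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)])  # [1, 1, 1, 1]
```

Swap the table for a linear function or a neural net over game-state features and you're close to what the Sarsa and Q-learning entries in the table actually do.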

also:

"Concurrently, the rise of low-cost, high performance home computing [...]"

I thought this had become a forbidden cliché?
Commercial games that use ANNs (not sure if any actually do learning during the game though):

- Creatures
- Black and White
- Colin McRae Rally
- Forza Motorsport

Non-commercial, but publicly available games you can actually download and play that do machine learning during the game (both use ANNs/NEAT):

- NERO
- Galactic Arms Race

Anyone know of any others? Unfortunately, most academic projects focused on machine learning in games tend to be one of the following: (1) focused on board games or non-mainstream game types, (2) toy simulations rather than anything someone can download and play, or (3) mods of Quake or Half-Life bots.
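Since NERO and Galactic Arms Race both use NEAT, here's what the weight-evolution half of that idea looks like stripped to the bone: a mutation-only GA with elitism evolving a fixed 2-2-1 perceptron on XOR. Real NEAT also evolves network topology and protects new structure via speciation; the task and every parameter here are just an illustrative toy:

```python
import math
import random

random.seed(1)
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR truth table

def forward(w, x):
    # Fixed-topology 2-2-1 network: tanh hidden units, sigmoid output.
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1.0 / (1.0 + math.exp(-(w[6] * h0 + w[7] * h1 + w[8])))

def fitness(w):
    # Negated squared error over all cases (higher is better).
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

pop = [[random.uniform(-2, 2) for _ in range(9)] for _ in range(60)]
f0 = max(fitness(w) for w in pop)             # best fitness before evolution
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]                          # elitism: keep the 10 best genomes
    # Mutation-only reproduction: jitter copies of elite genomes.
    pop = elite + [[wi + random.gauss(0, 0.3) for wi in random.choice(elite)]
                   for _ in range(50)]

best = max(pop, key=fitness)
print(fitness(best) >= f0)  # True: elitism guarantees no regression
```

rtNEAT's contribution is doing this kind of evolution continuously during play, replacing the worst agents on the fly, which is what makes in-game learning in NERO feel seamless.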

This topic is closed to new replies.
