Learning techniques used in Real Games
Hi,
Can anyone name real PC games (commercial or not, but preferably not just curiosity games developed to demonstrate fancy AI) whose AI uses neural networks, or any kind of learning algorithm? Or one whose computer opponent was "trained" using a learning algorithm?
I'm just curious whether the techniques apply to real games...
-- Mikko
Very rarely. The creatures in Black & White *shudder* used a simple NN to 'learn'. It was nothing more complicated than a couple-layer perceptron network trained with back-propagation (IIRC). Most games we'd describe as having 'AI' use variations on state machines or simple hard-coded logic sequences.
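For the curious, a minimal sketch of that sort of network (illustrative only, not Lionhead's actual code; the sizes and the idea of training on player feedback are my assumptions) looks something like this:

// One hidden layer, trained by back-propagation. Purely a sketch.
#include <math.h>
#include <stdlib.h>

#define N_IN  4   // e.g. hunger, fatigue, ... (made-up inputs)
#define N_HID 3

double wHid[N_HID][N_IN + 1]; // the +1 slot holds the bias weight
double wOut[N_HID + 1];

double Sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

void InitWeights() // small random starting weights
{
    for (int h = 0; h < N_HID; ++h)
    {
        for (int i = 0; i <= N_IN; ++i)
            wHid[h][i] = (rand() / (double)RAND_MAX - 0.5) * 0.1;
        wOut[h] = (rand() / (double)RAND_MAX - 0.5) * 0.1;
    }
    wOut[N_HID] = 0.0;
}

// Forward pass; fills hid[] so the training step can reuse it.
double Forward(const double in[], double hid[])
{
    for (int h = 0; h < N_HID; ++h)
    {
        double sum = wHid[h][N_IN]; // bias
        for (int i = 0; i < N_IN; ++i)
            sum += wHid[h][i] * in[i];
        hid[h] = Sigmoid(sum);
    }
    double sum = wOut[N_HID]; // bias
    for (int h = 0; h < N_HID; ++h)
        sum += wOut[h] * hid[h];
    return Sigmoid(sum);
}

// One back-propagation step nudging the output toward 'target'
// (in a B&W-style setup the target would come from player feedback).
void Train(const double in[], double target, double rate)
{
    double hid[N_HID];
    double out = Forward(in, hid);
    double dOut = (target - out) * out * (1.0 - out);
    for (int h = 0; h < N_HID; ++h)
    {
        // hidden delta uses the pre-update output weight
        double dHid = dOut * wOut[h] * hid[h] * (1.0 - hid[h]);
        wOut[h] += rate * dOut * hid[h];
        for (int i = 0; i < N_IN; ++i)
            wHid[h][i] += rate * dHid * in[i];
        wHid[h][N_IN] += rate * dHid; // hidden bias
    }
    wOut[N_HID] += rate * dOut; // output bias
}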
The truth of the matter is that games don't need good AI, in the computer-science sense of the word, so much as opponents that play well with the user. As a result, things like ANNs become a problem because they're so hard to predict and control (the whole point of a NN is, after all, to converge on a known solution but using an unknown process).
Learning tends to be done in a controlled manner. E.g., an RTS AI might 'learn' where on the map it tended to lose most of its battles, and avoid those spots. This isn't so much learning in the true sense of the word as it is 'adapting' in ways the developers could anticipate.
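As a rough sketch of what that kind of 'adapting' can look like in code (all names and numbers here invented for illustration, not taken from any shipped RTS):

// Count losses per map cell and make those cells expensive to
// route through, so pathfinding steers around them.
#define MAP_W 64
#define MAP_H 64

static int lossCount[MAP_W][MAP_H]; // bumped on every lost battle

void RecordBattleLoss(int x, int y)
{
    ++lossCount[x][y];
}

// Extra pathfinding cost for a cell: the more losses, the less
// attractive the route, capped so the AI can still cross if it must.
int DangerCost(int x, int y)
{
    int cost = lossCount[x][y] * 10;
    return cost > 100 ? 100 : cost;
}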
January 04, 2005 01:44 PM
I used a genetic algorithm to optimize the parameters of a card game AI.
It actually worked quite well; the game got much better after a few overnight sessions of playing tens of thousands of games against itself (with different settings each run, it reached a local maximum in a few hours).
One problem with learning from human opponents is that it goes at human speed. It costs a lot to get some guy to carefully play 1000 games of Starcraft, and you can't do it until the game is done (change the game and you have to have the AI learn to play all over again). And the AI will actually get good at playing against that guy, on that map, as he goes insane from boredom, not necessarily at playing the game in general (one of my early card game runs optimized the AI to play with a particular set of 100 deals; it was actually worse at playing random games). A customer is unlikely to ever play enough games to make most computer learning methods work on a complicated game.
Another problem is that the goal of game AI is not to make the best possible AI, but to make an AI that is fun to play against. Compared with generating an automated test for "fun", making a game AI is easy. :) Without an automated success metric, learning AI is very time-consuming.
You also generally do not want your AI changing after you ship the game, so you don't want it to continue learning. What if it got a lot better? What if it got a lot worse? Either one could ruin the game.
From my experience, I think it is a commercially viable option to tune AI for card, board, and similarly simple games (I have also made those professionally, but for young children, so making the AI better was the easy part).
In my opinion it is not currently useful for less constrained games, and all attempts I have heard of have failed. I count Black & White as a failure: the game sucked, and it was all about that creature AI. The marketing was brilliant, though. I know I have heard of one or two other games that developed learning AIs but shipped without them.
Anonymous,
I'm interested in the genetic algorithm you used for card games. I have a bot playing a local card game and have long been wondering how to make it learn. I can make it learn its current known strategies, but I am unsure how to make it progress. And even now, its learning time is too slow.
Could you give some details about your method?
Thanks
Sure.
The card game I was working on is called mas, a Scandinavian game. It has two phases; this AI was for the first phase, where you collect cards with which to play the second game.
// GeneticAIGenome.h: interface for the GeneticAIGenome class.
//
// Stores the constants to be evolved for GeneticAIPlayer
// in an array of ints.
//
// Can recombine and mutate them, relies on external random
// seeding.
//
//////////////////////////////////////////////////////////////////////

#if !defined(AFX_GENETICAIGENOME_H__AFB2669C_A9DF_422D_9A44_3F77924C7EBC__INCLUDED_)
#define AFX_GENETICAIGENOME_H__AFB2669C_A9DF_422D_9A44_3F77924C7EBC__INCLUDED_

#if _MSC_VER > 1000
#pragma once
#endif // _MSC_VER > 1000

#define N_GENES 64

class GeneticAIGenome
{
public:
    GeneticAIGenome();
    GeneticAIGenome(int arr[]);
    virtual ~GeneticAIGenome();

    int GetScore() const { return score; }
    void SetScore(int score) { GeneticAIGenome::score = score; }

    void Mutate();
    GeneticAIGenome* Recombine(const GeneticAIGenome & other) const;

    // Accessors for GeneticAIPlayer's use
    const int* GetWinningCardValues() const { return gene; }
    const int* GetLoosingCardValues() const { return gene + 15; }
    const int* GetDuckingCardValues() const { return gene + 30; }
    const int* GetLeadingCardValues() const { return gene + 45; }
    const int GetLeadOffTopOfDeckValue() const { return gene[60]; }
    const int GetLeadOnBounceMaxLowCardRank() const { return gene[61]; }
    const int GetLeadOnBounceMinHighCardRank() const { return gene[62]; }
    const int GetRespondingPlayOffDeckValue() const { return gene[63]; }

private:
    int score;
    int gene[N_GENES];
};

#endif // !defined(AFX_GENETICAIGENOME_H__AFB2669C_A9DF_422D_9A44_3F77924C7EBC__INCLUDED_)
// GeneticAIGenome.cpp: implementation of the GeneticAIGenome class.
//
//////////////////////////////////////////////////////////////////////

#include "GeneticAIGenome.h"
#include <assert.h>
#include <stdlib.h>

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

GeneticAIGenome::GeneticAIGenome()
:score(0)
{
    // winningCardValues
    gene[0] = -100; gene[1] = -100; gene[2] = -52; gene[3] = -40;
    gene[4] = -30;  gene[5] = -30;  gene[6] = -22; gene[7] = -20;
    gene[8] = -12;  gene[9] = 5;    gene[10] = 12; gene[11] = 30;
    gene[12] = 38;  gene[13] = 50;  gene[14] = 60;

    // loosingCardValues
    gene[15] = -100; gene[16] = -100; gene[17] = 50; gene[18] = 45;
    gene[19] = 40;   gene[20] = 33;   gene[21] = 35; gene[22] = 30;
    gene[23] = 20;   gene[24] = 0;    gene[25] = 0;  gene[26] = -5;
    gene[27] = -15;  gene[28] = -20;  gene[29] = -30;

    // duckingCardValues
    gene[30] = -100; gene[31] = -100; gene[32] = 200; gene[33] = 208;
    gene[34] = 220;  gene[35] = 230;  gene[36] = 240; gene[37] = 245;
    gene[38] = 260;  gene[39] = 275;  gene[40] = 5;   gene[41] = 2;
    gene[42] = 2;    gene[43] = 0;    gene[44] = 0;

    // leadingCardValues
    gene[45] = -100; gene[46] = -100; gene[47] = 90; gene[48] = 75;
    gene[49] = 68;   gene[50] = 55;   gene[51] = -2; gene[52] = -10;
    gene[53] = 20;   gene[54] = 32;   gene[55] = 30; gene[56] = -10;
    gene[57] = -8;   gene[58] = 10;   gene[59] = 40;

    // LeadOffTopOfDeckValue
    gene[60] = 0;
    // LeadOnBounceMaxLowCardRank
    gene[61] = 4;
    // LeadOnBounceMinHighCardRank
    gene[62] = 12;
    // RespondingPlayOffDeckValue
    gene[63] = 1;
}

GeneticAIGenome::GeneticAIGenome(int arr[])
:score(0)
{
    for (int i = 0; i < N_GENES; ++i)
    {
        gene[i] = arr[i];
    }
}

GeneticAIGenome::~GeneticAIGenome()
{
}

void GeneticAIGenome::Mutate()
{
    // 1/2 of the time pick a spot to mutate
    int locus = rand() % N_GENES;
    // if (locus > N_GENES)
    //     return; // otherwise return

    // Do the mutation
    int rnd = rand(); // a random number
    if (locus < 60) // in the card value arrays
    {
        if (rnd % 2 == 0)
        {
            if (rnd % 3 == 0)
                gene[locus] -= 5;
            else
                gene[locus] -= 2;
        }
        else
        {
            if (rnd % 3 == 0)
                gene[locus] += 5;
            else
                gene[locus] += 2;
        }
    }
    else // in the other constants
    {
        // LeadOffTopOfDeckValue and
        // RespondingPlayOffDeckValue
        if (locus == 60 || locus == 63)
        {
            if (rnd % 2 == 0)
            {
                if (rnd % 3 == 0)
                    gene[locus] -= 2;
                else
                    gene[locus] -= 1;
            }
            else
            {
                if (rnd % 3 == 0)
                    gene[locus] += 2;
                else
                    gene[locus] += 1;
            }
        }
        // LeadOnBounceMaxLowCardRank and
        // LeadOnBounceMinHighCardRank
        else if (locus == 61 || locus == 62)
        {
            if (rnd % 2 == 0)
            {
                if (gene[locus] > 2)
                    gene[locus] -= 1;
            }
            else
            {
                if (gene[locus] < 13)
                    gene[locus] += 1;
            }
        }
        else
            assert(0);
    }
}

GeneticAIGenome* GeneticAIGenome::Recombine(const GeneticAIGenome & other) const
{
    // One-point crossover: the first r genes come from 'other',
    // the rest from this genome.
    int r = rand() % N_GENES;
    int i;
    int arr[N_GENES];
    for (i = 0; i < r; ++i)
    {
        arr[i] = other.gene[i];
    }
    for (i = r; i < N_GENES; ++i)
    {
        arr[i] = this->gene[i];
    }
    return new GeneticAIGenome(arr);
}
JeffF,
Thanks a lot. I will take a careful look at it :)
This popular Swedish game for three players is also known as Mjölnarmatte or Mas and in Norwegian it is called Mattis. From www.pagat.com
Hey, here is some more. I was having some problems making this post for some reason (too big?), and kicked my computer, causing it to crash (piece of junk).
Sure.
The card game I was working on, mas, a Scandinavian game, has two phases; this AI was for the first phase, where you collect cards with which to play the second game. So basically I wanted to pile up cards such that I had a favorable hand for the second round. The AI was actually playing the second round with a very primitive AI I never got around to improving.
The easiest way to do genetic optimization is to make an AI that uses a bunch of numeric constants, then randomly mutate, recombine, and select those sets of constants that win the most.
This AI had a bunch of card values and some special-situation modifiers. I stored all of those in an array; that was my chromosome. A number of individuals (about 100 is good) played a number of games (a few hundred worked well) and their win/loss/tie record was stored. Then those that performed best were allowed to continue to the next round. Some of these were mutated. Most were mated, having their chromosome mixed with another individual's by transposing the first n elements of the chromosome. This continued for a few hundred generations (which seemed to be roughly what it took to reach a steady state); a sketch of such a loop follows the tips below.
Some tricks for getting good results from genetic optimizations:
-Mutation frequency should be low. Something like 1 change per individual per generation.
-Recombination (mating) should be high. Almost every new individual should be a new mixture.
-Keep the very best performers from the previous generation exactly as they are, so you don't lose whatever makes them good too quickly.
-Have the best performers produce more offspring for the next generation than lesser performers.
-Keep testing the new guys against a small number of your original eyeballed chromosomes, but don't mate with them. This helps prevent the system from entering pathological conditions, e.g. where producing really bad offspring somehow becomes advantageous.
-Be sure that your selection of games is really random, or it might learn to play the particular set of games you give it.
-Have a way to play your game very quickly (I had a console version as well as a pretty ugly gui version :) )
It is possible to use genetic optimization to make algorithmic improvements in addition to tuning improvements, but it is a lot more involved. I have only read about that.
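As promised above, here is a rough sketch of the generation loop that description implies, driving the GeneticAIGenome class posted earlier. The harness details are illustrative assumptions, not the exact code used: PlayGames is a hypothetical function that plays n games and returns the win count, and the counts and top-half mating are guesses consistent with the tips.

#include <algorithm>
#include <vector>
#include <stdlib.h>
#include "GeneticAIGenome.h"

extern int PlayGames(GeneticAIGenome* g, int nGames); // hypothetical

const int POP_SIZE = 100;       // about 100 individuals
const int N_GAMES = 300;        // a few hundred games each
const int N_GENERATIONS = 300;  // a few hundred generations
const int N_ELITE = 5;          // best performers kept unchanged

bool Better(GeneticAIGenome* a, GeneticAIGenome* b)
{
    return a->GetScore() > b->GetScore();
}

// 'pop' must be seeded with POP_SIZE genomes first; the default
// constructor above holds the hand-eyeballed starting values.
// (Testing against the original eyeballed genomes, per the tips,
// is omitted here for brevity.)
void Evolve(std::vector<GeneticAIGenome*>& pop)
{
    for (int gen = 0; gen < N_GENERATIONS; ++gen)
    {
        // Evaluate: each individual's score is its win count.
        for (size_t i = 0; i < pop.size(); ++i)
            pop[i]->SetScore(PlayGames(pop[i], N_GAMES));
        std::sort(pop.begin(), pop.end(), Better);

        std::vector<GeneticAIGenome*> next;
        // Elitism: carry the best over untouched.
        for (int i = 0; i < N_ELITE; ++i)
            next.push_back(new GeneticAIGenome(*pop[i]));
        // Fill the rest by mating within the top half, with a low
        // mutation rate (roughly one change per new individual).
        while ((int)next.size() < POP_SIZE)
        {
            GeneticAIGenome* a = pop[rand() % (POP_SIZE / 2)];
            GeneticAIGenome* b = pop[rand() % (POP_SIZE / 2)];
            GeneticAIGenome* child = a->Recombine(*b);
            if (rand() % 2 == 0)
                child->Mutate();
            next.push_back(child);
        }
        for (size_t i = 0; i < pop.size(); ++i)
            delete pop[i];
        pop.swap(next);
    }
}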
January 06, 2005 08:37 AM
It didn't use neural networks and I forgot how it worked, but you may want to check the source to Descent 1.
http://www.gamedev.net/reference/list.asp?categoryid=45#202
The creatures in Black & White used perceptrons and decision trees, I believe. I've heard of racing games using NNs more so than other games, though I don't know any examples offhand. Racing games are more NN-friendly because there is better input available to feed the NN, like parameters of the car's physics and the surface it drives on, and you have easier ways to provide feedback to the car for learning. Other than that, learning techniques are pretty rare in commercial games, because you can almost always implement something simpler that works just as well, if not better. Learning algorithms have their own issues that often just aren't worth the hassle.
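To make that input argument concrete, a hypothetical racing setup (every name below is invented for illustration, not from any real game) might feed a network something like:

// A racing AI has a small, well-defined state vector to feed a
// network, and obvious feedback (lap time, time off-track) to
// train against.
struct CarState
{
    double speed;           // m/s
    double steeringAngle;   // radians, negative = left
    double distToLeftEdge;  // meters to the track edge
    double distToRightEdge;
    double surfaceGrip;     // 1.0 for tarmac, lower for gravel etc.
};

// Pack the state into a normalized input vector for the network;
// the outputs would typically be steering and throttle corrections.
void BuildInputs(const CarState& s, double in[5])
{
    in[0] = s.speed / 100.0;
    in[1] = s.steeringAngle;
    in[2] = s.distToLeftEdge / 20.0;
    in[3] = s.distToRightEdge / 20.0;
    in[4] = s.surfaceGrip;
}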