How to learn AI
Hey folks,
this is my first post on the AI board. So far I have only gathered some experience with C++ and OpenGL. Those two fields kept me pretty busy, and I mainly programmed for work reasons. But now I would like to start a project that has always been in the back of my mind, and for that I need AI. I want to write a simulation of a soccer or hockey game where every player and his actions are simulated. Up to now there only exist engines that produce more or less plausible results for a game. Since I don't know anything about AI, I really don't know what to look for if I want to program my players. I had something like this in mind:
For every time interval, every player has some options, like playing a pass, shooting, running, and so on. Depending on their characteristics, they choose one of those options, with some being more likely than others under certain conditions. Since it seems impossible to hand-write that many ifs and elses, I think an AI is the only answer to this problem.
I'd appreciate any link, idea or suggestion!
Cheerio, Woltan
I think most people use a switch setup.
function onUpdate()
{
checkState();
}
function checkState()
{
switch(state)
{
case "idle":
//do idle script
break;
case "fighting":
//do fighting script
break;
case "dead":
//do dead script
break;
}
}
Then inside the different scripts, you have either more switches, or your iteration statements.
Quote: Original post by BUnzaga
I think most people use a switch setup.
I think he means he doesn't want to go through and systematically program all the cases, but instead have an AI go through and decide how to react.
All AI is essentially built of those thousands of if/elses, but there are ways of disguising them, such as neural networks. If you are only getting started with AI, I would avoid those for quite a while, at least until you're comfortable with developing things like finite state machines / fuzzy state machines.
What you might want to do is develop a system where, for every timestep and for each agent (AI-controlled player), you go through the list of actions; each action can check the state of the agent and then give itself a priority as to whether or not it should be used. At a simple level, you then pick the action with the highest priority and run it on that agent.
With this, you are separating all of your if/elses so they can be hidden, and each action only needs to test the conditions that apply to it (i.e., for 'Shoot' you only test 'Do I have the puck?' and 'Am I aiming at the goal?'). The reason to use the priority system is so you can implement something like an 'Intercept' state, where above all other states the player will try to block an opponent's shot or pass. Using a system like this, you build each action separately in its own contained space as it needs to be implemented, sort of like a plug-and-play system.
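If it helps, here is a minimal C++ sketch of that priority-picking idea. The Agent fields, the action class and the numbers are all made up for illustration, not taken from any particular engine:

#include <memory>
#include <vector>

// Hypothetical per-player data the actions look at.
struct Agent {
    bool hasPuck = false;
    bool facingGoal = false;
    // position, stamina, skill ratings, ...
};

// Each action can score itself for a given agent and knows how to run.
struct Action {
    virtual ~Action() = default;
    virtual float priority(const Agent& a) const = 0;  // 0 means "not applicable"
    virtual void execute(Agent& a) = 0;
};

struct ShootAction : Action {
    float priority(const Agent& a) const override {
        // Only the checks that matter for shooting.
        return (a.hasPuck && a.facingGoal) ? 0.8f : 0.0f;
    }
    void execute(Agent& a) override { /* shoot the puck */ }
};

// Per timestep: every action scores itself, the highest-priority one runs.
void think(Agent& agent, const std::vector<std::unique_ptr<Action>>& actions) {
    Action* best = nullptr;
    float bestScore = 0.0f;
    for (const auto& act : actions) {
        float score = act->priority(agent);
        if (score > bestScore) { bestScore = score; best = act.get(); }
    }
    if (best) best->execute(agent);
}

Each new action plugs in as its own class without touching the others, which is the plug-and-play quality mentioned above; an 'Intercept' action would simply return a very high priority whenever an opponent's shot needs blocking.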
If you can, I'd recommend getting a copy of Programming Game AI by Example; it's a very good book IMO, and it even has an example of a soccer AI which you may find useful.
Hey folks,
thanks for all your advice and input!
Ultimape is right: I would rather not program all the ifs and elses, since I think that would be far too much work, and with growing complexity the result could become unrealistic at best.
@Exorcist:
Neural networks and finite/fuzzy state machines are only words to me; I don't know anything about them. But what you said afterwards sounds more or less like what I had in mind: the agent knows its state, and from all its actions, each of which has a probability for that state, it calculates what to do. However, this also looks like a lot of work.
I would rather develop a system where the agents learn by themselves what is best in certain situations. I don't know if this would work, but I pictured it something like this (simple version):
Every agent has the possibility to shoot, pass or run. It also has its characteristics. Let's say agent A can make very nice passes while agent B fails to pass the puck over 4 feet. Based on the outcomes of certain situations, those agents develop a kind of behavior that is optimal for them, etc.
I know that this approach could lead to a simulation of anything but a hockey game. But since it is a simulation with its own rules and some sort of closed system (that's really what I am after: a closed system with its own rules and an optimal solution), I am willing to leave the realistic hockey game to those guys from the NHL.
And who knows, maybe with the right parameters it is possible to simulate a hockey game.
@Julian
Thanks for the book advice. I'll have a look at it.
@all
thanks again for those postings, and I apologise for my English ;)
If you still have some links, advice or suggestions, keep them coming!
Regards Woltan
Before you get too excited about the idea of an AI that teaches itself, be aware that it is more work, not less work. A simplified model for a preset AI goes like this:
Parse Inputs->Choose Action
to make that into a self-learning AI, you need to add to it:
Parse Inputs->Update Knowledge->Use Updated Knowledge to Choose Action
So even in this vastly oversimplified model, you can see that you still need to do all the work you would need to do for the preset AI, plus a lot more. Same set of if/elses, only now they get their values from another chunk of code rather than values you have set. You might be hoping that the learning model would save you from having to tune the parameters of the AI - no such luck, I'm afraid. While it would tune its own parameters for play, you'd still need to tune the learning parameters. Either way you'll end up tuning something by hand, and my experience is that it's a lot easier to build a good preset AI than one that learns dynamically. Real-time learning systems are very hard to do well.
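To make the extra layer concrete, a rough structural sketch in C++ might look like this; every type and function name here is invented for illustration:

struct GameInputs { /* positions, possession, score, ... */ };
enum class ActionId { Pass, Shoot, Run };

// Preset AI: Parse Inputs -> Choose Action.
struct PresetAgent {
    virtual ~PresetAgent() = default;
    virtual ActionId chooseAction(const GameInputs& in) {
        // hand-tuned rules live here
        return ActionId::Run;
    }
};

// Learning AI: Parse Inputs -> Update Knowledge -> Choose Action.
// It needs everything the preset agent needs, plus a knowledge model
// and the learning parameters that still have to be tuned by hand.
struct LearningAgent : PresetAgent {
    void updateKnowledge(const GameInputs& in, float lastOutcome) {
        // adjust internal weights based on how the last action worked out
    }
    ActionId chooseAction(const GameInputs& in) override {
        // same decision step, but driven by the learned weights
        return ActionId::Run;
    }
};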
In the example you gave, you really don't need learning anyway. The logic is self-suggesting: if they are good at passing, make them more likely to pass; if not, they are more likely to run with the ball. I'm afraid that's the nature of game AI coding. It's a fancy way of saying 'I code giant if/else trees for a living' ;)
[Edited by - Gibberstein on January 16, 2007 6:48:02 PM]
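For that 'good passers pass more often' point, a tiny C++ sketch of a preset, probability-weighted choice could look like this; the skill ratings are hypothetical 0-to-1 attributes on the player, and the weights are arbitrary:

#include <random>

enum class Choice { Pass, Shoot, Run };

Choice pickAction(float passSkill, float shootSkill, std::mt19937& rng) {
    // Better ratings get heavier weights; running soaks up the remainder.
    std::discrete_distribution<int> dist({passSkill, shootSkill, 1.0f});
    switch (dist(rng)) {
        case 0:  return Choice::Pass;
        case 1:  return Choice::Shoot;
        default: return Choice::Run;
    }
}

Tuning those weights by hand is exactly the kind of parameter work described in the post above.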
I had a feeling those would only be words to you; the point was that self-teaching AI is a lot more complex than you would think. That, and it also requires that you have an existing AI system more or less anyway.
The self-teaching comes into play by modifying the system I described earlier, so that the AI agent changes how it prioritizes each action. It still has to know each action, and you do have to provide some means of analysing the action anyway.
Self-teaching AI should be a lesson tackled after writing essentially hard-coded AI.
Oh, and the system I described in my previous reply is a type of finite state machine (FSM). An FSM is a system where there is one active state, and inside that state lies the deterministic functionality to switch to another state.
A Fuzzy State Machine (FuSM) is sort of the same, but multiple states run at the same time.
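For reference, a bare-bones FSM along those lines might be sketched in C++ like this; all the names are illustrative only:

// One state is active at a time, and the active state itself decides
// when to hand control to another state.
struct Player;  // whatever data the states need to read

struct State {
    virtual ~State() = default;
    // Returns the state to run next tick: usually itself, sometimes a new one.
    virtual State* update(Player& p) = 0;
};

struct IdleState : State {
    State* update(Player& p) override {
        // e.g. if the puck comes near, return a chase state instead
        return this;  // otherwise stay idle
    }
};

struct StateMachine {
    State* current = nullptr;
    void tick(Player& p) {
        if (current) current = current->update(p);
    }
};

A fuzzy version would keep a weight per state and run every state whose weight crosses some threshold, instead of holding a single current pointer.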
"I would rather not program all the ifs and elses"
Unfortunately AI calls for a lot of that. Even after generalizing the logic with functions and mathematical models that remove redundant logic subsets, there is still a huge amount of logic for even the simplest game.
Even when you have a 'self-learning' system, some part must use hand-built if-then logic to process the game situation into logical factors and later convert the decisions into appropriate actions.
Games have used specifically scripted logic (versus generalized solutions) because computers didn't have the resources to process real AI, and scripts were also much less work. Of course, such games are unfortunately limited to only what the scripts cover and fall apart when you do something unexpected.
Hey ho, y'all,
Since I don't have any experience with AI programming, what I am about to say might be wrong, or at least partially wrong. But I pictured my AI working like this:
First I would establish some world rules everything has to comply with, e.g. no one can run faster than such-and-such, passes can only be played in a certain way, etc. Those rules would also include physical parameters. Within that set of rules I would give my agents certain possibilities to act, e.g. play a pass, run, shoot, try a trick, etc. Under all those rules, which every agent must fulfill, I would let them play soccer games. And from the outcomes of certain actions they would learn whether what they did was good or bad, e.g. they played a pass that got intercepted.
Since I have never done anything like this, I don't know if it will work or what the outcome might be. I know that it is highly unlikely that the outcome will be a realistic soccer simulation. My goal is a simulation where the parameters of the players, e.g. stamina, passing, etc., play a role in the outcome of the game and in the behavior of the agents.
I picture the behavior of the agents as a function of multiple parameters, where finding the global minimum with respect to the functions of the other agents would be the best behavior. I think it would be a lot of fun to put in different agents and see how they influence the game.
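If that learning route is attempted, a very rough C++ sketch of the outcome-driven update might look like the following; the action set, reward values and learning rate are all arbitrary placeholders, and (as Gibberstein points out above) the learning rate itself still has to be tuned by hand:

#include <algorithm>
#include <array>
#include <random>

// Each agent keeps a preference weight per action and nudges it after
// seeing how the action worked out.
enum Action { kPass = 0, kShoot, kRun, kNumActions };

struct LearningPlayer {
    std::array<float, kNumActions> weight{1.0f, 1.0f, 1.0f};

    int choose(std::mt19937& rng) const {
        std::discrete_distribution<int> d(weight.begin(), weight.end());
        return d(rng);  // higher weight => chosen more often
    }

    // reward could be +1 for a completed pass, -1 for an interception, etc.
    void learn(int action, float reward) {
        const float rate = 0.1f;  // a learning parameter you still tune by hand
        weight[action] = std::max(0.01f, weight[action] + rate * reward);
    }
};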
Unfortunately, all this seems to be unrealistic?
But if it is, what exactly won't work?
Thanks for your help!
Cheerio, Woltan
This topic is closed to new replies.