Social Interaction A.I. Based on Human Psychology?
I don't think it is a good idea to use artificial neural networks. ANNs are basically highly parametrized functions that can be tuned (trained) to fit data. I don't see a problem of that type in any of what you described. Perhaps the word "neural" is giving you inflated expectations for the applicability of ANNs (this is a common phenomenon).
The natural paradigm to represent the kind of decision making you describe is expected utility theory, which in some sense is the solution to AI in general. Your soldiers have several actions to choose from, and they need to evaluate how happy they expect to be if they take one or the other. Each action can result in several different outcomes, with probabilities attached (the agent's prediction of what will happen). Then each outcome can be evaluated by a function that will result in a real number (called utility), which describes how happy the agent is with each outcome. The only thing left to do is compute the expected value of the utility of each action, and pick the action where the maximum is achieved.
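To make that concrete, here is a minimal sketch in Python; the action names, outcomes, probabilities, and utility numbers are all invented for illustration, not part of any particular engine:

```python
# Minimal expected-utility action selection (all names and numbers invented for illustration).

def expected_utility(outcomes, utility):
    """outcomes: list of (probability, outcome) pairs for one action."""
    return sum(p * utility(o) for p, o in outcomes)

def choose_action(actions, utility):
    """actions: dict mapping action name -> list of (probability, outcome) pairs."""
    return max(actions, key=lambda a: expected_utility(actions[a], utility))

# Hypothetical example: outcomes are dicts of facts the agent cares about.
actions = {
    "attack":  [(0.6, {"enemy_dead": 1, "self_dead": 0}),
                (0.4, {"enemy_dead": 0, "self_dead": 1})],
    "retreat": [(1.0, {"enemy_dead": 0, "self_dead": 0})],
}
utility = lambda o: 10 * o["enemy_dead"] - 50 * o["self_dead"]
print(choose_action(actions, utility))  # -> "retreat" with these made-up numbers
```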
If you implement the utility function by adding together a bunch of terms, you can model different personalities by changing the weight of each term. You can also get interesting behavior by changing the way the agents estimate the probability of the outcomes: a reckless character could be one that doesn't fear danger (utility of dying is higher than in other agents), or it could be one that doesn't see danger (the probability of death is smaller than in other agents' estimates).
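Continuing the sketch above, personalities could be nothing more than different weight vectors over the same terms, and a "doesn't see danger" agent could bias its probability estimates; the terms, weights, and the "optimism" parameter below are all my own invented example values:

```python
# Personality = a vector of weights over utility terms (all numbers made up).
PERSONALITIES = {
    "cautious": {"kill_enemy": 10, "die": -100, "ally_saved": 20},
    "reckless": {"kill_enemy": 10, "die": -20,  "ally_saved": 20},  # fears death less
}

def utility(outcome, weights):
    """outcome: dict of term -> value (e.g. 0/1 flags); weights: one personality's vector."""
    return sum(weights.get(term, 0.0) * value for term, value in outcome.items())

def perceived_probability(true_p, optimism=0.0):
    """A 'doesn't see danger' agent shaves its estimate of a bad outcome's probability."""
    return max(0.0, true_p - optimism)

# utility({"die": 1}, PERSONALITIES["cautious"])  -> -100
# utility({"die": 1}, PERSONALITIES["reckless"])  -> -20
```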
I am not very fond of psychological models that seem completely arbitrary, like the one you just described, or Freud's. Anyway, if this model helps you think of how to organize a utility function, that's great. Just leave the artificial neural networks out of this, so you can actually understand and debug the behavior of your agents.
Oh... and ditch the NNs. They aren't as sexy as they sound.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"
Alternately, if a training set isn't used (or is only partially used as a starting point), then you need some mechanism that tells the system whether its response was good or bad. That can be very hard -- especially in temporal cases where something that happened a while back was the key factor in how the situation resolved and how the result gets judged.
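A toy sketch of the credit-assignment problem being described (the discounting scheme and numbers here are my own invention, not a recommendation): a single good/bad judgment arrives at the end of an encounter, and you have to decide how much of it each earlier decision deserves.

```python
# Toy temporal credit assignment: spread one end-of-encounter score back over earlier decisions.
def assign_credit(decisions, final_score, discount=0.9):
    """decisions: list of decision records, oldest first.
    Returns one credit value per decision; recent decisions get more of the blame/praise,
    which is exactly the assumption that fails when the key factor happened long ago."""
    credits = []
    weight = 1.0
    for _ in reversed(decisions):
        credits.append(final_score * weight)
        weight *= discount
    return list(reversed(credits))

# assign_credit(["flank", "hold", "charge"], final_score=-1.0)
# -> [-0.81, -0.9, -1.0]: the "charge" made just before the loss is blamed the most.
```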
The other part of NNs that is often overlooked is how the simple inputs to that NN mathematical model get generated in the first place. The game situation has to be interpreted and reduced to numeric inputs, which in a complex game is usually quite difficult to do correctly. The situation has to be summarized and factored, and finding good, general factors usually requires a lot of manual creation and validation to match the game mechanics (and likely the specific scenarios).
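For example, a hand-crafted featurization might look like the sketch below; every feature and dictionary key is hypothetical, and choosing a set that actually captures the game mechanics is the hard, manual part being described.

```python
# Hypothetical hand-crafted reduction of a combat situation to numeric inputs.
def featurize(situation):
    """situation: a plain dict of raw game-state facts (all keys invented for illustration)."""
    return [
        situation["health"] / situation["max_health"],   # own health, normalized
        min(situation["enemy_distance"], 50.0) / 50.0,   # distance, clamped and scaled
        min(situation["visible_allies"], 3) / 3.0,       # visible allies, capped and scaled
        1.0 if situation["enemy_fleeing"] else 0.0,      # simple boolean flag
    ]

# featurize({"health": 40, "max_health": 100, "enemy_distance": 12.0,
#            "visible_allies": 2, "enemy_fleeing": False})
# -> [0.4, 0.24, 0.666..., 0.0]
```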
You suddenly find (as you will in real AI) that the program mechanism is less than 10% of the effort; the other 90% is developing the domain-specific logic data. Building a 'self-learning' mechanism doesn't mean you can skip the subsequent guidance of the training. You still have to create the test scenarios: random generation doesn't work, because real situations aren't just an assemblage of random situational factors but a cohesive pattern that has to be created somehow.
I think you would get a reasonably good result by scripting these behaviors manually. Then, if you are really dedicated to making them more complex, you could use a planning algorithm able to infer the consequences of its actions.
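If you did go the planning route, the simplest version is a forward search over a model of actions and their effects; this is a generic breadth-first sketch with invented action tuples and a toy domain, not a specific planning library.

```python
from collections import deque

# Minimal forward-search planner. Each action is (name, precondition, effect), where
# precondition tests a state dict and effect returns the successor state dict.
def plan(start, goal_test, actions, max_depth=6):
    frontier = deque([(start, [])])
    seen = set()
    while frontier:
        state, steps = frontier.popleft()
        if goal_test(state):
            return steps
        key = tuple(sorted(state.items()))
        if key in seen or len(steps) >= max_depth:
            continue
        seen.add(key)
        for name, pre, eff in actions:
            if pre(state):
                frontier.append((eff(state), steps + [name]))
    return None

# Hypothetical toy domain: a soldier must draw a weapon before attacking.
actions = [
    ("draw_sword", lambda s: not s["armed"], lambda s: {**s, "armed": True}),
    ("attack",     lambda s: s["armed"],     lambda s: {**s, "enemy_down": True}),
]
print(plan({"armed": False, "enemy_down": False},
           lambda s: s["enemy_down"], actions))   # -> ['draw_sword', 'attack']
```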
A game design note: I know this is unrealistic and goes against immersion, but I think the battle you describe (one player-controlled monster vs. 3 NPC soldiers) could become very interesting if the player knew the personality of his opponents: "you face Krom the coward, Bel the brave and Olmec the loyal". Then it would make sense to use an intimidating but less defensively effective stance to frighten the coward, then concentrate efforts on the brave one.
At the scale of 100+ agents, if done correctly, I think there could really be a lot of fun. Dwarf Fortress has some basic mechanics like that, and the crowd behaviors are sometimes funny, sometimes tragic, but always explainable.
Quote: Original post by alvaro
I like the idea of having interacting agents with different morals, but I would approach the situation in a very different way.
The natural paradigm to represent the kind of decision making you describe is expected utility theory, which in some sense is the solution to AI in general. Your soldiers have several actions to choose from, and they need to evaluate how happy they expect to be if they take one or the other. Each action can result in several different outcomes, with probabilities attached (the agent's prediction of what will happen). Then each outcome can be evaluated by a function that will result in a real number (called utility), which describes how happy the agent is with each outcome. The only thing left to do is compute the expected value of the utility of each action, and pick the action where the maximum is achieved.
The problem with Decision Theory and Game Theory and their circular, tautological definition of what rationality entails is that they also have little bearing on how a person actually behaves. Ergo economic crises and their unpredictability. Economists will argue that a household behaves more like this rational utility-maximizing hypothetical than a single person does, but even that is a gross approximation. Even assuming bounded rationality clashes with reality, as psychology experiments and common-sense observation show.
Incidentally, Richard Evans, Phil Carlisle, and I are giving a lecture at the GDC AI Summit: "Breaking the Cookie-Cutter: Modeling Individual Personality, Mood, and Emotion in Characters".
Also, in my book (which I can now link to!), Behavioral Mathematics for Game AI, I cover the concepts of game/decision theory, their pluses and minuses, and how to model decisions based on multi-attribute decision theory.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"
Excellent food for thought.
In my post there was an inadvertent, somewhat accidental focus on NNs; they're not really something I'm hung up on or overly enthusiastic about. I'm new to tackling complicated A.I., and NNs are just one model to look into. I may even have been using the term incorrectly with regard to the way I'd attempt to implement the ideas I was thinking about.
It's great to have these other suggestions for things to read up on and look into, which I'll definitely be doing.
Yvanhoe: Now that I think about it, I really do love the idea of knowing an opponent's personality and adapting strategy accordingly. I think that could make a fantastic class skill available to some characters -- sense emotion, etc. That said, most people are unable to hide extreme emotion, so a perceptive character would generally be able to sense when someone is wavering anyway. Such a lot you could add to an encounter...
InnocuousFox: Can't tell you how much I'd love to be at that lecture. A little far from home, sadly.
Cheers!