
AI decision making problem

Started by August 01, 2010 10:35 PM
12 comments, last by warhound 14 years, 6 months ago
Morality and risk are separate factors from gain. Gain depends on the end goal: if survival requires food and water, then gain is measured in terms of food/water gained; if weapons or ammunition are needed, then gain is measured in terms of weapons/ammo. Morality and risk affect which decisions can or cannot be taken. Each AI has a risk trait and a morality trait that set a different limit on how much risk or immorality it will accept. For example, a unit that is sensible and has strong morals will take only a moderate amount of risk and will only take actions that are not immoral. Each action carries a morality weighting. Risk is calculated via an equation that also helps determine the initial positioning of units in an engagement. So no, risk and morality do not affect the gain; they affect the decision-making process. These factors restrict actions in the same way that one's own sense of morals and appetite for risk would.
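To make that concrete, here is a minimal sketch of the filter-then-maximize scheme described above, assuming hypothetical names (Action, Agent, riskLimit, immoralityLimit); the actual risk equation and morality weightings are not shown.

```cpp
#include <vector>

// One candidate action, scored against the current end goal.
struct Action {
    float gain;        // gain toward the goal (food, ammo, ...)
    float risk;        // output of the risk equation for this action
    float immorality;  // morality weighting of the action
};

// Per-unit traits that limit which actions are acceptable.
struct Agent {
    float riskLimit;        // maximum acceptable risk
    float immoralityLimit;  // maximum acceptable immorality
};

// Discard actions outside the agent's risk/morality limits,
// then pick the highest-gain action that remains.
const Action* chooseAction(const Agent& agent, const std::vector<Action>& actions)
{
    const Action* best = nullptr;
    for (const Action& a : actions) {
        if (a.risk > agent.riskLimit) continue;             // too risky for this trait
        if (a.immorality > agent.immoralityLimit) continue; // too immoral for this trait
        if (!best || a.gain > best->gain) best = &a;
    }
    return best; // nullptr if every action was filtered out
}
```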

I hope that helps everyone understand my AI a bit better... :)

No one expects the Spanish Inquisition!

You are correct in stating that many of these things do not lie on the same axis. Take the classic "fear and greed" that people talk about: they are not on the same continuum, and you can in fact construct a 2D graph of them. However, the threshold for action is often a diagonal line across that graph, above or below which the agent will act.

I wrote an article in AI Wisdom 4 showing a mathematical model (and C++ functions) that you can use to process these issues fairly easily (from a technical standpoint). You can extend this to 3 or more axes, of course, depending on how many competing factors are in place.
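As a rough illustration of that diagonal threshold (this is not the model or code from the article), a weighted sum of the competing factors compared against a single cutoff draws exactly such a line in 2D and generalizes to a plane for three or more axes:

```cpp
#include <cstddef>

// Returns true if the weighted sum of the factors crosses the threshold.
// With two factors the boundary is a diagonal line across the 2D graph;
// with three or more it becomes a plane/hyperplane.
bool shouldAct(const float* factors, const float* weights,
               std::size_t count, float threshold)
{
    float score = 0.0f;
    for (std::size_t i = 0; i < count; ++i)
        score += factors[i] * weights[i];
    return score >= threshold;
}

// Hypothetical usage: fear argues against acting, greed argues for it.
// float factors[] = { fear, greed };
// float weights[] = { -1.0f, 1.0f };
// bool act = shouldAct(factors, weights, 2, 0.25f);
```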

Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play

"Reducing the world to mathematical equations!"

This idea of using risk or morality or other elements as filters is equivalent to saying that the agent is infinitely unhappy about taking such actions. This can lead to bad situations, although perhaps these won't arise in your game and you'll be OK. If the survival of humanity depends on you breaking some moral rule (say, stealing the ultimate weapon from the bad guys, or killing an innocent person), you should probably break that rule, don't you think?
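One way to see the point: a hard filter behaves like an infinite cost, while a finite morality penalty can still be outweighed by a large enough gain. A small sketch of the contrast, with hypothetical names and numbers:

```cpp
#include <limits>

struct Option {
    float gain;        // expected gain from the action
    float immorality;  // how immoral the action is
};

// Hard filter: any action past the limit is treated as infinitely bad,
// so it can never win no matter how large the gain is.
float filteredScore(const Option& o, float immoralityLimit)
{
    if (o.immorality > immoralityLimit)
        return -std::numeric_limits<float>::infinity();
    return o.gain;
}

// Finite penalty: immorality costs a lot, but a sufficiently large gain
// (say, the survival of humanity) can still outweigh it.
float penalizedScore(const Option& o, float moralityWeight)
{
    return o.gain - moralityWeight * o.immorality;
}
```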



That is true, come to think of it... you've gotten me thinking about whether I should change some of the mechanics. Maybe a larger amount should be subtracted from the morality cost when the gain really is that large? I hope to use this system in later games, so I will consider using the expected utility equation you describe, since I plan to carry it into sequels to this game and into the eventual end result it leads up to (a sort of grand finale, Fallout 3-ish game). Thanks for your help, and if anyone else has any input on the matter, please do add it.

EDIT:
I'm now wondering whether alvaro's expected utility is the best way to go, or whether the original approach is better: if the gain suits the character type (say, a lot of gain for the unit itself when the character type is individualistic) and the unit has some morals (a moderate immorality limit), a certain number of morality points are subtracted to make the decision feasible for the unit. I'm open to all input on the matter.
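For comparison, here is a sketch of that original approach, where morality points are spent only when the gain suits the character type; the names and thresholds are hypothetical, not taken from the actual game:

```cpp
// Per-unit morality state: a hard limit plus a budget that can be spent.
struct Unit {
    float immoralityLimit;  // trait: how much immorality the unit tolerates outright
    float moralityPoints;   // budget that can be spent to bend that limit
};

// A candidate decision, scored against the unit's end goal.
struct Choice {
    float gain;             // gain, measured against the current end goal
    float immorality;       // morality weighting of the action
};

// Returns true if the unit will take the action, possibly spending
// morality points to cover the part that exceeds its limit.
bool isFeasible(Unit& unit, const Choice& c, float gainThreshold)
{
    if (c.immorality <= unit.immoralityLimit)
        return true;                        // within the unit's morals as-is
    float excess = c.immorality - unit.immoralityLimit;
    if (c.gain >= gainThreshold && unit.moralityPoints >= excess) {
        unit.moralityPoints -= excess;      // the gain justifies bending morals
        return true;
    }
    return false;                           // too immoral for this unit
}
```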

[Edited by - kryotech on August 2, 2010 11:24:40 PM]




No one expects the Spanish Inquisition!

