quote: Original post by Kylotan
Would you be able to provide a small example of how you would use decision theory to approach one of the examples I posed above? All the entry-level online tutorials I've read give examples of decision theory that are basically nothing more than making a discrete choice based on pre-defined probabilities. For example, how would you represent - or get around the need to represent - a fuzzy rule such as "If enemy unit is Near and enemy unit is offensive then risk is high" for a game like Civilization?
Sure thing...
I believe you are using risk in the sense of the likely loss of the unit. Correct?
I'm going to assume this is the case, and associate risk with the probability of the unit taking damage. Sufficient damage will kill the unit, so damage will be measured on a scale of [0, infinity).
Consider a conditional probability distribution for our unit:
p(damage taken | enemy distance, enemy strength, enemy status, our status)
This function doesn't have to be discrete... it can just as easily be continuous. Now, we need some prior probabilities for enemy distance, enemy strength and enemy status. The enemy might be spread out over different distances, particularly if they are a large force. The strength might vary depending on the make-up of the enemy unit, and finally we might not know its status (which I am going to assume is binary valued). Our status is a decision variable, having several mutually exclusive values (although I guess you could make them not mutually exclusive, if status was actually a set of actions).
So we have:
p(enemy distance) p(enemy strength) p(enemy status)
The probability of taking damage given a particular status is found by integrating (or summing for discrete variables) over all possible values of these variables, according to
p(dam) = ∫_i ∫_j ∫_k p(dam | edist_i, estr_j, estat_k, ostat) p(edist_i) p(estr_j) p(estat_k)
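To make this concrete, here's a rough sketch of the discrete case in Python/numpy. Everything in it - the number of bins, the shapes, the prior values, and names like cpd and p_edist - is made up purely for illustration; a real game would fill these in from its own data:

```python
import numpy as np

# Made-up discretisation: 5 damage bins, 3 distance bins (near/medium/far),
# 2 strength levels, 2 enemy status values. cpd[d, i, j, k] stands in for
# p(damage bin d | edist_i, estr_j, estat_k) for one FIXED value of our
# status (the decision variable). Random numbers stand in for real data.
rng = np.random.default_rng(0)
cpd = rng.random((5, 3, 2, 2))
cpd /= cpd.sum(axis=0, keepdims=True)   # normalise over damage bins

p_edist = np.array([0.5, 0.3, 0.2])     # prior over enemy distance
p_estr  = np.array([0.6, 0.4])          # prior over enemy strength
p_estat = np.array([0.7, 0.3])          # prior over enemy status

# The triple sum from the formula above, as one tensor contraction:
p_dam = np.einsum('dijk,i,j,k->d', cpd, p_edist, p_estr, p_estat)
print(p_dam, p_dam.sum())               # a proper distribution: sums to 1
```

In the continuous case the sums just become integrals, as noted above.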
So now we have a distribution over damage taken. We could directly relate this to risk and make a decision, but we might be better off considering the health of the unit before and after encountering the enemy, since presumably we really care about whether the unit dies or lives to heal and fight another day. So, consider
p(health(after encounter) | health(before encounter), dam).
Let's assume the prior for current health is single valued. It might not be single valued if we were predicting the future and didn't know what the exact value would be. We have the distribution p(dam) from above, so we can compute p(health(after encounter)). Now set a value (a measure of utility), U(health), on having the unit at a certain health after the encounter. You might choose a negative value for death and positive, increasing values for increasing health (since one would imagine that the healthier the unit, the less resources it takes to get them back to 100% efficiency).
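As a code sketch (continuing the made-up discretisation from above, with a hypothetical transition table trans[h_after, h_before, dam]), this step is a single matrix-vector product:

```python
import numpy as np

rng = np.random.default_rng(1)
n_health, n_dam = 6, 5

# p(health_after | health_before, dam), normalised over health_after.
trans = rng.random((n_health, n_health, n_dam))
trans /= trans.sum(axis=0, keepdims=True)

p_dam = rng.dirichlet(np.ones(n_dam))   # stand-in for the marginal computed above
health_now = 4                          # current health is known (single-valued prior)

# p(health_after) = sum over dam of p(health_after | health_now, dam) p(dam)
p_health_after = trans[:, health_now, :] @ p_dam

# Utility: strongly negative for death (bin 0), increasing with health.
U = np.array([-10.0, 1.0, 2.0, 4.0, 7.0, 10.0])
```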
Then, the expected value (expected utility) of the unit after combat is given by
EU(unit) = Σ_health p(health) U(health)
So, how do you use this to make rational decisions? For the different decisions our unit could make - in this case its status - figure out what the expected value will be after the encounter. The rational action is the one that maximises this value (given the particular utility function; a different utility function might yield a different rational action).
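Tying it all together, a toy decision loop might look like this (same made-up shapes and numbers as the sketches above, with one hypothetical damage CPD per candidate status):

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_dam, n_health = 3, 5, 6

cpds = rng.random((n_actions, n_dam, 3, 2, 2))   # p(dam | ...) for each action
cpds /= cpds.sum(axis=1, keepdims=True)          # normalise over damage bins
p_edist = np.array([0.5, 0.3, 0.2])
p_estr = np.array([0.6, 0.4])
p_estat = np.array([0.7, 0.3])

trans = rng.random((n_health, n_health, n_dam))  # p(h_after | h_before, dam)
trans /= trans.sum(axis=0, keepdims=True)
health_now = 4                                   # known current health bin
U = np.array([-10.0, 1.0, 2.0, 4.0, 7.0, 10.0])  # negative for death, rising with health

def expected_utility(a):
    # Marginalise out the enemy variables for this action's CPD...
    p_dam = np.einsum('dijk,i,j,k->d', cpds[a], p_edist, p_estr, p_estat)
    # ...propagate to the post-encounter health distribution...
    p_health_after = trans[:, health_now, :] @ p_dam
    # ...and take the expectation: EU = sum over health of p(health) U(health).
    return p_health_after @ U

best = max(range(n_actions), key=expected_utility)
print(best, expected_utility(best))
```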
Now, while all of this sounds more complex than a few fuzzy rules, it can actually all be done with a few matrix multiplications (one set for each action possibility), making implementation very efficient.
When you implement fuzzy logic to do the above, you are more than likely implicitly attempting to model the conditional distributions through the fuzzy logic rules that relate the 'nearness', 'strength' and 'status' of an enemy unit, combined with the current unit's status (or other factors), into a danger (or as you call it, risk) factor. You would probably then directly correlate this with the damage level likely to be taken and choose an action (which would presumably change our unit's status) to 'minimise this risk factor' in a deterministic fashion. The problem with doing it this way is that Fuzzy Logic does not guarantee that the decision made is the rational decision, unless the decision problem is basically stimulus-response and the utility function is linear. There's no clear way of performing planning with Fuzzy Logic in the light of uncertainty, since FL collapses the uncertainty (really, vagueness) at each decision step, whereas decision theory does not.
I hope this example has helped the discussion and not hindered or side-tracked it.
quote: Original post by Kylotan
Personally, I have been interested in fuzzy logic because it seems to offer a very quick and easy way of mapping rough heuristics to a reasonably high-quality output. It's obvious just by the way they are constructed that you can't expect fuzzy logic systems to be the most accurate way of modelling a system, but they seem efficient in terms of how accurate the system is compared to the developer time invested. They can also use quite simple fuzzy rules which are intuitive to non-mathematicians and programmers, which is potentially a big benefit in the game design world where you may want the designers to be able to shape the AI themselves.
As I said, FL can be made to work in certain domains. If it works for you and does what you want, that's fine, but people should understand the limitations of the technique and certainly understand what it doesn't offer, which is what I was trying to explain in my earlier post.
Cheers,
Timkin