
What are some games that use fuzzy logic for some part of their AI?

Started by July 31, 2002 12:10 PM
27 comments, last by ares32585 22 years, 3 months ago
quote: Original post by Kylotan
Would you be able to provide a small example of how you would use decision theory to approach one of the examples I posed above? All the entry-level online tutorials I've read give examples of decision theory that are basically nothing more than making a discrete choice based on pre-defined probabilities. For example, how would you represent - or get around the need to represent - a fuzzy rule such as "If enemy unit is Near and enemy unit is offensive then risk is high" for a game like Civilization?


Sure thing...

I believe you are using risk in the sense of the likely loss of the unit. Correct?

I'm going to assume this is the case, and associate risk with the probability of the unit taking damage. Sufficient damage will kill the unit, so damage will be measured on a scale of [0, ∞).

Consider a conditional probability distribution for our unit:
p(damage taken | enemy distance, enemy strength, enemy status, our status) 


This function doesn't have to be discrete... it can just as easily be continuous. Now, we need some prior probabilities for enemy distance, enemy strength and enemy status. The enemy might be spread out over different distances, particularly if they are a large force. The strength might vary depending on the make-up of the enemy unit, and finally we might not know its status (which I am going to assume is binary valued). Our status is a decision variable, having several mutually exclusive values (although I guess you could make them not mutually exclusive, if status was actually a set of actions).

So we have:
p(enemy distance) p(enemy strength) p(enemy status)


The probability of taking damage given a particular status is found by integrating (or summing for discrete variables) over all possible values of these variables, according to
p(dam) = ∫_i ∫_j ∫_k p(dam | edist_i, estr_j, estat_k, ostat) p(edist_i) p(estr_j) p(estat_k) di dj dk
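As a concrete sketch of this marginalisation for the discrete case (all the bins, priors and the conditional table here are invented for illustration, not taken from any real game):

```python
import numpy as np

# Hypothetical discretisation: 3 distance bins, 3 strength bins, 2 status values.
p_edist = np.array([0.2, 0.5, 0.3])   # p(enemy distance): near/medium/far
p_estr  = np.array([0.3, 0.4, 0.3])   # p(enemy strength): weak/average/strong
p_estat = np.array([0.5, 0.5])        # p(enemy status): passive/aggressive

# p(dam | edist, estr, estat, ostat) for ONE choice of our status,
# over 4 damage levels (none/light/heavy/fatal). Shape: (3, 3, 2, 4).
rng = np.random.default_rng(0)
p_dam_given = rng.dirichlet(np.ones(4), size=(3, 3, 2))

# Marginalise out the enemy variables:
# p(dam) = sum_i sum_j sum_k p(dam|i,j,k,ostat) p(i) p(j) p(k)
p_dam = np.einsum('ijkd,i,j,k->d', p_dam_given, p_edist, p_estr, p_estat)
print(p_dam)   # a proper distribution over damage levels: sums to 1
```

The `einsum` call is exactly the triple sum written as a tensor contraction, which is also the "few matrix multiplications" point made further down.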


So now we have a distribution over damage taken. We could directly relate this to risk and make a decision, but we might be better off considering the health of the unit before and after encountering the enemy, since presumably we really care about whether the unit dies or lives to heal and fight another day. So, consider

p(health(after encounter) | health(before encounter), dam).

Let's assume the prior for current health is single valued. It might not be single valued if we were predicting the future and didn't know what the exact value would be. We have the distribution p(dam) from above, so we can compute p(health(after encounter)). Now set a value (a measure of utility), U(health), on having the unit at a certain health after the encounter. You might choose a negative value for death and positive, increasing values for increasing health (since one would imagine that the healthier the unit, the fewer resources it takes to get it back to 100% efficiency).

Then, the expected value (expected utility) of the unit after combat is given by
EU(unit) = Σ_health p(health) U(health)


So, how do you use this to make rational decisions? For each of the different decisions our unit could make - in this case its status - figure out what the expected value will be after the encounter. The rational action is the one that maximises this value (given the particular utility function; given a different utility function we might expect a different rational action).
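A minimal sketch of this decision step, with hypothetical actions, health bins, distributions and utility values (none of these numbers come from a real model):

```python
import numpy as np

# p_health[a] is p(health after encounter | action a), over 4 health bins
# (dead/low/medium/full). All distributions are invented for illustration.
p_health = {
    'attack':  np.array([0.30, 0.30, 0.25, 0.15]),
    'defend':  np.array([0.10, 0.25, 0.35, 0.30]),
    'retreat': np.array([0.05, 0.15, 0.30, 0.50]),
}
U = np.array([-10.0, 1.0, 3.0, 5.0])  # U(health): strongly negative for death

# EU(action) = sum over health bins of p(health | action) * U(health)
expected_utility = {a: float(p @ U) for a, p in p_health.items()}
best = max(expected_utility, key=expected_utility.get)
print(expected_utility)
print('rational action:', best)   # -> 'retreat' for these numbers
```

Swapping in a different U (say, one that rewards aggression) would, as noted above, generally change which action comes out as rational.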

Now, while all of this sounds more complex than a few fuzzy rules, it can actually all be done with a few matrix multiplications (one for each action possibility), making implementation very efficient.

When you implement fuzzy logic to do the above, you are more than likely implicitly attempting to model the conditional distributions through fuzzy logic rules that relate the 'nearness', 'strength' and 'status' of an enemy unit, combined with the current unit's status (or other factors), into a danger (or as you call it, risk) factor. You would probably then directly correlate this with the damage level likely to be taken and choose an action (which would presumably change our unit's status) to 'minimise this risk factor' in a deterministic fashion. The problem with doing it this way is that Fuzzy Logic does not guarantee that the decision made is the rational decision, unless the decision problem is basically stimulus-response and the utility function is linear. There's no clear way to perform planning under uncertainty with Fuzzy Logic, since FL collapses the uncertainty (really vagueness) at each decision step, whereas decision theory does not.
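For contrast, here is roughly what the fuzzy-rule version of Kylotan's rule looks like; the membership functions and thresholds are invented for illustration:

```python
# "IF enemy is Near AND enemy is Offensive THEN risk is High"

def near(distance, full_at=5.0, zero_at=30.0):
    """Degree of membership in 'Near': 1 inside full_at, falling linearly to 0."""
    if distance <= full_at:
        return 1.0
    if distance >= zero_at:
        return 0.0
    return (zero_at - distance) / (zero_at - full_at)

def offensive(aggression):
    """Assume aggression is already a degree in [0, 1]."""
    return max(0.0, min(1.0, aggression))

def risk_high(distance, aggression):
    # Fuzzy AND as min() - the usual Mamdani-style conjunction.
    return min(near(distance), offensive(aggression))

print(risk_high(distance=8.0, aggression=0.9))
```

Note how `min()` collapses the two degrees into a single number at this step - that is the collapsing of vagueness at each decision step criticised above.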

I hope this example has helped the discussion and not hindered or side-tracked it.

quote: Original post by Kylotan
Personally, I have been interested in fuzzy logic because it seems to offer a very quick and easy way of mapping rough heuristics to a reasonably high-quality output. It's obvious just by the way they are constructed that you can't expect fuzzy logic systems to be the most accurate way of modelling a system, but they seem efficient in terms of how accurate the system is compared to the developer time invested. They can also use quite simple fuzzy rules which are intuitive to non-mathematicians and programmers, which is potentially a big benefit in the game design world where you may want the designers to be able to shape the AI themselves.


As I said, FL can be made to work in certain domains. If it works for you and does what you want, that's fine, but people should understand the limitations of the technique and certainly understand what it doesn't offer, which is what I was trying to explain in my earlier post.

Cheers,

Timkin
quote: Original post by Argus
This is where the formal definition differs. The formal non-deterministic state machine can go to multiple states, and does so (from a conceptual standpoint).

quote: Original post by Timkin
I've never heard non-determinism used in that vein and I would be very surprised to hear it used that way, with regards to this topic! Did you obtain this definition from a book or a lecturer/teacher? I'd be very much interested to know the source. The reason why I am so curious is that I have taught this material at university level and have had other academics review the course material (with no objections to the terminology or the way in which it is used)!
Hmm, well, as I said, you could search for it on Google. I'm sure a lot of undergrad textbooks have the details as well - it's fairly standard material for comp sci majors, I think. Relatively unrelated to AI, but rather to formal comp sci.
quote: Original post by Argus
I am assuming that the persons asking were referring to the standard sense in which NSMs and DSMs are taught.
quote: Original post by Timkin
...and this is where I would take a stand, because I don't believe that the definition you have given above IS the standard notion of deterministic vs non-deterministic state machines.
Again, look it up on the net or in an undergrad textbook that deals with regular expressions and context-free grammars and suchlike. It is possible that your definition covers mine, but I'm fairly sure most undergrads are taught the same definition I hold. I wish I could give you a good example, but erm, I didn't really learn the material that thoroughly.

EDIT : for quotes and added last sentence.



[edited by - Argus on August 3, 2002 9:35:15 PM]
Just a quick point, Argus: you cannot double-quote, as the automated formatter cannot handle it... so your last reply is embedded in what you were trying to quote. Would you mind editing it and correcting it yourself? As a last resort I will undertake the edit, however I would prefer not to edit other people's posts unless absolutely necessary.

Cheers,

Timkin
On the issue of non-deterministic state machines... I've done a little more research (spoke to a couple of colleagues as well) and I believe that we (Argus and I) are talking about the same thing, except from different perspectives.

Argus, you are talking about a state machine that can have several transitions out of any given state, so that at any instant of time, the state machine is in several states at once. You described this form of transition as non-deterministic, since there is not a 1-1 mapping between states. This is in fact the definition of stochastic. So part of our problem was terminology.

The other part was an assumption of mine... in that we were talking about classical systems (in which the machine can only be in one state at any given time). You, on the other hand, were referring to a non-classical system. The stochastic state machine still very much applies in this case... in fact, without enumerating all paths through all sub-machines (difficult for large machines), a probability distribution is the only way to describe the state of the system at any given time (remember it's still multiple states... think of a wave function in quantum mechanics).

To elaborate: if you were to consider any particular state, you could ask yourself what the probability is that the state is active at that time. You would end up with a probability distribution over all states. Furthermore, the transition to the next state(s) can be given by a uniform distribution over possible states. So, for any particular state, you can write down a conditional distribution p(state | parent1, parent2, ...).

Alternatively, you can consider all possible paths through the machine and display them as a separate sub-machine, which is what you were talking about with the subsets. Each of these paths also has a uniform probability associated with it, and the probability distribution over states at any time can be read off by counting, over all paths, which states are active at that time (and normalising).

So this is a specific case of a stochastic state machine modelling a non-classical system. Specific because of the uniform transition density.
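A small sketch of that stochastic view: instead of tracking one active state, propagate a probability distribution over states using a transition matrix with uniform mass over each state's outgoing edges. The 4-state machine and its edges here are invented for illustration.

```python
import numpy as np

# Hypothetical NFA-style machine: state -> list of possible next states.
edges = {0: [1, 2], 1: [3], 2: [1, 3], 3: [3]}
n = 4

# Row-stochastic transition matrix: uniform over the outgoing edges.
T = np.zeros((n, n))
for s, outs in edges.items():
    T[s, outs] = 1.0 / len(outs)

belief = np.zeros(n)
belief[0] = 1.0          # start: certainly in state 0
for _ in range(2):       # two transition steps
    belief = belief @ T
print(belief)            # distribution over states after 2 steps
```

Because `T` is uniform over possible successors, this is exactly the "specific case" above; a general stochastic state machine would simply use non-uniform rows.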

My apologies for my assumption that we were talking about classical systems. I'll be more careful in the future!

Thankfully we are both vindicated... I thought I was going nuts there for a minute!

Cheers,

Timkin
Done.
Heh, ok, I'll take your word for it - I kinda understand what you're getting at. As I said before (obscured by the double quote), the NSMs that I (and presumably many other undergrads) was taught may be covered by your definition of stochastic state machines.

I can see where probability distributions might have to be used in large machines, although is uniform transition density an assumption? And if so, is it a valid one?

quote: Original post by Timkin
Consider a conditional probability distribution for our unit:
p(damage taken | enemy distance, enemy strength, enemy status, our status)  


This function doesn't have to be discrete... it can just as easily be continuous. Now, we need some prior probabilities for enemy distance, enemy strength and enemy status.

But how do you obtain those prior probabilities? Obviously you could say that p(enemy distance) is 1 when the enemy is in contact with the unit, but what about at any other range? What do you do, take the reciprocal of the distance?

Obviously some factors can be statistically deduced, but distance is not so simple. I suppose you could go through all the possible unit types checking how far they can move and assigning the probability according to that (picking the score off a cumulative frequency curve). What other ways are there of getting this input data?

enemy distance would not be a probability. It's a discrete value that is used in conjunction with a "safe distance" function to determine whether the enemy is in range or a threat. The enemy strength is also a single value, and useless unless you create some ratio between our unit and the enemy unit. Relative strength is more useful than the actual strength.

We can then weigh the values of damage taken. This would actually be best represented as a 0-to-1 percentage of damage instead of the actual number, so we have a continuous range of values between two states: dead (1.0, full damage taken) to fully repaired (0.0, no damage taken). Values in between are the degree of deadness/damage. That's fuzzy logic, since you are using this continuous truth value to help determine risk.

The status of the ship, which I would assume is a set of values that determine whether it's attacking, retreating, etc., could be represented by multiple true/false values; or, if the states are a set of degrees (i.e. alertness, shield level, offensiveness, speed, etc.), then fuzzy logic helps by using a range of values (again, notice fuzzy logic is merely a way to deal with a continuous range, not probability).

Now, granted, sometimes the direct values are useful; for instance, the fuzzy representation of speed doesn't help determine the ETA of the enemy ship, though the ratio could help determine whether the enemy is catching up or not (or vice versa). I think the problem is that some people rate the pure-maths end of things too highly versus the actual implementation. Sure, you could do all this statistical analysis, but it can be pretty inefficient in many scenarios. A few fuzzy logic rules (maybe even coupled with a neural net) could make things less CPU-intensive and possibly more accurate. Statistical analysis of a situation requires that the data set is somewhat uniform; if the data set happened to have many outliers, you could get funky results (not saying that a fuzzy system would be immune, it just seems that a pure statistical analysis of the data would be more susceptible).
Yes, we know how to implement fuzzy logic on the problem, but thanks for the input, 'a person'. There are actually many different fuzzy rule sets you could design, and each would have its advantages and disadvantages. Kylotan was asking me to explain how I would set the problem up in the framework of decision theory, hence my post.

Anyway, in response to your question Kylotan...

The prior for each of the variables can be built at the time the inference needs to be performed. If there is just one single enemy unit and you know where it is, then the prior will be a Dirac delta function about that location (i.e., contribute no uncertainty). Let's consider that this is not the case and that in fact there are multiple soldiers in the enemy unit. We don't know the health of each of them and they are spread out over a small area. The prior on distance could then be expressed in terms of the mean position of all enemy soldiers and the variance, assuming a Gaussian distribution of soldiers around that mean position. That's good enough for our purposes.

Now, if we don't know the health of the enemy and we have no additional information available to us, then we use a uniform prior for their health. Alternatively, we could cheat and look at their health (as so many AIs do!). Let's assume again that the health of each soldier is distributed about the mean health with Gaussian statistics. Just compute the mean and the variance given the health of each soldier.

As for their status, we either assume a uniform distribution, since we don't know what they're going to do in response to what we're going to do; or, if they're already doing something... like attacking us... then we set their status to that value and compute accordingly.
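A sketch of building the distance and health priors as described; the soldier data here is entirely made up for illustration:

```python
import math

# Hypothetical observations of the enemy unit's soldiers.
soldier_distances = [12.0, 14.5, 11.0, 13.5, 15.0]
soldier_health    = [0.8, 0.6, 0.9, 0.7, 0.5]

def gaussian_prior(samples):
    """Mean and (population) variance, parameterising a Gaussian prior."""
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return mean, var

def gaussian_pdf(x, mean, var):
    """Density of the fitted Gaussian at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

dist_mean, dist_var = gaussian_prior(soldier_distances)      # p(enemy distance)
health_mean, health_var = gaussian_prior(soldier_health)     # p(enemy health)
print(dist_mean, dist_var)
print(gaussian_pdf(dist_mean, dist_mean, dist_var))          # density at the mean
```

A single known enemy location would instead collapse the distance prior to a delta, and an unknown health with no observations would fall back to the uniform prior mentioned above.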

Does this make sense?

Cheers,

Timkin

This topic is closed to new replies.
