
Auxiliary Strategic Information

Started by Oluseyi, February 08, 2002 01:10 AM
16 comments, last by Oluseyi
I couldn't decide which of the recent RTS threads to post this in, so I started a new one. I was browsing a short while ago and came across Towards Articulate Game Engines. It struck me as being very similar to bishop_pass's annotated cellar experiment, as well as resonating with some recent discussions on unit tactics in RTS games. Some of you may have read it while others may not (I, for example, only skimmed the surface - I have a full multimedia assignment due today [Friday] for my AI in Narrative... class, and I haven't started yet!) Anyway, I would be interested in hearing reactions - just sitting back and learning as you deconstruct and analyze the article. Thanks.

I wanna work for Microsoft!
[ GDNet Start Here | GDNet Search Tool | GDNet FAQ | MS RTFM [MSDN] | SGI STL Docs | Google! ]
Thanks to Kylotan for the idea!
Let me see if I understood this article correctly, as a lot of it seemed to fly over my head.

Basically the objects in the game need to have a conceptual understanding of the game world. In other words, if I have a platoon of tanks, it needs to be able to understand certain things like whether it is in danger, or if it can see a vulnerability in the enemy's position.

But in order to do that, would the objects need to have some sort of awareness of what other objects are and are capable of? Almost like having a database of what it knows, using its sensory capabilities to compare what it has empirically viewed and translating that raw data into "conceptual information"?

He goes on to differentiate quantitative knowledge vs. qualitative knowledge, and argues that the key to having qualitative knowledge is having a conceptual understanding of the problem domain at hand. Once this conceptual understanding is fed into the compiler, it generates both quantitative analysis (number crunching) and the "self-explanatory authoring" that provides an object's "intelligence" - the appropriate behavior or choice of action.

My question is... how do you teach objects this conceptual understanding? How do you feed an object the information that it needs to be able to understand the priorities and the interrelated links that tie objects together? For example, let's say that I sent a recon unit to what I think is my enemy's right flank. My recon unit "sees" that his flank is virtually unguarded and, more to the point, sees a valuable supply area that is unguarded. How does the recon unit gain the "conceptual knowledge" that tells it that:

A) it is in no immediate danger
B) the supply depot is a valuable target
C) it will help the battle effort by destroying the unguarded depot?

I don't think the author tells how this is really possible. Everything else he says makes sense, but he makes it sound so easy. Admittedly, my programming skills are incredibly basic, so a lot of this I'm trying to comprehend, but I don't understand how some of the things he says can be done... at least from the basic description that he gives. In his little diagram, he even points out that there is a "Domain Theory", which I presume is the "conceptual knowledge base", and the scenario. So in my above example, the scenario is the recon unit, and the Domain Theory is the a), b) and c). These two go through his SIMGEN compiler and produce a qualitative analysis (in my example, "hey, why not blow up the depot while we're out here, since it will be beneficial?").

The scenario is easy to explain, but what about the "Domain Theory" and "conceptual knowledge"? Like I said, I would imagine that in order to do that, objects would have to be able to recognize what other objects were, what those objects do, and how they relate to environmental factors. Then it could formulate "judgements". That sounds like a very tall order to me, but admittedly one that I hope comes about.

"The world has achieved brilliance without wisdom, power without conscience. Ours is a world of nuclear giants and ethical infants. We know more about war than we know about peace, more about killing than we know about living. We have grasped the mystery of the atom and rejected the Sermon on the Mount." - General Omar Bradley
I haven't read the article yet, but going by your description, Dauntless, I thought there was AI that did this sort of thing. I'm no expert on AI so I don't remember what it's called.
Well, I glanced at the article, but I really didn't read it completely. That doesn't mean I won't or that I don't find such things interesting. Quite the contrary: I am a big advocate of academic AI and giving common sense reasoning abilities to computers, especially within the context of games. Given that, let me try and answer Dauntless' question about how conceptual understanding arises with my take on it all.

Conceptual understanding arises from inferencing about perceptions and existing knowledge. It is implicit knowledge made explicitly available. Let's look at the microtheory of family relationships as an example.

Let's assume that we already know about mothers and fathers and grandparents and children and aging. Let's assume that we also know particular existing relationships, such as who Mary's parents are. Now, let's assume we just recently learned that Mary has a daughter named Susan - that is, Mary is the mother of Susan.

Due to our conceptual understanding of this domain, we can now infer all of the following:

Susan is female.
Susan is younger than Mary.
Susan is younger than the parents of Mary.
Susan is the granddaughter of the parents of Mary.
Susan is the child of Mary.
Susan is the grandchild of Mary's parents.
Mary's parents are older than Susan.
Susan is of the same species as Mary.
Susan is of the same species as Mary's parents.
Susan has a father.
Susan's father would be male.
Susan has a mother.
Mary is female.

All of these facts are inferred because of the domain knowledge about family relationships, age, gender, and so on. A new fact, no matter how small, triggers a great deal of new knowledge. Now, what if one of the above inferred facts were in fact the one nugget of knowledge which could save our lives? By having the domain knowledge and the ability to make the inferences, this knowledge is available.
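To make that concrete, here is a minimal forward-chaining sketch in Python. The tuple format and the rule functions are my own invented stand-ins for a real knowledge representation, not anything from the article:

# Facts are tuples like ("mother", "Mary", "Susan");
# rules derive new facts from existing ones.

def forward_chain(facts, rules):
    """Apply every rule repeatedly until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            for new_fact in rule(facts):
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

# A sliver of the family-relationships microtheory:
def mothers_are_female(facts):
    return [("female", f[1]) for f in facts if f[0] == "mother"]

def mothers_are_parents(facts):
    return [("parent", f[1], f[2]) for f in facts if f[0] == "mother"]

def grandparents(facts):
    parents = [(f[1], f[2]) for f in facts if f[0] == "parent"]
    return [("grandparent", gp, child)
            for (gp, mid) in parents
            for (mid2, child) in parents if mid == mid2]

known = {("mother", "Mary", "Susan"),   # the newly learned fact
         ("parent", "Elaine", "Mary")}  # one of Mary's parents

derived = forward_chain(known, [mothers_are_female,
                                mothers_are_parents,
                                grandparents])
print(("female", "Mary") in derived)                  # True
print(("grandparent", "Elaine", "Susan") in derived)  # True

One tiny fact about Susan cascades into several derived facts, which is exactly the effect described above.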

This is why I have been continually advocating research into semantic nets, predicate calculus, first-order logic, and resolution refutation here on these boards for quite some time. Such a system would enable exactly the above, plus the ability for agents to catch contradictions. The system would also enable an authoring system where the common sense coder would not be able to enter logically inconsistent rules. Such a system, augmented with non-monotonic logic, could be powerful. Unfortunately, resolution refutation doesn't scale well, but research into partitioning common sense knowledge bases using a vertex minimum cut is proving successful.
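For flavor, here is what resolution refutation looks like at the propositional level - a toy sketch under my own invented encoding, not the first-order machinery a real common sense system would need:

# Clauses are frozensets of literals; a literal is (name, sign).
from itertools import combinations

def resolve(c1, c2):
    """Return every resolvent of two clauses."""
    out = []
    for (name, sign) in c1:
        if (name, not sign) in c2:
            out.append(frozenset((c1 - {(name, sign)}) |
                                 (c2 - {(name, not sign)})))
    return out

def refutes(clauses):
    """True if the clause set is unsatisfiable (empty clause derivable)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:           # derived the empty clause
                    return True
                new.add(r)
        if new <= clauses:          # nothing new was derived
            return False
        clauses |= new

# Knowledge: mother(Mary,Susan) implies parent(Mary,Susan); mother(Mary,Susan).
kb = [frozenset({("motherMarySusan", False), ("parentMarySusan", True)}),
      frozenset({("motherMarySusan", True)})]

# To prove parent(Mary,Susan), add its negation and hunt for a contradiction.
print(refutes(kb + [frozenset({("parentMarySusan", False)})]))   # True

The same mechanism, run over an authoring system's rule base, is what would catch a coder entering logically inconsistent rules.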

As an aside, let me discuss non-monotonic logic and one way of implementing it. Look at the truth table below:

T  = True absolutely
TD = True by default
U  = Unknown
FD = False by default
F  = False absolutely
C  = Contradictory

     | T   TD  U   FD  F
  -----------------------
   T | T   T   T   T   C
  TD | T   TD  TD  U   F
   U | T   TD  U   FD  F
  FD | T   U   FD  FD  F
   F | C   F   F   F   F


The above truth table is applicable for determining which fact has precedence over another in the face of conflicting truth values. Look at the example axiom below.

If x is the mother of y AND z is the spouse of x THEN z is the father of y.

If we attach a truth value of TD to this axiom, meaning true by default, it can be overridden by any conflicting rule which has a truth value of T, meaning true absolutely. For example, the two rules below have truth values of T:

If x is the parent of y and x is male, THEN x is the father of y.
Everyone has exactly one father.

Those two rules, if fired, produce T, which would override the TD produced by the first rule about the spouse, thus enabling non-monotonic logic.

Another example might be if we learned that the spouse of Mary was NOT the father of Mary's child. In other words, we learned the fact about z being the father of y with a truth value of F. Well, if we look at the truth table, we see that F gets precedence over TD.
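The whole table fits in a small lookup. A sketch, using my own encoding of the five values:

# The non-monotonic truth table as a Python dict: COMBINE[(old, new)]
# gives the value that wins when two sources disagree.
COMBINE = {
    ("T",  "T"): "T", ("T",  "TD"): "T",  ("T",  "U"): "T",  ("T",  "FD"): "T",  ("T",  "F"): "C",
    ("TD", "T"): "T", ("TD", "TD"): "TD", ("TD", "U"): "TD", ("TD", "FD"): "U",  ("TD", "F"): "F",
    ("U",  "T"): "T", ("U",  "TD"): "TD", ("U",  "U"): "U",  ("U",  "FD"): "FD", ("U",  "F"): "F",
    ("FD", "T"): "T", ("FD", "TD"): "U",  ("FD", "U"): "FD", ("FD", "FD"): "FD", ("FD", "F"): "F",
    ("F",  "T"): "C", ("F",  "TD"): "F",  ("F",  "U"): "F",  ("F",  "FD"): "F",  ("F",  "F"): "F",
}

# The spouse rule asserted father(z, y) with TD; then we learn F:
print(COMBINE[("TD", "F")])   # F - absolute knowledge beats the default
print(COMBINE[("T",  "F")])   # C - a genuine contradiction to flag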

Of course, what a rule implies is only as strong as its premises. Consider the rule below:

If x is the mother of y, x is the parent of y.

That rule has a truth value of T. However, if the premise, which is (x is the mother of y), only has a truth value of TD, the inference, which is (x is the parent of y), only gets a truth value of TD. So, if we know absolutely that Mary is the mother of Susan, then we know absolutely that Mary is the parent of Susan. If, on the other hand, our knowledge about Mary being the mother of Susan is sketchy, then this propagates to our knowledge of Mary being the parent of Susan, giving that fact a truth value of TD.
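One simple scheme consistent with those examples (my guess at an implementation, not something spelled out above): a conclusion inherits the weakest certainty among the rule and its premises. The sketch below only orders the "positive" values; FD and F would need more care:

# Certainty ordering T > TD > U; a chain of reasoning is only
# as strong as its weakest link.
CERTAINTY = {"T": 2, "TD": 1, "U": 0}

def conclusion_value(rule_value, premise_values):
    return min([rule_value] + list(premise_values),
               key=lambda v: CERTAINTY[v])

print(conclusion_value("T", ["T"]))    # T  - Mary is absolutely the mother
print(conclusion_value("T", ["TD"]))   # TD - sketchy premise, sketchy conclusion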

_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
This thread is not getting the attention it deserves. True AI requires a conceptual understanding of the domain it operates in. Think of it as common sense.

The article Oluseyi mentioned discusses battle tactics, and how an effective agent in strategic battles needs a conceptual understanding of that domain. I would appreciate it if readers looked at my post above for detail on the aspects of conceptual understanding, but for flavor, I'll provide some simple examples below related to battle.

Let's say an enemy has just unleashed a planet-killer bomb on Vega 4. The princess from Orion 2 was visiting Vega 4 for the Festival of Colors, as a sign of goodwill to the people of Vega 4, when the incident happened. The planet is destroyed. How does the general commanding the military powers opposed to this enemy use conceptual understanding?

The general knew beforehand that the princess was visiting Vega 4. Later, he learns that Vega 4 was destroyed, and by whom. He knows that when planets are destroyed, everyone on the planet dies as a result of murder. He is then able to reason that the princess was murdered - and he knows by whom. He knows the political ramifications of this, and seeks to gain an ally by seeking out the governing body of Orion 2. Only through a conceptual understanding of the situation, including murder, blowing up planets, royalty, etc., does all of this occur.
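That chain of reasoning is the same forward-chaining pattern sketched earlier in the thread. A self-contained toy version, with all predicate names invented:

# The general's inference: on_planet(x, p) and destroyed_by(p, k)
# together yield murdered_by(x, k).
facts = {("on_planet", "princess", "Vega 4"),
         ("destroyed_by", "Vega 4", "enemy")}

def destruction_is_murder(facts):
    """Destroying a planet murders everyone on it."""
    new = set()
    for f1 in facts:
        if f1[0] != "on_planet":
            continue
        for f2 in facts:
            if f2[0] == "destroyed_by" and f2[1] == f1[2]:
                new.add(("murdered_by", f1[1], f2[2]))
    return new

facts |= destruction_is_murder(facts)
print(("murdered_by", "princess", "enemy") in facts)   # True

The political reasoning (royalty, alliances) would just be more rules of the same shape layered on top.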

*bump*. For no particular reason other than that I think the page 1 discussion is stagnating...

Yeah, well, I tried to elaborate on the whole theme. I had hoped that someone would enter the thread and either add something, argue about something, or ask about something.
Bump - Another reason to pay attention in math classes besides matrix operations.
Well, things that go *bump* in the night are all well and good, but how about some content as well?

Yes, that means you specifically. Must I carry the burden of this thread all by my lonesome? Oluseyi introduced it, and I built on it. Now, it's everybody else's turn.

Feedback? Questions? Ideas? Criticisms? Discussion? A synopsis?

Actually, I'm still trying to digest a lot of this. I think part of the problem is that there aren't too many real programmers here... myself included (though I could be mistaken).

My level of programming knowledge is very abstract and much more of a theoretical than a practical nature. But logic is logic, and if it's spoken in enough one-syllable words, I'll eventually get it.

AI is actually the most intriguing part of programming for me. Since I really want to make a strategy game, and with my concepts in mind, my game would have to have SUPERB AI. I was browsing www.gameai.com for some tidbits and to familiarize myself with some AI terminology (I'm still not really sure what the difference is between genetic algorithms, neural networks, and A-Life, for example). Well, anyway, I stumbled on a German government-sponsored site for game AI, believe it or not. The researchers there were going over autonomous agents, and at first it looked like it held some promise for what I wanted to do.

However, the more I looked at it, the more I realized it was totally unsuitable for my game. Indeed, autonomous agents as I understood them seemed much more geared toward, say, bots for FPS-style games. There was no sort of information exchange between the agents, nor was there any sort of collective planning and organization - both critical elements of a strategy-style game.

I'm interested in this topic precisely because I want to do two things. I want to break units down to their smallest level (I call it an OU, for Organized Unit), and each of these will have a Leader. So there must be OU intelligence, and there must be Leader intelligence. Perhaps OU intelligence is a bit misleading; maybe I should say OU awareness. The OU will react to certain events, but the LEADER is the brains. And more importantly, the OU must be able to pass information that it recognizes is important to the Leader, and the Leader in turn must pass it to HIS leader. Up the chain of command it goes until the information is passed to what I term an "avatar", which is a physical representation of the player on the map.
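A bare-bones sketch of that report-passing structure; every class and field name here is invented for illustration:

# Reports bubble up the chain of command. Each Leader decides whether
# a report matters given his standing orders before passing it up.
class Leader:
    def __init__(self, name, superior=None):
        self.name = name
        self.superior = superior    # None marks the player's avatar
        self.watch_for = None       # standing orders: report kinds that matter

    def relevant(self, report):
        return self.watch_for is None or report["kind"] in self.watch_for

    def receive(self, report):
        if not self.relevant(report):
            return                  # the Leader judges it unimportant
        if self.superior:
            self.superior.receive(report)   # pass it up the chain
        else:
            print(f"{self.name} (avatar) informed: {report}")

avatar = Leader("Player")
captain = Leader("Captain", superior=avatar)
recon = Leader("Recon Leader", superior=captain)
captain.watch_for = recon.watch_for = {"enemy_contact", "supply_depot"}

# The OU spots an unguarded depot; its Leader passes the report up:
recon.receive({"kind": "supply_depot", "where": "southern ridgeline"})

The interesting part is the relevance test: that is where a real conceptual understanding of the standing orders would have to plug in, replacing the crude kind-matching above.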

In other words, information and decision making is neither automatic nor God-like. The Leader of the OU must have a comprehensive understanding of what is important in the context of the standing orders given to him. For example, let's say my avatar sends an order down the chain of command to a Leader to have his OU "patrol the southern ridgeline, and draw the enemy out".

There are three key concepts here:
Patrol - an action which is primarily reconnaissance
Ridgeline - a geographical location
Draw out - a complex action wherein a unit exposes itself to bait the enemy out.

How does it understand these concepts? As Bishop said, you can give the Leader some basic rules, and then it can formulaically determine interrelationships from the empirical information it obtains. However, I think there is a limit here. What happens if you don't give the Leader enough initial information for it to "create a logical formula"? In other words, the knowledge is somewhat hardwired, and the unit is not adaptive to situations it has never encountered before or has no initial information on.

I know that neural nets are designed to make programs "learn" about certain situations, so perhaps this is something I should be looking into?

Going back to the original article, I'm still wondering how "Domain Theory" and "conceptual understanding" are done. I haven't studied heuristics since my high school calculus days, but I think there has to be some kind of "problem solving" set that is given to programs. For example, let's say I ask a human what he thinks is true: do more English words begin with the letter K, or have K as their third letter? Chances are, he will answer "start with K". The answer is actually the other one. Our human heuristics use "shortcuts" to try to provide us with answers that seem consistent with the world around us, but they often fail.

Perhaps another example would be to show a child an apple, a pear and a nectarine, then explain that these are all fruits because they have seeds inside them. Then show the child a tomato and say that it too has seeds inside it. Finally, ask the child if he thinks a tomato is a fruit (I know some adults who can't accept it). I think any programs made with built-in heuristics or monotonic logic (still not sure what that means... "one shape" logic?) will have these same limitations.

