
The annotated cellar: an experiment

Started by January 20, 2002 07:42 PM
32 comments, last by bishop_pass 22 years, 10 months ago
This whole thing seems to lend itself to OO methods.

MAP_OF_RUSSIA is a kindof MAP which is a kindof PAPER which is a kindof MAN_MADE_ITEM

etc.

Rather than annotating everything manually, annotations should be derived: because paper is flammable, a map is a kind of paper, and the Map of Russia is a kind of map, the Map of Russia is flammable.
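As a minimal sketch, that derivation could be a simple walk up the kind-of chain at run time (the container names and the hasProperty helper below are invented for illustration):

#include <iostream>
#include <map>
#include <set>
#include <string>

// Each item names the kind it derives from.
std::map<std::string, std::string> kindOf = {
    {"MAP_OF_RUSSIA", "MAP"},
    {"MAP", "PAPER"},
    {"PAPER", "MAN_MADE_ITEM"},
};

// Annotations live on the most general kind that carries them.
std::map<std::string, std::set<std::string>> annotations = {
    {"PAPER", {"flammable"}},
};

// Walk up the kind-of chain until the property is found or the chain ends.
bool hasProperty(std::string item, const std::string& prop) {
    for (;;) {
        auto a = annotations.find(item);
        if (a != annotations.end() && a->second.count(prop)) return true;
        auto up = kindOf.find(item);
        if (up == kindOf.end()) return false;
        item = up->second;
    }
}

int main() {
    std::cout << hasProperty("MAP_OF_RUSSIA", "flammable") << "\n"; // prints 1
}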

Just my two cents.
I thought it had already been said, but this kind of inheritance could be represented with a basic semantic net, so it is changeable at run time rather than compile time (for what it's worth). You are also not limited to tree structures; instead you could set up some sort of web. This lets you 'derive' certain objects from an arbitrary number of base types, and any collisions or ambiguities can be handled by your system rather than the compiler.

E.g., both Key and Broom would have an 'instance_of' link to Portable, and Portable objects broadcast the "take me" action. Broom might also be an instance of LongObject, which broadcasts the "poke things with me" action. Map Of Russia might be an instance of Paper ("Burn me! Crumple me!"), Written ("Read me!"), and Portable.

Apart from the 'instance_of' link, you can have other links, such as 'in state'. A door might have an 'in state' link to the 'Closed' object, which broadcasts 'open me'. When an object is opened, it would need some way of knowing to change this link to point to the 'Open' object, so that it would subsequently broadcast the 'close me' action.

I understand that this doesn't really contribute anything to solving the problem itself, but such a representation makes it very quick and easy for the game designer to create new objects that broadcast 90% of the required actions. This makes the approach much more workable than if the designer had to explicitly specify the actions for every item. As you add more abstract classes to the net, your system grows in power such that there could be a lot of emergent behaviour in the eventual system.
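To make that concrete, here is one possible shape for such a net, with instance_of and in-state links held as plain data so they can be rewired at run time; all node and action names are illustrative, and the sketch assumes the net has no cycles:

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

struct Node {
    std::vector<std::string> instanceOf; // any number of "base types"
    std::string inState;                 // e.g. "Closed" or "Open"
    std::set<std::string> actions;       // actions this node broadcasts
};

std::map<std::string, Node> net;

// Collect actions from the object itself, its current state, and every
// node reachable through instance_of links (assumes an acyclic net).
std::set<std::string> broadcast(const std::string& name) {
    std::set<std::string> out;
    const Node& n = net[name];
    out.insert(n.actions.begin(), n.actions.end());
    if (!n.inState.empty())
        for (const auto& a : broadcast(n.inState)) out.insert(a);
    for (const auto& base : n.instanceOf)
        for (const auto& a : broadcast(base)) out.insert(a);
    return out;
}

int main() {
    net["Portable"].actions = {"take me"};
    net["LongObject"].actions = {"poke things with me"};
    net["Closed"].actions = {"open me"};
    net["Broom"].instanceOf = {"Portable", "LongObject"};
    net["Door"].inState = "Closed"; // re-linked to "Open" when opened
    for (const auto& a : broadcast("Broom")) std::cout << a << "\n";
}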

I agree completely with the inheritance issue and the semantic net. To me it's a given. My not mentioning it does not mean it wouldn't be a feature. The reason I haven't mentioned it is that for a prototype with 5 items, it is simpler to ignore that and encode all of the features within each object, just to get something working.

And another thing regarding inheritance: I would never use the inheritance features of a language like C++ for that. That means putting knowledge into code, which is definitely a poor thing to do with regard to AI. You want to keep everything as non-opaque as possible, facilitating explanation, analysis, and reasoning.

The semantic net is the way to go.

quote:
Original post by bishop_pass

In response to the above about goals and constraints, have you thought about the way I suggested in the other thread? Basically I am proposing to set up ACTION data like this:

Door
action: openDoor(AGENT,DOOR)
changes: open(DOOR)
constraints: closed(DOOR), NOT locked(DOOR), nextTo(AGENT, DOOR)

Door
action: unLock(AGENT, DOOR, KEY)
changes: NOT locked(DOOR)
constraints: nextTo(AGENT, DOOR), holding(AGENT, KEY), locked(DOOR)

Key
action: get(AGENT, KEY)
changes: holding(AGENT, KEY)
constraints: nextTo(AGENT, KEY), holding(AGENT, nothing)

Now, the items in the changes field are the world states that will be in effect after the action is executed. The items in the constraints field are the world states that must be in effect for the action to be executed. Both the changes field and the constraints field take the exact same type of item: world states.

Therefore, accomplishing a goal is the process of finding the action whose changes provide the desired world state. Then the constraints are checked. If they are not met, the constraints become the new subgoals.

Another advantage is the non-opaque nature of the data, facilitating explanation about goals and constraints.


All this is very familiar, actually. I've been more or less doing the same thing in a hard-coded way, so the transition to a data-oriented game world should go pretty smoothly.

But I don't think I'll worry about that just yet. The next thing on my mind when I get some time to work on this again will be to solve some problems with the algorithm itself (like making sure that an object cannot be used in two contradictory ways during the same plan - the chair used both to climb on and to push the trapdoor). But the action/changes/constraints structure looks really good. I can already imagine the nice game-world scripts waiting to be processed.
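For what it's worth, the quoted scheme maps almost directly onto backward chaining: find an action whose changes contain the goal, then recurse on its constraints. A bare-bones sketch, with world states as plain strings and negation, variables, and backtracking all left out:

#include <iostream>
#include <set>
#include <string>
#include <vector>

struct Action {
    std::string name;
    std::vector<std::string> changes;     // states true after execution
    std::vector<std::string> constraints; // states required beforehand
};

std::vector<Action> actions = {
    {"openDoor", {"open(DOOR)"}, {"closed(DOOR)", "nextTo(AGENT,DOOR)"}},
    {"walkTo", {"nextTo(AGENT,DOOR)"}, {}},
};

std::set<std::string> world = {"closed(DOOR)"};

// Regress the goal: if it already holds, done; otherwise look for an
// action that provides it and treat its unmet constraints as subgoals.
// Applies actions to the world as it goes; no backtracking.
bool achieve(const std::string& goal, std::vector<std::string>& plan) {
    if (world.count(goal)) return true;
    for (const Action& a : actions) {
        for (const std::string& c : a.changes) {
            if (c != goal) continue;
            bool ok = true;
            for (const std::string& pre : a.constraints)
                ok = ok && achieve(pre, plan); // constraints become subgoals
            if (!ok) continue;
            plan.push_back(a.name);
            world.insert(a.changes.begin(), a.changes.end());
            return true;
        }
    }
    return false;
}

int main() {
    std::vector<std::string> plan;
    if (achieve("open(DOOR)", plan))
        for (const auto& step : plan) std::cout << step << "\n"; // walkTo, openDoor
}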
quote: Original post by Diodor
... (like making sure that an object cannot be used in two contradictory ways during the same plan - the chair used both to climb on and to push the trapdoor)...


I think the key here (and it is a subtle one) is the duration of change to the world state. Minimally, we have at least two kinds:

  • The kind where the change in state remains in effect until something else changes it. It is like a toggle switch.
  • The kind where the change in state remains in effect only as long as it is made to stay like that. It is like a spring-loaded push button.


An example of the first one is unlocking a door. The door remains unlocked until locked again. The key is only necessary for the change of state, not the maintenance of the state.

The agent standing on the chair is an example of the second one. The agent's added height is in effect only as long as the agent is standing on (using) the chair, which prevents the chair from being used for anything else.
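One hypothetical way to encode that distinction is a per-state field naming the action that maintains it: toggle-style states leave it empty and persist on their own, while spring-loaded states are retracted the moment their supporting action ends. All names below are invented:

#include <iostream>
#include <map>
#include <string>

struct Fluent {
    bool value;
    std::string maintainedBy; // empty => toggle-style, persists on its own
};

std::map<std::string, Fluent> world;

// Retract every state that only held while the given action was running.
void endAction(const std::string& action) {
    for (auto& entry : world)
        if (entry.second.maintainedBy == action) entry.second.value = false;
}

int main() {
    world["unlocked(DOOR)"] = {true, ""};              // stays until re-locked
    world["raised(AGENT)"] = {true, "standOn(CHAIR)"};
    endAction("standOn(CHAIR)");                       // the agent steps down
    std::cout << world["unlocked(DOOR)"].value << " "  // prints 1
              << world["raised(AGENT)"].value << "\n"; // prints 0
}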

However, one need not actually encode such knowledge into the system. There is an alternative way of handling it which is simpler, and it boils down to the constraints again. The constraints necessary to allow an agent to pick up a chair conflict with an agent standing on that chair. Therefore, it seems reasonable that the agent could not do both at the same time.

The remaining problem is preventing the agent from oscillating back and forth between the two states: standing on the chair, then getting off the chair to hold it, then putting the chair back down to stand on it. In this case, we have an agent who can't seem to see that each use is preventing the other. This is solvable with constraints also. The agent stands on the chair to gain height, and that constraint remains in effect while he pursues his goal of opening the trapdoor. Therefore, he would never choose to use the chair as a pushing device if he needs to stand on it for increased height. But of course, testing is necessary, and the ideas need more formal validation.
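As a sketch of that protection idea (names are illustrative, not a worked-out design): constraints still serving an active goal are marked protected, and any candidate action whose changes would violate one is rejected outright, so the agent never pulls the chair out from under itself:

#include <iostream>
#include <set>
#include <string>
#include <vector>

// States that an in-progress goal still depends on.
std::set<std::string> protectedStates = {"standingOn(AGENT,CHAIR)"};

struct Action {
    std::string name;
    std::vector<std::string> deletes; // states the action would make false
};

// An action is only allowed if it violates no protected state.
bool allowed(const Action& a) {
    for (const auto& d : a.deletes)
        if (protectedStates.count(d)) return false;
    return true;
}

int main() {
    Action pushWithChair{"pushTrapdoorWith(CHAIR)", {"standingOn(AGENT,CHAIR)"}};
    std::cout << (allowed(pushWithChair) ? "ok" : "rejected") << "\n"; // rejected
}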



quote:
Original post by bishop_pass

I think the key here (and it is a subtle one) is the duration of change to the world state. Minimally, we have at least two kinds:

  • The kind where the change in state remains in effect until something else changes it. It is like a toggle switch.
  • The kind where the change in state remains in effect only as long as it is made to stay like that. It is like a spring-loaded push button.


Yay! Spring-loaded push buttons!! Boy, just think about the puzzles that can be created. Simple example: two buttons placed far enough apart that they cannot be pushed at the same time. Two NPCs. The NPCs can solve this puzzle easily if the second NPC were an object too and would respond to messages just like anything else.


quote:
This is solvable with constraints also. The agent stands on the chair to gain height, and that constraint remains in effect while he pursues his goal of opening the trapdoor. Therefore, he would never choose to use the chair as a pushing device if he needs to stand on it for increased height. But of course, testing is necessary, and the ideas need more formal validation.


Yes, it works, but the original plan of the NPC is flawed anyway. If there were just a chair and nothing else to push the trapdoor with, he shouldn't even climb onto the chair in the first place.

I think this cannot be solved unless, during the unifying process, the objects change their world states too (virtually, of course).

Then there's the problem of creating objects: if creating an object is allowed, shouldn't the NPC be able to think up a plan involving the creation and use of an object? This would mean the unifying process would be allowed not only to change world states (not the real world states, but a test copy), but also to create temporary test objects for planning purposes only and send them messages just like the real objects. Boy, I just can't wait to get back to coding.
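A sketch of how that test copy might look, with the world held as a plain value so cloning it is cheap; the ROPE* object and the state names are purely illustrative:

#include <iostream>
#include <set>
#include <string>

struct World {
    std::set<std::string> states;
    std::set<std::string> objects;
};

int main() {
    World real;
    real.objects = {"CHAIR", "TRAPDOOR"};
    real.states = {"closed(TRAPDOOR)"};

    World hypo = real;            // the planner works on a value copy
    hypo.objects.insert("ROPE*"); // temporary test object, planning only
    hypo.states.erase("closed(TRAPDOOR)");
    hypo.states.insert("open(TRAPDOOR)");

    // The real world is untouched until the plan is actually executed.
    std::cout << real.states.count("open(TRAPDOOR)") << "\n"; // prints 0
    std::cout << hypo.states.count("open(TRAPDOOR)") << "\n"; // prints 1
}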



quote: Original post by Diodor
Then there's the problem of creating objects: if creating an object is allowed, shouldn't the NPC be able to think up a plan involving the creation and use of an object? This would mean the unifying process would be allowed not only to change world states (not the real world states, but a test copy), but also to create temporary test objects for planning purposes only and send them messages just like the real objects. Boy, I just can't wait to get back to coding.


This is classical planning based on an agent's own internal modeling and beliefs about the world. I would call it hypothetical world modeling.

Diodor, why don't you email me and I can unload a whole boatload of theory, ideas, and implementation details on you, plus info on stuff related to constraints, knowledge, state modeling, etc. I could talk about it all here, but it would be a little off-topic and get buried in a lot of other posts.




Diodor, as an example, one of the things I would like to discuss is using resolution for consequence finding in a graph that is automatically partitioned into microtheories to make the inference process very efficient. If this can be done (and there are articles which describe the implementation), then you have a very robust knowledge base of the world state, with truth maintenance.

(It's not as complex as it sounds; everything can be explained.)
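For a taste of the kernel involved, the resolution rule itself is tiny, whatever partitioning and indexing machinery surrounds it: from (A or B) and (not A or C), derive (B or C). A minimal propositional version, with clauses as sets of literals and a leading '!' marking negation (the representation is invented for this sketch):

#include <iostream>
#include <set>
#include <string>

using Clause = std::set<std::string>;

std::string negated(const std::string& lit) {
    return lit[0] == '!' ? lit.substr(1) : "!" + lit;
}

// Resolve c1 and c2 on the first complementary literal pair found,
// producing the resolvent in out.
bool resolve(const Clause& c1, const Clause& c2, Clause& out) {
    for (const auto& lit : c1) {
        if (c2.count(negated(lit))) {
            out = c1;
            out.erase(lit);
            Clause rest = c2;
            rest.erase(negated(lit));
            out.insert(rest.begin(), rest.end());
            return true;
        }
    }
    return false;
}

int main() {
    Clause r;
    if (resolve({"!locked(DOOR)", "openable(DOOR)"},
                {"locked(DOOR)", "guarded(DOOR)"}, r))
        for (const auto& lit : r) std::cout << lit << " "; // guarded(DOOR) openable(DOOR)
    std::cout << "\n";
}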

My understanding of this system is not complete, so I could be wrong here, but anyway...

If the solution to the problem is a large number of steps (i.e., goal completion will not happen at the next step), and there are many possible actions (e.g., a junk room with 20-odd bits and pieces in it), how does the agent know that a particular action will be part of the solution?

"Game tree expansion" was suggest early on in the thread, which would provide an answer to the question, but such a tree is exponentially big, so a 10-step problem which has 10 actions at every step would require a tree with 10^10 nodes.

What seems more feasible is developing heuristics. The heuristics are possible subgoals, which in turn have their own heuristics. The heuristics can be assigned to a group of objects which fulfil a certain condition, i.e., they're assigned to a parent element on the semantic net.

Example:

(1) When trying to unlock a door, good things to do are:
- find key

(2) When trying to find something small, good things to do are:
- light room
- open containers

Now, if we had

(3) When trying to light a room, good things to do are:
- flick lightswitch.

This could be an annotation on the lightswitch, which shows us where the two heuristics are linked.

In a sense, we can have the annotation as follows. The number in brackets is the chance of success (sort of), or at least a factor of how much it will help that goal.

lightswitch
action: flick
- room is lit (100%)
- a small object is found (80%)
- a key is found (30%)
- door is opened (20%)

This chance-of-success factor is what will aid us in choosing the action. We simply filter all objects' action lists by the goal in question and pick the one with the highest chance, if it isn't already taken.
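A quick sketch of that filtering step, reusing the lightswitch numbers above; the Advertised structure and bestActionFor helper are invented for illustration:

#include <iostream>
#include <map>
#include <string>
#include <vector>

struct Advertised {
    std::string object, action;
    std::map<std::string, double> helps; // goal -> chance it helps (0..1)
};

// Pick the advertised action with the highest chance of helping the goal.
std::string bestActionFor(const std::string& goal,
                          const std::vector<Advertised>& all) {
    std::string best;
    double bestScore = 0.0;
    for (const auto& a : all) {
        auto it = a.helps.find(goal);
        if (it != a.helps.end() && it->second > bestScore) {
            bestScore = it->second;
            best = a.object + ": " + a.action;
        }
    }
    return best;
}

int main() {
    std::vector<Advertised> ads = {
        {"lightswitch", "flick",
         {{"room is lit", 1.0}, {"a small object is found", 0.8},
          {"a key is found", 0.3}, {"door is opened", 0.2}}},
    };
    std::cout << bestActionFor("a key is found", ads) << "\n"; // lightswitch: flick
}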

Another point to consider: how are heuristics created? Do we encode them manually, or can they be learned from past experience?

I don't know if people have been through this already, but it seems like an interesting point to consider.

