Smarts into the world?
I believe it was Will Wright who said in an article that in "The Sims" they put the smarts in the world and not in the characters, but what did he mean by that?
The article stated that instead of having the characters test each object in the vicinity they can interact with against their needs, the objects instead "broadcast" the needs they could satisfy to the character. It was then up to the character to evaluate whether they had any needs to fulfill and whether to interact with the object or not.
What I'm wondering is: what was the gain of objects broadcasting the needs they could satisfy, versus the character polling each object in its vicinity? I'd like your views on this, because it sounds very interesting!
I also remember (although I could be wrong) that "Assassin's Creed" used a similar technique with Altaïr's climbing ability. That is, when climbing up a building, the places Altaïr could put his hands and feet would broadcast their position to him, instead of him searching for them.
I'm asking because I'm beginning to build a messaging system and it would be interesting to know your views on this.
I'm not certain here, but imagine this if you will. It's been 3 months and your message system is well under construction with 30 states - Climbable being one of them. It seems logical to me (not that I'm very intelligent) that, instead of testing every object around you for which of the 30 states could possibly apply to it, you allow the objects to broadcast which of the states apply. Since you're going to be checking each object anyway, why waste 29 tests on an object that is only climbable? It can alert the logic to its single state.
I don't know if this is why it's used, but it seems like a good idea - to me anyway.
-Xy
Edit - If you consider that there could be upwards of 250 objects in any scene, there could be (in theory) an overhead of 29 * 250 = 7,250 tests removed from your update logic (if all objects retained only one state), freeing up time for other logic.
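To make that contrast concrete, here is a minimal C++ sketch of what Xy is describing. All of the names (Interaction, SmartObject, gather, and so on) are invented for illustration and aren't taken from any actual engine: the polling style runs every test against every object, while the broadcast style lets each object carry a precomputed word saying which states apply to it.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical set of ~30 interaction states, one bit each.
enum Interaction : std::uint32_t {
    Climbable = 1u << 0,
    Sittable  = 1u << 1,
    Edible    = 1u << 2,
    // ... ~27 more flags
};

// Polling style: the character owns ~30 test functions and runs each one
// against every nearby object, even though most of them can only fail.
struct PolledObject { /* geometry, tags, ... that every test has to inspect */ };
bool testClimbable(const PolledObject&) { /* inspect geometry... */ return false; }
bool testSittable (const PolledObject&) { /* inspect shape...    */ return false; }
// ... 28 more tests, all run per object per query

// Broadcast style: the object carries one precomputed word saying which of
// the 30 states apply to it; the character only reads that word.
struct SmartObject {
    std::uint32_t advertised = 0;   // e.g. just Climbable for a drainpipe
};

std::vector<const SmartObject*>
gather(const std::vector<SmartObject>& scene, std::uint32_t wanted) {
    std::vector<const SmartObject*> hits;
    for (const SmartObject& o : scene)
        if (o.advertised & wanted)    // one bit test instead of ~30 calls
            hits.push_back(&o);
    return hits;
}
```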
I read a few papers on this several years ago, but it's been a while and I can't remember the references, sorry. There are a few computer science papers describing this kind of AI at conferences even before The Sims.
What I think The Sims does is encode all the relevant AI triggers into the code for the object, not the Sims themselves. This includes things like what actions can be done to the object, what needs it satisfies, the animations Sims need to perform to interact with it, etc.
Then the objects in the world can send a message to the Sim about what needs they satisfy or whether it should be paying attention to them. I can't remember if the Sims need to ask the world first, say by sending out an "I am hungry" message, but objects that can satisfy hunger can send out a "use me to satisfy hunger" message, whether it's a plate of leftovers, a fridge, or a phone connection to a pizza delivery service.
This way new objects can be imported into the Sims in expansion packs without any change to the core game. You can add in a new doohickey and it'll have all the AI to use it wrapped up inside.
Plus you also distribute the AI across a bunch of different entities. You can break the problem up into simple stages such as: Sim says to world, "I am hungry"; world says to Sim, "here's some stuff that makes you less hungry"; Sim chooses between objects according to its AI.
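As a rough illustration of those three stages (the real Sims code isn't public, so the types and names below - Advertisement, WorldObject, query, choose - are purely hypothetical), the exchange might look something like this:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// What an object tells the world it can do, and how well.
struct Advertisement {
    std::string need;        // e.g. "hunger"
    std::string objectName;  // "leftovers", "fridge", "pizza phone"
    float       score;       // how much using it would reduce that need
};

struct WorldObject {
    std::string name;
    std::vector<Advertisement> ads;  // authored with the object, not the Sim
};

// Stage 1: the Sim says to the world, "I am hungry".
// Stage 2: the world replies with everything that claims to reduce hunger.
std::vector<Advertisement> query(const std::vector<WorldObject>& world,
                                 const std::string& need) {
    std::vector<Advertisement> replies;
    for (const WorldObject& obj : world)
        for (const Advertisement& ad : obj.ads)
            if (ad.need == need)
                replies.push_back(ad);
    return replies;
}

// Stage 3: the Sim chooses between the replies; here, simply the best score.
const Advertisement* choose(const std::vector<Advertisement>& replies) {
    auto best = std::max_element(replies.begin(), replies.end(),
        [](const Advertisement& a, const Advertisement& b) {
            return a.score < b.score;
        });
    return best == replies.end() ? nullptr : &*best;
}
```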
Here's one reason why it's a good idea:
Let's say there is a very expensive entertainment device in the apartment which the Sim will prefer over all others, except if the Sim has recently used that device, in which case the Sim will be tired of it and select something else.
In order to implement this sort of behaviour in the AI for the Sim, the AI would need to store a list of all objects in the apartment and keep track of the last time each object was used. But the "Apartment" class already has a list of all objects, so we might as well just use that one instead of duplicating the list (and then needing to maintain both lists), and thus the AI gets distributed across objects.
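A small sketch of that idea, with invented names (Apartment, Entertainment, lastUsedTime): the Sim walks the apartment's existing list and lets each object's own "last used" stamp dampen its appeal, instead of keeping a duplicate list of its own.

```cpp
#include <vector>

struct Entertainment {
    float value;         // how appealing the device normally is
    float lastUsedTime;  // stamped by the object itself when a Sim finishes with it
};

struct Apartment {
    std::vector<Entertainment> devices;  // the list the apartment already keeps
};

// The Sim doesn't duplicate the apartment's list; it walks the existing one and
// lets each object's own "last used" stamp dampen the appeal for a while.
const Entertainment* pick(const Apartment& home, float now, float boredSpan) {
    const Entertainment* best = nullptr;
    float bestScore = 0.0f;
    for (const Entertainment& d : home.devices) {
        float sinceUse = now - d.lastUsedTime;
        float penalty  = (sinceUse < boredSpan) ? 0.5f : 1.0f;  // tired of it recently
        float score    = d.value * penalty;
        if (!best || score > bestScore) {
            bestScore = score;
            best = &d;
        }
    }
    return best;
}
```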
The "technnique" is very interesting; do anybody know any other games that use the same approach
And what this kind of "technique" (if it can be called a technique) would be called? I'd like to search for some papers on it.
Quote: Original post by Bangladesh
I believe it was Will Wright who said in an article that in "The Sims" they put the smarts in the world and not in the characters, but what did he mean by that?
The article stated that instead of having the characters test each object in the vicinity they can interact with against their needs, the objects instead "broadcast" the needs they could satisfy to the character. It was then up to the character to evaluate whether they had any needs to fulfill and whether to interact with the object or not.
It's actually a simple technique, though it quickly becomes unusable in more detailed environments. The Sims had just a small set of equipment with rigidly defined functionality, and each piece of equipment in a node radiates its presence. You might see this as similar to listening to irritating ads on TV: you have an increasing desire to switch the TV to a different program.
However, when developers are creating a large world with a detailed environment this technique rapidly loses its usability, as there are too many important items. In addition, in a detailed world it's often desirable to have a lot of the environment defined implicitly (with respect to AI), so the queue will not clog itself and memory will not be blown out. Though the programmer might apply advanced filtering techniques, there is some cut-off point where a quick search of the environment works better than a pusher model with filtering.
Quote: I also remember (although I could be wrong) that "Assassin's Creed" used a similar technique with Altaïr's climbing ability. That is, when climbing up a building, the places Altaïr could put his hands and feet would broadcast their position to him, instead of him searching for them.
That game AFAIK used a control point system. That is, the building designers defined special points on the building, and the building presented the set of climbable points to the character model.
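Purely as a guess at how such a control point system could look (the Building and presentClimbPoints names below are made up, not from the actual game): the designers' hand/foot holds live on the building, and the building hands the character only the points within reach.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Designers mark the hand/foot holds on the building itself.
struct Building {
    std::vector<Vec3> climbPoints;
};

float dist(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// The character never scans raw geometry; the building "presents" the subset
// of its authored points that are currently within reach of a hand.
std::vector<Vec3> presentClimbPoints(const Building& b,
                                     const Vec3& hand, float reach) {
    std::vector<Vec3> within;
    for (const Vec3& p : b.climbPoints)
        if (dist(p, hand) <= reach)
            within.push_back(p);
    return within;
}
```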
More importantly, the "smarts in the world" model was about the fact that all the behavior and animation information was contained in the object. That way, the object said, "if you use me, this is what you look like". That is why they have cranked out expansions like crazy... the AI of the Sims themselves didn't change at all. Create an object graphically, embed the instructions on how to use it, and drop it into the world. The Sim automatically knows not only what need it satisfies but has all the animation information, etc. available to him.
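For example, a data-driven smart object definition could carry everything the Sim needs, so an expansion-pack object is just a new data entry. This is only a sketch with invented names (SmartObjectDef, useAnimation), not the actual Sims data format:

```cpp
#include <string>
#include <vector>

// Everything the Sim needs in order to use the object ships with the object:
// which need it touches, by how much, and which animation the Sim plays.
struct SmartObjectDef {
    std::string name;
    std::string needSatisfied;  // "fun", "hunger", "energy", ...
    float       amount;
    std::string useAnimation;   // clip authored alongside the object
};

// Adding an expansion-pack object is pure data; the Sim code below never changes.
std::vector<SmartObjectDef> catalogue = {
    {"espresso machine", "energy", 20.0f, "anim_drink_espresso"},
    {"pinball table",    "fun",    35.0f, "anim_play_pinball"},
};

// The Sim side stays generic: find something for a need, then play its animation.
const SmartObjectDef* findFor(const std::string& need) {
    for (const SmartObjectDef& def : catalogue)
        if (def.needSatisfied == need)
            return &def;
    return nullptr;
}
```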
Here's more on it at AIGameDev.
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"
Quote: Original post by Bangladesh
I believe it was Will Wright who said in an article that in "The Sims" they put the smarts in the world and not in the characters, but what did he mean by that?
The article stated that instead of having the characters test each object in the vicinity they can interact with against their needs, the objects instead "broadcast" the needs they could satisfy to the character. It was then up to the character to evaluate whether they had any needs to fulfill and whether to interact with the object or not.
What I'm wondering is: what was the gain of objects broadcasting the needs they could satisfy, versus the character polling each object in its vicinity? I'd like your views on this, because it sounds very interesting!
I also remember (although I could be wrong) that "Assassin's Creed" used a similar technique with Altaïr's climbing ability. That is, when climbing up a building, the places Altaïr could put his hands and feet would broadcast their position to him, instead of him searching for them.
I'm asking because I'm beginning to build a messaging system and it would be interesting to know your views on this.
In effect, the objects were self-/pre-classified as to what use they are to the Sim. Sims only had a small selection of motivations and actions that could be applied to objects.
Natural AI would have the Sim interpreting its world from knowledge it already has (or previously determines). Patterns in sensory input are recognized and used to classify an object, and from that the value/uses are known.
For The Sims' game mechanism they could not predetermine every recognition pattern of all future viewed objects. Instead they 'cheated' and had the objects themselves tell the Sim what they were good for (and only for the standard behaviors the Sims were programmed with). It doesn't matter what the object looks like, what it's called, or what its ID is; the Sim gets all the useful info about the object via only the simplest cognitive function (that of listening to object messages from within a limited area).
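A small sketch of that "simplest cognitive function", with hypothetical names (Advertiser, listen): the Sim never inspects what an object is, it just gathers the uses advertised by objects inside a limited radius.

```cpp
#include <string>
#include <vector>

struct Vec2 { float x, y; };

// The object classifies itself; the Sim never inspects what it "is".
struct Advertiser {
    Vec2 position;
    std::vector<std::string> goodFor;  // e.g. {"hunger", "fun"}
};

// The Sim's only "cognition": collect messages from objects inside a limited
// area. No pattern recognition, no knowledge of object names, meshes, or IDs.
std::vector<std::string> listen(const std::vector<Advertiser>& world,
                                const Vec2& simPos, float radius) {
    std::vector<std::string> heard;
    for (const Advertiser& a : world) {
        float dx = a.position.x - simPos.x;
        float dy = a.position.y - simPos.y;
        if (dx * dx + dy * dy <= radius * radius)
            for (const std::string& use : a.goodFor)
                heard.push_back(use);
    }
    return heard;
}
```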
--------------------------------------------Ratings are Opinion, not Fact
This topic is closed to new replies.