My responses to Bluefirehawk’s comments lead naturally to a slightly more in-depth discussion of how agents work, so I’ve left them till last.
you need to consider the speed and the potential combinatorial explosion
Quite true. That's why I only plan to simulate a few dozen agents at any level of detail, and as I work on the implementation I hope to find a way for each agent to choose only relevant beliefs to process when making decisions. Another point is that agents will be interacting quite infrequently, so they'll be able to spend many millions of operations on each individual decision. I think I can achieve an interesting level of emergent behaviour within that constraint.
The rest could be broken up by faction, e.g. which faction is the individual's personality closest to, simulate them using that faction's motivations. Also do you intend to only simulate the important people? if you want to simulate everybody... I'm afraid the unwashed masses would need to be simulated in a more "down is down" manner.
The faction idea is more or less what I'm planning on doing. Where possible I'll avoid simulating individual NPCs at all, instead trying to work out what sort of consensus a faction would come to. Only those characters who are important enough in their own right will get dedicated agents assigned to them. I'm also considering using a sort of "scaled-back" agent (with few beliefs and goals) to stand in for individual unimportant NPCs the player interacts with, such as merchants, artisans - everyday people on the streets. Such an agent would only need to be used for as long as the player was interacting with it, which in many cases would only be a few seconds.
What are the 7 actions that define your game? Cut it down brutally. For example, forget up/down/left/right, focus on the nuts and bolts. Perhaps "travel to location", "give tribute", "demand tribute", "negotiate", "agree", "disagree", "trade". Not great examples, but that's your job. It's not essential to be as minimal as possible, but until you do that you won't really know what the core of your game is.
Other than "travel to location" and possibly "take / give item", the important actions will all be forms of "give information", whether true or false. That information might be "I would like X", i.e.a request, where X could itself be information, an item or some other form of assistance; it could be "A has asked me to tell you Y", i.e. passing on a message; or it could be "I will (not) do this if you do that", i.e. an agreement, a refusal or a threat.
what I meant is to define all the types of events/systems that exist in the game
Fair enough - thanks for clearing that up. The truth is, other than general world-building, I’ve spent most of the time thinking about how the decision engine for the agents would work, at the expense of other aspects of the game - so I don’t have a clearly defined list of gameplay features yet (which is why I was a bit vague about trade, for example). I recognize that this is something I need to sort out before I can begin developing the game proper, but so far my efforts have gone towards prototyping the decision engine itself and seeing how plausible my ideas for it are.
IMO the player will buy into it if it's fun for a lot longer than if it's just trying to be realistic.
Yeah, you’re right that it won’t make any difference to 99% of players. To be honest, this was more a matter of self-satisfaction because I came at the whole thing from a world-building point of view, and I like things to be coherent.
I'm not sure how realistic cities would enhance this, and it could eat into valuable processing time.
You make a fair point. I’ll try to limit “realism-enhancing” features to the aesthetic side of things, at least until I have an idea of how more important things will use up resources. A realistic early Bronze Age city doesn't actually need to be that big, so we'll see.
From my point of view, it looks as if having a select few types of NPCs that are dynamic agents […]
I’m completely with you on this (see above comments on factions).
Interesting! Does the player have an inventory for carrying items then if they need to barter? And if so, why is item collecting/trading a necessarily limited part of the game?
I’m embarrassed to admit it, but I really haven’t thought this side of things through in enough detail to give you a good answer. I’m not saying collecting / trading is a novelty feature which will only happen once or twice, but I don’t feel like it should be
too big a part of gameplay because it has the potential to change the flavour of the game quite a lot, and doesn’t feel like something a diplomat would spend
most of his time doing. I’m being a bit vague and wordy, but it’s hard to convey ideas about “amount” when I don’t have that clear an idea myself yet.
Sounds a lot like utility-based agents; if you haven't already, investigate more on those: http://en.wikipedia....telligent_agent
Yes, that's the basic idea.
I'd had a look at your RPG idea already - it looks interesting, but I can't really think of any input at this stage beyond what people have already said. Good luck with it - I'll keep an eye on your thread, and post if I think of anything useful.
Now for Bluefirehawk’s comments - sorry, this is going to get a bit lengthy.
Sorry if I have sounded a bit aggressive in my last post, this wasn't my intention.
Not at all.
In your previous posts you wrote about belief differently
Sorry for the confusion - the level of belief that a particular outcome
will occur is based on the levels of belief that various states
are currently true. At this point I should give a more in-depth outline of how each agent works. It still won't be a complete model, but hopefully it will clarify some issues (please note that I'm not particularly familiar with object-oriented programming and am not using any terminology in a formal sense):
The agent has three main categories of object: beliefs (a percentage assigned to each possible
current state), goals (a desirability assigned to each potential
future state) and planned actions (to
change the state in the direction of something more desirable). When the agent witnesses an event or - more often - communicates with another agent, it updates those three types of object with each incoming piece of information. Its ability to update its own beliefs, goals and plans is based on its ability to emulate other agents' beliefs, goals and plans, which in turn is based on how accurate its beliefs about those other agents are. In effect, the agent will perform lots and lots of cost-benefit calculations, mostly on behalf of other agents - or rather, on what it believes to be the states of other agents.
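Putting that into rough code, just to pin the idea down (again, everything here is a placeholder sketch rather than an actual design):

```python
class Belief:
    """The agent's estimate that a particular current state holds."""
    def __init__(self, state, probability):
        self.state = state              # a possible current state, which can include
                                        # other agents' beliefs, goals and plans
        self.probability = probability  # the "percentage" mentioned above (0.0 - 1.0)

class Goal:
    """The desirability the agent assigns to a potential future state."""
    def __init__(self, future_state, desirability):
        self.future_state = future_state
        self.desirability = desirability

class PlannedAction:
    """An action intended to move the state towards something more desirable."""
    def __init__(self, action, expected_gain):
        self.action = action
        self.expected_gain = expected_gain  # rough cost-benefit estimate

class Agent:
    def __init__(self):
        self.beliefs = []   # updated whenever the agent witnesses an event
        self.goals = []     # or communicates with another agent
        self.plans = []
```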
Say agent A is told something by agent B. A then has to do the following:
1. Update any pre-existing beliefs regarding the subject B is talking about, and - if appropriate - create new belief objects pertaining to the new information; simultaneously, A needs to update its beliefs about all the agents which might have played a part in the message eventually arriving via B, including B itself. To do this, A will make use of its pre-existing beliefs: for example, how likely it thinks B would be a priori to lie about this particular thing, how likely
another agent would be to lie to B about it and for B then to believe it and pass the message on - these things in turn are based on what A thinks various agents, including B, might stand to gain or lose by lying, i.e. what it thinks their goals are.
In practice, what A will do is perform Bayesian inference: it considers each possible
current state (including other agents' beliefs, goals etc.) in turn, imagines that state was true and works out what the other agents would be most likely to have done in those circumstances; then it decides how closely each of the hypothetical outcomes matches up with what it's observed and with its prior beliefs to come up with new belief estimates for the relevant states.
2. Update its goals. The desirability that A assigns to most potential
future states will depend on what events it thinks will follow on from each of those states. That in turn depends on how it thinks other agents will act, which is determined by what it believes their current beliefs and goals to be; so once A has updated its beliefs about those things in stage 1, it needs to imagine what the other agents would be most likely to do in each of the hypothetical future scenarios, just as in stage 1. Then it can assign updated desirability levels to the future states based on updated estimates of the likely outcome of each state.
3. Update its plans. Having already worked out the desirability of the various possible future states in stage 2, A can now restrict itself to looking at those which it can immediately move to via its own actions (including talking). It can pretty much go ahead and pick the one with the highest desirability, but with the caveat that time is also a limited resource - the player might not spend hours travelling from one end of the kingdom to the other, but the characters still do. Ideally, agents' locations and the time it takes to travel between them should be part of the state used when calculating desirability in stage 2.
I'm currently at the stage of working out how all of this will work at a lower level.
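To give a flavour of what that lower level might look like, here's a minimal sketch of stage 1 as a Bayesian update and stage 3 as a straightforward "pick the best reachable state" choice. The function names (likelihood_of, desirability_of and so on) are placeholders; the expensive part - emulating what the other agents would most likely have done if each candidate state were true - is hidden inside them.

```python
def update_beliefs(observation, hypotheses, likelihood_of):
    """Stage 1: Bayesian update over candidate current states.

    observation   -- what A has just witnessed, e.g. B's message
    hypotheses    -- list of (state, prior) pairs; in practice these would be
                     drawn from A's existing belief objects
    likelihood_of -- function(observation, state) giving the probability that
                     the observation would have occurred if that state were true
    """
    posteriors = {state: prior * likelihood_of(observation, state)
                  for state, prior in hypotheses}
    total = sum(posteriors.values())
    if total == 0:
        # the observation was judged impossible under every hypothesis;
        # fall back to the priors rather than divide by zero
        return dict(hypotheses)
    return {state: p / total for state, p in posteriors.items()}

def choose_action(candidate_actions, desirability_of, time_cost_of, time_weight=0.1):
    """Stage 3: pick the action leading to the most desirable immediately
    reachable state, with time treated as a limited resource
    (time_weight is an invented tuning knob)."""
    return max(candidate_actions,
               key=lambda a: desirability_of(a) - time_weight * time_cost_of(a))
```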
Agent A goes to agent B and wants to know if he can trust agent B. […] For now you can also remove the option that B is trying to double cross A.
The thing is, in order for A to decide whether or not B is lying there has to be a possibility that he is. A needs to consider the possible states that could motivate B to lie, the possible states that could motivate him to tell the truth and then work out, on the basis of other prior beliefs, which one he thinks is actually happening. The situation is in a sense irreducible: I might be able to program in a simple scenario, with few possible states and correspondingly simple sets of beliefs and goals, but the decision engine which analyses those states will still have to have full functionality.
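As a toy example of what even a stripped-down scenario involves, reusing the update_beliefs() sketch from above (the numbers are invented purely for illustration): A weighs a "B is lying" hypothesis against a "B is telling the truth" one, with the priors and likelihoods coming from what A believes B stands to gain or lose.

```python
# A's priors, based on what it thinks B stands to gain by lying about this
hypotheses = [
    ("B is telling the truth", 0.7),
    ("B is lying",             0.3),
]

# How plausible is B's actual message under each hypothesis? If B were lying,
# A reckons B would probably have said something more self-serving than this.
def likelihood_of(observation, state):
    return {"B is telling the truth": 0.8,
            "B is lying":             0.2}[state]

print(update_beliefs("B's message", hypotheses, likelihood_of))
# -> roughly {'B is telling the truth': 0.90, 'B is lying': 0.10}
```

The arithmetic itself is trivial; the hard part is that the 0.7s and 0.8s have to come from emulating B's beliefs and goals, which is why the decision engine needs its full functionality even in a simple scenario like this.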
On the Dialog thing: you don't seem to be the guy who is happy with the easiest solution. I presume this is a time eater; you can put as much time into it as you want, it is never finished and can always be better. Keep that in mind when you really want to implement this.
You’re certainly right about that. I’m an extreme perfectionist; it’s a habit I’m trying to break, but it’s not easy. I think I’ll be satisfied with anything that works, but I’ll be happier the more I can add to it.