NPC modeling
I'm curious as to whether there are any existing resources (ideally C++, but I'm flexible) for doing something like the following, or if anybody has a much better way to do this sort of thing than I'm describing.
I'd like an NPC to use stereotypes and previous information to interpret new data and choose a response. Suppose the NPC is a merchant, who is supposed to haggle over the price of some good he sells. Consider two cases, first one where the player's initial offer is high (high enough to make the merchant consider it a profitable transaction), the second where the initial offer is low.
First, high offer. There are lots of reasons someone might make a high offer. He might just be in a hurry. He might be generous. He might be rich. He might mistakenly think the item is worth much more than it is. The item may be worth more than the merchant thought. The potential buyer may be attempting some sort of con which will be facilitated by ingratiating himself with the merchant.
How likely the merchant thinks each possibility is will depend on other things the merchant may believe. If the potential buyer is expensively dressed, the rich theory seems more likely. If the merchant doesn't actually know much about items like this, the theory that it's worth more than he thought becomes more likely. If the potential buyer is a stranger, the con hypothesis looks more likely. If he's a revered local hero, the generous theory looks more likely. Any theory looks better if all the others look bad.
Depending on which theory the merchant accepts, he may react in different ways, and his beliefs may change. If the merchant decides the potential buyer is being generous, he'll likely develop a more favorable opinion of the potential buyer, and likely make a counter-offer that isn't much higher (or just accept the offer, depending on local customs concerning how long you're supposed to haggle). If the merchant decides the potential buyer is trying to con him, he'll both develop a more hostile attitude toward the potential buyer (making the con theory more likely in future interactions) and make an inflated counter-offer.
I hope it's obvious how the converse would proceed. A low offer from somebody he likes might be accepted (it may be an emergency, after all), but will lower his opinion of the person making the offer, making it harder for that person to get more favors in the future. A low offer from a stranger reinforces the belief that strangers in general are obnoxious tightwads, and certainly this one in particular is, and thus makes it more likely that if the person later makes a high offer for some other item, the merchant will guess it's a con or decide the item must be more valuable than he thought. And so forth.
Anybody do AIs that model anything like this?
I don't think you are going to find a library or a single technique that does all of that.
The part about assigning probabilities to several possibilities can be implemented using a Bayesian approach. Deciding what would be the best thing to do as a response can be implemented by expected utility maximization. This is a kind of general solution to AI and of course there are many many details to be filled, but this gives you a sort of general roadmap.
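To make that a bit more concrete, here is a toy sketch of that combination in C++; every hypothesis name and every number in it (priors, likelihoods, payoffs) is invented purely for illustration:

#include <iostream>
#include <map>
#include <string>

int main() {
    // Prior belief in each explanation for a high offer, and the likelihood of
    // the observed evidence (say, "expensively dressed") under each one.
    // All of these numbers are made up for the sake of the example.
    std::map<std::string, double> prior = {
        {"rich", 0.2}, {"generous", 0.2}, {"con", 0.2}, {"mistaken", 0.4}};
    std::map<std::string, double> likelihood = {
        {"rich", 0.8}, {"generous", 0.4}, {"con", 0.5}, {"mistaken", 0.3}};

    // Bayes' rule: posterior is proportional to prior * likelihood, then normalize.
    std::map<std::string, double> posterior;
    double total = 0.0;
    for (const auto& [h, p] : prior) {
        posterior[h] = p * likelihood.at(h);
        total += posterior[h];
    }
    for (auto& [h, p] : posterior) p /= total;

    // Expected utility of two candidate responses under each hypothesis
    // (again, invented payoffs).
    std::map<std::string, double> acceptUtility = {
        {"rich", 10}, {"generous", 8}, {"con", -20}, {"mistaken", 6}};
    std::map<std::string, double> counterUtility = {
        {"rich", 12}, {"generous", 5}, {"con", -5}, {"mistaken", 7}};

    double euAccept = 0.0, euCounter = 0.0;
    for (const auto& [h, p] : posterior) {
        euAccept  += p * acceptUtility.at(h);
        euCounter += p * counterUtility.at(h);
    }
    std::cout << (euAccept > euCounter ? "accept the offer" : "make a counter-offer")
              << '\n';
}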
I hope that's enough pointers to get you started.
Quote: Original post by Protagoras
How likely the merchant thinks each possibility is will depend on other things the merchant may believe. If the potential buyer is expensively dressed, the rich theory seems more likely. If the merchant doesn't actually know much about items like this, the theory that it's worth more than he thought becomes more likely. If the potential buyer is a stranger, the con hypothesis looks more likely. If he's a revered local hero, the generous theory looks more likely. Any theory looks better if all the others look bad.
The data here has to be arbitrary - after all, there's obviously no perfect solution to this problem, otherwise it would have revolutionised real-world markets by now. You will never know for sure whether a person is truly rushed or truly rich or whatever, so you have to decide which is most likely, on balance. Given that the data is artificial and specific to your world, you can shape it however you like.
It looks a good candidate for a fuzzy logic solution to me. A few rules like the following might work:
"if offer is high and clothing is expensive, customer is rich"
"if offer is low and clothing is expensive, customer is unpleasant"
"if offer is high and item is unfamiliar, customer is conning us"
...etc...
You combine the outputs of each rule to get values for how much we believe the customer is rich, how much they're trying to con us, etc.
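As a rough sketch of how rules like those could be evaluated (the min-based AND is just one common choice, and all the input values are arbitrary):

#include <algorithm>
#include <iostream>

// Crisp inputs, each already mapped to a degree in [0, 1].
struct Observation {
    double offerHigh;      // how far the offer is above the expected price
    double clothingFancy;  // how expensively dressed the customer looks
    double itemUnfamiliar; // how unsure the merchant is of the item's value
};

// Standard fuzzy AND (min); other t-norms, such as product, work too.
double fuzzyAnd(double a, double b) { return std::min(a, b); }

int main() {
    Observation obs{0.9, 0.7, 0.2};  // invented example values

    // "if offer is high and clothing is expensive, customer is rich"
    double rich = fuzzyAnd(obs.offerHigh, obs.clothingFancy);
    // "if offer is low and clothing is expensive, customer is unpleasant"
    double unpleasant = fuzzyAnd(1.0 - obs.offerHigh, obs.clothingFancy);
    // "if offer is high and item is unfamiliar, customer is conning us"
    double conning = fuzzyAnd(obs.offerHigh, obs.itemUnfamiliar);

    std::cout << "rich=" << rich << " unpleasant=" << unpleasant
              << " conning=" << conning << '\n';
}

You would then feed those belief values into whatever logic decides the merchant's response.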
Quote: Depending on which theory the merchant accepts, he may react in different ways, and his beliefs may change. If the merchant decides the potential buyer is being generous, he'll likely develop a more favorable opinion of the potential buyer, and likely make a counter-offer that isn't much higher (or just accept the offer, depending on local customs concerning how long you're supposed to haggle). If the merchant decides the potential buyer is trying to con him, he'll both develop a more hostile attitude toward the potential buyer (making the con theory more likely in future interactions) and make an inflated counter-offer.
It's quite easy to generate a disposition value based directly upon which of the situations has been assumed. No need to overcomplicate that; just make it a modifier that you add to or multiply with the 'average' counter-offer.
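Something as small as this would do it (the numbers are placeholders):

// disposition in [-1, 1]: -1 = hostile (suspects a con), +1 = friendly (believes generosity).
// Hostile merchants inflate the counter-offer; friendly ones stay close to the average.
double counterOffer(double averageCounterOffer, double disposition) {
    const double maxMarkup = 0.5;  // arbitrary: up to 50% extra when fully hostile
    double markup = maxMarkup * (1.0 - disposition) / 2.0;  // 0 when friendly, 0.5 when hostile
    return averageCounterOffer * (1.0 + markup);
}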
Quote: A low offer from a stranger reinforces the belief that strangers in general are obnoxious tightwads, and certainly this one in particular is, and thus makes it more likely that if the person later makes a high offer for some other item, the merchant will guess it's a con or decide the item must be more valuable than he thought. And so forth.
I'd warn against systems that generalise like that, because firstly it's not that easy to do properly, and secondly you probably don't want positive feedback loops in your game, as they tend to increase to infinity.
Quote: Anybody do AIs that model anything like this?
Possibly, but you're asking for aspects of quite a few things to be combined (eg. game theory, pattern recognition, belief networks) when even doing a single one well is often beyond most game developers. You're certainly not going to find any sort of canned solution.
You should check out articles/tutorials on Bayesian networks. I'm not a specialist on those, but I think they would fit your goals.
I was not looking for canned solutions, more someplace with some things people have tried, forums where people talk about what's worked better or worse than expected (there's a bit of that around here, but I was wondering if there were any other forums where discussion of AI conversations was a focus; here pathing seems to dominate the threads), and the like. Certainly Bayesian approaches are what I'm thinking of. I haven't quite been able to track down a fuzzy tutorial (fuzzy logic seems straightforward to implement just using ordinary math functions, but again I'm curious if there are any unexpected difficulties one ought to be on the lookout for, or any clever tricks others have discovered), though certainly fuzzy logic seems highly appropriate.
I am curious about your attitude toward positive feedback loops, Kylotan. It seems to me that the mathematical means of preventing them from going to infinity shouldn't be that hard (include a bias against "extreme" results, which increases more sharply than the feedback loop). Is that not as simple in practice as it appears?
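For instance, I was imagining something like a damped update, where the step size shrinks as the value approaches an extreme (a rough sketch, with an arbitrary [-1, 1] range):

#include <algorithm>
#include <cmath>

// A bounded reinforcement update: each new piece of evidence nudges the
// attitude toward +1 or -1, but the step shrinks as the attitude nears an
// extreme, so it can never run off to infinity.
double reinforce(double attitude, double delta) {
    double updated = attitude + delta * (1.0 - std::fabs(attitude));
    return std::max(-1.0, std::min(1.0, updated));
}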
Quote: Original post by Protagoras
I was not looking for canned solutions, more someplace with some things people have tried
The term 'existing resources (ideally C++)' on here usually implies libraries or the like, and for such a specific question I doubt that exists. Some people aren't aware that the question they ask is quite esoteric and/or specific, so I apologise if you are not one of these people!
Quote: Certainly Bayesian approaches are what I'm thinking of.
Personally I think Bayesian methods could solve part of your problem but not all of it. One reason why junk mail can defeat Bayesian filters quite a lot of the time is because the initial recognition process is too rigid. That can be where something like fuzzy logic could come in (not that fuzzy logic is an optimal solution at all - just an easy one to implement and particularly suitable to arbitrary tweaking).
Quote: I haven't quite been able to track down a fuzzy tutorial (fuzzy logic seems straightforward to implement just using ordinary math functions, but again I'm curious if there are any unexpected difficulties one ought to be on the lookout for, or any clever tricks others have discovered), though certainly fuzzy logic seems highly appropriate.
The actual logical operations are easy to implement, though it's worth noting that there are different models people use, and that different defuzzification algorithms may give you better or worse results.
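For example, one of the simplest schemes is a weighted average of the rule outputs (sometimes treated as a simplified centroid); a sketch, with the output levels left up to you:

#include <cstddef>
#include <vector>

// Weighted-average defuzzification: each rule output has a representative
// crisp level (e.g. a counter-offer markup) and a firing strength; the crisp
// result is the strength-weighted average of the levels.
double defuzzify(const std::vector<double>& levels,
                 const std::vector<double>& strengths) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < levels.size() && i < strengths.size(); ++i) {
        num += levels[i] * strengths[i];
        den += strengths[i];
    }
    return den > 0.0 ? num / den : 0.0;
}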
Quote: I am curious about your attitude toward positive feedback loops, Kylotan. It seems to me that the mathematical means of preventing them from going to infinity shouldn't be that hard (include a bias against "extreme" results, which increases more sharply than the feedback loop). Is that not as simple in practice as it appears?
There will come a point where that bias balances with the results, and you'll just converge on that extreme instead. Typically this means all results will eventually tend towards one of the two extremes, and that is rarely what you want. Sadly I can't explain this much better, but it's the kind of problem you'd face in control systems all the time.
You could try assigning different weights to the different options and then at the end use those values to decide an action...
For example:

// cc = customer clothes: nice = 10, normal = 0, torn = -10
// ck = customer knowledge: high = 10, normal = 0, low = -10
// cr = customer reputation: good = 10, standard = 0, poor = -10
int cc = rateClothes(customer);      // assumed helpers that each
int ck = rateKnowledge(customer);    // return -10, 0 or 10 for
int cr = rateReputation(customer);   // the customer in question

// Then at the end of the checks, pick a reaction from the combination.
// (You can't switch on three values at once in C++, so chain the tests.)
if (cc == 10 && ck == 10 && cr == 10)
{
    // do this reaction
}
else if (cc == 10 && ck == 0 && cr == 10)
{
    // do this reaction
}
else if (cc == 0 && ck == 10 && cr == 10)
{
    // do this reaction
}
// ...etc...
else
{
    // default reaction
}

The details are rough, but the general idea should work.
Well, I do want to have histories of past interactions be a factor. One thing that may help with that is that in the course of my current efforts I've devised a simple weighted value class (basically just two floats, the value and the weight, plus the functions for the class). It has an updater that takes either a float or a weighted value, does a weighted average of the new value with the current value (treating a float input as having weight 1), and gives the result a weight equal to the sum of the two input weights. That seems like a good way to store NPC attitudes: the longer the PC's track record with the NPC, the greater the weight of the NPC's existing attitude, and the harder it is to shift that attitude with new actions, though perhaps truly dramatic actions (successful quests and such) might have much bigger weights than everyday actions.
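A stripped-down sketch of what I mean (details simplified):

// A stripped-down weighted value: a value, a weight, and an updater that
// folds in new observations as a weighted average.
class WeightedValue {
public:
    WeightedValue(float value = 0.0f, float weight = 0.0f)
        : value_(value), weight_(weight) {}

    // A plain float counts as a new observation with weight 1.
    void update(float newValue, float newWeight = 1.0f) {
        float total = weight_ + newWeight;
        if (total > 0.0f)
            value_ = (value_ * weight_ + newValue * newWeight) / total;
        weight_ = total;
    }

    // Folding in another weighted value (e.g. a heavily weighted quest outcome).
    void update(const WeightedValue& other) { update(other.value_, other.weight_); }

    float value() const { return value_; }
    float weight() const { return weight_; }

private:
    float value_;
    float weight_;
};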
Quote: Original post by Protagoras
Well, I do want to have histories of past interactions be a factor. One thing that may help with that is that in the course of my current efforts I've devised a simple weighted value class (basically just two floats, the value and the weight, plus the functions for the class). It has an updater that takes either a float or a weighted value, does a weighted average of the new value with the current value (treating a float input as having weight 1), and gives the result a weight equal to the sum of the two input weights. That seems like a good way to store NPC attitudes: the longer the PC's track record with the NPC, the greater the weight of the NPC's existing attitude, and the harder it is to shift that attitude with new actions, though perhaps truly dramatic actions (successful quests and such) might have much bigger weights than everyday actions.
You have to build a system that saves a history of interaction data/factors and then analyzes this data to generalize reactions for your NPC. You also need contingencies for how to react when there is insufficient data (e.g. when meeting for the first time and there is no 'history' to analyze) or when events happen that make parts of the history irrelevant/invalid (e.g. the new hero who just saved the town is looked upon very differently than the haggard wandering merc he appeared to be before).
A significant problem is how many game factors you will be comparing and how they are judged (each NPC might customize which factors are judged and how). Weights for the different factors are then summed up (historic incidents might have a decreasing influence the older they are), and then a response has to be selected from a spectrum of possibilities (policies?). The reactions themselves might be dependent on other external factors/context (e.g. a shopkeeper in a town under siege reserves scarce goods for people he knows will pay 'under the table', or doesn't offer them at all so he can keep them for himself).
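As a rough sketch of the 'older incidents count for less' part (the half-life and the structure are just placeholders):

#include <cmath>
#include <vector>

// One remembered incident: how strongly it affected the NPC's view and when it happened.
struct Incident {
    double impact;    // positive or negative contribution to the attitude
    double gameTime;  // when it happened, in game-time units
};

// Sum the history with exponential decay: with a half-life of 100 time units,
// an incident 100 units old counts half as much as a fresh one.
double attitudeFromHistory(const std::vector<Incident>& history, double now) {
    const double halfLife = 100.0;  // arbitrary placeholder
    double total = 0.0;
    for (const Incident& inc : history)
        total += inc.impact * std::pow(0.5, (now - inc.gameTime) / halfLife);
    return total;
}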
A good reaction system is not easy to do if you want it to behave reasonably to a wide range of situations.
Beware of overly simple systems, because they can produce unwanted distortions. Your weight system would cause exaggerations for players who visit the shop more frequently.
I certainly agree that one does not want to make the system too simple. However, you also can't make it too complex, or it will be impossible to maintain (not to mention taking lots of processing power if you have lots of NPCs). You are correct that there are potential problems with the weight system if the weights get too big, but rather than storing and evaluating the vast amounts of additional information you suggest, my solution was to have weights degrade a bit over time, especially when they get too big. If the system automatically reins in excessively large weights, then someone who simply does a lot of business with a merchant will build up a fairly heavily weighted, fairly decent relationship, but that relationship will never carry so much weight that a very heavily weighted negative action, like robbing the merchant, fails to drag the merchant's opinion of the character down a long way.
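Concretely, I have something like this in mind (the cap and decay rates are placeholders I'd tune):

// Periodically rein in the weight of a stored attitude: decay it a little each
// update, and pull it back harder once it grows past a soft cap, so no
// relationship ever becomes completely immovable. All numbers are placeholders.
void decayWeight(float& weight) {
    const float softCap = 50.0f;    // beyond this, extra decay kicks in
    const float baseDecay = 0.99f;  // mild everyday decay
    weight *= baseDecay;
    if (weight > softCap)
        weight = softCap + (weight - softCap) * 0.9f;  // pull the excess back faster
}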
Another of my guiding principles is that the goal is to model people, so there are many cases where shortcuts are appropriate, not just to reduce processor load but also because people use shortcuts. We don't remember every detail of every past interaction and calculate our opinion of a person based on that every time we're considering how we think of someone. Still, it would add to the system if some record of past interactions were stored. People seem to store some fairly minimal data and then construct memories as needed via somewhat error-prone inference from the stored data (with further errors introduced because the data is constantly being revised on the basis of plausibility in light of other data). It'd be extremely cool to model that and have NPCs comment on past interactions in conversation, usually but not always describing a past interaction that actually happened. That would add a lot of complexity, but it would probably be worth it to try to build up to that eventually. It could be added on by making attitudes functions from weighted values rather than simple weighted values, and making individual bits of memory data weighted values.