Figuring out economic problems
Here is my thought: I believe that emergent patterns mimicking life-like behaviors can and will be produced from a simple, basic ruleset. Along those lines, I am trying to program an AI based on a basic Maslow hierarchy filtered through a set of variables particular to each agent (lazy-thru-productive, weak-thru-strong, stupid-thru-smart, cowardly-thru-courageous, follower-thru-leader, homebody-thru-adventuresome, favorites: color blue, forests > mountains, fish > meat > fruits > bread, etc.), and then watch the agent 'live' in its virtual world, scratching out a meager existence and interacting with other agents. My biggest problem so far is determining which of multiple possible solutions to pick. I have some background in micro-economics and am trying to incorporate some of that into the decision process. One thought is to rate each possible solution by an estimate of the time it would take to complete, weighted by the different categories of labor involved, and come up with a value that can then be compared against the other possibilities.
It would work like this: Agent A determines that it is hungry, and knows that it can fish in the nearby stream or go to a far tree to pick some fruit, and either method would satisfy its 'hunger' issue. It calculates the distance to the stream and an estimate of the amount of time spent fishing, modified by how much it 'likes' to fish, and then compares that total to the same calculation for the steps involved in picking fruit. OK, this works fine.
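The comparison described above can be sketched as a single 'effective time cost' per option. The function, option names, and numbers below are illustrative assumptions, not from the post:

```python
# Illustrative sketch: each option's effective cost is travel time plus
# work time, with work time discounted by how much the agent 'likes'
# the activity (liking normalized to 0..1, where 0.5 is neutral).

def time_cost(travel_time, work_time, liking):
    """Effective cost of an option; higher liking shrinks the work time."""
    return travel_time + work_time * (1.5 - liking)

def choose(options):
    """Pick the option with the lowest effective cost."""
    return min(options, key=lambda o: time_cost(*o[1]))

options = [
    ("fish",       (2.0, 10.0, 0.9)),   # near stream, slow, but loved
    ("pick fruit", (8.0,  5.0, 0.4)),   # far tree, quick, disliked
]
best = choose(options)   # fishing wins: 2 + 10*0.6 = 8 vs 8 + 5*1.1 = 13.5
```

The (1.5 - liking) discount is just one made-up way to fold preference into the time units; any monotone function would do.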
The problem I have is when economics and interaction with other agents enter the picture. Suppose all agents are willing to sell any object they possess if the price is higher than their cost to produce it, and any agent is willing to buy if the price is less than its own cost to produce it. I am hoping that market prices will naturally arise from such a system, where the seller initially offers cost + 100% and the buyer initially offers cost - 50%, with perhaps a short series of haggling in between until they either agree or decide not to trade. NOW, assuming this 'price system' produces some sort of market prices, how in the world do I get agents to take the possibility of purchase into account when deciding how to satisfy their hunger? It's easy enough to compare 'like' variables that describe the same units, such as time and preference: if fishing takes 10 minutes and berry picking takes 5 minutes, then perhaps the choice to fish only occurs if the agent likes fishing sufficiently more (twice as much?) than it likes picking berries. But what about going to market and buying the fish? I can total up the distance to market, use the current market price for fish, even make sure 'fish' are currently available in the market, but how do I work a money price into the equation?
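The haggling scheme described here (seller opens at cost + 100%, buyer at his own production cost - 50%, a few rounds until agreement or walk-away) could be sketched as below. The halfway-concession rule and the round limit are my own assumptions:

```python
# Sketch of the haggling described above. Each round, both parties
# concede halfway toward their reservation price (their own production
# cost) until the offers cross or the round limit runs out.

def haggle(seller_cost, buyer_cost, rounds=5):
    ask, bid = seller_cost * 2.0, buyer_cost * 0.5
    for _ in range(rounds):
        if ask <= bid:                      # offers crossed: deal
            return (ask + bid) / 2.0
        ask = (ask + seller_cost) / 2.0     # seller concedes toward cost
        bid = (bid + buyer_cost) / 2.0      # buyer concedes toward cost
    return None                             # no deal

price = haggle(seller_cost=10.0, buyer_cost=20.0)   # settles at 15.0
```

If the buyer's make-it-yourself cost is below the seller's cost, there is no mutually beneficial price and the function returns None, which is exactly the 'decide not to trade' outcome.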
Besides the interesting nature of the problem itself, the reason I am pursuing this line of simulation is that I feel the future of MMOs will depend entirely on 3 main things: an AI living world that players can participate in (The Sims, Dwarf Fort, etc.), a dynamic world that can be physically affected by players (Dwarf Fort, Minecraft, etc.), and advanced user interface design (the Wii, virtual eyewear, Xbox Kinect, etc.). Ultima Online at its release WAS going to go down this path, but they got overwhelmed by a myriad of other unforeseen issues with player interaction, which put the kibosh on doing things like: the players decimate the local deer population (the current food source of the bears), so the hungry bears start rampaging closer to town, looking to players as a replacement food source... I was always awe-struck by that sort of living, breathing environment.
Well, any comments or help would be greatly appreciated. Thanks for reading my ramblings!
Be that as it may, you hit on one of the major problems of economic utility... that of comparing unlike units. For example, time vs. hunger vs. money vs. security vs. physical effort, etc. The short answer is that there is no correct answer. Much of it is a balance thing. To make matters worse, much of the "exchange rate" is based on the units you define for concepts not typically quantified -- such as "security". The best bet in that case is to make sure everything you are using is normalized (i.e. 0..1). Then, by using your personality-based modifiers, you can come up with coefficients that are used in the decision equations. For example, for PersonA, Security = 2x money. For PersonB, Security = 2.5x money.
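That normalize-then-weight idea might be sketched like this; the weights and scores are made-up examples:

```python
# Every consideration is scored on 0..1, then combined with per-agent
# personality coefficients ("exchange rates" between unlike units).

def utility(scores, weights):
    """Weighted sum of normalized consideration scores.
    A consideration with no personal weight defaults to 1.0."""
    return sum(v * weights.get(k, 1.0) for k, v in scores.items())

scores   = {"money": 0.6, "security": 0.4}
person_a = {"security": 2.0}    # values security 2x money
person_b = {"security": 2.5}    # values security 2.5x money

ua = utility(scores, person_a)  # 0.6*1.0 + 0.4*2.0 = 1.4
ub = utility(scores, person_b)  # 0.6*1.0 + 0.4*2.5 = 1.6
```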
Anyway, too much football and beer for right now... I'm digging your project, though. Hang in there!
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"
Hooo boy, does someone need a copy of my book. ;-)
Thanks! I found your site (IA) a few weeks ago in my searching around and really appreciated your insights and abilities. I am actually considering buying your book, but I fear it may be above me in a lot of areas... but I guess it can't hurt me any, huh?! You are correct, obviously, about the problem of comparing economic utility - it's a wholly subjective thing, which is why it is and always will be impossible to model human economic systems to any good degree. I guess I will just have to trial-and-error a bit with the (normalized, as you said) variables until I get what I happen to consider 'human-like' behavior in certain situations... it will be a long, drawn-out process, I fear.
I have started out with a few simple world objects for the agents to utilize (tree has: wood, fruit, branches; stream has: water, fish) and a very basic Maslow hierarchy (shelter/food so far), and am just getting into all the sub-tasks that even these few things require... it's going to take a lot of typing to get all these various FSMs in. I'm thinking of building a graphical tool to help with a lot of this.
Considering multi-component multi-stage actions, you might want to look into a planner architecture rather than a ton of FSMs. The economics of the situation now modify the edge weights of your planner topology in order to determine the "best route" to get what you want.
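A minimal sketch of such a planner, with world states as nodes and actions as economically weighted edges (all states, actions, and costs below are invented for illustration):

```python
import heapq

# Dijkstra over an action graph: the cheapest route from the current
# state to the goal state is the agent's plan. Economic factors (market
# prices, travel time, preferences) would feed into the edge costs.

def cheapest_plan(graph, start, goal):
    """graph maps state -> [(cost, action, next_state)]."""
    frontier = [(0.0, start, [])]
    seen = set()
    while frontier:
        cost, state, plan = heapq.heappop(frontier)
        if state == goal:
            return cost, plan
        if state in seen:
            continue
        seen.add(state)
        for step_cost, action, nxt in graph.get(state, []):
            heapq.heappush(frontier, (cost + step_cost, nxt, plan + [action]))
    return None

graph = {
    "hungry":    [(8.0, "go fishing", "has fish"),
                  (3.0, "walk to market", "at market")],
    "at market": [(6.0, "buy fish", "has fish")],
    "has fish":  [(1.0, "eat", "fed")],
}
cost, plan = cheapest_plan(graph, "hungry", "fed")   # fishing route, cost 9.0
```

If the market price of fish dropped (say the "buy fish" edge fell to 4.0), the same search would route through the market instead; that is the sense in which economics modifies the topology's edge weights.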
The book will definitely not be over your head. It's a pretty straight-forward read.
OK, OK, you've got yourself another customer... I will buy the book later today. I hope it covers 'memory', as I have determined that each agent will need to remember various things in order to make decisions faster and better. Each agent already has what I call a 'Task Stack', which allows tasks to be interrupted by more pressing needs (something dangerous comes into the area while fishing, so run away for a bit until safe, then go back to fishing; the wood axe breaks while chopping wood, so go get another from storage, then go back to chopping; etc.). I dread giving these agents a more space-hogging memory, because I want to keep storage to a minimum to allow for lots of agents. Plus, I don't know how to do a memory yet.
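A 'Task Stack' along those lines might be sketched as below, with one addition: each task carries a validity check, so that when the agent pops back to an interrupted task it re-checks whether the task still makes sense. All names here are hypothetical:

```python
# Pressing needs push onto the stack; when they finish (or stop being
# valid) the agent falls back to whatever it was doing before.

class TaskStack:
    def __init__(self):
        self.stack = []

    def push(self, name, still_valid):
        self.stack.append((name, still_valid))

    def current(self):
        """Discard tasks that are no longer valid; return the top task."""
        while self.stack and not self.stack[-1][1]():
            self.stack.pop()
        return self.stack[-1][0] if self.stack else None

wolf_nearby = True
agent = TaskStack()
agent.push("chop wood", lambda: True)          # the long-term task
agent.push("flee wolf", lambda: wolf_nearby)   # pressing interruption

before = agent.current()    # "flee wolf"
wolf_nearby = False
after = agent.current()     # danger gone, back to "chop wood"
```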
One of my goals is to have one large AI routine sophisticated enough that, just by varying a large set of agent-defining variables (lazy, leader, strong, etc.), it can be used for every type of agent in the world, from animals up to more capable, human-like agents. I notice there is some focus in the literature on re-using FSMs for different agents, but usually grander structures get separate routines. This brings up yet another problem I haven't tackled: can an individual agent transcend into a 'master planner' agent? Meaning, once a lowly agent has fulfilled its Maslow tree pretty well and has the proper gumption (leader, etc.), can it start exerting influence over other agents - think town planning/layout, etc.?
Anyways, I appreciate your feedback and look forward to being overwhelmed by something with such a scary title as 'Behavioral Mathematics'!
For example, if someone buys wild rabbits at the farmer's market today for $5 and you expect to catch one per hour of hunting, a $20 restaurant meal is worth 4 hours of rabbit hunting: you might prefer to eat one rabbit at home (only 1 hour of hunting, plus cooking), or not. Whether you need to go hunting before eating or already have $20 in your pocket isn't relevant, unless the unpleasantness of hunting depends on when you do it.
Of course there might be different available jobs; hunting and cooking one rabbit might be less unpleasant than 4 hours of rabbit hunting, but more unpleasant than a 2% quota of a quick $1000 armed robbery.
The market system shouldn't try to find "true" prices, because they don't exist.
Every agent can decide its ask and bid prices based on the other options it has. For example: hunting x rabbits (sold for the needed $20) = hunting 1 rabbit (H) + cooking it (K) + the pleasure of eating better at the restaurant (F). Assuming you have numbers for H, K, and F, you have (x-1)H = F+K, so x = 1 + (F+K)/H, and a rabbit can acceptably be sold for $20/x and above (presumably with some greed-and-haggling heuristic, e.g. asking twice the reserve price at first and trading at the average of the asking and bidding prices).
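The reserve-price arithmetic above, written out as a sketch. H (hunting one rabbit), K (cooking it), and F (the extra pleasure of the restaurant meal) are the post's assumed unpleasantness/pleasure quantities; meal_price is the $20 restaurant meal:

```python
def reserve_price(meal_price, H, K, F):
    """Indifference point: (x - 1) * H = F + K  =>  x = 1 + (F + K) / H,
    so one rabbit can acceptably be sold for meal_price / x or more."""
    x = 1.0 + (F + K) / H
    return meal_price / x

def opening_ask(reserve, greed=2.0):
    """Haggling heuristic from the post: open at a multiple of reserve."""
    return reserve * greed

r = reserve_price(meal_price=20.0, H=1.0, K=0.5, F=0.5)   # x = 2, r = 10.0
```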
Omae Wa Mou Shindeiru
It would work like this: Agent A determines that it is hungry, and knows that it can fish in the nearby stream, or, go to a far tree to pick some fruit, and either method would satisfy its 'hunger' issue. It calculates the distance to the stream, an estimate of amount of time spent fishing modified by how much it 'likes' to fish, and then compares it to the same thing but the steps involved in picking fruit. OK, this works fine.
The problem I have is when economics and interaction with other agents occurs. Suppose all agents are willing to sell any object they possess if the price is higher than their cost to produce. And any agent is willing to buy if the price is less than their costs to produce. I am hoping that market prices will naturally arise from such a system, where the 'seller' initially offers cost + 100%, and the buyers initially offer cost - 50%.... and perhaps a short series of haggling in between until either both agree or decide not to trade.
I would assume Dave has some good ideas around how to implement this in a video game.
However, since you're interested in simulating a complex financial environment, here are some ideas you might find interesting.
In the real world, price dynamics emerge because people agree on prices before anything actually changes hands. In other words, oil is sold before it's even pumped from the ground. When you are told that 'the price of oil is $xxx a barrel', what you're getting is an estimate based on contractual agreements for future delivery. Think of it as an auction for something that might not actually exist.
The general (and oversimplified) idea is this:
- Assume a commodity, R, (oil, gas, fish, fruit) has some fixed cost, Xf, to collect at time T.
- Buyers offer a price, Xb, for commodity R based on what they think the value will be when the commodity is actually available for pickup at T+i.
- Sellers offer a price, Xs, for commodity R based on what they think the value will be when the commodity is available for delivery at T+i.
Here comes the fun bit:
- Xb and Xs are affected by what the buyer and seller perceive as the supply and demand of the commodity. This perception is based on what both buyer and seller think future demand will be.
- The perception of supply and demand is based on what has already been agreed on for prices (a contract).
-- If Company A agrees to sell 10 tonnes of ore in 2 days at $4/kg, the supply in 2 days is set at 10 tonnes, and the demand at $4/kg. The contract is set, and someone is paying.
-- If Company B then agrees to sell 10 tonnes of ore in 2 days at $3.50/kg, the supply in 2 days is now set at 20 tonnes, with a demand of $3.75/kg.
-- The flip side could also be true. Someone might REALLY want ore and offer Company B $4.50/kg; then the supply is 20 tonnes, with a demand of $4.25/kg.
-- Note that the listed demand is in no way a reflection of the cost of a new contract; it's only a reference.
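The contract bookkeeping above might be sketched like this, with perceived supply as the total contracted volume and the quoted "demand" as the volume-weighted average of prices already agreed on (numbers follow the ore example; the class and names are illustrative):

```python
class ContractBook:
    def __init__(self):
        self.contracts = []      # list of (tonnes, price_per_kg)

    def agree(self, tonnes, price):
        """Record a contract once both parties have committed."""
        self.contracts.append((tonnes, price))

    def perceived_supply(self):
        return sum(t for t, _ in self.contracts)

    def quoted_price(self):
        """Volume-weighted average of agreed prices."""
        return sum(t * p for t, p in self.contracts) / self.perceived_supply()

ore = ContractBook()
ore.agree(10, 4.00)   # Company A's contract
ore.agree(10, 3.50)   # Company B undercuts
supply = ore.perceived_supply()   # 20 tonnes
quote  = ore.quoted_price()       # $3.75/kg
```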
This perception can be rigged: sometimes a fish seller will buy his own fish (or his competitor's fish) to trick the market into thinking there is a surge in demand. This happens in World of Warcraft sometimes, where someone selling a potion buys up all of the same potion from the auction house only to relist it at a higher price.
The important thing to remember is that all sales are final REGARDLESS of what may become available on the date of delivery, and that buyers and sellers both have an incentive to make deals as far ahead in the future as possible.
So, Xf and Xs determine production; Xb is a function of the buyer's need and their own sense of (Xf, Xs); and Xs is typically based on what the seller knows about the competition and the dynamics of their given market. Even Xf is uncertain, because anything can happen before the commodity is delivered. This is also what drives cyclical price fluctuations: producers continue to produce more until there is a glut, driving prices down, reducing production, which in turn causes prices to rise again, and the cycle continues.
If done properly, you should see wars erupt between fish producers as a way to inflate fish prices... which should in turn force ore prices up due to increased demand from weapon sales, which should drive wheat prices up due to the increased cost of building tractors (ore is being diverted into tanks), which should then cause the creation of government and regulatory boards as a way to bring down wheat prices, which in turn will slowly inflate the cost of wheat (due to price fixing) at the expense of fish prices. You might even see the invention of the corporation, cartels, international trade agreements, and unions! Maybe add a 'lawyer' character class so you can resolve trade disputes and mediate resolutions. (I'm joking, of course.)
In terms of a video game: if you have shopkeepers, you might want to keep a global commodities index running and have each shopkeeper essentially act as a broker who charges a premium (a function of the index price * volatility/risk * profit expectation). If you assign production to certain regions of the world and adjust output/cost based on how much combat (or other types of events) occurs in those areas, then you provide a way for player dynamics to indirectly affect global prices.
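One possible form of that broker premium, as a simple multiplicative markup over the global index price. The exact formula and the numbers are assumptions for illustration, not a known implementation:

```python
def shop_price(index_price, volatility, profit_margin):
    """Index price marked up by perceived risk and desired profit."""
    return index_price * (1.0 + volatility) * (1.0 + profit_margin)

# Same commodity, same index price: regional combat raises volatility,
# and with it the local shop price.
calm    = shop_price(index_price=10.0, volatility=0.05, profit_margin=0.1)
wartime = shop_price(index_price=10.0, volatility=0.50, profit_margin=0.1)
```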
I see you have at least some economics background (you used the term 'reserve price', which most non-economics folks don't know), and you are looking at the problem through economic lenses, which is how I am as well. Instead of some 'master AI' trying to centrally plan the economy, I would like the individual agents to have natural market incentives/disincentives determining how they 'grow' or otherwise increase their well-being.
IF such a system could be developed, then some very interesting game situations could arise AND content would be dynamically created instead of hardcoded by developers. For instance, a village with easy access to fishing would naturally become a hub of fishing, trading with the mining village with its own particular comparative advantages. These fluctuating market prices also end up creating 'quests' for players who happen to be participating in the world - like being hired as a guard to protect the trade caravan that keeps getting robbed on the way to the fishing village. Although that seems very specific and 'high-level', my feeling is that once I can get agents to evaluate possible known solutions to problems, this can be applied to a variety of different situations, from deciding whether to buy a fish or go fishing, to deciding how much to offer for a 'guard' position on a caravan. Not to mention the decision to engage in arbitrage and take fish worth $5 in one village and sell them in another village for $20 (with all the associated travel costs included).
It's a lofty goal; I'm just starting out with the basics and seeing if I can uncover 'patterns' and other methods to tackle the larger problems - plus it's fun watching a bunch of agents on screen doing their own thing (even if it is only chopping wood or fishing at the moment!).
I hope it has stuff that covers 'memory' as I have determined that each agent will need to remember various things in order to make decisions faster and better...
Not directly, no. However, I do deal with "decision momentum" -- which is not the same thing. It is simply there to prevent "strobing" of behaviors, flipping from one to the other and not accomplishing anything. This will, of course, be important to what you are doing anyway.
- already each agent has, what I call, a 'Task Stack' which allows tasks to be interrupted by more pressing needs (something dangerous comes into the area while trying to fish so run away for a bit until safe, then go back to fishing.... wood axe breaks while chopping wood so go get another from storage then go back to chopping for whatever reason, etc)
That is a common approach. However, you have to be aware of the possibility that a remembered task is no longer valid by the time you return to it. Make sure you re-check upon returning to each level.
...and I dread having a more space hogging memory for these agents because I want to keep storage down to a minimum to allow for lots of agents. Plus I don't know how to do a memory yet.
Some of the shared information can be held in blackboards, not only for memory purposes but for speed of calculation. For example, don't make each agent calculate the "current market price of rabbit meat" every time it may need that information.
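A minimal blackboard sketch along those lines, where an assumed-expensive market-price lookup is computed once and shared by all agents (names are illustrative):

```python
class Blackboard:
    def __init__(self):
        self.values = {}
        self.computations = 0    # just to show how often we really compute

    def get(self, key, compute):
        """Return the cached value, computing it only on first request."""
        if key not in self.values:
            self.computations += 1
            self.values[key] = compute()
        return self.values[key]

def market_price_of_rabbit():
    return 5.0    # stand-in for an expensive market scan

bb = Blackboard()
# 100 agents ask for the price; it is computed exactly once.
prices = [bb.get("rabbit meat", market_price_of_rabbit) for _ in range(100)]
```

In a running simulation you would also want to invalidate or timestamp entries so that stale prices eventually get recomputed.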
One of the goals I have is to have one large AI routine which is sophisticated enough to, just by having a large set of agent definer variables (Lazy, leader, strong, etc) can be used for every type of agent in the world from animal to more intelligent, more abilities human-like agents.
Again, a powerful technique. However, be aware that many of your higher level thought processes are simply going to be NULL for the lower-level agents. Not everything scales!
I notice there is some focus in the literature on re-using FSMs for different agents, but usually grander structures get separate routines. This brings up yet another problem I haven't tackled: can an individual agent transcend into a 'master planner' agent? Meaning, once a lowly agent has fulfilled its Maslow tree pretty well and has the proper gumption (leader, etc.), can it start exerting influence over other agents - think town planning/layout, etc.?
This is similar to the above. Don't think of it as "transcending" but rather as switching certain aspects on and off. Through communication of information and biasing influence, you can quite easily have one agent affect others by subtly "turning their knobs."
Anyways, I appreciate your feedback and look forward to being overwhelmed by something with such a scary title as 'Behavioral Mathematics'!
Meh... I just chose the title to impress chicks.
Since you seem to already have a good grip on comparing and evaluating plans of concrete actions, you might treat money as worth the expected amount of work needed to earn it back, based on possibly wrong projections of market prices.
Excellent point... all of them can be abstracted into a generic "unit". Labor, desire, money, etc. Jeremy Bentham did something similar with his "hedonic calculus" -- something which I touch on in the book.
Of course there might be different available jobs; hunting and cooking one rabbit might be less unpleasant than 4 hours of rabbit hunting, but more unpleasant than a 2% quota of a quick $1000 armed robbery.
Remember to include the potential for success and failure -- and the ramifications of each. For example, what is the risk-reward ratio of the armed robbery? If the penalty is negligible, then give it a shot. If the penalty far outweighs the gain, then pause. But what is the chance of getting caught? If the penalty is death but you have a 0.001% chance of getting caught, it doesn't matter. Fun with math, folks!
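That risk-reward weighing, as plain expected value: each outcome is weighted by its probability. All numbers are made up for illustration:

```python
def expected_value(gain, p_caught, penalty):
    """Expected payoff: gain if you get away with it, minus the
    penalty (expressed as a utility cost) if you are caught."""
    return (1.0 - p_caught) * gain - p_caught * penalty

# Death penalty (a huge utility cost) but a 0.001% chance of being caught
# still comes out positive; a 50% chance makes the same act foolish.
risky   = expected_value(gain=1000.0, p_caught=0.00001, penalty=1000000.0)
foolish = expected_value(gain=1000.0, p_caught=0.5,     penalty=1000000.0)
```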
The market system shouldn't try to find "true" prices, because they don't exist.
Another excellent point. This is how you can build true market systems. Everything is relative. A well-designed system will automatically self-balance based on the environmental supply and demand, the numbers you used for time costs of production, the satisfaction rates of the goods, etc.
Interestingly, while it isn't hard to create a system that self-balances based on these factors, it is actually hard to make it balance in such a fashion that things make sense to us when we compare them to reality. For example, if I told you the going price for AbstractObjectA was $100, you would have no frame of reference and just accept it. If I told you that AbstractObjectA was rabbit meat per pound, you might be mildly alarmed. We have just broken the suspension of disbelief because it doesn't match up with our pre-conceived notion of what rabbit meat should cost.