
AI priority theory

Started by chuck22 October 23, 2005 04:53 PM
10 comments, last by chuck22 19 years, 1 month ago
I've been interested in Artificial Intelligence for a long time, but I am not an experienced AI programmer. What I am experienced at is writing programs that solve puzzles and problems in Java. I was the best in my computer class (I got assignments done fastest) because I had an effective method for writing programs: first, I would write down step by step how I went about solving the problem. Then I would recognize any patterns or equations I used to get from step to step. Then I'd write a program to do exactly what I did... a pretty standard way of writing (unintelligent) programs.

A few months ago I looked into this AI stuff and read about neural networks, decision trees, and machine learning, and as I read about it I kept thinking that this is not how I go about solving problems; most people don't sit down and draw out the various AI methods to solve their daily problems either. So I started observing my everyday actions, and those of the people around me, and tried to figure out why we do what we do. At some point the idea of priorities jumped out at me and it made perfect sense: everyone does what they do because that action leads to a result that accomplishes a higher priority than what would result from the action not being taken. In smaller words:

Person A has two priorities:
1. Live (high priority)
2. Watch that football game (lower than #1)

A guy walks into Person A's house with a gun and says, "The only way you can stay alive is by not watching that football game." Person A evaluates his priorities and agrees to not watch the football game in order to stay alive. All right, stupid example, but you get the picture. The point is that there is a logical process a computer can follow using priorities:

- Modify priorities - includes creating, adjusting level of importance, and deleting
- Identify the environment and observe which priorities may be affected
- Evaluate the environment and take actions to accomplish higher priorities

That's more of a thrown-together rough draft and isn't really a process at all... more like steps the computer can take. But I hope I'm getting the idea out there. Everyone does something for a reason. If someone does something illogical, then maybe being the center of attention is higher on their priority list than the desire for respect. If a guy takes off his jacket and gives it to his girlfriend on a cold night, maybe the safety and health of those he cares for is a higher priority than his own.

I believe that making the computer think like this would create very intelligent AI for computer games. Take navigating around maps: instead of creating waypoints or checkpoints, just let the computer decide the best way. What route will I (the computer player) take to be safest, most hidden, and able to retaliate to an attack? By weighing which priorities matter most to it, the computer will choose a good path across the map (say it's a war game and the AI needs to cross a street safely). The idea would not be hard to apply, and I hope to start working on a simple Java example using priorities when I get some more free time.

This would also be good for creating variability in opponent AI. In most games you can beat a certain type of enemy very easily, partly because they all think the same and you have developed an effective way to beat them. Using priorities, just throw some random numbers into the priority-managing process and each opponent AI will act differently, because what is important varies from enemy to enemy. Hiding in the shadows may be more important to computer A, while rockets and brute force may be more important to computer B. Really, that's just what people are: logical thinkers with very different and diverse priorities.

[Edited by - chuck22 on October 23, 2005 10:55:40 PM]
I think most people can work out that much; the problem is in the implementation. You need effective and efficient ways to:

1) Describe knowledge
2) Store/retrieve knowledge
3) Evaluate input based on knowledge

Easy to say, damned hard to do. Personally, I think #2 (closely related to #1) is needed the most right now, as the current "memory" structures I've seen don't scale very well.

-Cam
Regarding the way most people don't think about how they'll go about making that tuna-and-raisin sandwich: in AI programming, you can treat "make sandwich of type X" as an "atom", and the higher-level goals you want to achieve are reached by performing some sequence of atoms. There's a field of research, called simply "planning", which treats these problems formally.

On a simpler scale, there's the structure that's been documented from within The Sims: each object in a Sim's house advertises some degree of fulfillment of some specific need, out to some specific radius. A Sim has a number of needs (hunger, boredom, the pee-o-meter, etc.) which fluctuate. Every once in a while, when a Sim is idle, it will check which advertised object would best improve its situation, as measured by the need-o-meters, and will go interact with that object.

You can get interesting behaviors when an object (say, the fridge) advertises a need fulfillment (say, a snack), but the interaction doesn't actually fulfill the need (say, the Sim is out of simoleons). To avoid locking on to this one object forever, you need some amount of learning, or perhaps just a quarantine period during which the same object can't re-capture the agent's interest.
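
Something like this toy Java sketch captures that selection loop (all names and numbers here are mine, not from any actual Sims code):

import java.util.*;

// One fluctuating need, e.g. hunger; 0 = satisfied, 100 = desperate.
class Need {
    final String name;
    double level;
    Need(String name, double level) { this.name = name; this.level = level; }
}

// A world object advertising fulfillment of one need within a radius.
class Advertiser {
    final String needName;
    final double fulfillment;  // how much it promises to reduce the need
    final double radius, x, y;
    Advertiser(String needName, double fulfillment, double radius, double x, double y) {
        this.needName = needName; this.fulfillment = fulfillment;
        this.radius = radius; this.x = x; this.y = y;
    }
}

class Sim {
    double x, y;
    List<Need> needs = new ArrayList<>();

    // When idle, pick the in-range advertiser that best improves a current need.
    Advertiser choose(List<Advertiser> world) {
        Advertiser best = null;
        double bestScore = 0;
        for (Advertiser a : world) {
            if (Math.hypot(a.x - x, a.y - y) > a.radius) continue; // out of range
            for (Need n : needs) {
                if (!n.name.equals(a.needName)) continue;
                double score = Math.min(n.level, a.fulfillment);   // capped benefit
                if (score > bestScore) { bestScore = score; best = a; }
            }
        }
        return best;  // null: nothing worth doing right now
    }
}

The quarantine idea from the previous paragraph would be one more field on Advertiser (a "don't re-advertise to this Sim until tick T" stamp) checked inside the loop.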
enum Bool { True, False, FileNotFound };
Quote: Original post by chuck22
A few months ago I looked into this AI stuff and read about neural networks, decision trees, and machine learning, and as I read about it I kept thinking that this is not how I go about solving problems; most people don't sit down and draw out the various AI methods to solve their daily problems either.

So I started observing my everyday actions, and those of the people around me, and tried to figure out why we do what we do. At some point the idea of priorities jumped out at me and it made perfect sense: everyone does what they do because that action leads to a result that accomplishes a higher priority than what would result from the action not being taken...


You're right, but you're also wrong. :) Read on...

Quote: The point is that there is a logical process a computer can follow using priorities.
- Modify priorities - includes creating, adjusting level of importance, and deleting


A neural network (for instance) is one very good tool to adjust priority levels. Based on corrective feedback, a neural network can learn which inputs are important and which are not, adjusting their weights accordingly.
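
As a concrete illustration, here is a toy Java sketch of the classic delta-rule update for a single artificial neuron (my own example, not from any particular library): given corrective feedback, weights on useful inputs drift up and weights on useless ones drift down.

class Neuron {
    double[] w;               // one weight per input: that input's "priority"
    double bias, rate = 0.1;  // learning rate

    Neuron(int inputs) { w = new double[inputs]; }

    double output(double[] x) {
        double sum = bias;
        for (int i = 0; i < w.length; i++) sum += w[i] * x[i];
        return sum >= 0 ? 1 : 0;  // step activation
    }

    // Corrective feedback: nudge each weight in proportion to its input's
    // contribution to the error.
    void train(double[] x, double target) {
        double error = target - output(x);
        for (int i = 0; i < w.length; i++) w[i] += rate * error * x[i];
        bias += rate * error;
    }
}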

Quote: - Identify the environment and observe which priorities may be affected


Neural networks play a large role in image processing and computer vision, in helping to pick out objects of interest from the background. They work well in any situation where there is an environment of various inputs and it is necessary to pick out points of interest and map them to appropriate outputs or responses.

Quote: - Evaluate the environment and take actions to accomplish higher priorities


This is really the same as above.

So what I'm saying is that things like neural networks and machine learning are tools to accomplish exactly what you describe. It's just that your description looks at the higher level without considering how to implement these things, whereas the things you have read about are concerned with the low-level implementation that the higher-level plan needs in order to function.

Quote: I believe that making the computer think like this would create very intelligent AI for computer games. Take navigating around maps: instead of creating waypoints or checkpoints, just let the computer decide the best way. What route will I (the computer player) take to be safest, most hidden, and able to retaliate to an attack? By weighing which priorities matter most to it, the computer will choose a good path across the map (say it's a war game and the AI needs to cross a street safely).


Again, this already happens. It's just that the usual approach is to include only distance as the 'priority'. There is nothing in the algorithms used that prevents the inclusion of other requirements. But you still need waypoints to control the low-level movement.
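
For instance, the edge cost fed to A* or Dijkstra can weigh more than raw distance. A hypothetical Java sketch (all field names invented for the example):

class PriorityPathCosts {
    static class Edge  { double distance, exposure, lightLevel, coverNearby; }
    static class Agent { double cautionWeight, stealthWeight, ambushWeight; }

    // Drop this in wherever the pathfinder would use raw distance; the search
    // machinery is unchanged, only each agent's notion of "cheap" differs.
    static double edgeCost(Edge e, Agent a) {
        return e.distance
             + a.cautionWeight * e.exposure     // penalise open ground
             + a.stealthWeight * e.lightLevel   // penalise well-lit routes
             - a.ambushWeight  * e.coverNearby; // reward cover to fight back from
    }
}

Give each agent different weights and the same waypoint graph yields different routes per enemy, which also gets you the variability you wanted.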
It sounds like you may be tying together different issues. Try doing a Google search on 'utility theory' and 'decision theory'. I am sure someone else will have a better suggestion, but these seem like part of what you are getting at.

(sorry about the short post, have to get to work)
As BrianL mentioned, utility theory gives the right framework for looking at these issues. In some sense, all decision making can be expressed as maximization of expected utility. In your example of living versus watching the game, one can see four possible outcomes of the situation:
A- You watch the game and you live
B- You watch the game and you die
C- You miss the game and you live
D- You miss the game and you die (boy, that sucks).

A utility function is a mapping from those states to real numbers, expressing some sort of "happiness" value (when I was in college I developed parts of utility theory independently, and I called the function "happiness", not "utility"). For instance:
A |-> +1
B |-> -100
C |-> 0
D |-> -100

B and D could have different values, but the religious beliefs of our agent play a role in what they are, and I won't discuss that here. In any case, when the guy with the gun enters the room and says that it's either the game or the agent's life, the agent has to come up with a list of plans, or actions (I'm not going to define these things precisely here), and then estimate the probability of each outcome given that a certain action was taken:
1) Don't watch the game and live.
2) Watch the game and die.
3) Kick the gun, pick it up from the floor, kill the guy, watch the game and live.

So we know the expected utility of action 1 (0) and action 2 (-100). Now, the exotic plan 3 has a probability of success of 0.002%. If it fails, you die and you don't watch the game, so its expected utility is 0.00002*(+1) + 0.99998*(-100) = -99.99798. The best action to take is therefore 1. Of course, if the agent is Rambo, the probability of success for action 3 would be much higher, and then option 3 might be the best action. Or maybe the agent is such a rabid fan that the utility of watching the game dwarfs his love of life, and then again action 3 might be the right thing to do. Or maybe the agent practices an obscure religion in which death after watching the game guarantees quick admission into heaven, so the utility for B is very high, making option 2 the most desirable.
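
In code, the whole calculation is a couple of loops. A toy Java version using the numbers above (the plan names are mine):

class ExpectedUtility {
    public static void main(String[] args) {
        // Utilities of outcomes A..D: watch/live, watch/die, miss/live, miss/die.
        double[] utility = { 1, -100, 0, -100 };

        String[] names = { "comply", "defy", "kick the gun" };
        double[][] probs = {
            { 0,       0, 1, 0       },   // comply: miss the game, live
            { 0,       1, 0, 0       },   // defy: watch the game, die
            { 0.00002, 0, 0, 0.99998 },   // kick: tiny chance of A, else D
        };

        int best = -1;
        double bestEU = Double.NEGATIVE_INFINITY;
        for (int p = 0; p < probs.length; p++) {
            double eu = 0;
            for (int o = 0; o < utility.length; o++) eu += probs[p][o] * utility[o];
            System.out.printf("%s: EU = %.5f%n", names[p], eu);
            if (eu > bestEU) { bestEU = eu; best = p; }
        }
        System.out.println("Chosen plan: " + names[best]);  // prints "comply"
    }
}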

You see how utility allows the description of any rational behaviour. Now, one could organize the AI of an agent in several levels. The top level has a fixed utility function that makes the agent want to live, have money, have children, or whatever. The second level could take care of more immediate things, like going to work. The top level's "actions" could be just modifications to the utility function used by the lower levels. For example, we could determine that we can either get a real job, play the banjo in the subway, or deal crack. Each of those actions has a different risk profile (you might make more money if you deal crack, but you are more likely to get killed). Once you have decided that you are going to get a regular job, the next level will have in its utility function: "if you have a job, you get 20 brownie points". The second level doesn't really know why having a job is good, but it will try really hard to have one (assuming 20 is a big number of brownie points compared to other things). In the same way, you could indicate to the lower levels that you want to be married (to have children, or sex, or economic stability, or all three, but that doesn't matter to the lower levels).
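
A toy Java sketch of that layering (the facts and point values are of course invented): the top level's only "action" is editing the bonus table that the lower level scores against.

import java.util.*;

class LowerLevel {
    // Incentives installed from above; this level never learns *why* a job
    // is worth 20 points, it just tries to collect the points.
    Map<String, Double> bonuses = new HashMap<>();

    double utility(Set<String> stateFacts) {
        double u = 0;
        for (String fact : stateFacts) u += bonuses.getOrDefault(fact, 0.0);
        return u;
    }
}

class TopLevel {
    void chooseLifeGoals(LowerLevel dayToDay) {
        dayToDay.bonuses.put("has-job", 20.0);  // the 20 brownie points above
        dayToDay.bonuses.put("married", 15.0);  // another hypothetical value
    }
}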

One thing that should be noted is that scaling all the brownie points by a positive constant doesn't change any decisions, and neither does adding a constant to all the values of the utility.

Learning can be implemented in the models of what the likely results of our actions are. For instance, if you read in the newspaper about crack dealers being sent to jail or dying in gang violence more often than your model would predict, you may want to adjust your model, and as a consequence you might end up looking for a job. You can use ANNs for this if you are an ANN kind of guy. Linear models, Adaline, or other methods might be good enough and easier to understand, debug, and tweak, but this is all details.
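
In the simplest case that adjustment is just nudging an estimated probability toward each new observation, e.g. (a toy sketch):

class OutcomeModel {
    double pJailed = 0.10;  // current estimate: chance a crack dealer gets jailed

    // outcome is 1 if it happened this time, 0 if not; a small rate = slow learner.
    void observe(double outcome, double rate) {
        pJailed += rate * (outcome - pJailed);
    }
}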

I think this way of organizing an agent's behaviour is very flexible. You can make an agent very cautious by assigning a very negative value to death, or a very concave function to the "money" section of the utility. You can also make it irrationally fearful by using pessimistic models of the outcomes of actions. You can make it generous by having a strong component of others' well-being in its evaluation function. You can make him hate someone by having a negative coefficient for that person's well-being, etc.
You could simply add a dice roll to the priorities. So there might be a situation where he would say, "Aww, screw you, guy, this is the final match..."
Look at PnP RPGs: in GURPS, for example, there might be a roll on initiative, and a roll on a shock table with a phobia as the result.
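
In code the "dice roll" can be as simple as Gaussian jitter on each agent's priority scores, scaled per agent (a toy sketch):

class DicePriorities {
    static final java.util.Random RNG = new java.util.Random();

    // Two agents with identical base scores still diverge; 'temperament'
    // controls how erratic this particular agent is.
    static double noisyScore(double baseScore, double temperament) {
        return baseScore + RNG.nextGaussian() * temperament;
    }
}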

You might also like to look at Markov chains. These are kind of important for this approach.

BTW, imagine a woman who is cooking. Something needs her attention, so she walks over to it. Suddenly there are more and more things that need her attention, and she is spinning wildly. Then a fire breaks out and she finally decides to run for the fire extinguisher.

Simulating a person's behaviour isn't, or rather shouldn't be, deterministic. Yes, the majority of people would act predictably in most situations, but there are still some surprises.

It's fine that you are trying to analyse people's behaviour. Many people can't correctly describe all the steps needed to perform some action; they expect that computers will do it somewhat intuitively. And yes, the majority of people are not using logic, or are using logic just for cheating.
Thank you all for your replies.


To alvaro: going the statistical route isn't that bad an idea, but every day people have thousands of decisions, problems, and actions to deal with, and people do not weigh their decisions as precisely as you explained in my football-game-or-die scenario. What I'm still aiming for is making the program solve problems the way people solve problems. We have very sophisticated minds and make decisions in under a second, depending on how important the decision is. But for the sake of sanity, people will not sit down and punch numbers into a calculator every time they are required to think. This is where mistakes come from: people sacrifice accuracy in weighing priorities to save time. But this time-saving method has proven effective for most people, and applying it to computers to save memory won't be a bad thing at all. And I'm sure no one will care if their enemy AI isn't a perfect thinking machine; the user will make mistakes, so the enemy AI should too.

To Kylotan: I was thinking about it, and I agree that neural networks would have to be involved. After all, if I understand correctly, neural networks try to mimic the processes of our brains, or the function of neurons? I know it is not at a conscious level. But I still think neural networks should not be the big picture; they should just be one function within it.

To Talyssan: knowledge may be hard to represent in the real world, but I don't believe that's true in the computer world. In games, the 'environment' is pretty much made up of easily attainable numbers such as the number of units, the sizes of objects, RGB values, coordinates, etc. Given that a computer player is looking in a given direction in a game, it could easily store what it sees and where those objects are relative to its own position (possibly with some estimation, to keep it realistic), but it's doable.
Also, let's say the guy trying to watch the football game and live is actually Rambo. This is part of the identifying-and-evaluating-the-environment process. If it's Rambo vs. Bambi with a gun, no doubt it is different from average Joe vs. a random gangster.


Come to think of it... a Rambo vs. Bambi-with-a-gun fight would be pretty cool to watch.
Quote: Original post by chuck22
If I understand correctly, neural networks try to mimic the processes of our brains, or the function of neurons?


Many people get caught in this trap, but neural networks are only very loosely based on biological ideas.

While artificial neural networks (multilayer perceptrons) are indeed inspired by the study of the brain, the earliest one (the single perceptron) was much closer in operation to a linear regression model.

Even though neural networks are named as they are, don't think of them as something that attempts to simulate the brain. Rather, think of them as a tool for certain problems: function approximation, classification, pattern recognition, and time-series prediction are typically the applications where neural nets are the right tool.

So, as Kylotan suggested, you may find neural networks to be very useful to implement some of the ideas you have. Certainly you won't be simulating a "brain" with them, but they could perform well as part of an overall model.

This topic is closed to new replies.
