
Emerging Behavior

Started February 23, 2006 06:45 PM
11 comments, last by MWr2 18 years, 9 months ago
I'm trying to identify what encourages the appearance of emergent behaviors in multi-agent AI systems. By emergent behavior I mean behaviors that are not explicitly coded by the developers, but appear from a correct yet implicit application of the coded behaviors. So far I've identified three criteria:

1) Learning (obviously)
2) Behavior inheritance
3) Behavior generalisation

1) doesn't need much explanation. By 2) I mean that distinct agents implicitly share part of their behavior. By 3) I mean that complex behaviors are expressed as a product of simple behaviors as much as possible (see the sketch below).

So, got anything else? Does anyone have a good resource or experience with emergent behaviors?
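To make 3) concrete, here is a minimal sketch of a complex behavior ("forage while staying safe") expressed purely as a weighted sum of two simple behaviors. All the names here (seek, avoid, step), the weights, and the fade-out radius are hypothetical illustrations, not any standard API:

```python
def seek(pos, target):
    """Unit step toward a target."""
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = max((dx * dx + dy * dy) ** 0.5, 1e-9)
    return (dx / d, dy / d)

def avoid(pos, threat, radius=3.0):
    """Step away from a threat, fading to zero beyond `radius`."""
    dx, dy = pos[0] - threat[0], pos[1] - threat[1]
    d = max((dx * dx + dy * dy) ** 0.5, 1e-9)
    strength = max(0.0, 1.0 - d / radius)
    return (strength * dx / d, strength * dy / d)

def step(pos, food, predator, w_seek=1.0, w_avoid=2.0, dt=0.1):
    # "Forage while staying safe" is never coded directly; it is just
    # the weighted sum of the two simple behaviors.
    s, a = seek(pos, food), avoid(pos, predator)
    return (pos[0] + dt * (w_seek * s[0] + w_avoid * a[0]),
            pos[1] + dt * (w_seek * s[1] + w_avoid * a[1]))

pos = (0.0, 0.0)
for _ in range(200):
    pos = step(pos, food=(10.0, 0.0), predator=(5.0, 1.0))
print(pos)  # the agent typically detours around the predator and ends near the food
```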
You might want to take a look at Stephen Wolfram's voluminous tome, in which he considers in some detail the emergent and complex behaviours of systems, particularly automata. It's certainly an interesting read... and a long one!
That one?

I also found another reference to a book named "A New Kind of Science" by the same author on the same subject...

Holy Macaroni, CAN$477.56?

I sure hope I can find it in my college's science library.
A New Kind of Science is available to read online for free, or at least a large portion of it is.

CLICKY
Woot!

That guy is awesome. I can't remember how many times Mathworld saved my neck.
Actually, sometimes what is seen as emergent behavior is just the natural solution, or the inherent solution, to a problem.

I wrote a simulation of a set of mechanical legs that tried to stay balanced by shuffling around and keeping the projected center of gravity at the midpoint between the two feet. It was a crude simulation, and I basically treated the problem as an optimization problem. During the simulation, the center of gravity would randomly shift around and the feet would move around to try to keep balance. After a while, the two feet would naturally take up a one-foot-forward, one-foot-back stance, which, interestingly, is how real people stand when the surface they are standing on is shaky.
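Roughly, a control loop of the kind described might look like the sketch below. This is a simplified reconstruction, not the original code; the step size, the candidate moves, and the greedy one-foot-per-tick search are all assumptions for illustration:

```python
import random

random.seed(1)
STEP = 0.05
cog = [0.0, 0.0]                  # projected center of gravity
feet = [[-0.2, 0.0], [0.2, 0.0]]  # left foot, right foot

def error(feet, cog):
    """Distance from the midpoint of the feet to the center of gravity."""
    mx = (feet[0][0] + feet[1][0]) / 2
    my = (feet[0][1] + feet[1][1]) / 2
    return ((mx - cog[0]) ** 2 + (my - cog[1]) ** 2) ** 0.5

for t in range(2000):
    # the world randomly perturbs the center of gravity
    cog[0] += random.uniform(-0.02, 0.02)
    cog[1] += random.uniform(-0.02, 0.02)
    # shuffle one foot per tick, alternating, taking whichever small
    # step reduces the error (a purely local, greedy optimization)
    i = t % 2
    best_e, best_move = error(feet, cog), (0.0, 0.0)
    for dx, dy in ((STEP, 0.0), (-STEP, 0.0), (0.0, STEP), (0.0, -STEP)):
        trial = [list(feet[0]), list(feet[1])]
        trial[i][0] += dx
        trial[i][1] += dy
        e = error(trial, cog)
        if e < best_e:
            best_e, best_move = e, (dx, dy)
    feet[i][0] += best_move[0]
    feet[i][1] += best_move[1]

# print the final stance; the poster reports that his fuller simulation
# settled into a staggered, one-foot-forward configuration
print(feet)
```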

So, what I'm trying to say is that emergent behavior is sometimes not something unexpected; rather, it shows us that many real-life solutions to problems are a result of the problem itself, and at times much simpler in nature than we would think they are (or something like that, words escape me right now).

Wait...I think I'm trying to say what you stated as Behavior Generalisation....
Quote: Original post by WeirdoFu
...


I wouldn't really classify your example as emergent behavior. You say you treated the problem as an optimization problem, and your system really did find a good local extremum, which is the goal of an optimization scheme...
It was a dynamic optimization problem, where the only concern was optimizing the fitness function. All the system had to do was constantly shift the feet around, one at a time, to keep the center of gravity at the midpoint between the two feet. The system itself had no knowledge of how changes would occur and when. For the most part, the fitness was a black box to the system, which only queried it for results. So, in some ways, it may not even have been aware that things were changing. But no matter how you test the system, it eventually settles on the same one-foot-forward, one-foot-back configuration as opposed to anything else.

It could have settled on any number of other configurations, like the initial natural standing position, but across all tests it settled on a specific one. Given the randomness of a particle swarm optimization, it is hard to believe that it would favor one local optimum over all others. The fact is, though it was never specified in the fitness, it found the most stable stance in real life and chose it over all other possible solutions.
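For reference, a generic particle swarm optimizer of the kind mentioned looks roughly like the sketch below; the swarm only ever queries the fitness as a black box. The fitness used here (distance of the feet midpoint from the center of gravity) is an assumed stand-in for the actual one:

```python
import random

random.seed(0)
DIM, N, ITERS = 4, 20, 200  # a particle is (x1, y1, x2, y2): the two feet
W, C1, C2 = 0.7, 1.5, 1.5   # standard inertia and attraction coefficients
COG = (0.1, 0.3)            # the current center of gravity

def fitness(p):
    """Black box: distance from the feet midpoint to the center of gravity."""
    mx, my = (p[0] + p[2]) / 2, (p[1] + p[3]) / 2
    return ((mx - COG[0]) ** 2 + (my - COG[1]) ** 2) ** 0.5

pos = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]
pbest = [list(p) for p in pos]         # each particle's best position so far
gbest = list(min(pbest, key=fitness))  # the swarm's best position so far

for _ in range(ITERS):
    for i in range(N):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if fitness(pos[i]) < fitness(pbest[i]):
            pbest[i] = list(pos[i])
    gbest = list(min(pbest, key=fitness))

print(gbest, fitness(gbest))  # some stance whose midpoint sits under the COG
```

Note that nothing in the update rule mentions feet or balance; which of the many zero-error stances the swarm converges to is left entirely to the dynamics.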

So by your definition, "behaviors that are not explicitly coded by the developers, but appear from a correct yet implicit application of the coded behaviors," I don't see why this is not an emergent behavior, be it a very simple one.

It should be noted that many times, emergent behaviors are just manifestations of behaviors that were never considered in the first place and can easily be explained upon further inspection of the code and the interaction between the various components. However, just because we can explain it after we've seen it doesn't make it less emergent, or we would eventually fall into the trap that AI researchers created for themselves. It isn't often that there are completely inexplicable emergent behaviors, as the system can't do more than what the program tells it to do.
Steadtler, I was suggesting 'A New Kind of Science', since it discusses the issues in simple language and with reasonably clear examples, such that a non-expert in automata can pick up the concepts... although I have no doubt you'd be fine reading the collected papers on automata, given your experience.

WeirdoFu, I disagree that what you have suggested is emergent behaviour. You designed a system to solve an optimisation problem without an explicit encoding of the objective function within the solver... the system only had access to the objective function through its own operation/action... this is called blind search... thus, the outcome looks like an emergent behaviour, because it wasn't explicitly programmed, but it isn't really emergent... it's just finding a solution through functional interaction with the domain, much as a GA does (which is a classic example of a blind search algorithm; see the sketch below).
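For illustration, a minimal GA of that blind-search kind might look like the sketch below; the OneMax objective and all the parameter values are assumed stand-ins, and the point is that the solver only ever scores candidates through the black box:

```python
import random

random.seed(0)
BITS, POP, GENS, MUT = 20, 30, 60, 0.02

def score(ind):
    """The black box: the solver never sees this definition."""
    return sum(ind)  # OneMax: count the 1 bits

def pick(pop):
    """2-way tournament selection."""
    a, b = random.sample(pop, 2)
    return a if score(a) >= score(b) else b

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]

for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        mum, dad = pick(pop), pick(pop)
        cut = random.randrange(1, BITS)       # one-point crossover
        child = [b ^ (random.random() < MUT)  # bit-flip mutation
                 for b in mum[:cut] + dad[cut:]]
        nxt.append(child)
    pop = nxt

print(max(score(i) for i in pop))  # approaches BITS, yet the solver only
                                   # ever interacted with score() as a black box
```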

Some good examples of emergent behaviour are those involving system control using only local rules... for example, flocking boids. Here, only local rules relevant to atomic elements of the system are used to control those elements. We know that the rules are designed to keep the boids together, but we see emergent behaviour when they are faced with issues like turning as a flock (wheeling) or avoiding static obstacles (where they flow around the object and merge on the other side).
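As a sketch of those local rules, here is a minimal boids implementation; the three rules (separation, alignment, cohesion) are the classic ones, while the weights and neighbourhood radius are illustrative assumptions:

```python
import random

random.seed(0)
N, RADIUS = 30, 2.0
W_SEP, W_ALI, W_COH = 1.5, 1.0, 1.0

boids = [{"p": [random.uniform(0, 10), random.uniform(0, 10)],
          "v": [random.uniform(-1, 1), random.uniform(-1, 1)]}
         for _ in range(N)]

def dist2(a, b):
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def step(boids, dt=0.05):
    new = []
    for b in boids:
        # each boid sees only its local neighbourhood
        near = [o for o in boids
                if o is not b and dist2(o["p"], b["p"]) < RADIUS ** 2]
        ax = ay = 0.0
        if near:
            for o in near:  # separation: push away from close neighbours
                d2 = max(dist2(o["p"], b["p"]), 1e-6)
                ax += W_SEP * (b["p"][0] - o["p"][0]) / d2
                ay += W_SEP * (b["p"][1] - o["p"][1]) / d2
            # alignment: steer toward the average neighbour velocity
            avx = sum(o["v"][0] for o in near) / len(near)
            avy = sum(o["v"][1] for o in near) / len(near)
            ax += W_ALI * (avx - b["v"][0])
            ay += W_ALI * (avy - b["v"][1])
            # cohesion: drift toward the neighbours' centre
            cx = sum(o["p"][0] for o in near) / len(near)
            cy = sum(o["p"][1] for o in near) / len(near)
            ax += W_COH * (cx - b["p"][0])
            ay += W_COH * (cy - b["p"][1])
        vx, vy = b["v"][0] + dt * ax, b["v"][1] + dt * ay
        new.append({"p": [b["p"][0] + dt * vx, b["p"][1] + dt * vy],
                    "v": [vx, vy]})
    return new

for _ in range(500):
    boids = step(boids)
```

Nothing in the code refers to 'the flock' as a whole; the wheeling and obstacle-flow behaviours fall out of the interaction of these per-boid rules.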

We tend to see emergent behaviour most often in systems comprised of parallel atomic elements (such as societies, crowds, hives, etc.) because the inherent complexity associated with parallel computation makes it nearly impossible to predict all of the consequent behaviours.

Cheers,

Timkin
Quote: Original post by Timkin
Some good examples of emergent behaviour are those involving system control using only local rules... for example, flocking boids. Here, only local rules relevant to atomic elements of the system are used to control those elements. We know that the rules are designed to keep the boids together, but we see emergent behaviour when they are faced with issues like turning as a flock (wheeling) or avoiding static obstacles (where they flow around the object and merge on the other side).

We tend to see emergent behaviour most often in systems comprised of parallel atomic elements (such as societies, crowds, hives, etc.) because the inherent complexity associated with parallel computation makes it nearly impossible to predict all of the consequent behaviours.

Cheers,

Timkin


I may very well be wrong to say that my system had emergent behavior, but I don't really feel strongly about flocking being an emergent behavior either. The fact is, to me, the whole definition of emergent behavior is just as fuzzy as the definition of artificial intelligence.

It is true that flocks of birds behave in very interesting ways, but by understanding the underlying rules, you will find that it's really just a product of many factors. Each bird follows a specific set of flying patterns, and yet they like to fly together with others, so the end result is the flight pattern we see as flocking. And the larger the flock, the more organized it seems to become.

It's the same with people. It's hard to predict what one person will do in a given emergency, but we can fairly accurately predict what a mob of people will do, and the larger the mob, the more accurate the prediction becomes. So what may be seen as emergent behavior in multi-agent systems may just be the reduction of local rules due to the size of the population. I wouldn't call it complex behavior either, because at some point you reach a population size where the general behavior of the crowd is much simpler than that of the individual. That is why predictability increases: the choices of the individual get reduced to almost nothing.
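As a toy illustration of that last point (the probabilities and population sizes are arbitrary): individual choices are noisy, but the crowd average becomes increasingly predictable as the population grows, with the spread of the crowd mean shrinking roughly as 1/sqrt(N):

```python
import random

random.seed(0)

def crowd_mean(n):
    # each individual independently "panics" (1) or "stays" (0);
    # 0.7 is an arbitrary per-person probability
    return sum(random.random() < 0.7 for _ in range(n)) / n

for n in (1, 10, 100, 10000):
    samples = [crowd_mean(n) for _ in range(1000)]
    mean = sum(samples) / len(samples)
    spread = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print(f"crowd of {n:>5}: mean {mean:.2f}, spread {spread:.3f}")
```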

