Robots Evolve And Learn How to Lie
Here's the link:
http://discovermagazine.com/2008/jan/robots-evolve-and-learn-how-to-lie
Here's the text:
Quote: Robots can evolve to communicate with each other, to help, and even to deceive each other, according to Dario Floreano of the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology. Floreano and his colleagues outfitted robots with light sensors, rings of blue light, and wheels and placed them in habitats furnished with glowing “food sources” and patches of “poison” that recharged or drained their batteries. Their neural circuitry was programmed with just 30 “genes,” elements of software code that determined how much they sensed light and how they responded when they did. The robots were initially programmed both to light up randomly and to move randomly when they sensed light. To create the next generation of robots, Floreano recombined the genes of those that proved fittest—those that had managed to get the biggest charge out of the food source. The resulting code (with a little mutation added in the form of a random change) was downloaded into the robots to make what were, in essence, offspring. Then they were released into their artificial habitat. “We set up a situation common in nature—foraging with uncertainty,” Floreano says. “You have to find food, but you don’t know what food is; if you eat poison, you die.” Four different types of colonies of robots were allowed to eat, reproduce, and expire. By the 50th generation, the robots had learned to communicate—lighting up, in three out of four colonies, to alert the others when they’d found food or poison. The fourth colony sometimes evolved “cheater” robots instead, which would light up to tell the others that the poison was food, while they themselves rolled over to the food source and chowed down without emitting so much as a blink. Some robots, though, were veritable heroes. They signaled danger and died to save other robots. “Sometimes,” Floreano says, “you see that in nature—an animal that emits a cry when it sees a predator; it gets eaten, and the others get away—but I never expected to see this in robots.”
My question is: how would one even begin to program something like that? The article makes it sound like it's a well-known theory that can be put into place with varying variables, using "evolution" to "grow" more intelligent AI. How is it done? How much could they learn? With a series of blinks, could a language develop? I need to know more, mostly about how to program AI like this.
From the language used in the article, I'd wager it's some kind of genetic algorithm. Genetic algorithms are a method for exploring a range of possible solutions using a process modelled on evolution. You have a pool of possible solutions and a method for ranking them according to their fitness for the task. Then you "crossover" solutions by combining their simulated genes, with a bit of mutation thrown in to explore other possibilities. The Wikipedia article on genetic algorithms goes into more detail.
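To make that concrete, here's a minimal sketch in Python of the loop a genetic algorithm runs. Everything in it is invented for illustration: the genome is just a list of floats, the population size and mutation rate are arbitrary, and the fitness function is a placeholder where the robot experiment would plug in something like "battery charge gained from food minus drain from poison over a trial run".

import random

GENOME_LENGTH = 30      # the article mentions 30 "genes"; here they're just floats
POPULATION_SIZE = 20
MUTATION_RATE = 0.05

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder fitness: reward genomes whose values sum high.
    # In the robot experiment this would be the charge gained from
    # "food" minus the drain from "poison" during a trial.
    return sum(genome)

def crossover(parent_a, parent_b):
    # Single-point crossover: front of one parent, back of the other.
    point = random.randrange(1, GENOME_LENGTH)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Occasionally nudge a gene by a small random amount.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def next_generation(population):
    # Rank by fitness and let the top half breed the next generation.
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POPULATION_SIZE // 2]
    children = []
    while len(children) < POPULATION_SIZE:
        a, b = random.sample(parents, 2)
        children.append(mutate(crossover(a, b)))
    return children

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(50):    # the article describes roughly 50 generations
    population = next_generation(population)
print("best fitness after 50 generations:", max(map(fitness, population)))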
The tricky part of any genetic algorithm is that you need to figure out two things first: a way of representing the solution in the form of a genetic code, and a fitness function for ranking candidates in order. For these robots, it's suggested that the genetic code maps to functions for responding to light, but it can be anything you like as long as it works with the crossover and mutation steps.
One of the downsides of genetic algorithms is that you may end up with a big population of duds; in this case, robots that don't do anything except sit there and "die". This happens a lot if you don't set good rules for what your genetic code is and what the fitness function does. In theory, if you have mutation you'll eventually get a better solution, but "eventually" can mean a very long time. From my dabbling with simple genetic algorithms, I found it takes a lot of intuition to choose a good genetic representation and to design a fitness function that gives a good range of scores.
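For what it's worth, here's one hypothetical way the "genetic code maps to functions for responding to light" idea could be represented. The article doesn't spell out Floreano's actual encoding, so the sensor count, output layout, and trial format below are all assumptions made purely for illustration.

# Hypothetical genome layout for one robot: weights that map sensor
# readings to motor and lamp commands. Not the actual encoding used by
# Floreano's group; just an illustration of "genetic code -> behaviour".

SENSORS = 3   # e.g. light level ahead, to the left, to the right
OUTPUTS = 3   # left wheel speed, right wheel speed, lamp on/off
# so the genome is a flat list of SENSORS * OUTPUTS weights

def decode(genome):
    # Split the flat list of weights into one row per output.
    return [genome[i * SENSORS:(i + 1) * SENSORS] for i in range(OUTPUTS)]

def act(genome, sensor_readings):
    # Compute the robot's outputs as weighted sums of its sensor readings.
    weights = decode(genome)
    return [sum(w * s for w, s in zip(row, sensor_readings)) for row in weights]

def fitness(genome, trials):
    # Score a genome over a list of simulated trials. Each trial is
    # (sensor_readings, reward), where reward is positive near "food" and
    # negative near "poison". A real setup would instead measure battery
    # charge gained while the physical robot drives around.
    score = 0.0
    for sensor_readings, reward in trials:
        left, right, lamp = act(genome, sensor_readings)
        # lamp would drive the signalling behaviour; this toy score ignores it
        if left + right > 0:          # robot moved toward the stimulus
            score += reward
    return score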
Quote:
With a series of blinks could a language develop?
Yeah, but only if the programmer has programmed for it.
Quote:
I need to know more about, mostly how to program AI like this.
I find it a fascinating research area, not so much because of what little has been achieved in it by way of "intelligence" or how little I know about it, but because of what it aspires to achieve. The titles say it all:
evolutionary robotics, AI, artificial evolution...
Even the language in the article title is as provocative and anthropomorphically misleading to the layman as ever:
"robots evolve and learn how to lie"
What does that mean? The robot has a switch statement that gets switched based on some highly contrived and complex trigger modelled on our neural networks?
I am sure it means something else to the AI expert.
And my goodness, a robot that lies! A bit scary: I wouldn't buy a robot if I couldn't be sure it would not lie to me. I have more than enough (human) liars around me already, including myself. (Now why did I have to complicate things like that?)
Great title: robots evolve and learn how to lie.
Certainly enhances the potential attraction of future funding.
Obviously, I am not an AI expert.
From the outside I may appear to be an ignorant cynic, but the robot might see me more respectably as the devil's advocate. The problem is: will the robot be lying or not?
Quote:
I need to know more about, mostly how to program AI like this.
enjoy the research
For a genetic algorithm, I'd say it all comes down to the fitness function. Presumably the fitness of each colony was based on how much "food" each colony consumed. If causing robots from other colonies to die means they get a greater share of the food, then that would score well in the fitness function, so it makes sense that such an ability would likely evolve.
Furthermore, if the fitness function for an individual includes the fitness result of the group, then I would fully expect the self-sacrificing behaviour to emerge as well, especially if the group fitness is treated as more important than individual fitness.
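As a rough sketch of that idea (the 80/20 weighting below is my own invented parameter, not something taken from the actual study):

def blended_fitness(individual_food, colony_food, colony_size, group_weight=0.8):
    # Combine an individual's own food score with the colony's average.
    # With group_weight close to 1, a robot that starves itself but boosts
    # the colony's total can still rank highly, which is what would let
    # self-sacrificing signalling survive selection.
    colony_average = colony_food / colony_size
    return (1.0 - group_weight) * individual_food + group_weight * colony_average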
"In order to understand recursion, you must first understand recursion."
My website dedicated to sorting algorithms
You might like DarwinBots. You can see some interesting behaviors emerge.
Of course, steven's cynicism isn't unwarranted. The media always plays to the public's imagination, suggesting that the kinds of AI seen in Hollywood movies are really happening in some lab somewhere. But there's no "intent" in these bots. They don't use lights to guide, warn, or deceive the other bots. They don't even know there are other bots. They flash their lights because the genes say to flash when such-and-such happens. It's "instinct". If the genes for that behavior remain that way, it's because it worked for the previous generation.
No one ever said that robots couldn't lie in the first place. Assuming that robots are no more than intelligent agents, then based on the definition of an intelligent agent, it wouldn't be surprising if an agent could lie. Simply put, an intelligent agent is one that perceives its environment and acts on its perceptions to reach a specific goal. So, if to reach the goal it must use misdirection or actively hide information from other agents, then it will, in essence, lie or state the truth that it wants others to believe. Conceptually (theoretically), it is as simple as that. Implementation, of course, is much different.
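As a toy illustration of that "agent acting on perceptions to reach a goal" idea (none of this is from Floreano's work, and the payoff numbers are invented), a "lie" falls out of nothing more than picking whichever signal maximises the agent's own expected food:

def choose_signal(sees_food, rivals_nearby, payoff_share=0.5):
    # Pick whichever signal maximises this agent's expected food intake.
    # payoff_share is the fraction of the food lost to rivals who respond
    # to an honest "food" signal; the value is made up for illustration.
    if not sees_food:
        return "no signal"
    expected_if_honest = 1.0 - payoff_share if rivals_nearby else 1.0
    expected_if_deceptive = 1.0   # a "poison" signal steers rivals away
    return "food" if expected_if_honest >= expected_if_deceptive else "poison"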
Exactly. So some bots flash in response to food. Others flash in response to poison. Are these just simple linkages, the inevitable outcome of a tiny genome designed to allow a specific set of behaviors? Mercy, no! The ones that flash in response to food are HEROES, valiantly giving their lives for the strength of the colony. And the ones that flash in response to poison? Eeevil liars, conniving to put themselves in power.
I always hate these articles, because I think they're bad for the field. A situation like this might use genetic algorithms and whatnot, but it's basically just a hill-climbing algorithm. The researchers set up a situation where different behaviors have different degrees of optimality, sic their NNs and their GAs on them, and then act astounded for the reporters when the system converges on the optimum. The fact that a BFS might have produced the same or better results in 0.3 milliseconds? Not sexy enough. The result is that people think of AI as an attempt to ape the vagaries of human behavior, that if we can just program a system to like the same foods as we do it'll somehow become as smart as we are. It's regressive, pandering, and a waste of time and resources.
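For comparison, here's roughly what plain hill climbing over the same sort of genome looks like; the point is how little machinery "converging on the optimum" actually requires (the step count and step size are arbitrary, and the fitness function is passed in):

import random

def hill_climb(genome, fitness, steps=1000, step_size=0.1):
    # Repeatedly perturb one gene and keep the change only if fitness improves.
    best = list(genome)
    best_score = fitness(best)
    for _ in range(steps):
        candidate = list(best)
        i = random.randrange(len(candidate))
        candidate[i] += random.gauss(0, step_size)
        score = fitness(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score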
Fun. I think this kind of thing already existed in software form for at least a decade, but it's fun to see it applied using real-world robots. Nothing really interesting or new, though.
Quote: Original post by Sneftel
The fact that a BFS might have produced the same or better results in 0.3 milliseconds? Not sexy enough.
When you support BFS, you support creationism.
The thing I find fascinating here is that they used actual robots with only 30 genes instead of simply using software agents with a more complex environment and 100 genes. By using robotics, and the kind of physical sensors and locomotion techniques you are limited to in that arena, they trimmed down the potential for research on the GAs. In fact, they even lengthened the iteration time by not being able to "speed up the world." Put it into a simple 2D world with software agents and they could have had 50 generations in minutes.
The question is, were they doing robotics research (i.e. physical) or GA research (i.e. mental)?
Dave Mark - President and Lead Designer of Intrinsic Algorithm LLC
Professional consultant on game AI, mathematical modeling, simulation modeling
Co-founder and 10 year advisor of the GDC AI Summit
Author of the book, Behavioral Mathematics for Game AI
Blogs I write:
IA News - What's happening at IA | IA on AI - AI news and notes | Post-Play'em - Observations on AI of games I play
"Reducing the world to mathematical equations!"