Quote: Original post by Sneftel
Quote: Original post by Timkin
An article I read about a year ago threw a singularly large spanner into Noam Chomsky's (famous linguist for those that don't know) beliefs that grammar is hard-coded in the human brain (and cannot be learned)...
Huh... very cool. Link?
It was a New Scientist article iirc... I'll try and track it down for you.
steven: let's not get into a discussion of 'what is intelligence' just yet... the year is still too young (and we've had that discussion many times over during the past decade). As for your belief that intelligence must arise from the creator/designer, I disagree, mostly because I believe intelligence is a functional property of systems, and so it can be learned (and improved) through adaptation of the system. Provided the designer/creator gives the system the capacity to try new strategies and evaluate their quality, the system will develop what we might call 'intelligent strategies'... i.e., those that best suit the system (with respect to its performance measures and beliefs).
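To be concrete about what I mean by 'try new strategies and evaluate them', here's a rough sketch in Python (purely my own illustration; the function names and the simple hill-climbing scheme are arbitrary choices): the designer supplies only a performance measure and a way of perturbing a strategy, and the loop keeps whatever scores better.

def adapt(initial_strategy, evaluate, mutate, generations=1000):
    # Generic strategy adaptation: the designer supplies only
    # 'evaluate' (a performance measure) and 'mutate' (a way of
    # trying new strategies); the loop keeps whatever scores better.
    best = initial_strategy
    best_score = evaluate(best)
    for _ in range(generations):
        candidate = mutate(best)       # try a new strategy
        score = evaluate(candidate)    # judge it by the system's own measure
        if score > best_score:         # keep improvements
            best, best_score = candidate, score
    return best

Nothing in that loop knows in advance what a 'good' strategy looks like; quality only enters through the evaluate() measure, which is what I mean by intelligence being a functional property of the system.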
owl: no, that's not what I'm saying. Suppose you gave a bot/agent a sensor with which to observe an environment and a means of applying labels to objects detectable by that sensor. Suppose you then gave it the capacity to communicate those labels to another bot/agent observing the same environment, and finally gave both bots a means of inferring the meaning of a label received from the other. It is then conceivable that you could devise an evolutionary strategy that permitted the bots to evolve a common language.
In the given experiment, the communication channel is made up of both the sensor and the blinking light. The labels can be anything, but they map directly to positive and negative reinforcements in the environment. In this context it doesn't matter what a bot calls them internally... all that matters is the label it sends to the other bots (how it blinks... or whether it blinks at all).
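Something like the following is what I have in mind for each bot (a sketch I've just made up to illustrate the sensor/label/channel split, not anyone's actual implementation; the one-bit 'blink' label and the food/poison objects are assumptions on my part):

import random

class Bot:
    # Hypothetical bot: it senses an object, maps it to a one-bit
    # 'label' (blink or stay dark), broadcasts that label, and interprets
    # incoming blinks according to its current belief about their meaning.

    def __init__(self):
        # genome: probability of blinking when food (positive reinforcement)
        # is sensed, and when poison (negative reinforcement) is sensed
        self.blink_if_food = random.random()
        self.blink_if_poison = random.random()
        # belief about what an incoming blink means: +1 = approach, -1 = avoid
        self.meaning_of_blink = random.choice([+1, -1])

    def signal(self, sensed_object):
        # emit the label (True = blink, False = stay dark) for what was sensed
        p = self.blink_if_food if sensed_object == "food" else self.blink_if_poison
        return random.random() < p

    def interpret(self, blink):
        # decide whether to approach (+1) or avoid (-1) another bot's signal
        return self.meaning_of_blink if blink else -self.meaning_of_blink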
The evolutionary strategy is 'survival of the power-eaters'... i.e., those that receive the most positive reinforcement are more likely to survive. However, survival isn't guaranteed, since the GA's implementation includes stochastic factors (mutation and selection). Thus there will be situations in which bots gain more by helping everyone to receive more power, rather than just themselves (altruism benefits weak individuals the most). There will also be situations in which those with a strong strategy are better off treading on the weak (altruism does not benefit the powerful).
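The selection step I'm describing might then look something like this (continuing the made-up Bot sketch above; fitness is simply the power/reinforcement a bot accumulated, and both selection and mutation are stochastic):

import random

def next_generation(population, energy, mutation_rate=0.05):
    # Fitness-proportional selection with mutation: 'population' is a list
    # of Bot instances and 'energy' maps each bot to the reinforcement it
    # accumulated this generation.  Selection is only weighted, not
    # deterministic, so the best eaters are more likely to survive, but
    # never guaranteed to.
    weights = [max(energy[bot], 1e-6) for bot in population]  # keep weights positive
    parents = random.choices(population, weights=weights, k=len(population))
    children = []
    for parent in parents:
        child = Bot()
        # inherit the parent's signalling genome...
        child.blink_if_food = parent.blink_if_food
        child.blink_if_poison = parent.blink_if_poison
        child.meaning_of_blink = parent.meaning_of_blink
        # ...with occasional mutation
        if random.random() < mutation_rate:
            child.blink_if_food = random.random()
        if random.random() < mutation_rate:
            child.blink_if_poison = random.random()
        if random.random() < mutation_rate:
            child.meaning_of_blink = -child.meaning_of_blink
        children.append(child)
    return children

Because selection is only weighted, a weak bot that helps raise everyone's intake can still make it into the next generation, which is where the altruistic strategies I mentioned get their chance.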
For those interested: Kevin Korb from Monash University, along with some of his honours students, has investigated the evolution of various social behaviours in software simulations. He has noted, for example, that in certain populations euthanasia is a viable and appropriate strategy for ensuring the long-term strength and viability of the population. If you're interested in his work, you can find more information online at Monash's website.
Cheers,
Timkin
[Edited by - Timkin on January 29, 2008 7:08:38 PM]