
Neural nets plus Genetic algorithms

Started by February 19, 2002 04:56 PM
58 comments, last by Kylotan 22 years, 8 months ago
quote: Original post by Kylotan
Why is it that nobody can seem to present a compelling example for the use of GAs? Not to criticise your site, as the tutorials themselves are very good... it's just that the 'sample application' for GAs is usually something very artificial, whereas the sample app for a NN is often something that makes sense, such as character recognition.



I don't know that I agree with that. I think there are several compelling examples for the use of GAs; they're just somewhat specialized, and as a result they're only compelling in certain circumstances.

For example, both Cloak, Dagger, and DNA and the Creatures series use GAs for a number of functions; heck, CDDNA is built on them exclusively. One could argue that The Sims could be done along these lines, though I'm not sure how quickly things would converge in those circumstances.

The problem is that GAs can take a long time to converge, and they won't necessarily give you an optimal result. Mutations can go down routes that are self-destructive and make whole lines of breeding a waste. When we do computer AI we have some advantage over nature, in that we can just wave our hands and create new generations out of thin electrons, but we've still potentially lost time and CPU cycles.

The right tool for the right job, I always say.



Ferretman

ferretman@gameai.com
www.gameai.com
From the High, Cold, Snowy Mountains of Colorado




Edited by - Ferretman on March 1, 2002 2:27:05 PM


I meant in the context of tutorials, really.

[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost ]
quote: Original post by MikeD
The basic idea is similar to a subsumption architecture (see Rodney Brooks, "Intelligence Without Reason"), where you start out with simple behaviours and build them up: first get the bot to walk from A to B, quite happily using A* for the path finding and the neural net for the path traversal; then get the bot to avoid dynamic obstacles while walking, then avoid being shot by other bots, and finally hunt down other bots and shoot them. Brooks' original idea had each level frozen once its functionality was complete, with the next level built on top of it.

By allocating a neural network to perform function (a), evolving it, then freezing it (or not; I'm not sure if this is truly necessary) before adding another section to the network (either interconnected or separate, depending on whether the functions are connected) and evolving for function (b), while continuing to keep evolutionary pressure on the first function, you should, in theory, end up with a network that can perform all the functions, having been built up incrementally. If I remember correctly, that's how the fighter jet example was done by the people at Creature Labs.

Before you ask, I haven't implemented this kind of idea; however, it has been shown in academia that it is quite difficult to evolve eight functions all at once from scratch, and that incremental evolution makes the job a lot easier.

Mike
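The staged scheme MikeD describes could be sketched roughly like this. Everything here is invented for illustration: the toy fitness functions standing in for "functions (a) and (b)", the genome-as-weight-list encoding, and the GA parameters are all assumptions, not anything from the actual Creature Labs work.

```python
import random

random.seed(0)

# Toy fitness terms for functions (a) and (b): each block of four genes
# "solves" its task when the genes approach a target pattern.
def task_a(genome):          # uses genes 0..3, optimum at 0.5 each
    return -sum((w - 0.5) ** 2 for w in genome[:4])

def task_b(genome):          # uses genes 4..7, optimum at -0.5 each
    return -sum((w + 0.5) ** 2 for w in genome[4:8])

def mutate(genome, frozen, rate=0.2, sigma=0.2):
    """Perturb unfrozen genes only; frozen indices are copied untouched."""
    return [w if i in frozen or random.random() > rate
            else w + random.gauss(0, sigma)
            for i, w in enumerate(genome)]

def evolve(seed, fitness, frozen=frozenset(), generations=80, pop_size=24):
    """Very small GA: keep the fitter half, refill with mutants of it."""
    pop = [mutate(seed, frozen) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        pop = parents + [mutate(p, frozen) for p in parents]
    return max(pop, key=fitness)

# Stage 1: evolve the first module for function (a) alone.
stage1 = evolve([0.0] * 4, task_a)

# Stage 2: bolt on a second module, freeze the first block's genes, and
# evolve for (b) while the combined fitness keeps pressure on (a).
stage2 = evolve(stage1 + [0.0] * 4,
                lambda g: task_a(g) + task_b(g),
                frozen=set(range(4)))
```

The point of the sketch is the search-space reduction mentioned later in the thread: stage 2 only has to explore the four new genes, because the first module's genes are frozen at their stage-1 values.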


I am a big fan of Rodney Brooks and subsumption architecture (going back to 1995 and the RTS Enemy Nations, in which I was greatly influenced by subsumption). Correct me if I am wrong, but isn't the above using an ANN for higher-level decision-making, while A* continues to be relied on to perform the pathing function?

Eric


Geta: That was exactly my point: A* for path planning (node generation), the neural network for path traversal (the real-time walking of the path, using the current node as part of the input). As usual, different usage of words gets in the way of identical reasoning ;-)

Mike
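That division of labour could be sketched as follows. The grid, the A* helper, and especially `traverse_step` are hypothetical: the real scheme would use an evolved neural network for traversal, and the trivial steer-toward-the-node function here is only a stand-in for it.

```python
import heapq
import math

def astar(grid, start, goal):
    """Plain A* on a 4-connected grid of 0 (free) / 1 (blocked) cells.
    This is the deliberative half: it plans a node list once."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    came, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0]) \
               and grid[nxt[0]][nxt[1]] == 0 and g[cur] + 1 < g.get(nxt, 1e9):
                g[nxt] = g[cur] + 1
                came[nxt] = cur
                heapq.heappush(open_set, (g[nxt] + h(nxt), nxt))
    return None

def traverse_step(pos, node):
    """Stand-in for the evolved network: input is (position, current node),
    output a unit heading. The NN would map these inputs to motor outputs."""
    dx, dy = node[0] - pos[0], node[1] - pos[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))            # plan once...
heading = traverse_step((0.0, 0.0), path[1])  # ...then steer node by node
```

The planner runs occasionally and hands over a node list; the controller runs every frame with the current node as part of its input, which is exactly the split Mike and Geta converge on.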
I'm confused.

Confused because I don't know if I've got the wrong end of the stick, or if you're misinterpreting Brooks.

This is what I've gathered from reading literature on the topic; there's in fact a paper that clarifies common misconceptions about behaviour-based agents (in robotics too).

Brooks's paper is called "Intelligence Without Representation", not "Without Reason". Essentially, that means the behaviours are all reactive. Brooks suggests a subsumption hierarchy, which is a way of getting the behaviours to arbitrate.

I'm not a fan of subsumption at all, since I don't like the way the lower layers get completely turned off!
I'm not a fan of behaviour-based agents, as the desired intelligence is expected to arise from reactive behaviours.

What's been described here for the bot is a hybrid architecture, whereby some components are deliberative and not just reactive. The fact that you're building them up incrementally doesn't imply subsumption, since you're not subsuming lower-level behaviours. It's just modularity, i.e. splitting the problem into sub-parts.

Does that make sense?


Artificial Intelligence Depot - Maybe it's not all about graphics...


I don't see the two approaches as being contradictory, and I did suggest their similarity, not state that they were identical. Subsumption architectures were implemented by Brooks using FSMs, not neural networks, though as a methodology the ideas can be translated over. Brooks froze each layer before building the next layer on top of it, each layer operating asynchronously and performing one behaviour (which is a pretty subjective statement). I don't think I've dived a million miles away from the subsumption architecture Brooks proposed in comparing it to what's being discussed. If you could point out the differences, I'd be happy to admit I'm wrong (though that happens sooooo rarely).

As to the papers, Brooks wrote "Intelligence Without Representation" first (back in '87), but both papers, "Intelligence Without Reason" and "Intelligence Without Representation", were published in '91. Both are quite similar in content, but I can't remember the exact differences, as I read them back to back over a year ago. Look here for a reference: http://citeseer.nj.nec.com/cs?q=intelligence+without&cs=1

Freezing the lower behaviours when building the higher behaviours seems mad to me as well; hence my statements about having everything work as a cohesive unit and evolving the choice modules together as a final step when building a Quake bot. Brooks also never used GAs as far as I know, and builds everything by hand. It seems like many people in this field do some good work, then go a bit mad and screw it up by re-adopting old methodologies or crazy ideas (Brooks, De Garis, Warwick). But then, I am indoctrinated into the Sussex approach to evolutionary science and behaviour-based robotics, and they're pretty darn indoctrinating in terms of the _right_ way of doing things.

I do love the idea of intelligence starting with reactive behaviours as a rule, though. From the day I first understood Braitenberg vehicles, and how complex behaviour could arise without any intelligence at all, I fell in love with this approach. It's true that Deep Blue will beat any ANN hands down in a game of chess using representation, but this doesn't make it the right (most robust, most potential-filled) approach.

A question, though: what makes my description a hybrid architecture? What are the differences between what I described and what you consider subsumption to be (in an absolute, precise sense)?

I look forward to your reply.

Mike
It's hybrid in more than one way.

Both reactive and deliberative algorithms are used: obstacle avoidance and A* path-planning respectively. I think behaviour-based robots are purely reactive, officially.

Secondly, there's no clear layering, either horizontal or vertical. The path-planning may sit on top of the obstacle avoidance, but then you have the firing behaviour, which is independent and controls a different part of the bot.

Freezing one aspect of a system and getting the others to adapt is known as incremental evolution. Very useful indeed! The search space gets reduced drastically.

Mike, I'm just trying to make sure people don't misinterpret this architecture as subsumption or as behaviour-based...


Artificial Intelligence Depot - Maybe it's not all about graphics...


To make it clear, I don't disagree, Alex; you're right. It doesn't mean what I've said isn't valid for creating a GA-based neuralbot (not that you've disagreed there), but you're totally right in making the underlying ideas of subsumption obvious and distinct.

Mike
quote: Original post by fup
Timkin: what do you mean by "multi-niche"? Can you elaborate a little for us, please?


Sorry for the delay in replying... I've been away for a few days with my wife, celebrating our wedding anniversary! Back at work now... semester has started and there are these little kids all over campus... oh wait, they're 18! Damn... I'm getting old! 8^(

Anyway... a multi-niche artificial neural network can be thought of as one large ANN that has several regions that interact only weakly. Imagine, say, 5 ANNs, each with a differing number of nodes (say between 20 and 30), that have just a few connections between them. When input arrives from the environment, it arrives at a subset of the input nodes; these don't all have to be in the same niche. The network computes the response in the usual manner.

You might think that one large network would suffice. The benefit of niching is that you can do different things to the network in different regions... perhaps use a different response function in different niches. In the Creatures series, a kind of computational neuro-chemistry was introduced that affected the activation and response behaviour in niches. For example, if the Creature had not eaten in a while, the chemicals related to the emotional part of the brain would be altered (as would those related to fatigue), and the emotional response of the Creature would be affected. So, you could place the Creature in exactly the same situation, give it a stimulus, and it would probably act differently depending on whether it was hungry or not.
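A toy sketch of that idea, with all specifics invented: the niche count and sizes, the random weights, the choice of response functions, and the single "hunger" scalar standing in for Creatures-style neuro-chemistry are assumptions for illustration only.

```python
import math
import random

random.seed(1)

NICHES = 3
SIZE = 5  # nodes per niche (the post suggests 20-30; kept small here)

# Dense weights inside each niche, sparse weights between niches.
intra = [[[random.uniform(-1, 1) for _ in range(SIZE)] for _ in range(SIZE)]
         for _ in range(NICHES)]
# (src_niche, src_node, dst_niche, dst_node) -> weight
inter = {(0, 2, 1, 3): 0.4, (1, 0, 2, 4): -0.6}

def respond(inputs, hunger=0.0):
    """One forward pass. `hunger` shifts the response function of niche 0
    (the 'emotional' niche), mimicking the neuro-chemistry idea."""
    act = [list(inputs[n]) for n in range(NICHES)]
    out = [[0.0] * SIZE for _ in range(NICHES)]
    for n in range(NICHES):
        for j in range(SIZE):
            s = sum(intra[n][i][j] * act[n][i] for i in range(SIZE))
            # a different response function per niche; niche 0 is modulated
            if n == 0:
                out[n][j] = math.tanh(s + hunger)
            else:
                out[n][j] = 1.0 / (1.0 + math.exp(-s))
    # the few weak cross-niche connections
    for (sn, si, dn, dj), w in inter.items():
        out[dn][dj] += w * act[sn][si]
    return out

stimulus = [[0.5] * SIZE for _ in range(NICHES)]
sated = respond(stimulus, hunger=0.0)
hungry = respond(stimulus, hunger=1.5)
# same stimulus, different internal "chemistry" -> niche 0 responds
# differently, while the other niches are unaffected
```

The interesting property is local: changing the chemistry only perturbs the niche it modulates, which is exactly why Timkin's point about doing "different things to the network in different regions" doesn't work with one undifferentiated large network.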

There's been a lot written about Creatures from various perspectives. Try a web search for literature!

Good luck,

Timkin
Thanks for clarifying that, Timkin. I've been meaning to experiment with a similar approach myself, but alas, I'm too busy at the moment to devote the amount of time required.



Stimulate

This topic is closed to new replies.
