
Neural nets plus Genetic algorithms

Started by February 19, 2002 04:56 PM
58 comments, last by Kylotan 22 years, 8 months ago
quote: Original post by alexjc
Groovy, maybe that's what I'm missing... Can you point out some references to papers, links or something?


Schraudolph & Belew, 1992.

http://citeseer.nj.nec.com/schraudolph92dynamic.html

That's a start.

One day I'll finally get around to putting my old Honours thesis into electronic format and let you all know... It implements a faster method than S&B and gives a much better mathematical analysis of asymptotic convergence. I know I promised a copy to Mike last year... sorry I've been so slack MD... but hey, I've NEARLY finished writing my PhD thesis!!! Maybe then...

Cheers,

Timkin
>The basic idea is that small populations perform hill-climbing, large populations perform more random search (though still directed), and varying the population size - making it smaller when fitness constantly increases over a period and larger as evolution slows down - will mean that you'll traverse areas of the fitness landscape at the (more) optimal rate for those types of landscape.
I also see varying population size as a safer way to diversify a population than altering mutation rates or simulated annealing. Only my humble opinion, mind.
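The schedule described above - shrink while fitness improves, grow when it stalls - can be sketched as a toy GA. The landscape, mutation scale and resize rule below are all illustrative assumptions, not anyone's tested implementation:

```python
import random

def fitness(x):
    # toy landscape: a single optimum at x = 3
    return -(x - 3.0) ** 2

def adaptive_ga(generations=200, min_pop=10, max_pop=100, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-10.0, 10.0) for _ in range(30)]
    best_history = []
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        best_history.append(fitness(pop[0]))
        size = len(pop)
        if len(best_history) > 5:
            improving = best_history[-1] > best_history[-6]
            # shrink while improving (hill-climb), grow when stalled
            # (broader, more random search); move halfway to the target
            target = min_pop if improving else max_pop
            size = max(min_pop, min(max_pop, (size + target) // 2))
        parents = pop[: max(2, size // 2)]           # truncation selection
        pop = [rng.choice(parents) + rng.gauss(0.0, 0.1) for _ in range(size)]
    return max(fitness(x) for x in pop)
```

On this smooth landscape the population converges close to the optimum, but the point of the sketch is only the resize rule, not the toy problem.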

How can you guarantee that a larger population spans a larger space than a smaller one? I assume you meant using an increased population size to get out of a plateau. Also, why would you characterise increasing the population size as safer than varying the mutation rate? I can see how you could say varying the population size AND the mutation rate is safer, but not just the former.
I can't say that increasing the population size is safer than increasing the mutation rate or safer than changing both at once. I hope I was clear that I would be carrying out experiments to look at the effect on the speed of traversing a fitness landscape using several different methods. I merely stated my gut feeling.
As to whether an increased population size allows divergence of the genome of that population: well, that's also a statement to be tested. However, the larger the population, the less likely an individual is to reproduce with the optimum member (considering that, with noise in the evaluation, there might not be a single optimum member), depending on the method of parent selection (for instance, rank selection). The less optimal the parents of a child, the less optimal the child (generally speaking, but not always, else evolution, beyond hill-climbing, wouldn't work), and the more likely the child is to diverge from the population, and hence from the local optimum. So the larger the population, the less optimal the average child, and the more diverged the population becomes.
This is logical theory; I will put it into practice, though feel free to argue with it.
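The rank-selection point can be made concrete with a small sketch (toy genomes and fitness, purely illustrative): under linear rank selection the single best member's share of matings is about 2/(n+1), so it shrinks as the population grows, which is exactly the divergence effect described above.

```python
import random

def rank_select(population, fitness, rng):
    """Linear rank selection: selection pressure depends on rank
    order, not on raw fitness values."""
    ranked = sorted(population, key=fitness)          # worst first
    weights = list(range(1, len(ranked) + 1))         # rank 1..n
    return rng.choices(ranked, weights=weights, k=1)[0]

rng = random.Random(42)
pop = list(range(10))            # toy genomes 0..9, fitness = value
picks = [rank_select(pop, lambda x: x, rng) for _ in range(10000)]
share_of_best = picks.count(9) / 10000
# with n = 10 the best member is picked ~2/(10+1), about 18% of the
# time; with a bigger population that share drops, so fewer children
# have the single best member as a parent
```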

I also have it on good authority that, in general, a good simulated annealing algorithm will beat a GA hands down 99 times out of 100. I may manage to show this or show the opposite; either way it should be fun playing with the ideas.
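For reference, a bare-bones simulated-annealing loop of the kind being compared against a GA here. The landscape and the geometric cooling schedule are assumptions for illustration:

```python
import math
import random

def simulated_annealing(fit, x0, steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Minimal SA sketch: always accept improving moves, accept
    worsening moves with probability exp(delta / T), and cool T
    geometrically each step."""
    rng = random.Random(seed)
    x, fx = x0, fit(x0)
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)
        fc = fit(cand)
        if fc > fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# same kind of toy landscape a GA might face: maximum at x = 3
best_x, best_f = simulated_annealing(lambda x: -(x - 3.0) ** 2, x0=-8.0)
```

Early on (high T) it wanders much like a large, diverse population; late on (low T) it hill-climbs like a small one, which is one way to read the population-size analogy above.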

Mike
quote: Original post by edotorpedo


A good example is the quake bot. Each bot has its own neural net to determine its actions. At first these neural nets are random. You let a number of these bots battle for a while in some sort of level. After this you pick the best two bots (least damage taken, most kills), perform some sort of crossing over/mutation on their neural nets, and let the bots battle again.

search for neuralbot in google.


I thought I'd have a look at these Neural Bots and even after 1000 generations, they're pretty stupid. What factors would you say influence their behaviour? E.g. how would you tweak the bots to make them actually useful?


Oli


All the best problems start with C

.:: K o s m o s E n g i n e ::.
Funny you say that - I seem to remember the original Quake Reaperbots used a similar architecture and they were superb. They didn't act human, but they certainly acted (apparently) intelligently.

[ MSVC Fixes | STL | SDL | Game AI | Sockets | C++ Faq Lite | Boost ]
Anonymous:

I've been personally using the toroidal GA approach recently, using short random walks to find the best individual. That works great for creating "fitness islands", which isolate genetic pools and provide great diversity. The bigger the torus, the more diversity... guaranteed!
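For what it's worth, the wrap-around neighbourhood that lets a toroidal layout form those isolated "fitness islands" is easy to sketch. The grid size and the 4-neighbourhood are illustrative assumptions:

```python
def torus_neighbours(i, j, w, h):
    """4-neighbourhood on a w-by-h grid wrapped into a torus; if
    mating is restricted to these neighbours, good genes spread only
    one cell per generation, so distant regions of the grid evolve in
    near-isolation and diversity is preserved."""
    return [((i - 1) % w, j), ((i + 1) % w, j),
            (i, (j - 1) % h), (i, (j + 1) % h)]

# cell (0, 0) on a 5x5 torus wraps around to the opposite edges:
# torus_neighbours(0, 0, 5, 5) == [(4, 0), (1, 0), (0, 4), (0, 1)]
```

A bigger torus means more cells between any two islands, hence the "more diversity" claim above.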

MikeD, Timkin:

Come to think of it, that's probably why I find the hill-climbing works better. Convergence takes much longer to happen with such a model.

I'll have to see if Schraudolph & Belew's ideas can apply here too.

downgraded:

You'd have to redesign the whole thing. I'm willing to bet a lot of money that such a neural network model with those inputs/outputs would never perform well no matter how much you tweak it. MODULARITY is key imho.


Artificial Intelligence Depot - Maybe it''s not all about graphics...



I just thought of something on combining NN+GA that might improve a bot like the neuralbot. I don't know if this is possible, though. My idea is this one:
How about we have multiple (separate) NNs to control the actions of one bot. For example, we would have a separate NN for walking and pathfinding, a separate NN for firing, a separate NN for choosing weapons, etc.

If you can provide a fitness function for each of these NNs, you could test all bots on all their NNs. That way, for the offspring you can pick the walking NN from bot 1, the firing NN from bot 4, and the weapon selection NN from bot 23. The offspring would then have multiple parents.
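That per-module recombination could be sketched like this. The bot ids, module names and scores are all made up for illustration:

```python
MODULES = ["walking", "firing", "weapon_choice"]

def modular_offspring(bots, module_scores):
    """Build a child by taking, for each behaviour module, the network
    of whichever parent scored best on that module's own fitness
    function. bots: {bot_id: {module: network}};
    module_scores: {bot_id: {module: score}}."""
    return {m: bots[max(bots, key=lambda b: module_scores[b][m])][m]
            for m in MODULES}

bots = {1: {"walking": "net_w1", "firing": "net_f1", "weapon_choice": "net_c1"},
        4: {"walking": "net_w4", "firing": "net_f4", "weapon_choice": "net_c4"}}
scores = {1: {"walking": 0.9, "firing": 0.2, "weapon_choice": 0.5},
          4: {"walking": 0.1, "firing": 0.8, "weapon_choice": 0.7}}
child = modular_offspring(bots, scores)
# child == {"walking": "net_w1", "firing": "net_f4", "weapon_choice": "net_c4"}
```

In a real GA you would presumably still add crossover and mutation within each module's weights rather than copying whole networks, but the many-parents structure is the point here.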

Can any of you guys comment on whether this would work, and what the best technical approach would be?

Edo
Edo,
Yes, IMO combining several ANNs (and techniques - ANNs aren't the answer to everything!) is the best way to go.

It is much easier and faster to train networks to perform just one task than to train an enormous network which has dozens of factors contributing to its fitness function. That's the problem with the neuralbot. I agree with Alex. I don't believe that you'd get decent behavior from that type of setup even after a zillion generations.

To add to the modularity idea, you may even evolve an ANN which acts as a switch. Its job is to choose which ANN (or combination of ANNs) should be switched on at any one time. So you could evolve a network for pursuit, one for flight, one for defence, etc. and let the switching ANN choose whichever it thinks is the most appropriate at any moment in time.
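The switching idea could look something like this. The behaviour names, scores and actions are illustrative; in practice the gate scores would be the outputs of the evolved switching network:

```python
def select_behaviour(gate_scores, modules):
    """The 'switch' network emits one score per behaviour module;
    whichever module scores highest drives the bot this tick.
    gate_scores: {name: score}; modules: {name: action}."""
    active = max(gate_scores, key=gate_scores.get)
    return active, modules[active]

behaviour, action = select_behaviour(
    {"pursuit": 0.7, "flight": 0.2, "defence": 0.1},
    {"pursuit": "chase_enemy", "flight": "run_away",
     "defence": "hold_position"})
# behaviour == "pursuit", action == "chase_enemy"
```

A soft variant would blend module outputs weighted by the gate scores instead of picking one winner, but the hard switch matches the description above.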





Stimulate
edotorpedo,

Yes. It does work!! More on that very soon.

The interesting part will be to get the bot to generate the fitness functions itself!


quote: Original post by edotorpedo

How about we have multiple (separate) NNs to control the actions of one bot. For example, we would have a separate NN for walking and pathfinding, a separate NN for firing, a separate NN for choosing weapons, etc.



How would one go about applying an ANN to pathfinding? How would its world representation differ from (or resemble) what one would use with A*? Would it actually perform better or worse than A*?

An enquiring mind would like opinions. Even better if you have actually done it.

Thanks,

Eric

This topic is closed to new replies.
