
Question about genetic algorithm

Started by May 22, 2006 10:25 PM
10 comments, last by sidhantdash 18 years, 6 months ago
Has anyone ever made a genetic algorithm where the mutation rate and/or method of crossover actually evolves with the organisms? It would run a few generations, see how much the fitness was increased, then change the mutation rate, run a few more generations and compare how much the fitness was increased to the previous run. If the new mutation rate was better it keeps it and the process continues. You could also do this for the method of crossing over.
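A minimal sketch of the scheme described above: run a few generations at the current mutation rate and at a perturbed rate, then keep whichever improved the best fitness more. Everything here (the selection scheme, Gaussian mutation, and all constants) is illustrative, not from any particular library:

```python
import random

def run_generations(pop, mutation_rate, n_gens, fitness):
    """Evolve pop for n_gens at a fixed mutation rate; return (pop, best fitness)."""
    for _ in range(n_gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:len(pop) // 2]                 # truncation selection
        children = []
        while len(children) < len(pop):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(a))            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 1) if random.random() < mutation_rate else g
                     for g in child]                  # per-gene Gaussian mutation
            children.append(child)
        pop = children
    return pop, max(fitness(ind) for ind in pop)

def adapt_rate(pop, fitness, rate=0.1, epochs=20, n_gens=5):
    """After each epoch, keep whichever mutation rate improved the best fitness more."""
    for _ in range(epochs):
        trial = rate * random.choice([0.5, 2.0])      # perturbed candidate rate
        pop_a, best_a = run_generations([ind[:] for ind in pop], rate, n_gens, fitness)
        pop_b, best_b = run_generations([ind[:] for ind in pop], trial, n_gens, fitness)
        if best_b > best_a:                           # new rate did better: keep it
            rate, pop = trial, pop_b
        else:
            pop = pop_a
    return pop, rate
```

The same comparison loop could just as easily switch between crossover methods instead of mutation rates.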
Bugboy
As part of my Bachelor's degree I created an artificial asexual population and studied the evolution of various 'genetic' factors. One of the experiments performed was varying the mutation rate.

With a high mutation rate your population diverges quickly, so selection pressures act more effectively; however, individuals will also quickly diverge from any 'ideal' solution/genetic sequence.

If the mutation rate is low then the creatures will become very specialised but also very susceptible to changes in their environment.

I would predict that in a stable environment the mutation rate would decrease and in an unstable environment the mutation rate would increase.

In fact, wouldn't this reinforce local solutions? Say a population evolves to a local maximum. A new set of solutions is generated with 10% difference. Now suppose a better solution does not lie within a 10% deviation: the algorithm will conclude that the best solutions are those with small mutations and therefore decrease the mutation rate to, say, 9%... ad infinitum until there is no mutation.

Edit: Ah, but of course you could do the opposite to avoid local maxima and increase the mutation rate when a better solution is not found. Overall, though, wouldn't this be worse for the population?
The idea sounds kind of like running Simulated Annealing on a genetic algorithm.
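For illustration, an annealing-style approach would simply cool the mutation rate over the generations according to a schedule. The constants below are made-up examples, not values from any standard source:

```python
import math

def annealed_rate(gen, start=0.5, end=0.01, decay=0.05):
    """Exponentially cool the mutation rate from `start` toward `end`
    as the generation counter `gen` grows."""
    return end + (start - end) * math.exp(-decay * gen)
```

Early generations then explore broadly (high mutation), while later ones fine-tune (low mutation), much like the temperature schedule in simulated annealing.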
"Walk not the trodden path, for it has borne its burden." -John, Flying Monk
Quote: Original post by bugboy
Has anyone ever made a genetic algorithm where the mutation rate and/or method of crossover actually evolves with the organisms?
Yes, this was explored in a paper or two that came out, I think, around 1987 or so. Since it was so long ago, I'm afraid I don't recall any actual details.
I haven't done it, but I have read about it. Evolving mutation rates as your GA progresses takes the GA into the realm of Evolution Strategies. There are a number of ways to do it; some are heuristic, others self-adaptive.

For instance, if you have genes x1, x2, ..., xn in your chromosome, you could add an (n+1)-th gene, m, which is the mutation rate for that particular individual in the population. This gene is evolved along with the other genes in the chromosome. This is the approach when you want the GA itself to control the mutation rate.
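A sketch of that self-adaptive scheme, using the log-normal step-size update common in Evolution Strategies (the `tau` learning-rate constant here is an assumed example value):

```python
import math
import random

def mutate_self_adaptive(chromosome, tau=0.2):
    """Chromosome = [x1, ..., xn, m]; the last gene m is the individual's own
    mutation step size. m is perturbed log-normally first, then the object
    genes are mutated using the new m, so good step sizes get inherited."""
    *genes, m = chromosome
    m = max(1e-8, m * math.exp(tau * random.gauss(0, 1)))  # evolve the rate itself
    genes = [g + m * random.gauss(0, 1) for g in genes]    # mutate with new step
    return genes + [m]
```

Because m rides along through selection and crossover, individuals whose step size suits the current fitness landscape tend to propagate it.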

In the heuristic process, we usually keep track of a 'success_rate' parameter: it tells us what percentage of mutated chromosomes have shown an increase in fitness. After a fixed number of generations, depending on the value of this parameter, you either increase or decrease the mutation rate. For example, if you see that the success_rate is more than, say, 0.2 (the 1/5 rule), you reward the mutation operator by increasing the mutation rate. On the other hand, if the mutations are actually decreasing the fitness of individuals, you penalise the mutation operator by reducing the mutation rate.
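The heuristic described above, usually attributed to Rechenberg as the 1/5 success rule, might look like this (the 0.85 shrink/grow factor is a conventional choice from the ES literature, not from this post):

```python
def one_fifth_rule(rate, success_rate, factor=0.85):
    """Rechenberg's 1/5 success rule: if more than 1/5 of recent mutations
    improved fitness, widen the mutation rate; if fewer did, shrink it."""
    if success_rate > 0.2:
        return rate / factor   # reward the mutation operator: increase rate
    elif success_rate < 0.2:
        return rate * factor   # penalise the mutation operator: decrease rate
    return rate                # exactly 1/5: leave the rate alone
```

You would call this every fixed number of generations, feeding it the fraction of mutations that improved fitness over that window.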

Look up evolving-parameter GAs on CiteSeer. You should find many interesting papers to read on the topic.
I'm being asked if I want to do a PhD on this subject. So I might do what you describe one day. :)
Ugh, the main reason I wanted to know whether this had been done before is that I was planning on making an algorithm like this for my science fair project.

I was hoping nobody had done it before.

Do you think the judges at my regional science fair will have heard of this process before?
Bugboy
I think for real bonus points you should tie it to a real-world application (like solving complicated equations, object recognition, or any of the other common applications).

One thing you'll find with any GA-type stuff is that even with only a few variables you can create extremely complicated behaviours. I wouldn't worry about finding enough to investigate.

I worked on a project for 3 months with a short 2-week design phase. When it came to writing up the results, I had enough new questions for 2 more years' worth of work. All from a short design cycle.
You're in luck and out of luck. Self-adaptive evolutionary computation is one of those things that everyone in the field touches on somewhat. It started in simulated annealing, then got picked up by evolution strategies, then spread to evolutionary programming, and it just kind of became a ubiquitous idea. People are doing similar things in particle swarm optimization too.

Here's a reference:

Angeline, P. "Adaptive and Self-Adaptive Evolutionary Computation." In Computational Intelligence: A Dynamic Systems Perspective, IEEE Press, pp. 152-163, 1995.

Most people talk about self-adaptation for ES and EP, but some have tried it with GAs (I'm pretty sure). You might also look at the papers where a generally good set of parameter settings was found for GAs, sort of like a guideline for starting parameters if you're lost. They basically optimized the parameters of a GA with another GA over a series of different problems.
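A toy sketch of that idea: an outer search over (mutation rate, crossover rate) settings, each scored by running a small inner GA. This is simplified to random sampling of settings rather than a full outer GA, and the sphere function stands in for whatever problem suite those papers actually used:

```python
import random

def inner_ga(mut_rate, cross_rate, gens=20, pop_size=10, n=5):
    """Tiny GA maximising the negated sphere function; returns best fitness.
    Stands in for whatever benchmark the parameters are being tuned on."""
    f = lambda x: -sum(v * v for v in x)
    pop = [[random.uniform(-3, 3) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)
        elite = pop[:pop_size // 2]
        kids = []
        while len(kids) < pop_size:
            a, b = random.sample(elite, 2)
            child = a[:]
            if random.random() < cross_rate:              # one-point crossover
                cut = random.randrange(n)
                child = a[:cut] + b[cut:]
            kids.append([v + random.gauss(0, 0.3) if random.random() < mut_rate else v
                         for v in child])
        pop = kids
    return max(f(x) for x in pop)

def meta_search(trials=10):
    """Outer loop: sample GA parameter settings, keep the best-performing pair."""
    best_params, best_score = None, float('-inf')
    for _ in range(trials):
        params = (random.uniform(0.01, 0.5), random.uniform(0.3, 1.0))
        score = inner_ga(*params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params
```

The papers in question replaced the outer random sampling with a GA, but the nesting (score a parameter setting by running a whole inner GA) is the same.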
Quote: I wrote
Yes, this was explored in a paper or two that came out, I think, around 1987 or so. Since it was so long ago, I'm afraid I don't recall any actual details.
Here's a paper by Spears from 1992 which references some of the earlier works that I was thinking of (Greffenstette; Schaffer et al; Lawrence; etc):

http://www.cs.uwyo.edu/~wspears/papers/adapt.crossover.pdf

I didn't look but I'm guessing most of these papers are probably not online, since they predate the WWW by about 5 years.

This topic is closed to new replies.
