Quote:
Original post by willh
Crossover IS a mutation operation. This is a mathematical certainty.
The two concepts are different in the algorithmic sense. Of course it would be possible, if you wrote your mutation operator in a certain way, to alter a solution such that it changed in a similar way to applying a crossover operation. But the chance of a typical mutation producing the same change as a typical crossover is tiny. The point of crossover is to make use of information elsewhere in the population, and the point of mutation is to introduce new information. The fact that the two can occasionally overlap in their results is not terribly relevant.
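To make the distinction concrete, here is a minimal sketch of the two operators on bitstring genomes (the function names and the 1% mutation rate are illustrative choices, not anything from the posts above):

```python
import random

def one_point_crossover(a, b):
    """Recombine two parents at a random cut point:
    reuses information already in the population."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def point_mutation(genome, rate=0.01):
    """Flip each bit independently with small probability:
    introduces information not taken from another member."""
    return [bit ^ (random.random() < rate) for bit in genome]

parent_a = [0] * 8
parent_b = [1] * 8
child = one_point_crossover(parent_a, parent_b)
# The child is always some zeros followed by ones. For a mutation of
# parent_a to produce the same pattern, several specific bits would
# have to flip together, which at a 1% per-bit rate is vanishingly
# unlikely -- hence the overlap between the operators is negligible.
```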
Quote:
Quote:
Original post by Kylotan
That member of the population is chosen using some routine that favours members with higher fitnesses. Therefore the parts of the solutions that are better persist unchanged into future generations.
The logical conclusion of this is that all members will eventually become nearly identical. At this stage cross-over will have little effect as it becomes no different than a 'simple' mutation operation.
Yes, they typically become closer together. That's convergence, and it is exactly what you're looking for. Compare this to other optimisation algorithms, which typically follow a single solution from start to finish - they only ever have one candidate, not just near the end. And of course convergence slows as the algorithm progresses; this too is common in optimisation problems. Once you home in on the optimum there is often less room left to improve.
But it doesn't become the same as a simple mutation operation. Quite the opposite in fact - as the solutions converge, mutation will produce quite different results to crossover.
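A tiny example of that last point, assuming one-point crossover and per-bit mutation on a nearly converged pair of parents (the genomes here are made up for illustration):

```python
import random

# A converged population: the two parents differ in only one position.
parent_a = [1, 0, 1, 1, 0, 1]
parent_b = [1, 0, 1, 1, 1, 1]

def one_point_crossover(a, b):
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def point_mutation(genome, rate=0.2):
    return [bit ^ (random.random() < rate) for bit in genome]

child = one_point_crossover(parent_a, parent_b)
mutant = point_mutation(parent_a)
# Since the parents agree almost everywhere, the crossover child can
# only be parent_a or parent_b -- it stays inside the converged region.
# The mutant, by contrast, can land anywhere near parent_a, including
# outside anything currently in the population.
```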
Quote:
2. If you look at convergence as TIME spent (i.e. milliseconds) and not generations required, then having an effective crossover scheme (which may work very well for the first few generations of a population) could actually result in a decrease in convergence time. This is due to the number of poor candidate fitness evaluations that are required for each generation, resulting in wasted cycles. It's like exploring obviously dumb moves in a gametree-- wasted time with no return.
Compare this with many other optimisation algorithms, which don't even have a game tree - they just explore one promising-looking route and often need some added stochastic element or a restarting procedure to escape local extrema.
Fitness evaluations in GAs are not particularly different from heuristics in other search or optimisation algorithms. Their usefulness is proportional to the quality of the heuristic.
Also, selection is only as expensive as you make it - tournament selection, for example, is incredibly cheap.
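For reference, a minimal tournament selection sketch (names and the stand-in fitness function are mine, purely for illustration): it samples k individuals and keeps the fittest, so it costs O(k) per selection with no sorting and no global fitness normalisation.

```python
import random

def tournament_select(population, fitness, k=2):
    """Draw k random individuals and return the fittest one."""
    contestants = random.sample(population, k)
    return max(contestants, key=fitness)

population = list(range(10))   # stand-in genomes
fitness = lambda x: x          # stand-in fitness: bigger is better
winner = tournament_select(population, fitness, k=3)
```

Increasing k raises the selection pressure; with k equal to the population size it degenerates into always picking the single best member.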
Quote:
We'll have to agree to disagree that crossover is required for a GA; it's not a requirement in nature so I don't see why it should be a requirement when talking of algorithms.
Genetic algorithms use a biological metaphor to explain their constituent parts, that's all. It doesn't mean that anything that may exist in biology makes up a valid GA.
Quote:
In practical terms I do not think it is important to always include crossover in to a framework. You could go a step further and include multiple different crossover strategies (one point, two point, etc..) but in many cases I think the effort would be better spent on implementing more intelligent mutation operators (or any number of things).
Without crossover you're not sharing useful information across the population, so there is little point in maintaining that population. If you have a problem where there is no useful information to share, that's fine, but I can't think of one myself. If such a problem did exist, it would probably be better to run many instances of a cheaper single-member optimisation algorithm serially rather than dressing it up as a GA.