
Genetic Algorithm vs Neural Network

Started by wonjun October 11, 2006 04:58 AM
14 comments, last by Timkin 18 years, 1 month ago
Could anyone please give me a brief answer on when we are better off using a genetic algorithm versus a neural network? I've seen that the same types of problems can be solved by either one, but I'm not sure when we should choose one or the other. This is an abstract question, so any quick example would be really helpful. Thank you, wonjun
Brief?

Neural networks work well when you want to try and recognise certain patterns in a group of inputs, even when some of those inputs vary somewhat.

Genetic algorithms work well when you want to find the best combination of several parameters to achieve a certain result, but can't easily predict which combinations are best without trial and error.

I don't think I've ever seen a problem that could be equally well solved by both.
Both are (roughly) regression methods.

The only reason to use neural networks is if you MUST feed each item in your training set one by one.

The only reason to use a GA is... I can't think of one, sorry.

For more serious regression methods, if you are interested:

If your model is a mixture of several distributions, use an EM algorithm.

If you can compute the derivative of your fitness model, use Levenberg-Marquardt (see the sketch below).

If you cannot compute the derivative, but have a lot of processing time, use Nelder-Mead.

If you cannot compute the derivative or cannot describe the model, use SVM for regression. It's almost always a good option too.
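
As a concrete sketch of two of those options (my example, assuming SciPy is available; the exponential-decay data and model below are made up purely for illustration): Levenberg-Marquardt via curve_fit when the model is smooth enough for derivatives, and derivative-free Nelder-Mead via minimize when it isn't.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize

# Toy data: noisy samples from an exponential decay (made-up example).
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
y = 2.5 * np.exp(-1.3 * x) + 0.05 * rng.standard_normal(x.size)

def model(x, a, b):
    return a * np.exp(-b * x)

# Levenberg-Marquardt: curve_fit handles the residuals/Jacobian internally.
(a_lm, b_lm), _ = curve_fit(model, x, y, p0=[1.0, 1.0], method="lm")

# Nelder-Mead: derivative-free, only needs a scalar loss to minimize.
loss = lambda p: np.sum((model(x, *p) - y) ** 2)
res = minimize(loss, x0=[1.0, 1.0], method="Nelder-Mead")

print("Levenberg-Marquardt fit:", a_lm, b_lm)
print("Nelder-Mead fit:        ", *res.x)
```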
You don't have to choose one of them, since you can do both at the same time (applying a genetic algorithm to the weights of a neural net). The primary difference I notice when choosing one is that a genetic algorithm solves for one solution, while a neural net solves for a function over solutions. A genetic algorithm just finds one (semi-)optimal set of numbers and doesn't actually do any of the AI-ing itself. A genetic algorithm could be used to find the optimum value of each piece in a chess AI, for example, though the chess AI itself is primarily some type of search-tree algorithm. A neural net, on the other hand, is a standalone AI that takes inputs and gives outputs. In short: a neural net gives you a function, a genetic algorithm gives you a single set of solutions.
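
To make the "do both at the same time" idea concrete, here's a minimal sketch (my own toy example, not from the thread): a tiny fixed-topology net whose 17 weights are evolved by a plain generational GA; the XOR task and all of the constants are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(weights, x):
    """Tiny fixed-topology net: 2 inputs -> 4 hidden (tanh) -> 1 output."""
    w1, b1 = weights[:8].reshape(2, 4), weights[8:12]
    w2, b2 = weights[12:16].reshape(4, 1), weights[16]
    return np.tanh(x @ w1 + b1) @ w2 + b2

def fitness(weights):
    """Placeholder task: approximate XOR; higher fitness is better."""
    x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    target = np.array([[0], [1], [1], [0]], dtype=float)
    return -np.mean((forward(weights, x) - target) ** 2)

# Plain generational GA over the 17 weights: selection, crossover, mutation.
pop = rng.standard_normal((50, 17))
for generation in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]            # keep the 10 fittest
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        child = np.where(rng.random(17) < 0.5, a, b)   # uniform crossover
        child = child + 0.1 * rng.standard_normal(17)  # Gaussian mutation
        children.append(child)
    pop = np.array(children)

print("best fitness:", max(fitness(ind) for ind in pop))
```

Note that the GA never needs gradients; it only needs a fitness score per weight vector, which is why the two techniques compose so easily.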
Quote: Original post by Steadtler
The only reason to use a GA is... I can't think of one, sorry.

I'll try to think of one:

The only reason to use a GA is if your parameter space is a set of Turing machines.
Steadtler,

You consistently say that genetic algorithms are regression methods, but that just isn't true. A genetic algorithm is a (weak) heuristic search method. Applied to the traveling salesman problem, for example, we're essentially looking for the best tour, which is a path-finding/graph problem. Or, used as a feature-selection method feeding into a linear/non-linear regression, the GA is again acting as a search method.
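
As a toy illustration of that "GA as search" view (my example, not from the thread): a permutation-encoded evolutionary search on a small random TSP instance. For brevity it uses only a 2-opt-style reversal mutation with truncation selection rather than a full GA with crossover, but the search-not-regression character is the same.

```python
import numpy as np

rng = np.random.default_rng(2)
cities = rng.random((12, 2))            # 12 random city coordinates

def tour_length(tour):
    pts = cities[tour]
    return np.sum(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1))

def mutate(tour):
    """Reverse a random sub-segment of the tour (2-opt style move)."""
    i, j = sorted(rng.integers(len(tour), size=2))
    child = tour.copy()
    child[i:j + 1] = child[i:j + 1][::-1]
    return child

pop = [rng.permutation(len(cities)) for _ in range(40)]
for generation in range(500):
    pop.sort(key=tour_length)
    survivors = pop[:10]                # truncation selection
    pop = survivors + [mutate(survivors[rng.integers(10)]) for _ in range(30)]

print("best tour length:", tour_length(min(pop, key=tour_length)))
```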

Also, the methods you mention (for "serious" problems) are optimization methods applied to regression problems, not regression methods themselves. SVM regression is itself a regression method that uses kernel projection and quadratic programming as its optimization method.

To the OP: if you're having trouble deciding whether to use a GA or an NN, you probably need to understand both more thoroughly before using either one. They are very different techniques and have their applications in potentially very different domains. There is some cross-over, but this is usually the result of changing the formulation and representation to achieve the desired result.

-Kirk

[Edited by - kirkd on October 11, 2006 12:07:35 PM]
Depends on what you're doing and what you're looking for.

Neural nets are a form of regression; that is, they are used for predicting a continuous value. Their output is a weighted (linear) combination of nonlinear basis functions (often sigmoids).

Genetic algorithms are essentially a not-so-local stochastic hill-climber. Depending on the problem, they can be applied in a number of ways.

Quick example:
One interesting experience I have had with both of them, is on the reinforcement learning problem: the mountain car problem.
Here we applied standard reinforcement learning techniques, with a neural net as an approximation method rather than using a Q-table.
We then compared it to using a GA to directly search for an optimal policy.
The GA performed much better, finding solutions that got the car out of the valley in fewer iterations.

Why did it work better in this case? I'm not entirely sure, but because the search space is easily described as a series of "accelerate right"s or "accelerate left"s, it makes sense that just looking for a good combination of these should be at least as easy as trying to fit some function.
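
For anyone who wants to poke at this, here's a rough sketch in the same spirit (my reconstruction, not the original experiment): a GA over fixed-length action sequences, scored by simplified classic mountain-car dynamics in the Sutton & Barto style; the constants and operators are illustrative, not tuned.

```python
import numpy as np

rng = np.random.default_rng(3)
HORIZON = 300

def rollout(actions):
    """Simplified classic mountain-car dynamics; actions in {-1, 0, +1}.
    Returns the number of steps taken to reach the goal (lower is better)."""
    pos, vel = -0.5, 0.0
    for t, a in enumerate(actions):
        vel = np.clip(vel + 0.001 * a - 0.0025 * np.cos(3 * pos), -0.07, 0.07)
        pos = np.clip(pos + vel, -1.2, 0.6)
        if pos >= 0.5:
            return t
    return HORIZON                      # never escaped the valley

# GA over action sequences: each gene is "accelerate left / coast / right".
pop = rng.integers(-1, 2, size=(60, HORIZON))
for generation in range(100):
    costs = np.array([rollout(ind) for ind in pop])
    parents = pop[np.argsort(costs)[:15]]               # keep the 15 fastest
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(15, size=2)]
        cut = rng.integers(HORIZON)                     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(HORIZON) < 0.02               # mutate a few genes
        child[flip] = rng.integers(-1, 2, size=flip.sum())
        children.append(child)
    pop = np.array(children)

print("fewest steps to goal:", min(rollout(ind) for ind in pop))
```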

Just remember that there's no silver bullet; nothing is going to work better than everything else in all cases (pretty much).
Quote: Original post by kirkd
Staedtler,

You consistently say that genetic algorithms are regression methods, but that just isn't true.
-Kirk


Classifying those algorithms is difficult. GAs are (almost always) used to search a parameter space, looking for a better fit, which is regression in the general sense. Hill-climbing is another example of a non-linear regression method that is also a search method. If the OP is considering using either a GA or an ANN, he could probably use other regression or optimization methods.

In any comparative study of regression methods I've seen, GA and ANN are on the bottom end, so it's probably a good idea to at least look at the methods I've mentioned.
Quote: Original post by Steadtler

Classifying those algorithms is difficult. GAs are (almost always) used to search a parameter space, looking for a better fit, which is regression in the general sense. Hill-climbing is another example of a non-linear regression method that is also a search method. If the OP is considering using either a GA or an ANN, he could probably use other regression or optimization methods.

In any comparative study of regression methods I've seen, GA and ANN are on the bottom end, so it's probably a good idea to at least look at the methods I've mentioned.


I agree 100% that people should have a variety of tools available in their toolbox, and the methods you mentioned are indeed the pick of the litter for numerical optimization. But Nelder-Mead and Levenberg-Marquardt are not strictly regression methods; they are numerical optimization methods. You could apply them to any numerical optimization problem given the requirements you described, including regression. They can also be applied to neural nets.
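
To illustrate that distinction: the very same Nelder-Mead routine will minimize any scalar objective, whether or not a regression problem sits behind it. A minimal sketch using SciPy's built-in Rosenbrock function (my example, not from the thread):

```python
from scipy.optimize import minimize, rosen

# Nelder-Mead applied to a plain numerical optimization problem (no data,
# no regression): the Rosenbrock function has its minimum at (1, 1, 1).
result = minimize(rosen, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print(result.x)    # should approach [1, 1, 1]
print(result.fun)  # objective value near 0
```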

But I think we're mixing terms and contexts. I just want to make sure that beginners like the OP know that a GA is a search method that can be formulated as numerical optimization, but is not necessarily such a method.

-Kirk


Quote: Original post by kirkd
But I think we're mixing terms and contexts. I just want to make sure that beginners like the OP know that a GA is a search method that can be formulated as numerical optimization, but is not necessarily such a method.

-Kirk


Alright, I'm not arguing with that. I just wish people would take the time to consider other options instead of going straight to overhyped methods.

This topic is closed to new replies.
