Quote:
Original post by Alrecenk
Many times a weight matrix, simple search tree, or even just a set of simple hardcoded instructions will be more efficient and easier to code and read.
Actually, I can write an ANN in a single line of code representing
y = W^T Φ(x)
and a multilayer ANN is no more difficult than nesting additional terms in the right hand side...
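To make that one-liner concrete, here is a minimal NumPy sketch; the choice of tanh as the feature map Φ and the particular weight values are illustrative assumptions, not from the post:

```python
import numpy as np

# Assumed feature map Φ: a simple elementwise tanh basis.
def phi(x):
    return np.tanh(x)

W = np.array([[0.5, -0.2],
              [0.1,  0.3]])   # example weights: 2 inputs -> 2 outputs
x = np.array([1.0, 2.0])

# The "single line" ANN: y = W^T Φ(x)
y = W.T @ phi(x)

# Nesting an additional term gives a multilayer net: y = W2^T Φ(W1^T Φ(x))
W1 = W
W2 = np.array([[0.2], [0.4]])
y2 = W2.T @ phi(W1.T @ phi(x))
```

The multilayer case really is just the same expression nested: each layer applies its weight matrix and passes the result through Φ again.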
...but that's not the point. Actually, I do agree with the above posts as well... that one should understand the problem first before thinking of the solution. When the problem is translated into a set of tasks, the solution methods are generally self-evident (and don't generally involve over-hyped methods like ANNs and GAs).
On the GA classification front... from my perspective...
In the strictest terms, a GA is an optimisation method relying on blind search: each iteration of the algorithm improves the quality of a candidate solution (in this case, a population of candidates) relative to an objective function, and it uses blind search to do so. Blind search is search in which no differential (gradient) information about the surface is available, only function evaluations.
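A toy sketch of that definition, assuming a standard selection/crossover/mutation loop (the operator choices and parameters here are mine, purely for illustration). Note the objective `f` is only ever *evaluated*; no derivative is taken anywhere:

```python
import random

def ga_minimise(f, dim, pop_size=30, generations=200, sigma=0.3, seed=0):
    """Minimal GA: improves a population against f using only
    function evaluations (blind search), never gradients."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)                    # rank candidates by evaluation only
        parents = pop[: pop_size // 2]     # selection: keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(dim)
            child = a[:cut] + b[cut:]                         # one-point crossover
            child = [g + rng.gauss(0, sigma) for g in child]  # Gaussian mutation
            children.append(child)
        pop = parents + children           # elitist replacement
    return min(pop, key=f)

# Example: minimise the sphere function sum(x_i^2).
best = ga_minimise(lambda v: sum(g * g for g in v), dim=3)
```

Because the elite half survives unchanged each generation, the best objective value found never worsens, yet the algorithm never consults anything beyond raw evaluations of `f`.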
In this sense, it's not regression, since regression seeks to identify a mapping between two manifolds. In the optimisation problem, we know what the mapping is; it's the objective function. However, you
can express a regression problem as an optimisation problem in the parameter space of some fixed functional form and hence solve it using a GA.
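As an illustration of that last point, here is a sketch of a regression problem (fitting a line) recast as optimisation over the parameter space of a fixed functional form y = a·x + b, solved by blind evolutionary search; the simple (1+1)-style mutate-and-select loop and the toy data are my assumptions, not from the post:

```python
import random

# Toy data generated by y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

# The objective: squared error of the fixed functional form y = a*x + b.
# Regression becomes optimisation over the parameters (a, b).
def sse(params):
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in zip(xs, ys))

rng = random.Random(1)
best = [0.0, 0.0]
for _ in range(5000):
    cand = [g + rng.gauss(0, 0.1) for g in best]  # mutate the parameters
    if sse(cand) < sse(best):                     # select on evaluations alone
        best = cand
```

The search only ever evaluates `sse`, yet it recovers the regression mapping because the mapping has been frozen into a parametric form and the "learning" moved entirely into the objective.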
Cheers,
Timkin