
Current State of Self-Organizing Neural Networks

Started by November 18, 2003 12:17 PM
4 comments, last by Prozak 21 years ago
Hi all, I'm continuously working with a concept in which neural networks grow from a single neuron to dozens or thousands of neurons as they embrace a problem, and in which, through genetic algorithms, once the problem seems solved, "dead weight" neurons (neurons that don't really affect the solution process for the problem) are destroyed...

This is what I understand as self-organizing; maybe it has another name in more hardcore AI circles. Can anyone inform me, or link me up to news or running projects that work on this system? I want to know what others are doing with this...

Thanks for your feedback

[Hugo Ferreira][Positronic Dreams][Colibri 3D Engine][Entropy HL2 MOD][Yann L.][Enginuity]
Stop reading my signature and click it!

The only similar technique I know about is one where you start with several neurons. In a three-layer network these would be the middle ones, since generally the inputs and outputs are fixed. So you start with many, and when you are updating the weights, you throw another term in there that causes some of the weights to move towards 0. For a link that has a weight of zero, you're basically killing that link. Do it for all of the weights, and effectively that node isn't there.

This doesn't grow the nodes, but it does control their number. I'm sorry I don't remember the details, but I can ask someone who can explain it if you want. The added term had an effect something like:

if (this weight didn't contribute more than a certain amount to the output)
    move the weight towards 0

I think.
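
For what it's worth, here is a minimal numpy sketch of that idea; the decay strength and pruning threshold are just illustrative guesses, not numbers from whatever paper this was:

import numpy as np

DECAY = 1e-4        # strength of the pull toward zero (an L1-style penalty)
PRUNE_EPS = 1e-3    # below this magnitude a link is considered dead

def update_weights(W, grad, lr=0.01):
    W = W - lr * grad                                       # ordinary gradient step
    return np.sign(W) * np.maximum(np.abs(W) - DECAY, 0.0)  # soft-shrink toward 0

def prune(W):
    mask = np.abs(W) > PRUNE_EPS    # keep only links that still matter
    return W * mask, mask

A hidden node whose incoming and outgoing links are all masked out is effectively gone, which is the "controls their number" effect.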

As far as growing the network, I'm not quite sure how it would work. Really, adding nodes is just a way of adding complexity to what the network can represent. So you'd kind of have to train it to the max, then add a node, then retrain it on the training data, then test it on some test data and see if the performance increased (it's critical that the test data is 100% different from the training data, or that you're using cross-validation or something). It wouldn't be a very fast method. There might be some way of hacking out an estimate of whether another node would add performance without training the whole network, but you really can't determine (AFAIK) how well it will do until you are completely done training.
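
To make that loop concrete, here's a hedged sketch using scikit-learn; the toy data, the 70/30 split, and the 1% improvement cutoff are all arbitrary assumptions:

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2          # toy target
X_train, X_test = X[:350], X[350:]              # test data kept disjoint from training
y_train, y_test = y[:350], y[350:]

best_err, h = np.inf, 1
while True:
    net = MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
    net.fit(X_train, y_train)                   # retrain from scratch at size h
    err = mean_squared_error(y_test, net.predict(X_test))
    if err >= best_err * 0.99:                  # no meaningful improvement: stop
        break
    best_err, h = err, h + 1

print(f"settled on {h - 1} hidden nodes, test MSE {best_err:.4f}")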
There are variations of the SOM algorithm being used on growing nets; check Kohonen's main book on this...

Another algorithm I know of for this problem is cascade correlation. Google for it!

/@$3.1415rin
Well, me and a small university team came up with a scheme where you always start with one neuron per output, and that neuron spawns more neurons, etc...

Therefore there isn't a human element in designing the interior network. We just supply the outputs and the inputs, and the training rules (if any)...

From there the network grows till it reaches a solution (it won't grow to infinity), and dead axons are automatically eliminated.
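
In rough Python, the general shape looks something like this toy (purely illustrative; it's not our actual prototype code, and the weights and thresholds are made up):

import random

class Neuron:
    def __init__(self):
        self.in_edges = {}      # source (a Neuron, or an ("in", i) input tag) -> weight

def spawn(net, target):
    # grow a hidden neuron feeding `target`, wired to every input
    n = Neuron()
    for i in range(net["n_inputs"]):
        n.in_edges[("in", i)] = random.uniform(-1, 1)
    target.in_edges[n] = random.uniform(-1, 1)
    net["hidden"].append(n)

def sweep_dead_axons(net, eps=1e-3):
    # delete axons whose weight has drifted to ~0 ...
    for n in net["hidden"] + net["outputs"]:
        n.in_edges = {src: w for src, w in n.in_edges.items() if abs(w) > eps}
    # ... and hidden neurons nobody listens to anymore
    live = {src for n in net["hidden"] + net["outputs"] for src in n.in_edges}
    net["hidden"] = [n for n in net["hidden"] if n in live]

net = {"n_inputs": 3, "hidden": [], "outputs": [Neuron()]}  # one neuron per output
spawn(net, net["outputs"][0])
sweep_dead_axons(net)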

It's all in the prototype phase, and I'm supposed to rewrite some of the code to use (if possible/efficient) MMX/SSE/SSE2...

This was more what I had in mind...

[Hugo Ferreira][Positronic Dreams][Colibri 3D Engine][Entropy HL2 MOD][Yann L.][Enginuity]
Stop reading my signature and click it!

In the usual modeling context, I don't know how useful automatic growing/shrinking is, in itself, but there are several neural architectures which do this as a means of fitting model complexity to the data.

Polynomial networks work by adding small, relatively simple polynomial nodes iteratively. A very quick introduction can be found here: http://mayaweb.upr.clu.edu/~jechauz/vg-pnns.pdf
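
As a hedged sketch of a single GMDH-style step, assuming quadratic nodes over input pairs chosen by least-squares fit (implementations differ in the node form and selection rule):

import numpy as np
from itertools import combinations

def poly_features(x1, x2):
    # node output is a + b*x1 + c*x2 + d*x1*x2 + e*x1^2 + f*x2^2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def best_poly_node(X, y):
    # fit a small quadratic node to every pair of inputs, keep the best pair
    best = None
    for i, j in combinations(range(X.shape[1]), 2):
        A = poly_features(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        err = np.mean((A @ coef - y) ** 2)
        if best is None or err < best[0]:
            best = (err, (i, j), coef)
    return best

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)
err, pair, coef = best_poly_node(X, y)
print(f"best input pair {pair}, MSE {err:.4f}")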


Some feedforward neural networks include hidden node growing and pruning, such as BrainMaker, from California Scientific Software (http://www.calsci.com/).

Cascade correlation neural networks learn by adding hidden nodes one at a time, training new hidden nodes to covary with the current residuals. See: http://gunther.smeal.psu.edu/9077.html
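
A minimal sketch of that candidate-training step, assuming a single output and plain gradient ascent (Fahlman's original used Quickprop and a pool of candidates, so treat this as illustrative only):

import numpy as np

def train_candidate(X, residuals, steps=500, lr=0.05):
    # with the rest of the net frozen, train one candidate hidden unit so
    # its output covaries maximally with the current residual errors
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    e = residuals - residuals.mean()            # centered residual errors
    for _ in range(steps):
        v = np.tanh(X @ w)                      # candidate unit's output
        C = np.sum((v - v.mean()) * e)          # covariance with the residuals
        # ascend |C|: follow sign(C) * dC/dw, treating v's mean as constant
        grad = np.sign(C) * (X.T @ (e * (1.0 - v ** 2)))
        w += lr * grad / len(X)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 3))
residuals = np.tanh(2 * X[:, 0] - X[:, 1])      # errors the current net hasn't explained
w = train_candidate(X, residuals)
v = np.tanh(X @ w)
print("|correlation| with residuals:", abs(np.corrcoef(v, residuals)[0, 1]))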

Two other neurode-adding architectures I can think of off the top of my head are RCE (see, for instance: http://www.warthman.com/images/Ni1000c2.pdf) and Dystal.

-Predictor
http://will.dwinnell.com


Have a look at NEAT.

http://www.cs.utexas.edu/users/kstanley/
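
For a flavor of how NEAT grows topology, here is a toy version of its add-node mutation (field names are illustrative, not Stanley's actual code): an existing connection gene is disabled and replaced by a new node with two new links, each carrying a fresh innovation number so crossover can line genomes up later.

import random

class Gene:
    def __init__(self, src, dst, weight, innov, enabled=True):
        self.src, self.dst, self.weight = src, dst, weight
        self.innov, self.enabled = innov, enabled

def add_node_mutation(genome, next_node_id, next_innov):
    g = random.choice([g for g in genome if g.enabled])
    g.enabled = False                    # disable the link being split
    # in -> new gets weight 1.0 and new -> out inherits the old weight,
    # so the network's behavior is initially almost unchanged
    genome.append(Gene(g.src, next_node_id, 1.0, next_innov))
    genome.append(Gene(next_node_id, g.dst, g.weight, next_innov + 1))
    return next_node_id + 1, next_innov + 2

genome = [Gene(0, 2, 0.7, innov=0), Gene(1, 2, -0.3, innov=1)]
add_node_mutation(genome, next_node_id=3, next_innov=2)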



My Website: ai-junkie.com | My Book: AI Techniques for Game Programming

