
SoM, growing axons and Hebbian learning

Started by May 13, 2007 07:53 PM
34 comments, last by laurimann 17 years, 4 months ago
Quote: Original post by Palidine
Computer neural networks are certainly a useful tool in that approach. However, a computational neural network is not the same thing as an actual biological neural network. This is because we actually don't understand even some of the basics of how the actual biological ones function.

I actually disagree with this last statement. There exists a great deal of knowledge and understanding of how some biological neural circuits work. It's just not common knowledge. For example, I've been reading up recently on the vestibulo-ocular system (it controls, among other things, the coordination of your eye movements given your head movement, to maintain visual fixation). Our current understanding of this system is sufficient to replicate its functionality in biologically plausible models (based on spiking neural nets and Hebbian learning), and the behaviour of these models under stimulation is close to that observed in animal studies.

Another example would be the olfactory system. Its architecture has been known for a long time, and the way in which it encodes smell information has been known for 20 years.

We're getting far closer to the day when we'll understand the basic functionality of most of our neural circuitry. At the moment our understanding is restricted to a handful of subsystems within the brain. Of course, we may find that the atomist approach to neuroscience, as with other studies of complex systems, won't actually provide us with an understanding of how the brain works as a whole, complete system.

Anyway, that's just my two cents worth on the subject. 8)

Cheers,

Timkin
I've been thinking about this subject, taken a few steps back, cleared my vision and re-defined what I'm doing here.

So far I've been able to create a system that's fully associative and adapts to input from the environment. I've also been playing with learning models and actually got the network to learn to find food in a 1D environment. I tested this with pen and paper, so it's pretty simple and fast. I know the problem is ridiculously simple, but maybe that's the right place to start.

So what I have now is a neural network with dynamic connectivity and synaptic plasticity, which learns by reinforcement learning (rewards & punishments).

The neurons use linear summation and send only excitatory pulses at the moment. I think inhibitory impulses will also need to be implemented for future tests to work.
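
For concreteness, here is a minimal sketch of that kind of neuron, assuming a simple threshold-and-fire model (the class and all names are illustrative, not laurimann's actual code):

```python
# A minimal threshold-and-fire neuron with linear summation, as described
# above. Assumptions (illustrative, not from the post): no membrane decay,
# potential resets to zero after a spike; positive weights are excitatory.

class SpikingNeuron:
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.potential = 0.0
        self.incoming = []          # synapses feeding this neuron

    def receive(self, weight):
        """Linear summation: just add each incoming pulse's weight."""
        self.potential += weight

    def step(self):
        """Fire if the summed input reaches the threshold, then reset."""
        if self.potential >= self.threshold:
            self.potential = 0.0
            return True             # spike sent to outgoing synapses
        return False
```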

Training of the network goes like this (a rough code sketch follows the steps):
1) Run the network. Track every single neural spike that is sent and record it.

2) Run until an output is read. If the output causes an action that leads to the network being rewarded or punished, go to step 3; otherwise go to step 1.

3) Pick apart the impulses that led to the chosen output neuron being activated. For each such neuron, apply the feedback to the incoming synapses that caused its impulse, and apply the reversed feedback to the rest of its incoming synapses. This strengthens the synapses whose activity led to the reward and weakens the synapses that didn't contribute to it.

4) Clear the history record and go back to step 1.
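
Here is that loop as a rough code sketch. The Synapse objects and the three helper callbacks are hypothetical stand-ins (none of these names come from the post), and step 3 is simplified to update every recorded spike rather than tracing only the causal chain back from the chosen output neuron:

```python
# Rough sketch of the four training steps above, under the assumptions
# stated in the lead-in. Not a definitive implementation.

LEARN_RATE = 0.1

def train_episode(network, step_fn, output_fn, feedback_fn):
    """step_fn(net)     -> list of (neuron, fired_synapses) spike records
       output_fn(net)   -> chosen action, or None if no output yet
       feedback_fn(act) -> +1 (reward), -1 (punishment), 0 (neither)"""
    history = []                                 # step 1: spike record
    while True:
        history.extend(step_fn(network))         # step 1: run and record
        action = output_fn(network)              # step 2: wait for an output
        if action is None:
            continue
        feedback = feedback_fn(action)
        if feedback == 0:
            continue                             # step 2: no feedback, keep running
        for neuron, fired in history:            # step 3: credit assignment
            for synapse in neuron.incoming:
                delta = LEARN_RATE * feedback
                # Strengthen synapses that caused the spike; apply reversed
                # feedback to the rest of that neuron's incoming synapses.
                synapse.weight += delta if synapse in fired else -delta
        history.clear()                          # step 4: clear the record
        return feedback
```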

What do you think? As far as I've tested it, this model has been working out great!
Sounds like you are on a better track now. Once you've got a few simple tests working, you can just keep adding new elements until your AI is quite fun to watch. I really enjoyed playing with pole balancing and optical character recognition when I first started with this stuff, though I never really focused on reinforcement learning because I got better results with supervised learning, genetic algorithms and regression methods.

Also, as always, I have to point out that neural nets are just one of many, many ways to solve these types of learning problems. Right now I'm working on a polynomial approximator that groups the input into a growing BSP tree to evolve the complexity. I'll be happy to show it off as soon as I get it working... in a few months, lol...
Quote: Original post by Alrecenk
Right now I'm working on a polynomial approximator that groups the input into a growing BSP tree to evolve the complexity.


Hehe... sounds like a locally learning RBF network I designed a few years back. It was based on a k-d tree partition of the basis space according to the density of the input data, which itself is related to the curvature of the input-space hypersurface (giving a constant basis density with respect to learning difficulty). My RBF was used for learning control functions for hidden processes and worked well. The only reason I didn't persist with it was that I had other directions in that research problem that had to be solved before the funding ran out. I really should write up a paper on what I did.
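
(For readers who haven't met RBF networks: the forward pass is just a weighted sum of Gaussian bumps. A minimal sketch with plain Gaussian bases follows; the k-d tree placement of centres described above is not shown, and all names are illustrative.)

```python
import numpy as np

# Minimal RBF network forward pass with plain Gaussian bases.
# Only the basic machinery is shown, not the density-based centre placement.

def rbf_predict(x, centres, widths, weights):
    """y(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))"""
    d2 = np.sum((centres - x) ** 2, axis=1)   # squared distance to each centre
    phi = np.exp(-d2 / (2.0 * widths ** 2))   # Gaussian basis activations
    return phi @ weights                      # linear combination

# Toy usage: three 1D bases.
centres = np.array([[0.0], [1.0], [2.0]])
widths  = np.array([0.5, 0.5, 0.5])
weights = np.array([1.0, -0.5, 2.0])
print(rbf_predict(np.array([1.2]), centres, widths, weights))
```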

Cheers,

Timkin
I'd probably read it, but I still haven't finished reading that MML thing you gave me 3 weeks ago.
Hehe... I completely understand that. I find my reading goes in bursts... and unfortunately lately has only been directed toward work. I've run out of time for reading for fun/interest. 8(
I wonder what people think about this kind of project.

Is it not fascinating to develop a fully dynamic neural network?
We all know that neural networks aren't the most efficient way of solving every kind of problem, far from it, and that's why some people are studying things like BSP trees and function approximation. But isn't there something more fascinating in neural networks than in making yet another problem-solving system?

I'm pretty excited to develop an XOR-solving neural network with dynamic connection growth, dynamic synaptic strengths and online unsupervised learning as its properties.

It's easy to create an associative network, challenging to make it learn, and difficult to make it solve the XOR problem when the network is built from spiking neurons with dynamic connections.
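
One classic illustration of why inhibitory connections matter for XOR (and why an excitatory-only net struggles): with threshold units you can hand-wire XOR as "OR, inhibited by AND". A sketch with hand-chosen weights, so this is an existence proof rather than a learning result:

```python
# Hand-wired XOR from threshold units (weights chosen by hand, not learned).
# The output unit needs an inhibitory (negative-weight) connection from the
# AND unit; with only excitatory weights, no such two-layer wiring works.

def fires(inputs_and_weights, threshold):
    return sum(w * x for x, w in inputs_and_weights) >= threshold

def xor(a, b):
    h_or  = fires([(a, 1), (b, 1)], threshold=1)   # fires if a OR b
    h_and = fires([(a, 1), (b, 1)], threshold=2)   # fires if a AND b
    # Output: excited by the OR unit, inhibited by the AND unit.
    return fires([(h_or, 1), (h_and, -2)], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, int(xor(a, b)))   # prints the XOR truth table
```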

Any ideas and discussion about the subject are welcome. :)
Quote: Original post by laurimann
Is it not fascinating to develop a fully dynamic neural network?
We all know that neural networks aren't the most efficient way of solving every kind of problem, far from it, and that's why some people are studying things like BSP trees and function approximation. But isn't there something more fascinating in neural networks than in making yet another problem-solving system?


I'm inclined to say no. As far as I'm concerned, "neural net" just means a function approximation method based on a connective approach with some of the abstract qualities of a real neural net. I thought they were really cool when I discovered them about 3 years ago, and I kept playing with them, trying to make a "better" neural network. I found that removing the firing conditions and using continuous functions seemed to make it easier for the system to learn. Then, when I realized I just had a linear model (which works better than you'd think), I changed to a matrix representation because it was easier to manage. When I got snagged on the limitations of a linear model (it didn't take too long), I moved to a polynomial model for some added complexity. Then I started using piecewise functions contained in spheres for unsupervised learning, which allowed me to do some pretty good unsupervised topological evolution. Then I thought it'd be cool to evolve the topology based on supervised training, and that eventually led to the BSP tree idea, since I figured I'd be able to pull my dividing surfaces toward regions of high rate of change in the training set. The moral of the story is that neural nets aren't really like the human brain; they're just a method of modelling systems. They have some good uses, but I highly doubt the next great advancement in AI will have anything to do with them.
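
For concreteness, the "matrix representation" stage described above looks roughly like ordinary least squares, with the polynomial upgrade being nothing more than extra feature columns (a toy sketch, not Alrecenk's actual code):

```python
import numpy as np

# Toy sketch: a linear model fit by least squares, then the polynomial
# upgrade as extra feature columns. Data and names are illustrative.

X = np.array([[0.0], [1.0], [2.0], [3.0]])       # toy inputs
y = np.array([1.0, 3.0, 5.0, 7.0])               # toy targets (y = 2x + 1)

# Linear model: design matrix [x, 1] solved by least squares.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(w)                                          # ~[2.0, 1.0]

# Polynomial model: same solver, just add an x^2 column.
A2 = np.hstack([X ** 2, X, np.ones((len(X), 1))])
w2, *_ = np.linalg.lstsq(A2, y, rcond=None)
```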

The key to demystifying artificial neural networks is to realise that the most common architecture, the multi-layer perceptron (MLP), is just a function approximator built from a linear combination of simple basis functions. Thus, they have the same potential and the same pitfalls as other basis-function methods. Most of the research into MLPs has been directed at shoring up their shortcomings.
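
To make that concrete: a one-hidden-layer MLP computes f(x) = sum_i w_i * sigma(a_i . x + b_i), where the sigmoids are the simple bases and the output layer mixes them linearly. A minimal sketch:

```python
import numpy as np

# A one-hidden-layer MLP written explicitly as a linear combination of
# simple basis functions: f(x) = sum_i w_i * sigma(a_i . x + b_i).

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))   # sigmoid: the "simple basis function"

def mlp(x, A, b, w):
    """A: (n_hidden, n_in) input weights, b: (n_hidden,) biases,
    w: (n_hidden,) output weights mixing the bases linearly."""
    return w @ sigma(A @ x + b)
```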

If you want a reliable, robust and very general framework for function approximation, then move to either B-spline networks or generalised kernel methods.

If, however, you want to explore the possibilities of complex computation in simple networks, I recommend a move toward recurrent systems, such as networks of coupled oscillators. I personally find this area quite interesting. I've just spent a couple of years working on proving the stability of these systems under adaptation... so you can learn and guarantee that the network won't crash, or that if the network is controlling a plant of some kind (e.g. a motor), it won't drive it beyond a safe operating range while learning an optimal control policy. Learning per se isn't that interesting (Hebbian learning and other methods are well understood and easy to implement)... making sure your system can learn safely and optimally, now that is interesting!
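
For readers who want a feel for networks of coupled oscillators, the Kuramoto model is the standard toy example (a sketch only, and certainly not the adaptive control systems described above):

```python
import numpy as np

# The Kuramoto model: N phase oscillators, each pulled toward the others.
# d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)

rng = np.random.default_rng(0)
N, K, dt = 10, 1.5, 0.01           # oscillators, coupling strength, time step
omega = rng.normal(0.0, 1.0, N)    # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)

for _ in range(5000):
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

# Order parameter r in [0, 1]: r near 1 means the network has synchronised.
r = abs(np.exp(1j * theta).mean())
print(f"synchronisation r = {r:.2f}")
```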

Currently I'm investigating the calculus of network processing; for instance, how mutual inhibition of internally excitatory sub-networks provides robust integration of input signals. This is moderately well understood from biological studies... I'm looking at the larger framework of integral and differential calculus performed by recurrent networks and how we can design these (or drive learning toward favouring these sub-network architectures)... very useful when you want to control motors/muscles with these systems.
So what you're basically saying is that, no matter what, it's useless to even try to model biological networks because they're just too complicated? I think everything should be tried and all ideas tested. Maybe one of them leads to something.

Also, as far as I've understood, recurrent networks are different from what I'm doing here. I do have some type of recursion, but it's chaotic, because every cell can form a connection to any other cell...

I don't know if there's anything this kind of network could do that either of you couldn't also do with your mathematical systems.

The game of Go remains unsolved. But if you are experts in the AI field, then why can't you make an AI that can be trained to beat at least an average beginner? So far not even the best Go programs in the world can beat an intermediate beginner who has trained for just a few months...

But if you can't, then shouldn't I try a new approach to the problem?

