Yeah, the idea that neurons would have different roles is fascinating! Maybe the firing pattern indicates to the postsynaptic neuron what kind of presynaptic neuron it is? And in an infant's head there are many different kinds of neurons which, through learning, organize themselves so that clear "areas" emerge in the cortex. :)
But I doubt I'll take that idea any further, simply because I have no idea why it would matter, in terms of efficiency, for neurons to have different firing patterns. Could it be that a neuron with, e.g., an XOR firing pattern equals two AND neurons and one OR neuron, three in total? But isn't that more a question of how incoming voltages are summed, not of spiking patterns?
I know not, therefore I implement not. ;)
Besides, it's only a side point when making independent, dynamic networks.
The thing I'm more concerned about is ACCURATE connection weight adjustment. It's easy to push a weight in the right direction, but when we're talking about an exact value (is there ever a situation where an exact, as in absolute rather than subjective, connection weight is required?) we face the problem that it might be overshot or undershot. So how does a neuron determine its connection strengths? That is, how does it determine what weight would be optimal (if a synapse is punished, the weight is moved away from the goal, otherwise towards the goal value), and how does it alter the goal value itself? (Yes, I believe there is a goal value; otherwise stabilization of any kind would not be possible, which is the case in classical Hebbian learning systems. And if the goal is fixed, it is static, which again is not optimal at all.)
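To make that concrete, here's a minimal Python sketch of the scheme as I imagine it; the per-synapse goal value and the slow goal-drift rule are my own assumptions, not an established algorithm:

    class Synapse:
        def __init__(self, weight=0.5, goal=0.5, rate=0.1, goal_rate=0.01):
            self.weight = weight        # current connection strength
            self.goal = goal            # assumed per-synapse target value
            self.rate = rate            # step size toward/away from the goal
            self.goal_rate = goal_rate  # slow adaptation of the goal itself

        def reinforce(self, rewarded):
            step = self.rate * (self.goal - self.weight)
            if rewarded:
                self.weight += step     # move toward the goal
                # drift the goal slowly toward the rewarded weight,
                # so the target itself is not static
                self.goal += self.goal_rate * (self.weight - self.goal)
            else:
                self.weight -= step     # punished: move away from the goal

Because each step is proportional to the remaining distance, the weight approaches the goal without ever overshooting it (for rate < 1), which is one way around the "exceeded or the opposite" problem.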
SOM, growing axons and Hebbian learning
Well Lauri, I think no matter which way you go with NNs, you will always be faced with the one-dimensionality of the cost function, which severely limits the state space (but is necessary).
If you have a cost function that can change over time without destabilizing the network, you'd be one step closer to your goal :D After all, if you take your input, unleash a function on it (be it a concatenation of other functions, but this is not relevant) and hope to get a certain output, preferably in the most optimal way, then you're actually always looking at a fixpoint determined by your cost function and the way your network is set up (what is fired where and when, symmetric, asynchronous, etc.).
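As a toy illustration of that fixpoint view (my own example, with a made-up cost function):

    def cost_grad(w):
        # static cost C(w) = (w - 3)^2; its gradient is 2*(w - 3)
        return 2.0 * (w - 3.0)

    w, lr = 0.0, 0.1
    for _ in range(100):
        w -= lr * cost_grad(w)   # the update map w <- w - lr * C'(w)
    print(w)                     # ~3.0: the minimum of C is the map's fixed point

No matter how elaborate the network in between, a static cost pins the long-run dynamics to fixed points like this one.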
In other words, I do not think you can achieve much more than what exists today with a static cost function.
I remember from other posts you wrote that you would ideally like to create neurons on the fly, but wouldn't know when to create them. You cannot determine this in a way that makes a difference; this is a logical consequence of the cost function. Find a way to express your cost function as a function of your network, not vice versa. Still, this would at best lead to a self-organising network that would still need a reinforcement learning framework to ever solve anything.
Anyways, keep trying, missy; if I had more time I probably would :D
Quote: Original post by Rhun
Still, this would at best lead to a self-organising network that would still need a reinforcement learning framework to ever solve anything.
Well, that's what I have at the moment: a self-organizing network (connections and their weights) that uses reinforcement learning, not as one general feedback signal but individually and independently for each neuron.
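Roughly like this, as a hedged sketch (the update rule and names here are illustrative only, simpler than my actual code):

    def update_incoming(weights, inputs, neuron_reward, lr=0.05):
        # each neuron applies ITS OWN reward to its own incoming weights,
        # strengthening connections whose inputs were active when rewarded
        return [w + lr * neuron_reward * x for w, x in zip(weights, inputs)]

    # per-neuron rewards instead of one global scalar for the whole net
    rewards = {"n1": +1.0, "n2": -0.5}
    w_n1 = update_incoming([0.2, 0.7], [1.0, 0.0], rewards["n1"])
    w_n2 = update_incoming([0.4, 0.1], [0.0, 1.0], rewards["n2"])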
But making it learn efficiently... There's nothing like the human mind, and the only thing it can do by itself is adapt and learn from experience, real or simulated inside our minds. But a neural network in itself can't produce consciousness or emotions. That's a genetic property.
I guess... :D :D
Okay, I finally have a few minutes to spare to post the response you requested...
Quote: Original post by laurimann
I was wondering today, what do you mean by the network not crashing?
Sorry, bad terminology on my part... I meant that the closed loop system (plant + controller) is driven either to a degenerate state (no dynamic change) or to an instability (the system runs off to infinity). The latter doesn't happen for many machines, as they typically suffer catastrophic failure at some upper limit of their input signals.
Quote: Don't biological networks sometimes drive the controlled plant beyond safe operating limits?
During regular operation, a move outside safe operating limits would almost certainly be due to unexpected outside forcing of the system. Certainly there are clear examples of recurrent network dynamics being driven close to (and even over) stability boundaries; epileptic seizures are a good example.
Quote: Do biological networks learn safely and optimally and why?
The examples I've studied tend to be conservative in their adaptation strategies. Do they learn optimally? I don't know. Certainly there are networks that learn an optimal strategy, but do they do it in the shortest time, or with the fewest training instances? I think that's an open question that will be hard to answer for some time.
Quote: Why do ANNs or other systems not learn safely, but crash?
This is really a question about dynamical systems with feedback and unknown dynamics. As soon as you plug in some arbitrary functional translation of a subset of the signals, you run the risk of having created an unstable closed loop system. There has been extensive work on the design of stable closed loops where one component (the plant) is known to be linear. The past few decades have seen a focus on the design of closed loop control of nonlinear plants. Feedforward networks have been used extensively for this purpose during this time, but there has been little success in proving the stability of the resulting closed loop system.
Recent work has focused on proving stability in these systems, but it turns out to be extremely difficult. I'm currently writing up a paper on a new control architecture for which stability can be guaranteed, even under adaptation of the system dynamics.
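A toy example of why this is hard (my own illustration, not the architecture from that paper): even with a stable linear plant, the effective gain of an arbitrary feedback function decides whether the closed loop is stable.

    def plant(x, u):
        return 0.9 * x + u       # stable on its own (pole at 0.9)

    def feedback_ok(x):
        return -0.5 * x          # closed-loop pole 0.4: decays to zero

    def feedback_bad(x):
        return 0.5 * x           # closed-loop pole 1.4: runs off to infinity

    for f in (feedback_ok, feedback_bad):
        x = 1.0
        for _ in range(50):
            x = plant(x, f(x))
        print(f.__name__, x)     # ~0 for the first, ~2e7 for the second

With a trained network in place of these one-line feedback functions, you generally can't read the effective gain off in advance, which is exactly why stability proofs are hard.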
Quote: Can you tell me more about the mutual inhibition of internally excitatory sub-networks? What do the biological studies say about this model?
Positive feedback loops provide interesting processing capabilities in neural systems. For example, you can perform integration of an input signal using positive feedback. However, this is only possible if the gain applied by the loop to the input signal is greater than 1, which would result in an unstable loop. Biological neural systems overcome this by using two coupled subsystems; each is self-excitatory (has positive feedback) but they are each inhibitory of the other. Neither subsystem would be stable in isolation, but the coupled subsystems are stable together.
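Here's a minimal discrete-time sketch of that coupling; it's my own toy model with made-up gains, not a model taken from the biology literature:

    g, c = 1.2, 0.2    # each loop alone has gain 1.2 (> 1: unstable in isolation);
                       # mutual inhibition gives the shared mode gain g - c = 1.0
    a = b = 0.0
    for t in range(50):
        u = 1.0 if t < 5 else 0.0               # brief input pulse to both sides
        a, b = g * a - c * b + u, g * b - c * a + u
    print(a, b)        # both hold ~5.0 after the pulse: the integral of the input

One caveat: the difference mode a - b has gain g + c > 1, so this toy relies on perfectly symmetric initial conditions and input; keeping the two sides balanced is a problem the biology presumably has to solve as well.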
Quote: What do you mean by looking at the larger framework of integral and differential calculus?
One of my current interests is neural signal processing; that is, how generalised recurrent systems perform signal processing. Integration has been shown to exist in certain biological systems. I'm hoping to spend some time later this year and next year looking in more depth at learning integral and differential calculus using recurrent networks.
Quote: Also, just popped to my mind, are biological networks also function approximators?
Yes, in the sense that biological neural networks are functionals. That is, they operate on functions to produce other functions. Thus, you could call them 'functional approximators'! ;)
Quote: If yes, then why are the best function approximators in the form of (biological) neural networks?
I suspect by 'best' you actually mean robust. If you don't, my apologies for the misinterpretation. They're robust because they've had a few hundred million years to evolve to be robust. If they weren't, they would have died out. It's a bit of an anthropomorphic thing really... we see them as robust because we can see them (i.e., we're alive to see them because they're robust).
Cheers,
Timkin
About the natural rules of bio-systems and ways to decrease the net crash probability to absolute zero.
I think this problem can be solved by creating a food-spending subsystem as part of the neuron cell model.
Every elementary action a neuron can execute costs a bit of food. When the food counter is low, the neuron disables itself until the counter recovers to a higher value.
Homeostasis simulation, nothing more. Nature uses a huge number of positive feedback loops in its systems, but homeostatic control is the primary system; it physically never allows any infinite loops, because resources are not infinite.
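A minimal sketch of the idea; all numbers and names are my own illustrative choices:

    class Neuron:
        def __init__(self, food=10.0, cap=10.0, fire_cost=1.0,
                     recover=0.2, min_food=3.0):
            self.food = food
            self.cap = cap               # maximum food the cell can store
            self.fire_cost = fire_cost   # every spike spends this much food
            self.recover = recover       # food regained per time step
            self.min_food = min_food     # below this the neuron is disabled

        def step(self, wants_to_fire):
            self.food = min(self.cap, self.food + self.recover)
            if wants_to_fire and self.food >= self.min_food:
                self.food -= self.fire_cost
                return True              # fired
            return False                 # disabled or idle, recovering

Even if this neuron sits in a runaway positive feedback loop and "wants" to fire on every step, it spends more food than it recovers, drops below the minimum after a handful of spikes, and falls silent until it recovers. The loop self-limits, exactly as homeostasis would.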
By the way, I have some ideas about how biologically inspired network models could grow (during phylogenesis and ontogenesis) and calculate the integral and differential values of input functions. Gains >1 are accepted and compute well in the preliminary model.
Thanks for participating Timkin and Ath!
It was especially refreshing to read Timkin's comment about network stability and robustness: I can see why people are more fascinated by creating a stable learning system than just a learning system. My suggestion for solving the infinite loop problem with neurons is the threshold: every time a neuron fires, its TH is set to the voltage that made it fire. Therefore the same input can't cause an AP twice in a row. The TH also decays over time...
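In code, the rule I'm describing looks roughly like this (the decay constant and names are illustrative):

    class AdaptiveNeuron:
        def __init__(self, base_th=1.0, decay=0.9):
            self.base_th = base_th       # resting threshold
            self.th = base_th
            self.decay = decay           # how slowly TH relaxes back

        def step(self, voltage):
            fired = voltage > self.th
            if fired:
                self.th = voltage        # TH snaps to the triggering voltage
            else:
                # TH decays back toward its resting value over time
                self.th = self.base_th + self.decay * (self.th - self.base_th)
            return fired

    n = AdaptiveNeuron()
    print([n.step(1.5) for _ in range(4)])
    # [True, False, True, False]: never twice in a row; the decay re-arms it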
Anyway, I'm at a dead end with my neural network project: as Timkin pointed out in his post, there's no point in creating yet another learning system when there's already a myriad of them.
Therefore I hereby release all the source code of my project "relatively dynamic ANN". There's a zip that contains two zip files: one with the old, WORKING, yet slow code, and one with a much improved version that has some pointer problems I've felt just too lazy to fix myself! But if you want to talk with me about this, just mail lauri.m.viitanen at gmail dot com.
I'm developing this project in a more specialized direction: direct Go problem solving. I'll try to implement memory and different neuron models in order to avoid the common problem of overfitting to a certain game style.
Here's the files:
http://www.ai-forum.org/data/63-projekts.zip