Quote:Original post by laurimann So what you're basically saying is that, no matter what, it's useless to even try to model the biological networks, because they're just too complicated?
No, I'm not saying that at all. But what you're doing isn't anything like a biological network... it's just a very complicated, disorganised network in which you're trying to learn some organisation.
Biological networks are actually made up of many small computational subnetworks, with many copies of these running in parallel with loose coupling to provide robustness. It's a very structured, organised system, and its computational ability is not something learned at the global scale, but rather locally at the subnetwork scale.
Quote: I think that everything should be tried and all ideas tested. Maybe one of them leads to something.
Absolutely. I'm not trying to say don't do it. I'm merely trying to suggest that you direct your efforts toward things that will show some results. The likely outcome of your current approach is frustration, rather than success.
Quote:Original post by Timkin Currently I'm investigating the calculus of network processing; for instance, how mutual inhibition of internally excitatory sub-networks provides robust integration of input signals. This is moderately well understood from biological studies... I'm looking at the larger framework of integral and differential calculus performed by recurrent networks and how we can design these (or drive learning toward favouring these sub-network architectures)... very useful when you want to control motors/muscles with these systems.
So basically I also should be studying those things you mentioned above - learning that favours sub-networks, the organisation of sub-networks and the structure of such a recursive system - to save myself from frustration and a lack of results?
But are you sure my approach would not create such systems at all? I mean, if the neurons can connect to each other without limitations, then couldn't it be that, with the right growth rule, such parallel sub-systems would emerge? I know it's not something one comes up with every Monday, but still.
The thing that frustrates me most at the moment is the lack of positivity. Lack of results hasn't been a problem with this project so far.
But thanks anyway, the fact that you guys even bother to take part in this project via this forum is more than awesome. :)
Quote:Original post by laurimann So basically i also should be studying those things you mentioned above -
Where did I say (or even imply) that? I provided my own research directions merely as an indicator of what I find interesting. If you find your current project interesting, stick with it. You might not get useful results, but at least you'll enjoy yourself.
Quote: But are you sure my approach would not at all create such systems?
I expect you'd prefer an honest answer from me... based on my experience and knowledge, I would reply with "not in your lifetime". I.e., without any significant guidance to the learning, the chances that you'll happen to learn such an architecture are so minuscule that it would take a time longer than you will be alive before your algorithm would stumble across it. Am I being negative? No. I'm being realistic. That's not to say you can't learn a useful architecture, but that's a different question and revolves directly around the question of 'what proportion of the network structural state space contains useful solutions'.
Quote: The thing that frustrates me most at the moment is the lack of positivity. Lack of results hasn't been a problem with this project so far.
You haven't reported any results here so how would we know that? Perhaps you should enlighten us as to the problems you've tested this methodology on and provide some indication of the learning and operational performance. That would certainly provide some support for your beliefs that this is the right way to go.
Okay, I'm sorry, I didn't consider all sides of the situation. You're right.
But about the results: they're nothing I could show you, because I've been developing the rules by which the network runs. These things are new since my last report:
1) it is possible to manually build a working XOR-solving system with my network
2) the network can also be successfully taught to solve the XOR problem
3) it is much more optimized, and I can now run several thousand neurons in real time on a laptop (1.2 GHz, 256 MB RAM, with full dynamicity)
4) the network is robustly associative
5) the network is very adaptive to changes in input, and also capable of stabilizing when the input is constant
6) the network has the properties of both short- and long-term memory
It has spiking neurons, negative weights, dynamic connections (growth, birth, death), dynamic weights and reward/punishment-based learning.
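For reference, a hand-built XOR network of the kind described in point 1 can be tiny: two threshold units plus an output unit with one negative (inhibitory) weight. The weights and thresholds below are made up for illustration; they are not laurimann's actual network rules.

```python
# Minimal hand-built XOR network: two threshold units plus an output unit
# with one negative (inhibitory) weight. Weights and thresholds are
# illustrative only, not the actual rules from the project above.

def step(x, threshold):
    """Threshold activation: fire (1) if the summed input reaches threshold."""
    return 1 if x >= threshold else 0

def xor_net(a, b):
    h_or = step(a + b, 1.0)        # fires if either input fires (OR)
    h_and = step(a + b, 2.0)       # fires only if both inputs fire (AND)
    # OR excites the output, AND inhibits it via a negative weight -> XOR.
    return step(h_or - 2.0 * h_and, 1.0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```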
That's some improvement.
I've also thought about the following: dynamic neuron birth and death (no idea why or how yet, it's just an idea) and neural grouping (just for a visual sense of synchronous groups).
Also, I haven't been able to test it on a real situation yet, but the system should be able to create recursive sub-networks, and in my pen-and-paper theoretical tests this is exactly the case.
[Edited by - laurimann on June 12, 2007 6:38:54 PM]
I recommend a move toward recurrent systems, such as networks of coupled oscillators. I personally find this area quite interesting. I've just spent a couple of years working on proving stability of these systems under adaptation... so you can learn and guarantee that the network wont crash, or that if the network is controlling a plant of some kind (e.g. a motor) that it wont drive it beyond a safe operating range while learning an optimal control policy. Learning per se isn't that interesting (Hebbian learning and other methods are well understood and easy to implement)... making sure your system can learn safely and optimally... now that is interesting!
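As a concrete example of the "well understood and easy to implement" rules mentioned above, plain Hebbian learning fits in a few lines. The sketch below uses Oja's variant (Hebbian growth plus a decay term that keeps the weights bounded); the 2-D input data and learning rate are invented for illustration.

```python
# Oja's variant of Hebbian learning: the decay term bounds the weights,
# and the weight vector rotates toward the input's principal component.
# Input distribution and learning rate are invented for this sketch.

import random

def oja_update(w, x, lr=0.01):
    """Hebbian growth (lr * y * x) with Oja's decay term (-lr * y^2 * w)."""
    y = sum(wi * xi for wi, xi in zip(w, x))        # post-synaptic activity
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [0.5, 0.5]
for _ in range(500):
    # More variance along the first axis, so the weight vector should
    # rotate toward that axis (the principal component of the input).
    x = [random.gauss(0, 2.0), random.gauss(0, 0.5)]
    w = oja_update(w, x)

print(abs(w[0]) > abs(w[1]))   # the high-variance direction dominates
```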
Currently I'm investigating the calculus of network processing; for instance, how mutual inhibition of internally excitatory sub-networks provides robust integration of input signals. This is moderately well understood from biological studies... I'm looking at the larger framework of integral and differential calculus performed by recurrent networks and how we can design these (or drive learning toward favouring these sub-network architectures)... very useful when you want to control motors/muscles with these systems.
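To give a feel for the mutual-inhibition motif, here is a generic rate-model sketch (not Timkin's actual equations; gains, inputs and time step are invented): two self-excitatory populations coupled by mutual inhibition, which robustly selects between two competing input signals.

```python
# Generic rate-model sketch of mutual inhibition between two
# self-excitatory populations, integrated with plain Euler steps.
# All parameters are illustrative, not from any specific model.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run(input_a, input_b, steps=2000, dt=0.01):
    ra, rb = 0.0, 0.0            # firing rates of the two populations
    w_self, w_inhib = 2.0, 4.0   # self-excitation / cross-inhibition gains
    for _ in range(steps):
        # Each population excites itself and inhibits its rival.
        da = -ra + sigmoid(w_self * ra - w_inhib * rb + input_a)
        db = -rb + sigmoid(w_self * rb - w_inhib * ra + input_b)
        ra += dt * da
        rb += dt * db
    return ra, rb

ra, rb = run(input_a=1.0, input_b=0.5)
print(ra > rb)   # the population receiving the stronger input wins
```

The winner-take-all outcome is what makes the motif robust: small noise on the losing input gets squashed rather than propagated.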
You got my interest, Timkin.
I was wondering today: what do you mean by the network not crashing? Don't biological networks sometimes drive the controlled plant beyond safe operating limits? Do biological networks learn safely and optimally, and why? Why do ANNs and other systems not learn safely, and instead crash?
Can you tell me more about the mutual inhibition of internally excitatory sub-networks? What do the biological studies say about this model? What do you mean by looking at the larger framework of integral and differential calculus? You've been studying this stuff for two years, so I guess you have something to say. :)
Also, this just popped into my mind: are biological networks also function approximators? If not, then what are they? If yes, then why are the best function approximators in the form of (biological) neural networks?
I'm looking forward to understanding these subjects and comparing them to the current functioning of my system: maybe developing it even further, or realising that there's nothing more I can do...
cheers. :)
P.S. I'd appreciate examples, I understand them best. Just if you feel like giving a noob a lecture.
I like the motivation you have to pursue such a high level learning system. I think you should visit http://www.mixel.be/ and try to get into his world if that is your goal.
I am very sure you would be vastly interested in his work on what he calls "Appropriation Behavior"; sadly for you, his thesis about it is in Dutch.
He basically starts by explaining why the techniques we use in AI are not intelligence at all, but merely mimic a fraction of intelligent behavior.
People who start by attempting to create intelligence with learning algorithms are using an approach that starts from an inverse perspective. (We do not want to REproduce intelligence, we want to produce it.)
His paper is about a completely different approach in which he separates learning from intelligence. Learning is defined as a conscious process (he does not call conditioning - e.g. Pavlov - learning).
Central pillars are stated as: Learning - acceptance, Conditioning - repetition, Consciousness - attention and knowledge. Learning in general uses a mix of these factors, e.g. learning a trade or skill (painting) implies Learning and Conditioning.
Anyway, to the point: he divides the environment into a set of processes, where each process consists of a set of processes. He then formalizes this into a mathematical theorem and makes it discrete by introducing fixpoints, which incidentally happen to be equal to some concept that could be learned...
I hope you're still with me :D
The key behind his theory is time. Time is an extremely important variable, a factor embedded in the processes, giving all the processes a continuity. The fact that they are continuous makes them, in a very liberal interpretation, "living functions". A process is not expected to behave the same during its entire lifecycle.
He delivered two implementations of this (in Smalltalk), with which he successfully demonstrates his theory (though rather minimalistically). One simulation puts a population of agents in 4 rooms. They have a 'motivation' (call it an instinct) to go to a door, and if they already are at a door, to go to another door. This creates, after a relatively short time, a very smooth movement stream.
What happens is that every agent has a knowledge base, and each element in that knowledge base takes in a process (again, think "living function"); the set of processes in the knowledge base produces a reaction, which implies action, i.e. behavior.
A 2nd simulation adds keys and locked doors, agents quickly learn to chase after a key to unlock a door, and this again in a short time.
The most impressive simulation, though, is the gathering and opening of "nuts" that fall off a tree by a population of primates. This one was run after a major upgrade to the platform, compared to the 'rooms' simulation.
Knowledge is now represented by processes as well, a process stack exists that will be filled with perception (viewing, feeling, ...) and will be executed through a catalyst (usually the agent itself). A new notion is introduced, called 'frustration'.
In short, the nut simulation contains open nuts and closed nuts (and all gradations of semi-open nuts), but there is no relation between them for the agent. The agent will soon learn which nuts he can eat (open ones); once they are gone, he will get frustrated (no food) and find out that he can exert force on semi-open nuts to open them (a state that he now knows, as well as the process he can execute on it, which will de-frustrate him). When he is left with a population of nuts he cannot open with his own strength, he learns to use a rock to open them, until all the nuts are gone.
So Lauri, I tried to summarize things a bit for ya, hope you liked it :D It's pretty impressive, though still at a young stage, but I thought it might at least give you some inspiration. And don't be afraid to write him, he's a nice fella :D
Cheers
PS: Most of this is off the top of my head, so don't shoot me for writing any untrue facts or half-assed explanations; it's not that easy to sketch this in such a short space :p
Sorry for the lack of response... I'm flat out at work at the moment and haven't had much time to check the site... I'll try and post something useful by the end of the week.
And Timkin, I thought that you were busy with work and stuff, but I never realised you were that busy, so I just wish that May the Force be with you. :)
Anyway, Rhun, what you described sounds a bit like something I tried once: I put a pacman in a 2D grid with some obstacles and some food, and tried to teach it to find the food. The pacman had a memory base, with each memory unit consisting of three parts: the old situation it had been in, what action it had executed in that situation, and what the result of that was (in terms of reward and punishment). But the thing is that I rewarded it wrongly, because I just let the pacman run for a while and then rewarded it manually based on its OVERALL performance. This way it never understood what exactly it did right or wrong, so it never learned anything. But it was a fun experiment. :)
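The failure mode described here is the classic credit-assignment problem, and rewarding each step instead of the overall run is exactly what temporal-difference methods address. A sketch (the corridor, rewards and parameters are invented for the example; this is not the original pacman code): tabular Q-learning on a tiny 1-D corridor with food at the right end.

```python
# Tabular Q-learning on a tiny 1-D corridor with food at the right end.
# Illustrates per-step rewards (temporal-difference credit assignment),
# which is what the pacman experiment above was missing. The grid,
# rewards and parameters are invented for the example.

import random

N = 5                       # corridor cells 0..4, food at cell 4
ACTIONS = (-1, +1)          # move left / move right
q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

random.seed(1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(200):
    s = 0
    while s != N - 1:
        # epsilon-greedy action choice: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0     # reward only at the food cell
        # Per-step update: credit flows back one state at a time, so the
        # agent learns exactly which moves were right, not just the run.
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After learning, the greedy policy moves right (+1) from every cell.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N - 1)]
print(policy)   # [1, 1, 1, 1]
```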
Also, trying to apply that to Go would be impossible, simply because of the amount of memory needed.
But my point, and why I chose NNs, is the efficient encoding of information. Also, my latest idea, talking about neurons: are there different types of neurons, like action-producing neurons, input-filtering neurons and memory neurons? Or are they all the same type, just with different areas of importance? Hmm, tricky. ;P
Heheh, it's probably not the most memory-efficient approach, but if you look at the human brain, the only things it has going for it are the vast amount of information it can store and the fact that it works in parallel. The brain is in fact rather slow...
As for your different types of neurons, I think you are looking at a lot more overhead if you are going to be categorizing them. It immediately reminds me of a 3-tier architecture where you have:
action ------- filtering ------- memory
Besides, I think it is already (at least partially) implicitly true that neurons take on different roles in a NN. A mature NN will have a nice landscape with lovely peaks and valleys, and the role of the neurons will depend on their place in the (n-dimensional) landscape.
I do like the idea of having a measurable means of identifying this, so perhaps you can give a neuron different ways of firing... 3 in your case. What this makes me think of straight away is more something like passive and active neurons, where you put weights on activity: negative for being fired upon and positive for firing.
Try it, build the network, feed it information, then manually fire your heaviest neurons and see what happens ^^
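A tiny sketch of that scoring idea (the four-neuron feed-forward chain below is made up purely for illustration): +1 every time a neuron fires, -1 every time it is fired upon, then look at which neurons end up heaviest.

```python
# Activity scoring sketch: +1 each time a neuron fires, -1 each time
# it is fired upon. The four-neuron network is invented for illustration.

# neuron -> list of neurons it projects to
edges = {0: [1, 2], 1: [2], 2: [3], 3: []}

activity = {i: 0 for i in edges}
fired = {0}                        # stimulate neuron 0
for _ in range(4):                 # propagate activity for a few ticks
    next_fired = set()
    for i in fired:
        activity[i] += 1           # positive score: it fired
        for j in edges[i]:
            activity[j] -= 1       # negative score: it was fired upon
            next_fired.add(j)
    fired = next_fired

# Pure sources (neurons that only fire, never get fired upon) come out
# heaviest; downstream neurons have their firing offset by being driven.
print(activity)   # {0: 1, 1: 0, 2: 0, 3: 0}
```

Even on this toy chain, the score separates the "active" seed neuron from the "passive" driven ones, which is roughly the distinction suggested above.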
Anyway, time for bed; it's 2:30, I still have to do the dishes, and I'm rambling.