Neural nets learning by experience at runtime?
If you were to create a neural network, starting from zero, with a randomly initialized structure and randomly initialized weights, could you somehow run one long simulation of the network and get it to learn at runtime? Like the human brain, it must never be turned off. Still, the brain is learning. I suppose it is not by backpropagation, since the brain does not know the right answer (backpropagation would otherwise actually work on neural nets at runtime, provided it is fed the right answers).
Does anyone know how to make a neural network learn without having the right answers? Or without having to make a change to the entire network and then hope to find out whether it was a good change or a bad one? Is there some way, more like how the brain works I guess, to make the network 'understand' whether a change would be profitable, or which changes would be profitable? (This is probably one of the great mysteries of the brain, but I am hoping we are going to solve it here!)
[Edited by - TriKri on April 29, 2006 6:49:45 PM]
One flaw in your initial assumptions is that you seem to think that the brain starts out randomly connected. An immense amount of pre-wiring is done in the brain during fetal development. It is this initial construction of the brain that allows you to instantly start thinking, seeing, hearing, etc. upon birth. No randomly created neural net will "learn like the human brain".
That said, there are various mechanisms by which you can train a "raw" neural net without using a genetic algorithm. The key is that you need a function that can evaluate the "correctness" of the network's output so that you can train the weights appropriately.
-me
Check out genetic algorithms, neuroevolution, and mutation. www.natural-selection.com has done a lot of successful projects with evolving neural networks that taught themselves how to play checkers, chess, and various other tasks.
I was thinking of maybe not just a function to judge the correctness of the net's output, but a function to tell whether a single neuron or a single synapse is working according to its task, and how to change it. Maybe we don't have to look at the whole net's output; maybe we could concentrate on a single bunch of neurons and synapses. Don't you think the brain works in a similar way? Just reflect over the fact that when you find yourself doing something wrong, you can sometimes trace the exact place in the brain where the error occurred; you know what process in the brain caused you to make that mistake. Do you agree? Or am I talking through my hat? (obsolete expression?)
As I mentioned, the brain is organised into many discrete processing units. Their "function" has been defined over billions of years of evolution. Science doesn't really know what the parts do; we just have a vague idea of the possible functionality of some parts.
How do you define the "proper function" of each neuron or cluster of neurons in your network? You cannot just have a random network that will figure it all out. That's absolutely not how the biology upon which neural network models are based works.
No you don't. You have no idea which part of the brain made a mistake. You only know where your logic was mistaken. That's not the same as knowing how or why the "network" physically made a mistake. Science has basically no idea how thinking works. You are trying to approach "making a neural network" from the perspective of "how I understand my thinking".
Understanding your thought processes gives almost zero information about how the underlying biological network works. You cannot assume that because your thinking works a certain way, the function of the network is at all similar. The thinking is an output of the network. For example: "thinking" seems linear. Its execution is likely highly parallel, but you would never know that from how it seems to work from inside your head.
-me
Then, what WILL work? In a gigantic network like the brain, how do you improve your skill at one of the many things the brain can manage by testing a random change? (Still the question remains: is it possible to make non-random changes which would more likely, or maybe most likely, be profitable?) Maybe one approach is the fact that the brain has the ability to turn off certain areas, or maybe more correctly, the brain only uses a very small part at a time, making the rest inactive. In that sense, the change is at least probably going to be in the right part of the brain, if you want to learn while acting, and the effect of the change will probably also show up more readily.
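One simple answer to "how do you avoid changing the entire network and hoping" is to change one weight at a time and keep the change only if an external reward improves. This is plain stochastic hill climbing, not anything claimed in the thread, and the reward function below is a toy stand-in:

```python
# Illustrative sketch: stochastic hill climbing on a weight vector. One
# weight is nudged at a time; the change is kept only if a scalar reward
# does not get worse, so no per-weight "right answer" is needed and most
# of the network is left untouched on each step.
import random

def hill_climb(weights, reward, steps=1000, step_size=0.1):
    """Greedy single-weight perturbation; `reward` maps weights -> score."""
    best_score = reward(weights)
    for _ in range(steps):
        i = random.randrange(len(weights))       # touch one weight only
        old = weights[i]
        weights[i] = old + random.gauss(0, step_size)
        new_score = reward(weights)
        if new_score >= best_score:
            best_score = new_score               # keep the change
        else:
            weights[i] = old                     # revert a bad change
    return weights, best_score

# Toy reward: closeness of the weights to a hidden target vector.
target = [0.3, -1.2, 0.8]
reward = lambda w: -sum((wi - ti) ** 2 for wi, ti in zip(w, target))

random.seed(0)
w, score = hill_climb([0.0, 0.0, 0.0], reward)
```

The catch, of course, is that this only ever climbs to a local optimum, and it still requires a reward signal from somewhere, which is exactly the evaluation-function requirement raised earlier in the thread.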
Quote: Original post by TriKri
In a gigantic network like the brain, how do you improve your skill at one of the many things the brain can manage by testing a random change?
The short answer is we have basically no idea how a giant network like a brain trains itself. That's what thousands of scientists worldwide devote their entire careers to understanding.
Quote: Original post by TriKri
Maybe one approach is the fact that the brain has the ability to turn off certain areas, or maybe more correctly, the brain only uses a very small part at a time, making the rest inactive.
That's not really true. No area of your brain is ever "off", just less active compared to the rest. There's always some baseline activity pretty much everywhere.
Quote: Original post by TriKri
Then, what WILL work?
It's unclear what you are trying to accomplish. If you are trying to "simulate the brain", that's just not going to happen. We don't know enough about how the brain works to attempt to simulate it. If you are just looking for mechanisms by which you can train a neural network at runtime, there are plenty of methods, as Alnite pointed out in his/her post: genetic algorithms, neuroevolution, and mutation.
In order to train a neural network you need to have a really good idea of what you are trying to train it to do. You can't just create a general learning mechanism, because there is no method by which you can distinguish output A as good and output B as bad if there are no bounds on what that output is supposed to be. The brain itself isn't a general learning network. There are highly specialized sub-parts of the brain devoted to very specific tasks: motion processing, facial recognition, language processing, language production, etc. Each area is a little part of the brain that has evolved over millions of years to accomplish a specific sub-section of what you call "thinking".
Basically, if you want to design a neural net to drive a car in a racing game, then train one to do that. If you want a neural net to drive the AI for an FPS, then do that. If you outline your specific task we can help you out much better. There is no single solution for training neural nets at runtime. The proper solution is going to depend on what your specific problem is.
-me
[Edited by - Palidine on April 27, 2006 7:40:00 PM]
Quote: Original post by Palidine
Science doesn't really know what the parts do; we just have a vague idea of the possible functionality of some parts.
I would disagree with that statement. Functional MRI is a particularly useful tool for identifying which parts of the brain are active and correlated under external and/or internal stimulus. We also know at a base level how computations are performed in certain subsystems within the brain... we just don't know this for all areas, or how high level functional complexity is achieved... but I believe that has more to do with our lack of understanding of computation in parallel systems rather than a lack of understanding of the brain (in other words, we should be concentrating on solving these more general problems of understanding computation if we want to understand the brain).
Back on topic... ;)
There are some very fundamental issues that arise when you want to perform unsupervised learning (allowing a learning agent to teach itself). Some of these have simple answers (like which paradigm to use... trial and error works well, so does predictive inference and model conditioning)... while others are more problematic (how do you recover from learning an incorrect model of the world, or what do you do when evidence contradicts your knowledge base).
There are architectures for co-evolving structure and parameters in learning systems (such as artificial neural nets) and this can be done online. However, one of the most significant issues is the proper conditioning of the network on the observed evidence. You don't have all of the evidence relevant to a situation at any given time, so you simply cannot build the network that will correctly predict the future and/or choose the right action to execute. To do this you need to have all information from all time relevant to the current scenario and this is simply not possible when performing online learning.
So, when doing online learning, the best we can do is learn an approximate solution valid over a local horizon of time (and actually only valid at times prior to the current time). Yet the brain manages to come up with answers that allow us to function in the real world. Part of its success comes from its ability to predict the possible outcomes of a scenario based on experience and analogy.
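As an aside, one concrete way to get runtime learning with no external teacher, in the spirit of the prediction-based learning described above, is to make the model predict the next observation: when that observation arrives, it becomes the training target. A minimal sketch with a linear predictor and plain online gradient descent (the sine-wave stream and all names are illustrative, not from the thread):

```python
# Hypothetical sketch: online learning with no external teacher. The
# model predicts the next observation; when that observation arrives it
# *is* the training target, so each update uses only locally available
# past evidence -- an approximate solution over a local time horizon.
import math

def online_predictor(stream, lr=0.05, history=3):
    """Linear autoregressive predictor trained by online gradient descent."""
    w = [0.0] * history
    bias = 0.0
    buf = []                                    # sliding local horizon
    errors = []
    for x in stream:
        if len(buf) == history:
            pred = bias + sum(wi * xi for wi, xi in zip(w, buf))
            err = x - pred                      # the target arrives with time
            errors.append(err * err)
            bias += lr * err                    # SGD step on squared error
            w = [wi + lr * err * xi for wi, xi in zip(w, buf)]
        buf = (buf + [x])[-history:]
    return w, errors

# A predictable stream: a noiseless sine wave.
stream = [math.sin(0.3 * t) for t in range(500)]
w, errors = online_predictor(stream)
early = sum(errors[:50]) / 50                   # error before adaptation
late = sum(errors[-50:]) / 50                   # error after adaptation
```

The prediction error shrinks as the model adapts, but only for regularities present in the recent stream, which is precisely the "valid over a local horizon of time" limitation.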
Now, if we could just work out how the brain performs accurate analogy... ;)
Cheers,
Timkin
I've been doing a research project on this very topic for more than a year. I have to go to work (serving at the cafeteria), but when I get back, I'll post a new thread on this, and hopefully you can read the paper that I have written on the matter and the approach I took (I was going to post a thread anyway to get help in preparing to present it at the International Science and Engineering Fair).
Abstract:
This project deals with the design and implementation of control systems for bots in a virtual simulation program, consisting of a virtual species that learns to adapt to a hostile, dynamic and evolving environment, and whose members are differentiated only by their control system. The project tests whether a genetic algorithm can successfully evolve the weights of two neural networks which evaluate a tree path for a domain for which an optimal solution cannot be determined. Thus training is indirectly a result of long-term reinforcement learning. If the model proposed is successful, it will help researchers in constructing larger, generic models that may eventually lead to new technologies with applications in nanotechnology, autonomous military vehicles, computer networking, and autonomous automobiles and transit systems.

Control systems are composed of two neural networks: the prediction network, a recurrent neural network that employs a derivative of the standard backpropagation algorithm, and a situation analysis network, a feed-forward neural network trained by a genetic algorithm, enabling the evolution of a species of bots. Together, the two networks generate a large look-ahead tree and evaluate it. The control system is a means to employ time series analysis.

Four different architectures for the prediction network were tested, and the bots were found to behave initially as predicted, developing elementary adaptations in response to low-level stimulation and displaying motor skills development.

Current research is directed towards developing customized simulations to further develop applications and studying biological network architectures.
Edit: And yes, I'm very excited to see that someone has the same questions I do. I find this question very intriguing, and what I found was a very, very difficult system to work with. I've learned to be humble: the human brain is one of the most complex systems humankind has ever encountered. And I personally believe that answering your question will be one of the fundamental steps that lead to the birth of strong AI.