Neural Nets learning by experience in runtime?
May 02, 2006 04:55 PM
You always need some kind of judgement of right or wrong for each new situational case (I sometimes refer to it as the knowledge of Good and Evil) to shape the information in the neural net so that it can judge similar information later.
A major problem for an 'on-the-fly' NN is determining the rightness/wrongness of an action/result. The determination method has to be preprogrammed to decide whether an outcome is good or bad, and it must be versatile enough to handle every situation the NN is expected to process. Calculating/judging/recognizing a good or bad result can be a significant task in itself.
Most NNs are grown from manually built/filtered data and subjected to controlled training sessions (and their formation can still frequently fail, requiring a lot of hand tuning to build successfully). It is fairly hard to build them in real time with partial data. Decision imbalances usually result, with pendulum swings as the internal weights shift while situational cases are aggregated.
The self-learning program also has to judge the link between action and result, discard unimportant factors, and try to trace an action done many time steps back to a current result. A great deal of filtering/classification must be done, and this functionality also has to be prebuilt. Boiling situational data down into the symbolic values fed to the NN is often a larger task than the NN's execution itself.
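As a rough illustration of the two problems above (a hand-written good/bad judgement, and linking reward back to earlier actions), here is a minimal sketch in Python/NumPy. All names, sizes, and the evaluate_outcome() heuristic are invented for this example; it is not anyone's actual system.

```python
import numpy as np

# Tiny linear "policy" trained online. The hard part the post describes is
# evaluate_outcome(): the programmer must decide what counts as good or bad.
rng = np.random.default_rng(0)
weights = rng.normal(scale=0.1, size=(4, 2))   # 4 inputs -> 2 actions
trace = np.zeros_like(weights)                  # eligibility trace: links past actions to later reward
alpha, decay = 0.05, 0.9

def evaluate_outcome(state):
    """Pre-programmed good/bad judgement; must cover every situation the NN sees."""
    return 1.0 if state[0] > 0 else -1.0        # placeholder heuristic

def step(state):
    global weights, trace
    scores = state @ weights
    action = int(np.argmax(scores))
    grad = np.zeros_like(weights)
    grad[:, action] = state                     # remember which weights produced this choice
    trace = decay * trace + grad
    reward = evaluate_outcome(state)            # the costly judgement step
    weights += alpha * reward * trace           # credit flows back along the decaying trace
    return action

step(np.array([0.5, -0.2, 0.1, 0.3]))           # one online update
```

The decaying trace is one simple (and crude) way to connect "some action done many time steps back" to a current result.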
Some NN methods spread the functional logic randomly across all the nodes (making it hard to 'work on a few nodes'), while others actually add a new node tailored to the specific case.
You may be able to use multiple neural nets, each assigned to a separate problem space (small NNs are easier to program/build/form/grow than large ones).
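A minimal sketch of that arrangement, assuming made-up problem spaces ("combat", "trading") and a toy network class purely for illustration:

```python
import numpy as np

class TinyNet:
    """A deliberately small net: easier to build and tune than one big one."""
    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(scale=0.1, size=(n_in, n_out))
    def forward(self, x):
        return np.tanh(x @ self.w)

# One small expert per problem space instead of a single monolithic network.
experts = {
    "combat":  TinyNet(n_in=6, n_out=3, seed=1),
    "trading": TinyNet(n_in=4, n_out=2, seed=2),
}

def decide(problem_space, features):
    return experts[problem_space].forward(features)

decide("trading", np.array([0.2, -0.1, 0.5, 0.0]))
```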
Also, when you were a baby, you got feedback on what was a good answer and what was not: it was done through emotions (feeling pain is a "bad answer"; feeling good from eating or some other action is a "good answer").
Check out NERO (NeuroEvolving Robotic Operatives), which uses NEAT,
and
http://www.cs.ucf.edu/~kstanley/
have phun :D
Marty
This kind of thing is nothing new. Games like the Creatures series are based on the idea, and I can't be sure, but Spore might use something like this. Backpropagation (BP) can work for the training, given that there is some way for the feedback to indicate the correctness of the output. Internal feedback can be used in situations where the gameplay doesn't allow for it (like tickling and spanking pets in Creatures), so the AI would need to be told when something was good or bad. An option is to actually feed the state after a decision into the ANN and use it to determine whether something was good or not. This could yield interesting results, like virtual women who will be naughty because they like a good spanking.
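A hedged sketch of that "feed the post-decision state back in" idea, with all sizes, names, and the simulate() callback invented for illustration: a second evaluator network scores the state reached after an action, and that score becomes the training signal for the decision network.

```python
import numpy as np

rng = np.random.default_rng(0)
decision_w = rng.normal(scale=0.1, size=(5, 3))   # state -> action scores
evaluator_w = rng.normal(scale=0.1, size=(5,))    # next state -> "goodness"
alpha = 0.01

def act_and_learn(state, simulate):
    """simulate(state, action) -> next_state; supplied by the game."""
    global decision_w
    scores = state @ decision_w
    action = int(np.argmax(scores))
    next_state = simulate(state, action)
    goodness = np.tanh(next_state @ evaluator_w)   # internal feedback signal
    # nudge the chosen action's weights up or down by how good the result looked
    decision_w[:, action] += alpha * goodness * state
    return action, goodness

# example use with a stand-in physics step:
act_and_learn(np.ones(5), lambda s, a: s * 0.9)
```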
(http://www.ironfroggy.com/)(http://www.ironfroggy.com/pinch)
I think there is an oversight in your original question.
Your question assumes that the brain has no idea whether it's got a 'right' or 'wrong' answer while learning.
There are two ways the brain judges right and wrong. The first, as PERECil mentioned, is feelings/emotions. This allows a single brain to judge the rightness of each individual action it performs, and these tend to be largely hard-wired from birth (with variations in wiring between individuals).
The second is survival and procreation. This allows a community of brains to highlight the brains (and bodies) that are closer to the 'right' answer and design more little brains based on those. (In case there's any doubt, the right answer involves being alive long enough, and being attractive enough, to convince a partner to help you create little brains.)
As such, humans have become very good at identifying things that are dangerous, thanks to the evolved pain reflex, and can use that to figure out which actions are 'right' and which are 'wrong'.
This sounds rather similar to the project Sagar_Indurkhya described. The 'teacher' network is the emotions, evolved via a genetic algorithm. (If a person is wired to enjoy being burnt alive, then that is a wrong answer and they are unlikely to propagate.)
Then there is the more complex 'self-teaching' network, which might better be described as a prediction network: it attempts to predict which action its teacher (i.e. the emotions) will consider 'correct'. People try to make plans that will make them feel good. Judging the success of this one is trickier, and as failure here does not necessarily remove a person from the gene pool, there is much wider variation in this sample space.
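One way to picture the two-network arrangement described above, as a hedged Python sketch (the teacher() function is a stand-in for emotions shaped by evolution/GA; the weights and learning rate are invented for this example):

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher(outcome):
    # Placeholder for the evolved emotional response: e.g. pain weighted negative,
    # food weighted positive. A GA would tune these weights across generations.
    return np.tanh(outcome @ np.array([1.0, -2.0, 0.5]))

predictor_w = rng.normal(scale=0.1, size=(3,))
alpha = 0.05

def learn_to_predict(outcome):
    """Delta-rule update: move the predictor's output toward the teacher's judgement."""
    global predictor_w
    predicted = np.tanh(outcome @ predictor_w)
    target = teacher(outcome)
    predictor_w += alpha * (target - predicted) * (1 - predicted**2) * outcome
    return predicted, target

learn_to_predict(np.array([0.3, 0.1, -0.4]))
```

The predictor never sees an objective "right answer", only what its teacher (the emotions) approves of, which matches the looser, more variable learning the post describes.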
There are two things he who seeks wisdom must understand... Love... and Wudan!
May 25, 2006 06:29 PM
As mentioned (obliquely) by IronFroggy, probably the biggest publicist in this field and a forerunner would be Steve Grand. He designed the game "Creatures" and has IP over most of the spinoffs. I'm not sure if it's still there, but the site for his company used to be:
http://www.cyberlife-research.com/
Mainly, the key is not to use the standard mathematical training techniques (which rely upon specific knowledge of a result). Rather, you need to use natural self-teaching and self-categorization schemes, such as Hebbian learning, which is both the simplest in the category and also considered to be the primary mechanism the human brain uses.
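For concreteness, here is a minimal sketch of Hebbian learning in Python/NumPy: weights grow when pre- and post-synaptic activity coincide, with no target output or error signal required. The Oja normalisation term and the specific sizes/learning rate are my own illustrative choices, not something from the post.

```python
import numpy as np

eta = 0.01  # learning rate

def hebbian_step(w, x):
    y = w @ x                          # post-synaptic activity of one linear unit
    return w + eta * y * (x - y * w)   # Oja's rule: Hebbian growth plus decay so w stays bounded

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=4)
for _ in range(1000):
    x = rng.normal(size=4)
    w = hebbian_step(w, x)             # w drifts toward the dominant correlation in the inputs
```

Note that nothing here needs to know a "correct" result, which is exactly the property the paragraph above is after.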
Also, since any neural network is inherently plastic, it is possible to arrive at any final configuration from any starting point. It's just a matter of time, lots and lots of time. The main reason the brain is unapproachable is that it contains roughly 100 billion neurons, each with approximately 10,000 connections. Since biological neurons can grow and forge new links, an effective mathematical model would still need to link all of them (most links would just have zero strength). That's a lot of links. To make matters worse, this runs in parallel with a chemical computation system that includes most emotional responses and communicates yet more stimuli. Even further, the geometry of the brain serves to encode yet more information (by controlling which links are more likely to form, as well as chemical diffusion rates), and is the primary influence that genetics exerts.
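A quick back-of-the-envelope version of those numbers (assuming, purely for illustration, 4 bytes per weight in a dense all-to-all model):

```python
neurons = 100e9                         # ~100 billion neurons
links_per_neuron = 10e3                 # ~10,000 connections each
actual_links = neurons * links_per_neuron        # ~1e15 real connections
dense_pairs = neurons * (neurons - 1)            # links a fully-plastic dense model must represent
dense_bytes = dense_pairs * 4                    # 4 bytes per weight (assumption)
print(f"{actual_links:.0e} real links, {dense_bytes:.1e} bytes for a dense weight matrix")
```

That is on the order of 10^15 real connections, and a dense matrix allowing any link to form would need on the order of 10^22 weights, which is why "that's a lot of links" is an understatement.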
The functions of the brain aren't so much mysterious as swamped in complexity.