Let's say I want to learn the XOR function -- a classic example from the early days of ANN research. One method might be to use a multi-layer neural network which "learns" XOR. Fine. Here's another (this is an intentionally simple example): I'll store a 2d array and say that my estimate of "a XOR b" is simply given by the value in the array, "array[a][b]." Then, here's my learning algorithm:
Given training input (a,b) and training output c, I'll perform the following update rule:
array[a][b] = gamma*c + (1 - gamma)*array[a][b]
where gamma is a real number between 0 and 1 which is my "learning rate." (I'm assuming "array" is an array of doubles or some other approximations of real numbers.)
This will learn XOR.
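In code, the lookup-table learner above is only a few lines. Here is a minimal Python sketch (the function names and training loop are mine, purely illustrative); the update rule is just an exponential moving average of each cell toward its training targets:

```python
# Lookup-table "learner" for XOR, as described above.
# Each (a, b) input pair gets its own cell; the update rule blends
# the training output c into the stored estimate.

GAMMA = 0.5  # learning rate, a real number in (0, 1)

def make_table():
    # Arbitrary initial guesses for the four cells.
    return [[0.5, 0.5], [0.5, 0.5]]

def update(table, a, b, c, gamma=GAMMA):
    table[a][b] = gamma * c + (1 - gamma) * table[a][b]

def train(table, samples, epochs=20):
    for _ in range(epochs):
        for a, b, c in samples:
            update(table, a, b, c)

xor_samples = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
table = make_table()
train(table, xor_samples)
# Rounding each cell recovers the XOR truth table exactly.
print([[round(table[a][b]) for b in (0, 1)] for a in (0, 1)])  # [[0, 1], [1, 0]]
```

Each cell converges geometrically to its own target, so after a handful of presentations the table reproduces XOR, which is exactly the point: it is a "function approximator" with no generalization whatsoever.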
Why should I use a neural network instead of this?
;-)
Artificial Neural Networks
@danmar
No they don't. They can't model intuition, creativity, and hunches, which are the core of human biological thinking. They don't solve equations*, nor can they prove theorems.
If, as Sneftel suggests, ANNs in games are mostly used as classifiers (and the nonlinear nature of ANNs is not the key), then for most games Bayesian networks would perform better, especially since game entities are trying to make decisions. With the added benefit that their reasoning is clear, and you can go back and easily tune the algorithm, gameplay, etc. based on results. kNN too is probably sufficient (although SVMs I feel are also overkill, and probably poor, since the training dataset probably would not be sufficient to produce the best results).
Myself, I am looking at ANNs for a problem where their function approximation and ability to capture interdependencies is useful; the black-box nature of ANNs is OK since the data itself has no inherent meaning and is complex enough that any attempt using splines results in nonsense. I have used regression (as noted, a simplest type of ANN) but it has not performed ideally. Of course, there is a simpler statistical technique I also plan to implement and test against to see which is better.
*An ANN is nothing but a directed graph connecting a bunch of functions with weighted edges. They are an alchemical and blind way to approximate functions. I suspect I could build a 2-input, 1-layer, 3-output neural network with biological entities.
Using a mouse. I would have it classify colours using a series of tubes. I would show it a colour and use different-sized cheeses and slight to mild shocks :( to correctly set the weights and, implicitly, the activation functions. Then the initial choice of tube will set the subsequent choice, which chooses the output :). It would also be able to classify new colour combos I did not train it with.
Thus I would have trained the mouse to think like a human.
Quote: Original post by Daerax
@danmar
No they don't. They can't model intuition, creativity, and hunches, which are the core of human biological thinking. They don't solve equations*, nor can they prove theorems.
I was talking about the mechanics of it: the flow of information and the type of computation can be modeled after what is known about the biological/chemical information processing of neurons inside the living brain. Many different "computations" can be simulated, with the results closely matching biological measurements.
Blue Brain Project
http://bluebrain.epfl.ch/
I do actually agree with you, but I would phrase it differently. I'd say intuition, creativity and such are indeed the core of the human mind, which cannot be simulated as it is not really understood; it's not defined properly, nor can it be objectively examined or measured.
But intelligence and logic can, which I refer to as "thinking", so I would separate intelligence (thinking) from everything else, as it is the only thing we _can_ actually judge and measure as an external observer. This is because intelligence/logic is deterministic, while everything else is chaotic, if not completely random.
Though, in some hypothetical theory, an ANN could simulate intuition and creativity too... if only it could spontaneously arise and evolve by itself, just like in the real world. But really, you would need to build and train a fully talking AI to be able to ask it about it. You can't just take a partial artificial or biological network and hope to measure some numerical or voltage output to be identified as "intuition". That can only be done with logic circuits, aka intelligence and deterministic algorithms.
Imagine we train some ANN to the point where you can have a conversation with it, and it tells you it has emotions and intuition, that it can dream and that it likes music. Would you believe it?
Anyway, amazingly, I was right about Quake bots: many of them do use neural networks, but the best part is that it's all perfectly documented and all the source code is available. Google "neural network quake bot".
Neural Bot, Quake 2:
http://homepages.paradise.net.nz/nickamy/neuralbot/nb_about.htm
The Quake III Arena Bot:
http://www.kbs.twi.tudelft.nl/docs/MSc/2001/Waveren_Jean-Paul_van/thesis.pdf
[Edited by - danmar on September 20, 2009 2:15:04 AM]
Quote: Original post by Emergent
Let's say I want to learn the XOR function -- a classic example from the early days of ANN research. One method might be to use a multi-layer neural network which "learns" XOR. Fine. Here's another (this is an intentionally simple example): I'll store a 2d array and say that my estimate of "a XOR b" is simply given by the value in the array, "array[a][b]." Then, here's my learning algorithm:
Given training input (a,b) and training output c, I'll perform the following update rule:
array[a][b] = gamma*c + (1 - gamma)*array[a][b]
where gamma is a real number between 0 and 1 which is my "learning rate." (I'm assuming "array" is an array of doubles or some other approximations of real numbers.)
This will learn XOR.
Why should I use a neural network instead of this?
;-)
In the real world you wouldn't use either method for such a trivial problem. Learning XOR is an Intro to AI class exercise for beginners.
Machine learning (via ANNs or otherwise) is applied where you are solving or approximating solutions to problems so complex that hand-coding a solution is either extremely difficult or impossible.
Like every other solution in the problem-solution space mapping, ANNs are good for some things and bad for others. There's no free lunch right? =)
http://www.no-free-lunch.org/
@danmar
So, you do know that most ANNs are essentially just a bunch of sigmoid functions trained on their derivatives, with some random searches to optimize the arrow weights, yeah? There is nothing magical about trig, hyperbolic or logistic functions. The more modern methods are more statistically influenced and move ever further away from so-called 'neurons'.
A decision tree could result in behaviour just as robust as that bot's. Maybe better, because you could look at the result, make sense of it, and go back and modify the training, pruning, entropy, etc. functions. Indeed, the simplest AI to train and code while still yielding nice behaviour could be written in terms of decision trees, or even a hierarchy of them. It would also be able to react to situations it didn't train on.
A step further, harder but clearer than an ANN while still allowing for hardcore learning and random behaviour, would be a Bayes net. A pretty simple one would be Tree-Augmented Naive Bayes.
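For the record, "a bunch of sigmoid functions trained on their derivatives" can be sketched quite literally. Below is a hypothetical, minimal 2-2-1 logistic network learning XOR by plain gradient descent; every size, seed and learning rate here is an illustrative assumption, not anyone's actual bot code:

```python
import math
import random

# A 2-2-1 logistic network learning XOR by gradient descent.
# The sigmoid derivative y*(1 - y) drives every weight update.

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hidden layer: two units, each with two input weights and a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
# Output unit: two hidden weights and a bias.
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(a, b):
    h = [sigmoid(w[0] * a + w[1] * b + w[2]) for w in w_h]
    y = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, y

def train_step(a, b, target, lr=0.5):
    h, y = forward(a, b)
    d_o = (y - target) * y * (1 - y)
    for i in range(2):
        # Backpropagate through the old output weights.
        d_h = d_o * w_o[i] * h[i] * (1 - h[i])
        w_h[i][0] -= lr * d_h * a
        w_h[i][1] -= lr * d_h * b
        w_h[i][2] -= lr * d_h
    w_o[0] -= lr * d_o * h[0]
    w_o[1] -= lr * d_o * h[1]
    w_o[2] -= lr * d_o

data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def loss():
    return sum((forward(a, b)[1] - c) ** 2 for a, b, c in data)

before = loss()
for _ in range(5000):
    for a, b, c in data:
        train_step(a, b, c)
print(loss() < before)
```

Nothing mysterious is going on: it is function composition plus the chain rule, which is the point being made.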
Quote: Original post by EJH
In the real world you wouldn't use either method for such a trivial problem.
It wasn't intended as a practical suggestion... I was trying to give an example of a function approximator so absurdly simple that it would be impossible to view it unrealistically.
Quote: Original post by Daerax
@danmar
So, you do know that most ANNs are essentially just a bunch of sigmoid functions trained on their derivatives, with some random searches to optimize the arrow weights, yeah? There is nothing magical about trig, hyperbolic or logistic functions. The more modern methods are more statistically influenced and move ever further away from so-called 'neurons'.
A decision tree could result in behaviour just as robust as that bot's. Maybe better, because you could look at the result, make sense of it, and go back and modify the training, pruning, entropy, etc. functions. Indeed, the simplest AI to train and code while still yielding nice behaviour could be written in terms of decision trees, or even a hierarchy of them. It would also be able to react to situations it didn't train on.
A step further, harder but clearer than an ANN while still allowing for hardcore learning and random behaviour, would be a Bayes net. A pretty simple one would be Tree-Augmented Naive Bayes.
Yes, except that ANN computation can be implemented via parallel processing, its natural design, which makes it suitable for handling high-volume input. An ANN can learn dynamically (real-time evolution), which _will_ introduce chaos or even randomness (as you explained) that is necessary for human-smart AI: an unpredictable, inventive, creative and cunning one. For that you need dynamic flexibility, recursive self-adaptivity and, of course, some randomness or chaos, not to call it 'free will'.
However, if some variant of 'decision tree' can do all that, then indeed there is no difference between them at all, and I give my vote to whichever is faster.
I prefer the philosophical aspect of it, so let me ask you: do you think there is a chance intuition and creativity might spontaneously arise within some ANN, perhaps as a byproduct of "adaptive intelligence", if we could train its logic circuitry to the point of having a meaningful conversation? I find this interesting, because the first sentence of the type "I like" would suggest this AI has emotions, and since it's built on a "binary search" principle it's bound to have some preferences, likes and dislikes, maybe even a wild desire. Science fiction or theoretical reality?
You use words I am not used to seeing with ANNs, but all of the 7 or so ML algorithms mentioned in this thread can be parallelized. Especially so if coded in a functional language making no use of mutable state.
As well, for many of the simpler ones it is actually easier to code something to train them on the data gathered during play. And they would arguably 'learn' faster, since those methods are still effective on little data, unlike say ANNs or SVMs. The problem of course is that the AI would get too good, or overfit to a particular play style, so a good game would try to account for this.
As for strong AI, I am a sceptic, at least for the next 200 years. But if any such thing were coded and an ANN was part of it, it would only be a small part of it. Because, you see, an ANN is much too simple to represent whatever intelligence is. On that scale of things, saying f o g(x), with g: X -> R^n, f: R^n -> R^m, n, m in N, represents all intelligence is equivalent. And that is absurd, no? To me the phrase "train an artificial neural net till it can talk" makes little sense. It sounds impossible: how do you train something that is beyond the scope of the technique to do something not understood, using an undefinable method?
Quote: Original post by Daerax
You use words I am not used to seeing with ANNs, but all of the 7 or so ML algorithms mentioned in this thread can be parallelized. Especially so if coded in a functional language making no use of mutable state.
As well, for many of the simpler ones it is actually easier to code something to train them on the data gathered during play. And they would arguably 'learn' faster, since those methods are still effective on little data, unlike say ANNs or SVMs. The problem of course is that the AI would get too good, or overfit to a particular play style, so a good game would try to account for this.
I agree most games can produce more enjoyable AI with much simpler algorithms: easier and faster to develop, and very much easier to debug and fix. This is enough to keep ANNs from being popular, and it's a pretty good reason. However, I also believe an ANN can practically substitute for any of those algorithms, no matter how simple or complex, with the same or better efficiency.
Quote:
As for strong AI, I am a sceptic, at least for the next 200 years. But if any such thing were coded and an ANN was part of it, it would only be a small part of it. Because, you see, an ANN is much too simple to represent whatever intelligence is. On that scale of things, saying f o g(x), with g: X -> R^n, f: R^n -> R^m, n, m in N, represents all intelligence is equivalent. And that is absurd, no? To me the phrase "train an artificial neural net till it can talk" makes little sense. It sounds impossible: how do you train something that is beyond the scope of the technique to do something not understood, using an undefinable method?
This is very much like Darwinian evolution and natural selection, driven by random mutations producing ever more efficient designs under the given external dynamics of nature itself. Trial and error, survival of the fittest. It is a slow process and not a very pragmatic one, but it inexorably leads to nothing but improvement.
If you can believe humans could evolve from single-cell organisms, then it should not be so hard to believe our ANN would eventually utter some meaningful words, after many random trials and errors of course, but word by word... And the more you know, the easier it becomes to learn.
An ANN can be trained from complete zero to do anything, as long as it keeps trying. There is a project where an ANN had an "eye" and could move across a chess board. The eye had no idea where it was or what it was doing, but it had a built-in desire to keep moving chess pieces. It was "punished" for bad moves and "rewarded" when it made good moves. Eventually this eye learned to play chess without ever knowing what it was doing or where it was.
Neural bot for Quake 2 - Genetic Algorithms
http://homepages.paradise.net.nz/nickamy/neuralbot/nb_about.htm
Genetic algorithms use ideas from the natural process of evolution to mould populations into a form which is well suited to their environment. This GA is called a steady-state genetic algorithm because of its constant population size. Roughly every 1-6 minutes, the GA is run on the bots:
1. The fitness (or success) of each bot is evaluated.
2. Two parents are chosen from all the bots. The parents are chosen using one of three selection mechanisms. Generally the better the bot, the more chance it has of being chosen as a parent.
3. All the weights (connection strengths) of the two parent bots' NNs are then encoded into a chromosome, or string of 'DNA', for each parent.
4. These two pieces of DNA are sliced at certain random points and swapped around to create two new pieces of DNA from the parent DNA.
5. Two child bots are chosen (criterion: low fitness). The oven-fresh child DNA is then converted back into synapse weights with a reverse-encoding process and used to overwrite the child bots' NN weights. At this point mutation can occur.
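The five steps above might be sketched roughly like this (a hypothetical toy version; the actual bot's chromosome encoding, selection mechanisms and fitness function differ):

```python
import random

# Toy steady-state GA step mirroring steps 1-5 above. Each "bot" is
# just its chromosome: a flat list of NN weights. The fitness
# function is a stand-in; the real bot's is its in-game success.

random.seed(1)
WEIGHTS = 8         # weights per bot NN (illustrative)
MUTATION_RATE = 0.05

def fitness(chromosome):
    return sum(chromosome)  # stand-in for "success"

def ga_step(population):
    # 1. Evaluate the fitness of each bot.
    ranked = sorted(population, key=fitness)
    # 2. Choose two parents, biased toward the fitter bots.
    mom, dad = ranked[-2:]
    # 3. In this sketch the weights are already flat chromosomes.
    # 4. Slice both chromosomes at a random point and swap the tails.
    cut = random.randrange(1, WEIGHTS)
    children = [mom[:cut] + dad[cut:], dad[:cut] + mom[cut:]]
    # 5. Overwrite the two least-fit bots; mutation can occur here.
    for child in children:
        for i in range(WEIGHTS):
            if random.random() < MUTATION_RATE:
                child[i] = random.uniform(-1, 1)
    ranked[0], ranked[1] = children
    return ranked

population = [[random.uniform(-1, 1) for _ in range(WEIGHTS)]
              for _ in range(6)]
population = ga_step(population)
print(len(population))  # 6 -- the population size stays constant
```

Because the children replace the least-fit bots rather than a whole generation, the population size never changes, which is exactly what "steady-state" refers to.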
Quote: Original post by Emergent
Quote: Original post by LeChuckIsBack
There is a small but important practical use for ANNs in games, and that is speech recognition. Some of the latest games have started to implement command input directly from the human voice; it's quite funny to yell things into the mic, if you ask me. There is a powerful SDK to easily implement it in a game, but I don't remember the name, and it's not free.
AFAIK, most modern speech recognition is based on hidden Markov models.
Markov models have many similarities with neural networks. Both are statistical models which are represented as graphs. Where neural networks use connection strengths and functions, Markov models use probabilities for state transitions and observations. Where neural networks can be feed-forward or recurrent, Markov chains can be left-to-right or recurrent. A key difference is that neural networks are fundamentally parallel; they excel at learning phoneme probabilities from highly parallel audio input. And, just as the challenge in a neural network is to set the appropriate connection weights, the Markov model challenge is finding the appropriate transition and observation probabilities.
The curious quality of recurrent neural networks is that they retain traces of previous inputs, and in this sense incorporate a context of time and memory. The previous states have an impact on the current states, even without additional learning algorithms. But what exactly is being represented, or how it is being represented, is largely unknown. It's like trying to figure out a recursive equation by looking at a picture of its fractal.
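That trace-retaining behaviour is easy to see even in a single recurrent unit. A minimal sketch (the weights are arbitrary illustrative constants, not from any real network):

```python
import math

# One recurrent unit: the hidden state folds each new input into
# its running history, so past inputs leave a trace in the state.

W_IN, W_REC = 0.8, 0.5  # input and recurrent weights (illustrative)

def step(state, x):
    # New state mixes the current input with the previous state.
    return math.tanh(W_IN * x + W_REC * state)

def run(inputs):
    state = 0.0
    for x in inputs:
        state = step(state, x)
    return state

# Same final input, different histories -> different final states.
print(run([0.0, 1.0]) != run([1.0, 1.0]))  # True
```

Two sequences ending in the same input land in different states, which is the "memory" in question; what that state *means*, though, is exactly the opaque part.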
Neural networks are metaphorical black boxes; we know what goes in, we know what comes out, but have no idea what is really going on inside, how knowledge and processes are actually represented. This is true for feed-forward networks and for biological brains. We can isolate locations and their functions, but inferring internal data structures has so far not been possible.
The main difference between biological networks and artificial networks is that the former engage constantly in reality testing. They are situated in a world of constant feedback. Studies of child development indicate that children are wired to interact heavily with their world: to understand physical properties with all of the senses (sensory overlap being an important notion in robotics for statistical constraint satisfaction), and learning how to achieve ends by producing various vocalizations is critical to learning language.
Although it may be a while before computers are given the same opportunities for real-world interaction as humans, this being dependent on the development of adequate sensory apparatus and successful coupling to neural or other computational machinery, we at least know that this is a correct path towards progress in developing intelligent systems.