Hi all,
To preface this discussion: I know NNs aren't used much in games, presumably because they can have unpredictable results and be hard to tune. But the article below is too juicy for me not to want to discuss it somewhere, and here seemed an okay place to bring it up.
Researchers found some interesting things about the stability and continuity of the mapping between inputs and outputs in NNs, which (for me) cast some pretty big doubts on their overall usefulness for most purposes.
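To make the instability point concrete (this is my own toy illustration, not something from the paper): in high dimensions, a per-coordinate nudge far smaller than the input's natural variation can flip even a simple linear classifier's decision, because the tiny changes add up across every dimension. A deep net is at least locally linear-ish, so the same intuition applies. The classifier, weights, and step size below are all made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 1000
w = rng.choice([-1.0, 1.0], size=dim)   # weights of a made-up linear classifier
x = rng.normal(size=dim)                # a "typical" input, coordinates ~ N(0, 1)

logit = w @ x
# Pick a per-coordinate step just big enough to cross the decision boundary.
eps = abs(logit) / dim + 0.01
# Nudge every coordinate by +-eps in the direction that pushes the logit
# toward the opposite sign.
x_adv = x - eps * np.sign(w) * np.sign(logit)

logit_adv = w @ x_adv
# Each coordinate moved by only eps (a few percent of its typical scale),
# yet the total logit shifts by eps * dim, so the decision flips.
print(eps, np.sign(logit), np.sign(logit_adv))
```

The takeaway is that "close in input space" doesn't imply "close in output space" once you have a thousand dimensions to accumulate tiny changes over, which is roughly the kind of discontinuity the researchers are poking at.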
As far as we can tell, these are issues that don't occur (or occur much less frequently) in organic brains. A few theories on my part about this, I'd be interested to hear other perspectives:
- Our neural net training algorithms are faulty, i.e. very different from what happens in nature.
- The simple layered approach is faulty: in a real brain, signals can bounce around many times before producing an output, rather than hitting each neuron exactly once.
- Models neglect the time factor: we get a continuous stream of input rather than a single snapshot, and we may take time to make a judgement.
- Our ability to act is a crucial factor in learning: we can interact with reality to confirm and clarify our theories.
I welcome your thoughts.
JT
Edit: The tone of the article may be causing confusion, so I found a link to the actual paper: