Some responses to recent comments...
Quote: Original post by Daerax
I do not agree with that. Perhaps there would have been stalls and things might have ended up with slightly different notations, like in, say, complexity theory, perhaps a different way of doing probabilities, but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles, including, off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von Neumann, John Backus...
Most of what arose in western engineering (particularly telecommunications and control) and subsequently computing from the 40s onward was based directly on the understanding of stochastic processes developed by the Russian-Germanic alliance of the late 19th and early 20th century. Generally speaking, western scientists and mathematicians were simply nowhere near the level needed to create this understanding. There is ample evidence of advances in western engineering and computing being directly based on Russian publications, or of Western scientists having spent time with their foreign counterparts and bringing the knowledge back with them.
During the latter half of the 19th century and into the 20th, there was a single strong thread of Russian mathematicians, predominantly coming from the same school at Moscow State University. The mathematics group there was pivotal to the developments of the time. Everything that came later in this area can be shown to have grown from the knowledge developed by this one group. Kolmogorov was one of those who stood out from the crowd, hence my selection of him.
I could provide examples of the direct links and the basis of my opinion if anyone is particularly interested, but I'd end up waffling on for ages, hence the omission from this post! ;)
On the issue of handling time in ANNs...
Feed-forward networks are very poor at handling time, even when you provide inputs covering information at previous time steps, which is essentially an attempt to model the autocorrelation of the process. However, there ARE network architectures that handle time very well... they're just harder to train, because you now have the problem of ensuring that you're seeing all processes passing through a given point at a given time.
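If it helps, here's a rough sketch of that lagged-input trick (the window length and the toy signal are arbitrary choices of mine, purely for illustration): the network only ever sees a fixed window of past samples, so any structure beyond that window is invisible to it.

```python
# Sketch of the "lagged inputs" idea: a feed-forward net is shown a fixed
# window of past samples, which captures the autocorrelation structure of
# the process only up to the window length.
import numpy as np

def tapped_delay_inputs(signal, window):
    """Stack the last `window` samples as the input vector at each time step."""
    rows = []
    for t in range(window, len(signal)):
        rows.append(signal[t - window:t])    # x(t-window) ... x(t-1)
    return np.array(rows), signal[window:]   # inputs, one-step-ahead targets

# Toy signal: a noisy sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.05 * np.random.randn(len(t))

X, y = tapped_delay_inputs(signal, window=10)
print(X.shape, y.shape)   # (1990, 10) (1990,)
# X can now be fed to any ordinary feed-forward regressor; anything further
# back than the 10-step window simply cannot influence the prediction.
```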
Recurrent networks can be designed to model the underlying time-space differential of the process. You can even ensure properties such as stable (non-divergent) learning. I've made some particular contributions in this area, applying recurrent architectures to learning control problems (where you know nothing of the system you are trying to control, only the performance requirements). Having said that, I certainly wouldn't advise anyone to apply these architectures to control problems in games.
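For illustration only (this is a generic toy example, not my control work), here's a minimal Elman-style recurrent cell in which the recurrent weight matrix is rescaled so its spectral radius stays below 1. That rescaling is just one common, simple way of keeping the internal state from diverging; it's not the only route to stability.

```python
# A simple Elman-style recurrent cell. The recurrent weights are rescaled so
# their spectral radius is 0.9, a crude way to keep the state dynamics bounded.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 1, 20

W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
W_rec = rng.normal(size=(n_hidden, n_hidden))
W_rec *= 0.9 / max(abs(np.linalg.eigvals(W_rec)))   # spectral radius -> 0.9

def run(inputs):
    """Propagate a 1-D input sequence through the recurrent state."""
    h = np.zeros(n_hidden)
    states = []
    for x in inputs:
        h = np.tanh(W_in @ np.array([x]) + W_rec @ h)  # state carries the past
        states.append(h.copy())
    return np.array(states)

states = run(np.sin(np.linspace(0, 8 * np.pi, 400)))
print(states.shape)   # (400, 20): a state trajectory a readout layer could learn from
```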
Cheers,
Timkin