quote:
Original post by Kylotan
They recognise patterns in a set of data.
That's a little misleading, Kylotan. ANNs are classifiers, nothing more, nothing less. They can be trained to partition an input space along classification boundaries, so that given a previously unseen instance of data that falls within the training domain, they can determine on which side of each boundary the datum resides, thereby uniquely identifying its class membership. One of their strengths, which is also one of their weaknesses, is their ability to formulate boundaries in the presence of noisy data. The network topology dictates the extent to which the class boundaries are affected by noise.
Other than that, Kylotan is correct in saying that ANNs are useful where you need to determine how a perception relates to previous perceptions you have seen (or been trained on). This is a fundamental task in AI. However, ANNs are not the only, nor necessarily the best tool to use in every such situation.
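To make the "classifier" view concrete, here's a minimal sketch in Python: a single perceptron (about the simplest ANN there is) learning a linear boundary between two classes. The data points, learning rate, and epoch count are made up purely for illustration.

```python
# A single perceptron learning a linear decision boundary.
# Labels are +1 / -1; training nudges the boundary toward
# misclassified points until both classes are separated.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b so that sign(w.x + b) matches the labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else -1
            if pred != y:  # misclassified: move the boundary toward this point
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def classify(w, b, point):
    """Report which side of the learned boundary a point falls on."""
    x1, x2 = point
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else -1

# Two toy classes, roughly separable by the line x1 + x2 = 1
samples = [(0.0, 0.0), (0.2, 0.1), (1.0, 1.0), (0.9, 0.8)]
labels = [-1, -1, 1, 1]
w, b = train_perceptron(samples, labels)
```

Once trained, `classify(w, b, (2.0, 2.0))` lands on the +1 side of the boundary even though that point was never seen during training, which is exactly the "previously unseen instance within the training domain" behaviour described above.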
quote:
Original post by BiTwhise
Wouldn't it be possible to implement some sort of recursive/iterative logic by having neurons fire backwards (as in to a previous layer)? And perhaps a memory system, by having a cluster of neurons that are "charged", with a layer of control neurons that take input passes on to the cluster..
Three words:
spiking neural networks. Also known as neuronal networks. In biological neuronal systems, encoded information can be observed in the number and frequency of discharge spikes propagating along a neuron. (Side note: it's not strictly true to say that all information is encoded in spikes; some information is also encoded in the local concentration gradients of intra-cellular and extra-cellular ions, while other information is encoded in the time-varying phase synchronisation between neuronal clusters.) SNNs and neuronal networks include inhibitory and excitatory connections in a non-layered topology, meaning that information isn't propagated solely from inputs to outputs, but also
around network loops. In this way, they exhibit a kind of time-local memory. Of course, they're also harder to work with than typical feed-forward networks.
I hope this helps with your understanding.
Cheers,
Timkin