Quote:
If you add a node to a network, you change the input-output mapping, just as if you add a node to a probabilistic network. In both cases, you need to re-condition the network/model on the available data.
I was speaking in terms of the backpropagation algorithm here and how easily a network trained with it can be expanded by adding similar nodes.
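To be concrete about what I mean by "expanding with similar nodes," here's a rough numpy sketch (my own toy example, nothing from this thread): a one-hidden-layer net trained with plain backprop, where widening the hidden layer just means resizing the weight matrices and re-running the same training loop on the same data.

# Toy backprop net; "adding a node" only changes n_hidden, nothing else.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, n_hidden, epochs=5000, lr=0.5, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input -> hidden
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))           # hidden -> output
    for _ in range(epochs):
        h = sigmoid(X @ W1)             # forward pass
        out = sigmoid(h @ W2)
        err = y - out                   # output error
        d_out = err * out * (1 - out)   # backprop through output sigmoid
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 += lr * h.T @ d_out          # weight updates
        W1 += lr * X.T @ d_hid
    return W1, W2

# XOR toy data, just to have something to re-condition on
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

for n_hidden in (2, 4):                 # widen the hidden layer: same code, same data
    W1, W2 = train(X, y, n_hidden)
    preds = sigmoid(sigmoid(X @ W1) @ W2)
    print(n_hidden, "hidden nodes ->", preds.ravel().round(2))

You do still have to re-train after widening the layer, which I think is your point about re-conditioning on the data; my point is only that the mechanics of doing so don't change.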
Quote:
That would only be true if you took a very naive approach to HMMs or modelling pdfs. Sparse data sets can be handled in many contexts (ANNs, HMMs, Decision Trees, etc) and in all of these (ANNs included), the accuracy of the classification is dependent on the information contained within the data. If you have less data, all of these techniques suffer. I've never found it to be true that ANNs require 'less' data than other methods.
From my experience, you need quite a bit of data to be fairly confident in a pdf/HMM estimate, and I'm assuming that would be the case for gun shots too. On the other hand, neural networks have an ability to generalize (interpolate); HMMs/pdfs don't have that capability, at least in my experience. If you have some links on how to generate accurate pdfs from sparse data, I'd be interested. Modeling with sparse data is something I've been looking into very closely lately.
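For what it's worth, the one trick I keep coming back to for pdfs from sparse data is kernel density estimation, i.e. smoothing instead of binning. A quick sketch of my own (using Silverman's rule-of-thumb bandwidth, just to show the idea, not a claim that this solves the sparse-data problem):

# 1-D Gaussian kernel density estimate from a small sample
import numpy as np

def kde(samples, query, bandwidth=None):
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    if bandwidth is None:
        # Silverman's rule of thumb for a Gaussian kernel in one dimension
        bandwidth = 1.06 * samples.std(ddof=1) * n ** (-1.0 / 5.0)
    # Sum of Gaussian bumps centred on each sample, normalised to integrate to 1
    diffs = (query[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * diffs ** 2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
sparse = rng.normal(loc=0.0, scale=1.0, size=15)   # only 15 samples
xs = np.linspace(-4, 4, 9)
print(np.round(kde(sparse, xs), 3))                # smoothed density estimate

It still suffers when the data is thin, of course, it just fails more gracefully than a raw histogram.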
Quote:
This is why ANNs have had so much widespread appeal (easy to implement when you don't know much about the domain) and have failed to live up to their promise as a general classification/learning tool (because its nearly impossible to obtain representative data... and when you do, certain architectures are nearly impossible to train).
This I might actually agree with to some extent. A neural network is a general classification tool in the sense that it can learn an input/output mapping. Representative data can be difficult to find, but that is a problem for every classifier, not just NNs. It's all about the data. However, in the case of gun shots an NN might work, since the data is available. I haven't come across an NN that was impossible to train, though, unless you have very bad data. To get representative data, there are techniques such as Taguchi matrices. The key is to develop a good plan for gathering representative data, and usually that is the last thing on everyone's mind. It should be the first.
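As a rough example of what I mean by planning the data collection: here's the standard Taguchi L4 orthogonal array covering three two-level recording factors in only four sessions. The factor names are made-up placeholders for the gun shot case, not a real experimental design.

# L4(2^3) orthogonal array: every pair of columns contains each level combination
L4 = [
    (1, 1, 1),
    (1, 2, 2),
    (2, 1, 2),
    (2, 2, 1),
]

# Hypothetical two-level recording factors (placeholders only)
factors = {
    "distance":    {1: "near",   2: "far"},
    "environment": {1: "indoor", 2: "outdoor"},
    "microphone":  {1: "type A", 2: "type B"},
}

for run, levels in enumerate(L4, start=1):
    setting = {name: factors[name][lvl] for name, lvl in zip(factors, levels)}
    print(f"recording session {run}: {setting}")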
I like the idea of using minimum embedding dimension as a feature. I used it when I was working with chaotic systems. It's been a while, but is it fast to compute?
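If it helps, the way I've seen the minimum embedding dimension estimated is the false nearest neighbours test (Kennel et al.), which is roughly O(N^2) per candidate dimension if you do the neighbour search naively, so it's not instant but it's fine on modest windows (a k-d tree speeds the search up a lot). A quick sketch of the idea, again my own toy code rather than anything from this thread:

# Naive false-nearest-neighbours fraction for a scalar time series
import numpy as np

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def false_neighbour_fraction(x, dim, tau=1, ratio_tol=15.0):
    emb = delay_embed(x, dim + 1, tau)         # one extra coordinate for the test
    y, extra = emb[:, :dim], emb[:, dim]
    false = 0
    for i in range(len(y)):
        d = np.linalg.norm(y - y[i], axis=1)
        d[i] = np.inf                           # exclude the point itself
        j = np.argmin(d)                        # nearest neighbour in dimension dim
        if abs(extra[i] - extra[j]) / max(d[j], 1e-12) > ratio_tol:
            false += 1                          # neighbour separates when dim grows
    return false / len(y)

# Toy signal: a noisy sine; the false-neighbour fraction should drop quickly
t = np.linspace(0, 20 * np.pi, 800)
x = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
for dim in range(1, 5):
    print(dim, round(false_neighbour_fraction(x, dim, tau=10), 3))

You take the smallest dimension where the fraction drops near zero as the minimum embedding dimension.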
One more thing I want to reiterate. A neural network is a perfectly good classifier. However, just like any modeling tool, it will not perform miracles on bad data. And just like any modeling tool, if you try to go outside the bounds of the model's purpose, it will fail.
I forgot to say that you seem to be a perfect moderator for this forum, Timkin. Glad you are around. ++'s. I'm trying to help, but sometimes I'm a little sloppy. I'm counting on you to clean up my messes if you don't mind.
[Edited by - NickGeorgia on January 17, 2006 5:54:05 AM]