
MLP + overfitting

Started September 26, 2007 02:45 AM
13 comments, last by Idov 17 years, 1 month ago
It's not necessary for a neural network to go through a period of being trained before it becomes overtrained. Your network could be going straight to the overtrained state.
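For anyone who wants to see this for themselves, here is a minimal sketch (assuming scikit-learn's MLPClassifier and a small synthetic, noisy dataset purely for illustration) that logs training and validation loss after every epoch. If the validation loss never improves while the training loss keeps falling, the network was effectively overfitting from the first epoch onward.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Small, noisy dataset -- conditions under which overfitting shows up quickly.
X, y = make_classification(n_samples=200, n_features=20, n_informative=3,
                           flip_y=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(50,), learning_rate_init=0.01, random_state=0)
classes = np.unique(y_train)
for epoch in range(100):
    # partial_fit runs one pass over the data, so we can log both losses per epoch.
    net.partial_fit(X_train, y_train, classes=classes)
    train_loss = log_loss(y_train, net.predict_proba(X_train))
    val_loss = log_loss(y_val, net.predict_proba(X_val))
    print(f"epoch {epoch:3d}  train {train_loss:.3f}  validation {val_loss:.3f}")
# If the validation loss only ever climbs while the training loss falls, the
# network went "straight to the overtrained state" instead of generalizing first.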
Quote: Original post by Vorpy
It's not necessary for a neural network to go through a period of being trained before it becomes overtrained. Your network could be going straight to the overtrained state.


What? How can that be?
The principle behind developing ensembles of this sort is that each member of the ensemble must be reasonably accurate, and that there must be disagreement among the members. This last point sounds a little counterintuitive, but you need to ensure that there is some random variation between your ensemble members. This is often done by sampling the training set and developing the individual members on slightly different training sets. If you don't ensure some variation among the members, you'll essentially get a large number of members that always agree, which does nothing to improve the predictions.

-Kirk
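To make Kirk's point concrete, here is a rough sketch of a bagged ensemble, using scikit-learn's MLPClassifier and resample purely as stand-ins (the thread does not name a library): each member is trained on its own bootstrap resample of the training set, so the members disagree slightly, and their predictions are averaged.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

def train_bagged_mlps(X, y, n_members=10):
    """Train each ensemble member on its own bootstrap resample of the training set."""
    members = []
    for i in range(n_members):
        X_boot, y_boot = resample(X, y, random_state=i)  # sample with replacement
        net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500, random_state=i)
        net.fit(X_boot, y_boot)
        members.append(net)
    return members

def ensemble_predict(members, X):
    """Average the members' class probabilities and pick the most likely class."""
    mean_probs = np.mean([m.predict_proba(X) for m in members], axis=0)
    return members[0].classes_[np.argmax(mean_probs, axis=1)]

If you drop the resample step, every member sees identical data and you end up with the "large number of members that always agree" situation Kirk describes.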
Quote: Original post by Idov
Quote: Original post by Vorpy
It's not necessary for a neural network to go through a period of being trained before it becomes overtrained. Your network could be going straight to the overtrained state.


What? How can that be?


Neural networks can be really unintuitive. It depends on the function you are trying to learn, noise in the data, etc. Have you tried using networks with different hidden layer sizes? It's possible in some cases for networks with fewer hidden nodes to overfit more than networks with too many hidden nodes (see this paper for examples).
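As a quick way to see that effect, here is a sketch (again using scikit-learn for illustration; the hidden layer sizes and dataset are arbitrary) that trains the same noisy problem with several hidden layer sizes and reports the gap between training and held-out accuracy, which is the usual overfitting signature.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Noisy problem so that overfitting is easy to provoke.
X, y = make_classification(n_samples=300, n_features=20, flip_y=0.2, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

for hidden in (2, 10, 50, 200):
    net = MLPClassifier(hidden_layer_sizes=(hidden,), max_iter=1000, random_state=1)
    net.fit(X_train, y_train)
    gap = net.score(X_train, y_train) - net.score(X_test, y_test)
    print(f"{hidden:4d} hidden nodes: train/test accuracy gap = {gap:.3f}")
# A large gap means the network memorized the training data; the gap does not
# always shrink monotonically as you remove hidden nodes, which is the point above.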
Ok, thanks! [smile]

