
Backpropagation of the error

Started March 07, 2006 07:01 PM
1 comment, last by jolyqr 18 years, 8 months ago
When using backpropagation of the error, what condition stops the training? E.g., on a perceptron, I know that while the output of the network differs from the target values, the training should continue. I have some books that cover backpropagation of the error, but they don't even discuss the stopping condition. So if someone has an idea... Cheers!
The training of perceptrons can go on endlessly. At some point you'll have to decide how much error you want in the system, in other words, what error you are willing to live with. Overtraining can give you undesirable results, but it depends largely on the application.
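To make that concrete, here is a minimal sketch (hypothetical names, not from the thread) of a training loop whose stopping condition combines an error tolerance with an epoch cap, using the classic perceptron learning rule:

```python
def train_perceptron(samples, lr=0.1, tol=0.01, max_epochs=10000):
    # samples: list of (inputs, target) pairs; targets are 0 or 1.
    n = len(samples[0][0])
    w = [0.0] * n  # weights
    b = 0.0        # bias
    for epoch in range(max_epochs):
        sse = 0.0  # sum of squared errors over this epoch
        for x, t in samples:
            # threshold activation
            y = 1.0 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0.0
            err = t - y
            sse += err * err
            # perceptron weight update
            for i in range(n):
                w[i] += lr * err * x[i]
            b += lr * err
        if sse <= tol:              # stop: error is small enough to live with
            return w, b, epoch
    return w, b, max_epochs         # stop: gave up after max_epochs

# AND is linearly separable, so training reaches zero error and stops early.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, epochs = train_perceptron(data)
```

The same two-part test (error threshold OR maximum number of epochs) is the usual stopping condition for backpropagation as well; a third common one, not shown here, is early stopping based on error on a held-out validation set.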
OK, I see.

