Hi Guys,
I was wondering about back prop. The error values you calculate from the formula

error[i] = desired[i] - output[i];

(I'm aware that in some literature the sign is the opposite.) Does this error term get backpropagated only to the previous layer of neurons, or can it be reused for the layers further back, or does a new error term have to be calculated and applied to each layer going back?
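To make that concrete, here is roughly how I compute the output-layer error at the moment. This is just a minimal C sketch; the function and array names are my own placeholders, not from any library:

#define NUM_OUTPUTS 8

/* Output-layer error: one term per output neuron.
   desired[] holds the targets for this training example,
   output[] the activations the net actually produced. */
void output_error(const double desired[], const double output[],
                  double error[])
{
    for (int i = 0; i < NUM_OUTPUTS; i++)
        error[i] = desired[i] - output[i];
}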
Also, consider the network structure below:
I   1   2   O
            o
o           o
    o       o
o       o   o
    o       o
o       o   o
    o       o
o           o
Layers I and O are the input and output layers respectively, 1 and 2 are hidden layers, and each ''o'' is a neuron. This is a feed-forward NN. There are 8 outputs, so there would be 8 error terms used for backpropagation, right? So how will I use 8 errors to adjust the weights in layer 2? Layer 2 has two neurons and would have 6 incoming weights all together (I think!), so how would I use 8 error terms to adjust 6 weights? And let me ask the first question again: once I have adjusted the weights in layer 2, can I use the same error terms to adjust the weights in layer 1, or do they have to be calculated again?
Do the output neurons require weights of their own on the connections coming in from layer 2?
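Is the idea something like the following? This is a rough C sketch of what I think the layer-2-to-output step looks like, assuming sigmoid activations everywhere; all the names (w_out, delta_out, delta_h2, lr) are just mine, as is the layout w_out[j][i] = weight from layer-2 neuron i to output neuron j:

#define N_HIDDEN2 2
#define N_OUTPUTS 8

/* out[j], desired[j] = output-layer activations and targets.
   h2[i] = activation of layer-2 neuron i (sigmoid assumed).
   lr = learning rate. delta_h2[] receives the error handed
   back to layer 2 for the next step of backprop. */
void backprop_step(double w_out[N_OUTPUTS][N_HIDDEN2],
                   const double out[], const double desired[],
                   const double h2[], double lr,
                   double delta_h2[])
{
    double delta_out[N_OUTPUTS];

    /* one delta per output neuron: error times sigmoid derivative */
    for (int j = 0; j < N_OUTPUTS; j++)
        delta_out[j] = (desired[j] - out[j]) * out[j] * (1.0 - out[j]);

    /* each layer-2 neuron collects a weighted sum of ALL 8 output
       deltas -- so the 8 errors funnel down into 2 new error terms */
    for (int i = 0; i < N_HIDDEN2; i++) {
        double sum = 0.0;
        for (int j = 0; j < N_OUTPUTS; j++)
            sum += delta_out[j] * w_out[j][i];
        delta_h2[i] = sum * h2[i] * (1.0 - h2[i]);
    }

    /* update the 2x8 = 16 weights between layer 2 and the outputs */
    for (int j = 0; j < N_OUTPUTS; j++)
        for (int i = 0; i < N_HIDDEN2; i++)
            w_out[j][i] += lr * delta_out[j] * h2[i];
}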
Finally, do the input and output layers have activation functions on their neurons? Before a neuron in the input layer fires and sends its output to hidden layer 1, must the value go through an activation function, or does the neuron just pass its input straight through to layer 1? Likewise, before we read the outputs off the output layer, must the values go through an activation function, or do we just take them as they are?
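In code terms, this is what I mean. A tiny C sketch with my own naming (input_raw, net, sigmoid), showing the two assumptions I'm unsure about:

#include <math.h>

#define NUM_INPUTS  4
#define NUM_OUTPUTS 8

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Input layer: as I currently assume, each input neuron just
   forwards its raw value to hidden layer 1, with no activation. */
void input_layer(const double input_raw[], double input_out[])
{
    for (int i = 0; i < NUM_INPUTS; i++)
        input_out[i] = input_raw[i];  /* ...or sigmoid(input_raw[i])? */
}

/* Output layer: here I DO apply the activation to the weighted
   sum (net) before reading the result off -- is that right? */
void output_layer(const double net[], double output[])
{
    for (int j = 0; j < NUM_OUTPUTS; j++)
        output[j] = sigmoid(net[j]);  /* ...or just output[j] = net[j]? */
}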
Sorry for all the questions; it's a bad habit to ask so much, but this is for a university project and I'm pressed for time even though the semester hasn't started yet.
Thanks in advance.
DarkStar
UK