Help me with my back propagation formula
I still haven't been able to crack my ANN problem. Instead of posting source and asking for help, I'm asking if anyone can tell me whether I got the maths right or not.
First off, here's how I describe the different parts of the net:
w(l,f,n) - Weight for input f to neuron n in layer l.
x(l,n) - Output from neuron n in layer l. l=0 is the input from the environment and n=0 is the bias.
d(l,n) - Delta for neuron n in layer l. Used in the back propagation function.
Now, here's the layout of my net:

Input      Hidden_1     Hidden_2     Output_Neuron
x(0,0)     x(1,0)       x(2,0)       <- Biases
x(0,1)     x(1,1)       x(2,1)
x(0,2)     x(1,2)       x(2,2)       x(3,0)   // Total net output

w(0,0,0)   w(1,0,0)     w(2,0,0)
w(0,1,0)   w(1,1,0)     w(2,1,0)
w(0,2,0)   w(1,2,0)     w(2,2,0)
w(0,0,1)   w(1,0,1)
w(0,1,1)   w(1,1,1)
w(0,2,1)   w(1,2,1)

d(0,n)     d(1,n)       d(2,n)       // Deltas for each layer
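To make the indexing concrete, this is roughly how the net maps onto arrays (not my actual source, just a sketch; it assumes sigmoid activations, which is what the x * (1 - x) factor in the formulas below implies):

#include <cmath>

double x[4][3];     // x[l][n]: outputs; x[l][0] is the bias input (kept at 1.0), x[3][0] is the net output
double w[3][3][2];  // w[l][f][n]: weight for input f to neuron n in layer l
double d[3][2];     // d[l][n]: delta for neuron n in layer l

double sigmoid(double a) { return 1.0 / (1.0 + std::exp(-a)); }

// Forward pass through hidden layer l (l = 0 or 1), two neurons each
void forward_hidden(int l)
{
    for (int n = 0; n < 2; ++n) {
        double sum = 0.0;
        for (int f = 0; f < 3; ++f)       // f = 0 picks up the bias x[l][0]
            sum += w[l][f][n] * x[l][f];
        x[l + 1][n + 1] = sigmoid(sum);   // n + 1 because x[l+1][0] is the bias slot
    }
}

// Forward pass through the single output neuron
void forward_output()
{
    double sum = 0.0;
    for (int f = 0; f < 3; ++f)
        sum += w[2][f][0] * x[2][f];
    x[3][0] = sigmoid(sum);               // total net output
}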
I calculate the delta for the output neuron with this formula (the d on the right-hand side is the desired output for the current pattern, not one of the d(l,n) deltas):

// output layer delta, we only have one output
// d(2,0) = x(3,0) * (1 - x(3,0)) * (d - x(3,0))
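In array form (continuing the sketch above; t is just my name here for the desired output):

void output_delta(double t)               // t = the desired output, the "d" in the formula above
{
    d[2][0] = x[3][0] * (1.0 - x[3][0]) * (t - x[3][0]);
}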
Deltas for the hidden layers are calculated like this:
// d(l,n) = x(l,n) * (1 - x(l,n)) * w(l+1,n,n-1) * d(l+1,n)
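One thing I keep second-guessing: every derivation of backprop I've seen sums over all the neurons that a hidden neuron feeds into, instead of picking a single weight like my formula does. With the arrays from the sketch above, that sum would look like this:

void hidden_deltas()
{
    // Hidden_2 deltas (weight layer 1): only one downstream neuron, the output
    for (int n = 0; n < 2; ++n) {
        double out = x[2][n + 1];
        d[1][n] = out * (1.0 - out) * w[2][n + 1][0] * d[2][0];
    }

    // Hidden_1 deltas (weight layer 0): sum over both Hidden_2 neurons
    for (int n = 0; n < 2; ++n) {
        double out = x[1][n + 1];
        double sum = 0.0;
        for (int k = 0; k < 2; ++k)
            sum += w[1][n + 1][k] * d[1][k];
        d[0][n] = out * (1.0 - out) * sum;
    }
}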
Now I alter the weights; that's done with this formula (h is the learning constant):
// Deltas calculated, now alter the weights
// w(l,f,n) = h * x(l,f) * d(l,n)
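Written out like that, the = looks like it replaces the weight entirely; in everything I've read the correction is added onto the old weight instead, i.e. w(l,f,n) += h * x(l,f) * d(l,n). As a loop over the arrays above:

void update_weights(double h)             // h = learning constant
{
    const int neurons[3] = { 2, 2, 1 };   // neurons per weight layer
    for (int l = 0; l < 3; ++l)
        for (int n = 0; n < neurons[l]; ++n)
            for (int f = 0; f < 3; ++f)   // f = 0 updates the bias weight
                w[l][f][n] += h * x[l][f] * d[l][n];
}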
I have gone through the code several times and I cannot find anything wrong, so I've concluded that I must have misunderstood the maths. Could anyone tell me whether these formulas would work in a neural net? If the maths is correct, at least I know the error is somewhere in the code.
Anyone, please^2 help me out with the math part. Thanks in advance. [smile]