Noob-a-fied
Hey, I'm trying to understand the basics of backpropagation.
I'm noobing it up a lot here, a la VBScript, and nothing is optimized for jack or... you know what.
Here is what is happening: I'm putting in 8 inputs between 0 and 1, with 2 outputs also between 0 and 1. Random weights have been generated for all layers.
The inputs going from the input layer to the hidden layer have each been multiplied by their randomly assigned weights and then summed (first time around).
The sigmoid activation function is then run on the summed weight*input of each neuron in the hidden layer, and the values of the activation functions are kept in a separate array. Same series for the output neurons.
This is where I start losing it...
Next, the error of the entire network is calculated by adding together the difference between output neuron 1 (after the activation function) and the answer desired for it, plus the difference between output neuron 2 and the answer desired for it.
The error for each output neuron is then calculated with respect to the entire network as: output_after_activation * (1 - output_after_activation) * (desired_value - output_after_activation).
The delta for the weight change is the learning rate (eta?) * the error for that output neuron * the value of the neuron before it (its activation, I think)...
This is where I really start dropping the ball... well, I drop it here.
The math isn't as confusing as what goes where. Can someone show an iteration, math and all, in lay terms, just using numbers and no notation? I can understand it, but only after a lot of staring and Googling... please, no C++.
Thank you.