
BackPropagation Help

Started by August 08, 2011 11:05 PM
11 comments, last by Adaline 13 years, 6 months ago
How do you compute your initial weights? (A random value between -1 and 1 can be a good choice.)

How do you compute the error?

If (a-b) < 0 then the wanted output is just 0
If (a-b) > 0 then the wanted output is 1

Did you add a bias to your units?
You have to add another weight to each unit to represent the bias; do you do that?

potential = weight1*a + weight2*b + weight3*1 (or -1, it doesn't matter)
output= sigmoid(potential)

Is that what you do?
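Not from the thread itself, but the unit just described can be sketched in a few lines of Python (the thread's code is VBA; the weight names here follow the formula above):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(a, b, weight1, weight2, weight3):
    # weight3 multiplies a constant input of 1: that's the bias weight
    potential = weight1 * a + weight2 * b + weight3 * 1.0
    return sigmoid(potential)

# with opposite weights and no bias the unit compares a and b
print(unit_output(1.0, 0.0, 1.0, -1.0, 0.0))  # above 0.5: a > b
```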
That's the code:

Sub NN()

'const

e = 2.718281828
alpha = 0.25

'get Data


a = Cells(2, 1)
b = Cells(2, 2)


truth = a - b


W13 = Cells(4, 1)
W14 = Cells(4, 2)
W23 = Cells(4, 3)
W24 = Cells(4, 4)
W35 = Cells(6, 2)
W45 = Cells(6, 3)


'answer
Sum1 = a + b
Sum2 = a + b



f1 = 1 / (1 + e ^ (-Sum1))
f2 = 1 / (1 + e ^ (-Sum2))

Sum3 = f1 + W13 + f2 * W23
Sum4 = f1 * W14 + f2 * W24

f3 = 1 / (1 + e ^ (-Sum3))
f4 = 1 / (1 + e ^ (-Sum4))

Sum5 = f3 * W35 + f4 * W45
f5 = 1 / (1 + e ^ (-Sum5))
answer = -1 + f5 * 2

Cells(2, 4) = answer


'backPropagate
err5 = (truth - answer + 1) / 2
err3 = err5 * W35
err4 = err5 * W45
err1 = err3 * W13 + err4 * W14
err2 = err3 * W23 + err4 * W24

Cells(2, 5) = err5


'update


W13 = W13 + alpha * (f3 * (1 - f3)) * (f1 * W13) * err3
W23 = W23 + alpha * (f3 * (1 - f3)) * (f2 * W23) * err3

W14 = W14 + alpha * (f4 * (1 - f4)) * (f1 * W14) * err4
W24 = W24 + alpha * (f4 * (1 - f4)) * (f2 * W24) * err4


W35 = W35 + alpha * (f5 * (1 - f5)) * (f3 * W35) * err5
W45 = W45 + alpha * (f5 * (1 - f5)) * (f4 * W45) * err5


'show weights

Cells(4, 1) = W13
Cells(4, 2) = W14
Cells(4, 3) = W23
Cells(4, 4) = W24
Cells(6, 2) = W35
Cells(6, 3) = W45
End Sub



Actually I didn't add any bias, because I didn't see it in some of the written algorithms. So how is it supposed to work?
Isn't the bias an unknown error that is expected to be zero? What should I do with it?
Hello :)

I quoted your code below, with my changes:

Sub NN()

'const

e = 2.718281828
alpha = 0.25

'get Data


a = Cells(2, 1)
b = Cells(2, 2)


if (a-b)<0 then truth=0 else truth=1


W13 = Cells(4, 1)
W14 = Cells(4, 2)
W23 = Cells(4, 3)
W24 = Cells(4, 4)
W35 = Cells(6, 2)
W45 = Cells(6, 3)


f1 = 1 / (1 + e ^ (-a))
f2 = 1 / (1 + e ^ (-b))

Sum3 = f1 * W13 + f2 * W23
Sum4 = f1 * W14 + f2 * W24

f3 = 1 / (1 + e ^ (-Sum3))
f4 = 1 / (1 + e ^ (-Sum4))

Sum5 = f3 * W35 + f4 * W45
f5 = 1 / (1 + e ^ (-Sum5))
answer = f5

Cells(2, 4) = answer


'backPropagate
err5 = truth - answer
err3 = err5 * W35
err4 = err5 * W45
err1 = err3 * W13 + err4 * W14
err2 = err3 * W23 + err4 * W24

Cells(2, 5) = err5


'update


W13 = W13 + alpha * (f3 * (1 - f3)) * f1 * err3
W23 = W23 + alpha * (f3 * (1 - f3)) * f2 * err3

W14 = W14 + alpha * (f4 * (1 - f4)) * f1 * err4
W24 = W24 + alpha * (f4 * (1 - f4)) * f2 * err4


W35 = W35 + alpha * (f5 * (1 - f5)) * f3 * err5
W45 = W45 + alpha * (f5 * (1 - f5)) * f4 * err5


'show weights

Cells(4, 1) = W13
Cells(4, 2) = W14
Cells(4, 3) = W23
Cells(4, 4) = W24
Cells(6, 2) = W35
Cells(6, 3) = W45
End Sub
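Here is a transliteration of the corrected routine into Python (my own sketch, not from the thread), with a small loop added that drills one example so you can watch the output move towards the target:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(a, b, w, alpha=0.25):
    # forward pass, mirroring the corrected VBA: squash the inputs,
    # then two hidden units (3, 4), then one output unit (5)
    f1 = sigmoid(a)
    f2 = sigmoid(b)
    f3 = sigmoid(f1 * w['13'] + f2 * w['23'])
    f4 = sigmoid(f1 * w['14'] + f2 * w['24'])
    f5 = sigmoid(f3 * w['35'] + f4 * w['45'])

    truth = 0.0 if (a - b) < 0 else 1.0
    err5 = truth - f5
    err3 = err5 * w['35']   # back-propagated errors (err1/err2 are unused
    err4 = err5 * w['45']   # in the VBA, so they are omitted here)

    # updates: delta = alpha * f'(sum) * input * error, as in the VBA
    w['13'] += alpha * f3 * (1 - f3) * f1 * err3
    w['23'] += alpha * f3 * (1 - f3) * f2 * err3
    w['14'] += alpha * f4 * (1 - f4) * f1 * err4
    w['24'] += alpha * f4 * (1 - f4) * f2 * err4
    w['35'] += alpha * f5 * (1 - f5) * f3 * err5
    w['45'] += alpha * f5 * (1 - f5) * f4 * err5
    return f5

random.seed(1)
# random initial weights between -1 and 1, as suggested above
w = {k: random.uniform(-1, 1) for k in ('13', '14', '23', '24', '35', '45')}

# drill one example (a > b, so the wanted output is 1) and watch f5 climb
for _ in range(500):
    out = train_step(1.0, -1.0, w)
print(out)
```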


In this case you don't need a bias, but in the general case you must add one. It can be seen as an added constant input (it translates the activation function along the x axis).
To keep things simpler, maybe you can avoid passing the input-layer values through the sigmoid (keep them in the range -1 to 1, for example).
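To see the translation along the x axis concretely (the w and bias values below are arbitrary, just for illustration):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

w, bias = 2.0, 1.5  # arbitrary illustrative values

# without a bias the unit crosses 0.5 only at x = 0;
# with a bias weight the crossing shifts to x = -bias/w
for x in (-0.75, 0.0):
    print(x, sigmoid(w * x), sigmoid(w * x + bias))
```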

Let me know if it works better now :rolleyes:

EDIT :
If you doubt the capability of this net to learn the task, even a single unit with 2 inputs (and no bias...) can do it: say the weight that brings a is W (> 0); then the other weight is -W (whatever abs(W) is).
Once you can make this work, you can use bigger nets that approximate (a-b) with whatever accuracy you want (another option is to use a linear activation function in the output layer; it would fit this problem better and could effectively compute a-b).
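That single-unit claim is easy to check; a quick Python sketch (W = 4.0 is an arbitrary positive choice):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def compare(a, b, W=4.0):
    # weights W and -W, no bias: potential = W*(a - b),
    # so the output is above 0.5 exactly when a > b
    return sigmoid(W * a - W * b)

print(compare(0.9, 0.1))  # above 0.5: a is greater
print(compare(0.1, 0.9))  # below 0.5: b is greater
```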

Maybe you could study the Adaline network and the single-layer perceptron. Before putting units into complex networks, experiment with single units as much as you can (try changing the activation function, solve different problems with your net, and try to figure out why multi-layer networks exist: what are the limits of a single-layer net?).
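As a starting point for that study, here is a minimal single-layer perceptron in Python (my own sketch): it learns AND, which is linearly separable, but can never get XOR fully right, which is one concrete limit of a single-layer net.

```python
def perceptron_train(samples, epochs=50, lr=0.1):
    # two input weights plus a bias weight (constant input of 1)
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = target - out          # perceptron learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def errors(w, samples):
    return sum(1 for (x1, x2), t in samples
               if (1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0) != t)

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(errors(perceptron_train(AND), AND))  # 0: separable, so it converges
print(errors(perceptron_train(XOR), XOR))  # at least 1: needs a hidden layer
```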

A few years ago I made a system that can learn to play the snake game (with limitations). It's in VB.NET; let me know if you're interested and I'll send it to you.

EDIT: when you come back from vacation, I'll post pseudocode that teaches a single unit to converge towards (a-b)...
BTW, have a good vacation! :cool:

This topic is closed to new replies.
