Weighing Neural Nets

Started by March 17, 2006 11:38 AM
5 comments, last by Degski 18 years, 7 months ago
On AI_junkie I learned about neural nets and how to train them using genetic algorithms. How would I train them manually? In one tutorial on GameDev they programmed a neural net without a genetic algorithm. Is it just a process of tinkering with values?
Another popular way of adjusting weights is backpropagation.
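For concreteness, here is a minimal sketch of how backpropagation adjusts weights for a single sigmoid neuron. All names, sample values, and the learning rate are illustrative, not taken from any particular tutorial:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One training sample: two inputs and a desired output in [0, 1],
# since a sigmoid unit cannot produce values outside that range.
inputs = [0.1, 0.5]
target = 0.9

weights = [random.uniform(-1.0, 1.0) for _ in inputs]
bias = random.uniform(-1.0, 1.0)
rate = 0.5  # learning rate (illustrative)

for _ in range(10000):
    # forward pass
    out = sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)
    # backward pass: gradient of the squared error for a sigmoid unit
    delta = (out - target) * out * (1.0 - out)
    weights = [w - rate * delta * x for w, x in zip(weights, inputs)]
    bias -= rate * delta

print(out)  # converges to roughly 0.9
```

The same idea extends to multi-layer networks, where each layer's delta is computed from the deltas of the layer after it.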

I have now read this method, but what if I have two input/output pairs for the network, like:

Inputs: 1,5 Desired Output: 9
Inputs: 2,5 Desired Output: 6

How would backpropagation work here?
Quote: Original post by brwarner
I have now read this method, but what if I have two input/output pairs for the network, like:

Inputs: 1,5 Desired Output: 9
Inputs: 2,5 Desired Output: 6

How would backpropagation work here?


It would learn both values. I don't see what's special about this example.
I wasn't sure whether you could use two samples, going by the graphical explanation on Wikipedia.
Is there any major speed difference between the two?
I don't understand what you're saying; what do you mean by training them manually? In a fully connected network, every weight change affects all values downstream. Unless you have something trivial, like two input nodes and one output node, this is an impossible task except by the methods found in the literature: backprop, second-order methods (conjugate gradient, Levenberg-Marquardt, etc.), and genetic algorithms. For the latter to work, though, you'll need some kind of performance measure. Simulated annealing is another one (but basically a random search, as is the GA, by the way). For a network to produce 6 or 9 as outputs, you'll need linear output units; normally the output is between 0 and 1 or -1 and 1 (you could scale it up, of course).
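To make the scaling point concrete: with a sigmoid output unit you would map targets like 6 and 9 into [0, 1] before training, and map the network's output back afterwards. A minimal sketch, assuming an output range of 0 to 10 (an arbitrary choice for this example):

```python
# Map targets into [0, 1] for a sigmoid output unit, and back again.
# The range 0..10 is an assumption made for this example.
lo, hi = 0.0, 10.0

def scale(y):
    return (y - lo) / (hi - lo)

def unscale(y):
    return y * (hi - lo) + lo

targets = [9.0, 6.0]
scaled = [scale(t) for t in targets]      # [0.9, 0.6]
restored = [unscale(s) for s in scaled]   # back to [9.0, 6.0]
```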

Quote: I have now read this method but what if i have 2 desired outputs for the network like :

Inputs: 1,5 Desired Output: 9
Inputs: 2,5 Desired Output: 6

How would Backpropagation work here??

With the right structure (see above), the input 1,5 would generate 9 and 2,5 would generate 6 after training. For an input between them, like 1.5,5, you would probably get something like 7.5 (that's called generalisation, and it is the objective of BP learning). But since in your training you kept the second input constant at 5 (and there are no other samples), something like 1,3 would give an unpredictable (and utterly useless) result.
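To illustrate, here is a sketch of the simplest structure that can hit those targets: a single linear output unit trained with the delta rule on the thread's two samples. This is a deliberate simplification of a full multi-layer net, and the learning rate and iteration count are illustrative:

```python
import random

random.seed(0)

# The thread's two training samples: (inputs, target)
samples = [([1.0, 5.0], 9.0), ([2.0, 5.0], 6.0)]

# A single LINEAR output unit, so it can produce values like 6 and 9
# directly, trained with the delta rule.
w = [random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)]
b = random.uniform(-1.0, 1.0)
rate = 0.01

def predict(x):
    return w[0] * x[0] + w[1] * x[1] + b

for _ in range(20000):
    for x, t in samples:
        err = predict(x) - t                # linear unit: gradient is the plain error
        w = [wi - rate * err * xi for wi, xi in zip(w, x)]
        b -= rate * err

# Both training points are learned...
print(predict([1.0, 5.0]))  # ~9.0
print(predict([2.0, 5.0]))  # ~6.0
# ...but since the second input was always 5, an unseen input
# like [1.0, 3.0] gives whatever the learned weights happen to produce.
print(predict([1.0, 3.0]))
```

Note that because the second input never varies in the training data, the weight on it is essentially unconstrained, which is exactly why inputs like 1,3 come out unpredictable.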

This topic is closed to new replies.