Neural Networks - three little questions
Hi, I'm coding a multilayer feedforward neural network after Artificial Intelligence: A Modern Approach. I have three little questions:
1) In multilayer networks, does EVERY neuron (excluding input neurons) have a link to the bias neuron, whose value is always -1?
2) And naturally we don't "backpropagate" the deltas to the bias neuron either? Only to the true neurons on the previous hidden layer, right?
3) Given a 2d function f(x,y) = 1 if f>0 and f>1, 0 otherwise, with x,y ranging over [-1,1], should a 2-2-1 neural network be able to learn it? Mine doesn't (it forms a triangle instead of a rectangle), so there might be some bug in my implementation :(
Thanks in advance! Links to good tutorials on the subject are welcome too, of course.
-- Mikko
Quote:
Original post by uutee
1) In multilayer networks, does EVERY neuron (excluding input neurons) have a link to the bias neuron, whose value is always -1?
No, topology can be arbitrary. Bias neurons are recommended though.
Quote:
2) And naturally we don't "backpropagate" the deltas to the bias neuron either? Only to the true neurons on the previous hidden layer, right?
You still update the weights on the connections leading out of the bias neuron, but you don't propagate a delta into the bias neuron itself; its output is constant, so there is nothing behind it to train.
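In code, the bias ends up being just another input with a constant value, so only its outgoing weights get gradient updates. Here is an untested NumPy sketch of one forward/backward step for a 2-2-1 network (assuming sigmoid activations, squared error, and a constant bias input of -1 as in the book; names like backprop_step are just for illustration):
Code:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 2 units, one output unit; each weight matrix has
# an extra column for the bias input, which is always -1.
rng = np.random.default_rng(0)
W_hidden = rng.normal(scale=0.5, size=(2, 3))  # 2 units x (2 inputs + bias)
W_out = rng.normal(scale=0.5, size=(1, 3))     # 1 unit  x (2 hidden + bias)

def forward(x):
    h_in = np.append(x, -1.0)   # append constant bias input
    h = sigmoid(W_hidden @ h_in)
    o_in = np.append(h, -1.0)   # bias feeds the output layer too
    o = sigmoid(W_out @ o_in)
    return h_in, h, o_in, o

def backprop_step(x, target, lr=0.5):
    global W_hidden, W_out
    h_in, h, o_in, o = forward(x)
    # Output delta (squared error, sigmoid derivative o*(1-o)).
    delta_o = (target - o) * o * (1.0 - o)
    # Propagate deltas only to the real hidden units: the last column of
    # W_out (the bias weight) is dropped, because the bias neuron has no
    # incoming connections and therefore receives no delta.
    delta_h = (W_out[:, :-1].T @ delta_o) * h * (1.0 - h)
    # The bias *weights* are still updated, since h_in/o_in include the
    # constant -1 bias entry.
    W_out += lr * np.outer(delta_o, o_in)
    W_hidden += lr * np.outer(delta_h, h_in)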
Quote:
3) Given a 2d function f(x,y) = 1 if f>0 and f>1, 0 otherwise, x,y ranging from [-1,1], should a 2-2-1 neural network learn how it behaves? Mine doesn't (mine doesn't form a rectangle, only a triangle) so there might be some bug in my implementation :(
Feedforward neural networks are universal function approximators (given enough hidden units), but whether a particular network converges depends on its topology. You may have a bug in your code, you might have a poor topology, or you might simply not be training long enough. Try several topologies, with at least 10,000 training iterations. If nothing works, then you probably have a bug in your code.
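For example, something like this quick sweep (an untested sketch using scikit-learn's MLPClassifier as a stand-in for your own network; the labels below are only a guess at the intended target, 1 in the positive quadrant, since the thread never pins down what f was meant to be):
Code:
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(2000, 2))
# Guessed target: 1 in the positive quadrant, 0 elsewhere.
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(int)

# Sweep a few topologies with a generous iteration budget.
for hidden in [(2,), (4,), (4, 4)]:
    net = MLPClassifier(hidden_layer_sizes=hidden, activation='logistic',
                        max_iter=10000, random_state=0)
    net.fit(X, y)
    print(hidden, "training accuracy:", net.score(X, y))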
Good luck.
Quote:
Original post by uutee
3) Given a 2d function f(x,y) = 1 if f>0 and f>1, 0 otherwise, x,y ranging from [-1,1]
I'm presuming this is a typo. How can f(x,y)=1 when f>0? It can only do that for one value of f... f=1. I take it you meant something else. Could you elaborate please?
Timkin
I think what he meant was f(x,y)=1 if f<0 and f>1
not sure so please don't kill me...
Quote:
Original post by uutee
3) Given a 2d function f(x,y) = 1 if f>0 and f>1, 0 otherwise, x,y ranging from [-1,1]
I think he's talking about a recurrent network, where f is the previous value in time.
Cartman's definition of sexual harassment: "When you are trying to have intercourse with a lady friend, and some other guy comes up and tickles your balls from behind" (watch South Park, it rocks)
Quote:
Original post by cwhite
Quote:
Original post by uutee
1) In multilayer networks, does EVERY neuron (excluding input neurons) have a link to the bias neuron, whose value is always -1?
No, topology can be arbitrary. Bias neurons are recommended though.
For most nontrivial problems they are needed. NNs without them can only learn problems that are linearly separable. For example, the XOR function can't be approximated without bias neurons...
-- Spencer
"All in accordance with the prophecy..."
Quote:
Original post by uutee
3) Given a 2d function f(x,y) = 1 if f>0 and f>1, 0 otherwise, x,y ranging from [-1,1]
That's not a function.
I think you meant to say:
3) Given a 2d function f(x,y) = 1 if x>0 and y>1, 0 otherwise, x,y ranging from [-1,1]
-----------------
Always look on the bright side of Life!
Quote:
Original post by Spencer
For most nontrivial problems they are needed. NNs without them can only learn problems that are linearly separable. For example, the XOR function can't be approximated without bias neurons...
Please do some research before posting inaccurate information. Bias neurons can make a neural network much more effective at learning XOR, but they are not necessary. Having multiple layers is what allows a feed-forward network to classify non-linearly separable problems. All it takes is a simple experiment with several neural network topologies, some with bias neurons and some without, to show that the topologies without bias neurons can still learn XOR, just less effectively.
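As a sanity check, here is an untested sketch of exactly that experiment: a 2-2-1 sigmoid network trained on XOR, where train_xor and its use_bias flag are illustrative names and the flag simply freezes the bias terms at zero. Run it both ways and compare the final errors:
Code:
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(use_bias, iters=10000, lr=0.5, seed=0):
    """Train a 2-2-1 sigmoid network on XOR; return the final MSE."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 0.0])
    W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
    W2 = rng.normal(scale=0.5, size=2);      b2 = 0.0
    for _ in range(iters):
        for x, t in zip(X, y):
            h = sigmoid(W1 @ x + (b1 if use_bias else 0.0))
            o = sigmoid(W2 @ h + (b2 if use_bias else 0.0))
            delta_o = (t - o) * o * (1 - o)
            delta_h = delta_o * W2 * h * (1 - h)
            W2 += lr * delta_o * h
            W1 += lr * np.outer(delta_h, x)
            if use_bias:  # bias weights train only in the biased network
                b2 += lr * delta_o
                b1 += lr * delta_h
    preds = [sigmoid(W2 @ sigmoid(W1 @ x + (b1 if use_bias else 0.0))
                     + (b2 if use_bias else 0.0)) for x in X]
    return float(np.mean((y - np.array(preds)) ** 2))

print("with bias:   ", train_xor(use_bias=True))
print("without bias:", train_xor(use_bias=False))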
Neural Network wikipedia
Oops... my bad... sorry.
-- Spencer
"All in accordance with the prophecy..."
NAND is a linearly separable function. All you need to do to make a single-layer perceptron learn the NAND function is give the output neuron a negative threshold value. I suppose this threshold is what you are calling the "fixed non-zero bias term instead of a trainable bias." However, this is not the same as requiring a bias node in the network.
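For the curious, a small untested sketch of that idea: the classic perceptron rule learning NAND with the threshold held fixed at a negative value (-1.5 is just an illustrative choice) instead of using a bias node:
Code:
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([1, 1, 1, 0])           # NAND truth table
w = np.zeros(2)
threshold = -1.5                     # fixed negative threshold, not trained

for _ in range(100):
    for x, t in zip(X, y):
        out = 1 if w @ x >= threshold else 0
        w += 0.1 * (t - out) * x     # classic perceptron update

print("weights:", w)
print("outputs:", [(1 if w @ x >= threshold else 0) for x in X])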
Furthermore, the original question was whether or not every node needs a link to the bias neuron. This is definitely not the case.