
Weights for a neural net doing AND

Started by February 11, 2006 09:20 AM
8 comments, last by sidhantdash 18 years, 9 months ago
My net looks like this:

in1 -\ /- h1 -\
      X        >- o1
in2 -/ \- h2 -/
Two input values to the net, two hidden neurons, and one output neuron. I was trying to figure out what the weights should ideally be for the net to perform the AND operation, but I couldn't figure it out. Anyone care to help?
The weights should be initially random, and then you derive optimal weights by training the network until you find a satisfactory balance between learning and overfitting.
Sorry, I meant how the weights would look in the ideal situation. I am uncertain whether the net's processing function works correctly, so if I know the optimal weights, I can set them manually, then run the net and see if I get the correct output.

What would the ideal weights look like for a net designed like that in order for it to successfully do the AND logical operation?

The two hidden neurons have three weights each, one for each input and one for the bias. The output neuron also has three: one for each hidden neuron and one for the bias.
Need more information.

What kind of neurons do you have? What is the relation between their weights, inputs and output?

What is your definition of a non-boolean "AND" operator? (min? multiplication? another?)

Regardless of that, this is not a good way to validate a learning scheme. A correct way is to compare the output of your trained system with

a) The nominal output of your training set, given your training set input

b) The nominal output of another set (called the validation set), built with the same underlying relation as the training set, whose inputs do not intersect with those of the training set.

The "ideal" weights are those who minimize the difference between the output of your system and the nominal output of both a) and b).

I'm not trying to validate the learning scheme, just the process function. Something isn't working in my class, so I am starting by validating the process function. The function takes two floats as inputs to the net and returns a float, which is the total output of the net.

Quote: Original post by Steadtler
What is your definition of a non-boolean "AND" operator? (min? multiplication? another?)


By AND I mean this:

0 0 - 0
0 1 - 0
1 0 - 0
1 1 - 1

So if I input 1 and 1 to the net it should output 1... however, it does not. It's either because my training function is bad or the process function is bad. If I skip the training by setting the weights manually and then process with the ideal set of weights, and the net still gives bad output, I will know the process function doesn't work as intended. On the other hand, if it gives the correct output, I will know the process function works and the problem is in the training function.

Quote: Original post by Steadtler
What kind of neurons do you have? What is the relation betweem its weights, inputs and output?


I didn't know there were any variations. My neurons have a set of inputs, and one weight tied to every input. There is an additional input, which I call a bias, and that input also has a weight. Then I add up the inputs multiplied by their weights:

i1*w1 + i2*w2 + i3*w3 + ... + in*wn

After that, I run a hard limiter or a sigmoid function on the value, and that is what the individual neuron outputs.
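In code, one such neuron might look like this minimal sketch (the logistic sigmoid and a bias input of 1 are assumptions, since they are not pinned down above):

#include <cmath>
#include <cstdio>
#include <vector>

// Logistic sigmoid; the exact squashing function is an assumption.
float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// One neuron: weighted sum of the inputs plus a bias input of 1, then squash.
float neuron(const std::vector<float>& in, const std::vector<float>& w, float biasWeight)
{
    float total = 1.0f * biasWeight;           // the bias input is assumed to be 1
    for (std::size_t i = 0; i < in.size(); ++i)
        total += in[i] * w[i];
    return sigmoid(total);                     // hard limiter variant: total > 0 ? 1.0f : 0.0f
}

int main()
{
    // Example: with all weights 1 and a bias weight of 1, inputs (1, 1) give sigmoid(3).
    std::vector<float> in = { 1.0f, 1.0f };
    std::vector<float> w  = { 1.0f, 1.0f };
    std::printf("%f\n", neuron(in, w, 1.0f));  // prints 0.952574
    return 0;
}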

I thought this was pretty much standard. :S Am I really this horrible at explaining? I get the feeling I would find answers faster by simply guessing. :P

Thanks for the help so far! [smile]
Quote: Original post by Mizipzor
The function takes two floats as inputs to the net and returns a float, which is the total output of the net.

So if I input 1 and 1 to the net it should output 1... however, it does not.


You use floats to map a boolean function. What if I input 0.5 and 0.9? What if the output is 0.354271?

Quote: Original post by Mizipzor
I didn't know there were any variations.


There are as many variations as you can imagine! Nobody forces anyone to do things like most people do...

Quote: Original post by Mizipzor
My neurons have a set of inputs, and one weight tied to every input. There is an additional input, which I call a bias, and that input also has a weight. Then I add up the inputs multiplied by their weights:
i1*w1 + i2*w2 + i3*w3 + ... + in*wn


So it's a weighted sum of the inputs. Where does the input of the bias come from?

Quote: Original post by Mizipzor
After that, I run a hard limiter or a sigmoid function on the value, and that is what the individual neuron outputs.


No idea what a hard limiter is. Which one is it, hard limiter or sigmoid? What are the parameters of the sigmoid?

Quote: Original post by Mizipzor
I thought this was pretty much standard. :S Am I really this horrible at explaining? I get the feeling I would find answers faster by simply guessing. :P


You are not bad at explaining, I have just been through too many years of academia to give a straight answer ;). More seriously, it is hard to answer your question because:

1) We need *all* the information about your system
2) Your problem is ill-formed; it probably has an infinity of solutions...

But just for fun, let me try to get one of them.

Never mind, I came up with another way of checking my training routine. I manually set every weight in the net to 1. Then it was easy to calculate by hand. I tried it on paper with a net with 2 hidden layers and 2 neurons per hidden layer. According to my math, the answer is supposed to be 0.9476583038, but when I ran it through the net, it gave me 0.720733. Therefore, I've come to the conclusion that there is something funny going on in the net's process function.
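For reference, that hand calculation can be reproduced with a short sketch (assuming the logistic sigmoid 1/(1+e^-x) and a bias input of 1 on every neuron):

#include <cmath>
#include <cstdio>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main()
{
    // Every weight, including each bias weight, set to 1; both net inputs are 1.
    double h = sigmoid(1.0 + 1.0 + 1.0);   // each neuron in hidden layer 1: sigmoid(3)
    double g = sigmoid(h + h + 1.0);       // each neuron in hidden layer 2
    double out = sigmoid(g + g + 1.0);     // the output neuron
    std::printf("%.10f\n", out);           // prints 0.9476583..., matching the hand result
    return 0;
}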
Ok, let's forget about the bias and the sigmoid/hard limiters.
Let's suppose you map the AND operator by a multiplication.
I assume the first two neurons get the inputs in the same order.

Here is your system:

i1 and i2 are the inputs.
w_ij is the weight for the input i of neuron j

Correct me if I'm wrong, but your system would be:

w_13( i1*w_11 + i2*w_21 ) + w_23( i1*w_12 + i2*w_22 )

So we just need to solve the following equation to get the perfect weights:

i1*w_11*w_13 + i2*w_21*w_13 + i1*w_12*w_23 + i2*w_22*w_23 = i1*i2

But as you can see, the left-hand side of our equation is *linear* with respect to i1 and i2, which leads me to believe that there is no solution to this system.
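To make that concrete, collapse the weight products into two constants, A = w_11*w_13 + w_12*w_23 and B = w_21*w_13 + w_22*w_23, so the system reads A*i1 + B*i2 = i1*i2. Plugging in the boolean corners: (i1,i2) = (0,1) forces B = 0, (1,0) forces A = 0, yet (1,1) demands A + B = 1. That is a contradiction, so no exact weights exist for this linearized net; it is the nonlinearity between the layers that makes the mapping possible.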

But I may be wrong, I never did any NN.
The AND function can be computed with a single neuron. If your inputs are x and y, s is a sigmoid function and K is a huge number, you can get a good approximation to AND with

s(K*x+K*y-1.5*K)

Let's check all 4 values:
x y K*x+K*y-1.5*K
0 0 -1.5*K
0 1 -.5*K
1 0 -.5*K
1 1 +.5*K

If K is big enough, the sigmoid function will map -1.5*K and -.5*K to about 0 and +.5*K to about 1.
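A quick check of those four cases (a minimal sketch; K = 10 and the logistic sigmoid are arbitrary choices):

#include <cmath>
#include <cstdio>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main()
{
    const double K = 10.0;  // an arbitrary "huge" constant; larger K sharpens the step
    for (int x = 0; x <= 1; ++x)
        for (int y = 0; y <= 1; ++y)
            std::printf("%d AND %d -> %.4f\n", x, y, sigmoid(K*x + K*y - 1.5*K));
    return 0;  // ~0 for the first three cases, ~0.9933 for 1 AND 1
}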

With the two extra neurons in the hidden layer, you can set up the hidden layer in such a way that it simply reproduces the inputs, as in the sketch below.
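Following that idea, here is one hand-constructed (not trained) set of weights for the original 2-2-1 net, again with K = 10:

#include <cmath>
#include <cstdio>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main()
{
    const double K = 10.0;
    for (int x = 0; x <= 1; ++x)
        for (int y = 0; y <= 1; ++y)
        {
            double h1 = sigmoid(K*x - 0.5*K);          // h1 approximately reproduces x
            double h2 = sigmoid(K*y - 0.5*K);          // h2 approximately reproduces y
            double out = sigmoid(K*h1 + K*h2 - 1.5*K); // AND of the reproduced inputs
            std::printf("%d AND %d -> %.4f\n", x, y, out);
        }
    return 0;  // only 1 AND 1 comes out near 1
}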

However, this doesn't seem like the right way to check if your network is working properly. I would output all the computations to a log, where you can check by hand what is going on.
I trained my net to learn the AND operator. My NN has the same structure as you have described, uses a sigmoid activation function, and has a learning rate of 0.2. The weights that solve the AND problem are as follows:

w0 = -2.707 //w0,w1,bias1 are inputs to 1st hidden neuron
w1 = -2.707
bias1 = 1.892
w2 = -2.777 //w2,w3 and bias2 are similarly for the hid neuron 2
w3 = -2.777
bias2 = 1.974
w4 = -5.232 //w4,w5 and bias3 are for the o/p neuron
w5 = -5.465
bias3 = 4.476

This set of weights does solve the AND problem. However, as has already been pointed out, you DO NOT need an MLP to solve AND. It is a linearly separable problem, and can be solved WITHOUT using a hidden layer.

And yes, I do not 'feed' 0 into my net; I use -1 instead. So in my training set, 1,-1,0 would correspond to 1 AND 0 = 0.
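Those numbers can be checked with a short sketch (assuming the logistic sigmoid and a bias input of +1, which is consistent with how the weights above behave):

#include <cmath>
#include <cstdio>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main()
{
    // The weights quoted above; false is fed in as -1, true as 1.
    const double w0 = -2.707, w1 = -2.707, bias1 = 1.892;
    const double w2 = -2.777, w3 = -2.777, bias2 = 1.974;
    const double w4 = -5.232, w5 = -5.465, bias3 = 4.476;

    const double in[2] = { -1.0, 1.0 };
    for (int i = 0; i < 2; ++i)
        for (int j = 0; j < 2; ++j)
        {
            double x = in[i], y = in[j];
            double h1 = sigmoid(w0*x + w1*y + bias1);
            double h2 = sigmoid(w2*x + w3*y + bias2);
            double out = sigmoid(w4*h1 + w5*h2 + bias3);
            std::printf("%g AND %g -> %.3f\n", x, y, out);
        }
    return 0;  // ~0 for every pair except (1, 1), which gives ~0.98
}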

Tell me if that helped.

This topic is closed to new replies.
