Mexican hat for Kohonen network
Hello,
I'm trying to solve the traveling salesman problem using a Kohonen network.
What I did at first was to change the weight of the winning neuron and the two neurons next to it. But to improve it, I would like to apply a Mexican hat function.
I didn't find one, though. I found a function which is quite similar, but not exactly. This function depends on the distance between the neuron and the coordinates of the input.
D : Distance
alpha = 1 / ( 1 + exp(D²/rate))
If you have a better function, please tell me.
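For comparison, here is a minimal Python sketch (the function and parameter names are my own, not from the thread) evaluating the posted function next to a plain Gaussian. Note the posted function never goes negative, so it has no inhibitory "brim":

```python
import math

def alpha_logistic(d, rate=1.0):
    # The function from the post: 1 / (1 + exp(D^2 / rate)).
    # Highest at d = 0 (value 0.5), falls toward 0, never negative.
    return 1.0 / (1.0 + math.exp(d * d / rate))

def alpha_gaussian(d, rate=1.0):
    # A plain Gaussian bump for comparison: exp(-d^2 / rate).
    return math.exp(-d * d / rate)

for d in (0.0, 0.5, 1.0, 2.0):
    print(d, round(alpha_logistic(d), 3), round(alpha_gaussian(d), 3))
```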
Cheers
exp(-x^2) ?
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
Yes, it's better, but I'm looking for a function...
I'll draw it.
|
___|___
/ | \
-----------------|---------------
\___/ | \___/
|
|
Pfff, the spaces didn't appear.
I'll explain. The x-axis is the distance. I would like y to be high near 0; then y decreases and becomes negative, and then rises again, becoming zero at a long distance.
You can enclose text in [ code ] and [ /code ] tags to make the spaces and stuff appear as written (just don't write the spaces between the [ and ] brackets):

                 |
              ___|___
             /   |   \
-----------------|---------------
         \___/   |   \___/
                 |
                 |
Edited by - Dactylos on February 11, 2002 4:42:53 PM
It's not exactly what you want, but how about: exp(-x²)·cos(x)?
It has the basic characteristics you want but oscillates about zero as it heads to -ve and +ve infinity.
Timkin
Okay, I found a function that fits the bill!
f(x) = -(x - a)(x + a)·exp(-x²)
If you select a to be close to zero - say a=1 - then you get good results. As a gets larger the brim of the Sombrero disappears and you are left with just the peak. As a gets closer to zero the brim gets deeper so that it is bigger than the peak. Play around with it a little to find the behaviour you want.
If you want to create an asymmetric function, then change one of the constants 'a' for a different constant 'b'.
Cheers,
Timkin
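A quick Python sketch of Timkin's suggestion (my own function name, not from the thread), showing how the peak and the negative brim scale with 'a':

```python
import math

def mexican_hat(x, a=1.0):
    # Timkin's suggestion: f(x) = -(x - a)(x + a) * exp(-x^2).
    # Peak at x = 0 with height a^2; negative "brim" for |x| > a,
    # decaying back to zero at large distances.
    return -(x - a) * (x + a) * math.exp(-x * x)

# Smaller 'a' deepens the brim relative to the peak;
# larger 'a' shrinks the brim until mostly the peak remains.
for a in (0.5, 1.0, 2.0):
    print(a, round(mexican_hat(0.0, a), 3), round(mexican_hat(1.5 * a, a), 3))
```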
sin(x)/x ?
Thank you very much Timkin, it was exactly what I was looking for.
Cheers.
What I'm doing is that, during the learning process, the neighborhood becomes smaller over time. So I have a rate which increases during training.
alpha = -(rate * x² - 1) * exp(-x²)
So the difference between the winning neuron and the others becomes more and more important.
I think I will use this one. Thanks for your help.
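A minimal sketch of how this neighborhood function could drive the weight update in a ring-shaped Kohonen network for the TSP (function and parameter names are assumptions, not from the thread):

```python
import math

def alpha(dist, rate):
    # The function above: -(rate * d^2 - 1) * exp(-d^2).
    # At d = 0 it equals 1; as 'rate' grows, it dips negative sooner,
    # so the winner dominates its neighbours more and more.
    return -(rate * dist * dist - 1.0) * math.exp(-dist * dist)

def update_ring(weights, city, winner, rate, lr=0.1):
    # Move each neuron on the ring toward the city, scaled by alpha
    # of its ring distance to the winning neuron.
    n = len(weights)
    for i, (wx, wy) in enumerate(weights):
        d = min(abs(i - winner), n - abs(i - winner))  # ring distance
        a = alpha(d, rate)
        weights[i] = (wx + lr * a * (city[0] - wx),
                      wy + lr * a * (city[1] - wy))
    return weights

ring = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
update_ring(ring, city=(0.5, 0.5), winner=0, rate=2.0)
```

Increasing `rate` over the training epochs shrinks the effective neighborhood, exactly as described above.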