
Encoding ANN weights for GAs

Started by Mizipzor, February 09, 2006 02:35 AM
5 comments, last by Mizipzor 18 years, 9 months ago
I can't get the backpropagation to work in my net, and it feels like it's about time to try something else. I'm going to see if I can get the net to work by training it through genetic algorithms instead. Problem is, I don't know how I should encode the weights into genes. Binary-encoded genes seem to be the most used, and since the tutorials use them, I aim to use them too; makes it easier to understand the tutorials. :P But the weights are floats, and can be negative. They aren't always of the same length either, so I have to take that into account, or decide a max length and clip anything too long or pad anything too short with zeros. Is there a good way to encode this? These are some weights that my net came up with:

-0.4129 0.974715 0.830363 -0.0423444 1.24596 0.11153 9.30595e-005 2.65706 0.523019 -0.000332503 2.60382 0.57919 -18.1935 -1.29954 -1.10098
Try something simple: if you ensure that all networks have the same structure, crossover and mutation operators can work directly on each floating-point weight and respect its scale and sign.
For example: crossover can swap a weight with the corresponding weight in the other parent, or replace it with a random value uniformly distributed between the two parents' values; mutation can add to a weight a random amount between +x% and +y%, or between -x% and -y%, of the old value (i.e. never a negligibly small variation).
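A minimal sketch of those two operators, assuming fixed-length weight vectors of equal size; the function names and the x/y percentage band are made up for illustration:

#include <algorithm>
#include <cstdlib>
#include <vector>

// Uniform random float in [lo, hi).
float frand(float lo, float hi)
{
    return lo + (hi - lo) * (float)rand() / ((float)RAND_MAX + 1.0f);
}

// Crossover: each child weight is drawn uniformly between the two
// parents' values, so scale and sign are respected (assumes both
// networks have the same structure and hence the same weight count).
std::vector<float> crossover(const std::vector<float>& a,
                             const std::vector<float>& b)
{
    std::vector<float> child(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        child[i] = frand(std::min(a[i], b[i]), std::max(a[i], b[i]));
    return child;
}

// Mutation: add to one weight a random amount between x% and y% of
// its old value, with a random sign; variations smaller than x% are
// excluded on purpose. (A weight of exactly zero would not move.)
void mutate(std::vector<float>& w, float x = 0.05f, float y = 0.20f)
{
    std::size_t i = rand() % w.size();
    float amount = w[i] * frand(x, y);
    w[i] += (rand() % 2) ? amount : -amount;
}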


For networks with a very adaptive structure you can use genetic programming techniques; see http://www.cs.utexas.edu/~nn/ for papers about encoding network structures meaningfully, and http://www.nerogame.org/ for an application.

Lorenzo Gatti

Omae Wa Mou Shindeiru

So I can treat every weight as a gene in the chromosome? For example, if the crossover point was randomized to 5, would I do the crossover after the fifth weight in the list above?
This is how I solved the problem with using floats in genes =)

#include <cstdio>
#include <cstdlib>

// Stand-ins for the undefined random() helpers the original code
// called: a uniform float in [0, 1) and a uniform int in [min, max].
static float frandom()
{
    return (float)rand() / ((float)RAND_MAX + 1.0f);
}

static int irandom(int min, int max)
{
    return min + rand() % (max - min + 1);
}

// A gene stored as a raw integer and mapped onto [m_min, m_max] when
// read; cross() and mutate() only explore bits 0..14, which matches
// a 15-bit RAND_MAX of 32767.
class Gene
{
public:
    Gene()
    {
        m_min   = 0.0f;
        m_max   = 0.0f;
        m_value = rand();
    }

    Gene(float min, float max)
    {
        m_min   = min;
        m_max   = max;
        m_value = rand();
    }

    // Decode the integer into a float in [m_min, m_max].
    float value()
    {
        return ((float)m_value / (float)RAND_MAX) * (m_max - m_min) + m_min;
    }

    // Crossover on the bit pattern: starting at a random bit and
    // walking in a random direction, swap every differing bit
    // between the two genes.
    void cross(Gene& gene)
    {
        int bin;
        int dir = (frandom() >= 0.5f) ? 1 : -1;
        for (int i = irandom(0, 14); i <= 14 && i >= 0; i += dir)
        {
            bin = 1 << i;
            if ((m_value & bin) ^ (gene.m_value & bin))
            {
                if (m_value & bin)
                    bin = -bin;

                m_value      += bin;
                gene.m_value -= bin;
            }
        }
    }

    // Flip one random bit.
    void mutate()
    {
        int bin = 1 << irandom(0, 14);
        m_value += (m_value & bin) ? -bin : bin;
    }

    void randomize()
    {
        m_value = rand();
    }

    void constraints(float min, float max)
    {
        m_min = min;
        m_max = max;
    }

    Gene& operator=(const Gene& gene)
    {
        m_value = gene.m_value;
        m_min   = gene.m_min;
        m_max   = gene.m_max;
        return *this;
    }

    void print()
    {
        printf("value: %f\n", value());
    }

private:
    int   m_value;
    float m_min;
    float m_max;
};


This works great, and you can even set min/max limits on your genes.
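A hypothetical use of the class above (the range is arbitrary, picked to roughly cover the weights posted in the opening post, and the printed value is made up):

int main()
{
    Gene a(-20.0f, 20.0f);  // two parent genes constrained to [-20, 20]
    Gene b(-20.0f, 20.0f);

    a.cross(b);   // swap a run of differing bits between the two genes
    a.mutate();   // flip one random bit
    a.print();    // e.g. "value: -1.273804"
    return 0;
}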
Quote: Original post by Mizipzor
So I can treat every weight as a gene in the chromosome? For example, if the crossover point was randomized to 5, would I do the crossover after the fifth weight in the list above?


I don't care about bogus biological terms like chromosome, gene and crossover: they are unfortunate cultural baggage of evolutionary algorithms, and they tend to restrict creativity.
Keep in mind that you are not simulating actual genetics, but applying whatever heuristic alterations you find useful to produce new and hopefully improved solutions from a population of old solutions.
You need to design the mutation, crossover and selection strategies for your specific case; the order of weights in a list is meaningless (proximity doesn't reflect network structure), so you shouldn't care about it.
What makes sense for neural networks is changing individual weights, taking into account that small changes have negligible effects but accumulate, that large changes are lethal, and that the complex interactions between different weights can be a source of instability.
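For instance, a minimal sketch of that kind of cautious per-weight change (truncation selection, the 10% band and the small absolute term are arbitrary choices here, and the fitness scores are assumed to be computed elsewhere):

#include <cmath>
#include <cstdlib>
#include <vector>

// One evolution step: copy the fittest individual and perturb a
// single weight. fitness[i] is the precomputed score of
// population[i]; higher is better.
std::vector<float> breed(const std::vector<std::vector<float> >& population,
                         const std::vector<float>& fitness)
{
    std::size_t best = 0;
    for (std::size_t i = 1; i < fitness.size(); ++i)
        if (fitness[i] > fitness[best])
            best = i;

    std::vector<float> child = population[best];
    std::size_t w = rand() % child.size();
    // Change of at most 10% of the weight's magnitude, plus a small
    // absolute term so that near-zero weights can still move.
    float span = 0.1f * std::fabs(child[w]) + 0.01f;
    child[w] += span * (2.0f * (float)rand() / (float)RAND_MAX - 1.0f);
    return child;
}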
In the other posts I proposed various ways to alter individual weights (assuming each generation changes only one), and glSmurf exemplified mutation and destructive crossover along very similar lines (if the code isn't clear: it flips or swaps some of the 15 least significant bits of what could be considered the mantissa of the floating-point weight).
glSmurf's class naming notwithstanding, there are neither genes nor chromosomes in these suggestions, only a bunch of numbers: stop worrying about useless complications.

Lorenzo Gatti

Omae Wa Mou Shindeiru

Check out the "Neural Networks in Plain English" tutorial on my website; it demonstrates the use of a GA to evolve the network weights.
@LorenzoGatti: Point taken, thanks for the tip. [smile]

@fup: Nice tutorial; I read it through, and I followed your style when building the class that handles the evolving of my net's weights. I've run into some pretty serious problems, however; maybe you can help me out? I'll start a new thread about it.

