Recurrent Neural Networks
Cool! I've got the net running. It can have recurrent connections: loops, links from output to input, from input to input, from a neuron to itself. Evolution is also quite a hard thing. Nets either start increasing neurons and links quickly and without need (why would standard XOR need 12 neurons???) or stay at the initial configuration... Hmmm. might need some more dubuging
quote:
Hmmm. might need some more dubuging
No offense, but do you mean debugging? If so, why debug? The problem is the evolution algorithm, not the code.
Anyway, cool stuff!
regards
PS. One way to solve the problem with too much "do-nothing stuff" ('introns' in biological terms, I think) is to penalise bigger nets in the fitness function.
/Mankind gave birth to God.
[edited by - silvren on January 3, 2003 3:37:21 PM]
If you are using your network as a controller of some sort, just use it as normal.
If you are using it as a classifier and you are training it via datasets, then you should run your inputs through the network for as many steps as it takes until you get a stable output. You also have to make sure you flush the network (zero all the inputs/outputs) between each input vector.
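That flush-then-settle loop might look like the sketch below. The net itself, its weights, and the step limit are my own placeholder illustrations, not anything from this thread; only the ideas (zero the state between input vectors, step the same input until the output stops changing) come from the post above.

```python
import math

class TinyRecurrentNet:
    """Toy fully-recurrent net: one input, two neurons, fixed weights.

    Stands in for whatever net you evolved; the weights are
    arbitrary illustration values, small enough that the state
    settles to a fixed point.
    """
    def __init__(self):
        self.state = [0.0, 0.0]
        self.w_in = [0.5, -0.3]          # input -> neuron weights
        self.w_rec = [[0.1, 0.4],        # neuron -> neuron weights
                      [-0.2, 0.3]]

    def flush(self):
        # Zero all activations between input vectors.
        self.state = [0.0, 0.0]

    def step(self, x):
        new = []
        for i in range(2):
            s = self.w_in[i] * x
            for j in range(2):
                s += self.w_rec[i][j] * self.state[j]
            new.append(math.tanh(s))
        self.state = new
        return self.state[1]             # treat neuron 1 as the output

def classify(net, x, max_steps=50, eps=1e-6):
    """Flush, then feed the same input until the output settles."""
    net.flush()
    prev = net.step(x)
    for _ in range(max_steps):
        out = net.step(x)
        if abs(out - prev) < eps:
            return out
        prev = out
    return prev
```

Because of the flush, calling `classify` twice with the same input vector gives the same answer, which is exactly the property you need when running a recurrent net over a dataset.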
ai-junkie.com
My Website: ai-junkie.com | My Books: 'Programming Game AI by Example' & 'AI Techniques for Game Programming'
Yep, I'm flushing it.
Before doing any serious benchmarking tests, I'm working on evolving a simple XOR structure. I know recurrent nets aren't the best nets to solve it, but this is only a test that can be performed quickly, and that's what's important when tuning the algo.
The network is now evolving (near perfect at the 700th generation with a population of 150 nets), but it creates many unnecessary links and even 3-5 hidden neurons...
I've been thinking about penalizing size, but then, how do I choose how much to penalize for each task? If restricted too much, the network won't generate vital neurons, and if the fitness modifications are weak, they won't help.
In one paper I found another brilliant approach: use several sub-populations. E.g. if several networks have a structure that is quite different from the others, why should they be penalized? Maybe it's correct. What can be done is placing them in sub-populations that compete between themselves, and if they don't show improvement in, say, 50 generations, delete them.
And I need debugging (I just misspelt the word in the previous post)
to hammer this algo. Right now everything shatters into as many populations as there are nets. But I'll fix it.
fup: recurrent nets can be evolved better because you've got more flexibility evolving the structure.
Yeah, I've done step-running (useful for some continuous environments) and running until every output gets a result (necessary for classification of a single input set).
Plus it's just my own small research for myself, and researching normal nets isn't interesting =)
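A minimal sketch of that sub-population idea (this is my own take on it; the only number taken from the post is the 50-generation stagnation limit, and the `(genome, fitness)` representation is a placeholder):

```python
STAGNATION_LIMIT = 50  # generations without improvement before deletion

class Species:
    """A sub-population of structurally similar nets."""
    def __init__(self, members):
        self.members = members           # list of (genome, fitness) pairs
        self.best = max(f for _, f in members)
        self.gens_since_improvement = 0

    def update(self, new_members):
        """Record one generation; track whether the best fitness improved."""
        self.members = new_members
        best_now = max(f for _, f in new_members)
        if best_now > self.best:
            self.best = best_now
            self.gens_since_improvement = 0
        else:
            self.gens_since_improvement += 1

    @property
    def stagnant(self):
        return self.gens_since_improvement >= STAGNATION_LIMIT

def cull(species_list):
    """Drop species that showed no improvement for too long."""
    return [s for s in species_list if not s.stagnant]
```

The missing piece (and the part that is "shattering" above) is the compatibility test that decides which species a new genome joins; without a tolerance threshold there, every net ends up in its own species.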
quote: Original post by Halo Vortex
Yep, I'm flushing it.
Before doing any serious benchmarking tests, I'm working on evolving a simple XOR structure. I know recurrent nets aren't the best nets to solve it, but this is only a test that can be performed quickly, and that's what's important when tuning the algo.
project the data you are xorring nonlinearly into a higher dimension. it could (and in xor's case will, if you have some trouble doing it manually) be linearly separable there...
instead of tuning your algo you might want to tune your approach. preprocessing the data is as important as building your classifier/learner (if not more important). of course, unless you are having fun just hacking around.
"Project the data you are xorring nonlinearly into a higher dimension" - you mean adding additional neurons? I know it. But evolving is good when it does nearly everything automatically, so the ALGO should add it. There are NO mathematical formulas to calculate the amount of neurons for all the tasks in the universe, so why not let evolution do that GUESSING thing?
As for preprocessing, yep, I understand it. As for XOR, how are you going to preprocess the inputs? =)
Silvren, I've got a question for you: how should I penalise bigger nets? Can you recommend any nice approaches?
All the penalties I use either don't affect the avalanche of new neurons and links or restrict them so much that no new structure survives more than one generation.
Even such a mild penalty as
fitness = fitness * 3 / sqrt(9 + networksize); is too strong, and making it a bit less strict (e.g. 9 / sqrt(81 + networksize)) makes the restriction useless.
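Both of those penalties are instances of the same family, factor = k / sqrt(k² + size), which is exactly 1 at size 0 and milder for larger k (k = 3 and k = 9 are the two values from the post; treating k as a single tuning knob is my own framing). A quick sketch to compare how harshly each k punishes extra structure:

```python
import math

def size_penalty(fitness, size, k=3.0):
    """Scale raw fitness by k / sqrt(k^2 + size).

    At size == 0 the factor is exactly 1 (no penalty); larger k
    makes the penalty milder. k = 3 and k = 9 are the values
    tried above; anything in between is fair game.
    """
    return fitness * k / math.sqrt(k * k + size)

# How much fitness survives for a net carrying 10 extra nodes/links:
for k in (3.0, 6.0, 9.0):
    print(k, round(size_penalty(1.0, 10, k), 3))
```

Sweeping k between the "too strong" and "useless" endpoints is one cheap way to look for a workable middle ground before switching to a different penalty shape altogether.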
quote: Original post by Halo Vortex
"Project the data you are xorring nonlinearly into a higher dimension" - you mean adding additional neurons? I know it. But evolving is good when it does nearly everything automatically, so the ALGO should add it. There are NO mathematical formulas to calculate the amount of neurons for all the tasks in the universe, so why not let evolution do that GUESSING thing?
As for preprocessing, yep, I understand it. As for XOR, how are you going to preprocess the inputs? =)
no additional neurons. make a separate preprocessing subsystem. in most cases, if you give your nn just the raw inputs, your system will be quite inefficient. read any intro book about classifiers and you will see. and the simpler your nn is, the better.
and for "xor", take "and" and "or" of the inputs and use those as the inputs for your nn (i could be wrong here, this is from memory).
i fully agree that there should be one great überalgo that could do all the tricks, but as far as i know it's really not possible (unless the tricks are trivial and few). you could guess, but be prepared for a loooooong guessing game.
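The memory checks out: with the features (a AND b, a OR b), XOR becomes linearly separable, since xor = or − and, so a single linear threshold unit is enough and no hidden neurons are needed. A quick sketch (the weights and threshold are hand-picked for illustration, not from the thread):

```python
# With features (a AND b, a OR b), XOR is xor = or - and,
# so one linear threshold unit solves it.

def preprocess(a, b):
    """Project the raw bits into (AND, OR) feature space."""
    return (a & b, a | b)

def perceptron(features):
    and_f, or_f = features
    # weights (-1, +1), threshold 0.5: fires exactly when or=1, and=0
    return 1 if (-1.0 * and_f + 1.0 * or_f) > 0.5 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, perceptron(preprocess(a, b)))  # prints the XOR truth table
```

Which is the point being made: a couple of lines of preprocessing replaces every hidden neuron (and every generation of evolution) the raw-input version needed.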