
Training multi-lobed ANNs

Started by April 14, 2003 07:25 AM
11 comments, last by Sander 21 years, 7 months ago
I found a little demo on ai-junkie which uses an ANN to control minesweepers. The ANN takes a vector to the closest mine and its current movement vector, and outputs a new movement. I want to expand this example by adding a new 'brainlobe' to the ANN. This new lobe should take a scan of the environment and output a vector to the closest mine, replacing the first two inputs of the original ANN. My question: how should I train this ANN? Should I train the two lobes separately and then add the optimized lobes together to give me the final brain? Or should I create the two ANNs together and train the entire brain all at once? What are the pros and cons of each method?

Sander Maréchal
[Lone Wolves GD][RoboBlast][Articles][GD Emporium][Webdesign][E-mail]
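For concreteness, the architecture being asked about might be sketched like this. This is a minimal illustration, not the ai-junkie code: the layer sizes, the single-layer lobes, and all the names here are my own assumptions.

```python
import math
import random

random.seed(42)

def dense(inputs, weights, biases):
    """One fully connected tanh layer: weights[j][i] connects input i to unit j."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    """Random weights in [-1, 1], as a GA might initialise them."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])

# Sensor lobe: an 8-cell environment scan -> 2D vector toward the nearest mine.
sensor_w, sensor_b = make_layer(8, 2)
# Movement lobe: (estimated mine vector + current velocity) -> new velocity.
movement_w, movement_b = make_layer(4, 2)

def brain(scan, velocity):
    mine_vec = dense(scan, sensor_w, sensor_b)                 # output of lobe 1
    return dense(mine_vec + velocity, movement_w, movement_b)  # fed into lobe 2

new_velocity = brain(scan=[0.1] * 8, velocity=[0.0, 1.0])
print(len(new_velocity))  # two outputs: the new movement vector
```

The key structural point is the two-node bottleneck between the lobes: lobe 2 sees only lobe 1's estimated mine vector, never the raw scan.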


If the behaviour of the new lobe is completely independent of the behaviour of the old lobe, then they can be trained in isolation (and indeed run in isolation); otherwise you need to train them together as one network.

In your problem the output of the new lobe is correlated in time with the output of the old lobe, because the choice of a new movement vector alters the next estimated vector to the target mine. This is a sequential Markov Decision Problem. The fact that you have only two nodes linking the two lobes of the network will reduce the size of the training set, but you still have a large problem to deal with.

You might want to consider looking into the work fup has done in training ANN controllers using genetic algorithms.

Cheers,

Timkin
fup's demo is exactly the one I want to expand on :-) I too want to use a GA to evolve the ANNs. I'm just wondering how to go about it:

1) From the population, use the GA to evolve a population of sensor ANNs and a population of movement ANNs (this means calculating the fitness for both parts of the brain and crossing/mutating as applicable). Then pair those ANNs to give you a new population of minesweepers.

2) Use the GA on the complete two-lobed brain and cross/mutate/populate as usual.
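The practical difference between the two options shows up in how the genome is encoded. A rough sketch (the weight counts and population sizes are made up for illustration):

```python
import random

random.seed(1)

SENSOR_LEN, MOVEMENT_LEN = 18, 10   # hypothetical weight counts per lobe

def random_genome(length):
    return [random.uniform(-1, 1) for _ in range(length)]

def crossover(a, b):
    """Single-point crossover on a flat weight list."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

# Option 1: evolve each lobe in its own population, then pair the offspring.
sensor_pop = [random_genome(SENSOR_LEN) for _ in range(4)]
movement_pop = [random_genome(MOVEMENT_LEN) for _ in range(4)]
child_brain_1 = (crossover(sensor_pop[0], sensor_pop[1]) +
                 crossover(movement_pop[0], movement_pop[1]))

# Option 2: evolve the whole two-lobed brain as one flat genome, so
# crossover can cut anywhere, including across the lobe boundary.
brain_pop = [random_genome(SENSOR_LEN + MOVEMENT_LEN) for _ in range(4)]
child_brain_2 = crossover(brain_pop[0], brain_pop[1])

print(len(child_brain_1), len(child_brain_2))  # both 28
```

Note that option 1 never mixes genetic material across the lobe boundary, which is exactly why it needs a separate fitness signal for each lobe.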

Thanks in advance.

Sander Maréchal
[Lone Wolves GD][RoboBlast][Articles][GD Emporium][Webdesign][E-mail]


While I have my own opinions on this, I'd trust fup's perspective more, since he has done more research into this area than I have.

I will say one thing though... ignoring the correlation between the subnetworks does seem to work, as Rodney Brooks has shown in his work on creating complex agent behaviours from the interactions of many different agent functions, each taking care of only a few inputs and outputs (often just one of each).

Timkin
I read a paper recently (don't have the reference handy) in which the authors were trying to develop agents consisting of two independent pieces, similar to your lobular NN idea. They approached it by creating two independent populations, and fitness scores were assigned by having each member of each population pair up with a number of random members of the opposite population. The pieces had constraints in place such that any member of one population could always be linked with any member of the other.

So, for each member of population 1, 20 (I forget how many exactly) members of population 2 were selected, and the average performance was used to establish the fitness of that population 1 member. Then, for each member of population 2, 20 members of population 1 were selected, and so on. Each population then went through selection, crossover, and mutation.
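The fitness-assignment step KirkD describes could be sketched like this (the fitness function and all sizes here are placeholders of my own; in the real task it would be a minesweeper simulation run):

```python
import random

random.seed(7)

def joint_fitness(member_a, member_b):
    """Stand-in for running one paired agent through the actual task."""
    return sum(member_a) + sum(member_b)

def evaluate(population, partners, n_pairings=20):
    """Fitness of each member = average performance over random partners."""
    scores = []
    for member in population:
        sampled = random.sample(partners, min(n_pairings, len(partners)))
        scores.append(sum(joint_fitness(member, p) for p in sampled) / len(sampled))
    return scores

pop1 = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(30)]
pop2 = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(30)]

fitness1 = evaluate(pop1, partners=pop2)  # pop1 judged with random pop2 partners
fitness2 = evaluate(pop2, partners=pop1)  # then the roles are reversed
# ...selection, crossover and mutation would now run on each population separately.
print(len(fitness1), len(fitness2))
```

Averaging over several random partners is what keeps a member from being rewarded or punished for one lucky or unlucky pairing.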

The effect was coevolution of the two populations. I wondered if using a niching technique or some level of speciation controlling the link between members of the two populations would help.

This may be too advanced for what you want to do, but interesting nonetheless. 8^)

-Kirk
If anyone is interested, the paper I mentioned in my last post is:

Mitchell A. Potter and Kenneth A. DeJong, "Cooperative Coevolution: An Architecture for Evolving Coadapted Subcomponents," Evolutionary Computation 8(1):1-29 (2000).

-Kirk
Ooh, if Ken DeJong co-authored it, it'd be worth a read...

Timkin
Timkin, are you a DeJong fan? 8^) I've seen a few of his papers and they've been pretty good by my standards, but I'm not aware of his reputation.

-Kirk
Thanks a lot, KirkD. I'm very curious about the method.

Sander Maréchal
[Lone Wolves GD][RoboBlast][Articles][GD Emporium][Webdesign][E-mail]


smarechal: My advice is for you to experiment with both approaches. You will gain much more understanding this way, which will benefit you in any future problems you may tackle. Generally speaking though, you will find it easier to train several small networks than one large one.

KirkD: Thanks for the link. I've not seen that one before.



ai-junkie.com

