
Neural net with multiple outputs

Started by Mizipzor, January 22, 2006 02:29 AM
4 comments, last by Timkin 18 years, 10 months ago
There is another thread in here where the discussion is about an AI learning the game Lunar Lander. The net controlling the lunar lander requires several inputs (velocity, angle, gravity, distance to goal, etc.) but also several outputs: thrust and momentum thrust. What's the best way to deal with multiple outputs? A single net with two output nodes, or two separate nets, one for each output?

Another example: I'm thinking of letting the enemies in my game be controlled by neural nets. They are going to have several outputs, where each output is a behaviour flag (attacking, wandering, searching for the player, fleeing, etc.). Each output is rounded off to see if the flag is active or not; if more than one is active, I pick the one with the highest non-rounded-off output. What do you think is the best way to handle nets with more than one output?
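The flag-selection scheme described above could be sketched like this (hypothetical names; assumes each output is in [0, 1]):

```python
def pick_behaviour(outputs):
    """Round each output to get the active flags; if several are
    active, pick the flag with the highest raw (un-rounded) output."""
    active = [i for i, o in enumerate(outputs) if round(o) >= 1]
    if not active:
        return None  # no flag active this tick
    return max(active, key=lambda i: outputs[i])

# Example: 'attacking' and 'fleeing' both round off to 1; 'fleeing'
# has the higher raw output, so it wins the tie.
flags = ["attacking", "wandering", "searching", "fleeing"]
outputs = [0.62, 0.10, 0.40, 0.91]
choice = pick_behaviour(outputs)
print(flags[choice])  # -> fleeing
```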
Hi Mizi, I think the answer to the question of "one neural net or two?" is that you can do it both ways. With a single neural net you just add an output node, but the error is then calculated over all the outputs during training. With two neural networks, each has a single output that generates its own error. Either way, I think you will be fine. My preference is the former when possible, since you usually save on the number of weight parameters.
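The weight saving mentioned above is easy to see from a parameter count (a quick sketch; the layer sizes here are made up):

```python
def mlp_weights(sizes):
    """Number of weights (including biases) in a fully connected net
    with the given layer sizes, e.g. [inputs, hidden, outputs]."""
    return sum((a + 1) * b for a, b in zip(sizes, sizes[1:]))

# Lunar-lander-ish example: 4 inputs, 6 hidden nodes.
single = mlp_weights([4, 6, 2])        # one net with two outputs
separate = 2 * mlp_weights([4, 6, 1])  # two nets, one output each
print(single, separate)  # -> 44 74: the shared hidden layer saves weights
```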
Quote: Original post by Mizipzor
What's the best way to deal with multiple outputs? A single net with two output nodes or two separate nets, one for each output?


The research literature in intelligent control (typically implemented as ANN-based control architectures) suggests that decomposing a network with N outputs into N networks with 1 output each gives marginally better performance... but where you really gain is a significant reduction in training complexity. Certainly a priori intuition supports this notion, since each network doesn't have to store competing mappings within the same set of nodes. Separating your networks also avoids the 'moving target' problem: the hidden nodes are trying to converge on optimal values, which are typically different for each output channel. One solution is to enforce niching in the network, which is structurally equivalent to loosely coupled separate networks; the extreme of this is indeed to run separate networks. Check out the work by Narendra... he looked at this problem many years ago.
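As a tiny, made-up sketch of that decomposition (plain batch backprop, not Narendra's actual formulation): train one single-output net per output channel, so each hidden layer converges on a mapping for just that channel instead of chasing a moving target.

```python
import numpy as np

def train_single_output_net(X, y, hidden=4, lr=0.5, epochs=5000, seed=0):
    """Minimal one-hidden-layer sigmoid net trained with plain batch
    backprop on a single output channel. Returns a predict function."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        h = sig(X @ W1 + b1)                 # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)  # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)   # hidden-layer delta
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)
    return lambda Xq: sig(sig(Xq @ W1 + b1) @ W2 + b2)

# Two output channels (AND and OR of the inputs): one small net per
# channel, so neither hidden layer has to store a competing mapping.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
channels = {"and": np.array([[0], [0], [0], [1]], dtype=float),
            "or":  np.array([[0], [1], [1], [1]], dtype=float)}
nets = {name: train_single_output_net(X, y) for name, y in channels.items()}
```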

Cheers,

Timkin

[Edited by - Timkin on January 23, 2006 12:07:29 AM]
^
|
|
Just like I said (either or), except Timkin said it with an Australian accent. No fair!
In grad school we had an ANN assignment for a letter recognizer using only backprop networks.

I was one of several who made the (rather stupid, in retrospect) mistake of trying to build a fairly large single network with 26 outputs.

The teacher explained some issues and pros/cons after the assignment.

The biggest killer (as the Australian accent pointed out [grin]) is the size of the network and the amount of training needed. For that network I needed several internal layers and a whole lot of nodes.

There is a (very slight) benefit in being able to better differentiate between similar letters (like /-\ becoming H or A), but it comes with a big penalty for rarely used letters like Q and Z. The same benefits can be achieved much more easily through coupled networks and/or by including Markov chains as a spell-guesser to decide which of the similarly-ranked letters to use in a near tie, completely avoiding the problems.
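The Markov-chain tie-break mentioned above could look something like this (toy bigram counts, invented for illustration):

```python
# Toy bigram counts for "how often does letter B follow letter A",
# invented numbers: 'th' is far more common in English than 'ta'.
bigram_counts = {
    ("t", "h"): 120, ("t", "a"): 45,
    ("q", "u"): 98,  ("q", "a"): 1,
}

def break_tie(prev_letter, candidates, scores, margin=0.05):
    """If the recognizer's top scores are within `margin` of each
    other, prefer the candidate most likely to follow prev_letter."""
    best = max(scores.values())
    tied = [c for c in candidates if best - scores[c] <= margin]
    if len(tied) == 1:
        return tied[0]  # clear winner, no tie-break needed
    return max(tied, key=lambda c: bigram_counts.get((prev_letter, c), 0))

# Recognizer can't decide between 'h' and 'a' after a 't':
print(break_tie("t", ["h", "a"], {"h": 0.81, "a": 0.79}))  # -> h
```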
Quote: Original post by NickGeorgia
Just like I said (either or), except Timkin said it with an Australian accent. No fair!


I don't have an accent... it's the rest of the world that speaks with one! 8P

This topic is closed to new replies.
