
3 vs 4 layers

Started May 22, 2002 08:04 AM
3 comments, last by pdstatha 22 years, 6 months ago
I've been given a sample exam question for my neural nets module which is worth 25 marks. I've tried to find my tutor so that he could supply me with an answer, but as per usual he's not in. Anyway, here it is:

a) How does one choose the number of units in the input, hidden and output layers of an MLP?

b) Any function can be represented by a multi-layer perceptron of no more than three layers. Why might one use four or more layers?

I can vaguely answer (a) and just about scrounge enough marks off that, but the only thing I can think of for (b) is that it would be quicker?? And I really don't think that answer would get me many marks!!! Can someone help???
One mathematically proven property of a three-layer perceptron network is that it is a universal approximator. That is, it can approximate any function to a degree of accuracy dependent on the size of the hidden layer. While this is true, the number of neurons and connections in the hidden layer required to approximate a function to a given degree of accuracy is not necessarily the optimal number across all possible numbers of hidden layers. I.e., to approximate a function f() to within a deviation of 0.1 (an arbitrary value), you may use fewer neurons and connections on an 8-layer network than on any other network. I'm sure it's impossible to tell in any accurate way the optimal number of layers a network should have to approximate an arbitrary function, except by trial and error with the function in question.
So you might be able to approximate the function to within 0.1 using 30 neurons and 200 connections on an 8-layer network, whereas it would require 75 neurons and 350 connections on a 3-layer network.

So the answer would be that a 3-layer network may not be optimal in terms of number of connections, number of neurons, time for training (backprop or whatever), time for processing a single answer, or any other relevant optimisation criterion you can think of.
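Mike's trade-off can be made concrete by counting connections. The layer sizes below are made up for illustration (they are not the figures from the post), and whether the deeper net actually reaches the same accuracy with fewer weights would still have to be checked by training both — which is exactly his trial-and-error point. A quick sketch:

```python
def count_connections(layer_sizes):
    """Number of weights in a fully connected feed-forward net
    (biases ignored for simplicity)."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

# A 3-layer net with one wide hidden layer: 75 neurons in total.
shallow = [1, 73, 1]
# An 8-layer net with six narrow hidden layers: 26 neurons in total.
deep = [1, 4, 4, 4, 4, 4, 4, 1]

print(count_connections(shallow))  # 146 connections
print(count_connections(deep))     # 88 connections
```

The deeper net here uses fewer neurons and far fewer connections; the open question (per the post) is whether it can match the shallow net's approximation error.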

Hope this helps,

Mike
Is this a trick question? I could've sworn a perceptron referred to a single-layer neural net (with no hidden layers), and therefore a multi-layer perceptron is a non-existent beast.
Of course I'm no expert, but it strikes me as funny.
I haven't used them much, but from what I've read...
a) With a little luck, using various techniques such as growing the net (starting with a small net and adding nodes until you have an efficient net) and shrinking the net (starting with an overly large net and removing useless nodes until you have an efficient net). I can't remember the names for these techniques, unfortunately.
b) Dunno
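A minimal sketch of the "growing" idea mentioned in (a): keep enlarging the hidden layer until extra units stop paying off. `train_and_score` is an assumed callback, not anything from the post — any trainer that takes a hidden-layer size and returns a held-out error would do:

```python
def grow_hidden_layer(train_and_score, max_hidden=64, tol=1e-3):
    """Grow the hidden layer until doubling it no longer helps.

    train_and_score(h) is assumed to train a net with h hidden
    units and return its error on held-out data.
    """
    hidden = 1
    best_err = train_and_score(hidden)
    while hidden < max_hidden:
        err = train_and_score(hidden * 2)
        if best_err - err < tol:  # doubling no longer helps enough
            break
        hidden, best_err = hidden * 2, err
    return hidden, best_err

# Toy stand-in for a real trainer: error shrinks as 1/h.
print(grow_hidden_layer(lambda h: 1.0 / h))
```

Shrinking (pruning) would run the same loop in reverse: start large and remove units while the error stays within tolerance.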
Mike D has given a great answer to part b.

The answer to part a is that there is *no* rule of thumb for choosing the number of neurons in the hidden layer. It has to be done by trial and error. Choosing the number of input and output neurons should be obvious within the context of the problem you are designing the net for.
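The trial-and-error answer can be sketched directly: train the same net with a few candidate hidden-layer sizes and keep whichever scores best. The tiny hand-rolled trainer below (batch gradient descent on a one-hidden-layer tanh net fitting sin) is my own illustration, not anything from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(hidden, X, y, epochs=2000, lr=0.1):
    """Train a one-hidden-layer tanh MLP with batch gradient descent;
    return its mean squared error on the training data."""
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, y.shape[1]))
    b2 = np.zeros(y.shape[1])
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)          # forward pass
        err = h @ W2 + b2 - y             # output-layer error
        dh = (err @ W2.T) * (1.0 - h**2)  # backprop through tanh
        W2 -= lr * h.T @ err / len(X)
        b2 -= lr * err.mean(axis=0)
        W1 -= lr * X.T @ dh / len(X)
        b1 -= lr * dh.mean(axis=0)
    return float(np.mean(err**2))

X = np.linspace(-np.pi, np.pi, 40).reshape(-1, 1)
y = np.sin(X)

# Trial and error over a handful of hidden-layer sizes.
errors = {h: train_mlp(h, X, y) for h in (1, 2, 4, 8)}
best = min(errors, key=errors.get)
print(best, errors[best])
```

In practice the error would be measured on held-out data rather than the training set, otherwise trial and error just rewards the biggest net.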

For further clarification of both parts of the question, please read the comp.ai.neural-nets FAQ. You can find it here:

ftp://ftp.sas.com/pub/neural/FAQ.html



Stimulate
Just to add to fup's post a little... the number of input and output nodes (units) generally depends on the size of the input and output state spaces, respectively.
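As a made-up example of that point (not from the thread): for classifying 8x8 grayscale images into 10 digit classes, the state spaces fix both ends of the net, leaving only the hidden layer to trial and error:

```python
# One input unit per pixel, one output unit per class (one-hot coding).
n_inputs = 8 * 8   # size of the input state space
n_outputs = 10     # size of the output state space
print(n_inputs, n_outputs)
```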

Timkin

