
Better Neural Networks?

Started by February 02, 2006 11:41 AM
2 comments, last by GameDev.net 18 years, 9 months ago
1) Introduction

Forgive the length of this thread; I decided to organize it into sections for easier reading. I would like to do image classification with neural networks. I know that feed-forward neural networks can be used for this, even single-layer ones (perceptron-like). The problem I am running into is that my classification task is going to be quite complex (e.g. recognizing the same object from different perspectives), and I anticipate that this would take a feed-forward network several layers.

2) My problem

Just to analyze a 64x64 black-and-white image (that's 4096 pixels), I might need, say, 5 layers. The problem is that, as far as I know, in a standard feed-forward network each neuron is linked to every neuron of the next layer. Hence, with just two layers of that size, there will be 4096 x 4096 links (over 16 million). This is obviously pretty bad, because the number of links grows quadratically with the layer size. If I want to add color (which would be important in my problem), I need 3 times more input neurons, so 12288 on the first layer. Two layers of that make about 150M links! Assuming each link can be compressed into 32 bits (which is somewhat inconvenient), that is over 600MB of link data, just for a two-layer network to analyze my color image. And even then I would probably want more than two layers, and more resolution than 64x64. To support RGB 256x256 images, I would have 196608 input neurons; a second layer with that many neurons means 3.87 x 10^10 links, more than my modest 2GB of RAM can possibly hope to hold.

3) Neural networks using less memory

So I've been thinking about this, and there isn't much hope for the fully connected approach. Using the disk would still be painfully slow and complex. But then I thought: this is not even how the human brain works. In our brain, neurons are not organized in layers, nor do they link to all other neurons in their proximity.
I read in an article that some of our neurons are linked to as many as about 4000 nearby neurons, but most probably are not! So I began to wonder: would there be a way to build a layered (for simplicity) neural network where each neuron is only linked to a small subset of the subsequent layer, say at most 1024 links? This way, with an input matrix of 196608 neurons and a second layer of the same size, I have 196608 x 1024 = about 201M links, which is big but still manageable. With even fewer links, say 64 per neuron, this would be 196608 x 64 = about 12.6M links, which is very manageable; assuming 8 bytes per link, that is only ~100MB of data, possibly allowing me as many as 15 layers and 3 million neurons (realistically). The neurons could even spread their links around, so that a neuron is linked to all the neurons "facing" it on the next layer, but also to a few farther neurons, to let important information propagate faster.

The obvious questions: has this been done? Are there neural networks organized so that neurons only link to nearby ones, allowing more neurons in less memory? What about learning algorithms for them? Would the regular back-propagation algorithm still work, or would new ones need to be devised?

4) Time-dependent neural networks?

I am also interested in knowing whether there are neural networks that do not work like the typical feed-forward network. Typically, neural networks work in a static time frame: the input is fed, the computation is performed (signals travel instantly), and the output is obtained. But are there artificial neural networks that work on a frame-based mechanism? I imagined something like this, which feels more organic and perhaps closer to the way our brain tissue works: you feed input signals every frame to selected input neurons, and neurons only fire once their threshold is reached. Until that happens, they keep accumulating charge. Each frame, a neuron can send discharges to the neurons it is immediately connected to, but signals do not travel multiple levels in one frame. The network can also contain feedback loops, where a neuron sends a signal and indirectly triggers a signal being sent back to itself from elsewhere, which means the processing does not necessarily ever stop. Output may periodically be read from selected output neurons. This ANN would have the advantage of having "memory" of past events and of processing things continuously (such as ongoing sounds); it would not be restricted to static input samples. The main problem is that I have no idea how such an ANN would be trained, apart from something like this (which seems very inefficient):

1) Perform random variations to the ANN weights
2) Feed the training input set into the ANN
3) Measure and rate performance; if improved, keep the modifications, otherwise revert to the previous weights
4) Neutralize any still-ongoing signals while preserving the weights
5) Goto #1
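For what it's worth, the frame-based network and the trial-and-error training loop described above can be sketched in a few lines of numpy. This is only an illustration under assumed details (random sparse wiring, a fixed firing threshold, Gaussian weight perturbations); the `FrameNet` and `hill_climb` names are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

class FrameNet:
    """Frame-based accumulate-and-fire network sketch.

    Every neuron integrates incoming charge each frame and fires only
    when its accumulated charge crosses a threshold; signals travel one
    connection per frame, so feedback loops are allowed.
    """
    def __init__(self, n_neurons, n_links, threshold=1.0):
        # each neuron connects to a small random subset (sparse links)
        self.targets = rng.integers(0, n_neurons, size=(n_neurons, n_links))
        self.weights = rng.normal(0.0, 0.5, size=(n_neurons, n_links))
        self.charge = np.zeros(n_neurons)
        self.threshold = threshold

    def reset_signals(self):
        # neutralize ongoing activity but preserve the learned weights
        self.charge[:] = 0.0

    def step(self, input_ids, input_values):
        # feed external input into selected input neurons this frame
        self.charge[input_ids] += input_values
        fired = self.charge >= self.threshold
        self.charge[fired] = 0.0
        # firing neurons deliver charge to their immediate targets only
        for i in np.flatnonzero(fired):
            np.add.at(self.charge, self.targets[i], self.weights[i])
        return fired

def hill_climb(net, rate_fn, steps=100, sigma=0.05):
    """The trial-and-error loop: perturb the weights at random and
    keep the change only if the rated performance improves."""
    best = rate_fn(net)
    for _ in range(steps):
        noise = rng.normal(0.0, sigma, size=net.weights.shape)
        net.weights += noise
        net.reset_signals()
        score = rate_fn(net)
        if score > best:
            best = score          # keep the improvement
        else:
            net.weights -= noise  # revert to the previous weights
    return best
```

Each call to `step` advances one frame, so feedback loops and "memory" emerge from charge left over between frames; `hill_climb` is the keep-if-improved loop described above, with `rate_fn` standing in for whatever performance measure the training set provides.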

Both sparse connectivity (reducing the number of links) and recurrent neural networks (which often use discrete time steps) have been explored very frequently in neural network research. Feed-forward networks are often fully connected between layers, but this is not necessary.

A feed-forward network that is not fully connected can be trained using the same algorithms as a fully connected one, such as backpropagation and its variants. In fact, these algorithms usually work with connections that skip layers too; they tend to be designed for any acyclic network.
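As an illustration of that point, here is a minimal numpy sketch (not from the thread, with made-up sizes) of backpropagation on a two-layer network that is deliberately not fully connected: a fixed 0/1 mask records which links exist, and multiplying each gradient by the mask keeps the missing links at exactly zero:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_sparse_layer(n_in, n_out, links_per_neuron):
    """Weight matrix in which each output unit connects to only a few inputs."""
    mask = np.zeros((n_in, n_out))
    for j in range(n_out):
        idx = rng.choice(n_in, size=links_per_neuron, replace=False)
        mask[idx, j] = 1.0
    w = rng.normal(0, 0.1, size=(n_in, n_out)) * mask
    return w, mask

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, y, w1, m1, w2, m2, lr=0.1):
    """One plain backprop step; masking the gradients preserves sparsity."""
    h = sigmoid(x @ w1)
    out = sigmoid(h @ w2)
    # output-layer and hidden-layer error terms (squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ w2.T) * h * (1 - h)
    # update only the links that exist (mask is 1 where a link exists)
    w2 -= lr * (np.outer(h, d_out) * m2)
    w1 -= lr * (np.outer(x, d_h) * m1)
    return out
```

The same trick scales to the poster's numbers: with 64 links per neuron the mask-and-weight pair is the only storage needed per layer, and the update cost is proportional to the number of links, not to the square of the layer size.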

Recurrent networks are far more general than acyclic networks, so there is a lot of variety in what they look like, and how they encode their inputs and outputs, and how signals are propagated. The training algorithms tend to be more complicated as well. Some of them find ways to extend backpropagation to cyclic networks, but this has certain weaknesses. In particular ensuring that memory is used successfully can be challenging, but there has been some success in training the networks to recognize some rather complicated patterns.

As far as recognizing an object from multiple perspectives, from what I can find on Google it looks like research in that area uses more specialized functions to reduce the complexity of what the neural networks have to learn (perhaps combining the geometric transformations used in image processing with neural networks).
Preprocessing is particularly significant when you have this much input data. Eigenimages and feature-extraction preprocessing are fairly popular in ANN-based image classification.
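A hedged sketch of the eigenimage idea, assuming images are flattened into rows of a matrix: project each image onto the top-k principal components (the "eigenimages") and feed the k coefficients, rather than the raw pixels, to the network. The function names are invented for the example:

```python
import numpy as np

def fit_eigenimages(images, k):
    """images: (n_samples, n_pixels) array of flattened training images.
    Returns the mean image and the top-k eigenimages (principal
    components), computed with an SVD of the centered data."""
    mean = images.mean(axis=0)
    centered = images - mean
    # rows of vt are the eigenimages, ordered by explained variance
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(image, mean, eigenimages):
    """Reduce one flattened image to k coefficients, which become
    the (much smaller) input vector for the network."""
    return eigenimages @ (image - mean)
```

For the poster's 196608-pixel images, projecting onto, say, 64 eigenimages would shrink the network's input layer from 196608 neurons to 64, which makes the link-count problem largely disappear.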

Additionally, if you have large images then you presumably have many different types of objects and a large feature set. Object features are probably not unique, so that any given single feature could come from several objects (consider for instance the leg of a table... it might also be the leg of a chair). If this is the case, consider training sub-networks (or separate networks) on different subsets of the domain and ensure that the features each is to classify are orthogonal.
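One way to picture the sub-network idea, with everything here hypothetical (the specialists are stand-in callables returning a (confidence, label) pair in place of separately trained networks): each specialist covers its own subset of classes, and the ensemble reports whichever specialist is most confident:

```python
def ensemble_predict(subnets, x):
    """Each sub-network handles its own subset of object classes and
    returns (confidence, label) for input x; the ensemble answers with
    the label of the most confident specialist."""
    scores = [net(x) for net in subnets]
    return max(scores)[1]  # tuples compare by confidence first
```

In practice each callable would be a network trained only on its own (orthogonal) feature subset, as described above.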

It's not an easy problem you wish to tackle... but it is doable. I don't see the need to develop new architectures... just implement the current ones intelligently.

Cheers,

Timkin
Many problems of this type come from blindly applying neural networks straight from books. Consider this: different types of data require different types of processing (just like algorithms). Read some books about how the human brain processes images. For example, recognizing shapes is different from recognizing colors, and you probably need two different NNs.
As for too many links between neurons: in the simplest shape-recognition networks you don't need to link every neuron to every other. Nature has already found an effective method for that; read up on, for example, the crab's shape recognition.

