Hello, I'm a University student, starting a personal project (not a school project) dealing with artificial intelligence.
Last summer I made a small demo of a learning algorithm in Flash, sort of a proof of concept. The algorithm successfully learns the simple task of following the mouse. The problem is scalability: according to my calculations, it would take an enormous amount of memory and computing power to do any useful task. So I wanted to introduce the project here on GameDev to anyone interested in helping develop this system. I still don't know what it could be used for, but my guess is, something :). The main motivation is fun/hobby, and to build something cool.
My vision is to build a scalable neural network with pattern recognition and memory using a distributed computing model, similar to SETI@home (Google it if you're not familiar). Users would run a small program linked to a central server, giving the network access to a small fraction of their hard drive and idle processor time. This memory and computing power would provide a huge working space for the neural network, letting it learn complex patterns: perhaps photo recognition, music/speech recognition, language cognition, or something of that sort.
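To make the idea a bit more concrete, here is a very rough sketch of what the client-side loop might look like. Everything in it is hypothetical: fetch_work_unit, submit_result, and the WorkUnit struct just stand in for whatever protocol the real client would end up using.

// Minimal sketch of the client side of a SETI@home-style work loop.
// All names here (fetch_work_unit, submit_result, WorkUnit) are
// placeholders; the real protocol is yet to be designed.
#include <iostream>
#include <string>
#include <thread>
#include <chrono>

struct WorkUnit { int id; std::string data; };

// Stub: a real client would pull a batch of training data
// (or a slice of the network) from the central server.
WorkUnit fetch_work_unit() { return {42, "training batch"}; }

// Stub: a real client would upload weight updates / results.
void submit_result(int id, const std::string& result) {
    std::cout << "unit " << id << " -> " << result << "\n";
}

int main() {
    // Idle-time loop: fetch, process, report, repeat.
    for (int i = 0; i < 3; ++i) {                     // 3 iterations for the demo
        WorkUnit wu = fetch_work_unit();
        std::string result = "processed " + wu.data;  // placeholder "compute"
        submit_result(wu.id, result);
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}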
I'm looking to collaborate with programmers, concept designers, creative thinkers, psychologists, and anyone interested in making something cool.
If you're interested, shoot me a PM or email: romsa9@gmail.com
Thanks,
Roman
I am a psychology/neuroscience student and an amateur programmer. I am also working on a scalable ANN model, as mentioned in the other thread. I don't think I will have much time to write any code, but I can definitely exchange ideas with you about theory, design, etc.
Here is an interesting (and long) lecture by Jeff Hawkins on using NNets for advanced visual pattern recognition (the principles carry over to other forms of advanced pattern recognition). By the sound of your project, I think you will find it interesting. The really good stuff is in the latter half of the video, but the earlier half lays down important context.
Jeff Hawkins AI
Quote: Original post by rstd
Last summer I made a small demo of a learning algorithm in Flash, sort of a proof of concept. The algorithm successfully learns the simple task of following the mouse. The problem is scalability: according to my calculations, it would take an enormous amount of memory and computing power to do any useful task.
Flash and high performance computing just don't mix.
Before claiming that performance demands distributed collaborative processing, implement a good learning algorithm efficiently and on an appropriate platform (like FORTRAN or C++ with a good compiler).
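For instance, the mouse-following demo fits in a few dozen lines of plain C++. This is only a sketch under my own assumptions (a 2x2 linear model trained with the delta rule, random offsets as input), but it shows how cheap the single-machine version is: millions of updates per second on one core.

// A rough single-machine sketch of the mouse-following idea: a 2x2
// linear model learns, via the delta rule, to map the offset to the
// target into a movement step. The model and learning rate are only
// illustrative assumptions.
#include <cstdio>
#include <cstdlib>

int main() {
    double w[2][2] = {{0, 0}, {0, 0}};  // weights, start at zero
    const double lr = 0.1;              // learning rate

    for (int step = 0; step < 10000; ++step) {
        // Random "mouse offset" as the input.
        double x[2] = { rand() / (double)RAND_MAX - 0.5,
                        rand() / (double)RAND_MAX - 0.5 };
        // Desired output: move exactly toward the mouse (the offset itself).
        double target[2] = { x[0], x[1] };

        // Forward pass: y = W * x.
        double y[2] = { w[0][0]*x[0] + w[0][1]*x[1],
                        w[1][0]*x[0] + w[1][1]*x[1] };

        // Delta rule: W += lr * (target - y) * x^T.
        for (int i = 0; i < 2; ++i)
            for (int j = 0; j < 2; ++j)
                w[i][j] += lr * (target[i] - y[i]) * x[j];
    }
    // W should now be close to the identity matrix.
    printf("W = [%.3f %.3f; %.3f %.3f]\n", w[0][0], w[0][1], w[1][0], w[1][1]);
}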
Omae Wa Mou Shindeiru
Quote: Original post by LorenzoGatti
Flash and high performance computing just don't mix.
Agreed. Additionally, I wonder how well a distributed neural network would work with home users, and to what extent such a thing makes sense beyond being "cool" for being distributed and free.
In the case of SETI, you can send one clipped radio image from one particular quadrant of the sky to one machine, process it, and send the result back. If the target machine crashes or is shut down, the clip is lost in the worst case, but so what: you can always reprocess yesterday's unfinished clips on a different machine.
In a neural network, a neuron (or several) needs the input of another one in the (usually) previous layer, which means each layer can only run as fast as the slowest participating machine.
Admittedly, you can attenuate this somewhat by running many neurons on one machine (though if you take away 500 MB of RAM, burn 50% CPU at all times, and cause 2 GB of traffic per day, users will be unlikely to help you). On the other hand, every machine-boundary crossing adds at least two round-trip times plus the time needed to transfer the data, so around half a second or more for average home users.
A single big server at your university could process an awful lot of neurons locally in that half second. You would really need tens of thousands or hundreds of thousands of machines to make up for that. That's my opinion, though; I might be wrong.
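To put rough numbers on that argument (all of them assumptions: 0.5 s per layer hop, 10 layers, a local server doing 10^9 neuron updates per second):

// Back-of-the-envelope numbers for the argument above; every constant
// here is an assumption, not a measurement.
#include <cstdio>

int main() {
    const double rtt_s = 0.5;          // assumed round trip per layer hop
    const int layers = 10;             // assumed network depth
    const double updates_per_s = 1e9;  // assumed local neuron updates/sec

    double network_overhead = rtt_s * layers;  // 5 seconds per forward pass
    double neurons_in_that_time = updates_per_s * network_overhead;

    printf("latency per pass: %.1f s\n", network_overhead);
    printf("local updates possible in that time: %.0f\n", neurons_in_that_time);
}

On those numbers, a single forward pass through the distributed net burns five seconds on latency alone, time in which the local server could have done billions of updates.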
First, memory is cheap these days, so it's less of a problem per compute node.
AI usually puts the heaviest demand on CPU processing, and that is what you would gain by farming the work out to clusters (or a grid/cloud).
Communication over the internet, and even over a dedicated LAN, is still a major bottleneck, so you would want to minimize that traffic where possible.
If the input data isn't too bulky, replicating it to a large number of nodes across a network won't be too costly (though it is still slow, so realtime processing would be hard; non-realtime batched processing wouldn't be a problem).
NN solutions do not always require a single large monolithic net; a solution can often be built from a number of smaller, specialized NNs, each trained for a particular function. Each one's output would be very small and could be marshaled at a higher meta level of the complete solver. Smaller, simpler NNs are also usually much easier to train.
Each of these small, specialized NNs would be an 'agent' that could live intact on a distributed processing node, so all of its own NxN interconnects would be local memory accesses (or, at worst, shared memory across CPUs/cores).
A high-level NN might be used to decide whether processing by a particular agent (or set of agents) is required; each processing node would have a task list and multiple agents in residence and would run only the NN agents required.
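A toy sketch of that layout, with everything made up: the 'agents' here are fixed functions standing in for trained nets, and the meta level is a plain sum rather than a trained combiner.

// Sketch of the agent idea: each "agent" is a small self-contained net
// living on one node; only its tiny output crosses the network, and a
// meta level combines the outputs. Weights/functions are invented.
#include <cstdio>
#include <vector>
#include <functional>

// Each agent maps a shared input to a small output (here, one score).
using Agent = std::function<double(const std::vector<double>&)>;

int main() {
    // Two toy "specialized nets": fixed formulas stand in for trained ones.
    Agent edge_detector = [](const std::vector<double>& in) {
        return in[0] - in[1];              // pretend feature
    };
    Agent brightness = [](const std::vector<double>& in) {
        return (in[0] + in[1]) / 2.0;      // pretend feature
    };

    std::vector<Agent> agents = { edge_detector, brightness };
    std::vector<double> input = { 0.8, 0.2 };  // replicated to every node

    // Meta level: gather the small outputs and combine them.
    double combined = 0.0;
    for (const Agent& a : agents)
        combined += a(input);              // e.g. a weighted vote in practice
    printf("combined score: %.2f\n", combined);
}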
To have your large NN problem process efficiently, you have to avoid the bottlenecks. If you need more CPU, then you cannot let the communications part of a distributed system nullify the CPU gain. Deciding how data flows between local and remote will depend on the type of problem being solved.
--------------------------------------------
Ratings are Opinion, not Fact