I thought of using CUDA and Quad SLI to process a neural network a while ago, but I have been busy with other things. I know that a lot of you don't have much faith in neural networks, but I think they can be very useful in a programming language.
I want to make a visual programming language in which you link inputs and outputs of feed forward neural network functions together to make whole systems. So to write a function, you just give it example inputs and the desired outputs.
There is no limit to the possible uses such a language could have, but to really be useful it would need to be very fast. The most demanding yet useful function would be a visual recognition function, and I guess this would require around 245,760,000 neurons, since a 640x480 image with 256 colors is 307,200 bytes, or 2,457,600 bits (one neuron per bit), with 100 layers to do the processing.
To be usable it would need to be able to run a neural network of this size in about one millisecond. I'm not sure if this is possible but if it is then CUDA with Quad SLI would be the fastest way that I know of to do it. Do you think it would be that fast?
[Edited by - SteveDeFacto on September 23, 2010 3:48:26 AM]
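(For anyone who wants to sanity-check the one-millisecond target: here is a rough back-of-envelope sketch in plain C++. The fan-in per neuron and the sustained GPU throughput figure are numbers I made up for illustration, not anything from the post, so plug in your own.)

#include <cstdio>

// Rough feasibility estimate: how long would one evaluation of a
// feed-forward net of the size described above take?
// Every constant below is a placeholder assumption.
int main() {
    const double neurons_per_layer = 2457600.0; // 640*480*8 bits, one neuron per bit
    const double layers            = 100.0;
    const double fan_in            = 256.0;     // assumed connections per neuron (not fully connected)
    const double flops_per_conn    = 2.0;       // one multiply plus one add
    const double gpu_gflops        = 1000.0;    // assumed sustained throughput across all GPUs, in GFLOP/s

    const double total_flops = neurons_per_layer * layers * fan_in * flops_per_conn;
    const double seconds     = total_flops / (gpu_gflops * 1e9);

    std::printf("~%.3g FLOPs per evaluation, ~%.3g ms on the assumed hardware\n",
                total_flops, seconds * 1e3);
    return 0;
}

With these particular assumptions the total lands around two orders of magnitude above the 1 ms budget, so either the network size, the connectivity, or the hardware estimate would have to change a lot.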
Quote:
Original post by SteveDeFacto
I want to make a visual programming language in which you link inputs and outputs of feed forward neural network functions together to make whole systems. [...] The most demanding yet useful function would be a visual recognition function and I guess this would require around 245,760,000 neurons
If anything ever sounded unsuited to a visual programming language, this does. How are you even going to visualise that many neurons, never mind have the user join them up?
Quote:
Original post by Kylotan
Quote:
Original post by SteveDeFacto
I want to make a visual programming language in which you link inputs and outputs of feed forward neural network functions together to make whole systems. [...] The most demanding yet useful function would be a visual recognition function and I guess this would require around 245,760,000 neurons
If anything ever sounded unsuited to a visual programming language, this does. How are you even going to visualise that many neurons, never mind have the user join them up?
What are you talking about? You program a matrix of neurons using standard binary types such as longs and floats. You let the matrix figure out how to interpret the information. All you do is link binary variables from the matrices.
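(To make "a matrix of neurons using standard binary types such as longs and floats" concrete, here is a minimal sketch of one dense feed-forward layer as a CUDA kernel. The layer sizes, the flat row-major weight layout, the sigmoid activation, and the use of unified memory are assumptions chosen to keep the sketch short; none of them come from the posts.)

#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// One dense layer: out[j] = sigmoid( bias[j] + sum_i in[i] * W[j*numIn + i] )
// One thread per output neuron.
__global__ void denseLayer(const float* in, const float* W, const float* bias,
                           float* out, int numIn, int numOut)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;
    if (j >= numOut) return;

    float sum = bias[j];
    for (int i = 0; i < numIn; ++i)
        sum += in[i] * W[j * numIn + i];

    out[j] = 1.0f / (1.0f + expf(-sum)); // sigmoid activation
}

int main()
{
    const int numIn = 1024, numOut = 256; // assumed layer sizes

    float *in, *W, *bias, *out;
    cudaMallocManaged(&in,   numIn * sizeof(float));           // unified memory (CUDA 6+),
    cudaMallocManaged(&W,    numOut * numIn * sizeof(float));  // used only to keep the
    cudaMallocManaged(&bias, numOut * sizeof(float));          // sketch short
    cudaMallocManaged(&out,  numOut * sizeof(float));

    // Placeholder values; a real network would load trained weights here.
    for (int i = 0; i < numIn; ++i)          in[i]   = 0.5f;
    for (int i = 0; i < numOut * numIn; ++i) W[i]    = 0.01f;
    for (int j = 0; j < numOut; ++j)         bias[j] = 0.0f;

    denseLayer<<<(numOut + 255) / 256, 256>>>(in, W, bias, out, numIn, numOut);
    cudaDeviceSynchronize();

    std::printf("out[0] = %f\n", out[0]);
    return 0;
}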
Quote:
Original post by LorenzoGatti
100 layers? Please explain.
That's just a guess, and I'm sure you could do it with fewer, but if you could get that many neurons then I'm pretty sure you could do anything with it.
Quote:
Original post by SteveDeFacto
Quote:
Original post by LorenzoGatti
100 layers? Please explain.
That's just a guess, and I'm sure you could do it with fewer, but if you could get that many neurons then I'm pretty sure you could do anything with it.
And I am pretty sure you are wrong. In the words of A. K. Dewdney: "Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool."
Quote:
Original post by alvaro
In the words of A. K. Dewdney: "Although neural nets do solve a few toy problems, their powers of computation are so limited that I am surprised anyone takes them seriously as a general problem-solving tool."
Is it just me, or is that A. K. Dewdney quote "world is flat" levels of wrong? I'm not even in research, and I'm aware of a few dozen commercial and industrial applications that utilize ANNs in image processing, robotics, prediction, controllers, chemistry, particle physics, etc. A simple Google search reveals hundreds. Maybe he stated that quote back before Internet search?
Also, does he mean something specific by "general problem-solving tool"? Can there be such a thing as a "general problem-solving tool" when we already know that any given problem-solving algorithm can only gain performance on one class of problems by losing on other classes of problems?
Evaluating the ANN by itself is no big deal. The real "meat" is in the training algorithm.
Unfortunately, the training algorithms inevitably suck, because the error function is hopelessly nonconvex with respect to the weights.
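(To make the training step concrete, here is a minimal sketch of a single sigmoid neuron being fit by gradient descent on one example; the data, initial weights, and learning rate are made up for illustration. Backpropagation repeats this chain-rule step layer by layer, and it is the resulting high-dimensional, non-convex error surface that the complaint above is about.)

#include <cstdio>
#include <cmath>

// One neuron, squared error E = 0.5*(y - t)^2, plain gradient descent.
// All values are placeholder assumptions.
int main()
{
    float w[2] = {0.1f, -0.2f}, b = 0.0f;
    const float x[2] = {1.0f, 0.5f}; // one training input
    const float t    = 1.0f;         // desired output
    const float lr   = 0.5f;         // learning rate

    for (int step = 0; step < 100; ++step) {
        float z = w[0]*x[0] + w[1]*x[1] + b;
        float y = 1.0f / (1.0f + std::exp(-z)); // sigmoid
        float dE_dy = y - t;
        float dE_dz = dE_dy * y * (1.0f - y);   // chain rule through the sigmoid
        w[0] -= lr * dE_dz * x[0];              // dE/dw_i = dE/dz * x_i
        w[1] -= lr * dE_dz * x[1];
        b    -= lr * dE_dz;
    }

    float z = w[0]*x[0] + w[1]*x[1] + b;
    std::printf("trained output: %f (target %f)\n", 1.0f / (1.0f + std::exp(-z)), t);
    return 0;
}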
I DO like your idea of a graphical programming language that runs on the GPU, one that you wire together visually as a network and that explicitly encodes parallelism. A stream processing language, like LabVIEW or Simulink on the GPU. That'd be cool.
[Edited by - Emergent on October 12, 2010 8:51:11 PM]
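(The stream-processing idea is concrete enough to sketch. Below, the output buffer of one toy kernel is wired straight into the input of the next, which is roughly what the boxes and wires of a LabVIEW-style GPU language would compile down to; the kernel bodies, sizes, and unified-memory allocation are placeholder assumptions.)

#include <cstdio>
#include <cuda_runtime.h>

// Two toy "nodes"; the wiring a -> scaleNode -> b -> offsetNode -> c below is
// the dataflow graph the visual language would express graphically.
__global__ void scaleNode(const float* in, float* out, int n, float k)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] * k;
}

__global__ void offsetNode(const float* in, float* out, int n, float c)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i] + c;
}

int main()
{
    const int n = 1024;
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) a[i] = float(i);

    scaleNode <<<(n + 255) / 256, 256>>>(a, b, n, 2.0f); // node 1
    offsetNode<<<(n + 255) / 256, 256>>>(b, c, n, 1.0f); // node 2, fed by node 1
    cudaDeviceSynchronize();

    std::printf("c[3] = %f\n", c[3]); // expect 3*2 + 1 = 7
    return 0;
}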