Simulating neurons to create artificial intelligence
Are you guys aware of the artificial intelligence system?
http://www.intelligencerealm.com/aisystem/system.php
This seems to me to be the most promising project I've seen for creating true AI.
Anyone can volunteer and download a program called BOINC, which allows the project to borrow some of your computer power.
To date, the project has simulated 666,377,000,000 neurons.
I've been running the BOINC program for the project. You barely notice BOINC is there, really.
There's a lot of information on the site and the forums.
What do you guys think?
There are programs out there that utilize some percentage of a client's cpu then pay the client per cpu-hour or on a work-unit completion basis.
The website you have pasted a link to is a scam website which uses your cpu in this fashion. The difference is you won't see a penny. The owners of the website are seeing all the profits and lying to you about what your cpu is being used for.
http://www.sharpnova.com
Although I admittedly have no experience in simulating billions of neurons, I don't think the internet is the right tool here. Therefore, I'm also inclined to think that it might be a scam site (or, a project that will lead nowhere).
In the days of terabyte disks, those 600G neurons they talk about would fit onto 3-4 hard disks which you could buy, including RAID controller, for a few hundred.
Now, using that disk array as virtual memory would give you a random access time around 5-8ms. From a network connection that communicates with random unknown hosts on the globe using idle time, you can expect access times of 100-500ms.
On the other hand, if some tasks can be made to execute sequentially on large contiguous blocks, such as summing up all neurons in one layer, your disk array will not have to do much seeking at all. So, you will not be bound by seek times, but rather by the bulk transfer rate of your disk array, which is huge.
I assume, since every host runs 500,000 neurons, you could pool some of that work too, but you would still have to transmit roughly 2MB of data to feed the neurons first... doesn't look so good to me.
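To put rough numbers on the argument above, here is a quick back-of-envelope sketch. The latency figures come from the post itself; the per-neuron state size and the volunteer uplink speed are assumptions I've added for illustration:

```python
# Back-of-envelope comparison of local disk vs. volunteer-network access.
# DISK_SEEK_S and NET_RTT_S are the figures quoted in the post above;
# BYTES_PER_NEURON and UPLINK_BPS are assumptions for illustration.

DISK_SEEK_S = 0.006          # ~5-8 ms random access on a local RAID array
NET_RTT_S = 0.3              # ~100-500 ms to a random volunteer host
NEURONS_PER_HOST = 500_000
BYTES_PER_NEURON = 4         # assumed minimal per-neuron state -> ~2 MB/host

feed_bytes = NEURONS_PER_HOST * BYTES_PER_NEURON
print(f"data to feed one host: {feed_bytes / 1e6:.1f} MB")

# At an assumed volunteer uplink of 1 Mbit/s, just shipping the input
# dwarfs a disk seek by several orders of magnitude:
UPLINK_BPS = 1_000_000 / 8   # 1 Mbit/s in bytes per second
transfer_s = feed_bytes / UPLINK_BPS + NET_RTT_S
print(f"network: {transfer_s:.1f} s vs. disk seek: {DISK_SEEK_S * 1000:.0f} ms")
```

Even with generous assumptions, the network path loses by a factor of thousands per access, which is the crux of the "internet is the wrong tool" point.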
Having said that, regardless of all of the above, I'd never install a program from someone unknown who asks me to, anyway.
Mind you, this might be a serious research project, but it might equally well be "Ghostnet 2.0" or simply a spambot network. There is no (easy) way you could know.
Quote: Original post by samoth
In the days of terabyte disks, those 600G neurons they talk about would fit onto 3-4 hard disks which you could buy, including RAID controller, for a few hundred.
But then, if you build a supercomputer around that, thousands or even hundreds of thousands of CPUs would want access to those disks, and access time grows drastically (reminds me of the "processing power increases faster than memory access time" problem).
edit-begin:
Also, having so few disks makes the simulation very prone to hardware errors. If you allow me to cite L. Peter Deutsch, Bill Joy and Tom Lyon:
Quote: Fallacies of Distributed Computing
1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.
The first three fallacies roughly apply to the 4-disk RAID setup too (I say roughly because you did mention latencies, but you ignored concurrent access).
Hmm, am I right that they actually omit hardware errors, or is that covered by numbers 1 and 5?
edit-end.
I would arrange it so that each CPU can work with the data it already has for as long as possible, and give each CPU some working RAM, which would yield a topology similar to what they already have.
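That idea can be sketched in a few lines: partition the neurons into shards, let each worker iterate on its local shard many times, and exchange only a small boundary state between rounds. All names and the toy "decay" update are illustrative, not the project's actual scheme:

```python
# Sketch of the locality idea: each worker advances on data it already
# holds, and cross-shard communication happens only once per round.
# The shard structure and update rule are invented for illustration.

def simulate(shards, local_steps, rounds):
    for _ in range(rounds):
        # Each worker iterates independently on its local data -- no
        # network traffic needed during these steps.
        for shard in shards:
            for _ in range(local_steps):
                shard["state"] = [v * shard["decay"] for v in shard["state"]]
        # Only now is cross-shard traffic needed, and only for the small
        # boundary state (cheap compared to shipping whole shards around).
        boundary = sum(shard["state"][0] for shard in shards)
        for shard in shards:
            shard["state"][0] += boundary * 0.01
    return shards

shards = [{"state": [1.0] * 4, "decay": 0.9} for _ in range(3)]
simulate(shards, local_steps=10, rounds=2)
```

The point is the ratio: communication cost is paid once per round instead of once per step, which is the only way a high-latency network of volunteers can stay busy.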
[Edited by - phresnel on April 8, 2009 10:22:50 AM]
Quote: Original post by AlphaCoder
There are programs out there that utilize some percentage of a client's cpu then pay the client per cpu-hour or on a work-unit completion basis.
The website you have pasted a link to is a scam website which uses your cpu in this fashion. The difference is you won't see a penny. The owners of the website are seeing all the profits and lying to you about what your cpu is being used for.
Are you sure about this?
There's a lot of information in the forums and updates are posted every now and then. I suspected the site is a bit vague in places, but it seemed legitimate enough to me.
There are a number of projects that use the BOINC program.
http://en.wikipedia.org/wiki/List_of_distributed_computing_projects#Berkeley_Open_Infrastructure_for_Network_Computing_.28BOINC.29
Quote: Original post by AlphaCoder
The website you have pasted a link to is a scam website which uses your cpu in this fashion. The difference is you won't see a penny. The owners of the website are seeing all the profits and lying to you about what your cpu is being used for.
Okay, you didn't say "I think it is a scam website" or "they may be lying". You said that they are lying. Your definitive statements lead me to believe you know this to be true based on some persuasive information you have. Please share.
I agree with the above responses that it is likely a dead-end, but I'm not going to call that a scam.
This presentation will give you a decent idea of the state of real neuron simulation, from a researcher who is actually doing it.
http://video.google.com/videoplay?docid=-2874207418572601262
I posted that video and asked him what he thought, I'll post his answer here. Not gonna post a direct link, 'cause I don't really want people posting there accusing the project of being fake. Quote...
I didn't have time to go through the video.
Each neuron is different indeed. If you look into the source code you will see that we split the neurons into small objects, and that allows us to build any type of neuron. The following analogy should help as to what we are doing: take for example the text of a book; each page is different, but if you break that book into pages, then sentences, then words and then letters, you can come up with a device like the keyboard that can define the entire book's contents. We did the same thing: we broke the brain into neurons, then dendrites, axon hillocks, synapses, axons, and we built a device, a program in our case, that allows us to define any type of neuron.
In regards to micro-columns or positioning of neurons into layers and areas that can be done by representing the neurons in a 3D structure. We are using OpenGL for that. In the Project Development section there is a picture with a basic network configuration. If you add 3D coordinates to each object (e.g. neurons, dendrites, axons...) then you can position a neuronal object any way you want.
In regards to the role of micro-columns in computations, it should be noted that the C. elegans worm has around 400 neurons and they are not positioned in micro-columns, yet its nervous system is capable of representing information.
Grouping together neurons has great benefits, among the most important would be the reduction of levels of support obtained by grouping. A significant number of resources and energy of the body is put in maintenance of brain's cells. Which is better: to have neurons all over the place in the brain, neurons that participate in similar processes, at the expense of having to maintain long and slow "wires" (axonal segments that are providing the inter-connections between neurons) or to have them bundled up together in areas, layers, macro-columns and micro-columns with short and fast wires? As electric signals move through the wires, there are delays, they are called axonal delays. The transmission speed is pretty good, but the longer the wire the greater the delay.
There are various opinions as to the role of microcolumns. Mountcastle was a long-time supporter of microcolumn research. I read some of Casanova's papers about that (http://en.wikipedia.org/wiki/Manuel_Casanova).
Markram from Blue Brain is also looking into microcolumn simulations.
The assumption that neuronal positioning is essential for neuronal computations is an unfounded assumption.
Thanks,
Ovidiu
End quote.
I don't think a scammer would post such a detailed response, and he usually does post these detailed responses to questions.
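For what it's worth, the "keyboard" analogy from the quote is easy to picture in code: neurons assembled from small reusable parts rather than stored as monolithic objects. This is a minimal sketch of that idea; the class names, the threshold rule, and the 3D position field are my own illustration, not the project's actual source:

```python
# Sketch of composable neuron objects: any neuron "type" is just a
# different combination of parts, like words built from letters.
# All names and the firing rule are illustrative assumptions.

class Synapse:
    def __init__(self, weight, axonal_delay_ms=0.0):
        self.weight = weight
        # Longer "wire" -> larger delay, per the axonal-delay point above.
        self.axonal_delay_ms = axonal_delay_ms

class Dendrite:
    def __init__(self, synapses):
        self.synapses = synapses

    def collect(self, inputs):
        # Weighted sum of the inputs arriving at this dendrite.
        return sum(s.weight * x for s, x in zip(self.synapses, inputs))

class Neuron:
    def __init__(self, dendrites, threshold, position=(0.0, 0.0, 0.0)):
        self.dendrites = dendrites
        self.threshold = threshold
        self.position = position  # 3D coordinates, as in the OpenGL view

    def fires(self, inputs_per_dendrite):
        total = sum(d.collect(xs)
                    for d, xs in zip(self.dendrites, inputs_per_dendrite))
        return total >= self.threshold

n = Neuron([Dendrite([Synapse(0.5), Synapse(0.5)])], threshold=0.8)
print(n.fires([[1.0, 1.0]]))  # -> True (0.5 + 0.5 >= 0.8)
```

Swapping in different part combinations gives different neuron types without changing the framework, which is presumably the point of the book/keyboard analogy.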
They use OpenGL to position 500,000 neurons in 3D... is this a neuron display system, or a simulation?
Here's his answer to that.
OpenGL has multiple roles in visualization:
1) visualization of neurons in 3D
2) navigation (e.g. up, down, left, right, zoom in, zoom out) through networks
3) display of results (i.e. single or trains of spikes) and functions
4) interacting with the simulation (e.g. defining simulation properties like time step or duration)
Visualization is a critical component because without it, browsing through large databases filled with neuronal data and making changes to that data would be very slow. Imagine having to modify different properties of various neuron types that are distributed over many machines and databases with millions of rows to fit the network for a specific outcome.
In computational neuroscience right now, most simulations are in the range of hundreds or thousands of neurons, precisely because there are very few tools that are integrated with the basic data. Neuronal networks in the brain form specific circuits that do not share the same biophysical properties (i.e. conductance, voltage...). There are algorithms for fitting, but there is a lot of work left to be done on the integration side.
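The "millions of rows across many databases" scenario he describes is concrete enough to sketch. Here is a toy version of that kind of bulk edit using an in-memory SQLite database; the schema, column names, and neuron types are invented for illustration:

```python
# Toy version of the bulk-edit workflow described in the quote:
# neuron properties live in a database and get updated per neuron type.
# Schema and values are illustrative assumptions, not the project's data.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE neurons (id INTEGER, type TEXT, conductance REAL)")
db.executemany(
    "INSERT INTO neurons VALUES (?, ?, ?)",
    [(i, "pyramidal" if i % 2 else "stellate", 1.0) for i in range(10)],
)

# Without a visual front end, every fitting tweak is a hand-written
# query like this one, repeated across machines and neuron types:
db.execute("UPDATE neurons SET conductance = 1.2 WHERE type = 'pyramidal'")
rows = db.execute(
    "SELECT COUNT(*) FROM neurons WHERE conductance = 1.2"
).fetchone()[0]
print(rows)  # -> 5 pyramidal neurons updated
```

Scale that to millions of rows and dozens of properties and the case for an integrated visual editor becomes obvious.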