Our brains really DON'T work like computers!

From Slashdot:

Quote: Roland Piquepaille writes "We've been using computers for so long now that I guess many of you think our brains work like clusters of computers. Like them, we can do several things 'simultaneously' with our 'processors.' But each of these processors, in our brain or in a cluster of computers, is supposed to act sequentially. Not so fast! According to a new study from Cornell University, this is not true: our mental processing is continuous. By tracking the mouse movements of students working at their computers, the researchers found that our learning process is similar to that of other biological organisms: we don't learn through a series of 0s and 1s. Instead, our brain cascades through shades of grey."

The full article can be read here: http://www.news.cornell.edu/stories/June05/new.mind.model.ssl.html

Very interesting reading, but too short if you ask me.
Shades of grey? "Whatever!", as I hear the young folks say these days...
What they fail to understand is that computers can simulate continuous values quite nicely. For instance, you can simulate continuous-time, continuous-valued differential equations. These can model things like falling bricks... which I hope live in a continuous world. This leaves some hope for simulating the brain. One thing that might be a problem is that a computer has a finite amount of storage... is a brain's capacity finite? Who knows?
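Just as a minimal sketch of that (plain Python, made-up numbers, simple Euler integration): a brick falling under continuous dynamics, approximated with a small fixed time step.

```python
# Minimal sketch: simulating the continuous dynamics of a falling brick
# on a discrete machine via Euler integration. All values are made up.

g = 9.81            # gravitational acceleration (m/s^2)
dt = 0.001          # time step (s): smaller steps approximate continuity better
y, v = 100.0, 0.0   # initial height (m) and velocity (m/s)
t = 0.0

while y > 0.0:
    v -= g * dt     # dv/dt = -g
    y += v * dt     # dy/dt = v
    t += dt

print(f"Brick hits the ground after ~{t:.2f} s")  # analytic answer: sqrt(2*100/g) ≈ 4.52 s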
We make logical decisions all the time. Should I eat that? Should I play tennis? Do I walk to the grocery store? Sounds like a lot of 1's and 0's to me. In essence, we do not know how the brain works in terms of intelligence. We can't even measure it very well.
If I were to try to model the brain, I would use a hybrid model: a combination of discrete events and continuous-time dynamics. This, of course, can be simulated on a computer, as sketched below. Coming up with the model itself, well... errr... ummm...
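For what it's worth, here's a toy version of that hybrid idea (a bouncing ball, with arbitrary constants): continuous integration between events, and a discrete, instantaneous state change at each event.

```python
# Toy hybrid system: continuous-time dynamics (free fall) punctuated by
# discrete events (bounces). Constants are arbitrary.

g, dt = 9.81, 0.001
y, v = 1.0, 0.0              # height (m), velocity (m/s)
bounces = 0

for _ in range(20000):       # simulate 20 seconds
    v -= g * dt              # continuous part: integrate the dynamics
    y += v * dt
    if y <= 0.0 and v < 0.0: # discrete event: ball hits the floor
        v = -0.8 * v         # instantaneous jump in state (lossy bounce)
        y = 0.0
        bounces += 1

print(f"{bounces} bounces in 20 simulated seconds")
```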
So we need to upgrade computers from binary to base 256! Then we'll be good to go? hehe
Groundbreaking Discovery, eh, muhuahaha.
Quote: Original post by DrEvil
So we need to upgrade computers from binary to base 256! Then we'll be good to go? hehe
I second that [grin]
Seriously, a computer-simulated neural network also works in "shades of gray". So does pretty much any machine-learning software. (And if one is going to say that signals in the brain are "truly continuous": signals in the brain are _not_ truly continuous either. You can count every molecule of neurotransmitter released, and that's a discrete number.)
[Edited by - Dmytry on June 30, 2005 7:24:49 AM]
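To make "shades of gray" concrete, here's a single simulated sigmoid neuron in Python (the weights are arbitrary examples): its output is a continuous value between 0 and 1, even though it runs on binary hardware.

```python
import math

# One simulated neuron: weighted sum plus a sigmoid squashing function.
# The output is a continuous value between 0 and 1, not a crisp 0 or 1.

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # sigmoid

print(neuron([0.2, 0.7], [1.5, -0.8], 0.1))     # ~0.46, a "shade of gray"
```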
Note that everyone says, "simulates" shades of grey.
No matter how good a simulation is, it is still a simulation, especially since we're working on an inherently binary, discrete platform. True continuity and true randomness are things a computer really can't do. There will always be rounding errors and such. Yes, we can simulate continuous numbers, but the real number line can't be represented exactly on a computer at all; at best we approximate it with finite precision. So, in the end, all we're left with are "simulations" of the real thing. And as most math people know, no matter how small the margin of error, as computational complexity increases the error eventually compounds to the point where it becomes significant.
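A quick demonstration of that compounding in Python: 0.1 has no exact binary representation, so its tiny per-step representation error grows visibly over a million additions.

```python
# Rounding error compounding: each addition of 0.1 carries a tiny
# representation error (~5.6e-18), and a million of them add up.

total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total)              # ~100000.00000133288, not exactly 100000.0
print(total - 100000.0)   # accumulated error, around 1.3e-6
```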
Changing systems from base 2 to base 256 won't really solve the problem completely, not to mention it presents a whole new set of problems itself. There's a reason why computing is fundamentally base 2.
So, in every respect, comparing a computer and a human brain is like comparing apples and oranges. The two just inherently process different things; they work on completely different levels. So this research, though it may not be a totally new idea, goes some way toward reinforcing what many already know.
It seems as if you come from a mathematical background. I agree with a lot of what you have to say, especially when it comes to errors. A lot of it comes down to the fact that we do not know how to model the brain. Just because it's a brain doesn't mean there's an nth-order chaotic system involved. It bothers me that people assume the brain is complicated; we may just not have the tools to understand it very well at the moment. Therefore, I think it would be unfair to count out computers just yet. Kinda like putting the cart before the horse and throwing our hands up.
Quote: Original post by WeirdoFu
Note that everyone says, "simulates" shades of grey.
And as most math people know, no matter how small the margin of error, as computational complexity increases the error eventually compounds to the point where it becomes significant.
To run a brain simulation (or a chemical simulation, etc.) there may be some minimal required precision. I'm not saying there is, just that it's a possibility. There is no law stating that we need a quantum-level model to simulate a convincing brain.
I'd also bet that the brain is capable of ignoring its own noise, so it must have some concept of a margin of error.
Still, I agree with you, mostly. :)
Will
------------------
http://www.nentari.com
Personally, I feel that building a believable model of parts of the brain is not very hard. For example, a model of memory may actually be pretty simple. The hardest part, however, is the cognitive model.
For memory, the human brain is similar to a relational database. However, what is actually stored is fairly interesting. Human minds go through a process of abstraction when it comes to storage and interpretation. So, let's say you saw a vase with a specific design. You may start out, in short-term memory, remembering exactly what the vase looks like. However, due to the limited storage of short-term memory, the memory is either eventually dumped or, if reinforced enough, abstracted and compressed.

So, if short-term memory is the uncompressed data buffer, then long-term memory is the abstracted relational database, where information is broken into fundamental pieces and stored separately. When data is required, an entry point is found and the data is reconstructed incrementally. This is why we're good at describing to people what things look like but many times can't paint an exact image: some details were lost in the abstraction process. Since long-term memory is also limited, over time memory pieces get further abstracted and merged with other entries, or completely discarded.

That is what I think the model of human memory is like, in general. Can a computer simulate it? Maybe. The catch is all in the representation and data abstraction, since these operations aren't just discrete, but symbolic. And interestingly enough, people don't seem to abstract things the same way either.
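Purely as an illustration (not a claim about real neuroscience), that buffer-then-abstract idea might be sketched like this in Python, with the hand-picked feature selection standing in for whatever the brain's abstraction step actually is:

```python
from collections import deque

# Toy model of the idea above: a small exact short-term buffer, and a
# long-term store that keeps only abstracted features. Entirely hypothetical.

short_term = deque(maxlen=5)   # small uncompressed buffer; old items fall out
long_term = {}                 # abstracted "relational" store

def perceive(item):
    short_term.append(item)    # exact details, held briefly

def reinforce(item):
    # Abstraction: keep coarse features, discard fine detail, merge entries.
    features = {"color": item["color"], "shape": item["shape"]}
    long_term.setdefault(item["kind"], {}).update(features)

vase = {"kind": "vase", "color": "blue", "shape": "tall",
        "design": "intricate floral pattern"}   # detail that will be lost
perceive(vase)
reinforce(vase)
print(long_term)   # {'vase': {'color': 'blue', 'shape': 'tall'}}; design is gone
```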
I guess now that I've rambled around a bit, I finally realized what I really wanted to say. The problem with simulating a real brain is that it's neither discrete nor continuous. The human brain works on a symbolic level, which can be discrete, or continuous, or neither.
Sounds reasonable to me, Fu, though I'm a little bit fuzzy on what you mean by "symbolic". I believe, like you said earlier, that it really depends on which parts you want to model and how accurately (at what level of abstraction), etc. Cognitive, memory, or otherwise.
Physically speaking, it seems more and more that our world is not continuous! It seems continuous to us, but it really is not. In quantum mechanics the energy levels are not continuous but discrete, and the newer physical theories suggest that our world as a whole is discrete. Space and time are not continuous but discrete: there exists a smallest unit of space, and from this follows a smallest unit of time. There is no smaller unit! So everything around us, and we too, is discrete. Maybe not in terms of 0 and 1, but if we choose 0 and 1 to represent these smallest units, it will work fine, because it does not matter which system one uses to represent something.
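A small sanity check of that last point in Python: the same (made-up) count of "smallest units" looks different in base 2 and base 256, but it's the same number either way.

```python
# Representation doesn't change the quantity: one hypothetical count of
# smallest units, written in base 2 and in base 256.

n = 4721                       # some made-up count of smallest units
print(bin(n))                  # base 2:   0b1001001110001
print(n.to_bytes(2, "big"))    # base 256: two "digits" (bytes): b'\x12q'
print(int.from_bytes(n.to_bytes(2, "big"), "big"))  # 4721 again
```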